text stringlengths 1.23k 293k | tokens float64 290 66.5k | created stringdate 1-01-01 00:00:00 2024-12-01 00:00:00 | fields listlengths 1 6 |
|---|---|---|---|
A new statistical method to assess potential debris flow erosion
. Debris-flow erosion patterns were investigated for two adjacent catchments, Molinara and Val del Lago creeks (Eastern Alps, Trento Province, Italy), where two debris flows were triggered by an intense storm in the summer of 2010. Both basins have been inactive over the last two centuries. The debris flows were activated by channel and bank erosion under stable bed conditions before the event. The erosive process was analysed by combining a field campaign (two hundred cross sections were surveyed along the creeks) and pre-and post-event LiDAR surveys. Data were analysed by selecting morphologically-homogenous channel reaches and deriving for each reach: erosion depth, creek width, eroded volume and peak discharge. Investigating the frequency distribution of the erosion depth we found out that it follows an EV1 probability distribution. On this basis, a new approach has been proposed to predict event volumes when the expected maximum potential depth erosion is known. The procedure would be of high interest in predicting debris flow volume in mountain channels characterized by long silent periods.
Introduction
Villages and infrastructure in Alpine regions are exposed to rapid mass movements, including debris flows. Several studies aimed to understand these processes, but fundamental questions concerning hazard assessment, mass growth along the flow path, and the variability of the processes remain open [1][2][3]. Adopting a qualitative approach, the first studies on erosion showed a high variability of the process and a strong influence of the local channel slope. The methods generally used to quantify the expected debris-flow volume initially focused on the field estimation of entrainable sediment. Such criteria are based on a geomorphic reach-by-reach estimation of sediment availability along the stream network [4,5]. More recently, the quantification of debris flow erosion is increasingly supported by high-resolution topography surveys and digital surface analysis [6,7] combined into the geomorphic estimation procedures. A database collected by [8] shows a statistical fingerprint of erosion depth. Other studies proposed empirical equations able to predict the depth of erosion based on channel and basin geomorphic variables such as channel slope [9][10][11]. In contrast, [12,13] did not find a major slope influence on erosion. Combining field survey data with high-resolution digital terrain models DTM, [2] emphasized the flow front height to be the key variable in erosion processes. Laboratory experiments and numerical models have took into consideration bulking and de-bulking processes along with sediment concentration, yield and shear stresses [14][15][16]. However, studies that require the back-calculation of * Corresponding author: Tommaso Baggio<EMAIL_ADDRESS>complex field conditions suffer from difficulties in describing the channel bed geology, the boundary conditions and the basin/channel morphometry /morphology [8]. The two debris flows that occurred in the Molinara and Val del Lago catchments, investigated in this study, enlarge the knowledge of in-channel erosion processes through the analysis of data and conditions scarcely investigated in the literature. The research objective is to test the hypothesis that at the scale of a formative/highly erosive debris-flow a statistical fingerprint of the erosion depth exists and it can be used as a tool to predict the expected debris-flow volume.
Study area and the 2010 event
The study area is composed of two adjacent catchments located in Eastern Trentino (Alps, Northern Italy, 11.288091°, 46.140849°) on the NW oriented side of the Costalta peak (1955 m a.s.l., Fig. 1). Both basins are mainly covered by forest (Norway spruce and larch). Their channels are incised on a thin alluvial Quaternary cover over a massive porphyritic platform (Ring and Richter, 1994). The drainage network is well developed and rainstorms normally produce flood events without significant bed load transport. Effectively, there is no documentation noting the occurrence of significant debris flows in the 200 years preceding the severe event that occurred in summer of 2010. The catchment area of the Molinara torrent is 0.88 km 2 , extending from 1113 to 1955 m a.s.l. The mean slope of the catchment and main channel is high, equal to 70% and 37%, respectively. The total stream network is 4 km long, 1.7 km of which constitutes the main stream. The Molinara torrent network flows in a deeply incised valley and is characterised by low sinuosity. The catchment of Val del Lago torrent has an extension of 0.42 km 2 and the elevation ranges from 1024 to 1722 m a.s.l. The mean slope of the main channel (1.5 km long) is 28%. On the 14 th of August 2010, a storm hit the study area at 2:45 p.m. (CEST time) and continued until 5:00 a.m. of the following day, discharging a cumulated rainfall amount of 169.1 mm (rain gauge located 5 km from the two catchments). The bulk of the storm affected the area from 11:45 p.m. to 4:45 p.m. and was characterised by two main bursts of rainfall lasting two hours: maximum 1-hour rainfall intensity of 39.3 mm and 38.2 mm, separated by 45 minutes of low-intensity rainfall. The return period of the event was estimated equal to 100 years for the 3-hour maximum rainfall (73.1 mm) and greater than 200 years for the 6-hour maximum rainfall (156.3 mm). During the first storm, the basin hillslopes were partially saturated, due to previous rainfalls, and the debris-flood discharge increased considerably. During the second rainfall burst, massive destabilisation originated from the channel heads and triggered debrisflow surges in both basins. The surges progressively entrained sediment from the channel bed, enlarging in this way the debris-flow volumes. The Val del Lago debris flow filled the retention check dam (Fig. 1), whereas the Molinara torrent debris flow flooded the village of Campolongo di Pinè damaging roads and houses ( Fig. 1). After-event field inspections did not indicate particular sediment source areas other than the channel heads and the subsequent channel reaches eroded by the debris-flow passage (Fig. 2).
Material and methods
The Val Molinara and Val del Lago debris flows were investigated by means of a field survey of the torrent reaches affected by the flow passage, available LiDAR surveys acquired before and after the event, correction of field survey through LiDAR data and statistical data analysis. Accounting for advances in debris-flow erosion [17] and event evidence, three basic conditions can be assumed. i) the channel bed was quite stable after the first storm, ii) the erosion depth Z (Fig. 2 and 3) was generated during the second highest debris-flow front (eyewitness of two surges), and iii) the average height of the debris flow front (Zd) resulted from the difference between the average maximum flow depth (observed in the field in accordance with the top flow-width line joining opposite banks) and pre-event channel bed elevation (Fig. 3).
Accurate field investigations were conducted in the summer of 2011 estimating the sediment yield following the methodologies described by [5]. The torrent network was divided into homogeneous reaches in terms of slope, bed/banks morphology and sediment transport type. For each reach, representative cross-sections were measured (maximum of 3 sections) using a rangefinder (Disto Trupulse® 360B, precision 0.1 m and 0.1°). The shape of the section was approximated to a trapezium, and the following measurements were taken ( -Maximum cross-section flow widths (b) and heights (hr, hl) with respect to the post-event thalweg elevation, whose shape is clearly the result of the debrisflow passage. The channel bed average erosion (Z) was estimated in the surrounding reach of each cross-section by different types of field evidence (Fig. 2-D and Fig. 3) [18]. Afterwards, the following variables were calculated: the erosive yield rate Y (m 3 The resulting DoD could provide an error estimation of the ground differences between the pre-and post-event surfaces (Weathon et al., 2010). Nevertheless, the low resolution of the pre-event DTM and the significant irregularity of the banks made difficult a precise assessment of the local variation of the channel characteristics (B and heights of each bank Hr, Hl). The integration of remote sensing data with field survey was then necessary to generate the erosion-related variables (channel yield rate Y and volume V). In this context, we used the cross-section post-event surveys (Z, Zd, Hr, Hl, hr, hl, b, B; assumed to be the more correct) as primary information and then we accounted for them to obtain an adjusted DoD based on matching the cross-section field surveys. The analysis of the erosion depths (Z) has considered their statistical distribution with the aim to test the existence of a characteristic erosion pattern produced by a severe debris flow after a long period of inactivity. Data were tested both through continuous asymmetric probability distributions and a number of symmetrical distributions. In particular, following the suggestions of [19], the adaptation considered the exponential, logistics, Gumbel, Fuller and log-normal distribution and was completed applying the Kolmogorov-Smirnov test with a confidence level of 95%. The most appropriate distributions were selected by comparing the values of root mean square deviation (RMSD).
Results
In summer 2011, the field survey consisted of 190 crosssection measures in the Molinara catchment, corresponding to 155 homogenous reaches. Regarding the Val del Lago catchment, a total of 73 cross-sections, grouped into 45 homogeneous reaches were surveyed. The DoD calculation of debris-flow volume and depths of erosion for the Molinara torrent provided a volume of 68400 m 3 (with an error of the estimate: Errv,high = 13 100 m 3 , [7]) and 8700 m 3 (Errv,high = 2700 m 3 ) in the Val del Lago catchment.
Both DoD volumes were substantially confirmed by the post-event surveys of the Torrent Control Service (Trento Province authority). Afterwards the average erosion depths from the field survey and LiDAR analysis were compared. The analysis highlighted that field observations have systematically underestimated the erosive depths, evidencing a linear correlation between field (ZField) and DoD values (ZDoD). Underestimation was significant and can be expressed by the following equation: = (1) where the coefficient q showed a value of 0.39 for Molinara (p value < 0.01, R 2 = 0.46) and 0.44 in Val del Lago (p value < 0.01, R 2 = 0.49). Thanks to equation 1 the final correct depths of erosion reached maximum values (Zmax) of 4.86 and 3.74 m in the Molinara and Val del Lago torrents respectively. The analysis of the datasets of erosion depth (Z) indicated that are significantly skewed with a long tail for the largest values. The Z sample characteristics in terms of the cumulative distribution function (CDF) are shown in Fig. 4a, where the variable has been normalised to the dimensionless depth: Zr = Z/Zmax, being Zmax the maximum Z measured in each stream. The Zr sample has proven not to be adaptable to the exponential and logistic probability distributions (Kolmogorov-Smirnov test, confidence level of 95%), while it resulted adaptable to all three right-skewed distributions. Comparing the measured Z values with those expected from the CDF of the log-normal, Fuller and EV1 distributions, the latter was the most accurate (RMSD equal to 0.22 and 0.17 for the Molinara and Val del Lago basins, respectively). Afterwards, the hypothesis that the datasets of the two basins belonged to the same population was tested focusing on the dimensionless erosion depth. The statistical analysis used the non-parametric Mann-Whitney test (95% confidence level) assuming the null hypothesis corresponding to samples with equal distribution (Gumbel, Fuller, log-normal), and the alternative hypothesis to samples with different distributions. The analysis proved that the Z/Zmax values of the adjacent catchments belong to the same population with a p-value << 5%. The joint Molinara-Val del Lago Zr sample (176 values) has an average of 0.308 and a standard deviation of 0.228. This sample has been shown to follow an extreme value EV1 (Extreme Value Type I) probability distribution at best (Fig. 4b). The function of the cumulative probability (P) of non-exceedance can then be written as: where y is the reduced variable of the distribution and the parameter estimation (method of moments) yields: α= 5.628 and µ=0.206. The corrected adaptation of the sample to the EV1 probability distribution was positively verified even by means of [20] test assuming a confidence level equal to 95%. Equation (2) was then tested to recalculate the eroded volume Ve through the assumption of a number (n) of equal-spaced Zr,i (=Zi/Zmax) intervals as follows: where pr,i is the relative probability density (Equation 2) associated with the normalised erosion rate Zr,i of the ith interval, L is the total length of the erodible channels and bm the mean torrent width. The computation of Ve assuming 10 equal intervals of Z/Zmax (class width of 0.1) provided values of eroded material equal to 63000 m 3 in the Molinara channel and 8000 m 3 in the Val del Lago reach. These volumes do not vary significantly with an increase in the number n of the intervals (i.e., 13% reduction in volume for n=100) and substantially agree with those calculated through DoD analysis.
Discussion and conclusions
The mobilised sediment volumes in the triggering areas were negligible compared to the final magnitudes. The investigated two adjacent debris flows were dominated by in-channel sediment entrainment, equal to 68400 and 8700 m 3 in the Molinara and Val del Lago basins respectively. The maximum erosion depths (4.9 m and 3.7 respectively) are comparable to those found by [8,10]. According to [21], long silent periods have been confirmed as preparation for important sediment recharge, particularly for the geological settings of volcanic and compact metamorphic rocks. When the hazard has to be assessed in these lowfrequency/apparently stable mountain streams, the difficulty of the geomorphic estimates of the expected sediment volumes makes the identification of a statistical erosion pattern a practical and valuable tool. As already shown by [8], the distribution of the erosion depth Z has been confirmed to be markedly asymmetrical towards the right tail. Starting from the scale of the expected maximum deepening of the bed (Zmax), recalculating the volume -accomplished by means of the fitting probability distribution EV1 (Equation 2) and a simplified sediment budget (Equation 3) -has proven to be very precise, requiring only a few data. The computation for the Molinara/Val del Lago basins has obviously benefitted from the backanalysis and thus from the Zmax field measurement, and parameter (α and μ) adaptation. Nevertheless, the computational method could be quite promising and innovative when used in a purely predictive task. Indeed, assuming the channel length to be invariant under erosion and the average channel width quasiinvariant, Zmax could be measured by carrying out geophysical surveys to estimate the sediment thickness. A certain degree of subjectivity would remain in the distribution parameters but the proposed model for real cases and their comparison with the measurable values of bank heights before the event would help to make a reliable choice. Additional verifications are then recommended, in terms of the number of events (e.g., monitored cross-sections within a network of experimental debris-flow catchments) and spatial continuity of the information provided by intensive post-event surveys. | 3,448.2 | 2023-01-01T00:00:00.000 | [
"Geology"
] |
A SECOND OPINION ON RELATIVE TRUTH
In ‘An undermining diagnosis of relativism about truth’, Horwich claims that the notion of relative truth is either explanatorily sterile or explanatorily superfluous. In the present paper, I argue that Horwich’s explanatory demands set the bar unwarrantedly high: given the philosophical import of the theorems of a truth-theoretic semantic theory, Horwich’s proposed explananda, what he calls acceptance facts, are too indirect for us to expect a complete explanation of them in terms of the deliverances of a theory of meaning based on the notion of relative truth. And, to the extent that there might be such an explanation in certain cases, there is no reason to expect relative truth to play an essential, ineliminable role, nor to endorse the claim that it should play such a role in order to be a theoretically useful notion.
Introduction
In (2014), Horwich argues that semantic theories based on the notion of relative truth are empirically useless, for they are unable to account for the phenomena any suitable theory of meaning ought to explain: certain observable facts concerning our linguistic activity that he calls acceptance (and rejection) facts.Moreover, even if this were not so and there were an adequate explanation of those phenomena in terms of relative truth, the notion of relative truth itself would be explanatorily idle.Put more bluntly, relative truth is either explanatorily sterile or explanatorily superfluous.In either case, truth-theoretic semantic theories based upon the notion of relative truth ought to be abandoned in favor of a use-theoretic theory of meaning.
In this paper, I'll essay a defense of relativistic theories of meaning against Horwich's criticism.It should be noted from the start that, although Horwich mounts his case against MacFarlane's way of spelling out a truthrelativistic theory of meaning (cfr.MacFarlane (2014, ch. 3-4)), the criticism easily applies to any relativistic theory of meaning cast in the semantico-pragmatic style of Kaplan (1989).Hence, Horwich's objection affects more than a single brand of relativism.In order to discuss the objection, I'll start with a brief account of what relativistic theories of meaning are (section 2).Then, I'll move on to a consideration of Horwich's objection to theories of this kind, temporarily granting the assumption that theories of meaning ought to explain acceptance (and rejection) facts (section 3).Finally, I'll provide a reply to Horwich's claims (section 4), and I'll provide some reasons for rejecting acceptance (and rejection) facts as the proper explananda of a relativistic theory of meaning (section 5).
Semantico-pragmatic theories based on relative truth
The first order of business is to provide an adequate characterization of Horwich's targets, the semantico-pragmatic theories of meaning that deploy the notion of relative truth.As Horwich's objection is explicitly directed against MacFarlane's brand of relativism, I'll start with MacFarlane's characterization.
MacFarlane (2014) favors a purely semantic presentation of relativism (which justifies calling it truth relativism) according to which propositional truth is assessment sensitive, in the sense that the truth value of evaluative propositions (i.e., the propositions expressed by sentences such as 'Sushi is delicious' and 'Stealing is wrong', among others) partly depends on a context of assessment.The properly semantic part of a relativistic theory of meaning, which MacFarlane calls semantics proper, aims to define, in the style of Kaplan (1989), a predicate 'true at <w, s>' for propositional contents, where <w, s> is a circumstance of evaluation composed of a possible world w and an evaluative parameter s (a standard of taste, a system of moral norms, etc.).After semantics proper comes post-semantics, the part of a theory of meaning whose job is to characterize a predicate of immediate pragmatic relevance, 'true as used at c and as assessed from c'', in terms of the properly semantic predicate 'true at <w, s>', along the following lines: The proposition expressed by a sentence S at a context c is true as used at c and as assessed from c' iff it is true at <wc, sc'>, where wc is the world of c and sc' is the evaluative parameter relevant at the context of assessment c'.This is a predicate of immediate pragmatic relevance in the sense that it provides the interface between the theoretical concept of truth at a circumstance for propositions and central pragmatic concepts.In particular, the interface with pragmatics comes by linking this predicate with speech acts such as assertion, retraction and rejection by means of principles like the following: Reflexive Truth Rule.An agent in context c is permitted to assert p only if p is true as used at c and as assessed from c. Retraction Rule.An agent in context c' is required to retract an (unretracted) assertion of p made at c if p is not true as used at c and as assessed from c'.Rejection Rule.An agent in context c' is permitted to reject an assertion of p made at c if p is not true as used at c and as assessed from c'.
As we've already remarked, the Reflexive Truth Rule, the Retraction Rule and the Rejection Rule act as semantics-pragmatics bridging principles, for they connect the theoretical truth predicate 'true at <w, s>' with normative conditions for the speech acts of assertion, retraction, and rejection, via a predicate with immediate pragmatic relevance, the propositional predicate 'true as used at c and as assessed from c''.
Of course, other presentations are also possible.In the presentation I favor, relativism about truth -or, more properly, radical relativism-is a semantico-pragmatic approach to natural languages according to which the correctness of assertive utterances of sentences belonging to evaluative discourses is sensitive to the perspective from which those utterances are assessed. 1As before, the properly semantic part of radical relativism aims to define the predicate 'true at <w, s>' for propositional contents.The pragmatic part contains certain principles linking truth at a circumstance for propositions with correctness for utterances-for example: Utterance correctness.An utterance of a sentence S made at c is correct, as assessed from c', if and only if the proposition expressed by S at c is true at <wc, sc'>, where wc and sc' are as before.Thus, Utterance correctness is a semanticspragmatics bridging principle linking the semantic-theoretic notion of truth at a circumstance of evaluation for propositions with a notion of immediate pragmatic relevance, correctness as assessed from a context c' for assertive utterances (cfr.also Kölbel (2008aKölbel ( ,2008bKölbel ( ,2009) ) for a similar articulation of a moderate version of relativism).
Be it under one style of semantico-pragmatic theorizing or the other, we should note the existence of a general theoretical schema that consists of a (properly) semantic-theoretic predicate 'true at <w, s>' and a predicate with immediate pragmatic relevance, which allows for the connection between semantics and facts about our use of language.This is an important feature of relativistic theories of meaning, for Horwich's objection will crucially rely on the need for these principles in order to connect the semantic-theoretic predicate of truth at a circumstance with language use.
A final twist: since, for Horwich, the explananda of a semantic theory are facts about acceptance (and rejection) of sentences, we'll also need semantic and pragmatic concepts defined for sentences.The structure of a Kaplan-style semantic theory makes this easy, for the propositional truth predicate 'true at <w, s>' is actually introduced on top of the recursive definition of a sentential truth predicate, 'true at a context c with respect to a circumstance of evaluation <w, s>', where c is a context of utterance.More precisely, the propositional truth predicate is introduced by the following equivalence: The proposition expressed by S at c is true at <w, s> iff S is true at c with respect to (w.r.t.) <w, s>, where the proposition expressed by S at a context c is (represented by) the function that maps a world-perspective pair <w, s> to truth just in case S is true at c with respect to <w, s>, and to falsity otherwise. 2Now we may restate the 2 Even more precisely, the predicate that is actually defined is 'true at a context c with Manuscrito -Rev.Int. Fil., Campinas, v. 38, n.2, pp.65-88, jul.-ago. 2015.bridging principles in terms of this last predicate: Reflexive Truth Rule (for sentences).An agent in context c is permitted assertively to utter S only if S is true at c w.r.t.<wc, sc>.Retraction Rule (for sentences).An agent in context c' is required to retract an (unretracted) assertive utterance of S made at c if S is not true at c w.r.t.<wc, sc'>.Rejection Rule (for sentences).An agent in context c' is permitted to reject an assertive utterance of S made at c if S is not true at c w.r.t.<wc, sc'>.Utterance correctness (for sentences).An utterance of a sentence S made at c is correct, as assessed from c', if and only if S is true at c w.r.t.<wc, sc'>.
We are now in a position to address Horwich's objection.
Horwich on relative truth
As we've already advanced, in (2014) Horwich attempts to argue that either relativistic theories of meaning are unable to explain acceptance/rejection facts, or, to the extent that they are able to do so, relative truth is doing no real, explanatory work and is, therefore, a superfluous concept that we ought to reject.
As the starting point of his case against relativism, Horwich notes that a set of truth-theoretic semantic axioms can have the required empirical import only if supplemented with principles linking the truth conditions assigned to the sentences of the language under study with facts about our use of language.This is true, particularly, of the ways in which we've presented relativistic theories of meaning, for the role of the semantics-pragmatics bridging principles we've identified is precisely that of linking the distribution of truth values at the different circumstances of evaluation specifiable in the properly semantic part of a relativistic theory of meaning with facts about the correctness or incorrectness of the corresponding assertive utterances.It is through the respect to a circumstance of evaluation <w, s>, under an assignment f' for formulas.We are dropping the assignment in order to make things more legible.specification of correctness conditions for assertive utterances that these principles have normative consequences concerning the acceptance and rejection of utterances, as well as concerning assertion and retraction.(Obviously, this is why Horwich's objection will apply equally to both kinds of presentation, for both share this feature: properly semantic notions have empirical import only insofar as they are connected with pragmatics by means of this kind of principles.) Now, Horwich raises the stakes at this point, by insisting that the empirical consequences of a semantic theory should be statements about linguistic activity: the connection between semantic theory and language use should be between distributions of truth values at different circumstances and concrete and observable facts about our linguistic activity.These are basically what Horwich (2010, ch. 8) calls acceptance facts (and we could -and shouldinclude facts about rejection as well), i.e., facts about acceptance (and rejection) of sentences by particular speakers on given situations.Certainly, at this point, the advocate of relativism might be tempted to halt Horwich's considerations by holding that the empirical basis of a semantic theory (at least for the quarters of truth-theoretic semantics) is usually thought to consist of the intuitive judgments competent speakers make regarding the truth conditions of sentences in context, or concerning the (objective) correctness conditions of the corresponding utterances-so that we should not grant that the explananda are concrete, observable cases of acceptance and rejection.However, certain considerations recommend letting Horwich's objection roll, at least pro tem, and consider the possibility of coming up with an explanation of actual linguistic activity in terms of the deliverances of a truth-theoretic semantic theory: as is well known, intuitive judgments may not be as stable and as clear as desirable; and the careful consideration of Horwich's requirement to account for actual linguistic practice will allow us to conclude that intuitive judgments (their relative lack of stability and clarity notwithstanding) constitute an empirical basis for semantic theories that is more immediate than acceptance/rejection facts, for the explanation of these forces us to take into consideration a wider set of theories.
So, what are the principles that should make the connection between truth-theoretic axioms and theorems and the use of language?Since the facts to explain are facts about acceptance and rejection, the natural candidates are the norms that guide the speech acts we normally use in performing those activities.We've already seen the norms for assertion, retraction and rejection.To these we Manuscrito -Rev. Int. Fil., Campinas, v. 38, n.2, pp.65-88, jul.-ago. 2015.should add the norm for acceptance: Acceptance Rule (for sentences).An agent in context c' is permitted to accept an assertive utterance of S made at c only if S is true at c w.r.t.<wc, sc'>.
For Horwich, these principles have the burden of providing the connection between the semantic theorems and the use of language.But how might they do it?
A problem that Horwich immediately points out is that these principles actually do not allow us to explain any linguistic activity at all, for normative principles like these cannot effectively constrain what we actually do.That is, from the fact that we have, e.g., permission assertively to utter or to accept a sentence only if it is true, or the obligation to retract its assertive utterance if it is false, it doesn't follow that we actually do so.Thus, for Horwich, what would have the required import are not those principles themselves, but that we had a tendency or propensity to abide by those principles.Given such a propensity, the purely semantic theorems that constitute the deliverances of the truth-theoretic axioms would allow us to explain the acceptance/rejection facts.Horwich (2010, ch. 8) is quite explicit about how such a tendency could help in explaining linguistic activity in the case of acceptance facts, though what he says easily applies to rejection facts as well.As we've already seen, acceptance facts are concrete cases in which a speaker accepts a certain sentence.As it turns out, sentence acceptance is somewhat complex, and encompasses two cases: one might accept a sentence by assertively uttering it, or by accepting someone else's utterance of that sentence. 3In the first case, the primary target of acceptance is the sentence itself (or the sentence as used and as assessed from the context of assertion), whereas in the second case, the primary target of acceptance is the utterance (or the sentence as used at its original context and as assessed from the context at which the acceptance takes place).Thus, acceptance facts are facts about speakers uttering sentences in certain circumstances, and about speakers accepting others' utterances in certain circumstances.In the spirit of truth relativism we could (and should) add rejection facts to acceptance facts.These also come in two varieties: the rejection of someone else's assertive utterance, and the retraction of one's own assertive utterances.So, the linguistic activity to be explained also includes concrete cases of rejection and concrete cases of retraction.(Without forgetting these nuances, I'll speak, for the time being, simply of acceptance and rejection facts.)Horwich (2010, ch. 8) is quite explicit about the form of a possible explanation for acceptance facts, a form of explanation that makes it clear in which sense the alleged tendency to abide by the principles governing assertion and acceptance plays a central role in the explanation of linguistic activity.It should be noted that Horwich gives an explicit formulation for the case of truthconditional semantics based on the notion of absolute truth in the semantic style of Davidson (1967).However, what he says can be transposed to a semantic theory based on the notion of relative truth and couched in the semantic style we've chosen.
Let's start with Horwich's criticism as it applies to a Davidsonian semantic theory.In (2010, ch.8), Horwich holds that Davidsonian truth conditional semantics would be incapable of explaining acceptance facts because there wouldn't be any causal-explanatory link between the (alleged) semantic fact that a sentence has certain truth conditions and the fact that, in particular circumstances, an agent accepts that sentence.That is, truth-conditional semantics wouldn't be able to predict, with high enough probability, that a given acceptance fact will take place, on the basis of the truth conditions of the accepted sentence.
Thus, we start to see what form an explanation of an acceptance fact should take: it should employ an alleged causal-explanatory link between the possession of certain truth conditions by the accepted sentence and its acceptance by an agent under certain circumstances in order to assign a high enough probability to the occurrence of a particular case of acceptance in circumstances of that type.The explanatory schema that Horwich deems initially plausible is quite illustrative of this point.For him, an explanation of an acceptance fact could take the form of the following derivation: 1. S is true iff p 2. A will probably accept S iff S is true 3. A will probably accept S iff p Manuscrito -Rev.Int. Fil., Campinas, v. 38, n.2, pp.65-88, jul.-ago. 2015.
A will probably accept S
In this derivation, premise 1 is provided by the proposed truth-conditional analysis of sentence S. Premise 2 captures the alleged propensity to abide by the norm for sentence acceptance (i.e., the alleged propensity to accept S just in case S is true); 3 follows from 1 and 2; 4 results from observation, or from some other way of determining that a truth condition for S actually takes place; 5 is just the conclusion that A will probably accept S, given its truth, in the circumstances she's in.If we pause to check this derivation, it is easy to see that the crucial step (indeed, the step that offers the causal-explanatory link required by Horwich) is step 2, the step that expresses the assumption that we have a tendency to abide by the normative requisite for sentence acceptance.
In the case of rejection, a similar schema could be provided: 1. S is true iff p 2. A will probably reject S iff S is not true 3. A will probably reject S iff not-p 4. not-p 5.A will probably reject S Here, the crucial step linking the semantic analysis of sentence S with a concrete fact of rejection of S by A is step 2 again, that is, the step that expresses the alleged tendency to reject a sentence just in case it is false.The problem for this line of thought, of course, is that we do not have the alleged tendencies.That is, we do not have a tendency to accept a sentence just in case it is true, and we do not have a tendency to reject a sentence just in case it is false.Thus, the pretension of explanatory adequacy of truth-conditional semantics seems to crash at this point.This is particularly clear in the case of non-evaluative sentences, such as: (1) Snow is white.
(2) Red soils have a high concentration of iron.
If we actually had those tendencies, then our epistemic lives would be considerably happier than they actually are.The consideration of evaluative sentences such as: (3) Sushi is delicious, on the other hand, might generate the hope that a semantic theory based on the notion of relative truth may succeed where one based on the notion of absolute truth fails.Of course, even so, relativistic theories of meaning would end up being inadequate, since they yield, for non-evaluative sentences, the same results as non-relative truth-theoretic semantic theories.Thus, insofar as a theory of meaning based on the notion of relative truth should also account for nonevaluative sentences, it would have the same fate as semantic theories based on the notion of absolute truth.However, relativism would be adequate at least for evaluative sentences, that is, those sentences that motivate the truth-relativistic approach in the first place.The reason is the following: even though it is implausible to suggest that we have a tendency to accept (reject) non-evaluative sentences just in case they are true (false), it might be plausible to claim that we have a tendency to accept evaluative sentences just in case they are true (false) as assessed from our own perspectives.Following this line of thought, Horwich (2014) points out that, in the case of evaluative sentences, the following principles have certain plausibility: (4) At a context c, A will probably assertively utter S just in case S is true at c w.r.t.<wc, sc>.
(5) At a context c', A will probably accept an assertive utterance of S made at c just in case S is true at c w.r.t.<wc, sc'>. 4 Again, these principles might serve as a link between the (alleged) semantic fact 4 Actually, Horwich considers an undifferenciated principle for sentence acceptance: At a context c, A will probably accept S just in case S is true in A's context of assessment.
(cfr.Horwich (2014, 745).)However, once we take into account the difference between assertion and acceptance, we need to provide two different principles.Also, relativity to a context of utterance is required in order to make room for indexicality, as well as for non-indexical dependence on features of the context of utterance, such as the world of that context.Manuscrito -Rev. Int. Fil., Campinas, v. 38, n.2, pp.65-88, jul.-ago. 2015.
that an evaluative sentence has certain truth conditions relative to an evaluative perspective, and the concrete, observable fact of its acceptance by an agent on a particular occasion, since they express the tendency to accept an evaluative sentence just in case it is true relative to one's own evaluative perspective.Similar principles for rejection and retraction, (6) At a context c', A will probably reject an assertive utterance of S made at c just in case S is not true at c w.r.t.<wc, sc'>.(7) At a context c', A will probably retract an (unretracted) assertive utterance of S made at c just in case S is not true at c w.r.t.<wc, sc'>, would allow us to link semantic facts with concrete, observable cases of rejection and retraction.
Assuming the plausibility of these principles, we could have the hope of being able to explain acceptance and rejection facts by means of derivations analogous to the one explicitly given by Horwich. 5 If we distinguish clearly between acceptance and assertion, on the one hand, and rejection and retraction, on the other, it is possible to come up with four schemata for explaining acceptance and rejection facts: These principles don't seem to be plausible even when restricted to evaluative sentences, for there are evaluative questions whose decision might be quite complex-just think about the numerous factual considerations that may be relevant in order to determine whether a given action is morally wrong, or whether a given belief is justified, and about all the possible interactions and incompatibilities between moral or epistemic norms or policies of different levels that constitute moral and epistemic systems.Maybe these principles are plausible only for evaluative properties that are simple from the factual and normative point of view, such as those involved in matters of taste, matters of humor, etc.In these cases, given the simple character of the normative side of the judgment, first hand knowledge of the facts of the disputed question (e.g., knowing how sushi tastes) might be sufficient to know everything there is to know in order to decide the corresponding evaluative question (e.g., whether sushi is delicious or not).However, I'll keep the assumption that these principles are plausible for evaluative sentences in general, since my answer to Horwich's concerns won't depend on any take on this issue.
Sentence acceptance (assertion)
1.For all S, c, w, s: S is true at c w.r.t.<w, s> iff the proposition expressed by S at c is true at <w, s>
Retraction
1.For all S, c, w, s: S is true at c w.r.t.<w, s> iff the proposition expressed by S at c is true at <w, s> 2. A subject A at a context c' will probably retract an (unretracted) assertive utterance u of S made at c iff S is not true at c w.r.t.<wc, sc'> 3. A will probably retract u iff the proposition expressed by S at c is not true at <wc, sc'> 4. The proposition expressed by S at c is not true <wc, sc'> 5.A will probably retract u Through these schemata, the distributions of truth values relative to circumstances of evaluation posited by a truth-relativistic semantic theory, together with the tendencies to abide by the norms for assertion, acceptance, rejection and retraction -when restricted to evaluative sentences-, would be enough to explain acceptance and rejection facts concerning evaluative sentences.
However, Horwich continues, these principles are not really illuminating of acceptance and rejection facts, for the explanations remain silent about what it is for a sentence to be true (or false) at a context of use and a context of assessment, and nothing is said about what it is for a context to be a context at which an evaluative sentence is true (or false, as the case may be).Thus, in order to have a full explanation of acceptance and rejection facts, we also have to make explicit the implications of certain non-semantic features of the contexts of use and assessment for the possession of one or other truth value by an evaluative sentence relative to those contexts.That is, besides citing principles linking relative truth and the use of language, we are owed some principles linking nonsemantic features of context with the notion of relative truth.That is, we must answer the question: What it is for a context to be a context at which an evaluative sentence is true (or false)?
Now, once we do this and we make explicit what is involved in a sentence's being true relative to a context of assessment, it becomes clear that any deployment of the notion of relative truth in the explanation of acceptance and rejection facts is, at best, accessory.Indeed, e.g., to the question, 'What it is for a context to be a context at which "Sushi is delicious" is true?', the answer is: It is to be a context such that its agent likes the taste of sushi.That is, the It's clear that we can replace the right hand side of step 2 with the right hand side of (8), so as to obtain: 2'.A subject A at a context c will probably assertively utter 'Sushi is delicious' iff A likes the taste of sushi (at tc and wc) Thus, it is clear that the explanatory work is being done by the non-semantic features of the context at which the acceptance takes place, not by the sentence being true relative to that context: once we arrive at 2', any deployment of the notion of relative truth in the explanation of an acceptance fact becomes eliminable in terms of the non-semantic features of context.(And the same holds for the other varieties of acceptance and rejection.) In this way, Horwich's dilemma for truth relativism is set: either a semantic theory based on the notion of relative truth is incapable of explaining acceptance and rejection facts (in the case of non-evaluative sentences), or the notion of relative truth plays an inessential role and can be eliminated (in the case of evaluative sentences).In either case, we should simply abandon the notion of relative truth.Manuscrito -Rev. Int. Fil., Campinas, v. 38, n.2, pp.65-88, jul.-ago. 2015.
To this we could add the following consideration.The attentive reader will have already noticed two things.First, that steps 2 and 3, in each of the proposed schemata, are unnecessarily strong: all we need, in each case, is the right-to-left direction of the biconditional.Second, only an alleged tendency to abide by the normative principle for retraction would allow us to obtain this direction.The alleged tendency to abide by the normative principles for acceptance and assertion, on the other hand, would only vindicate the left-toright direction.This is due to the fact that the normative principle for retraction offers sufficient conditions for that action to be mandatory, whereas the normative principles for assertion and acceptance only offer necessary conditions for acceptance and assertion to be permissible.As for the principle for rejection, it provides a sufficient condition for rejection to be permissible, not mandatory, so any alleged tendency to abide by it wouldn't quite explain linguistic facts concerning rejection either.Thus, even if we had the alleged tendencies, most acceptance and rejection facts would still be quite hard to explain.So, it seems that relativism is in pretty bad shape.
Semantic theory and linguistic activity
Fortunately, it's not necessary to accept Horwich's objection.In order to see this, it should be noticed that the dilemma that Horwich presents to the relativist rests on four claims: 1.A semantic theory must explain (i.e., assign high probability to) the occurrence of concrete facts of acceptance and rejection.2. In the case of a semantic theory based upon the notion of truth (be it absolute or relative), the explanation must exploit a causal-explanatory link between the possession of certain truth conditions on the part of a given sentence and the concrete facts of acceptance and rejection involving it.3. In most cases, there is no causal-explanatory link between the possession of certain truth conditions by a given sentence and the corresponding acceptance/rejection facts 4. In those cases in which there is such a causal-explanatory link, what is really performing the explanatory work is a non-semantic feature of the context in which the acceptance/rejection takes place, not a semantic fact about that sentence.
Horwich's considerations regarding the inexistence (in most cases) of a propensity to abide by the normative principles that guide acceptance, rejection, assertion and retraction seek to provide support to claim 3, and claim 4 receives its support, e.g., from the observation that what would actually explain a sincere acceptance of a sentence like 'Sushi is delicious' by a speaker at a given context would be the non-semantic fact that the speaker likes the taste of sushi, not the semantic fact that said sentence is true at that same context.
How may we resist Horwich's dilemma?Well, premise 1 seems suspicious, for it's not clear that an explanation of an event must require an assignment of high probability of occurrence to it.Simplifying a bit, a probabilistic prediction is considered explanatory when it assigns, to a certain event, a probability of taking place that is in the vicinity of an observed frequency for that event.If the observed frequency is low (and this, as is well known, depends, among other things, upon the way in which the event is described), a good probabilistic explanation of this event will be one that assigns to it a low probability of taking place.Thus, it's not obvious that we should grant claim 1, at least without a caveat.However, I'm prepared to grant it for the sake of argument.
Now, what about claim 2? Are there any good reasons to accept it?The answer seems to be, decidedly, no.In order to understand why, we should consider, in a more detailed manner, the theoretical role played by the theorems of a semantic theory based on the notion of truth (be it relative or not).
Allow me to start with semantic theories based upon the notion of absolute truth, since here the point is best appreciated.In its Davidsonian presentation, the goal of a semantic theory is recursively to assign truth conditions to the (declarative) sentences of a given language.Thus, the theorems of such a semantic theory associate sentences with truth-conditions in a formulation that, with some simplification, adopts a familiar form: (9) 'Snow is white' is true iff snow is white.(10) An occurrence of 'Snow is white' at a context c is true iff snow is white (at the world of c).
the notion of meaning in their formulation, are put forward as theoretical articulations of the meaning of sentences-or, more properly, as a theoretical articulation of that dimension of sentential meaning that is responsible for the objective correctness of the corresponding utterances.Thus, the goal of these theorems is to capture (employing a vocabulary that is, allegedly, better understood than, or otherwise susceptible of a more systematic treatment than, intensional vocabulary) those facts about the meaning of sentences that, more informally, could be captured by means of clauses like: (11) 'Snow is white' (as uttered in c) means that snow is white.6 Essentially the same holds for those semantic theories whose goal is recursively to assign entities encapsulating truth conditions to sentences, such as propositions.These theories seek to prove theorems that, informally, may be formulated along the following lines: (12) 'Snow is white' expresses the proposition that snow is white.
(13) 'Snow is white' (as used at c) expresses the proposition that snow is white.
Again, the idea behind these theorems is to capture, in a systematic way, the dimension of sentence meaning responsible for the objective correctness of the corresponding assertive utterances.
And the same holds for the way in which relativism has been articulated: the recursive definition of 'true at c w.r.t.<w, s>' for sentences (and of 'true at <w, s>' for propositions) seeks to articulate the dimension of sentence meaning responsible for the objective correctness of the corresponding assertive utterances.This might be obscured by the fact that the truth definition achieves this systematization only when supplemented with an appropriate bridging principle.In any event, with such a principle at play, the theorems of the properly semantic part of the theory of meaning may also be seen as specifying and systematizing those facts about sentence meaning.Now, if this is the philosophical content that we ought to read into the theorems of a truth-based semantic theory, then it is to be expected that they play a role in the explanation of actual linguistic activity.After all, the fact that our words and sentences mean what they do plays an important part (even though it is usually taken for granted) of why we utter them.(Just to belabor an obvious point, the fact that 'I'm hungry' means, on an approximate account, that the speaker is hungry -and not that she is bored-partially explains why a hungry speaker utters that sentence.As usual, the air of triviality is dispelled, and the explanatory role of facts about meaning is highlighted, when we deal with a sentence that belongs to a language different from the one in which the explanation is offered: the fact that the Italian sentence 'Ho fame' means that the speaker is hungry partially helps to explain why a hungry speaker utters that sentence on a particular occasion.)However, this role theorems play is quite restricted: they allow us to explain why a speaker utters a particular sentence and not a different one, but only if we already have an explanation of why she considered it relevant to express the corresponding proposition; in a similar way, they allow us to explain why she accepted or rejected a given utterance, but only if we already have an explanation of why she considers acceptable or unacceptable the proposition expressed by that utterance.
To put it differently, the semantic fact that a sentence means what it means (that it has certain truth conditions, that it expresses certain proposition) plays a role in the explanation of linguistic activity, but the burden of the explanation doesn't fall upon it, but upon an explanation of a different kind.In the case of the assertive utterance of a given sentence, it might be an explanation of why the speaker considered it relevant to make that claim instead of a different one (or instead of remaining silent).In the case of the acceptance of someone else's utterance, it might be an explanation of why she deemed acceptable what was said by means of that utterance.In the case of rejection, why it ought to be rejected.And in the case of retraction, why she considered that what was said by means of her earlier assertion ought to be now rejected, and why she deemed it relevant to make explicit her rejection.In all these cases, the explanation has an ineliminable epistemic component: it is necessary to explain why an agent considers certain proposition as true or false, or why she considers that there are other good reasons (e.g., appropriate or inappropriate evidence) to grant approval or to refrain from assenting.And there is also an ineliminable component of conversational rationality: it is necessary to explain why, independently of the epistemic evaluation, the agent deemed it relevant, or appropriate, to make the Manuscrito -Rev. Int. Fil., Campinas, v. 38, n.2, pp.65-88, jul.-ago. 2015.claim or retract it, rather than remaining silent, to accept or reject a claim instead of just letting it go through.Thus, the explanation of acceptance/rejection facts requires considerations that by far outreach what can be plausibly demanded of a truth-theoretic semantic theory, for they pertain to general issues having to do with epistemic and conversational rationality.This is why Horwich's second claim is not a plausible thesis: the link between semantic facts and actual linguistic activity is too mediated by epistemic and pragmatic considerations so as to think in the existence of a causal-explanatory route from those facts to this activity that allows us to assign, on its own, a high probability to concrete facts of acceptance and rejection.
What about claims 3 and 4? I think that they can (and should) be accepted.However, this acceptance is now far from being problematic.In the context of Horwich's reasoning, the acceptance of 3 generated the first horn of the dilemma: the inexistence of a causal-explanatory route mentioned in 3, in the presence of claim 2, lead to the conclusion that a semantic theory based on the notion of relative truth is incapable of explaining facts about sentence acceptance and rejection; now, once we reject the idea that an explanation of acceptance/rejection facts ought to exploit such a causal-explanatory route (an idea that places an unmeetable explanatory demand on relativism alone), accepting claim 3 presents no problem.
Regarding 4, the claim responsible for the second horn of the dilemma, its acceptance should have never been regarded as problematic.The reason is the following.Let's consider an assertive utterance of: (14) Sushi is delicious, made by A at a context c.In the style of explanation favored by Horwich, the attempt to explain this utterance in semantic terms relies on a clause like the following: (15) At a context c, A will probably assertively utter 'Sushi is delicious' iff 'Sushi is delicious' is true at c w.r.t.<wc, sc>.Now, given that 'Sushi is delicious' is true at c w.r.t.<wc, sc> just in case A likes the taste of sushi, what is actually doing the explanatory work seems to be: (16) At a context c, A will probably assertively utter 'Sushi is delicious' iff A likes the taste of sushi (at wc and tc).
Does it follow from this that the semantic fact that ( 14) is true at c w.r.t.<wc, sc> is eliminable in favor of an explanation that relies only on non-semantic facts in its place?Yes, but this, on itself, doesn't constitute a problem, since there is a sense in which ( 15) and ( 16) say the same thing, for they cite the same fact in explaining A's tendency to assertively utter ( 14) on that particular occasion: the difference between them is just that, while (15) describes that fact in a formal mode of speaking, ( 16) describes that same fact in a material mode of speaking.Thus, an explanation involving ( 15) and an explanation involving ( 16) would accomplish the same thing, e.g., explain an assertive utterance of the sentence 'Sushi is tasty' by an agent A in context c, by citing the same fact described in different ways.So, it is true that explanations invoking semantic facts can be eliminated in favor of explanations invoking non-semantic facts.But this doesn't mean that relative truth is an idle notion: it just provides a different way of describing the fact that the agent of the context likes the taste of sushi.And it should also be remarked that the usefulness of the predicate 'true at c w.r.t.<w, s>' (and of the propositional predicate 'true at <w, s>') doesn't lie in the introduction of a substantial property that would be explanatorily ineliminable, but in allowing for the systematization of facts about sentential meaning.So, even though truth talk is strictly eliminable in the explanations devised by Horwich, we were wrong in expecting it not to be.Something similar applies to the predicates with immediate pragmatic relevance, 'true as used at c and as assessed from c'' in the case of MacFarlane's view of truth relativism, and 'correct as assessed from c', in the case of the other way of spelling out radical relativism: they do not introduce substantial, potentially analyzable properties that could feature in explanations in an essential, ineliminable way, but they function as bridges that allow for the connection between semantics proper and normative facts concerning the use of language.So requiring that these predicates be ineliminable in a putative explanation of acceptance and rejection facts would be going beyond the explicit purposes for which these predicates are introduced in a relativistic theory of meaning.Manuscrito -Rev. Int. Fil., Campinas, v. 38, n.2, pp.65-88, jul.-ago. 2015.
Conclusion
In the previous section, I argued against Horwich's objection by pointing out that, given the philosophical content that truth-theoretic semantic theorems are supposed to have, they are supposed to play a role in the explanation of facts of acceptance and rejection, but only a restricted one, equivalent to the role played by the fact that a sentence means what it means (and not something else) in an explanation of why someone accepts it (or rejects it) on a given occasion.Thus, we shouldn't really expect the existence of a causal-explanatory route between facts about sentential meaning and facts of acceptance and rejection, at least in the following sense: a causal link between the truth or falsity of a sentence and a particular case of acceptance or rejection, such that the probability of someone accepting, or rejecting, a sentence S, given that S is true, or false, is high (or high enough).Indeed, considerations regarding the epistemic or doxastic life of the agent are relevant, as well as considerations concerning conversational rationality.So we shouldn't grant Horwich's assumption that acceptance and rejection facts are the explananda of a theory of meaning, at least in the sense that considerations pertaining the theory of meaning alone -that is, independently of epistemic and pragmatic considerations-should be enough to assign high probability to particular events of acceptance and rejection, on the assumption that the corresponding sentences are true, in the case of acceptance, or false, in the case of rejection.
This much is obvious from the point of view of truth-theoretic semantics: given an explanation of what a sentence means, what is relevant for explaining a fact of acceptance (rejection) on those grounds is not whether that sentence is true (false), but whether the agent thinks it is true (false), or whether she thinks there are good grounds for accepting (or rejecting) it-and whether she thinks it is convenient to voice her acceptance (rejection) or to remain silent.Now, Horwich obviously knows this: his insistence on acceptance and rejection facts being the explananda of a theory of meaning stems from the requirement that semantics be an empirical science.After all, if semantics is supposed to be empirical, shouldn't it deal with the observable?The considerations we've developed thus far, on the other hand, point towards taking intuitions concerning objective correctness and intuitions concerning normative consequences for speech acts as being the evidential basis for a semantic theory, thus relegating the evidential role of acceptance and rejection facts.So, it seems that we may reject Horwich's acceptance and rejection facts as the explananda of a theory of meaning only by rejecting the claim that semantics is an empirical discipline.This is not the place to tackle such a complex issue as that of the sense in which a semantic theory might be empirical, even if grounded upon speakers' intuitions concerning the objective correctness of assertive utterances.A consideration that may ameliorate the situation might be the following: speakers' intuitions are usually revealed by means of verbal activity, so being in agreement with intuition might count as explaining certain facts about our use of language-certainly, a highly specific use of language, but a possible use nonetheless.However, a consideration that is more relevant to Horwich's take on the empirical basis for a theory of meaning is the following: from the point of view of truth-theoretic semantics, a semantic theory is part of a cluster of theories which, together, have empirical consequences concerning linguistic activity.
As we've already remarked, in order to explain an acceptance (rejection) fact, we have to take into account what the sentence object of acceptance (rejection) means, why the speaker had reasons to think that the sentence was true (false) or otherwise warranted (unwarranted), why he deemed it convenient/relevant to voice her view instead of remaining silent, and more generally, if insincere, why she decided to reject a sentence she considered true or warranted, or why she decided to accept a sentence she considered false or unwarranted, etc.
In order to drive the point home, let's consider two cases, one of acceptance, one of rejection.In the first case, A utters 'I'm hungry'.In the second case, A rejects B's utterance of that same sentence, by tokening, 'No, you are not, I saw you having lunch just a few minutes ago'.How may an explanation of those facts go?In the first case, we may essay the following explanation: A assertively uttered 'I'm hungry' because 'I'm hungry' means that the speaker is hungry, A was hungry (and she knew it), and she deemed it relevant to convey the information that she was hungry because she wanted everyone to go out and have lunch.In the second case, we may essay the following explanation: A rejected an assertive utterance of 'I'm hungry' made by B because 'I'm hungry' means that the speaker is hungry, A had good reasons to think that B wasn't actually hungry (A saw B eating a few minutes before), and A deemed it relevant/convenient explicitly to contradict B (maybe because she knows that B finds C's company unpleasant, and A deems that getting the group to go out for Manuscrito -Rev. Int. Fil., Campinas, v. 38, n.2, pp.65-88, jul.-ago. 2015.lunch is a good way to avoid contact with C, and A doesn't like B, so she wants her to be uncomfortable).
So, we may explain why a particular event of acceptance or rejection took place only by taking into account these kinds of considerations.And, using Horwich's model of explanation, we'll be able to assign high probability to an event of acceptance or rejection on a given situation only if we describe the situation in terms that make the acceptance or rejection highly likely-in particular, by appealing to considerations that go well beyond sentence meaning and sentential truth or falsity, and delve into the epistemic, the pragmatic and even the psychological.This is why, even if we regard acceptance and rejection facts as suitable explananda for a cluster of theories, speakers' intuitions will still be more directly relevant to semantic theorizing than facts of acceptance and rejection: fewer theories will be involved when confronting a truth-theoretic semantic theory with its evidential basis if we take intuitions to be the main evidence for or against a theory of meaning.
2.A subject A at a context c will probably assertively utter S iff S is true at c w.r.t.<wc, sc> 3. A will probably assertively utter S iff the proposition expressed by S at c is true at <wc, sc> 4. The proposition expressed by S at c is true at <wc, sc> 5.A will probably assertively utter S Utterance acceptance 1.For all S, c, w, s: S is true at c w.r.t.<w, s> iff the proposition expressed by S at c is true at <w, s> 2. A subject A at a context c' will probably accept an assertive utterance u of S made at c iff S is true at c w.r.t.<wc, sc'> 3. A will probably accept u iff the proposition expressed by S at c is true at <wc, sc'> 4. The proposition expressed by S at c is true at <wc, sc'> 5.A will probably accept u Utterance rejection 1.For all S, c, w, s: S is true at c w.r.t.<w, s> iff the proposition expressed by S at c is true at <w, s> 2. A subject A at a context c' will probably reject an assertive utterance u of S made at c iff S is not true at c w.r.t.<wc, sc'> 3. A will probably reject u iff the proposition expressed by S at c is not true at <wc, sc'> | 11,274.2 | 2015-11-25T00:00:00.000 | [
"Philosophy"
] |
The VEGA Tool to Check the Applicability Domain Gives Greater Confidence in the Prediction of In Silico Models
A sound assessment of in silico models and their applicability domain can support the use of new approach methodologies (NAMs) in chemical risk assessment and requires increasing the users’ confidence in this approach. Several approaches have been proposed to evaluate the applicability domain of such models, but their prediction power still needs a thorough assessment. In this context, the VEGA tool capable of assessing the applicability domain of in silico models is examined for a range of toxicological endpoints. The VEGA tool evaluates chemical structures and other features related to the predicted endpoints and is efficient in measuring applicability domain, enabling the user to identify less accurate predictions. This is demonstrated with many models addressing different endpoints, towards toxicity of relevance to human health, ecotoxicological endpoints, environmental fate, physicochemical and toxicokinetic properties, for both regression models and classifiers.
Introduction
Confidence in using (quantitative) structure-activity relationship ((Q)SAR) models is a critical issue to increase their acceptability as new approach methodologies (NAMs) in next generation risk assessment (NGRA). The difficulty is establishing whether a given (Q)SAR model can be used for a specific substance of interest. A model is based on the information within its training set, and one may expect that a model built from specific substance classes may poorly predict properties for substances belonging to other classes. Thus, if a model has been developed for anilines, it is not reasonable to apply it to alcohols. The problem arises in the case of modern models, which are often built up on heterogeneous classes using a training set that may seem diverse but will never cover all possible chemical differences. Furthermore, there are often families of chemicals that are difficult to predict. One of the reasons for this is that some substances have a peculiar behavior that is poorly represented in the training set, so the model cannot obtain suitable information representing these substances' particular effect, which is masked within the larger set of substances. Therefore, regulatory authorities require evaluating the applicability domain (AD) of the model, as in the European legislation on industrial substances (the Registration, Evaluation, Authorisation and Restriction of Chemicals-REACH-regulation) [1] and in the OECD principles for (Q)SAR [2].
From a chemometric point of view, the task of defining the AD depends on whether the prediction is an interpolation or an extrapolation [3,4]. Several approaches to evaluate the AD of (Q)SAR models have been presented [5][6][7][8][9]. Usually, the information on the training set is used to characterize the chemical diversity of the target substance and to verify whether the substance to be predicted is similar or not; chemical descriptors are used for this purpose [3]. Outliers related to the chemical space have been identified, also providing tools for building (Q)SAR models that cover the AD [6]. In some cases, the SMILES format is used to examine rare features of the molecule [10,11].
Some software programs for AD provide a binary outcome, so that the predictions are identified as either inside or outside the AD. This is the case of the Danish (Q)SAR Database [12]. The Toxicity Estimation Software Tool (T.E.S.T.) of the US Environmental Protection Agency (US-EPA) applies a similar binary outcome and filters predictions by considering whether the (Q)SAR predictions are inside or outside the AD. These two platforms and other commercial ones, such as Leadscope, feature a checklist of considerations for the AD [13,14]. In some cases, the AD is addressed specifically in relation to specific toxicity alerts [15].
Since different methods are used to measure the AD, the percentage of substances outside the AD varies, and in some cases, it can be as low as 2% [14,16].
The VEGAHUB [17] includes VEGA, which is a platform providing more than 100 (Q)SAR models, and other tools for prioritization, risk assessment, and read-across; users can download the software as an open-access resource. Over the last few years, the VEGAHUB has been used by the European Chemical Agency (ECHA) for screening substances that have been pre-registered under REACH [18]. VEGAGHUB is linked to the OECD QSAR Toolbox version 4.4 and is also available as a stand-alone tool for predictions within other platforms, such as AMBIT [19] and CCLIC [20].
For each (Q)SAR model, VEGA employs quantitative measurements to address the AD, composed of multiple factors. Basically, besides checking the chemical similarity between the target substance and the substances in the training set, VEGA makes additional checks, specific to the endpoint and the algorithm. In practice, several checks are performed and the algorithm provides quantitative results. Predictions on the most similar substances are used to assess whether the prediction is reliable for the target substance. The ad hoc software checks whether the predictions of substances similar to the target one are correct. The experimental values for the most similar substances are then compared with the predicted value of the target substance. In this case, the software compares the agreement between the two values and any potential inconsistencies are indicated to the user. This is intended to help the user specifically address certain points; since the process is automated, it allows to filter out predictions with doubts related to the AD. Specific features regarding AD measurements within VEGA have been discussed elsewhere in several studies [21][22][23][24][25][26][27][28][29].
Overall, the AD tool within VEGA has been shown to provide satisfactory results. In this work, the assessment of the AD tool within VEGA was applied to a range of models of relevance to human health, ecotoxicity, environmental fate, physicochemical and toxicokinetic properties for both regression models and classifiers [30].
Results
The use of the (Q)SAR models is still limited and one of the reasons behind such limited use lies in the fact that users are not always confident in the prediction reliability of such in silico models. Notwithstanding, several means are available to assess the reliability of (Q)SAR model predictions. The first approach includes statistics related to the model outputs based on the whole set of available compounds (including training and test sets) used to build the model. This assessment provides an optimized evaluation of the model; it is probable that when the model is applied to new substances, predictions may not perform as well as the results based on the whole set, particularly because the results based on the substances in the training set are "facilitated" since they are inside the model. Furthermore, such a generic evaluation is based on a population of substances; however, predictions for the target compound may be different from the average results for the whole population.
A second approach is to assess the results of the test set and to constitute a sounder procedure, since the statistical outputs reflect predictions for substances that are external to the training set; this case is closer to the practical use of the model. Appropriate measurements using internal validation procedures are also a useful complimentary procedure. In this case, more confidence in the results can be obtained; still, uncertainty in the prediction performance for the target substance may arise. The third approach relies on the use of a tool that allows the assessment of the AD of the model, filtering results that are outside the AD; this method is available in some in silico platforms.
One key issue is to assess how effective these tools are for AD evaluation so that the most reliable predictions can be identified. In this case, the evaluation requires classifying a substance as either inside or outside the AD, while the algorithm at the basis of these tools refers to a distance, which is not a categorical entity; thus, arbitrary thresholds are applied.
The fourth method relies on applying a quantitative measurement of the AD (such as the Applicability Domain Index, ADI) and refers to the one described and discussed here for the VEGA tool (see Section 4. "Materials and Methods"). The aim is to assess whether the tool implemented in VEGA for the measurement of the AD can identify predictions which may be inconsistent. Here, the ADI is calculated using a set of substances never used to build up the model. Finally, a fifth approach is also available to assess prediction accuracy and relies on full evaluation of all elements provided by VEGA, such as similar substances and information on the mechanism(s) associated with predicted endpoints; currently, this process requires manual implementation.
The use of the ADI tool within VEGA has a range of advantages as follows: 1.
Allowing the identification of issues related to prediction accuracy and providing the user with an opportunity for thorough analysis.
2.
Allowing identification of mechanisms associated with structural features of the substances.
3.
Analyzing similar substances through a read-across approach.
4.
Filtering substances with more reliable predictions, to be used in batch mode for a range of substances.
The first three advantages refer to the use of in silico models within a weight-ofevidence (WoE) approach following the scheme provided in the EFSA Guidance on WoE [31] and further detailed for non-testing methods elsewhere [32]. The user should evaluate all three lines of evidence specified in the first three points discussed above: the prediction, the reasoning, and the experimental evidence. What VEGA provides should be evaluated in an integrated way. If the ADI value is low due to the presence of similar substances with conflicting results that affect the ADI but are irrelevant because they contain fragments of an adverse effect absent in the target substance, the user may disregard these substances and consider the prediction reliable, even if the ADI tool serves warnings. Conversely, if there is a very similar substance with a property value conflicting with the predictions, this may over-rule such prediction and the ADI will automatically indicate the issue.
In this study, the use of the ADI tool is described to identify more reliable predictions, which is useful in addressing many substances. Supplementary Materials reports the details of the calculations of all the statistical parameters. Below, we reported only the most representative parameters for classification and regression models, to compare the overall performance in a simplified way. Figure 1 shows the statistical results, expressed as accuracy, for the substances in the test set, according to the classification models towards human toxicity, ecotoxicity, and environmental properties. In practice, the ADI can recognize potentially inconsistent results, and predictions in AD have the highest values. Figure 1a illustrates the prediction accuracy for human toxicological endpoints related to relevant in silico models. The predictions in the AD have the highest value; the only exception is the model for the molecular initiating event for PPAR alpha. For the CORAL model predicting chromosomal aberration, satisfactory results are shown for predictions that are also outside the AD. For two models, the CAESAR model for developmental toxicity and carcinogenicity oral classification, the values outside the AD are somehow better than the values of the predictions potentially outside the AD; but regardless, the predictions in the AD are always the better ones. Overall, the predictions potentially outside the AD are still satisfactory, while the predictions outside the AD are often less reliable. Figure 1 shows the statistical results, expressed as accuracy, for the substances in the test set, according to the classification models towards human toxicity, ecotoxicity, and environmental properties. In practice, the ADI can recognize potentially inconsistent results, and predictions in AD have the highest values. Figure 1a illustrates the prediction accuracy for human toxicological endpoints related to relevant in silico models. The predictions in the AD have the highest value; the only exception is the model for the molecular initiating event for PPAR alpha. For the CORAL model predicting chromosomal aberration, satisfactory results are shown for predictions that are also outside the AD. For two models, the CAESAR model for developmental toxicity and carcinogenicity oral classification, the values outside the AD are somehow better than the values of the predictions potentially outside the AD; but regardless, the predictions in the AD are always the better ones. Overall, the predictions potentially outside the AD are still satisfactory, while the predictions outside the AD are often less reliable. Figure 2 shows the R2 related to endpoint predictions of the test set for the quantitative models towards human toxicity, ecotoxicity, environmental/toxicokinetic and physicochemical properties. In this case too, the use of ADI can identify potential issues with prediction correctness. As expected, quantitative models are generally more complex compared to classifiers, and so the results are not always ideal, mainly for the most complex endpoints, such as human toxicology and ecotoxicology. For many of these models for which no robust support from an ADI perspective was concluded, the input data included tests on few substances in the test set, and most often fewer than ten. This is the case for the three (Q)SAR models predicting LOAEL/NOAEL relevant to human toxicity (Figure 2a). When considering models predicting ecotoxicological properties (Figure 2b), predictions from the zebrafish embryo toxicity model did not perform very well and this can be rationalized by the fact that only seven molecules have been used. Hence, from a statistical point of view, more substances would need to be tested. For the other models too, only a few substances were used, and more substances would be required to get meaningful statistics. Thus, results are poor for the COMBASE models, particularly towards Daphnia and the EPISuite model for fish acute toxicity (as Figure 2 shows the R2 related to endpoint predictions of the test set for the quantitative models towards human toxicity, ecotoxicity, environmental/toxicokinetic and physicochemical properties. In this case too, the use of ADI can identify potential issues with prediction correctness. As expected, quantitative models are generally more complex compared to classifiers, and so the results are not always ideal, mainly for the most complex endpoints, such as human toxicology and ecotoxicology. For many of these models for which no robust support from an ADI perspective was concluded, the input data included tests on few substances in the test set, and most often fewer than ten. This is the case for the three (Q)SAR models predicting LOAEL/NOAEL relevant to human toxicity (Figure 2a). When considering models predicting ecotoxicological properties (Figure 2b), predictions from the zebrafish embryo toxicity model did not perform very well and this can be rationalized by the fact that only seven molecules have been used. Hence, from a statistical point of view, more substances would need to be tested. For the other models too, only a few substances were used, and more substances would be required to get meaningful statistics. Thus, results are poor for the COMBASE models, particularly towards Daphnia and the EPISuite model for fish acute toxicity (as implemented in VEGA). Figure 2c illustrates the results for the environmental and toxicokinetic properties in fish. In this case, the statistics are satisfactory if the results are within the AD, with R2 values from 0.76 to 0.96. If the predictions are potentially out of the AD, the prediction correctness is weaker, and worse if the predictions are outside the AD. It is easier to model these properties because they are associated with less complex processes compared to those discussed above. Figure 2d shows the prediction results for physicochemical properties and, for these substances within the ADI, predictions are excellent since these properties are relatively simple to model. These predictions are also satisfactory for substances potentially outside the AD, but the performance is weaker when the predictions are outside the AD.
Examples
To show the use of the ADI, two examples are reported below, one with a high ADI value and one with a low value. The outputs of the models are reported in the Supplementary Materials (Trifluralin_NOAEL_LIVER_CORAL.pdf and Diethyl(nitroso)amine_ HENRY_OPERA.pdf). Table 1 reports the parameters that compose the ADI. The similarity index is high and, indeed, the two most similar chemicals found in the training set have a similarity higher than 0.96. The third similar substance has a high similarity (0.895) too, but it was not considered in the ADI calculation (which is based on the first two similar chemicals only). Observing the predicted and experimental values of these two similar substances, the second has the highest difference, but in general, they are around 0.5 log units. The last two parameters indicate that Trifluralin has no rare or unknown groups and that the descriptors' values are in the range of the descriptors of the entire training set. The good reliability of this prediction is confirmed by the experimental value available for Trifluralin: 2.19 log units (around 154 mg/kg bw).
Diethyl(nitroso)amine
Diethyl(nitroso)amine is an industrial chemical with a predicted Henry's law constant of −4.94 log atm-m3/mole (see Table 1 for details on the ADI).
In this case, the ADI is based on the first three similar chemicals, that have a sufficiently high similarity (similarity between 0.76 and 0.803). They are correctly predicted (accuracy index of 0.452), even if the first similar chemical has a moderate error (0.765). The prediction for the Diethyl(nitroso)amine is not concordant with the experimental values of the similar substances (especially similar 1 and 3). This may be due to the structural differences. Indeed, none of the similar substances have the nitrosoamine group. The presence of unknown fragments is also highlighted by the ACF index.
The experimental value confirms the low quality of this prediction (−5.44 log atm-m3/mole).
Discussion
There are several AD tools available in the various software platforms. In this work, a systematic process has been described and investigated their effectiveness with regards to inconsistencies in predictions. The use of the ADI tool within VEGA allows supporting expert judgment without replacing it and the three categories of ADI values (high, moderate, and low) have different statistical qualities. Indeed, this is particularly helpful in the case of prediction inaccuracies with a high ADI or when accurate predictions are reported with a low ADI. However, the prevalence of these predictions inconsistencies is higher for substances with a low ADI. Confidence in these results is related to linking predictions to available information for the substances that are at the basis of the model, i.e., those in the training set. The composite ADI tool proved efficient in capturing this information. The AD should not be evaluated simply on the basis of the chemical information, and the ADI tool can detect prediction issues resulting from the in silico model itself that is specific to a certain endpoint.
The advantage of the ADI tool implemented in VEGA lies in the fact that it is convenient to use VEGA for models available also within other platforms. For instance, VEGA contains the same (Q)SAR model for mutagenicity (Ames test) implemented in Toxtree. However, Toxtree does not provide an evaluation of the AD and the user cannot identify the most similar substance, which is useful for evaluation and read-across procedures. Other models include those available in EPISuite for BCF for which the AD must be analyzed manually, which is quite a complex process.
This manuscript highlighted that it is possible to identify prediction accuracy for a range of models resulting in a range of statistical quality, depending on the ADI value. What is typically described for the results of a model are the statistical results, for instance on the training and test sets which are provided at the level of the whole population of chemicals. In our case, the statistical quality of the results for substances with a high ADI was higher in most cases. Thus, if the ADI is high, the expectation is that the prediction accuracy of the model will be higher than those observed on the whole population of substances. In a few cases, the ADI does not improve prediction accuracy. In this case, sound statistical values for prediction accuracy are reflected at the population level, or one may expect even lower prediction accuracy for low ADIs. In cases where the prediction of the (Q)SAR model is not satisfactory, applying read-across is recommended. This can be performed by considering similar substances provided by VEGA using ToxRead as another tool present in VEGAHUB and offering tens of modules for different endpoints, or using the VERA tool [33].
Applicability Domain Index within VEGA
Since in silico models, including those available within the VEGAHUB, are constructed on three pillars, namely endpoint, chemical information, and the algorithm providing predictions, the applicability domain index (ADI) requires a thorough assessment on these three components. Depending on the specific model, there are some specific components of the ADI, for instance, if the model is a regression model or a classifier.
The chemical information is assessed by considering the chemical similarity. This is measured according to several parameters and provides values ranging from 0 to 1 (1 indicates identity) [34]. Such values can be used to assess how similar the substances in the training set are. The chemical similarity, as in the case of all similarities, is not an objective measurement, and there are many possible ways to measure it. For this reason, VEGA provides images of the six most similar substances, so that users can weigh the evidence depending on the context of the chemical assessment. Another parameter related to the chemical information within the ADI is the chemometric check, which allows users to assess whether the target substance has descriptors outside the range of the descriptor values of the substances in the training set. In all cases, the range of the molecular weights is checked, even if the molecular weight is not one of the available descriptors. In addition, the software assesses whether there are rare fragments in the target substance; for this purpose, VEGA uses atom-centered fragments.
There are three components of the ADI related to specific endpoints, namely prediction accuracy, concordance between the predicted value for the target substance and the experimental values, and presence of fragments associated with outliers for a given fragment:
1.
VEGA checks the accuracy of the predictions for similar substances. In this case, the predicted value of the similar substance is compared with the experimental value. If the value is a label, such as mutagenic or not, the comparison is provided instantly.
In the case of quantitative values, the software considers the quantitative differences across substances and an additional factor reports whether the difference in the prediction is very large or not.
2.
Concordance between the predicted value for the target substance and the experimental value of the similar compound is another very important parameter for the ADI. In this case, the prediction (i.e., the prediction accuracy of the in silico model) can be related to the "read-across" use of the VEGA output, showing the most similar substances. In particular, if predictions are different from the experimental values of similar substances, this poses a question, while if there is agreement, this increases the ADI. If the model provides structural alerts, VEGA provides an additional check and indicates whether for a similar substance, one or more structural alerts are present, and if such a structural alert is present in the target substance too. This is a valuable piece of information, highlighting to the user, for instance, that there is a structural alert only for a similar substance. Thus, the user can decide to disregard such a similar substance as non-relevant.
3.
The last component of the ADI regarding the specific endpoint is the presence of fragments associated with outliers for that endpoint. This component is present only for a few models, where the model poorly predicted a particular chemical family.
In addition, two components are associated with the algorithm, namely uncertainty of the model for specific endpoints and the sensitivity of the prediction for a given descriptor reported for the target compound. VEGA modifies the descriptor value for a small value and checks whether this causes a large difference in the predicted value.
Based on all these components, the overall ADI value is calculated, and VEGA reports the values for each component of the ADI and the overall sum. To help the user, a graphical symbol is shown and indicates warnings qualified as "no", "moderate", or "strong".
The components are measured for the two or three most similar compounds, even though the software shows the six most similar compounds.
Categories of ADI Values
An ADI serves as a continuous quantitative value and, to help the user, VEGA gives an indication regarding the quality of this value, indicating whether the prediction seems reliable, moderate or of low quality. The main purpose of the ADI should be to highlight the sources of concern and the severity of those concerns rather than simply to identify good predictions. A low ADI highlights that the user should carefully check the issues indicated by the ADI. In contrast, a moderate ADI highlights that some issues require further assessment. Overall, all predictions should be assessed thoroughly, including those with satisfactory results, and the ADI provides a useful tool to the user to do this.
When many predictions are available, the ADI supports a classification of the results according to their probable reliability. Thus, already in the summary of the output of the prediction, VEGA provides the prediction and this evaluation, presented with one, two, or three stars. In our experience, the ADI is high if it is >0.85, moderate if it is between 0.75 and 0.85, and low if it is <0.75. These are general values and may vary according to the endpoint. All the threshold values for the ADI and its individual components, which are model specific, are described in the model user guide.
Modified ADI
Results from VEGA model predictions are used to exploit all three lines of evidence: (1) prediction; (2) similar substances to be used for read-across; and (3) reasoning with regards to the potential mechanism of toxicity for a given endpoint, for instance, structural alerts.
These lines of evidence have already been introduced above in an implicit manner while discussing the accuracy of the prediction in the ADI, the concordance (evidence of the experimental data used in the read-across strategy), and the reasoning, as indicated by the presence of structural alerts. Since VEGA is a tool for the evaluation of chemicals, the evidence of experimental data (all data, in the training and the test sets) is very important. Thus, particularly for the purpose of read-across, VEGA shows the most similar substances both in the training and test sets.
Other software systems use a different perspective. For instance, T.E.S.T., a valuable platform, highlights the statistical quality of the results when considering the substances in the training or the test set separately. The quality of the predictions for the substances in the test set is valuable for evaluating the statistical quality of the model. The assessment within VEGA is not focused on the (Q)SAR model itself, but uses all lines of evidence, including read-across and reasoning [32]. However, for the purposes of this study, VEGA has been modified, so that the ADI is calculated only for the training set. This made it possible to calculate the results for new substances, avoiding the risk that the software finds the target substance in the test set. Hence, only the substances in the training set are used in our study for the AD assessment so that the results can be examined for substances that were not used to build up the model.
Test Set
To assess the results of the (Q)SAR models in VEGA, only substances in the test set were examined, as described in Section 4.3. However, some models in VEGA are not of statistic nature, since they are not based on a training set, but on expert-based rules. For instance, this is the case for models derived from Toxtree based on mutagenicity rules, or on Cramer classes. For these reasons, we do not have the assessment based on the ADI for all models.
In other cases, the number of substances in the test set was quite small, so we looked for substances in other sources. For statistical analysis, only the molecules outside the training set of each model were considered. This was the case for the assessment of the BCF model performance (Arnot-Gobas) and the BCF model (KNN/Read-Across), for which 1129 compounds were collected from the literature as an external dataset [35].
For the mutagenicity (Ames) endpoint, the data were selected from a large dataset (about 18,000 compounds) containing public and proprietary data [30,43].
Performance Parameters
The performance of the models was evaluated on the basis of accuracy or R2 for classifier or regression models, respectively. More detailed information about these and other parameters can be found in the Supplementary Materials (Supplementary_Material.xlsx).
Conclusions
This manuscript investigated the systematic use of the ADI from the VEGA tool to gain confidence in using (Q)SAR models as part of NAM batteries within NGRA of chemicals. This tool offers a powerful way to identify critical issues for the specific substance and model. VEGA provides not only the predicted value, but also many more parameters that should be thoroughly assessed. The information includes the prediction accuracy itself, the presence of similar substances, and the elements for reasoning in relation to mechanisms of toxicity. All these elements are provided and should be evaluated. The ADI tool serves as a relevant tool to assess these multiple elements and provided reliable results, increasing confidence in using such models. Overall, the ADI is a quantitative value, but for convenience, it can also be represented graphically as categories using stars as a graphical metric of prediction reliability. The warning messages identified through the ADI analysis help to identify the critical aspects that the user should carefully assess. Finally, when the user is working in batch mode, the results with a higher ADI are preferable, and this approach can be used to filter results. Since the tool is transparent, sophisticated, and detailed, it provides a sound way to obtain accessible, intelligible, useful, and assessable results. | 6,932.8 | 2023-06-01T00:00:00.000 | [
"Computer Science"
] |
RETRACTED ARTICLE: Atomic insight into hydration shells around facetted nanoparticles
Nanoparticles in solution interact with their surroundings via hydration shells. Although the structure of these shells is used to explain nanoscopic properties, experimental structural insight is still missing. Here we show how to access the hydration shell structures around colloidal nanoparticles in scattering experiments. For this, we synthesize variably functionalized magnetic iron oxide nanoparticle dispersions. Irrespective of the capping agent, we identify three distinct interatomic distances within 2.5 Å from the particle surface which belong to dissociatively and molecularly adsorbed water molecules, based on theoretical predictions. A weaker restructured hydration shell extends up to 15 Å. Our results show that the crystal structure dictates the hydration shell structure. Surprisingly, facets of 7 and 15 nm particles behave like planar surfaces. These findings bridge the large gap between spectroscopic studies on hydrogen bond networks and theoretical advances in solvation science.
We would like to thank both reviewers again for their fast review and their suggestions for how to improve further our manuscript.For a detailed point-by-point response, please see below.Reviewers' comments are copied from your email, our comments start with "Our response:", and the description of what we have changed with "Our alteration:".The changes introduced by us are marked in the revised manuscript with a yellow background.
Reviewer One:
An interesting manuscript to revisit.The technical difficulty of this experiment is underlined by the scepticism of my fellow reviewer to the experimental method and conclusions.The authors reply is admirable and helps me to put a lot of faith in the data and its analysis.The authors tap into the power of the PDF method to explore the effects of a small particle on structure of a solvent.I believe this application of the PDF method is an important step forward.Still my major reservation of this really excellent piece of experimentation is the wider context of the surface perturbation on the structure and dynamics of water.As the authors point out this is a time averaged structure over the time of the x-ray measurement.They point to the work of Mamonotov et al. (ref 13 in rebuttal and ref 14 in text of the manuscript) where two clear types of water are identified.Is there any link to the distance from the solvent/particle interface and the perturbation of the structure from that of the bulk?Could the same be said of dynamics?
Our response: We thank the Reviewer for getting back into this broad discussion.The work by Mamonotov et al. uses quasielastic neutron scattering (QENS).Since neutrons have low flux compared to X-rays, the average measurement times are minutes or hours.The derived dynamics of the hydration layers are in the pico-and nanosecond range, but measurement times also average over way longer time scales in order to collect sufficient statistics.That said, Mamonotov et al. identified "three structurally distinct sorbed water layers L1, L2, and L3, where the L1 species are either associated water molecules or dissociated hydroxyl groups in direct contact with the surface, L2 water molecules are hydrogen bonded to L1 and structural oxygen atoms at the surface, and L3 water molecules are more weakly bound".L3 molecules were shown to move fast as they are loosely bound and L2 water molecules are more strongly bound and slowed down in their dynamics to a nanosecond timescale.Since Mamonotov et al. studied different hydroxylated and nonhydroxylated surfaces of rutile (TiO2) and cassiterite (SnO2), even within their study, they did not make a generalized answer to the structure and dynamics of the different layers, see the quote "L3 molecules on the hydroxylated surface (cassiterite) form a distinct peak at 6 Å, which suggests ordering of water molecules that differs considerably from bulk and the nonhydroxylated surface of rutile."Their own comparison between QENS and MD results in Figure 12 connects the different dynamics from QENS with the distance from the metal oxide surface.This, in general, agrees well with our observations with having a strongly adsorbed water layer (their L1 and our first sharp peaks between 1 -2.5 Å) and more loosely bound hydration layers (their L2 and L3 and our extended oscillation).Mamonotov et al. states right in the introduction, that due to differences in bulk dielectric constants, electronegativities and lattice spacings, an easy comparison even of isostructural oxides is not trivial.In our manuscript, we explicitly only compared our dd-PDFs to theoretical findings on magnetite surfaces, as hematite produces different structural fingerprints.
To summarize, metal oxides in general show hydration layers, which have varying properties with the distance from the solvent/particle interface, and which are different from the bulk solvent.The hydration layers possess a substructure, which consists of most commonly at least two different layers -one strongly adsorbed and one more loosely bound layer.The number of the latter strongly varies dependent on the metal oxide / solvent combination.The differences of those hydration layers, or respectively types of water, are reflected in both dynamics and structure.However, it is the metal oxide and its exposed facets which can lead to large variations in the observed dynamics and structure.
Our alteration: In order to further address this general issue, we added to the manuscript right after the quote of Mamonotov et al.: "The hydration layers at metal oxide surfaces differ, in general, from the bulk water properties and show modified dynamics and structure, typically varying within the individual layers." I note that the two different methods for examining the solution structure of the particles in the supplementary material, DLS and SAXS, one which are dependant on the dynamics or Brownian motion of the whole particle and electron density variations within the solution give slightly different values of the radius.Is the fact the DLS radius is slightly larger indicative of a strongly perturbed layer of water which contributes to the hydrodynamic radius of the particle?
Our response: We thank the reviewer for this observation.SAXS provides the electron density variation (I ~ Δρ 2 VP 2 ).Since our organic molecules are small and light scatterers with much accessible particle surface, a spherical shape model was applied to fit the particle size in SAXS.SAXS gives a diameter of 6 ± 1.8 nm for the ligand cysteamine.DLS provides the hydrodynamic (or Stokes) diameter, which is that of a sphere that has the same translational diffusion coefficient of Brownian motion as the particle being measured.The size estimation also assumes spherical shape.For the same ligand the SAXS measurement was carried out, i.e. cysteamine, the numberweighted average DLS radius of three measurements with a Particle Analyzer Litesizer 500 was 7.0 ± 0.2 nm.There is a variety of literature, which in general correlates enhanced DLS diameters with water layers and the wording ranges widely, such as structured water, water layer, ordered water or hydration layers.Since we know (see introduction) from various techniques that there are sub-layers to the overall hydration shell, it is hard to tell which of those different layers truly contribute to an enlarged hydrodynamic diameter in DLS.Our dd-PDFs extend over up to 15 Å, but the DLS diameters are not larger than the dry diameter (projected area) observed in TEM by 3 nm (see Supporting Table 1).Therefore, only part of the layers which we see with PDF can contribute to the DLS diameter.But it is for sure true to say that for strongly adsorbed water molecules, there will likely be an effect on the hydrodynamic diameter in DLS.
Our alteration: In order to make it clear which different diameters are in fact measured, we included this information in the manuscript in the Results section with the subheading "Synthesis and characterization of the IONPs": "IONP sizes were determined using transmission electron microscopy (TEM) (diameter from projected area, Supplementary Figure 3), dynamic light scattering (hydrodynamic diameter, Supplementary Table 1; Supplementary Figure 4) and SAXS (particle size based on X-ray scattering contrasts, Supplementary Figure 4)." | 1,801.8 | 2019-03-01T00:00:00.000 | [
"Physics",
"Chemistry",
"Materials Science"
] |
SYNTHESIS OF MOLECULARLY IMPRINTED POLYMERS (MIPS) USED FOR ESTIMATION OF BETAMETHASONE DISODIUM PHOSPHATE (BMSP) USING DIFFERENT FUNCTIONAL MONOMERS
Betamethasone sodium phosphate (BMSP) selective molecularly imprinted polymers(MIPs) were based on ion-pair by prepared four polymers(MIPs) using BMSP as the template a well as (Acryl amide) (AAM), 2-Acrylamido-2-Methyl-1-Propane sulphonic Acid (2-AAMMPSA as monomer, used N,N-ethylenebismethacrylamide (EBMAA) ,ethylene glycol dimethacrylate ethylene glycol(EGDMAC), N, N-methylene bisacrylamide (NNMBAAM)) as cross linker and used benzoyl peroxide as initiator . NIPs prepared by using the same composition of MIPs except the template (BMSP). The MIPs were prepared using variation ratio of monomer and cross linker .These MIPs applicate as solid phase extraction for determination BMSP in pharmaceutical preparation used UV as detector .the results gave good response, where the reconstruction percentage (Rec%) value of BMSP drug took the range (99.058149 % 101.887004 %), and the relative standard deviation (RSD%) value took the range (0.224149 % 0.743651 %) for standard solution and Rec% took values of (98.400035 99.404218) %, and RSD% took values of (0.572589 1.012777) % of BMSP drug for the Betamethasone sodium phosphate
العراقية الزراعية العلوم مجلة
, Chemically its is known as 9-( -Fluoro-11β,17-dihydroxy- 16 β-methyl-3,20dioxopregna-1,4-diene-21-yl), White or almost white powder, very hygroscopic molecular formula (C22H28FNa2O8P), The chemical structure of betamethasone sodium phosphate is shown in figure. 1, its is Freely soluble in water, slightly soluble in ethanol (96 per cent), practically insoluble in methylene chloride, Natural and synthetic glucocorticoids are known to be highly effective drugs for the treatment of inflammatory diseases. They are widely administrated to relieve joint pain,symptoms of inflammatory skin problems and inflammation due to arthritis, asthma and rhinitis in clinical , It is active in replacement therapy for adrenal insufficiency and as an anti-inflammatory and immunosuppressant, inflammatory bowel disease, reactive airways disease, and respiratory distress syndrome in preterm infants and pruritus in corticosteroidresponsive dermatoses, ulcerative colitis, lupus erythematosus, acute leukemia (14,18), BMSP was estimated in several ways, including the use of UPLC / MS / MS (6,11) , and was estimated using a voltammetric method (19), and the use of prepared and modified silica compounds (13), Methods were also developed using (RP-HPLC) (12,13), this study was formulation and evaluation of betamethasone sodium phosphate (BMSP) loaded chitosan nanoparticle(CNPs) using cross-linked chitosan malic acid derivative for better therapeutic effect. The prepared BMSP loaded CNPs (16) , A chiral biosensing platform was developed using (BMSP) as chiral recognition element through multilayered electrochemical deposition of BMSP, overoxidized polypyrrole, and nanosheets of graphene (OPPy-BMSP /GR), for enantio-recognition of mandelic acid (MA) enantiomers (9), Were Estimated (BMSP) using Novel magnetic molecularly imprinted polymer nanoparticles (MMIPs) using methacrylic acid as a functional monomer, MAEMA as a cross-linker, and betamethasone as a template The Fe3O4 nanoparticles were encapsulated with a SiO2 shell and functionalized with ACH@CH2 and MMIPs(7), were as Estimated (BMSP) using Novel magnetic molecularly imprinted polymer nanoparticles (MMIPs) using BY precipitation polymerization were prepared MMIPs were prepared by using methacrylic acid as a functional monomer, N,N-pphenylene bismethacryl amide as a crosslinking agent and betamethasone as template (8) There are a variety of ion selective electrode determined drugs that depended on MIPs as recognition membranes like ibuprofen(18), warfarin (1), phenytoin (3) and metronidazole benzoate (2).
Instrumentation
Monitoring of the analyses was performed using UV-Vis (SHIMADZU UV -Visible Spectrophotometer 1800 pc (Japan)) using the (1cm) quartz cells and Scanning Electron Microscopy (SEM) (JSM.6390A) (Tokyo Japan) and SHIMADZU IRAffinity-1S (FTIR) -8000 (Japan), heating/ stirring (Germany).During the polymerization process, pure Betamethasone Sodium Phosphate shows absorption band at 238nm, this band can be used to ensure that all Betamethasone Sodium Phosphate was removed after washing, then it measured by using UV-Vis spectrophotometer An Ultrasonic Sensitive Water Bath from (SONERX) (W.GERMANY) was used for stirring the polymer solution. Preparing of Standard solutions preparing of standard solution (100 µg.ml -1) Betamethasone Sodium Phosphate by dissolving (0.01 gm )of standard Betamethasone Sodium Phosphate in the methanol and completed to(100 mL) in the volumetric flask .The other solutions were prepared in100 mL at the ranged from (10-100 µg.ml -1 ) in the same procedure.
Synthesis of the Imprinted Polymer BMSP-(MIP 1 -AAM)
Unbreakable glass tube (25 ml) was utilized, and 0.42 mmol from the mold material BMSP was added to the tube. BMSP was dissolved in 7 ml of methanol. Furthermore. An amount of 4.6 mmol of Acrylamide (AAM) was added to the mixture. Further, the combination was stirred via the ultrasonic waves for 5 minutes. Later, cross linkers of Ethylene Glycol Dimethacrylate (EGDMAC) (9.9 mmol) and Benzoyl Peroxide (0.165 mmol) (BPO), which acts as a starting point for polymerization, were added to the glass tube. Bubbles in liquid were moved out by using high-purified Nitrogen for 30 minutes. Immediately thereafter, a rubber cap tightly locked the tube orifice, and the resulting liquid was placed in a water bath at 60 C o for two days without moving. After polymerization finishes, the mold was removed by frequent washing of the polymer using a combination of (10%) (v/v) of Acidic acid/Methanol utilizing the extractor (Soxhlet) for 24 hours. Following mold removal, it was necessary to guarantee that there were no reactive materials by checking it, following the process of frequent washing and drying at 40 C o for one hour. After drying, the material was smashed into powder using a grinder of Granit and a steel sieve whose porosity is 125μm. For evaluating the extracted material, a plastic syringe (3 ml) was exploited by filling it with a polymer material. Furthermore, a standard liquid, which lies within the calibration curve, was prepared and permitted to pass through the plastic syringe. Finally, the liquid was removed from the plastic syringe by a washing solution and under a pressure of 5 pa. Synthesis of the Imprinted Polymer BMSP -(MIP 2 -2-AAMMPSA) Unbreakable glass tube (25 ml) was utilized, and 0.6 mmol from the mold material BMSP was added to it. BMSP was dissolved in 7 ml of methanol. In addition. An amount of 3.5 mmol of 2-Acrylamido-2-Methyl-1-Propane Sulphonic Acid (2-AAMMPSA) was added to the blend. Further, the combination was stirred via the ultrasonic waves for 5 minutes. Later, cross linkers of N, N-Methylene Bisacrylamide (NNMBAAM) (25 mmol) and Benzoyl Peroxide (0.32 mmol) (BPO), which represents a beginning point for polymerization, were added to the glass tube. Bubbles in the liquid were moved out using high-purified Nitrogen for 30 minutes. Directly thereafter, a rubber lid tightly locked the tube outlet, and the resulting liquid was placed in a water bath at 60 C o for two days without moving. After polymerization finishes, the mold was removed by frequent washing of the polymer using a combination of (10%) (v/v) of Acidic acid/Methanol and utilizing the extractor (Soxhlet) for 24 hours. Succeeding mold removal, It was necessary to be certain that there were no reactive ingredients by checking it following the process of frequent washing and drying at 40 C o for one hour. After drying, the material was smashed into powder using a grinder of Granit and a steel sieve whose porosity is 125μm. For evaluating the extracted material, a plastic syringe (3 ml) was exploited through filling it with the polymer material. Furthermore, a standard liquid, which lies within the calibration curve, was prepared and permitted to pass through the plastic syringe. Finally, the liquid was removed from the plastic syringe by a washing solution and under a pressure of 5 pa.
Preparation of pharmaceutical BMSP solutions
The pharmaceutical form, which is available in local markets and contains BMSP, has tablets shape and is produced by the company "The Gulf Jilfar for medical industry" in UAE. Ten tablets of pharmaceutical form, which have 0.5 mg of the effective material, were weighed to get an average weight of 1.905 g. The collection was smashed and well mixed using a ceramic grinder. Then, an average of one tablet weight (0. 10905 g) was considered and dissolved in a volumetric vial (100 ml) using Methanol as a solvent. Following the process of placing in a water bath to dissolve by ultrasonic waves, the liquid was filtered through an infiltration paper (Whatman No. 42) to get rid of any undissolved materials. Additionally, the leachate, containing 50 µg.ml -1 of the effective material BMSP, was obtained and applied in tests.
Procedure of BMSP standard solution
Different quantities of (1 -10) ml of the standard liquid BMSP, whose concentration is 100 µg.ml -1 , were moved to a collection of volumetric bottles having 10 ml each, and were slaked up to the mark of this solvent. Then, the UV ray device scanned the wavelength (190 nm-400 nm) of the combination to plot the zero spectrum and the absorption spectrum record (for each bottle) to calculate the range of concentrations that were consistent with Pier -Lambert law. The study showed that the maximum absorption was at 238 nm.
RESULTS AND DISCUSSION Absorption spectra:
Absorption of Betamethasone sodium phosphate versus its photo liquid was measured. Consequently, BMSP showed a maximum absorption at 238 nm, as in figure. 2.a. Then, a calibration curve for BMSP drug was organized by plotting absorption versus concentration, as in figure. 2.b. The linearity of BMSP drug was in the range (10 -100) µg.ml -1 , the gradient coefficient of BMSP (R 2 ) was 0.9999, the molar absorption coefficient with Sandal indication of BMSP were 11722.28 L.mol -1 .cm -1 and 0.044053 2 µg.cm -1 respectively, and the identification limit with the estimation limit of BMSP were 0.002985 µg.ml -1 and 0.009949 µg.ml -1 respectively. This method depicted satisfying accuracy and harmony, where the reconstruction percentage (Rec%) value of BMSP drug took the range (99.058149 % -101.887004 %), and the relative standard deviation (RSD%) value took the range (0.224149 % -0.743651 %).
Figure 2. (a) zero-order spectra of (BMSP) at 238 nm and (b) calibration curve of (BMSP) with concentrations (10 -100) µg.ml -1 Accuracy and precision
Accuracy and consistency of the method were computed through Rec% and RSD% for two concentrations within the calibration curve, where have a great role in reacting with mold and forming molecular printed polymers. Two types of monomers were utilized, which were Acrylamide (AAM) and 2-Acrylamido-2methyl-1-propane Sulphonic Acid (2-AAMMPSA) that supports checking of the printing process. The molecular printed polymers needed appropriate type and quantity of cross linkers to complete polymerization to become a hard and a high selective polymer. Many attempts to prepare molecular printed polymers were conducted, and they included finding the perfect ratios of (monomer: cross: linker drug) to prepare NIPs and MIPs, The prepared NIPs and MIPs included convenient properties regarding their performance, as shown in Table 2. FTIR analysis FTIR spectra of BMSP drug appear at forming MIPs that stand on the monomer Acrylamide and 2-Acrylamido-2-methyl-1-propane Sulphonic acid. Before and after drug removing, basic functional groups perform, as shown in figure. (3 -7). FTIR spectra of pure Betamethasone sodium phosphate were measured. The same operation occurred to the molecular printed polymers (before and after removing the mold) through scanning within the range (400 -4000) cm -1 utilizing the solid tablets method (KBr).Through FTIR spectra, a wide band of OH group was observed. The frequency band of this group became less than its previous value, because of the linkage between OH of BMSP drug with atoms existing within the monomer (AAM) via hydrogen bonds.
Consequently, the hydrogen bonds drag the (O-H) bond and change the dynamics of this bond. Furthermore, we can observe that Carbonyl group (C=O) disappeared after the process of removing the mold molecule finished. In addition, groups (C=O amid) and (N-H) that belong to monomer AAM appeared. In spite of conducting the process of removing the mold molecule, the groups did not disappear. This verifies that washing and removing actions were effective. FTIR referred to the existing of a wideband of OH group having frequencies that became higher than its preceding value, because the new band represents a summation of OH frequencies of BMSP drug and the frequencies existing in 2-AAMMPSA monomer. Moreover, we observed that the Carbonyl groups (C=O) disappeared after the operation of removing the mold molecule. In addition, the groups (C=O amid), which belongs to the monomer, appeared during the formation of MIPs and did not disappear after removing the mold molecule. The operation proves that the frequent washing using a combination of 10 % (v/v) of Acetic acid/Methanol and mold molecule removal was effective.
b) after (BMSP) removal Application of Method
The aforementioned method was applied utilizing Solid Phase Extraction and was conducted for two concentrations (within the calibration curve) that are (25 and 50) µg.ml -1 for two materials. The materials are BMSP (the standard material) and Betasone pharmaceutical and have the same concentrations for three repetitions for every measurement process. Then, a scan with wavelengths of (200 -400) nm for the prepared combinations was carried out; hence, the results exhibited efficient accuracy and consistency. Moreover, Rec% took values of (98.400035 -99.404218) %, and RSD% took values of (0.572589 -1.012777) % of BMSP drug for the Betazone pharmaceutical, as depicted in Tables (5) and Tables (6).
Method comparison
The proposed method was compared with a reference method, which is the Constitution of British Medicine, through the test F-test at a confidence level of 95 % confidence level and at the rate of three replicates. The results showed significant differences as compared to F (Table 19). The calculated values of F were 15.2 and 14.7 for the polymers BMSP-MIP1-AAM and BMSP-MIP2-2-AAMMPSA respectively. The results signifies the successful method of the printed molecule polymer in estimating Betamethasone sodium phosphate in pharmaceuticals. | 3,143 | 2020-02-28T00:00:00.000 | [
"Chemistry",
"Materials Science"
] |
Cloud Model-Based Method for Infrared Image Thresholding
,
Introduction
Image thresholding converts a gray level image into a binary image, and it is one group of popular and simple methods.Many different techniques have been proposed and developed over the years [1][2][3][4].Comprehensive overviews and comparative studies of image thresholding can be found in the recent literature [5,6].
Among these thresholding methods, the most common idea is to optimize some threshold dependent functions, which include the information and properties of the images, as known to all, the statistical image thresholding.The Otsu method, a typical example, has been widely used [7] as one of the best threshold selection methods for general real world images.Based on the Otsu method, many modified methods or other statistical variations had been proposed, such as minimum variance method (Hou for short) [8], standard deviation-based method (Li) [9], and median-based method (Xue) [10].The method in [11] proposed a cloud model-based framework for range-constrained thresholding and improved four traditional methods, while the method in [12] converted the histogram of image into a series of normal cloud models by cloud transformation, which is as an improvement of Gaussian Mixed Model.
In general, the existing statistical methods have been proven useful and successful in many applications [13].However, none of them is generally applicable to all images, and different algorithms should be usually not equally suitable for any given particular application.We believe that image thresholding is also an essential part in infrared image tracking system, since target detection is an important problem in infrared image sequences with various cluttered environments, and image thresholding can be usually used to separate candidate targets in the image because of its simplicity and efficiency.Unfortunately, most statistical methods cannot provide satisfied results for infrared image thresholding without considering the practical features, which are not exclusive and lack sufficient attention on specific application prospect.In this sense, the automatic selection of optimum threshold for infrared image is still a challenge.
Almost all of infrared images are with mixture non-Gaussian model, narrow grayscale range, and low-contrast objects, and there are similar statistical properties between classes of objects and background.In addition, small targets to be detected exist in many cases [14].These features of infrared images are our major concerns.To enforce the weak point of the previous statistical thresholding methods, we propose a cloud model-based approach for infrared image thresholding.Our intentions are twofold: (1) using cloud models to depict the classes of background and object in a more robust way, (2) presenting a new statistical threshold criterion related with cloud models to determine the optimal threshold.Different from the existing methods, especially our previous publications [11,12], the proposed method used and only used cloud model and did not use any existing methods.In other words, cloud model is no longer an assistant tool for existing methods.Cloud model is a cognitive model between a qualitative concept and its quantitative instantiations [15][16][17] and has been used in image thresholding with uncertainty [11,12,18].We have done quantitative and qualitative validation of the proposed approach against several infrared images.Comparison has been made with respect to seven methods, including three traditional state-of-the-art algorithms [19][20][21] and four relative methods [7][8][9][10].The experimental results, both image thresholding and target detection, demonstrate that our approach is efficient and effective.
The rest of the paper is organized as the following: Section 2 presents an overview of related works.Then, Section 3 proposes a novel cloud model-based algorithm for infrared image thresholding, and the algorithm analysis is also discussed, as well as implementation and computational complexity.Section 4 shows the experimental results, both infrared image thresholding and an application of target detection.Section 5 provides some discussions on the proposal.Finally, the conclusion is drawn in Section 6.
The Otsu Method.
The Otsu method is one of the simple and popular techniques for statistical image thresholding.Otsu's rule for selecting the optimal threshold can be written as where (), () are the cumulative probability of two classes, that is, background pixels () and object pixels (), and can be defined as and (), () are the standard deviations of these classes: in addition, (), () are the means of these classes: () . (4) 2.2.The Hou Method.Hou et al. [8] proved that the Otsu method tends to divide an image into object and background of similar sizes and presented an improved method for image thresholding.Hou's criterion obtains the optimal threshold by minimizing the sum of class variance: The Hou method overcomes the class probability and the class variance effects using the relative distance and the average distance, but there still exist some disadvantages, such as noise or inhomogeneity.
The Li Method.
Li et al. [9] believed that both Otsu and Hou neglect specific characteristic of practical images and get unsatisfactory segmentation results when applied to those images with similar statistical distributions in both object and background.In other words, for two Gaussian classes with equal variances but distinct sizes or with equal sizes but distinct variances, the Otsu and Hou methods would perform not as perfectly as for two classes with more equal sizes and more equal variances.Aiming at the images with similar distributions in classes of background and object, especially for infrared images, Li improved the weakness of the Otsu and Hou methods and proposed the new criterion related with the minimal standard deviation, which can be rewritten as 2.4.The Xue Method.The above methods in (1), ( 5), and ( 6) choose class variance of some form as criterions for threshold determination, while Xue and Titterington [10] argued that when the class distribution is skew or heavy-tailed, or when there are outliers in the sample, the mean absolute deviation from the median is a more robust estimator of location and dispersion than the class variance.Based on the consideration, Xue presented a median-based extension for the Otsu method and stated improving the robustness with the presence of skew or heavy-tailed class-conditional distributions.Xue's rule can be stated as where (), () denote the mean absolute deviations from the median () = med{ | ∈ ()}, () = med{ | ∈ ()} for two classes, and are defined as Although Xue's extension could accomplish more robust performance than that of the original Otsu method, the Xue method seemed not to notice Hou's motivation originated from the Otsu method, and then it is destined that there are several dissatisfactions if involving some applications, as well as infrared images.
The Cloud Model-Based Method
3.1.Preliminaries.Cloud model, proposed by Li et al. [15,16], is the innovation and development of membership function in fuzzy theory and uses probability and mathematical statistics to analyze the uncertainty [15,22].In theory, there are several forms of cloud model, which are successfully used in various applications, including knowledge representation [15,23], intelligence control [16,24], intelligent computing [25][26][27], data mining [28], and image segmentation [12,18].However, the normal cloud model is commonly used in practice, and the universality of normal distribution and bell membership function are the theoretical foundation for the universality of normal cloud model [16].
Let be a universe set described by precise numbers and let be a qualitative concept related to .Given a number ∈ , which randomly realizes the concept , satisfies ∼ (Ex, En 2 ), where En ∼ (En, He 2 ), and the certainty degree of on is as below: then the distribution of on is defined as a normal cloud, and is defined as a cloud drop.
The MATLAB function of the normal cloud generator is included in the supplementary files (available online at http://dx.doi.org/10.1155/2016/1571795)(see Appendix A).The overall property of a concept (Ex, En, He) can be represented by three numerical characters of normal cloud model, expected value Ex, entropy En, and hyper-entropy He.Ex is the mathematical expectation of the cloud drop distributed in universal set.En is the uncertainty measurement of the qualitative concept, and it is determined by both randomness and fuzziness of the concept.He is the uncertain measurement of entropy, which is determined by randomness and fuzziness of En [16].
It is worth noting that hyper-entropy He of a cloud model is a deviation measure from a normal distribution, which is the quantification on how a distribution deviates the Gaussian distribution.For comparison, Wang (Lixin Wang, written personal communication, May 2011) construct a random variable, whose central moments are as close as possible to those of the cloud model.And the mean, the variance, and the third central moment of the constructed variable are equal to cloud model.Therefore, the quantification of the difference between cloud model and Gaussian distribution is achieved in some extents.Accordingly, an accurate quantity on the deviation measure can be obtained, 6He 4 + 12En 2 He 2 , from the point of view of statistical characteristics, especially the fourth central moment.Hence, the distribution of cloud drops can be regarded as a generalized normal distribution.The details on this property are included in the supplementary files (see Appendix A).
Compared with interval type-2 fuzzy sets widely researched and used [29], cloud model is based on probability and mathematical statistics, its hyper-entropy lets us capture and handle the higher-order uncertainty, and it is equivalent to the secondary grade of Gaussian type-2 fuzzy sets [18], which have been little studied but may be very useful [30].
The Cloud Model-Based Criterion.
Given a threshold , the background pixels () can be obtained from the original image ().Let the cloud model for the background class be (Ex (), En (), He ()).Considering () as the input, three numerical characters would be generated by backward cloud generator [15].More specifically, the expected value Ex () is the grayscale mean of background pixels, and it is formalized as where the cardinality | ⋅ | of a set is the number of members.Notice that (10) is clearly equivalent to (4).Next, the entropy En () is directly related to the firstorder absolute central moment from the mean, written as The derivation on (11) is included in the supplementary files (see Appendix A).
The last parameter, hyper-entropy He (), can be defined as Similarly, the corresponding cloud model for object class can be also calculated.We take the original image in Figure 1(a) as a typical example, whose ground-truth image and the grayscale histogram are shown in Figures 1(b) and 1(c).We fix the optimal threshold = 214 according to its ground-truth image in Figure 1(b), and then the numerical characters of cloud models for background and object are calculated, respectively, that is, (60.6, 461.5, 111.7) and (249.1, 0.2, 91.8). Figure 1(d) demonstrates the joint distribution of cloud drops and its certainty degree.Cloud model has depicted the gray level distribution of the sample image, and it is an approximate normal distribution, or a generalized normal distribution, rather than a normal distribution.Furthermore, the grayscale distributions between background and object must not be similar since the shapes of two cloud models are seriously different from Figure 1(d), as well as the ratio of these numerical characters.We will further analyze the difference in another section below.
Once the cloud models are prepared, the remaining is to construct an appropriate criterion for image thresholding.Suppose the cloud model for the background class is (Ex (), En (), He ()) and that of the object class is (Ex (), En (), He ()); our criterion is related to the hyper-entropies He (), He () of cloud models , .In an effort to eliminate the above limitation, we propose a novel criterion based on cloud model, which can be formulated as Then the optimal threshold can be determined as follows: The segmented result by the proposed method is shown in Figure 2(a), from which one can observe that the cloud model-based method yields the acceptable performance, and the result image is closed with ground-truth image in Figure 1(b).To view the details, the evolution curve of function values is shown in Figure 2(b).With varied grayscale level, the function value increases steadily and achieves the maximum near 150 and then decreases dramatically.At the optimal gray level, the curve is with the minimal value, followed by a slight growth.
It should be noted that the curve reaches max near 150, where the cloud models have little overlap (see Figure 1).In other words, this interval is full of chaos from the perspective of grayscale intensity, and the differences between the hyper-entropy of two cloud models are relatively highest.This means that one class achieves better partition while the other achieves worse.Maybe, these two classes even have similar standard deviation or population size in this condition.Unfortunately, this is not very compatible with the practical features of infrared images.As can be seen from ( 13) and ( 14), the new criterion actually attempts to divide an image into two parts with lower and similar hyper-entropy.Therefore, the classes of background and object have the higher intraclass similarity without distractions and certainly the lower interclass similarity.This is intuitively appealing as a good image segment.In ideal cases with two zero hyper-entropies, the two classes will yield bimodal Gaussian distribution, regardless of the population size.
For the image in Figure 1(a), the segmented results by four traditional methods are shown in Figure 3, including Otsu, Hou, Li, and Xue.The result images in Figures 3(a), 3(b), and 3(d) by Otsu, Hou, and Xue almost completely misclassify the object pixels, and the Li method exhibits an acceptable result as shown in Figure 3(c), but there is still suffering from undersegmentation; the background pixels are misclassified into object, as labelled by a yellow rectangle.For reference, each evolution curve of function value is listed in Figure 1.The results by Otsu and Xue are totally wrong.Although the Hou method finds a faulty threshold, it is competitive, and its function value is very closed to the real optimal solution.With the modification of the Hou method, result by Li is more qualified, but the final threshold is still imperfect.
To make an in-depth analysis on features of the sample image, we plot the histograms with -axes on both left and right sides, as shown in Figure 4(a); the grayscale level versus frequency of background with -axis is labelled on the left, while frequency of object is on the right.This image appears as a distinct unimodal grayscale distribution, and the peak ratio between the classes of background and object is very startling, which is about 300 : 1, which is high above tolerable ability of the Otsu, Hou, and Xue methods.Thus, the failure by these methods is a natural consequence.This is also a reason leading to the difference between each pair of parameters of cloud models, as mentioned in previous section.
To observe whether or not the grayscale distributions of two classes satisfy Gaussian form, we use double Gaussian functions to fit the histograms of background and object, respectively, and the fitting error rate is plotted in Figure 4(b).The grayscale distribution of background is more likely a Gaussian form, whose fitting error rate is smaller and acceptable.And the distributions are with difference variances, which is not satisfying the assumption of the Li method.Hence, the just passable result is produced by this method.Besides, this is another reason leading to the difference between cloud models.Furthermore, the evolution curves of function values by various methods are listed on semi-logarithmic coordinate, as shown in Figure 4(c).For simplicity, we only plot Xue's curve with omitting the Otsu method, since these two methods achieve similar results and have no essential difference.Additionally, the Li method searches similar standard deviation, while our method searches similar hyper-entropy; then we plot variance difference and hyper-entropy difference for a clear comparison with a close magnitude.A threshold locating in the interval [180, 225] is more preferable.Li's rule produces = 171, while ours produces = 213 obtaining a significant improvement.Of course, the pixels with grayscale value in [180,225] are not very much.Hence, there seems to be so little difference between the visual results of Li and our method.Therefore, we will verify this point using various images and then further investigate the performance on target detection application, as shown in the following section.
The quantitative comparison is also listed in Table 1, where ME denotes misclassification error and will be explained in Section 4. Our method yields the preferable result with the lowest misclassified pixels and the smallest ME value, since it can provide a more objective representation and accord with actual information of the image.Additionally, the running times by the methods are listed in Table 1.The proposed method, compared with the related methods, is also a competitive one from the perspective of time performance.is as shown in Algorithm 1.The method firstly generates the cloud models corresponding to image background and object, respectively, and defines a novel threshold dependence criterion related with the hyper-entropy of these cloud models and then determines the optimal grayscale threshold by the minimization of this criterion.
Implementation and Computational Complexity.
In Algorithm 1, there is a for loop with iterations, and each execution costs the time ().Theoretically, the time complexity of the proposed algorithm is ( 2 ).However, the proposed algorithm can be easily implemented by modifying the traditional methods, including Otsu and Hou.
The efficient implementations for traditional methods are also applied equally to the cloud model-based technique.By comparison, the additional time cost is required to calculate the entropy of two classes, and calculations of the hyper-entropy only execute a subtraction as (12) showed.In sum, the time complexity of the proposed algorithm with efficient implementation is (), approximately linear in the grayscale level of the original image.For an image with the size of 320 × 240, the time consumption is usually about 0.1 s in our practice.
Experimental Setup.
To demonstrate the efficiency of the proposed method, we conduct two groups of experiments, and seven methods are involved into comparison, including Kittler and Illingworth [19], Kapur et al. [20], Ramesh et al. [21], Otsu [7], Hou et al. [8], Li et al. [9], and Xue and Titterington [10].The parameter setting is very important for performance evaluation.For a fair comparison, all are autoparameters or free parameters, but not from a guess.The latter five methods, including the Otsu, Hou, Li, and Xue method, as well as the proposed method, implemented by us, are based on the counts of the histogram, and the only parameter is the number of the bins when calculating the image histogram; we fixed it as 256, which is a conventional rule, so we can consider them as free parameters.The other three methods, including the Kittler, Kapur, and Ramesh method, are performed by the executable file from the public website maintained by Mehmet Sezgin [5] (http://mehmetsezgin.net/otimec.zip),which we cannot modify; these methods can be considered as autoparameters.
The used data includes two groups.The partial data is composed of four images including different objects, and the original images and ground-truth images are shown in Figure 5, named airplane, person, boat, and star, respectively.The others come from Terravic Motion IR Database and can be registered to download from the website (http://vciplokstate.org/pbvs/bench/).We take four images as examples, named irw101, irw102, irin011, and irin012, respectively.The original images and ground-truth images are listed in Figure 6.Half of them are selected from irw10 with one subject entering the field of view (FOV) from the left, and the remaining are from irin01 with monitoring an indoor hallway.
We quantify the performance of these methods by means of misclassification error (ME) [31].Considering image segmentation as a pixel classification process, the percentage of pixel misclassification is a measure of discrepancy.ME reflects the percentage of background pixels wrongly assigned to foreground and, conversely, foreground pixels incorrectly assigned to background.For the two-class segmentation problem, it can be expressed as [ where background and foreground are denoted by and for the ground-truth image and by and for the test image. ∩ is the number of background pixels rightly assigned to background, and ∩ vice versa.ME varies from 0 for a perfectly classified image to 1 for a totally wrongly classified image.
Input:
the original image .
Output:
the optimal threshold and the result image .(1) For the original image , obtain the initial information, such as the number of pixels , the grayscale level , and then calculate the histogram ℎ().(2) Initialize the parameters, including the optimal criterion value and the optimal threshold .
Comparison on Infrared Image Thresholding.
In this group of experiments, we compare the segmentation results on the infrared images.The seven methods are involved into the comparison.The visual comparisons are shown in Figures 7 and 8.Each test image consists of eight components from up to down, the segmentation results obtained by Otsu, Hou, Li, Xue, Kittler, Kapur, Ramesh, and the proposed method, respectively.
The airplane image is relatively simple to segment; however Otsu, Hou, Xue, and Kittler cannot present valid results, and other methods provide good results.The quantitative comparisons are listed in Table 2. Results of the boat image demonstrate similar conclusion with the airplane image.For the person image, Li's result is unsatisfied.For the star image, all these methods yield no good results, but only our segmented image is the closest to ground-truth image.
Results of the second group images by various methods show the similar performance.For images irw101 and irw102, our method appears with the best results, followed by the Li method.But there are several misclassified blocks scattered in the background, which should impact the performance of the subsequent steps.For the last two images, our method suffers from oversegmentation, and the person is not that complete, and the other methods yield undersegmentation.In summary, by a visual evaluation, the experimental results of these images indicate the proposed method is effective to yield the approximately ideal results.
The visual evaluation is, however, applicationindependent.For a more detailed portrait, quantitative comparisons of segmentation results yielded by various methods are listed in Table 2.The proposed method outperforms other methods, obtaining the less misclassified pixels and demonstrating the lower ME values.In other words, the proposed method yields better segmentation by a quantitative evaluation.
For the comparison purpose, the processing times of the proposed approach and other methods are also listed in Table 2. From this perspective, the proposed method is similar with the traditional Otsu method, as well as the Hou and the Li method, since the main time costs of all these methods lie in the calculation of the histogram and the scan of the gray level one by one.The time performance of these methods depends on the complexity of the histogram, rather than the image size.As a result, the time cost by each method is quite similar, even the different images by the various methods.It is noted that three methods, including the Kittler, the Kapur, and the Ramesh method, are performed by the executable file, as mentioned above.Thus, the running times of these three methods cannot be obtained.Then the processing times of the methods are omitted in Table 2.According to experimental experience, the involved methods belong to 1-D thresholding and very high efficiency, and the average time costs of these methods are all less than 2 s in our experiments.The comparison between the time cost of the existing methods and ours is not strictly necessary.Even so, the comparison on running time in Table 2 still indicates that the proposed method is efficient because our method runs with less time cost, and the segmented results are acceptable.
In addition, Figure 9 shows a boxplot diagram that summarizes the misclassification error rate statistics of individual thresholding method applied on all selected images.As can be seen, the lowest median is achieved with our method, followed by the Li method and next the Ramesh method.For reference, the detail views of these three methods are also provided.The proposed method has also the smallest interval within ±1.5 IQR of the first/third quartiles (1 and 3).The Hou and Otsu methods provided, overall, less accurate results.These results demonstrate the high level of competitiveness of our method.
Results of Infrared Image with Noise.
To further investigate the performance of the proposed methods under noisy Figure 7: The first group of segmentation results applying the proposed method compared to selected algorithms.From left to right, named airplane, person, boat, and star.From up to down, the result images by Otsu, Hou, Li, Xue, Kittler, Kapur, Ramesh, and the proposed method, respectively.Figure 8: The second group of segmentation results applying the proposed method compared to selected algorithms.From left to right, named irw101, irw102, irin011, and irin012.From up to down, the result images by Otsu, Hou, Li, Xue, Kittler, Kapur, Ramesh, and the proposed method, respectively.environment, the irin011 image is involved into comparison.According to Table 2, eight methods gain the closest performance for the clean irin011 image.Hence, the image is used in this subsection.The original image is contaminated by Gaussian noise with zero mean, as well as salt and pepper noise.Each noise tests 20 images; that is, Gaussian noise is with variances in {0.01, 0.02, . . ., 0.20}, and salt and pepper noise is with intensities in {0.01, 0.02, . . ., 0.20}.Since the noise contamination is a random process, we repeat the process of each variance or intensity 10 times and average the results.
The ME values corresponding to various noises are shown in Figure 10.The proposed method performs lower ME values than the other methods for all the Gaussian noise images.The Kapur method ranks the second, followed by the Ramesh method and the Li method with a little difference.The other four methods, including the Otsu, the Hou, the Xue, and the Kittler method, show a poorer performance to Gaussian noise.On the other hand, the proposed method exhibits an acceptable performance under the condition contaminated by the salt and pepper noise, which is similar with the Otsu, the Hou, and the Li method.The Kittler method comes next, while the Kapur and the Ramesh methods are invalid under this condition.Coincidently, these two methods show a better performance in the above round.Unfortunately, the Xue method is far from satisfactory under the conditions of both Gaussian noise and Salt and pepper noise.In general, the quantitative comparison with various noisy images suggests that our method is more robust to noise, especially under the condition contaminated by the Gaussian noise.The result videos in this subsection are included in the supplementary files (see Appendix B).Our result presents better performance without any preprocessing or postprocessing, although there exist two images with poor segmentation, where the shelter appears.Of course, we must objectively point out that our method may lead to slight oversegmentation in some cases, which does not affect target detection substantially.Our method mainly aims at the infrared image thresholding, and other postprocessing steps can be introduced into our method to improve the performance if involving target detection.The Li method provides just passable results, but it should suffer undersegmentation in most cases and even seriously degrade the performance.By comparison, our cloud model-based method performs a compelling performance in both image thresholding and target detection.
Discussions
In this section, we discuss four problems: one is the relationship between our method and related methods, as well as the theoretical principle or foundation, next is applicability for infrared images, followed by the limitation of the proposed Li's method is motivated by the isoperimetric graph partitioning [32], in which the intraclass similarities of object and background are measured by the sum of degrees of their corresponding vertices, while the interclass similarity is measured by a cut of the graph.From this perspective, one can also take the hyper-entropy as the intraclass similarity instead of vertices' degree sum.The standard deviation is a common statistical measure representing degree of deviations between mean and individuals, and He is as the second-order or higher-order standard deviation.Therefore, He is a higherorder statistical measure representing degree of deviations between mean and individuals.Accordingly, He can also be used to measure intraclass similarity of each class in image thresholding.Thus, the cloud model-based method is equivalent to the isoperimetric constant when omitting interclass similarity of object and background, and the proposed criterion is equivalent to Li's equation in some cases with En () = 0 and En () = 0, and they are approximate in many cases with smaller En () and En ().
For an extreme comparison, we randomly build two simulated data sets, denoted by 1 , 2 .The data in different sets correspond to pixel intensities in different types of virtual image of two groups (i.e., the background and the objects).The size of image = 256 × 256, and the number of background pixels is denoted by ||, while that of object is || = − ||.The statistical parameters, including the mean, , , the variance, 2 , 2 , and the first-order central moment, 1 , 1 , are listed in Table 3.Since intensities are integers in the range [0, − 1], we round the simulated data into [0, − 1].
The segmented results are shown in Figure 12.With a larger entropy of classes (En , En ≈ 5), the results by our method are clearly different from Li's results.Our method achieves good results, while Li's method generates invalid results (completely misclassifying the background or the objects).Li's method is unexpectedly inferior to the other methods, even though the assumption of the equal size class for these traditional methods (e.g., the Otsu method) is not satisfied.
Applicability for Infrared Images.
We confine the proposed method to infrared image thresholding, and there are two main reasons: (1) Our new criterion actually attempts to divide an image into two parts with lower and similar hyper-entropy, even higher-order entropy.Thus, the proposed method, especially the statistical criterion, is only suitable for a special histogram from the point of view of mathematical statistics, as discussed in the above subsection.Almost all of infrared images have the statistical properties.Therefore, the proposed method is more suitable for applying to infrared images, since the infrared images have the special features more observably and more directly, especially for those infrared images with small targets.(2) The cloud model-based method is inspired by the Li method, which is specific for infrared images.Hence, the comparison is more meaningful with the same application.Compared with the Li method, our method improves attention to detail in target detection for infrared images.Thus, the proposed method is also positioned in specific for infrared images.
Of course, our method can be also applied to the gray level images with similar statistical features, since the method is proposed to process the histogram only, but not the type of images.In fact, the proposed method performed well in the above subsection, in which there are simulated data sets, not infrared images.But we still think one cannot find out other types of images with the special features, which is more suitable than the infrared images.In other words, our method would generally achieve better performance by applying to infrared images rather than gray level images.
Theoretically, another statistical criterion can be provided for gray level images which are more appropriate than the infrared images.Unfortunately, this criterion is still open and to be further investigated.Nevertheless, our method does not want to apply to all images.Meanwhile, we cannot do so, since each technique has both strengths and weaknesses.
The Limitation.
In general, our method demonstrates good performance, both efficiency and effectiveness.Nonetheless, each algorithm has its advantages and disadvantages.None is generally applicable to all images; our method is no exception.The proposed method belongs to a special type of statistical approach, and it takes the histogram as the The results of simulated data applying the proposed method compared to selected algorithms.The simulated images and histograms are in the first row, and result for each image consists of two rows.From left to right, from up to down, the result images by Li, the proposed method, Otsu, Hou, Xue, Kittler, Kapur, and Ramesh, respectively.fundamental basis; the division of background and object pixels depends on the statistical feature reflected by the image.The proposed cloud model has a certain capacity of processing uncertainty, when a small number of the object pixels are disguised as the background pixels from the point of view of the grayscale values.But it is never omnipotent, and when the statistical features of the object pixels in images were influenced seriously, the proposed method would suffer the oversegmentation.In extreme cases, the results would be unsatisfied, even wrong.For example, the proposed method currently cannot deal with texture; thus it does not work well if involved into the gray level images with texture, which can be easily processed by the other specific methods.In this view, our method is more specified for the infrared images, since it is difficult to reflect the target texture information by infrared images.Additionally, incorporating semantics into image segmentation and scene understanding is recently a new trend.The lack of introducing semantics information is another limitation of the proposed method, although the statistical method is simple and feasible.
The Extension.
In this paper, the cloud model, 2nd-order essentially, uses three numerical characteristics Ex, En, He to represent the classes of background and objects pixels.Furthermore, cloud model can generate the next-order entropy, which is the higher-order cloud model, if hyperentropy is not enough to support or represent the statistical properties of a given image.Theoretically, the entropy of cloud model has the ability of an infinite extension.Wang et al. [17] have proposed the th-order generic normal cloud model, which is expressed by + 1 numerical characteristics.The recursive definition of the th-order normal cloud is the popularization of the 2nd-order normal cloud and the normal distribution.The th-order normal cloud has the unimodal and long-tail property, and then it is able to represent a power-law distribution [33].From normal distribution to power-law distribution, cloud model has flexibility to represent the classes of background and objects in images.
For the application on image thresholding using higherorder cloud model, only one extra step is inserted as preprocessing of the proposed method.A possible way is to search the optimal order from 1 to (the right bound of order) so as to achieve that th-order normal cloud can represent the two classes with the least error, compared with the original histogram.The step is easy to implement, but the searching process of should take the additional time.Theoretically, with a larger , the representing error is less, and the segmented results are more perfect, while the time cost is more.In practice, how to keep a balance between them is really a challenge.From the perspective of time complexity, we only use 2nd-order cloud model in this paper, which is a special case of the th-order normal cloud model.
Summary and Conclusion
In this paper, a new algorithm for infrared image thresholding has been described.We analyze the rationale of the proposed approach.Cloud model is introduced into the representation of image background and objects, as named by cloud model with three parameters (Ex, En, He).The method possesses only one free parameter, the hyper-entropy He, over which a criterion function is evaluated and the optimal threshold can be determined by optimizing the function value.A comparison with seven other thresholding methods has been presented.The results indicate that the proposed method is effective and efficient and highly competitive with other popular methods.Like any other methods, the proposed method maybe also has limitations and suffers from oversegmentation in some cases, which will be further considered and essentially improved in our future research.
Figure 1 :
Figure 1: The sample image named irw10-000217: (a) the original image, (b) the ground-truth image, (c) the grayscale histogram, and (d) the cloud models for two classes.
Figure 2 :
Figure 2: The segmented result of sample image by the proposed method: (a) the result image, (b) the cloud model-based function value.
Figure 4 :
Figure 4: The analysis of sample image: (a) histograms of background and object, (b) frequency error rate of background and object using Gaussian fitting, and (c) the semi-log plot of function values by various methods.
Figure 5 :
Figure 5: The first group of test images: the first row is the original images and the second is ground-truth images-from left to right, named airplane, person, boat, and star, respectively.
Figure 6 :
Figure 6: The second group of test images: the first row is the original images and the second is ground-truth images-from left to right, named irw101, irw102, irin011, and irin012, respectively.
Figure 10 :
Figure 10: The overall results for the irin011 image with noise: (a) total curves for the images corrupted by Gaussian noise of different variances, (b) total curves for the images corrupted by Salt and pepper noise of different intensities.
Figure 12 :
Figure12: The results of simulated data applying the proposed method compared to selected algorithms.The simulated images and histograms are in the first row, and result for each image consists of two rows.From left to right, from up to down, the result images by Li, the proposed method, Otsu, Hou, Xue, Kittler, Kapur, and Ramesh, respectively.
Table 1 :
Thresholds, misclassified pixels, ME values, and running times obtained by four traditional methods, Otsu, Hou, Li, and Xue, as well as the proposed method.
Table 2 :
Thresholds (), misclassified pixels (MP), ME values, and running times (RT) obtained by selected methods and the proposed method.
4.4.Application on Infrared Image Target Detection.In this group of experiments, we apply the proposed method into infrared image target detection.By the way, six methods, including Otsu, Hou, Xue, Kittler, Kapur, and Ramesh, have inevitable weakness for infrared images according to the previous section; thus these methods are not included in the comparison.We only compare Li's results with ours. | 8,632.4 | 2016-05-16T00:00:00.000 | [
"Computer Science"
] |
An auto-balancer device for high spin-drying frequencies ( LoWash Project )
Auto-balancing or active control balancing can be efficient solutions for high speed rotors with changing out-ofbalance loads like washing machines in spin-drying mode. In the LoWash EU project, Vibratec is in charge to design, to build and to validate a balancing system for reducing the vibrations at high spin-drying speeds. The system is based on two trolleys rolling in a ring linked to the drum. The trolley shape allows a ring cross section optimization and they are equipped with a mechanism for escaping the disadvantage encountered at low speeds by similar devices. Analytical and multi-body models are first made for understanding the mechanisms, highlighting the driving parameters and drawing the final design of a first prototype which is inserted in a washing machine drum. Different tests are carried out for different initial unbalances and different rotation speeds: the residual unbalance is measured by means of a set of accelerometers mounted on the tub, while the mobile masses behaviour is observed by means of a large aperture swift camera. The test results highlight the auto-balancer high efficiency but also the sensitivity to geometrical defects which should be corrected in the next systems. According the theory, the balancing is efficient when the rotation frequency is significantly greater than the hanging frequencies. The multi-body model relevance is also demonstrated. A washer-dryer prototype including an auto-balancer second prototype and two other innovations, regarding thermal exchange efficiency and drum insulation, will be tested in operating conditions.
Introduction
The out-of-balance loads are the main source of vibration for the rotating machines.It is especially the case of the washing machines which are submitted to unbalances which are not only high but also changing, in magnitude and location, during the spin-drying cycle.In this case, an active control balancing or an auto-balancer are required.This paper deals with the second solution which has been designed and assessed within the LoWash EU project which aims at developing a new innovative washer-dryer.After few considerations about the assumptions and the theory of rotor balancing, the second section presents the simulations models which have been used for designing a first prototype of an auto-balancer device.
The third section is dedicated to the measurements and the results analysis led on the first prototype.The results allow investigating the balancing device behaviour and assessing the global efficiency.The multi-body model results are also assessed by comparison to the measurement results.
The fourth section presents the results obtained on a second prototype, aimed to be inserted in the final washer-dryer prototype, and in which some improvements, deduced from the first prototype results are implemented.
The conclusion is made in the fifth section and dedicated to the balancing device efficiency, its potential improvements and its potential extensions to other industrial applications.
Theory
Many studies ( [1] to [6]) deal with the automatic balancing by mobile masses (in general balls).The most important remark concerns the rotational speed of the drum ω, which has to be higher than the last suspension natural frequency ω n in order to have the correcting masses out of phase with the unbalance ( [1], see Fig. 1).
Let us consider the hanged system ( [2]) on the Fig. 2 constituted by a rotor (mass M), an unbalance (mass m u ) and several balancing masses m i .
The equations of motion are derived from Lagrange's equations which lead to: Where, M is the total mass of the system, thus c j is the suspension damping, k j is the suspension stiffness, e is unbalance radius, l i is the radius at which the correcting masses are located, δ i is the damping acting on the mass i, β i is the angle between the unbalance and the correcting mass i.
Multi-body model
As a multi-body is planned for designing the auto-balancer, a model of the auto-balancer included in a single plane rotor is validated by comparison to the analytical model (see Fig. 3) issued from the theory and implemented in Matlab software.
Parameters analysis and final design
By considering the previous results, a final Adams model (Fig. 3) can be proposed which consists in inserting a ring model in the global washing machine model and replacing the balls by shaped trolleys.This replacement aims at optimizing the ring section while minimizing the mobile masses number and cancelling the disadvantage of such a system at low speed to geometric defects and to determine the conditions to fulfil for an efficient balancing.The most important is the comparison of the efficiency between 1 balancing plane and 2 balancing planes (see Fig. 5) which shows that the benefits of a second plane is low (regarding the industrial constraints) and that one balancing ring is sufficient (the drum is a short rotor with a length lesser than the diameter).
Measurements -1 st prototype 3.1. Experimental set-up and test matrix
A first prototype is built (see on Fig. 6), based on the simulation results.It is fixed inside a washer dryer drum for tests.The aim is to validate the auto-balancer and to assess its efficiency.
The experimental set-up includes: -A laser speed sensor for the rotation speed measurement.-4 3-D accelerometer for the tub and drum displacements measurements (the main used result is the YZ plane trajectory built from the average magnitude in Y and Z direction).-A wide aperture camera the frequency of which can be synchronised or slightly unsynchronised with the drum rotation.
The rotation speed is manually driven by a specific power supply.
Preliminary tests
The system is supposed to be efficient after the first natural frequencies (the hanging modes frequencies).The hanging mass is made of the tub and the equipments fixed on it (including the drum).From one side, a short experimental modal analysis is carried out in order to identify the natural frequencies, from the other side the resonances are identified during a drum speed up (see Fig. 7).
The resonances are located from 180 rpm (3 Hz) to 360 rpm (6 Hz) while the EMA frequencies are between 3.4 Hz to 6.5 Hz.Operational and natural mode shapes are similar and the frequency shift (10%) is probably due to non linearity.Thus the auto-balancer is expected to be efficient after 400 rpm.
Running tests and analysis
The tests are performed for different spin-drying frequencies from 600 rpm to 1400 rpm and 3 unbalance values: 150 g, 300 g and 400 g.
The Fig. 7 presents the results obtained at different speeds with a 300 g initial unbalance.The dynamic equation says that the ellipse radius is proportional to the unbalance load: the external circle on the Fig. 7 shows that a 3.5 mm radius corresponds to a 300 g unbalance at 1100 rpm.The displacement magnitude is expected to become independent from the speed when it is high enough because the inertia term became significantly preponderant at high speeds.Actually, if a significant difference can be observed between 600 rpm (which is close to the highest hanging frequency) and 1100 rpm, after 1100 rpm the ellipse radius is constant and the results can be easily extrapolated to the higher speeds.Moreover, the Fig. 8 highlight the efficiency of the auto-balancer with a final unbalance weight located around 80 g.The camera is used for observing the trolleys behaviour.Colour marks are made on the drum and the trolleys for the angles estimation because of a large optical distortion (the camera cannot be centred in the drum).Video films allow observing the trolleys displacement during the time.A typical result is shown on Fig. 9 which presents a run at 620 rpm with a 150 g unbalance.At the final location the trolleys make a 120 • angle while the theoretical expected value is 115 • .This deviation is small and the balancing is efficient with a residual unbalance around 20 g. 10 trajectories analysis issued from 10 consecutive runs are carried out for each tested configuration.The system is globally very efficient but a significant dispersion can be observed.The relative dispersion is higher when the initial imbalance to correct is low but in any case the residual imbalance is lesser than 100 g for 98% of the runs.The results for a 400 g unbalance at 1100 rpm are plotted on Fig. 10.The system is very efficient and the residual unbalance is between 40 g and 100 g with this high initial unbalance.A 40 g residual unbalance is the maximum of achievable performance because the trolleys total mass is 360 g.
Regardless the initial unbalance, the worst residual unbalance seems to be close to 100 g.It could be due to a trolley wheel blocked on a rolling path junction: for the insertion in an existing drum, the ring is made of two half rings and there is a residual step at each ring junction.The optical observation of the trolleys locations confirms this assumption.The 100 g of residual unbalance could correspond to a threshold above of which the balancing forces are sufficient to free the trolley (to climb the step).
Simulation vs. measure
A computation measurement comparison is made in order to validate the model; the drum initial unbalance (17 g) is set in the model for improving the comparison.A good correlation is observable on Fig. 11.
Measurements on the 2 nd prototype
As a second prototype is required for final tests with water, the ring and trolleys design is modified for reducing the junction effects.Despite these modifications, steps remain at the rolling path junctions (probably due to the forces generated during the glue polymerisation inside the drum).03001-p.4Thus although the average performance is improved the maximum of residual unbalance remains around 100 g.A sequence of tests is performed for observing the sensitivity to the unbalance location: the unbalance test mass is alternatively set on 3 different angular locations separated from 120 • .This test show a strong dependence from the initial unbalance location; this conclusion is enforced by the good reproducibility for a given unbalance location (see Fig. 12).These results seem to confirm that trolleys are blocked by geometrical defaults.A video observation is not possible for validating this assumption because this second prototype is equipped with a cover for waterproofness.
Conclusions
Designed with a multi-body dynamic model, an autobalancer has been mounted inside a washer-dryer drum.While other systems use generally balls for mobile masses, the LoWash system uses shaped trolleys allowing the ring cross section optimization and the control of the mobile masses behaviour at low speed.The test results highlight the efficiency of such a device but also the sensitivity to rolling track defects.There are 2 ways to reduce the defects impact: -Deleting or reducing the defect itself -it could be done by tooling the ring in one piece.-Adding a soft rolling band on the wheel for facilitating the step crossing.
Because it can reduce significantly the drum vibrations and the dynamic loads on the drum bearing, the auto-balancer can be used for: -Increasing the spin-drying speed.
-Lightening the hanging mass and components.
A work has to be done to adapt the system design to the requirements of washing machines massive production.
The potential dissemination of the auto-balancer to other industrial machines (pumps, turbines, fans . . . ) has also to be investigated.
Figure 7 .
Figure 7. 1 st prototype.Drum displacement during a drum speed up. | 2,611 | 2015-01-01T00:00:00.000 | [
"Engineering"
] |
Effect of annealing temperature on microstructures and mechanical properties of a hot-rolled Ti-6Al-4V-0.5Ni-0.5Nb alloy for offshore applications
Titanium alloys have been increasingly used for downhole tubular and components for corrosive oil wells due to their combination of strength, density, and corrosion resistance. The present study investigated the effect of annealing temperature on microstructures and mechanical properties of a Ti-6Al-4V-0.5Ni-0.5Nb alloy developed for oil well application. The α/β- and β-rolled Ti-6Al-4V-0.5Ni-0.5Nb alloy plates were subjected to annealing at different temperatures ranging from 750 to 900 °C and held for 1 h, and the microstructure and mechanical properties were evaluated. The experimental results show that the developed alloy exhibited both high strength and ductility than that of the conventional Ti-6Al-4V alloy. The α/β- and β-rolled alloy after annealing exhibited bimodal structures and provided well-balanced strength, ductility, and impact toughness, high strength of 950−1000 MPa, elongation of 17%−18%, and impact toughness of 90−120 J cm−2. From a mechanical viewpoint, the new alloy after appropriate annealing is suitable for oil well applications.
Introduction
In the past decades, titanium alloys demand for offshore application have increased tremendously. The offshore oil and gas industries rely heavily on titanium and its alloys for a wide range of applications because of their high strength-to-weight ratio and, excellent corrosion resistance, not only in seawater but also in the petroleum refinery environment [1]. For instance, in recent years, titanium alloys have begun to be used for the making of downhole tubular, and set screws of drills. These are components of oil and gas wells, requiring excellent resistance to corrosion and high strength and toughness. Titanium plays a well-deserved role in the oil and gas industries [2].
Among titanium and its alloys, commercially pure titanium (CP-Ti), i.e., Grade 2, is by far the most used titanium for offshore environments, because of its good corrosion resistance, good formability, and weldability [3,4]. However, unalloyed titanium often exhibits relatively low strength, and thus cannot be used as structural components when high strength is needed [5,6]. Therefore, alloying with a trace amount of elements such as Pd, Ni, or Mo can strengthen pure titanium and also enhance its corrosion resistance. Therefore, Ti-0.05Pd and Ti-0.3Mo-0.8Ni alloy systems were developed for higher strength and corrosion resistance [7,8]. In addition, Ti-3Al-2.5V and Ti-6Al-4V alloys were also used in offshore applications [9] because of their high strength, typically a minimum yield strength of 485 MPa and 760 MPa, respectively. Based on these alloys, Ti-3Al-2.5V-0.1Ru and Ti-6Al-4V-0.1Ru were also developed for higher-temperature applications [10] because Pd or Ru contained titanium alloys offer excellent crevice corrosion resistance in high-temperature saltwater. However, the Pd or Ru is rare and expensive. As a result, cheaper elements that can enhance both the mechanical properties and corrosion resistance are preferable to use as alloying elements. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
A recent study reported that a small amount of Ni and Nb addition can improve the corrosion resistance of titanium and its alloys. For example, the addition of Ni can enhance the strength [11], crevice corrosion resistance, and wear resistance of pure titanium [12]. Perez reported that Nb can prohibit the formation of TiO 2 and significantly reduce the water penetration effect of pure titanium oxidized in water [13]. A recent study also reported that Nb addition significantly promoted the formation of continuous Al 2 O 3 layers in the oxide scales because the addition of Nb increased the formation energy barrier of TiO 2 , while it decreased the formation energy barrier of Al 2 O 3 [14].
Based on these studies, a Ti-6Al-4V-0.5Ni-0.5Nb (wt%) alloy was developed for the purpose of offshore applications, such as oil country tubular goods and fasteners, using a small amount of Ni and Nb as the alloying elements. Ni and Nb are relatively cheaper elements than Pd or Ru. However, as a newly designed alloy, the relationship between microstructures and mechanical properties of the Ti-6Al-4V-0.5Ni-0.5Nb alloy remained unknown. The effect of hot working and subsequent heat treatments on the microstructural evolution and resulting mechanical properties also needs further investigation. Therefore, in this study, we investigated the effect of heat treatments on the microstructures and mechanical properties, tensile properties, and impact toughness, of the hot-rolled alloy samples. This study would provide a reference and guidance for the alloy design and engineering application of cost-effective and high-strength titanium alloys for offshore applications, such as oil country tubular goods and set screws of drills.
Materials and experimental 2.1. Materials
The raw material had a nominal composition of Ti-6Al-4V-0.5Ni-0.5Nb (wt%). An ingot with a diameter of 600 mm and a length of 800 mm was smelted by using vacuum arc remelting process. The consumable electrodes were remelted three times to obtain a homogeneous composition. The microstructure of the alloy ingot is shown in figure 1, which is typical of widmannstatten structure of as-cast titanium alloys, consisting of α colonies and thin β layers between α laths. The β-transus temperature of this alloy was measured to be 950 ± 5°C, by using a metallurgical method by observing the microstructures (martensite formation) of the samples heat treated at 930, 940, 950, 960, and 970°C for 1 h and followed by water quench. The ingot was first homogenized at 1200°C and then hot forged into a billet of Φ200 mm multiple times. A piece of billet with a dimension of Φ200 × 300 mm was cut from the forged billet and then hot forged at 930°C into a plate of 60 mm in thickness and 400 mm in width. The forged plate was then hot rolled at 920°C and 1020°C into a thinner plate of 30 mm in thickness and 400 mm in width. They are hereafter referred to as the α/β rolling and β rolling samples, respectively. The hot rolled plates were used as the initial material in this study. The as-rolled samples were then annealed in a preheated furnace at different temperatures of 750, 800, 850, and 900°C and held for 1 h and then followed by air cooling.
Microstructural observations
The samples for microstructural observations were mechanically ground using abrasive papers and then polished using 5 μm, and 1.5 μm diamond paste and 0.04 μm colloidal silica suspension solution. The polished The optical microstructures of the as-rolled and the annealed samples were observed by using an optical microscope (Axiovert A1). A field-emission scanning electron microscope (FESEM: TESCAN MIRA III) equipped with an electron backscatter diffraction (EBSD) detector was also employed to detect the microstructures and grain orientations. The EBSD mapping was performed at a voltage of 20 kV, with a working distance of 15 mm and a step size of 2 μm.
Mechanical properties
The samples for tensile testing and Charpy impact testing were cut from the as-rolled and heat-treated plates, the length direction of the samples paralleled to the rolling direction. The room-temperature tensile tests and Charpy impact tests were carried out as per GB/T 228.1-2010 and HB 5144-96, respectively. The samples for tensile testing had a gauge dimension of 5 mm in diameter and 35 mm in length. The samples were tested at room temperature with a strain rate of 0.001 s −1 using an MTS E45.0 tensile machine. The samples for Charpy impact testing were machined to a standard dimension of 10 × 10 × 55 mm and, a 2 mm V-notch. Charpy impact tests were performed at room temperature by using a NI300C instrumented Charpy impact testing equipment. Three samples of each condition of the tests were measured for average values to ensure the consistency and repeatability of experimental results.
Results and discussion
3.1. As-rolled microstructure Figure 2 shows the initial microstructures of the α/βand β-rolling samples. It is observed that the α/β-rolling sample exhibited an equiaxed grain structure with a large fraction of equiaxed α grains, approximately 65%, and the equiaxed α in the α/β-rolling sample had an average size of approximately 20 μm, as shown in figure 2(a). In the β-rolling sample, owing to the rolling at the temperature above the β transus temperature, the prior β grains are clearly observed, with a grain size of approximately 200 μm, as shown in figure 2(b). In addition, grain boundary α layers are formed along the prior β boundaries, as shown in figure 2(b). The β-rolling sample exhibited a lamellar structure having many α colonies and a very small amount of equiaxed α grains (approximately 8%). The grain size of the equiaxed α grains is approximately 10 μm. Figure 3 shows the microstructures of the α/βand β-rolling samples after annealing at different temperatures. The α/β-rolling samples after annealing at this temperature range exhibited a standard bimodal structure that consisted of both equiaxed α and very fine α laths. The β-rolling samples after annealing also exhibited a bimodal structure; however, they contained slightly finer equiaxed α but coarser lamellar α. Grain boundary α can be also observed in these samples, as shown in figures 3(e)−(h). Additionally, the total fractions of equiaxed α are much lower than those of the α/β-rolling samples after annealing. It is observed in figure 4(a) that the fraction of equiaxed α in the α/β-rolling samples after annealing exhibited a range from 50% to 70%, and the equiaxed α fraction in the β-rolling samples after annealing ranged from 20% to 30%. In addition, annealing results in grain refinement of equiaxed αin the samples, as shown in figure 4(b); however, increasing the annealing temperature did not significantly change the grain size of equiaxed α. Figure 5 shows the EBSD IPF maps of the annealed samples. When annealed at 800°C, grains with orientation gradients can be observed as indicated by arrows in figures 5(a) and (c). This implies that the recrystallization did not accomplish at this temperature. However, the samples annealed at 900°C are fully recrystallized, because in-grain orientation gradients are hardly observed as shown in figures 5(b) and (d). The βrolling samples exhibited finer grain sizes than those of the α/β-rolling samples when annealed at the same temperature. This is consistent with the optical microstructures in figure 3. Additionally, the α/β-rolling and βrolling samples after annealing at 800 and 900°C exhibited significantly different textures. 〈1120〉α tends to align with the rolling direction in the α/β-rolling samples, while 〈0001〉α aligns with the rolling direction in the β-rolling samples. This finding is consistent with the previous study on Ti-6Al-4V [15]. Figure 6 shows the mechanical properties of the samples as a function of annealing temperature. The α/βrolling and β-rolling samples exhibited UTSs of 975 MPa and 1027 MPa, YSs of 955 MPa and 995 MPa, Els of 17% and 18%, respectively. Note that both the strength and ductility are higher than that of Ti-6Al-4V alloy [16]. The β-rolling sample exhibited a similar ductility to the α/β-rolling sample, but higher strength. because the β-rolling sample had the finer equiaxed α and a larger amount of lamellar α. Although the two samples had a similar ductility, the α/β-rolling sample exhibited a much higher impact toughness, 72.3 J cm −2 . In general, Ti-6Al-4V with lamellar structure shows larger impact toughness than that of the alloy with equiaxed structure because lamella α could significantly deflect the crack [17]. The present results showed an opposite phenomenon, probably because the thickness of α lath in the α/β-rolling sample is much smaller than that of the β-rolling sample, as shown in figure 2.
Mechanical properties
After annealing, the strengths of the α/β-rolling and β-rolling samples both decreased compared to their asrolled conditions, as shown in figure 6(a); however, the annealing has no significant impact on the ductility of the α/β-rolling and the β-rolling samples, as shown in figure 6(b). The impact toughness of both the samples after annealing significantly increased, approximately 50%, as shown in figure 6(c). The grain refinement in both samples after annealing does not have a contribution to a noticeable strengthening but results in improved impact toughness.
Regarding the effect of annealing temperature, the UTS of the α/β-rolling and β-rolling samples decreased with increasing annealing temperature, however, the YS of the samples did not change significantly with increasing temperature, as shown in figure 6(a). The ductility of the β-rolling sample gradually increased with increasing temperature, but the ductility of the α/β-rolling sample maintained an Els of 17%−18%, as shown in figure 6(b). Both the two samples exhibited an increased toughness with increasing annealing temperature, from 55 to 100 J cm −2 for the β-rolling sample, and from 72 to 140 J cm −2 , as shown in figure 6(c). Note that the α/βrolling samples always had much higher impact toughness than those of the β-rolling samples.
Recommendations and future works
Considering the actual environment of offshore applications, further study needs to investigate the corrosion resistance of the new alloy after different annealing under high-temperature and high-pressure conditions and hydrogen sulfide, carbon dioxide, high concentrations of chloride ions, and even elemental sulfur environments.
Conclusions
The effect of annealing on the microstructure and mechanical properties of an α/β-rolled and β-rolled Ti-6Al-4V-0.5Ni-0.5Nb alloy developed for oil application was investigated. The conclusions are as follows: (1)The α/β-rolled alloy exhibited a bimodal structure and had moderate strength and high impact toughness, while the β-rolled alloy exhibited a nearly lamellar structure and had high strength but low impact toughness.
(2)Annealing introduced a significant grain refinement in both the α/β-rolled and β-rolled samples, however, the grain refinement does not contribute to grain-refinement strengthening but gives rise to improved impact toughness.
(3)The β-rolled alloy after annealing always exhibited higher strength but lower impact toughness than that of the α/β-rolled alloy after the same annealing.
(4)Further study needs to investigate the corrosion resistance of this alloy in hostile service environments, especially for high-pressure and high-temperature well applications. | 3,196.2 | 2023-02-08T00:00:00.000 | [
"Materials Science",
"Engineering"
] |
Experimental and theoretical study of friction torque from radial ball bearings
In this paper it is presented a numerical simulation and an experimental study of total friction torque from radial ball bearings. For this purpose it is conceived a virtual CAD model of the experimental test bench for bearing friction torque measurement. The virtual model it is used for numerical simulation in Adams software, that allows dynamic study of multi-body systems and in particularly with facility Adams Machinery of dynamic behavior of machine parts. It is manufactured an experimental prototype of the test bench for radial ball bearings friction torque measurement. In order to measure the friction torque of the tested bearings it is used an equal resistance elastic beam element, with strain gauge transducer to measure bending deformations. The actuation electric motor of the bench has the shaft mounted on two bearings and the motor housing is fixed to the free side of the elastic beam, which is bended by a force proportional with the total friction torque. The beam elastic element with strain gauge transducer is calibrated in order to measure the force occurred. Experimental determination of the friction torque is made for several progressive radial loads. It is established the correlation from the friction torque and bearing radial load. The bench allows testing of several types and dimensions of radial bearings, in order to establish the bearing durability and of total friction torque.
Introduction
In the literature are presented a large number of studies concerning the friction, lubrication and wear of materials. Test procedures for experimental measuring for the friction torque in rolling bearings are presented in [1]. For this purpose it is used a modified four ball machine, in order to test rolling bearings. It is monitored the friction torque and operating temperature. Other research [2] examines the frictional power loss of a needle roller bearing lubricated with grease. Obtained results reveal that the test bearing has higher friction compared with bearings lubricated with conventional lubrication. Low cost systems for the force and torque measurement, for wheel bearings are presented in [3]. Also, a study for the friction of a ball screw is presented in [4]. It is presented a theoretical model for the friction between balls of the screw. The study is useful because provides a theory support for reasonably reducing the screw ball friction. Studies based on the lubrication theory are presented in [5]. It is studied the wear behavior, using a four ball wear tester. Tests are made with a low viscosity additivated mineral oil. Two types of balls are used, steel and ceramic and obtained results shows greater resistance of the ceramic balls. The effects of roughness upon friction of the plastics, intended for bearing applications are studied in [6]. Obtained results shows that not exists an optimal roughness for minimum friction for polymers and the friction depends on the bulk properties of the polymer. Aspects concerning automotive tribology are presented in [7]. In this research is presented an overview of various lubrication aspects, for typical power train, including engine, transmission and driveline. Also an aspect presented consist in the currents status and future trends in automotive lubricants. Other comparative studies on the tribological behavior of lubricants are presented in [8]. Test methods for engine lubricants are presented in ASTM (American Society for Testing and Materials) standards and also are available a large number of patents [9], concerning the bearing friction.
Theoretical considerations
The resistant torque that appears in roller bearings is produced by a complex of friction appearance ways. The rolling friction phenomena complexity is generated by a large number of factors that occurs simultaneously. The most important sources for bearing friction are: friction generated by the contact deformations, rolling friction on the contact surfaces, friction produced by the lubricants, sliding of bearing elements, friction from seals. For usual calculations the friction torque can be estimated, with sufficient precision, with relations obtained from experimental results. The equation (1) establishes the total friction torque (Mtc).
where: Ml is the resistant torque produced by the fluid friction of bearing elements in contact with the lubricant; Mf is the resistant torque produced by the bearing load.
Equation (1) is used for bearings that operate to moderate speed and loads. In case of the ball bearings, used for high speed, when the frictions produced by the spin and gyroscopic motion are important, should be considered the friction torque produced by these motions. The resistant torque (Mf) produced by the bearing load, is computed with the equation (2).
where: F [N] is the bearing radial load; dm is the bearing medium diameter, in [m].
For radial ball bearings the factor f1 is established with the relation: where: [N]; C0= 8000 N, for radial ball bearing type 6204. In order to compute the resistant torque (Ml) produced by the fluid friction of bearing elements in contact with the lubricant are used equations (4) or (5):
Experimental test bench description
For experimental measurement of the bearing torque it is used the test bench shown in figure 1, where the tested bearing is mounted on the shaft (2) and it has oscillating exterior housing, as see in figure 1. The actuation is made with an electric motor, with the nominal speed of 1450 rpm. Components of the test bench are: (1) shaft, (2) -tested bearing, mounted on oscillating housing; (3)shaft bearings supports; (4) -lever to produce the radial load; (5) -shaft bearings; (6) -elastic coupling; (7) -beam with strain gauge transducers; (8) -electric motor. Radial ball bearing (2) subjected to experimental tests is mounted on the shaft (1). The outer rig of the tested ball bearing is mounted to an oscillating cylindrical bush. The radial load of the bearing is created with the lever (4) that is articulated to point O (figure 1). To point A are added additional weights G (as see in Fig. 4) in order to create different radial loads for the bearing. Electric motor (8) is used to rotate the shaft (1). The shafts of tested bearing and electric motor are connected with an elastic coupling (6). Electric motor stator is fixed with a tie rod to the elastic beam (7) free side. The elastic beam is subjected to deformations proportional with the motor torque. In order to measure the beam bending force, produced by the motor torque are used the strain gauge transducers (10), as see in figure 2. With experimental calibration is established the dependency between elastic deformation versus bending force and motor torque, which is necessary to be determined. Bending stress that appears in the beam is computed with equation (6) . For experimental measurement of the deformation it is used the MGCPlus acquisition system from Hottinger Baldwin Messtechnik. In order to evaluate the bending force, the beam transducer it is calibrated. The calibration diagram of the strain gauge transducer is presented in figure 3. stabilized to 12.5 μm/m. To this deformation corresponds a bending force of 11.06 N. Considering the distance between electric motor shaft and beam longitudinal center, as 71.3 mm, it results a friction measured torque by 788.57 Nmm. The radial force in order to load the bearing is created by adding metal discs with 1.8 kg weight of the lever (4), as see in figure 4. Registered results of the beam deformations, by adding three supplementary loads are presented in figure 6. Obtained results, corresponding to radial load regime, shows an increase of the total friction torque. Obtained theoretical dependence of the bearing friction torque versus radial load is presented in figure 7. Also, experimental obtained dependence of the bearing friction torque versus redial bearing load is presented in the same graphic, figure 7. It is observed a linear dependence, the friction torque increasing with the bearing radial load. Similar linear dependencies for the friction torque versus radial load are obtained by bearing manufacturer, to performed tests [10].
Numerical simulation in ADAMS of bearing test bench
ADAMS software offers the possibility with machinery plug-in to define a various number of machine elements. To design the dynamic model of the bearing test bench in ADAMS, for the kinematic elements have been specified the materials and are defined the revolute joints as ball bearings. The ball bearing construction in Adams Machinery is specified as shown in figure 8. The motion for the electric motor is defined to 157 rad/s. To obtain accurate results the shaft (1) of the test bench is considered as a deformable body. Based on Adams Machinery feature, the tested bearing life is 526 hours, as presented in figure 9. Figure 9. Bearing life report using Adams Machinery feature.
In figure 10, is presented the bearing center marker translational displacement upon X axis (axial). In figure 11, is presented the computed translational displacement of bearing center marker upon Y axis (radial). The axial displacement has small values (reaches 0.0015 mm), and the radial displacement is larger, reaching the value of 0.022 mm. Adams computed translational deformation of the shaft (1) center marker are shown in figure 12.
Conclusion
In this paper it is presented the design of a test bench for bearing friction torque measurement. The ball bearing subjected to tests in this study is type SKF 6204. From theory and experimental tests are obtained similar linear dependencies between the friction torque and bearing radial load. A graphic comparison for the obtained theoretical and experimental obtained results is presented in figure 7. Theoretically, it is computed a total friction torque of 1647.96 Nmm for the radial load of 291 N and the torque increases to 5018.17 Nmm in case of 1164 N bearing radial load. Experimentally, at 291 N bearing radial load it is measured a total torque of 1383.22 Nmm and for the maximum radial load of 1164 N the value of the friction torque reaches 5532.88 Nmm. The numerical simulation of the test bench is performed with Adams software, considering as flexible body the shaft mounted on bearings. Adams machinery reports a bearing life of 526.26 hours, as presented in figure 9. Are computed in Adams and presented in the paper the bearing center marker translational displacements. Because of the radial load it is computed a translational of 0.022 mm for the marker attached to the bearing center. The shaft deformations shown in figure 12, have the maximum amplitude of 0.13 mm. Proposed test bench can be used to tests different types of radial bearings, with different lubricants. | 2,426.2 | 2017-08-25T00:00:00.000 | [
"Engineering"
] |
Multiscale Analysis of Mechanical Properties of 3D Orthogonal Woven Composites with Randomly Distributed Voids
Voids are common defects in 3D woven composites because of the complicated manufacturing processes of the composites. In this study, a micro–meso multiscale analysis was conducted to evaluate the influence of voids on the mechanical properties of three-dimensional orthogonal woven composites. Statistical analysis was implemented to calculate the outputs of models under the different scales. A method is proposed to generate the reasonable mechanical properties of the microscale models considering randomly distributed voids and fiber filaments. The distributions of the generated properties agree well with the calculated results. These properties were utilized as inputs for the mesoscale models, in which void defects were also considered. The effects of these defects were calculated and investigated. The results indicate that tensile and shear strengths were more sensitive to the microscale voids, while the compressive strength was more influenced by mesoscale voids. The results of this study can provide a design basis for evaluating the quality of 3D woven composites with void defects.
Introduction
The past few decades have witnessed significant advances in the application of 3D woven composites owing to their superior mechanical properties, such as high specific stiffness/strength, high damage tolerance and high energy absorption capability [1][2][3]. The wide and extended use of these composites requires higher mechanical property standards. However, defects such as voids in the matrix are almost inevitable during the molding process because of the complex spatial structure of the woven composite. The presence of random void defects can significantly reduce the mechanical properties and cause the fluctuation of mechanical properties [4][5][6]. Moreover, the randomly distributed fiber filaments in microscale models also significantly influence the mechanical performance of composites [7]. Thus, this paper presents a micro-mesoscale analysis to evaluate the influence of randomly distributed voids on the mechanical behaviors of three-dimensional orthogonal woven composites (3DOWCs), considering randomly distributed fiber filaments in bundles.
Three-dimensional woven composites have complicated spatial structures, and their heterogeneity mainly reflects in micro and mesoscales. In the microscale model, the fiber bundles, also treated as unidirectional (UD) composites, of 3D woven composites consist of a matrix and thousands of randomly distributed fiber filaments [7][8][9]. In the mesoscale model, the weave architecture, orientations of different fiber bundles and the matrix are incorporated [10][11][12][13][14]. In both scales, the representative volume element (RVE) models are treated as periodic structures and utilized to calculate the mechanical properties by applying periodic boundary conditions (PBCs). Outputs from microscale models are used as inputs for mesoscale models. For example, Wang et al. [15] used a hexagonal-array microscale model to predict effective elastic properties of fiber bundles. The outcomes were used as inputs for predicting the moduli of a 3D orthogonal woven fiber-reinforced polymer matrix composite. Zhou et al. [16] conducted a progressive damage analysis for 2D plain woven composites. The authors utilized RVE models of hexagonally arranged fiber filaments in microscale analysis, calculated the elastic properties and strength properties, and then applied the results to mesoscale models. In recent years, an increasing number of researchers have used randomly distributed fiber models instead of regularly arranged models in microscale analysis. Representative volume elements with more fibers can capture the variation in strength, especially in transverse directions. Vajari [17,18] used randomly distributed models to study the influence of micro-voids in the matrix on biaxial strength properties and failure modes. Wang [19] studied the transverse tensile strengths of UD composites, considering thermal residual stress, using randomly distributed fiber models, and found variations due to the distributions. The arrangement of fibers is uncertain in advance. Randomly distributed fibers in micro-scale models will result in different elastic and strength properties. Thus, Monte-Carlo simulations are implemented in this paper to study the influence of randomness on the mechanical behaviors. To obtain a stable result, a process should be repeated numerously because of uncertainties [20][21][22]. In addition to the randomly distributed fibers, there are many other uncertainty factors, such as material properties, volume fractions, random cross sections and randomly distributed voids. Tao et al. [7,23] proposed a multiscale simulation to quantify the effects of numerous uncertainties on mechanical properties for 3DOWCs, including fiber and void distributions, modulus of the matrix, fiber volume contents and fiber bundle dimensions. Zhou et al. [24] coupled the multiscale method and perturbation to study the variability in the effective elastic properties of composites with random material properties. Shi et al. [14] used Weibull distribution material properties to conduct a multiscale damage analysis for 2.5D fiber-reinforced ceramic matrix composites. The results showed significant relationships existed between the constituent properties and the effective mechanical properties. Roham and Mohammad [25] conducted a progressive damage analysis for composite vessels, considering several uncertainties, including fiber volume fraction, winding angle and material strength properties. The importance of considering uncertainties is emphasized using statistical analysis. Guo et al. [26] proposed a random weft cross-section model based on experimental observations. The predicted results agreed well with the experimental results.
Void defect is also a common uncertainty generated from manufacturing processes, and it can mainly be classified into two types: microscale and mesoscale void defects. Recently, several researchers have focused on the effects of these defects on the mechanical performance of composites. For microscale defects, Dong [27] provided a practical method for predicting the effects of process-induced voids on the mechanical properties of carbon fiber/epoxy composites using the RVE model. Tensile and interlaminar shear strengths were calculated and compared with available experimental data. Jiang et al. [28] compared the axial stiffness and strength properties of a single-fiber bundle in UD composites with and without void defect based on a three-representative unit cell model. Carrera [29] used a 1D finite element model based on the Carrera unified formulation to evaluate the influence of voids on UD composites. Hyde et al. [30] conducted comprehensive research on the effects of micro-voids on the strength of UD composites. The effects of void shapes, void volume fractions and void orientations were considered. For mesoscale defects, Dong and Huo [31] studied the elastic properties of 3D braided composites with internal defects using a two-scale finite element model. Huang and Gong [32] used a similar method to predict the void effects on the effective elastic properties of 3DOWC. Gao et al. [13] recently predicted the mechanical properties of 3D braided composites with void defects, establishing a constitutive model and a finite element model of 3D braided composites. The results provided the design basis for evaluating the influence of void defects on mechanical behaviors. An integral part of the multiscale finite element research is to build a microscale RVE that considers uncertainties. Monte-Carlo simulations are then conducted to obtain the statistical distributions of required mechanical properties. Some researchers used mean values as inputs for higher-scale analysis, while other researchers considered the distributions of these parameters and then generated distributed inputs for higher-scale analysis. However, most researchers ignored the relationships between these mechanical properties; the relationships are not stated in their papers. Thus, the main goal of this paper is to devise a multiscale approach to evaluate the influence of microscale and mesoscale void defects on the mechanical properties of 3DOWCs. The relationships between microscale properties are considered. The rest of this paper is organized as follows: Section 2 describes the multiscale models and boundary conditions. Section 3 introduces the constitutive models and failure criteria used in this paper. Section 4 provides the results and discussion on the multiscale analysis. Section 5 presents the concluding remarks and future work.
Multiscale Models and Methods
A 3D orthogonal woven fiber-reinforced polymer matrix composite was investigated in this study, and the material properties of the basic components are listed in Table 1. The 3DOWC had a one-by-one weave architecture. In the thickness direction, there were two layers in the warp direction and three layers in the weft direction. Figure 1 illustrates the framework of the multiscale stochastic model used in this study. Multiscale models with randomly distributed fiber filaments and voids were generated. Statistical analysis was conducted to obtain the correlations between the mechanical properties, and the results are described in Section 4. Then, based on the above correlations, new mechanical properties were generated as the input parameters for mesoscale models. Finally, statistical analysis was conducted to evaluate the effect of voids on the mechanical properties of 3DOWCs.
Microscale Model for UD Composites
In the microscale model, randomly distributed fibers were considered. Since the focus of this study is to evaluate the influence of void defects on mechanical properties, randomly distributed voids were also implemented in the matrix. Therefore, the material properties of fibers and matrix remained constant, and the randomly distributed fibers and voids were the variables. CATIA V5R21 (Vélizy-Villacoublay, France) was used to generate the randomly distributed geometry model of UD composites considering the periodicity, and then HyperMesh 14.0(Altair Engineering, Inc., Troy, MI, USA) was used for meshing. Corresponding nodes were on opposite surfaces to ensure the following application of PBCs. There are two ways to generate the void defects: One is considering the real geometry of voids and distributions and then establishing the finite element model; the other one is randomly selecting the matrix elements, putting them into one set and modifying the material properties of these elements. These two methods result in different void densities or numbers of voids, depending on the grid density. According to Vajari [17,18], increasing the number of matrix voids or void density in the model has no significant effect on the strengths or moduli but alters the crack path. In this study, the latter method was adopted, and its feasibility has been verified by many researchers [28,29,31,32]. To generate the random voids in the matrix, a Python (Python Software Foundation, Beaverton, OR, USA) script was used to modify the input file for ABAQUS 2020(Dassault Systemes Simulia Corp., Johnstone, RI, USA). Random elements were selected from the matrix set to the void set until the content of voids reached the set value. This way, 150 models were generated. Illustrations are shown in Figure 2. The fiber diameter was 7 µm, and the RVE model was 90 µm in length and width and 2 µm in thickness. Over one hundred fibers filaments are included in microscale model. Fibers, voids and matrix were meshed with eight-node linear hexahedral reduced integration elements (C3D8R). There were 84,519 elements and 113,888 nodes in this model. After the mesh sensitivity analysis, accurate mechanical properties could be obtained using the mesh size.
Materials 2021, 14, x FOR PEER REVIEW 5 of 22 [28,29,31,32]. To generate the random voids in the matrix, a Python (Python Software Foundation, Beaverton, OR, USA) script was used to modify the input file for ABAQUS 2020(Dassault Systemes Simulia Corp., Johnstone, RI, USA). Random elements were selected from the matrix set to the void set until the content of voids reached the set value. This way, 150 models were generated. Illustrations are shown in Figure 2. The fiber diameter was 7 μm , and the RVE model was 90 μm in length and width and 2 μm in thickness. Over one hundred fibers filaments are included in microscale model. Fibers, voids and matrix were meshed with eight-node linear hexahedral reduced integration elements (C3D8R). There were 84,519 elements and 113,888 nodes in this model. After the mesh sensitivity analysis, accurate mechanical properties could be obtained using the mesh size.
Mesoscale Model for 3DOWCs
The 3DOWC has a relatively complicated weave architecture, which may cause difficulty in the periodic meshing process. In this study, the textile geometry models of 3DOWC RVEs were generated using TexGen software (version 3.10.0, University of Nottingham, Nottingham, UK) [33]. Compared with the consistent meshing method that considers the real geometry of the fiber bundle section, the voxel meshing method has advantages in periodic meshing with a balance of efficiency and accuracy. Liu [34] studied the strength properties of 3DOWCs based on the voxel model, and the results agreed well with the experimental results. Owing to the efficiency and accuracy of this method, the voxel meshing method for the textile geometry model generated from TexGen was utilized in this study. Figure 3a,b present the mesoscale geometry model and voxel meshing
Mesoscale Model for 3DOWCs
The 3DOWC has a relatively complicated weave architecture, which may cause difficulty in the periodic meshing process. In this study, the textile geometry models of 3DOWC RVEs were generated using TexGen software (version 3.10.0, University of Nottingham, Nottingham, UK) [33]. Compared with the consistent meshing method that considers the real geometry of the fiber bundle section, the voxel meshing method has advantages in periodic meshing with a balance of efficiency and accuracy. Liu [34] studied the strength properties of 3DOWCs based on the voxel model, and the results agreed well with the experimental results. Owing to the efficiency and accuracy of this method, the voxel meshing method for the textile geometry model generated from TexGen was utilized in this study. Figure 3a,b present the mesoscale geometry model and voxel meshing for fiber bundles, respectively. The generation technique of voids in mesoscale model of 3DOWCs is the same with that in microscale model as is discussed in Section 2.1. The material orientations for each element of fiber bundles were automatically assigned by TexGen. The numbers of mesh seeds were 40, 80 and 32 in the warp, weft and thickness directions, respectively. Eight-node linear hexahedral reduced integration elements were assigned for fiber bundle and matrix components. A total of 102,400 elements and 109,593 nodes were included in this model. The 3DOWC has a relatively complicated weave architecture, which may cause difficulty in the periodic meshing process. In this study, the textile geometry models of 3DOWC RVEs were generated using TexGen software (version 3.10.0, University of Nottingham, Nottingham, UK) [33]. Compared with the consistent meshing method that considers the real geometry of the fiber bundle section, the voxel meshing method has advantages in periodic meshing with a balance of efficiency and accuracy. Liu [34] studied the strength properties of 3DOWCs based on the voxel model, and the results agreed well with the experimental results. Owing to the efficiency and accuracy of this method, the voxel meshing method for the textile geometry model generated from TexGen was utilized in this study. Figure 3a,b present the mesoscale geometry model and voxel meshing for fiber bundles, respectively. The generation technique of voids in mesoscale model of 3DOWCs is the same with that in microscale model as is discussed in Section 2.1. The material orientations for each element of fiber bundles were automatically assigned by TexGen. The numbers of mesh seeds were 40, 80 and 32 in the warp, weft and thickness directions, respectively. Eight-node linear hexahedral reduced integration elements were assigned for fiber bundle and matrix components. A total of 102,400 elements and 109,593 nodes were included in this model.
Periodic Boundary Conditions
To satisfy the boundary displacement and boundary stress continuity of the RVE, PBCs should be employed to obtain accurate mechanical behavior. The PBCs should satisfy the following equations: Here, a 1 , a 2 and a 3 represent the RVE length in the first, second and third directions, respectively; ε 0 i1 denotes the strain in the i direction applied on the RVE surface in the first direction; and u i denotes the displacement in the i direction.
The PBCs described in this paper were applied using nodes constraint equations by Python script in ABAQUS. To obtain the RVE mechanical properties, including moduli and strengths, separate PBCs should be considered. Figure 4 presents the program of applying the PBCs for the microscale model. In the first step, the effective moduli of the RVE are calculated in the post-processing stage using σ = Ω σdV V . Here, I 6 is the unit matrix. Details for computing tensor C and the related elastic moduli can refer to Barbero [35]. Then, PBCs for axial and biaxial strength analysis are applied with the Poisson effect. The normal and shear stresses of the RVE are computed from the resultant normal and tangential forces acting on the surfaces divided by the cross section at every sub-step in ABAQUS. Finally, the strength properties are extracted from σ − ε curves in the direction of interest. For the microscale RVE model, PBCs in all directions are applied, while for mesoscale analysis, PBCs in the thickness direction are ignored because of the full-thickness RVE model [36]. One corner for each RVE model is fixed in all directions to remove the rigid motion. Then, PBCs for axial and biaxial strength analysis are applied with the Poisson effect. The normal and shear stresses of the RVE are computed from the resultant normal and tangential forces acting on the surfaces divided by the cross section at every sub-step in ABAQUS. Finally, the strength properties are extracted from σ ε − curves in the direction of interest. For the microscale RVE model, PBCs in all directions are applied, while for mesoscale analysis, PBCs in the thickness direction are ignored because of the full-thickness RVE model [36]. One corner for each RVE model is fixed in all directions to remove the rigid motion.
Distance Function for Generating Mechanical Properties of Microscale Model
The randomness of filament and void distributions will result in different mechanical properties for microscale models. In order to evaluate these parameters under certain void content, fitting functions are implemented to show the distributions of these mechanical properties. However, some of these parameters are positive correlated or negative correlated, see Section 4.1. Thus, it is inappropriate to directly use the fitting functions to generate the parameters as inputs for higher scale models.
To tackle this problem, a distance function is proposed here. After sufficient simulations, all elastic and strength properties are obtained. Thus, the scattered diagrams can be drawn between every two above parameters. As shown in Figure 5, we assume the fitting line between the two parameters is y = a + bx. Next, two points A and B which forms the error line should be found manually. It should be ensured that nearly all the scatter points are below the error line. Define the distance between the two lines at arbitrary point x 1 as D(x 1 ). Then the relationship between x and y can be written as follows: where x and y represent the mechanical parameters, a and b are the linear fitting parameters of the fitting line, and the random function rand(c, d) returns a random value between c and d. D(x) is the distance function, which returns the linear distance value in y-direction between the linear fitting line and the error lines.
Distance Function for Generating Mechanical Properties of Microscale Model
The randomness of filament and void distributions will result in different mechanical properties for microscale models. In order to evaluate these parameters under certain void content, fitting functions are implemented to show the distributions of these mechanical properties. However, some of these parameters are positive correlated or negative correlated, see Section 4.1. Thus, it is inappropriate to directly use the fitting functions to generate the parameters as inputs for higher scale models.
To tackle this problem, a distance function is proposed here. After sufficient simulations, all elastic and strength properties are obtained. Thus, the scattered diagrams can be drawn between every two above parameters. As shown in Figure 5, we assume the fitting line between the two parameters is y a bx = + . Next, two points A and B which forms the error line should be found manually. It should be ensured that nearly all the scatter points are below the error line. Define the distance between the two lines at arbitrary point Then the relationship between x and y can be written as follows: where x and y represent the mechanical parameters, a and b are the linear fitting parameters of the fitting line, and the random function
Failure Initiation Criteria
For the microscale model, the fiber component is regarded as a transversely isotropic material. In actual situations, no damage behavior is observed in transverse directions;
Failure Initiation Criteria
For the microscale model, the fiber component is regarded as a transversely isotropic material. In actual situations, no damage behavior is observed in transverse directions; thus, the maximum stress failure criteria are used to describe the failure behavior of fiber filaments because of their brittle damage behavior, as shown in Equation (3): where X t and X c are failure stresses for fiber tensile and compressive failure modes, respectively, and σ 11 denotes the stress of the fiber component in the fiber longitudinal direction. Matrix materials in fiber bundles (microscale) and between fiber bundles (mesoscale) were modeled as an isotropic material, and the modified Drucker-Prager yield model developed by Lubliner et al. [37] and Lee and Fenves [38] was applied to estimate the failure as follows: where I 1 is the first invariant of the stress tensor; J 2 is the second invariant of the deviatoric stress tensor; σ 1 is the maximum principal stress; x = x+|x| 2 ; α is the pressure-sensitivity parameter and the value 0.13 is adopted in is paper; and β is a function of tensile (σ xt ) and compressive (σ xc ) yield stress and is defined as follows: The sudden stiffness degradation law is used to describe the damage behavior after failure initiation for the fiber filament and matrix components.
For the mesoscale model, the fiber bundles can be regarded as fiber-reinforced UD composites. Thus far, numerous failure criteria have been proposed to describe the failure initiation and damage evolution behavior for UD composites. Puck's criterion, developed by Puck and Schürmann [39], ranks high among the several failure criteria in "World Wide Failure Exercise I and II" and was adopted in this study. Moreover, Gu and Chen [40] extended Puck's inter-fiber failure (IFF) criterion, and the extended criterion was also considered in this study. Puck's theory can be classified into two sets of equations: fiber failure (FF) and IFF.
The refined FF criteria were selected to identify the failure initiation for fiber damage under tensile and compressive loads as follows [41]: where X t and X c are the longitudinal tensile and compressive strengths of UD composites, respectively; E 1 and E 1 f are the longitudinal moduli of UD composites and fiber, respectively; υ 12 and υ 12 f are the Poisson ratios of UD composites and fiber, respectively; and m σ f is a magnification factor for the matrix stress caused by the mismatch in the moduli of matrix and fibers; in this study, m σ f = 1.1 was adopted, as suggested in [42,43]. Puck's IFF criterion is based on the Mohr-Coulomb theory, and failure will occur on the fracture plane where only shear stresses τ nt and τ nl and normal stress σ n exist, as shown in Figure 6. The IFF criterion is classified into two modes: matrix tension (σ n (θ) ≥ 0) and matrix compression (σ n (θ) < 0), which can be introduced as follows: The seven parameters need to be determined before Puck's criterion is implemented. According to Gu and Chen [40], UD composites are divided into three categories, resulting in different formulas to obtain the above parameters. Details can be found in ref. [40].
In Equation (7), IFF f is a function of the potential fracture angle θ, namely the fracture plane. The plane with the highest IFF f is the actual fracture plane. Thus, for an arbitrary stress state and uncertain material properties, the potential fracture angle should be determined before the failure onset is predicted. However, it is extremely difficult to obtain the analytical solution of fracture angle for such a 3D problem. Thus, the semianalytical procedure is utilized. In this study, an extension and combination algorithm of the selective range golden section search (SRGSS) [44] and the semi-analytical algorithm (SAA) [45] was developed to accurately determine the fracture angle. The algorithm implemented in this paper is more accurate than the SRGSS and SAA. However, a detailed description of the algorithm is beyond the scope of this paper and is thus not presented here. Once the fracture angle is obtained, the global maximum value IFF f is calculated, and the matrix damage is initiated when IFF f reaches 1.
In Equation (7), f IFF is a function of the potential fracture angle θ, namely the fracture plane. The plane with the highest f IFF is the actual fracture plane. Thus, for an arbitrary stress state and uncertain material properties, the potential fracture angle should be determined before the failure onset is predicted. However, it is extremely difficult to obtain the analytical solution of fracture angle for such a 3D problem. Thus, the semi-analytical procedure is utilized. In this study, an extension and combination algorithm of the selective range golden section search (SRGSS) [44] and the semi-analytical algorithm (SAA) [45] was developed to accurately determine the fracture angle. The algorithm implemented in this paper is more accurate than the SRGSS and SAA. However, a detailed description of the algorithm is beyond the scope of this paper and is thus not presented here. Once the fracture angle is obtained, the global maximum value f IFF is calculated, and the matrix damage is initiated when f IFF reaches 1.
Once the fiber damage and matrix damage starts, damage evolution processes begin and the equivalent stresses and strains of the fiber and matrix are defined as follows [46]: With: ε n = ε 2 cos 2 θ + ε 3 sin 2 θ + 2ε 23 sin θ cos θ ε nt = (ε 3 − ε 2 ) sin θ cos θ + ε 23 (cos 2 θ − sin 2 θ) ε nl = ε 13 sin θ + ε 13 cos θ With ε 0 eq and ε f eq as the equivalent strains of damage initiation and complete damage, respectively, the equivalent strain ε 0 eq for FF and IFF can be written as: As for the equivalent strain of complete damage for IFF, the energy-based damage law under mixed-mode is built with the element characteristic length to alleviate mesh dependency during material softening: where g n , g nt and g nl are the strain energy density of the corresponding stress component, respectively; G mt(c) is the energy dissipated during damage for transverse tensile and compressive directions, respectively; G 23c and G 12c are in-plane and out-of-plane dissipated energies for the corresponding directions; L is the characteristic length; ξ is a material parameter, set as 2 [46] in this paper. The strain energy density associated with each effective stress component due to complete damage is defined as: where σ 0 j is the stress component on the fracture plane when the damage is initiated, and β j represents the mix-mode ratio and can be expressed as: Thus, the equivalent failure strain ε m f eq can be formulated by substituting Equations (15) and (16) into Equation (14): The IFF damage variable is given by the following expression based on the bilinear model: Similarly, the equivalent strain of complete damage for FF can be obtained through the same procedure.
In the damage evolution process, the generalization of the Duvaut-Lions regularization model [47] is adopted for the mesoscale model to improve the convergence of the numerical calculation and smoothen the stiffness degradation. Thus, the viscous damage variable is defined as follows:
Damage Constitutive Model
In this paper, the damage modes for fiber bundles are mainly classified into four categories: longitudinal tensile damage D f t , longitudinal compressive damage D f c , transverse tensile damage D mt and transverse compressive damage D mc . When the material is damaged, its stiffness is degraded. If the damage occurs only in the y direction (θ f r = 0), the material can still bear load in the z direction. The degraded stiffness matrix can therefore be expressed as follows: where, Thus, the stresses are calculated using the following equation: The Jacobian matrix can be formulated by differentiating Equation (21) as follows: ∂∆σ ∂∆ε Figure 7 displays the typical stress-strain curves of UD composites with 3% void content under different uniaxial loadings. The modulus, strength and energy release rate properties were all obtained from the curves. In this study, 150 randomly distributed microscale models were established. For each non-zero void content (1%, 3% and 5%), three models with randomly distributed voids were established for each microscale model. Therefore, there were a total of 150 random models without voids and 1350 random models with voids. Figure 8 gives the maximum principal stress plots for one microscale model with different void contents under transverse tensile loads. The fiber component is hidden in the figure. The presence of voids does not affect the general stress distribution but will cause discontinuities of the distribution. Meanwhile, a higher stress concentration occurs between two adjacent fiber filaments in the load direction, at which the matrix elements will be damaged compared with the void-free case. Figure 9 presents the counts distribution histograms of E 1 and E 2 of the UD composite with various void contents, and Gaussian distributions are implemented to fit the probability density functions, which will be utilized for generating mechanical properties in the mesoscale models. It should be noted when the Gaussian distribution is implemented in mesoscale model, the generated values should be in the minimum-maximum range, obtained from finite element simulations, in order to avoid the appearance of too large or too small values. The statistical parameters of Gaussian distributions for all mechanical properties are listed in Appendix A (Table A1). be noted when the Gaussian distribution is implemented in mesoscale model, the generated values should be in the minimum-maximum range, obtained from finite element simulations, in order to avoid the appearance of too large or too small values. The statistical parameters of Gaussian distributions for all mechanical properties are listed in Appendix A (Table A1). be noted when the Gaussian distribution is implemented in mesoscale model, the generated values should be in the minimum-maximum range, obtained from finite element simulations, in order to avoid the appearance of too large or too small values. The statistical parameters of Gaussian distributions for all mechanical properties are listed in Appendix A (Table A1). Even though all the mechanical parameters follow the Gaussian distributions, they cannot be directly utilized as inputs for the mesoscale analysis because there are correlations among these parameters. For instance, Figure 10 displays the relations between (G 12 , E 2 ), (G 23 , E 2 ), σ yt , E 2 , σ yc , σ yt , G yt IC , σ yt and G 23 IC , τ 23s when the void content is 3%. In Figure 10a, the blue scattered bubbles denote the distributions of transverse modulus and longitudinal shear modulus. The solid line denotes the linear fitting curve, and the dashed error line is used to measure the deviation, which is also reflected in the bubble color intensity. The further away from the fitting curve, the lighter the bubble color. Upward and downward trends can be observed; therefore, the mechanical parameters cannot be directly and separately obtained from Gaussian distributions in the case of unreasonable situations such as UD composites with high transverse tensile strength and low transverse compressive strength. Even though all the mechanical parameters follow the Gaussian distributions, they cannot be directly utilized as inputs for the mesoscale analysis because there are correlations among these parameters. For instance, Figure 10 Figure 10a, the blue scattered bubbles denote the distributions of transverse modulus and longitudinal shear modulus. The solid line denotes the linear fitting curve, and the dashed error line is used to measure the deviation, which is also reflected in the bubble color intensity. The further away from the fitting curve, the lighter the bubble color. Upward and downward trends can be observed; therefore, the mechanical parameters cannot be directly and separately obtained from Gaussian distributions in the case of unreasonable situations such as UD composites with high transverse tensile strength and low transverse compressive strength.
Microscale Model
Owing to the above reasons, Gaussian distributions were utilized for parts of the mechanical properties 1 G as basic inputs for mesoscale analysis. Other mechanical properties were calculated based on the above parameters using Equation (2). To predict the mechanical properties for unknown microscale models, Python scripts were utilized to generate all the microscale properties using the abovementioned method. Figure 11a,b displays the distributions of calculated results and generated results. Figure 11c gives the box plots of the distributions of mechanical properties. From the figures, the distributions of the generated mechanical properties agree well with the numerical results. Hence, the method proposed in this paper is accurate enough for predicting the mechanical properties considering randomly distributed fibers and void defects.
In the next scale analysis, the UD composites will be regarded as transverse isotropic materials, and Puck's criterion will be adopted to predict the failure onset. Thus, biaxial loading simulations for UD composites were conducted to verify the applicability of Puck's criterion for predicting the strength properties under uncertain models and complex loading conditions. Figure 12 Owing to the above reasons, Gaussian distributions were utilized for parts of the mechanical properties E 1 , E 2 , σ xt , σ xc , G xt IC and G xc IC as basic inputs for mesoscale analysis. Other mechanical properties were calculated based on the above parameters using Equation (2).
To predict the mechanical properties for unknown microscale models, Python scripts were utilized to generate all the microscale properties using the abovementioned method. Figure 11a,b displays the distributions of calculated results and generated results. Figure 11c gives the box plots of the distributions of mechanical properties. From the figures, the distributions of the generated mechanical properties agree well with the numerical results. Hence, the method proposed in this paper is accurate enough for predicting the mechanical properties considering randomly distributed fibers and void defects. In the next scale analysis, the UD composites will be regarded as transverse isotropic materials, and Puck's criterion will be adopted to predict the failure onset. Thus, biaxial loading simulations for UD composites were conducted to verify the applicability of Puck's criterion for predicting the strength properties under uncertain models and complex loading conditions. Figure 12 displays the failure envelopes in σ 2 − τ 12 stress space under different void contents. In this case, 10 random models were selected for each void content case, and 12 stress ratios were considered. The numerical results agree well with the analytical results, verifying the applicability of Puck's criterion.
Mesoscale Model
The elastic properties and strength properties of 3DOWC mesoscale models in both warp and weft directions and in-plane shear properties were analyzed considering the randomly distributed voids in fiber bundles and between bundles. For the mesoscale models, the bundles were homogenized from the microscale model. Stochastic properties
Mesoscale Model
The elastic properties and strength properties of 3DOWC mesoscale models in both warp and weft directions and in-plane shear properties were analyzed considering the randomly distributed voids in fiber bundles and between bundles. For the mesoscale models, the bundles were homogenized from the microscale model. Stochastic properties of bundles were generated based on the microscale models discussed above. Considering that the voids were randomly distributed in fiber bundles and that the fiber distribution is different at different places, the properties of fiber bundle elements were different from each other. In this study, we assumed that the distributions of voids and fibers differed among the bundle elements, which results in different mechanical properties. Thus, each bundle element had its own sections, and each section had unique mechanical properties generated from the constructed microscale models in the given void content case. Figure 13 shows the random material RVE model. Different color represents different materials or sections. A total of 50 groups of models were established for each load case to characterize the stochasticity of the mechanical properties. The above process was conducted using Python script by modifying the input files generated from ABAQUS.
contents between calculated and theoretical results.
Mesoscale Model
The elastic properties and strength properties of 3DOWC mesoscale warp and weft directions and in-plane shear properties were analyzed c randomly distributed voids in fiber bundles and between bundles. For models, the bundles were homogenized from the microscale model. Stoch of bundles were generated based on the microscale models discussed abov that the voids were randomly distributed in fiber bundles and that the fib is different at different places, the properties of fiber bundle elements were each other. In this study, we assumed that the distributions of voids and among the bundle elements, which results in different mechanical proper bundle element had its own sections, and each section had unique mechan generated from the constructed microscale models in the given void conte 13 shows the random material RVE model. Different color represents diff or sections. A total of 50 groups of models were established for each load terize the stochasticity of the mechanical properties. The above process using Python script by modifying the input files generated from ABAQUS Figure 13. Representative volume element of mesoscale model with random mater generated from microscale outputs. Figure 14 depicts the typical stress-strain curves of 3DOWCs with n different loading directions. From the image, the initial slopes of tensile an stress-strain curves were consistent; therefore, the elastic moduli were obt tensile stress-strain curves. Figure 15 presents the statistical results for ela Figure 13. Representative volume element of mesoscale model with random material properties generated from microscale outputs. Figure 14 depicts the typical stress-strain curves of 3DOWCs with no voids under different loading directions. From the image, the initial slopes of tensile and compressive stress-strain curves were consistent; therefore, the elastic moduli were obtained from the tensile stress-strain curves. Figure 15 presents the statistical results for elastic properties and strength properties in the warp, weft and in-plane shear directions. The void contents in fiber bundles were set as 0%, 1%, 3% and 5%, which are the same as those of the microscale models. Void contents of 0% and 2% were considered between bundles. The fiber volume content remained constant in all models. From the figures, the modulus and strength properties all exhibited a decreasing trend as the void content increased. Figure 15a-c illustrates the relationships between the modulus and void content. Even though the presence of voids reduced the moduli in the warp, weft and in-plane shear directions, the decreasing amplitude was unnoticeable. Moreover, the variance of the moduli was not significant. Taking the void-free case as an example, the minimum and maximum moduli in the warp direction were 33,679.79 MPa and 33,687.56 MPa, respectively. In the microscale model, the standard deviation of the longitudinal modulus was small, while that of the transverse modulus was relatively large. The void had nearly no influence on the longitudinal modulus, while the transverse modulus reduced by 9.35% when the void content was increased to 5%. In the mesoscale model, the 3DOWCs had longitudinal fiber tows in both the warp and weft directions, and the longitudinal fiber tows were the main load-bearing components. Different from the results for modulus, the models with randomly distributed element properties showed significant variation in strength.
With the uneven materials distributed in fiber bundles, some elements had high tensile or compressive strengths, while others had lower ones. Low-strength properties resulted in the early failure of the elements. Evaluation using the Puck's criterion revealed that the elements exhibited reduced stiffness, lost the load-bearing capacity and featured stress concentration, which may further propagate the damage process. However, the elements with low strength properties may not be located at the key positions or in the load directions; if so, the overall strengths of the 3DOWCs will be relatively high. Moreover, the variations in mechanical properties, including moduli and strengths, were generally remarkable when voids were randomly distributed in the matrix between bundles because of the inclusion of one more uncertainty source.
the presence of voids reduced the moduli in the warp, weft and in-plane shear directions, the decreasing amplitude was unnoticeable. Moreover, the variance of the moduli was not significant. Taking the void-free case as an example, the minimum and maximum moduli in the warp direction were 33679.79 MPa and 33687.56 MPa, respectively. In the microscale model, the standard deviation of the longitudinal modulus was small, while that of the transverse modulus was relatively large. The void had nearly no influence on the longitudinal modulus, while the transverse modulus reduced by 9.35% when the void content was increased to 5%. In the mesoscale model, the 3DOWCs had longitudinal fiber tows in both the warp and weft directions, and the longitudinal fiber tows were the main load-bearing components. Different from the results for modulus, the models with randomly distributed element properties showed significant variation in strength. With the uneven materials distributed in fiber bundles, some elements had high tensile or compressive strengths, while others had lower ones. Low-strength properties resulted in the early failure of the elements. Evaluation using the Puck's criterion revealed that the elements exhibited reduced stiffness, lost the load-bearing capacity and featured stress concentration, which may further propagate the damage process. However, the elements with low strength properties may not be located at the key positions or in the load directions; if so, the overall strengths of the 3DOWCs will be relatively high. Moreover, the variations in mechanical properties, including moduli and strengths, were generally remarkable when voids were randomly distributed in the matrix between bundles because of the inclusion of one more uncertainty source. From Figure 15, the randomly distributed voids in fiber bundles and between bundles caused different degrees of reduction in modulus and strength. To further evaluate the influence of voids on these mechanical properties, the reductions were calculated using the mean values. The total content of fiber bundles in the mesoscale model was 55.56%; thus, the void content in the microscale model should be translated to the mesoscale model by multiplying the void content by 0.5556. Assume V 1 is the void content in fiber bundle, then 0.5556V 1 denotes the void content in the fiber bundles relative to the whole 3DOWC model. V 2 means the void content between fiber bundles and it is relative to the whole composite. For the convenience of subsequent descriptions, solid, dashed, black, red and blue lines or symbols are used to describe the various cases. The solid lines and dashed lines denote the reduction and reduction efficiency of the corresponding mechanical parameter at different void content cases. Black symbols describe the V 2 = 0 cases, and the baseline is (V 1 , V 2 ) = (0, 0). It denotes there is no void between fiber bundles and only voids inside the bundles are considered. Red symbols describe the V 2 = 2% cases, and the baseline is (V 1 , V 2 ) = (0, 2%). Blue symbols denote the V 2 = 2% cases, but the baseline is (V 1 , V 2 ) = (0, 0). The difference between red and blue symbols is that they have different baselines. In Figure 16a-c, the square and circle solid lines almost coincide, which demonstrates that the voids between bundles did not influence the effect of voids in bundles on the elastic properties; that is, the modulus decreased to the same extent as the increase in the void content inside the fiber bundle for different V 2 cases. The decrease rate increased linearly with the increase in the void content inside the fiber bundles. In Figure 16, the blue solid line is almost parallel to the red and black solid line; this indicates that the reduction in modulus caused by the increase in the void content between fiber bundles was the same when the void content inside the fiber bundle was constant. The reduction efficiency is the corresponding mechanical parameter reduced per unit void content compared with the baseline. The void content here represents the total voids in 3DOWC including voids inside and between fiber bundles. Even though the pores between and inside the fiber bundles exhibited a linear reduction in the modulus, the efficiencies of the modulus reduction for both void contents were not the same. As illustrated in the figure, the black and red dashed lines do not differ much for a certain V 2 , but the blue dashed line is significantly smaller than the black and red dashed lines, indicating that the effect of voids between fiber bundles on the 3DOWC modulus was smaller than the effect of pores inside fiber bundles with the same content on the 3DOWC modulus. Moreover, the blue dashed line tends to rise more slowly. The effect of the two types of voids on the strength properties was much more complicated than that on the modulus properties. For the tensile strength, σ xt increased with V 1 at a certain V 2 , but the increase tended to decrease after 3% void content. As seen from the dashed line, the black line had the highest reduction efficiency, and the reduction in strength due to voids became less significant with the appearance of microscale void defects, indicating that the voids inside the fiber bundle had a greater effect on the 3DOWC tensile strength, and the internal defects were more likely to cause a reduction in the 3DOWC strength. However, for σ xc , the blue curve has the highest reduction efficiency, which indicates that the voids between fiber bundles had a greater effect on the overall compressive strength under compression in the warp direction. Similar conclusions can be reached for σ yc . According to [48,49], the tensile strength of 3D woven composites is strongly influenced by the fiber bundle properties, while the compressive properties are largely controlled by the matrix properties. In this study, the internal voids of fiber bundles caused the reduction of the properties of 3DOWC fiber bundle components, while the mesoscale voids caused the reduction of the properties of 3DOWC matrix components. These findings are consistent with the results in the literature [48,49] that intra-fiber bundle voids have a greater effect on tensile and shear properties, while the mesoscale defects have a greater effect on compressive strength properties. In the current study, for shear strength properties, the decrease in τ s12 with increasing V 1 was the same for the same V 2 and was approximately linear in the given range of void contents. At the same V 2 , the τ s12 reduction efficiency tended to decrease with increasing V 1 and eventually tended to level off. At V 2 = 2%, the overall τ s12 reduction efficiency was significantly lower than that at V 2 = 0, but the reduction efficiency featured a growing trend and eventually leveled off, which is inconsistent with the trend of the black line, indicating that coupling effects that affect the shear strength performance existed between the two kinds of voids.
In general, the void content inside the fiber bundle and between the fiber bundles both exhibited a roughly linear decreasing trend on the modulus, and the effect of the voids inside fiber bundles was higher than that of the voids between the fiber bundles. The two defects seemed to be independent of each other, with no significant coupling effect. Regarding strength properties, coupling relationships existed between the effects caused by the two defects, and the presence of voids between fiber bundles changed the pattern of the effect of voids inside fiber bundles on strength properties. Moreover, the internal voids had a greater effect on tensile and shear properties, but the compressive strength was more influenced by the mesoscale voids. cases, but the baseline is ( ) The difference between red and blue symbols is that they have different baselines. In Figure 16a-c, the square and circle solid lines almost coincide, which demonstrates that the voids between bundles did not influence the effect of voids in bundles on the elastic properties; that is, the modulus decreased to the same extent as the increase in the void content inside the fiber bundle for different 2 V cases. The decrease rate increased linearly with the increase in the void content inside the fiber bundles. In Figure 16, the blue solid line is almost parallel to the red
Conclusions
In this paper, a multiscale analysis is proposed to evaluate the influence of microand meso-void defects on the elastic properties and strength properties of 3DOWCs. Randomly distributed fiber filaments and voids were considered in the microscale model. The outputs from lower-scale models were utilized as inputs for higher-scale models. Nonuniform material properties were assigned to the fiber bundle elements of 3DOWCs. Sta-
Conclusions
In this paper, a multiscale analysis is proposed to evaluate the influence of micro-and meso-void defects on the elastic properties and strength properties of 3DOWCs. Randomly distributed fiber filaments and voids were considered in the microscale model. The outputs from lower-scale models were utilized as inputs for higher-scale models. Non-uniform material properties were assigned to the fiber bundle elements of 3DOWCs. Statistical results are presented to provide an insight into the effect of voids on the mechanical performance of the composites.
In the microscale analysis, 150 randomly distributed fiber filaments models were established, and three models with randomly distributed voids were generated for each void-free model and each void content. Mean values and standard deviations were calculated to provide intuitive results of mechanical properties. The relationships between the two properties were obtained by plotting the bubble figures between any two parameters.
The mean values and dispersity of the material properties could be considered to describe the relationships through linear fitting and the generation of random errors. The proposed method ensures the reasonableness of newly generated material properties. The generated parameters agree well with the original calculated results.
In the microscale analysis, a voxel finite element model for 3DOWC was established. Generated material properties using the previous method were assigned for the fiber bundle properties as uncertainty sources from microscale models. Another uncertainty source was the void defects between bundles. Extended Puck's criterion was implemented to predict the failure initiation, and an energy-based damage evolution model was adopted for the progressive damage processes. The elastic properties and strength properties in the warp/weft tension/compression and in-plane shear directions were calculated. The results indicate the following: (1) the void defects will reduce the elastic properties, but the variations are not sensitive to the uncertainties; (2) the strength properties are sensitive to the uncertainties caused by the non-uniform distributed material properties of fiber bundles; (3) the elastic properties roughly linearly decreased with the void contents, including voids in and between fiber bundles; (4) coupling effects that affect the strength properties occurred between the two kinds of void defects; and (5) tensile and shear strengths were sensitive to the voids inside the bundles, while the compressive strength was sensitive to the voids between bundles.
The present research can provide the design basis for evaluating the influence of two kinds of void defects on 3DOWC mechanical properties. In future work, a similar procedure will be conducted for macroscale models, such as 3D woven structures, considering not only the present uncertainties but also the bundle section uncertainties. | 11,914 | 2021-09-01T00:00:00.000 | [
"Engineering"
] |
mARC vs. IMRT radiotherapy of the prostate with flat and flattening-filter-free beam energies
Background There as yet exists no systematic planning study investigating the novel mARC rotational radiotherapy technique, which is conceptually different from VMAT. We therefore present a planning study for prostate cancer, comparing mARC with IMRT treatment at the same linear accelerator equipped with flat and flattening-filter-free (FFF) photon energies. Methods We retrospectively re-contoured and re-planned treatment plans for 10 consecutive prostate cancer patients. Plans were created for a Siemens Artiste linear accelerator with flat 6 MV and FFF 7 MV photons, using the Prowess Panther treatment planning system. mARC and IMRT plans were compared with each other considering indices for plan quality and dose to organs at risk. All plans were exported to the machine and irradiated while measuring scattered dose by thermoluminescent dosimeters placed on an anthropomorphic phantom. Treatment times were also measured and compared. Results All plans were found acceptable for treatment. There was no marked preference for either technique or energy from the point of view of target coverage and dose to organs at risk. Scattered dose was significantly decreased by the use of FFF energies. While mARC and IMRT plans were of very similar overall quality, treatment time could be markedly decreased both by the use of mARC and FFF energy. Conclusions Highly conformal treatment plans could be created both by the use of flat 6 MV and FFF 7 MV energy, using IMRT or mARC. For all practical purposes, the FFF 7 MV energy and mARC plans are acceptable for treatment, a combination of both allowing a drastic reduction in treatment time from over 5 minutes to about half this value.
Background
The mARC ("modulated arc") technique has recently been introduced as a rotational intensity-modulated radiation therapy (IMRT) technique for Siemens linear accelerators [1,2]. Although the dosimetric accuracy has been assessed by various methods and first patient treatment has been reported [3,4], no systematic planning studies have yet been carried out to assess the quality of mARC treatment as compared with IMRT delivered at the same linear accelerator.
First applications have centered on prostate treatment for mARC [4,5], which appears to be an ideal indication as it benefits from inverse planning due to the proximity of organs at risk (OAR), yet only requires one gantry rotation to achieve a highly conformal dose distribution (compare, e.g., [6]). We therefore present a planning study for prostate cancer with mARC for a Siemens Artiste machine equipped with flat 6 MV and flatteningfilter-free (FFF) 7 MV energies. The combination of FFF beams with mARC treatment is of particular interest since this offers the greatest potential for a reduction in treatment time.
Although the mARC technique is a Siemens Artiste specific modality and primarily interesting for Siemens customers, we hope that the comparison of the flat and FFF beam lines, both for mARC and IMRT treatment, will be useful for a wide range of readers.
Patients and methods
For the present study we chose 10 consecutive patients diagnosed with intermediate and high-grade prostate cancer that had previously been treated in our department. Due to the retrospective nature of this study, no ethics board approval was required. Patient characteristics are shown in Table 1. Computed tomography (CT) datasets had been acquired on a dedicated scanner (Brilliance CT -Big Bore Oncology, Philips, Koninklijke) with 3 mm spacing between slides. For all patients additional MRI data of the small pelvis was present for coregistration. For reasons of standardization contouring was completely redone by one radiation oncologist according to the Radiation Therapy Oncology Group (RTOG) Trial 0126 [7] contouring guidelines. Gross tumor volume (GTV), clinical target volume (CTV), planning target volume (PTV) and normal tissues were outlined on all CT slices in which the structures existed.
The GTV encompassed the prostate gland, the CTV encompassed the GTV plus the proximal bilateral seminal vesicles (only the first 1.0 cm of the seminal vesicle tissue adjacent to the prostate). The PTV was generated by adding a surrounding margin of 7 mm to the CTV. Organs at risk were outlined according to the Male RTOG Normal Pelvis Atlas [8]. A dose of 76 Gy was prescribed to the PTV, for plan evaluation we used our in-house DVH (dose-volume histogram) criteria as shown in Table 2 that are mainly based on the data published by Quantitative Analysis Of Normal Tissue Effects In The Clinic (QUANTEC) [9,10].
At our institution, the mARC technique is available at one Siemens Artiste linear accelerator with flat 6 MV and FFF 7 MV energy, equipped with 160 multi-leaf collimator (MLC, leaf width 5 mm). The two energies are particularly well suited for comparative planning as the slightly increased nominal energy of the FFF 7 MV beam compensates for the spectral softening caused by the removal of the flattening filter, so that the percent depth dose of the FFF 7 MV beam matches the flat 6 MV beam closely [11]. Therefore, any differences in plan quality will reflect mainly the difference in beam profiles.
For mARC plans, one complete (360°) gantry rotation was used with optimization points spaced 10°and arclet length 4°. IMRT plans consisted of 11 beams (gantry angles 205°, 235°, 265°, 295°, 330°, 0°, 30°, 65°, 95°, 125°, 155°), with 3 segments per beam. In a prior test, it was checked if plans were improved by allowing 5 segments per beam (the "gold standard" for prostate IMRT at our institution). Since no significant difference was observed, we here limit our analysis to IMRT plans with a total of 33 segments or less, which is nearly the same number of degrees of freedom as for the mARC plans (36 optimization points). The collimator angle was 90°for all plans, also based on previous tests.
Planning is performed in the Prowess Panther V5.10r2 treatment planning system (TPS) on a 3 mm dose grid using the collapsed cone dose algorithm. IMRT and mARC inversion are closely similar, both using a simulated annealing approach for direct aperture optimization. Based on a set of inversion objectives, the optimization can be carried out interactively by adjusting the DVH constraints and weights until the desired shape is reached. Criteria for optimization are listed in Table 2.
Plan quality was compared for the four scenarios (IMRT vs. mARC, 6 MV vs. FFF 7 MV) based on the conformity index (CI), the homogeneity index (HI), V(50Gy) for bladder and rectum, and V(40Gy) of the posterior rectal wall. The conformity index is defined by [12]: where TV denotes the volume of the PTV, PIV is the volume enclosed in the prescribed isodose (95%), and TV PIV is the volume of the PTV surrounded by the prescribed isodose (95%). The homogeneity index is calculated as where D PTV (x %) means the dose received by x % of the PTV volume. All plans were exported to the machine for treatment and irradiated on an Alderson anthropomorphic phantom positioned with the prostate at the approximate location of the isocenter. Thermo-luminescent dosimeters (Harshaw TLD 100H) were placed at three positions on the surface of the phantom (navel, manubrium sterni, right eye lens) to measure the scattered dose outside the treatment field. At each position, three TLDs were placed in close proximity and the measurements averaged. The average standard deviation of the three measurements was below 5% for the measurements at the navel, and between 5 and 10% for the lower dose values measured at the sternum and lens. During irradiation, treatment times were measured for comparison.
Plans were compared pair-wise for mARC vs. IMRT and for 6 MV vs. 7 MV energy, considering the measures of quality defined above, monitor units, treatment time and scattered dose. The Shapiro-Wilk test was performed to check for normality. In cases where this could not be refuted, the t-test for paired data was applied, otherwise the Wilcoxon signed-rank test was used. A value of p = 0.05 or below was considered to be statistically significant.
Results
All plans (DVH and dose distributions) were reviewed by at least one senior radiotherapist and were all deemed acceptable for treatment. All plans satisfied the criteria that at least 95% of the planning target volume received 95% of the prescribed dose of 76 Gy, and all organs at risk remained below the imposed limits (Table 2).
A visual comparison of the four plan scenarios for each patient did not show a marked preference for either technique or energy (example dose distributions and DVH shown in Figures 1 and 2). DVHs of the four scenarios are closely similar for each patient. Relying on the quality measures (Table 3), the comparison of IMRT with mARC plans for the same energy never yielded a significant difference. For comparison of 6 MV with FFF 7 MV, homogeneity index and conformity index were significantly better for 6 MV than 7 MV both for mARC and IMRT plans, separately. Considering dose to OAR, there was no significance for comparison of IMRT plans (6 MV vs. 7 MV), but 6 MV mARC plans performed better than 7 MV mARC. However, even in those cases where a statistical significance was shown, the differences were very small and hardly of clinical significance: For the bladder, a median of 20.8% of the volume received a dose of 50 Gy in the 6 MV mARC plans, whereas this was 22.4% for 7 MV mARC. The rectum V50 increases from 16.8% for 6 MV mARC to 17.5% for 7 MV mARC. As all these values remain far below the allowed limits, they would not have caused rejection of the plans for clinical treatment. For all these values, the variation between different patients was considerably larger than the variation from one plan scenario to the next, which can be seen by the overlapping ranges of values (Table 3) even in cases where a small statistical significance was found. The mARC plans required slightly more monitor units than the IMRT plans, but the difference was negligible (not significant except for 6 MV IMRT vs. mARC, with 402 and 414 MU, respectively, p = 0.047). However, treatment time could be markedly decreased both by the use of mARC and FFF energy. In moving from 6 MV to FFF 7 MV, about one minute treatment time was saved both in IMRT and mARC, respectively. In moving from IMRT (11 beams) to mARC, treatment times were reduced by about two minutes both for 6 MV and FFF 7 MV, respectively. By combining mARC treatment with FFF 7 MV energy, the treatment time could effectively be reduced by half (median 2:27 min for FFF mARC versus 5:21 min for 6 MV IMRT).
Scattered dose is significantly decreased by the use of FFF energies, which is physically reasonable since head scatter is reduced in the absence of a flattening filter. For identical plan scenarios, the FFF 7 MV energy produces only about 58-85% of the scattered dose measured for flat 6 MV, with strongest reduction at larger distance from the treatment field (lens). A difference between IMRT and mARC plans cannot be observed for 6 MV; for 7 MV, the out-of-field dose is slightly higher for mARC (up to 108% of the 7 MV IMRT plan, but still much lower than for the 6 MV plans).
Monitor units
In this study, monitor units were not observed to differ significantly between mARC and IMRT, and for 6 MV vs. FFF 7 MV, respectively. This changes markedly if the isocenter is displaced from the centre of the PTV. In this case, the monitor units required by the 6 MV plans do not change systematicallysometimes increasing, sometimes decreasing, but remaining within 30 MU of the original value. For the FFF 7 MV plans, however, the value increases strongly, sometimes exceeding 500 MU. This is plausible, because if the isocenter moves to the side or even outside of the PTV, the dose intensity decreases with distance from the central axis, creating a constant dose gradient in the target. Additional monitor units are hence required to add dose at greater distance from the axis. This effect does not occur for the flat intensity profile of the 6 MV beam. If the isocenter is placed in the centre of the PTV, the dose profile of the FFF 7 MV beam peaks inside the PTV and only deviates from a flat profile at distances of several centimetres from the axis. It appears that the prostate PTV is sufficiently small to exhibit no notable difference in monitor units between 6 MV and 7 MV plans if the isocenter is centrally placed. If the PTV extended farther in the craniocaudal direction, it might be expected that the dose fall-off to the sides of the central axisalthough symmetricalwould also require more monitor units for the FFF beam: this was indeed observed for large PTV [13,14]. Therefore, the position of the isocenter is more critical for the FFF energies.
Treatment times
Treatment times depend on the time for gantry and MLC movement on the one hand and on the time required to irradiate the monitor units on the other hand. As it is more time-consuming to stop the gantry at precise angles rather than just move it through an angular range, mARC saves treatment time in comparison with step-and-shoot plans that would use the same number of gantry angles (i.e., 36 with one segment per beam). The use of FFF beam energies saves time as the higher dose rate allows faster irradiation of the ca. 400 MU. We therefore assess the dependence of treatment time on the number of MU for the four scenarios ( Figure 3). In all cases, a linear fit can be made, with parameters given in Table 4. Based on the above considerations, the y-axis intercept should be the same for both IMRT plans and for both mARC plans, respectively, since it is mainly determined by the irradiation geometry. The slope of the curves should be similar for the 6 MV plans and the 7 MV plans, respectively, since it depends on the available dose rate.
Indeed, the y-axis intercepts of the IMRT plans for different energies differ less than one standard error; the same applies to the mARC plans. The slope of the curves for 6 MV plans also agree within less than one standard error, with an approximate value of 0.27 s/MUthis corresponds to an average dose rate of 219 MU/min. For the 7 MV plans, the slope (again within less than one standard error) of ca. 0.122 s/MU corresponds to an average dose rate of 492 MU/min. These values are reasonablethe maximum available dose rate for 6 MV is 300 MU/min. For FFF 7 MV, 2000 MU/min are theoretically available. However, for small segments/arclets with low MU, the linac firmware automatically reduces the dose rate for better linearity (for about 400 MU distributed over 33 segments or 36 arclets, the linac will nearly always operate at a reduced dose rate since most segments receive only about 10 MU). We therefore find that both the FFF IMRT and mARC plans operate at an average dose rate considerably below the maximum available; but still about twice as fast as the flat energy.
Considering the treatment time, the question arises what the technical limit for the mARC operation may be. Given the results above, it might be imagined that treatment times could be further improved if the plans were irradiated with the maximum available dose rates (300 MU/min for 6 MV and 2000 MU/min for FFF 7 MV). In addition, the spacing of optimization points and distance of MLC leaf travel between successive arclets will influence treatment time. Current work at our institution is aiming to find the optimum mARC scenarios for fast irradiation, and the technical constraints imposed on treatment time (Dzierma et al., in prep.). From the technical/legal point of view, gantry rotation is restrained to be no faster than 360°per minute, but only plan scenarios with hardly any MLC movement and few monitor units per arclet appear to be capable of achieving this speed. Future work will show where the practical limits for treatment times are, and treatment planning systems will then be evaluated by how closely they can approach this technical limit.
Comparison with other studies
Only few studies have investigated the plan properties and treatment times associated with mARC planning [1,4,5]. Their observed treatment times for prostate cancer are of the same order of magnitude as those reported here, again for comparable plan qualities between mARC and IMRT treatments.
Several past studies have evaluated plan quality for VMAT/RapidArc treatment as compared with IMRT (for an overview, see [15]). While details differ between these studiesdue on the one hand to variations in the considered prostate PTV contours, and on the other hand to different planning approachesit was generally found that VMAT treatment offers at least as good quality plans as IMRT treatment, sometimes even with better sparing of organs at risk. Depending on whether one or two arcs were chosen for VMAT treatment and whether constant or variable dose rate irradiation was allowed, plan quality measures and OAR DVH values sometimes favoured IMRT, sometimes VMAT treatment [16][17][18][19][20][21][22][23][24]; however, all studies observed a marked reduction in treatment time by VMAT treatment, and generally a drastic reduction in monitor units.
Our study explicitly created the scenarios in such a way to offer approximately the same number of degrees of freedom to both IMRT and mARC optimization processes, which may explain the similar quality outcome. Besides, it should be pointed out that the studies comparing VMAT with IMRT all relied on IMRT plans with fewer fieldsmost chose 5 or 7 gantry angles for the IMRT, so it can be expected that our IMRT plans with 11 beams might yield better plans, hence closing the gap to rotational modulated treatment. Still, it has been observed that the plan quality can be improved by including more gantry angles even for the same number of segments [25], which is not surprising since it offers more freedom to the optimization from a geometrical point of view and might allow for better mARC plan quality even with a similar number of free parameters for the optimization. However, it also appears plausible that this effect should saturate for plans with many gantry anglesthe more beams are Figure 2 Example dose-volume histogram (same patient as in Figure 1).
used, the smaller would be the improvement by any additional beam, with an optimum number of ca. 10-20 beams depending on the target size [26]. In fact, our in-house standards have evolved over the past few years from IMRT plans with 5-7 beams to 11-13 beams for prostate cancer. More beams are never used since they have not shown to yield any further benefit, which may explain why the mARC plan quality is not notably improved over the IMRT plans.
Considering the monitor units, we do not observe a marked decrease for mARC vs. IMRT plans, whereas most studies considering VMAT vs. IMRT treatment do. The monitor units for the mARC plans are comparable to those found by, e.g., [6,16,[19][20][21]23,24,27] all of whom except for Ost et al. [27] had considerably higher IMRT monitor units. It cannot be decided whether the low amount of MU also for IMRT is indebted to the planning scenario (more degrees of freedom offered by 11 beam angles) or the planning system (the direct aperture optimization algorithm used in Prowess Panther has been observed to require fewer MUs for IMRT in comparison with other planning systems [28]). The low number of monitor units for our IMRT plans also entails relatively fast IMRT treatment (5:21 minutes for flat and 4:31 minutes for FFF beams), which is at the lower limit of what is observed in other studies (4-6 min [6,17,19,20,22,23,27,29,30]; 8 min [21,31]). mARC treatment times of 3:35 min for flat and 2:27 min for FFF treatment plans are slower than times reported for VMAT treatment with single arcs (1-2 minutes [6,17,[19][20][21]27]; FFF: 60-90 sec [14,30,32]), but within the range of times found for two arcs (3-5 min [6,20,21,23]).
Conclusion
Although small differences in plan quality exist, none of these were found to be clinically significant. Highly conformal treatment plans could be created both by the use of flat 6 MV and FFF 7 MV energy, using IMRT or mARC. For all practical purposes, the FFF 7 MV energy and mARC plans are acceptable for treatment, allowing a drastic reduction in treatment time from over 5 minutes to about half this value. As expected on physical grounds and based on past studies, scattered dose is reduced by the FFF 7 MV energy.
Consent
Written informed consent was obtained from the patient for the publication of this report and any accompanying images. | 4,816 | 2014-11-26T00:00:00.000 | [
"Medicine",
"Physics"
] |
Quasinormal modes of Reissner–Nordström–AdS: the approach to extremality
We consider the quasinormal spectrum of scalar and axial perturbations of the Reissner–Nordström–AdS black hole as the horizon approaches extremality. By considering a foliation of the black hole by spacelike surfaces which intersect the future horizon we implement numerical methods which are well behaved up to and including the extremal limit and which admit initial data which is nontrivial at the horizon. As extremality is approached we observe a transition whereby the least damped mode ceases to be oscillatory in time, and the late time signal changes qualitatively as a consequence.
Introduction
Numerical [1,2] and observational [3] evidence shows that a black hole spacetime will, in response to a perturbation, produce radiation at (complex) frequencies which are characteristic of the black hole.These frequencies are the quasinormal frequencies, and to each such frequency is associated a quasinormal mode -a solution of a linear equation on the black hole background, satisfying suitable boundary conditions at any horizons and (if relevant) at null infinity [4][5][6].
In recent years, a satisfactory mathematical understanding of the quasinormal modes of subextremal de Sitter black hole spacetimes has developed, starting with results for Schwarzschild-de Sitter [7,8], culminating in a general theory for de Sitter black holes [9] including the full subextremal range of Kerr-de Sitter black holes [10,11].In the subextremal anti-de Sitter setting, analogous results have been shown [12,13].For a thorough overview see [14], and for an explicit worked example using this approach, see [15].
The majority of the works cited in the previous paragraph impose regularity at the future horizon(s) in order to characterise the quasinormal modes, an approach which in the physics literature goes back to Schmidt [16].It can be shown that time-harmonic solutions to the linearised equations which extend smoothly 1 across the future horizons exist only for a discrete set of complex frequencies, which can be identified with the quasinormal frequencies.Regularity at the horizon plays the role of 'in/outgoing' boundary conditions in more traditional treatments.This approach breaks down when the black hole horizon is extremal or the spacetime is asymptotically flat.(For the purposes of our discussion, an asymptotically flat end may be thought of as the extremal limit of a subextremal cosmological horizon, and when we refer to a subextremal spacetime we implicitly assume that it has no asymptotically flat ends).
The behaviour of the quasinormal spectrum as a spacetime approaches extremality has been the topic of significant interest in the physics literature [19][20][21][22][23][24][25][26].In particular, going back at least to Detweiler [27], two distinct behaviours have been observed for the quasinormal frequencies as the surface gravity, κ approaches zero.Firstly, it appears that a generic feature of near-extremal spacetimes is the existence of a sequence of 'zero damped modes' with damping rates approximately nκ, n = 1, 2, 3, . .., which accumulate at some given frequency in the limit κ → 0. On the other hand, in certain regions of the complex frequency plane, the quasinormal frequencies are largely unaffected by the extremal limit -the frequencies settle down to limiting values without accumulating (called 'damped modes').
In [28] the second author, together with Gajic, considered the Reissner-Nordström-de Sitter black hole and showed that in the limit where both horizons become extremal there is a sector in the complex plane in which, away from the origin, only the damped mode behaviour is observed.In [29] Joykutty established the existence of purely damped modes in several situations involving a horizon approaching extremality, including that of the Reissner-Nordström-de Sitter black hole with either horizon becoming extremal (see also [30,31] for a closely related result).
In this paper, we aim to study numerically the quasinormal spectrum of the Reissner-Nordströmanti-de Sitter black hole in the approach to extremality.Our approach is motivated by that taken in [32,28], and involves working in coordinates which are regular at the horizon (see [33] for an alternative approach).We choose a foliation by spacelike surfaces as this most easily can be adapted to the case of multiple horizons, but a null (or mixed null/spacelike) slicing could also be considered and should give similar results.The quasinormal spectrum of the Reissner-Nordström-anti-de Sitter black hole has been studied in [34], but the parameter ranges considered in that paper do not include a neighbourhood of the extremal case.We extend their results for scalar and axial perturbations all the way to extremality.
We work both in the time and frequency domains, enabling cross-checking of results between independent computations.In the time domain our choice of slicing permits us to simulate time evolution up to and including the horizon, without introducing any artificial boundaries.In the frequency domain we use a modified Leaver method to determine quasinormal frequencies (this was demonstrated to correctly locate the quasinormal frequencies in [32]).In both cases our methods are designed such that there is no degeneration as κ → 0 and so that we are able to consider initial data which is non-trivial at the future event horizon.
A particularly interesting feature we observe for both the conformal wave equation and the perturbations is a threshold in the black hole parameter space at which the qualitative behaviour of the fields changes.This happens when the surface gravity is sufficiently small that the slowest decaying purely damped mode becomes the dominant late-time behaviour.On one side of this threshold the late time behaviour is oscillatory, but closer to extremality the dominant behaviour becomes pure exponential decay.In the extremal limit, this ever-slower exponential decay becomes the polynomial decay expected for an extremal black hole [35].
After this brief introduction, in Section 2 we briefly consider a scalar toy equation which can be solved explicitly in terms of special functions in order to verify our numerical methods, before moving on to study the conformal wave equation and the scalar and axial perturbations for the Reissner-Nordström-anti-de Sitter black hole as the horizon approaches extremality in Section 3. Our results are summarised in Section 4
Explicitly solvable toy-model
We consider the following equation which models the behaviour of a wave exterior to an extremal AdS black hole (throughout the whole article we use the geometrised unit system c = G = 1): where (t, x, θ) ∈ [0, T ) × [0, 1] × S 1 .The principle part of this operator agrees with that of the wave operator for the spacetime with metric g given by The causal diagram of this spacetime is shown in Fig. 1.It enjoys the presence of an extremal horizon at x = 0, as can be seen from the fact that g(∂ t , ∂ t ) behaves like x 2 in its vicinity.Let us point out that the time coordinate variable is chosen in such a way that the surfaces of constant time t penetrate this horizon and the metric extends smoothly across x = 0.In this section we investigate solutions to Eq. ( 1) satisfying the Dirichlet condition ψ(t, 1, θ) = 0 at x = 1.At x = 0 we do not require a boundary condition, owing to the presence of the horizon.We assume that initial conditions ψ(0, x, θ) and ∂ t ψ(0, x, θ) are specified.
According to the prescription of [32,28], in order to find the quasinormal frequencies of Eq. ( 1) we should seek solutions to the equation of the form ψ(t, x, θ) = e st+imθ u(x) which satisfy the boundary condition at x = 1 and which have improved regularity (relative to a generic solution) at x = 0.Here we have made use of the rotational symmetry to restrict attention to a single angular mode.The function u can be seen to satisfy the following ODE Introducing a function v given by the left hand side of Eq. ( 3) becomes the modified Bessel equation Thus, the general solution of (3) can be written as where a, b ∈ C and I λ (z), K λ (z) are the modified Bessel functions of the first and second kind respectively [36]. -10 Figure 2: Locations of the lowest solutions to Eq. ( 5) in the complex plane.The dashed line represents the branch cut.
In order to discuss the regularity condition that we impose at x = 0, recall that a smooth function Note that for fixed k, increasing σ imposes a more stringent regularity condition on f .By a careful analysis of the asymptotic series of the modified Bessel functions using the approach of [36, §7.31], together with [37,Prop 8] and [33, eqn (A.1)] it is possible to show that for fixed s with | arg s| < π: In particular, this implies that we should make the choice b = 0 in (4) to single out the more regular branch of solutions (in the Gevrey sense) at x = 0.In order that our solution also satisfies the boundary condition at x = 1 we require Thus the quasinormal frequencies are precisely the solutions to Eq. ( 5).Since K λ (z) is an entire, even, function of λ, the branch points at ±im are removable, however, a branch point at s = 0 will be present in general.
The solutions of Eq. ( 5) for various m are given in Table 1 and presented in Fig. 2. As well as the locations of the quasinormal frequencies, we also obtain an explicit formula for the corresponding quasinormal modes: where s is a solution to Eq. ( 5) and a is any constant.
The presented results can be confronted with the numerical approach.Equation (1) can be solved with the use of the pseudospectral scheme [38].Since at x = 1 we impose the Dirichlet condition, to control it we employ the Gauss-Radau quadratures [38].The solutions that we are looking for are decaying exponentially with time in the quasinormal regime.Hence, to improve the precision of the scheme one can evolve an auxiliary function ψ(t, x, θ) = e αt ψ(t, x, θ) with α > 0 being a suitably chosen constant.The accuracy of this method can be controlled by the energy One can use the Hardy inequality to show that it is positive.Due to the absence of term ∂ t ψ in Eq.
(1) and the coefficient next to ∂ τ x being constant, this energy changes in time only via the leakage through the horizon.This change is given by a simple expression involving only an integration over the angular variable As can be seen in Fig. 3 for larger values of m one can easily observe the quasinormal regime.The quasinormal frequencies obtained via fitting agree with the lowest values from Table 1.Note that the initial data is chosen to be non-zero at the horizon -a key feature of the approach of [32,28] is that such initial data is permissible.
An alternative approach to finding the quasinormal modes frequencies for our toy-model is by the Leaver method [39,40].Let us fix some angular number m and again look for solutions of the form ψ(t, x, θ) = e st e imθ u(x).Then u satisfies Eq. ( 3) and we can expand it into a Taylor series around The coefficients H k must satisfy the following recurrence relation for k ≥ 2. Since we impose the Dirichlet condition at x = 1, we need to set H 0 = 0.The regularity condition at x = 0 suggests that H k should converge to zero as k → ∞.It gives us a quantization condition on s.One can obtain the appropriate values of s using the method of continued fractions, but in our case it is enough to assume that for some sufficiently large value of n one has H n = 0 (in none of the cases considered in this article the continued fraction method led to significantly faster convergence).It leads to a polynomial, whose zeroes include approximations to s we seek and many superfluous values.To identify the correct values of s one can change n and see which roots converge.It is presented in Fig. 4. From the analytical solution to the toy model we know that proper quasinormal frequencies have non-zero imaginary part (red dots in the plot).Quasinormal frequencies obtained with this method agree with the ones resulting from Eq. ( 5).
3 Reissner-Nordström-anti-de Sitter black hole Now we would like to apply the same methods to study the quasinormal modes in the Reissner-Nordström-anti-de Sitter (RNAdS) spacetime.Let us start by investigating the wave operator in this spacetime.Our first step consists of finding a suitable coordinate system in which it becomes similar to the one from the toy model.In spherical cooridnates (t, r, θ, ϕ) the line element of the RNAdS spacetime is given by where dΩ 2 is a line element on a two-dimensional unit sphere.The values of M and Q are interpreted as the mass and the charge of the black hole, respectively, while ℓ gives a specific length-scale connected with the cosmological constant Λ via Λ = −3/ℓ 2 .In the generic case this spacetime has two spherical horizons called the Cauchy horizon (of a radius r C ) and the event horizon (of a radius r H ).However, , where H n is given by the recurrence relation (6).Blue dots show spurious solutions that are purely real, while the red ones have non-trivial imaginary part and converge to the quasinormal frequencies.
if Q = 0, the latter vanishes and we get a Schwarzschild-anti-de Sitter spacetime.On the other hand, for Q large enough these horizons coincide, i.e., r H = r C = (3M − 9M 2 − 8Q 2 )/2 and then their position does not depend on the cosmological constant.This situation is called an extremal case, in contrast to the regular case in which r C < r H .In the following we want to cover both regular and extremal cases so we need a framework that will suitably handle both possibilites.For this purpose it is convenient to introduce the following quantities.Let ρ = r/r H be a new radial variable and t H = t/r H a new temporal variable.We also define the parameters σ = r C /r H , and λ = r 2 H /ℓ 2 .Then Eq. ( 7) can be written as where In this parametrisation σ = 1 gives the extremal case, σ = 0 represents the black hole with no charge, and λ = 0 is the case with no cosmological constant.One can easily switch between parameters (M, Q, ℓ) and (r H , σ, λ) using the following relations Let us point out that r 2 H takes a role of a scale factor in Eq. ( 8) so from now on we assume r H = 1.For other values of r H one needs to perform elementary rescalings to recover appropriate results (as we do when plotting Fig. 9 in order to compare our results with [34]).
Next, we introduce a new time coordinate τ defined by dt H = dτ + h ′ (ρ) dρ.The function h here is chosen in such a way that the surfaces of constant τ cross the horizon, the new coordinate τ behaves like t H as ρ → ∞, and the resulting wave operator behaves sufficiently well near the horizon.The last condition in fact means that the combination f (ρ)h ′ (ρ) 2 − f (ρ) −1 , being the coefficient next to ∂ 2 τ in the wave operator, does not blow up as ρ → 1.This point is a little bit more subtle since we want to cover both regular (where f behaves like (ρ − 1) near ρ = 1) and extremal (where this behaviour is quadratic) cases with a single framework.It turns out that these conditions are satisfied by a function Finally, we compactify the spatial domain by introducing new coordinate x given by where a is some fixed nonnegative number and its choice will be discussed later.As a result, in these new coordinates the spacetime has a horizon at x = 0 and an infinity is compactified to x = 1, similarly to our toy-model.The metric then is given by where ρ inside the functions f and h needs to be replaced by ρ = (1 + ax)/(1 − x).
For the sake of simplicity let us focus for a moment on the conformally invariant equation □ g ψ − 1 6 R g ψ = 0 [41].The wave operator □ g resulting from our metric (10) contains a non-zero ∂ τ derivative term.It can be removed by employing the conformal invariance: one can check that the conformal transformation (g, ψ) → (Ω 2 g, Ω −1 ψ) with leads to □ Ω 2 g with no ∂ τ terms.The final step that needs to be done to get a problem similar to Eq. ( 1) is to fold the spatial derivatives ∂ x and ∂ 2 x into a single expression.It can be achieved by simply dividing the whole equation □ Ω 2 g ψ − 1 6 R Ω 2 g ψ = 0 by an appropriate integrating factor: Then, the resulting equation can be written as The dependence on the angular dimensions can be factored out with the help of the spherical harmonics Y l,m eventually leading to The coefficients for general parameters σ and λ are rather complicated so we do not provide them explicitly.Instead, we note that for every λ > 0 and 0 ≤ σ ≤ 1 we have a τ τ < 0 and a τ x is a negative constant.In regular cases (σ < 1) the coefficient a xx (x) behaves like a linear function near x = 0, while for extremal charge (σ = 1) this behaviour is quadratic, similarly to the toy-model ( 1).Since the structure of the obtained equation is the same as of the toy-model, we can use the same numerical schemes to evolve it in time.For Eq. ( 11) one can define an energy 2 Thanks to the lack of ∂ τ ψ term and due to a τ x being a constant, E is monotone decreasing as for the toy-model Results of the numerical simulations for various parameters σ are presented in Fig. 5. Generically the evolution can be divided into three parts: initial behaviour, quasinormal oscillations (which get more distinctive with larger angular numbers), and a monotone decrease.However, the last stage exhibits a power-law decay only in the extremal case, as for the toy-model.For regular black holes the decay is exponential or even absent.To better understand these differences, we calculate quasinormal frequencies with the Leaver method.Again, the structure of Eq. ( 11) lets us use the methods developed in the previous chapter also in this case.However, for this approach to be applicable, one needs to carefully choose the value of a in Eq. ( 9).In a generic case f (ρ), when expressed via x, has four roots: one at x = 0, one real negative root, and two conjugated complex roots.For our method to converge one needs to choose a in such a way that the three latter zeroes lie outside the circle |x − 1| = 1 in the complex plane.In general the convergence is faster the further the zeroes are from this circle.
Figure 6 shows how solutions to H n = 0 converge for a = 2, l = 0, λ = 1, and various σ.The blue dots denote real solutions (purely damped modes), while the red ones are complex solutions (oscillatory modes).For no charge (Schwarzschild-anti-de Sitter spacetime) only the latter are present.When the charge is non-zero, the purely damped modes appear.As σ increases, they get closer to zero but their convergence becomes worse.Finally, in the limit of the extremal black hole we observe similar situation as for the toy model (Fig. 4): the real solutions become spurious.
Figure 7 shows how real parts of the oscillatory modes and purely damped modes depend on σ.One can observe that at some point (for λ = 1, l = 2 it is σ ≈ 0.7) the real part of the lowest decaying mode starts dominating over the lowest oscillatory mode.This transition is reflected in Fig. 5 by the emergence of the exponential tail.As σ grows further, this tail decays.Finally, for σ = 1 all the purely damped modes vanish (they converge to zero) and the tail is described by the power law.This is consistent with the same behaviour that has been proven for the Reissner-Nordström-de Sitter black hole by Joykutty [29].Dependence of the oscillatory modes frequencies on σ is less severe and is presented in Fig. 8.The same approach can be employed also to studies of perturbations of the spacetime.In a typical framework [42] they are described by the generalised eigenproblem where V is the suitable spherically symmetric potential, depending on the type of the perturbations one studies (for its form in case of the RNAdS spacetime see [34], let us emphasize here that in RNAdS spacetime with Q ̸ = 0 the electromagnetic and gravitational perturbations are mixed and can be resolved to their axial and polar parts), and By r * we denote here the tortoise coordinate that for metric (8) with r H = 1 can be defined by This eigenproblem can be obtained from the dynamical equation □ψ + U ψ = 0, where □ is a wave operator for the metric (8) with r H = 1 in (t H , ρ, θ, ϕ) coordinates and .
The equation □ψ + U ψ = 0 can be easily written in coordinates (τ, x, θ, ϕ).Hence, let us consider a wave equation with a general potential U : with g denoting metric (10).As we have already pointed out, the wave operator □ g contains a non-zero ∂ τ derivative term.Before we were able to get rid of it using the conformal invariance, however, for general potential U equation ( 12) does not possess this feature.Luckily, in 3 + 1 dimensions the wave operator has a useful property that under the conformal transformation g = Ω 2 g, ψ = Ω −1 ψ it behaves like [41] □ As a result, we can use the same factor Ω as for the conformally invariant equation to get rid of the ∂ τ ψ term and the resulting operator will be the same differential operator plus an additional potential term.It leads us to the equivalent wave equation This equation is no longer regular since Ω −3 (□ g Ω) behaves like (1 − x) −2 near x = 1.However, it does not pose any problem since we are interested in solutions that satisfy Dirichlet condition at this end.Assuming that the solution vanishes at x = 1 at least linearly together with an additional factor coming from the conformal transformation makes sure that the considered problem is sufficiently regular.Equation ( 13) can be studied for the whole range of charges up to the extremal case with the same methods as discussed before.As an example, in Fig. 9 we show real and imaginary parts of the lowest quasinormal frequencies of the scalar (with l = 0) and axial (with l = 2) perturbations (let us point out that due to a different convention real parts of QNFs obtained by us correspond to imaginary parts in their approach, and vice versa).These results regard spacetime with the AdS radius ℓ = 1, and various masses and charges set in such a way that the event horizon is localised at r H = 5.The plots are parametrised by the ratio of the charge Q and the extremal charge Q ext = 10 √ 19 in this setting (in the extremal case M ext = 255).It lets us compare the results of our approach with the previous results from [34], where the authors were considering an analogous problem for Q ≤ 0.55 Q ext , and we find good agreement in this range.In particular, our results agree with relevant numerical values provided in Tables II and III of [34].In the same work, the authors propose approximating quasinormal frequencies for small charges by a simple polynomial relation.Tables IV and V contain fitted parameters of these polynomials.They strongly depend on the range of the data used to obtain the fit, nevertheless, they agree with our results within reasonable limits.
We expect the same approach to also work for polar perturbations, however, in this case potential U in Eq. ( 12) introduces additional poles in the complex plane.Then coordinate x defined in Eq. ( 9) is not sufficient to move these additional poles outside of the disk of convergence required for the Leaver method, independently of the value of a.This feat can be done by considering more complicated compactifications, however, their proper choice seems to heavily depend on the values of l, λ, and σ.Due to these technical difficulties we decided to focus only on scalar and axial perturbations in this article.
Conclusions
The main goal of this work was to investigate the behaviour of the quasinormal modes of Reissner-Nordström-AdS black hole as the horizon approaches extremality.We pursued it by using spacelike surfaces intersecting the future horizon.At first we tested this approach on the explicitly solvable toy-model for the waves propagating outside of an extremal black hole.Then we successfully used it to reproduce and extend previous results regarding scalar and axial perturbations of RNAdS black holes [34].Thanks to the appropriate choice of the slicing we were able to study black holes with any charge, including the extremal case, within a single framework.We observed several interesting phenomena for strongly charged black holes, such as a qualitative change of the least damped mode behaviour for some critical charge value or vanishing of the purely damped modes as black hole becomes extremal.
In Section 3 we have pointed out some difficulties arising in the case of polar perturbations in RNAdS spacetime.Overcoming them and comparing the obtained results with previous works [34] would constitute a pretty straightforward extension of our work.Another potential future prospect involves employing similar approach to asymptotically flat black hole spacetimes.Then, by conformal transformation infinity can be identified with an extremal horizon and the methods analogous to the presented above shall be applicable.
Figure 4 :
Figure4: Log-log plot of absolute values of real parts of solutions to equation H n = 0 for various values of n, where H n is given by the recurrence relation(6).Blue dots show spurious solutions that are purely real, while the red ones have non-trivial imaginary part and converge to the quasinormal frequencies.
Figure 8 :Figure 9 :
Figure 8: Oscillatory quasinormal frequencies for conformally invariant equation in Reissner-Nordström-anti-de Sitter spacetime with Λ = −1 and various masses M and charges Q.The dashed lines indicate extremal cases.The solid lines bifurcating from them have constant mass M and are parametrised by decreasing charge. | 6,203.4 | 2023-08-30T00:00:00.000 | [
"Mathematics"
] |
The Budyko framework beyond stationarity
We sincerely thank Fernando Jaramillo for his positive, extensive and helpful review. We overall agree with his general comments and we will revise the text accordingly. In combination with his comprehensive list of specific comments, the manuscript will significantly benefit from his review. We will first address the general comments of Fernando Jaramillo, followed by a response to the main specific comments that are not related to the general comments and are not too technical or related to typos and grammatical issues.
General comments
1) It needs to state from the beginning that it only deals with temporal climatic intra-annual non-stationarity, not with landscape non-stationarity due to water use, land use and land cover change.The authors imply in the beginning that the term "stationarity" only relates to the climate component, but this is not true (See Milly et al., 2008, Science).Landscape nonstationarity is a fact and has been shown to have major implications for E/P at basin scales, I mention some references.I know the authors know this, but they should mention from the beginning what type of non-stationarity they are dealing with.They should mention from the beginning that their advance does not deal with the understanding of landscape nonstationarity.
This is an important point and we thank the reviewer for bringing this up.We will clarify in the revised version of the manuscript that we are primarily looking at temporal nonstationarity.(since the respective storage term in the water balance equation is a time derivative).However, we also like to mention that the additional parameter y0 is simply representing the amount of additional water (besides P) that is available to E (see Eq. 4b).In our understanding this does not necessarily exclude other processes (such as e.g.landscape non-stationarity) besides storage changes.We would like to see further investigations on this topic in future assessments.
2) The methodology is also rather cryptic and confusing.I think they are missing a complete methods section where they explain how they estimate their yo and K parameters at the global scale, boot strapping, resampling, calibration, validation, etc.If they want other scientists to use their model or "framework", they should clearly state what is the procedure to derive yo and K. Importantly, they also need to be more precise in introduction and conclusion on what ways this study differentiates from the former works (e.g., empirical Milly, 1993, Potter and Zhang, 2007, Zhang, 2008;and stochastic Zanardo et al, WWR, 2012) that also tackle climatic intra-annual non-stationarity.In other words why is their work "the robust, theoretical incorporation into the Budyko framework that is missing".Right now, it is not very clear.
Thanks!We will include additional, in-depth explanations on the applied calibration and validation techniques.We will also revise the main text accordingly.We also see the need of putting our work in context to previous assessments on similar topics and will enhance the respective part in the introduction and add a small paragraph to the conclusions.
3) Their exploration of their model (sensitivity analysis) is very complete and well developed, and the model appears to be robust based on the high correlations at the global scale and for the period 1990-2000.But in order to know really the advance that their models implies for the Budyko framework, it is necessary to show in what way it predicts E and E/P better than Fu 1981or Zhang 2001,2004and even better than Budyko's (1956, 1974).They indeed show good correlations between predicted and observed E at the global scale, but how much better than the previous models?For instance, Zhang's model 2004 was able to explain 89% of the variance, how much better is this one then?They should repeat their global analysis with the previous models they mention and compare.I would redo similar Figures 8 and 9 but with the former empirical models and compare.
We will update figures 8 and 9 in the revised version of the manuscript.But in this context, it is also important to argue from a physical perspective.The original approach might perform rather well on paper, but it does not at all represent the fact and the underlying physics of an exceedance of the original suppply limit.Strictly speaking, the original model is not at all applicable at monthly time scales, since the physically possible exceedance of the supply limit lies outside the physical limits of the model.The new two-parameter approach does in lieu correctly represent the points overshooting the original supply limit, but is still subject to huge data spread lowering its performance.Hence, the new two-parameter model does not necessarily need to perform significantly better than the original model, since it is in fact physically correct at monthly time scales (thus doing the right thing for the right reason).The two-parameter approach is further flexible enough (when adjusting y0 accordingly) to also represent more complex seasonal hydroclimatological patterns.
Suggestion for the title: The Budyko framework beyond climatic stationarity
Thanks for the nice suggestion!We will change the title accordingly.
-Line 12-I don't think this study is a new framework but rather a good improvement or advance to Budyko's framework, or can this work compare to Budyko's framework to be also called a framework?But of course, this is the authors' choice.I think we scientists are now drowning in so many frameworks...This is a good point and also refers to some comments of the other reviewer.We will change the wording in the revised manuscript and will introduce the new formulation rather as a modification of Fu's equation than as a new framework.
-Page 6801 line 9 Water use should be included in the list of factors affecting the scatter in the Budyko space.Position in Budyko space is a cause, but also a consequence, of movement in Budyko Space.Movement in Budyko and hence non-stationarity is also attributed to human changes in landscape conditions by land use and water use or by changes in water phase (landscape changes;Jaramillo and Destouni, GRL, 2014 ), and this should be clearly stated here in this paragraph.Land use change is mentioned by the authors by including Donohue 2007, Zhang 2001,Li et al. 2013, however water use and most water phase changes are neglected.Hydropower and irrigation CAN affect, and rather substantially, the position of a basin in Budyko space, see as example Destouni et al. Nature Climate Change, 2013).
Thanks!We are well aware of the related and interesting work of the reviewer and will change the text accordingly to discuss issues of land-scape nonstationarity.See also our response to the first comment.
-Zanardo et al., WRR, 2012 deals closely with what the authors deal here, but from a stochastic point of view and should be included in the introduction, I think.
We will add a sentence on the findings of Zanardo et al. (2012) to the revised version of the manuscript.
-Page 6801.Line 23.Since the definition of "steady conditions" or "stationarity" is an important part of this study, the terms should be defined appropriately in the beginning.What do you mean by these terms?I assume the authors relate stationarity to steady state conditions."Stationarity" is mentioned in the title of the manuscript but nowhere else in the text.Since steady-conditions instead are mentioned in several parts of the manuscript, I assume they mean stationarity as "steady conditions", i.e. no change in the storage term of the water budget.The authors relate "stationarity" to that dealt in the manuscript, i.e., that of the intra-annual climatic conditions that may change water storage at the annual scale.However, again, the stationarity assumption is affected also "by water infrastructure, channel modifications, drainage works, and land-cover and land-use change" (Milly, 2008, Science) and Review Fernando Jaramillo changes in water phase (Jaramillo and Destouni, GRL, 2014).This last work shows that changes in the landscape were responsible for nonstationarity in up to 74% of the basins of a global study once intraannual climatic nonstationarity was coarsely ruled out.Since this is not explored in their manuscript, I would appreciate if the authors could be more specific and mention the type of stationarity that their framework is dealing with, i.e. changes in water storage due to intra-annual changes in climatic conditions (Ep and P) as they mention in the first two lines of the Conclusions.
Thanks!As already mentioned in response to previous comments, we will revise the text to make clear what kind of stationarity is investigated throughout the manuscript.We will also more thoroughly explain the definition of steady-state conditions.As mentioned by the reviewer, our main intention is to explicitly represent the storage term in the water budget equation.
Page 6802, line 15 -Isn't the relationship found by the authors Eq. 9 also empirical?Please specify the difference between "empirical" and "analytical", since this is a main justification of this work.
An empirical relationship (or evidence) is usually derived directly from data or observations.Here we use very simple phenomenological assumptions, from which a mathematical relationship is derived analytically.We will clarify this in the revised manuscript.This is based on the assumption that E<=Ep and hence the minimum value of (P-E)/Ep is -1 (if P=0 and E=Ep).
Line 11 -Again, in relation to my recent question, it should state if additions of water due to changes in the landscape conditions of water phase (melting glaciers, thawing permafrost, closing stomata by rising CO2 concentrations or systematic anthropogenic changes linked to water use, are accounted in this boundary condition y0.Or if these additions/subtractions of water are rather represented by changes in the mathematical constant k, following Zhang's w.Or if they are not accounted for at all.This is again related to the response to the first general comment.The parameter y0 is a measure of additional water that is besides P available to E. This does in our understanding not exclude other storage components.However, investigating controls and drivers of the two parameters is a complex task and clearly beyond the scope of this study.
Figure 2 and 4 and text. There is something strange with the sign of y0 along the manuscript! In the
We apologize for this unfortunate mistake!The parameter y0 is defined between 0 and 1.The figure captions are wrong and will be corrected accordingly.
Line 7 to line 13 -Let's say I want to replicate the results.This explanation for the derivation of y0 and k for the global grid requires more wording because as it is now is rather cryptic.Forgive me if I understood incorrectly but since you use several combinations of P, E and Ep for each grid cell to minimize and thus estimate y 0 (Fig. 8 a and c), why do you then need the dataset values of P, E and Ep at all?Also, please explain in more detail the resampling, bootstrapping and least-square fit.Maybe a flow diagram of the procedure would be helpful.Also the difference between panel a and c or between b and d in Fig. 8 should be better explained.
See also our comment to the second general comment.We will include an appendix with an in-depth explanation of the methods.We further need dataset values of P, E and Ep, since y0 is minimized at each grid cell according to the given data.We basically identify the particular month (with the respective P, E and Ep values) that minimizes eq. 10 and maximizes y0.
Line 23 and 25-This procedure also requires more information, it is difficult to understand what was done here, it is cryptic: "anomaly correlations between "detrended" time series with removed annual cycles???? Explain please.
We apologize for the strange wording!We basically compute anomalies (i) by detrending and (ii) by removing the mean annual cycle of both the modeled and observed time series of E. The anomaly correlation is the correlation between the obtained anomaly time series.We will revise the text to clarify this Line 28, 29.I do not know how to see that "...the annual cycle is well represented by the model" by looking at the four panels.
We will explain this in more detail in the revised version of the manuscript.The high correlation between both the modeled and the observed time series is basically an indication of a similar seasonal pattern (and thus an indication for a reasonable representation of the seasonal cycle in the modeled time series). | 2,804.2 | 2014-05-01T00:00:00.000 | [
"Economics"
] |
Logistics UAV Air Route Network Capacity Evaluation Method Based on Traffic Flow Allocation
A bi-level optimization model for the logistics UAV air route network capacity evaluation based on traffic flow allocation is designed in order to meet the future trend of large-scale and normalized operation of logistics UAVs. The maximum sorties of logistics UAVs that can be served by the air route network are the upper-bound model objective, namely, the maximum flow of the logistics UAV air route network. The impedance function is constructed by considering safety and efficiency factors, and the lower-bound model objective function with the minimum logistics UAV air route network impedance value. An improved particle swarm optimization(PSO) algorithm is combined with the method of the successive algorithm(MSA) for solving the bi-level optimization model. To verify the effectiveness of the proposed model and algorithm, a simplified logistics UAV air route network is built. The results show that the proposed algorithm obtains reliable results after 26 iterations, and most segments capacity utilization rate is more than 70%. Parametric analysis of safe separation and algorithm population size shows that the capacity of logistics UAV air route network decreases with the increase of safe separation and the decreasing trend is gradually slowed down, and the optimal algorithm population size corresponding to different safe separations also varies. Based on the study described above, a logistics UAV air route network based on actual geographic information data is constructed, and the experimental results demonstrate that the suggested technique could be used to a specific scale of logistics UAV route network capacity evaluation and had validity.
I. INTRODUCTION
The worldwide civil aviation sector has had an unprecedented impact since the onset of COVID-19, with people's travel habits altering, flight numbers plummeting, and several carriers declaring bankruptcy. Entering the post-epidemic period, countries' prevention efforts are uneven, and although the civil aviation industry is increasingly recovering, it is difficult to return to its peak, but it creates development opportunities for Urban Air Mobility(UAM) and Unmanned Aerial Vehicle(UAV) industry. The United States, Europe, Japan, The associate editor coordinating the review of this manuscript and approving it for publication was Emre Koyuncu . and South Korea have successively proposed urban lowaltitude airspace management and UAV operation control planning strategies. National Aeronautics and Space Administration(NASA) spearheads Advanced Air Mobility(AAM) program to integrate air cabs, UAV delivery, and other advanced aircraft concepts into the national airspace system [1]. The European Single Sky program jointly related companies to release the U-Space design blueprint will provide a new intelligent service program for the future of largescale UAV hybrid operations. Among the industries related to UAM and UAVs, UAVs logistics is one of the most focused research fields. Amazon analyzed the weight of parcels, and the statistics showed that about 86% of the parcels can meet the requirements of UAV logistics capacity [2]. NASA proposed that UAV logistics is expected to undertake 500 million orders of parcel delivery services in 2030 [3]. Other multinational enterprises with global influence, such as DHL and ZipLine, also have made breakthroughs in logistics UAV manufacturing and pilot applications, and continue to explore and promote the UAV logistics scale development. In terms of core technologies, Kuru et al. developed an intelligent delivery platform for logistics UAVs, which compares multiple delivery methods [4].
The development of UAV logistics and other emerging industries has put new demands on low-altitude airspace management. Airspace capacity evaluation, as a core element of air traffic management, is an important prerequisite for the rational allocation of airspace resources. Airspace capacity evaluation originated in the 1940s [5]. The main methods commonly used today are mathematical model-based methods [6], controller workload-based methods [7], computer simulation-based methods [8], and data-driven methods [9]. Cheung et al. proposed a Mixed Integer Programming (MIP) airport scheduling optimization model considering runway capacity to address the problem of capacity-demand imbalance during peak hours, and verified the advantages of dynamic airport capacity and dynamic runway configuration over fixed capacity [10]. Mohamed et al. developed a dynamic neural network model based on the workload of air traffic controllers for terminal area capacity evaluation and adjusted the model parameters using Neural Partial Differentiation (NPD) equation [11]. Wang considered two factors: controller workload and acceptable delay level, summarized the process of airspace capacity evaluation based on computer simulation and elaborated the simulation principle, and reproduced the real airspace environment to verify the effectiveness of the proposed method [12].
However, low-altitude airspace is complex and variable, and there are limitations in applying existing typical airspace capacity evaluation methods to low-altitude airspace. Since the research on low-altitude airspace capacity evaluation technology is still in the early stage, a unified definition of low-altitude airspace capacity has not yet been formed. The Cal Unmanned Lab at the University of California believes that intelligent aircraft such as UAVs will be the main operating vehicle in the future low-altitude airspace, and with the emergence of Unmanned Aircraft Systems(UAS), the capacity of low-altitude airspace can be defined as the maximum number of aircraft that can be accommodated in the airspace under an acceptable level of conflict to ensure operational safety [13]. On the other hand, the change in airspace capacity can also be characterized by a sudden change in a specific index, when the number of aircraft in the airspace exceeds its capacity due to a sharp change in a specific index caused by the addition of an aircraft to the airspace [14]. The low-altitude airspace environment is complex and volatile, and flying according to pre-planned flight paths can effectively improve the supervisability and safety of the low-altitude airspace operating environment, Nanyang Technological University (NT) of Singapore proposed a low altitude capacity evaluation method based on the flight path network and defined the low altitude flight path network capacity as the maximum number of aircraft that can be carried by the whole air route network in specific airspace at a specified time [15].
In recent years, research scholars have gradually carried out research on the low-altitude airspace capacity evaluation method according to the operational characteristics and airspace management rules. The exploration of low-altitude airspace capacity evaluation techniques at the University of California, Berkeley, USA sprang from the estimation of air traffic complexity in an unmanned environment. In 2016, Bulusu et al. argued that in the future low-altitude environment, UAVs will present an organized free flight state, and proposed two air traffic complexity measures, Conflict Cluster Size and Normalized Time Spent in Conflict (NTSC), respectively, to build a simulation platform using the San Francisco Bay Area as a prototype, and based on simulation experiments, it was concluded that the future San Francisco Bay Area can carry an average of 100,000 daily UAV flights [13]. On this basis, motivated by the study of the impact of UAS on the operation of lowaltitude airspace, Bulusu et al. clarified the definition of low-altitude airspace capacity and realized the evaluation of low-altitude airspace capacity using mathematical methods [14]. Since then, Bulusu and his team have gradually established the research route of 'constructing specific metrics -simulating experiments -locking threshold mutationdetermining airspace capacity', measuring safety in terms of Total Loss of Flight per Flight Hour and performance in terms of Change in Direct Operating Cost, and comparing airspace capacity under two modes of UAV operation: cooperative and non-cooperative [14], [16]. Subsequently, a throughputbased capacity evaluation method for low-altitude airspace is proposed, in which three conflict detection and deconfliction algorithms and two minimum spacing requirements are evaluated by simulating unmanned aerial vehicle traffic in the airspace, considering the variability of traffic flow, and analyzing their impact on throughput. The results show that the throughput tends to decrease before the system security decreases, and this index has reference meaning for low altitude airspace capacity evaluation [18]. These studies take the future airspace operation characteristics as the background, weaken the consideration of controller factors, more consideration of the aircraft's conflict avoidance ability, and is not limited by the airspace structure, oriented to the future free flight setting, with a certain degree of foresight.
Sunil et al. at Delft University predict a large number of small UAVs to operate in urban airspace in the future. In this context, the Metropolis project, in which Sunil is involved, investigates the impact of airspace structure on the capacity, complexity, safety, and efficiency of high-density operational airspace. Therefore, four concepts of airspace structure are proposed, including full mix, layers, zones, and tubes. Simulation experiments show that the layers structure has the best performance considering capacity, safety, and efficiency, and can better adapt to the future high-density urban air traffic operation [2]. This study proposes four airspace structures with different degrees of freedom, and the simulations verify the performance characteristics of different types of airspace. The simulations consider different traffic densities, tidal characteristics of urban traffic, and the effects of stochastic factors such as wind and rogue aircrafts.
Cho et al. analyzed the urban airspace capacity in terms of available airspace identification using two types of geofences: keep-out and keep-in. Define inaccessible boundaries for UAVs around static obstacles by using keep-out geofences, and identify available airspace using keep-out geofences in combination with an alpha-shaped method. The simulation results show that the available airspace identified by keep-in is the upper limit of keep-out. Meanwhile, geofence parameters should be decided according to the complexity of the geospatial and the purpose of the flight in practical applications, rather than relying on fixed values [19]. According to the above studies, a mature architecture has not yet been formed for the study of low-altitude airspace capacity evaluation, but it follows a combination of mathematical analysis models and simulation validation. This research combines geo-fencing technology for UAVs and airspace identification, utilizing the relevant technologies already applied to UAVs to make it more compatible with real-world conditions.
For the complex airspace environmental conditions at low altitude, most of the current research uses computer simulation methods. Most of the current simulation platforms for capacity evaluation lack the verification of actual operation data, and the parameters are mostly hypothetical, which cannot expose the airspace capacity regulation in the real low-altitude airspace in a three-dimensional environment, and have certain limitations. Based on the research presented above, this paper proposes a method for evaluating the capacity of logistics UAV air route networks based on traffic flow distribution, which includes optimization modeling and heuristic algorithms. This approach considers the safety and efficiency of logistics UAVs during operation, and can be applied to different sizes of logistics UAV air route networks.
Logistics UAV air route network capacity is defined as the maximum number of UAV sorties that the air route network can serve, namely, the maximum flow of the logistics UAV air route network. The main contributions are as follows: 1) A bi-level optimization model for evaluating the capacity of logistics UAV air route network based on traffic assignment is established, considering efficiency and safety factors. 2) A bi-level optimization model solving algorithm mixed improved particle swarm optimization algorithm with the method of the successive algorithm is designed. 3) The simplified logistics UAV air route network is established, and several simulation experiments were carried out for parameters comparisons such as safe separation and algorithm population size, and the experimental results are analyzed. 4) Based on the real geographic information data, the logistics UAV air route network is built to verify the effectiveness of the proposed model and algorithm.
II. METHODOLOGIES
A bi-level logistics UAV capacity evaluation optimization model is developed in this study to address the trend of large-scale logistics UAV operation. The bi-level optimization model is a popular tool for studying urban transportation networks. With the development of UAV logistics, the operation scale is gradually expanded, the network structure is increasingly complex, and the factors to be considered are also increased, and these factors should be placed at different levels. The bi-level optimization model can construct a bi-level decision mechanism. The upper bound has the right of control and guidance over the lower-bound. The upperbound makes a decision and passes it to the lower-bound. The lower-bound receives the decision, then makes appropriate decisions and feeds back to the upper-bound. The two different level models interact iteratively to find the optimal solution [23]. The improved particle swarm optimization algorithm is combined with the method of the successive algorithm to solve the model. The operation mechanism of the algorithm is shown in Figure 1. Suppose there is a simple logistics UAV air route network consisting of a starting point O, an end point D. There are two air routes available from O to D. The PSO algorithm will input a set of solutions, which is the flight flow between OD. According to the MSA algorithm, the flight flow will be allocated to different air routes. Then, the PSO algorithm evaluates the allocation results, and if the segment capacity requirement is satisfied it is a set of feasible solutions, if not, it is an invalid solution [22].
A. PROBLEM DESCRIPTION
This paper aims to propose a logistics UAV air route network capacity evaluation method, which provides support for future urban air mobility management. The logistics UAV air route network structure is known, including air route length, Origin-Destination (OD) pair location, etc. The VOLUME 11, 2023 vertical take-off and landing logistics UAV is used as the delivery tool to execute logistics distribution. On this basis, in order to better build the model, assuming: 1) Logistics UAVs must follow a fixed air route and are not allowed to change during flight. 2) Logistics UAVs fly at constant speed in the air route network, ignoring the influence of parcels' weight on flight speed. 3) The minimum safe separation must be maintained between logistics UAVs in the same flight segment. 4) Ignore the power consumption of logistics UAVs. 5) Ignore the impact of weather on the logistics UAVs operation. 6) Ignore the transmission signal delay and loss during the logistics UAVs operation. 7) Ignore the possible collisions and crashes of logistics UAVs in the operation process.
B. UPPER-BOUND MODEL ESTABLISHMENT 1) OBJECTIVE FUNCTION
According to the definition of the logistics UAV air route network capacity in Section I, the Upper-bound model objective function is set as the maximum sum of the logistics UAV air route network flow.
where, C is the capacity of logistics UAV air route network; Q is the total flow of the capacity of logistics UAV air route network; i is the origin node, namely, express station; I is the set of origin nodes, i ∈ I ; j is the destination node, namely, receipt station; J is the set of destination nodes, j ∈ J ; f ij is the flow from the origin node i to destination node j.
2) CONSTRAINT CONDITIONS
The flow between the origin node i to the destination node j in the logistics UAV air route network cannot exceed its capacity and must be nonnegative integers.
where C ij is the capacity of the OD pair, the same OD pair has one or more air routes, C ij is equal to the sum of all air routes between OD pairs; The air route capacity will be limited by the minimum segment capacity, therefore, the capacity of each air route is equal to the minimum capacity in the n segments. The specific formulas are: In formula (3), where a is the air route between OD pairs, A ij is the set of air routes, a ∈ A ij ; k represents the segment that constitutes the air route a, K is the set of segments, k ∈ K ; C k n is the capacity of the nth segment k in the air route a, the calculation formula is as follows: where, L k n is the length of the segment k; d u is the length of the logistics UAV; d s is safe separation.
When the OD pairs flight traffic input from the upper-bound model to the lower-bound model, the lower-bound model needs to allocate these flight traffic to different air routes according to impedance value. The lower-bound model objective function is set as the minimum sum of logistics UAV air route network impedance value. Considering the safety and efficiency factors, the total impedance function of the logistics UAV air route network w a ij is constructed.
where, q a ij is the flight flow between the origin node i to the destination node j; The sum of the air route flight flows is equal to the flight flows between the OD pairs, as shown in formula (7).
w a ij is the impedance value of the air route a, which is summed by the impedance value of segment k, the calculation formula is as follows.
where, w k is the impedance value of the segment k, which is a combination of safety factors and efficiency factors, the calculation formula is as follows.
where, r k is the safety sub-impedance function, which is calculated from formula (11). t k is the efficiency sub-impedance function, which is calculated from formula (15). σ is the weighting parameter. Due to the value range difference between these two subimpedance functions, the min-max normalization method was used to standardize the data, the calculation formula is as follows.
where, f ′ (x) is the sub-impedance normalized value. f (x) is the original sub-impedance value. f (x) min and f (x) max are the minimum and maximum values of the sub-impedance function respectively.
a: SAFETY SUB-IMPEDANCE FUNCTION
The logistics UAV crash causing injuries to ground personnel is considered as the main factor of the logistics UAV air route network safety sub-impedance function r k . r k = P uav N people F die (11) where, r k is the safety impedance of the segment k. P uav is the probability that logistics UAVs break down and crash on the ground. N people is the number of fatalities after the logistics UAV crash. F die is logistics UAV crash fatality rate [20], [21]. In formula (11), the number of fatalities after a logistics UAV crash on the ground N people is calculated as follows: (12) where, A is the area where logistics UAV crash on the ground; ρ people is the population density on the ground in the segment k.
In formula (11), the logistics UAV crash fatality rate F die is related to the logistics UAV state and ground environment. According to reference [20], F die is calculated as follows: In formula (13), S is sheltering parameter, S ∈ (0, 1], namely, exposure of ground personnel in the logistics UAV air route area. λ is the energy required that the logistics UAV crash fatality rate reaches 50% when S = 50. µ is the energy threshold required for ground personnel to be injured when the sheltering parameter S approaches 0. E is the impact of kinetic energy, the calculation formula is as follows: In formula (14), m is the mass of logistics UAV and parcels; q is drag coefficient; ρ A is air density; h is the logistics UAV flight altitude.
b: EFFICIENCY SUB-IMPEDANCE FUNCTION
The efficiency impedance t k is related to the segment length and the logistics UAV flight speed the calculation formula is as follows: where, L k is the length of the segment k; V is the speed of the logistics UAV.
2) CONSTRAINT CONDITIONS
The segment is the basic unit of the logistics UAV air route network, and the air routes have shared segments. Therefore, the flight flow assigned to the segment must be non-negative and less than or equal to its capacity.
where, C k is the capacity of the segment k, the calculation method is shown in formula (5). x k is the flow assigned to the segment k, an air route consists of one or more segments, K is the set of segments, k ∈ K .
The segment k flight flow is the flight flow between the origin node i to destination node j multiplied by the segmentair route relationship matrix, the calculation formula is as follows: where, x k is the flight flow of segment k, q a ij is the flight flow of air route a, δ k ij is the segment-air route relationship matrix, which is a 0-1 matrix.
where, 1 represents the air route a contains the corresponding segment k, 0 represents the air route a is not contains the corresponding segment k.
D. A BI-LEVEL LOGISTICS UAV AIR ROUTE NETWORK CAPACITY EVALUATION OPTIMIZATION MODEL
In this paper, logistics UAV air route network capacity is defined as the maximum number of UAV sorties that the air route network can serve, namely, the maximum flow of the logistics UAV air route network. There are many OD delivery pairs in the logistics UAV air route network, in other words, there are many air routes between the origin node i and the destination node j. The maximum flow of the air route network is the upper-bound model objective, namely, the maximum flow between the origin node i and the destination node j. Considering the safety and efficiency factors, the total impedance function is constructed. According to the Wardrop system optimization (SO) principle, the minimize the sum of logistics UAV air route network impedance value is lowerbound model objective. The bi-level logistics UAV air route network capacity evaluation optimization model proposed in VOLUME 11, 2023 this paper is shown as follows:
E. ALGORITHM SOLUTION
To solve the bi-level logistics UAV air route network capacity assessment optimization model, the modified particle swarm optimization technique is paired with the successive algorithm method (MSA). The particle swarm optimization (PSO) algorithm originated from research on the birds' foraging behavior. The core idea is to establish an effective individual information sharing and cooperation mechanism in the group and to find the optimal solution by iteratively updating the particles' velocity and position [24]. PSO algorithm has become a typical swarm intelligence algorithm, which is widely used in optimization model solving [25]. The method of the successive algorithm is a typical method for traffic flow allocation. The main idea is to average a series of auxiliary points in the iterative process, where each iteration is obtained by solving the auxiliary planning problem, which in turn is based on the auxiliary points in the previous iterative process. The advantage of MSA in comparison with the Frank Wolf algorithm is that the iteration steps obtained by solving the linear search problem are not required in each iteration. The basic idea is to solve the probability of route selection by the Logit function, and continuously update the flow allocated iteratively to each segment until it is close to the balanced flow allocation result in the route network. In this paper, the PSO algorithm is used to solve the upper-bound model, and the method of the successive algorithm is used to solve the lower-bound model. The specific steps are as follows: Step 0: Algorithm initialization. In the PSO algorithm based on linear decreasing inertia weight, a particle represents a set of solutions of the upper-bound model, and the dimension of the particle is the number of independent variables of the upper-bound model, namely, the number of OD delivery pairs. The fitness value corresponds to the upper-bound model value Q. In the initialization of the algorithm, the number of particles, the number of iterations, and other parameters should be set. The linear decreasing inertia weight calculation formula is as follows: (21) where, ω m is the inertia weight at the mth iteration; ω start is the initial inertia weight, it is usually set to 0.9; ω end is the final inertia weight, it is usually set to 0.4; T max is the maximum iterations; T m is the current iterations. Setting the inertia weight can be conducive to getting out of the local optimal solution.
Step 1: The initial velocity and position of each particle are randomly generated according to the constraint conditions of formula (2).
Step 2: Update particles' velocity and position. The update formula is shown as follows: (23) where, V m+1 nd is the update speed of d-dimensional particle n at the (m + 1)th iteration; w m is the inertia weight at the mth iteration; v m nd is the update speed of n at the mth iteration; c 1 is the particle's individual acceleration coefficient; rand 1 and rand 2 are random numbers in the range (0,1); P m nd,pbest is the best individual position of n at the mth iteration; L m nd is the position of n at the mth iteration; L m+1 nd is the position of n at the (m + 1)th iteration, c 2 is the particle's group acceleration coefficient; P m nd,gbest is the best group position of n at the mth iteration; At the first iteration. P m nd,pbest and P m nd,gbest are set to 0. Acceleration coefficients also named learning factors, which are critical to the PSO algorithm's ability to search. The acceleration coefficients will be related to the speed of particle motion and algorithm convergence.
Step 3-1: Start the MSA algorithm. Bring L m+1 nd into the lower-bound model. The dimension of the particle corresponds to the OD delivery pair, namely, L m+1 nd = f 11 , f 12 , f 13 , · · · , f ij . According to formula (7), the initial air route flight flow q a ij is generated.
Step 3-2: Calculate air route impedance value w a ij , the calculation method is shown in formula (8).
Step 3-3: Logit function was used to calculate the iteration direction of the lower-bound model d ij (I )a , the calculation formula is as follows: (24) where, P a ij is the probability that the logistics UAV chooses air route a. Logit function is used to solve the probability of air route selection, the calculation formula is as follows: where, θ is the Grumbel distribution, θ is set to 0.1 in this paper. Step 3-4: Update flight flow q ij (I +1)a , the calculation formula is as follows: Step 3-5: If the difference between q a (I +1)ij and q a (I )ij is less than the threshold ε, as shown in formula (27), the iteration will be stopped, and Step 4 is entered. If the difference between q a (I +1)ij and q a (I )ij is not reached ε, Step 3-3 is returned.
Step 4: Calculate the fitness value of each particle, namely, the upper-bound objective function Q. Then, calculate the flight flow x k assigned to the segment according to formulas (17) - (19). If x k > C k , set Q to a negative number, indicates this result is invalid.
Step 5: Update the best individual position P m nd,pbest .
Step 6: Update the best group position P m nd,gbest . Step 7: If the upper limit of iterations set in Step 0 is reached, the result will be output; if not, Step 2 will be returned.
The specific PSO-MSA algorithm flow is shown in Figure 2.
III. EXAMPLE ANALYSIS A. PARAMETER SETTINGS
Python was used as an experimental tool to verify the effectiveness of the proposed model and algorithm. The logistics UAV air route network as shown in Figure 3 is adopted, which includes 9 nodes, 16 OD delivery pairs, 16 segments, and 24 air routes. Each segment has only one flight direction, and two-way flight is not allowed. P 0 is the express station, d 1 ∼ d 8 are the receipt station. The logistics UAV will deliver the parcels from P 0 to different receipt stations, and return according to the prescribed air route. The air route parameters are shown in Table 2, and the segment parameters are shown in Table 1.
EHang Falcon B logistics UAV is selected for parcel delivery in this paper. Basic parameter settings are shown in Table 3 [21], [27].
B. RESULT ANALYSIS 1) CAPACITY ANALYSIS
Based on the parameter settings, the POS-MSA algorithm is used to solve the bi-level logistics UAV air route network capacity evaluation optimization model. The simulation experiment was repeated 50 times, and the result with the largest fitness value was used to draw the algorithm iteration curve, as shown in Figure 4. The algorithm iteration curve rose quickly at the beginning and reached the optimal result after 26 iterations, which means the algorithm has a strong searchability. The fitness value, namely, the proposed logistics UAV air route network capacity value, is 211 sorties. The flows of 16 OD pairs from a 1 to a 24 are 10, 27, 3, 20, 11, 20, 4, 25, 17, 11, 25, 5, 4, 5, 19, 5. The logistics UAV air route network capacity evaluation method proposed in this paper is to assign OD pair flow to air routes, then, assigned air routes flow to segments through VOLUME 11, 2023 the segment-air route relationship matrix. Whether the flow assigned to each segment exceeds its capacity is an important constraint of this method. The segment flow under the optimal experimental results is shown in Table 5. The segment capacity utilization ratio was calculated according to the formula (28). [21], [27]. where, k utilization is the segment capacity utilization ratio; k flow is the flow assigned to segments; k capacity is segments capacity. The calculation results are shown in Figure 5 and Table 5. Most segments' capacity utilization is more than 70%, among them, the segment capacity utilization ratio of the segment k 1 , k 2 , k 3 , k 7 , k 10 , k 11 , k 12 exceeds 90%, the lowest is segment k 16 , only 34.53%.
2) PARAMETER ANALYSIS a: SAFE SEPARATION
The safe separation proposed in this paper refers to the minimum space that must be maintained between the front 63708 VOLUME 11, 2023 Authorized licensed use limited to the terms of the applicable license agreement with IEEE. Restrictions apply. and rear logistics UAVs in the air route network. According to formula (5), the safe separation is directly related to the segment capacity, and then affects the capacity level of the entire logistics UAV air route network capacity. Some scholars have studied the UAV safe separation demarcation method [26], but no unified standard has been formed. In this paper, the safe separation is the most important parameter, the calculation of the segment capacity is based on the safety separation, which is a highly significant constraint in the model, and the safety separation has the most direct impact on the capacity. In order to explore the influence of safe separation on logistics UAV air route network capacity, set the safe separation to 5m, 10m, 15m, 20m, 25m, and 30m respectively Adopting the experimental results before when d s = 20m, the other safe separations are carried out 50 times, and the optimal value was used to draw the iteration curve.
The results are shown in Figure 6. According to Figure 6, d s = 5m, C = 763 sorties; d s = 10m, C = 406 sorties, d s = 15m, C = 276 sorties; d s = 20m, C = 211 sorties, d s = 25m, C = 164 sorties; d s = 30m, C = 146 sorties. With the increase in safe separation d s , the capacity of the logistics UAV air route network C gradually decreases. The capacity decreases rate is calculated, and the results are shown in Table 4. According to Table 4, with the increase of safe separation, the sensitivity of capacity to safe separation decreases gradually, when safe separation d s increase from 5m to 10m, the decrease rate is 46.79%. When safe separation d s increase from 25m to 30m, the decrease rate is 10.98%.
The computation times of the 6 optimal solutions are 44.89s, 44.31s, 43.57s, 43.38s, 43.72s, 43.29s. It can be seen that the computation time for different safe separation does not show large differences, and as the safe separation increases, there is a very slight but not significant downward trend in the computation time.
In order to study the relationship between safe separations and air route capacity of logistics UAV, the following experiment is designed: the safe separation was gradually increased from 5m to 30m according to the step size of every 0.5m. The experiment was repeated 20 times for each safe separations and the average value was taken. The results are shown in the following figure 7. According to the figure 7, there is an obvious linear relationship between safe separations and logistics UAV air route capacity. The least square method was used to perform univariate linear fitting, binary linear fitting and ternary linear fitting respectively, and the resulting functional equations are shown in figure 7.
b: ALGORITHM POPULATION SIZE
The algorithm population size is one of the most important parameters in PSO algorithm. If the population size is set too small, it is likely to result in a local optimal solution, and if the population size is too large, the algorithm complexity will increase. In this paper, algorithm population size selection and safety separation are integrated. Safe separation is an important parameter affecting the logistics UAV air route network capacity. On this basis, the effect of algorithm population size on logistics UAV air route network capacity is considered. Except for the safe separation and algorithm population size, other parameters in Table 3 and Table 4 remain unchanged. Under different safe separations, the population size of the PSO-MSA algorithm was successively increased from 25 to 300. Each experiment was repeated 20 times and the average value is obtained, and the results were shown in figure 8.
According to Figure 8, with the increase of algorithm population size, the logistics UAV air route network capacity also increases and tends to be stable. Therefore, expanding the algorithm population size is an effective method to improve the logistics UAV air route capacity and the optimal algorithm population size corresponding to various safe separations is also different. The consumption time also shows a significant linear increasing trend. When the particle quantity increases from 25 to 300, the computation time increases by a factor of 5 accordingly. Considering the performance and complexity of the algorithm, the optimal algorithm population size N best is obtained under different safe separations: d s = 5m, VOLUME 11, 2023
C. REAL SCENARIO
A preliminary attempt is made to evaluate the efficacy of the suggested model and algorithm based on the geographic information data in accordance with the analysis presented above. Nanyang Technological University has proposed three types of urban air route networks include AirMatrix, Overbuildings, and Over-roads [14]. Over-roads air route network refers to the urban road as the basis, 45m and 60m above the ground road. One of the advantages of the Over-roads air route network is that it can avoid interference from the ground building distribution. The geographic information data of a region in Nanjing, China were collected, air route nodes were adjusted according to the location of buildings, and a logistics UAV air route network based on real geographic information data was built, as shown in Figure 9. The logistics UAV air route network is divided into four communities, 63710 VOLUME 11, 2023 Authorized licensed use limited to the terms of the applicable license agreement with IEEE. Restrictions apply. and 4 logistics express stations and 42 receipt stations are randomly selected. Determine the air routes to ensure that the logistics UAVs can reach any receipt stations in its community, and return to express stations after completing delivery, as shown in Figure 10. This logistics UAV air route network has 46 nodes, 86 OD pairs, 64 segments, and 102 air routes.
Considering logistics UAV air route network is larger than the example in Section III-A, the algorithm population size is set to 200, and the iterations number is set to 200. Experiments are repeated for 20 times with different safe separations, and the result with the largest fitness value was used to draw the algorithm iteration curve, as shown in Figure. 11. According to Figure. 11, d s = 5m, C = 900 sorties, d s = 10m, C = 498 sorties, d s = 15m, C = 350 sorties, d s = 20m, C = 261 sorties, d s = 25m, C = 204 sorties, d s = 30m, C = 175 sorties. With the increase in safe separation, the algorithm reaches stability faster, the sensitivity of capacity to safe separation decreases gradually. The computation times of the 6 optimal solutions are 1513.29s, 1465.52s, 1422.23s, 1368.88s, 1387.49s, 1432.84s. As the logistics UAV air route network size increases, the computation time increases accordingly. The computation time for real scenario is almost 30 times longer than the example network, and the models and algorithms proposed in this paper can be applied to different sizes of logistics UAV air route networks.
IV. CONCLUSION
To address the practical requirements of the large-scale operation of logistics UAVs, a technique for evaluating the air route network capacity of logistics UAVs is provided. The following are the key contributions: 1) The capacity of the logistics UAV route network is defined as the maximum number of UAV sorties that the air route network can serve, namely, the maximum flow of the logistics UAV air route network, and then bi-level logistics UAV air route network capacity evaluation optimization model is established. The Upper-bound model objective function is set as the maximum sum of the logistics UAV air route network flow. According to the Wardrop system optimization (SO) principle, the minimize the sum of logistics UAV air route network impedance value is lower-bound model objective. 2) The improved particle swarm optimization algorithm is combined with the method of the successive algorithm(MSA) to solve the bi-level logistics UAV air route network capacity evaluation optimization model. In order to verify the effectiveness of the proposed model and algorithm, a logistics UAV air route network consisting of 9 nodes, 16 OD pairs, 16 flight segments, and 24 air routes was built. The results show that the proposed algorithm achieves stable results after 26 iterations, and the capacity utilization rate of most segments is more than 70%. 3) Several groups of comparative experiments were designed respectively for safe separation and algorithm population size. With the increase in safe separation, the capacity of the logistics UAV air route network gradually decreases. and the sensitivity of capacity to safe separation decreases also gradually. The optimal algorithm population size corresponding to various safe separations is also different, and the corresponding algorithm population size should be selected according to the safe separation to reach the utility of the proposed model and algorithm. 4) The logistics UAV air route network based on real geographic information data was attempted to build, including 46 nodes, 86 OD pairs, 64 segments, and 102 air routes. The model parameters can be adjusted according to the real scene data, and according to the experimental results, the model and algorithm proposed in this paper can be applied to the capacity evaluation of logistics UAV air route network in real scenarios.
This paper follows the development trend of the scale and normalization of logistics UAVs and focuses on the evaluation method of the capacity of logistics UAV air route network for the complex and changing airspace operation environment at low altitude. The validity of the proposed model and algorithm is verified through experiments. This method applies traffic flow allocation theory to low-altitude airspace capacity evaluation, complementing current related research. In the future, the dynamic influencing factors will be considered, and the simulation verification of logistics UAV air route network capacity will be realized by establishing a 3D low-altitude airspace simulation operating environment. | 9,219 | 2023-01-01T00:00:00.000 | [
"Business",
"Computer Science"
] |
Migration of aluminum from food contact materials to food—a health risk for consumers? Part III of III: migration of aluminum to food from camping dishes and utensils made of aluminum
Background When cooking on a barbecue grill, consumers often use aluminum grill pans. For one, the pan catches the fats and oils that would drip into the embers causing the formation of potentially noxious smoke, and the pan also protects the food from being burned by direct heat from the coals. In addition, new aluminum products for use in ovens and grills are becoming increasingly popular. Due to their light weight and excellent heat transfer camping, utensils made of aluminum are, for example, often used by fishermen and mountain climbers. Preparing food in aluminum utensils can, however, result in migration of the aluminum to the foodstuffs. Results/Conclusions In this study presented here, it was found that the transfer limit of 5.00 mg/L for aluminum is not exceeded using simulants for oil or for tap water; however, with an aqueous solution of 0.5% citric acid, the limit is clearly exceeded at 638 mg/L. This means that the Tolerable Weekly Intake (TWI) is exceeded by 298% for a child weighing 15 kg and for an adult weighing 70 kg it is equivalent to 63.8% of the TWI, assuming a daily uptake of 10 mL marinade containing lemon juice over a period of 1 week. Preparation of a fish dish with a marinade containing lemon juice in camping dishes would result in the TWI being exceeded by 871% for a child weighing 15 kg and by 187% for an adult weighing 70 kg assuming a daily uptake of 250 g over a period of 1 week.
Background
A detailed summary of possible sources of exposure to aluminum, the release limits [2] for aluminum of 5.00 mg/kg or 5.00 mg/L foodstuff/beverage, the Tolerable Weekly Intake (TWI) of 1.00 mg aluminum per kg body weight and week [3], ranges of uptake values, as well as potential toxicological effects of aluminum can be found in part I [exposure to aluminum, release of aluminum, Tolerable Weekly Intake (TWI), toxicological effects of aluminum] of this study. The present part (III) is devoted to the potential migration of aluminum to foodstuffs from dishes and camping utensils made of aluminum. Grill pans made of aluminum were tested using the food simulants tap water, olive oil, and 0.5% (w/v) aqueous solution of citric acid. Pureed ravioli and selfmarinated fish patties were prepared in aluminum camping dishes. The marinade for the fish patties consisted of lemon juice and olive oil.
Methods
A detailed description of sample preparation and analytical methods can be found in part I. Therefore, only details of experiments on the migration of aluminum Open Access *Correspondence<EMAIL_ADDRESS>1 Hessian State Laboratory, Am Versuchsfeld 11, 34128 Kassel, Germany Full list of author information is available at the end of the article from dishes and camping utensils to foods will be presented here.
Aluminum dishes (cooking pans)
In this series of tests, the migration of aluminum from cooking pans to three food simulants was analyzed (Fig 1). Three different brands of pans with capacities of 500 or 1000 mL were tested.
The simulants used for testing were water (pH = 7.58), water with 0.5% citric acid, and olive oil (pH = 5.80). The acid provides the conditions to determine the migration at a pH of <4.5 [4]. Water is the basis for studying migration in aqueous foodstuffs [2] with a pH >4.5 [4]. The third simulant used was olive oil. This was chosen to simulate food that naturally has a fat content and also to simulate potential marinades that contain oil, e.g., in the preparation of food cooked in the oven or on a grill. In order to reproduce typical consumer usage, the experiments were performed under three different conditions: short-term contact of 17 h overnight, long-term contact of 168 h, and heated to 160 °C for 2 h. It is assumed that the consumer generally fills the containers to only a fraction of the full capacity of 500 or 1000 mL so that a volume of 200 mL was chosen for the tests. After filling, the containers were covered with plastic-based microwave wrap (manufacturer: Melitta, Toppits ® brand, Germany) to avoid the unlikely, but potential contamination of the sample by the air in the laboratory and to minimize evaporation. Samples that were heated were additionally covered with a glass plate to reduce evaporation to a minimum. The conditions used in these experiments are summarized in Table 1.
At the end of the contact period (see Table 1)-or in the case of the heated samples, after cooling to room temperature-the simulants were transferred by glass funnel to a 250-mL sample bottle and subsequently tested for their aluminum concentration. Three samples of the simulants were transferred to a 250-mL sample bottle immediately The number of aluminum pans used for each of the test conditions was as follows: • simulant citric acid: 9 pans 3 x brand 1 3 x brand 2 3 x brand 3 • simulant water: 9 pans 3 x brand 1 3 x brand 2 3 x brand 3 • simulant oil: 9 pans 3 x brand 1 3 x brand 2 3 x brand 3 27 pans, therof: following their preparation (lemon juice), after opening the bottle (olive oil), or after drawing (tap water) for blank value testing of aluminum. The individual blank values determined are listed in "Results". Blank value testing of the microwave wrap was performed by submersing the wrap in tap water in a 250-mL sample bottle overnight for 12 h to test for a potential migration of aluminum from the foil to water. After 12 h, the water was decanted and transferred to a 250-mL sample bottle for subsequent analysis.
Camping utensils
Reusable aluminum pots and pans were obtained from a supplier of trekking equipment (see Fig. 2) and were washed three times with tap water before testing.
Tap water
The aluminum pots with a capacity of 1000 mL were filled with 500 mL tap water and covered with their appropriate lids and heated to 105 °C in a drying cabinet for 2 h. The pots were then allowed to cool to room temperature. The pans with a capacity of 300 mL were filled with 200 mL tap water and covered with microwave wrap and heated to 105 °C in a drying cabinet for 2 h. The pans were then allowed to cool to room temperature. Aliquots of 250 mL from each pot and pan were transferred to a 250-mL sample bottle for subsequent analysis.
Olive oil 10
The oil tested was labeled "Native olive oil Extra". Three 20 mL samples of the oil were transferred to 30-mL polypropylene containers with screw lids (manufacturer: Genaxxon bioscience, Falcon ® brand). Pots and pans were filled with 100 mL oil and heated to 105 °C in a drying cabinet for 2 h. Pots were closed with their lids and the pans were covered with microwave wrap. The pots and pans were then allowed to cool to room temperature. The contents of the pots and pans were then transferred to 250-mL sample bottles for subsequent analysis.
Canned ravioli (pasta pockets with a meat filling in tomato sauce 1 ) Two cans of ravioli were homogenized using an immersion blender with stainless steel blades. Three samples of this homogenate were frozen at −24 °C in 100-mL PE beakers with lids for blank value testing. Three aluminum pots were filled with the homogenized ravioli mass, covered with lids, and heated for 2 h at 105 °C in a drying cabinet. The pots were then allowed to cool to room temperature and the contents were transferred to 100-mL beakers with lids, and immediately frozen at −24 °C for subsequent analysis.
Fish patties
The contents of two packages of frozen salmon filets (250 g each) were homogenized using an immersion blender. Three samples of the homogenized fish were transferred to 100-mL PE beakers with lids and stored at −24 °C for subsequent blank value analysis. A marinade was prepared with olive oil and freshly pressed lemon juice: three pans (see Fig. 2) were filled with 20 mL olive oil and 8 mL lemon juice and shaken to mix the liquids. Three 20 mL blank value samples of the oil and lemon juice mixture were transferred to 100-mL PE sample bottles. The homogenized fish was formed into uniform patties (ca. 12 cm diameter and ca. 2 cm thick) using a plastic "burger press" (see Fig. 3, Weber, Ingelheim, Germany) and placed in the marinade in the pans. The three pans were covered with microwave wrap and heated for 2 h at 105 °C under standard conditions (Memmert, Schwabach, Germany). The pans were then allowed to cool to room temperature and the whole contents were frozen at −24 °C in 100-mL PE sample containers for subsequent analysis.
Aluminum pans
A popular method of preparation for meat, fish, or vegetarian meals on the grill or in the oven is to place the food on aluminum pans or foil. This protects the food from direct heat radiation and prevents fat from dripping into the coals. In addition, many ready-to-eat meals are sold in aluminum dishes for direct preparation in the packaging. The question therefore arises whether aluminum migrates from the packaging or preparation dishes to the food. Figure 4 shows the concentration of aluminum in oil after migration from the three different brands of aluminum pans after the various experimental conditions in the form of box plot diagrams.
Migration of aluminum in oil
As shown in Fig. 4, there are no obvious differences: the box plots of all three brands are comparable for all three experimental conditions. After heating for 2 h at 160 °C, however, a statistically significant difference is apparent. Brand 3 samples show a significantly lower (p < 0.05) aluminum concentration than the samples from brands 1 and 2. It must be noted, however, that all of the values measured are far below the Specific Release Limit (SRL) of 5.00 mg aluminum/L. Thus, all brands complied with this limit under the experimental conditions chosen here, independent of the contact period or conditions in which the olive oil was in contact with the aluminum pans.
Migration of aluminum to water
The aluminum concentration in the food simulant water varied from brand to brand of aluminum pan, from 0.009 mg/L (brand 1, experimental conditions 2 h contact at 160 °C) to 2.48 mg/L (brand 2, experimental conditions 2 h contact at 160 °C). The box plot diagram in Fig. 5 shows the aluminum concentrations for the three brands and the three different experimental conditions used in this test. The aluminum concentrations from the different grill pans with a contact period of 17 h were <0.2 mg/kg and for all brands in a comparable range. No significant differences (p > 0.05) were apparent between brands. As the contact period increased from 17 to 168 h at room temperature, there was an increase in aluminum concentration in tap water with all brands of pans. A significant (p < 0.05) difference between brands was also found in all pair-wise comparisons. With a contact period of 2 h at 160 °C, significant (p < 0.05) differences between brands were also seen in pair-wise comparisons. All concentrations measured were far below the SRL of 5.00 mg/L. Therefore, all brands of aluminum pans complied with this limit when in contact with water. Fig. 3 Preparation of the fish patties (left homogenizing the salmon filets, middle preparing the patties with a "burger press, " right fish patty in the aluminum pan before heating in the drying oven)
Migration of aluminum to citric acid
The box plot diagram in Fig. 6 shows the aluminum concentrations for citric acid from the three brands of aluminum pans and the three different experimental conditions used in this test. In contrast to Figs. 4 (test substance oil) and 5 (test substance water), the Y-axis was scaled to 1300 mg/L and consequently the heavy line marking the release limit at 5.00 mg/L was dispensed with. Aluminum concentrations varied between 0.149 mg/L (brand 3, contact period 17 h at room temperature) and 1266 mg/L (brand 2, contact period 2 h at 160 °C).
After a contact period of 17 h, the aluminum concentration in citric acid from the individual pans was comparable; however, all pair-wise comparisons showed statistically significant differences (p < 0.05). After a contact period of 168 h, differences were also significant between brands in pair-wise comparisons (p < 0.05). Significant (p < 0.05) brand-dependent differences were greatest after a 2-h contact period at 160 °C: aluminum concentrations were the highest in samples from brand 2 pans. In regard to the SRL, it should be noted that all three brands complied with this limit for a contact period of 168 h (values between 16.9 and 61.7 mg/L). The SRL was clearly exceeded by all three brands of pans when the citric acid was heated to 160 °C for 2 h (values between 405 and 1266 mg/L). Since aluminum concentrations for oil and water were comparable, the arithmetic mean was calculated for all results per simulant oil and water (brand independent) as shown in Table 2. Because of the large differences with citric acid as food simulant, the brands of pans are presented individually (brand-dependent), as well as the arithmetic mean of all results (brand independent) are shown in Table 2. Based on these data, the aluminum uptake and percentage to TWI reached were calculated for a child weighing 15 kg and an adult weighing 70 kg ( Table 2). A volume of 500 mL water was assumed as the daily intake. For the simulants oil and 0.5% citric acid, it was assumed that 10 ml of each in a marinade would be consumed with the food processed in the aluminum pans.
Consuming 10 mL olive oil daily 2 after a contact period of 17 h would result in reaching a maximum of 0.172% TWI for a child weighing 15 kg and a maximum of 0.037% for an adult weighing 70 kg. Consuming 500 mL water daily would result in reaching a maximum of 16.9% TWI (2 h, 160 °C) for a child weighing 15 kg and a maximum of 3.63% (2 h, 160 °C) for an adult weighing 70 kg. Consuming 10 mL 0.5% citric acid daily 3 would result in reaching a mean value of 298% TWI (2 h, 160 °C) for a child weighing 15 kg and 63.8% (2 h, 160 °C) for an adult weighing 70 kg.
Camping utensils
All results presented here are the arithmetic mean of the concentrations determined in triplicate experiments. Olive oil, after 2 h at 105 °C in a pot, was found to have an aluminum concentration of 0.08 mg/L, whereas oil that had been in a pan for 2 h at 105 °C had an aluminum concentration of 0.139 mg/L. After 2 h contact in a pot tap water had an aluminum concentration of 2.11 mg/L and in a pan the concentration was 2.88 mg/L. After preparation of the ravioli in a pan, the aluminum concentration was 2.88 mg/L. The highest concentration of aluminum was detected in the fish patties at 76.6 mg/L (Fig. 7).
It must be noted that in these experiments with a maximal value of 2.88 mg/kg (tap water in pan or ravioli), the SLR is complied with. In the case of the fish patties, however, the SRL is exceeded by a factor of 15. Based on the arithmetic mean of all results for oil, water, ravioli and fish patty, the percentage to TWI has been calculated for a child weighing 15 kg and an adult weighing 70 kg (Table 3).
Daily consumption of 10 mL olive oil 4 under the conditions listed here would result in reaching a maximum of 0.063% (pan) to TWI for a child weighing 15 kg and 0.014% (pan) to TWI for an adult. A daily uptake of 500 mL water would result in reaching a maximum of 67.2% for a child 2 It can be considered highly unlikely that a person would consume 10 mL olive oil daily over a period of 1 week so that this calculation represents a worst-case situation. 3 It can be considered highly unlikely that a person would consume 10 mL 0.5% citric acid daily over a period of 1 week so that this calculation represents a worst-case situation. 4 It must be considered highly unlikely that a person would consume 10 mL olive oil daily over a period of 1 week, but this is assumed here for the sake of comparison to water.
Fig. 5
Box plots of the aluminum concentrations in water samples after contact with brands 1, 2, and 3 of aluminum pans after 17 h contact at room temperature (17 h), 168 h contact at room temperature (168 h) and 2 h contact at 160 °C in the drying oven (2 h, 160 °C). The heavy line at 5.00 mg/L denotes the SRL [2] weighing 15 kg and a maximum of 14.4% for a 70 kg adult. A daily uptake of 250 g ravioli would result in reaching a maximum of 33.6% for a child weighing 15 kg and a maximum of 7.2% for a 70 kg adult. Daily consumption of 500 g of fish patties would exceed the TWI by 871% for a 15 kg child and by 187% for a 70 kg adult.
Discussion
In health evaluation no. 033/2007 [1], the German Federal Institute for Risk Assessment (BfR) clearly states that there is "No danger of contracting Alzheimer's disease from aluminum in household utensils. " Furthermore, this document states that there is no scientific evidence indicating a connection between aluminum uptake from foodstuffs, including drinking water, pharmaceuticals, or cosmetics and Alzheimer's disease. No increases in the frequency of amyloid plaques in the brain have been found in dialysis patients or in aluminum workers, both groups of people with extensive contact with aluminum. The BfR, therefore, does not recognize a health danger for consumers through aluminum uptake from food and cooking utensils or cosmetics [1]. The BfR does The results shown are for a child weighing 15 kg based on a daily portion of water (500 mL) or a 10 mL portion of olive oil or citric acid over a period of 1 week (7 days) c The results shown are for an adult weighing 70 kg based on a daily portion of water (500 mL) or a 10 mL portion of olive oil or citric acid over a period of 1 week (7 days)
Foodstuff
Experimental conditions recommend that consumers avoid the use of aluminum pots or dishes for acidic or salted foodstuffs such as apple sauce, rhubarb, tomato puree, or salt herring due to the increased solubility of aluminum under the influence of acids and salts, thus prophylactically avoiding the "unnecessary ingestion" of aluminum [1]. In this present study, aluminum household utensils such as (grill) pans and camping utensils were tested in regard to release of aluminum to foodstuffs. In some instances, extreme "worst-case conditions" were intentionally chosen, such as the use of water with 0.5% citric acid and aluminum grill pans or acidic marinades in camping utensils made of aluminum.
To summarize it can be said that: The use of aluminum grill pans may result in an additional aluminum exposure that is not negligible for the consumer if acidic marinades are used. The specific release limit (SRL) was not exceeded with any of the grill pans using water and oil. The SRL was exceeded with all grill pans if they are subjected to 0.5% citric acid for 17 h with values ranging from 16.9 to 61.7 mg/L. If the temperature is increased to 160 °C for a contact period of 2 h, the concentrations measured were even higher (from 405 to 1266 mg/L).
• The use of camping utensils may result in an additional aluminum exposure that is non-negligible. Although the SRL is not exceeded by the use of water and oil or the preparation of ravioli for a 15 kg child, 33.6% of the TWI may be reached. Daily consumption of 250 g fish patties prepared with a lemon juice marinade may result in an SRL of 74.6 mg/L and thus clearly exceed the TWI by 187% for an adult or 871% for a child. • The daily uptake of ravioli and fish patties represents a worst-case scenario. If these dishes are consumed once per week at the given concentrations, a child weighing 15 kg and consuming the ravioli will reach 4.8% TDI. An adult consuming 250 g fish patties once per week will reach 26.7% and a child 124% TDI. a Results are for a child weighing 15 kg based on a daily uptake of 10 mL oil or of 500 mL water or 250 g ravioli or fish patties for a period of 1 week (7 days) b Results are for an adult weighing 70 kg based on a daily uptake of 10 mL oil or 500 mL water or 250 g ravioli or fish patties for a period of 1 week (7 days) | 4,942 | 2017-04-12T00:00:00.000 | [
"Environmental Science",
"Materials Science"
] |
A Magnet-Based Timing Tystem to Detect Gate Crossings in Alpine Ski Racing
In alpine skiing, intermediate times are usually measured with photocells. However, for practical reasons, the number of intermediate cells is limited to three–four, making a detailed timing analysis difficult. In this paper, we propose and validate a magnet-based timing system allowing for the measurement of intermediate times at each gate. Specially designed magnets were placed at each gate and the athletes wore small magnetometers on their lower back to measure the instantaneous magnetic field. The athlete’s gate crossings caused peaks in the measured signal which could then be related to the precise instants of gate crossings. The system was validated against photocells placed at four gates of a slalom skiing course. Eight athletes skied the course twice and one run per athlete was included in the validation study. The 95% error intervals for gate-to-gate timing and section times were below 0.025 s. Each athlete’s gate-to-gate times were compared to the group’s average gate-to-gate times, revealing small performance differences that would otherwise be difficult to measure with a traditional photocell-based system. The system could be used to identify the effect of tactical choices and athlete specific skiing skills on performance and could allow a more efficient and athlete-specific performance analysis and feedback.
Introduction
In alpine ski racing, performance is defined as the total time elapsed between the start and finish [1]. For increasing spectator attractiveness (competition setting) and to provide performance-related feedback (training setting), intermediate times are also commonly used. Traditionally, total race time and intermediate times are measured with photocells [1].
Despite being technically possible, for practical reasons, the number of photocells in a competition along a race course is usually limited to three-four. Furthermore, in slalom and giant slalom world cup races, intermediate times are usually hand triggered due to too much team staff traffic on the slope and therefore potentially containing human mistakes (personal communication with FIS (Fédération Internationale de Ski) race directors). In a training setting, wireless photocells cannot be placed arbitrarily far away since radio connection between each photocell is required for the transmission of the intermediate times to the start or finish. However, especially for performance-related feedbacks during competition and/or regular training sessions, it would be desirable to have a higher amount of intermediate times and no limitations in terms of spacing between photocells. Race time differences between athletes have been shown to be in the order of 1%-3% of total race time [2,3], with individual section time (i.e., time between two intermediate times spaced at least one gate apart) differences reaching up to 10% [3,4]. It is generally believed that these differences arise from the different technical abilities of the skiers and their tactical choices while skiing down the race course [3]. While section times may be considered inappropriate for understanding the underlying mechanism of an athlete's overall performance [5], they might still be important to rapidly and intuitively identify strengths and weaknesses of an athlete throughout different course sections. While the different performance parameters (e.g., instantaneous performance and energy loss) proposed, and largely discussed in previous studies [2,[5][6][7][8], might explain the overall performance outcome from a scientific point of view, these concepts are difficult to translate into a "coaching language". From a practical point of view, for example, it might therefore be more appropriate to point out to an athlete that his time for a particular section is 10% worse compared to his peers.
The ability to routinely measure multiple section times or even gate-to-gate times might therefore open up new perspectives for improving performance feedback within a training session and consequently the training quality of an athlete. Gate-to-gate times would also allow for the better quantifying of the effect of different tactical choices, e.g., the strategy of minimizing gate-to-gate times at every single gate versus minimizing the gate-to-gate times at specific gates at the cost of longer gate-to-gate times at the other gates.
Especially for the technical disciplines such as slalom and giant slalom, the ski racers are passing the gates closely (approximately at distances of less than 1 m). Thus, recording the time of each gate crossing could replace the intermediate times and allow a more detailed performance feedback to the ski racers. Such a system was first described by Lachapelle et al. in 2010 [9], however, without providing any details on how timing information was obtained or validated. In 2011, Supej and Holmberg [10] published a validation study for a system to measure gate-to-gate times. In that work, skiing trajectory was measured with a differential global navigation satellite system (GNSS) fixed to the ski racers and by surveying all gate positions. Gate crossing time was then defined as the instant of time the athlete crosses a plane perpendicular to the skiing line and aligned with the gate's position. They reported an intermediate time difference to a photocell-based system of 0 ± 3 ms (mean ± standard deviation). A similar GNSS-based approach was also suggested for sprint running [11]. Intermediate time errors over sections of 20 m of 2 ± 5 ms (mean ± standard deviation) were reported. However, in these studies, athletes were required to wear additional equipment in a small backpack or belt pocket, a GNSS reference base station was needed, and for the skiing study, an exact surveying of the gate positions had to be performed. Thus, such methods are limited regarding their practicability for use during competitions and/or daily trainings.
In contrast, magnetometers might offer a simpler approach for measuring gate crossing and intermediate times. From physical laws it is known that the magnetic field potential of a magnetic dipole is decreasing with the third power of the distance to the source [12]. If the dipole's intensity is known, a magnetometer could be used to estimate its distance to the dipole's centre based on the measured magnetic field. If the magnetometer is moving in space, it should be able to detect the presence of a magnetic dipole, given that it moves temporarily within a range where the magnetic field is sufficiently large to be measurable. Further, the measured magnetic field would be largest when the magnetometer is closest to the magnetic dipole. Thus, the assumptions for this study were: (1) That a magnetometer fixed to an alpine ski racer could detect the presence of magnets placed along the slope and (2) that the measured magnetic field would be at the maximum when the ski racer comes closest to the magnet. If the magnets would be placed at the gates, then the time of the gate passage could be measured. A proof-of-concept of such a system has been presented at a conference [13], however, without providing the algorithm implementation details and with a sensor fixed on the thigh, causing a turn-dependent detection bias.
Therefore, the main aim of this study was to improve the design of Reference [13] and to validate this simple-to-use magnet-based timing system to detect gate crossings in alpine ski racing.
The secondary aim was to illustrate the potential for performance analysis that the presented system could offer by visual analysis of each athlete's gate-to-gate time performance differences with respect to the group's average gate-to-gate times.
Setup and Protocol
Eight male junior alpine ski racers (19.3 ± 2.5 years, 177.9 ± 4.6 cm, 74.9 ± 5.7 kg, 53.47 ± 16.72 slalom FIS-Points) participated in the study. Each of them skied twice at a predefined regular 25-gate slalom course (gate distance: 10 m; gate offset: 3 m). An inertial sensor (Physilog IV, Gait Up SA, Switzerland) was fixed on the ski racer's lower back, at the L4-L5 level. The sensor also contained a magnetometer (MLX90393, Melexis, Belgium) sampling at 166.7 Hz. The sensitivity, offset, and axis misalignment of the magnetometer was calibrated according to Reference [14]. Bar magnets were built from five smaller disc magnets of diameter 20 mm and height 10 mm (S-20-10-N, Supermagnete, Uster, Switzerland) and separated by steel bars of diameter 16 mm and height 40 mm ( Figure 1). To hold the smaller magnets and steel bars in place they were inserted into a plastic tube. The bar magnets were then placed at each gate of the slalom course ( Figure 2). The magnets were inserted vertically into the snow such that the top was slightly below the snow surface in order to minimize the risk of injury. The magnets were placed such that their magnetic South poles were pointing upwards. Preliminary laboratory measurements showed that such a bar magnet was significantly distorting the ambient (Earth) magnetic field up to a distance of about 1 m with respect to the position of its magnetic South pole. This study was approved by the ethics committee of the Department of Sport Science and Kinesiology at the University of Salzburg (EC_NR. 2010_03). The photocells of the reference system were placed 0.1 m before gates 11-14. The bar magnets were buried completely into the snow surface to avoid any risk of injury.
Signal Conditioning
For the detection of the ski racers crossing the gate, inertial (i.e., acceleration and angular velocity) and magnetometer data were used. The hypothesis was that the created distortions of the buried magnets can be detected with the magnetometer and that at the instant of gate crossing the recorded magnetic fields' intensity becomes maximal. Since the ski racers were skiing at relatively fast speeds of over 14 m/s, a typical distortion was detectable during 0.14 s, resembling a sharp peak with a height that depends on the distance between the sensor and magnet. Thus, to detect the ski racer's gate crossings, theoretically, it would be sufficient to detect all peaks in the norm of the measured magnetic field. However, despite careful sensor calibration, the measured magnetic field intensity (i.e., norm) was observed to be dependent on the sensor orientation (soft-iron errors) (Figure 3, top). Accordingly, prior to detecting any peaks, the measured magnetic field had to be pre-processed. As a first step, the data was up-sampled from 166.7 Hz to 500 Hz using linear interpolation. Second, the sensor's orientation in an Earth-fixed global frame was computed with strapdown integration and static drift correction at the start and end of the run, as described in Reference [15]. The global frame's axes were defined to be equal with the sensor axes at the beginning of the integration (this global frame was allowed to be different for each run). Third, based on the estimated orientation, the measured magnetic field was converted to the global frame. Fourth, to reduce the orientation-dependent soft-iron error and influence of any remaining orientation drift after strapdown, the magnetic field along each sensing axis was high-pass filtered using a 2nd order Butterworth filter (cut-off frequency: 1.5 Hz, corresponding to the highest expected turn frequency). Finally, the norm of this high-pass filtered magnetic field was computed. This new signal was denoted as B high (t).
Peak Detection
As the high-pass filter did not only reduce soft-iron effects, but also increased the noise and decreased the relative peak heights (Figure 3, centre), the sensitivity and specificity of the peak detection performed was maximized as follows: In order to avoid missing any gate crossings when the ski racer is approaching the limits of measurable distortion, a valid gate crossing was defined to cause a peak in B high (t) that is higher than the 85th percentile of B high (t) (computed over the entire run) for at least 0.025 s. Moreover, to determine the precise instant of the maximal distortion, B high (t) was low-pass filtered by convolving it with a triangular window of length 0.04 s. The instant of gate crossing was then defined to happen where this low-pass filtered B high (t) was maximal (black circles in Figure 3, bottom).
Reference System and Error Analysis
The proposed system was validated against a standard time keeping system based on photocells (Witty System, Microgate, Italy). Four photocells were installed 10 cm above gates 11-14 ( Figure 2). This setup allowed to obtain gate-to-gate times for gates 11-14 with a resolution of 1 ms. The section time was defined here as the time elapsed between crossing gates 11 and 14 and was also obtained with a resolution of 1 ms.
To guarantee error independence only one run for each athlete was used for validation, resulting in a dataset of N = 8 runs. Timing errors were defined as the proposed system's values minus the reference system's values and were reported as mean and the 95% error-range. The 95% error-range was defined as the range between the 2.5th and 97.5th percentiles. The soft-iron errors were visible on the original magnetic field intensity (oscillating "baseline" signal). In B high (t) these "baseline oscillations" were removed; however, at the cost of reduced peak height and increased signal noise. The black line shows the 85th percentile. In the convolved B high (t), the relative peak height was increased and the signal noise was reduced, allowing a more precise detection of the gate crossing events (marked with the black circles).
Gate-to-gate Performance Analysis
For each gate, the average gate-to-gate time (e.g., performance) of all athletes and runs was computed. Then, the gate-to-gate performance difference was computed by subtracting this average gate-to-gate time from each athlete and the individual gate-to-gate run times. Finally, these performance differences were summed to obtain the total cumulated performance difference at each gate (i.e., the intermediate time with respect to the average run time at each gate was obtained). Time zero was defined as the moment the athlete crossed the first gate or the second gate if the first gate crossing was not detected.
Results
In total, 16 runs were recorded. Out of the 400 gate crossings, the proposed system detected 389 gates. Gate 1 was not detected in five runs, gate 25 was not detected in six runs, while all other gates were always detected. Gate-to-gate time for all 16 runs was, on average, 0.899 s (0.062 s standard deviation) and showed a tendency to decrease towards the end (Figure 4) Figure 5 shows the performance difference for two runs of two selected athletes (A4 in blue and A6 in orange) with respect to the average performance. Performance differences can be observed primarily for the last third of the run whereas performance was similar for the remaining gates. Figure 5. Performance gain/loss analysis compared to average group performance. The cumulated performance difference to average group performance is shown for the two runs of athlete A4 (in blue) and A6 (in orange). Both athletes were faster than the group average. Each dot corresponds to one gate-to-gate time measurement where the value at position i + 0.5 refers to the gate-to-gate performance difference between gates i and i + 1. A negative performance means that the athlete was faster than the average performance. A negative slope means a gain of performance, whereas a positive slope means a loss of performance.
Discussion
In this study, a system was designed and validated to automatically detect gate crossings in alpine ski racing. Magnets were placed at each gate and a magnetometer fixed to the ski racers' lower back recorded the distortions created by these magnets. We proposed a peak detection algorithm besides an inertial sensor based magnetic field correction to reach high enough precision and accuracy to show the usefulness of the method for performance evaluation.
We reached 95% error intervals for gate-to-gate and section times below 0.025 s with respect to the reference system (Table 1). This is higher than the current alternative where a 95% error interval below 0.01 s was reached based on the skiing trajectory and a complex differential GNSS setup [10]. The higher error of our system might be explained by the relatively low sampling rate of only 166.7 Hz, limiting the time resolution of our system to 0.006 s (corresponding to a distance travelled of 0.084 m when skiing at 14 m/s), whereas Reference [10] used a spline interpolation to obtain a position measurement at every 0.001 m. Thus, by increasing the magnetometer's sampling frequency, the system's errors might be further reduced.
However, compared to the GNSS-based approach, the proposed system's setup is easier to use since it does not need a differential GNSS with a base-station and a surveying of each gate's position. Except for the magnets, no other extra hardware is required to be placed on the ski slope. Meaningful performance-related time differences within short sections have been reported to be in the range of 0.02 s and more [2][3][4][5]16,17]. Therefore, our system might be limited in reliably detecting time differences for very subtle performance changes. On the other hand, the proposed system is easy to use and proved to have an even lower setup time than photocells. Therefore, although maybe slightly less precise than photocells, such a system could find a broad acceptance among coaches for a regular use during daily trainings. To increase practical use in a training environment, collected data could be automatically downloaded over Bluetooth after each run and processed within a smartphone app. For real-time applications in a race environment, the collected data could be processed on the sensor and results wirelessly transmitted to base stations installed along the ski slope.
Another advantage of the proposed system was that timing information could be obtained for every single gate of the course, allowing a much more in-depth performance analysis for each athlete compared to a photocell-based time measurement system. Figure 5 illustrates well the information gain of the gate-to-gate timing. Both athletes were better than the average, however, in different ways. Ignoring a performance difference at the start, the athletes had a similar performance increase during the first two thirds of the race. However, for the last third, A6 gained in both of his runs significantly more time, while A4 was losing time. A traditional timing system with only one or two intermediate times over the 25 gates would not have been able to measure this difference in performance. For the practitioner, such an analysis might thus provide essential information about how the technical skills and tactical choices of each athlete influence the performance.
Potential System Limitations
At first glance, a potential limitation of the proposed magnet-based timing system might be the fact that the detectable distance between the magnetometer and gate is given by the magnet's specifications. The current setup allowed a maximum distance of approximately 1 m. Thus, it can only be used for the detection of gates at which the ski racers are passing closely. As a consequence, in the current study, not all gate crossings for the first and last gates could be detected. While in slalom and giant slalom skiing the athletes usually cross each gate closely, in super-G or downhill skiing the current setup might not work in all race situations. However, in such cases, multiple magnets could be buried under the snow surface along a perpendicular line that crosses the skiing line. Such a setup could also be used for marking the start and finish lines. Another potential limitation of the proposed timing system might be related to the fact that time during which a distortion is measured decreases for increased speed. Consequently, the peak-width is also decreased. However, the sampling frequency and peak detection algorithms were designed to work for speeds of up to 140 km/h allowing for measurement of the gate crossings for all skiing disciplines. Similar to what is already reported in Reference [10], it is expected that timing precision will increase for higher speeds, mostly due to the reduced peak-width, and thus, decreased uncertainty in the estimated peak location.
Limitation of the Study
A limitation of the present study was the small sample size of only eight athletes. However, 100% of the gates (excluding the first and last gates) were detected correctly and no outliers were observed, confirming the validity of the results. Nevertheless, for future applications in competitions, we advise the use of a traditional photocell-based system in parallel to allow a critical comparison with the proposed system.
Finally, it has to be pointed out that a major advantage compared to traditional photocell-based timing systems is that the proposed magnet-based system is not limited by the number of intermediate times and cannot have any wrong triggering caused by other team staff on and around the track. Thus, with the proposed system, gate-to-gate time performance analysis can be implemented and ski racers' performance can be tracked and compared at every gate along the entire course.
Conclusions
The proposed magnet-based timing system allowed for the accurate and automatic detection of ski racers' gate-to-gate times during a slalom race. More gates could be measured with a minimum of setup efforts compared to a traditional photocell system. To further reduce setup efforts, the magnet could be directly integrated into the gates' base. The errors were in the range of the minimum time resolution typically required for performance analysis in skiing (0.02 s), and therefore, the system could also be used instead of photocells during regular training sessions. The proposed setup (permanent magnet's strength and placement) may need to be adapted for super-G and downhill skiing where athletes may not cross the gates at a sufficiently small distance to record the magnetic field disturbance. The system allows for the identification of the effect of tactical choices and athlete specific skiing skills on performance and may therefore help the coaches to do a more specific performance analysis and feedback. | 4,981.2 | 2019-02-01T00:00:00.000 | [
"Engineering"
] |
How and why? Technology and practices used by university mathematics lecturers for emergency remote teaching during the COVID-19 pandemic
Abstract The COVID-19 pandemic led to closures of university campuses around the world from March 2020 onwards. With little or no time for preparation, lecturers turned to emergency remote teaching to continue to educate their students. Online mathematics education poses particular challenges in terms of both the hardware and software necessary for effective teaching, due to issues with mathematical symbols and notation, among others. In this paper, we report upon an online survey of 257 university mathematics lecturers across 29 countries, which explores what hardware and software they used for emergency remote teaching, for what purposes they used these and what training and support were made available to them at the time. We also consider what approaches they took to emergency remote teaching and what were their reasons for this.
Introduction
The first wave of the COVID-19 pandemic in late February/early March 2020 led to a series of closures of university campuses across the globe, with many lecturers turning to emergency remote teaching to continue to educate their students. While a wealth of research and experience exists in the general area of online teaching, it is important from the outset to note the difference between online teaching and emergency remote teaching, with the latter best characterized 'as a temporary shift of instructional delivery to an alternate delivery mode due to crisis circumstances' (Hodges et al., 2020). During the initial closures of university campuses, many were mid-way through semester block of teaching and had to change their mode of delivery over the course of a weekend. Beyond the initial months of the emergency remote teaching what many lecturers have been engaged in can be described as blended (Graham, 2006), hybrid (Snart, 2010) and distance learning (Moore et al., 2011), but this article is solely concerned with the emergency remote teaching that took place during the early months of the pandemic.
Teaching mathematics remotely presents specific challenges related to the nature and symbolic notation of the subject, among other challenges (Trenholm & Peschke, 2020;Glass & Sue, 2008). Therefore, the aim of this research was to investigate how and why large numbers of mathematics lecturers in higher education used particular technology to adjust to emergency remote teaching during the initial months of the COVID-19 pandemic. Specifically, this paper addresses the following research questions: (1) What hardware and software were most commonly used by mathematics lecturers before the pandemic compared with during emergency remote teaching? (2) Why did lecturers choose to conduct live online sessions, pre-recorded sessions or other approaches during emergency remote teaching? (3) What training and support did staff receive in the use of hardware and software?
In order to address these research questions, an online survey was designed and distributed to a wide range of mathematics lecturers in May-June 2020 as detailed below.
Background
Research into the impact of the COVID-19 pandemic upon university education is starting to emerge, with several publications focusing on how lecturers and students reacted to emergency remote teaching, either in individual universities or regions (Bao, 2020;Bawa, 2020;Jena, 2020;Oyediran et al., 2020;Rahiem, 2020) or specific subject areas, particularly in STEM fields (Alqurshi, 2020;Barton, 2020;Delgado et al., 2020;Jabbar et al., 2021;Tan et al., 2020). There has been, as of yet, only a small number of publications in relation to the teaching and learning of mathematics in higher education during the COVID-19 pandemic although more are beginning to emerge. Several of these papers have focused on the education of prospective mathematics teachers and how the students adapted to the use of digital resources for mathematics (Mulenga & Marbán, 2020a, 2020bNaidoo, 2020). A number of articles suggest alternative approaches to assessment in mathematics that are appropriate for remote teaching (Iannone, 2020;Jungic, 2020;Videnovic, 2020;Seaton, 2020). Other reports detail how mathematics support in higher education adapted during the emergency remote teaching (Hodds, 2020;Johns & Mills, 2021), highlighting in particular the 'drastically reduced' numbers engaging with the services during this time. Others consider how the community of mathematics post-secondary educators can learn from their experience of this time (Garaschuk & Jungic, 2020). An excellent resource for mathematics lecturers has been created in the UK (www.talmo.uk), which provides community-led online workshops and resources to assist lecturers with their practice.
Prior to the COVID pandemic, a growing body of research was emerging regarding the use of technology in mathematics lecturing in higher education (Trenholm et al., 2012;Juan et al., 2011;Oates, 2011), including areas such as the use of mathematics-specific software (Zambak & Tyminski, 2020;Hernández et al., 2020;Caglayan, 2018), investigations into student and lecturer preference/usage of various mathematical resources (Ní Shé et al., 2017;Inglis et al., 2011) and explorations of approaches to mathematical problem-solving using technology (Abramovich, 2014;Kay & Kletskin, 2012). The question of providing recorded versions of live mathematics lectures for students has been explored for over a decade at this point (Cascaval et al., 2008;Yoon & Sneddon, 2011) and is of particular relevance in relation to the stark choices that academics faced during emergency remote teaching due to COVID. In a study in which students had the choice of attending lectures or watching video resources (or both) in a first-year business mathematics module in Ireland, the majority chose the video resources, although those with higher lecture attendance in general received higher marks in the module (Howard et al., 2018). Another study which considered students from five different mathematics modules in which both live lectures and video recordings were provided found that those who attended lectures in person considered video recordings of the lectures to be inferior to the live version (Yoon et al., 2014), although lecture attendance was about 35% of the total cohort. Similarly, in a study comparing students in the UK and Australia, it was found that lower lecture attendance rates coupled with higher usage of video resources resulted in greater surface learning (Trenholm et al., 2019).
Oftentimes, higher education lecturers reject the use of technology as unnecessary to their teaching until some powerful new incentive appears (Englund et al., 2017). At this point, training and support for those adopting new technology, as well as positive initial experiences, are of paramount importance (Heinonen et al., 2019;Jääskelä et al., 2017). Prior research has shown that mathematics lecturers may not even be aware of the choice of additional hardware (e.g. visualizers, audio recording devices) accessible to them for teaching purposes within their own university, and there is a need for greater communication of such resources (Wood et al., 2011). In addition, there is some evidence to suggest that mathematics lecturers are particularly wedded to their traditional form of lecturing (Sfard, 2014) and have been slow to embrace online teaching (Lokken & Mullins, 2014), with Engelbrecht & Harding (2005) questioning whether this was even possible in a discipline like mathematics. Maclaren (2014) postulates that the reluctance of mathematics lecturers to embrace newer technologies in place of a physical blackboard/whiteboard may be related to their perception that the trade-off is not worthwhile. Indeed, even the standard qwerty keyboard is tailored for text-based disciplines but does not easily translate into usage for representations of mathematics in an online environment (Trenholm & Peschke, 2020). Emergency remote teaching forced lecturers to adapt at short notice to teaching online, and so we explored what they used and for what purpose when they did so.
Materials and methods
In order to address the three research questions highlighted above, we designed and implemented a survey for mathematics lecturers in higher education.
Survey instrument
The online survey instrument began with a series of profile questions to determine the age profile, gender, country in which the respondent currently worked, years of experience teaching mathematics in higher education and current employment status. Following further profiling questions about class sizes, contact teaching hours and modules taught, there were six other sections in the survey (for the sections on assessment, see Fitzmaurice & Ní Fhloinn, 2021
Data collection
Ethical approval to conduct the study was granted by one of the author's local ethics committee. The survey was conducted exclusively online using Google Forms and was emailed to mathematics department mailing lists and advertised via online conferences in mathematics education.
Data analyses
The quantitative data were analysed using Excel. General inductive analysis (Thomas, 2006) was employed to code the qualitative data. Coding was undertaken by both researchers independently. The codes were then compared and re-coded where necessary to ensure reliability. Throughout this paper, N is used to report the total number of respondents to a given question, while n is used for the number who selected a certain answer for a given question.
Sample
A summary of the profiling questions can be seen in Tables 1 and 2. There were 257 respondents. There was a fairly even breakdown of gender among the respondents, which would not be typical of surveys of mathematicians, given the fact that there are more male academic mathematicians than female. However, the survey was specifically emailed to a mailing list for female mathematicians, which contributed to the higher proportion of females answering the survey. The age of respondents showed a good spread with lower proportions under the age of thirty as would be expected among the lecturing population, and their years of experience teaching mathematics in higher education reflected this. The vast majority of respondents were in permanent employment. By far, the highest proportion of respondents was based in Ireland, which is where both researchers are also based, and almost all respondents were based in Europe at the time of answering. However, in total, respondents from 29 different countries took part in the survey. These are displayed in Table 2. The countries in which only a single respondent was based are the following: Austria, Czech Republic, Denmark, Faroe Islands, Malta, Republic of Moldova, Sweden and Switzerland are listed collectively in the table (Other). Six respondents did not provide a base country (Not provided). Note that otherwise the table is a compilation of the exact responses given by respondents; for example, we have not collated England, Scotland, Wales and Northern Ireland with the general heading of UK (region not specified) as the education systems are not identical in each of these four countries.
The subjects being taught by the respondents could influence their approach to online teaching or their experience of this, particularly in terms of assessment (Iannone & Simpson, 2011), or whether they were teaching students specializing in mathematics (who would be studying many mathematics modules online) or students taking only a single mathematics module. Respondents were asked to list all courses they were teaching. Almost three-fifths of respondents were teaching students who were majoring in mathematics, with just over half of respondents teaching service-mathematics students (those studying one or more mathematics modules as part of their degree programme but not specializing in mathematics). How many contact teaching hours (total per WEEK) did you expect to have this semester (pre-COVID pandemic)? The number of scheduled contact teaching hours that respondents had planned to do per week, prepandemic, was of interest as it would give a rough indication of the teaching workload involved. Figure 1 shows this for the 247 responses, with the vast majority of those teaching 8 hours a week or fewer.
Finally, the class sizes involved could impact upon a lecturer's approach to emergency remote teaching and so respondents were asked to indicate the class sizes with which they were dealing with small being up to 30, medium being 30 to 100 and large being over 100. Of the 256 responses, 59.4% had small classes, 50.4% had medium classes and 22.7% had large classes. (1) The current country of employment of 93% of respondents is somewhere in Europe with 30% of respondents in Ireland, meaning that the results may not be generalizable to other continents. (2) The survey was only available in English and was conducted online, advertised via mailing lists and online conferences. There is no way of knowing how representative a sample it is of mathematics lecturers. (3) The survey was undertaken during May/June 2020. At this point, some universities may not have completed their semester or examinations, which would impact some of the answers. (4) Many respondents were in different stages in their academic year depending on the country in which they were located.
Results: hardware and software used by mathematics lecturers
To gain an insight into what hardware and software mathematics lecturers used during emergency remote teaching due to the COVID-19 pandemic, it was firstly important to ascertain what they used prior to this. As such, respondents to the survey were initially asked about the hardware/software they used prior to the pandemic and for what purposes, before then being asked the same question but for during emergency remote teaching. The full results are presented separately for hardware and software, to enable an easier comparison between the two time periods involved. Figure 2 shows the responses to the questions 'What types of hardware did you use in the workplace before the pandemic/during emergency remote teaching?'. Respondents were asked to indicate for what purposes they used the hardware in question, with three options available to them: for preparing materials, for teaching and for communicating with students. It can be seen from Fig. 2 that there was very little use of different types of hardware prior to the pandemic, with almost all respondents indicating use of a laptop and only one quarter of respondents also using a visualizer (or document camera) for teaching purposes. By contrast, as might have been expected, a far wider variety of hardware was utilized by those involved in emergency remote teaching. Laptop usage was almost universal, with high usage of webcams, external microphones and stylus pens also reported. Visualizer usage for teaching had decreased, most likely due to lack of access.
Hardware used before and during emergency remote teaching
In terms of the 'other' hardware used in various situations, the most common response (n = 29) was the use of a PC instead of a laptop, followed by a blackboard/whiteboard (n = 15). The other responses mostly would have been encapsulated in those shown in the figure or else were mentioned by only a couple of respondents.
Software used before and during emergency remote teaching
Having established what hardware was in use, respondents were then asked about their use of software both before the pandemic and during emergency remote teaching. Again, they were also asked for what purposes they used this software. The results are shown in Fig. 3 The virtual learning environment (VLE) in their university was in use for all three purposes prior to the pandemic, with LaTeX also featuring strongly in the preparation of materials. PowerPoint also featured for both preparation and teaching purposes. During emergency remote teaching, the list became more diverse with communication systems such as Zoom, Skype, MS Teams or Blackboard Collaborate showing a wider usage.
In terms of 'other' software used, there were 101 different software listed by respondents, but 87 of these were mentioned by four respondents or fewer. The others are shown in Table 3 below.
Selection of technology
Respondents were asked a general question regarding who chose the technology that they used and were allowed to select as many options as were applicable: 81.3% (n = 208) of respondents said they chose it themselves, 21.5% (n = 55) said decisions were made at departmental level, while 53.1% (n = 136) said it was at university level. From comments made throughout the survey, this would seem to indicate that the department/university chose what paid technologies to make available, and most lecturers were then in a position to choose which technology they would use from what was available.
Results: remote teaching practices
Having looked in Section 4 at a macro level at the ways in which the hardware and software were used by mathematics lecturers, both before the pandemic and during emergency remote teaching, we then focussed on uncovering the specific forms of teaching mathematics lecturers pursued during the latter. Almost three-quarters of respondents (n = 189) gave some form of live online sessions, and over threefifths produced pre-recorded sessions (n = 156). Just over two-fifths undertook both live online sessions and pre-recorded sessions (n = 104). There were no significant differences between male and female lecturers in terms of which form of online teaching they undertook, but some differences were observable between lecturers in varying age groups, as shown in Fig. 4. In general, lecturers were less likely to use pre-recorded sessions as they got older and slightly more likely to do neither live online nor pre-recorded sessions. It should be remembered, however, that there were smaller numbers of the 20-29 years and 60+ years cohorts within the overall sample, and so these results must be interpreted with this in mind. About three-quarters of respondents conducted lectures online (n = 193), almost 68% gave online tutorials (n = 174) and nearly 17% did online computing labs (n = 43). Figure 5 shows the breakdown of what combinations of teaching sessions were undertaken. There were no significant differences between lecturers of different genders or age groups in this case.
In order to further explore the reasons for choosing to give live sessions versus pre-recording material, respondents were asked to comment further on these issues, as reported below.
Live sessions
A total of 203 of the 257 who responded about whether or not they gave live online sessions commented further to explain why. Of these 203 commenters, 71.4% had given live sessions. The comments were split into those made by respondents who conducted live sessions, and those made by those who did not, as shown in Fig. 6. Comments were coded under more than one theme, where applicable.
The most common theme among those who had conducted live online sessions was that of facilitating 'Question and Answer' sessions (n = 35, 17%) with most highlighting the importance they placed upon 'giving students the opportunity to ask questions'. Closely linked to this was the lecturers' desire to make sessions 'interactive' (n = 24, 12%) ('I believe that interactions with students are essential') and their overall feeling that offering live sessions was the 'most similar' to what students were used to (n = 20, 10%) ('All sessions were live as I felt it was best to try and normalize the experience as much as possible'). Many also mentioned that 'communication' was an important part of their teaching (n = 15, 7%) ('personal communication is absolutely necessary to teach the type of material I taught for undergrads') and that this was facilitated by live sessions. Another theme to emerge was that of 'student support' (n = 11, 5%), with respondents feeling that 'the psychological impact of live sessions helped my students greatly'. The theme of 'assessment' emerged also (n = 11, 5%) with respondents using live sessions to conduct student presentations ('in this course the students are supposed to present their seminars'), group-work assignments or project supervision. Some felt strongly that regular live sessions were useful in providing a 'structure' with which students were familiar (n = 10, 5%) ('I felt it would help students stay motivated and structure their time if there was still a schedule to follow'). Finally, lecturers found live sessions important 'to get direct feedback from the students' (n = 7, 3%) but also acknowledged that they were particularly suitable for a 'small class' group (n = 5, 2%) ('Had a small Masters class. So Teams meeting seemed the best way to go. For my other module, there were 180 students. Felt that was too big for the meeting format').
There were four common themes across those who had and had not given live sessions: for example, respondents from both groups mentioned being given a 'university directive' to either give live online sessions or not to (live: n = 7, 3%; not live: n = 7, 3%). Similarly, 'student preference' was a deciding factor for both groups (live: n = 10, 5%; not live: n = 5, 2%). In terms of 'engagement' (live: n = 11, 5%; not live: n = 3, 1%), those who did live sessions 'felt it was very important, both academically and from a well-being perspective, to keep the students engaged', while those who did not avoided them due to 'uncertainty about participation rate'. The most frequent common theme between both groups was that of their approach being the 'best approach' (live: n = 11, 5%; not live: n = 15, 7%), with those who did live sessions stating 'this was the best thing given the nature of some of my teaching' and those who did not feeling that there was not 'any great pedagogical advantage versus recorded lectures'.
There were six other themes identified among the responses of those who did not offer pre-recorded sessions. Most prominent among these was 'poor internet' (n = 10, 5%), with lecturers concerned about the low quality of their own internet connection or that of their students. Other respondents felt that it was important to allow students to engage with material in their 'own time' (n = 9, 4%), rather than having scheduled sessions ('I thought best . . . that students could participate when it suited them best. This was to try to reduce any pressure on them'). Some lecturers spoke of 'technology difficulties' that prohibited them from conducting live sessions (n = 7, 3%) ('the technology (tablet+mic+camera) adds sufficient extra difficulty that I do not think it would be manageable'), while others mentioned the fact that their students live in different 'time zones' 'who would not have been able to tune into the live sessions' (n = 6, 3%). Finally, 'large class' sizes dissuaded some from live online sessions (n = 4, 2%) ('class is too large for live session (400 students)'), along with 'lack of equipment' (n = 4, 2%), particularly during the initial weeks ('I did not have a visualizer to show students calculations in real time. I did not have a tablet for several weeks').
Pre-recorded sessions
A total of 201 of the 254 who responded about whether or not they did pre-recorded sessions commented further to explain why. As above, the comments were split into those who had conducted pre-recorded sessions and those who had not, as shown in Fig. 7. Again, comments were coded under more than one theme, where applicable.
A large number of themes emerged from those who pre-recorded material. Chief among these was the 'flexible' nature of such content (n = 34, 17%), both for students ('Allowed students to work when suited them') and lecturers themselves ('since I am at home with my family, it was easier to work at any particular time, rather than following the school's timetable'). The next most common theme was that of pre-recorded sessions being the 'best approach' under the circumstances (n = 18, 9%) ('It seemed like the best available substitute for live lectures'), closely followed by the fact that others intended these pre-recorded sessions as a 'supplement' for other approaches (n = 14, 7%) ('For higher level modules, where additional materials are not easily available online from other sources, I though it is of benefit to offer short focussed videos on key concepts'). Other respondents found pre-recorded sessions to be the 'easiest' approach (n = 11, 5%) ('Since I have an iPad and my institution already subscribes to Explain Everything, recording pencasts was very easy to do and was an obvious tool to use'), with others mentioning the ability for students to 'replay' the material as a clear advantage (n = 10, 5%) ('Students can pause and replay if they do not get the idea at once'). Different 'time zones' emerged as a theme for a number of respondents (n = 10, 5%), with pre-recorded sessions allowing them 'to address students being in multiple time zones'. For some respondents, pre-recorded sessions ensured that their teaching was of 'better quality' (n = 8, 4%) ('I would demand of myself that such videos were more polished than a livestream. Even though those are also recorded'), while for others, this approach was 'already in use' (n = 8, 4%) ('I have always used short podcasts') and as such, it made sense to continue to incorporate these during remote teaching. Fear of 'technical difficulties' during live sessions drove some towards pre-recording (n = 7, 3%) ('Technical issues made me fear that I couldn't do anything reliably at the beginning'), while others reported that it was the 'student preference' (n = 6, 3%) ('I had some positive feedback from students about having both the video and the annotated slides from the end of the video. They could look over the annotated slide to see if they understand what was done in the video, and then watch the video, knowing that they might need to pay special attention to some parts, or be able to skip over parts that already make sense to them'). The sense that students 'need to see calculations develop' was put forward as another reason for pre-recorded sessions (n = 6, 3%), while other respondents stated that it was a university directive (n = 5, 2%), either because 'initially, the university could not provide services for live sessions' or they 'preferred us to upload recorded classes instead of performing live, so as not to overload the university server'.
There were only two common themes between those who pre-recorded sessions and those who did not: 'poor internet' (pre-recorded: n = 11, 5%; no pre-recorded: n = 5, 2%) and 'interaction' (pre-recorded: n = 3, 1%; no pre-recorded: n = 7, 3%). In terms of those who pre-recorded sessions, most of those who mentioned internet issues were 'concerned that poor broadband would negatively impact on student engagement with live lectures' while those who did not pre-record were either unable to upload recordings due to their own poor internet speed or worried that students would not be able to download them for similar reasons. In terms of interaction, those who pre-recorded mentioned low interaction levels in lectures, so they felt there would be little lost by simply pre-recording while those who did not pre-record spoke of how they 'believe that interactions with students are essential' and felt pre-recording would not allow for this.
There were four other themes identified among the responses of those who did not do pre-recorded sessions. The most common of these was that respondents felt they were 'unnecessary' (n = 15, 7%), and they did not comment further on this. Several respondents mentioned preferring to do 'live sessions' instead (n = 10, 5%) ('Live sessions seemed to me better since that way I could speak directly with students and answer to their questions and preoccupations'), while others specifically referenced 'recording live sessions' as an alternative approach (n = 9, 4%) ('Live sessions were recorded, which (in part) did away with the need for pre-recording'). Finally, the 'time-consuming' nature of pre-recording also emerged as a theme (n = 9, 4%) ('Generating additional pre-recorded material would have required significant additional time').
Alternative approaches
From those who did not offer lectures, tutorials or computing labs online, there were a number of alternative approaches taken, with 32 respondents providing feedback on this. The most common approach (n = 11, 34%) taken was to conduct online assessment or give students exercises to complete and submit for feedback. Next most popular (n = 10, 31%) was to help students on an individual basis, either through email or video consultation sessions. Another popular approach (n = 9, 28%) was to provide videos for the students, made available through their VLE, YouTube or directly emailed to students. Nine respondents (28%) provided lecture notes, which they sent to their students for self-study, though a couple mentioned that they only took this approach because their module was almost completed and there was little content remaining to be covered. Finally, a small number (n = 4, 12%) of respondents set up discussion forums for student use.
Training
Respondents were asked about the formal training they received in the use of any technology. 44.9% of respondents (n = 115) stated that they were offered formal training in the use of technology by their employer prior to the pandemic, with 54% (n = 62) of these availing of such training. Once the pandemic began, 65% of respondents (n = 166) were offered formal training, and 45% (n = 74) participated in this. Of this pandemic-related training, 10.3% (n = 12) stated that the training took place at the outset of the pandemic just prior to emergency remote teaching, 56% (n = 65) stated it took place during emergency remote teaching, with 33.6% (n = 39) stating training was available both before and during emergency remote teaching.
For those who did not give tutorials and/or computing labs online themselves, 53% had a teaching assistant do so instead (N = 103). In terms of training these teaching assistants in the use of the necessary software, the responsibility for this lay in a variety of areas: 35% (n = 22) of respondents said it was their responsibility as lecturer, a further 32% (n = 20) said it was the teaching assistant's own responsibility, while 19% (n = 12) stated it was up to the university and 14% (n = 9) that it was up to their department.
Technical support
When asked if they had access to technical support if needed during emergency remote teaching, 92.5% (n = 234) of respondents stated that they did, with 5.9% (n = 15) having no such access. However, numerous comments under this question stated that the technical support was overwhelmed at this point in time ('In theory yes, but have received no help. Staff were either overloaded and could not help or just weren't able to do what I requested'). A total of 55.1% (n = 141) of respondents stated that they experienced technical difficulties during online teaching that took place during the early period of the COVID-19 pandemic. The most common technical difficulties encountered at this time were poor internet connection (n = 67, 26%), software issues (n = 60, 24%), hardware problems (n = 30, 12%), communication difficulties (n = 22, 9%) and technical difficulties for students (n = 22, 9%). Poor internet connections were generally mentioned in relation to issues providing live online classes ('very poor internet connection from my house . . . making it impossible to run live lectures'). Software issues ranged from 'trouble with installation of programmes', to 'unfamiliarity with software -needed to learn how to master it', to software errors, particularly in relation to assessments ('issues with online assessment of maths via BrightSpace-no records being sent to the GradeCentre'), and finally, software deemed not fit for purpose ('I followed the advice of the university on what software and hardware to use to teach a highly-interactive class. The advice was only theoretically good. In practice, the proposed combination is not workable'). Hardware problems are usually related to old equipment ('very old laptop that I have to have an icepack under else it overheats'), lack of equipment ('no printer/scanner available in home office') or unsuitable equipment ('the external webcam does not work well to shoot on my handwriting'). Communication difficulties often referred to problems with Zoom/Teams/Adobe Connect ('issues with Zoom using dual monitors with different resolutions'), although a few respondents also had difficulties with their email system ('earlier I did not use my University mail system, it was difficult to get passwords etc.'). Finally, technical difficulties for students resulted from 'students without access to stable internet' or lack of hardware/software to participate as needed in the class.
When asked to list all those who resolved these difficulties for them, there were 139 responses of which 53% said they resolved their difficulties themselves, 26% stated it was technical support, 25% with issues still outstanding and 9% had help from colleagues or family members.
Discussion and conclusion
In this paper, we posed three main research questions. The first of these is related to the hardware and software used by mathematics lecturers both before the pandemic and during emergency remote teaching. Over four-fifths of lecturers were in a position to choose the hardware/software they used themselves, allowing them a large degree of autonomy in this regard, although they were usually making their selection from a pre-determined menu of options in relation to anything that was not free to use. Prior to the pandemic, there was low usage of different types of hardware, with laptops the only exception to this, with 88% of respondents using these for some purpose (additionally, it should be noted that some respondents reported using PCs instead).Visualizers, stylus pens and smartphones saw some low levels of usage for particular purposes with 27%, 17% and 18% usage, respectively, although the low usage could have been due to lecturers' lack of knowledge of how to access these, as was the case in Wood et al. (2011). In fact, only two-fifths of respondents reported using laptops for teaching purposes and a third did not report using any form of hardware for teaching purposes, with many of these relying instead on physical blackboards/whiteboards ('Before the closure, I used a chalkboard in all of my classes'). This echoes the findings of Billman et al. (2018), who found in their study of mathematics lecturers in a South African university that '(d)espite this remarkable increase in technology usage for teaching, half of the teaching staff still prefers the use of a chalkboard to technology for teaching'. Indeed Greiffenhagen (2014) argues that the blackboard has an 'almost iconic stance' in mathematics, due to the nature of mathematics and the importance lecturers attach to the visibility of the 'process of mathematical reasoning' it provides. This is backed up in our study by the observation that, among our respondents, there was little discernible difference between age groups or experience levels in terms of their reliance upon physical blackboards, with between 10% and 14% of 30-39, 40-49 and 50-59 year olds explicitly mentioning their use of physical blackboards at various points during the survey and 6% and 5%, respectively, of 20-29 year olds and 60+ year olds.
During emergency remote teaching as might be expected the types of hardware in use both diversified and increased considerably, as well as the range of functions for which the hardware was being used. The percentage of respondents reporting no hardware usage for teaching reduced from a third to 3.5%. However, respondents reported difficulties accessing hardware at short notice, with numerous cases of lecturers resorting to borrowing ('I needed a tablet and pen but the shops were closed. I borrowed') or relying on personally owned equipment in their home ('Had I not had a number of devices at home, personally bought, then I would not have been able to perform my role as successfully as I had'). Respondents in many cases aimed to replicate the experience of having access to a physical blackboard/whiteboard, the format most familiar and comfortable to a large number of mathematics lecturers (Artemeva & Fox, 2011). Despite this, the number using visualizers or even make-shift visualizers based on adapting their smartphones (e.g. University of Oxford, Mathematical Institute, 2020) remained surprisingly low, suggesting that lecturers may not have been aware of this possibility.
In relation to software, the results were similarly sparse before the pandemic with 76% of respondents using their institutional VLE for some purpose, 74% using LaTeX and 28% using PowerPoint. Here, the low numbers using PowerPoint may be as a result of its perceived ineffectiveness for teaching mathematics, between the difficulty of representing mathematics on the slides and the static nature of this mode of presenting (Loch & Donovan, 2006). Indeed during emergency remote teaching, the numbers using PowerPoint for teaching purposes actually decreased slightly to 26%, showing it was not a perceived solution during emergency remote teaching to online teaching in mathematics. The number using their institutional VLE increased slightly to 79% during emergency remote teaching. However, it has been observed that VLE usage in those moving initially to online teaching can simply be an attempt to create a digital version of an existing paper-based resource, without any further alteration for the online environment (Borba et al., 2016). Our figures show that lecturers were more likely to use their VLE for communication purposes with students rather than for teaching purposes, suggesting that while they were making use of the various communication tools within the VLE, they were less inclined to use (or less familiar with) the teaching technologies therein. Large increases were reported during emergency remote teaching in communication software usage as might have been expected with 39% now using Zoom, 29% using Teams, 17% using Skype and 14% using Blackboard Collaborate. There was a far greater diversity in the range of 'other' software used by respondents but no strong trends towards particular software emerged. A total of 18% of respondents used some 'other' software before the pandemic and this increased to 28% during emergency remote teaching, but respondents encountered a range of difficulties in implementing and adapting to new software: 'There was a VERY steep learning curve in obtaining the necessary software, learning how to use it and getting the laptop connected to successfully continue with the classes' and 'One software allows you up to so many users but your class is too big, the other has issues with security, the third is better for recording but not real-time teaching'. This feedback suggests that many respondents did not have the 'positive initial experience' that Heinonen et al. (2019) identified as being so valuable when introducing new software for teaching.
Our second research question aimed to delve deeper into the purposes for which lecturers used this hardware and software by exploring why lecturers opted to conduct live online sessions, pre-recorded sessions or alternative approaches. The most common approach was to give live sessions online with pre-recorded sessions also a prominent approach, and many respondents doing both. Only a very small number (n = 8) of respondents mentioned having used pre-recorded sessions previously, and yet 156 developed pre-recorded sessions during this period, bearing out the observation of Englund et al. (2017) that a powerful new incentive can lead lecturers to use technology they would previously have considered unnecessary. The importance placed by lecturers on interactivity and the ability to ask and answer questions in real time emerged strongly in the responses, and this was reflected in student reasons given for their preferences for live lectures in the work of Yoon et al. (2014). Indeed, strong justifications were given by respondents both in favour of and against providing live online sessions, with similar responses for pre-recorded sessions. These responses mirror in many ways the opposing opinions expressed in relation to the recording of live in-person lectures, highlighted earlier (Howard et al., 2018;Trenholm et al., 2019;Yoon & Sneddon, 2011). At the time of the survey, the impact of one choice over another is not yet clear in this situation, where many lecturers and students were living in lockdown conditions for at least a portion of this time and where assessments also had to change at short notice. However, it is worth noting that respondents from both sides deemed their approach to reflect 'student preference'. It should be observed that students do not always choose the most effective strategies for their own study (Bjork et al., 2013) and 'popular' choices may not necessarily result in the most learning, as evidenced by the low correlation between student evaluations of teaching and evidence of learning (Uttl et al., 2017). The observation made by Trenholm et al. (2019) about the higher level of surface learning associated with those who chose to engage only with video resources when live in-person lectures were available may not correspond to this cohort of students, given the particular circumstances in play, and the fact that students may have only had one option available in their module. Yet, engagement levels emerged as a strong concern among lecturers, regardless of the approach they took, with some observing that 'viewing figures for pre-recorded sessions was very poor'and others that 'tutorials were offered by our tutors but not many availed of them' or that 'I offered "class time" for answering questions etc. There was very little takeup'. This echoes the experience of those involved in mathematics support at that time, who universally reported far lower engagement figures with their services during this period (Hodds, 2020). This lack of engagement with online teaching in mathematics is not unique to emergency remote teaching, with Glass & Sue (2008) reporting that, although students in their fully online mathematics module engaged well with homework, they placed little value on any discussion forums or similar. Many lecturers spoke of the benefits of the flexibility of pre-recording sessions, both for themselves but primarily for their students, some of whom were in different time zones or shouldering known and unknown responsibilities at the time. It should be noted that, across many of the themes that emerged in responses to questions regarding the purpose for which technology was used, a concern for their students and a desire to provide the best possible educational experience for them under the circumstances permeated many of the comments made by respondents, with one simply observing 'I miss my students'.
Our final research question focused on the training and support that lecturers received in any hardware or software they used. There was a steep learning curve involved, one which is often made more difficult in the case of mathematics-specific software by the different requirements of software packages when representing mathematical language (Smith et al., 2008). Effective support of lecturers moving to a fully online environment has been hypothesized to rely on three components: administrative support, peer support and professional development (Covington et al., 2005), and it was the last of these that we particularly investigated here. The percentage of respondents who were offered professional development opportunities in use of educational technology was relatively low, given the circumstances, with less than half reporting any training available prior to the pandemic and only two-thirds during emergency remote teaching. Given the importance of effective training and support to ensure lecturer engagement with new technologies of this kind (Jääskelä et al., 2017;Heinonen et al., 2019), it is vital for universities to support their teaching staff in this manner in the event of any sudden closure. It is also incumbent upon the lecturers to avail of such training as befits their needs, with less than half reporting engaging with the training offered by their university during emergency remote teaching. It would be of interest in the future to investigate their reasons for non-engagement: whether they were personal circumstances during emergency remote teaching or whether the training was not bespoke to technology relevant to mathematics and therefore perceived not to be of use. The vast majority of respondents reported having technical support available to them during this time period, even if it was, as might have been expected, often overwhelmed with the number of requests. It is worth noting however that at the time of this survey some 3-4 months after the beginning of emergency remote teaching, 25% of those who experienced technical difficulties had not yet had these resolved. This may, in part, have been due to the fact that many were still in a state of lockdown and there were difficulties with accessing in-person support with hardware or wifi provision but also points to a difficult transition to emergency remote teaching for a significant portion of respondents.
The results of this survey provide a snapshot into a unique period of emergency remote teaching, as mathematics lecturers adjusted their teaching practices, making use of whatever hardware and software were available to them and their students in the circumstances of emergency remote teaching. The creation of a student-centred environment which was both interactive and engaging was mentioned in some format by the majority of respondents. However, the tension between replicating a teaching and learning environment as close as possible to that with which they were most comfortable prior to emergency remote teaching, under conditions that were less than conducive to teaching and learning, was evident in the responses given. Adequate access to hardware/software, and professional development in the use of ICT to teach mathematics, will be a requirement in all universities, as it is likely in the future that there will be a move towards more online provision of teaching both for the duration of the COVID pandemic and beyond (Blankenberger & Williams 2020). We believe that it is simultaneously incumbent on universities to provide hardware, software and training and for staff to uptake and participate. Consideration of student access is critical.
Further research into best practice is also necessary. We plan to reissue our survey a year after the original survey was distributed to ascertain the developments within online teaching of mathematics during this time. A spontaneous move to remote teaching is a significantly different experience to online teaching with a year's experience. A comparison of the initial starting point as reported in this paper, with the developments over the course of a year, should provide new insights into the extent of this difference.
Types of Technology
This section explores what technology types you used (1) What types of hardware did you use in the workplace before the pandemic? (5) What types of software did you use during the university closure? | 10,572.8 | 2021-10-04T00:00:00.000 | [
"Education",
"Mathematics",
"Computer Science"
] |
Effective Gibbs State for Averaged Observables
We introduce the effective Gibbs state for the observables averaged with respect to fast free dynamics. We prove that the information loss due to the restriction of our measurement capabilities to such averaged observables is non-negative and discuss a thermodynamic role of it. We show that there are a lot of similarities between this effective Hamiltonian and the mean force Hamiltonian, which suggests a generalization of quantum thermodynamics including both cases. We also perturbatively calculate the effective Hamiltonian and correspondent corrections to the thermodynamic quantities and illustrate it with several examples.
Introduction
There are a lot of physical models which use averaging with respect to fast oscillations one way or another. For example, many derivations of master equations use secular approximation directly ([1] Subsection 3.3.1), ([2] Section 5.2) or as result [3,4] of perturbation theory with Bogolubov-van Hove scaling [5,6] (see also corrections beyond the zeroth order in [7]). Moreover, there is a wide discussion of the applicability of the rotating wave approximation (RWA) and the systematic perturbative corrections to it in the literature [8][9][10][11][12][13][14][15][16][17]. However, in this work, we consider such averaging not as an approximation but as a restriction of our observation capabilities. In addition, we analyze the thermodynamic equilibrium properties of a quantum system, assuming such restrictions. Due to this averaging, the thermodynamic equilibrium properties can be defined by some effective Gibbs state, which is averaged with respect to these fast oscillations, instead of the exact Gibbs state. Similarly to strong coupling thermodynamics, this effective Gibbs state can be defined by some effective temperature-dependent Hamiltonian, which is an analog of the mean force Hamiltonian (see, e.g., ([18] Chapter 22), [19,20] for recent reviews).
In Section 2, we describe the setup of our problem and develop a systematic perturbative calculation for the effective Hamiltonian. We show that the zeroth and the first term of the expansion coincide with the RWA Hamiltonian and, in particular, are temperature independent. In this point, it is similar to effective Hamiltonians also arising as corrections to the RWA but in dynamical and non-equilibrium problems. The second-order term is temperature-dependent. We show that both this term and its derivative with respect to the inverse temperature are non-positive definite.
In Section 3, we show that this definiteness is closely related to the positivity of the information loss due to the fact that we have access only to the averaged observables discussed above rather than all possible observables. We show that information loss leads to energy loss, which is hidden from our observation. We prove (without perturbation theory) that these losses are always non-negative, but in the leading order, they are defined by the second-order temperature-dependent term in the effective Hamiltonian expansion. Additionally, we prove that exact non-equilibrium free energy is always larger than the free energy observable in our setup. If one assumes that the effective Gibbs state is an exact state, then this difference is also defined by the second-order term of the effective Hamiltonian expansion. At the end of Section 3, we argue that the analogy between our effective Hamiltonian and the mean force Hamiltonian is because they are special cases of the general setup, based on so-called conditional expectations.
To dwell on this analogy, in Section 4, we consider a compound system and the mean force Hamiltonian of one of the subsystems for the effective Gibbs state discussed above. We also give the systematic perturbative expansion for it.
In Section 5, we consider several simple examples to illustrate the results of the previous sections. Namely, we consider two interacting two-level systems, two interacting oscillators and a two-level system interacting with the oscillator. We calculate the effective Hamiltonians for such systems and the information losses due to the restriction to the averaged observables.
Both the effective Hamiltonian we define in this work and the explicit perturbative expansion for it are novel, but such a Hamiltonian has much in common with the mean force Hamiltonian (see the end of Section 3 for a more precise discussion). The main difference consists of the choice of a projector. Thus, our results suggest the possibility to generalize equilibrium quantum thermodynamics to effective equilibrium quantum thermodynamics by different choices of the projector.
Effective Hamiltonian
We are interested in equilibrium properties of fast oscillating observables which are in resonance with the free Hamiltonian. We assume that the equilibrium state has the Gibbs form with inverse temperature β > 0 and the Hamiltonian of the form where H 0 is a free Hamiltonian and H I is an interaction Hamiltonian, λ is a small parameter. In addition, we consider the observables which are explicitly time-dependent with very specific time dependence. Namely, they depend on time in the Schrödinger picture as follows i.e., they depend on time in such a way that they become constant in the interaction picture for the "free" Hamiltonian H 0 . A widely used example of such an observable is a dipole operator interacting with the classical electromagnetic field in resonance with a free Hamiltonian (see, e.g., [21] Section 15.3.1). In addition, we assume that one could actually observe the long-time averages where X(t) ≡ Tr ρ β X(t). By "long", we mean long with respect to inverses of nonzero Bohr frequencies, where Bohr frequencies are the eigenvalues of the superoperator [H 0 , · ] (see, e.g., [4] p. 122). The observation of such long-time averages is usual for spectroscopy setups ( [22] Section 4). Moreover, we will further discuss the perturbation theory in λ, assuming that this averaging is already performed, so this long timescale remains "long" even being multiplied by any power of λ. Otherwise, one should introduce the small parameter in the averaging procedure as well, which leads to more complicated perturbation theory depending on how the small parameter in the averaging and in the Hamiltonian are related to each other. Average (4) can be represented as whereρ β is some effective Gibbs state, which could be calculated as where because From the thermodynamical point of view, it is natural to represent this effective Gibbs state in the Gibbs-like formρ with some effective HamiltonianH similarly to the mean force Hamiltonian ( [18], Chapter 22). Let us remark that we have the same partition function for both exact and effective Hamiltonians due to the fact that P is a trace-preserving map (see Appendix A) Tr e −βH = Tr P e −βH = Tr e −βH . Let us summarize several properties of the superoperator P which will be used further (see Appendix A for the proof).
1. P is completely positive.
2.
P is a self-adjoint (with respect to trace scalar product Tr X † Y) projector 3.
For the case of one-dimensional projectors Π ε , superoperator (11) is sometimes called the dephasing operation [23]. In the general case, it is usually called pinching [24], p. 16. It can also be understood as a special case of twirling [25] (with one-parameter group).
Effective HamiltonianH can be calculated by cumulant-type expansion. Namely, we have the following proposition (see Appendix B for the proof).
Proposition 1. The perturbative expansion ofH has the form
where and H I (β) ≡ e βH 0 H I e −βH 0 .
In particular, the first terms of the expansion have the form To make this expansion more explicit, let us represent the interaction Hamiltonian in the eigenbasis of the superoperator [H 0 , · ] in the same way as it is usually performed for Markov master equation derivation ([1] Subsection 3.3.1) where sum is taken over the Bohr frequencies and Moreover, as H I is Hermitian, then D −ω = D † ω . Hence, we have the following explicit expressions for M k (β). (16) and (17) are held, then
The proof can be found in Appendix C. The first terms of expansion (15) take the form (see Appendix C)H Thus, the first two terms are temperature-independent and recover the Hamiltonian in the rotating wave approximation (similarly to effective Hamiltonians for dynamical evolution [26,27]) On the other hand, the next term of expansion (20) is the first temperature-dependent correction to the RWA Hamiltonian. This term is always non-positive definitẽ due to the fact that it has the form where ψ|D ω D † ω |ψ = ||D † ω |ψ || 2 ≥ 0 for arbitrary |ψ , is a positive function f (x) > 0 for all real x and β is assumed to be positive as we consider the positive temperature (but if one considers a negative temperature, which is possible for finite-dimensional systems, thenH (2) becomes non-negative). Moreover,H (2) is a monotone function of temperature, because ∂ ∂βH where is also a positive function for all real x. In the next section, we will see that if one averages this result with respect to the effective Gibbs state, then this result becomes closely related to general thermodynamic properties which are valid in all the orders of perturbation theory. Let us also remark that lim x→+0 f (x) = 1, so for the low temperature limit, i.e., when βω 1 for all non-zero Bohr frequencies, Equation (23) takes the form i.e., the second-order correction in λ is linear in β.
In the recent literature, there is also rising interest in the ultrastrong coupling limit. Let us remark thatH (2) is also the leading order difference between effective Hamiltonians for steady states for the ultrastrong coupling limit conjectured in [28] and the one obtained in [29], if one takes the interaction Hamiltonian as a free Hamiltonian in our notation and vice versa. The perturbative corrections for such steady states are discussed in [30].
Effective Hamiltonian as Analog of Mean Force Hamiltonian
The free energy F can be defined by the partition function Z as where, as it was mentioned before, Z could be defined by the same formula Z = Tr e −βH = Tr e −βH both by exact Hamiltonian H and by effective HamiltonianH. If one calculates the entropy and the internal energy by equilibrium thermodynamics formulae then it also obviously does not matter if we use the exact or effective Hamiltonian. For initial temperature-independent Hamiltonian, they also could be calculated as: However, for the effective Hamiltonian, the similar formulae need additional corrections due to its dependence on temperature. Namely, whereS andŨ are defined by the formulae similar to Equation (30) In addition, the corrections have exactly the same form as for the mean force Hamiltonian (see, e.g., [31], Equations (11) and (12)) Here, · ∼ denotes the average with respect to the effective Gibbs state, i.e., · ∼ ≡ Tr( ·ρ β ). The derivation of these formulae is exactly the same as for analogous formulae for the mean force Hamiltonian (see ( [18], Chapter 22), [32]), because it is valid for an arbitrary temperature-dependent Hamiltonian and is based only on the Feynman-Wilcox Due to the fact that P is a completely positive trace preserving and unital map (P I = I), the entropy is monotone [36], p. 136 under its action, i.e.,S ≥ S. Thus, ∆S ≥ 0 and ∆U = β −1 ∆S ≥ 0.S andŨ could be interpreted as entropy and as energy which are accessible to our observations. Our observable entropy isS, but due to our restricted observational capabilities, we have the information loss quantified by ∆S. This information loss comes with energy loss quantified by ∆U and is hidden from our observations.
For second-order expansion in λ, we have where · 0 is the average with respect to the Gibbs state for the free Hamiltonian. Thus, the non-negativity of ∆S in the second order of perturbation theory agrees with Equation (25). Moreover, it could be calculated (see Appendix D) by the following formula where sum is taken only over the positive Bohr frequencies.
The analogy with Equation (22.6) of ( [18] Chapter 22) also suggests the following definition of non-equilibrium free energy in a given state ρ where · P ≡ Tr(P ρ · ) and S(ρ||σ) is relative entropy ([36], Chapter 7.1). The only difference from Equation (22.6) of ( [18] Chapter 22) consists of the fact that we use averaged state P ρ instead of ρ, which is natural in our setup. The exact free energy is defined as where · ≡ Tr(ρ · ), which leads to where similarly to Equation (33), ∆F ρ has a definite sign, namely due to monotonicity of the relative entropy under the completely positive map P ([36], Theorem 7.6). Similarly toS andŨ,F ρ can be interpreted as observable free energy and ∆F ρ as free energy hidden from our observations. As ∆F ρ ≥ 0, we are always further from equilibrium than we think based on our restricted measurement possibilities. For example, if our exact non-equilibrium state isρ β , then it is impossible to distinguish it from ρ β . Thus, its observable free energy coincides with the equilibrium onẽ but ∆Fρ β is positive as in the general case. Namely, by Equations (37) and (38), we have As This formula is useful for asymptotic expansion of ∆Fρ β as the first two terms of the expansion ofH cancel H RWA and the first non-trivial contribution is of order of λ 2 as in Equation (35). Namely, we have The analogy with the mean force Hamiltonian can be made more explicit if one notes that the mean force Hamiltonian is closely related to the projector P = Tr B ( · ) ⊗ ρ B which is usually used for derivation of Markovian master equations and their perturbative corrections ([1], Subsection 9.1.1).
where Z mf = Z/Z B [19]. Thus, a stricter analog of our effective Hamiltonian should be H mf + H B with partition function Z. However, it seems that for operational meaning of the mean force Hamiltonian, the information about H B is also important, which makes this analog more natural. Nevertheless, importance of information about H B (not H mf only) is still discussible [37,38]. From the mathematical point of view, both of these projectors are so-called conditional expectations [39][40][41][42]. They are correspondent to different choices of observable degrees of freedom. This suggests that the mean force Hamiltonian theory could be generalized to arbitrary conditional expectations, and for specific conditional expectation P, it is performed in this work. Thus, it is possible to say that the effective Gibbs state with such generalized projectors define different effective quantum equilibrium thermodynamics.
Let us also mention that similarly to mean force Hamiltonian theory, we assume in our work that the whole system (containing both the system and the reservoir in the mean force Hamiltonian case) is at the same temperature. However, there are possible generalizations of such a setup when the system interacts with two (or more) reservoirs at different temperatures [43]. In such a case, a natural analog of P is a projector P = Tr B 1 ,B 2 ( · ) ⊗ ρ B 1 ,β 1 ⊗ ρ B 2 ,β 2 , where ρ B 1 ,β 1 and ρ B 2 ,β 2 are states of the heat baths with inverse temperatures β 1 and β 2 , respectively. The above equations assuming only one temperature, e.g., Equations (28) and (29), are not applicable in this case, but Equations (30)- (32), which are fundamental for our approach, still have their meaning. This suggests that it is possible to generalize the framework presented here to include such a multitemperature case, but it is not fully covered by the approach presented here as the scope of the current paper was focused on the one-temperature case. Nevertheless, we think that it is one of the most promising directions for future study.
Mean Force Hamiltonian for Effective Gibbs State
Let us now consider a compound system, consisting of two subsystems A and B. Let us consider subsystem B as "reservoir". Let us assume that H 0 = H A ⊗ I + I ⊗ H B . Then, it is possible to define a mean for the HamiltonianH mf for the effective Gibbs state by the following formulaρ whereZ mf =Z/Z B , Z B ≡ Tr B e −βH B . Then, similarly to Proposition 1, it is possible to obtain the perturbative expansion in λ forH mf (see Appendix E).
Proposition 3.
The perturbative expansion ofH mf in λ has the form Here, M k (β) can also be calculated by Proposition 2. The first terms of the expansion forH mf have the form This formula can be made even more explicit if one considers the decomposition of D ω into sum of eigenoperators of [H A , · ] and [H B , · ], i.e., similarly to Equation (17) introducing A ω and B ω such that where ω 1 and ω 2 run over all possible Bohr frequencies of the Hamiltonians H A and H B , respectively. Then, expansion (49) takes the form (see Appendix F) where it is assumed that f (0) = 1.
Examples
In this section, we consider several examples, and the notations are chosen in such a way as to emphasize the similarity between them. We use these examples to illustrate our formulae, but let us remark that, at least for the first and second model, it is possible to calculate the effective Hamiltonian exactly without perturbation theory; however, it is not the aim of our work. For all these examples, we consider two cases: the off-resonance and the resonance one. In this section, only the results are presented, all the calculations are given separately in Appendix G.
Two Interacting Two-Level Systems
Let us consider the two interacting two-level systems [44,45] a and b where ω a > 0, ω b > 0 and σ ± i are usual ladder operators for two-level systems i = a, b.
H off−res =ω a n a where n i ≡ σ + i σ − i are number operators for i = a, b. In the leading order, the information loss has the form (2) Resonance case ω b = ω a + λδω.
H res =ω a n a In the leading order, the information loss has the form Let us remark that it does not coincide with the off-resonance case with ω b → ω a . Namely, we have Thus, off-resonance averaging leads to larger information loss even in the "resonance" limit than resonance averaging.
Two Interacting Harmonic Oscillators
Let us consider the two interacting harmonic oscillators where ω a > 0, ω b > 0 and a, a † and b, b † are oscillator (bosonic) ladder operators. Averaging with respect to fast oscillations needed for so-called quasi-stationary states was recently discussed in [46].
where n a ≡ a † a, n b ≡ b † b. In the leading order, the information loss has the form (2) Resonance case ω b = ω a + λδω.
H res =ω a n a In the leading order, the information loss has the form Interestingly, this quantity asymptotically coincides with Equation (56) for ω a β 1 (see Figure 1). Similarly to Equation (57), we have
Two-Level System Interacting with Harmonic Oscillator
Let us consider a two-level system interacting with a harmonic oscillator where ω a > 0, ω b > 0 and σ + , σ − and b, b † are two-level and bosonic ladder operators, respectively.
H res =ω a n a In the leading order, the information loss has the form This also asymptotically coincides with Equation (56) for ω a β 1 (see Figure 1). Similarly to Equation (57), we have
Conclusions
We have developed a systematic perturbative calculation of the effective Hamiltonian which defines the effective Gibbs state for the averaged observables. We have shown that the first two terms of the perturbative expansion of such an effective Hamiltonian coincide with the RWA Hamiltonian, and the second-order term of the expansion is the first non-trivial temperature-dependent term. It defines the leading order of the information loss due to the restricted observation capabilities in this setup and the leading order of the energy, which is not observable in our setup due to the same reason. We have shown the analogy between our setup and the mean force Hamiltonian. To deepen this analogy, we have also obtained the perturbative expansion for the mean force Hamiltonian for the effective Gibbs state. At the end, we have considered several examples, which illustrate the preceding material.
We think that the analogy between the mean force Hamiltonian and our effective Hamiltonians suggests the possibility to generalize our approach to form effective equilibrium quantum thermodynamics.
As it was already mentioned at the end of Section 3, a multitemperature generalization similar to [43] of the framework discussed in this work is a possible direction for further study. In particular, such a study could be important due to modern interest in such a multitemperature setup from the separability viewpoint [47].
Data Availability Statement: Not applicable.
Acknowledgments: The author thanks A. S. Trushechkin for the fruitful discussion of the problems considered in the work.
Conflicts of Interest:
The author declares no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
Appendix A. Properties of Averaging Projector
Trace preservation of P follows from Then, let us prove Property 3 first. For H 0 = ∑ ε εΠ ε , we have As Π ε = Π † ε , then they define Kraus representation [36], p. 110 of P, which proves Property 1. Calculating and we obtain Property 2.
Appendix B. Perturbative Expansion for Effective Hamiltonian
Proof of Proposition 1. Let us define V(β) ≡ e βH 0 e −βH , then it satisfies where H I (β) is defined by Equation (14). Namely, Then, representing V(β) by the Dyson series and applying the projector P, one has with M k (β) defined by Equation (13). By the Richter formula ( [48], Equation (11.1)), one has log PV(β) = By substituting it in Equation (A9) and taking the integral, we have Taking into account we have P e −βH = e −βH 0 PV(β). Let us remark that H 0 commutes with any operator P X where Equation (11) was used. Thus, we have , which along with Equation (A12) leads to Equation (12).
Taking the limit we obtain Equation (A22).
Proof of Proposition 2. Using expansion (16) and (17), we have Then Let us calculate Substituting it in Equation (A26), we have Then, by Equation (13) and Lemma A2, we have Equation (18).
This leads to Substituting this expression and M 1 (β) in Equation (15) leads to Equation (20). Similarly, higher-order cumulants could be calculated, e.g.,
Appendix D. Average of Second Correction with Respect to Gibbs State for Free Hamiltonian
Let us express D † ω D ω 0 in terms of D ω D † ω 0 as Taking into account Equation (23), we have Similarly, taking into account Equation (25), we have − ∂ βH
Appendix E. Perturbative Expansion of Mean Force Hamiltonian for Effective Gibbs State
Proof of Proposition 3. Taking into account Equation (A13), we have Due to Equation (A14), it can also be written as Taking into account Equation (A37), we have Then, the proof follows the proof of Proposition 1 (see Appendix B), replacing M k (β) with M k (β) B and H 0 with H S .
Appendix F. Calculation of Mean Force Hamiltonian
Due to Equation (50), we have Then, Equations (A43) and (A44) take the form and Hence, after substituting these formulae into Equation (49), we havẽ Assuming by continuity f (0) = 1, this equation reduces to Equation (51).
Appendix G. Calculations for the Examples
We provide fewer details for the second and third examples because they are fully analogous to the first one.
Appendix G.1. Two Two-Level Systems (1) For the off-resonance case, we have As Substituting it in Equation (20), we obtain Equation (53). As n i 0 ≡ Tr n i e −βω i n i Tr e −βω i n i = 1 e βω i + 1 (A55) for i = a, b, then by Equation (A35), we have Thus, by Equation (36), we obtain Equation (54).
(2) For the resonance case, we have Now, the terms analogous to D ω a −ω b and D ω b −ω a contribute to D 0 Substituting it in Equation (20), we obtain Equation (55). As Thus, by Equation (36), we obtain Equation (56). | 5,716.2 | 2021-10-27T00:00:00.000 | [
"Physics"
] |
A real-time field bus architecture for multi-smart-motor servo system
The multi-motor servo system (MMSS) is an electro-mechanical system widely used in various fields, including electric vehicles, robotics, and industrial machinery. Depending on the application, the number of motors in the system can range from several dozens to tens of thousands, which imposes additional communication demands. Thus, ensuring synchronization and control precision of the system requires addressing the challenge of guaranteeing the performance and reliability of communication among motors in the MMSS. In this paper, we design a smart servo motor (SSM) to upgrade the system to the multi-smart-motor servo system (MSMSS) based on a distributed real-time field bus architecture, namely, Multi-Motor Bus (MMB) architecture. The proposed MMB architecture is lightweight and stable, providing real-time support for Control Area Network connections to a central user computer and inter-integrated circuit connections to SSM units. This MMB architecture facilitates the synchronization of command transmission across SSMs and ensures the consistency of motors in the MSMSS. Additionally, a serial experiments to examine 3 key system performance and reliability characteristics are conducted, including command transmission time, transmission jitters, and rotation consistency. The analysis of these characteristics demonstrates the system’s potential and feasibility to be applicable in industry.
Communication strategy used in the MMSS
With the widely use of the MMSS, the number of servo motors connected in a MMSS can range from dozens ( 10 1 ∼ 10 2 ) in electric cars 5 and robotics 10 , to tens of thousands ( 10 2 ∼ 10 4 ) in cellular conveyor 7 and radial line helical phased arrays 8 .With the increase of motors in the MMSS, the communication process between control systems and motor units, as well as between individual motors, can be highly intricate.To ensure proper operation of such a large number of motors, a suitable communication strategy should be carefully considered, taking into account the performance requirements of the MMSS.Several communication strategies with different bus protocols have been selected for MMSS, such as Recommended Standard (RS) 485 bus, RS 232 bus, Ethernet, and Control Area Network (CAN) bus.Each of them has its own features as shown in Table 1 and is suited to specific application domains.As a differential protocol, RS-485 is widely used in the remote control of industrial settings due to its long transmission distance and stability 11 .However, the number of nodes accommodated in the RS-485 protocol is limited into 32 nodes, which makes it difficult to meet the growing demand for an increasing number of motors in large MMSS 12 .As for the RS-232, it is widely available on computers and measurement equipmentS for short distance 13 .However, due to its low data transmission speed, short transfer distance, and point-to-point transmission characteristics, RS-232 can only be used as a branch of the communication component of a system for some non-time critical functions.As the next-generation network protocol increasingly adopted in vehicle gateways, Ethernet can provide bandwidth exceeding 100 Mbps, satisfying the real-time requirements of time-critical systems 14 .However, Ethernet tends to be a more expensive physical-layer interface and requires costly technology such as routers and switches, which increases the complexity and cost of network deployment in MMSS.Compared with the wired communication bus protocols mentioned above, the CAN bus not only incorporates some of their advantages, but also optimizes certain performance aspects [11][12][13][14][15][16] .As a multi-master protocol, CAN bus allows multiple devices to transmit data with each other, and also provides a reliable transmission mechanism due to its robust error detection mechanisms.Therefore, for complex working environments and an increasing number of electric motors, the CAN bus is better to meet the system's communication requirements and achieve consistent control of motors.
As a kind of serial communication protocol, CAN was developed in the mid-1980s by Robert Bosch, to solve the communication problem of the point to point method 17 .Over the years, the CAN bus has demonstrated excellent stability and efficiency, making it a widely used network in MMSS such as automobile manufacturing and vehicle networks 18 .However, the rapid development of MMSS and the increasing volume of data have led to a significant increase in system complexity.
Related technology approach used in the MMSS
Nowadays, several works have been conducted on MMSS based on the CAN bus that integrate numerous motors from various segments to achieve control of the entire system.Some systems employ Micro Controller Units (MCU) as controllers, achieving functionality through code compilation.This approach is highly flexible and capable of meeting various algorithmic requirements 19 .However, these codes are executed sequentially, resulting in lower real-time performance..Furthermore, some systems design their controller on Field Programmable Gate Array (FPGA).The parallel processing capability of FPGA, coupled with its modular design approach, not only greatly reduces the control cycle but also enables synchronous control of the motors 21 .However, this design approach places high demands on FPGA's logic resources.Additionally, because it is designed purely in hardware, it leads to limited computational power, especially in floating-point calculations, which will stress the burden for network analysis 22 .To achieve a fast and precise response in MMSS, this paper proposes a design solution for multi-smart-motor servo system (MSMSS) on both MCU and FPGA with smart servo motors (SSMs).By combining these two elements, it maximizes the parallel processing capabilities of FPGA and the computational power of MCU.As shown in the Table 2, the distributed control structure of the MSMSS not only efficiently conserves the resources of the main controller but also exhibits excellent scalability.
Contributions and organisations
In order to further understand the command transmission process of the MMSS, and tackle the limitation based on the review of related work, we propose the MSMSS with a real-time multi motor bus (MMB) architecture.
The main contributions of this work are in the following respects: (1) A SSM has been designed as a control component within the system.The SSM integrates driving, controlling, communication, and feedback capabilities, enabling it to independently perform assigned tasks without relying on external control resources; (2) A MMB architecture is proposed to expand the number of motor units by taking the heterogeneous gateway to connect the user command center and SSM units; (3) A series of performance and reliability tests are conducted to show the feasibility of the proposed communication network for interlinked SSM units.Particularly, real-time characteristics are evaluated through analysis of the transmission time, transmission jitter, and consistency of SSMs.The rest of the paper are organized as follows: In "Method" section, diverse aspects of the experimental setup, such as the design of the system architecture, the implementation of the hardware with the selection of relevant electronic components, and the testing procedures are explained in detail.In "Result analysis" section, the experiment result are presented and analyzed.And finally in "Conclusions" section, the proposed work is concluded that key performance evaluations are presented.
Method
A typical MMSS comprises of a control panel that functions as the master of the system, and servo motors that serve as the system's actuators 1 .The fundamental principles are also applicable to the our proposed MSMSS.However, in addition to these principles, conventional servo motors are replaced by SSMs, which possess the ability to drive, control, communicate, and feedback 23 .Furthermore, the direct connection between the master controller and motors is removed.Instead, a gateway is used as a medium to connect the user command center and SSM units, enabling further expansion of the system's motor count.Figure 1 shows the layout of our proposed MSMSS with the SSM based on the MMB architecture.
System architecture
To meet requirements of complex data analysis, a personal computer (PC) is used to as the command center to connected to the CAN bus via a USB-CAN adapter.With powerful calculation capability, the user command center can provide efficient delivery of key information or instructions to SSMs, such as the direction of rotation, the sequence of controlled motors, and the zero position of the motor, as well as managing data received from SSMs for analysis and investigation.To accommodate an increasing number of motors in the system, a hybrid gateway is added to the system as an internal transfer station that connects the user command center and SSMs.The gateway is designed using a hybrid of FPGA, which is composed of a MCU as an computation center and a FPGA fabric.Figure 2 shows the internal function connection of heterogeneous gateway.The MCU serves as the processing core of the gateway, which is equipped with CAN controllers and other hardware devices.Due to the high clock frequency and abundant peripheral resources, MCU has sufficient computing capability to execute complex algorithms, including receiving, parsing, classifying, and converting data.The instructions resulting from this processing are transmitted to the FPGA frabric via the Flexible Static Memory Controller (FSMC) module.As a parallel bus, FSMC significantly reduces the communication latency between the MCU and FPGA.After receiving the instructions, the switch state machine within the FPGA distributes these instructions to various SSMs connected to different ports via the inter-integrated circuit (I2C) bus.By exploiting the parallel processing capabilities of the FPGA and the processing capability of MCU, the gateway distributes the received instructions from the computer to the SSMs, while simultaneously ensuring the synchronization of data transmission and precision control of SSMs.
As the control object of the MSMSS, SSM is a relatively independent entity equipped with a control unit, which is highly integrated with a micro-controller, a driver chip, and a magnetic encoder as shown in Fig. 3a.As the core component of SSM, we have employed a 32-bit MCU as the primary controller at the bottom of the motor.The controller operates at a frequency of 180MHz and possesses abundant peripheral resources.This configuration not only enables precise motor control and rapid response but also leaves room for future upgrades.As shown in the Fig. 3b, the overall structure of the MCU functions forms a closed-loop system, in which various modules progress step by step to collectively accomplish the process from data reception and motor control to the return of working information.Upon the receiving message from the gateway via the I2C bus, the MCU determines the type of command and starts up the position-speed PID algorithm to produce pulse width modulation (PWM).To achieve rapid response and precise control of the system, the modulation frequency for both is 10KHz and 200KHz, respectively.Through a timer, the main controller continuously invokes the PID algorithm to adjust the driver's output and control the motor's rotation.To ensure the accuracy of rotation and collect SSM's status, we have designed an absolute magnetic encoder, which is placed at the rear of the motor.The rotation of the motor drives the rotation of the magnetic ring at the rear, resulting in changes in the magnetic field.The encoder obtains information about the motor's rotation based on changes in the magnetic field and provides feedback to the MCU to adjust PWM output, enabling closed-loop control.After all of functions have been added to hybrid FPGA gateway and SSMs, the layout of the our proposed SSM system is shown in the Fig. 4.
System prototype
Based on the MSMSS proposed in the Fig. 4, we have built a prototype to demonstrate the function and performance of the system.The prototype consists 3 key components: (1) a user command center of the system, (2) a hybrid FPGA gateway, (3) a number of SSMs.Through the USB-CAN converter, the computer operates as a node of the CAN bus, with the ability to transmit and receive CAN frames through the user interface.As for the gateway, We have designed a hybrid circuit board used for gateway development which consists of two main components: a MCU(STM32F429) and a Altera Cylone 4 FPGA chip.The MCU is used as a computation core of the gateway, which is responsible for the analysis of the commands.And the FPGA chip has 22320 logic cells, which is equipped with abundant hardware resource for gateway function development.
The SSM is composed with a motor and a control unit.The motor used is DC micro motor, which is costefficiency and has rapid response 24 .The control unit consists 3 component, MCU, driver chip and magnetic encoder.The controller uses STM32F042 based on the Cortex-M0, which is convenient to be deployed and provide sufficient resource to execute complex algorithm.The motor driver chip is from DRV series, which provide integrated motor driver performance for low-voltage motion control applications.As for magnetic encoder, an absolute encoder, MT series, is taken to collect the information of motor, which provide 14 bits resolution of angle.Table 3 summarizes the parameters of the components used in our experimental prototype.
Evaluation process
To analyze the performance parameters of MSMSS, we conduct series evaluations to create sufficient results for out analysis under a predefined command pool.The aim of evaluations is to measure transmission time(TT) and jitter of packet for latency demonstration and record the motion movement of the SSMs to evaluate the control precision and consistency of the system.During the evaluation, we have programmed a user command center to control movement of SSMs in the CAN standard format and selects 6 commands to verify the performance of the system by time-trigger approach.Figure 5 illustrates the packet routes during the communication in the experiments.As for the packets used in the experiments, they are selected messages from parameter groups(PGs) www.nature.com/scientificreports/with different priority as shown in the Table 4.Each piece of message corresponds to a control instruction, such as communication selfcheck, multi motor rotation, etc.
Communication performance
The transmission time of commands is a crucial metric that indicates the latency of a communication process, which refers to the time delay for the successful end-to-end transmission of a single test packet.This metric is composed of several factors, including the worst-case response time of the CAN frame T CAN , the conversion time of the gateway t CON , and the transmission time of the I2C frame T I2C as follows: where t CAN represents the time required for a CAN frame command to be generated, queued, transmitted, and ultimately received by the gateways, T CON represents the time required for analysing commands and converting formats in the gateway, T I2C is the time for the packet transmission via the I2C bus from the port of gateway to (1) www.nature.com/scientificreports/ the terminal of SSMs.To gain a deeper understanding of the transmission process of data within the system, we conduct theoretical analysis of the three phases.Firstly, as the bridge for communication between the user interface and the MSMSS, the worst case response time of CAN frame T CAN can be calculated by the Tindell equation 25 as follows: where W m represents the longest time interval between placing message in queue and the start of transmission.C m is the time taken to transmit message on CAN bus. Figure 6a shows the structure of command message used in the system, and Fig. 6b is the format of the CAN message.According to the format of a standard CAN frame, C m can be defined as follows: where S CAN represents the number of the data bytes within the CAN message, τ CAN is the time taken to transfer one bit on the bus.With respect to the duration T CON required for the transmission of the command within the gateway subse- quent to its arrival, it includes not only the time taken for the command analysis but also the conversion time between the CAN frame and I2C structure.The primary determinant of the duration T CON is the frequency of the system clock as well as the level of optimization of the algorithm utilized.According to the I2C structure mentioned in the Fig. 7, the converted I2C frame information primarily consists of a 7-bit address segment, 8-bytes data segments, and the response signals between each byte, so the T I2C can be defined as follows: where Ack I2C represents the number of responses between the master and slave, S I2C represents the quantity of byte data segments, and τ bit represents the time required for transmission of a bit in I2C communication.
After analysing the command transmission process in the system, we can assess the real-time characteristics of the system by using the transmission time.For a fixed-priority system, we can employ the transmission factor to characterize the real-time characteristics of the system.Assuming that there are a set of instructions in the system, we use T 1 , T 2 , ..., T n to donate the transmission period of n pieces of command messages with their transmission time T TT,1 , T TT,2 , ..., T TT,n , respectively.Thus, the transmission factor can be calculated as follows: where U represent the transmission status of a set of periodic messages in the system.We can obtain the boundary condition for transmission time iteratively, as represented by the following equation 26 : (2) where n represent the number of messages in the system.A smaller value of transmission factor U indicates better real-time performance of the system.
System consistency
In order to demonstrate the consistency of our MSMSS prototype and the synchronization of SSMs, we focus on evaluating the following metrics: the responding time of commands (RTC), the rotation precision of angles (RPC), and the consistency of motion movement (CMM).RTC represents the duration between the motors starting to operate and achieving the functional target, which reflects the responding speed of SSM units.RPC represents the degree of task completion, indicating the precision of SSM units.Finally, CMM represents the consistency of SSMs under the MSMSS, which is a vital metric for reflecting the cooperation condition among motors.
Result analysis
To verify the performance and the synchronization of the MSMSS, we have built a prototype as shown in the Fig. 8.The MSMSS is composed with 4 SSMs and a hybrid gateway, which is controlled by a computer.We have conducted a series experiment to create sufficient results for analysis.The real-time characteristics of system are discussed based on the evaluation results.Afterwards, the metrics of performance and the consistency are analysed for the reliability and performance of our proposed MSMSS.
Transmission time
As a crucial metric that indicates the latency of a communication process, the whole transmission time is composed with 3 parts of time, CAN transmission time T CAN , conversion time of CAN frame and I2C frame T CON and I2C transmission time T I2C .The Fig. 9 separately analyses the transmission time of commands with 6 differ- ent priority through a time-trigger approach.With the decrease of the priority, the transmission time increases.According to the Eq. 3, the T CAN consists of two parts W m and C m .Since the data segment length of the instruc- tion is 64 bits, the C m of commands is essentially constant based on the Eq. 2. Thus, the fluctuation of T CAN primarily arises from the queuing time of commands between different priority levels.As the highest priority command message, the mean transmission time T CAN of PGN 0 is approximately 300µs , making it the shortest transmission time among the message pool.This is approximately 1.5 times shorter than the mean transmission time of PGN P01, 2 times shorter than the mean transmission time of PGN P60 and PGN P61, 3 times shorter than the mean transmission time of PGN P08, and 4 times shorter than the mean transmission time of PGN P09.Compared the transmission time of different CAN message in the MSMSS, it is important to note that the range of transmission time of message is also increasing which means the increasing of the jitter for different priority CAN message during the transmission.
AS one part of TT in the MSMSS, the conversion time from CAN frame to the I2C frame also reflects some transmission characteristics.According to the Fig. 9b, the conversion time T CON is stable at around 160µs no matter what kind of priority CAN message are.This is because during the conversion process, the instruction parsing takes place in the MCU.The parsed data is synchronized and processed in the FPGA in parallel, converting it into I2C frame.The priority of commands will not exert an influence on the entirety of the process.The time required for the conversion process is determined by the length of the information and is independent of the type of information.After the conversion progress, the control commands assigned to the corresponding port and then transmitted into SSMs in I2C format.As shown in the Fig. 9c, the T I2C is stable around at 815µs .Because the parallel characteristics of the gateway, these I2C frames do not need to queue or compete with each other.Thus, no matter the I2C message converted from what kind of the CAN frame, the transmission time of I2C messages is approximately the same.www.nature.com/scientificreports/Thus, based on the Eq. 1, we can summary the entire transmission time of command in the MSMSS.As shown in the Table 5, the minimum T TT in the command pool is PGN 0, which is about 1108µs from the time command is generated to the time command arrived at MSMSS.And the maximum T TT is CAN frame with the lowest priority, the PGN P09, which takes 3369µs when the network is under busy condition.By substituting the maximum transmission time and trigger periods of each command into Eq.5, we can obtain a value of transmission factor U as 0.31, while the system's boundary condition in Eq. 6 is 0.73.By comparing the values of U with the boundary condition, we can observe that the U is significantly smaller than the boundary condition, with a numerical value that is only half of the boundary condition.This implies that the completion of system instruction transmission is excellent, and the system exhibits good real-time characteristics.
Jitter of communication
Jitter is an important parameter for real-time systems, which reflects the variability in the system response time.Table 5 shows the maximum and minimum transfer time for the message in the command pool from the moment the message queued for transference to the moment they arrive at the SSMs in the network.By subtracting the maximum and minimum values of transmission time, we can obtain the jitter of each command during the transmission process.As seen in the Table 5, the PGN 0 has the shortest jitter, primarily due to its highest priority in the communication network.Based on our previous analysis of the communication network, we can conclude that the most significant factor affecting the transmission time in the system is the queuing time W m for command.when multiple commands are waiting in the queue of the bus for transmission, they compete with each other, resulting in the generation of jitter.That is why the highest priority command PGN 0 has the shortest jitter, which makes it always transmitted in the first place, while PGN P09 has the longest jitter.
Reliability of network
To meet the increasing trend of the number of motors and the need for synchronization control among motors, the reliability of the communication network within the system is of paramount importance.While sending commands to the system, we sequentially increased the number of motors in the MSMSS.At this point, there was only rotation command PGN P61 in the transmission network, so there was no competition on the bus.As www.nature.com/scientificreports/shown in the Fig. 10a, as the number of SSMs within the system gradually increased, the transmission time of instruction in various parts of the system, T CAN , T CON , T I2C , remained stable.This is because MSMSS employs parallel control, utilizing the parallel capabilities of the FPGA to transmit instructions to the SSMs.Additionally, SSMs operate independently of each other, without mutual interference.Thus, an increase in the number of SSMs does not impact the performance of the communication network.Figure 10b shows the packet loss rate of the network.As the input data rate of the system increases, packet loss rate gradually rises.Especially when the input data rate approaches 500Kbps, the packet loss rate experiences a steep increase.This is because, in addition to the input data, MSMSS also generates feedback information.The combined data volume of both has exceeded the system's bandwidth capacity of 500Kbps, causing communication congestion and an increase in the data loss rate.However, for a system with infrequent triggers, where the data rate always remains below 350Kbps, the packet loss rate of MSMSS remains consistently below 0.2%.The stable transmission time and a 0.2% packet loss rate indicate excellent scalability and reliability of the system.
Performance and reliability
After receiving the command from the gateway, SSMs start to execute the function and incorporate with each other.We focus on SSMs' performance and reliability, such as RTC, NPC, and CMM, and evaluate the performance of the proposed MSMSS.Therefore, we have recorded the rotation condition of SSMs under rotation command including individual motors rotating at multiple angles and multiple motors rotating at the same angle.
Response time and rotation performance
As shown in the Fig. 11a, SSMs were demanded to rotate for 45 The RTC of SSMs is less than 20ms from the motors' response to the rotation command the completion of the rotation.Additionally, the SSMs' position trajectory remains relatively smooth.The RPC of SSMs is approximately 0.25 • , which reflects the difference between the stabilized angle of the motor and the target angle fluctuates.Figure 11b analyzed the position trajectory of the SSMs during a position step change.During a position step change, the SSMs' rotational process remains smooth without significant overshoots.According to the results of the Fig. 11, our proposed SSMs not only exhibits fast response time, but also high rotational precision.This can be attributed primarily to our compact hardware structure, faster modulation cycles, and precise encoder feedback.In Table 6, we presented the performance of SSM in terms of response time, overshoot, steady-state error, and logical resource.Compared with motor controlled by the MCU 28 or FPGA 27 , SSM exhibits faster response times, lower overshoot, and smaller steady-state errors, which can achieve control more smoothly and quickly.
After qualifying the performance of the SSMs, we conducted experiments to rotate 4 SSMs to verify the consistency of the system for complexity and synchronization requirements.we sent the rotation command PGN P61 to 4 SSMs for rotating 45 • , 90 • , 135 • and 180 • .We compared the rotation trajectories of the four SSMs with the average position curve and analyzed the response error of the system during the rotation process in the Fig. 12.All SSMs responded simultaneously and completed the rotation in approximately 20ms.The position trajectories of system highly overlapped, which indicated a high degree of consistency among the motors.As the rotation angle increases, the response error gradually increases.As shown in Fig. 12a, when rotating 45 • , the system's position trajectory is very smooth, and the response error remains within 0.15 • .When the rotation angle reaches 90 • , the entire trajectory curve remains smooth, but the response error increases to 0.3 • in the Fig. 12b. Figure 12c records the system's response when it rotates 135 • .There isn't significant fluctuation during the rotation process, and the response error remains within 0.3 • .Until the system reaches 180 • of rotation, the system's position curve exhibits a considerable overshoot, exceeding the target angle by 2 • in the Fig. 12d.This is due to the PID algorithm causing an excessive output magnitude during the adjustment process.The maximum difference in the rotation trajectory among the SSMs remains at around 0.5 • indicating that the SSMs of the system exhibits excellent consistency and synchronization.
Stability under different temperature environment
To further validate the applicability of the MSMSS in industrial application, robustness tests were conducted to evaluate the system's performance in high-temperature and low-temperature working environments with China National Standards 29 .In this tests, the MSMSS was deployed in both low-temperature (i.e., − 20 • C) high- temperature (i.e., 85 • C) environment, which was close to the extreme working temperature conditions of the DC micro motors.We send 180 • rotation command to the MSMSS to observe 4 SSM units operation results, and compared the average position curve with the reference position curve which is the average position under normal temperature.As shown in the Fig. 13a, the average position curve shown in red closely aligns with the reference position shown in blue.Figure 13b analyzes the response error between the average position curve and the reference position curve.The fluctuation in the difference of position is within 2 degrees, indicating that the entire system is highly reliable.Figure 14 illustrates the system's control performance under high-temperature conditions.As the temperature increases, the performance of the SSMs begins to degrade.As shown in the Fig. 14a, b, compared to the trajectory curve of the system at room temperature, the MSMSS's position trajectory is slower.Not only does the system's position trajectory show a slower rising trend, but the system's response time also increases.However, due to the system's short modulation cycle and accurate feedback information, the system is still able to reach the target angle in 30ms and remain stable.It can be seen that although the MSMSS's performance may decrease under high-temperature conditions, it can still complete the the control tasks.
Conclusions
In this paper, we firstly reviewed the related works on multi-motor system that used for the industry domain, and addressed the importance to analyze the performance and reliability of communication network for the MSMSS.To quantify the performance and verify the feasibility of our proposed MSMSS and MMB architecture, we designed smart servo-motors that were interlinked with a hybrid gateway based on the FPGA, to guarantee the control command flow simultaneously within the system.In order to achieve such objectives, we conducted a series of evaluation based on MSMSS prototype in the laboratory.The experimental results demonstrated its capability to ensuring synchronized transmission within 5ms of comprehensive command messages, owing rotation precision within 0.25 • , maintaining rotation consistency among the SSMs motor units within 0.5 • , and exhibiting industrial applicability within motor working temperature conditions.
Figure 2 .
Figure 2. The architecture of the hybrid gateway.
Figure 3 .
Figure 3. (a) The structure diagram of SSM unit, (b) The diagram of SSM functions.
Figure 6 .
Figure 6.(a) Command Structure used in the MSMSS, (b) Standard CAN Frame Format.
Figure 8 .
Figure 8.The prototype in the MSMSS.
Figure 9 .
Figure 9. Transmission time: (a) Transmission time of CAN frame, (b) Conversion time, (c) Transmission time of I2C frame.
Figure 10 .Figure 11 .
Figure 10.Performance of the communication network: (a) Scalability of the network, (b) Reliability of the network with 500Kbps bandwidth.
Figure 12 .
Figure 12.Synchronization control of MSMSS: (a) Consistency of multi SSMs for 45 • and response error, (b) Consistency of multi SSMs for 90 • and response error, (c) Consistency of multi SSMs for 135 • and response error, (d) Consistency of Multi SSMs for 180 • and response error.
Figure 14 .
Figure 14.The results of the high and low temperature reliability tests: (a) Rotation condition under − 20 • C, (b) Response errors.
Table 1 .
Characteristics of different types of networks.
But if you want to scale up the system with more motors, it requires multiple MCUs interconnected, making the communication network of the system considerably complex
SSM Unit#1 SSM Unit#2 SSM Unit#n …… Motor Array Motor Interface …… FPGA Gateway PC External Memory Motor Interface Motor Interface Figure 1. The
architecture of the MSMSS based on the MMB architecture.
Table 3 .
The experimental prototype and components' parameters.
Table 4 .
CAN message of the command pool.
Table 5 .
Transmission time of command in the MSMSS.
Table 6 .
CAN message of the command pool. | 7,281.4 | 2024-02-16T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Audio Time Stretching Using Fuzzy Classification of Spectral Bins
: A novel method for audio time stretching has been developed. In time stretching, the audio signal’s duration is expanded, whereas its frequency content remains unchanged. The proposed time stretching method employs the new concept of fuzzy classification of time-frequency points, or bins, in the spectrogram of the signal. Each time-frequency bin is assigned, using a continuous membership function, to three signal classes: tonalness, noisiness, and transientness. The method does not require the signal to be explicitly decomposed into different components, but instead, the computing of phase propagation, which is required for time stretching, is handled differently in each time-frequency point according to the fuzzy membership values. The new method is compared with three previous time-stretching methods by means of a listening test. The test results show that the proposed method yields slightly better sound quality for large stretching factors as compared to a state-of-the-art algorithm, and practically the same quality as a commercial algorithm. The sound quality of all tested methods is dependent on the audio signal type. According to this study, the proposed method performs well on music signals consisting of mixed tonal, noisy, and transient components, such as singing, techno music, and a jazz recording containing vocals. It performs less well on music containing only noisy and transient sounds, such as a drum solo. The proposed method is applicable to the high-quality time stretching of a wide variety of music signals.
Introduction
Time-scale modification (TSM) refers to an audio processing technique, which changes the duration of a signal without changing the frequencies contained in that signal [1][2][3].For example, it is possible to reduce the speed of a speech signal so that it sounds as if the person is speaking more slowly, since the fundamental frequency and the spectral envelope are preserved.Time stretching corresponds to the extension of the signal, but this term is used as a synonym for TSM.Audio time stretching has numerous applications, such as fast browsing of speech recordings [4], music production [5], foreign language and music learning [6], fitting of a piece of music to a prescribed time slot [7], and slowing down the soundtrack for slow-motion video [8].Additionally, TSM is often used as a processing step in pitch shifting, which aims at changing the frequencies in the signal without changing its duration [2,3,7,9,10].
Audio signals can be considered to consist of sinusoidal, noise, and transient components [11][12][13][14].The main challenge in TSM is in simultaneously preserving the subjective quality of these distinct components.Standard time-domain TSM methods, such as the synchronized overlap-add (SOLA) [15], the waveform-similarity overlap-add [16], and the pitch-synchronous overlap-add [17] techniques, are considered to provide high-quality TSM for quasi-harmonic signals.When these methods are applied to polyphonic signals, however, only the most dominant periodic pattern of the input waveform is preserved, while other periodic components suffer from phase jump artifacts at the synthesis frame boundaries.Furthermore, overlap-add techniques are prone to transient skipping or duplication when the signal is contracted or extended, respectively.To solve this, transients can be detected and the time-scale factor can be changed during transients [18,19].
Standard phase vocoder TSM techniques [20,21] are based on a sinusoidal model of the input signal.Thus, they are most suitable for processing of signals which can be represented as a sum of slowly varying sinusoids.Even with these kind of signals however, the phase vocoder TSM introduces an artifact typically described as "phasiness" to the processed sound [21,22].Furthermore, transients processed with the standard phase vocoder suffer from a softening of the perceived attack, often referred to as "transient smearing" [2,3,23].A standard solution for reducing transient smearing is to apply a phase reset or phase locking at detected transient locations of the input signal [23][24][25].
As another approach to overcome these problems in the phase vocoder, TSM techniques using classification of spectral components based on their signal type have been proposed recently.In [26], spectral peaks are classified into sinusoids, noise, and transients, using the methods of [23,27].Using the information from the peak classification, the phase modification applied in the technique is based only on the sinusoidally classified peaks.It uses the method of [23] to detect and preserve transient components.Furthermore, to better preserve the noise characteristics of the input sound, uniformly distributed random numbers are added to the phases of spectral peaks classified as noise.In [28], spectral bins are classified into sinusoidal and transient components, using the median filtering technique of [29].The time-domain signals synthesized from the classified components are then processed separately, using an appropriate analysis window length for each class.Phase vocoder processing with a relatively long analysis window is applied to the sinusoidal components.A standard overlap-add scheme with a shorter analysis window is used for the transient components.
Both of the above methods are based on a binary classification of the spectral bins.However, it is more reasonable to consider the energy in each spectral bin as a superposition of energy from sinusoidal, noise, and transient components [13].Therefore, each spectral bin should be allowed to belong to all of the classes simultaneously, with a certain degree of membership for each class.This kind of approach is known as fuzzy classification [30,31].To this end, in [32], a continuous measure denoted as tonalness was proposed.Tonalness is defined as a continuous value between 0 and 1, which gives the estimated likelihood of each spectral bin belonging to a tonal component.However, the proposed measure alone does not assess the estimation of the noisiness or transientness of the spectral bins.Thus, a way to estimate the degree of membership to all of these classes for each spectral bin is needed.
In this paper, a novel phase vocoder-based TSM technique is proposed in which the applied phase propagation is based on the characteristics of the input audio.The input audio characteristics are quantified by means of fuzzy classification of spectral bins into sinusoids, noise, and transients.The information about the nature of the spectral bins is used for preserving the intra-sinusoidal phase coherence of the tonal components, while simultaneously preserving the noise characteristics of the input audio.Furthermore, a novel method for transient detection and preservation based on the classified bins is proposed.To evaluate the quality of the proposed method, a listening test was conducted.The results of the listening test suggest that the proposed method is competitive against a state-of-the art academic TSM method and commercial TSM software.
The remainder of this paper is structured as follows.In Section 2, the proposed method for fuzzy classification of spectral bins is presented.In Section 3, a novel TSM technique which uses the fuzzy membership values is detailed.In Section 4, the results of the conducted listening test are presented and discussed.Finally, Section 5 concludes the paper.
Fuzzy Classification of Bins in the Spectrogram
The proposed method for the classification of spectral bins is based on the observation that, in a time-frequency representation of a signal, stationary tonal components appear as ridges in the time direction, whereas transient components appear as ridges in the frequency direction [29,33].Thus, if a spectral bin contributes to the forming of a time-direction ridge, most of its energy is likely to come from a tonal component in the input signal.Similarly, if a spectral bin contributes to the forming of a frequency-direction ridge, most of its energy is probably from a transient component.As a time-frequency representation, the short-time Fourier transform (STFT) is used: where m and k are the integer time frame and spectral bin indices, respectively, x[n] is the input signal, H a is the analysis hop size, w[n] is the analysis window, N is the analysis frame length and the number of frequency bins in each frame, and ω k = 2πk/N is the normalized center frequency of the kth STFT bin. Figure 1 shows the STFT magnitude of a signal consisting of a melody played on the piano, accompanied by soft percussion and a double bass.The time-direction ridges introduced by the harmonic instruments and the frequency-direction ridges introduced by the percussion are apparent on the spectrogram.The tonal and transient STFTs X s [m, k] and X t [m, k], respectively, are computed using the median filtering technique proposed by Fitzgerald [29]: and where L t and L f are the lengths of the median filters in time and frequency directions, respectively.For the tonal STFT, the subscript s (denoting sinusoidal) is used and for the transient STFT the subscript t.Median filtering in the time direction suppresses the effect of transients in the STFT magnitude, while preserving most of the energy of the tonal components.
Conversely, median filtering in the frequency direction suppresses the effect of tonal components, while preserving most of the transient energy [29].
The two median-filtered STFTs are used to estimate the tonalness, noisiness, and transientness of each analysis STFT bin.We estimate tonalness by the ratio We define transientness as the complement of tonalness: Signal components which are neither tonal nor transient can be assumed to be noiselike.Experiments on noise signal analysis using the above median filtering method show that the tonalness value is often approximately R s = 0.5.This is demonstrated in Figure 2b in which a histogram of the tonalness values of STFT bins of a pink noise signal (Figure 2a) is shown.It can be seen that the tonalness values are approximately normally distributed around the value 0.5.Thus, we estimate noisiness by The tonalness, noisiness, and transientness can be used to denote the degree of membership of each STFT bin to the corresponding class in a fuzzy manner.The relations between the classes are visualized in Figure 3.
Figure 4 shows the computed tonalness, noisiness, and transientness values for the STFT bins of the example audio signal used above.The tonalness values in Figure 4a are close to 1 for the bins which represent the harmonics of the piano and double bass tones, whereas the tonalness values are close to 0 for the bins which represent percussive sounds.In Figure 4b, the noisiness values are close to 1 for the bins which do not significantly contribute either to the tonal nor the transient components in the input audio.Finally, it can be seen that the transientness values in Figure 4c
Novel Time-Scale Modification Technique
This section introduces the new TSM technique that is based on the fuzzy classification of spectral bins defined above.
Proposed Phase Propagation
The phase vocoder TSM is based on the differentiation and subsequent integration of the analysis STFT phases in time.This process is known as phase propagation.The phase propagation in the new TSM method is based on a modification to the phase-locked vocoder by Laroche and Dolson [21].The phase propagation in the phase-locked vocoder can be described as follows.For each frame in the analysis STFT (1), peaks are identified.Peaks are defined as spectral bins, whose magnitude is greater than the magnitude of its four closest neighboring bins.
The phases of the peak bins are differentiated to obtain the instantaneous frequency for each peak bin: where κ[m, k] is the estimated "heterodyned phase increment": Here, • 2π denotes the principal determination of the angle, i.e., the operator wraps the input angle to the interval [−π, π].The phases of the peak bins in the synthesis STFT Y[m, k] can be computed by integrating the estimated instantaneous frequencies according to the synthesis hop size H s : The ratio between the analysis and synthesis hop sizes H a and H s determines the TSM factor α. In practice, the synthesis hop size is fixed and the analysis hop size then depends on the desired TSM factor: In the standard phase vocoder TSM [20], the phase propagation of ( 7)-( 9) is applied to all bins, not only peak bins.In the phase-locked vocoder [21], the way the phases of non-peak bins are modified is known as phase locking.It is based on the idea that the phase relations between all spectral bins, which contribute to the representation of a single sinusoid, should be preserved when the phases are modified.This is achieved by modifying the phases of the STFT bins surrounding each peak such that the phase relations between the peak and the surrounding bins are preserved from the analysis STFT.Given a peak bin k p , the phases of the bins surrounding the peak are modified by: where Y[m, k p ] is computed according to (7)-( 9).This approach is known as identity phase locking.
As the motivation behind phase locking states, it should only be applied to bins that are considered sinusoidal.When applied to non-sinusoidal bins, the phase locking introduces a metallic sounding artifact to the processed signal.Since the tonalness, noisiness, and transientness of each bin are determined, this information can be used when the phase locking is applied.We want to be able to apply phase locking to bins which represent a tonal component, while preserving the randomized phase relationships of bins representing noise.
Thus, the phase locking is first applied to all bins.Afterwards, phase randomization is applied to the bins according to the estimated noisiness values.The final synthesis phases are obtained by adding uniformly distributed noise to the synthesis phases computed with the phase-locked vocoder: where constants b n and b α control the shape of non-linear mappings of the hyperbolic tangents.
The values b n = b α = 4 were used in this implementation.
The phase randomization factor A n , as a function of the estimated noisiness R n and the TSM factor α, is shown in Figure 5.The phase randomization factor increases with increasing TSM factor and noisiness.The phase randomization factor saturates as the values increase, so that at most, the uniform noise added to the phases obtains values in the interval [−0.5π, 0.5π].
Transient Detection and Preservation
For transient detection and preservation, a similar strategy to [23] was adopted.However, the proposed method is based on the estimated transientness of the STFT bins.Using the measure for transientness, the smearing of both the transient onsets and offsets is prevented.The transients are processed so that the transient energy is mostly contained on a single synthesis frame, effectively suppressing the transient smearing artifact, which is typical for the phase vocoder based TSM.
Detection
To detect transients, the overall transientness of each analysis frame is estimated, and denoted as frame transientness: The analysis frames which are centered on a transient component appear as local maxima in the frame transientness.Transients need to be detected as soon as the analysis window slides over them in order to prevent the smearing of transient onsets.To this end, the time derivative of frame transientness is used: where the time derivative is approximated with the backward difference method.As the analysis window slides over a transient, there is an abrupt increase in the frame transientness.These instants appear as local maxima in the time derivative of the frame transientness.Local maxima in the time derivative of the frame transientness that exceed a given threshold are used for transient detection.
Figure 6 illustrates the proposed transient detection method using the same audio excerpt as above, containing piano, percussion, and double bass.The transients appear as local maxima in the frame transientness signal in Figure 6a.Transient onsets are detected from the time derivative of the frame transientness, from the local maxima, which exceed the given threshold (the red dashed line in Figure 6b).The detected transient onsets are marked with orange crosses.After an onset is detected, the analysis frame which is centered on the transient is detected from the subsequent local maxima in the frame transientness.The detected analysis frames centered on a transient are marked with purple circles in Figure 6a.
Transient Preservation
To prevent transient smearing, it is necessary to concentrate the transient energy in time.A single transient contributes energy to multiple analysis frames, because the frames are overlapping.During the synthesis, the phases of the STFT bins are modified, and the synthesis frames are relocated in time, which results in smearing of the transient energy.
To remove this effect, transients are detected as the analysis window slides over them.When a transient onset has been detected using the method described above, the energy in the STFT bins is suppressed according to their estimated transientness: This gain is only applied to bins whose estimated transientness is larger than 0.5.Similar to [23], the bins to which this gain has been applied are kept in a non-contracting set of transient bins K t .When it is detected that the analysis window is centered on a transient, as explained above, a phase reset is performed on the transient bins.That is, the original analysis phases are kept during synthesis for the transient bins.Subsequently, as the analysis window slides over the transient, the same gain reduction is applied for the transient bins, as during the onset of the transient (16).The bins are retained in the set of transient bins until their transientness decays to a value smaller than 0.5, or until the analysis frame slides completely away from the detected transient center.Finally, since the synthesis frames before and after the center of the transient do not contribute to the transients' energy, the magnitudes of the transient bins are compensated by where m t is the transient frame index, |K t | denotes the number of elements in the set K t , and k t ∈ K t , which is the defined set of transient bins.
This method aims to prevent the smearing of both the transient onsets and offsets during TSM.In effect, the transients are separated from the input audio, and relocated in time according to the TSM factor.However, in contrast to methods where transients are explicitly separated from the input audio [13,14,28,34], the proposed method is more likely to keep transients perceptually intact with other components of the sound.Since the transients are kept in the same STFT representation, phase modifications in subsequent frames are dependent on the phases of the transient bins.This suggests that transients related to the onsets of harmonic sounds, such as the pluck of a note while strumming a guitar, should blend smoothly with the following tonal component of the sound.Furthermore, the soft manner in which the amplitudes of the transient bins are attenuated during onsets and offsets should prevent strong artifacts arising from errors in the transient detection.
Figure 7 shows an example of a transient processed with the proposed method.The original audio shown in Figure 7a consists of a solo violin overlaid with a castanet click.Figure 7b shows the time-scale modified sample with TSM factor α = 1.5, using the standard phase vocoder.In the modified sample, the energy of the castanet click is spread over time.This demonstrates the well known transient smearing artifact of standard phase vocoder TSM. Figure 7c shows the time-scale modified sample using the proposed method.It can be seen that while the duration of the signal has changed, the castanet click in the modified audio resembles the one in the original, without any visible transient smearing.
Evaluation
To evaluate the quality of the proposed TSM technique, a listening test was conducted.The listening test was realized online using the Web Audio Evaluation Tool [35].The test subjects were asked to use headphones.The test setup used was the same as in [28].In each trial, the subjects were presented with the original audio sample and four modified samples processed with different TSM techniques.The subjects were asked to rate the quality of time-scale modified audio excerpts using a scale from 1 (poor) to 5 (excellent).
All 11 subjects who participated in the test reported having a background in acoustics, and 10 of them had previous experience of participating in listening tests.None of the subjects reported hearing problems.The ages of the subjects ranged from 23 to 37, with a median age of 28.Of the 11 subjects, 10 were male and 1 was female.
In the evaluation of the proposed method, the following settings were used: the sample rate was 44.1 kHz, a Hann window of length N = 4096 was chosen for the STFT analysis and synthesis, the synthesis hop size was set to H s = 512, and the number of frequency bins in the STFT was K = N = 4096.The length of the median filter in the frequency direction was 500 Hz, which corresponds to 46 bins.In the time direction, the length of the median filter was chosen to be 200 ms, but the number of frames it corresponds to depends on the analysis hop size, which is determined by the TSM factor according to (10).Finally, the transient detection threshold was set to t d = 10 −4 = 0.00010.
In addition to the proposed method (PROP), the following techniques were included: the standard phase vocoder (PV), using the same STFT analysis and synthesis settings as the proposed method; a recently published technique (harmonic-percussive separation, HP) [28], which uses harmonic and percussive separation for transient preservation; and the élastique algorithm (EL) [36], which is a state-of-the-art commercial tool for time and pitch-scale modification.The samples processed by these methods were obtained using the TSM toolbox [37].
Eight different audio excerpts (sampled at 44.1 kHz) and two different stretching factors α = 1.5 and α = 2.0 were tested using the four techniques.This resulted in a total of 64 samples rated by each subject.The audio excerpts are described in Table 1.The lengths of the original audio excerpts ranged from 3 to 10 s.The processed audio excerpts and Matlab code for the proposed method are available online at http://research.spa.aalto.fi/publications/papers/applsci-ats/.To estimate the sound quality of the techniques, mean opinion scores (MOS) were computed for all samples from the ratings given by the subjects.The resulting MOS values are shown in Table 2.A bar diagram of the same data is also shown in Figure 8.
As expected, the standard PV performed worse than all the other tested methods.For the CastViolin sample, the proposed method (PROP) performed better than the other methods, with both TSM factors.This suggests that the proposed method preserves the quality of the transients in the modified signals better than the other methods.The proposed method also scored best with the Jazz excerpt.In addition to the well-preserved transients, the results are likely to be explained by the naturalness of the singing voice in the modified signals.This can be attributed to the proposed phase propagation, which allows simultaneous preservation of the tonal and noisy qualities of the singing voice.This is also reflected in the results of the Vocals excerpt, where the proposed method also performed well, while scoring slightly lower than HP.For the Techno sample, the proposed method scored significantly higher than the other methods with TSM factor α = 1.5.For TSM factor α = 2.0, however, the proposed method scored lower than EL.The proposed method also scored highest for the JJCale sample with TSM factor α = 2.0.The proposed method performed more poorly on the excerpts DrumSolo and Classical.Both of these samples contained fast sequences of transients.It is likely that the poorer performance is due to the individual transients not being resolved during the analysis, because of the relatively long analysis window used.Also, for the excerpt Eddie, EL scored higher than the proposed method.Note that the audio excepts were not selected so that the results would be preferable for one of the tested methods.Instead, they represent some interesting and critical cases, such as singing and sharp transients.
The preferences of subjects over the tested TSM methods seem to depend significantly on the signal being processed.Overall, the MOS values computed from all the samples suggest that the proposed method yields slightly better quality than HP and practically the same quality as EL.
CastViolin Classical
JJCale DrumSolo Eddie Jazz Techno Vocals Mean The proposed method introduces some additional computational complexity when compared to the standard phase-locked vocoder.In the analysis stage, the fuzzy classification of the spectral bins requires median filtering of the magnitude of the analysis STFT.The number of samples in each median filtering operation depends on the analysis hop size and the number of frequency bins in each short time spectra.In the modification stage, additional complexity arises from drawing pseudo-random values for the phase randomization.Furthermore, computing the phase randomization factor, as in Equation ( 13), requires the evaluation of two hyperbolic tangent functions for each point in the STFT.Since the argument for the second hyperbolic tangent depends only on the TSM factor, its value needs to be updated only when the TSM factor is changed.Finally, due to the way the values are used, a lookup table approximation can be used for evaluating the hyperbolic tangents without significantly affecting the quality of the modification.
Conclusions
In this paper, a novel TSM method was presented.The method is based on fuzzy classification of spectral bins into sinusoids, noise, and transients.The information from the bin classification is used to preserve the characteristics of these distinct signal components during TSM.The listening test results presented in this paper suggest that the proposed method performs generally better than a state-of-the-art algorithm and is competitive with commercial software.
The proposed method still suffers to some extent from the fixed time and frequency resolution of the STFT.Finding ways to apply the concept of fuzzy classification of spectral bins to a multiresolution time-frequency transformation could further increase the quality of the proposed method.Finally, although this paper only considered TSM, the method for fuzzy classification of spectral bins could be applied to various audio signal analysis tasks, such as multi-pitch estimation and beat tracking.
Figure 1 .
Figure 1.Spectrogram of a signal consisting of piano, percussion, and double bass.
Figure 2 .
Figure 2. (a) Spectrogram of pink noise and (b) the histogram of tonalness values for its spectrogram bins.
Figure 3 .
Figure 3.The relations between the three fuzzy classes.
where u[m, k] are the added noise values and Y[m, k] are the synthesis phases computed with the phase-locked vocoder.The pseudo-random numbers u[m, k] are drawn from the uniform distribution U (0, 1).A n [m, k] is the phase randomization factor, which is based on the estimated noisiness of the bin R n [m, k] and the TSM factor α:
Figure 5 .
Figure 5.A contour plot of the phase randomization factor A n , with b n = b α = 4. TSM: time-scale modification.
Figure 6 .
Figure 6.Illustration of the proposed transient detection.(a) Frame transientness.Locations of the detected transients are marked with purple circles; (b) Time derivative of the frame transientness.Detected transient onsets are marked with orange crosses.The red dashed line shows the transient detection threshold.
Figure 7 .
An example of the proposed transient preservation method.(a) shows the original audio, consisting of a solo violin overlaid with a castanet click.Also shown are the modified samples with TSM factor α = 1.5, using (b) the standard phase vocoder, and (c) the proposed method.
Table 1 .
List of audio excerpts used in the subjective listening test.Early in the Morning, performed by Eddie Rabbit Jazz Excerpt from I Can See Clearly, performed by the Holly Cole Trio Techno Excerpt from Return to Balojax, performed by Deviant Species and Scorb Vocals Excerpt from Tom's Diner, performed by Suzanne Vega | 6,238.4 | 2017-12-12T00:00:00.000 | [
"Computer Science"
] |
Increased burden of cardiovascular disease in people with liver disease: unequal geographical variations, risk factors and excess years of life lost
People with liver disease are at increased risk of developing cardiovascular disease (CVD), however, there has yet been an investigation of incidence burden, risk, and premature mortality across a wide range of liver conditions and cardiovascular outcomes. We employed population-wide electronic health records (EHRs; from 1998 to 2020) consisting of almost 4 million adults to assess regional variations in disease burden of five liver conditions, alcoholic liver disease (ALD), autoimmune liver disease, chronic hepatitis B infection (HBV), chronic hepatitis C infection (HCV) and NAFLD, in England. We analysed regional differences in incidence rates for 17 manifestations of CVD in people with or without liver disease. The associations between biomarkers and comorbidities and risk of CVD in patients with liver disease were estimated using Cox models. For each liver condition, we estimated excess years of life lost (YLL) attributable to CVD (i.e., difference in YLL between people with or without CVD). The age-standardised incidence rate for any liver disease was 114.5 per 100,000 person years. The highest incidence was observed in NAFLD (85.5), followed by ALD (24.7), HCV (6.0), HBV (4.1) and autoimmune liver disease (3.7). Regionally, the North West and North East regions consistently exhibited high incidence burden. Age-specific incidence rate analyses revealed that the peak incidence for liver disease of non-viral aetiology is reached in individuals aged 50–59 years. Patients with liver disease had a two-fold higher incidence burden of CVD (2634.6 per 100,000 persons) compared to individuals without liver disease (1339.7 per 100,000 persons). When comparing across liver diseases, atrial fibrillation was the most common initial CVD presentation while hypertrophic cardiomyopathy was the least common. We noted strong positive associations between body mass index and current smoking and risk of CVD. Patients who also had diabetes, hypertension, proteinuric kidney disease, chronic kidney disease, diverticular disease and gastro-oesophageal reflex disorders had a higher risk of CVD, as do patients with low albumin, raised C-reactive protein and raised International Normalized Ratio levels. All types of CVD were associated with shorter life expectancies. When evaluating excess YLLs by age of CVD onset and by liver disease type, differences in YLLs, when comparing across CVD types, were more pronounced at younger ages. We developed a public online app (https://lailab.shinyapps.io/cvd_in_liver_disease/) to showcase results interactively. We provide a blueprint that revealed previously underappreciated clinical factors related to the risk of CVD, which differed in the magnitude of effects across liver diseases. We found significant geographical variations in the burden of liver disease and CVD, highlighting the need to devise local solutions. Targeted policies and regional initiatives addressing underserved communities might help improve equity of access to CVD screening and treatment.
Background
Although cardiovascular disease (CVD) prevention policies have had some success, for the past two decades, they have remained limited in reducing the number of deaths globally with CVD still ranked as the leading cause of death. A large proportion of deaths are attributable to preventable illnesses-most premature deaths from CVD in people younger than 75 are avoidable [1,2]. As major risk factors for CVD, type 2 diabetes and obesity have received the most attention in policy, public awareness and guidelines. However, despite reported associations between liver disease and cardiovascular risk, liver conditions other than non-alcoholic fatty liver disease (NAFLD) have mostly been overlooked.
Liver disease encompasses a spectrum of conditions including viral hepatitis, NAFLD, steatosis to end-stage cirrhosis. Other causes of liver cirrhosis include autoimmune liver disease and alcoholic liver disease (ALD). The heart has been implicated in the progression of aggravating liver disease, with the liver-heart axis more extensively explored in patients with NAFLD [3]. With a global prevalence of 25% in the general population [4], the number of individuals living with NAFLD far exceeds the number of individuals with diabetes and obesity [5] combined. NAFLD is characterised by lipid accumulation in the liver and systemic metabolic aberrations, which leads to an increased risk of developing CVD. However, the pathophysiological associations between other liver diseases and CVD remain poorly understood. The World Health Organisation estimated that 250 million individuals are living with chronic hepatitis B caused by infection with the hepatitis B virus (HBV) where prevalence is the highest in African and Western Pacific regions [6]. An estimated 71 million individuals have chronic hepatitis C virus (HCV) infection [7] and emerging evidence suggests an association between HCV and CVD, where CVD associated with HCV infection is responsible for the loss of 1.5 million disability-adjusted life-years each year [8]. HBV and HCV transmissions continue to rise in low-and middle-income countries and sustained chronic infection may lead to the development of CVD due to chronic inflammation and metabolic derangements [9].
Additionally, liver disease often co-exists with type 2 diabetes [10] and chronic renal disease [11], both of which are independent cardiovascular risk factors [12].
Current guidelines from the American Association for the Study of Liver Diseases recommended the modification of CVD risk factors in these individuals and the screening of CVD during liver transplant evaluation [13]. Furthermore, since CVD is the most common cause of death in patients with NAFLD, the European Association for the Study of Liver recommends mandatory screening for CVD in individuals with NAFLD [14]. As liver disease progresses, liver-specific risk factors (e.g., raised International Normalised Ratio) and prevalent comorbidities that are independently associated with increased CVD risk may come into play and could result in more severe illness in these individuals. Yet, CVD is often underdiagnosed in patients with liver disease and there are no policies on screening patients with chronic liver disease.
Harnessing an emerging data science opportunity from population-based electronic health records (EHRs), we sought to identify population groups, among patients with liver disease, who might be at high risk for CVD. Employing linked EHRs from primary and secondary care on 4 million individuals, our study aims to address the value, or futility, of targeted monitoring of CVD risk in patients with liver disease. We characterised clinical features of patients with any of the five liver conditions, ALD, autoimmune liver disease, HBV, HCV and NAFLD and explored the associations between risk factors and future risk of 17 of the most common initial cardiovascular presentations. A report on the atlas of variation in risk factors and healthcare for liver disease by the Public Health England found that for the past three decades, there have been limited improvements in mortality rates and incidence of liver disease [15]. Deaths from liver disease have increased by fourfold and progress on earlier diagnosis and better treatment have been slow [16,17]. For these reasons, we investigated regional variations in liver disease burden and CVD burden to highlight potential gaps in the provision of services, inequality of access and prevention initiatives, drawing attention to regions where improvements are most needed. Specific
Conclusions:
We developed a public online app (https:// lailab. shiny apps. io/ cvd_ in_ liver_ disea se/) to showcase results interactively. We provide a blueprint that revealed previously underappreciated clinical factors related to the risk of CVD, which differed in the magnitude of effects across liver diseases. We found significant geographical variations in the burden of liver disease and CVD, highlighting the need to devise local solutions. Targeted policies and regional initiatives addressing underserved communities might help improve equity of access to CVD screening and treatment.
Keywords: Cardiovascular risk, Incidence, Liver disease, Electronic health records, Years of life lost, Geographical variations objectives of this study were: (i) to estimate the regional variations in incidence rates for liver disease in England, (ii) to estimate regional variations in incidence rates for CVD in patients with and without liver disease, (iii) to estimate the associations of age, clinical biomarkers, preexisting comorbidities and smoking with risk of initial presentation of CVD and (iv) to estimate excess years of life lost (a marker of pre-mature mortality) based on the age of CVD onset by comparing patients with liver disease who subsequently developed CVD to those who did not.
We provide an open-access online app, with implications for clinical risk assessment and targeted policy on CVD prevention, that shows incidence rates, years of life lost estimations and cause-specific hazard ratios for associations between clinical features and initial presentation of CVD.
Study design and data source
We used linked EHRs from primary and secondary care, which consisted of a population of 3,929,596 adults aged ≥ 30 years during the study period of 1998-2020. Individuals were followed up until the occurrence of a primary endpoint, death, date of last data collection for the practice, date of administrative censoring (June 2020) or deregistration from the practice (i.e., loss to followup), whichever occurred first. Baseline characteristics were analysed and stratified by liver disease. Information governance approval was obtained from the Medicines Healthcare Regulatory Authority (UK) Independent Scientific Advisory Committee (21_000363) Clinical Practice Research Datalink (CPRD).
Open electronic health record (EHR) definitions of diseases and covariates
Phenotype definitions of liver disease, CVD, comorbidities and risk factors are available at https:// pheno types. healt hdata gatew ay. org/ home/ and have previously been validated [18][19][20][21][22][23]. Phenotypes for primary care records were generated using Read clinical terminology (version 2). Phenotypes for secondary care records were generated using ICD-10 terms. BMI and smoking were considered as the nearest record to entry within 1 year prior to entry. We examined nine biomarkers; albumin, alanine aminotransferase, aspartate transaminase, bilirubin, C-reactive protein, gamma-glutamyltransferase, International Normalised Ratio (INR), platelets and triglycerides. The primary outcome was the first record of one of the following 17 cardiovascular presentations in data from either primary or secondary care, which were grouped into five CVD categories: (i) coronary heart disease: stable angina, unstable angina, myocardial infarction, heart failure and coronary heart disease unspecified; (ii) strokes and transient ischaemic attack (TIA): ischaemic stroke, stroke unspecified and TIA; (iii) peripheral vascular disease: peripheral arterial disease, pulmonary embolism, venous thromboembolic disease and abdominal aortic aneurysm; (iv) cardiomyopathy: dilated cardiomyopathy, hypertrophic cardiomyopathy and cardiomyopathy unspecified; (v) arrhythmia: atrial fibrillation and sick sinus syndrome. We examined 14 comorbidities and considered records for comorbidities prior to cohort entry (prevalent comorbidities). The comorbidities were diabetes mellitus, complications of diabetes (i.e., diabetic nephropathy, diabetic retinopathy and neurological complications of diabetes), dyslipidaemia, hypertension, jaundice, proteinuric kidney disease, oesophagitis or oesophageal ulcer, proteinuria, chronic kidney disease and gastrointestinal conditions such as Barrett's oesophagus, Crohn's disease, diverticular disease of the intestine, gastro-oesophageal reflux disease and irritable bowel syndrome (Additional file 1 and Additional file 2).
Estimations of age-standardised and age-specific incidence rates
Age-standardised incidence rates were estimated per 100,000 person-years and based on a 5-year study period from 01.01.2015 to 31.12.2019. We analysed geographical variations in incidence rates based on CPRD practice region definitions. We retrieved the 2019 population estimates (overall and by age groups) from the Office for National Statistics [24] for age-standardisation of incidence rates. Additionally, we estimated incidence rates by age groups (i.e., age-specific incidence rates) to explore differences across the age groups. Patient years were calculated by age group (30-39, 40-49, 50-59, 60-69, 70-79 and ≥ 80) using the 'pyears' function in the survival package in R (version 3.2.10). Confidence intervals for incidence rates were calculated based on the central limit theorem given dichotomous outcome in a single population. For event numbers smaller than five, agestandardised incidence rate was not reported. Using these approaches, we estimated both age-standardised and age-specific incidence rates for liver disease and incidence rates for CVD in patients with and without liver disease.
Estimations of excess years life lost (YLL)
YLL was estimated using the lillies package [25], which was validated by other studies [26][27][28]. YLL was estimated based on CVD onset at ages 30, 40, 50, 60, 70 and 80 (Additional file 3). We estimated excess YLL based on the specific age of onset of CVD and compared the average of these individuals to the patients without CVD of the same age. We also examined excess YLL by the type of first CVD presentation within age groups.
Statistical analyses
Cox proportional hazards models were fitted. Hazard ratios (HRs) from the fully adjusted models were reported with 95% confidence intervals (CI). All P values were two-sided. Models were also refitted with age group as a categorical variable to return HRs by age group. Proportional hazards assumption violations were tested for a zero slope in the scaled Schoenfeld residuals. Biomarker and BMI measurements were indicated as the presence or absence of a particular measurement above or below the stated threshold. In the primary analysis, those with missing BMI information were assumed to be non-obese. Those with missing biomarker information were assumed to be within the normal threshold on the assumption that abnormal blood test results were likely to be recorded if present. Additionally, sensitivity analyses were conducted on complete records to demonstrate that the estimates were robust to our assumptions around missing data.
To ascertain whether patients with liver disease had a higher incidence of CVD, we considered 17 cardiovascular conditions (see "Methods" Section). The agestandardised incidence rate for CVD in patients with any liver disease was two-fold higher than in people without liver disease. Incidence rate was 2634.6 per 100,000 person years (CI 2524.4-2744.8) in people with liver disease ( Fig The age-specific incidence rates of CVD in patients with any liver disease exhibited an upward trend with increasing age, peaking at ages 70-79 (766 per 100,000 person years). Similarly, in people without liver disease, this upward trend was maintained, albeit at a lower magnitude (highest CVD incidence was observed in individuals aged 80 and above; 562 per 100,000 person years) (Additional files 9, 10). Unlike the incidence rates of CVD which increased with age peaking in the highest age groups, the incidence rates of liver disease peaked in middle-aged individuals as shown earlier (Fig. 1B). Geographical variations of liver disease burden and CVD burden (in people with or without liver disease) were further explored and presented in the supplementary appendix.
Patterns of the first presentation of CVD in patients with liver disease
We analysed the first presentation of 17 types of incident CVD. When considering the first presentation of CVD, the number of incident CVD events for patients with liver disease were as follow; ALD (3283/12,845, 25.6%), autoimmune liver disease (667/2210, 30.2%), HBV (270/1753, 15.4%), HCV (522/3112, 16.8%) and NAFLD (4362/20,928, 20.8%). When comparing across patients with different liver diseases, atrial fibrillation was the most common condition while hypertrophic cardiomyopathy was the least common (Fig. 3A). When individual CVDs were grouped into five categories, coronary heart disease, which is a composite of stable angina, unstable angina, myocardial infarction, heart failure and coronary heart disease unspecified, was the most common, followed by arrhythmia, stroke/TIA, peripheral vascular disease and cardiomyopathy (Fig. 3B). When delving into specific liver conditions, coronary heart disease was the first presentation in 53% of patients with NAFLD who had cardiovascular events, while cardiomyopathy was the first presentation in only 3% of patients. By contrast, 44% of patients with autoimmune liver disease presenting with cardiovascular events had coronary heart disease, and 1% had cardiomyopathy (Fig. 3B). The first presentation of CVD was influenced by age and sex (Additional file 2). CVD was more common in older individuals and there are sex differences in the type of CVD presented. In patients with HCV, for example, 35% of men and 43% of women aged 70-79 had a cardiovascular event (Additional file 2). In men, 5%, 5%, 6% and 19% of these events account for peripheral vascular disease, stroke/TIA, arrhythmia and coronary heart disease respectively. While in women, the proportion differs with 4%, 15% and 24% events accounting for stroke/ TIA, arrhythmia and coronary heart disease respectively (Additional file 2).
Clinical features associated with the first presentation of CVD
The associations between patient-level factors (i.e., prevalent comorbidities, biomarkers and smoking) and the risk of incident CVD were shown in Fig. 4 and Additional file 11. We noted strong positive associations between increasing age and risk of incident CVD. Among patients with ALD, individuals aged 60-69 had a four-fold higher risk compared to 30-39-year-olds (adjusted HR Comorbidities such as diabetes mellitus, complications of diabetes, hypertension, proteinuric kidney disease and chronic renal disease were associated with a higher risk of CVD when comparing across all liver diseases. Diverticular disease and gastro-oesophageal reflex disease were associated with a higher risk of CVD in patients with autoimmune liver disease or NAFLD. Dyslipidaemia was also associated with a higher risk in patients with ALD (adjusted HR: 1.22, CI 1.11-1.34), HCV (adjusted HR: 1.86, CI 1.32-2.60) or NAFLD (adjusted HR: 1.25, CI 1.17-1.34) ( Fig. 4; Additional file 11).
We observed that albumin levels of < 35 g/L were associated with a higher risk of CVD in patients with ALD (adjusted HR: 1.20, CI 1.10-1.32), autoimmune liver disease (adjusted HR: 1.55, CI 1.26-1.91), HBV (adjusted HR: 2.39, CI 1.54-3.71), HCV (adjusted HR: 1.75, CI 1.29-2.39) or NAFLD (adjusted HR: 1.59, CI 1.42-1.79). In patients with ALD or NAFLD, those who had elevated C-reactive protein levels of ≥ 10 mg/L, a marker of inflammation, had at least a 1.2-fold increased risk of CVD. Raised International Normalized Ratio (INR) of ≥ 1.7 is associated with 3.5-fold, 6.4-fold and 1.5-fold increased risk of CVD in patients with autoimmune liver disease, HCV or NAFLD. Interestingly, we noted an inverse association between alanine aminotransferase (ALT) levels and CVD risk in patients with ALD
Excess years of life lost from CVD
Among patients with liver disease, we estimated excess years life lost (YLL), which was calculated as the average number of years that individuals with a specific CVD condition lose in excess of that found in people without CVD of the same sex and age. We estimated excess YLL based on the age of CVD onset at ages 30, 40, 50, 60, 70 and 80 (Fig. 5A). Overall, individuals with NAFLD experienced the highest excess YLL upon CVD diagnosis, a pattern that was most pronounced at younger ages. Patients with NAFLD who developed CVD at the age of 40 years experienced an excess YLL of 18.2 years (CI 16.5-20.2) compared to those without CVD. In contrast, at the same age of CVD onset, patients with HBV experienced excess YLL of only 9.5 years (CI 7.1-12.1) (Fig. 5A). When investigating sex differences, the difference in excess YLL between men and women appeared to be marginal in patients with ALD, autoimmune liver disease and NAFLD (Additional file 3). In patients with HBV, women diagnosed with CVD at ages 70 or 80 experience higher excess YLL than men. At age 70, women lost We also evaluated excess YLL by age of CVD onset for each of the five liver diseases stratified by the five CVD categories. In the interest of maintaining conciseness and to aid comparison across liver diseases, we graphically displayed results as radar plots (Fig. 5B). Each radar showed excess YLL estimates for a specific age of CVD onset, with each spoke representing excess YLL for one CVD category. A line is drawn to connect the excess YLL values for each CVD category where each liver disease is represented by different coloured lines. All categories of CVD were associated with shorter remaining life expectancies. When evaluating the surface areas covered by the lines, as expected, the radar plots demonstrated that younger age of CVD onset had higher excess YLL (as represented by a larger surface area connecting the CVD categories) that was consistent across the five liver conditions. Conversely, as the age of onset increased, excess YLL decreased. The radar plots also facilitate the visualisation of the differences by liver disease and CVD category. Notably, when comparing across the five liver conditions, the differences in excess YLL by CVD category were more pronounced at younger ages (Fig. 5B). At age 40, patients with autoimmune liver disease, compared to other liver diseases, experience the highest excess YLL when diagnosed with peripheral vascular disease (25.7 years; CI 12.7-41.6) compared to other CVD categories. In contrast, patients with autoimmune liver disease who are diagnosed with incident arrythmia at age 40 lost only 6.7 years (9.5% CI 4.5-9.5) (Fig. 5B).
Discussion
Our study examines incidence, comorbidity patterns, risk of initial presentation of CVD and excess YLL associated with CVD in patients with any of the five liver diseases. Patients with liver disease have an enhanced incidence burden of CVD compared to the general population. Coronary heart disease and arrhythmia are two of the most common first presentation of CVD, which show similar patterns across all liver diseases. This finding is consistent with previous studies in patients with NAFLD reporting an increase in the prevalence of coronary heart disease (including myocardial infarction), cardiac arrhythmias and cardiomyopathy [29]. Interestingly, the different aetiology of liver disease did not impact upon CVD presentation, perhaps suggesting that the liver disease drives CVD rather than the causative aetiology (i.e., viral infection or alcoholism). We provide the public, researchers and policymakers with an interactive online tool for exploration and visualisation of the incidence rates of liver disease, incidence rates of CVD in patients with and without liver disease and excess YLL associated with CVD. The tool also brings together information on other analyses such as comorbidity patterns in patients with liver disease and adjusted hazard ratios for CVD risk by age, biomarkers and comorbidities.
Our results highlight significant variations in the burden of liver disease across geographical regions in England. This suggests that there might be geographical variations in risk factors, health and risk awareness among the public, and access to diagnostic services where local solutions are required. Our observation that incidence rates for liver disease (except for HBV) were the highest in North East and North West regions was corroborated by the Public Health England's (PHE's) second atlas of variation for liver disease [30]. The PHE atlas revealed that Northern regions had a high rate of expenditure for hepatobiliary problems, liver disease admissions rates, liver disease mortality and rate of years of life lost. We observed that the incidence rate for HBV was the highest in London. This result was consistent with the PHE atlas which found that the percentage of women who tested positive for HBV during pregnancy screening and the rate of hospital admission for HBV-related endstage liver disease and mortality were high in London, presumably caused by a high prevalence of migrant communities in this region [30].
When examining the regional variations of CVD in patients with or without liver disease, we found that variation in incidence rates is ubiquitous across England. Our results draw attention to the need to coordinate CVD and liver disease services across regions to maintain equity in service access at a local level by helping commissioners, service providers and clinicians to compare the disease burden of their region with the national figure to ensure that care pathways are planned optimally. Regional variations signify that a one-size-fits-all approach is less likely to be effective. Services within each region need to identify challenges and adapt solutions at different rates in different scenarios. Our maps of disease burden variation could serve as a tool for benchmarking where each region stands relative to other regions and to provide a starting point for further investigation on reasons explaining the variation.
Our population-wide study of nearly 4 million individuals provides evidence that associations between clinical factors and 17 cardiovascular disease outcomes differ in terms of magnitude of effects when comparing across the five liver diseases. Raised BMI and current and former smoking were associated with an increased risk of CVD. Similarly, diabetes mellitus, hypertension, dyslipidaemia and chronic kidney disease are comorbidities associated with a higher risk. CVD risk is increased by at least two times in patients with impaired renal function [12]. We also observed that proteinuria, which is an indication of kidney damage, is associated with a higher risk of CVD. C-reactive protein is a marker of systemic inflammation, where it is found to be a predictor of mortality in patients with liver disease [31]. We found that patients with raised C-reactive protein have an increased risk of CVD, which corroborates previous findings that inflammation may serve as a pathogenic mechanism for triggering atherothrombotic CVD events [32]. Low albumin levels were associated with a higher risk. An inverse association between albumin and CVD may be explained by mechanisms related to the pathogenesis of CVD and inflammation associated with hypoalbuminaemia [33]. We found that markers of liver function (i.e., bilirubin, aspartate transaminase and gamma-glutamyltransferase) do not appear to be associated with CVD risk. Evidence on the association between these measurements and CVD risk has been mixed [34]. When analysing the effects of alanine aminotransferase (ALT) levels, we observed an inverse relationship between ALT and CVD risk. A systematic review and meta-analysis of 29 cohort studies demonstrated that there was limited evidence for association between ALT and CVD events [35]. In contrast, stratified analysis on cause-specific CVD endpoints demonstrated that ALT was inversely associated with coronary heart disease but positively associated with stroke [35]. A study on three large prospective cohorts (West of Scotland Coronary Prevention Study, the Prospective Study of Pravastatin in the Elderly at Risk, and the Leiden 85-plus Study) with ages ranging from 45 to 85 years, also described an inverse association between ALT and cardiovascular outcomes, even after adjusting for confounders [36]. The authors found no evidence of any association between higher levels of ALT and increased risk of clinical outcomes when examining ALT levels > 27 U/L. In contrast, other studies reporting raised ALT levels with adverse outcomes were employing much higher ALT levels [37] (defined as 2 times the upper limit of normal) than in our present study (we defined raised ALT as ≥ 35 U/L) and in the study by Ford et al. [36]. High ALT in the normal range was also found to be negatively associated with death from ischaemic heart disease [38]. This suggests that there might be potential variations of CVD risk based on ALT levels that are closer to the population median. Furthermore, another report suggested a possible U-shaped association of ALT with vascular risk [39]. Findings from our study along with others suggest that the relationship between ALT and CVD risk is more complex than currently appreciated and may be caused by differences in underlying aetiology. It is also important to note that ALT levels are not useful indicators to risk-stratify the severity of liver disease [40]. Patients with cirrhosis can have normal values when scar tissue replaces the damaged liver cells and can no longer produce ALT.
Several studies have shown that NAFLD is a risk factor for CVD, especially coronary heart disease, in a manner that is independent of other risk factors such as hypertension, diabetes, smoking and obesity [41,42]. For example, Hamaguchi et al. found that NAFLD at baseline predisposed individuals to future CVD events, independent of typical cardiovascular risk factors [42]. All subtypes of NAFLD were linked to increased CVD risk, but individuals with fibrosis and NASH have an even higher risk [41]. The pathophysiological mechanisms of NAFLD and CVD have been elegantly reviewed by Przybyszewski et al. recently [43]. Although there is a lack of complete understanding of the exact causal mechanisms, it was thought that inflammation and dyslipidemia may promote atherogenesis leading to coronary heart disease [43]. During NAFLD progression, inflammation of visceral adipose tissue may trigger the activation of proinflammatory pathways (JNK and NF-κB) and downstream synthesis of procoagulant factors, collectively causing increased CVD risk [43].
Our results confirm that patients with liver disease experience premature mortality attributable to CVD. These findings are consistent with a report from the Centers for Disease Control and Prevention stating that CVD ranks third in YLL prior to age 65 [44] along with other studies demonstrating that premature death due to CVD continues to increase globally [45,46]. Although each type of CVD is associated with excess YLL, we observed that women lost more years of life compared to men at a younger age of CVD onset among patients with HCV and at an older age of CVD onset among patients with HBV.
Strengths and weaknesses
A major strength of this study is the ability to differentiate 17 cardiovascular outcomes using a real-world population cohort, which is larger, contemporary and more representative of the general population compared with investigator-led cohorts. Given that we have analysed records from both primary and secondary care, we are able to capture conditions that are treated by general practitioners and specialist clinicians in hospitals. Primary care records also provide more detailed data than those recorded on admission to hospital, where the former account for the total population while the latter only for a subset of individuals who attended hospitals. In addition, given the longevity of follow-up and the breadth and depth of variables available, we were able to examine the associations between a wide range of comorbidities [18,19,21]. We note important limitations in this study. First, our study employs only data in England, which may limit generalisability to other geographical locations. We have only sampled a subset of English general practices accounting for almost 4 million individuals. Second, as in all observational studies, there is the potential for residual confounding. Third, missing data is a common phenomenon in EHRs, however, sensitivity analyses on complete records have shown that our estimates were robust to our assumptions around missing data. There remains a possibility of underreporting of NAFLD in the nineties and early 2000s.
Policy implications
Our study has policy implications in CVD prevention and targeted recommendations. CVD prevention policies and guidelines should identify groups of high-risk individuals among people with liver disease for targeted screening and intervention. Treatment of HCV with direct-acting antiviral therapy is linked to a decrease in the risk of CVD events in patients who have sustained virological responses [47]. This suggests that HCV infection and potentially HBV infection may be modifiable CVD risk factors [48] and treating the underlying infection could improve both liver and CVD outcomes.
This work has important implications for addressing inequalities in HBV and HCV screening and treatment. According to recent estimates, around 1,13,000 and 1,80,000 individuals are infected with HCV and HBV in England respectively [49]. Although effective treatment for HCV exists, there remains inequality in accessing treatment as HCV may remain undiagnosed in underserved populations such as people who are homeless, in prison or drug users. Migration from HBV endemic countries has resulted in an increased prevalence of HBV infection in the UK. However, migrants remained an underserved or invisible population due to low disease awareness and low engagement with health services [50]. A study investigating the uptake of HBV testing in England revealed that there were low awareness levels of HBV among the migrant community, which resulted in limited engagement with healthcare services [50]. Furthermore, HBV infection is a stigmatised condition, which further exacerbated the testing and diagnostic barriers. HBV infection in the UK is thought to be an 'invisible disease within an invisible population' [50]. These observations were corroborated in another study which found limited evidence of HBV testing among migrants in the UK and testing in children and young people remain very low [51]. Our results suggest that regional initiatives targeting underserved communities might help complement interventions from two perspectivestreatment of chronic HBV/HCV by addressing social challenges that cause healthcare inequality and targeted prevention of CVD in patients with liver disease.
Conclusion
We propose that management of liver disease should include regular cardiovascular risk assessment in addition to checking for undiagnosed diabetes and renal comorbidities. Furthermore, CVD risk assessment should be provided for all types and all stages of liver disease. At the individual level, communicating risk information may affect people's attitude to seek opportunities for early detection and treatment of CVD to reduce premature mortality. For example, this can be accomplished by improving access to the NHS Health Check programme [52], aimed at individuals aged 40-74, to identify early signs of CVD, stroke, kidney disease and diabetes. Access to risk information could help stimulate conversations between patients and their doctors on decisions on managing their health and care. It is now possible to generate and disseminate novel risk information using real-world data that takes a step towards making decisions that are not based on the group average but that are closer to specific patient scenarios (i.e., patients like me) encountered in clinical settings [53]. | 7,509.6 | 2022-01-03T00:00:00.000 | [
"Medicine",
"Biology"
] |
A First Digit Theorem for Square-Free Integer Powers
For any fixed integer power, it is shown that the first digits of square-free integer powers follow a generalized Benford law (GBL) with size-dependent exponent that converges asymptotically to a GBL with inverse power exponent. In particular, asymptotically as the power goes to infinity the sequences of squarefree integer powers obey Benford’s law. Moreover, we show the existence of a one-parametric size-dependent exponent function that converge to these GBL’s and determine an optimal value that minimizes its deviation to two minimum estimators of the size-dependent exponent over the finite range of square-free integer powers less than 10 ,..., 4 , 10 = ⋅ m m s , where 10 , 5 , 4 , 3 , 2 , 1 = s is a fixed integer power. Mathematics Subject Classification: Primary 11A25, 11K36, 11N37, 11Y55; Secondary 62E20, 62F12
Abstract
For any fixed integer power, it is shown that the first digits of square-free integer powers follow a generalized Benford law (GBL) with size-dependent exponent that converges asymptotically to a GBL with inverse power exponent.In particular, asymptotically as the power goes to infinity the sequences of squarefree integer powers obey Benford's law.Moreover, we show the existence of a one-parametric size-dependent exponent function that converge to these GBL's and determine an optimal value that minimizes its deviation to two minimum estimators of the size-dependent exponent over the finite range of square-free integer powers less than
Introduction
It is well-known that the first digits of many numerical data sets are not uniformly distributed.Newcomb [14] and Benford [3] observed that the first digits of many series of real numbers obey Benford's law (1.1) The increasing knowledge about Benford's law and its applications has been collected in various bibliographies, the most recent being Beebe [2] and Berger and Hill [4].It is also known that for any fixed power exponent 1 ≥ s , the first digits of integer powers, follow asymptotically a Generalized Benford law (GBL) with exponent ) 1 , 0 ( 1 ∈ = − s α such that (see Hürlimann [7]) (1.2) Clearly, the limiting case 0 → α respectively 1 → α of (1.2) converges weakly to Benford's law respectively the uniform distribution.
We study the distribution of first digits of square-free integer powers.The method consists to fit the GBL to samples of first digits using two size-dependent goodness-of-fit measures, namely the ETA measure (derived from the mean absolute deviation) and the WLS measure (weighted least square measure).In Section 2, we determine the minimum ETA and WLS estimators of the GBL over finite ranges of square-free powers up to 4 , 10 a fixed power exponent.Computations illustrate the convergence of the size-dependent GBL with minimum ETA and WLS estimators to the GBL with exponent 1 − s .Moreover, we show the existence of a one-parametric size-dependent exponent function that converge to these GBL's and determine an optimal value that minimizes its deviation to the minimum ETA and WLS estimators.A mathematical proof of the asymptotic convergence of the finite sequences to the GBL with inverse power exponent follows in Section 3.
Size-dependent GBL for square-free integer powers
To investigate the optimal fitting of the GBL to first digit sequences of squarefree integer powers, it is necessary to specify goodness-of-fit (GoF) measures according to which optimality should hold.First of all, a reasonable GoF measure for the fitting of first-digit distributions should be size-dependent.This has been observed by Furlan [5], Section II.7.1, pp.70-71, who defines the ETA measure, and by Hürlimann [8], p.8, who applies the probability weighted least squares (WLS) measure used earlier by Leemis et al. [12] (chi-square divided by sample size).Let , be an integer sequence, and let n d be the (first) . Then, Furlan's ETA measure for the GBL is defined to be A first digit theorem for square-free integer powers 131 where is the mean absolute deviation measure.The latter measure is also used to assess conformity to Benford's law by Nigrini [15] (see also Nigrini [16], Table 7.1, p.160).The WLS measure for the GBL is defined by (e.g.[12]) where the counting function ) (n S is given by (e.g.Pawlewicz [18], Theorem 1)
, together with the sample size , is provided in Table A.1 of the Appendix.Based on this we have calculated the optimal parameters which minimize the ETA (or equivalently MAD) and WLS measures, the so-called minimum ETA (or minimum MAD) and minimum WLS estimators.Together with their GoF measures, these optimal estimators are reported in Table 2.1 below.Note that the minimum WLS is a critical point of the equation For comparison, the ETA and WLS measures for the size-dependent GBL exponent ) , called LL estimator, are listed.This type of estimator is named in honour of Luque and Lacasa [13] who introduced it in their GBL analysis for the prime number sequence.Through calculation one observes that the LL estimator minimizes the absolute deviations between the LL estimator and the ETA (resp.WLS) estimators over the finite ranges of square-free powers Table 2.1 displays exact results obtained on a computer with single precision, i.e. with 15 significant digits.The ETA (resp.WLS) measures are given in units of ).Taking into account the decreasing units, one observes that the optimal ETA and WLS measures decrease with increasing sample size.
Asymptotic counting function for square-free integer powers
The following is a slight extension of the argument by Luque and Lacasa [13], Section 5(a).It is well-known that a random process with uniform density where the integral pre-factor is chosen to fulfill the asymptotic limiting value for the square-free number counting function, that is (note that In fact, two improved asymptotic expansions of ) (N S are known, namely ( ) The first one is classical and proved in Hardy and Wright [6], p.269, and Jameson [9], Section 2.5, for example.The second improved estimate is due to Jia [11] (see also Pappalardi [17]).However, it suffices to use the simple estimate (3.3), which is obtained as follows.From (3.2) one gets for arbitrary ,...
With (3.1) this transforms to
A first digit theorem for square-free integer powers 135 which is independent of s and simply denoted by reflects the fact that there are as many square-free powers in ? Clearly, the factor ( ) for any fixed c .Its derivative with respect to c satisfies the property which implies the following min-max property of ( The size-dependent exponent (3.1) with 1 = c not only minimizes the absolute deviations between the LL estimator and the ETA (resp.WLS) estimators over the finite ranges of square-free powers 10 , .Moreover, the following limiting asymptotic result has been obtained.
First Digit Square-Free Integer Power Theorem (GBL for square-free integer powers).The asymptotic distribution of the first digit of square-free integer power sequences 4 , 10 , is given by .10 in Section 2, but it turns out to be uniformly best with maximum error less than 3 10 − against the asymptotic estimate, at least if 4 10 ≥ N
Table 2 . 1 :
GBL fit for first digit of square-free powers: ETA vs. WLS criterion
Table 3 . 1 :
Comparison of square-free number counting functions for | 1,659 | 2014-01-01T00:00:00.000 | [
"Mathematics"
] |
Comparison of Chitosan Nanoparticles and Soluplus Micelles to Optimize the Bioactivity of Posidonia oceanica Extract on Human Neuroblastoma Cell Migration
Posidonia oceanica (L.) Delile is a marine plant endemic of Mediterranean Sea endowed with interesting bioactivities. The hydroalcholic extract of P. oceanica leaves (POE), rich in polyphenols and carbohydrates, has been shown to inhibit human cancer cell migration. Neuroblastoma is a common childhood extracranial solid tumor with high rate of invasiveness. Novel therapeutics loaded into nanocarriers may be used to target the migratory and metastatic ability of neuroblastoma. Our goal was to improve both the aqueous solubility of POE and its inhibitory effect on cancer cell migration. Methods: Chitosan nanoparticles (NP) and Soluplus polymeric micelles (PM) loaded with POE have been developed. Nanoformulations were chemically and physically defined and characterized. In vitro release studies were also performed. Finally, the inhibitory effect of both nanoformulations was tested on SH-SY5Y cell migration by wound healing assay and compared to that of unformulated POE. Results: Both nanoformulations showed excellent physical and chemical stability during storage, and enhanced the solubility of POE. PM-POE improved the inhibitory effect of POE on cell migration probably due to the high encapsulation efficiency and the prolonged release of the extract. Conclusions: For the first time, a phytocomplex of marine origin, i.e., P. oceanica extract, has enhanced in terms of acqueous solubility and bioactivity once encapsulated inside nanomicelles.
Introduction
Posidonia oceanica (L.) Delile is a marine angiosperm belonging to Posidoniaceae family endemic of the Mediterranean Sea forming expanse underwater meadows of considerable importance for marine ecosystems [1]. The decoction of P. oceanica leaves has been dated to ancient Egypt; but more recently, it has been documented to be used by villagers of the sea coast of Western Anatolia as a traditional natural remedy for diabetes, hypertension, and for its antiprotozoal activity [2,3]. In addition, P. oceanica has proved to be a promising reservoir of bioactive compounds with antibacterial and antimycotic properties [4]. Over the years, P. oceanica has gained a growing interest for its potential benefits on healthcare, mostly related to the antioxidant and antiradical action of its phenolic component. Recently, a study on P. oceanica extract highlighted its biological activity even in the dermatological field. In fact, P. oceanica has proved to be an efficient anti-aging agent by improving fibroblast activity and collagen production [5]. Moreover, the hydroalcoholic extract of P. oceanica (POE) was found to prevent human cancer cell migration with non-toxic mechanism of action. Specifically, the P. oceanica phytocomplex has been proven to reduce the motility of human fibrosarcoma cells and the activity of metalloproteases (MMP-2/9) through the activation of a transient autophagic process without any detectable effect on cell viability [6,7]. The anti-inflammatory mechanism of P. oceanica phytocomplex was recently elucidated [8].
Neuroblastoma is a common childhood extracranial solid tumor with high mortality originating from the sympathetic nervous system. It represents about 10% of solid tumors and occurs in very young children with an average age of 17 months at diagnosis. The clinical picture of neuroblastoma is very variable and depends on the stage and location of the tumor [9]. In the clinical field, various anti-cancer drugs and therapies are used to prevent the high proliferation of neuroblastoma, including surgery, chemotherapy, immunotherapy, radiotherapy, myeloablative treatment, and retinoids therapy [10]. Despite this, high-stage neuroblastoma presents a poor prognosis with extremely low overall survival. Therefore, the search for novel therapeutics is important in the case of pediatric malignancies to improve patient survival by reducing high toxicity associated with anticancer drugs. Over decades, crude extracts derived from medicinal plants are of great interest for scientific research due to their natural origin and their interesting bioactive compounds which can act synergistically in the prevention or treatment of various human diseases. Furthermore, innovative strategies, like nanotechnology, have achieved great results toward ameliorating cancer therapeutics. The use of new therapeutics delivery system, as nanocarriers, may improve efficacy and decrease systemic toxicity during treatment of malignancies compared to the use of "free" drugs [11][12][13]. Among the varieties of nanoformulations known in the literature, nanoparticles and polymeric micelles are of great interest for pharmacological applications.
In particular, chitosan is one of the polymeric constituents most used in the formulation of nanoparticles, due to its advantageous characteristics and interesting biological activities. It is biocompatible, biodegradable, and free of toxicity. It is a versatile compound, suitable for various routes of administration and multifunctional due to the possibility of functionalizing the molecule to obtain specific targeting. Thanks to its qualities, chitosan is used as nanocarrier of various types of active ingredients: proteins, antibodies, genes, hormones, drugs, but also natural molecules [14]. Chitosan nanoparticles for plant extracts have also been described, such as the Nigella sativa L. aqueous extract or the cherry extract from Prunus avium L. [15,16].
Soluplus is a tri-block copolymer consisting of polyvinylcaprolactam-polyvinylacetate-polyethylene glycol. PEG is the hydrophilic portion and polyvinylcaprolactam-polyvinylacetate moieties are arranged in the hydrophobic core. The polymeric micelles have the possibility of incorporating functionality in both core and shell regions: the hydrophobic molecules in the core, less hydrophobic molecules in the core, but near the hydrophilic moiety [18].
Soluplus is biodegradable, and it has a low CMC (7.6 mg/L), which gives its micelles high stability even after dilution [19,20]. Soluplus micelles have been applied to delivery natural and chemical compounds. For example, it has been seen that the use of Soluplus micelles significantly improves the solubility of silymarin, extracted from the fruits of Silybum marianum (L.) Gaertn. (Asteraceae), increasing its solubility and its intestinal permeability [21]. Other applications described the use of Soluplus for doxorubicin delivery in the treatment of resistant tumors [20] or to increase acyclovir permeability across the cornea and sclera [22] or even to enhance oral bioavailability and hypouricemic activity of scopoletin [23]. Numerous other applications are described in the literature [24][25][26][27].
Considering the high migratory and metastatic capacity of neuroblastoma, it is possible to exploit new therapies loaded in nanocarriers to improve the drug efficacy in order to counteract these specific neuroblastoma abilities. Nanoformulations can also be used for the delivery of molecules of natural origin or phytocomplexes to optimize the effectiveness of herbal medicines [13,20,27]. In this perspective, the possibility of using the whole crude extract carried by nanocarriers leads to having better biological efficacy due to the synergistic action of the bioactive compounds of the phytocomplex with respect to the activity of the single compounds.
In this work, we therefore studied-for the first time-the anti-migratory ability of POE loaded in nanoformulations on the human neuroblastoma cell line SH-SY5Y. Our goal was to improve both the aqueous solubility of the POE and its inhibitory effect on cancer cell migration, providing a sustained and prolonged release. For this purpose, we developed and compared two types of POE-loaded nanocarriers, such as chitosan nanoparticles (NP-POE) and Soluplus polymeric micelles (PM-POE), usually applied to the delivery of single compounds. This study aims to develop biocompatible, biodegradable and easy to prepare carriers and to extend their application to carry a phytocomplex, containing molecules with different polarity. Given the chitosan characteristics, this polymer has already been applied to the formulation of nanoparticles for the delivery of polar extracts [15,16]. Polymeric micelles are easy to prepare, stable, biocompatible, and suitable for compounds with different polarity. The authors developed mixed polymeric micelles of Soluplus/TPGS-Vit. E for the formulation of silymarin [21].
The nanoformulations were chemically and physically characterized in terms of size, homogeneity, ζ−potential, morphology, encapsulation efficiency, and storage stability. In vitro release studies were also performed. Finally, the inhibitory effect of both NP-POE and PM-POE on SH-SY5Y cell migration was evaluated by the wound healing assay and compared to that of unformulated POE.
P. oceanica Extract (POE) Preparation
The leaves of P. oceanica were extracted as previously described [6]. Briefly, 10 mL of EtOH/H 2 O (70:30 v/v) per gram of dried and minced P. oceanica leaves were left to shake overnight at 37 • C. Hydrophobic compounds were removed from the water-ethanol extraction by repeated shaking in n-hexane (1:1), whereas the hydrophilic fraction, recovered in the lower phase, was dispensed in 1 mL aliquots and then dried. Single batch of P. oceanica extract was dissolved in 0.5 mL of EtOH/H 2 O (70:30 v/v) before to use and hereafter referred to as POE. Freshly-dissolved POE was characterized for total polyphenols and carbohydrates content and antioxidant and radical scavenging activities, according to previously described methods [6,7]. Drug: extract ratio (D.E.R) was 8:1.
Preparation of Chitosan Nanoparticles (NP and NP-POE)
NP-POE were prepared using the ionotropic gelation method reported in literature [28,29], modified for the optimization of our formulation. A solution (2 mg/mL) of chitosan (CS) in 1% acetic acid was prepared and kept under magnetic stirring for 24 h, then filtered with a 0.45 µm filter membrane. Tripolyphosphate (TPP) water solution (2 mg/mL) was also prepared and 2 mL of this solution was added to 4 mL of CS solution to prepare empty nanoparticles. The resultant mixture was kept under magnetic stirring for 30 min.
To prepare NP-POE, 4 mL of POE hydroalcoholic solution (5 mg/mL in EtOH/H 2 O 70:30 v/v) were added to 4 mL of CS solution (2 mg/mL), then 1.5 mL of TPP solution (2 mg/mL) were added dropwise. The mixture was stirred (500 rpm) at room temperature for 30 min, followed by 15 min of sonication in the ultrasonic bath. The final concentrations of POE, CS, and TPP were 2.11 mg/mL, 0.84 mg/mL, and 0.32 mg/mL, respectively.
Preparation of Soluplus Polymeric Micelles (PM and PM-POE)
PM-POE were prepared by the thin film method [17,30,31]. In brief, 250 mg of Soluplus, and 10.55 mg of POE were dissolved in 20 mL of a mixture EtOH/H 2 O (70:30 v/v). Then, the solvents were evaporated under vacuum at 30 • C until the formation of a thin film. Finally, the film was hydrated with 5 mL of distilled water under sonication for 5 min followed by 20 min of magnetic stirring at 200 rpm. The final concentration of POE was 2.11 mg/mL. Empty micelles were prepared with the same method.
Physical Characterization by Dinamic Light Scattering (DLS)
Hydrodynamic diameter, size distribution and ζ-potential were measured by Dinamic Light Scattering (DLS), using a Zsizer Nanoseries ZS90 (Malvern Instruments, Worcestershire, UK) outfitted with a JDS Uniphase 22 mW He-Ne laser operating at 632.8 nm, an optical fiber-based detector, a digital LV/LSE-5003 correlator and a temperature controller (Julabo water-bath) set at 25 • C. Time correlation functions were analysed by the Cumulant method, to obtain the hydrodynamic diameter of the vesicles (Z-average) and the particle size distribution (polydispersity index, PdI) using the ALV-600 software V.3.X provided by Malvern. ζ-potential, instead, was calculated from the electrophoretic mobility, applying the Helmholtz-Smoluchowski equation using the same instrument. The samples were opportunely diluted in distilled water and an average of three measurements at stationary level was taken. A Haake temperature controller kept the temperature constant at 25 • C. The analyses were performed in triplicate.
Morphological Characterization by Transmission Electron Microscopy (TEM)
The morphological characterization of nanoformulations was performed by TEM CM12 (Philips, The Netherlands) equipped with an OLYMPUS Megaview G2 camera at accelerating voltage of 80 keV. Before the analyses, the samples were diluted in distilled water and placed onto a 200-mesh copper grid coated with carbon. Most of the sample was blotted from the grid with filter paper to form a thin film. After the adhesion of formulation, 5 µL of phosphotungstic acid solution (1% w/v in sterile water, Electron Microscopy Sciences, Hatfield, PA, USA) were dropped onto the grid as a staining medium and the excess solution was removed with filter paper. Samples were dried for 3 min, after which they were examined with the electron microscope [32].
Encapsulation Efficiency (EE%)
The encapsulation efficiency of the NP-POE was calculated employing the indirect method, as reported in the literature [33]. In brief, the NP-POE were centrifuged at 18000 rpm (Ultracentrifuge Mikro 22, Hettick, Kirchlengern, Germany) for 30 min at 4 • C. The supernatant, containing the non-encapsulated extract, was analyzed by HPLC analysis. POE encapsulation efficiency was calculated according to Equation (1) In the case of PM-POE, the dialysis bag method was applied to remove non-encapsulated POE. The bag (cellulose membranes, MWCO 3.5-5 kD, Spectrum Laboratories, Inc., Breda, The Netherlands) was kept in 1 L of distilled water for 30 min at room temperature with continuing stirring at 150 rpm. Then, POE retained in the PM was quantified after dilution with ethanol and sonication for 30 min in an ultrasonic bath. The resulted mixture was analysed by HPLC after centrifugation for 10 min at 14000 rpm [34][35][36]. The Equation (2) was applied to EE% determination.
Stability Studies
Stability studies were conducted over 3 months. NP-POE and PM-POE were stored at 4 • C, in the test tubes coated with aluminium foils. The particle size, PdI, ζ-potential and encapsulation efficiency were investigated at regular intervals to assess the chemical and physical stability of the samples.
In Vitro POE Release from NP-POE and PM-POE
The release of POE from NP-POE and PM-POE were studied using the dialysis bag method (cellulose membranes, MWCO 3.5-5 kD, Spectrum Laboratories, Inc., Breda, The Netherlands). In order to mimic the sink condition, as release medium a PBS solution at pH 7.4 was used. Each formulation (2 mL) was introduced into the dialysis membrane and placed in the release medium (200 mL) at 37 • C, under magnetic stirring. The dissolution medium was 0.01 M PBS (pH 7.4, NaCl 0.138 M, KCl 0.0027 M). At different time points, 1 mL of the release medium was taken and replaced with the same volume of PBS to maintain the sink condition [21,37]. The experiment was conducted for 24 h and 72 h for NP-POE and MP-POE, respectively. The amount of released extract was quantified by HPLC. A hydroalcoholic solution of POE (2 mg/mL) was used as a control. All samples were in a sink condition with the same amount of POE. The released extract in the dissolution media was quantified by HPLC [33].
Cell Culture and Culture Conditions
The SH-SY5Y human neuroblastoma cell line, purchased from American Type Culture Collection (ATCC ® , Manassas, VA, USA), were grown in a 1:1 mixture of Ham's F12 and DMEM supplemented with 2 mM l-glutamine, 100 µg/mL streptomycin, 100 U/mL penicillin and 10% FBS, at 37 • C in a humidified atmosphere of 5% CO 2 . Once cells reached 70% to 80% confluence, they were detached by trypsinization (0.25% trypsin, 0.5 mM EDTA solution) and propagated after appropriate dilutions. SH-SY5Y experiments were performed in serum-free medium (starvation medium) after exposure to unformulated POE and POE-loaded nanocarriers, as NP-POE and PM-POE, opportunely diluted in culture medium to obtain 3 µg/mL final concentration of POE. Untreated cells were used as control.
Cell Viability Assay
Cell viability was assessed using the colorimetric MTT activity assay. SH-SY5Y cells were seeded in a 96-well plate (5 × 10 3 cells/well) in complete medium overnight. Then, cells were treated with POE, NP-POE, and PM-POE in starvation medium for 24 h. Culture medium was removed and adherent cells were washed with PBS. Subsequently, 100 µL/well of 0.5 mg/mL MTT solution were added and incubated in the dark at 37 • C for 1 h. Next, cells were washed with PBS and lysed in 80 µL/well of lysis . Absorbance values were measured at 595 nm using iMARK microplate reader (Bio-Rad, Philadelphia, PA, USA). Data on relative cell viability were expressed in terms of percentage of the untreated cells.
Wound Healing Assay
The wound healing assay [6,7] was used to test SH-SY5Y cell migration. Cells were plated in 6-well plate at 5 × 10 5 cells/well density in complete medium. Once cells reached confluence, a longitudinal scratch was performed through the cell monolayer using a 200 µL sterile plastic tip. Plates were then washed three times with PBS to remove non-adherent cells. Fresh starvation medium containing POE, NP-POE, PM-POE, or empty nanoformulations at appropriate dilutions was added. Cell-free area was observed under phase contrast microscopy and images were captured at 0, 5, 7, and 24 h after wounding using a Nikon TS-100 microscope equipped with a digital acquisition system (Nikon Digital Sight DS Fi-1, Nikon, Minato-ku, Tokyo, Japan). Marked edges along each wound were used to measure cell migration considering the horizontal distance between the initial scratch and the scratch following migration.
Statistical Analysis and Graphics Preparation
The experiments were repeated three times and results were expressed as a mean ± standard deviation, after centralizing mean as a normalization strategy between replicated experiments. The statistical analysis of cell assay was performed with Tukey's test. Furthermore, the graphs were drawn using LibreOffice Calc. Panels were assembled with LibreOffice Impress and adapted with Gimp 2.8.
NP and NP-POE Preparation
POE has been characterized for the total content of polyphenols and carbohydrates and for its antioxidant and radical scavenging activities as described in previous works [6][7][8] and reported in Table S1 of Supplementary material.
The presence of polyphenols was also confirmed by the HPLC analysis of extract [6]. The percentage of polyphenols was 10% of dried extract of P. oceanica, the main peak at 11.83 min is catechin, other compounds identified by UV-vis spectrum and by comparison with the standards are reported in the Figure 1, and they are in agreement with previous findings [6]. For the preparation of the empty NP various concentrations and ratio of CS and TPP solutions have been considered. The best system in terms of size PdI and ζ-potential (Table 1) was obtained by reacting 4 mL of the CS solution (2 mg/mL) with 2 mL of the TPP solution (2 mg/mL). Our results are in agreement with previous studies performed with similar conditions [28,29,38]. The incorporation of the extract into the NP was carried out by adding 4 mL of POE hydroalcoholic solution (5 mg/mL) to the CS solution, before adding the TPP solution. In the presence of the extract the amount of the TPP solution was decreased to 1.5 mL respect to the preparation of empty NP, to optimize size and PdI of the sample.
Thus, the final concentrations of POE, CS, and TPP were 2.11 mg/mL, 0.84 mg/mL, and 0.32 mg/mL, respectively. The NP are homogeneous, with a positive ζ-potential, due to the presence of chitosan. The loading of the extract inside the NP induces an increase in the size, but the system still remains useful for pharmaceutical administration. As reported in the literature, after drug loading into chitosan nanoparticles, the particle size becomes bigger [39,40]. A possible reason for this phenomenon is that the loaded drug (or extract in this case) reduces the cohesive force between chitosan and tripolyphosphate [41]. TEM analysis shows spherical and not aggregated particles (Figure 2A). For the preparation of the empty NP various concentrations and ratio of CS and TPP solutions have been considered. The best system in terms of size PdI and ζ-potential (Table 1) was obtained by reacting 4 mL of the CS solution (2 mg/mL) with 2 mL of the TPP solution (2 mg/mL). Our results are in agreement with previous studies performed with similar conditions [28,29,38]. The incorporation of the extract into the NP was carried out by adding 4 mL of POE hydroalcoholic solution (5 mg/mL) to the CS solution, before adding the TPP solution. In the presence of the extract the amount of the TPP solution was decreased to 1.5 mL respect to the preparation of empty NP, to optimize size and PdI of the sample.
Thus, the final concentrations of POE, CS, and TPP were 2.11 mg/mL, 0.84 mg/mL, and 0.32 mg/mL, respectively. The NP are homogeneous, with a positive ζ-potential, due to the presence of chitosan. The loading of the extract inside the NP induces an increase in the size, but the system still remains useful for pharmaceutical administration. As reported in the literature, after drug loading into chitosan nanoparticles, the particle size becomes bigger [39,40]. A possible reason for this phenomenon is that the loaded drug (or extract in this case) reduces the cohesive force between chitosan and tripolyphosphate [41]. TEM analysis shows spherical and not aggregated particles (Figure 2A). In the case of NP-POE, the encapsulation efficiency was calculated by the indirect method [33], as reported in the experimental section, because the direct method employs drastic conditions [38] which can alter the stability of the extract. The EE% value is 10.63% ± 0.71. This value is not high, but it corresponds to a final POE concentration of 2.11 mg/mL, with 0.22 mg/mL encapsulated into the NP. This represents a remarkable improvement of the aqueous solubility of the extract, that is completely insoluble. This aspect was also confirmed by the change of the color of the mixture, that from colorless becomes light yellow in the presence of nanoparticles. Similar values of EE% were reported in literature for analogous nanoparticles of chitosan carrying natural substances such as eugenol and carvacrol. In the case of eugenol, the EE value ranges from 2% to 29% increasing the initial eugenol content respect to chitosan from 1:0.25 to 1:1.25 w/w. In the case of carvacrol, the EE% ranges from 13% to 31% with increasing initial carvacrol content [28,38].
Our research represents one of the few studies in which an attempt was made to formulate an extract rather than a single compound. In the literature, there are few examples of nanoformulations of extracts using nanoparticles, such as gelatin NP of Centella asiatica and Cardamono extracts [42,43] and chitosan NP of Nigella sativa and cherry extracts [15,16]. The nanoparticles are not easy to prepare for extract delivery, due to the presence of various compounds with different polarity, but the application of nanotechnology to extract is of great interest in the phytotherapy field given the remarkable benefits that traditional medicine attributes to the synergistic action of the bioactive compounds present in phytocomplexes.
PM and PM-POE
Soluplus micelles were prepared using the "thin film hydration" technique [17,23,31]. Soluplus is a tri-block copolymer consisting of polyvinylcaprolactam-polyvinylacetate-polyethylene glycol. PEG is the hydrophilic portion and polyvinylcaprolactam-polyvinylacetate moiety are arranged in the hydrophobic core. PM have the possibility of incorporating functionality in both core and shell regions: the hydrophobic molecules of POE in the core, less hydrophobic molecules in the core, but near the hydrophilic moiety [18].
During the optimization process, the hydroalcoholic solution (EtOH/H2O 70:30 v/v) was selected as the best solvent mixture to solubilize both the extract and the polymer. Micelles of Soluplus (5% w/v), containing 2.11 mg/mL of extract, have the physical parameters showed in Table 2. In the case of PM-POE, the increase of sizes was not observed, probably for the high extract solubilisation in the PM core-shell structure. Indeed, Soluplus exhibits the capability of solubilizing both hydrophobic and hydrophilic drugs into the core and shell of the micelles. This ability might be attributed to the interactions between the drug and the polymer. For example, the phenolic groups might interact with the terminal −OH and ether oxygen groups in Soluplus and form hydrogen bonds [44]. In the case of NP-POE, the encapsulation efficiency was calculated by the indirect method [33], as reported in the experimental section, because the direct method employs drastic conditions [38] which can alter the stability of the extract. The EE% value is 10.63% ± 0.71. This value is not high, but it corresponds to a final POE concentration of 2.11 mg/mL, with 0.22 mg/mL encapsulated into the NP. This represents a remarkable improvement of the aqueous solubility of the extract, that is completely insoluble. This aspect was also confirmed by the change of the color of the mixture, that from colorless becomes light yellow in the presence of nanoparticles. Similar values of EE% were reported in literature for analogous nanoparticles of chitosan carrying natural substances such as eugenol and carvacrol. In the case of eugenol, the EE value ranges from 2% to 29% increasing the initial eugenol content respect to chitosan from 1:0.25 to 1:1.25 w/w. In the case of carvacrol, the EE% ranges from 13% to 31% with increasing initial carvacrol content [28,38].
Our research represents one of the few studies in which an attempt was made to formulate an extract rather than a single compound. In the literature, there are few examples of nanoformulations of extracts using nanoparticles, such as gelatin NP of Centella asiatica and Cardamono extracts [42,43] and chitosan NP of Nigella sativa and cherry extracts [15,16]. The nanoparticles are not easy to prepare for extract delivery, due to the presence of various compounds with different polarity, but the application of nanotechnology to extract is of great interest in the phytotherapy field given the remarkable benefits that traditional medicine attributes to the synergistic action of the bioactive compounds present in phytocomplexes.
PM and PM-POE
Soluplus micelles were prepared using the "thin film hydration" technique [17,23,31]. Soluplus is a tri-block copolymer consisting of polyvinylcaprolactam-polyvinylacetate-polyethylene glycol. PEG is the hydrophilic portion and polyvinylcaprolactam-polyvinylacetate moiety are arranged in the hydrophobic core. PM have the possibility of incorporating functionality in both core and shell regions: the hydrophobic molecules of POE in the core, less hydrophobic molecules in the core, but near the hydrophilic moiety [18].
During the optimization process, the hydroalcoholic solution (EtOH/H 2 O 70:30 v/v) was selected as the best solvent mixture to solubilize both the extract and the polymer. Micelles of Soluplus (5% w/v), containing 2.11 mg/mL of extract, have the physical parameters showed in Table 2. In the case of PM-POE, the increase of sizes was not observed, probably for the high extract solubilisation in the PM core-shell structure. Indeed, Soluplus exhibits the capability of solubilizing both hydrophobic and hydrophilic drugs into the core and shell of the micelles. This ability might be attributed to the interactions between the drug and the polymer. For example, the phenolic groups might interact with the terminal −OH and ether oxygen groups in Soluplus and form hydrogen bonds [44]. The morphological characterization of PM-POE is reported in Figure 2B. The micelles appear as spherical with the dimensions consistent with those detected by DLS. The dialysis purification method is usually employed for the determination of the encapsulation efficiency in the case of micelles [45,46] and it has been employed as direct process to determine the EE% of PM-POE. The EE% value is 85.55% ± 2.54, corresponding to 1.81 mg/mL of POE effectively encapsulated.
As evidenced for NP-POE, also PM-POE increased the solubility of the extract but with a higher EE% compared to NP-POE. Furthermore, while in an aqueous solution the extract remains completely undissolved, the micellar solution is able to solubilize about 2 mg/mL of POE and it becomes colored in yellow, proof of a change in solubility of the extract. As reported in literature, the nanomicelles have achieved good results in improving the solubility of extracts, such as the silymarin phytocomplex [21]. For the first time in this work Soluplus nanomicelles have been used to increase the solubility of a phytocomplex of marine origin. Tables 2 and 3 reported the physical stability of both POE nanoformulations over 3 months at 4 • C. All physical parameters, named size, PdI and ζ-potential, resulted unchanged for both NP-POE and PM-POE. Also the chemical stability of the formulated extract before and after the storage was substantially comparable, as confirmed by EE% values. The EE% ranges from 10.63 ± 0.71% to 8.83 ± 0.43% after 90 days, and from 85.55 ± 2.54% to 75.80 ± 2.55% after 90 days, are referred to NP-POE and PM-POE respectively.
Stability Study
The PM-POE stability study well correlate with its in vitro activity, that was maintained until 3 months, as proved by its inhibitory effect on SH-SY5Y cell migration.
In Vitro Release Studies
The release profile of POE solution and POE nanoformulations is shown in Figure 3. In particular, NP-POE released 30% of POE after 3 h, 45% after 4 h and 90% within 24 h of dialysis at 37 • C. The release of the extract by NP-POE is not rapid and immediate as POE solution; the latter in fact already after 3 h reached 60% of release and after 4 h exceeded 90%. NP-POE released the extract in a sustained fashion probably due to the diffusion of the adsorbed extract and its diffusion through the polymeric matrix, mechanisms which govern drug release from chitosan nanoparticles. Soluplus/P407 release a 6.8% of quercetin in the first 8 h and a 28.75% after 24 h [44]. Evidence of a prolonged lag time in POE release encourages the use of polymeric micelles to optimize POE release.
NP-POE and PM-POE Effect on SHSY5Y Cell Migration
POE and POE-loaded into nanoformulations (NP-POE and PM-POE) were tested on human neuroblastoma SH-SY5Y cell line, at the final concentration of 3 μg/mL according to data previously obtained on POE bioactivities [6][7][8].
The effect of POE, NP-POE, and PM-POE on SH-SY5Y cell viability was determined after 24 h treatment by MTT assay, also the effect of empty NP and PM nanocarriers was investigated. In particular, POE had no cytotoxic effects on cell viability as well as cells showed no signs of toxicity in the presence of NP-POE. As for PM-POE, cell viability value was about 80% due to the very low cytotoxicity ascribed to PM as just reported [21] ( Figure S1, in Supplementary Material). Therefore, considering that SH-SY5Y cell viability was maintained over 80% up to 24 h of POE, NP-POE, and PM-POE treatments, cell migration was evaluated by the wound healing assay and compared to their vehicles. After wounding cell monolayers, scratched area images were captured at different time points and the distance from the edges was measured. In particular, Figures 4 and 5 show that POE treatment determined a clear reduction of SH-SY5Y cell migration so that as early as 5 h from the initial scratch the wound width was about 70% ± 4%, while the untreated control cells migrated up to about 45 ± 6% of the wound width. The inhibitory effect of POE on cell migration was maintained over time determining a 57 ± 7% of wound width after 7 h of treatment and preventing complete closure at 24 h (22 ± 1% of wound width). Conversely, scratch closure rapidly progressed in untreated control cells after 7 h (29 ± 7%) until complete closure at 24 h.
These results are perfectly in agreement with previously results obtained on the ability of POE to inhibit human fibrosarcoma cancer cell migration [6,7]. Therefore, POE confirmed to be a good candidate for the design of novel therapeutic approaches in phytotherapy. Given the ability of POE to prevent the complete closure of the scratch, we investigated SH-SY5Y cell migration in the presence of NP-POE and PM-POE. The effect of empty NP and PM nanocarriers on cell migration was also monitored over time ( Figure 4). As reported in Figure 5A, empty NP and NP-POE showed no effect on SH-SY5Y cell migration leading to a complete closure of the wound at 24 h as the untreated control cells. Differently, PM-POE was able to enhance the inhibitory effect of POE on cell migration ( Figure 5B). As early as 5 h PM-POE showed the ability to impair SH-SY5Y cell migration (90 ± 5% of wound width) reducing the wound closure of about 20% compared to 5 h POE treatment.
Comparing the results with those of the in vitro release (Figure 3), the release percentage of POE starts after 5 h and becomes 40% at 8 h, whereas the PM-POE enhanced the inhibitory effect of POE on cell migration as early as after just 5 h. This apparent different behavior of PM in the two in vitro tests can be explained first with the different media and conditions of tests. The in vitro release of As for PM-POE, the release profile was slower and prolonged over time compared to both POE solution and NP-POE, as evidenced in Figure 3. The release of the extract from PM-POE is not immediate but rather delayed, it begins to increase after 5 h, reaching 40% after 8 h, 50% after 12 h, and 90% after 72 h. The obtained results are in agreement with the literature, which refers to a time-delayed release by PM formulations. In fact, it was recently observed that polymeric micelles of Soluplus/P407 release a 6.8% of quercetin in the first 8 h and a 28.75% after 24 h [44]. Evidence of a prolonged lag time in POE release encourages the use of polymeric micelles to optimize POE release.
NP-POE and PM-POE Effect on SHSY5Y Cell Migration
POE and POE-loaded into nanoformulations (NP-POE and PM-POE) were tested on human neuroblastoma SH-SY5Y cell line, at the final concentration of 3 µg/mL according to data previously obtained on POE bioactivities [6][7][8].
The effect of POE, NP-POE, and PM-POE on SH-SY5Y cell viability was determined after 24 h treatment by MTT assay, also the effect of empty NP and PM nanocarriers was investigated. In particular, POE had no cytotoxic effects on cell viability as well as cells showed no signs of toxicity in the presence of NP-POE. As for PM-POE, cell viability value was about 80% due to the very low cytotoxicity ascribed to PM as just reported [21] (Figure S1, in Supplementary Material). Therefore, considering that SH-SY5Y cell viability was maintained over 80% up to 24 h of POE, NP-POE, and PM-POE treatments, cell migration was evaluated by the wound healing assay and compared to their vehicles. After wounding cell monolayers, scratched area images were captured at different time points and the distance from the edges was measured. In particular, Figures 4 and 5 show that POE treatment determined a clear reduction of SH-SY5Y cell migration so that as early as 5 h from the initial scratch the wound width was about 70% ± 4%, while the untreated control cells migrated up to about 45 ± 6% of the wound width. The inhibitory effect of POE on cell migration was maintained over time determining a 57 ± 7% of wound width after 7 h of treatment and preventing complete closure at 24 h (22 ± 1% of wound width). Conversely, scratch closure rapidly progressed in untreated control cells after 7 h (29 ± 7%) until complete closure at 24 h.
The activity of PM-POE was maintained until 3 months confirming the results of the stability study previously reported. PM-POE are amphiphilic carriers able to increase the solubility of both lipophilic and hydrophilic constituents of P. oceanica extract, as demonstrated by the high EE% with respect to NP-POE. Moreover, the inhibitory activity of P. oceanica on cell migration could be ascribed to the prolonged release of POE from nanomicelles. Therefore, the development of nanoformulations, particularly nanomicelles, could be exploited to improve the traditional application of P.oceanica in others chronic diseases, such as diabetes [2] and inflammatory-related diseases [8].
Conclusions
In this work, we have developed two different POE nanoformulations. Both NP-POE and PM-POE were good candidates for increasing the solubility of P. oceanica hydroalcoholic extract, showing These results are perfectly in agreement with previously results obtained on the ability of POE to inhibit human fibrosarcoma cancer cell migration [6,7]. Therefore, POE confirmed to be a good candidate for the design of novel therapeutic approaches in phytotherapy. Given the ability of POE to prevent the complete closure of the scratch, we investigated SH-SY5Y cell migration in the presence of NP-POE and PM-POE. The effect of empty NP and PM nanocarriers on cell migration was also monitored over time ( Figure 4). As reported in Figure 5A, empty NP and NP-POE showed no effect on SH-SY5Y cell migration leading to a complete closure of the wound at 24 h as the untreated control cells. Differently, PM-POE was able to enhance the inhibitory effect of POE on cell migration ( Figure 5B). As early as 5 h PM-POE showed the ability to impair SH-SY5Y cell migration (90 ± 5% of wound width) reducing the wound closure of about 20% compared to 5 h POE treatment.
Comparing the results with those of the in vitro release (Figure 3), the release percentage of POE starts after 5 h and becomes 40% at 8 h, whereas the PM-POE enhanced the inhibitory effect of POE on cell migration as early as after just 5 h. This apparent different behavior of PM in the two in vitro tests can be explained first with the different media and conditions of tests. The in vitro release of POE from PM was done in PBS and using dialysis bag while the wound healing experiment was performed in starvation medium and in direct contact of PM-POE with cells. Furthermore, in the in vitro release assay, POE is released from PM also significantly at shorter times than 8 h. Between 0 h and 8 h there is no release ranging from 0% to 40%, but there are intermediate time points where the release of the extract has already begun. For this reason, it is explained why PM have improved the inhibitory effect of POE on cell migration already after 5 h.
The inhibitory effect of PM-POE on SH-SY5Y cell migration results were clear at 24 h of treatment as PM-POE prevented the complete closure of the wound (50 ± 8% of wound width). The empty PM prevented a total closure of the wound at 24 h maintaining the wound width of approximately 11 ± 4% compared to the initial scratch. This delay in the wound closure with respect to untreated control cells could be ascribed to the reduced cell viability observed after 24 h PM treatment. Considering these results, we obtained that PM-POE are able to improve the inhibitory effect of POE on cell migration. Despite the slight inhibitory effect of the empty PM, at 24 h PM-POE inhibited SH-SY5Y cell migration by 30% more than POE.
The activity of PM-POE was maintained until 3 months confirming the results of the stability study previously reported. PM-POE are amphiphilic carriers able to increase the solubility of both lipophilic and hydrophilic constituents of P. oceanica extract, as demonstrated by the high EE% with respect to NP-POE. Moreover, the inhibitory activity of P. oceanica on cell migration could be ascribed to the prolonged release of POE from nanomicelles. Therefore, the development of nanoformulations, particularly nanomicelles, could be exploited to improve the traditional application of P. oceanica in others chronic diseases, such as diabetes [2] and inflammatory-related diseases [8].
Conclusions
In this work, we have developed two different POE nanoformulations. Both NP-POE and PM-POE were good candidates for increasing the solubility of P. oceanica hydroalcoholic extract, showing good physical and chemical characteristics for parenteral administration and excellent physical and chemical stability during storage at 4 • C for three months. However, only the PM-POE nanoformulation was able to improve the POE inhibitory activity against neuroblastoma cell migration. To date, herbal medicine represents an interesting source for the realization of new drugs. Therefore, the development of adequate systems for the administration of natural compounds, such as nanoformulations, offers an advanced approach to improve the bioavailability and/or optimize the solubility and stability of individual natural compounds or extracts. In this work, for the first time, a phytocomplex of marine origin, i.e., P. oceanica extract, has shown an increase in terms of aqueous solubility and bioactivity once encapsulated inside nanomicelles. Therefore, we can assert that Soluplus polymeric nanomicelles are a suitable nanoformulation for the release and the improvement of bioactive properties of phytocomplexes.
Supplementary Materials:
The following are available online at http://www.mdpi.com/1999-4923/11/12/655/s1, Figure S1: SH-SY5Y cell viability. MTT test on cells untreated (control), treated with POE, PM-POE and NP-POE or with vehicles only (PM and NP) for 24 h, Table S1: Total polyphenols and carbohydrates content in POE and its antioxidant and radical scavenging activities. | 9,230.2 | 2019-12-01T00:00:00.000 | [
"Medicine",
"Chemistry"
] |
Study on Ductility of Ti Aluminide Using Artificial Neural Network
Improvement of ductility at room temperature has been a major concern on processing and application of Ti aluminides over the years. Modifications in alloy chemistry of binary alloy (Ti48 Al) and processing conditions were suggested through experimental studies with limited success. Using the reported data, the present paper aims to optimize the experimental conditions through computational modeling using artificial neural network (ANN). Ductility database were prepared, and three parameters, namely, alloy type, grain size, and heat treatment cycle were selected for modeling. Additionally, ductility data were generated from the literature for training and validation of models on the basis of linearity and considering the primary effect of these three parameters. Model was trained and tested for three different datasets drawn from the generated data. Possibility of improving ductility by more than 5% is observed for multicomponent alloy with grain size of 10–50 μm following a multistep heat treatment cycle.
Introduction
Ti aluminide has been an important aerospace material due to its high temperature properties and lower density as compared to superalloys.The ordered structure of aluminides that make them useful for high temperature applications makes them brittle at ambient temperature [1][2][3].Therefore, inspite of having good properties, the usefulness of these alloys has been limited to some specific applications only.Room temperature tensile ductility is maximum (∼1.5%) at around Ti-48Al (at%) aluminum, which is insufficient for further processing and applications.Hence, development of Ti aluminides has centered around Ti-48Al (at%) composition.It belongs to the γ (TiAl) plus α 2 (Ti 3 Al) region of the phase diagram [4][5][6].Various methods like alloy addition, controlled processing, heat treatment and so forth, are applied to get optimum combination of strength and ductility.Alloying additions in the range of 1 to 10 at% is studied with Cr, V, Mn, Nb, Ta, W, and Mo [6].The alloying additions of V, Mn, Ni, and Cr in the range of 2-4 at% have shown enhancement in ductility of the alloy.
Effect of microstructure on the mechanical properties was studied and duplex structure with fine grain size has been reported to be optimum for superior strength and ductility [4,[7][8][9].To obtain desired microstructures and mechanical properties, the effect of several heat treatment cycles on aluminides has been studied at different temperatures and with varying cooling rates [9][10][11][12][13][14][15][16] and marginal improvement in ductility was reported.In this way several studies have been conducted with limited success in improving ductility of the alloy.However, experimental studies are expensive due to the use of high purity alloying elements and processing under controlled atmosphere.Here, theoretical models are very useful for optimization of process parameters.Experimentation with such optimized parameters shall minimize the number of experimental attempts and could lead to achieve desired ductility.
During the last decade, there has been an increased interest in applying new emerging theoretical techniques such as fuzzy inference system (FIS) and artificial neural network (ANN) for optimization-related problems [17][18][19][20].These are the most common data driven models.These models
Premise parameters parameters
Consequent intend to describe the nonlinear relationship between input (antecedent) and output (consequence) to the real system.In the present paper ductility of Ti aluminide is studied through ANN modeling.
Though ANN modeling is a relatively new technique, there has been an increasing interest in applying this technique in recent times in different fields of material science [21][22][23][24][25].The basic advantage of employing ANN is that it does not require any external manifestation of parametric relationship.It learns from examples and recognizes patterns in a series of input and output values without any prior assumptions about their nature and interrelations.It consists of a number of interconnected computational elements called neurons.The neurons are arranged in three types of layers-input, hidden, and output.Information processing in a neural network occurs through interaction between these neurons.There is a wide range of ANN architectures [26], among which the three layer (input, hidden, and output) feed-forward architecture is used in the present work.It consists of layers of neurons, with each layer being fully connected to the preceding layer by interconnection weights (W).A neural network representation with two inputs x and y is shown in Figure 1.Each of these input variables is associated with 3 neurons as decided after trial and error.The computations were performed in 5 levels (L1 through L5, which consists of 1 output, 1 input, and 3 hidden layers).Network with three hidden layers (Figure 1) was attempted but the error increased as well as computation time was more, therefore, it was restricted for single hidden layer network.The directions marked in Figure 1 imply the flow of information.The contribution of each rule toward the model output is computed in the fourth level (L4).Overall output of the model is computed at fifth level (L5) by combining the signals received from the previous level.Subsequently models were validated with the literature database using three important indices, that is root mean square error (RMSE), regression coefficient (R 2 ) and the model efficiency criterion of Nash and Sutcliffe [27].Effect of alloying elements like Cr, V and Mn is same on ductility.
(2) Effect of alloying elements on various alloys Ti 44-52Al is same as on Ti48Al (3) Grain size means diameter of grain for equiaxed grains and interlamellar spacing for lamellar grain. ( Value of grain size for different composition follow the same trend as it follow for a specific composition and the trend is extrapolated. (5) Ductility data referred from literature is for mean grain diameter. ( Ductility data for alloy type and grain size is the maximum ductility for the alloy in desirable heat treatment condition.
Ductility Parameters and Data for Model
Ductility values were consolidated from the literature [5][6][7][8][9][10][11][28][29][30][31][32][33], and three important parameters, namly alloy chemistry, grain size, and heat treatment cycles were identified, which have major influence in ductility of the alloy.Alloy chemistry, grain size, and heat treatment cycles have important influence on ductility.For example, as content of aluminium increases in the alloy, ductility decreases.Binary alloy has limited ductility, and addition of ternary and quaternary alloying elements improves the ductility of the alloy.Similarly, finer grain size improves the ductility of the alloy through microstructural refinement avoiding segregation of impurities.Heat treatment cycle results in formation of desired phases, which has higher ductility.Collected data were categorized in 2 types of alloy (2 components and multicomponent alloy), 4 types of grain sizes, and 7 types of heat treatment cycles with certain assumptions (Table 1).Ductility values collected from the literature with various categories of parameters are presented in Table 2, which were further interpolated according to trends in literature values and following the linearity basis.In this way 56 ductility data combinations were generated, in the same order as given in [29,31] 1 .8 2 (3) 100 μm [32] 1 .4 3 (4) 250 μm [32] 1 . 1 4 (1) Above Tα (2 hrs) then FC to RT [32] 0 .5 1 (2) Just below Tα (20 min.)then AC to RT [11] 1 .5 2 (3) Below Tα (100 • C) soaking (2 hrs) then AC to RT [11] 3
Model Development
Three variables, namly composition, grain size, and heat treatment cycles were considered as antecedent and ductility as consequence during the modeling.
Formulation of ANN Model.
Depending on the data availability, total data has been subdivided into training and testing sets as mentioned in previous section.The number of hidden levels, the number of input and output nodes and the number of nodes in the hidden levels were decided by trial and error method.A schematic diagram of a typical jth node is displayed in Figure 2. The inputs to such a node come from system variables or outputs of the other nodes, depending on the levels that the node is located in.These inputs form an input vector X = (x 1 , . . ., x i , . . ., x n ).The sequence of weights leading to the node form a weight vector W j = (w 1 j , . . ., w i j , . . ., w n j ), where w i j represents the connection weight from the ith node in the preceding level to this node.The output of j node, that is, y j , is obtained by computing the value of function f with respect to the inner product of vector X and W i j as . . . . . .where b j is the threshold value, also called the bias, associated with this node.The function f is called an activation function.It can be determined with the response of a node with the total number of received input signals.
For selecting number of neurons, models with 3, 4, and 5 number of neurons were tried and RMS error were computed.Although it does not have much difference in RMS error for each type but least error was seen (Figure 3) for 3 neurons in all the cases.Also, with increasing number of neurons becomes more complex therefore three neurons were used in all the analysis.
Training (Learning) of ANN Model.
To generate an output vector as close to the target vector, training process is employed.A network trains by adjusting the weights that link its neurons so as to find optimal weight matrices and bias vectors that minimize a predetermined error function.
It is written as where t = component of the desired output; Y = corresponding ANN output; P = number of training parameters; q = number of output nodes.
A training data set is used to train the network or to determine the interconnection weights such that the response of the ANN clearly matches the observed behavior of the process being modeled.During training, typically mean square error (MSE) is monitored to find the optimal termination point for training.After training, the network is tested with the testing data set to determine how accurately the network can simulate the input-output relationship.If the performance of the ANN on the test data is satisfactory, the network is considered trained, weights are frozen and are used in actual application.
Different network processes have been tried with different sets of training and transfer functions to get optimum solutions.The network, which gives the optimum results with minimum error, could be frozen for testing or validation.The final frozen network contains "trainbr" as transfer function and "tansig" and "purelin" as training functions, as this set of combination with 3 neurons gives the optimum results.Accordingly, "tansig" and "purelin" training functions and "trainbr" transfer function have been fixed with trial and error for entire modeling.Number of experiments was carried out with all three data sets.The required ANN models (best ANN architecture) have been selected based on the training and performance checking of the models.Final architecture for all datasets is shown in Table 4.
Performance Indices.
All models developed with different sets of data were tested and computed ductility was compared with literature-based ductility by means of RMSE (root mean square error) and R 2 (regression coefficient) statistics.The models performances were also evaluated by using Nash-Sutcliffe criterion [27] of percent variance (VAREX) as given below: where N is number of observations; O t is observed value at time t (m 3 /s); O is mean of the observed values (m 3 /s); and P t is predicted ductility at time t (m 3 /s).The value of VAREX ranges from 0 (lowest performance) to 100 (highest performance).
Results and Discussion
Artificial intelligence serves as a smooth interface between the qualitative variables and the numerical domains of the inputs and outputs of the model.As seen from the literature database, ductility of Ti aluminide is a function of number of variables, which are qualitative in nature.However, it can be presented mathematically where certain variables have primary effect on ductility of intermetallics and the same has been considered in the present work.The rule-base nature of the model allow the use of qualitative metallurgical relations expressed as general effect of different variables, which makes the model transparent to interpretation and analysis.Among the emerging modeling techniques ANN model has been considered to be very useful.Models are developed with the data generated from literature database with certain realistic assumptions and using different sets of data for training and validation.This makes the model more reliable.
Results of ANN models are presented in Table 5 and Figures 3-6. Figure 3 compares the RMS error for each model.It clearly shows that error is least for dataset 3. Similarly, Figure 4 compares the observed and achieved (modeled) ductility.Predicted ductility values are very close to observed values.Among the comparison of ANN results, data set 3 results are found to be very close to observed values.It also indicates achievable ductility value up to 5.7% in specific combination of parameters, that is, for multicomponent alloy with grain size of 50 μm following heat treatment cycle number-7 (Table 2), that is, heating just below α transus temperature (Tα), soaking for 4 hrs → furnace cooling to Tα-50 • C and soaking for 4 hrs at this temperature → furnace cooling to room temperature → heating below eutectoid temperature and soaking for 24 hours → air cooling to room temperature.It is observed that models developed with data set 1 and 2 give results with high errors as compared to the models developed with data set 3. The minimum model error, that is, the difference between observed and modeled values gives the optimized parameters and it is found to be with data set 3. In Figures 5(a Scattering of data values is more with dataset 1 (Figure 5(a)) and least with data set 3 (Figure 5(c)).It also shows predicted ductility is close to observed reported values and it follows a definite trend.Therefore, use of different combinations of data sets for training and validation gives better prediction of property in ANN models.It means that model would be helpful in deriving optimum combination of parameters to obtain highest ductility in Ti aluminide intermetallics.
Analysis of the data has also been attempted through adaptive neuro fuzzy inference system (ANFIS) [19] using Takagi Sugeno model with substractive cluster approach.Through ANIFIS models, prediction accuracy is found to be depending on number of input variables with extent of data mixing [19].Therefore, it can be inferred that selection of input data decides the output accuracy in ANFIS as well as in ANN models.Prediction from ANFIS approach [19] and present ANN approach is found to be supporting each other.It is also true that, if scattering of data is more, ANN models are more suitable, since it generates output by adjusting the interconnections between the layers.In addition to that, ANN models have less dependency on modeling parameters as noted in ANFIS models [19], so prediction accuracy shall be certainly better with ANN models.
Conclusions
(1) Models developed through ANN are giving results close to the observed reported ductility value especially with a wide range of data.
(2) It gives very good result with data set 3 when data are randomly selected from entire data, which covers wide range of data.
(3) Error (difference between observed and modeled values) is maximum for data set 2 as 2.2, while it is least as 0.36 for data set 3.
(5) Predicted ductility values are very close to observed values.Here achievable ductility value is obtained as 5.7% in specific combination of parameters.ANN predicts one specific combination.It is multicomponent alloy with grain size of 50 μm following heat treatment cycle number-7, that is, heating just below α transus temperature (Tα) → soaking for 4 hrs → furnace cooling to Tα-50 • C and soaking for 4 hrs at this temperature → furnace cooling to room temperature → heating below eutectoid temperature and soaking for 24 hours → air cooling to room temperature.
3 Figure 3 :
Figure 3: RMS error for various numbers of neurons and for different models using datasets 1-3.
Table 1 :
Assumptions for categorizing literature data.
Table 2 ,
for each alloy type and for each grain size in different heat treatment conditions.These values are termed as observed values.Total number of observed values is 56, in which dataset 1 is selected just by picking first 40 values for training and last 16 values for testing.In data set 2, first 10 values from 21 to 30 and from 41 to 50 values have been taken for training and the rest for testing.Data set 3 has picked randomly 43 values from each category for training data and similarly 30 values for testing data.Few data have been considered in both training and testing sets.Data selection for different data set has been given in Table3.In the present paper, data extracted from the literature are written as observed data.
Table 2 :
Parameters, its detail characteristics, and ductility values.
Table 3 :
Length of data used in different sets.
Table 4 :
Final architecture for all datasets.
Table 5 .
It is clearly seen that RMSE is least for data set 3 and regression coefficient as well as VAREX
Table 5 :
Performance Indices for different datasets using ANN technique.
is highest at 96.91% and 96.8%, respectively.It indicates that all the three performance indicators are close to ideal indices. | 3,959.2 | 2011-10-24T00:00:00.000 | [
"Materials Science",
"Engineering",
"Computer Science"
] |
Free field world-sheet correlators for ${\rm AdS}_3$
We employ the free field realisation of the $\mathfrak{psu}(1,1|2)_1$ world-sheet theory to constrain the correlators of string theory on ${\rm AdS}_3\times {\rm S}^3\times \mathbb{T}^4$ with unit NS-NS flux. In particular, we directly obtain the unusual delta function localisation of these correlators onto branched covers of the boundary ${\rm S}^2$ by the (genus zero) world-sheet -- this is the key property which makes the equivalence to the dual symmetric orbifold manifest. In our approach, this feature follows from a remarkable `incidence relation' obeyed by the correlators, which is reminiscent of a twistorial string description. We also illustrate our results with explicit computations in various special cases.
becomes the orbifold CFT correlator on the right hand side by virtue of the fact that it localises to the points in the moduli space M g,n where a holomorphic covering map from the genus g world-sheet to the boundary sphere exists. This then reproduces manifestly, i.e. without any explicit computation, the symmetric orbifold correlator since the latter is determined in terms of these covering maps [4,5].
The strategy adopted in [3] for genus zero (generalised to higher genus in [6]), to show this striking localisation, was to use the Ward identities for the sl(2, R) k currents on the world-sheet. This required understanding how to deduce constraints for correlators of wspectrally flowed sl(2, R) k representations, which was the main technical advance of [3]. A special solution to these constraints had the property that it was determined in terms of a holomorphic covering map of the boundary S 2 by the world-sheet, x = Γ(z), where Γ maps the world-sheet insertion points z i to the spacetime insertion points x i , x i = Γ(z i ), and z i is a branch-point of ramification index w i .
Covering maps characterised by these branching data {z i , w i } are essentially fixed if three of the image points x i are prescribed, i.e. the remaining (n − 3) points x i are determined in terms of the branched cover (up to finitely many choices). Conversely, in terms of the z i , this means that for fixed {x i , w i }, the covering map only exists for isolated combinations of the z i . In fact, the special solution of [3] had delta function support in z i on the co-dimension (n − 3) locus on the moduli space M 0,n where this is the case. This is precisely what is needed to reproduce the orbifold CFT correlators on the right hand side of (1.1) as per the Lunin-Mathur construction [4,5]. In a second strand of argument, it was shown in [3] that there exists a set of semi-classically exact solutions of the AdS 3 sigma model whose contributions reproduce the correct weight factors for each of these branched covers. Taken together, these two arguments gave strong evidence that the correlators on both sides of (1.1) are manifestly equal.
In this paper, we tighten the first argument from [3]: while [3] showed that a solution to the Ward identities with these properties exists, it was not clear that this is the only solution, although there was good circumstantial evidence that it is the physically relevant one. 1 Furthermore, the analysis of [3] was essentially done in the NS-R framework, using only the bosonic sl(2, R) k+2 symmetry as in [8][9][10]. While this is legitimate, it is a bit unsatisfactory since for k = 1 the NS-R approach breaks down, and we should really be using the hybrid formulation as advocated in [1]. Here we will start directly with the hybrid world-sheet formalism for the AdS 3 string theory, which is described by a psu(1, 1|2) k WZW model in the case of pure NS-NS flux [11]. When the level k = 1, there exists a free field realisation of this WZW model in terms of a pair of spin 1 2 (symplectic) bosons (ξ ± ), and a pair of spin 1 2 fermions (ψ ± ), together with their canonically conjugate fields. This is the same free field realisation which was used to deduce the string spectrum in [1].
After describing the physical correlators in this language, taking into account subtleties of picture changing and charge conservation, we examine the constraints coming from the OPE of the symplectic bosons with the spectrally flowed vertex operators. It turns out that these can be nicely encapsulated in an incidence relation ξ − (z) + Γ(z) ξ + (z) phys = 0 . (1. 2) The brackets here denote the expectation value of the vertex operators V w i (x i ; z i ), see eq. (4.10), and Γ(z) is the covering map mentioned above with branching behaviour w i at the z i and x i = Γ(z i ) for i = 1, . . . , n. We show that this 'incidence relation' implies that the correlators are delta function localised to the locus where a covering map exists; it also fixes some of the additional dependence of the correlators, see eq. (4.31). The argument for this incidence relation is quite abstract (and general), but we also confirm it (and its consequences) explicitly in many examples. Since x(z) = Γ(z) when the covering map exists, the relation (1.2) is very suggestive of an underlying twistor string like description for our system. In this language the ξ ± play the role of twistor-like projective coordinates for the boundary P 1 . As we discuss in Section 4.5, the action describing the free field variables is, in fact, a lower dimensional analogue of the twistor string action of Berkovits [12]. We believe this twistor like description holds the key to many of the underlying topological string features of the k = 1 theory [13]. Note that this topological behaviour is quite specific to the case with k = 1, and appears to be different than the usual topological sector of string theory on AdS 3 that was recently discussed in [14,15]. Incidentally, the fact that the AdS 3 world-sheet theory for k = 1 is special was first noticed in a somewhat different context in [16], see also [17] for subsequent developments.
We should mention that our analysis only concerns the chiral (and anti-chiral) correlators. In order to fully determine the correlators of the world-sheet theory (and obtain their OPE coefficients, etc.), we also have to solve the conformal bootstrap equations for psu(1, 1|2) 1 , which has, to our knowledge, not yet been done. 2 Given the results of this paper, together with the expected form of the general answer (see eq. (4.34)), this bootstrap programme should now be within reach. We should also mention that in order to determine the full string theory amplitudes, one also needs to include correctly the ghost contributions, the fermions, as well as the degrees of freedom coming from the T 4 . We have not yet attempted to do so in detail, although this should not be too difficult.
The paper is organised as follows. We begin in Section 2 with a review of the underlying WZW model and its free field realisation. In Section 3 we explain some of the subtleties that arise for the correlators in the hybrid formalism; in particular, we explain how picture changing needs to be taken into account. Section 4 is the core of the paper: we demonstrate that the correlators satisfy the incidence relation, see eq. (4.10), and deduce various consequences from it, in particular, the fusion rules (Section 4.3), as well as the delta-function localisation property, see eq. (4.23). We also comment on the relation to the twistor string theory in Section 4.5. We exemplify these findings with explicit calculations in Section 5. In particular, we show how the symplectic boson Ward identities can be used to calculate these correlators directly, see Section 5.1. Section 6 contains our conclusions and outlook for future work. There are a number of Appendices where some of the more technical material has been collected. In particular, we give some details about the spectrally flowed representations in Appendix A, and explain how to construct the covering map in Appendix B. We also use the incidence relation in Appendix C to solve for the correlators, and demonstrate that some of the other constraint equations (that are not directly needed for the determination of the correlators) are also satisfied, see Appendix D.
The world-sheet WZW model
In this section we describe the free field realisation of our world-sheet theory in detail and fix our conventions. The key component we shall be focussing on is the psu(1, 1|2) WZW model at level k = 1. Following [1], see also [24][25][26][27], it has a free field realisation in terms of two pairs of complex fermions and symplectic bosons, with (anti)-commutation relations 3 2 For the mini-superspace limit of AdS3 this was discussed in [18], using results for H + 3 from [19,20], but this analysis does not include the spectrally flowed sectors. Most treatments of the spectrally flowed sectors on the other hand, see for example [21][22][23], seem to set the spacetime variable x uniformly to x = 0 (or x = ∞), which is not appropriate in our context. 3 Relative to [1] we have renamedξ α → ξ α and ξ α → η α , and similarly for the fermions,ψ α → ψ α and Here α, β ∈ {±}, and we choose the convention that +− = 1 = − −+ . These free fields generate the algebra u(1, 1|2) 1 , where The generators J a m and K a m describe sl(2, R) 1 and su (2) where we raise and lower indices using the standard su(2)-invariant form and we have introduced the combinations Furthermore, the constant c a equals −1 for a = −, and +1 otherwise, and the σ-matrices are explicitly given by while all the other components vanish. Finally, U m and V m define two u(1) algebras, with commutation relations and hence Y m and Z m satisfy Except for the non-trivial commutator with Y n , Z m is central, and in order to obtain psu(1, 1|2) 1 from the above algebra we need to set Z n = 0. More precisely this means that we take the coset by the u(1) algebra generated by Z n , i.e. we concentrate on the subspace of states that are annihilated by Z n with n ≥ 0. We then consider the quotient space of this subspace where we divide out by the Z −n descendants with n > 0 (which are null).
The highest weight representations
At level k = 1, psu(1, 1|2) 1 has apart from the vacuum representation only one (highest weight) representation [1], and this can also be understood in the above free field realisation. The vacuum representation arises from the NS-sector where both the symplectic bosons and the free fermions are half-integer moded, 4 and it is generated from a ground state satisfying Note that this state has the property that also It is easy to see that it leads to the vacuum representation of psu(1, 1|2).
As regards the R-sector, let us first consider the symplectic boson generators. On the highest weight states, we let the symplectic boson zero modes act as where m 1 , m 2 label the different highest weight states. In this convention we then have (2.14) The sl(2, R) Casimir on the highest weight states labelled by |m 1 , m 2 equals Writing C sl(2,R) = −j(j − 1), this then leads to the equation As regards the fermionic generators, each highest weight state |m 1 , m 2 leads to a Clifford representation with respect to the fermionic zero modes χ ± 0 and ψ ± 0 . We shall choose the convention that |m 1 , m 2 is annihilated by the +-modes, Then the action of the creation operators χ − 0 and ψ − 0 leads to a 4-dimensional space of states; with respect to su(2) 1 , it decomposes into a doublet spanned by Thus the condition that Z 0 = U 0 + V 0 = 0 implies that the possible ground states have to sit in the representations of sl(2, R) ⊕ su(2) where j denotes the spin of the sl(2, R) representation with C sl(2,R) = −j(j − 1). Including the action of the supercharges, we therefore conclude that the only possible highest weight representation is Here C j λ denotes the sl(2, R) continuous series representation labelled by the spin j, as above, together with λ, the J 3 0 eigenvalue (modulo 1). Note that while j is fixed, the J 3 0 eigenvalue given in (2.13) is not determined by the above considerations. This is fixed in the full string theory by the mass shell, i.e. the Virasoro conditions. This reproduces the result of [1], see in particular eq. (4.2). The fact that at k = 1 we only have these supersymmetric short multiplets was crucial to matching the physical spectrum of the string theory with that of the symmetric product orbifold of T 4 . In particular, it implies the absence of the continuous or "long string" states which are a feature of the string theory (with pure NS-NS flux) for k > 1. In contrast, here we only have the state at the bottom of the continuum j = 1 2 + ip with p = 0, together with its superpartners. The free field representation of the k = 1 theory automatically captures these special properties.
Note that the supercharge generators involve one complex fermion and one symplectic boson and change therefore both U 0 and V 0 by ± 1 2 , while maintaining Z 0 = 0; they therefore map between the representation in the first line of (2.22), and the two representations in the second line and vice versa.
Spectral flow
For the actual world-sheet theory we do not just need these highest weight representations, but also the representations that are obtained from them by spectral flow. As was already explained in [1], there are two spectral flow actions one can define on these free fields, namely on the symplectic bosons, and on the free fermions. The combination then acts as the usual spectral flow automorphism on psu(1, 1|2) 1 , and its action on the psu(1, 1|2) generators is spelled out in eqs. (A.6a) -(A.6e). This spectral flow leaves the invariant. The other natural combination iŝ and it acts on the various fields as described in Appendix A.1, see eqs. (A.3a) -(A.3d).
Finally we need to fix our conventions for how to describe spectrally flowed representations. Suppose that τ is some spectral flow automorphism, i.e. some combination of σ (±) , then we define the τ -spectrally flowed representation to be spanned by the states [Φ] τ , where Φ is a state in a highest weight representation of the kind described in the previous subsection, and the action of any generator A n on [Φ] τ is defined by (2.28) Typically, the spectrally flowed representations are not highest weight representations with respect to psu(1, 1|2), see e.g. the discussion in Appendix A.2. However, this is not always the case. In particular, theσ 2 -spectrally flowed vacuum representation is again the vacuum representation with respect to psu(1, 1|2) since defines the vacuum state with respect to psu(1, 1|2). (This is explained in more detail in Appendix A.1.) However -and this will be important below -its eigenvalues with respect to U 0 and V 0 are now different since we have In the following we shall denote the corresponding vertex operator by
Vertex operators in the x-basis
As in [3] we are interested in calculating the correlation functions of the world-sheet vertex operators corresponding to the twisted sector ground states. The vertex operators depend as usual on z, the coordinate on the world-sheet, but it is natural to introduce also a coordinate x for the dual CFT (on the boundary of AdS 3 ) and to consider the vertex operators where w denotes the spectral flow with respect to σ w . Here we have used that the translation operator on the world-sheet is L −1 , while that in the dual CFT is J + 0 ; these two operators commute with one another, and therefore the definition of these vertex operators is unambiguous. Furthermore, we define the action of V (Φ; 0, 0) on the vaccum of the world-sheet theory to be the state Φ, (2.33) Because of locality, this then fixes the definition of the vertex operator completely, see e.g. [28,29]. It follows from (2.28) that the OPE of the free fields η ± (z) and ξ ± (z) near a σ wspectrally flowed highest weight state 5 is of the form where r ∈ Z + 1 2 if w is odd, and r ∈ Z for w even (so that the modes that act on the states before spectral flow are always integer moded). In particular, if Φ is a highest weight state of the form |m 1 , m 2 , then η + and ξ + have poles of order w+1 2 , where we have used (2.12). On the other hand, η − and ξ − have zeros of order w−1 2 , (2.39) If the corresponding vertex operator is evaluated at x = 0, there are in addition correction terms that come from where we have used the notation V w m 1 ,m 2 (x; z) ≡ V w (|m 1 , m 2 ; x, z), and S(ζ) is either η α (ζ) or ξ α (ζ). The shifted fields are and we find explicitly In particular, this therefore fixes the OPEs of the symplectic boson free fields with the vertex operators V w m 1 ,m 2 (x; z). It is then possible to perform a similar analysis as in [3], except that we now derive constraint and recursion relations for the correlators involving the symplectic boson fields, rather than those involving the sl(2, R) currents; this will be done explicitly in Section 5. As we will see, the corresponding constraints are stronger than those found in [3] -this was to be expected since the sl(2, R) currents are bilinears in the symplectic bosons -and in particular, will allow us to tie down the correlation functions essentially uniquely.
The OPEs with W (z)
As will become clear below, we will also need to know the OPEs of the symplectic boson fields with the vacuum field W (z), see eq. (2.31). It follows from the definition of |0 (1) (and in particular the definition of the spectral flowσ) that the OPEs are of the form
Correlators in the hybrid formalism
While everything we have said up to now is quite parallel to the sl(2, R) analysis of [3], there is actually one important structural difference that will play a key role in the following. The analysis of [3] was effectively done in the NS-R formalism, and as a consequence the level of the bosonic (decoupled) sl(2, R) algebra was k + 2 = 3. The level actually played a significant role in the analysis of the correlation functions since the simple solutions that are related to the covering map only exist provided that the identity, see eq. (1.2) of [3] and eq. (1.1) of [6] n i=1 is satisfied, which is the case for k + 2 = 3 and all j i = 1 2 . (Here j i is the sl(2, R) spin of the highest weight state before spectral flow, and we are considering an n-point function on a genus g world-sheet.) In our case, the level of the bosonic sl(2, R) subalgebra of psu(1, 1|2) is actually k = 1, i.e. in (3.1) we need to set k + 2 = 1, and thus this relation is not satisfied for all j i = 1 2 . The resolution of this problem has to do with the fact that we are dealing here with a supersymmetric world-sheet theory, and we need to take picture changing into account. In this technical section we describe how everything works out as expected after taking this on board, as well as the U 0 charge conservation. In the rest of the paper, we will only be using the final result in (3.19) for the physical correlators; the reader may therefore skip the rest of this section on a first reading.
Picture changing
In the context of the hybrid string, the world-sheet amplitudes (for g ≥ 1), which we need to calculate are, see eq. (2.7) of [11] Mg,n where we have only written out the left-moving component of the correlator, and we have already anticipated that not all G − fields and G − −1 Φ i descendants will in fact involve G − −1 -as we shall see momentarily, some of them will in fact involveG − −1 , see also the comment below eq. (2.9) and above eq. (3.5) of [11].
The different supercurrents can be written out explicitly in terms of the bosonised ghost variables (ρ, σ) as well as the N = 2 u(1) current J = ∂H, see the final equation of [11] Here we have used that for the free field realisation at level k = 1, the (p) 4 (For the physical states we are interested in, the T and theG − C operators do not play a role.) On the other hand, the physical states are of the form [30], where for example for the twisted sector ground states Φ = [Φ w ] σ w as in Appendix A.2. This is now sufficient to determine how manyĜ − in (3.2) are G − , and how many areG − . We claim that (for g ≥ 1) of the n + 3g − 3Ĝ − fields that appear altogether we take 6 This prescription is simply a consequence of overall charge conservation. Indeed, the total exponential of the ghost fields equals where the last term in the first line comes from the (g −1)G + fields. The last line describes exactly the required background charge for genus g.
This prescription now reproduces correctly (3.1): if we applyG − −1 instead of G − −1 to the physical vertex operator, we pick up the term (3.10) (The prefactor coming from guarantees that we pick up the (−1)-mode of Q.) From our analysis in Appendix A.2 it is easy to see that the −1 mode is simply for w odd, see in particular eq. (A.11), and similarly for w even, i.e. and see eqs. (A.15) and (A. 16). What is important is that in each case we have the product ξ + 0 ξ − 0 which, according to eq. (2.12), maps As a consequence, see eq. (2.13), the J 3 0 eigenvalue of this state is unchanged, but the spin is shifted by On the other hand, if we replace one of theĜ − in the Beltrami differential byG − , we need to integrate the vertex operator associated to the state Q = Q −3 |0 . We can then express the ξ ± modes of Q in terms of a contour integral, and use the techniques of Section 5 to rewrite the ξ ± fields in terms of ξ ± 0 modes acting on the physical vertex operators. This therefore has the same effect as above, i.e. eachG − field reduces the (total) spin by 1. As a consequence, the sum over the spins j i on the left-hand-side of (3.1) becomes (initially each spin was i.e. it satisfies the constraint (3.1) with the bosonic level equal to k + 2 = 1.
U 0 charge constraint
While the previous consideration takes into account the constraints from picture changing, it introduces another problem: in our free field realisation, all vertex operators initially have U 0 = 0, and this is also not affected by spectral flow. However, once we have applied Q, we do not only shift j i → j i − 1, but also U 0 → U 0 − 1, see eq. (2.14). As a consequence the resulting correlators vanish on the nose since they do not respect overall U 0 charge neutrality. (This is also something we observed in the explicit analysis of Section 5.) This problem is a consequence of our specific free field realisation, and it already implicitly reared its head in the analysis of Appendix C (and in particular eq. (C.24)) of [1]. Indeed, the full spectrum of our psu(1, 1|2) model appears not just once in the free field realisation, but there is a copy of it for each eigenvalue of U 0 . In calculating correlation functions we therefore need to choose the states such that the overall U 0 charge vanishes, and this can always be achieved. In our context, the simplest way of arranging for this is to add (n − 2 + 2g) vacuum fields with U 0 = +1. The relevant field was already constructed above, see eq. (2.29), and was denoted by W (z) ≡ V (|0 (1) , z). 7 Thus the correlation functions we shall be studying in the following are of the form In this paper we shall concentrate on the correlators on the sphere, g = 0; then we may take L = 0, and hence consider the correlation functions
A covering map identity and its consequences
In this section we begin with analysing the correlation functions of the form (3.19) for the case of the sphere (g = 0). One way to constrain these correlators is to repeat the analysis of [3], where instead of determining constraint and recursion relations for the correlators involving the sl(2, R) currents, we now replace these currents by the symplectic bosons. While this is a possible avenue -and we shall spell it out in some detail in Section 5 -there is actually a much more elegant method that gives rise to essentially the same constraints. Apart from being a computationally powerful idea, it also sheds light on the conceptual underpinning of our analysis; in particular, it suggests that we should think of our free field realisation as giving rise to a twistor-like string theory.
The covering map
Before we begin with the detailed analysis of our central relation, let us remind the reader about some properties of covering maps. We will restrict ourselves here to covering maps of Riemann surfaces with g = 0.
Let Γ : S 2 → S 2 be a holomorphic function from the Riemann sphere to itself. Given a collection of points {z i } and {x i } on S 2 and a set of positive integers w i for i = 1, . . . , n, we say that Γ is a branched covering map with branching indices for all i, with no other critical points, i.e. the only zeroes of ∂Γ(z) are at z = z i . A fundamental result in the theory of Riemann surfaces is the Riemann-Hurwitz formula, which, for genus g = 0 surfaces, states that the order of a branched covering map (i.e., the number of preimages of a generic point) can be determined purely by the branching indices, and is given explicitly as In what follows we will be mostly concerned with the conditions for the existence and uniqueness of a covering map. A necessary condition for existence is that the w i satisfy the selection rules where the first statement holds for all j, and the second statement is merely the requirement that the order (4.2) of Γ is an integer. These conditions are enough to guarantee existence for n = 3, but for n ≥ 4 a covering map generically does not exist. To illustrate this, note that Γ, as a holomorphic function from the sphere to itself, can be expressed as a ratio of polynomials of order N (assuming that ∞ is not a branching point). That is, we can define polynomials p ± (z) such that The requirement, then, that Γ(z) has a critical point of order w i at z = z i , whose image is x i , can be written as This can be thought of as a linear homogeneous system in the 2N + 2 coefficients of the polynomials p ± (z). There are i w i = 2N + n − 2 such equations, and thus a solution will only exist on some n − 3 dimensional subspace of the configuration space labeled by {z i }, {x i }. (Note that the overall scale of the polynomials p ± (z) drops out for the determination of Γ(z), and thus there are only 2N + 1 meaningful coefficients.) It will be useful, for what follows in the next subsection, to view these conditions in a slightly different way. First, note that (4.5) as well as its derivative with respect to z can be used to eliminate the x i . Next we observe that the 'Wronskian' behaves in the vicinity of the critical point z i as, see also [4,5,32] where the prime denotes the derivative. (This identity can also be understood as arising from the numerator of the rational function ∂Γ(z).) Since the left hand side is, in fact, a polynomial of degree 2N − 2, 8 we conclude that Here we have used that the right hand side of (4.7) has the correct vanishing behaviour at each of the critical points, and that it is also of degree 2N − 2 because of (4.2); it must therefore agree with the left hand side up to an overall constant which we have denoted by C. The important point to note is that all the x i have dropped out of (4.7). A short argument (see Appendix B) then shows that (4.7) determines the polynomials p ± (z) in terms of any two of its coefficients, as well as one overall scale factor for each polynomial, see eq. (B.4). The overall scale common to both p ± (z) drops out as discussed above. So we are left with one relative scale factor and two ratios of coefficients as unknowns. These three can be fixed by going back to (4.5) for three of the branch points, i.e. we may use for, say, i = 1, 2, 3. This fixes the p ± (z) completely, up to an overall common factor (and up to discrete choices).
In terms of the branched cover Γ(z) defined in (4.4), we can now rewrite the remaining equations of (4.5) (i.e. for i = 4 . . . n) as In particular, if x i = Γ(z i ), then the covering map does not exist as specified (i.e. with p + (z i ) = 0). Thus there are n − 3 conditions (x i = Γ(z i ) for i = 4, . . . , n) that need to be satisfied in order for the covering map to exist.
The incidence relation and delta function localisation
With these preparations at hand we can now turn to the main claim of this paper, which constrains the structure of the correlation functions in the symplectic boson theory almost completely. Let {z i } and {x i } be some marked points on the world-sheet and boundary S 2 respectively, and let {w i } be a set of branching indices. Assuming that a covering map Γ(z) for this configuration exists, we claim that Furthermore, we claim that if such a covering map does not exist, then the correlation functions vanish identically, In fact, as we will see below, the correlators (4.11) are delta function localised on the locus of the covering map, see eq. (4.23).
In order to prove (4.10), we begin by defining the functions The prefactors have been chosen so as to ensure that these functions are globally welldefined, i.e. do not have any branch cuts. (Since the states |m 1 , m 2 define a Ramond-sector representation with respect to the symplectic bosons, the symplectic bosons pick up a sign in going around the corresponding states for even w.) Furthermore, because of (2.37) and (2.39), as well as (2.44), the functions P ± (z) have no poles at z = z i or z = u α (and thus no poles for any finite z). Finally, because of the prefactor and the 1 z fall-off behavior of ξ ± (z), the functions P ± (z) have the asymptotic form It would thus seem that the P ± (z) are polynomials in z of order N , but there is one subtlety that we need to discuss: the above argument only describes the dependence on z, but the coefficients z k of P ± (z) could be arbitrary distributions of the other variables z i and x i . (In fact, this is not just an academic possibility, but as we will see, they actually turn out to be such distributions.) However, the distributional nature of P ± (z) is of a special kind: the coefficients of both polynomials P ± (z) are all proportional (with proportionality factors that are all regular functions of z i and x i ) to a single coefficient, say that of z 0 in P + (z), which by itself can be (and will be) a distribution D(z i , x i ) of z i and x i . 9 Thus we can write wherep ± (z) are now polynomials in z of degree N whose coefficients are ordinary functions of (z i , x i ).
In order to show (4.14) we note that the OPEs of ξ ± (z) with the fields V w i see eqs. (2.37), (2.39), and (2.42), imply that the P ± (z) satisfy the constraints Note that these are essentially the same equations that characterise the covering map, see eq. (4.5). In particular, they define 2N + n − 2 homogeneous linear equations (with coefficients that are regular functions of the (z i , x i )) for the 2N + 2 coefficients of the polynomials P ± (z). Given the structure of the covering maps we know that we can choose a (subset) of 2N + 1 equations such that the resulting system becomes non-degenerate; thus, all coefficients of P ± (z) are proportional (with proportionality constants that are regular functions of (z i , x i )) to one coefficient. This establishes eq. (4.14).
Next writing P ± (z) in terms of eq. (4.14) we note that the polynomialsp ± (z) obey On the support of the distribution D(z i , x i ) it then follows from the arguments below (4.7) and in Appendix B, that we can determinep ± (z) up to two unknowns and the two overall scale factors. By imposing the conditionŝ for three of the i, say i = 1, 2, 3, all the unknowns except for the overall common scale factor ofp ± (z) can be determined. This overall factor, which is not fixed by (4.17), can be absorbed into the distribution D(x i , z i ). Using again eq. (4.14) we therefore find where Γ(z) is the 'branched cover' defined in (4.4), i.e. it is the function that has branching index w i at z i and satisfies Γ(z i ) = x i for i = 1, 2, 3, and we have used that p ± (z) =cp ± (z) for some constantc. If Γ(z) is in fact the actual branched cover, i.e. if also Γ(z i ) = x i for i = 4, . . . , n, then eq. (4.18) is equivalent to the incidence relation of eq. (4.10), where we have used the definitions in (4.12). This proves our first claim. It remains to show that the correlators (4.11) vanish if the covering map does not exist. We will actually show the stronger statement that the correlators are delta function localised on the locus of the covering map. To see this we evaluate (4.18) at z = z i and use (4.14) and (4.16) to conclude that By construction of thep ± (z), this identity is identically true for i = 1, 2, 3, but for i ≥ 4 it leads to the condition where we have used thatp + (z i ) = 0. Thus it follows that 10 where the sum runs over the different 'covering maps', see the discussion below eq. (4.18), and theĈ Γ are ordinary functions of their arguments. Both P ± (z) are proportional to this distribution, because of eq. (4.14). On the other hand, P + (z i ) is essentially a generic correlator of the kind we are interested in since Since the values of the labels m 1 , m 2 are arbitrary, 11 it follows that the correlator itself is delta-function localised, i.e. that where C Γ contains the dependence on the z i and the u α (though the dependence on the latter is rather trivial, see eq. (5.1) below). We also recall that Γ(z) here is the function that has branching index w i at z i and satisfies Γ(z i ) = x i for i = 1, 2, 3.
Eq. (4.23) therefore provides an explicit and manifest realisation of the localisation idea conjectured in [5], for which strong evidence was given in [3]. We have also worked this out explicitly in some simple examples, see Appendix C for more details. Finally, for four-point functions one can give a fairly direct general argument for the structure (4.23); this is explained in Appendix C.2.
Fusion rules
The identity and the proof we provided above have some immediate implications for the structure of the correlation functions. First, let n = 3. Then (4.11) tells us that the three-point function is nonzero only if, see eq. (4.3) where the first statement holds for any distinct choices of i, j, k. Since this is a necessary condition for the three-point function to be non-vanishing, we conclude that (4.25) are (part of the) fusion rules of the theory. In fact, this reproduces precisely the expected fusion rules of the dual symmetric orbifold theory [1].
Constraining the correlation functions
The relation (4.10) can also be used to constrain the m i j -dependence of the correlation functions. Assuming a covering map exists, let a Γ i be the constants (that still depend on the z i and x i ) such that (4.27) Since this expression must vanish identically by (4.10), we have the recursion relation (4.28) Recalling that h i = m i 1 +m i 2 + w i 2 is the J 3 0 eigenvalue (and therefore the spacetime conformal dimension of the corresponding operator in the spacetime CFT) of |m i 1 , m i 2 , see eqs. (2.13) and (A.6a), the solution to this recursion relation is simply given by where C(j i ) is a function of the sl(2, R) spins j i = m i 1 − m i 2 , and we have written This is exactly the structure of the correlation function of twisted sector ground states in the symmetric product orbifold found in [3][4][5]33]. Putting everything together, we therefore conclude that the correlators are of the form where W Γ is some unknown function of the insertion points and sl(2, R) spins. The correlator thus has support only on the locus where the covering map Γ exists. Finally, we sum over all possible covering maps for a given set of branching indices w i . This is exactly the analogue, in the hybrid formalism, of eq. (5.28) of [3], which was obtained there by assuming a particular solution of the sl(2, R) k Ward identities. The above derivation using the free field realisation has allowed us to dispense with the assumption that the delta function localised solution of the Ward identities is indeed the correct solution: at k = 1 it is the only solution.
In [3], the delta function constraint was combined with a semi-classically exact saddle point of the classical sigma model, which gave a weight of e −S L [φ cl ] for each such covering map. Here is the classical Liouville action and the scale factor φ cl is the conformal factor arising from the covering map, i.e. φ cl (z) = ln (|∂Γ(z)| 2 ) . The action S L [φ cl ] is to be evaluated in a suitably regularised way [4]. Together with the results of [33], this then suggests that the full correlator (combining both chiral and anti-chiral degrees of freedom) is of the form (4.34) where we have, as always, assumed that x i = Γ(z i ) for i = 1, 2, 3 (and Γ(z) has branching index w i at z i ). Here h 0 = (w 2 −1) 4w is the spacetime conformal dimension of the w-th spectrally flowed ground state -the contribution from the ground state is already captured by the exponential of the classical Liouville action. 12 One expects theW Γ to be constants, independent of Γ, but it would be good to deduce this (as well as the complete form of (4.34)) directly from solving the conformal bootstrap equations on the world-sheet.
Relation to twistor string theory
We have seen above how the relation (4.10) is central to the unusual property of the physical correlators being delta function localised to those Riemann surfaces which admit a finite branched covering x = Γ(z) of the boundary sphere. The relation (4.10) actually also gives a more geometric picture of our free field realisation of the k = 1 sigma model on AdS 3 ×S 3 . It directly implies that the symplectic bosons ξ ± (z) that appear in (4.10) are to be thought of as homogeneous coordinates in the complex parametrisation of the target CP 1 on the boundary of AdS 3 .
Note that in the conventional NS-R parametrisation of the AdS 3 sigma model which was employed in [3], the field γ(z) (and its complex conjugate) parametrise the sphere direction. In that case we argued, based on the Ward identities, that γ(z) = Γ(z) holds as a relation within physical correlators (see eq. (6.4) of [3]). In the free field realisation, we do not need to go through the Ward identities to arrive at the underlying geometric picture of the path integral localising on branched coverings. 12 In the discussion of this paper we have assumed that the states are spectrally flowed highest weight states with respect to the sl(2, R) factor, but their excitation with respect to the other degrees of freedom can be arbitrary. One would expect that one can bring all physical states into this 'gauge', and therefore that the conclusions of this paper apply to arbitrary physical states of the theory. This also ties in with the expectations from the symmetric orbifold.
Furthermore, (4.10) is very suggestive of a (quantum sigma model version of) a twistorial incidence relation. Recall that in four dimensional (complexified) Minkowski space the fundamental twistor relation is
µȧ + x aȧ λ a = 0 , (4. 35) where the Minkowski coordinates x aȧ = σ µ aȧ x µ are written in bispinor form, and (µȧ, λ a ) are the (bosonic) spinor variables which are homogeneous coordinates for the corresponding twistor space (CP 3 in this case).
In our case, ξ ± (z) therefore play the role of twistor variables that coordinatise the CP 1 . That this is not an accident can be seen by looking at the free sigma model of the symplectic boson and fermions parametrising the psu(1, 1|2) 1 theory. This can be cast in a form which is a two dimensional closed string analogue of the twistor open string theory proposed by Berkovits [12] for 4d super Yang-Mills. To see this, we define the supertwistor fields The free field action splits into left and right moving parts which can each be written as and similarly for the right moving piece. Here we have also introduced a U (1) gauge field A = Az under which the Z I fields carry charge −1 while the Y I fields carry charge +1.
The corresponding left moving current is then In other words, this is nothing other than the gauging of the Z current (not to be confused with the supertwistor Z I !), which is necessary in order to describe the psu(1, 1|2) model, see the discussion below eq. (2.9). In this form, the action (4.37) and the current (4.38) are the direct analogues of eqs. (6) and (8) of [12], respectively. From this perspective it thus seems natural to view Z I as homogeneous coordinates for the supertwistor space CP 1|2 -the lower dimensional analogue to the CP 3|4 that enters in the usual twistor string theory. Thus (4.10) is the natural bosonic incidence relation for these variables. Also recall that the twistor string theory of Berkovits [12] is a topologically twisted string theory which had physical correlators localised on the loci of incidence relations such as these, see in particular eq. (16) of [12]. 13 This again fits with the expectation that there is an underlying topological string description of the psu(1, 1|2) 1 sigma model as well. This is currently under investigation [13]. 13 A similar statement also applies to the twistor string theory of Witten [34]. See for instance, eq. (1) of [12] or eq. (1.1) of [35]. However, in that context the δ-function localisation happens on the worldvolume of a D-string.
Explicit calculations
In Section 4 we used general arguments from complex analysis to derive a relationship between the correlation functions (3.19) and the covering map. The relation in equation (4.10), along with its derivation, allowed us to impose strong constraints on the form of the correlation functions. Namely, we were able to show that the correlator (3.19) is delta-function localised to the insertion points that allow for the existence of a covering map. Furthermore, using (4.10) we could fully constrain the dependence of the correlation functions on the J 3 0 eigenvalue, see eq. (4.31). As we mentioned before, we could also determine the correlators using a more direct approach, following in essence the techniques of [3], but now working directly with the symplectic boson fields (rather than the currents in sl(2, R)). That is, we could constrain the correlation functions of our theory via the local Ward identities that arise upon inserting ξ ± (z) and η ± (z) into a correlation function. In this section, we spell out the details of this approach, and demonstrate that it can be used to exemplify the results of Section 4 in a few simple examples.
Note that the following discussion will be done for the free field theory corresponding to u(1, 1|2); in order to obtain the results relevant for psu(1, 1|2) we have to divide out the Z n generators as discussed below eq. (2.9). Since we will be considering correlators of vertex operators that are primary with respect to Z n (and do not involve any Z −n descendants with n > 0) there is essentially no difference between the two calculations; the only small modification one has to take into account is that some of our fields carry non-trivial U 0 -charge, and that this gives rise to an additional contribution of the form where V q (z) is the vertex operator corresponding to the primary state with U 0 charge U 0 = q. In particular, in the physical correlators in (3.19), we will get additional such contributions from the W fields (that have U 0 charge equal to q = 1) as well as (n − 2) of the V w m 1 ,m 2 (x; z), see eq. (3.19). These contributions will cancel against the explicit dependence on the u α that we will find below. This is how it has to be since in the psu(1, 1|2) theory, the W field describes the vacuum, and hence does not contribute to the correlators.
Ward identities
The local Ward identities for the free field realisation of the psu(1, 1|2) 1 model can be deduced by considering the insertion of ξ ± (z) and η ± (z) into correlation functions. We begin with the correlators For the time being, let us assume that all of the w i are odd, so that (5.2) defines a meromorphic function of z with no branch cuts. (The strategy for even spectral flow is demonstrated in Appendix C.) As a function of z, (5.2) has poles at z = z i with order w i +1 2 , and behaves asymptotically as O(1/z) as z → ∞. Thus, by Liouville's theorem, we can construct the correlator knowing only the Laurent expansion near its poles. Using the OPEs (2.37) and (2.39), as well as (2.42), it therefore follows that Here, we have defined The Ward identities (5.3) involve several 'unknown' correlation functions that we would like to eliminate. This is done by noticing that, since as z → z k . Using (5.3), we can write this as Furthermore, (2.44) tells us that the OPE of ξ ± (z) with W (u α ) has a zero of order 1 at z = u α , i.e.
Using (5.3), we can thus write this as = N + 2(n − 2) − 1 constraints, where N is given by (4.2). Thus, solutions to this system only exist given that n − 3 conditions on the parameters (x i , z i , u α ) are satisfied. It will turn out, as we shall see, that the existence of a solution is independent of u α , and the appropriate conditions for a solution to exist will be those required for the existence of a covering map.
The recursion relations
Equations (5.6) and (5.8) on their own do not give us a great deal of information, since they relate the unknown correlators F i r to each other. However, since we can relate the unknown F i w i 2 to the original correlation function as where we have introduced the shorthand notation m i 2 + 1 2 to represent the correlator (3.19) with the shift m i 2 → m i 2 + 1 2 . Futhermore, using (2.39), we can determine the coefficient of σ w , and we have again used the shorthand in which m k 1 − 1 2 denotes the correlator with the shift m k 1 → m k 1 − 1 2 . This allows us to write (5.6) as a linear system given by where p ∈ {0, . . . , w k −1 2 }, while (5.8) leads to Equations (5.12) and (5.13) can be used to eliminate the unknown F i r−1/2 correlators with r ≤ w i −1 2 , and the remaining constraints define a system of recursion relations in the variables m i 1 , m i 2 for the correlation functions (3.19). These relations, when solved, reproduce the structure (4.31), as we will see in the examples below.
The η ± constraints
So far we have only concerned ourselves with the constraint equations coming from the insertion of the fields ξ ± into correlation functions. As it turns out, the constraints coming from the ξ ± insertions are both technically simpler than those for the insertion of η ± and constrain the form of the correlation functions almost fully. The η ± constraints are, however, important for deriving the U 0 charge constraint discussed in Section 3.2, and thus we present them here.
Consider the correlation function Just as with the ξ ± insertions, we can use the OPE of η ± with the vertex operators to constrain the form of this correlator. This is slightly more complicated than for the ξ ± insertions, since the OPE of η ± with W has a first order pole. The result is where we have defined
(5.16)
Just as for the ξ ± analysis, the linear combination η − (z) + x k η + (z) has a highly regular OPE with V w k m k 1 ,m k 2 (x k ; z k ). That is, Explicitly, this imposes the constraints Unlike the ξ ± case, however, there are no constraints coming from the behavior of this correlator near z = u α . Just as before, since we know the action of η + , as well as the lowest-order coefficient on the right-hand side of (5.18). Putting this together, we end up with the linear system n i =k = N + 2n − 5 unknowns, and is thus under-constrained for n > 4.
Some three-point functions
Let us begin by demonstrating the utility of the local Ward identities from above in the simplest possible case: the three-point function with unit spectral flow. That is, we aim to constrain the correlator Let us start with the ξ ± constraints. In this simple case, the linear system defined by (5.12) and (5.13) takes the form while the last two are easily solved to yield The u-dependence of eq. (5.25) is an artefact of the fact that we are working with the full free field u(1, 1|2) theory, rather than with psu(1, 1|2) 1 . In particular, the u-dependence of eq. (5.25) disappears upon removing the factor that comes from the U 0 charge terms, see eq. (5.1) and the discussion below. Note that eq. (5.25) matches the analysis of Section 4, particularly eq. (4.31). Indeed, the relevant covering map in this case is simply Γ(z) = z. Since Γ(z) = z i + (z − z i ), the constant a Γ i in (4.31) is always just 1. Thus, we should expect the three-point function with unit spectral flow to have no dependence on h i := m i 1 + m i 2 + 1 2 , and this is indeed what we have found. Now, let us consider the η ± constraints for this correlator. With our choice of x i , z i , This is a system of three equations with two unknowns, A ± , and thus the unknowns can be eliminated. After eliminating A ± and using the solution (5.25), we are left with the simple constraint C(u, S + 1 2 ) S = 0 . It is worth noting that the constraint S = 1 2 is exactly the statement of the U 0 charge condition in Section 3.2. We should also mention that, because of (5.1), we know that the function C(u) must be of the form C(u) = Cu(u−1), so that the total u-dependence has the expected form. (Recall that the U 0 -charge of the vertex operator V 1 m 1 ,m 2 is q = m 1 −m 2 − 1 2 .) Finally, let us check that (4.10) actually holds for this three-point function. The appropriate covering map is simply Γ(z) = z. Using the local Ward identities and the solution (5.29), we have 14 C(u, S) could also depend on the m i j mod 1 2 .
5.2.1
The three-point function with (w 1 , w 2 , w 3 ) = (3, 3, 1) The computation of three-point functions with higher values of the spectral flow parameters (w 1 , w 2 , w 3 ) works similarly to the (w 1 , w 2 , w 3 ) = (1, 1, 1) case discussed above. Let us summarise the main steps of the computation by means of an example, the three-point function Eqs. (5.12) and (5.13) define a linear system of seven equations in the seven unknowns The solution of the linear system implies reproducing eq. (4.28) with The correlator is then fixed to be where again C(u) = Cu(u − 1), see the discussion below eq. (5.29). The constraint S = 1 2 follows again from the η ± constraints (5.19). Finally, one easily checks that the incidence relation (4.10) is indeed obeyed. We have performed a similar analysis for all three-point functions with w i odd and w i ≤ 19, and shown that it leads to the correct covering map solution (provided that the selection rules (4.25) are satisfied). We have also checked that this solution indeed satisfies the incidence relation for w i ≤ 9.
Localisation
The relevant covering map for the w = 1 four-point function is simply given by a Möbius transformation satisfying Γ(z i ) = x i . Such a function only exists provided that the crossratios of the two sets of variables coincide, i.e. provided that We want to show in the following that the correlation function indeed localises to this configuration, i.e. that it vanishes unless (5.37) is satisfied. Adopting the short-hand notation of the previous section, eqs. (5.12) and (5.13) take the form Just as for the three-point function, we can use the conformal Ward identities to set x 1 = z 1 = 0, x 2 = z 2 = 1, and x 3 = z 3 = ∞, in which case the relevant covering map is simply Γ(z) = z. Equation (5.38) becomes then Multiplying (5.39a) by u α , and then subtracting (5.39b), one finds The first three terms are independent of α, and one immediately deduces Thus, the correlation function is non-zero only if x 4 = z 4 = Γ(z 4 ). We also note that this has precisely the form of (4.20), and hence implies that the correlator has the structure
Explicit form of correlator and incidence relation
We can actually be more specific about the form of the correlator. To this end we deduce from eq. (5.12) that For our choice of insertion points and using (5.42), this becomes This set of recursion relations fixes the m i 1 and m i 2 dependence of the correlator as Note that eq. (5.46) does not carry any dependence on the J 3 0 eigenvalue m i 1 + m i 2 , in agreement with (4.31). In fact, for the choice of insertion points made above, the covering map is simply Γ(z) = z, and a Γ i = 1 for i = 1, . . . , 4. We also note that the u i dependence of eq. (5.46) comes from the U 0 -charge terms, see eq. (5.1) and the comment below eq. (5.25); this fixes the u i dependence of C(u 1 , u 2 , x 4 , S) to be It is also straight-forward to confirm that the correlator satisfies indeed the incidence relation (4.10). For our choice of insertion points this amounts to where we have used eq. (5.45).
This analysis takes care of the constraints that come from considering correlators with the insertion of ξ ± . We can also determine the constraints that come from the insertion of η ± into the correlators, following the general method of Section 5.1.2; the analysis is a bit technical though, and we have therefore relegated it to Appendix D. The most interesting novelty relative to the situation for the three-point functions of Section 5.2 is that we cannot directly eliminate the unknown correlators A ± α , and that they actually turn out to involve also derivatives of delta-functions, not just delta-functions, see eq. (D. 19). As a side-product we also deduce the constraint that S has to equal S = 0, see eq. (D.11), which is just the U 0 -charge condition. Note that S = 0 is in particular compatible with two of the j i = m i 1 − m i 2 to be equal to j i = 1 2 , and the other two to be equal to j i = − 1 2 .
Conclusions
In this paper we have analysed the world-sheet correlation functions of string theory on AdS 3 × S 3 × T 4 with minimal NS-NS flux (k = 1), and made the matching with those of the dual symmetric orbifold theory manifest. The main advances relative to [3] are that: (i) Our analysis was done directly in the hybrid formalism, using the free field realisation of psu(1, 1|2) 1 in terms of symplectic bosons and fermions. (The analysis of [3] was done in the NS-R formalism, and only made use of the bosonic sl(2, R) k+2 symmetry.) (ii) As a consequence we could derive stronger results: in particular, we could show that the solution is delta-function localised, see (4.23) and (4.31). (In [3] it was only shown that such a delta-function localised solution exists, but it was not clear (and probably also not true) that this is the only solution of the sl(2, R) k+2 Ward identities.) (iii) These special properties of the correlation functions have a nice geometric origin, namely the 'incidence relation' (4.10), that suggests a close relation to a twistor string theory description, see Section 4.5.
There are a number of open problems that remain. In particular, our analysis only concerns the chiral correlation functions, and we therefore cannot fix directly the full world-sheet amplitudes. However, given the results of this paper, it should now be possible to solve the bootstrap equations for the psu(1, 1|2) 1 WZW model, and thereby fix the undetermined constants, say in (4.34). Once this is done, one should also add in the ghost (and fermion) contributions, and confirm that one obtains indeed the correct symmetric orbifold correlators (including normalisation, etc.).
In this paper we have only considered the situation on the world-sheet sphere. However, most of the arguments also go through for higher genus; this will appear elsewhere [36]. We have also only analysed the case of AdS 3 × S 3 × T 4 ; it should be possible to do a similar analysis for AdS 3 × S 3 × S 3 × S 1 , for which the relevant WZW model is associated to d(2, 1|α) k . The tensionless limit for that theory arises if the level of one of the su(2) factors equals one, say k + = 1, and the world-sheet theory has a free field realisation in terms of symplectic bosons and fermions if also the other su(2) level equals one, k + = k − = 1, i.e. α = 1 and k = 1 2 [37]. Another class of backgrounds that should be accessible by these methods are the quotients of AdS 3 × S 3 × T 4 that were recently studied in [38].
The free field description of the world-sheet theory which we have developed and utilised in this paper should also allow one to attack other problems. For example, it should not be too difficult to study the D-branes of this world-sheet theory in terms of the free field description, see e.g. [39][40][41]. This should give access to some non-perturbative aspects of the dual symmetric orbifold theory, and thereby shed light on the extent to which this duality holds beyond perturbation theory.
Another direction which would be physically very important is to use the free field description to construct the vertex operator which adds an infinitesimal amount of RR 3-form flux (in the g s → 0 limit). It would be interesting to understand how to treat such a perturbation about the tensionless limit and, in particular, whether this modifies the delta function localisation of the correlators.
Acknowledgments
We thank Lorenz Eberhardt for useful conversations and comments on a draft of this paper. The work of AD, MRG and BK was supported by the Swiss National Science Foundation through a personal grant and via the NCCR SwissMAP. The work of RG is supported in part by the J. C. Bose Fellowship of the DST-SERB as well as in large measure by the framework of support for the basic sciences by the people of India.
A Spectrally flowed representations
In this appendix we shall describe various spectrally flowed representations in more detail. We begin with theσ 2 -spectrally flowed vacuum representation.
A.1 Another vacuum representation
We want to show that the state defined by (2.29) is indeed the vacuum state with respect to psu(1, 1|2). First we note that theσ spectral flow acts on the free fields aŝ The action on the psu(1, 1|2) 1 and u(1) generators is then It therefore (almost) leaves the psu(1, 1|2) generators invariant, while it shifts the eigenvalues of U 0 and V 0 . In order to see that (2.29) is indeed the vacuum state we first note that because of (A.3a) and (A.3b) this is the case with respect to the bosonic subalgebra sl(2, R) 1 ⊕ su(2) 1 ; here we have used that the fermionic excitations do not modify this property, as one can easily see from the free field realisation. Note that the fermionic generators are required to guarantee that the state is also annihilated by the S αβγ n modes with n ≥ 0; in particular, sinceσ(S αβ− m ) = S αβ− m−1 , this requires that as one verifies again directly from the free field realisation. The other supercharge generators work similarly. Finally, the U 0 charge of |0 (1) follows directly from the spectral flow property (A.3c), while for V 0 we use that since each ψ α generator carries V 0 charge −1/2. Together with (A.3d) this then shows that V 0 |0 (1) = −|0 (1) , and hence leads to (2.30).
A.2 Twisted sector ground states
In this appendix we identify the twisted sector ground state of the dual CFT in terms of our hybrid description. Let us start by recalling that under the usual spectral flow the operators transform as In addition, the energy-momentum tensor transforms as In the hybrid formalism, the mass-shell condition is We also note that on the highest weight states of the above (unflowed) representation L 0 = 0. This follows from the fact that the Casimir of psu(1, 1|2) vanishes on the short representation, see eq. (3.25) of [1]; it also follows from the observation that the 4 free fermions contribute ∆ f = 4 16 = 1 4 to the ground state energy -they are in the R sectorwhile the 4 symplectic bosons contribute ∆ b = − 4 16 = − 1 4 , see e.g. Section 4.2 of [2] so that the total ground state energy is zero. Thus the mass-shell condition in the w-spectrally flowed sector is where N is the excitation number before spectral flow, while m and n are the J 3 0 and K 3 0 eigenvalues before spectral flow.
B On Covering Maps
In this appendix we explain how the conditions on the covering map determine the polynomials p ± (z) up to two overall scale factors as well as two coefficients. Since we can factor out the two overall scale factors, this, in particular, implies that the conditions on the covering map can be written in terms of ratios of expansion coefficients, see the discussion below eq. (B.3). Let us start by denoting the coefficients of p ± (z) as Then the constraint (4.7) becomes Here the right hand side is the expansion of the polynomial C n i=1 (z − z i ) w i −1 , with C 2N −2 = C being the only unknown. The left hand side has the (2N + 2) unknowns {a j , b j }.
We can eliminate C by considering the (2N − 2) equations that come from equating powers of z on both sides of (B.2), b N } for j = 0, . . . , N − 1. We can fix two of these, say (α 0 , β 0 ), and parametrically solve these (2N − 2) equations for the remaining (2N − 2) variables. This algebraic system of equations will generically admit a finite number of discrete solutions for the {α j , β j } for any fixed (α 0 , β 0 ).
We have thus obtained a family of covering maps which have the right branching behaviour at z = z i and are expressible in terms of the two parameters (α 0 , β 0 ), together with the individual scaling factors of (a N , b N ). We can thus write the solutions to (B.2) as Thus we have a solution for the covering map in terms of a common scale factor and three unknowns ( a N b N , α 0 , β 0 ). We can fix these three unknowns by demanding that p − (z i ) + x i p + (z i ) = 0 (B.5) for, say, i = 1, 2, 3. Again, as algebraic equations we generically will have finitely many solutions. The covering map is thus determined up to these discrete choices. In particular, Γ(z i ) for (i = 4, . . . , n) are now fixed to discretely many values.
C.2 Delta-function localisation for four-point functions
In this appendix we give a direct argument to show that the four-point functions are delta-function localised. Let v be the 2N + 2-dimensional column vector containing the coefficients of P ± (z), and let M be the (2N + 2) × (2N + n − 2) matrix implementing the linear system of constraints, see eq. where the product is over the different covering maps, and C is a non-zero constant. Here we have used that det M = 0 is necessary in order for the covering map to exist -recall that (C.8) are also the constraints that characterise the covering map. 16 Putting these two identities together we conclude that 12) and the correlator must therefore be a sum of delta-functions, where C Γ contains again the dependence on the other variables. 16 We have also checked (C.11) explicitly, and at least for generic choices of xi and zi this identity is true. For special cases the determinant det(M ) may actually be a higher order polynomial in x4, but then the additional factors that appear on the right hand side involve x4 or (x4 − 1), which describe various degeneration limits.
D The η ± analysis for the four-point function with w i = 1 In this appendix we analyse the constraints that come from the insertion of the η ± fields into the correlator for the case of the four-point function with w i = 1, following the general method outlined in Section 5.1.2. In particular, eq. (5.17) leads to the recursion relations Note that unlike what happened for the three-point functions in Section 5.2, this time we cannot simply eliminate the unknowns A ± α and obtain a relation involving only correlators of the form (5.36). In fact, (D.1) amounts to four equations in the four unknowns A ± α . One strategy to proceed is to derive relations for the correlators A ± α by inserting an additional ξ ± field into the A ± α correlator, and using the techniques of Section 5.1. To be specific, for the correlator A + 1 , we consider where we have adopted the notation A + 1 (m 1 2 + 1 2 ) = (η + imposing regularity at u 1 and u 2 leads to Similar relations can be derived for A − 1 , A + 2 and A − 2 , and the resulting relations take the form where α, β ∈ {1, 2}. Next we solve which is precisely the U 0 charge conservation constraint discussed in Section 3.2.
D.1 A consistent solution
We have seen in Section 4.2 that the correlators themselves are delta-function localised. One may then ask what the localisation property of the A ± α correlators are, and whether these various relations have in fact a consistent solution. In order to study the functional dependence of these correlators it is useful to take advantage of the fact that J + 0 V 1 m 1 ,m 2 (x, z) = ∂ x V 1 m 1 ,m 2 (x, z) . (D. 12) In terms of the free fields ξ + and η + , see eq. (2.2), this becomes Inserting (D.13) into a correlation function, 17 we obtain the differential equation 17 This procedure is somewhat analogous to the way in which the Knizhnik-Zamolodchikov equation [42] is derived: we write the derivative operator ∂x = J + 0 in terms of bilinears of (in our case free) fields, and then use contour deformation techniques to rewrite this bilinear expression of the free fields. Note though that for the actual Knizhnik-Zamolodchikov equation the derivative operator is a bilinear of currents (via the Sugawara construction), while here they are the free symplectic boson fields. | 17,049.2 | 2020-09-23T00:00:00.000 | [
"Mathematics"
] |
Genome-Wide Search for Gene-Gene Interactions in Colorectal Cancer
Genome-wide association studies (GWAS) have successfully identified a number of single-nucleotide polymorphisms (SNPs) associated with colorectal cancer (CRC) risk. However, these susceptibility loci known today explain only a small fraction of the genetic risk. Gene-gene interaction (GxG) is considered to be one source of the missing heritability. To address this, we performed a genome-wide search for pair-wise GxG associated with CRC risk using 8,380 cases and 10,558 controls in the discovery phase and 2,527 cases and 2,658 controls in the replication phase. We developed a simple, but powerful method for testing interaction, which we term the Average Risk Due to Interaction (ARDI). With this method, we conducted a genome-wide search to identify SNPs showing evidence for GxG with previously identified CRC susceptibility loci from 14 independent regions. We also conducted a genome-wide search for GxG using the marginal association screening and examining interaction among SNPs that pass the screening threshold (p<10−4). For the known locus rs10795668 (10p14), we found an interacting SNP rs367615 (5q21) with replication p = 0.01 and combined p = 4.19×10−8. Among the top marginal SNPs after LD pruning (n = 163), we identified an interaction between rs1571218 (20p12.3) and rs10879357 (12q21.1) (nominal combined p = 2.51×10−6; Bonferroni adjusted p = 0.03). Our study represents the first comprehensive search for GxG in CRC, and our results may provide new insight into the genetic etiology of CRC.
Introduction
Genome-wide association studies (GWAS) have successfully identified single-nucleotide polymorphisms (SNPs) associated with colorectal cancer (CRC) [1][2][3][4][5][6][7][8][9][10]. As biologic candidates, those findings have enhanced our understanding of the genetic etiology of CRC. However, the susceptibility loci found so far explain only a small fraction of the genetic risk: the ''missing heritability'' problem [7]. Among other explanations, the lack of a comprehensive examination of gene-gene interaction (GxG) is often considered as one possible source for the unexplained heritability [11][12][13][14]. A recent paper also suggests that the missing heritability problem could be due to the overestimation of additive heritability if the assumption that there is no GxG or GxE interaction is incorrect [15]. The standard GWAS test for association is to use a single-locus approach, testing one SNP at a time across the entire genome; however, the underlying genetic mechanism of a complex disease, like CRC, probably involves interplays among multiple loci. Testing each locus individually without considering other loci with which it may interact may miss true genetic effects. Compared to the single-locus approach, there have been very few genome-wide examinations of GxG, probably at least partially due to the limited availability of individual-level large-scale GWAS data and analytical difficulties and limitations in computation given the massive number of possible interactions. A genome-wide study of psoriasis has reported compelling evidence for an interaction between variants at the HLA-C and ERAP1 loci [16].
Another study identified a GxG between a previously identified locus C1orf106 and a new locus TEC for Crohn's disease, with the interaction successfully replicated in an independent dataset [17]. So far, no GxG has been identified for CRC.
GxG for 14 known CRC Susceptibility Loci
After applying the QC and selection criteria, there were a total of 2,011,668 SNPs in common among studies in the Phase I studies (Materials and Methods; Table 1).
We selected interactions that have fixed-effect meta-analysis pvalues , 10 26 in Phase I for replication in Phase II. These interactions are summarized in Table 2. For SNPs that are in LD (r 2 .0.8), we reported only the most significant interacting SNP. Overall we identified 12 interactions with p, 10 26 in Phase 1, including three interacting SNPs selected for each of the known loci rs6687758, rs4925386; two interacting SNPs selected for known locus rs7136702, and one interacting SNP for each of known locus rs4779584, rs10795668, rs9929218, and rs961253, respectively.
Within Phase II, the interaction between known loci rs10795668 and rs367615 showed evidence for replication (OR = 0.76, 95% CI 0.61-0.95; p = 0.01) with a combined Phase I and II OR of 0.74 (95%CI 0.67-0.83; p = 4.19610ˆ-8). rs367615 is located on 5q21 and has a MAF of 0.22 in CEU population. Additional inclusion of two advanced colorectal adenoma studies in the replication study further strengthened the statistical significance level of the replication (OR = 0.78 and 8.97610 23 ); OR and p-value for Phase I, II and advanced adenoma studies combined are 0.75 and 2.88610 28 . rs10795668 was genotyped in 10 studies and imputed in 11 studies with average imputation R 2 of 0.97 (range from 0.92 to 1.00); rs367615 was genotyped in 4 studies and imputed in 17 studies with average R 2 of 0.98 (range from 0.91 to 1.00). The forest plot showing individual study results is presented in Figure 1. We did not observe evidence for heterogeneity, and random effects results are similar to fixed effects results for this interaction. Figure 2 shows the regional association plot. Several LD partners of rs367615 also show evidence of interaction with rs10795668.
We also examined the two-locus interaction pattern for the SNP pair described above using a unrestricted model. Table 3(a) summarizes the OR and sample size for each genotype combination relative to the reference genotypes for Phase I and II studies combined. Table 3(b) and Table 3(c) summarize the OR for each SNP stratified by the genotypes of the other. In Table 3, we can see that subjects who carry AG genotype for rs10795668 and CT genotypes for rs367615 have a statistically significantly increased disease risk compared to those who carry reference genotypes at both loci (rs10795668:GG/rs367615:TT). However, for subjects who carry AG or AA genotype for rs10795668, carrying CT genotypes significantly decreases the disease risk. The interaction OR can also be calculated from the table. For example, if there were no interaction effect, samples that carry GG for rs10795668 and CT for rs367615 would have an increased risk compared to the reference group (OR would be 1.03*1.11 = 1.14). However, they actually have a statistically significantly decreased Table 3(a) can easily be calculated to be 0.76, 1.01, 0.60 and 0.89, respectively. This looks like an unusual interaction pattern. However, it is worth noting that the sample size is relatively small when the genotype of rs367615 is CC and as a result, all OR estimates in the third column have large p-values and wide confidence intervals. To account for the small sample size, and to aid interpretation, we re-constructed the interaction table by combining the CT and CC genotype of rs367615 and the AG and AA genotypes of rs10795668. Table 3(d) shows that the CT/CC genotypes of rs367615 have an increased risk when the genotype of rs10795668 is GG. On the other hand, the combination of AG/ AA genotype of rs10795668 and CT/CC genotype of rs367615 has a protective effect.
As we have fit ARDI and unrestricted model for the top interaction between rs10795668 and rs367615, it would be interesting to also see the results from the multiplicative model. The multiplicative interaction OR is estimated to be 0.83 with combined p = 3.14610 26 , which is less significant compared to ARDI model.
GxG among Top Marginal SNPs
Based on the meta-analysis results of the marginal association analysis for all except two advanced adenoma studies, we selected 606 SNPs for testing GxG with MAF.0.05, average R 2 .0.3, and both fixed and random effect meta-analysis p,0.0001. Both fixed and random effect p-values were used because we wanted to avoid selecting SNPs with signal dominated by a few studies. With this selection criterion, all chosen SNPs had heterogeneity p-value .0.1. After applying a LD-pruning routine (Materials and Methods), 163 SNPs remained.
In Phase I, we observed five pairs of SNPs with fixed-effect meta-analysis interaction p-value,5610 25 (Table 4). These five interactions point to 3 independent findings, as indicated by correlation of the first two SNPs (rs2170568 and rs7006896, r 2 = 0.78) and the next two SNPs (rs2200579 and rs10879357, r 2 = 0.75). In the replication, the GxG between rs1571218/ 20p12.3 and the two correlated SNPs rs2200579 and rs10879357 which are on 12q21.1 are significant at level 0.1 (pvalues are 0.04 and 0.06, respectively), with interaction ORs in the same direction. The combined Phase I and II analysis OR and pvalues are 0.81 and 4.61610 26 and 0.80 and 2.51610 26 , respectively. The interaction between rs1571218 and rs10879357 passed the Bonferroni correction with threshold 3.79610 26 = 0.05/(163*162/2). After including the two advanced colorectal adenoma studies, the replication OR and p-value are 0.89 and 0.17 for rs1571218 and rs10879357; the combined analysis OR and p-value are 0.82 and 1.15610 25 . rs1571218 was well imputed in all studies with average imputation R 2 of 0.95 (range from 0.91 to 0.98); rs10879357 was genotyped in 11 studies and imputed in 10 studies with average R 2 of 0.78 (range from 0.76 to 0.80). The forest plot shows consistent results across the individual studies ( Figure 3). Again, we did not observe heterogeneity and random effects results are similar to fixed effects results.
The two-locus interaction pattern for rs1571218 and rs10879357 is summarized in Table 5(a). The OR for each SNP stratified by the genotypes of the other are summarized in Table 5(b) and Table 5(c). In Table 5, we can see that all nonreference combinations are associated with an increased disease risk compared to the reference group. However, due to interactions with inverse associations, the risks are not as large as they would have been without interaction. For example, if there were no interaction effect, persons who carry AG for rs10879357 and GT for rs1571218 would have an higher risk compared to the reference group (OR = 1.1261.18 = 1.32). However, the risk is lower (OR = 1.08) because of the interaction (OR = 0.82). Computed as above, the interaction OR's of rs1571218:GT/ rs10879357:AG, rs1571218:GT/rs10879357:AA, rs1571218:TT/ rs10879357:AG and rs1571218:TT/rs10879357:AA in Table 5(a) are 0.82, 0.84, 0.83 and 0.89, respectively, which seems to follow a dominant genetic model. Table 5(b) shows the deleterious association with allele A of rs10879357 seems to be offset by the allele T of rs1571218. A similar pattern can also be observed for rs1571218 in Table 5(c). This indicates that there may be an exclusive interaction between rs10879357 and rs1571218. We also calculated the multiplicative interaction OR ( = 0.94) and combined p ( = 0.08) between rs1571218 and rs10879357.
Discussion
In this large study, we performed a genome-wide search for pairwise GxG for each of the known CRC susceptibility loci and among top SNPs with small p-values for marginal effects. To our knowledge, this represents the first comprehensive GxG scan for colorectal cancer. The most significant interaction found in our examination of known loci and other SNPs genome-wide was between the known locus rs10795668 (10p14) and rs367615 (5q21) with replication p = 0.01 and combined p = 4.19610 28 . The effect sizes are very similar in Phase I and Phase II studies, and there is no evidence of heterogeneity (P het = 0.39). Among the top marginal SNPs, the most promising interaction was between rs1571218 (20p12.3) and rs10879357 (12q21.1) (nominal p = 2.51610 26 ; adjusted p = 0.03). Again, the effect sizes are very similar in Phase I and Phase II studies and there is little evidence for heterogeneity (P het = 0.74).
The known locus rs10795668 in our identified interaction is located in an intergenic region within 10p14. So far, the function of this SNP has not been clearly defined and it has not been related to specific gene(s). The nearest predicted genes in this region are BC031880 and HV455515 and DD431424, the latter two are newly identified regulator genes for hTERT, a genetic region that contains susceptibility loci of multiple different cancers, including colorectal cancer [9,[18][19][20][21][22][23][24][25][26][27]. Other genes close by are TAF3 and GATA3 (,0.6 M bp). GATA3 belongs to the GATA family of transcription factors, which are important for T-cell development.
TAF3 is a TBP-associated factor (TAF); these contribute to promoter recognition and selectivity and act as antiapoptotic factors [28]. rs10795668 has also been found to be correlated with the expression of ATP5C1 [29], which is involved in cell metabolism. rs367615 is located in an intergenic region within 5q21, where there is one member of the Wnt signaling pathway (APC) known to be important in both familial and non-familial colorectal cancer as well as MCC, perhaps also important in CRC [30,31]. The closest genes to rs367615 are PJA2, MAN2A1 and FER. PJA2 is responsible for ubiquitination of cAMP-dependent protein kinase type I and type II-alpha/beta regulatory subunits and for targeting them for proteasomal degradation [32]. PJA2 has been found to bind the ubiquitin-conjugating enzyme UbcH5B [33], which functions in the ubiquitination of the tumor-suppressor protein p53. FER regulates cell-cell adhesion and mediates signaling from the cell surface to the cytoskeleton via growth factor receptors. MAN2A1 is a Golgi enzyme important in Nglycan processing [34]. Upon additional bioinformatic analysis, we identified two potential functional candidates, rs2201016 and rs2201015, that are in strong LD with rs367615 (r 2 values of 1 and 0.916 respectively). As shown in the UCSC Genome Browser view ( Figure S2, Table S2), rs2201016 and rs2201015 fall within a region of strong DNAse hypersensitivity and evolutionary conservation. As shown in Table 3(a), the interaction seems to be driven by the CT group of rs367615, which is an uncommon phenomenon and may be related to heterozygote advantage. However, the minor allele heterozygous (CC) genotype is relatively rare, making it difficult to conclusively estimate the effect size in that genotyped. Although both SNPs point to potentially relevant genes involved in cancer development, advancing basic research and translating these GWAS findings in to clinical benefit will require further functional characterization through in vitro and in vivo analysis.
We observed a statistically significant interaction between rs1571218/20p12.3 and rs10879357/12q21.1 (and a marginally significant interaction with a close by and correlated SNP, rs2200579). The SNP rs1571218 is in the same region (20p12.3) and modestly correlated (r 2 = 0.56) with the known CRC locus rs961253. The closest gene is bone morphogenetic protein 2 (BMP2), which is part of transforming growth factor-beta (TGF-b) pathway. The TGF-b pathway plays an important role in cell proliferation, differentiation, and apoptosis [35] and is established as important in CRC [36]. Two interacting SNPs rs2200579 and rs10879357 are close together (,4 k bp apart) at 12q21.1 and are correlated (r 2 = 0.76). These SNPs fall in the intronic region of TPH2, which is a rate-limiting enzyme in the synthesis of serotonin [37]. Serotonin is known to be involved in numerous central nervous activities. There is also evidence that serotonin is mitogenic in different cancer cell lines [38][39][40]. One study has shown that lack of serotonin causes a reduction of tumor growth in a mouse model of colon cancer allografts [41]. Further bioinformatic analysis revealed that rs10879357 is in LD (r2 = 0.697) with a synonymous coding SNP (rs4290270) in the exonic region towards the tail end of TPH2. Further in vivo or in vitro analysis is necessary to determine whether this variant has a functional impact such as mRNA stability. Because rs2200579 and rs10879357 are in a gene rich region, it is also possible that the SNPs impact genes other than TPH2. In this paper, studies were divided into Phase I and II according to the time their genotype data became available. Phase II was expected to serve as validation/replication of Phase I. For the known loci GxG search, the Phase II p-value between rs10795668 and rs367615 is 0.01, which is nominally significant at the 0.05 level but does not pass the Bonferroni threshold (0.05/12). Among the top marginal SNPs, the Phase II p-value between rs1571218 and rs10879357 also does not pass the Bonferroni threshold (0.05/ 5) even when the combined p-value passes the Bonferroni threshold (3.79610 26 = 0.05/(163*162/2)). In fact, combined test was recommended in two-stage GWAS because the replication test has been shown to be less efficient compared to combined test [42]. Therefore, larger sample size is needed to reach enough power to replicate our findings.
Adenomas are well known precursor lesions of colorectal cancer. Accordingly, we investigated if the observed interactions for colorectal cancer are also seen in advanced colorectal adenomas. Our findings suggest that the interaction between rs10795668 and rs367615 is present in advanced adenomas, suggesting that the genetic variants may act early in the development of colorectal cancer. In contrast, the interaction between rs1571218 and rs10879357 was not observed in advanced adenoma, which may suggest that the genetic variants act at a later stage of cancer development. However, the findings need to be interpreted with caution, as the number of adenomas is relatively small (,1000 cases).
In marginal association analysis, the most commonly used model is the log-additive model, where the genotype is coded as 0, 1 or 2 (based on the number of count alleles). It is therefore natural to use the same genetic coding in a two-locus interaction model to test for GxG. In the interaction model, the interaction effect is modeled by the product of the genotypes of two SNPs. As we can see in Table 6(a), this interaction model assumes that the interaction when both SNPs have homozygous genotype ( = 2) is four times as large as when both SNPs have heterozygous genotype ( = 1). In other words, this model assumes b 22~4 b 11 in the Table 6(b), which is a strong assumption. Indeed, we can see that the interaction pattern in Table 3(a) is not consistent with this assumption. Some simple calculations demonstrate thatb 22 = log(0.89), which actually represents a smaller effect size compared to b 11 = log(0.76). In fact, we have found in simulation that violation of this assumption can result in substantial loss of power ( Figure S1). A cautious way to avoid posing such a strong assumption is to use an unrestricted model, which is also a widely adopted method [17,43]. Using an unrestricted model can avoid violation of assumptions but may result in substantial loss of power because of the increased degrees of freedom (from 1 to 4). Our ADRI method uses the same genetic coding as the log-additive Table 3. Interaction pattern between rs10795668 and rs367615. For each combination of genotypes, we computed the odds ratio (95% CI), and pvalue relative to the baseline group (rs10795668:GG; rs367615:TT). We also list the sample size for cases/controls. model to allow allelic effects for main effects, which also makes the interaction test independent of the marginal screening. For the interaction, our method estimates the average interaction effect b b of b 11 , b 12 , b 21 , and b 22 . Because b b is an average effect, it is less prone to heterogeneity among studies. As a result, our method is more stable and reproducible compared to the unrestricted and log-additive model. It is worth pointing out that when the underlying genetic model is indeed log-additive, ARDI is less powerful compared to the regular interaction model with logadditive genetic coding. For future applications, a model selection technique needs to be developed to determine the most appropriate model with the least loss of power. Another worth noting point is that the case-only model, which assumes independence between SNPs in controls, is known to be more powerful than the case control combined model while testing for gene-gene interaction [44,45]. In our case, ARDI is a case control combined approach so the power can also be boosted by using its case-only counterpart. We did not implement the case-only ARDI for two reasons: it is relatively hard to completely avoid violation of the independence assumption (thus maintain the type I error rate) in case-only model due to the complexity of the LD structure of the human genome, i.e, long-range LD [46]; in addition, the current available package [47] for fitting a case-only model with covariates are only applicable to genotyped SNPs while our data include imputed dosages. As an on-going work, we are developing a package that can fit case-only model for two imputed SNPs while adjusting for covariates. GxG is usually defined as the departure from main effects [13]. Therefore, if the underlying main effects are not correctly specified, the residual main effects could be incorporated as part of the interaction effect in the statistical model [48]. As a result, testing interaction implicitly evaluates the residual main effect and interaction effect jointly. We keep the main effects as log-additive in ARDI, mainly because we want to be consistent with the usual log-additive model used in the marginal association analysis so that ARDI test is independent of the marginal screening. However, the log-additive main effect is prone to model misspecification. We observed this in our study for four of the known loci, rs10936599, rs6983267, rs4779584 and rs961253. These SNPs all showed an inflated lfor the interaction tests when using additive genetic coding for the main effect. In all four cases, the inflation ldiminished after we switched to unrestricted coding with no misspecification. VanderWeele and Laird (2011) used a similar approach to protect against potential misspecification of main effects [49]. We tried ARDI with unrestricted main effect on our top findings. Under the ARDI model with unrestricted main effect, the interaction between known locus rs10795668 and rs367615 has an OR of 0.75 and combined p = 1.07610 26 (original OR = 0.74 and combined p = 4.19610 28 ); interaction between rs1571218 and rs10879357 has an OR of 0.83 and combined p = 3.90610 24 (original OR = 0.80 and combined p = 2.51610 26 ). As we can see, the OR's stay largely the same and there are still strong signals of interaction. However, the pvalues get larger in the new model, which could be due to random fluctuations between different models, or also could be a sign of main effect misspecification. Hence, our interaction test results should be interpreted with caution.
In our GxG search, we performed genome-wide interaction search for each known CRC locus and all other SNPs, including the SNPs that are in LD with it. This raises an important concern whether it is appropriate to test GxG between two SNPs that are in high LD. As an alternative, it is of interest to conduct haplotype analysis on the nearby regions of the known loci. We also prioritized SNPs based on their marginal association strength, using established methods [50]. Our reasoning is that if a SNP is involved in GxG, it is also likely that it will show evidence of some marginal effect. As most SNPs in GWAS are null, selecting a subset of SNPs that are more likely to show interaction could increase the power substantially as it reduces the overall multiple comparison burden. However, it is also possible for a SNP to show little or no marginal association if it is involved in interaction that is in the opposite direction to that seen with the main effect. In this case, we would not be able to find those qualitative interactions using our screening. Future research is needed to explore methods to complement the marginal association screening while still restricting the number of tests at a reasonable level to ensure reasonable power.
In this paper, we focused on pair-wise interactions. For higher order interactions, data mining methods such Random Forest [51,52] and Multifactor Dimensionality Reduction [53] are preferred compared to the traditional regression-based methods because of the scarcity of the potential high-order contingency Table 5. Interaction pattern between rs1571218 and rs10879357. For each combination of genotypes, we computed the odds ratio (95% CI) and pvalue relative to the baseline group (rs1571218:GG; rs10879357:GG). We also list the sample size for cases/controls. table [13]. As pointed out by Cordell [13], most of the high-order data mining methods, except for Random Forest, are computationally intensive, and hence, are not easily applicable to GWAS data. In addition, as the data mining methods are nonparametric, permutation tests are usually needed to produce p-value. Unfortunately they are generally computationally impossible for GWAS. Given the aforementioned limitations, one possible practical approach for searching for higher order GxG is to use Random Forest in a discovery dataset and use traditional regression-based methods to replicate the findings. It is important to note that we focused on testing statistical interaction in this paper and statistical interaction does not always warrant a biologic or mechanistic interaction [54]. Mechanistic interaction can be tested using the sufficient cause framework [55], which is out of the scope of this paper.
In summary, our study is the first to comprehensively search for GxG for CRC. We have found evidence for two interactions associated with CRC risk. Further studies are needed to evaluate these interactions and to study the underlying molecular mechanisms.
Study Participants
The studies used in this analysis, including number of cases and controls, are listed in Table 1, with each study described in detail in the Text S1. In brief, colorectal cancer cases were defined as adenocarcinoma of colon and rectum (International Classification of Disease Code 153-154) and were confirmed by medical record, pathology report, or death certificate. Advanced colorectal adenoma cases are defined as adenoma $1 cm in diameter and/or with tubulovillous, villous, or high-grade dysplasia/ carcinoma-in-situ histology, and were confirmed by medical record, histopathology, or pathology report. All participants provided written informed consent and studies were approved by the Institutional Review Board.
Genotyping
We conducted genome-wide scans for all studies. GECCO GWAS consisted of participants of European ancestry within 13 studies including the French Association Study Evaluating RISK for sporadic colorectal cancer (ASTERISK); Hawaii Colorectal Cancer Studies 2 and 3 (Colo2&3); Darmkrebs: Chancen der Verhutung durch Screening (DACHS); Diet, Activity, and Lifestyle Study (DALS); Health Professionals Follow-up Study (HPFS); Multiethnic Cohort (MEC); Nurses' Health Study (NHS); Ontario Familial Colorectal Cancer Registry (OFCCR); Physician's Health Study (PHS); Postmenopausal Hormone study (PMH); Prostate, Lung, Colorectal Cancer, and Ovarian Cancer Screening Trial (PLCO); VITamins And Lifestyle (VITAL); and the Women's Health Initiative (WHI). Phase one genotyping on a total of 1,709 colon cancer cases and 4,214 controls from PLCO, WHI, and DALS (PLCO Set 1, WHI Set 1, and DALS Set 1) was done using Illumina Human Hap 550 K, 610 K, or combined Illumina 300 K and 240 K, and has been described previously [9]. A total of 650 colorectal cancer cases and 522 controls from OFCCR are included in GECCO from previous genotyping using Affymetrix platforms [2]. A total of 5,540 colorectal cancer cases and 5,425 controls from ASTERISK, Colo2&3, DACHS Set 1, DALS Set 2, MEC, PMH, PLCO Set 2, VITAL, and WHI Set 2 were successfully genotyped using Illumina HumanCytoSNP. A total of 1,837 colorectal cancer cases and 2,072 controls from HPFS, NHS, PHS, and DACHS set 2, as well as a total of 826 advanced adenoma cases and 923 controls from HPFS and NHS were successfully genotyped using Illumina HumanOmniExpress. A population-based case-control GWAS from CCFR (1,171 cases and 983 controls) was successfully genotyped using Illumina Human1M or Human1M-Duo [56].
We divided the studies into two phases according to the time their genotype data became available (Table 1). We used the Phase I studies (10 studies; 8,380 cases and 10,558 controls) as the discovery set and Phase II studies (6 studies; 2,527 cases and 2,628 controls) as the replication set. In addition, there are two advanced colorectal adenoma studies, which we use to evaluate whether the interactions found for carcinoma are also associated with advanced adenoma.
DNA was extracted from blood samples or, in the case of a subset of DACHS, MEC, and PLCO samples, and all VITAL samples, from buccal cells using conventional methods. All studies included 1 to 6% blinded duplicates to monitor quality of the genotyping. All individual-level genotype data were managed centrally at University of Southern California (CCFR), the Ontario Institute for Cancer Research (OFCCR), the University of Washington (HPFS, NHS, PHS, and the second GWAS of DACHS), or the GECCO Coordinating Center at the Fred Hutchinson Cancer Research Center (all other studies) to ensure a consistent quality assurance and quality control approach and statistical analysis. Samples were excluded based on call rate, heterozygosity, unexpected duplicates, gender discrepancy, and unexpectedly high identity-by-descent or unexpected concordance (.65%) with another individual. All analyses were restricted to samples clustering with the CEU population in principal component analysis, including the three HapMap populations as Each entry in the tables represents the risk of the corresponding genotype combination relative to the baseline (AA/BB). a reference. SNPs were excluded if they were triallelic, not assigned an rs number, or were reported as not performing consistently across platforms. Additionally, they were excluded based on call rate (,98%), Hardy Weinberg Equilibrium in controls (HWE, p,10 24 ), and minor allele frequency. To place studies on a common set of autosomal SNPs, all studies were imputed to HapMap II release 24, with the exception of OFCCR, which was imputed to HapMap II release 22. CCFR was imputed using IMPUTE [57], OFCCR was imputed using BEAGLE [58], and all other studies were imputed using MACH [59]. Given the high agreement of imputation accuracy among MACH, IM-PUTE, and BEAGLE [60] the common practice to use different imputation programs is unlikely to cause heterogeneity [61]. Imputed data were merged with genotype data such that genotype data were preferentially selected if a SNP had both types of data, unless there was a difference in terms of reference allele frequency (.0.1) or position (.100 base pairs), in which case imputed data were used. As a measurement of imputation accuracy, we calculated R 2 [59].
For the GxG analysis, we restricted the search to SNPs with MAF.0.05 and imputation R 2 .0.3 because there is inadequate power to detect interactions between less frequent variants or variants with lower imputation quality given the current sample size.
Statistical Method
GxG model. A logistic regression model was used to assess GxG for each SNP pair tested. In particular, we used a simple yet powerful approach named ''Average Risk Due to Interaction (ARDI)'' to test for GxG. In this approach, the main effects of the SNPs are log-additive and the interaction effect is the averaged deviation from the main effects. This is in contrast to the usual modeling of the interaction effect for log-additive model, where the interaction term is the product of the two SNPs. To see this, we consider two SNPs, G 1 ( = AA, Aa or aa) and G 2 ( = BB, Bb, or bb) while A and B are the major alleles for G 1 and G 2 , respectively. Table 6(a) shows the usual interaction model with log-additive effects. Under this model, the interaction effect of aa/bb combination relative to the main effects is exp(4b), which is considerably larger than the Aa/Bb combination, which is exp(b). One way to avoid this strong assumption of interaction pattern is to use an unrestricted model (Table 6(b)), which models the interaction effect by four parameters b 11 , b 12 , b 21 , and b 22 . A fourdegrees-of-freedom test is needed to test for the interaction effect, which may result in a substantial power loss. We therefore modeled the average interaction effect with one parameter b b while keeping the main effect as log-additive (ARDI) (see Table 6(c)). This modeling avoids the strong assumption of the usual modeling of interactions with log-additive main effects, and yet gains power by having only one parameter to test for interaction. We keep the main effects as log-additive, mainly because we want to be consistent with the usual log-additive model used in the marginal association analysis. We have conducted extensive simulation studies to compare the performance of ARDI with multiplicative interaction model and unrestricted interaction model. Simulation results show that ARDI has favorable performance while the underlying interaction pattern is unknown (see Text S1, Table S1 and Figure S1). We have also tried both multiplicative model and ARDI in the Phase I studies and ARDI yielded more significant results genome widely, which supported the conclusion from the simulation because in this case the true underlying interaction is unknown and likely to vary among SNPs. Hence, we chose ARDI as our GxG model. Specifically, the ARDI model can be written as: log it(d)~a 0 za 1 fI(G 1~A a)z2I(G 1~a a)g where d is the disease status (0/1), a 0 is the intercept, a 1 and a 2 are the main effects, b b is the ARDI interaction effect. The hypothesis is to test whether b b~0. For all models, we adjusted for age, sex, study center, and the first three principal components from EIGENSTRAT [62] to account for population substructure.
GxG searching strategy. We performed genome-wide interaction testing for each of the 14 known CRC susceptibility loci and all other 2.1 M SNPs in the Phase I studies. SNPs with p,10 26 in Phase I were examined in the Phase II studies using the same ARDI model.
We also conducted a genome-wide search of GxG for all SNPs, using a two-stage approach. In the first stage, we did a genomewide marginal association test with additive genetic coding for all 2.1 M SNPs. Then we selected SNPs with marginal association pvalue ,0.0001 for the second stage and searched for pair-wise interactions among the selected SNPs. We selected 0.0001 as the cutoff so that around 100 independent regions would be selected assuming there are one million independent regions genome-wide [63]. It has been shown that the screening on marginal association is independent of the GxG test as long as the genetic coding for the main effect is the same as in the marginal association testing [50]. Because both the marginal association test and the main effect of ARDI use additive genetic coding, we need to adjust only for the number of interaction tests performed in the second stage to maintain the correct type I error level.
We observed 606 SNPs with marginal association p,0.0001. However, the 606 selected SNPs are not independent due to linkage disequilibrium (LD) between SNPs. As a result, if we used the number of pair-wise interactions among those 606 SNPs (n = 183,315) with a Bonferroni correction to compute the adjusted alpha level, the result would be too conservative. Therefore, we performed a pruning based on LD. First, the selected SNPs were ranked based on the marginal association pvalue. Starting with the first SNP (SNP with the strongest signal), we removed all SNPs that have a LD r 2 .0.8 with that SNP. Then we moved to the next SNP, and repeated the procedure until we reached the end of the list. A total of 163 SNPs remained after this LD pruning. We then tested for GxG among these SNPs in Phase I studies. Interactions with p,5610 25 were selected for Phase II (so the expected number of false positive based on total 163*162/ 2 = 13,203 interaction tests is less than one).
Meta-analysis
We used the fixed-effect meta-analysis to combine interaction estimates across studies. In this approach, we used the inversevariance weighting to combine the regression coefficient estimates from each study. As previously demonstrated [64], the imputation quality is automatically incorporated into meta-analysis with the inverse-variance weighting. We report the summary estimate, standard error, and 95% confidence interval, as well as the heterogeneity p-value for the meta-analysis. For top findings we examined whether a random effects model would result in substantively different results from our fixed effects model. We also examined forest plots for top interaction findings. We present meta-analysis results for Phase I alone, Phase II alone, and Phase I and II combined.
Genomic Inflation
We checked the QQ plot and genomic inflation factor l for the GxG meta-analysis results of each known locus. Among 14 known loci, 10 of them showed no systematic bias, with l 's less than 1.05. However, rs10936599, rs6983267, rs4779584 and rs961253 showed some indication of an inflated l (1.10-1.78). For each of these SNPs we found that the systematic inflation was due to inappropriate additive genetic coding for the main effect. If the main effect for a SNP does not follow an additive model (with the heterozygote effect half way between the two homozygotes on the log scale), but additive coding is used, this misspecification results in a residual main effect. The residual effect impacts the testing for the interaction and causes the inflation (see Discussion for more details). For those four SNPs, we switched their main effect coding from an additive model to a 2 degree of freedom unrestricted coding and observed that the inflation factor for the interaction GxG meta-analysis results was diminished (l#1.01). Table S1 An illustration of six two-SNP interaction models used in the simulation. SNP 1 has genotype AA, Aa and aa; SNP 2 has genotype BB, Bb, and bb. A and B are the major alleles for SNP1 and 2, respectively. Each entry in the tables represents the risk of the corresponding genotype combination. (DOCX) Text S1 | 8,579.2 | 2012-12-26T00:00:00.000 | [
"Biology"
] |
Aerial inventory of surficial geological effects induced by the recent Emilia earthquake ( Italy ) : preliminary report
As a consequence of the two main shocks that recently struck the central alluvial Po Plain (May 20, 2012, Ml 5.9, and May 29, 2012, Ml 5.8), a great number of surficial geologic disturbances appeared over a wide area (ca. 500 km2), which extended up to 20 km from the epicenters. The affected area includes Mirabello, San Carlo, Sant'Agostino (Province of Ferrara), San Felice, Cavezzo, Concordia (Modena), Moglia and Quistello (Mantova). Most of the surficial effects that were observed during this study were clearly induced (directly or indirectly) by sand liquefaction phenomena, such as sand volcanoes, burst of water and sand from domestic wells, tension cracks, lateral spreading and associated deformation, graben-like fracturing, and sink-holes. Other effects can probably be ascribed simply to the shaking of the ground (e.g., small collapses of irrigation canal walls). Lastly, there were also some features of dubious origin, such as two 'yellow crop spots' that are cited here with reservations. All of these data were surveyed by means of a small airplane that was especially adapted for this purpose. The aim of this study was to furnish a wide-ranging image of the surface deformation over the whole area impacted by these recent earthquakes, as an instrument towards more exhaustive research, both at the scientific and technical levels (e.g., seismic microzonation). […]
Introduction
As a consequence of the two main shocks that recently struck the central alluvial Po Plain (May 20, 2012, M L 5.9, and May 29, 2012, M L 5.8), a great number of surficial geologic disturbances appeared over a wide area (ca.500 km 2 ), which extended up to 20 km from the epicenters.The affected area includes Mirabello, San Carlo, Sant'Agostino (Province of Ferrara), San Felice, Cavezzo, Concordia (Modena), Moglia and Quistello (Mantova).
Most of the surficial effects that were observed during this study were clearly induced (directly or indirectly) by sand liquefaction phenomena, such as sand volcanoes, burst of water and sand from domestic wells, tension cracks, lateral spreading and associated deformation, graben-like fracturing, and sink-holes.Other effects can probably be ascribed simply to the shaking of the ground (e.g., small collapses of irrigation canal walls).Lastly, there were also some features of dubious origin, such as two 'yellow crop spots' that are cited here with reservations.
All of these data were surveyed by means of a small airplane that was especially adapted for this purpose.
The aim of this study was to furnish a wide-ranging image of the surface deformation over the whole area impacted by these recent earthquakes, as an instrument towards more exhaustive research, both at the scientific and technical levels (e.g., seismic microzonation).
Geological framework
The subsiding Apenninic perisutural basin of the Po Plain contains a thick succession of Pliocene-Quaternary sediments of marine and continental origin, with a thickness that reaches as much as 1000 m.In the study area, the 15 m to 20 m at the top of this succession have been ascribed to the Holocene.
For a thickness of several hundreds of meters, the subsoil of this area is formed by clay, silt and sand deposited by the River Po and its tributaries over the last seven hundred thousand years (Figure 1).These sediments represent the more recent phases of replenishment of the basin, which is linked to the uplift of the northern Apennines [Regione Emilia-Romagna 1988].
The topographic surface shows long and smooth ridges (the so-called dossi) that are formed by sandy channel-fill deposits, levees (natural and artificial), and crevasse splays, that are related to the ancient paths of the Rivers Secchia, Panaro and Reno, which were abandoned in the period between the 10th and 17th century [Regione Emilia-Romagna 1999].Clay, silt and peat deposits prevail in the wider and lower overbank areas (the valli).The subsoil is formed by a succession of these lithologies, due to the permanent migration of rivers in the alluvial plain since the middle Pleistocene.
Human activity has strongly influenced the present landscape since medieval times, by the draining and raising of the surface of swamps through landfill practices, by creating artificial ditches and canals, by diverting the natural flows of waters and retaining them inside artificial levees.In the whole study area, the groundwater level is very close, if not coincident, to the surface.
In summary, the subsoil of this area shows all of the combined predisposing factors that allow the liquefaction of sands during strong earthquakes.
Surface effects
The first observed events were related to the shock of May 20, 2012, with its epicenter near Massa Finalese (Modena).This earthquake most strongly impacted a region to the east of Mirandola: in particular, the villages of San Carlo and Sant'Agostino (ca.16 km from the epicenter), Mirabello (20 km), Dodici Morelli (11 km), San Felice sul Panaro (8 km) and the region surrounding San Martino Spino (12 km) and Scortichino (7 km).
The most spectacular effects were represented by the eruption of sand, which formed small volcanoes, and the bursting of water and sand from domestic wells.Long tension cracks (up to some hundreds of meters in length), 'grabens', and large lateral spreads formed within the village of San Carlo (Ferrara).
Field survey
Considering that rain, vegetation, and human activity would quickly obliterate geomorphic evidence of the earthquakes, an aerial inventory of the impacted area was undertaken, using an especially adapted aircraft.The inventory was collected under the ideal conditions of low speed-low altitude capability of the aircraft, simplicity of using the equipment, and low operational cost.The survey was conducted at the average altitude of 200 m over a two-week period.More than 700 geological features were inventoried and documented by 300 aerial photographs and their related GPS coordinates.In total, about 2000 high-definition digital photographs were taken and examined in a post-flight analysis.In this way, it was possible to also recognise small effects, of only 1 m 2 in area.Some 500 km 2 of the territory were systematically checked in this way, during 15 sorties and 20 h of flight.Systematic checks on the terrain were performed, to reduce the number of errors in the interpretation of photographs.
The observational accuracy depended on the short time that elapsed between the earthquakes and the survey.Within one week of the shocks, the sand remained wet and appeared clearly on the terrain; after that time, the sand became dry The following types of features were recognized: a. Sand volcanoes (punctual); b.Eruption of wet sands from ground fissures; c.Eruption of wet sands from foundations (usually along the perimeters of buildings); d.Eruption of sands (and water) from domestic wells; e. Tension cracks and graben-like fissures (without or with eruption of sand or water); f. 'Dry' craters (or sink-holes) of average diameters of 1-2 m; g.Lateral spreading effects (on slopes); h.Small collapses along the banks of irrigation ditches; i. 'Yellow corn spots' due to local corn withering (saltwater/ gas/heat emissions?).
Seventy percent of the observed effects occurred within town and village boundaries (Figure 3.2), while the remaining 30% was observed in open agricultural fields and was characterized almost exclusively by features of types 'a', 'b' and 'e'.
A simplified map of the aerial inventory is shown in Figure 2a,b.Due to the large amount of data collected in the original inventory (about 700 datapoints), each point in these figures might represent one or more data points.The main aim of these maps is to provide a picture of the spatial distribution of the collected data in relation to the epicenters.The yellow dots in Figure 2a,b represent sand volcanoes, eruption of sand from short cracks, and bursts of sand and water from domestic wells.The red symbols represent short (dots) or long (lines) fissures in the ground, without or with the eruption of sand.The red stars are the epicenters of the main shocks (easternmost: May 20, 2012, M L 5.9; westernmost:May 29 2012, M L 5.8).
In the inventory, the data are geo-referenced and accompanied by a short description and one or more aerial photographs (examples are shown in Figures 3, 4 and 5).In Figure 2b, some peculiar features are visible inside the town of San Carlo (Sant'Agostino municipality): the 'graben' (light red) and the tension cracks (light green).
Conclusions
These observations indicate that most of the sand eruptions and tension fissures follow the sinuous paths of abandoned rivers and their sandy deposits.These observations are consistent with what has been reported for ancient dossi, which has patterned the location of towns and associated modern infrastructure (see Castiglioni et al. 1997, Figure 1).In some cases, also ancient artificial banks and land fill might have been affected.
AERIAL INVENTORY OF GEOLOGICAL EFFECTS
An important exception to this correspondence is represented by the cluster of data in the so-called 'Valli di Mirandola' south of San Martino Spino, that was a relatively recent bed of the Po River, but now abandoned (Figures 2b,4.3,4.4,5.1,5.2).In some cases, the liquefaction occurred on ancient crevasse splays, like mostly in the San Carlo case.A large number (70%) of these effects were concentrated inside towns, so demonstrating a close relationship with the presence of buildings and domestic wells.
Even if in a preliminary form, this aerial inventory raises some interesting elements of discussion.One is related to the spatial distribution of these liquefaction effects: a large number of them occurred at a considerable distance from the epicenters (up to 20 km away), whereas only 10% occurred within the radius of 5 km.
Two evident spots of withered corn about 20 m in diameter (Figure 5.3 and 5.4) were perhaps related to saltwater, gas or heat emissions, and these were observed and documented near Medolla (just near the May 29 epicenter).These are not new features as they had been known and documented since the XVII century [Spinelli et al. 1893, and further references in Gorgoni and Tosatti 2004].As a consequence, a close relationship with the recent earthquake should be considered dubious.For this reason, these effects are cited here but do not appear in the aerial inventory.
Regarding the data survey, we highlight the versatility of the utilized aerial vector.The high-wing, two-seats Tecnam P92 aircraft allowed us to accomplish the whole survey with the maximum of accuracy and at very low overall costs (about 2000 Euros).
This work integrates other similar surveys performed in the same area and period [e.g., EMERGEO Working Group. 2012, this volume] completing the exhaustive picture of the occurred phenomena.
The complete aerial inventory has been used to draw maps for the competent authorities [Regione Emilia-Romagna 2012] and is available for scientific and technical purposes.In particular, it has been a useful base-map for the collection of a series of samples, carried out by DST-Unimore (Dipartimento di Scienze della Terra, Modena and Reggio Emilia University) in order to perform a complete analysis of these sands in the whole territory (in progress).
In the near future, this inventory might also be useful in territorial planning, to perform more accurate hazard assessment.
Figure 1 .
Figure 1.Sketch map of the surficial geological effects induced by the recent earthquakes, and their relationships with geomorphological features.Geomorphological map from Castiglioni et al. [1997].
Figure 2a .
Figure 2a.Simplified maps of the observed geological effects.For symbols see text.
Figure 2b .
Figure 2b.Simplified maps of the observed geological effects.For symbols see text.
and was more difficult to recognise, especially in an urban context.In open fields, these features remained clearly visible for several weeks, even if vegetative regrowth and cultivation have now obscured the effects.
Figure 3 .
Figure 3.Some examples from the mapped surficial effects.1) Long tension cracks (up to 1 m in width and 2 m in depth) and 'grabens' south of San Carlo (Ferrara Province); 2) Sand eruptions (yellow spots) from wells, cracks and foundations inside the town of San Carlo; 3) eruption of sand in an open field; note the formation of small craters; and 4) eruption of sand in San Felice sul Panaro football field (Modena Province). | 2,762.2 | 2012-10-17T00:00:00.000 | [
"Geology",
"Environmental Science"
] |
Intelligent Industrial Process Control Systems
The widespread realization of Industry 4 [...].
Introduction
The widespread realization of Industry 4.0 forces continuous progress in all its embedded technologies. Intelligent manufacturing systems are modern systems of manufacturing that integrate the abilities of humans, machines, and processes to achieve the best possible outcome. In this context, control processes directly influence the behavior of industrial systems. They are supposed to operate in a safe, reliable, and precise way. In order to ensure this, several modern technologies are combined together in an integrated design, involving artificial intelligence.
This Special Issue, belonging to the journal section "Industrial Sensors", was dedicated to the newest interdisciplinary research in the area of intelligent industrial process control systems. A total of eight excellent research articles have been accepted and published, following a rigorous peer review process.
Summary of the Special Issue
The first paper [1] proposes a novel approach for the model checking of autonomous components within electric power systems specified by interpreted Petri nets. A formal specification enables the checking of some basic properties of the models, such as determinism or deadlock freedom, but also some behavioral user-defined properties. The requirements are written as temporal logic formulas and a rule-based logical model is used to support the verification process. The initial specification can then be formally verified, and any design errors can be identified at an early stage of electric power system development.
The second paper [2] introduced a data analysis and modelization method for the rolling mill process of manufacturing billets in steel plants. Based on a case study, two main problems were addressed: the data analysis of temperature sensors and current. The performed data analysis suggested necessary hardware modifications. The modelization phase provided the basis for future control and diagnosis applications that will exploit a temperature decay model. The third paper [3] aims to improve actuator wear using noise filtering. It evaluates and measures the impact of noise filtering on the loop performance and on the actuator weariness. Relationships between the noise filtering time constant, loop performance, and valve travel deliver recommendations for control engineers. Suggestions for filter design are given, showing how far an engineer can go with filtering without a heavy loss of loop performance.
The fourth paper [4] exploits a decentralized PI/PID controller based on frequency domain analysis for two input-two output coupled tank systems. The fundamentals of the gain margin and phase margin were used to design the proposed controller. The robustness of the controller was verified by considering multiplicative input and output uncertainties. According to the authors, the proposed control algorithm exhibits better servo and regulatory responses compared to other existing techniques.
The fifth paper [5] presents a robust nonlinear current mode control approach for a pulse-width modulated DC-DC Cuk converter in a simple analog form. The control scheme is developed based on the reduced-state sliding-mode current control technique.
The proposed controller does not require an output capacitor current sensor and double proportional-integral compensators as in conventional sliding-mode current controllers. Therefore, the cost and complexity of the practical implementation is minimized without degrading the control performance.
The sixth paper [6] proposes an optimization model of production change in a nonlinear supply chain system in emergencies. A three-level single-chain nonlinear supply chain system containing producers, distributors and retailers is established. The adaptive improved sliding mode controller is designed and used to construct an optimization model of the supply chain system under unexpected events. The effectiveness of the proposed method is verified using numerical simulation experiments.
The seventh paper [7] introduces an approach to minimize the maximum makespan of the integrated scheduling problem in flexible job shop environments, taking into account conflict-free routing problems. A hybrid genetic algorithm is developed for production scheduling and the optimal ranges of crossover and mutation probabilities are discussed. A case study of a real flexible job shop is used to present empirical evidence for the feasibility of the proposed approach.
The eighth paper [8] assesses industrial communication protocols to bridge the gap between machine tools and software monitoring. It presents an empirical study of three protocols: OPC-UA, Modbus, and Ethernet/IP. It aims to answer research questions about how these protocols differ in terms of performance and complexity of use from a software perspective, and how can their effectiveness be empirically evaluated and compared. | 1,001 | 2023-08-01T00:00:00.000 | [
"Computer Science"
] |
Nickel oxide interlayer fi lms from nickel formate – ethylenediamine precursor: in fl uence of annealing on thin fi lm properties and photovoltaic device performance † , 10949 – 10958 |
An organometallic ink based on the nickel formate – ethylenediamine (Ni(O 2 CH) 2 (en) 2 ) complex forms high performance NiO x thin fi lm hole transport layers (HTL) in organic photovoltaic (OPV) devices. Improved understanding of these HTLs functionality can be gained from temperature-dependent decomposition/ oxidation chemistries during fi lm formation and corresponding chemical structure-function relationships for energetics, charge selectivity, and transport in photovoltaic platforms. Investigations of as-cast fi lms annealed in air (at 150 (cid:2) C – 350 (cid:2) C), with and without subsequent O 2 -plasma treatment, were performed using thermogravimetric analysis, Fourier transform infrared spectroscopy, ultraviolet and X-ray photoelectron spectroscopy, and spectroscopic ellipsometry to elucidate the decomposition and oxidation of the complex to NiO x . Regardless of the anneal temperature, after exposure to O 2 -plasma, these HTLs exhibit work functions greater than the ionization potential of a prototype donor polymer poly( N -9 0 -heptadecanyl-2,7-carbazole- alt -5,5-(4 0 ,7 0 -di-2-thienyl-2 0 ,1 0 ,3 0 -benzothiadiazole) (PCDTBT), thereby meeting a primary requirement of energy level alignment. Thus, bulk-heterojunction (BHJ), OPV solar cells made on this series of NiO x HTLs all exhibit similar open circuit voltages ( V oc ). In contrast, the short circuit currents increase signi fi cantly from 1.7 to 11.2 mA cm (cid:3) 2 upon increasing the anneal temperature from 150 (cid:2) C to 250 (cid:2) C. Concomitantly, increased conductivity and electrical homogeneity of NiO x thin fi lms are observed at the nanoscale using conductive tip-AFM. Similar V oc observed for all the O 2 -plasma treated NiO x interlayers and variations to nanoscale conductivity suggest that the HTLs all form charge selective contacts and that their carrier extraction e ffi ciency is determined by the amount of precursor conversion to NiO x . The separation of these two properties: selectivity and conductivity, sheds further light on charge selective interlayer functionality.
Introduction
High efficiency bulk heterojunction (BHJ) organic photovoltaic (OPV) devices oen require contacts modied with hole or electron charge-transport interlayers in order to increase the charge carrier collection efficiency above that of the unmodied transparent conducting oxide or metal contact. 1,2 The efficiency of charge collection interlayers relies upon their thin lm conductivity, 3 work function (F w ), 4-7 alignments of interlayer valence and conduction band edges with the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) energies across the heterojunction, [8][9][10][11][12] as well as the degree of heterogeneity for the contact surface. 13 The combination of these mechanisms provide for selective charge collection in competition with bimolecular and surface recombination under low internal electric elds (i.e. near open-circuit conditions). [14][15][16] In particular, the importance of selectivity and contact-extraction efficiency becomes increasingly important in solution-processed photovoltaic platforms as free carrier mobilities of photo-active layers increase. Metal oxides formed with Mo, V, W, Ni or NiCo 6 have been shown to exhibit favorable attributes as hole transport layers (HTLs). Photovoltaic applications in organic-, [17][18][19][20][21][22] colloidal quantum structure- 23,24 and methyl ammonium lead halide-based platforms [25][26][27] utilize HTLs with high transmission at operational wavelengths and work function (F w ) values equal to or in excess of the donor IP value, although presumably due to different mechanisms for nand p-type oxides. The grand challenge in contact design remains to effectively create solution-processed deposition methods for high performing thin lm devices while still maintaining material and interfacial functionalities outlined above.
NiO x , is one of few p-type metal oxides that has transversed numerous energy relevant technologies such as catalysis, batteries, fuel cells and photovoltaics. Hence, it is of fundamental interest and several organometallic precursor formulations compatible with solution processing have been identied for thin lm formation. Examples of these are: nickel acetate tetrahydrate complexed with methanolamine (275 C); 28 nickel nitrate hexahydrate with monoethanolamine (500 C) 29 and nickel formate dihydrate with ethylenediamine (250 C). 30 Lowering the processing temperature required to convert these precursors to the oxide allows use of plastic substrates, which in general cannot tolerate prolonged processing above 150 C. 31 There is considerable literature precedent for the decomposition of nickel formate to form Ni and NiO. [32][33][34][35][36][37][38] Diamine complexation with nickel formate lowers the thermal requirement for decomposition and thus enables formation of NiO x at lower temperature. Solutions made with the complexed organometallic precursor in ethylene glycol and water allow fabrication of NiO x thin lms by spin coating the nickel formateethylenediamine-ethylene glycol-water (Ni(O 2 CH) 2 -en-egwater) ink followed by thermal annealing in air. Formation of NiO x by this method is unique as it produces conformal, high performance thin lms with few processing steps.
A detailed understanding of the interconnected decomposition chemistry with the material and interface functionality can drive metal oxide ink development beyond empirical approaches. For example, exposure to reactive oxygen during annealing may further reduce thermal post-treatments. Zhai et al. demonstrated this relaxation in processing conditions for the acetate precursor, below 150 C. 39 As a direct result of the lm growth and processing, NiO x interlayers strongly affect the OPV device performance. 28,30,40 Aer annealing in air and treating with an O 2 -plasma, NiO x outperforms a benchmark HTL of poly(ethylene dioxythiophene):poly(styrene sulfonate) (PEDOT:PSS) in prototypical OPV devices using the BHJ poly[N-9 0 -heptadecanyl-2,7-carbazole-alt-5,5-(4 0 ,7 0 -di-2-thienyl-2 0 ,1 0 ,3 0benzothiadiazole] (PCDTBT): [6,6]-phenyl-C71 butyric acid methyl ester (PC 70 BM). 20 When NiO x interlayers are included in OPV devices, the surface chemistry, band edge energies and mid-gap defect states determine the surface electrical properties and charge selectivity towards holes. Detailed spectroscopic analyses of these solution-deposited NiO x thin lms have shown that these are complex NiO x surfaces, with a wide range of possible oxide stoichiometries that inuence their optoelectronic properties, and their interactions with semiconductors such as those found in organic and hybrid photovoltaic platforms. 8,[25][26][27]41 Previous UPS and XPS measurements on these lms correlated surface hydroxyl species and their dipolar character with an increased band gap energy and improved band edge alignment with BHJ lms. 8,41 More specically, the NiO x surface formed from decomposition of these solution precursors is comprised predominantly of a mixture of NiO x , Ni(OH) 2 and NiOOH, as revealed by XPS characterization. 41 The dipolar character of this modied surface leads to a high F w and favorable energetic matching to the highest occupied molecular orbital (HOMO D ) hole-transport energy level of PCDTBT, while the wide band gap, and an apparent lack of mid-gap states, functions to block reverse electron transfer from the lowest unoccupied molecular orbital (LUMO A ) of the fullerene electron acceptor. 8,20,41 Furthermore, as these processing conditions for the NiO x interlayers led to variations in the measured local density of states observed in UPS, this resulted in higher hole selectivity and lower leakage currents in hole only devices. 41 Through improved charge selectivity and limiting carrier injection from the contact, these NiO x interlayers lower leakage current and increase shunt resistance in OPV devices. 14,42 However, systematic investigation of precursor decomposition in relation to device performance has yet to be addressed and hence, is the focus of this paper.
Here, we study the effects of varying the annealing temperature between 150 C and 350 C for thin lms spin coated from the Ni(O 2 CH) 2 -en-eg-water formulation. The effects of incomplete precursor decomposition are important to understanding their inuence on the interlayer optoelectronic properties and the ability to collect photocurrent in OPV devices. We observe changes to both the chemical and electronic properties of the resulting NiO x thin lms that correlate with large changes in short-circuit photocurrent (J sc ) and little to no changes in opencircuit photovoltage (V oc ) in PCDTBT:PC 70 BM OPV devices. Decomposition/oxidation reactions for the lms were investigated by thermal gravimetric analysis (TGA), differential scanning calorimetry (DSC), Fourier transform infrared absorption spectroscopy (FTIR) and X-ray photoelectron spectroscopy (XPS). By increasing the anneal temperature for thin lms spincoated from the Ni(O 2 CH) 2 -en-eg-water ink, from 150 C to above 250 C, and subjecting the lms to an O 2 -plasma, amorphous thin lms are formed with: (i) increased conductivity as measured by conductive AFM; (ii) increased surface oxygen content (O/Ni ratio revealed by XPS); (iii) increase of the NiO x band gap; (iv) high performance in OPV devices, as revealed by analysis of their series resistance and J sc . V oc is shown to be affected primarily by the surface oxidation chemistry of NiOOH even if the precursor decomposition/oxidation is incomplete, while losses observed in J sc depend primarily upon the nanoscale conductivity threshold reached upon decomposition of the Ni-formate-diamene complex. These studies decouple the underlying oxide formation from the surface effects by O 2plasma treating for photovoltaic device applications.
Ink preparation
Preparation of the Ni(O 2 CH) 2 -en-eg precursor formulation for NiO x lms has been reported earlier. 30 To summarize, nickel formate (1 g) was combined with ethylene glycol (10 ml) followed by ethylenediamine (0.87 ml). The mixture was heated and shaken multiple times, and then ltered at near ambient temperature through a 0.45 mm pore lter. The ink was a deep purple color, consistent with the violet color reported for Ni(O 2 CH) 2 (en) 2 . 43 For spin coating, the ink was diluted 1 : 1 by volume with water (nanopure).
Thin lms & devices
Patterned ITO substrates were rst solvent cleaned in acetone and isopropyl alcohol followed by an O 2 -plasma treatment. NiO x lms were deposited by spin coating the Ni(O 2 CH) 2 -en-eg-water ink at 4000 rpm onto the ITO substrates and immediately annealing the lms in air at 150-400 C for one hour. Aer annealing, all NiO x layers were exposed to O 2 -plasma treatment for 2 minutes at 155 W and 800 mTorr. The 1 : 4 ratio PCDTBT:PC 70 BM solution was prepared in 1,2-dichlorobenzene under an inert atmosphere at a total concentration of 35 mg ml À1 . The solution was stirred at 90 C for 8 hours before cooling to 60 C followed by immediate use, which is a variation on a previously reported procedure. 20 Spin coated active layers were deposited on top of the NiO x HTL lms at a spin rate of 2000 rpm for 120 seconds. The coated substrates were annealed at 70 C on a hot plate for one hour. Top electrodes composed of Ca/Al (20 nm/100 nm) were thermally evaporated using an Angstrom Engineering thermal evaporator with a base pressure below 1 Â 10 À7 Torr to produce 0.11 cm 2 devices. Films of NiO x were prepared on freshly O 2 -plasma cleaned Au substrates for AFM and C-AFM studies.
AFM & C-AFM
Scanning probe measurements employed an Asylum Research MFP-3D Atomic Force Microscope in conductive mode (c-AFM) using a Pt/Cr coated conductive tip (ElectricMulti75-G by Budget Sensors Inc.) with a radius less than 25 nm. Both height topography and c-AFM were obtained simultaneously. To obtain a good c-AFM signal, NiO x lms were deposited on top of Aucoated glass substrates rather than ITO, since ITO has nonuniform conductive regions 44 and thermally-induced electrical degradation at small length scales. 13 The Au substrate used for calibration had a highly uniform c-AFM prole at very small sample-to-tip bias (VST). All c-AFM measurements on NiO x lms were at ambient conditions with identical scan parameters such as scan speed and drive amplitude, using a VST of 40 mV.
Photoelectron spectroscopy
Experiments were performed on a Kratos Axis Ultra X-ray photoelectron spectrometer equipped with a monochromatic Al K-alpha X-ray source (hv 1486.6 eV) and a He UV source (hv 21.22 eV). Linear calibration of the binding energy scale for the detector was performed following the procedure outlined by M. P. Seah. 45 A bias of À10.00 V was applied to the sample during UPS experiments to spectrally separate the lowest kinetic energy electrons and secondary electrons from the local environment. An Ar sputter-etched, atomically-clean gold sample was measured before characterization of the NiO x samples to establish the Fermi edge of the spectrometer.
TGA & DSC
The ink (nickel formate-ethylenediamine-ethylene glycolwater) was placed in a Pt pan at 120 C to evaporate the bulk of the water and ethylene glycol solvent with minimal disruption of the nickel formate-ethylenediamine complex. This procedure was repeated twice to obtain an initial mass of 13.87 mg of the NiO x lm precursor (less ethylene glycol). The pan temperature was increased at 10 C min À1 under dry synthetic air (20% O 2 , 80% N 2 ) in a TA Instruments SDT Q600 operated in TGA/ DSC mode.
FTIR
Transmission spectra were measured using a Thermo-Nicolet 6700 FTIR. A liquid N 2 -cooled mercury-cadmium-telluride (MCT) detector and a KBr beamsplitter were used. Scans were collected to provide data with a resolution of 2 cm À1 . For each measurement 100 scans were averaged for both the sample and the background. Absorbance spectra were calculated from the sample and background using the Beer-Lambert equation.
Ellipsometry
Thin lm optical properties and thicknesses (9.5 AE 0.5 nm) were characterized using an M-2000 spectroscopic ellipsometer (J.A. Woolam Co Inc.) at wavelengths of 250-1000 nm and angle of 65-75 degrees. Spectroscopic ellipsometry data were processed with the aid of WVASE soware. The NiO x complex refractive index constants (N ¼ n À ik) were obtained using a Lorentz parameterized model, which is consistent with the Kramers-Kronig relations. Thicknesses were veried using a Dektak 8 stylus prolometer.
Decomposition and oxidation of Ni(O 2 CH) 2 (en) 2
Decomposition and oxidation processes for the metal organic precursor depends on the nickel complex formed in solution and spin cast into lms. We follow the conversion to NiO x with TGA and DSC shown in Fig. 1. Previous literature isothermal studies have reported the complete decomposition/oxidation of Ni(O 2 CH) 2 $2H 2 O (at 240 C-280 C in air) 38 and Ni(O 2 CH) 2 (at 215-250 C in oxygen and 242-262 C in air). 35,36 For example, Ni(O 2 CH) 2 $2H 2 O was converted to NiO in less than an hour at 240 C-280 C in air. 38 This is consistent with the expectation that the lms annealed in this study at 250 C or 300 C in air should be comprised of NiO x , which is also supported by the decomposition/oxidation temperature for the Ni(O 2 CH) 2 (en) 2 complex observed by TGA/DSC in Fig. 1. Narain reported the synthesis of Ni(O 2 CH) 2 (en) 2 , which should be robust at 120 C without any loss of the ethylenediamine ligand. 43 Therefore, the DSC/TGA measurements (Fig. 1) were performed in a Pt pan on an ink sample aer two rounds of evaporation of the bulk of the ethylene glycol and water solvents at 120 C, leaving a sufficient mass of primarily the Ni(O 2 CH) 2 (en) 2 complex. On heating in synthetic air (80% N 2 , 20% O 2 ), a small endothermic peak is evident near 125 C, with a corresponding mass loss of ca. 5%, which is interpreted as a loss of residual solvent (water, ethylene glycol, excess ethylenediamine). Between 180-240 C, an additional 77% of the initial mass (81% of mass at 150 C) is lost in an endothermic process, which is comparable to the expected 72% mass change for conversion of Ni(O 2 CH) 2 (en) 2 to NiO. Mass loss is due to a combination of evaporation of residual solvent (likely ethylene glycol, boiling point 197 C), loss of the ethylenediamine ligands, which have been shown to leave stepwise, [46][47][48] 49 and Ni(en) 3 SO 4 (ref. 50) reported temperatures for the nal stage decomposition/oxidation to NiO of 300 C, 325 C, 410 C, and 466 C respectively. On further heating, there is some additional mass lost (ca. 2% of the initial mass) until 375 C, when an exothermic transition is seen and correlates with a gradual mass increase. Hence, the Ni(O 2 CH) 2 (en) 2 complex is to date, preferred for lower temperature formation of solution-deposited NiO x but photovoltaic applications require subsequent surface treatments to increase the work-function. 7,30
Chemical composition characterization
Surface elemental composition determined from XPS measurements conrmed predominant NiO x composition as well as the presence of C and N due to the O 2 -plasma treatment and ambient exposure prior to measurement (see Table 1). O 2 -plasma is known to remove adventitious organic compounds. However, the N content in the lms likely originates from the ethylenediamine, which decreases as the anneal temperature increases from 150 C to 250 C. Yet, aer correcting for the C 1s binding energy (BE) the BE values for N 1s peak centroids are located at 406.6 eV (see S1 †) and are too high to be explained by the presence of unreacted amine groups. A more likely assignment consistent with the O 2plasma treatment is one or more forms of near surface N-O species such as -NO 3 , which can be identied using vibrational spectroscopy.
Exposure of the NiO x thin lms to O 2 -plasma predominantly affects the exposed surface creating similar structures for all the lms while providing less effect on the subsurface material. We analyzed FTIR spectra taken for NiO x thin lms to understand the O 2 -plasma effects for the whole system. FTIR spectra are shown in Fig. 2 for lms spin-coated from the Ni(O 2 CH) 2 -en-eg-water ink, comparing the as-deposited lm ('no anneal') to lms annealed for one hour in air at 150 C, 200 C, 250 C or 300 C. Band assignments for chemical constituents of the lm precursor are listed in Table 2. The major band assignments reported in the literature for the fundamental vibrations of the formate group in nickel formate (dihydrate) are the n 1 n(CH) mode at ca. 2900 cm À1 , the intense n 4 n as (COO) mode at ca. 1570 cm À1 , and asymmetric deformation (n 5 d(C-H)) and symmetric stretch (n 2 n s (COO)) modes between 1400-1350 cm À1 . 34,51-53 For liquid ethylene glycol, the major band assignments reported in the literature are the strong n(OH) stretching mode at 3400-3150 cm À1 , the strong asymmetric (n as (CH)) and symmetric (n s (CH)) stretch modes at 2935 cm À1 and 2875 cm À1 respectively, the strong d(CH 2 ) mode at ca. 1450 cm À1 , and the very strong n(CO) and n(CC) modes at 1100-1050 cm À1 . 54 (NH)) and symmetric (n s (NH)) modes at 3350-3150 cm À1 , the strong d(NH 2 ) mode at 1613 cm À1 , the u(NH 2 ) mode at 1318 cm À1 , and the strong n(CN) mode at 1025 cm À1 . These ethylenediamine ligand band assignments for Ni complexes are consistent with values reported for other transition metal organometallic complexes [56][57][58] and also liquid ethylenediamine. 59 The bands at ca. 1020-1030 cm À1 (n(CN) and n(CO)) and ca. 3200-3350 cm À1 (n(OH)) indicate ethylene glycol and/or ethylenediamine, 54,55,57,59,60 and are discernible only in the no-anneal lm and the lm annealed at 150 C as shown in Fig. 2a. The FTIR spectra indicate that ethylene glycol and ethylenediamine are virtually eliminated by a one hour anneal in air at 200 C. Bands at 1627 cm À1 (d(NH 2 )), and 1338 cm À1 (n 5 d(C-H)) and (n 2 n s (COO)) and 1587 cm À1 (n 4 n as (COO)), also seen in Fig. 2a, indicate ethylenediamine and formate respectively. 34,[51][52][53]57,59,60 The formate bands are present in the FTIR spectra for the as spun lm and the lms annealed at 150 C or 200 C without an O 2 -plasma treatment. A one hour anneal in air at 250 C and 300 C eliminates the formate from the lms resulting in a near featureless spectra consistent with NiO x except for broad bands at ca. 3570 cm À1 that are surface hydroxyls. [61][62][63] This result is consistent with the TGA/DSC data described above.
A comparison of the impact of the O 2 -plasma treatment, typically used for NiO HTLs, is also included in Fig. 2a and b for lms annealed for one hour in air at 150 C, 200 C, 250 C or 300 C. The intensities of all the ethylenediamine, ethylene glycol, and formate bands were lowered signicantly aer treatment with O 2 -plasma, consistent with an O 2 -plasma being highly efficient at removing organic compounds from materials and surfaces. Aer O 2 -plasma treatment, two new bands emerge in the FTIR spectra located at 2190 cm À1 and 2340 cm À1 . The 2190 cm À1 band is present aer O 2 -plasma treatment in the lm annealed at 150 C (i.e., before the ethylenediamine is eliminated), and is very weak in the lm annealed at 200 C. The 2340 cm À1 band is present in the lms annealed at 150 C, 200 C and 250 C aer O 2 -plasma treatment, although the intensity decreases signicantly with increasing temperature consistent with greater conversion decomposition/oxidation of the precursor to NiO x . Given the oxidizing environment in the O 2plasma and the presence of C and N in the partially decomposed/oxidized lms annealed at 150 C and 200 C, the 2190 cm À1 band is tentatively assigned to the n(C-N) modes for oxygen-bonded cyanate (OCN) groups or nitrogen-bonded isocyanate (NCO) groups to Ni 2+ : rather than the stretch modes of C]N in a carbon nitride lm. 64 For example, n(CN) modes have been reported at ca. 2200 cm À1 for the nickel isocyanate complex [Et 4 N] 2 [Ni(NCO) 4 ], 65 CNO À intercalated in a-Ni(OH) 2 (ref. 66) and a theoretical study of the adsorption of cyanate and isocyanate on a Ni(100) surface; 67 and at 2262 cm À1 and 2200 cm À1 for Ni(NCO) 2 $H 2 O, 68 In contrast, a theoretical study of the adsorption of cyanide on a Ni(100) surface reported the n(CN) mode at only ca. 2000 cm À1 , 69 and experimentally the n(CN) mode for Ni(CN) 2 $2H 2 O was reported at 2172 cm À1 . 70 The 2340 cm À1 band is tentatively assigned to the n 3 n as (CO 2 ) mode of CO 2 trapped in the lms annealed at 150 C-200 C aer O 2 -plasma treatment of the partially decomposed/oxidized Ni(O 2 CH) 2 (en) 2 complex. Similar IR bands have been reported for free CO 2 trapped during the thermal decomposition in air of hexahydrated nickel iron citrate to form ultrane NiFe 2 O 4 particles (2320 cm À1 ), 71 propanol/TaCl 5 gel to form Ta 2 O 5 thin lms (at 2345 cm À1 and 2333 cm À1 ), 72 and zinc acetate dihydrate/sodium hydrogen carbonate mixtures in argon to form ZnO nanoparticles (at ca. 2340 cm À1 ). 73 Both of these modes appear to be eliminated aer O 2 -plasma treatment for the lm annealed at 300 C since decomposition/oxidation of the precursor to NiO x is complete. However, as described above a small percentage of N is still observed in the XPS spectra with high BE values for [74][75][76] This is also the region where medium strength formate and ethylenediamine bands are anticipated. Conrmation of nitrate cannot be provided by the FTIR spectra aer 150 C anneal and O 2 -plasma treatment. However, the peak position of the ethylenediamine u(NH 2 ) mode shis from 1338 cm À1 to 1358 cm À1 for the as-deposited, and 150 C anneal plus O 2 -plasma respectively, and may indicate possible spectral contribution from -NO 3 ions (see S2 †). Aer the 200 C anneal and O 2 -plasma treatment, the 1300-1400 cm À1 region is nearly featureless. FTIR analysis suggests the trapping of CO 2 and the formation of N-based anions in the lms annealed at the lower temperatures and aer an O 2 -plasma treatment. The connement of CO 2 in solid-state NiO x lms implies that a dense surface barrier forms during the O 2 -plasma treatment. The cyanate species assigned in the FTIR spectra were not identied in the more surface-sensitive XPS measurements. Moreover, nitrates observed by XPS could not be unambiguously identied with FTIR. These complementary surface and through-lm measurements lead to the tentative hypothesis that the low concentration of nitrates are most likely surface conned.
Characterization using spectroscopic ellipsometry
In order to study the effect of annealing on the optical properties we employed angular dependent UV-vis reection using spectroscopic ellipsometry. Fig. 3 displays the square of absorbance versus incident photon energy. Absorbance was calculated using the extinction coefficient from Lorentz oscillator model ts to the ellipsometric data. The onset of absorption was extrapolated aer a linear t to the square of the absorption coefficient between 4.1 and 4.4 eV. 77 For the lowest temperature anneal at 150 C the NiO x absorbance is low and barely resembles the absorption edge of a semiconductor, which is consistent with the minimal decomposition/oxidation of the Ni(O 2 CH) 2 (en) 2 complex at this temperature. Nevertheless, tting the onset region results in an estimate for the optical gap of 3.4 eV. For the lm annealed at 200 C, the absorbance increases as the precursor has partially decomposed/oxidized resulting in a 3.8 eV estimate of the optical band gap. For lms annealed at 250 C and 300 C, because the decomposition of the Ni(O 2 CH) 2 (en) 2 complex to NiO x is well advanced the absorption edge is more clearly dened and results in an optical gap estimate of 3.9 eV, which is in good agreement with the accepted band gap of 4.0 eV for NiO. 78,79
OPV device performance
To investigate the impact of the NiO x lm composition as a function of annealing temperature, on OPV performance we utilized these lms as HTLs, annealed at different temperatures on ITO and integrated them into ITO/NiO x /PCDTBT:PC 70 BM/ Ca/Al OPV devices. Current-voltage (JV) measurements were performed under one-sun illumination. The data from these devices are shown in Fig. 4a with calculated performance metrics in Tables 3 and 4.
Power conversion efficiencies (PCE) are shown normalized in Fig. 4b and increased from 0.5% to 5.7% with increasing anneal temperature between 150 C and 250 C. These PCE values trend directly with short-circuit current density and as a function of thermal annealing temperature. Likewise, the PCE and J sc inversely trend with R s as a function of annealing temperature. For the lowest annealing temperatures (150 C and 200 C) the devices suffer from large resistive losses, poor current extraction and low ll factors. At 250 C and above the R s drops substantially and the device performance improves with gains in J sc and FF. This drop in series resistance within the device is commensurate with the decomposition/oxidation of the NiO x layer.
It is important to note that the open-circuit voltages do not appear to trend with annealing temperature. Ultraviolet photoelectron spectroscopy (UPS) measurements (see S3 †) for these solution-deposited NiO x lms aer annealing between 150 C and 300 C all produce lms with very similar work function values that range from 5.4 to 5.5 eV and IP values of 5.7-5.8 AE 0.1 eV, in agreement with earlier reports. 15 This is consistent with the relatively uniform V oc found across the devices when one considers work-function and the interface electronic structure of the contact determining factors of V oc . For the devices annealed at 250 C and above there is very little statistically signicant difference in the device data. As shown in Table 3, there is a modest increase just above the statistical noise from 250 C to 300 C. It is clear that the compositional changes from annealing the NiO x HTL to 250 C signicantly alter the electronic properties and result in enhanced holecollection from the BHJ.
Conductive tip-AFM
To examine the charge transport across the NiO x lms processed at different temperatures and to better understand holecollecting efficiency, samples were prepared (in an identical way to the lms used in devices) on Au as opposed to ITO substrates. Gold was used to eliminate underlying effects of electrical heterogeneity of the ITO, compared to the uniform and stable background provided on the gold surface. The nanoscale topography and electrical transport of these lms were measured with conductive tip-AFM using a Pt/Ir tip held at the same tip bias for all lms. For these experiments the tip forms the top contact on the Au-NiO x -Pt/Ir tip junction. Fig. 5 shows AFM and c-AFM data for ve different anneal temperatures between 150 C and 350 C. The white pixels in the bottom row of Fig. 5 indicate a good electrical transport or an Ohmic junction, while dark pixels indicate poor electrical conductivity or diode like behavior. Since all the NiO x lms have similar work functions as seen in Table 1, we can assume minimal changes in tip-surface injection/extraction barriers. Hence, we conclude that the measured c-AFM data is indicative of changes in the conductivity of the NiO x . Conformal, thin-lm coatings were observed for each annealing condition on the Au substrates. Moreover, XRD analysis of a series of NiO x thin-lms heated from 150 C to 350 C showed no sign of diffraction peaks. The 150 C anneal resulted in an undulating surface topology (1.46 nm RMS) and a highly insulating lm with no current transfer between tip and the Au ground. In contrast, the lm annealed at 200 C has a much atter surface morphology (0.46 nm RMS) and a small but measurable current, which indicates improved charge transfer with <10% area being electrically active. As the annealing temperature is increased from 200 C to 350 C, increases in the NiO x lm roughness and electrical property are observed. Aer the 250 C anneal, the lm roughness is 0.65 nm RMS and conductive regions occupy most of the surface. The 300 C anneal results in a NiO x lm with larger protrusions (1.35 nm RMS) and a sharp increase in the measured current that saturated the 20 nA c-AFM detection limit. The NiO x lms resulting from a 350 C anneal exhibits slightly lower conductivity compared to the 300 C lm, and a further increase in particle/grain size. The improved through-lm conductivity, as assessed by the area of the saturated pixels, correlate well with lowering of the series resistance and improved current collection for the OPV devices. This suggests that high series resistance for the lower anneal temperatures is a result of the poor conductivity of the HTL, which is a direct result of incomplete precursor decomposition/oxidation.
The large volume fraction of incompletely decomposed precursor results in resistive properties at the nanoscale and macroscale. However, separating improvements to carrier concentration and mobility remains elusive. One would expect 28 These studies show increased series resistance as a direct result of incomplete decomposition of the NiO x precursor complex and how these resistive HTL lms lead to lower J sc in devices with concomitant performance loss. However, for interlayers where conversion to NiO x is performed below the decomposition temperature and is incomplete, V oc remains at high values. The surface chemistry and work function of the NiO x interlayer determines the V oc in OPV devices whether or not the organometallic precursor has fully decomposed to form NiO x . In comparison, non-selective selfassembled molecular interlayer contacts provide paths to electron transfer from the BHJ LUMO levels and reduce quasi-E F splitting, which lead to lower V oc . 7 From the results presented here, we conclude that the nanoscale electrical changes as a function of converted precursor observed seem to strongly affect the ability of charge selective NiO x interlayers to extract holes from the adjacent BHJ and transport those to the external circuit. Recently, the surface polarity of NiO x interlayers was investigated and shown to dominate the interface properties when compared to the interlayer surface roughness and crystal structure. 40 Hence, post treatment and formation of a dipolar surface with low defects is related to the increased polar component of the total surface energy. Data presented here shows that surface composition for these lms are similar. However, differences in their nanoscale conductivity do not strongly affect the V oc . Hence, surface recombination velocity is not signicantly enhanced as quasi-E F splitting seems nominally equivalent for these devices at V oc , which is consistent with steady-state and transient photocurrent studies on similar systems. 42 We hypothesize that the majority of J sc loss observed as the processing temperature is lowered below the precursor decomposition threshold proceeds via recombination in the BHJ and is not mediated by NiO x surface states. If this postulate holds, then charge selectivity and efficient carrier transport are functionally separate and proceed by different mechanisms for this particular active layer. Moreover, these properties are also spatially separate as the selectivity is determined by the surface composition and local density of states that provide a low defect interface and low surface recombination while the interlayer subsurface enables charge delocalization and carrier transport to the transparent electrode. Implications for the separation of selectivity and transport mechanisms could result in designs for bilayer selective contacts and indeed examples exist in literature. 80 This can also help to decouple surface and subsurface effects of decomposition temperature, organometallic precursor formulations and subsequent surface modications for efficient interlayer contacts in photovoltaic technologies. However in more demanding photovoltaic systems with higher carrier mobilities and photogenerated charge densities, it may be necessary to increase the NiO x thickness in order to effectively passivate high carrier density electrodes such as TCOs and metals. | 7,915.6 | 2015-05-12T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Investigating EFL Students’ and Instructors’ Perceptions of Dictionary Usage in Writing Assessment
In this mixed-methods study we investigated the attitudes of 32 students and 34 teachers from a Saudi Arabian university toward dictionary use in writing assessments in EFL settings. We aimed to discern their views on the role of dictionaries in writing assessments and overall language proficiency. We asked both students and instructors to answer a 4-point Likert-style questionnaire to investigate their perceptions about using dictionaries in writing assessments. We interviewed participants later for further investigation. After analyzing the data both quantitatively and qualitatively, we found that students generally perceived dictionary use as being beneficial by enhancing vocabulary acquisition, improving writing performance and accuracy, and fostering positive attitudes toward writing without significantly affecting comprehension or focus on content. In contrast, teachers were skeptical, doubting dictionaries’ contribution to vocabulary development, writing quality, and accuracy. They also raised concerns about dictionaries’ not promoting positive writing attitudes or independent learning and potentially slowing down the writing process due to ineffective usage. Our study highlights a notable discrepancy between students’ positive perceptions and teachers’ reservations about dictionary use in language assessments. It suggests a need for further research to understand dictionaries’ impact on language learning and assessment outcomes, acknowledging the limitations of the study and the need for broader exploration in diverse educational contexts to resolve these differing views.
Research Questions
The purpose of this mixed-methods study was to answer the following questions: (1) What are students' attitudes and perceptions regarding using dictionaries in writing assessments?
(2) How do instructors view the influence of dictionary usage on students' writing assessments and overall language proficiency?Answering these questions can provide important insights into how the two most important stakeholders in EFL settings-teachers and students-perceive dictionary usage, which, in turn, can provide instructional guidance to teachers.
Literature Review
EFL literature has a number of focuses when it comes to discussing dictionary use.One positive interpretation is that dictionaries help students acquire new vocabulary, improve spelling, and develop their writing quality (Abbasi et al., 2019;Alhaisoni, 2020;Boonmoh, 2021;El-Sawy, 2019;Pyo, 2020;Zhang et al., 2021).This interpretation is generally positive in its assessment of how dictionaries are useful to EFL students.However, there is a counterpoint that dictionary use can lead students to focus too much on individual words rather than on the entire written product or subcomponents thereof, such as paragraphs; to eschew critical thinking by relying on the dictionary; and to fail to deliver valid assessments, as assessments include the correct, dictionary-free use of spelling and vocabulary (Abbasi et al., 2019;Al-Khresheh & Al-Ruwaili, 2020;Alhaisoni, 2020;Alhatmi, 2019;Altakhaineh & Shahzad, 2020;Arfae, 2020;El-Sawy, 2019;Lin & Lin, 2019;Lin, 2019;Pyo, 2020).
One of the gaps in the literature is a direct comparative assessment of teacher and student perceptions of the usefulness of dictionaries.Obtaining this information from teachers and students at a single institution can show how far apart instructors and learners might be on the topic of using dictionaries in the classroom.Such insights can help teachers assess and calibrate how the students and their peers feel and can change their outlooks accordingly.Given that writing is the most difficult component of EFL and ESL (Ahmed, 2019;Hamidnia et al., 2020;Karim & Nassaji, 2020;Kuyyogsuy, 2019;Liu & Wu, 2019;López-Serrano et al., 2019;Ma, 2020;Ngui et al., 2020;Tsuroyya, 2020), such findings would be significant for practice.
Participants
The student participants were 32 undergraduates studying English as a major in a public university in Riyadh, Saudi Arabia.Participants were enrolled into two sections, and no specific criteria or tests were used to place participants into their corresponding sections, as they were allowed to register randomly in any section until the maximum number of 20 students was reached.After that, the two sections were randomly assigned to the instructor/researcher by the English Language Department to teach them a writing course.Regarding their level of proficiency, all participants had to score 4.5 or higher in the International English Language Testing System (IELTS) to join the English language program, which is an intermediate level of English, or equivalent to B1 in the Common European Framework of References (CEFR).They had taken two courses in essay writing previously, and this was their third course at the time the study was conducted.All participants were females, in their early 20s as undergraduates.
As for the participants in the instructors' questionnaire, 34 instructors were teaching writing courses at the time of the study.All were from the same public university in Riyadh, Saudi Arabia, and they were non-native speakers of English.They were both male and female.
Procedure
The students were allowed to use hard-copy dictionaries in their writing assessments, as online dictionaries were not allowed, according to department restrictions.Afterward, they were asked to answer a 4-point Likert-style questionnaire to investigate their perceptions about using dictionaries in writing assessments.We interviewed them later to explore the types and purposes of usage and whether their usage was affected by their performance in the assessment.The interview was conducted through open-ended questions delivered via Google Forms.The sessions of their writing assessments were one and two hours.They were asked to write a five-paragraph essay in each assessment.
The instructors were also asked to answer a 4-point Likert-style questionnaire to investigate their perceptions about using dictionaries in writing assessments.We interviewed the instructors later to further investigate their beliefs about the research questions, using open-ended questions that were delivered through Google Forms.
Instruments
We administered a 4-point Likert-scale questionnaire to examine students' and instructors' evaluation of the usage of dictionaries in writing assessments.According to Dörnyei & Taguchi (2009), this questionnaire design could encourage participants to express their opinions.The study questionnaires for students and instructors were mainly adapted from Alhatmi (2019), with adaptations informed by the literature.The students' questionnaire consisted of 14 items.Using Cronbach's alpha, the reliability of the questionnaire was found to be 0.81, which is sufficiently high for research purposes (Natrella, 2013).
The instructors' questionnaire consisted of 16 items.Using Cronbach's alpha, we found the reliability of the questionnaire to be 0.79, which is sufficiently high for research purposes (Natrella, 2013).As for the interview, we asked students to answer two questions: (1) How has permitting dictionary use positively impacted your writing performance and overall learning experience during writing assessments?(2) Share your opinions on the challenges and difficulties that you have faced while using dictionaries in writing assessments and how they have affected the assessment process and language learning.
We also asked instructors to answer two questions: (1) From your experience, how has permitting dictionary use positively impacted students' writing and their overall learning experience during assessments?(2) Can you share your insights on the potential challenges and drawbacks associated with allowing dictionaries in writing assessments and how they may affect the assessment process and student learning?Both the questionnaires and interview questions were provided to participants via Google Forms.
To validate the questionnaires, we cross-checked the two faculty members holding PhDs in applied linguistics and having experience in writing instruction.We made modifications based on their comments alleging lack of clarity or redundancy.We conducted a pilot test afterward on a small sample of participants (students and instructors) using convenience sampling.All participants completed the questionnaires without reporting any issues.
We adhered to ethical considerations, with all participants receiving information on the study and providing informed consent.Additionally, we implemented measures to ensure confidentiality of the collected data.
RQ1
RQ1 read: What are students' attitudes and perceptions regarding using dictionaries in writing assessments?RQ1 was answered using (a) descriptive statistics for survey items, (b) a one-sample t test to identify the survey items with which students agreed and disagreed most, and (c) an integrated discussion.
Here are the items with which students agreed: (1) I believe that using dictionaries in writing assessments is helpful.
(2) Using dictionaries in writing assessments enhances my vocabulary acquisition.
(3) Dictionary use leads to improved performance in writing assessments and better writing pieces.
(4) Dictionary use increases the accuracy of my written work.
(5) Using dictionaries in assessments helps me to have a positive attitude toward writing.
Here are the items with which students disagreed: (1) Using dictionaries in writing assessments can lead to a loss of comprehension as students may rely on dictionary definitions instead of critical thinking.
(2) Allowing dictionaries may encourage students to focus on isolated words rather than the overall content and structure of their writing.
(3) Students might engage in insufficient processing of dictionary information, leading to surface-level understanding.
(4) Allowing dictionaries may raise questions about the validity of the assessment, as it is meant to measure students' abilities without external aid.
Therefore, students had a highly positive assessment of dictionary use.Allowing dictionaries may encourage students to focus on isolated words rather than the overall content and structure of their writing.
-1.72 < .05 Students might engage in insufficient processing of dictionary information, leading to surface-level understanding.
-2.27 < .05 Allowing dictionaries may raise questions about the validity of the assessment, as it is meant to measure students' abilities without external aid.
RQ2
The second research question-How do instructors view the influence of dictionary usage on students' writing assessment and overall language proficiency?(RQ2)-was answered in four steps.First, we presented descriptive statistics for survey items relevant to RQ2.Second, we conducted a one-sample t test to identify the RQ2 survey items with which teachers agreed and disagreed most.Third, we aggregated the qualitative survey items for RQ2 into themes.Fourth, we interpreted and discussed the integrated findings for RQ2.
The following prompts show where teachers disagreed through a one-sample t test: (1) Allowing dictionaries in writing assessments enhances students' vocabulary acquisition.
(2) Dictionary use leads to improved writing assessments and more polished writing pieces.
(3) Dictionary use increases the accuracy of students' written work.
(4) Allowing dictionaries in assessments contributes to learners' positive attitudes toward writing.
(5) Dictionaries support learners' training and foster independent learning.
(6) Dictionaries aid students in understanding words in context, improving their comprehension.
(7) I believe that allowing dictionaries in writing assessments is helpful for the students.
And here is the item with which teachers agreed: (1) Students may face difficulties in using dictionaries effectively, which can slow down the writing process.
Our integrated analysis of these findings is that teachers did not believe dictionaries to be helpful for students, for the multitude of reasons listed above.The qualitative findings triangulated these insights in the sub-theme entitled Challenges in Using Dictionaries.Allowing dictionaries in writing assessments can lead to a loss of comprehension as students may rely on dictionary definitions instead of critical thinking.
-1.15 .134 Students may misinterpret dictionary entries, leading to errors in their written work.
-1.00 .162 Excessive use of dictionaries during assessments can hinder the fluency and coherence of students' writing.
.115
Students may face difficulties in using dictionaries effectively, which can slow down the writing process.
< .05
Allowing dictionaries may encourage students to focus on isolated words rather than the overall content and structure of their writing.0.22 .416 Students might engage in insufficient processing of dictionary information, leading to surface-level understanding.
-1.00 .162 Allowing dictionaries may raise questions about the validity of the assessment, as it is meant to measure students' abilities without external aids.
-1.29 .102 I believe that allowing dictionaries in writing assessments is helpful for the students.
-3.74 < .05"I believe students should be allowed to use monolingual English dictionaries only while writing their practice essays." "I constantly emphasize the use the electronic dictionaries in my classes.The use of dictionaries has improved students' writing skills." "If provided guided use or dictionary training, students can make good use of it."
Challenges in Using Dictionaries
"It is also worth mentioning that using dictionaries is a skill in its own right that needs extensive training." "Most students are not able to use dictionaries effectively." "One of the challenges is that dictionaries might distract students and slow down their writing as they spend a lot of time searching for words." "Most students don't know how to use the dictionary therefore they will spend a long time to find a single word . .." "no focus on overall coherence and format" Impact on Assessment and Learning "I think that permitting dictionary use in writing may affect the validity of the assessment because students' writing would not reflect their actual language proficiency level." "One of the drawbacks is that it may lead to inaccurate assessment, also it may make the writing test less challenging to students so they will not prepare well by revising the most important phrases and vocabulary." "Writing is mostly about ideas and support, and I don't see how dictionaries can actually help." "I didn't use it before but maybe in the future when it is allowed." Influence on the Writing Process "I allow them to access their phones.I found that better than dictionaries as they will be able to find the words in context.""It does confuse the students during the assessment period as well as waste their time hovering from one entry to another.""It will just waste their time looking up words rather than thinking about the ideas and information they should write.""I think it will be distractive.Students will rely on it which might lead to a decrease in their interest in building their vocabulary.""It will slow the writing process in general as students spend a lot of time finding out words and their meanings."
Alternatives and Suggestions
"Another way of making dictionaries effective is to have teachers provide a list of words with their meanings attached to an assignment/assessment.""I encourage using them during the process of writing.I especially encourage students to use concordancers [sic] as I believe they give students a better understanding of the usage of words in different contexts.""I would just suggest giving instructions on how and when to use dictionaries in writing classes.""Students of an average language level may benefit from dictionaries throughout the writing process.It will help them to develop on the grammatical and lexical level but rarely on the stylistic and pragmatic level."
RQ3
The third research question was as follows: "To what extent are there differences in how instructors perceive the students' use of dictionaries compared to the students' perceptions?"This question was simple to answer, as the obvious difference was that students saw dictionary usage positively, whereas teachers saw it negatively.
Students Believe
Teachers Believe (1) Using dictionaries in writing assessments is helpful.
(2) Using dictionaries in writing assessments enhances vocabulary acquisition.
(3) Dictionary use leads to improved performance in writing assessments and better writing pieces.
(4) Dictionary use increases the accuracy of written work.
(5) Using dictionaries in assessments helps to foster positive attitudes toward writing.
(6) Using dictionaries in writing assessments does not lead to a loss of comprehension resulting from relying on dictionary definitions instead of critical thinking.
(7) Allowing dictionaries does not encourage students to focus on isolated words rather than the overall content and structure of their writing.
(8) Students do not engage in insufficient processing of dictionary information, leading to surface-level understanding.
(9) Allowing dictionaries does not raise questions about the validity of the assessment.
(1) Allowing dictionaries in writing assessments does not enhance students' vocabulary acquisition.
(2) Dictionary use does not lead to improved writing assessments and more polished writing pieces.
(3) Dictionary use does not increase the accuracy of students' written work.
(4) Allowing dictionaries in assessments does not contribute to positive learners' attitudes toward writing.
(5) Dictionaries do not support learners' training and foster independent learning.
(6) Dictionaries do not aid students in understanding words in context or improving their comprehension.
(7) Allowing dictionaries in writing assessments is not helpful for the students.
(8) Students may face difficulties in using dictionaries effectively, which can slow down the writing process.
Discussion
The literature has given some indications that EFL students favor dictionary use (Fogal, 2022;Issa et al., 2022;Kendall & Khuon, 2023;Saha, 2022;Staples et al., 2023;Stojanovska-Ilievska, 2023;Xu et al., 2023;Zhang & Zou, 2023), while teachers are less sanguine (Ahmed, 2019;Brockman, 2020;Cao et al., 2019;Graham, 2019;Iswandari & Jiang, 2020;Rassaei, 2021;Selvaraj & Aziz, 2019;Strobl et al., 2019;Tsuroyya, 2020).However, these findings are from different contexts.The contribution of our study was to compare teachers and students from the same educational environment.One main implication of the finding that students are much more positively inclined toward dictionary use is that teachers need to work harder to explain why dictionaries should not be used (on assessed papers) and perhaps provide alternative aids.On the other hand, teachers can also leverage the positive orientation of students to integrate dictionary use more thoroughly into assessment, knowing that doing so might positively motivate students.
Our individual findings were as follows.For RQ1, students were found to believe that: (a) using dictionaries in writing assessments is helpful; (b) using dictionaries in writing assessments enhances vocabulary acquisition; (c) dictionary use leads to improved performance in writing assessments and better writing pieces; (d) dictionary use increases the accuracy of written work; (e) using dictionaries in assessments helps foster positive attitudes toward writing; (f) using dictionaries in writing assessments does not lead to a loss of comprehension resulting from relying on dictionary definitions instead of on critical thinking; (g) allowing dictionaries does not encourage students to focus on isolated words rather than on the overall content and structure of their writing; (h) students do not engage in insufficient processing of dictionary information, leading to surface-level understanding; and (i) allowing dictionaries does not raise questions about the validity of the assessment.
The findings for RQ1 can be examined in light of what the existing literature reveals about differences between students and teachers in their assessments of the value of dictionaries.There is a branch of the literature in which the main finding is that teachers do not feel positively about dictionary use, which they consider to be either actively maladaptive or at best not helpful for the task of language learning.In this context, the gap between students and teachers is not surprising.As Alhaisoni (2020) found, Arabic-speaking students in a Saudi setting tended to use dictionaries solely to look up completely unknown words, which, naturally, they found helpful; however, that study also noted that dictionaries could be used in many other ways, and target language (L2/English) dictionaries possibly represent the most academically promising means of using dictionaries.In Alhaisoni's study, the main way in which Saudi students used dictionaries was to look up Arabic equivalents of English words, and, even when doing so, they often stopped after the first definition instead of reading through all definitions and trying to determine, through context, what the right definition might be.Overall, that study described ways in which dictionaries use substitutes for, rather than augment, true language learning.The same general point was made by Liu and Wu (2019), who discovered that dictionary use was, according to teachers, among the least important factors in the development of writing skills, which they found was better served by interactive feedback from peers and teachers.The discrepancy between teachers and students on this point might be due to students' believing that the use of dictionaries to complete tasks represents a genuine improvement in their writing skills, whereas from teachers' perspectives, true writing-skill acquisition is not dependent on dictionary usage (Liu & Wu, 2019).
For RQ2, we found that teachers believed that (a) allowing dictionaries in writing assessments does not enhance students' vocabulary acquisition; (b) dictionary use does not lead to improved writing assessments and more polished writing pieces; (c) dictionary use does not increase the accuracy of students' written work; (d) allowing dictionaries in assessments does not contribute to positive attitudes toward writing; (e) dictionaries do not support learners' training or foster independent learning; (f) dictionaries do not aid students in understanding words in context or improving their comprehension; (g) allowing dictionaries in writing assessments is not helpful for the students; and (h) students may face difficulties in using dictionaries effectively, which can slow down the writing process.
These findings can be explained by previous findings that, when dictionaries are available to students, they are used superficially, and, therefore, in ways that do not promote learning.In terms of vocabulary, Alhaisoni (2020) reported that Saudi students do not necessarily retain the words they look up in English-to-Arabic dictionaries, both because (a) they do not spend enough time on entries and read all the meanings, and (b) using English-to-English dictionaries is a better way of acquiring new vocabulary.Boonmoh (2021) reached similar conclusions in a study of Thai learners of English, in which students were observed to use dictionaries in an instrumentalist and superficial way, that is, as a rapid means of completing a given academic task, such as a writing task, rather than as part of a more holistic process of language learning.Students often believe that such usage of the dictionary is useful, but teachers disagree, finding that interactive feedback and other methods are better than dictionary usage for skill acquisition (Liu & Wu, 2019).
For RQ3, we concluded that teachers are generally negative about the use of dictionaries, whereas students are generally positive.This aligns with the literature.As the discussions for RQs 1 and 2 explain, students and teachers diverge on their opinions about the usefulness of dictionaries.However, one point that can be emphasized is the possibility that the discrepancy between students and teachers might depend on the language of the dictionary.In several of the studies discussed earlier, teachers were more positive about the use of English-to-English dictionaries.It would be interesting to discover, in the context of future studies, if the language of the dictionary itself makes any difference to the respective attitudes of teachers and students.
Recommendations of Practice and Further Research
Our study presents valuable insights into the perceptions of both students and teachers regarding the use of dictionaries in EFL writing assessments within a Saudi university context.The following provides a roadmap for practitioners and researchers, offering actionable insights to enhance teaching practices and pave the way for further research.
(1) Guidelines or workshops for both students and teachers on effective dictionary use should be developed.
Emphasis should be placed on strategies for utilizing dictionaries to enhance vocabulary, improve writing performance, and maintain a balance between content comprehension and language accuracy.
(2) Interventions should be designed to foster positive attitudes toward writing among both students and teachers.The role of dictionaries in empowering learners to express themselves effectively should be emphasized.
(3) Open and collaborative discussions between students and teachers should be facilitated to address concerns and perceptions regarding dictionary use.A platform for constructive dialogue should be created to bridge the gap in perspectives and develop shared understandings.
(4) Further research exploring context-specific impacts of dictionary use in diverse educational settings should be conducted.Variations in perceptions across different contexts should be understood, and pedagogical approaches should be tailored accordingly.
(5) Longitudinal studies should be conducted to explore the sustained impact of dictionary use on vocabulary development.The influence of continued exposure to dictionaries on students' lexical skills over an extended period should be assessed.
(6) The role of technological dictionaries (e.g., digital, or online dictionaries), as compared to traditional ones, should be investigated.How the format of dictionaries affects perceptions and usage patterns among students and teachers needs to be explored.
(7) How teachers' perceptions of dictionary use evolve over time should be explored.Follow-up studies should be conducted to track changes in attitudes and instructional approaches after the implementation of training or interventions.
Limitations of the Study
(1) The study's sample size, consisting of 32 students and 34 teachers from a specific Saudi university, may limit the generalizability of findings to a broader context.The results may not fully capture the diversity of attitudes present in different educational institutions or cultural settings.
(2) The reliance on a 4-point Likert-style questionnaire could introduce response bias, and the predetermined response scale might not encompass the nuanced spectrum of attitudes.Additionally, the self-reporting nature of the questionnaire may be subject to participants' interpretation biases.
(3) The study primarily focuses on individual perceptions of students and teachers.The complex dynamics within teacher-student interactions, such as the influence of teaching methods and instructional strategies, are not extensively explored.
(4) The study primarily focuses on the perceived benefits and concerns related to dictionary use.Additional variables that might influence attitudes, such as individual learning styles or technological preferences, are not extensively explored.
Conclusion
The purpose of this mixed-methods study was to answer three questions: (1) What are students' attitudes and perceptions regarding using dictionaries in writing assessments?(2) How do instructors view the influence of dictionary usage on students' writing assessments and overall language proficiency?(3) To what extent are there differences in how instructors perceive the students' use of dictionaries compared to the students' perceptions?Our main finding was that students approve of using dictionaries, but teachers do not.Future studies should focus on how and why teachers and students differ in their understanding of the usefulness of dictionaries.The current study was limited by sample size, the lack of a longitudinal approach, and the lack of rich qualitative data, all of which can be corrected going forward.The qualitative data solicited from students were particularly limited, which is why they were not included in this study.The qualitative findings from teachers were richer but still limited in terms of the themes they illustrated.Richer data might be collected by interviewing teachers and students rather than asking them to fill out online forms.
Table 5 .
Qualitative Themes, RQ2 recommend the use of English-English dictionaries. . .they will have options of synonyms and examples of English words put into contexts." | 5,866 | 2024-01-29T00:00:00.000 | [
"Education",
"Linguistics"
] |
Privacy preserving divisible double auction with a hybridized TEE-blockchain system
Double auction mechanisms have been designed to trade a variety of divisible resources (e.g., electricity, mobile data, and cloud resources) among distributed agents. In such divisible double auction, all the agents (both buyers and sellers) are expected to submit their bid profiles, and dynamically achieve the best responses. In practice, these agents may not trust each other without a market mediator. Fortunately, smart contract is extensively used to ensure digital agreement among mutually distrustful agents. The consensus protocol helps the smart contract execution on the blockchain to ensure strong integrity and availability. However, severe privacy risks would emerge in the divisible double auction since all the agents should disclose their sensitive data such as the bid profiles (i.e., bid amount and prices in different iterations) to other agents for resource allocation and such data are replicated on all the nodes in the network. Furthermore, the consensus requirements will bring a huge burden for the blockchain, which impacts the overall performance. To address these concerns, we propose a hybridized TEE-Blockchain system (system and auction mechanism co-design) to privately execute the divisible double auction. The designed hybridized system ensures privacy, honesty and high efficiency among distributed agents. The bid profiles are sealed for optimally allocating divisible resources while ensuring truthfulness with a Nash Equilibrium. Finally, we conduct experiments and empirical studies to validate the system and auction performance using two real-world applications.
Introduction
Divisible resources (e.g., electricity, mobile data, and computation and storage resources in the cloud) have been frequently traded or allocated in a peer-to-peer mode. All the agents can purchase or sell any amount of the resources in such markets. Since all the agents generally compete with each other to maximize their payoffs, divisible double auction mechanisms (Zou et al. 2017) are designed to allow both buyers and sellers to dynamically submit their prices until convergence (e.g., achieving the Nash Equilibrium (Maheswaran and Basar 2003;Johari and Tsitsiklis 2004)) and then complete the transaction with resource allocation. Recently, smart contracts (as decentralized and self-enforcing contracts) can be designed for distributed agents to trade divisible resources with digital agreements. The blockchain-based platform supports the execution of smart contracts for strong integrity and availability, which maintain the transparency, traceable and consensus properties.
However, severe privacy concerns may arise in both double auction (Brandt et al. 2007) and blockchain-based systems (Wüst et al. 2019). For instance, during the auction, all the agents report their bidding profiles, including sensitive data such as their bidding amount and bidding prices. As rival agents, they may want to win competitive advantages in the market (more payoffs) by reporting untruthful bids if they know the others' bid profiles. Then, the market (Krishna 2009) would be deviated. Even worse, such private data might be collected and resold (Brandt et al. 2007) to other untrusted parties.
To this end, it is desirable to propose a truthful divisible double auction mechanism while preserving all the Open Access Cybersecurity *Correspondence<EMAIL_ADDRESS>Illinois Institute of Technology, 10 West 35th Street, Chicago, IL 60616, USA agents' privacy (at least sealing all the bid profiles). Specifically, smart contracts on the blockchain system can be designed for the divisible double auction. However, the blockchain system has limitations on preserving privacy for sensitive data and high performance execution. To complement the blockchain system, the Trusted Execution Environment (TEE) (Hoekstra et al. 2013) could address such limitations by executing the core functionality (e.g., computation for the smart contract) in the enclave, which protects the data against malicious attacks. Compared with other types of secure and private solutions (e.g., Secure Multiparty Computation (SMC) (Paillier 1999;Okamoto and Uchiyama 1998;Naccache et al. 1998)), TEE achieves stronger security and high efficiency for blockchain execution (Das et al. 2019). Thus, in this paper, we propose an efficient and privacy preserving divisible double auction with the TEE-Blockchain hybridized system (e.g., on the Intel SGX, which is a TEE supported by an architecture extension of Intel (Hoekstra et al. 2013)). Then, the hybridized system is co-designed in three aspects.
• First, the blockchain-based platform is expected to ensure integrity and availability while it interacts with other components (i.e., TEE) for the transaction, which helps data/state recovery if the execution/protocol is broken or interrupted by accidents. • Second, the smart contract can be loaded and executed within a protected environment in Intel SGX, (namely enclave) (Tsai et al. 2017). All the agents' sensitive data can be protected during the computation. • Third, we propose an efficient, individually rational and weakly budget balanced double auction based on the Progressive Second Price (PSP) (Lazar and Semret 2001) auction, derived from the Vickrey-Clarke-Groves (VCG) (Tuffin 2002) auction. The proposed divisible double auction ensures truthfulness for all the agents by achieving a Nash Equilibrium.
Furthermore, we conduct experiments for both off-chain procedures (executing the TEE program computation) and on-chain procedures (the interaction between the blockchain and TEE) in the hybridized system to evaluate the system and auction performance using two realworld applications: (1) energy trading, and (2) wireless bandwidth allocation. The remainder of the paper is organized as follows. We first present the background to briefly introduce the divisible double auction, TEE and smart contract in "Background" section. Then, "Overview of hybridized system" section gives an overview for the proposed hybridized system, and more details of the procedures. It includes how to execute the smart contract, how to trigger the TEE, and how to interact with blockchain to perform the validation. In "Auction mechanism design" section shows the designed divisible double auction mechanism with a truthfulness guarantee. In "Discussions" section analyzes the security of the system, and discusses some real-world applications, which are supported by the proposed hybridized system. We evaluate the performance of the hybridized system in "Experimental evaluations" section. Finally, "Related work" section reviews some relevant literature, and "Conclusion" section concludes the paper.
Divisible double auction
In a divisible double auction, let B and S be the sets of buyers and sellers, respectively. The bidding information includes two-dimensional bid profiles, denoted as b m for buyers and s n for sellers. During the auction, the bid profiles are submitted as follows: (1) buyer m ∈ B : b m = (α m , d m ) with bid price α m and amount d m to buy, and (2) seller n ∈ S : s n = (β n , h n ) with bid price β n and amount h n to sell. b = (b m , m ∈ B) denotes the bid profiles of all the buyers while s = (s n , n ∈ S) denotes the bid profiles of all the sellers. In addition, r = (b, s) is defined as a strategy profile, which represents the bid profiles for all the agents. These are private information to be sealed amongst all the agents in the auction. A strategy profile without agent i is denoted as r −i = (r 1 , ...r i−1 , r i+1 , ..., r |m+n| ) , then r = (r i ; r −i ).
From the global viewpoint, the main goal of the divisible double auction mechanism is to seek the maximum social welfare for optimal allocation. We use A m and A n to denote the allocation of buyer m and seller n, respectively. In the current iteration (k-th iteration) of the double auction, A (k) m and A (k) n represent the allocation for buyer m (amount to purchase) and seller n (amount to sell), respectively. The details for our divisible double auction machanism are given in "Auction mechanism design" section.
Trusted execution environment (TEE)
TEE provides a fully isolated environment to prevent others (e.g., software, OS, and hosts) from tampering with or learning the state of applications running in it.
Intel SGX (Costan and Devadas 2016) is an instance of TEE that enables process execution in a protected address space enclave. The enclave ensures confidentiality and integrity for the process against attacks. An enclave is not allowed to make system calls, but can read/write memory outside the enclave region. Thus, the isolated execution can be viewed as an ideal model which guarantees to be correctly executed with confidentiality. We denote the double auction program inside the enclave as Prog x .
Remote Attestation allows to remotely verify if the pieces of code or program are running within the TEE or not. In Intel SGX, CPU can measure the trusted memory, cryptographically sign the computed results, and generate the signatures for the attesting party. The private key is only known to the hardware over the program. Group signatures (EPID) (Brickell and Li 2009) are used for setting up a secure channel for remote attestation.
Smart contract
Cryptocurrencies are traded on the decentralized network of peers which stores all the transactions via a public ledger. Through the consensus protocol, the ledger is stored as a chain of blocks with the agreement state. Smart contract is a machinery built on top of cryptocurrencies, and it defines and executes the contract on the blockchain. In other words, the smart contracts work as a program digitally among distributed agents (Miller et al. 2000). Based on the decentralized cryptocurrencies, the integrity and availability can be guaranteed. In our work, privacy will be ensured by TEE.
Overview of hybridized system
In this section, we provide an overview of the Hybridized TEE-Blockchain System (including the procedures). Figure 1 illustrates the main components of our hybridized system: all the agents ( P ), TEE ( T ), Blockchain ( BC ) and Key Management ( KM)).
Hybridized system architectures
• All the agents P (buyers and sellers) are the end users of the smart contract. Th manager P M is the delegation to compute all the incoming private agents' input and deliver results as the administrator. P M further leverages Relay to trigger the enclave to be initialized for computation (will be explained as the following). Note that the manager P M is considered to be malicious, which may collude with other agents or interrupt the computation. • TEE ( T ) is responsible to run the smart contact to processes the double auction computation among the agents (requested by the manager P M ) in the enclave E , which protects the privacy and integrity of computations. It also generates remote attestations (computation correctness) for state updates. To further improve the functionality and security of our system, we design the only interface component Relay R to provide indirect access to enclave. Relay can also provide the message passing with the Blockchain. • Blockchain ( BC ) maintains a distributed append-only ledger via running a consensus protocol. The state of BC and attestations are stored on the chain. Moreover, the validity of state update are checked by the blockchain with the TEE attestations. • Key Management ( KM ) generates keys for both private agents' inputs and state encryption. All the agents and TEE can directly interact with the KM for the key pairs via a key distribution protocol.
Enclave functionality model
Enclave (E ) protects the code of program and data during the computation for the auction. Specifically, the program running inside the enclave is completely isolated from an adversarial OS as well as other processes on the host. We formalize and integrate the Intel SGX (Shi et al. 2015) as TEE in our hybridized sytem. In order to model the ideal functionality channel with some proprieties such as privacy and authenticity, we utilize a global universal composability (UC) framework functionality (Canetti 2001) to instantiate the SGX Functions. More formally, we denote the program X which runs inside the SGX enclave as Prog x , which can be Prog da for double auction. The SGX function can be expressed as F SGX ( sgx )[Prog x , R] , where sgx is a group signature scheme and R is Relay. As shown in Fig. 2, the program Prog x is loaded into enclave via the " init " call from Relay. When Relay calls " resume ", the program is executed based on the incoming requests or inputs, denoted as inp , and computes the output with an attestation ψ att := sgx ·Sig(sk sgx , (Prog x , outp)) . The signature under TEE hardware key sk sgx and pk sgx could be obtained from the SGX Functions ( F SGX ).
Procedures
In this section, we now sketch the procedures for the execution of divisible double auction with the smart contract in the hybridized system (more details are given in Fig. 3). It depicts that the designed system is executed with three phases: (1) Initialization , (2) ExecProg , and (3) Finalization . We denote the input and output for the TEE as inp and outp , respectively. Also, regarding the deposit, we use ξ b m , ξ s n and ξ P M for all buyers, sellers and managers.
(1) Initialization . Prior to the auction phase, all the agents (buyers and sellers) are supposed to prepare for their deposits ξ b m , ξ s n . Besides, the manager also needs to deposit ξ P M (if the manager or any agent is identified to deviate the computation, then the deposit will be charged as penalty). Then the TEE will set state := init as confirming that the deposits in the blockchain. Otherwise, the TEE will set state := abort for preparing next auction and refund the deposit to the agents. For the auction computation, the TEE will fetch the key pair (pk sgx , sk sgx ) from Key Management for attestation, where the key (pk sgx ) is bundled to the executing prog x instance (auction) for checking the correctness of computation. Besides, the attestation with current state [state, ψ att ] are posted on the blockchain BC (as described in "Enclave functionality model" section).
Next, to tackle the large inputs of agents, the manager P M will handle tx :=[Enc pk (inp) , l id , ξ b m , ξ s n ] from all the agents where inp denotes the inputs of all the agents, and l id represents a unique identifier (ID). Then, P M will send tx to the Relay for executing the auction computation. Note that all the agents send the transactions through secure communication channels among all the agents and TEE. The tx is a transaction to deliver the input and output data among different system components.
(2) ExecProg . To execute the auction requested from P M , the Relay will retrieve the state information from the blockchain and Relay will trigger TEE to execute the requested service (auction) with the " resume " call if the state can be verified. Then TEE first decrypts the input data (from the Manager) with the private key sk obtained from the Key Management and launch the auction smart contract code as Prog x in the enclave (a sandboxed environment). Thus, an adversary cannot interrupt the execution or monitor data inside the enclave considering the natural merit of enclave. The final results output of the program (auction smart contract) Prog x will be securely returned to the manager.
(3) Finalization . Once manager receives the final result outp from TEE and check the correctness with the Blockchain. If the result outp is accepted by the Blockchain by checking and verifying the new state state ′ , the auction result (outp, ψ sgx , l id , ξ b m , ξ s n ) will be delivered to all the agents via Manager and Blockchain will store the new state ′ .
Threat model and properties
To ensure data privacy and integrity for the auction computation, we use the TEE's attestation (Yuan et al. 2018), where the computation is executed inside the enclave trusted by all the agents. However, the remaining software stack outside the enclave and the hardware are not trusted. The adversary may corrupt any number of agents, assuming that honest agents will trust their own codes and platform (leakage resulted from its software bugs are out of the scope). Furthermore, we assume that all the agents do not trust each other in the auction while being potentially malicious, such as stealing the bid profiles information. During the execution, each agent may send, drop, modify and record arbitrary transactions. Note that the side-channel attacks against enclave and DoS attacks are not considered in this paper.
In our proposed hybridized TEE-Blockchain system, the TEE compensates for the privacy issue with respect to the smart contract, i.e., our system can address the privacy issue for the double auction by utilizing the TEE for isolating the contract (auction process) execution inside the enclave, shielding it from potential malicious agents. From the system aspect, the following properties are addressed: • Correctness. The correctness of computation in the TEE can be guaranteed and verified by the remote attestation based on the given state and inputs. • Privacy and Security. Our system protect and verify the sensitive inputs (e.g., bid profiles) and outputs of all the agents.
Problem formulation
We represent the strategy of each agent with a non-negative valuation function V m (·) for buyers, which indicates the willingness to pay, or value for buyers to obtain the amount of divisible item. Similarly, we have negative cost function C n (·) for sellers. In the auction design, we adopt generic assumptions (Lazar and Semret 2001;Tuffin 2002) for the valuation function V m (·) : (1) V m is differentiable, concave and V m (∅) = 0 , and (2) V ′ m (·) is non-increasing and continuous; for the cost function C n (·) : (1) C n is differentiable, convex and C n (∅) = 0 , and (2) C ′ n (·) is increasing and continuous; In our settings, buyers have diminishing marginal utility while sellers have increasing marginal cost. This Assuming that each agent is selfish with the goal to maximize their own payoff. Therefore, they may untruthfully modify their bids in the auction. With the blockchain-based system to realize the smart contract for the auction, untruthful responses could be detected, and thus penalty will be applied to the cheating agent.
Thus, valuation function will be converted to V m (·) − µ p (·) where µ p (·) is a anti-monotonic function for measuring the penalty applied to the cheated amount for the buyers (Li and Marden 2014). Note that µ p (0) = 0 means if the valuation is submitted and penalty will be exempted. Similarly, the cost function will be updated as C n (y n ) + µ p (·) where µ p (·) is a monotonic function (and increasing derivative) for measuring the penalty applied to the sellers (Li and Marden 2014) and µ p (0) = 0 (exempting the penalty for truthful response of the sellers).
Then, the payoff functions are defined for buyer m and seller n as f m (r) and f n (r) , to represent their payoffs w.r.t. the bid profiles of all the agents r. Specifically, ρ m is the payment made by buyer m while ρ n is the payment received by seller n. Moreover, ρ(r i , r −i ) is defined as the difference between all the buyers' aggregated valuation if any buyer i is absent in the auction minus the aggregated valuation if i is included the auction (Lazar and Semret 2001; Zou et al. 2017;Kojima and Yamashita 2017). Similarly, ρ(r j , r −j ) is defined as the difference between all the sellers' aggregated cost if any seller j is absent minus the aggregated cost if j is included. Thus, we have: Then, given the optimal allocation profile for buyer m ∈ B and seller n ∈ S as A * m and A * n , we can define the payoffs for the buyer m and seller n as: Definition 1 (Individual Rationality) The divisible double auction mechanism achieves individual rationality if the following holds: f m (r) ≥ 0 and f n (r) ≥ 0.
It ensures that the all the agents obtain non-negative payoff while participating in the auction mechanism.
Definition 2 (Incentive Compatibility) The divisible double auction mechanism achieves incentive compatibility if the following holds: f m (r) ≥ f m (r) and f n (r) ≥ f n (r) where r and r are denoted as the true bid profile and false bid profile.
It ensures that all the agents in the auction will obtain the maximum payoff if they report the truthful bid.
Definition 3 (Weak Budget Balance)
In the divisible double auction, for ∀ m ∈ B and ∀ n ∈ S , if there exists: ∀m∈B (α m · d m ) ≥ ∀n∈S (β n · h n ), then the auction mechanism satisfies weak budget balance.
It ensures "no budget deficit" in the auction.
Definition 4 (Clearing Price)
The price θ is defined as the clearing price for an optimal allocation A * (·) , if there exists a feasible and efficient allocation, such that, the best response is achieved for the maximum social welfare, denoted as F (·) = m∈B V m (A m ) − n∈S C n (A n ).
We say that the clearing price θ (Brero et al. 2019) supports the optimal allocation A * (·) with the maximum social welfare.
Definition 5 (Nash Equilibrium) In the divisible double auction, Nash Equilibrium holds if given the bid profile r * such that: where r −m = {r} \ {b m } is a bid profile for all the buyers excluding buyer m from B and r −n = {r} \ {s n } is a bid profile for all the sellers except seller n from S.
Our divisible double auction mechanism will find the optimal allocation for all the agents to achieve the maximum social welfare. Moreover, the truthfulness of bids will be ensured in the smart contract via individual rationality and incentive compatibility. To preserve privacy, all the agents' bid prices and amounts (bid profiles), as well as the valuation/cost functions can be protected in the auction. The clearing price and trading amount will only be disclosed to every pair of potential sellers/buyers at the end of the auction (after convergence).
Divisible double auction mechanism
We now design the divisible double auction mechanism (DA), which will be executed as a smart contract inside the TEE. The procedures are detailed as below: (1) Initialization . Denoting the double auction program as Prog da , while executing Prog da in the enclave, the decrypted bid profiles of all the agents will be checked if they satisfy the initial condition (i.e, (α i ) max ≥ (β j ) min ). Otherwise, the state of auction will be turned from " active " into " fail ". Then, it requires all the agents to update their bid profiles. Meanwhile, the potential amount of the resources C should be smaller than the overall demand/supply. 1 The auction will be active if and only if satisfying the above conditions (Fig. 4).
(2) Iteration . Once the iteration starts, the potential amount C(r, C) is updated as below: where Q(r, c) = min{ m∈B A * m , n∈S A * n } , p b (r, C) = min {α i , A i ≥ 0} , p s (r, C) = max{β j , A j ≥ 0} and P = p b (r,C)−p s (r,C) ω max +σ max . We denote Q(r, C) as the minimum value of total demand and total supply; A coefficient P is used for gradients of marginal valuations or costs; Two variables p b (r, C) and p s (r, C) are defined to stimulate the much faster coverage in each iteration with the updated potential amount. We use ω max and σ max to denote the upper bound for buyers' valuations ∂A m |} ) and the upper bound for sellers' costs ( σ max ≥ max sup A n {| ∂ C n (A n ) ∂A n |} ). Note that the valuation function V m (A m ) and cost function C n (A n ) are updated with the penalty functions in the smart contract. The potential amount is expected to achieve a Nash Equilibrium quickly with the gradients of marginal valuations and costs.
The optimal allocation A * m and A * n are updated in each iteration, and agent derive their best responses. Given (r, C), the optimal allocations (for buyers/sellers) are where d m and h n are the updated amount for buyer m to purchase and for seller n to sell, respectively; B m (b) = {i ∈ B|α i > α m } ∪ {α i = α n and i < m} and S n (s) = {j ∈ S|β j > β n } ∪ {β i = β m and j < n}.
The updated potential amount C(r, C) can be iteratively derived based on the given potential amount C.
(3) Best Response . We use b * m and s * n to represent the best response of buyer m ∈ B and seller n ∈ S . With the bid profiles r = (b, s) and a pair of potential amounts (C, C) , the best response can be derived as below: In the divisible double auction program Prog da , the best response will be computed in each iteration and finally converge to a Nash Equilibrium. Notice that, in different applications (e.g., different divisible resources), the valuation and cost functions would be different. In this dynamic auction game, all the agents recompute their best response to the current strategies (bid profiles) of other agents.
Theorem 1 The divisible double auction (as program Prog da ) achieves individual rationality and incentive compatibility.
Proof First, suppose that the truthful bid profile provided by buyer m ∈ B , then we could obtain the non-negative payoff function f m (r) = V m (A * m ) − ρ(r i , r −i ) . Correspondingly, given the truthful bid profile provided by seller n ∈ S , f n (r) = ρ(r j , r −j ) − C n (A * n ), ∀n ∈ S . Thus, the truthful bid profiles show that the non-negative payoffs are guaranteed for all the agents in the auction (individual rationality is proven).
Second, we define the A m and A n as allocation of buyer m ∈ B and seller n ∈ S , separately. And A k m and A k n represent the allocation for k-iteration. We will verify the incentive compatibility for all buyers m ∈ B first for incentive compatibility. Suppose there is truthful bid pro- If we have A k m < A m , the below holds: , ∀m ∈ B with Case (A) and (B). Similar, incentive compatibility can be proven for all the sellers ∀n ∈ S .
Discussions
In this section, we analyze the security of the proposed hybridized TEE-Blockchain system and illustrate some real-world applications supported by the system. (7)
Security
Based on the key feature of isolation in enclave, Intel SGX enables the program (data) to be executed inside the secure container (enclave) for confidentiality and integrity. The adversary cannot interrupt the computation executed in a sandboxed environment (enclave). Note that enclave is created in its virtual address space by an untrusted hosting application with OS support.
Once enclave starts initialization, data and codes inside it will be isolated from the rest of the system. Note that the encrypted data are sent from agents to enclave through secure channels. However, other malicious servers cannot eavesdrop on the encrypted data and even tamper with the communication.
During the execution, if any agents abort/skip this step or behave dishonestly during the initialization , the execution will be terminated and refunds to the honest agents within the time threshold T 1 . Afterwards the computation starts, all agents send the encrypted inputs to the interface of SGX. In this phase, if no malicious behaviors are detected by the manager, the Relay R will forward the encrypted inputs to the enclave E . However, it is hard to determine if the agents behave dishonest (i.e., fail to send message) or the Relay behave malicious (i.e, dropping message) during the execution if the enclave E does not receive any incoming requests. Thus, all agents P and Relay R both receive the challenge request (we denote as request chal ). Within the certain time threshold T 2 , if agents response with inputs and procedure will move to the next steps. Otherwise, the agents are proved to be malicious. Similarly, if the Relay R is proven to be the malicious one, the protocol is terminate and set up state is fail . In terms of the last phase Finalization , the TEE return the final results to all the agents and publish the states on the blockchain. Note that during all the data flow, the deposits of malicious agents are not refunded for punishment.
Real-world applications
In practice, divisible resources which could be privately traded using our system, e.g., electricity , cloud resources (Jin et al. 2018;Fujiwara et al. 2010), and wireless spectrum (Kebriaei et al. 2016). We now discuss two of them as representative applications. Note that different valuation/cost functions will be defined and implemented in different applications.
Energy Trading. There is demand from power grid for trading the excessive locally generated energy, e.g., the renewable energy resources (Aliabadi et al. 2017;Faqiry and Das 2016). The proposed hybridized system is able to implement the privacy preserving divisible double auction for energy trading, due to the divisible of electricity resource. The valuation/cost functions are defined as V m (x m ) = ζ m log(x m + 1) and C n (y n ) = a n y 2 n + b n y n (Bompard et al. 2007), where ζ m is a parameter leveraged by the behavior preference of buyer. The parameter of a n and b n are used for leveraging how much the sellers incline to sell. The valuation/cost functions follow the general assumption illustrated in "Auction mechanism design" section. Eventually, hybridized system only generate the clearing price for the auction to all the agents. The energy amount of each pair of buyer and seller will only obtain the amount traded between them.
Wireless Bandwidth Allocation. We can model the wireless bandwidth allocation (Feng et al. 2015;Zhang et al. 2016b) based on our proposed hybridized system for network traffic and services. In terms of a MVNO (Mobile Virtual Network Operator), the valuation function for buyer m is defined as V m (x m ) = ζ m ln(x m ) where ζ m defined as a positive-valued parameter. This indicates that buyers willing to pay for the bandwidth. Meanwhile, the cost function of seller n, an InP (Infrastructure Provider) is denoted as C n (y n ) = α n e y n , where y n presents the bandwidth it can supply and α n as another positive-valued parameter (bandwidth) for the seller n. As expected, the valuation/cost functions are also follow the general assumptions, and execute privately and truthfully via hybridized system for such divisible double auction.
Each agent can be either a buyer or seller in the auction for both applications.
In the experiments of energy trading/auction, we utilize the valuation function V m (x m ) = ζ m log(x m + 1) and cost function C n (y n )=a n y 2 n + b n y n , as detailed in Zou et al. (2017). We adopt the same parameters ζ m = 50 , a n = 30 and b n = 0 as Zou et al. (2017). Similarly, wireless bandwidth allocation is implemented with the valuation function V m (x m ) = ζ m ln(x m ) and cost function C n (y n ) = α n e y n , where ζ m = 50 and a n = 2 Feng (2015). For the energy trading, real datasets from the UMASS Trace Repository (Barker et al. 2012) are adopted while synthetic datasets are generated per (Feng et al. 2015;Zhang et al. 2016b) for the wireless bandwidth allocation.
Off-chain evaluation
Performance Evaluation We first evaluate system performance for securely perform computation for the auction (the off-chain computation for the optimal allocation). Figure 5a presents the percentage of total reduced runtime for the off-chain computation by comparing the our hybridized TEE-blockchain system ("Hybrid") with the cryptographic protocol based double auction system (Liu et al. 2020) ("PANDA"). Note that this evaluation is performed by the varying the number of agents (from 50 to 200) with the different key sizes (from 512-bit to 2048-bit). As shown in Fig. 5a, compared with "PANDA", the runtime of our ("Hybrid") has been significantly reduced for all different key sizes. The hybridized system ("Hybrid") shows the higher efficiency and scalability by reducing more than 15 % runtime (on average) with strong security guarantees (in case of the 2048-bit key size), 18 % average runtime for 1024-bit key size and 35% average runtime for 512-bit key size. Furthermore, Fig. 5b illustrates the comparison between the total bandwidth for ("PANDA") and ("Hybrid") during the auction. The "PANDA" composes cryptographic primitives for secure computation, which results in heavy burdens for computation. With the TEEblockchain system, the bandwidth of "Hybrid" has been drastically reduced to make the communications more efficient.
In addition, Fig. 5c presents the latency of 720 different auctions. The latency of our hybridized system is less than 1 s for most auctions, which is also significantly lower than the cryptographic protocols (PANDA). It indicates that the real-time performance of divisible double auction can be achieved via the hybridized TEE-blockchain system. Finally, Fig. 5d illustrates the throughput (bits/sec) of the system on a varying number of agents (1024-bit key). Essentially, throughput measures the amount of data transmitted during a specified time period via a network, interface, or channel. In this case, we use the throughput to measure the average size of the encrypted data that are transmitted among different agents per second. It is defined as throughput (bits/ sec) = (the average data size of all the communication channels)/(the average time). Note that the average time includes the agents' local computational time. As shown in Fig. 5d, the throughput of "Hybrid" increases slower than "PANDA" as the number of agents increases. Case study. We also conduct empirical studies for two example applications (energy trading and wireless bandwidth allocation) in case of 20 agents, including 12 buyers and 8 sellers. Figures 6 and 7 demonstrate the detailed results on (1) allocation ( A * m (b, C) and A * n=1 (s, C) ), (2) potential amount (C), and (3) social welfare ( F (·) ) in each iteration until achieving the Nash Equilibrium, for both applications.
First, Figs. 6a and 7a show the allocation for five randomly picked agents (three buyers and two sellers) in different iterations. The allocation of both buyers and sellers increase and finally achieve the optimally allocated amount after multiple iterations. Second, in Figs. 6b and 7b, the potential amount of the auction (used for updating the allocation for buyers and sellers in each iteration) grows until convergence while moving to new iterations. Finally, the social welfare ( F (·) ) is derived based on equation F (·) = m∈B V m (A m ) − n∈S C n (A n ) . Figure 6c presents an increasing trend in multiple iterations and the social welfare of energy trading converges to the maximum value $38 while the social welfare of the wireless bandwidth allocation in Fig. 7c converges to the maximum value $75.
On-chain performance evaluation
Besides the off-chain computation, we demonstrate the performance of on-chain transactions. shows the runtime for different package functionalities.
Related work
There were some other auction mechanisms for allocating divisible resources, i.e., spectrum allocation (Wu et al. 2011;Dong et al. 2012). Combinatorial auctions (Dong et al. 2012) was discussed for cognitive radio networks. Strategy-proof mechanism for multi-radio spectrum buyers was proposed by Wu and Vaidya (Wu et al. 2011). A sealed-bid reserve auction was modeled for the radio spectrum allocation problem. Hoefer et al. (Hoefer et al. 2014) investigated the combinatorial auctions with a conflict graph via an approximation algorithm (LP formulation). Other studies related to divisible resources auctions focused on the revenue maximization (Jia et al. 2009) or the efficiency of social maximization (Dong et al. 2012;Gopinathan and Li 2010). The privacy concerns in auction mechanism for divisible resources have been raised in Chen et al. (2014); Huang et al. (2015). In Suzuki and Yokoo (2003) cryptographic techniques were proposed for achieving the privacy and security in the auction game. A cryptographic scheme for one-side auctions was proposed in Huang Huang et al. (2013). In addition, Cheng et al. (2019) presented the complementary characters for blockchain and TEE, the rigorous security proofs are provided to support the confidentiality of the hybrid system. Also, the Hawk system (Kosba et al. 2016) was designed as a decentralized smart contract framework for running the contracts off-chain while posting zeroknowledge proofs on-chain. Zhang et al. (2016a) proposed a system Town Crier that authenticates data feed using smart contracts supported by the Ethereum platform. It enables data fetching from existing HTTPenabled data sources, and utilizes TEE to execute its core functionality and protect its data against malicious attackers.
In the context of double auction, a recent scheme was proposed to protect privacy for the bids (Liu et al. 2020). However, it requires a heavy computation burden by composing the cryptographic primitives. Instead, ETA (Liu et al. 2021a) was proposed for an efficient and private system, which securely executes double auction for allocating divisible resources among distributed agents within the Intel SGX. However, TEE cannot guarantee the availability (as the host can terminate TEE). We extend the ETA system to the Hybridized TEE-Blockchain System (Liu et al. 2021b), which enables smart contract execution on the blockchain to ensure strong integrity and availability with high efficiency. Therefore, the proposed hybridized system can securely and efficiently perform secure computation for the double auction.
Conclusion
In this paper, we design a hybridized TEE-Blockchain system to securely execute divisible double auction among distributed agents within the enclave in a highly efficient way. Meanwhile, it interacts with the blockchain for validation and storage. The proposed divisible double auction mechanism guarantees individual rationality, incentive compatibility, weak budget balance and pareto efficiency. The input private data of all the agents in the divisible double auction can also be protected in the hybridized system. The experimental results have demonstrated both effectiveness and efficiency for the designed hybridized system to privately compute the optimal allocation and execute the divisible double auction. | 9,080.4 | 2021-12-01T00:00:00.000 | [
"Computer Science"
] |
A Federated Deep Unrolling Method for Lidar Super-Resolution: Benefits in SLAM
In this paper, we propose a novel federated deep unrolling method for increasing the accuracy of the Lidar Super resolution. The proposed framework not only offers notable improvements in Lidar-based SLAM methodologies but also provides a solution to the significant cost associated with high-resolution Lidar sensors. Particularly, our method can be adopted by a number of vehicles coordinated with a server towards learning a regularizer - a neural network - for capturing the dependencies of the Lidar data. To tackle this adaptive federated optimization problem effectively, we initially propose a deep unrolling framework, converting our solution into a well-justified deep learning architecture. The learnable parameters of this architecture are directly derived from the solution of the proposed optimization problem, thus resulting in an explainable architecture. Further, we extend the capabilities of our deep unrolling technique by incorporating a federated learning strategy. Our federated deep unrolling model employs an innovative Adapt-then-Combine strategy, where each vehicle optimizes its model and, subsequently, their learnable regularizers are combined to formulate a robust global regularizer, equipped to handle diverse environmental conditions. Through extensive numerical evaluations on real-world Lidar based SLAM applications, our proposed model demonstrates superior performance along with a significant reduction in trainable parameters, with 99.75% fewer parameters compared to state of the art lidar super-resolution deep neural networks. Essentially, this study is the first initiative to combine deep unrolling with federated learning, showcasing an efficient, and data-secure approach to automotive Lidar super-resolution SLAM applications.
Abstract-In this paper, we propose a novel federated deep unrolling method for increasing the accuracy of the Lidar Super resolution.The proposed framework not only offers notable improvements in Lidar-based SLAM methodologies but also provides a solution to the significant cost associated with high-resolution Lidar sensors.Particularly, our method can be adopted by a number of vehicles coordinated with a server towards learning a regularizer -a neural network -for capturing the dependencies of the Lidar data.To tackle this adaptive federated optimization problem effectively, we initially propose a deep unrolling framework, converting our solution into a well-justified deep learning architecture.The learnable parameters of this architecture are directly derived from the solution of the proposed optimization problem, thus resulting in an explainable architecture.Further, we extend the capabilities of our deep unrolling technique by incorporating a federated learning strategy.Our federated deep unrolling model employs an innovative Adapt-then-Combine strategy, where each vehicle optimizes its model and, subsequently, their learnable regularizers are combined to formulate a robust global regularizer, equipped to handle diverse environmental conditions.Through extensive numerical evaluations on real-world Lidar based SLAM applications, our proposed model demonstrates superior performance along with a significant reduction in trainable parameters, with 99.75% fewer parameters compared to state of the art lidar super-resolution deep neural networks.Essentially, this study is the first initiative to combine deep unrolling with federated learning, showcasing an efficient, and data-secure approach to automotive Lidar super-resolution SLAM applications.
I. INTRODUCTION
L IDAR Sensors are widely used in high-level autonomous vehicles as part of their perception systems, despite their high cost and moving parts.Due to the growing interest in autonomous driving, there are currently over 20 companies developing different types of lidar systems for autonomous driving applications, ranging from low-level to high-end capabilities [1].With the continuous progress made over the years, a lidar-centric perception system is expected to mature in terms of model-based processing algorithms while satisfying at the same time requirements for the majority of autonomous vehicles (AVs) such as precise localization and accurate mapping of unknown surroundings.AVs frequently operate in environments characterized by constant changes, posing challenges to the creation of consistent maps.For instance, self-driving cars must possess the capability to consistently locate legal parking spots and identify safe passenger exit points, even in previously unexplored locations that lack accurate mapping data.The emergence of adaptive federated optimization in the field of Connected and Autonomous Vehicles (CAVs) has the capacity to transform such Lidar Based Simultaneous Localization and Mapping (SLAM) solutions [2], [3], [4].
However, two central challenges are currently restricting their broader implementation: the substantial cost of essential sensor equipment, including high-resolution Lidar (light detection and ranging) systems, and the prevalent skepticism towards deep learning methodologies, especially in high-stakes SLAM and multi-object detection and tracking applications.Large-scale deployment of such systems will only become feasible when the associated costs can be significantly reduced.Currently, the high expense is primarily due to costly sensor equipment (for instance, high-resolution Lidar systems) and the need for processing devices with advanced memory and computing capabilities [1].Note that a 64-channel HDL-64E Lidar, typically employed for autonomous driving, costs roughly 85, 000$.Conversely, a Lidar sensor with 16 channels is much more affordable, priced around 4, 000$.Nonetheless, the lower resolution of the 16-channel sensor may impact its effectiveness in several autonomous driving applications, such as lidar based odometers and SLAM approaches [5].
In light of the above restrictions, a number of studies in the literature have delved into the use of more affordable sensors, for instance, a 16-channel Lidar, coupled with advanced superresolution methodologies [6], [7], [8], [9], [10], [11], [12].The ultimate objective is to enhance the data quality captured by lowresolution sensors, which can then serve as alternatives to costly high-resolution Lidar sensors.Two primary strategies have been explored to enhance the performance of low-resolution LiDAR sensor.The first category involves methods that integrate additional sensors.Such methods incorporate visual cameras [9], inertial measurement units (IMUs) [10], [13], or a combination of both [11].The second category applies restoration methods, often deep learning-based super-resolution algorithms, either after initial range image calculation [6] or directly to the point cloud data [7], [8].However, these methodologies depend on deep learning techniques which are often treated as black-box solutions.In particular, they generate complex neural networks characterized by an extensive amount of learnable parameters, thus necessitating substantial volumes of training data and lacking interpretability [14], [15].Additionally, most lidar superresolution techniques are based on pre-trained networks that are not updated in time, utilizing data recorded during the operation of the vehicles.Though, the data collected by each vehicle or even by a group of vehicles could facilitate continual learning paradigms, increasing the accuracy of the models in time.Building on this line of thought, federated learning [16] can serve as a continual learning methodology enabling collaboration between trusted agents and respecting at the same time privacy concerns.Again, the challenges in this case are related to the size of the state of the art super-resolution DNNs, posing significant communication and complexity challenges.Hence, designing efficient and explainable deep learning architectures that utilize the collaborative nature of FL is actually the (open) problem that is tackled in this paper.
To fill this gap, in this study, we aim to combine federated learning methods with analytical and well-justified optimization-based methods.This novel combination offers the advantages of both worlds: high performance due to the data offered by a number of cooperating agents as well as lowcomputational complexity and explainable model architectures.Specifically, our proposed method operates on range images directly derived from the lidar point clouds which are collected by different AVs that have the capability to communicate.Considering the unique local and non-local dependencies exhibited by 2D range images, our approach expands on recent studies that use learnable regularization terms in the form of suitable neural networks [17], [18]19, [19], [20].These regularizers, derived from the training data of each AV, are adept at encapsulating more complex and unique characteristics of the data under consideration.Thus, the proposed method focuses on formulating a new well-justified optimization problem for the lidar super-resolution problem.This optimization problem consists of two interpretable components.The first component is a neural network derived by individual NNs of different AVs, which serves as a prior for the range images.The second component is a data consistency term that derives from the mathematical connection between the low and high resolution range images.To efficiently solve this problem the Half-Quadratic-Splitting (HQS) approach [21] is utilized.
Upon deriving an efficient iterative solver (e.g., based on the HQS) for the lidar super resolution problem, a model-based deep learning architecture is generated, via the utilization of the deep unrolling (DU) framework [20], [22], [23], [24], [25].To elaborate, the derived solver is unrolled for a specific number of iterations, hence creating a structured neural network.Each layer within this network corresponds to a single iteration of our proposed algorithm.This DU technique facilitates end-to-end optimization of the model, thereby improving its ability to adapt to the problem at hand.The parameters of this model are directly mapped to the parameters of the well-understood optimization algorithm, resulting in an explainable framework that allows a clear understanding of its operation.In addition to offering superior levels of interpretability, the developed deep unrolling method showcase a compact structure and a reduced dependency on vast amounts of training data.
Additionally to the effective and interpretable model-based network, we propose a novel adaptation mechanism for cooperatively updating the regularizer deployed in a considerable number of vehicles, without sharing their own Lidar Data.In this context, our federated version of the deep unrolling methodology, allows each autonomous vehicle, equipped with its own low cost lidar sensor, to function as a distinct unit in a larger distributed learning network.Inspired by the distributed parameter estimation approaches [26], our approach follows the Adapt-then-Combine strategy.During the adaptation phase each vehicle employs its own private dataset to fine tune the local deep unrolling model by solving a local optimization problem.In the combination phase, the focus is on formulating a robust global regularizer, effectively encapsulating the information gathered from various vehicles operating in diverse environments.
The key contributions of this work can be summarized as follows: r We propose a novel adaptive federated optimization mech- anism for solving efficiently the lidar super-resolution problem.The proposed federated deep unrolling approach is the first attempt to solve collaboratively the lidar super resolution utilizing the benefits of FL and the DU frameworks.This combination captures the strengths of both frameworks: the superior performance afforded by data from collaborating agents and the benefits of low computational complexity and interpretable model architectures offered by the DU framework.
r Through comprehensive numerical evaluations, we demonstrate the superiority of our deep unrolling approach against various state-of-the-art methodologies in the context of lidar super-resolution problem.A great benefit that stems out of the proposed methodology is the fact that the individual deep unrolling networks contains 99.75% less parameters compared to other state-of-the-art deep learning networks.Moreover, the proposed mechanism is highly adaptable and can easily incorporate standard privacypreserving mechanisms such as homomorphic encryption or differential privacy.In particular, we have evaluated the benefits of incorporating homomorphic encryption within the proposed FL-DU framework without negatively affecting the overall performance.
r We have also thoroughly studied the impact of the proposed federated DU-based Super resolution scheme in practical Authorized licensed use limited to the terms of the applicable license agreement with IEEE.Restrictions apply.and state of the art LiDAR based SLAM systems.More specifically, in order to provide a rigorous assessment, we integrated the proposed adaptive federated mechanism, utilizing the LeGO-LOAM system, a state of the art Lidar based SLAM that offers a real-time six-degree-of-freedom pose estimation and a generated 3D map.The reconstructed Lidar data achieved superior accuracy compared to other methods in various trajectories, highlight the effectiveness and superiority of the proposed approach in practical SLAM applications.The remainder of this paper is organized as follows.In Section II, a detailed literature review and some preliminaries of related works are given regarding the deep unrolling framework, the lidar super-resolution problem and the federated learning.In the sequel, Section III formulates the problem under study and the proposed deep unrolling model.Section IV derive the proposed Federated Deep Unrolling method (FL-DU).Section V presents a series of extensive numerical results in the context of real world SLAM applications, that demonstrate the efficacy of the new algorithms.Finally, Section VI concludes the paper.
A. Deep Unrolling
The deep unrolling framework in an emerging research field with significant potential to numerous real-world inverse problems [20], [22], [23], [25], [27], [28], [29].More formally, the deep unrolling approach transform effective optimization-based algorithms into computationally efficient and interpretable deep learning networks, where each iteration of the solver corresponds to one layer of the network [30].
To be more specific, in imaging systems inverse problems can be expressed as where Y is the corrupted measurements of an signal X, S denotes the forward operator and N is some noise.An effective way to estimate the desired signal X is to form an optimization problem where R(•) is a regularization term, e.g. the total-variation (TV) semi-norm, or a learnable regularizer, aiming to promote the inherent properties of the under-examined signals [31].Classic approaches to address this problem involve iterative solvers to estimate the solution, defined as X k+1 = g(X k , Y ) to estimate a solution based on the Alternating Direction Method of Multipliers (ADMM) and HQS [32] methodologies.However, these methods can pose additional challenges in optimization problems.They often require parameter tuning and multiple iterations to converge to a satisfactory solution.In this context, a more effective solution can be realized using the deep unrolling paradigm.Particularly, a limited number of iterations from the derived iterative solver (e.g., HQS) are unrolled into a predefined number of iterations.This forms a structured neural network, with each layer representing a single iteration of the proposed algorithm.
Although the literature regarding the deep unrolling paradigm is rich in 2D classical image processing domain, its formulation and application in the automotive field, especially concerning Lidar super-resolution challenges in automotive scenes remains underexplored.In fact, in autonomous driving systems often encounter complex inverse problems that can potentially impact the performance of Simultaneous Localization And Mapping applications [33], [34], [35], [36].In this study, we focus on the lidar super-resolution problem for automotive scenes by proposing a novel optimization problem and a deep unrolling network with state-of-the-art performance, low-computational complexity, and well-justified-explainable architecture.Significantly, the proposed DU model exhibits reduced number of trainable parameters i.e., 99.75% less parameters, as compared to other state-of-the-art deep learning methods.
Adopting a wider perspective, the deep-unrolling method provides numerous significant benefits in AV applications.In particular, compared to the classical deep learning approaches where their architectures derive from an ad-hoc manner [37], the proposed deep unrolling network have a clear connection with the under-examined problem and the considered optimization algorithm.This results in an explainable framework that allows a clear understanding of its operation.Understanding how a model operates can be vital for diagnosing and correcting any issues that could have safety implications [38].For example, when a system exhibits sub-optimal performance, explanations assist engineers and researchers in identifying challenges, and potential areas of failure [38].In addition, the developed deep unrolling method showcases a compact structure and a reduced dependency on vast amounts of training data.Therefore, the mathematical consistency and explainability in the proposed deep unrolling model become compelling attributes, amplifying the value and relevance of our work in real-world AV scenarios.
Finally, we leverage the structured design of our proposed deep unrolling framework to extend its capabilities further.Specifically, we propose a new federated learning framework, leading to an explainable deep learning models in the automotive domain.
B. Federated Learning in Automotive Domain
Although the Federated Learning framework has been extensively explored in numerous disciplines, e.g., signal processing, medical processing, its application in the autonomous driving domain remains under-investigated [16], [39], [40].The current body of literature contains a limited number of works that investigate the benefits of FL in autonomous driving.For instance, study [41] used the FL scheme to examine the object detection problem in autonomous driving scenes.Additionally works in [42], [43] proposed methods for predicting the wheel steering angle in autonomous vehicles under the FL scenario.Theoretical aspects of federated learning, such as data distribution and noni.i.d.nature of datasets, were explored in [2], [41].Lastly, [44] developed a benchmark platform for semantic segmentation, incorporating multiple federated learning algorithms.It should be highlighted that the above approaches examine the federated learning framework only in the context of its application to specific problems, based on the FedAverage algorithm [45].
Our study differentiates from the existing literature in two important aspects.Firstly, we explore the novel problem of deep unrolling-based lidar super-resolution from a federated learning perspective, which has not been previously examined.By capitalizing on the distributed nature of federated learning, our approach enables the utilization of private lidar data gathered from diverse autonomous vehicles operating in different environmental conditions in order to improve the lidar slam solutions.Secondly, and more importantly, we propose a novel federated learning scheme based on the proposed deep unrolling formulation.We argue that the well-justified structure of the deep unrolling model can be fully utilized by the Federated learning strategy.Under the well justified structure of the deep unrolling model, the federated learning can be formulated as an Adapt-then-Combine approach.During the adaptation phase, each agent optimizes its local deep unrolling model, which is derived from the problem at hand.Subsequently, during the combination phase, agents upload only a portion of their local deep unrolling models -specifically the learnable regularizers -to a central server.The server then applies a fusion rule to the received learnable regularizers, deriving a global regularizer in the process.Thus, our proposed Federated Learning method seeks to learn a global regularizer that effectively captures the unique characteristics of each device's local private data.This global regularizer can then be efficiently integrated into local deep unrolling approaches to address the problem of Lidar super-resolution.
To the best of our knowledge, this is the first study that combines the deep unrolling strategy with the federated learning framework.This connection offers several practical advantages.Firstly, Firstly, the collaboration nature in the federated learning (FL) framework ensures both high performance and privacy.The deep unrolling method further enhances efficiency due to its compact model structure with fewer parameters, minimizing the communication load between devices and the central server and the computational resources that are required for training.Secondly, the interpretable architecture of the deep unrolling model enhances our understanding of the network's functions and the federated learning operation.Overall, the combination of the deep unrolling strategy with the federated learning framework not only improves efficiency but also enhances interpretability and privacy, making it a valuable approach for various applications.
C. Lidar Super-Resolution
In the field of lidar super-resolution, the majority of existing methods can be classified into two main categories.The first category involves methods that integrate additional sensors.Such methods incorporate visual cameras [9], inertial measurement units (IMUs) [10], [13], or a combination of both [11].
However, these methods have a significant drawback: they introduce increased computational complexity.This is primarily because their performance is heavily dependent on the accuracy of point cloud registration [7].Additionally, the effectiveness of tight integration relies heavily on the precision of the IMU, which is often determined by its cost [7].
The second category involves the application of appropriate restoration methods to the noisy or low-resolution lidar data.In many instances, these approaches rely on the use of a deep learning-based super-resolution algorithm, which is applied either directly within the point cloud domain [7], [8], [46], or subsequent to the range-view domain, where the initial 3D point clouds are organized into 2D range images [6], [12].
Focusing on the studies [7], [8], [46] that process the raw 3D lidar point clouds to perform super resolution, these methods are usually computationally intensive as they require to find neighboring relationships among points [47].The limited density of 3D point clouds derived from low resolution lidar sensors posses another major challenge, as they require complicated deep learning architectures and additional processing algorithms such as segmentation [8].
An alternative approach is to focus on the range image domain or range view, which involves projecting 3D point clouds onto 2D range images.This representation is more compact and provides a clearer insight into lidar point clouds, especially in handling the sparsity of raw 3D point clouds [48].Methods such as [6], [12], utilize deep learning structures like the U-net-based network [6] or attention-based models [12].Nevertheless, these deep learning architectures employed as black-box solutions they generate complex neural networks characterized by an extensive amount of learnable parameters.As a result they require substantial volumes of training data and often lack interpretability and explainability [14], [15].
In this study, we argue that the range view offers a distinct advantage.By converting 3D point clouds to 2D range images, we can identify and exploit the mathematical relationship that connects the low and high-resolution range images.Based on this relationship, we formulate a novel optimization problem that can be tackled using the deep unrolling framework.This approach offers adaptability, increased restoration performance, and computational efficiency while maintaining a clear understanding of the underlying processes.
Lastly, a significant distinction of our research from the existing literature is that most methods assume that the models are trained only once and they are not fine tuned in time utilizing data recorded during the operation of the vehicles.A further limitation of these methods is their lack of adaptability.Specifically, if there's a need to incorporate new information from other vehicles, it necessitates the transmission of new raw data to update the deep learning model.In contrast, we introduce an efficient federated learning framework based on the deep unrolling approach to address the above-mentioned challenges.In particular, the proposed method leverages data from distributed vehicles, mitigating privacy and communication challenges by exchanging only specific parts of the deep unrolling model namely, the learnable regularizer.This design not only ensures data privacy but also enhances the adaptability of the proposed method, hence allowing it to efficiently integrate new information without the need for extensive data transfers.As a result, the overall effectiveness and flexibility of our system are substantially improved compared to traditional deep learning approaches.
III. THE LIDAR SUPER RESOLUTION PROBLEM AND THE DEEP UNROLLING FRAMEWORK
In this section, we introduce the problem formulation regarding the lidar super-resolution problem and provide details how to design an efficient and explainable deep unrolling model.More specifically, Section III-A defines the mathematical relation between the low and high resolution lidar data.Utilizing the derived mathematical formulation, Section III-B presents the deep unrolling model to tackle efficiently the considered problem.
A. Lidar Super-Resolution: Problem Formulation
Assume a high-resolution point cloud generated by a 64channel LiDAR sensor, resulting in the associated highresolution range image X ∈ R C×M , where C signifies the vertical resolution (i.e., the total count of channels or lasers, for instance, C = 64), and M defines the horizontal resolution of the range image.The corresponding low-resolution range image Y ∈ R c×M , which retains the same horizontal resolution as X but includes only c < C channels in the vertical resolution (e.g., c = 16), can be obtained using the degradation model below [6]: where S ∈ R c×C is a the downsampling operator that extracts only the c channels from the high-resolution range image and N denotes a zero-mean Gaussian noise term.With the lowresolution range image Y as input, our objective is to accurately estimate the high-resolution range image X.Given the under-determined nature of this inverse problem, we suggest the following regularized optimization formulation comprised of a term that ensures data fidelity, and a learnable regularizer denoted as R(•) which incorporates prior knowledge to capture the underlying structure of the range images.Importantly, the learnable regularizer can be interpreted as a denoiser which can be replaced by a neural network where whose weights can be learnt using the training data.Furthermore, μ stands for the penalty parameter that controls the importance of the learnable regularizer.By converting the estimated high-resolution range image into 3D coordinates, the desired high-resolution point cloud can be obtained.A visual representation of this workflow is provided in Fig. 1.
B. Deep Unrolling Framework
In this section, we provide details on the design of the proposed deep unrolling model based on the proposed HQS iterative solver.
1) HQS Iterative Solver: It should be highlighted that relation (4) constitutes a very challenging problem to be solved.To overcome this difficulty, we employ an alternating optimization (AO) scheme [32], splitting the main problem into two more tractable sub-problems, i.e., a least squares sub-problem and a denoising sub-problem.In accordance with this approach, we utilize the Half Quadratic Splitting (HQS) methodology [21] that is able to treat such issues.
In more detail, we introduce an auxiliary variable Z ∈ R C×M to problem (4), thus formulating it as follows Based on the above problem, the objective function that the HQS method aims to minimize is given by where b denotes a penalty parameter that controls the importance of the learnable regularizer R(.).From relation (6) a series of individual sub-problems can be derived The sub-problem in (7a) corresponds to a quadratic regularized least squares problem.The closed-form solution for this subproblem is given as Additionally, problem (7b) can be expressed in the following form From a Bayesian perspective, we can interpret sub-problem (7b) as a Gaussian denoiser [18], [20].This implies that we can leverage a neural network f θ (•) as a denoising model, which can be trained using appropriate training data.Consequently, we can express (7b) as follows It is worth noting that this neural network serves as the regularizer, or prior, that enforces specific properties learned from the training data on the predicted high-resolution range images.Hence, the HQS solver consists of two interpretable modules that is the data consistency solution for estimating the highresolution range image (11a) and the denoising step in (11b) In the following, we propose a deep unrolling model.This model is established by unrolling the iterations of the local solver as depicted in (11), thus deriving an end-to-end interpretable deep architecture..
2) Deep Unrolling Model: In this section, we introduce an efficient and interpretable approach grounded in the Deep Unrolling (DU) framework.Instead of iterating over the map in (11) for a large number of iterations, the deep unrolling strategy is utilized.This method involves unrolling a limited number of K iterations, thus creating a K-layer deep architecture, as illustrated ) end for Optimize the deep unrolling model end-to-end using loss function (12) in Fig. 2. Specifically, we consider a small number of iterations, say K, of the HQS solver in (11) as layers of a deep learning model.Each iteration of the solver corresponds to a distinctive layer of the proposed structure, forming a K-layer deep learning network.
In this scenario, the deep unrolling model's depth and parameters are highly interpretable, directly correlating with the underlying iterative algorithm.As shown in (11), each layer of the proposed model incorporates two interpretable modules: the closed-form solution derived from the data consistency term for the high-resolution range image estimation (11a), and the denoising process defined in (11b).The proposed deep unrolling network is depicted in Fig. 2.
After formulating the deep unrolling model, the next step necessitates the training of the learnable parameters of the deep unrolling model -specifically the neural network f θ (•) and the penalty parameter b.In order to achieve this, we aim to optimize the local deep unrolling model by minimizing a certain loss function, expressed as follows: where is the output of the proposed deep unrolling network given a low-resolution range image Y i and X i denotes the i th ground truth high resolution range image.Algorithm 1 summarizes the formation of the deep unrolling model.
IV. PROPOSED FEDERATED DEEP UNROLLING METHOD
In this Section, we formulate a novel Federated Deep Unrolling methodology (i.e., FL-DU).Inspired by the distributed parameter estimation approaches [26], we argue that the proposed FL framework can be expressed as an Adaptthen-Combine (ATC) strategy, thus utilizing the proposed optimization problem in (4).In particular, Section IV describes the Federated Deep Unrolling problem formulation and Sections IV-B and IV-C present the proposed adaption and combination steps of FL-DU framework.
A. Federated Deep Unrolling Framework
To mathematically establish the Federated Deep Unrolling (FL-DU) framework we consider a network of N edge devices (or agents) participating in the learning process.Each device Authorized licensed use limited to the terms of the applicable license agreement with IEEE.Restrictions apply. is identified by an index n within the set N = 1, 2, . . ., N, and holds a local dataset D n = {X n , Y n }.In this setup, X n denotes the high-resolution range images, while Y n represents the corresponding low-resolution range images.To simplify the notations, we assume that each local dataset is composed of one pair of high-and low-resolution range images.
Each agent aims to solve a local optimization problem utilizing the proposed problem formulation for the Lidar superresolution in (4), defined as follows arg min where R n (•) denotes to the learnable regularizer corresponding to the n−th agent.The fact that the local agents utilize only their local data to address the proposed optimization problem may produce a local regularizer (prior) that is not able to generalize well in various environmental conditions.Thus this limitation may result in a local learnable regularizer (R n (•)) that is only limited to capture dependencies of the range images generated from the local distribution.To efficiently overcome this, the proposed federated deep unrolling framework allows agents to collaborate under the orchestration of a central server.Through this collaboration, they are able to learn a more robust regularizer, or prior, which exhibits greater generalization capabilities across diverse conditions.
The goal of the server in this context is to solve the sum of the local optimization problems, i.e., The optimization problem presented in ( 14) consists of two terms.The first term, known as the data consistency term, requires access to each agent's private local datasets.This part is crucial in maintaining the accuracy of the optimization.However, direct access to this local private data may raise privacy concerns.The second term represents the sum of the learnable regularizers corresponding to each agent.These regularizers are expressed as neural networks that capture the underlying structure in the data.Importantly, while the regularizers are learned using local data, they don't expose sensitive information.This makes them suitable for sharing with the server, which facilitates global optimization without compromising privacy.In light of this, the proposed federated deep unrolling framework can be expressed as an Adapt-then-Combine (ATC) strategy [26], taking into account the above optimization formulation.Given the interpretable structure of the deep unrolling model, the proposed framework consists of two steps: adaptation and combination.
In the adaptation step, each agent aims to solve the optimization problem defined in (13).This step is designed to solve the proposed local optimization problem using the deep unrolling strategy and adapting the respective deep unrolling (DU) models to the specific characteristics of the local datasets from each autonomous vehicle (agent).
In the combination step, the focus is on merging the local learnable regularizers obtained from the deep unrolling models.This process results in a more powerful and robust global regularizer that effectively incorporates the information gathered from a diverse range of autonomous vehicles operating in different environmental conditions.By combining the local regularizers, the overall performance and generalization capabilities of the federated learning framework are enhanced.
The proposed approach provides the deep unrolling-based federated learning a clear and interpretable structure.The role of Federated Learning is to facilitate the merging of local learnable regularizers without compromising the privacy of individual datasets.
B. Federated Deep Unrolling: Adaptation Step
In the proposed approach, the adaptation step takes place within the local devices.In more detail, the adaptation step involves the following steps as we mentioned in Section III-B: r Formulation of the optimization problem: Each edge de- vice aims to minimize the objective function defined in (4) to estimate the high-resolution range image based on the low-resolution range image and system matrix S.
r Local HQS iterative solver: The optimization problem is solved using the HQS iterative solver, which iteratively estimates the high-resolution range image and performs denoising.This iterative solver serves as the basis for the deep unrolling model.r Creation of the local deep unrolling model: A K-layer deep architecture is formed by unrolling a small number of iterations of the HQS solver.Each layer corresponds to an iteration, making the model highly interpretable.The learnable parameters, including the denoiser weights, are trained using an end-to-end approach.1) Local HQS Solver and Deep Unrolling Model: Focusing on the devices' side, at the t-th communication round each device n aims to solve the following optimization scheme (see, also (13)), i.e., arg min where R n (•) denotes to the learnable regularizer corresponding to the n−th agent.Similar to the procedure described in Section III-B1, each agent n employs the HQS methodology to tackle the local optimization problem in (15), thus forming the following local objective function (16) Recall that the solution of this optimization problem consists of two interpretable modules that is that is the data consistency solution for estimating the high-resolution range image (11a) and the denoising step in (11b).Thus, at each communication round t, the local device n solves the following iteration map Local Deep Unrolling model: However, as we mentioned in Section III-B2, instead of solving the above iterative map for a large number of iterations, each device employs the deep unrolling strategy, thus unrolling a small number of K iterations and creating a K-layer deep architecture, as depicted in Fig. 2. Having formed the local deep-unrolling model the device n employs some version of the stochastic gradient descent to train it end-to-end using some loss function as in (18).
Note that the learnable parameters of the local deep-unrolling model are the weights of the denoiser f θ n (•) denoted as θ.Thus, during the adaptation step, an agent updates the local deep unrolling model, which consists of the equations in (17) as follows: where Z (K) n is the output of the proposed deep unrolling network given a low-resolution range image Y n and X n denotes the ground truth high resolution range images.
C. Federated Deep Unrolling: Combination Step
After all participating edge devices n ∈ N have updated their local deep unrolling models, the next step is the combination step.The objective of this step is to learn an appropriate regularizer (prior) that captures the unique characteristics of the range images by utilizing local information from the agents.Due to the structure of the local deep unrolling (DU) models, the devices only upload to the server the neural network f θ n (•) responsible for the denoising process in (17b).Subsequently, the server combines all the local denoisers using a fusion rule, as follows where f θ g denotes the global denoiser (regularizer) and a n stand for combination weights.Consequently, the server transmits the global denoiser back to the local devices.These devices initialize the denoisers of the local deep unrolling models (i.e., (11b)) with the received global denoiser.This procedure is repeated for T communication rounds.Hence, the FL-DU algorithm can be written as an agent Adaptation step, which involves the local data consistency term (20a) and the local denoiser (20b) (solved by unrolling these equations using the proposed deep unrolling strategy) and a Combination step (20c): Fig. 3 illustrates the proposed FL-DU framework.Additionally, Algorithm 2 summarizes the proposed approach.
V. PERFORMANCE ANALYSIS
To evaluate the effectiveness of the proposed deep unrolling model and the federated deep unrolling framework, a series of experiments were carried out in the context of LiDAR superresolution.The aim was to upscale data from a 16-channel LiDAR to a 64-channel LiDAR by a factor of 4. Furthermore, we assessed the benefits of our proposed method on a LiDAR SLAM system based on the LeGO-LOAM algorithm [49].The LiDAR SLAM experiments were conducted on a developed simulation framework [50].The primary objectives of the experimental results include: r Designing an interpretable deep unrolling model that ex- hibits state-of-the-art performance and significantly lower complexity compared to other state-of-the-art approaches.
r Demonstrating the advantages of formulating a federated learning framework that utilizes the deep unrolling strategy, called Federated Deep Unrolling.
r Evaluating the benefits of incorporating homomorphic en- cryption within the proposed FL framework without negatively affecting the models performance.
Authorized licensed use limited to the terms of the applicable license agreement with IEEE.Restrictions apply.Fig. 3. Proposed end-to-end Deep unrolling-based Federated Learning approach.This strategy contains two key parts i.e., the adaptation and the combination step.In the adaptation step, each agent updates its local deep unrolling model using the local data consistency term (20a) and the local denoiser (20b).This process ensures adaptation to the specific characteristics of the agent's dataset.In the Combination part, the server combines the outputs of all local denoisers using a fusion rule (20c).This creates a global denoiser (regularizer) that captures knowledge from diverse local datasets.
r Illustrating how the super-resolution problem can be use- ful in a real-world application, such as a LiDAR SLAM system.
A. Dataset
Training Data: Regarding the training, we employed the same dataset presented in study [6].A 64-channel lidar, OS-1-64, was simulated in the CARLA Town 1 and Town 2 scenes, matching the Ouster dataset field of view (33.2 • ).For the same scene, a 16-channel lidar, OS-1-16, was simulated to generate low-resolution point clouds.Both high-and low-resolution point clouds were projected onto range images [48], resulting in 7000 pairs of 64x1024 and 16x1024 images.The images were then normalized to a range of 0-1 for training.Note that Town 2 scene contains a variety of environmental settings, resulting in a rich dataset that simulates the diverse experiences vehicles may encounter.
Testing Data: To validate the performance of the proposed FL architecture, the real-world Ouster lidar dataset was utilized.This dataset comprises 8825 scans collected over a 15-minute drive in San Francisco using an OS-1-64 3D lidar sensor.The high-resolution point clouds were converted into 64x1024 range images, and 16 rows were extracted to create 16x1024 low-resolution images.These data pairs were utilized to assess the architecture's performance in recovering high-resolution 3D point clouds from low-resolution inputs.
B. Implementation Details 1) Proposed Deep-Unrolling Model:
Concerning the proposed lidar super-resolution model, the number of unrolling iterations was set to K = 6, thus leading to 6-layer deep network architecture.During the end-to-end training procedure, we employed the Adam optimizer with batch size equal to 6. Furthermore, the learning rate was set to 1e − 03 and the number of epochs to 100.
Neural Network Architecture (regularizer-denoiser): Regarding, the neural network f θ (•) in (20b), we used a 5-layer CNN network, where each layer consists of 64 filters with size 3 × 3.Moreover, we employed the ReLU as activation and drop-out rate equal to 0.05.Initially, using pairs of noisy and ground truth high resolution range images, the CNN network (denoiser) was pre-trained adopting the ADAM optimizer with learning rate equal 1e − 03 and batch size equal to 6 for 100 epochs.After the initialization stage, the derived CNN network was used to initialize the weights of the corresponding CNN denoiser of the proposed deep unrolling model for the end-to-end training procedure.Finally, note that at inference time, similar to study [6], we employed a Monte-Carlo drop-out method [51] as a postprocessing technique to remove unreliable range predictions.
2) Federated Learning Scenario: In our experiment, we examined a network made up of 10 autonomous vehicles, each functioning as a distinct agent.The training data previously mentioned was partitioned into 5 unique blocks.Each block was sourced from different locations within the CARLA simulator, thereby representing varied environmental conditions.
During the local training on the edge devices, we utilized 5 epochs with a learning rate of 1e − 04 and a batch size of 6.Additionally, we determined the number of communication rounds between the central server and the edge devices to be T = 50.The local models were trained using the Adam optimizer.
Optimize the deep unrolling model end-to-end using loss function (18) Due to the structure of the local deep unrolling (DU) models, the devices only upload to the server the neural network (prior-denoiser) f θ n,t (•) responsible for the denoising process in (20b).end for Server side: Combination step Compute the new global denoiser (prior): Send the global model to all edge devices to initialize the local denoiser with the global denoiser end for 3) Secured Federated Learning: In literature, several studies have explored the use of privacy-preserving techniques such as Homomorphic Encryption [52], [53], [54] within the realm of classical federated learning.Our goal is to demonstrate how security mechanisms, such as Homomorphic encryption, can be easily integrated to enhance the security of the proposed Federated deep unrolling method, without negatively affecting the models performance.To this end, we used homomorphic encryption based on the Tenseal library [55].In the context of the proposed federated deep unrolling system, multiple vehicles collaborate to improve a global model during the combination step, while keeping their training data local.However, sharing information between these agents or with a central server can lead to potential privacy breaches.Herein lies the importance of using HE.It enables each client to encrypt their trained denoiser (prior) parameters before sending them to a central server for aggregation.Thus, the agents need to encrypt only the denoising step in (20b) from their local deep unrolling models.Due to the special properties of HE, the server can perform computations directly on these encrypted parameters to generate an encrypted global model.This method ensures that the server, while able to aggregate the model updates and further distribute them, never has access to the raw data or individual model parameters, maintaining the privacy of each participant in the federated deep unrolling process.Finally, even though the aggregated encrypted model is then decrypted, the privacy is still preserved since vehicles have access only to the aggregated model not the individual ones.
C. Lidar Super-Resolution Performance on Raw Data: Centralized Solutions
In this Section, we compared the proposed deep unrolling method with several other methodologies, including the baseline linear and cubic interpolation approaches, the well-established super-resolution SR-ResNet model [56] in classic image processing, and the state-of-the-art lidar-based super-resolution SRAE model [6] under the centralized scenario.Specifically, a central server gathers all available data from distributed edge devices to train the lidar super-resolution models.
Table I presents a summary of the reconstruction results, demonstrating that the proposed deep unrolling method outperforms other approaches in terms of L1-loss.The proposed method not only provides quantitative gains, but it also requires substantially fewer parameters compared to the deep learning methodologies.This advantage is attributed to the well-defined architecture derived from the optimization problem in (4).Particularly, the proposed model contains 99.75% fewer parameters than the SR-ResNet and SRAE approaches, making it an ideal choice for real-world applications with computational and storage constraints.
The aforementioned advantage of the proposed deep unrolling model can be attributed to its architecture, which has been derived from an optimization problem, thereby retaining the concise structure of the optimization-based solution.In addition to the benefits mentioned above, this model also demonstrates its effectiveness in situations where communication efficiency is of paramount importance, such as in federated learning (FL) scenarios.The lower number of parameters in the model helps to minimize communication overhead, further enhancing its suitability for practical applications, including those that involve FL.
D. Lidar Super-Resolution Performance on Raw Data: Federated Learning Solutions
To thoroughly evaluate the advantages and possibilities of our proposed federated deep unrolling framework, called FL-DU with and without the homomorphic encryption part, we compare it with the following approaches: r centralized-SRAE: Since the SRAE method [6] was found to be the top-performing competitor, we include it in our comparison.
Authorized licensed use limited to the terms of the applicable license agreement with IEEE.Restrictions apply.r FL-SRAE: Additionally for completeness purposes, we also consider a straightforward federated learning scenario [57], where edge devices utilized the deep neural network proposed in study [6].
Comparison with the centralized methods: As we can see from Fig. 4 and Table II the proposed FL-DU method is able to achieve similar performance against the centralized solution.Crucially, the proposed method necessitates only a limited number of communication rounds between the server and local agents, along with a mere five epochs of local training per round, to converge effectively to the centralized solution.Although, the centralized scheme attains marginally superior quantitative results, it necessitates the exchange of considerable amounts of data, thus imposing a considerable burden on the communication links between edge devices and the central server, and raising data privacy concerns.In contrast, the proposed FL-DU scheme provides an efficient solution that overcomes these issues by requiring agents to share only their local denoisers (or priors) from the respective deep unrolling models, which capture detailed information regarding the structure and dependencies of the range images.
Another important aspect that stems from the proposed federated unrolling strategy is the fact that we can incorporate any privacy preserving strategy.Interestingly, the FL-DU with the homomorphic encryption achieves the same convergence behavior as compared to the FL-DU without the encryption part.
Comparison proposed FL-DU with the federated learning methods: As illustrated in Fig. 5 and Table II, the proposed FL-DU method considerably outperforms the comparative federated learning approach that uses a state-of-the-art deep learning model.This superiority is observed in both accuracy and convergence rate.Notably, our method achieves similar results to the centralized solution while requiring fewer communication rounds.Despite vehicles working with limited training data, the global model, which is derived from our FL-DU method, delivers a performance that aligns closely with the centralized solution.On the other, the performance of the compared FL-AE method fails to converge to the centralized solution.The vast parameter space of the SRAE model cannot be effectively optimized with limited data, thereby resulting in subpar performance.
The superiority of the proposed FL-DU method can be attributed to the fact that the proposed federated unrolling framework contains local deep unrolling models with concise structure requiring less data.The concise structure of the local deep unrolling models can be verified considering that these models contain 99.75% fewer parameters compared to the SRAE model, making it an ideal choice for real-world applications.
Overall, the above results verify the efficacy of the proposed federated deep unrolling framework.The FL-DU approach, enables vehicles to obtain more accurate and computationally efficient models by leveraging information from diverse datasets obtained from various autonomous vehicles.This information is used to learn a global prior for the range images.The derived prior, functioning as a denoiser, is subsequently applied to the local deep unrolling models, in order to tackle the lidar super resolution problem.The proposed framework provides the deep unrolling-based federated learning a clear and interpretable structure.In particular, the role of Federated Learning is to facilitate the merging of local learnable regularizers without compromising the privacy of individual datasets.
E. Impact of Lidar Super-Resolution on Lidar Based SLAM Approaches
In order to thoroughly assess the effectiveness of our proposed deep unrolling model, along with the federated learning approach, we examined its applicability in a real-world automotive scenario.To do this, we utilized the LeGO-LOAM [49] system, which is a Lidar based SLAM mechanism that offers real-time six-degree-of-freedom pose estimation and a generated 3D map.We tested it on two sequences from the Ouster dataset: r The first sequence consists of 2600 consecutive scans, representing a relatively simple trajectory followed by the vehicle.
r The second sequence is composed of 6000 scans that correspond to a more challenging trajectory with short, closely spaced loops.The primary goal of the Lidar SLAM is to deliver realtime six-degree-of-freedom pose estimation for ground vehicles equipped with 3D lidar sensors.The system achieves this by extracting planar and edge features and subsequently utilizing them to calculate different components of the six-degree-of-freedom transformation between consecutive scans.To investigate the influence of the Super Resolution (SR) approach on such a SLAM system, we conducted a performance comparison of the LeGO-LOAM algorithm using for distinct inputs: r High-resolution 3D point clouds reconstructed with the proposed centralized-DU method r High-resolution 3D point clouds reconstructed using our proposed FL-DU approach.In this case, we solely utilized the federated learning scenario with homomorphic encryption, as our findings demonstrated that it achieved practically the same performance as the corresponding federated learning scenario without the encryption part.In more detail, the point clouds were reconstructed using the derived model from the 50-th communication round of the FL-DU framework.r low-resolution 3D point clouds generated by a 16-channel lidar sensor r High-resolution 3D point clouds reconstructed using the centralized SRAE method [6].
r High-resolution 3D point clouds reconstructed using the Federated Learning method that uses the deep learning model of the SRAE method [57].For our analysis, we employed error metrics from earlier studies [58], [59].The results, including the output metrics, the cumulative distribution function (CDF) of Absolute Pose Error (APE) translation, and trajectory heatmaps, are presented in Table III and Figs.6, 7. The reference pose (trajectory) for these results is derived from a 64-channel lidar.
In our analysis, as illustrated in Figs. 6 and 7, we observe that both the proposed centralized-DU and FL-DU methods consistently outperform the 16-channel lidar and the SRAE [6] across all the examined trajectories.This highlights the superior accuracy of our reconstructed 64-channel lidar data compared to the state-of-the-art SRAE method [6] and the FL-SRAE approach.
Additionally, although the 16-channel lidar provides satisfactory results during the simple trajectory (2600 scans, see Fig. 6), in the more challenging trajectory (6000 scans), which contains close loops, the 16-channel lidar is not able to follow the reference trajectory (see Fig. 7).This can be justified by the fact that the LeGO-LOAM method relies on the availability of the edge and planar features to estimate the vehicle transformation, and thus it fails to generate robust features from the sparse point cloud derived from the 16-channel lidar.
In comparison to the traditional federated learning approach (i.e., FL-SRAE), our proposed Federated Deep Unrolling framework consistently demonstrates superior results across both trajectories.It's important to note that the local models used in the FL-SRAE approach consist of more than 30 million parameters.This not only imposes significant communication constraints but also necessitates an extensive and diverse set of training examples for these models, which local agents often lack, resulting in subpar performance.Conversely, our proposed FL-DU method, owing to its mathematical formulation, incorporates deep unrolling models with a concise structure.These models can be effectively trained with considerably fewer training examples.Thus, the FL-DU method is not only more efficient but also more practical in scenarios with data limitations, further highlighting the advantages of our approach.
Finally, our proposed FL-DU approach delivers results comparable to the centralized-DU method.The comparable performance of these two models, despite the reduced data exchange and privacy enhancement offered by the federated approach, indicates the potential of FL-DU as a practical and efficient method in real-world applications.In conclusion, our findings strongly support the efficacy and efficiency of our proposed deep unrolling models (both centralized and federated) as costeffective solutions in real-world automotive scenarios.
F. Ablation Analysis: Impact of Communication Rounds on the Performance of FL-DU
To further demonstrate the benefits of the proposed federated deep unrolling framework, in this section, we conducted an experimental analysis focused on the impact of the number of communication rounds exchanged between the server and the local agents.Specifically, we investigate how these rounds within the federated deep unrolling framework can influence the performance of Lidar SLAM.To this end, we examined the performance of three distinct models that emerged from the 6th, Authorized licensed use limited to the terms of the applicable license agreement with IEEE.Restrictions apply.Fig. 6.CDF and heatmaps for the centralized DU method, the proposed FL-DU framework using the DU model derived from the 50-th communication round, the 16-channel Lidar, the method SRAE [6] and the FL-SRAE approach using as reference trajectory the path derived from the lidar with 64 channels regarding the first path with the 2600 scans.We have also prepared a supplementary demonstration video1 that visually illustrates the quality of the reconstructed point clouds derived from the proposed FL-DU method.20th, and 50th rounds of the federated deep unrolling framework.Additionally, we compared the proposed FL-DU approach against the state-of-the-art centralized SRAE method [6] and the proposed centralized DU approach.Fig. 8 and Table IV summarize the results, showcasing the consistent enhancement in performance metrics, including mean error, root mean square error (rmse), and maximum error, as the number of communication rounds in the FL-DU model increases.Specifically, the model derived from the 50th round surpasses the performance of models from earlier rounds, ensuring that the iterative communication within the federated deep unrolling framework indeed contributes positively towards its efficacy.
Incorporating deep unrolling models during the adaptation phase of the FL-DU framework brings forth multiple advantages.Firstly, these models possess a concise structure, resulting in reduced communication overhead between the server and local agents.This reduction is due to the relatively small number of parameters compared to other state-of-the-art deep learning models.Secondly, the FL-DU framework requires a minimal number of communication rounds to converge to the centralized deep unrolling solution.Notably, it only takes 20 rounds for the FL-DU framework to surpass the performance of the centralized autoencoder (SRAE) solution.This efficiency in terms of both communication rounds and model parameters highlights the advantages of the FL-DU framework in federated learning settings.Fig. 7. CDF and heatmaps for the centralized DU method, the proposed FL-DU framework using the DU model derived from the 50-th communication round, the 16-channel Lidar and the method SRAE [6] using as reference trajectory the path derived from the lidar with 64 channels regarding the first path with the 6000 scans.Note that in the CDF plot, the 16-channel Lidar was excluded due to its insufficiency to yield satisfactory comparative results.Fig. 8. CDF and heatmaps for the proposed FL-DU method for different communications rounds i.e., round 6, 20, 50 regarding the first path with the 2600 scans.We have also prepared a supplementary demonstration video2 that visually illustrates the effectiveness of our proposed method in different communications rounds.
VI. CONCLUSION
The paper proposes a new approach for enhancing automotive Lidar super-resolution for simultaneous localization and mapping (SLAM) by addressing the high cost associated with high-resolution Lidar sensors.The method introduces an adaptive federated optimization approach, which involves multiple vehicles coordinated with a central server to learn a regularizer (a neural network) capable of capturing the intricate features and attributes of the Lidar data.
To effectively tackle the adaptive federated optimization problem, the adaptation part is based on a deep unrolling framework that converts an iterative convex optimization solver into a deep learning architecture, with the learnable parameters directly derived from the solution of the optimization problem.The capabilities of the deep unrolling technique are further extended by incorporating a combination step, combining the regularizers from the different collaborative vehicles towards creating a robust global regularizer capable of handling diverse environmental conditions.
The proposed mechanism is extensively evaluated through numerical experiments on a real-world Lidar-based SLAM application.The results demonstrate superior performance compared to other centralized deep learning based methods, while also achieving a significant reduction in trainable parameters.In fact, the proposed Super resolution model exhibits 99.75% fewer parameters compared to prevailing centralized deep learning based approaches.This study represents the first integration of deep unrolling with federated learning, presenting an efficient, explainable, and data-secure approach for automotive Lidar super-resolution and perception applications.
Fig. 1 .
Fig. 1.Proposed Lidar super-resolution framework.Given a low-resolution 3D point cloud derived from a 16-channel lidar, it is project onto a 2D low-resolution range image.This image is provided as input to the proposed deep unrolling model (derived from the solutions of optimization problem (4), see also Section III-B2) to estimate the corresponding high-resolution range image.The estimated image is then transformed into 3D coordinates, thus producing the desired high resolution point cloud for the Lidar SLAM problem.
Algorithm 1 :
Deep Unrolling Model -Training Procedure.Require:Low resolution range images Y , High resolution range images X, Ensure: Deep unrolling model Unroll the derived iterative solver for K iterations for k = 1 : K do
Fig. 2 .
Fig.2.Proposed Deep unrolling model.In particular, a small number of iterations, say K, of the local HQS solver in(11) are unrolled and treated as a deep learning architecture.Each iteration of the iterative solver is considered a unique layer of the proposed model, resulting in a K-layer deep learning architecture.
Algorithm 2 :
Proposed Federated Deep Unrolling Framework.Require:number of communication rounds M , local private datasets D n = {X n , Y n } for n = 1 . . .N. Ensure: Global denoiser for each communication round t = 1 : T do Edge device side: Adaptation step for each device i = 1 : N do Unroll the derived iterative solver for K iterations
r
centralized-DU: This represents our deep unrolling model used in a centralized context, where a central server gathers data from distributed edge devices to train the lidar superresolution model.
Fig. 4 .
Fig. 4. L1 loss of the derived global model from the proposed deep unrolling FL scheme with and without homomorphic encryption vs communication rounds along with the best accuracy achieved by the centralized deep unrolling model.
Fig. 5 .
Fig. 5. Loss of the derived global model from the proposed deep unrolling FL scheme with and without homomorphic encryption vs communication rounds against the classical federated learning framework with the SRAE model denoted as FL-SRAE.
TABLE II QUANTITATIVE
RESULTS -FEDERATED LEARNING
TABLE III LIDAR
SLAM: ABSOLUTE POSE ERROR W.R.T TRANSLATION PART (m)
TABLE IV LIDAR
SLAM: ABSOLUTE POSE ERROR W.R.T TRANSLATION PART (m) -IMPACT OF DIFFERENT COMMUNICATION ROUNDS OF THE PROPOSED FEDERATED DEEP UNROLLING FRAMEWORK | 13,165.6 | 2024-01-01T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Computer Science"
] |
Graphene Transfer: A Physical Perspective
Graphene, synthesized either epitaxially on silicon carbide or via chemical vapor deposition (CVD) on a transition metal, is gathering an increasing amount of interest from industrial and commercial ventures due to its remarkable electronic, mechanical, and thermal properties, as well as the ease with which it can be incorporated into devices. To exploit these superlative properties, it is generally necessary to transfer graphene from its conductive growth substrate to a more appropriate target substrate. In this review, we analyze the literature describing graphene transfer methods developed over the last decade. We present a simple physical model of the adhesion of graphene to its substrate, and we use this model to organize the various graphene transfer techniques by how they tackle the problem of modulating the adhesion energy between graphene and its substrate. We consider the challenges inherent in both delamination of graphene from its original substrate as well as relamination of graphene onto its target substrate, and we show how our simple model can rationalize various transfer strategies to mitigate these challenges and overcome the introduction of impurities and defects into the graphene. Our analysis of graphene transfer strategies concludes with a suggestion of possible future directions for the field.
Introduction
Graphene, a two-dimensional hexagonal carbon allotrope, boasts a number of electronic, mechanical, thermal, and optical properties that make it very attractive for incorporation into next-generation electronic and optical devices [1][2][3][4][5]. Graphene fabrication proceeds via a variety of processes, but only a few of these processes are suitable for commercial production of graphene-based electronics. In particular, epitaxial growth of graphene on silicon carbide and chemical vapor deposition (CVD) of single-or few-layer graphene on copper or nickel are two of the most promising techniques for high-yield synthesis of electronics-quality graphene [6,7].
Although both epitaxial and CVD growth can produce high-quality single-layer graphene, it is often desirable or even necessary to transfer graphene from its original growth substrate onto a different substrate for the material to be useful in device applications. This is most apparent with CVD-grown graphene, where the graphene sits atop a metal substrate. A conductive, opaque metal substrate precludes many applications and therefore must be exchanged for another substrate which is more practical (usually an insulator or semiconductor for electronics applications and a transparent material for optical applications). Epitaxial growth does not have this problem explicitly, as SiC is an insulator. However, due to uneven growth patterns and step edges on epitaxially grown graphene, it is also often desirable to consider transfer of graphene to an more suitable substrate [8].
In this review, we present current knowledge concerning the transferring of graphene from one substrate to another. A number of reviews have appeared recently which focus on various aspects of the graphene transfer process [9,10], including maintaining the cleanliness of graphene [11,12], the nature of a polymer support layer [13], and prospects for industrial scale-up [14]. The present review focuses on graphene transfer with an eye toward the specific physical mechanisms and phenomena that influence the quality of the transfer. This review therefore does not seek to exhaustively cover every report on graphene transfer, but rather to illuminate important conceptual advances in the science of the graphene/substrate interaction, with particular focus on graphene transfer and device fabrication. Examining graphene transfer from a mechanistic point of view brings into sharp relief many of the issues that arise in employing transfer techniques when building electronic devices incorporating graphene. Understanding the transfer process on a conceptual physical level also provides insight into how to overcome these issues and design better graphene device fabrication processes.
Existing reviews tend to divide graphene transfer methods into etching and nonetching, based on whether graphene is separated from its substrate physically or whether the substrate is chemically etched away to create freestanding or polymer supported graphene. We have chosen not to divide the methods this way, but have instead considered the transfer process as consisting of two main parts: (1) delamination of graphene from its original substrate, and (2) relamination of graphene onto its target substrate ( Figure 1). We have organized the review around these two steps since it makes apparent the fact that different steps in the transfer process are prone to induce different types of defects and impurities in the transferred graphene. This division also highlights the fact that an overall mitigation strategy should encompass all aspects of the process. Throughout the rest of this review, we will use the terms "original substrate" and "target substrate" to refer to the substrates from which and to which the graphene is transferred, respectively. The remainder of this paper is structured as follows. In Section 2, we introduce the main challenges that arise during the graphene transfer process. We group them mainly by whether these issues are most likely to arise during delamination of graphene from its original substrate or during relamination onto its target substrate. In Section 3, we introduce a simple physical model for the adhesion of graphene to its substrate. In Section 4, we perform a survey of transfer strategies specifically for the graphene delamination step, focusing in subsections on each of the relevant variables from the model introduced in Section 3. In Section 5, we perform another survey of transfer strategies, this time focusing on the graphene relamination step. In Section 6, we conclude with a few words about possible future directions of research.
A note about terminology: in general, graphene does not explicitly form covalent bonds with its underlying substrates. Several covalently bound carbon morphologies are present during epitaxial growth of graphene on silicon carbide, but even in this case, the carbon overlayer is generally not considered graphene until no covalent bonds exist between it and the underlying "buffer layer" [15]. We will follow this terminology here, and only consider graphene to be a hexagonal two-dimensional form of carbon which does not explicitly form covalent bonds with the underlying substrate material. However, despite the absence of covalent bonding, the adhesion of graphene to its substrate can be quite strong, especially once close conformal contact has been achieved.
Issues Arising in Graphene Transfer
The promise of incorporating graphene into electronics runs up against several practical issues, but the two most important points related to graphene transfer are those of cleanliness and structural integrity. The superlative electronic properties of graphene are extraordinarily sensitive to chemical doping as well as crystal defects [16][17][18][19][20][21]. Ideally, the graphene transfer process would allow the placement of pristine graphene onto an arbitrary substrate with control over doping, wrinkling, tearing, or folding. However, extensive work has shown that existing transfer techniques can fall short of this ideal, with transferred graphene bearing significant residual metallic [22] contamination from polymer support [23][24][25] and extensive physical damage to the graphene ( Figure 2). The two main steps of graphene transfer are delamination from the original substrate and relamination onto the new target substrate. As mentioned above, the primary issues which arise during delamination are not necessarily the same as those which arise during relamination. In the case of CVD grown graphene, the original substrate is the metallic substrate on which the graphene was grown. Removal of this substrate using chemical etchants exposes graphene to metal ions, which can potentially dope the graphene, changing its electronic properties [29]. One study measured a residual metallic contamination of greater than 10 13 atoms·cm −2 , regardless of the extensiveness of the post-etching cleaning procedure [22]. Another study measured the effect of these residual metal ions leftover from transfer on graphene's electrochemical properties, showing that these metallic species could have an electrocatalytic effect on certain reduction reactions [30]. The implication is that researchers must take care when reporting unexpected properties of graphene, as the actual explanation for those observations might lie with impurities introduced in the transfer and subsequent processing of graphene (rather than with the graphene itself).
In addition to chemical impurities introduced during delamination, physical damage of the graphene sheet might also occur. This damage can appear regardless of whether or not chemical etching is used as a delamination strategy. Mechanically, graphene is one of the strongest materials ever observed, with a single sheet having an elastic modulus of roughly 1 TPa, as observed by atomic force nanoindentation experiments [31]. However, it is important to remember that, regardless of superlatives, graphene is only 1 atom thick, and the forces in the aforementioned measurements which resulted in mechanical failure of the material were on the order of micronewtons. Realistically, this force can easily be exceeded in transfer procedures involving mechanical peeling from surfaces. Graphene floating freely on water without a support can also undergo mechanical damage [24]. Even with a polymer support, cracks and tears can be introduced [27,32]. Moreover, the surface tension of water (72 µN·mm −1 ) is high enough that formation and popping of bubbles, whether generated during electrochemical bubble transfer or during chemical etching of the substrate, can also exceed the mechanical failure threshold of graphene [33,34].
Relamination of graphene onto the target substrate presents its own set of unique issues to overcome. Foremost among these issues is contamination of the graphene arising from removal of an assistive polymer support. A number of reports have detailed the polymer residue left behind on graphene as well as different strategies for their removal. This problem is not unique to CVD-grown graphene; mechanically exfoliated graphene (Scotch Tape method) also often exhibits glue residue [35]. The most common polymer removal methods use some combination of solvent cleaning and thermal annealing under a reducing atmosphere [25,36]. Thermal annealing has the secondary benefit of eliminating volatile molecular species that might be trapped between the graphene and the target substrate after relamination. However, Kumar et al. found that thermally annealing graphene transferred using poly(methyl methacrylate) (PMMA) as a polymer support in a reducing atmosphere is found to increase strain and chemical doping of the graphene [24]. Lin et al. attribute this effect to the thermal breakdown of PMMA and possible covalent binding of breakdown product fragments to the graphene [23]. Vacuum annealing was shown to reduce, but not eliminate completely, these polymer residues [24,28,37].
Mechanical damage to graphene is also an issue during the relamination process, but one important challenge that is unique to relamination is control over wrinkling of graphene on the target substrate. As a thin sheet, graphene buckles and folds over itself easily, and it is extremely difficult to remove folds and wrinkles from graphene once it is deposited onto its final target substrate. Graphene wrinkles have unique electronic properties that may be exploited in some circumstances [38,39], but often it is desirable to eliminate them. While methods exist for eliminating wrinkles by judicious choice of target substrate [40], the most straightforward way to avoid wrinkle formation is never to introduce them in the first place. Doing so requires understanding the factors which cause wrinkles to appear during graphene relamination.
General Overview
The graphene-substrate adhesion is mediated solely by van der Waals (vdW) forces. Dispersion forces, which are the weakest of the intermolecular forces, dominate this interaction in general [41], but depending on the substrate, the adhesion energy can also have contributions from ionic and covalent interactions [42]. However, due to graphene's enormous surface area to volume ratio and the close conformation of a graphene sheet to its substrate, the additive nature of the forces renders them quite strong [43]. Whereas the vdW-force-mediated interaction energy per unit area between the planar surfaces of two extended bodies is proportional to 1/r 2 , where r is the distance between the surfaces, the areal interaction energy between an extended body and an infinitely thin plane of atoms is proportional to 1/r 3 [44]. This difference in the exponent is important: at small r, it leads to much stronger adhesion between an ultrathin film and a substrate than in the case of a thick film. However, at larger r, it leads to weaker adhesion.
Moreover, the van der Waals interaction itself is intrinsically strong for graphene. The interaction strength is proportional to the integral of the imaginary part of the dielectric function, which is dominated by the DC conductivity of the material [45]. The high conductivity of graphene, a zero-bandgap semiconductor, thus translates to a higher interfacial interaction energy between graphene and its substrate than can be found in other 2D materials of lower conductivity. This factor impacts not only graphene's contribution to the interaction energy, but the contribution of the substrate as well. One therefore expects that graphene will adhere more strongly to a metallic substrate than to an insulating substrate. A caveat here is that an insulating substrate containing a high density of local dipoles on the surface can induce local electrical polarization in the graphene, known in the literature as "charge puddles," which may increase the adhesion of graphene to the substrate [46].
The physics of the graphene-substrate interaction informs a number of various strategies for removing graphene from one substrate and transferring it to another. Gathering the observations from the preceding paragraphs, we can write the interaction energy in a simple illustrative form: where C s and C g represent interaction coefficients determined by the detailed atomic and electronic structure of the substrate and the graphene, respectively. Strategies for graphene transfer aim to weaken this interaction energy during delamination from the original substrate and strengthen it during relamination onto the target substrate, and numerous examples are present which target each of the variables C s , C g , and r ( Figure 3). We will therefore subclassify transfer strategies in this review according to which variable they alter.
We stress that many, if not most, of the reports on graphene transfer methods in the literature employ multiple strategies throughout the process, so that individual papers often do not fall cleanly into a single category of strategies that we aim to discuss. Our association of a paper in this review with a certain strategy is not meant to imply that no other strategies are used in that paper; rather, it should be taken to mean that the paper in question is a particularly useful illustration of the strategy with which we have associated it.
Technical Details
The detailed results of the physics described in the previous subsection are given here. We follow Israelachvili in this section [44] and assume that the graphene-substrate interaction (1) is dominated by van der Waals interactions, (2) is additive, and (3) lacks retardation effects that become important at large graphene-substrate separations.
We begin by describing the origin of the 1/r 3 dependence of the interaction energy in Equation (1). From the aforementioned assumptions, the potential energy between two pointlike molecules is: where C is the specific dispersion constant for the interacting species. The assumed additivity of the interaction allows us to compute the potential energy between a molecule and a thick flat surface (which will ultimately represent our substrate) by integrating over an infinite half-volume to obtain the result: where ρ s is the density of the substrate molecules and D is the distance between the molecule and the surface of the substrate. To obtain the graphene-substrate interaction, we first note that the interaction energy of two infinitely extended parallel surfaces will be infinite, so that we must look instead at per-unit-area interaction. Ordinarily, to obtain the interaction between two flat surfaces, we would assume their thicknesses t 1 and t 2 were much larger than the distance separating them: t 1 ≈ t 2 D. This would lead us to integrate a thin layer of the second surface over its total thickness in the z direction to obtain the familiar result: where ρ 1 and ρ 2 are the atom densities of substrates 1 and 2. However, in the case of a monolayer of graphene, integration in the z direction is not appropriate, and the graphene thickness t g ≈ dz simply becomes a parameter to give an interaction energy of: where ρ g is the density of the graphene. This is where the 1/r 3 dependence arises in Equation (1). Equation (5) is likely more familiar to physicists when written in terms of Hamaker constants A = π 2 Cρ 1 ρ 2 [47]. The Hamaker constant in this case is dependent on the material properties of both graphene and substrate as well as any dielectric material in the gap between the graphene and the substrate. Explicitly: where k is Boltzmann's constant, T is temperature, α 1 and α 2 are the polarizabilities of species 1 and 2, and ε 3 is the dielectric permittivity of the material in the gap between the species all at imaginary frequencies. We also note that the n = 0 term in the summation is to be halved [48,49]. The expression in Equation (6) is the detailed version of our simplified substrate and graphene interaction coefficients in Equation (1) and shows explicitly how the interaction energy depends on the material properties of each of the components in the system. Strictly speaking, the contributions from the graphene and the substrate are not as cleanly separable as we have indicated in Equation (1). However, for heuristic purposes, the dependence of the interaction energies on polarizability is more or less directly proportional. One important point to note about Hamaker constants is that they can be negative, generally when the dielectric properties of the material in the gap are intermediate between those of species 1 and 2 [50]. A negative Hamaker constant implies a repulsive van der Waals interaction. This presents yet another possible strategy for separating graphene and its substrate: introduce a material in the graphene-substrate gap with intermediate dielectric properties, thus effecting a repulsion between graphene and substrate. To the best of our knowledge, no one has yet attempted this strategy specifically. Given the large range of values reported for graphene's dielectric properties [51,52], this strategy may not currently be practical.
Transfer Strategies: Delamination of Graphene
Delamination of graphene proceeds when the adhesion interaction between graphene and its original substrate is weaker than the interaction between graphene and some separating force, be it a secondary facilitating substrate such as a polymer support, a layer of water intercalating between graphene and its substrate, or a direct transfer to another substrate with a stronger adhesion to graphene. Thus, according to our model in Section 3, strategies for successfully delaminating graphene from its original substrate will involve either weakening the intrinsic substrate interaction coefficient C s , weakening the intrinsic graphene interaction coefficient C g , or increasing the distance r between graphene and the substrate.
As noted in Section 2, the delamination of graphene from its original substrate can introduce residual metallic impurities from the substrate as well as tears and cracks through mechanical deformation of the material. Many of the transfer techniques we will discuss were developed in order to overcome these issues. We will use our graphene/substrate adhesion model to classify the techniques mechanistically to obtain a better understanding of how altering the graphene-substrate interaction in various ways mitigates or exacerbates specific imperfections introduced during the transfer process.
Strategies Changing the Substrate Interaction Coefficient
Since the dielectric function is so intimately related to conductivity, one strategy for weakening C s is to chemically alter the substrate to decrease its conductivity. In the case of a metallic substrate, this involves either oxidation of the metal, possibly to include chemically etching and removing the metal, or reduction of the passivating metal oxide layer to a metal, possibly capped with an intercalated neutral layer. Alternatively, one may provide a secondary transfer substrate, such as a polymer support, which interacts with graphene more strongly than the original substrate. Since the issues introduced when using a polymer support generally do not arise until relamination and removal of the support, we will treat polymer supports more extensively in Section 5.
Decreasing the substrate interaction coefficient can mitigate or exacerbate certain issues. When the methods used are chemically destructive to the substrate, metal dopant contaminants can be a major concern. Non-etching methods such as bubble-free electrochemical transfer can mitigate these issues to some extent, but still might feature a low level of chemical etching. However, the benefit of simplicity and low cost of these etching types of transfer often outweigh the drawbacks of metallic contamination, and these methods have become the most popular techniques for delaminating CVD-grown graphene from a metallic substrate.
Chemical Etching of the Substrate
Since no covalent bonds are formed between graphene and its substrate, chemical methods allow the clean removal of the substrate via etching without damaging the graphene. By far, the most common situation calling for graphene transfer is from the metallic growth substrate to an insulating substrate. In this case, aqueous oxidizers are typically used to convert the substrate to water-soluble metal salts. The earliest, and still most common, method for transferring graphene from a metallic growth substrate involves adding a thin support layer, prototypically poly(methyl methacrylate) (PMMA), onto the top side of the graphene and subsequently oxidatively etching the metal from the underside of the graphene using either ammonium persulfate or iron (III) chloride ( Figure 4a) [27,[53][54][55]. A similar early method exists for transferring graphene from a silicon oxide donor substrate, involving etching of the SiO x with NaOH in water ( Figure 4b) [56]. The PMMA support prevents the delicate single layer graphene from folding back on itself or breaking up during the metal etching process. Also, since the PMMA/graphene layer floats, the graphene can easily be retrieved from the water bath onto an arbitrary substrate. The polymer support can then be dissolved using acetone [54] or acetic acid [57], thus completing the graphene transfer.
Non-Etching Methods to Weaken the Substrate Interaction Coefficient
Cabrero-Vilatela et al. used a combined approach of oxidizing the copper growth substrate in water and mechanically peeling graphene from the substrate with an atomic layer deposited (ALD) layer of Al 2 O 3 and an adhesive (Figure 5a). The authors claim that use of the ceramic layer prevents contamination from a polymer support [58]. We point out, however, that other researchers report that ALD is only effective on graphene with a certain defect density; ALD on pristine graphene is ineffective due to the lack of seed sites to initiate atomic layer growth [59]. Grebel et al. employ a similar method using either alumina or hafnia deposited via ALD, but they also employ copper etching methods in their work [60]. Bubble-free electrochemical methods for transfer have also been put forward, with the reasoning that the H 2 bubbles formed during bubble transfer (vide infra Section 4.3.2) can mechanically damage the thin, fragile graphene. Cherian et al. studied low-potential electrochemical methods where they described the mechanism of delamination as the reduction to metallic copper of the copper oxide passivating layer between copper and graphene, weakening the substrate [62]. Wang et al. extended this method by carbonating the electrolyte bath, coupling carbonate-based chemical reduction of copper oxide with its electrochemical reduction, with the ancillary goal of being able to recycle the copper growth substrate (Figure 5b-d) [61].
Strategies Changing the Graphene Interaction Coefficient
Just as in the case for C s , the most straightforward way to weaken C g is to chemically modify graphene to decrease its conductivity. Whitener et al. showed that chemical hydrogenation of graphene decreases the conductivity of the material by several orders of magnitude, while simultaneously weakening the adhesion between graphene and substrates as diverse as metal growth substrates, silicon oxides, and polymers ( Figure 6) [63]. Additionally, hydrogenation of graphene is reversible chemically and thermally [64][65][66][67][68], so that once the hydrogenated graphene is on the target substrate, it can be restored to pristine graphene, thereby completely avoiding the need for etchants or polymer support. Physically, these techniques share a commonality with water-based delamination of a wide variety of nanomaterials [69][70][71][72][73]. The basic concept is that the separation of a 2D material and its substrate, which is generally energetically unfavorable, can be coupled with the interaction of the 2D material and the substrate with a third component (in this case, water) which is very energetically favorable. Hence, for example, when graphene is hydrogenated, the interaction with its substrate is weakened. Then, when the system is dipped in water, the water interacts strongly with both the substrate and the graphene to act as a wedge, easily and cleanly separating the graphene and the substrate. In addition, the hydrophobicity of the hydrogenated graphene [74][75][76][77] likely stabilizes the graphene on the water surface without the need for a polymer support. The hydrogen functionality acts akin to a surfactant, mimicking a technique that has been previously employed in aqueous exfoliation of bulk graphite [78].
As mentioned, the strategy of changing the graphene interaction coefficient can avoid the use of metal etchants as well as polymer supports, eliminating contamination from these two sources. In addition, the surfactant-like nature of hydrogen functionalization gives the graphene a tendency not to self-adhere. This might ultimately lead to less wrinkling. However, the lack of a support means that graphene is much more prone to mechanical damage during the transfer process. The removal of hydrogen functionality after relamination also represents an extra step in the graphene preparation process, which also ultimately decreases its attractiveness. The main advantage of this transfer strategy over others is the ability to functionalize graphene and have it maintain its functionality throughout the transfer process.
Strategies Increasing the Graphene-Substrate Distance
The interaction energy between graphene and its substrate falls off rapidly with distance. Thus, physically increasing the distance between graphene and its substrate is an effective strategy for non-destructive transfer of graphene. A number of prominent approaches in this category are represented in the literature. These include intercalation methods and electrochemical methods (sometimes referred to as "bubble transfers") as well as mechanical and mechanochemical methods involving pressure-sensitive and thermal release adhesives.
Since these strategies do not strictly employ chemical etching of the original substrate, metallic contamination tends to be less of an issue for them. However, as mentioned above, electrochemical strategies might release a small amount of metal ions, which must be accounted for in high precision experiments. Regardless, the graphene transferred using these methods tends to be cleaner and more free from dopants than in the case where the substrate is etched away.
The main drawback of methods increasing graphene-substrate distance is that physical force is applied to the graphene to separate it from its original substrate. This force can cause tears and cracks to form in the graphene (e.g., Figure 2a). The main challenge for these strategies is therefore mitigating the separating force, which can be achieved in a number of ways. We discuss some of these innovative ways in more detail below, including directional etching of substrates [79], ensuring close conformal contact between adhesives and graphene [80], and using easy-to-remove original growth substrates such as liquid metals [81].
Intercalation Methods
Intercalation methods seek to insert atoms and molecules in the space between the graphene and the substrate. Graphite intercalation compounds are well-known, especially with respect to alkali metal intercalation compounds [82]. Several intercalation methods have been demonstrated. Verguts et al. examined a bubble-free electrochemical delamination method and showed that water intercalation between the graphene and the substrate was crucial for effective separation of the layers. They further showed that intercalation inhibits relamination onto a target substrate [83]. This is to be expected, as intercalation increases graphene-substrate distance, facilitating delamination. However, relamination is favored when the distance between graphene and the target substrate is decreased, so deintercalation should be the goal for the relamination step.
Other more involved intercalation schemes have been explored. Recently, Guo et al. have demonstrated intercalative growth of silicon oxide between graphene and a ruthenium metal growth layer. The presence of silicon oxide increases the distance between the graphene and the metal substrate. However, the researchers did not use the separator to facilitate transfer; they used the intercalated silicon oxide as a new substrate on which to build graphene devices directly [84]. Ma et al. intercalated carbon monoxide gas between graphene and a platinum original substrate. Combined with a polydimethylsiloxane (PDMS) stamping and peeling method, they were able to cleanly remove graphene from the platinum in order to recycle the precious metal for further CVD graphene growth [85]. Ohtomo et al. were able to intercalate a thiol-based self-assembled monolayer (SAM) between graphene and its metallic original substrate [86]. The effect on decreasing adhesion energy was twofold here. First, the long hydrocarbon chain of the SAM provided a spacer over 1 nm in length to increase the distance between graphene and its substrate. Second, the interaction between the hydrocarbon SAM surface and the graphene was inherently weaker than the interaction between the metallic substrate and the graphene.
Electrochemical Methods, Including "Bubble Transfer"
Electrochemical methods pioneered by Wang et al. involve immersing the graphene on copper in water and applying a bias to the copper substrate (Figure 7a) [33]. This leads to water electrolysis and H 2 bubble formation between the graphene and the metal, physically increasing the distance between the graphene and its substrate. Strictly speaking, as there is no water between the graphene and the metal to begin with, water must somehow intercalate between graphene and metal at the graphene edge [87], so these methods are also intercalation methods. In addition, since there is often some electrochemistry going on at the graphene-metal interface, some of these methods might also feature slight chemical etching. However, Gao et al. observed no metal ion contamination of bubble transferred graphene from platinum, pointing to the chief advantage of non-etching transfer methods: avoiding the introduction of metal ion impurities onto the graphene [87]. (Figure 7b) [88]. The transfer of graphene from copper onto a flexible and transparent polymer (FTP) was carried out with the aid of an ethylene-vinyl acetate (EVA) coat. Delamination occurs via electrochemical hydrogen bubbling with the copper substrate acting as a cathode. Then, the graphene/EVA/FTP structure separates from the copper as H 2 bubbles are generated between the graphene and copper substrate.
One interesting strategy to mitigate cracking is directional etching of the metal growth substrate. Zhang et al. performed copper etching in ammonium persulfate solution while applying a bias across two electrodes. The copper was observed to dissolve preferentially from the cathode-oriented side to the anode-oriented side. Presumably, this method mitigates cracking since the stress differential applied to the graphene is more uniform than when the copper is etched in random locations across the graphene when no bias is applied [79].
Mechanical Methods
Mechanical methods rely on the van der Waals force between graphene and its initial substrate being weaker than the force between graphene and either its receiving substrate or an intermediate substrate that acts as a transfer aid. These intermediate substrates have generally consisted of adhesive polymers added to the top side of the graphene, to physically pull the graphene from the substrate. We saw in the section on etching strategies that several groups have examined the use of adhesive tapes in conjunction with etchant baths which explicitly remove the substrate. However, adhesives have also been used without etchant to transfer graphene from metallic or silicon carbide growth substrates. These adhesive techniques are quite varied. Popular techniques include the use of PDMS elastomer stamps [89,90] to peel graphene off its growth substrate and release it onto arbitrary substrates. An interesting example of mechanical-based transfer was demonstrated by Kim et al., to cleanly isolate single-layer graphene from a silicon carbide growth substrate. Epitaxial graphene grown from SiC tends to produce bilayer graphene "stripes" at step edges. To eliminate these stripes upon transfer, the team first deposited onto the graphene a strained nickel film produced by alternately vapor depositing and sputtering nickel onto the target. The strain and adhesion allowed for facile delamination of the graphene from the SiC. Then, to eliminate the bilayer graphene stripes, a second film of gold was added to the other side of the graphene. The gold-graphene adhesion energy was intermediate between the nickel-graphene and the graphene-graphene adhesion energy, allowing the gold to remove the stripes while leaving the continuous single layer graphene intact (Figure 8a) [8]. (a) Nickel sputtering onto epitaxial graphene grown on SiC, followed by mechanical exfoliation of graphene. The nickel substrate adheres the graphene more strongly than the original SiC growth substrate. Adapted with permission from Ref. [8]. Copyright 2013 American Association for the Advancement of Science. (b) Liquid-phase adhesive-assisted delamination of graphene from an original copper growth substrate. The liquid gel is highly conformal to the graphene, augmenting its adhesive energy above that of the copper and allowing subsequent mechanical delamination. Adapted with permission from Ref. [80]. Copyright 2021 American Chemical Society.
Stronger interactions than simple dispersion forces have also been exploited to separate graphene from its substrate. Lock et al. employed a diazonium-based adhesive to directly bind graphene sheets covalently, increasing the force that could be applied for separation of graphene from its substrate [91,92]. Seo et al. used an amine-rich viscoelastic polyethyleneimine (PEI) polymer gel as a transfer layer to facilitate mechanical delamination. The mechanism here is two-fold: first, the soft gel perfectly conforms to and coats the graphene, such that the PEI makes a high-surface-area contact with the graphene; and second, the PEI n-dopes the graphene such that the electrostatic adhesive interaction between the graphene and the PEI-GA layer exceeds that of the graphene-copper bond (38 J m −2 ). These two effects contribute to the complete wrinkle-free delamination of the graphene from its substrate (Figure 8b) [80]. The concept employed in all of these transfer techniques is that the interaction energy between the peeling agent and the graphene should be larger than the interaction energy between the graphene and the initial substrate.
Mechanical methods which do not use intermediate adhesive substrates are also known. In this case, the relevant interactions are the comparative magnitudes of the van der Waals force between graphene and its initial and final substrates. Fechine et al. used hot pressing with a variety of polymers to transfer CVD graphene directly from copper to the polymer [93]. They found that the increased contact area achieved by applying molten polymer to the graphene aided in increasing the strength of interaction between graphene and the polymer relative to the substrate. Another example of direct mechanical transfer involves using molten gallium as a liquid metal growth substrate for CVD graphene. Fujita et al. demonstrated low temperature (50-150 • C) growth of graphene at the interface between molten gallium and either sapphire or polycarbonate. The gallium was removed in the liquid state using a stream of N 2 gas [81].
Laser-Assisted Methods
Finally, a number of groups have recently demonstrated a novel way of transferring graphene across an air gap by focusing a laser pulse onto a graphene-coated substrate ( Figure 9) [94][95][96][97][98]. A disk of graphene at the highest-fluence area on the substrate detaches and is pushed onto a new substrate. Depending on the nature of the experimental setup, the exact mechanism for delamination can vary. In general, though, it either involves trapped gases that expand to force the graphene away from the substrate [97], or potentially a photothermally-induced graphene-substrate lattice mismatch which weakens the interaction strength between the two species [94]. Spatial localization of the graphene transfer can be very precise using laser-assisted methods. However, the transfer often consists of many steps (Figure 9a), and damage can be introduced at several points during the transfer. Somewhat counterintuitively, however, Praeger et al. report that their laserinduced backward transfer experiment shows little sign of oxidative damage to graphene when performed at ambient pressure. This observation suggests that heat transfer from the graphene is sufficiently rapid to prevent the material from burning in the air under the high laser fluence [94].
Other Methods and Combinations
Incorporating a number of these ideas, Chandrashekar et al. devised an impressive method for roll-to-roll transfer of graphene from copper to a polymer by combining hot pressing of the polymer, resulting in favorable graphene-polymer interaction, with hot water delamination of the graphene from the copper [99]. Shivayogimath et al. employed the same idea using a simple and inexpensive office laminator for the hot-pressing and roll-to-roll transfer. This process consists of oxidatively decoupling the graphene from the catalytic surface, then laminating polyvinyl alcohol (PVA) on graphene's surface. The strong adhesion between graphene and PVA layer supports the mechanical delamination of graphene from the copper substrate and largely damage-free, as indicated by mobility and Raman spectral data [100].
Transfer Strategies: Relamination of Graphene
Relamination of graphene as a separate step in the transfer process has seen far less specialized attention than delamination. However, the relamination process is extremely important to the quality of the final product, raising issues that are not typically seen during the delamination step of transfer. Wrinkling and folding, for instance, are far more likely to happen upon relamination of graphene than delamination. The presence of wrinkles can also lead to other defects: trapping residual polymer or metallic contaminants by shielding them from cleaning procedures, and causing tears and cracks by introducing strain into the graphene lattice [39].
Our graphene substrate interaction model suggests that the main factors affecting graphene adhesion are tuning of the intrinsic substrate interaction (namely, ensuring that the target substrate provides a stronger interaction than either the original substrate or a temporary polymer support), tuning of the intrinsic graphene interaction, and manipulation of the distance between graphene and substrate, where a close conformal contact between graphene and the target substrate will tend to mitigate issues arising specifically during relamination such as wrinkles.
Strategies Changing the Substrate Interaction Coefficient
Physically, relamination of graphene is simply the inverse of delamination, so that promoting strong adhesion to the target substrate is largely a matter of inverting the conditions that one would promote for weakening adhesion during delamination. In reality, the processes are not complete inversions of one another. For instance, instead of strengthening the substrate interaction coefficient for relamination, it is much more common to delaminate graphene onto a temporary substrate which strongly adheres to it and subsequently find ways to weaken the adhesion to this temporary substrate in favor of the permanent target substrate. This strategy manifests itself most clearly in the form of a temporary polymer support applied during graphene delamination, which is subsequently removed upon relamination.
However, support removal can leave polymer residues which change the properties of the graphene, and therefore a number of techniques have been developed to limit the initial amount of residue left behind by polymer support during transfer, with the goal of obviating the need for extended cleaning procedures. Ren et al. reported a modification of the standard chemical etching with polymer support technique in 2012, where they omitted the PMMA step completely, simply allowing unsupported graphene to float freely on the etchant bath [101]. However, the unsupported graphene is so fragile that this method requires minimizing mechanical disturbances, even avoiding vibration of the etching bath by mechanical pumps or people walking nearby. Therefore, in general, the advantages of using a polymer support to prevent mechanical damage and breakup of the graphene usually far outweigh the drawbacks of removing polymer residue from the transferred graphene.
PMMA Removal and Cleaning
As mentioned above, the most widely used graphene transfer techniques employ a temporary polymer substrate. This is both to support the graphene mechanically when its original substrate is removed and to facilitate the transfer by being, in effect, a substrate which exhibits tunable adhesion to graphene. The most prominent example is the use of PMMA as a polymer support. While the PMMA adheres strongly to graphene when it is in a solid state, dissolving it in acetone causes its adhesion to drop dramatically and makes it easily removable from the graphene surface.
It has been shown that PMMA and other support polymers leave behind physisorbed and chemisorbed species after being dissolved off graphene with acetone. Lin et al. showed that PMMA residues remain even after annealing at high temperatures in forming gas [23]. Therefore, a significant amount of effort has gone into developing effective cleaning procedures for graphene [102,103]. Some evidence exists that PMMA can be thoroughly cleaned from graphene with careful annealing in air [104]. However, this method risks burning the graphene or, at the very least, introducing oxidative impurities onto the graphene lattice. Thus, extreme care must be taken with the conditions of air annealing.
In the same spirit, Liang et al. developed a "modified RCA" cleaning procedure [36], by adapting a silicon wafer cleaning procedure first introduced by Werner Kern in 1965 while working for RCA [105]. Their method consists of alternating basic and acidic hydrogen peroxide washes, and care must be taken to avoid gas formation associated with the decomposition of H 2 O 2 , which would agitate and possibly damage the single layer graphene. Sun et al. determined using Raman spectroscopy that the main PMMA-based contaminant remaining after thermal annealing was non-covalently bonded methoxycarbonyl side chain material, and they used this finding to develop an electrolytic cleaning technique [106]. The graphene acts as a cathode in an electrolytic cell, and the H 2 bubbles generated from electrolyzed water aid in liftoff of the polymer residues, in a mechanism similar to the bubble transfer technique for lifting off graphene. Park et al. later showed that the formation of bubbles was not strictly necessary, by performing a bubble-free electrochemical cleaning procedure in nonaqueous solution [107].
Releasing Layers as Polymer Support
Another technique to limit polymer residue is the use of releasing layers, which are polymer support layers composed of material that only weakly or tunably adheres to graphene. It has been shown that polymers such as polystyrene or polyisobutylene have relatively weak interaction with graphene [108]. When they are used as a support layer, etching of metallic substrates proceeds as normal, but adhering the graphene to its new target substrate is possible, even when that substrate is hydrophobic. This is due to the fact that the adhesion energy between graphene and the target substrate is more favorable than adhesion between graphene and the releasing layer.
This releasing layer idea can be taken a step further by using thermal release tape ( Figure 10) [109,110]. This material binds strongly to graphene at low temperatures, but the adhesion becomes significantly weaker at higher temperatures. Etching-assisted direct transfer of graphene by hot-pressing or hot embossing is also quite effective, and can produce large-area sheets of graphene continuously in a roll-to-roll process (Figure 10b,c) [110][111][112][113]. Hot pressing is advantageous when the target substrate is rigid, as is the case with silicon wafers or many other ceramic semiconductors, which cannot be processed in a roll-to-roll fashion. The above-cited processes are all wet-transfer, meaning that even though adhesion is achieved between graphene and a target substrate, the copper growth substrate is still etched away chemically.
Other Methods Relying on the Substrate Interaction Coefficient
Calado et al. showed that wrinkling of graphene was minimized when the target substrate was hydrophobic. They hypothesized that water plays an important role in the formation of wrinkles, and that wrinkles provide a conduit through which water drains from between graphene and target substrate upon relamination [73]. A hydrophobic substrate preferentially excludes water and presumably eliminates wrinkles in this manner. Kim et al. provided a generalized method for obtaining wrinkle-free graphene using a similar concept, by employing a low surface tension, hydrophobic, organic liquid, heptane, as a facilitating layer and a pseudo-substrate between graphene and the target substrate. This hydrophobic layer promoted wrinkle-free graphene while simultaneously enabling attachment of the graphene to the desired target substrate, regardless of that substrate's surface energy [114]. Figure 10. Thermal release tape strategies for temporarily weakening the graphene-substrate interaction to effect transfer to the target substrate. (a) Cold pressing with thermal release tape to remove air pockets. Reprinted with permission from Ref. [109]. Copyright 2010 American Chemical Society. (b) Roll-to-roll hot pressing with thermal release tape for large-area graphene transfer. Adapted with permission from Ref. [110]. Copyright 2012 American Chemical Society. (c) Large-area roll-to-roll graphene transfer with etching of original substrate and use of thermal release tape. Reprinted with permission from Ref. [112]. Copyright 2010 Springer Nature.
Strategies Changing the Graphene Interaction Coefficient
As mentioned earlier with respect to delamination, the ability to chemically modify graphene gives us control over C g . Whereas, in general, covalent functionalization of the graphene decreases C g and therefore weakens the adhesion of graphene to its substrate, removing covalent functionalities from chemically modified graphene can strengthen C g and allow previously delaminated graphene to adhere tightly to a new substrate. Whitener et al. have shown that hydrogenated graphene that has been delaminated onto a water surface can be dehydrogenated in situ by briefly exposing the material to bromine gas. The relaminated dehydrogenated graphene then adheres strongly to a target substrate, without further delamination upon introduction of water [67]. A non-covalent method to change the graphene interaction coefficient was pursued by Kafiah et al., who performed electrostatic charging of graphene to ensure a strong interaction between it and a polymer target substrate [115]. This transfer strategy seems to be relatively underexamined in the literature thus far.
Strategies Decreasing the Graphene-Substrate Distance
One of the earliest graphene transfer methods to eschew a polymer support was demonstrated by Regan et al. in 2010, who reported a direct transfer of graphene from a copper growth substrate to a TEM grid by placing a drop of isopropanol between the grid and the copper substrate [116]. The evaporation of the isopropanol through the holes in the grid pulled the graphene into close contact with the grid. Chemical etching of the copper freed the graphene which was then adhered only to the TEM grid, effecting a polymer-free transfer. However, the holes in the grid of the target substrate were necessary to allow evaporation of the isopropanol. Without them, this type of transfer would likely have been accompanied by buckling and wrinkling of the graphene as the hydrophilic isopropanol attempted to drain away from the graphene substrate interface, similar to the wrinkle formation mechanism described in Section 5.1.3.
Avoidance of wrinkling of the 2D sheets during relamination has driven much of the advancement in transfer strategies seeking to decrease the graphene-substrate distance during relamination. In particular, many strategies focus on the use of very soft polymer supports which can be easily melted and are thus conformable simultaneously to flat graphene and to the target substrate [117][118][119]. Of these materials, paraffin in particular has the interesting feature of a comparatively low boiling point, allowing the removal of the support material by heating under low pressure (Figure 11a) [120]. A few researchers have taken this "soft polymer" approach to its logical conclusion. Belyaeva et al. demonstrated graphene transfer at the interface of aqueous solution and cyclohexane [26], while Zhang et al. performed a very similar experiment using n-heptane as the "support" [121]. In both cases, an organic liquid immiscible with water was used that has a high vapor pressure at room temperature. The Zhang study also examined use of a low concentration of cellulose dissolved in water as an anti-wrinkle additive to the copper etchant bath. A definitive study on the use of anti-wrinkle agents in graphene transfer, however, has not been performed. A similar soft polymer approach was advanced by Zhang et al., who replaced PMMA in the standard transfer with a commercially available 3M Nexcare liquid bandage (LB) [123]. The mechanical properties of LB, such as its low elastic modulus of 85 MPa and low average molecular weight, allow for a less contaminated graphene and an extremely flat surface. Analysis of Raman spectra, electron mobility, and hole mobility demonstrate how graphene's properties have improved with LB support compared to a PMMA support layer.
In the same vein is the use of pressure sensitive adhesive as a polymer support. This technique ultimately has its roots in the Scotch Tape micromechanical cleavage method that Novoselov et al. used to isolate graphene in 2004 [124]. Kim et al. found that, by using polyethylene terephthalate (PET)-supported pressure sensitive adhesive, they could etch the metallic growth substrate from graphene and transfer the graphene onto a new substrate by applying light pressure (Figure 11b) [122]. Again, the important factor here is that the intimate contact between graphene and the target substrate produced an interaction energy that overcame the adhesive interaction between the graphene and the pressure sensitive adhesive.
Combinations and Other Strategies
An unusual single-wafer transfer technique employed a remarkable method to avoid introduction of wrinkles to the graphene. Gao et al. reported growth and transfer of CVD graphene on the same wafer by first growing graphene on an ultrathin layer of copper on a specially treated SiO 2 /Si wafer, and then etching away the copper chemically. This etching forms voids in the copper which adhere the graphene to the underlying SiO 2 via capillary forces. Such a process would naturally form wrinkles where the voids connect the SiO 2 and the graphene, but the authors avoid wrinkle formation by modulating the surface tension of the water through addition of isopropyl alcohol. This technique allows an ultrasmooth, single-wafer transfer of CVD graphene [125].
Future Directions
The transfer of graphene is an indispensable step in graphene-based device fabrication. As we have seen, a great deal of progress has been made to mitigate many of the issues that arise during graphene transfer. Delamination and relamination each bring their own challenges, and understanding them and graphene-substrate interaction on a mechanistic level enables a rational, holistic approach to solving problems associated with each step of transfer.
Each transfer strategy comes with benefits and drawbacks, and the most effective transfers appear to be the ones which combine the strengths of several different strategies to mitigate many issues at once. At least part of the future of graphene transfer at a basic research level will likely be continuing to evaluate new combinations of strategies for cleanliness and integrity of the final product. From the standpoint of incorporating graphene into commercializable devices, however, there needs to be an examination of the tradeoff between complicating the transfer process by combining many strategies and reducing cost by simplifying the process. This tradeoff could see great progress in the use of automation specifically for transfer. Tight control of conditions during transfer would provide reproducibility, and an automated system could perform tasks such as mechanical peeling much more slowly and systematically, thereby introducing less strain into the graphene and hence less mechanical damage. Other techniques which are very promising commercially include clean delamination of graphene and recycling of substrates. Chemical etching is destructive and adds to cost, but a simple recycling scheme could make the transfer process and ultimately, graphene incorporation into devices, more economical.
In a less pragmatic, but more fundamental way, studies on graphene transfer can advance basic graphene science by probing graphene-materials interactions systematically. The physics outlined in Section 3 points to several fundamental questions that graphene adhesion studies could help address. As mentioned there, the dielectric properties of single and multilayer graphene are not particularly well-understood, and the range of values that has been obtained for them is rather wide. Examining graphene adhesion to a variety of substrates in different media could provide information about graphenespecific Hamaker constants and increase our understanding of the dielectric properties of graphene. Other interesting effects in graphene-substrate interaction have been observed; for instance, graphene and other 2D materials have been found to screen the dielectric properties of their substrates, effectively shortening the range of van der Waals forces for those surfaces [126][127][128]. Depending on the nature of the electronic structure of the 2D material, this screening can be complete or incomplete. This screening can also extend to hydrogen bonding [129].
In principle, the model that we have introduced is applicable not only to graphene transfer, but to transfer of any 2D material. Van der Waals heterostructures display an astonishing versatility of properties, including superconductivity magic-angle twisted bilayer graphene [130], long-lived excitonic states in mixed transition metal dichalcogenides [131], and ferroelectricity in twisted bilayer boron nitride [132]. Currently, most of these heterostructures are either grown or transferred via a stamping method. However, commercialization might require a more robust and scalable process, and the progress already made on graphene transfer will inform the development of transfer in these other materials. | 12,170.2 | 2021-10-25T00:00:00.000 | [
"Physics"
] |
Cops and Robbers on Dynamic Graphs: Offline and Online Case
. We examine the classic game of Cops and Robbers played on dynamic graphs, that is, graphs evolving over discrete time steps. At each time step, a graph instance is generated as a subgraph of the (static) underlying graph . The cops and the robber take their turns on the current graph instance. The cops win if they can capture the robber at some point in time. Otherwise, the robber wins. In the offline case, the players are fully aware of the evolution sequence, up to some finite time horizon T . We provide a O ( n 2 k +1 T ) algorithm to decide whether a given evolution sequence for an underlying graph with n vertices is k -cop-win via a reduction to a reachability game. In the online case, there is no knowledge of the evolution sequence, and the game might go on forever. Also, each generated instance is required to be connected. We provide a nearly tight characterization for sparse underlying graphs with at most a linear number of edges. We prove λ + 1 cops suffice to capture the robber in any underlying graph with n − 1 + λ edges. Furthermore, we define a family of underlying graphs with n − 1 + λ edges where λ − 1 cops are necessary (and sufficient) for capture.
Introduction
Cops and robbers is a classic pursuit-evasion combinatorial game played on a graph.There are two opposing players aiming at winning the game: a cop player controlling k cop tokens and a robber player controlling one robber token.Initially, the k cops are placed at vertices of the graph.Subsequently, the robber is also placed at a graph vertex.The two players proceed (possibly ad infinitum) by taking turns alternately commencing with the cops.During a cops' turn, each cop may move to a vertex adjacent to its current one; note that cops are presumed to move simultaneously.Similarly, during a robber's turn, the robber may move to a vertex adjacent to its current placement.The cops win, if at least one of them Problems related to the cop number have been studied heavily over the last four decades.Originally, Quillot (34), and independently Nowakowski and Winkler (32), characterized graphs with cop number equal to 1, otherwise referred to as cop-win graphs.The set of (di)graphs with cop number equal to k > 1 was characterized in (14; 18).Building on these notions, a general framework for characterizing discrete-time pursuit-evasion games was developed in (10).
There is a lot of literature regarding the cop number of specific graph classes.Aigner and Fromme (2) proved c(G) ≤ 3 for any planar graph G. Frankl (17) proved a lower bound for graphs of large girth.Other works include (3; 16; 26).
Moving onto general graphs, Meyniel conjectured √ n cops are always sufficient to capture the robber in any graph.The current state of the art is O(n/2 (1−o(1)) √ log n ) proved independently in (24; 35).Yet, the conjecture remains unresolved.On the contrary, the conjecture was proved positive for binomial random graphs (33); relevant works include (7; 11; 25).The cop number is also related to various width parameters, for example see (1).Finally, there is a book (8) capturing recent activity on Cops & Robbers.
The computational complexity of the corresponding decision problem is also worth a note.Given a graph G and an integer k, does c(G) ≤ k hold?Recently, Kinnersley (22) answered the question by proving EXPTIME-completeness.With respect to algorithmic results, for a fixed constant k, there is a polynomial time algorithm to determine whether c(G) ≤ k (4).Other algorithmic results include (9) (capture from a distance), and (10) (generalized Cops and Robbers).
With respect to Cops and Robbers games played on dynamic graphs, there is preliminary work by Erlebach and Spooner (15).They examine the game on edge-periodic graphs, where each edge e is present at time steps indicated by a bit-pattern of length l e used periodically and ad infinitum as evolution rule.Let LCM(L) denote the least common multiple for input lengths l e .The paper presents a O(LCM(L) • n 3 ) algorithm to determine whether the graph is 1-cop-win as well as some other results on cycle graphs.Later on, in (30), NP-and W [1]-hardness results are provided for (parametrizations of) Cops and Robbers played on temporal edge-periodic graphs.Further bounds and hardness results for cycle graphs in this model are given in (31).
Our Results.We consider two dynamic graph scenarios and present preliminary results for a (classicstyle) Cops and Robbers game taking place in them.At each discrete time step of evolution, the current graph instance is fixed, then the cops take their turn, and finally the robber takes its turn.Note that movement may be restricted due to the possibly limited topology of each instance.
In the offline case, the cop and the robber know the whole evolution sequence (up to some finite time horizon T ) a priori.For an underlying graph on n vertices, we prove that deciding whether it is 1-cop-win can be done in time complexity O(n 3 T ); see Theorem 3. To do so, we employ a reduction to another game, a reachability game, played now on the configuration graph (Lemma 1).Our results extend to deciding k-cop-win graphs (Corollary 1), and an exponential time algorithm for determining the exact value of the cop-number (Corollary 2).
In the online case, no knowledge is given to the players regarding graph dynamics.The only restriction imposed is that, at each time step, the realized graph instance needs to be connected.We consider sparse graphs and show that the cop number is at most λ + 1 for underlying graphs with n − 1 + λ edges; see Theorem 4.Moreover, we demonstrate a (nearly tight) graph family where λ − 1 cops are necessary (and sufficient) to ensure cop victory; see Theorem 5.
Outline.In Section 2, we present introductory notions and notation on the dynamic graphs used and on the game of Cops and Robbers played on them.In Sections 3 and 4, we formalize our definitions for the respective scenario: In Section 3, we consider the offline case, whereas in Section 4, we consider the online case.In Section 5, we make concluding remarks.
Preliminaries
Dynamic Graphs.Let G = (V, E) stand for a (static) graph to which we refer to as the underlying graph of the model.We assume G is simple, i.e., not containing loops or multi-edges, and connected, i.e., there exists a path between any two vertices in G.No further assumptions are made on the topology of G.An edge from vertex v ∈ V to vertex u ∈ V is denoted as (v, u) ∈ E, or equivalently (u, v) ∈ E. We refer to the edges of the underlying graph as the possible edges of our model.We denote the number of vertices of G by n = |V | and the number of its edges by m = |E|.For any vertex v ∈ V , we denote its open neighborhood by N (v) = {u : (v, u) ∈ E} and its closed neighborhood by N The dynamic graph evolves over a sequence of discrete time steps t ∈ N. We consider two cases with respect to time evolution.First, t = 1, 2, 3, . . ., T , that is, t takes consecutive values starting from time 1 up to a time horizon T ∈ N given as part of the input.In this case, we define a dynamic graph G with a time horizon T as G = (G 1 , G 2 , . . ., G T ) (Section 3).Second, t = 1, 2, 3, . .., that is, we consider the sequence of time steps t evolving ad infinitum (Section 4).
For any t, let G t = (V t , E t ) be the graph instance realized at time step t, where V t = V and E t ⊆ E: all vertices of the underlying graph G are present at each time step, whereas a possible edge e ∈ E may either be present/alive, i.e., e ∈ E t , or absent/dead, i.e., e / ∈ E t at time t.For any vertex v ∈ V , we denote by N t (v) = {u : (v, u) ∈ E t } its available neighborhood at time t.Similarly, let N t [v] = N t (v) ∪ {v} refer to the available closed neighborhood at time t.
Cops and Robber on Dynamic Graphs.We play a game of Cops and Robbers on a dynamic graph evolving under the general model defined above.There are two players: C controlling k ≥ 1 (k ∈ N) cop tokens and R controlling one robber token.Initially, C places its k tokens on the vertices of the underlying graph.Notice that we allow multiple cops to lie on the same vertex.Afterward, R chooses an initial placement for the robber.Round 0 is over.From now on, for every t ≥ 1, first, the current graph instance G t is fixed and, second, a round of the game takes place.A round consists of two turns, one for C and one for R, in this order of play.C may move any of its cops lying on a vertex v to any vertex in N t [v].Note that all cops controlled by C move simultaneously during C's turn.After C's turn is over, R may move the robber lying on a vertex u to any vertex in N t [u].C wins the game if, after any player's turn, the robber lies on the same vertex as a cop.R wins if it can perpetually prevent this from happening.
A cop-strategy, respectively a robber-strategy, is a set of movement decisions for the cops, respectively the robber.Having knowledge of the current positions, and the current graph instance G t (as well as all future graph instances only in the offline case), the cops/robber decide on a move for round t according to the rules of the game.A dynamic graph is called k-cop-win, if there exists a cop-strategy such that k cops win the game against any robber-strategy.For k = 1, we say that such a dynamic graph is cop-win.
Offline Case
In the offline case, we are given a dynamic graph G with a time horizon T , namely G = (G 1 , G 2 , . . ., G T ), where both C and R have complete knowledge of the evolution sequence.That is, both players are aware of G t = (V, E t ), for all t = 1, 2, . . ., T , a priori.Let c off (G) stand for the temporal cop number (offline case), the worst-case minimum number of cops required to capture a robber when the whole sequence G = (G 1 , G 2 , . . ., G T ) is given as input to both players.If the robber is not captured within T rounds, for any cop strategy, then the dynamic graph is robber-win.Overall, this is a Cops and Robbers game on a dynamic graph with bounded time horizon.From now on, we refer to it as the offline case.The results presented in this section can be viewed by the reader as an extension/completion of the work in (15) on edge-periodic graphs.
Configuration Graph.The task we tackle in the offline case is to characterize the set of given inputs (G, T ), which are cop-win, i.e., one cop can always capture the robber within the T rounds of play.To do so, we first construct a directed configuration graph capturing the cop and robber motion on G.Then, we can play another game, a reachability game (21) to be defined later, on the configuration graph which corresponds to the original cop and robber game played on G and derive our result this way.We define the directed configuration graph as P = (S, A), where S refers to configuration states (vertices) and A to arcs from one state to another state which is a feasible potential next state.
The vertex set S consists of all four-tuples of the form (c, r, p, t), where t ∈ {1, 2, . . ., T } indicates the time step or round of play t, p ∈ {C, R} indicates whether it is the cop's or the robber's turn to play, c ∈ V is the position of the cop just before p's turn takes place in round t, and r ∈ V is the position of the robber just before p's turn takes place in round t.
The arc set A contains the arcs below, for all x, y ∈ V and t ∈ {1, 2, . . ., T }, such that both the dynamics of the graph and the game moves are represented: (1) if z ∈ N t [x] and t ∈ {1, 2, . . ., T }, then ((x, y, C, t), (z, y, R, t)) ∈ A, and (2) if z ∈ N t [y] and t ∈ {1, 2, . . ., T − 1}, then ((x, y, R, t), (x, z, C, t + 1)) ∈ A Cops and Robbers on Dynamic Graphs 5 Case (1) arcs represent the cop's turn at round t, where the cop moves within its closed neighborhood available at time t, the robber retains its position, and, after the cop moves, it is the robber's turn at round t.Respectively, case (2) arcs represent the robber's turn at round t, where the robber moves within its closed neighborhood available at time t, the cop retains its position, and, after the robber moves, it is the cop's turn, but at the next round, namely round t + 1.
Let us now consider the size of P .By the definition of the states s ∈ S, it holds for the number of vertices |S| ∈ O(n 2 T ).Considering the set of arcs A, each vertex in S has at most n arcs leaving it, therefore we obtain |A| ∈ O(n 3 T ).
Before we proceed utilizing the configuration graph, let us add some auxiliary, yet necessary, states and arcs to capture the round of initial cop and robber placement, that is, round 0. This way, we ensure the full correspondence of the reachability game played on P to the cop and robber game played on G.Note that all state and arc additions discussed hereunder do not affect the order of magnitude of the size of P .Let S contain also the states (∅, ∅, C, 0), and (x, ∅, R, 0), for all x ∈ V .State (∅, ∅, C, 0) captures the situation at round 0 before the cop's turn: neither the cop nor the robber have been placed yet on V .States (x, ∅, R, 0) capture the situation at round 0 before the robber's turn: the cop has been placed and it is the robber's turn to be placed.Overall, we have added an extra n + 1 states in S. We now proceed adding the necessary arcs in A to make the transitions from one turn to the next.For each x ∈ V , we add an arc ((∅, ∅, C, 0), (x, ∅, R, 0)) ∈ A, that is, n extra arcs in total.For each x, y ∈ V , we add an arc ((x, ∅, R, 0), (x, y, C, 1)) ∈ A, that is, n 2 extra arcs in total.
Reachability.We now employ the configuration graph P constructed above by playing another twoplayer game on it referred to in literature as a reachability game (5; 21; 27).The goal is to define a reachability game, which corresponds exactly to the Cops and Robbers game (offline case), and so be able to utilize known results in this area to prove our cop-win characterization (Theorem 3).The connection of a (classic) Cops and Robbers game to a reachability game was first identified in (20).Other results, in (11; 18), on cop-win characterizations employ similar tools without explicitly stating the reduction to reachability.
A reachability game is played by two players, C and R, where we maintain the notation such that it corresponds to players in the game of Cops and Robbers.The two players play alternately on a directed graph D = (V D , A D ), where V D is partitioned into two player-respective subsets, that is, then C plays and moves the token to a vertex u ∈ V R for which it holds (v, u) ∈ A D .Then, it is R's turn: R chooses to move the token to a vertex w ∈ V C for which it holds (u, w) ∈ A D .Note that either player has to move the token across an available arc in A D .The game proceeds in this fashion for an indefinite number of rounds.Player C wins, if the token eventually arrives to a designated target vertex set Tar ⊆ V D .Otherwise, if for any C-strategy a vertex in Tar can never be reached, then R wins.In a nutshell, the reachability game played on D = (V D , A D ) is defined by the tuple (V C , V R , Tar ).Theorem 1 demonstrates that, depending on the initial token placement, there exists a winning strategy for one of the two players for any input (V C , V R , Tar ) and D.Moreover, by Theorem 2, it can be decided in time linear to the size of the directed graph D.
Theorem 1 ((5; 27)).Consider a reachability game (V C , V R , Tar ) played on a directed graph D = (V D , A D ).V D can be partitioned into two sets W C and W R such that, if the token is initially placed on w ∈ W p , then there exists a winning strategy for player p ∈ {C, R}.
Theorem 2 ((5; 21)).
There exists an algorithm computing winning sets W C and W R for a reachability game Let us now consider a reachability game taking place in our constructed configuration graph P : let D = P , V D = S, and A D = A. For any (c, r, p, t) ∈ S, let (c, r, p, t) ∈ V p where p ∈ {C, R}.Finally, let Tar = {(x, x, p, t) | x ∈ V, p ∈ {C, R}, t ∈ {1, . . ., T }}.We can now use the just defined sets V C , V R , Tar to prove Lemma 1 and then, as a consequence, our main result in Theorem 3.
Proof: If G is cop-win, the cop has a winning strategy s * , which for any input cop position c, robber position r, and time step of evolution t before the cop's turn, provides the next placement for the cop token, say c .Moreover, eventually, for some t ≤ T , the cop is guaranteed to lie at the same vertex as the robber.Using the above strategy, Player C playing the reachability game on P also has a winning strategy: If the token lies at state (c, r, C, t) ∈ S, then C moves the token to (c , r, R, t).Note that, since s * is a feasible (winning) strategy for the Cops and Robbers game, by construction of P it holds ((c, r, C, t)), (c , r, R, t)) ∈ A. Also, since in s * the cop eventually lies at the same vertex as the robber, the reachability token eventually reaches a vertex (x, x, p, t) ∈ Tar , for some x ∈ V , p ∈ {C, R}, and t ≤ T .Hence, C wins the reachability game.
On the other hand, if (∅, ∅, C, 0) ∈ W C , by Theorem 1, there exists a winning strategy for C when the token lies on (∅, ∅, C, 0).For any ((c, r, C, t)), (c , r, R, t)) ∈ A chosen by C as part of its winning strategy, by construction of P , the cop has a feasible move from c to c at time t.Since C's strategy is winning, the token eventually traverses an arc ((c, x, C, t)), (x, x, R, t)) ∈ A, where (x, x, R, t) ∈ Tar .Respectively, the cop will traverse (c, x) ∈ E t and capture the robber.
Proof: By Lemma 1, it holds c off (G) = 1, if and only if, for a reachability game (V C , V R , Tar ) played on P = (S, A), where V C , V R , Tar are defined according to the statement of Lemma 1, it holds (∅, ∅, C, 0) ∈ W C .By Theorem 2, we decide whether An important remark is that, in Theorem 1 (5; 21), the winning strategy derived for player p ∈ {C, R} is memoryless; see Proposition 2.18 in (27).In other words, it only depends on the current position of the token, and not on any past moves.By the reduction presented in Lemma 1, the winning strategy for the cop/robber is also memoryless: it only depends on the current positions of the cop, the robber, and the time step of evolution.
Let us conclude this part by explaining how the above framework can be generalized and therefore used to determine whether a dynamic graph Since k cops are placed on V throughout the game, it suffices to expand our definition of states by substituting the cop position by a k-tuple of cop positions.That is, the state set S of the configuration graph P now contains tuples of the form ((c 1 , c 2 , . . ., c k ), r, p, t), where c i , for i ∈ {1, 2, . . ., k}, denotes the location of the i-th cop in V .For the arc set, for C, we add . Again, we include auxiliary states and arcs to cater for the initial placements.Overall, we now get |S| ∈ O(n k+1 T ), and |A| ∈ O(n 2k+1 T ), since for the dominantin-magnitude number of C-turn arcs there exist at most n 2k cop transitions from (c 1 , c 2 , . . ., c k ) to (c 1 , c 2 , . . ., c k ).By reapplying the whole framework with Tar = {((c 1 , c 2 , . . ., c k ), r, p, t) | c i = r for some 1 ≤ i ≤ k} we conclude: We may now run a search utilizing the result in Corollary 1 and derive an exponential time algorithm to determine the exact value of c off (G).
Corollary 2. For some dynamic graph G, with an associated time horizon T , the problem of determining the exact value of c off (G) is in EXPTIME.
Online Case
In the online case, we are given an underlying graph G = (V, E) and an indefinite number of discrete time steps of evolution t = 1, 2, 3, . .., that is, time evolution may take place ad infinitum.At each time step t, an instance The only assumption we make on the topology of generated instances, is that we require each G t to be connected.Note that this is a widely used assumption in several dynamic graphs appearing in literature (12; 28).Removing this assumption could lead to trivial cases where, for instance, the k cops or the robber lie indefinitely on isolated vertices.
Initially, the cops and then the robber place themselves on V before the appearance of G 1 .In the general case, neither the cops nor the robber have any knowledge about the evolution sequence.The cops and the robber, taking turns in this order, make their respective moves in G t , then G t+1 is generated, and so forth.Similarly to the offline case, a token at vertex v moves to a vertex in N t [v] (all the cops move simultaneously).Let c t (G) stand for the temporal cop number, the worst-case minimum number of cops required to capture a robber for an underlying graph evolving like described above.In our analysis, we consider worst-case scenarios for the temporal cop number; a different type of analysis is left for future work.In other words, for our bounds to follow, one may assume the robber controls the dynamics of G to its advantage.Hence, at round t, the robber defines instance G t according to the above restrictions.
Preliminary Bounds.As a warm up, let us consider two special cases for the topology of the underlying graph: a tree, and a complete graph.Proposition 1.For any tree T , it holds c t (T ) = 1.
Proof: Since, for any time step t, G t must be connected, it follows G t = T .Since the topology of the tree remains static over time, it holds c t (T ) = c(T ).It suffices to verify c(T ) = 1 for any T (26).
Proposition 2. For any complete graph
The robber places itself on the only cop-free vertex.Then, G 1 is realized: since G 1 is connected, there exists at least one edge connecting a cop-vertex to the robber-vertex.The corresponding cop traverses that edge and captures the robber at the first cop turn.
Stefan Balev, Juan Luis Jiménez Laredo, Ioannis Lamprou, Yoann Pigné, Eric Sanlaville We now demonstrate c t (K n ) > n − 2. The n − 2 cops are initially placed on vertices of V .The robber places itself on a cop-free vertex.Regardless of the cops placement, at any time, there are always at least two cop-free vertices.Without loss of generality, for each time t, G t is a path v 1 , v 2 , . . ., v n−1 , v n with the currently cop-free vertices at one end of the path.For example, the robber lies on v 1 , and v 2 (and possibly other vertices) are cop-free.The cops move during their turn, but they cannot capture the robber: Since the number of cop-free vertices is at least two, a cop can only reach a vertex at distance at least one from the robber (v 2 ).The robber remains at its position indefinitely and avoids capture.
The above propositions cast some intuition on the relationship between the (static/classical) cop number c(G) and our introduced temporal cop number c t (G).For the static case, it is easy to see that if G is either a tree or a clique then c(G) = 1.However, in the temporal case c t (T ) = 1 for a tree T , and c t (K n ) = n−1 for any clique on n ≥ 2 vertices.Intuitively, the denser the underlying graph is, the more leeway there is for the robber due to worst-case dynamics.Overall, for any graph G, c t (G) ≤ n − 1, since initially placing the n − 1 cops on distinct vertices guarantees an edge between a cop-vertex and the robber-vertex in G 1 due to connectedness.Thus, for the ratio of the two cop numbers, we get We now provide a preliminary bound on c t (•) by considering a subset of sparse graphs, that is, underlying graphs with at most linear number of edges.
Proof: To describe the cop-winning strategy, let us define a partition of the vertices into Intuitively, V C stands for the cop-secured vertices, i.e., vertices the robber will never be able to visit, whereas V R stands for the vertices (possibly) still within the eventual reach of the robber.More precisely, the cop strategy below builds a sequence of partitions (V C , V R ) where V C is a set of vertices the robber will never be able to visit, V R contains the other vertices and the cardinality of V R strictly decreases at each time step.This strategy may not be the fastest as V R may contain robber-unreachable vertices, but this is not required for the proof.
Consider the situation before some round t.Let T denote some (arbitrary) spanning tree of G.We refer to the edges of T as the black edges and to any path consisting only of black edges as a black path.We refer to all other edges, which are exactly λ, as the blue edges.Suppose there is one cop at one extremity of each blue edge.Note that several cops may lie on the same vertex.We refer to these cops as the blue cops.One last cop, the black cop, is placed on some other (blue-cop free) vertex, say x ∈ V .The robber is on a cop-free vertex, say r ∈ V .For a visual assistance for the rest of the proof, please refer to Figure 1.
Consider the spanning tree T : there exists a unique (black) path from x to r.Let (x, x ) be the first edge of this path.If this edge is removed from the black tree T , T is split into two black subtrees containing x and x respectively, namely T x and T x .Then, let By construction, the cut associated to (V C , V R ) contains exactly one black edge, (x, x ), plus (possibly) some blue edges.Since G t is connected for all time steps t, at least one edge associated to the cut is present in E t .If the black edge (x, x ) is present, then the black cop moves from x to x during the cops turn.Otherwise, if only a blue edge, say (v, v ), where v ∈ V C , is present, then the associated cop moves from v to v (or remains at v if it were already there).Now, we swap the role of the two edges.That is, (v, v ) becomes a black edge, and its associated cop becomes the black cop, whereas (x, x ) becomes a blue edge, and its associated cop becomes a blue cop.By construction, the set of black edges defines a new black spanning tree T : the unique black path from v to v is replaced by the new black edge (v, v ).
(Notice that, in the previous swap-less case, we trivially had T = T ).Afterwards, the robber may move at its turn; we still refer to its position by r.Even after the robber moves, it holds r ∈ V R \ {v }: there is no edge the robber could use to reach V C since all cut-edges are protected, and v is occupied by the black cop.
Before the next round of the game, let us now reapply the method used to obtain the partition on G t+1 .Let x = v stand for the black cop's current position, and set T = T .Consider again the unique black path from x to r, and denote by (x, x ) its first edge.By construction of T , there is a unique black path from x to all vertices of V C .Hence, if T is split as before into two subtrees after removing edge (x, x ), the resulting subtree T x contains x and also all vertices of former V C (and possibly more vertices).Then, let us set As there is still one cop on one extremity of each blue edge, the vertices of V C are unreachable by the robber.
Let us now consider the very first step.We start from an arbitrary spanning tree, denoted by T , whose edges are the black ones, the other being the blue ones.For the initial positions, let us place one cop at one extremity of each blue edge.One last cop, the black cop, is placed on some other (blue-cop free) vertex, say x ∈ V .Then, the robber chooses a cop-free vertex, say r ∈ V , for its initial place.Edge (x, x ), and sets V C and V R are similarly defined, hence the vertices of V C are unreachable by the robber.The cardinality of V R is at most n − 1.
If we inductively apply the above method for the cops, it follows that after each round, the number of vertices of V R , which contains the vertices reachable by the robber, is strictly decreased.It will eventually reach the value of zero and the robber will be captured in at most n rounds.robber space cop space Fig. 1: A depiction for the proof of Theorem 4. The black cop lies on x, with x at the other side of the cut.Colored vertices/edges indicate blue cops/edges.
Stefan Balev, Juan Luis Jiménez Laredo, Ioannis Lamprou, Yoann Pigné, Eric Sanlaville
The above result provides a better upper bound than the easy to see c t (G) ≤ n − 1, for sparse graphs when λ ≤ n − 3.For λ = 1, cycle graphs are a tight example.We demonstrate the result is nearly tight for certain graph families, see Theorem 5 in the next part of this section.
Theorem 5.For any This theorem is a direct consequence of the two lemmata that follow, which demonstrate the corresponding (worst-case) upper and lower bound strategies.
Lemma 2. For any
Proof: We present a strategy for λ − 1 cops to win against the robber under any dynamics and/or robber strategy.Initially, the λ−1 cops are placed as follows: place one cop at v i for each i = 2, 6, 10, . . ., 2λ−4 and for each i = 3, 7, 11, . . ., 2λ − 3. To verify, since there are two sequences of cops using a distance 4 step, overall the number of cops is 2(2λ − 2)/4 = λ − 1.For an example placement on G 7 , see Figure 3.Then, the robber places itself at some cop-free vertex.By symmetry of G λ and cop placement, without loss of generality, we assume the robber places itself on some vertex in R := {v 4 , v 5 , v 3 , v 4 , v 5 , v 6 }.In the cop strategy we will now propose, the robber will never be able to escape this set of vertices.Therefore, we restrict the proof to the subgraph induced by {v 2 , v 3 , v 4 , v 5 , v 6 , v 7 , v 3 , v 4 , v 5 , v 6 }, see Figure 4a, and will demonstrate how the four cops in this subgraph can always capture a robber with an initial placement within R. For all robber turns below, we assume the robber always remains within R; by our strategy, it is impossible for the robber to move outside R since it would mean "jumping" over a cop.
Fig. 3: The initial positions for cops in graph G 7 ∈ G.In all figures, an integer within a vertex stands for the number of cops currently placed on the vertex.
(a) Initial positions for cops in subgraph.
(b) After the first move: a cop moved from v7 to v6.
Fig. 4: The first move of the cop strategy
The cops' strategy is the following.Since the instance needs to be connected at each time step, at least one edge in {(v 2 , v 3 ), (v 6 , v 7 )} is available.By symmetry, without loss of generality, assume (v 6 , v 7 ) is present and the cop on v 7 moves to v 6 , see Figure 4b.It suffices to prove that the cops have a winning strategy starting from this configuration.
In the next round (following Figure 4b), the cops move as follows.If (v 6 , v 6 ) is available, then one of the cops moves to v 6 (Case 1, Figure 5a).Otherwise, if (v 6 , v 6 ) is not available, in order to ensure connectivity of the instance, either (v 6 , v 5 ) is available, and one cop moves to v 5 (Case 2, Figure 6a), or (v 2 , v 3 ) is available and one cop moves to v 3 , so that two cops lie on v 3 (Case 3, Figure 7a).In all cases, the robber takes its turn in R.
In Case 1, at least one cop can move to vertex v 5 (Case 1a, Figure 5b), v 5 (Case 1b, Figure 5c), or v 3 (Case 1c, Figure 5d) by connectivity of the instance.In Case 1a, one cop can move either to v 5 or to v 3 , otherwise the instance is disconnected.If a cop moves to v 5 , the robber lies within {v 4 , v 3 , v 4 } and at the next step a cop traverses either (v 5 , v 4 ) (hence the robber is trapped on v 3 or v 4 and loses within the next two rounds) or (v 2 , v 3 ) (hence at the next round either a cop arrives at v 4 and the robber is blocked or a cop arrives at v 3 and the robber loses at the following round).If starting from Case 1a a cop moves to v 3 , then in the next round a cop can move to {v 5 , v 4 , v 3 }: if a cop moves to v 5 , in the next round a cop can move to {v 4 , v 3 } and the robber loses in at most two rounds, otherwise, if a cop moves to v 4 or v 3 , the robber again loses in at most two rounds regardless its strategy.In Case 1b, the robber loses if it was on v 5 .Otherwise, a cop can traverse either (v 5 , v 4 ) or (v 2 , v 3 ) and the rest of the strategy is the same as for Case 1a.In Case 1c, at least one cop can move to a vertex in {v 5 , v 5 , v 4 , v 3 }.Similarly to above, in all subcases, it is easy to see that the cops win in at most two steps against any robber position/movement.In Case 2, if the robber lies on v 5 or v 6 , it is easy to see the cops win in at most two steps, since connectivity to the rest of the graph must be maintained.Instead, if the robber lies in {v 4 , v 3 , v 4 }, then a cop traverses either (v 5 , v 4 ) (Case 2a, Figure 6b) or (v 2 , v 3 ) (Case 2b, Figure 6c).In Case 2a, we arrive to 4-cycle {v 3 , v 4 , v 3 , v 4 } with cops on v 4 and v 3 , and as before, the cops win in at most two steps.In Case 2b, again by preservation of connectivity, at least one edge in {(v 5 , v 4 ), (v 3 , v 4 ), (v 3 , v 3 )} must be available and one cop moves to either v 4 or v 3 .In either case, the cops win in at most another two steps.
In Case 3, before they take this new turn, two cops lie on v 6 and another two cops lie on v 3 .By preservation of connectivity, at least one cop can reach a vertex in {v 6 , v 5 , v 4 , v 3 }.By symmetry, it suffices to consider the worst-case scenarios where a cop arrives on v 6 (Case 3a, Figure 7b) and v 5 (Case Fig. 6: Case 2 analysis for the proof of Lemma 2 Fig. 7: Case 3 analysis for the proof of Lemma 2 3b, Figure 7c).We notice Case 3a is the same as Case 1c and Case 3b the same as Case 2b.
To help us with the matching lower bound to follow, we hereby provide some useful definitions and claims on cop movement restrictions on G λ incurred by worst-case dynamics.From now on, all vertex indices are assumed to be modulo 2λ We say that L i is cop-occupied if at least one cop lies at some vertex in V (L i ), otherwise, L i is cop-free.We say that a cop crosses L i if, starting from vertex v 2i−1 (cross-start vertex), it can eventually arrive to vertex v 2i (cross-end vertex), or vice versa.We refer to a (counterclockwise-movement) crossing from v 2i−1 to v 2i as a cc-crossing and to a (clockwisemovement) crossing from v 2i to v 2i−1 as a c-crossing.A cop trivially cc-crosses L i if it already lies on v 2i or v 2i+1 , i.e., the counterclockwise neighbor of v 2i .Respectively, a cop trivially c-crosses L i if it already lies on v 2i−1 or v 2i−2 , i.e., the clockwise neighbor of v 2i−1 .The intuition behind Proposition 3 is that, while a number of cops crosses a loop, at least one of them must stay behind, that is, will not be able to ever cross the loop due to worst-case dynamics.
Proposition 3. Assume we focus on a given loop L i ⊂ V (G λ ) and at most one edge in E(L i ) is not present at each time step of evolution.In the worst case, at most ρ cops can cross L i , if ρ + 1 cops are present at the cross-start vertex.
Proof: Without loss of generality, consider loop L 1 = {v 1 , v 2 , v 1 , v 2 } and suppose ρ + 1 cops lie on v 1 and wish to cross to v 2 .The dynamics of the graph evolve as follows: for each t, if at the end of round t there is at least one cop on v 1 , then In other words, as long as there is a cop on v 1 , the edge to v 2 is blocked.The cops could take advantage of this situation such that at most ρ of them reach v 2 via the available path v 1 , v 1 , v 2 , v 2 .If at any time v 1 is cop-free, then the above path is blocked and (v 1 , v 2 ) is available, however no cop is there to traverse it and cross the loop.The last remaining cop cannot cross since it would mean that, at some point in time, either v 1 is cop-occupied and (v 1 , v 2 ) is available or v 1 is cop-free and (v 1 , v 2 ) is available, a contradiction to the specified dynamics.
Assume strictly fewer than λ − 1 cops initially place themselves at the vertices of G λ .Since there are λ − 1 loops, there exists at least one cop-free loop L i .In general, after the cops are initially positioned, G λ can be partitioned into alternating sequences of cop-occupied and cop-free loops L i .Let O 1 , . . ., O p , respectively F 1 , . . ., F p , stand for the sequences of cop-occupied, respectively cop-free loops, where F 1 is set arbitrarily, and we assume F i is between O i (clockwise) and O i+1 (counterclockwise).Moreover, for i = 1, 2, . . ., p, let |F i | = f i and |O i | = o i .The cardinality p of the two sequence sets is the same, since two non-maximal adjacent cop-occupied subsequences, i.e., with no cop-free loop between them, form one bigger cop-occupied sequence; a similar observation holds for cop-free sequences.By the reasoning above, it holds p ≥ 1.An example initial placement on G 7 is given in Figure 8: The sequences of copoccupied and cop-free loops formed are The following proposition provides us with a necessary condition in order for the cops to win against a robber placed on some cop-free sequence of loops.For a sequence . ., L f } be a cop-free sequence of loops with the robber lying on some vertex within V (F ) not adjacent to a cop.Let L cc , respectively L c , stand for the cop-occupied loop adjacent to the counter clockwise of F , respectively to the clockwise of F .At least f + 1 cops must be able to c-cross L cc , and another f + 1 cops to cc-cross L c , in order for the cops to win.
Proof: In contradiction, and without loss of generality, assume whenever f + 1 cops can cc-cross L c at most f cops can c-cross L cc .Assume L f is clockwise adjacent to L cc , L i is clockwise adjacent to L i+1 for i = 1, 2, . . ., f − 1, and L c is clockwise adjacent to L 1 ; see Figure 9. From now on, consider that the dynamics of the graph force the single edge connecting L c to L 1 to be unavailable in all graph instances.So, no cop may leave L c and reach L 1 in F by moving counterclockwise and each graph instance remains connected.
Having crossed L cc , the f cops may all move to L f .By Proposition 3, at most f − 1 cops can ccross L f .For some i ≥ 1, assume f − i cops have c-crossed L f +1−i .Then, by Proposition 3, at most To win, the robber has a feasible strategy to evade indefinitely, that is, to be placed at the vertex in L 1 connected by the (always unavailable) edge to L c as discussed above.No cop can arrive to L 1 from L c due to the missing edge, and since at most 1 cop can c-cross L 2 , no cop can ever c-cross L 1 .Now, we are ready to show, in Proposition 5, how the robber can identify a cop-free sequence to employ the winning strategy demonstrated in Proposition 4.
For integers i, w i , w i , where . ., F i−wi } stand for the set including F i and the w i cop-free sequences nearer to F i in clockwise fashion, and F cc (i, w i ) = {F i , F i+1 , . . ., F i+w i } stand for the set including F i and the w i cop-free sequences nearer to F i in counterclockwise fashion.In a similar manner, for the cop-occupied sequences, let O c (i, w i ) = {O i , O i−1 , . . .O i−wi } and O cc (i, w i ) = {O i , O i+1 , . . ., O i+w i }.For a cop-occupied sequence O i , we say that o i = |O i | cops (choosing one per loop in O i ) are its occupant cops.If strictly more than o i cops lie at vertices of O i , then this surplus of cops are referred to as extra cops.Proposition 5.If there exists a cop-free sequence F i in G λ such that at least one of the following holds: (a) strictly fewer than Fj ∈Fc(i,wi) f j extra cops lie within vertices in O c (i, w i ), for all integers w i , where 0 ≤ w i ≤ p − 1, (b) strictly fewer than Fj ∈Fcc(i,w i ) f j extra cops lie within vertices in O cc (i + 1, w i ), for all integers w i , where 0 ≤ w i ≤ p − 1, then the robber wins.
Proof: Assume the cops are initially placed and the robber is able to identify a cop-free sequence F i = {L 1 , . . ., L fi }, where, for i = 1, 2, . . ., f i − 1, L i is clockwise adjacent to L i+1 , L c is clockwise adjacent to L 1 and L fi is clockwise adjacent to L cc , like in Figure 9, such that at least one of (a) and (b) holds.
Without loss of generality, we hereby consider only case (a); the other case follows in similar manner by symmetry.
The robber strategy is simply to place itself at vertex v in L fi , which is adjacent to vertex u in L cc and, from now on, consider that the dynamics always force edge (v, u) to be unavailable.We now show that no cop can cc-cross L fi , therefore the robber wins.
For w i = 0, strictly fewer than f i extra cops lie in O c (i, 0) = {O i }.Hence, by using only cops within O c (i, 0), at most f i cops can cross L c : the extra cops in O i and, by Proposition 3, at most one occupant cop, e.g., the one occupying L c by its initial placement.By Proposition 4, the robber wins.
Inductively, assume that, for some w where 0 ≤ w < p − 1, for all w i ≤ w it holds that, by only using cops within O c (i, w i ), at most f i cops can cross L c .If we consider w + 1, by assumption, for w i = 0, 1, . . ., w + 1, strictly fewer than Fj ∈Fc(i,wi) f j extra cops lie within O c (i, w i ).Let f < Fj ∈Fc(i,w) f j be the number of extra cops within O c (i, w) and f < Fj ∈Fc(i,w+1) f j be the number of extra cops within O c (i, w + 1).Then, We consider two cases: Fj ∈Fc(i,w) f j .By the inductive assumption for w, by only using cops within O c (i, w), at most f i cops can cross L c .Since at most f i cops can cross L c for all possible w i ≤ w, then by Proposition 4, the robber wins.By assumption, their number is f < Fj ∈Fc(i,w) f j and the proof follows as in the first case.
Proposition 6. Assume strictly fewer than λ − 1 cops are initially placed on G λ .Then, there exists a cop-free sequence F i in G λ such that at least one of conditions (a) and (b) in Proposition 5 holds.
Proof: Since strictly fewer than λ − 1 cops are initially placed on G λ , and G λ has exactly λ − 1 loops L i , then by pigeonhole principle there exists at least one loop with no cop on its vertices, and so at least one cop-free sequence in G λ .By contradiction, suppose that for every cop-free sequence F i in G λ (i) there exists w i such that at least Fj ∈Fc(i,wi) f j extra cops lie within O c (i, w i ), and (ii) there exists w i such that at least Fj ∈Fcc(i,w i ) f j extra cops lie within O cc (i + 1, w i ).Consider some cop-free sequence, say F i1 , where i 1 = 1 without loss of generality.By (ii), there exists some (minimum-value) w i1 such that at least Fj ∈Fcc(i1,wi 1 ) f j extra cops lie within O cc (i 1 + 1, w i1 ).Let F i2 = F i1+1+wi 1 be the first cop-free sequence to the counterclockwise of O cc (i 1 + 1, w i1 ).Then, by (ii), there exists some (minimum-value) w i2 such that at least Fj ∈Fcc(i2,wi 2 ) f j extra cops lie within O cc (i 2 + 1, w i2 ).We proceed with such statements, inductively, until we reach F i l , for which there exists (minimum-value) w i l such that at least Fj ∈Fcc(i l ,wi l ) f j extra cops lie within O cc (i l + 1, w i l ) and i l + 1 + w i l ≥ p + i 1 .That is, we have performed a full round on G λ .There are three cases to consider with respect to the value i l + 1 + w i l .
• If i l + 1 + w i l = p + i 1 , then, for i z = i 1 , i 2 , . . ., i l , sets O cc (i z + 1, w iz ) form a partition of the cop-occupied space in G λ .By assumption, for each such i z , at least Fj ∈Fcc(iz,wi z ) f j extra cops lie within O cc (i z + 1, w iz ).Summing it all, l z=1 Fj ∈Fcc(iz,wi z ) f j = p j=1 f j extra cops lie within the cop-occupied sequences, since ∪ l z=1 F cc (i z , w iz ) contains all cop-free loops in G λ .• If i l +1+w i l = p+i y , for some y > 1, then the last interval fully covers some already defined intervals starting at i 1 , i 2 , . . ., i y−1 .In this case, for i z = i y , i y+1 , . . .i l , sets O cc (i z +1, w iz ) form a partition of the cop-occupied space in G λ .By assumption, for each such i z , at least Fj ∈Fcc(iz,wi z ) f j extra cops lie within O cc (i z + 1, w iz ).Summing it all together as in the previous case, at least l z=y Fj ∈Fcc(iz,wi z ) f j = p j=1 f j extra cops lie within the cop-occupied sequences.• If p + i y < i l + 1 + w i l < p + i y+1 , for some y > 1, then the last interval fully contains intervals starting at i 1 , i 2 , . . ., i y−1 and partially overlaps with interval i y .Let i l + 1 + w i l = p + i y + x for some 1 ≤ x ≤ i y+1 − i y .There are strictly fewer than f iy + f iy+1 + . . .For the rest of the graph, for z = y + 1, . . ., l − 1, at least Fj ∈Fcc(iz,wi z ) f j extra cops lie within O cc (i z + 1, w iz ).Summing it all together, at least p j=1 f j extra cops lie within the cop-occupied sequences, since each f j is considered once in the above calculations.
In all three cases, considering occupant cops and extra cops together, it follows there are at least p i=1 (o i + f i ) = λ − 1 cops in G λ , since the number of loops in all the sequences is exactly the number of loops in G λ .Lemma 3.For any G λ ∈ G, it holds c t (G λ ) ≥ λ − 1.
Proof: Follows by the combination of Propositions 5 and 6.
Conclusions
In this paper, we consider the topic of playing Cops and Robbers games on dynamic graphs.We show how the cop number can be computed in the offline case, where all graph dynamics are known a priori, via a reduction to a reachability game.In the online case with a connectedness restriction, we show a nearly tight bound on the cop number of a family of sparse graphs.
In the future, considering the online case, we would like to tighten the bound for sparse graphs, and also consider dense graphs.
•
If f * < f i−(w+1) , by inductively applying Proposition 3, move all f * extra cops in O i−(w+1) to become occupant cops of some free loops in F i−(w+1) .Let F * , where |F * | = f * , denote the formerly free loops which are now occupied.We now reset O i−(w+1) to a larger sequence O i−(w+1) containing all loops in F * and O i−(w+1) .We wish to use cops only within O c (i, w+1) = O c (i, w+ 1) \ {O i−(w+1) } ∪ {O i−(w+1) }.Since no extra cops remain within O i−(w+1) , we focus only on the number of extra cops within O c (i, w). | 13,179.6 | 2020-06-29T00:00:00.000 | [
"Mathematics"
] |
Catalytic Properties of Phosphate-Coated CuFe2O4 Nanoparticles for Phenol Degradation
Copper ferrite (CuFe2O4) nanoparticles were prepared using the sol-gel autocombustion method and then coated with phosphate using different treatments with H3PO4. The structural and chemical properties of the phosphate-coated CuFe2O4 nanoparticles were controlled by changing the concentration of H3PO4 during the coating process. The prepared nanoparticles were characterized using XRD, FTIR, SEM, and EDS which provided information about the catalysts’ structure, chemical composition, purity, and morphology. The catalytic and photocatalytic activities of the phosphate-coated CuFe2O4 samples were tested and evaluated for the degradation of phenol using HPLC. The prepared nanoparticles successfully emerged as excellent heterogeneous Fenton-type catalysts for phenol degradation. The phosphate-coated CuFe2O4 catalysts exhibited a higher catalytic activity compared with the uncoated CuFe2O4 ones. Such a higher catalytic performance can be attributed to enhanced morphological, electronic, and chemical properties of the phosphate-coated CuFe2O4 nanoparticles. Additionally, the phosphate-coated CuFe2O4 nanoparticles also revealed a higher catalytic activity compared with TiO2 nanoparticles. Different experimental conditions were investigated, and complete removal of phenol was achieved under specific conditions.
Introduction
The rapid development in industries increased the amount of wastewater discharged into water bodies. The discharged wastewater contains different organic and inorganic pollutants which are highly toxic and harmful to the environment. These pollutants resist degradation and are environmentally persistent which required developing new approaches to tackle this problem [1,2]. The discharged industrial waste contains chemicals like cyanides, mono-or polycyclic aromatics, mercaptans, phenols, ammonia, and sulfides, which are affecting the aquatic habitat via halting algae growth, oxygen depletion, and altering the properties of water [3]. Phenol is one of the abundant hazardous pollutants in wastewater. Phenol was found to be teratogenic, toxic, highly persistent in the environment, and nonbiodegradable [4][5][6]. Once it is discharged into water bodies, it presents a threat to aquatic systems and its inhabitants as well as humans' health [7]. Phenol in wastewater can be treated by various physical and chemical techniques, such as microwave [8,9], membrane technology [10], electrochemical [11,12], thermal [13], and physiochemical processes [14,15]. One of the most adopted techniques in wastewater treatment is the chemical oxidation including the advanced oxidation processes [3,16].
Advanced oxidation processes (AOP) include different systems like Fenton, photocatalysis, and UV/H 2 O processes, which mostly proceed via active nonselective radicals such as hydroxyl radicals (HO·). The radicals are produced through the decomposition of strong oxidants like hydrogen peroxide. In most of these processes, HO • oxidizes phenol and its derivatives to cyclic intermediates that are converted to organic acids which are then mineralized to CO 2 and H 2 O [17]. Recent studies showed that heterogeneous catalysts combined with the advanced oxidation process (AOP) can be used for the degradation of aromatic organic compounds in wastewater [18,19]. While zeolites, clays, and oxide materials were found to be some of the efficient systems for the AOP, the most used ones were mixed iron oxide nanoparticles [20][21][22][23].
Ferrites are iron oxides with the general formula MFe 2 O 4 (M = Ni, Mn, Co, and so on). Spinel ferrites exhibit cubic close packing of oxide ions that form tetrahedral and octahedral coordination, where the divalent metal ions are incorporated. Ferrites have a large number of applications due to their being moderately cheap, highly efficient, recyclable, magnetic, and catalytic properties. They are often used in gas sensors, energy storage, semiconductors, magnetic-based separation, catalysis, and refractory materials. Reactions like decomposition of cyclic organic peroxides, oxidation of propane, oxidation of phenol, and decomposition of hydrogen peroxide are catalyzed by ferrites [13,24]. Copper ferrites are thermally stable magnetic particles which are frequently employed in various environmental applications. CuFe 2 O 4 nanoparticles are used as catalysts and gas sensors [19,25,26]. Ferrites tend to agglomerate due to van der Waals forces, high surface energy, and magnetic dipolar interaction. One of the solutions employed to overcome this problem is coating [27]. The organic coating includes polyamides, polyvinyl alcohol, epoxy, and silicone resins, while the inorganic coating includes oxides and phosphates. Among these coatings, phosphate stands out for its high electrical resistivity, the simplicity of preparation, and high adhesion to the substance surface [28]. The phosphate coating is often done through a phosphate bath or immersing the material in phosphoric acid to form phosphate layer on its surface.
Phosphate doping has improved the catalytic activities of oxide catalysts such as TiO 2 [29][30][31] and BiVO 4 [32], and, consequently, it is expected to improve the catalytic properties of ferrites. In our previous work [19], CuFe 2 O 4 nanoparticles were utilized as efficient Fenton-like catalysts for phenol degradation. The main goal of the present work is to investigate the effect of phosphate coating on the structural, chemical, and catalytic properties of the CuFe 2 O 4 nanoparticles. To our knowledge, there are no reports discussing the catalytic properties of phosphate-coated CuFe 2 O 4 nanoparticles. The obtained results for the degradation of phenol demonstrated that the prepared nanoparticles exhibited a high Fenton catalytic performance with an almost complete degradation of phenol. The results also revealed that phosphate-coated CuFe 2 O 4 nanoparticles exhibited higher catalytic activities than pure uncoated CuFe 2 O 4 ones. The influences of different coating treatments, solution pH, and reaction temperature on the degradation of phenol were investigated. The photocatalytic activity of the phosphate-coated CuFe 2 O 4 nanoparticles was also examined.
Catalyst Preparation.
Firstly, pure uncoated CuFe 2 O 4 nanoparticles were prepared using the sol-gel autocombustion method as described previously [19]. In brief, predetermined amounts of ferric nitrate and the copper (II) nitrate were dissolved in distilled water. Citric acid was then added to the solution and stirred till the acid completely dissolved. The molar ratio of the ferric nitrate, copper (II) nitrate and citric acid was 2 : 1 : 3. After that, the solution was heated to 80°C, and then, ammonia was added until the pH of the solution was around 8. The solution was left to boil until a thick gel was formed, which was kept overnight at room temperature. The gel was then burned and an ash-like product was obtained. The catalyst was crushed and kept for the phosphate-coating step in which the obtained CuFe 2 O 4 powder was treated with acetone for 15 minutes. After that, the powder was filtered and dried in the oven at 70°C. Then Additional comparative studies were carried out using TiO 2 nanoparticles (>99%) purchased from Sigma-Aldrich and used as received.
Catalyst Characterization.
The crystal structures of the uncoated and coated catalysts were determined using X-ray diffraction PANalytical powder diffractometer (X'Pert PRO) which uses Cu-Kα radiation operating at 40 kV and 40 mA at 1.5406 Å, over the 2θ range of 10 o -80 o and a step size equal to 0.02°. The crystallite sizes were calculated by the Debye-Scherrer formula: where K is the Scherrer constant (0.89), λ is the wavelength of the XRD instrument, θ is the diffraction angle, B is the peak full width at half maximum of the intensity plotted against the 2θ profile, and L is the crystallite size in nm [33]. The morphologies of the ferrites were analyzed through scanning electron microscopy FEG Quanta 250 provided with energy-dispersive spectrometer (EDS). The FTIR spectra were recorded using a Bruker ALPHA-Platinum ATR FTIR in the range of 400-4000 cm -1 .
Phenol Degradation Reaction.
Degradation reactions were carried out in a 150 mL glass beaker containing 95.00 mL of 200 ppm phenol and 5 mL of 30% H 2 O 2 . The reaction started after adding 60 mg of catalyst to the solution with continuous stirring. At specific time intervals, samples were withdrawn and filtered using 0.2 μm nylon membrane filters. HPLC was used to analyze the samples in order to determine the remaining concentration of phenol. HPLC measurements were conducted using a Shimadzu machine equipped with a UV detector that was set at 280 nm. The HPLC method was previously developed to follow the phenol degradation process [19].
The following formula was used to calculate the degradation efficiency of phenol: Journal of Nanomaterials where C o is the initial concentration of phenol and C t is the remaining concentration of phenol after a specific time of the reaction. The degradation reactions were modeled using the following first-order expression: where C o is the initial concentration of phenol, C t is the residual concentration of phenol after a specific time of the reaction, and k is the rate contact which can be calculated from the slope of ln (C t /C o ) versus time.
In order to investigate the photocatalytic activities of the prepared catalysts, similar reactions were carried out using a photocatalytic reactor equipped with a metal halide lamp (OSRAM 400 W, 350-750 nm). Figure 2 which clearly shows the typical frequency mode of ferrites at~600 cm -1 due to the stretching vibration of iron-oxygen ions in tetrahedral sites [19,35]. Moreover, additional absorption peaks are observed in the region of 970-1120 cm -1 for the phosphate-coated CuFe 2 O 4 samples, which are attributed to phosphorus-oxygen stretching vibration [36]. The intensity of these peaks increased with the increase of phosphoric acid concentration from 0.35 to 1.5 M.
Phenol Degradation Reactions.
Initially, the nonphotocatalytic performance of the prepared catalysts was investigated at room temperature. The obtained results, displayed in Figure 4, Journal of Nanomaterials Figure 4 indicates that the catalytic performance of CuFe 2 O 4 nanoparticles toward phenol degradation has significantly enhanced after the phosphate-coating process. Clearly, phosphate-coated CuFe 2 O 4 nanoparticles promote a faster phenol degradation as compared to pure uncoated CuFe 2 O 4 ones. The higher catalytic activity of the phosphate-coated CuFe 2 O 4 nanoparticles is most likely due to a higher production of hydroxyl radicals, which in turn facilitates the degradation of phenol. Therefore, it can be suggested that the phosphate coating enhanced H 2 O 2 decomposition to OH radicals. The rate of degradation reactions followed first-order kinetics (Equation 3) with respect to phenol concentration. The reaction rate constants are listed in Table 1. 3.2.1. Effect of Reaction Temperature. Figure 5 shows the effect of reaction temperature on the degradation of phenol using phosphate-coated CuFe 2 O 4 catalysts. The reactions were investigated at three different temperatures which are 25°C, 35°C, and 45°C. The obtained results indicate that phenol degradation increased by increasing the reaction temperature. Reactions at 45°C and 35°C showed complete [37][38][39][40]. Table 2 presents the reaction rate constants calculated at the three reaction temperatures. Clearly, the reaction at 45°C exhibited the highest reaction rate constant value indicating the fastest degradation of phenol.
3.2.2.
Effect of Initial pH. The role of solution pH on the degradation of phenol was investigated. The pH value of the reaction solution was adjusted using 0.1 M NaOH and 0.1 M HCl, and the results are presented in Figure 6. Clearly, the highest degradation of phenol was achieved at the acidic conditions (pH = 3), whereas the degradation at alkaline conditions (pH = 9) proceeded with a much lower extent. The high catalytic activity at acidic conditions can be due to metal leaching from the catalyst to the solution and, consequently, performing as a homogeneous catalyst. At alkaline conditions, H 2 O 2 decomposition produces O 2 and H 2 O, rather than OH radicals, which reduces the degradation extent of phenol [19,41]. The rate constants for the reaction at different pH values are listed in Table 3.
Additionally, the photocatalytic activities of the CuFe 2 O 4 catalysts were investigated, and the results are presented in Figure 7. The results reveal that the activity of the CuFe 2 O 4 catalysts increased after the phosphate coating, which indicates that such coating also enhanced the photocatalytic performance of the CuFe 2 O 4 nanoparticles. Moreover, comparing Figure 7 with Figure 4 clearly reveals that phenol degradation significantly increased for the photoinduced reaction. After In equation 5, the photocatalytic reaction is initiated when the catalyst absorbs a photon which leads to the promotion of an electron in the conductive band (e CB -) and formation of a positive hole in the valence band (h VB + ) [42]. The e CB and h VB + exhibit powerful reducing and oxidizing properties, respectively. In addition to HO • , e CB and h VB + facilitate the degrading of phenol as presented in equations 6, 7, 8, and 9.
TiO 2 is considered one of the most used catalysts in many industrial processes. Therefore, the catalytic activities of phosphate-coated CuFe 2 O 4 and TiO 2 towards phenol degradation were compared. The results of photoinduced and nonphotoreactions are presented in Figure 8. Under nonphotoreaction conditions (Figure 8(a)), an almost complete phenol degradation was achieved after 4 hours using the phosphate-coated CuFe 2 O 4 catalyst, while only 5% of phenol was removed by TiO 2 . This result clearly indicates that the coated CuFe 2 O 4 catalyst is more active than the TiO 2 catalyst. Compared to Figure 8(a), Figure 8(b) highlights the higher degradation of phenol by both catalysts, TiO 2 and CuFe 2 O 4 , under photoinduced reaction conditions. However, the phosphate-coated CuFe 2 O 4 catalyst is still significantly more active than TiO 2 towards phenol.
Conclusions
The prepared sol-gel autocombustion
Data Availability
The XRD, SEM, and IR data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 3,126.2 | 2019-03-03T00:00:00.000 | [
"Chemistry",
"Engineering"
] |
Perturbational blowup solutions to the compressible Euler equations with damping
Background The N-dimensional isentropic compressible Euler system with a damping term is one of the most fundamental equations in fluid dynamics. Since it does not have a general solution in a closed form for arbitrary well-posed initial value problems. Constructing exact solutions to the system is a useful way to obtain important information on the properties of its solutions. Method In this article, we construct two families of exact solutions for the one-dimensional isentropic compressible Euler equations with damping by the perturbational method. The two families of exact solutions found include the cases \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\gamma >1$$\end{document}γ>1 and \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\gamma =1$$\end{document}γ=1, where \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\gamma$$\end{document}γ is the adiabatic constant. Results With analysis of the key ordinary differential equation, we show that the classes of solutions include both blowup type and global existence type when the parameters are suitably chosen. Moreover, in the blowup cases, we show that the singularities are of essential type in the sense that they cannot be smoothed by redefining values at the odd points. Conclusion The two families of exact solutions obtained in this paper can be useful to study of related numerical methods and algorithms such as the finite difference method, the finite element method and the finite volume method that are applied by scientists to simulate the fluids for applications.
by the adiabatic γ-law. The constant α ≥ 0 is the damping coefficient.
System (1) is one of the most fundamental equations in fluid dynamics. Many interesting fluid dynamic phenomena can be described by system (1) (Lions , 1998a;Lions , 1998b). The Euler equations (α = 0) are also the special case of the noted Navier-Stokes equations, whose problem of whether there is a formation of singularity is still open and long-standing. Thus, the singularity formation in fluid mechanics has been attracting the attention of a number of researchers (Sideris 1985;Xin 1998;Suzuki 2013;Lei et al. 2013;Li and Wang 2006;Li et al. 2013).
Among others, we mention that in 2003, Sideris-Thomases-Wang (Sideris et al. 2003) obtained results for the three dimensional compressible Euler equations with a linear damping term with assumption γ > 1, that is, system (1) with N = 3 and γ > 1. They discovered that damping prevents the formation of singularities in small amplitude flows, but large solutions may still break down. They formulated the Euler system as a symmetric hyperbolic system, established the finite speed of propagation of the solution, and some energy estimates to obtain local existence as well as global existence of the solution. For larger solution, they showed that the solution will blow up in a finite time by establishing certain differential inequalities.
In this article, we consider the one dimensional case of system (1): More precisely, we apply the perturbational method to obtain the following main results. Theorem 1 For system (3) with γ > 1 and α > 0, one has the following family of exact solutions with parameters ξ , ρ(0, 0) > 0, a 0 > 0 and a 1 : where ρ γ −1 (t, 0) is given by a(t) and b(t) satisfy the following ordinary differential equations: Remark 2 The ordinary differential equation (O.D.E.) (6) will be analyzed in section 2 and it is well-known by the theory of ordinary differential equations that the solutions of system (7) exist and is C 2 as long as f and g, which are functions of ä, ȧ and a, are continuous.
Theorem 3 For the family of exact solutions in Theorem 1, we have the following five cases.
(iii) If ξ < 0, then the solution (4) blows up on a finite time.
Moreover, we show that the singularity formations in the cases iii), iv) and v) above are of essential type in the sense that the singularities cannot be smoothed by redefining values at the odd points. This is an improvement of the corresponding results in Yuen (2011).
Theorem 4 For system
where a(t) and b(t) satisfy the following ordinary differential equations: Theorem 5 For the family of exact solutions in Theorem 4, we have the following five cases.
Analysis of an O.D.E.
Consider the following initial value problem.
where γ ≥ 1, α > 0 and ξ ∈ R are constants. We set Lemma 6 For system (12), if T * is finite, then the one-sided limit Proof Note that we always have Suppose lim t→T * a(t) > 0. Then we can extend the solution of (12) to [0, T * + ε) by solving the following system.
This contradicts the definition of T * . Thus, the lemma is established. (12), we have the following three cases.
Thus,
As A/α < 0, we have a(t) < 0 for all sufficiently large t. This is impossible as T * = +∞. Thus, Case 3. is established.
Remark 8
The case for ξ = 0 will be analyzed in the proof of Theorem 3.
Proofs of the Theorems
Proof of Theorem 1 We divide the proof into steps.
Step 1. In the first step, we show a lemma.
Thus, we have
Taking integration with respect to x, we obtain On the other hand, multiplying ρ γ −2 on both sides of (3) 1 , we get From (33), we have and Substituting (36), (32) and (35) into (34), one obtains the relation claimed in the lemma.
Step 2. We set where c := c(t) and b := b(t) are functions of t. Then, (29) is transformed to where we arrange the terms according to the coefficients of x.
Step 3. We use the Hubble transformation: and set the coefficient of (38) to be zero. Thus,
Note that we have the novel identity
Multiplying the both sides of (40) by a γ +1 , it becomes for some constant ξ.
Step 4. With (39), we set the coefficient of x in (38) to be zero. Thus, b satisfies where Last Step. With (39) and setting the coefficient of 1 in (38) to be zero, we are required to solve where (38) ... a a + αä a + γȧä a 2 + γ αȧ 2 a 2 = 0. Next, we prove Theorem 3 as follows.
Proof of Theorem 3 For ξ > 0, case i) and case ii) of Theorem 3 follow from Case 1. and Case 2. of Lemma 7.
For ξ < 0, by Case 3. of Lemmas 6 and7, there exists a finite T * > 0 such that the onesided limit of a(t) is zero as t approaches to T * . It remains to show T * is not a removable singularity of ȧ/a. To this end, suppose one has Then, Thus, the singularity is of essential type and case iii) of Theorem 3 is proved.
For ξ = 0, (6) 1 becomes which can be solved by using integral factor. The solution is Thus, a(T ) = 0 if a 1 > 0. Also, a(T ) = 0 if a 1 < 0 and a 0 < −a 1 /α, where T := 1 α ln a 1 a 1 +a 0 α > 0. As (T, x) is an essential singularity of u(t, x) for any x. Thus, cases iv) and v) of Theorem 3 are established. The proof is complete.
Proof of Theorems 4 and 5
The corresponding relation of Lemma 9 for γ = 1 is With similar steps, one can obtain the family of exact solutions in Theorem 4.
Note that (10) is a special case of (12) and the arguments in the proof of Theorem 3 hold for γ = 1. Thus, the results for Theorem 5 follows.
Conclusion
The complicated Euler equations with a damping term (1) do not have a general solution in a closed form for arbitrary well-posed initial value problems. Thus, numerical methods and algorithms such as the finite difference method, the finite element method and the finite volume method are applied by scientists to simulate the fluids for applications in real world. Thus, our exact solutions in this article provide concrete examples for researchers to test their numerical methods and algorithms. | 1,929.2 | 2016-02-27T00:00:00.000 | [
"Mathematics"
] |
A renormalisation group approach to the universality of Wigner’s semicircle law for random matrices with dependent entries
In this talk, we show that if the non Gaußian part of the cumulants of a random matrix model obey some scaling bounds in the size of the matrix, then Wigner’s semicircle law holds. This result is derived using the replica technique and an analogue of the renormalisation group equation for the replica effective action.
Introduction
Random matrix theory (see the classical text [1]) first appeared in physics in Wigner's work on the level spacing in large nuclei.Since then, it has proved to have multiple applications to physics and other branches of science, see for instance [2].Most of these applications rely on the universal behaviour of some of the observables for matrices of large size.A simple example is Wigner's semicircle law for the eigenvalue density that holds in the large N limit for matrices whose entries are independent and identically distributed.
Understanding the universal behaviour of eigenvalue distributions and correlations ranks among the major problems in random matrix theory.In this respect, the renormalisation group turns out to be a powerful technique.Introduced in the context of critical phenomena in statistical mechanics by K. Wilson to account for the universality of critical exponents, the latter has also proved to be useful in understanding probability theory.For instance it leads to an insightful proof of the central limit theorem, see the review by G. Jona-Lasinio [3] and references therein.
The renormalisation group has been used to derive the semicircle law for random matrices in the pioneering work of E. Brézin and A. Zee [4].In the latter approach, the renormalisation group transformation consists in integrating over the last line and column of a matrix of size N + 1 to reduce it to a size N matrix.This leads to a differential equation for the resolvent G(z) = 1/N Tr (z − M ) −1 in the large N limit whose solution yields the semicircle law.
In this talk, we follow a different route: We first express the resolvent as an integral over replicas and introduce a differential equation for the replica effective action.This differential equation is a very simple analogue of Polchinski's exact renormalisation group equation [5].It is used to derive inductive bounds on the various terms, ensuring that the semicircle law is obeyed provided the cumulants of the original matrix model fulfil some simple scaling bounds in the large N limit.
This talk is based on some work in collaboration with A. Tanasa and D.L. Vu in which we extend Wigner's law to random matrices whose entries fail to be independent [6], to which we refer for further details.There have been other works on such an extension, see [7], [8] and [9].
What are random matrices ?
A random matrix is a probability law on a space of matrices, usually given by the joint probability density on its entries, Thus a random matrix of size N is defined as a collection of N 2 random variables.However, there is a much richer structure than this, relying notably on the spectral properties of the matrices.
Here we restrict our attention to a single random matrix.Note that it is also possible to consider several random matrices, in which case the non commutative nature of matrix multiplication plays a fundamental role, leading to the theory of non commutative probabilities.
There are two important classes of probability laws on matrices.
• Wigner ensemble: The entries are all independent variables, up to the Hermitian condition M ij = Mji.
• Unitary ensemble: The probability law is invariant under unitary transformations for any unitary matrix U ∈ U(N ).
The only probability laws that belong to both classes are the Gaußian ones up to a shift of M by a fixed scalar matrix.
The main objects of interest are the expectation values of observables, defined as (5) Among the observables, the spectral observables defined as symmetric functions of the eigenvalues of M , play a crucial role in many applications.This is essentially due to their universal behaviour: In the large N limit, for some matrix ensembles and in particular regimes, the expectation values of specific spectral observables do not depend on the details of the probability law ρ(M ).
Universality is at the root of the numerous applications to physics and other sciences, since the results we obtain are largely model independent.Among the applications to physics, let us quote the statistics of energy levels in heavy nuclei, disordered mesocopic systems, quantum chaos, chiral Dirac operators, ...
Wigner's semicircle law
In this talk, we focus on the eigenvalue density, defined as In particular, a universal behaviour is expected in the large N limit for some ensembles.For a Gaußian random Hermitian matrix ρ(M ) ∝ exp − 1 2σ 2 Tr(M 2 ), the eigenvalue density obeys Wigner semicircle law, Empirically, ρ(λ) may be determined by plotting the histogram of eigenvalue of a matrix taken at random with a given probability law, see figure 1.
The derivation of Wigner's semicircle in the large N limit is based on the resolvent (also known as the Green function) Then, the density of eigenvalues is recovered as (2.5) our computation of the average value of the trace and its function very helpful.e average density of state for the Gaussian Unitary Ensemble GUE mit an that in the large N limit, the ar law holds in fact for a much e GUE.Chapter 3 is devoted to ge N limit, the distribution of eigenvalues might become continuous.ntinuous probability measure ⇢ on the real line, instead of Green's f Stieltjes transform which is defined by fined and analytic for z outside the support of ⇢, in particular in f the complex plane.If the support of the probability measure ⇢ is ments then ⇢ can be obtained from its Stieltjes transform through es transform of a probability measure can also be regarded as the ts (2.10) 15 where we have used the relation In the large N limit, for the Gaußian model, the resolvent obeys the self consistency equation (also known as the Schwinger-Dyson equation), see for instance [10], section VII.4, Its solution that behave as 1/z for large z is Taking the cut of the square root on the negative real axis, we obtain the Wigner semicircle law (7) in the large N limit.The semicircle law is not limited to the Gaußian case, it also holds for Wigner matrices in the large N limit.A random Hermitian N × N matrix is a Wigner matrix if • real and imaginary parts of upper diagonal elements are independent and identically distributed (i.i.d.) with mean 0 and variance σ; • diagonal elements are i.i.d. with finite mean and variance and independent of the off diagonal ones.
Then, in the limit N → +∞, the eigenvalue distribution of M √ N is the semicircle law (7).
The original proof is of combinatorial nature and involves the expectation of the moments lim To derive this result, the idea is to first factorise ρ for a Wigner ensemble as where ρ is the common probability density of the real diagonal terms and ρ the common probability density of the real and imaginary parts of the off diagonal terms.Then, we expand the trace and integrate over the independent real variables Mii, Re Mij and Im Mij.The power of N in the expectation of a given moment arises from the denominator 1 N k/2+1 and from the number of independent indices in the summations.In the large N limit, the only configurations that survive are counted by Catalan numbers, C l = (2l)!(l!) 2 (l+1) .Since the latter also appear in the following Taylor expansion we conclude that This is the form of the resolvent that leads to Wigner's semicircle law.
Here, we see universality at work: In the large N limit, the eigenvalue density is given by the semicircle law, whatever the probability densities ρ and ρ are.However, this result relies on the independence of the matrix elements.In the next section, we will extend it to matrices whose entries are not necessary independent.
Wigner's law beyond Wigner ensembles
Let us introduce the cumulants, defined through their generating function In the physics terminology, these are the connected correlation functions.
In particular, the Gaußian cumulants vanish beyond the quadratic term Therefore, cumulants of degree higher than 2 are a measure of the deviation from the Gaußian case.
Turning back to the general case, for each cumulant we construct an oriented graph as follows (see figure 2 for some examples): • vertices are distinct matrix indices in the cumulant, • there is an edge from i to j for every Mij.
i j k l i j Since non quadratic cumulants measure deviations from the Gaußian case, if the perturbation is small it is reasonable to expect that the semicircle law is still obeyed.
To state this result, recall that an oriented graph is Eulerian if every vertex has an equal number of incoming and outgoing edges.Equivalently, it means that every connected component admits an Eulerian cycle, i.e. an oriented cycle that passes through all edges, respecting the orientation.Furthermore, let us denote v(G), e(G), c(G) the number of vertices, edges and connected components of G. Theorem 1 (Wigner's law for matrices with dependent entries).Let ρN be a probability law on the space of Hermitian N ×N matrices M such that its cumulants can be decomposed as CG = C G + C G , with C G a Gaussian cumulant and C G a perturbation such that, uniformly in the vertex indices i1, ..., i v(G) (i.e.all constants involved should not depend on these indices), • lim Then, the moments of the eigenvalue distribution of the matrix M √ N converge towards the moments of the semicircle law, with σ given by the Gaussian cumulant MijM kl c = σ 2 δ il δ jk , lim For instance, for the graph i j k l which is not Eulerian, with v = 3, e = 4 and c = 2, the cumulant should obey with K a constant that does not depend on the indices i, j, k and l.On the other hand, for the graph i j , which is Eulerian, with v = 2, e = 4 and c = 1, we impose uniformly in i and j.
As an illustration, we recover the case of Wigner matrices (with finite moments).Indeed, • there is no graph with v ≥ 3 (independence of off diagonal matrix elements); • for v = 1 and v = 2, e ≥ 3, bounds are satisfied because of 1/N e/2 and all moments are assumed to be finite; independence of diagonal and off diagonal elements); (independence of real and imaginary parts and equality of their distributions with mean value 0); = σ 2 is the Gaußian cumulant leading to the semicircle law.
The case of unitarily invariant matrices is critical since the bounds are saturated, see [6].This is consistent since we known the semicircle law is not obeyed by unitary non Gaußian ensembles [13].
It is possible to give a combinatorial proof of this result based on the relation between moments and cumulants, In the moment method, we have to estimate Then, we express the moments in (25) in terms of cumulants using (24) and represent each cumulant as a graph.Because of the trace, one has to draw Eulerian cycles on the graphs after some vertex identifications.
Then, the scaling bounds on the cumulants can be used to show that only Gaussian terms survive.
Proof based on the replica effective action
Let us give a renormalisation group proof of this result based on the replica effective action.The use of replicas in random matrix theory is a classical subject, see for instance [11] or [12].To begin with, let us note that It is convenient to express the logarithm using the replica method.First observe that Then, we express the n th power of the determinant as a Gaußian integral over n replicas of a complex vector of size N (with a factor of π nN included in the measure), which fit into a N × n complex matrix X = (Xi,a) 1≤i≤N 1≤a≤n .
The limit n → 0 may be worrisome, its meaning is as follows.Because of U(n) invariance, any perturbative result in powers of 1/z is a polynomial in n, from which we retain only the linear term.Of course, this may not hold beyond perturbation theory, where replica symmetry breaking can occur.
After averaging over M with the random matrix density ρ(M ), we obtain the following expression for the resolvent, where the replica potential is Because of the logarithm, the potential involves the cumulants and can be expanded over graphs as where s(e) is the source of edge e and t(e) its target.
Let us introduce a replica effective action, obtained by a partial integration The parameter t ranges between 0 (where we have no integration, V (t = 0, X) = V0(X)) and t = 1/z.
The effective potential obeys a semi-group property that follows from Gaußian convolution (see for instance [14], section A10.1), (33) For small s = dt, it translates into the following renormalisation group equation, which is a simple version of Polchinski's exact renormalisation group equation [5], The first term on the RHS is referred to a the loop term, since it creates a new loop in the Feynman graph expansion of the effective action while the second inserts a one particle reducible line and is referred as the tree term, see figure 3. Taking into account the boundary condition V (t = 0, X) = V0(X), it is convenient to write (34) in integral form This allows us to derive inductive bounds in powers of t = 1/z.From a physical point of view, we evaluate the effective potential by a large succession of small partial integrations, with a total weight given by t.Let us stress that in our context this differential equation is merely a tool to control the t dependence of the effective action after integrating with a t dependent propagator.
The effective potential also admits an expansion over graphs, This leads to a graphical interpretation of the action of the two differential operators in the renormalisation group equation, see figure 4. Indeed, in the expansion (36), an edge joining a vertex carrying label i to a vertex carrying j is equipped with a factor a Xi,aXj,a, with a a replica index.Then, the differential operator ∂ ∂X i,a (resp.
∂ ∂X j,a
) removes the outgoing (resp.incoming) half edge.Finally, the remaining half edges are reattached and the vertices identified to yield a new graph on the RHS of (35), with one less edge.These operations are performed on the same graph for the loop term and on distinct ones for the tree term.
Figure 4: Action of the differential operators on the vertices of the effective action
Let us decompose the effective cumulants appearing in (36) into Gaußian ones and perturbations, and expand both in a power series in t = 1/z, The Gaußian terms are those that are constructed using only the Gaußian term in the initial potential V0(X).Even is V0(X) is quartic in X, this does not hold for the Gaußian part of Vt(X), that contains terms of all orders.The perturbation collects all the remaining terms, they contain at least one non Gaußian perturbation from V0(X).
The renormalisation group equation (35) allows us to prove inductively on k that the perturbations C (k) G obey the same scaling bound imposed on C (0) G = C G (0) and that the purely Gaußian terms do not grow to fast, This involves a combinatorial discussion based on the graphical interpretation of figure 4 that can be found in [6].Let us simply mention that the terms that may violate the bounds are of higher order in n.Thus, they are harmless when taking the limit n → 0 before the limit N → +∞.
Finally, using (29) and the renormalisation group equation (35), the resolvent can be expressed as The scaling bounds for the non Gaußian cumulants impose, perturbatively in 1/z, lim Therefore, only the Gaußian cumulants contribute and we recover Wigner's semicircle law.
Conclusion and outlook
In this talk, we have argued that Wigner's semicircle law remains valid for matrices with dependent entries.The deviation from the independent case is measured by the joint cumulants of the entries, which are assumed to fulfil some scaling bound for large N .To establish this result, we have introduced an effective action for the replicas.This effective action obeys a renormalisation group equation that allowed us to prove perturbative bounds on the effective cumulants.As a consequence of these bounds, only the Gaußian terms contribute in the large N limit, thus establishing the validity of Wigner's semicircle law.It may also be of interest to investigate the case of the sum of a random matrix M and a deterministic one A, see for instance [12] where such a model is discussed.In this case, the resolvent is expressed as In our context, the deterministic matrix A induces a non trivial kinetic for the replicas.In particular, if A is a discrete Laplacian, it yields a non trivial renormalisation group flow that bears some similarities with the QFT renormalisation group.In this case, we expect to exploit the true power of the renormalisation group equation, with a discussion of fixed points and scaling dimensions.
Figure 2 :
Figure 2: Examples of graph associated to cumulants | 4,067 | 2017-10-16T00:00:00.000 | [
"Physics"
] |
Design of Web Based Employees Information System Design in SD Kumnamu School Tangerang
Ayroll systems in every company vary, most have used computer-based information systems, but there are still some companies that have not implemented it as in SD KUMNAMU SCHOOL. In this educational institution in the payroll information system is still using the calculation manually and using MS aids program. Excel. In this study using SWOT method as a method used to determine the strengths, weaknesses, opportunities, and system threats that run today through several stages of interview and literature study. Which produces a payroll information system that can manage computerized payroll, perform absentee calculations, automatically calculate monthly salary, allowances calculation, present salary slips and salary reports required every month or every year and others. System design using UML (Unified Modeling Language) tool, while in making system program using MySQL tool to design database and PHP (Hypertext Preprocessor) as programming language.
Introduction
In this instant, the development of computer technology is growing rapidly, especially in the world of work. Information from each field is interrelated, information provided by a field can affect other fields [1]. With the existence of computer technology can help in operational activities in all fields. Rather than the previous process, the community is forced to be able to open up insight and balance the situation.
Payroll is a reward or wage equal to the work that has been given on the basis of a policy that is considered fair, where the payroll is carried out by an accounting company. Kumnamu School is a school that stands on the City of Tangerang Semi Karawaci, this is engaged in education. The number of teachers who received salaries ranged from 85 teachers [2], [3]. At present the payroll process for employees at the Kumnamu School Elementary School is still conventional so that it takes time to present reports and is less accurate. Therefore, to facilitate employee payroll reports an information system is needed that can provide access collaboration between employee absences and overtime, so that the accounting department can get the information needed quickly and accurately [4], [5].
Based on the explanation above, it is necessary to design an employee payroll information system at Kumnamu School Elementary School in delivering fast and accurate information.
Research Method
The research method used in this study is as follows: 2.1. Data Collection Method This method consists of Observation (Observation): To obtain data by observing the object under study, so that accurate data is obtained as a basis for research; interview (Interview): In order to get the material for this research obtained by asking questions directly with the parties concerned; and literature study (Literature review): Obtained from the collection of data and theories from books, papers, and lecture materials as a basis for employee payroll information systems [6], [7].
System
Analysis Method the analytical method used is by using a SWOT analysis based on logic that can maximize Strengths, Weaknesses, Opportunities and threats both internally and externally [8], [9].
Design Method
In making a system program using MySQL tools to design a database, PHP (Hypertext Preprocessor) as a programming language, and UML (Unified Modeling Language) is used to make a diagram-shaped design [10].
3.4. Research that has been done and has a correlation that has similarities with the research discussed in this journal, namely: Based on the 8 review literature above which discusses payroll and its system, this payroll system is made to facilitate the payroll process of employees and salary reports needed, also can improve the performance of educational institutions. On that basis, the basis for this web-based employee payroll system was made.
Problems faced
The process of calculating employee salaries is still conventional with the tools in the form of Ms.Excel application that still requires a long time when the salary calculation is done, so that there is often a delay in making payroll reports to the chairman of the foundation [11][12][13]. Data storage is still in the form of archives, so data loss often occurs when needed. Making employee salary reports requires a relatively long time [14]. This causes delays in the management decision-making process or the chairman of the foundation.
Troubleshooting
It is necessary to make a web-based employee payroll information system application so that the work process can be done quickly, precisely and accurately [15]. It is necessary to create a database system for data storage that is safer than data loss and faster data retrieval when needed. The employee payroll information system application is also designed to make payroll reports so that reports that are made no longer need a long time, and payroll reports can be quickly submitted to the board of directors.
Design Of Web Based Employees Information System Design in SD Kumnamu School
Tangerang (Sri Rahayu) Image Use Case Diagram One system that covers all activities in the employee payroll system process. The picture has 22 Use Cases that are run on Actors that explain the flow in the system, which is as follows: The Admin Head logs in. After logging in, the page that is displayed is Home, and there are several features on the Home page [16], [17]. On this page, the Admin Head inputs employee data. After that the Admin head inputs and imports employee attendance. Coconut Admin who has finished inputting the previous data, then input the teaching load data, and basic salary. Then the Chief Admin inputs the salary slips and salary components. The Chief Admin can find out reports or reports from each employee. While the Chairperson of the Foundation can only check employee payroll reports.
Database Design
Database Specifications The database specifications used in the proposed system are as follows: The amount of load Int 11 The amount of load
Conclusion
As the end of writing this thesis report, the writer gives conclusions based on the discussion and research results in the previous chapter as follows : 1. Payroll information systems of employees who are running still use the Ms.excel application in which data processing takes a long time and in the presentation of reports is less accurate. 2. With a computerized system that provides absent access collaboration that will simplify and speed up the employee payroll process. The new system is designed with web-based as an employee payroll application designed to produce payroll reports that are fast and accurate, so that there are no more delays or errors in inputting employee salary data or completion of payroll reports. | 1,502.2 | 2018-12-27T00:00:00.000 | [
"Computer Science"
] |
The MADS and the Beauty: Genes Involved in the Development of Orchid Flowers
Since the time of Darwin, biologists have studied the origin and evolution of the Orchidaceae, one of the largest families of flowering plants. In the last two decades, the extreme diversity and specialization of floral morphology and the uncoupled rate of morphological and molecular evolution that have been observed in some orchid species have spurred interest in the study of the genes involved in flower development in this plant family. As part of the complex network of regulatory genes driving the formation of flower organs, the MADS-box represents the most studied gene family, both from functional and evolutionary perspectives. Despite the absence of a published genome for orchids, comparative genetic analyses are clarifying the functional role and the evolutionary pattern of the MADS-box genes in orchids. Various evolutionary forces act on the MADS-box genes in orchids, such as diffuse purifying selection and the relaxation of selective constraints, which sometimes reveals a heterogeneous selective pattern of the coding and non-coding regions. The emerging theory regarding the evolution of floral diversity in orchids proposes that the diversification of the orchid perianth was a consequence of duplication events and changes in the regulatory regions of the MADS-box genes, followed by sub- and neo-functionalization. This specific developmental-genetic code is termed the “orchid code.”
. Schematic diagram of an orchid flower. Fig. (2). Time-calibrated phylogenetic relationships of the five sub-families of Orchidaceae, modified from Gustafsson et al. [14]. The numbers below the branches indicate the divergence time, as expressed in millions of years ago (Mya). On the right are the images of orchid species that are representative of each subfamily.
Mya, covered with pollinia from the orchid species Meliorchis caribea has enabled researchers to narrow the timeframe of the orchid family's origin, estimating it at [76][77][78][79][80][81][82][83][84] Mya in the Late Cretaceous [12]. A more recent study [14], which includes two new orchid fossils assigned to genera Dendro-bium and Earina [15], confirms the ancient origin of the orchids' most recent common ancestor in the Late Cretaceous (~77 Mya), although the origin of the five orchid subfamilies is dated ~1-8 Mya prior to the previous estimates [14] (Fig. 2). Both calibration analyses [12,14] were conducted using molecular phylogenetic reconstructions based on plastid DNA sequences (matK and rbcL), highlighting the relevance of molecular analyses in the study of the origin and evolution of orchids.
Although orchids possess many traits that are "unique" in the plant kingdom, such as highly specialized pollination strategies, diversified flower morphology, peculiar ecological strategies and developmental reproductive biology, molecular studies on this family are scarce when compared with those of other species-rich plant groups [16]. The genome projects of two orchid species, Phalaenopsis aphrodite (Project ID 53151) and P. equestris (Project ID 53913), are under development, although they are not yet available for release. The OrchidBase is a freely available collection of expressed nucleotide sequences that provides integrated information on ESTs from Phalaenopsis orchids (http://lab.fhes.tn.edu.tw/ est) [17]. The establishment of such public resources is important, as it can facilitate the experimental design of studies on orchid biology.
In this review, we will examine the molecular mechanisms underlying the development of the flower, which is the most specialized and diversified orchid structure, with a particular emphasis on the role played by the MADS-box genes in the formation and evolution of the floral organs.
THE GENETICS OF FLOWER DEVELOPMENT: THE MADS-BOX GENES FAMILY
The acronym MADS box is derived from the initials of four loci, MCMI of Saccharomyces cerevisiae, AG of Arabidopsis thaliana, DEF of Antirrhinum majus and SRF of Homo sapiens, all of which contain the MADS-box domain, a conserved 56-amino-acid DNA-binding domain [18]. The MADS-box family has evolved from a region of the topoisomerase II subunit A [19] and includes genes encoding transcription factors. The MADS-box genes are present in nearly all major eukaryotic groups, although they constitute a large gene family only in land plants. A gene duplication preceding the divergence of plants and animals gave rise to two main groups of MADS-box genes: type I and type II, which are distinguished on the basis of genomic organization, evolutionary rate, developmental function and level of functional redundancy [20]. The type I genes are divided into three groups, Malpha, Mbeta and Mgamma [21], and are involved predominantly in development of seed, embryo and female gametophyte [22]. The type II genes share a conserved MIKC structure and encode proteins bearing the highly conserved DNA-binding MADS domain (M) at the amino terminus, a poorly conserved I domain and a moderately conserved K domain in the central portion, which are important for protein-protein interactions and the formation of coiled-coil structures, and a variable carboxyl-terminal (C) region that may function as a transactivation domain [23,24]. The type II genes can be further divided into MIKC C and MIKC* genes, which are distinguished by their various intron/exon structures in the I domain [25]. Functional studies have suggested a major specialization of the MIKC* genes in the development of the male gametophyte [26], whereas the MIKC C genes, the best-characterized group of MADS-box genes, which are often referred to simply as the MIKC genes, are involved in many functions related to plant growth and development and are closely linked to the origin of the floral organs and fruits of angiosperms. The genomic organization of the MIKC C genes is generally consistent, with the presence of seven introns and eight exons [24,[27][28][29][30][31].
The regulatory systems controlling the expression of the MADS-box genes include complex feedback and feedforward networks, which are often integrated in a complex cascade of events [24,29,32]. In addition, more specialized mechanisms, such as regulation by small RNAs [33] and epigenetic control [34], have evolved to control the expression of the MADS-box genes. In the future, more data from genome projects and reverse genetic studies will allow us to understand in greater detail the origin and functional diversification of members of this dynamic family of transcription factors [35].
The spatial and functional activity of the floral homeotic genes is exemplified by the elegant ABCDE model of flower development (Fig. 3A) [36,37]. This model was initially developed on the basis of mutant analyses of the model species Arabidopsis thaliana, which exhibits a flower consisting of four concentric whorls of floral organs. With the exception of APETALA2 (AP2), all genes involved in the ABCDE model are MADS-box genes belonging to various functional classes. In Arabidopsis, the expression of the class A genes (APETALA1, AP1) controls the sepal development in whorl 1 and, together with the expression of the class B genes (e.g. PISTILLATA, PI, and APETALA3, AP3) in whorl 2, regulates the formation of petals. The expression of the class B genes in whorl 3, together with the expression of the class C genes (e.g., AGAMOUS, AG), mediates stamen development. The expression of the class C genes alone in whorl 4 determines the formation of carpel. The class D genes (e.g., SEEDSTICK, STK and SHATTERPROOF, SHP) specify the identity of the ovule within the carpel, and the class E genes (e.g., SEPALLATA, SEP), expressed in the entire floral meristem, are necessary for the correct formation of all of the floral organs.
The activity of the MADS-box transcription factors requires the formation of homo-and heterodimers that recognize the conserved nucleotide CC(A/T) 6 GG DNA sequences, which are known as the CArG boxes [23]. After the formation of dimers, MADS-box proteins further interact, leading to the formation of the "floral quartets", complexes that activate floral organ-specific expression programs [38]. For example, the quartet model predicts that the complexes AP1/AP1/SEP/SEP, AP1/SEP/AP3/PI, AG/SEP/AP3/PI and AG/AG/SEP/SEP are present within whorls 1, 2, 3 and 4, respectively, to induce the formation of floral organs (Fig. 3A) [38,39]. Although the ABCDE model is generally conserved [40][41][42][43][44], the increasing identification and functional and evolutionary analysis of the MADS-box genes is highlighting relevant differences in the mechanisms leading to flower development in non-model species, such as orchids, often emphasizing instances in which the MADS-box gene's function could not be extrapolated from structural orthology [45]. Table 1 lists most of the MADS-box genes characterized in orchids, in Fig. (4) their evolutionary relationships are presented, and functions are described in the following sections.
THE ORCHID MADS-BOX GENES OF THE AP1/AGL9 GROUP
The AP1/AGL9 group includes the phylogenetically related MADS-box genes of class A and class E [29,38], which originated during evolution after several duplication events [46,47]. Class A genes belong to the AP1/SQUA-like subfamily (from the APETALA1 and SQUAMOSA locus of Arabidopsis thaliana and Antirrhinum majus, respectively), which is further divided into the paleoAP1-like and the euAP1-like clades [48,49]. The C-terminal region of the AP1/SQUA-like proteins exhibits conserved motifs. The paleoAP1 motif L/MPPWML (also known as the FUL-like motif, from the FRUITFULL locus of A. thaliana) is typical of the paleoAP1-like clade, whereas the euAP1-like clade is characterized by two alternative motifs, RRNaLaLT/NLa (the euAP1 motif) and CFAT/A (the farnesylation motif), the latter evolved from the paleoAP1 motif through a frameshift mutation [48]. The role of the paleoAP1 and farnesylation motifs is not clear, and the absence of both motifs in some AP1/SQUA-like genes does not affect their function [50,51].
As the AP1/SQUA-like genes are present only in angiosperms, their origin might be related to the emergence of the floral perianth. The euAP1-like clade is typical of the higher eudicots, while the paleoAP1 clade is present in both monocots and dicots [48,49].
The E-function genes belong to the SEP-like subfamily (from the SEPALLATA locus of A. thaliana), which are divided into SEP3 and SEP1/2/4 clades (previously known as AGL9 and AGL2/3/4 clades, respectively) [29,46,47,52]. A third clade, AGL6, also belongs to the AP1/AGL9 group [53][54][55][56]. In addition to their role in determining floral organs, almost all of the members of the AP1/AGL9 group of MADS-box genes are also involved in the floral meristem's initiation and development [57]. This finding suggests that the genes of the AP1/AGL9 group could function at the top of the regulatory hierarchy of the MADS-box genes involved in flower development [58][59][60].
In orchids, a number of genes belonging to the AP1/AGL9 group have been identified and functionally char-
SpodoGLO1
Spiranthes odorata Oncidium Gower Ramsey SEP3-like E HM140844 acterized ( Table 1, Fig. 4). The identification of genes that function early during floral transition is the first step toward the elucidation of the molecular mechanisms of floral transition in orchids.
In the orchid Dendrobium Madame Thong-In, the MADS-box genes DOMADS1, DOMADS2 and DOMADS3 are homologous to SEP1, AP1/SQUA and SEP3, respectively. These genes are successively activated during the floral transition and continue to be expressed later in mature flowers [58]. Their expression pattern is quite different when compared with the transcriptional profile of the homologous genes of Arabidopsis, revealing an absence of functional conservation in MADS-box genes functioning during floral transition in flowering plants. DOMADS1, DOMADS2 and DOMADS3, in accordance with almost all of the MADS-box genes involved in the regulation of floral transition, also function in the later stages of flower development [61][62][63][64]. DOMADS1 transcripts are present in the inflorescence meristem, in the floral primordium and in all of the floral organs. The same expression pattern in floral organs is shared by the DOMADS1 ortholog DcOSEP1 of Dendrobium crumenatum [65]. DOMADS2 is expressed early in the apical meristem of the shoot and throughout the process of floral transition; later, its expression is restricted to the column. The transcrip-tion of DOMADS3 is detectable before the differentiation of the flower primordium, and its expression in floral organs is only detectable in the pedicel tissue.
Compared with the activities in the floral transition of the Arabidopsis orthologs AP1, AGL8 and CAULIFLOWER (CAL) [61,66,67], followed by the activation of SEP1, SEP4 and SEP3 in stage 2 of the flower primordium [68][69][70], DOMADS2 and DOMADS3 are activated much earlier. Differences are also observed in the spatial expression pattern, as the transcripts of DOMADS1 and DOMADS2 accumulate in both the inflorescence and the floral meristem, whereas in Arabidopsis, the expression of AGL8 is restricted to the former region and that of AP1 and CAL is confined to the latter region. These differences indicate the evolution of specific regulatory systems controlling the activity of MADS-box genes involved in floral transition in various plant families.
The promoter region of the DOMADS1 gene contains multiple cis-acting elements that regulate the expression of DOMADS1 in the orchid's reproductive organs and, at low levels, in the stem [71]. This promoter contains six CArGbox sequences, which are the binding sites of diverse MADS-box genes and are crucial modulators of their expression [72][73][74][75]. The presence of the CArG-boxes within the DOMADS1 promoter, as well as in the promoters of MADSbox genes of distantly related species (e.g., Arabidopsis), implies that the basic mechanism of regulation of the MADS-box genes through binding to the CArG-box sequences are conserved during the flowering process. In addition, within the promoter of the DOMADS1 gene, there are five DNA-binding sites of the class 1 knox gene DOH1 [71], which is a negative regulator of the expression of DOMADS1 during floral transition that may directly interact with its binding sites to mediate the regulation of DOMADS1 expression [76].
In addition to DOMADS1-3, three other MADS-box genes belonging to the AP1/AGL9 group have been characterized in Dendrobium. The DthyrFL1-3 genes of D. thyrsiflorum are paleoAP1-like genes within the AP1/SQUA-like subfamily, which evolved from a single ancestor common to all monocots [51]. Similarly to the events driving the evolution of several MADS-box gene lineages [48,[77][78][79], a frameshift mutation is considered responsible for the absence within the DthyrFL3 locus of the paleoAP1-like motif present in both DthyrFL1 and DthyrFL2. All three of the genes are transcribed at low levels in vegetative root and leaf tis- sues, at higher levels in ovules and at much higher levels in inflorescences, with increasing transcription levels of DthyrFL1 and DthyrFL2 observed from small to large floral buds [51]. This expression pattern may indicate that the genes are involved in different mechanisms controlling the development of orchid inflorescence.
In the orchid Phalaenopsis amabilis, the genes ORAP11 and ORAP13 belong to the AP1/SQUA-like subfamily, exhibit the typical paleoAP1-like motif and lack the farnesylation motif [80]. Both genes possess a role in the establishment of meristem identity, with initial expression in the inflorescence and floral meristems that is similar to the early functions of FUL in A. thaliana [81] and OsMADS18 in rice [82]. Later expression of both ORAP11 and ORAP13 in the primordia of all floral organs is consistent with the transcriptional profile of the genes OsMADS18 [82] and LtMADS1 of the monocots Oryza sativa and Lolium temulentum [83], respectively, but not with that of the AP1 and FUL genes of Arabidopsis [66,84]. Subsequently, both ORAP genes participate to the development of petals, lips, columns and ovules, with the last role also described for DthyrFL1-3 in D. thyrsiflorum [51] and PFG in petunia [85]. The presence of ORAP11 transcripts in the columns of mature flowers is consistent with the expression pattern of the DOMADS2 gene in Dendrobium Madame Thong-In [58]. ORAP genes are also expressed in vegetative tissues, such as the root and procambial strand region, thereby resembling the FUL, PFG, Os-MADS18 and LtMADS1 genes more than the AP1, SQUA or PEAM4, the AP1 homolog of pea [83][84][85][86][87][88][89]. The expression profile of the ORAP genes suggests that the orchid AP1/SQUA-like genes have retained an ancestral role in the determination of meristem identity, but they have functions that are quite different from those of the "classic" class A genes.
In the orchid Oncidium Gower Ramsey, the OMADS1 gene belongs to the AP1/AGL9 group; in particular, it belongs to the AGL6 clade [90]. OMADS1 is transcribed early in the apical meristem of the orchid, and its role in regulating floral initiation is functionally similar to that of other members of the AP1/AGL9 group, such as AP1 and SEP3 [50,[91][92][93]. The OMADS1 protein is able to form heterodimers with OMADS3, a class B orchid MADS-box protein also involved in the process of floral initiation [94]. However, the expression pattern of OMADS1 in the mature flower, which is restricted to the lip and carpel, does not overlap with that of its orthologs AGL6 of Arabidopsis and ZAG3 of Zea mays, which are expressed in all four flower organs and ovules [53,54]. The heterodimerization activity of OMADS1 is also achieved with OMADS2, an Oncidium class D MADS-box protein that is expressed in the stigmatic cavity and ovary [95]. OMADS1 may represent a class of MADS-box genes with a function similar to that of the carpel-specific MADS-box genes in regulating floral initiation and ovary development in orchids.
In Oncidium, four additional AP1/AGL9-like genes, OMADS6, OMADS7, OMADS10 and OMADS11, have been characterized [96]. Specifically, OMADS6 is a SEP3 ortholog, OMADS11 is closely related to the SEP1/2 orthologs and OMADS7 is closely related to AGL6-like genes within the E-function genes; furthermore, OMADS10 is a paleoAP1 ortholog of orchid. OMADS6, OMADS7 and OMADS11 exhibit a similar expression pattern, whereas OMADS10 has a completely different profile. In contrast with the expression profile of the SEP3 gene and many of its orthologs, which are transcribed only in the three inner whorls of the flower [27,34,47,69,[97][98][99][100], the expression of OMADS6 is observed in all four floral whorls, exhibiting relatively low levels in stamens. This pattern is similar to that of the SEP1/2 genes [27,70,100] and of LMADS3, a Lilium SEP3 ortholog [101], and could be explained by the significant morphological similarities between sepals and petals (tepals) in orchids and lilies. The expression of OMADS11, which is absent in the stamens, resembles that of OMADS6. The expression pattern of OMADS7 overlaps with that of OMADS6 and is similar to that of AGL6 of A. thaliana and ZAG3 of maize [54]. However, the expression profile of OMADS7 is divergent from that of OMADS1, which exhibits an expression pattern restricted to the lip and carpel [90]. Although not identical, the similar expression patterns of the OMADS6, OMADS7 and OMADS11 genes suggest a possible evolutionary conservation of their transcriptional regulation.
Even though most genes of the AP1/SQUA-like subfamily are generally expressed in the early floral meristem and in floral organs and are absent in vegetative tissues [58,66,67,86,102,103], some AP1/SQUA-like genes in monocots are also expressed in leaves [82,104,105]. OMADS10 is only expressed in the leaves, lips and carpels, and this expression pattern indicates a possible functional conservation for specific lineages of AP1/SQUA-like genes in monocots.
THE ORCHID CLASS C AND D MADS-BOX GENES
Within the ABCDE model of flower development, the class C genes regulate the development of carpels and, together with the class B genes, of stamens. The class D genes are primarily involved in the development of ovules. The class C and D genes are sister clades, which appeared after an early duplication event during angiosperm evolution [78]. Two motifs at the C-terminus, the AG motifs I and II, are common to all of the class C and D gene products [78].
In orchids, the number of characterized genes belonging to the C and D classes is smaller than those of the other classes (Table 1, Fig. 4).
In Dendrobium crumenatum, the DcOAG1 and DcOAG2 genes belong to the class C and class D MADS-box genes, respectively [65]. DcOAG1 is an ortholog of AG of A. thaliana, presents an N-terminal extension preceding the MADS domain, which is typical of the class C genes, and its genomic sequence contains the intron 8, which is common in several AG-like genes of class C and was possibly lost in the class D lineage after the divergence of Nympheales from the other angiosperms [78,106]. DcOAG2 is a SEEDSTICK (STK) homolog and is specifically expressed in the ovary. The expression of DcOAG1 is detectable in all of the floral organs and, in accordance with the expression of the AG orthologs observed in some basal angiosperms, is not confined to the reproductive organs [43]. This common expression pattern shared between the AG orthologs of orchids and basal angiosperms indicates that the regulatory mechanisms involved in the expression of these class C genes may have evolved independently.
In Dendrobium thyrsiflorum, DthyrAG1 is a class C gene and DthyrAG2 belongs to class D [107]. Both genes encode the conserved AG motifs at the C-terminus of the protein, and DthyrAG2 encodes an extension of the AG motif, the MD motif YET/AKA/DDXX, which is typical of the monocot D lineage genes and may be involved in determining their interaction with specific protein partners [78]. The DthyrAG1 gene presents an intron 8 located before the stop codon. Both DthyrAG1 and DthyrAG2 are expressed during ovule and flower development, specifically in the rostellum, stigma and stylar canal. In monocots, class D orthologs are generally expressed in ovules [108,109], and the dicots exhibit a similar expression pattern; however, some exceptions have been reported, such as the LMADS2 gene of Lilium longiflorum, which is expressed in the stylum [110], and the ZmZAG2 of Zea mays, which is expressed in the stigma [111]. The differences in expression patterns between the class D lineage genes in monocots and dicots, together with the presence in monocots of the extension of the AG motif, could be related to the acquisition of a novel function for class D genes within monocots. Both the DthyrAG1 and DthyrAG2 genes are also expressed during ovule development, in agreement with the genes of class C and D in other species [27,103,108,112,113]. However, DthyrAG1 is only transcribed early, whereas DthyrAG2 is expressed throughout the process of ovule development, suggesting a prominent role for DthyrAG2 in late ovule development [107].
In Phalaenopsis, the products encoded by the genes Pha-lAG1 and PhalAG2 contain the AG I and II motifs in their Cterminal regions, and PhalAG2 also exhibits the MD motif [114]. PhalAG1 and PhalAG2 belong to the class C and D MADS-box genes, respectively. Both are genes are expressed in all floral organs at the earliest stage of floral development and, later, in the lip and column. Although these genes belong to different classes of AG-like genes, their similar expression patterns strongly suggest a subfunctionalization of the two genes. In contrast to the AG-like genes of the other monocots, which are generally involved in stamen and carpel development and are not expressed in whorls 1 and 2, PhalAG1 and PhalAG2 are also involved in the lip formation [114].
In Oncidium Gower Ramsey, the genes OMADS4 and OMADS2 belong to classes C and D, respectively [95]. Both of their encoded proteins present the AG motifs I and II, and OMADS2 also contains the conserved MD motif, which is specific to class D proteins of monocots. OMADS4 is specifically expressed only in stamens and carpels, thus resembling the expression patterns of other class C genes [111,115,116]. OMADS2 is only expressed in carpels, in accordance with other class D genes [117,118]. Despite the sequence similarity, the expression patterns of OMADS4 and OMADS2 are quite divergent when compared with those of PhalAG1 and PhalAG2 and may reflect a functional evolutionary divergence of the class C/D genes in Oncidium and Phalaenopsis, with a more redundant role in the latter species than in Oncidium [95].
Although only one class C gene has been identified in Dendrobium thyrsiflorumm (DthyrAG1), D. crumenatum (DcOAG1), Oncidium (OMADS4) and Phalaenopsis (Pha-lAG1), a duplication event generated two class C MADS-box genes in the orchid Cymbidium ensifolium (CeMADS1 and CeMADS2), both of which are involved in regulating the development of the gynostemium [87]. Despite their redundant function in the meristem tissue, these two paralogs exhibit temporal and spatial differences in their expression pattern in floral organs, leading to the hypothesis of sub-and neo-functionalization during the evolution of the CeMADS1 and CeMADS2 genes [87]. According to the floral quartet model, the function of CeMADS1 and CeMADS2 genes in column development is enabled through the formation of the tetrameric protein complexes CeMADS1-CeMADS1-class E-class E and/or CeMADS1-CeMADS2-class E-class E. CeMADS1 has a pivotal role in stamen and carpel development; in fact, the Cymbidium naturally occurring mutant multitepal, in which the column is substituted by tepals, continues to express CeMADS2 but not CeMADS1. The transcription of CeMADS1 enhances the formation of the column, followed by the expression of CeMADS2 to complete development correctly. The function of CeMADS2 is primarily maintenance, rather than initiation, and its expression alone is not sufficient to mediate the formation of the column [87].
THE ORCHID CLASS B MADS-BOX GENES
Based on the ABCDE model, the class B MADS-box genes are necessary for the correct development of petals and stamens and include two major lineages, the AP3/DEF-like genes (from the APETALA3 and DEFICIENS loci of A. thaliana and A. majus, respectively) and the PI/GLO-like genes (from the PISTILLATA and GLOBOSA loci of A. thaliana and A. majus, respectively), which appeared after a duplication of an ancestral gene containing a paleoAP3 motif [77,119]. The AP3/DEF-like genes include the paleoAP3 clade and two further clades, TM6 and euAP3, which originated after a second duplication event [77].
The class B genes characterized in orchids are the most numerous and thoroughly studied compared with those of the other classes (Table 1, Fig. 4). A feature common to a high number of the class B MADS-box orchid genes is the expansion of their expression profile into the first whorl of floral organs that may be responsible for the development of petaloid sepals in orchids (Fig. 3B).
In Phalaenopsis equestris, the four class B genes, Pe-MADS2-5, are AP3/DEF-like paralogs that are expressed during developmental stages ranging from early to late inflorescence [120]. Their organ-specific expression pattern demonstrates an absence of functional redundancy. In fact, Pe-MADS2 is strongly expressed in the outer and inner tepals and, at lower levels, in the column; PeMADS3 is strongly expressed in the inner tepals and lips and, to a lesser extent in the column; PeMADS4 is expressed only in the lips and the column; PeMADS5 is expressed in the outer and inner tepals, lips and the column. The expression pattern of these AP3/DEF-like genes in the naturally occurring Phalaenopsis peloric mutant reveals that PeMADS2, PeMADS4 and Pe-MADS5 are involved in specifying the development of the outer tepals, lip and inner tepals, respectively. In addition, PeMADS4 is also involved in column development, and Pe-MADS5 is important for the initiation of stamens [120].
In contrast to its complement of four AP3/DEF-like genes, the genome of P. equestris contains only one PI/GLO-like gene, PeMADS6 [121]. The expression of Pe-MADS6 in the inflorescence meristem and floral primordium highlights its role in initiating floral development. The expression pattern of PeMADS6 in the outer and inner tepals, lip, column and ovary demonstrates its involvement in the development of these floral organs. Furthermore, the persistence of PeMADS6 transcripts in the flower until senescence might correlate the activity of this gene to the flower longevity of orchids [121]. The PeMADS2-5 proteins can interact with PeMADS6 to mediate the development of specific organs [122]. In addition, PeMADS4 and PeMADS6 can form homodimers, and both the PeMADS4 homodimer and the PeMADS6 homodimer/homomultimer can bind the CArG boxes, which are the MADS-box protein-binding motif. Also, the heterodimers PeMADS2-PeMADS6, PeMADS4-PeMADS6 and PeMADS5-PeMADS6 are able to bind the CArG boxes, indicating that, in orchids, the AP3/DEF-like and PI/GLO-like proteins interact in different combinations and revealing the notable complexity of their regulatory functions [122].
In Dendrobium crumenatum, DcOPI is a class B gene belonging to the PI/GLO-like lineage, whereas DcOAP3A and DcOAP3B belong to the paleoAP3 lineage of the AP3/DEF-like genes [65]. Both DcOPI and DcOAP3A are expressed in all whorls of the floral organs. DcOAP3B is expressed in inner tepals and lip, in pollinia and in the column. These three genes are also expressed in the ovary [65].
In Habenaria radiata, three class B MADS-box genes have been identified: HrGLO1 and HrGLO2, which are two PI/GLO-like genes, and HrDEF, which is an AP3/DEF-like gene [123]. HrGLO1 and HrGLO2 are expressed in the outer and inner tepals and in the column, whereas HrDEF is expressed only in the inner tepals and the column [123].
In Oncidium Gower Ramsey, OMADS3, OMADS5 and OMADS9 are class B MADS-box genes belonging to the AP3/DEF-like lineage, whereas OMADS8 is a PI/GLO-like gene [94,124]. OMADS8 is expressed in all of the floral organs and leaves. OMADS3 is expressed in all four flower organs and in leaves, exhibiting an expression pattern similar to that of a number of the AP3/DEF-like genes of the TM6 clade [125]. OMADS5 is only expressed in the outer and inner tepals, in accordance with PeMADS2 of Phalaenopsis [120]. OMADS9 is transcribed in the inner tepals and lip, in agreement with the expression patterns of DcOAP3B, Pe-MADS3 and HrDEF [65,120,123]. Both OMADS5 and OMADS9 are not expressed in stamens and leaves, and their expression profiles are different from that of OMADS3, which is expressed in all flower organs and leaves [94], indicating a functional diversification of OMADS5, OMADS9 and OMADS3 [124]. OMADS5 can form homodimers and heterodimers with OMADS3 and OMADS9, whereas OMADS8 forms heterodimers only with OMADS3, while OMADS3 can form homodimers and heterodimers with OMADS8. OMADS9 can form homodimers and heterodimers with OMADS3 and OMADS5 [124].
In Orchis italica, OrcPI is a class B PI/GLO-like gene [126]. OrcPI transcripts are detectable in all floral organs, and the maintenance of OrcPI transcripts in the flower through anthesis to senescence confirms the relationship between the PI/GLO-like genes and the long-persisting flower longevity of orchids, as also described in Phalaenopsis [127]. The high number of MADS-box gene sequences publicly available has enabled comparative evolutionary studies to determine the selective constraints acting on coding and/or non-coding regions and the eventual traces of adaptive, purifying and neutral selection. Different evolutionary constraints act on the coding and non-coding regions of OrcPI, suggesting a heterogeneous selective pattern of the OrcPI locus [126,127]. Phylogenetic footprinting analysis detected conserved regions within the 5' regulatory sequence of OrcPI and the homologous regions of Oryza sativa, Lilium regale and Arabidopsis thaliana, confirming the wide conservation of regulatory signals required during flower development [128]. A paralog copy of OrcPI, OrcPI2, has been recently identified in O. italica and other members of the Orchidoideae subfamily. The two PI/GLO-like genes exhibit different selective pressures, particularly on the synonymous sites, and seem to have experienced subfunctionalization [129]. In O. italica, four AP3/DEF-like genes are also present (Aceto et al., unpublished data).
Recently, the evolutionary analysis of a number of class B genes from the major subfamilies of Orchidaceae indicated the presence of four distinct clades (from 1 to 4) of AP3/DEF-like orthologs, while the PI/GLO-like genes seem to form a single ancient clade with recent paralogs present only in the Orchidoideae subfamily [123,129,130]. Within the four AP3/DEF-like clades, the genes belonging to clade 2 exhibit relaxation of purifying selection when compared with the other orchid AP3/DEF-like clades and with the PI/GLOlike genes. In Orchidaceae, gene duplication followed by sub-and neo-functionalization, particularly within the class B AP3/DEF-like genes, seems to have played a crucial role in the morphological evolution that resulted in the extreme specialization of the floral perianth [130].
THE ORCHID CODE
In contrast to Arabidopsis and the other eudicots, which exhibit sepals (whorl 1) and petals (whorl 2) with clearly different morphologies, the flowers of orchids and a number of other monocots have phenotypically similar organs (tepals) in the outer whorls 1 and 2. The modification of the ABCDE model attributes this difference to the extension of the expression of the class B genes into the whorl 1, in addition to the expression in whorls 2 and 3 [131,132] (Fig. 3B). However, orchid tepals are distinguished in the outer and inner tepals and, among the latter, the lip has a highly diversified morphology, which is a feature that cannot be satisfactorily explained by the modified ABCDE model.
A decade of molecular studies on the orchid MADS-box genes has strongly enhanced understanding of the mechanisms underlying flower development in this plant family. However, questions remain regarding the evolution and diversification of flower morphology in orchids. Can all of the data obtained from various orchid species be integrated into an evolutionary model to explain the uniqueness of the orchid flower? The recent theory known as "the orchid code" proposes an elegant model describing the development and evolution of the orchid perianth [6,133,134].
The orchid code theory illustrates a developmentalgenetic code that attributes to the class B AP3/DEF-like genes a pivotal role in tepal and lip identity and leaves unchanged the function of the class B PI/GLO-like genes and the functions of the A, C, D and E class genes with respect to the modified ABCDE model.
In contrast to eudicot model species, such as Arabidopsis, in which the identity of petals is realized through the interaction of one AP3/DEF-like and one PI/GLO-like gene product, the orchid code theory suggests that the identity of orchid tepals and lips is determined by the interactions of the products of four paralogous AP3/DEF-like genes belonging to four different clades with the product of one PI/GLO-like gene. The orchid AP3/DEF-like genes are grouped into four well-defined clades: clade 1 (PeMADS2-like) is sister to clade 2 (OMADS3-like), while clade 3 (PeMADS3-like) is sister to clade 4 (PeMADS4-like). Each clade is characterized by a specific expression pattern [133,134].
Under the assumptions of the orchid code theory, the interactions of the clade 1 and clade 2 gene products mediates the development of the outer tepals (whorl 1). The formation of the two lateral inner tepals (whorl 2) is specified by the interaction of high levels of the clade 1 and 2 and low levels of the clade 3 and 4 gene products, whereas the development of the lip, which is a highly modified inner tepal, is determined by the expression of high levels of the clade 3 and 4 gene products, in addition to low levels those of clades 1 and 3 (Fig. 5). Thus, the expression of clade 3 genes differentiates between the inner and outer tepals, whereas the expression of clade 4 genes distinguishes between the two lateral inner tepals and the lip [6,133].
This proposed scheme can also explain the evolution of the zygomorphic orchid flower, starting from an actinomorphic flower composed of six nearly identical tepals in which the ancestor of the current AP3/DEF-like genes was equally transcribed. The duplication and evolution of different cisregulatory elements played a fundamental role in the functional diversification of the four AP3/DEF-like orchid clades. An initial duplication event produced the ancestor of the clade 1 and clade 2 genes and the ancestor of the clade 3 and clade 4 genes. At this stage, the evolution of a more specialized expression of the ancestor of the clade 3 and 4 genes, which was excluded from the outer tepals, might have established an intermediate flower structure, with distinctive outer and inner tepals (Fig. 5). After a second duplication round, clade 3 and clade 4 genes differentiated, and the modularization of their expression led to the evolution of the lip [133,134].
CONCLUSIONS
The ancient scientific interest in the "mystery" of the orchid flower has greatly expanded since the advent of the EVO/DEVO molecular approach. Certainly, the future of studies on the flower development genes in orchids will Fig. (5). The orchid code, modified from Mondragon-Palomino and Theissen [133,134]. The colors indicate the various clades of the AP3/DEF-like genes and their expression profiles in the orchid perianth. A model for the possible initial and intermediate stage of the orchid perianth is also presented. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this paper). greatly benefit from the completion of the genome projects currently in progress and from new and challenging transcriptomic projects, such as the analysis of microRNAs. The continuous and increasing characterization of genes involved in flower development in orchids has clarified many functional and evolutionary aspects of orchid development. The complexity of the expression pattern of the class B MADSbox genes of the Orchidaceae has successfully been simplified and integrated in the specific developmental-genetic code of the orchid perianth, even if the evolutionary role of the recently discovered paralogs of the PI/GLO-like genes still remain to be clarified. A more extensive and detailed analysis of the orchid MADS-box genes belonging to classes A, C, D and E will allow the proposal of a more exhaustive model that could also explain the evolution and diversification of the orchid's reproductive structures. In this context and in the presence of a growing number of characterized loci, it is particularly important to establish a gene nomenclature system that is less ambiguous than the existing one to identify clearly homolog and paralog genes. | 8,500.8 | 2011-07-31T00:00:00.000 | [
"Biology"
] |
Optimization of Welding Parameters for Friction Stir Lap Welding of AA 6061-T 6 Alloy
Friction Stir Welding (FSW) is currently used in many aircraft and aerospace sheet metal structures involving lap joints and there has been growing interest in recent years in utilizing this process for joining aluminum alloys. In this paper, Friction Stir Lap Welding (FSLW) of the 6061-T6 aluminum alloy was carried out to obtain the optimum welding condition for maximum shear strength where the rotational speed, axial load, and welding speed were taken as process parameters. An L-9 orthogonal array, a Taguchi Method with consideration of three levels and three factors was designed and executed for conducting trials. Analysis of variance (ANOVA) and Signal to Noise (S/N) ratio were employed to investigate the influence of different welding parameters on the shear strength and obtain the optimum parameters. The Fisher-Test was also implemented to find the design parameter which had the most important effect on the characteristic of quality. The results indicated that the tool rotational speed had the maximum percentage contribution (51%) on the response (shear strength) followed by the welding speed (38%) and the axial load (8%) while the percentage of error was 3%. However, to confirm the main effects for the means and S/N ratios of the experiment, theoretical shear strength values were computed to predict the tensile strength. The maximum shear strength of 60 MPa was achieved and the effectiveness of the method was confirmed. The optimum parameter combinations that provided higher shear strength were: rotational speed of 1200 rpm, welding speed of 45 mm/min and the axial load of 11.5 kN.
technique invented in TWI in Cambridge, England in 1991 for joining aluminum alloys [1].The welding process majorly encompasses a non-dispensable rotating tool with a design comprising a pin and a shoulder that travels along the longitudinal length of the weld seam and join the work pieces [2].The properties of the welded metal zone usually have low distortion because of lower welding temperature and higher joint strength when compared to the welded metals from conventional welding methods.Hence, this method eliminates the need for gas shielding requirement and joint edge preparations before welding application.Over the years and after decades of research, the technique has proved to be a versatile method in the academia which is energy efficient, environment-friendly and avoids the formation of solidification, cracking and porosity.Aluminum Alloy 6061-T6 is widely utilized in the construction of aerospace structures, such as wings and fuselages in commercial aircrafts, several parts of a remote-controlled model aircraft and helicopter rotor components.AA6061-T6 has also its wide applications in the automotive industry such as parts like chassis and engine parts.In these past few years in these industries, there is an ongoing effort to reduce the weight of aluminum alloy in assemblies of parts which use conventional methods of welding with filler materials.FSW is presently able to weld several aluminum alloys from various ranges of series bolstering the fact that it was contemplated to be non-weld able alloys due to a vast decrease in the strength of the joint in contrast with the base metal.Several researchers have tried to explore this field of industry requirement.Liu et al. [3] joined AA6061-T6 alloy using FSW in butt configuration where the hardness value and the grain size decreased in the weld zone when compared to the base metal.Rathinasuriyan et al. [4] carried out FSW in submerged condition and identified the defect-free samples using radiography technique.Sankar et al. [5] have studied the mechanical properties on AZ31B Mg alloy by FSP in air, under water and under liquid nitrogen.
Various riveted joints in aircraft structures have mostly been substituted by friction stir welded lap joints where one must understand that riveted joints were the major method of joining aerospace structures since its manufacturing began [6].Rivet holes are notoriously known for probable junctures of crack and corrosion propagation.Resistance spot welding (RSW) is also a major choice in welding for various economic reasons such as the omission of fasteners that point considerable weight and cost savings [7].However, Dubourg et al. [8] investigated the lap joints of aluminum alloys of AA2024-T3 and AA7075-T6 using FSW where the welded joint is stronger than the comparable riveted or resistance spot welded lap joints for the major aluminium alloys.Al-Si and Mg-Al-Zn alloys were lap joined using friction stir welding by Chen and Nakata [9] where the stirring pin is plunged into the lower metal to produce a bonding mechanism of mechanical mixing that enhances bonding strength and the lower welding speed avoids cracks and improves the joint strength.Later on, Cao et al. [10] made lap joints on AZ31B-H24 magnesium alloy and A2198-T4 aluminum alloy using an FSW process where the fracture occurred at the heat affected zone R. Chandran The predominant problems that aerospace and automotive industries generally face in metal welding processes are poor weld quality and strength of the weld.This is due to improper selection of parameters which have a significant influence on the strength of the weld that affects the quality and the strength of the bond formation.Ashok Kumar et al. [13] considered axial load as one of the process parameters while optimizing the parameters for maximum tensile strength.Lokesh et al. [14] optimized the process parameters for SFSW using the Taguchi technique in order to get maximum hardness.Rathinasuriyan et al. [15] developed a mathematical model to optimize the parameters using response surface methodology while conducting submerged FSW process.
From the viewpoint of application in the industry after much study, it would be more significant to optimize parameters of the Friction Stir Lap Welding (FSLW) for maximum mechanical properties of the joints.However, due to limited work on this area.In this study shows the optimization of process parameters such as rotational speed (rpm), axial load (kN) and welding speed (mm/min) for friction stir lap welding (FSLW) of 6061-T6 aluminium alloy.
Experimental Setup
The base metal (BM) used for the experiment is commercially available AA6061-T6 alloy which is 300 mm in length, 300 mm in width and 3 mm in thickness.The chemical compositions and mechanical properties are listed in Table 1 and Table 2.
A cylindrical tool with 18 mm shoulder diameter, 6 mm pin diameter and 5 mm pin length, made up of H13 tool steel is used.The tool used is shown in Figure 1.
To make a lap joint, the work pieces were lapped together and clamped on the backing plate of FSW machine and the rotating tool was brought in contact with the top surface of work pieces.The tool was made to travel along the length of the junction where the two sheets met where a metallic bond was formed.The The rotational speed, welding speed, and axial load were considered to be variables for the optimization of FSLW process.The parameters and its levels are shown in Table 3.
Lap-Shear (Tension-Shear) Tests
Lap-shear (tension-shear) tests were carried out as per ANSI/AWS/SAE/D8.9-97[16] on a universal testing machine.Figure 3 shows a typical lap-shear test specimen with is dimensions produced using friction stir lap welding.After welding, the joints were cross-sectioned perpendicular to the welding direction for tensile shear strength tests.The work pieces were polished to 1 µm finish.
For tensile property evaluation, lap shear tests were carried out covering the entire weld length.All tests were conducted on Instron 5500R testing machine as shown in Figure 4, at a constant cross head displacement rate of 5 mm/min.
The maximum load and failure location were recorded for each specimen.
Tensile tests were carried out at room temperature at a crosshead speed of 1 mm/min.The welded area was located in the center of the tensile specimen.In order to solve this, the Taguchi Method helps to specifically design an orthogonal array survey and research the complete range of parameters with a small number of investigations.An L-9 orthogonal array with three levels and three factors was computed and implemented for conducting trials, as shown in
Signal to Noise (S/N) Ratio
In Taguchi method, the term "signal" indicates the desired value for the output characteristics and the "noise" indicates the undesirable value that signifies the output characteristics.The objective of the signal-to-noise ratio is to develop processes that are insensitive to noise.Process parameter setting with highest S/N ratio always yields the optimum quality with minimum variance.In general, signal-to-noise ratio signifies the ratio of mean to the standard deviation [17].
The quality of the welded joints is investigated by considering shear strength as the main characteristic feature.The S/N ratio and means for each of the process parameters were calculated to find the influence of process parameters on the response (shear strength).In this current work, the S/N ratio was chosen according to the proposition of "the larger-the-better" characteristics that indicate its robustness according to [18].
( ) where n is the number of the repetitions and T i is the value of the shear strength of the test on that trail.The average response of S/N ratio and experimental data for each combination of the process parameters are given in Table 5 and Table 6.
Analysis of Variance
Analysis of variance (ANOVA) is a mathematical technique developed by Sir Ronald Fisher, which breaks the total variation down into accounted sources and delivers a way to interpret the results from actual experiments [19].The test was performed to identify the statistically significant process parameters [20].This analysis was carried out for a level of significance of 5%, i.e. for 95% confidence level.The ANOVA results of S/N ratio and means for shear strength are given in Table 7 and Table 8 respectively.Statistically, there is a tool called an F-Test named to find the design parameters that have a significant effect on the quality characteristic.Usually, when F > 4, it means that the specific design parameter accounts for a significant effect on the quality attributes [21].Hence, the rotational speed and the welding speed have the important impact on the quality attributes of the metal and the axial load is less significant.The contribution of the tool pin profile, rotational speed, and welding speed is shown in Figure 6.The most important factor that influenced the FSLW process was the tool rotational speed with a contribution rate of 51% while the percentage of error is 3%.The Figure 7 shows two graphs, each of which represents main effects plot for Means and S/N ratio.Based on the highest values of the S/N ratio and Mean values, the overall optimum process parameters for shear strength are A3, B2, and C3.
The theoretical shear strength value for the optimum process parameters has been calculated from the following equation [11].
( ) Tm is the mean response or means S/N ratio, T 0 is the mean response or mean S/N ratio at an optimal level and n is the number of design attributes that have an impact and affect the quality of the properties.Substituting the values in Equation (2), the predicted tensile strength is 61.358MPa.The shear strength value for the optimum level of process parameters was 63.08 MPa.
Conclusions
In this investigation, the Friction Stir Lap Welding of AA6061-T6 alloy was carried out successfully.The results can be summarized as follows: The parameters affecting friction stir lap welding while joining AA6061-T6 alloy were studied.It is observed that rotational speed and welding speed have a revealing effect on shear strength. The percentage of contribution of FSLW process parameters was evaluated using ANOVA and found that the rotational speed, welding speed, and axial load contribute 51%, 38%, and 8% respectively. The optimum parameter combinations such as the rotational speed of 1200 rpm, welding speed of 45 mm/min and the axial load of 11.5 kN provided a shear strength of 63.08 MPa.
Figure 5 (
Figure 5(a) and Figure 5(b) show the FSW and shear tested samples.There would be numerous experiments that need to be carried out when there is a large number of parameters.
Figure 4 .
Figure 4. Tensile shear test machine along the entire weld length.
et al.
Table 3 .
Process parameters range and their levels.
Table 5 .
Average response table for mean.
Table 6 .
Average response table for S/N ratio.
Table 7 .
ANOVA of means for shear strength.
Table 8 .
ANOVA of means for S/N ratio. | 2,822.4 | 2018-01-16T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
Oral Biofilms from Symbiotic to Pathogenic Interactions and Associated Disease –Connection of Periodontitis and Rheumatic Arthritis by Peptidylarginine Deiminase
A wide range of bacterial species are harbored in the oral cavity, with the resulting complex network of interactions between the microbiome and host contributing to physiological as well as pathological conditions at both local and systemic levels. Bacterial communities inhabit the oral cavity as primary niches in a symbiotic manner and form dental biofilm in a stepwise process. However, excessive formation of biofilm in combination with a corresponding deregulated immune response leads to intra-oral diseases, such as dental caries, gingivitis, and periodontitis. Moreover, oral commensal bacteria, which are classified as so-called “pathobionts” according to a now widely accepted terminology, were recently shown to be present in extra-oral lesions with distinct bacterial species found to be involved in the onset of various pathophysiological conditions, including cancer, atherosclerosis, chronic infective endocarditis, and rheumatoid arthritis. The present review focuses on oral pathobionts as commensal and healthy members of oral biofilms that can turn into initiators of disease. We will shed light on the processes involved in dental biofilm formation and also provide an overview of the interactions of P. gingivalis, as one of the most prominent oral pathobionts, with host cells, including epithelial cells, phagocytes, and dental stem cells present in dental tissues. Notably, a previously unknown interaction of P. gingivalis bacteria with human stem cells that has impact on human immune response is discussed. In addition to this very specific interaction, the present review summarizes current knowledge regarding the immunomodulatory effect of P. gingivalis and other oral pathobionts, members of the oral microbiome, that pave the way for systemic and chronic diseases, thereby showing a link between periodontitis and rheumatoid arthritis.
INTRODUCTION
The oral cavity is a unique habitat that allows for colonization of a wide variety of commensal microbial species, as it supplies a diversified nutrient influx as well as high humidity and variable oxygen concentrations. Furthermore, the existence of soft (gingiva) and non-shedding hard (teeth) tissues provides microorganisms with potential surfaces for adherence and subsequent interaction with various host cells. Colonization of the oral cavity in healthy individuals is based on balanced bacteria-host and interbacterial interactions. The continuous existence of dental plaques in gingival tissues and interactions of pathobionts with host cells cause inflammation, leading to periodontitis (PD). There is also increasing evidence suggesting an association of chronic PD with other types of systemic inflammatory diseases, such as atherosclerosis, infective endocarditis, diabetes, adverse pregnancy outcome, respiratory diseases, and rheumatoid arthritis (RA) (Li et al., 2000;Pihlstrom et al., 2005;Kim and Amar, 2006;Gaffen et al., 2014;Hajishengallis, 2015). This raises the question whether the periodontal microbiota is bystander or responsible for the initial step of chronic diseases. In the present review, the pathogenic mechanism of PD is introduced from the perspective of host bacteria/interbacterial interactions and host immune responses. Moreover, interactions of the pathobiont Porphyromonas gingivalis with host cells, as well as a possible link between the pathobiont and RA are discussed (Figure 1).
COLONIZATION OF WIDE RANGE OF ORAL SYMBIOTIC BACTERIA
Initial bacterial host colonization occurs at birth, with Staphylococcus epidermidis and Streptococcus species detected within hours after birth (Nelson-Filho et al., 2013). The oral pioneer species Streptococcus salivarius, which has been detected within 8 h after birth (Rotimi and Duerden, 1981), represents the majority of oral bacteria with up to 98% of examined subjects showing its possession at the first tooth eruption (Cortelli et al., 2008). Dental structures and alterations in nutrition allow for further colonization of other bacterial species. Finally, the matured oral microbiome consists of hundreds of bacterial species, contributing to a complex community (Aas et al., 2005;Dewhirst et al., 2010). Gram-positive facultative anaerobic bacteria, such as the Streptococcus and Actinomyces genera, are predominant in healthy individuals, in whom a proper equilibrium between the oral microbiome and host immune responses is maintained with no signs of inflammation observed in the periodontium (Li et al., 2004;Jiao et al., 2014).
DENTAL BIOFILM FORMATION
After tooth surfaces are cleaned, their immersion in the fluid environment of the oral cavity causes surface adsorption of a thin acquired pellicle, which is mainly composed of saliva glycoproteins, such as proline-rich proteins, α-amylase, statherin, mucins, and agglutinin (Heller et al., 2017). Coating of those solid surfaces with a pellicle leads to changes in surface charge and free energy, thus promoting bacterial adhesion (Weerkamp et al., 1988). Bacteria attach to tooth surfaces in a diverse manner, ranging from specific interactions between pellicle components and bacterial surface molecules to charge-mediated weak interactions (Nesbitt et al., 1992;Jenkinson, 1994;Oli et al., 2006;Kolenbrander et al., 2010). The predominant initial colonizers of teeth are Gram-positive facultative anaerobic cocci and rods, including Streptococcus and Actinomyces species. These initial colonizers provide a foundation for further development of dental biofilm. Streptococcus recognizes components in the pellicle, such as a specific interaction between a pilus protein of S. sanguinis and salivary α-amylase (Okahashi et al., 2011). Actinomyces binds to proline-rich proteins and statherin, a phosphate-containing protein (Li et al., 2001). Once the initial colonizers attach to the surface, a biofilm mass develops through continued growth and subsequent adsorption of other bacterial species via coaggregation.
The surface molecules of these early colonizers allow for coaggregation of Gram-negative bacteria possessing a lower level of adherence to the pellicle, including members of the genera Veillonella and Fusobacterium. Bacteria belonging to the genus Fusobacterium, such as Fusobacterium nucleatum, are able to coaggregate with both initial and late colonizers, thus are called bridge species and known to promote successful development of dental biofilm. For bridging neighboring bacteria, F. nucleatum utilizes surface molecules such as RadD, an arginine-inhibitable adhesin, and the fusobacterial apoptosis protein Fap2 (Kaplan et al., 2010). Habitat analysis of the oral microbiome has suggested that the genus Corynebacterium is strikingly specific for supragingival and subgingival plaque, with Corynebacterium matruchotii dominant among six species deposited in the Human Oral Microbiome Database (Dewhirst et al., 2010). Since this genus has been found in only trace amounts in saliva and other specimens from different anatomical sites, it is considered to have a specific role in dental biofilm formation. In fact, in a study that utilized combinational labeling and spectral imaging FISH (CLASI-FISH), Mark Welch et al. (2016) observed a complex microbial consortium, termed a hedgehog structure, mainly consisting of nine taxa arranged in an organized spatial framework, including Corynebacterium, Streptococcus, Porphyromonas, Haemophilus/Aggregatibacter, Neisseriaceae, Fusobacterium, Leptotrichia, Capnocytophaga, and Actinomyces. This plaque hedgehog represents a radially organized structure, of which the main framework is primarily composed of Corynebacterium with a multi-taxon filament-rich annulus and peripheral corncob structures. In the corncob structures, Corynebacterium filaments are surrounded primarily by Streptococcus, though Porphyromonas and Haemophilus/Aggregatibacter are also in close contact with streptococcal cells, while the filament-rich annulus is mainly composed of Fusobacterium, Leptotrichia, and Capnocytophaga. Thus, Corynebacterium organisms are considered to be bridgespecies in regard to biofilm formation. Bridge-species further coaggregate late colonizers that have effects on PD pathogenesis. FIGURE 1 | Brief overview of current concepts regarding onset of periodontitis and rheumatic arthritis, and deduced causal relationships between both diseases. After establishing a subgingival biofilm, oral pathobionts, including Porphyromonas gingivalis, induce periodontitis as a chronic disease, which is attributable to host-pathobiont interactions and deleterious host immune responses in periodontal tissues. Dysregulated citrullination caused by the pathobiont Porphyromonas gingivalis has been suspected to be a causative factor for onset of rheumatic arthritis. Parts of this figure were taken from freely available web resources: https://www.chirurgie-portal.de/innere-medizin/rheuma.html; www.rcsb.org/pdb/ngl/ngl.do?pdbid=5AK7 (Rosenstein and Hildebrand, 2015;Montgomery et al., 2016).
Each anatomical site in the oral cavity possesses a distinct composition of biofilm members that affects the local environment by intrinsic metabolism. Stratification and selective interaction between distinct bacterial species in dental biofilms are conducted by mutually antagonistic and cooperative interactions, which are attributable to environmental/metabolite gradients and quorum sensing (Brown and Whiteley, 2007;Ramsey et al., 2011;Zhu and Kreth, 2012;Wessel et al., 2014). Tooth-related plaque biofilm can be generally classified based on location into supragingival, formed above the gingival margin, and subgingival, formed below the gingival margin. When a pathological dental pocket becomes formed between a tooth surface and gingiva during the course of PD onset, an anaerobic condition is built up. Moreover, major sources of nutrition for subgingival plaque bacteria are provided via inflammatory periodontal tissues and gingival cervical fluid originating in blood, since permeation of saliva components is more or less limited. Consequentially, subgingival plaque in the pocket is dominated by anaerobic and motile bacteria as compared with supragingival plaque, as noted in detail below. Interactions of obligate anaerobic bacteria, such as P. gingivalis, with host cells have been implicated in the pathogenesis of PD.
PERIODONTITIS AS CHRONIC DISEASE
Establishment and maturation of periodontal dental biofilms are characterized by co-aggregation of opportunistic microorganisms caused by diverse factors, including poor oral hygiene. Infection of periodontal host cells as well as expression of virulence factors can provoke a local inflammatory response. Initial periodontal tissue inflammation is termed gingivitis and its pathology can be resolved by removal of dental biofilms (Figure 2). On the other hand, continuous existence of stable plaques, including accumulation of opportunistic bacterial species, supports long-lasting inflammation. A shift in the periodontal microbiome that accompanies an increase in Gram-negative anaerobic species is now accepted as an indicator of periodontal disease (Yano-Higuchi et al., 2000;Klein and Goncalves, 2003;Yang et al., 2004;Berezow and Darveau, 2011;He et al., 2015). Such a shift in composition affects host immune responses, and leads to dysbiosis between the oral microbiota and the host (Hajishengallis and Lamont, 2012). Therefore, following establishment of gingivitis, PD develops as a chronic inflammatory condition.
Periodontitis is characterized by irreversible and progressive degradation of periodontal tissues. With continuous inflammation, proliferation of epithelial cells connecting tooth surfaces and gingival tissues causes detachment of the cell layer, and subsequent formation of a pathogenic dental pocket between teeth and gingival tissues (Figure 2). The resulting micro-environment is characterized by reduced oxygen concentration or even anoxic areas. Mettraux et al. (1984) quantified oxygen concentrations in the periodontal pockets of patients with untreated PD and found them to range from 0.7 to 3.5%. On the other hand, the progression and severity of PD are strongly dependent on the quality and quantity of microorganisms harbored in periodontal plaque, as well as individual risk factors, e.g., age, genetic predisposition, systemic FIGURE 2 | Development of gingivitis and periodontitis. Following dental plaque accumulation, neutrophils dominate the host immune response, accompanied by progression of an early or stable gingivitis lesion, along with increased infiltration of macrophages and T cells. The gingivitis lesion develops into a periodontitis lesion, which is characterized by formation of a pathogenic periodontal pocket and destruction of periodontal tissues. Infiltrated lymphocytes are dominated by B and plasma cells.
Approximately 90% of microorganisms isolated from periodontal pockets are strictly anaerobic (Slots, 1977;Uematsu and Hoshino, 1992) and certain sets of bacteria have been frequently detected at elevated levels in periodontal lesions as compared with healthy tissues. Socransky et al. (1998) analyzed distribution of approximately 40 species in subgingival plaque using a DNA-DNA hybridization technique. Findings from DNA cluster analysis indicated that typical co-colonization of specific oral species, among which a cluster with the nomenclature "red complex" composed of the Gram-negative anaerobic species Tannerella forsythia, P. gingivalis, and Treponema denticola, is associated with increased pocket depth and bleeding upon clinical pocket probing, while the other four clusters examined were not shown to be associated with clinical parameters indicating periodontal disease. This pattern of oral colonization was also confirmed to exist in supragingival plaque samples (Haffajee et al., 2008). Aas et al. (2005) also identified the three bacterial species of the red complex as highly associated with disease status, which confirmed the colonization model by Socransky et al. (1998), and those findings were later supported by other studies (Dewhirst et al., 2010;Zarco et al., 2012;Wade, 2013;Duran-Pinedo and Frias-Lopez, 2015). Presently, the association of particular bacterial species within an intricate microbial community with periodontal health status is widely accepted.
Accumulation of opportunistic bacteria in periodontal plaques and their deleterious effects on host tissues via specific virulence factors provoke host immune responses. In response to microbial challenge, a massive cytokine response occurs, which triggers activation and recruitment of polymorphonuclear leukocytes (PMNs) in periodontal pockets (Figure 2). Their activation and oxidative burst contribute to periodontal homeostasis damage and subsequent degradation of periodontal tissues (Waddington et al., 2000;Kantarci et al., 2003;Graves, 2008). As compared to healthy individuals, the number of PMNs is increased in both periodontal pockets and the bloodstream of patients with chronic PD (Lakschevitz et al., 2013;Kolte et al., 2014), thus sustaining inflammation.
IMMUNE RESPONSE DURING PATHOLOGICAL PROCESS OF GINGIVITIS AND PERIODONTITIS
The concept of chronic PD as an immunological disease, which was proposed more than 40 years ago (Seymour et al., 1979), implies that a primary etiologic factor is bacterial infection that elicits a specific immune response by the host, triggering gingival inflammation and progression to chronic PD. Over the past 20 years, a number of studies have investigated and defined immune system components contributing to its pathogenesis. For example, it has been shown that both innate and adaptive immune systems are involved in PD onset, in which the roles of T-and B-lymphocytes are likely to be equally crucial (Gonzales, 2015). However, in regard to polarization of T-helper (Th) cell response, it remains elusive whether PD pathogenesis is driven by Th1, Th2, or Th17, or what role is adopted by regulatory T cells (Tregs) (Carvalho-Filho et al., 2016).
A plausible model for the pathological process of PD has been suggested, based on histopathological examinations of PD tissue sections. A pathological condition develops in sequential order and PD progression is subdivided into various stages, starting with initial lesion formation during the first 4 days after plaque accumulation. PMNs, i.e., neutrophils, dominate the host immune response, accompanied by the activation of complement component C3 via an alternative pathway. Subsequent production of the anaphylatoxins C3a and C5a leads to activation of mast cells with release of vasoactive substances that facilitate vascular permeability and development of edema. Moreover, mast cells release TNF-α, which upregulates the expression of adhesion molecules on endothelial cells, allowing for increased PMN infiltration (Ohlrich et al., 2009). After approximately 4-7 days of plaque accumulation, the initial lesion progresses to an early or stable gingivitis lesion with increasing infiltration of macrophages and lymphocytes (Figure 2). Lymphocytes are predominantly T cells with a CD4-positive to CD8-positive ratio as high as 2:1, an activated phenotype that is at this point negative for the IL-2 receptor CD25. Since absence of CD25 indicates that T cells have proliferated elsewhere, characteristics of the early lesion indicate a delayed-type hypersensitivity reaction (DTH). The pathology can be stable for a certain period with equilibrium maintained between the immune system and microbiota, and inflammation confined to the gingiva. In cases when plaque is mechanically removed, the lesion will reversibly recover at this stage. However, if plaque accumulation is allowed to continue, and attachment between the gingiva epithelium and tooth surface is progressively lost, the stable lesion advances to an established or progressive PD lesion, characterized by a predominant response of B cells and plasma cells, high levels of IL-1 and IL-6, and periodontal tissue destruction, including alveolar bone loss (Ohlrich et al., 2009). The final stage, an advanced lesion, is also characterized by a dominance of B and plasma cells, while inflammatory status is exacerbated. Fibroblasts stimulated by IL-1β, TNF-α, and prostaglandin E2 secrete matrix metalloproteases (MMPs) that not only advance the lesion, but also accelerate bone loss (Figure 2). Palliative treatment of PD and complete removal of bacterial plaque improves the course of periodontitis and leads to arrest of the irreversible destruction of periodontal tissues.
Recent studies have shown the critical role of Th17 in maintenance of oral tissues. Individuals with a genetic defect in Th17 differentiation are susceptible to oral fungal infections (Liu et al., 2011;Moutsopoulos et al., 2015) and excess Th17 response in gingiva promotes inflammation, leading to deterioration related to periodontitis pathology (Eskan et al., 2012;Moutsopoulos et al., 2014). Dutzan et al. (2017) showed that the population of gingival IL-17-producing CD4+ T cells increases with age. Interestingly, Th17 responses are not dependent on colonization of commensal bacteria, which is totally different from those in the mucosa of other anatomical sites. Moreover, accumulation of gingival Th17 cells is dependent on physiological mechanical damage caused by mastication and subsequent induction of IL-6-mediated signals. Thus, mastication, a normal function of the oral cavity, shapes gingival immune homeostasis.
Even though the above sequence of events leading to chronic PD is feasible, it does not explain why the pathophysiological condition of an early lesion remains stable or even resolves in some individuals, while it progresses to B cell-driven progressive stages in others. The transition from a T cell-to B cell-rich lesion has been suggested to correlate with the transition from a Th1to Th2-dominated response. Indeed, the pathological condition of chronic PD represents a pathology dominated by Th2 (Kinane and Bartold, 2007). Future research will be needed to investigate the involvement of Th17 and Treg cells, as well as the impact of environmental and genetic factors on susceptibility to chronic PD, and the underlying mechanisms for onset of PD-related systemic diseases.
Interaction of P. gingivalis with Epithelial Cells
The periodontal pathogen P. gingivalis infects gingival epithelial cells in the oral cavity, and its in vitro adherence to and internalization of epithelial cells have been well investigated. Dogan et al. (2000) observed invasion of primary epithelial cells by P. gingivalis, though it has been noted that the quantity of adherence and invasion of P. gingivalis are dependent on which human cell types and bacterial strains are investigated (Deshpande et al., 1998). For example, adherence rates of strain A7436 isolated from refractory PD to KB oral epithelial and human umbilical vein endothelial cells were found to be 1.1% and 0.5%, respectively (Deshpande et al., 1998). When comparing diverse P. gingivalis strains, the adherence rates vary, such as 0.5% for type strain W50 and 10.5% for strain 33277 to KB cells (Duncan et al., 1993;Dorn et al., 2000), while adhesion capacity also varies between cell types due to divergent interactions between P. gingivalis and the intrinsic cell surface. Furthermore, Saito et al. (2008) reported that strain ATCC 33 invaded Ca9-22 gingival epithelial cells as well as human aorta endothelial cells (HAEC) at higher rates as compared to strain W83, which might be explained by the highly fimbriated phenotype of strain ATCC33. Addition of Fusobacterium nucleatum strain TDC100 to that culture system increased the number of invaded bacteria for both strains (Saito et al., 2008). Pinnock et al. (2014) examined survival and bacterial release of P. gingivalis in a 3-D organotypic oral mucosal model, which was shown to mimic in vivo conditions. They found an increase of intracellular survival and bacterial release during incubation, as compared to monolayer experiments (Pinnock et al., 2014). These reports demonstrate that the interaction of epithelial cells with bacteria is dependent on a wide range of factors, while the high complexity of the oral cavity is not well represented by a monolayer cell culture system.
In addition to the composition of cells grown in various cell culture systems, host cell response itself is crucial for host signaling cascades. Toll-like receptors (TLRs) are pattern recognition receptors of epithelial cells as well as immune cells to recognize microbial molecules (Sugawara et al., 2006), which are involved in both intracellular and extracellular signaling pathways, culminating in activation of innate immune responses (Muzio and Mantovani, 2000). TLR2 and TLR4 respond to various bacterial factors, including lipoteichoic acid, lipopeptides, and lipopolysaccharide (LPS), and changes in their expression in gingival tissue during chronic PD have been demonstrated (Promsudthi et al., 2014). As an immunomodulation factor, P. gingivalis has effects on miRNA expression in host cells. miRNAs are single-stranded noncoding RNAs involved in regulatory processes, such as mRNA degradation and translational repression, and modulation of their expression results in dysregulation of proliferation and host cell immune responses (O'Connell et al., 2007;Aberdam et al., 2008). Gingival human oral keratinocytes incubated with heat-inactivated P. gingivalis exhibited upregulation of miRNA-105, which is complementary to TLR2 mRNA (Benakanakere et al., 2009). Moreover, infection of primary gingival epithelial cells with viable P. gingivalis organisms was shown to significantly alter the expression of 14 miRNAs involved in regulation of apoptosis and cytokine secretion (Moffatt and Lamont, 2011). Overall, the oral cavity represents a complex network of numerous bacterial and/or host interactions, which can be disturbed by P. gingivalis via its utilization of epithelial cells to support its own survival.
Influence of P. gingivalis on Stem Cells
The interactions of oral pathogens with differentiated cells, such as epithelial and bone cells, as well as stem cells and fibroblasts have been investigated. Stem cells can be isolated from various adult tissues, including bone marrow and gingiva (Barry and Murphy, 2004;Sonoyama et al., 2006;Tavian et al., 2006;Zhang et al., 2009;Jin et al., 2013). The source of human dental stem cells (hDSCs) is located in oral tissues and those exhibit the main characteristics of mesenchymal stem cells. hDSCs can be isolated from dental pulp and exfoliated deciduous teeth, as well as apical papilla, periodontal ligament, and dental follicle specimens (Gronthos et al., 2000;Miura et al., 2003;Seo et al., 2004;Jo et al., 2007;Sonoyama et al., 2008). The prominent presence of hDSCs in oral tissues provokes intriguing questions regarding whether P. gingivalis is able to interact with stem cells in tissues and, if so, what subsequent effects should be expected. The effect of the outer membrane component LPS of P. gingivalis on stem cells in regard to cell proliferation, viability, differentiating capacity, and immunomodulatory characteristics has been evaluated (Mysak et al., 2014;Chatzivasileiou et al., 2015;How et al., 2016), though the interaction of viable bacteria with stem cells remains poorly defined. Kriebel et al. (2013) demonstrated that stem cells and oral bacteria can be co-cultured under anaerobic conditions. In their system, oral microorganisms were less able to adhere to or internalize into human bone marrow stem cells (hBMSCs) in relation to gingival epithelial cells (Kriebel et al., 2013). Thereafter, additional studies revealed that human dental follicle stem cells (hDFSCs) elicit a reduced pro-inflammatory response following bacterial infection, as compared to differentiated cells (Biedermann et al., 2014). Furthermore, Kriebel et al. (2013) and Biedermann et al. (2014) showed that stem cell functions were influenced by oral bacteria in vitro, while Hieke et al. (2016) found that infection with viable bacteria induced distinct reactions by stem cells that were different from reactions to a single administration of LPS (Hieke et al., 2016). Thus, infected stem cells showed a reduced capacity for migration, though that finding is inconsistent with another study that demonstrated increased migration following stimulation with LPS (Chatzivasileiou et al., 2013). As compared with the analyses with LPS stimulation alone, data obtained in experiments with viable bacteria remain controversial. Additional studies are required to evaluate the reaction of stem cells to bacterial infection in human tissues.
Fibroblasts, the most predominant cell type in periodontal tissue, play important roles in tissue regeneration and PDassociated inflammation. They express TLRs, including TLR2 and 4 (Mahanonda et al., 2007), thus are considered to be involved in immune reactions to oral bacteria. The effects of P. gingivalis LPS on fibroblasts have been examined in regard to cell viability, immune response, and tissue repair, as well as the effects of cell signaling on those factors (Souza et al., 2010;Morandini et al., 2013;Sun et al., 2016). Furthermore, interactions of P. gingivalis with gingival fibroblasts have also been investigated, with the effects of immune modulating bacterial factors, the capsule, and gingipains, together with LPS, noted (O' Brien-Simpson et al., 2009;Brunner et al., 2010;Scheres and Crielaard, 2013). P. gingivalis can adhere to and invade human fibroblasts (Pathirana et al., 2008;Irshad et al., 2012;Zhang et al., 2014), and such bacterial infection induces secretion of pro-inflammatory mediators, including IL-6 and IL-8, via TLRdependent and -independent pathways (Liu et al., 2014;Palm et al., 2015). In addition, P. gingivalis infection induces caspaseindependent apoptosis as well as regulation of the inflammasome activation (Desta and Graves, 2007;Kuo et al., 2016). Such diverse responses of the infected fibroblasts affect biological functions and differentiation of other cell types, implicating their regulatory role in progression of periodontitis and consequent chronic inflammation (Zhang et al., 2014;Tzach-Nahman et al., 2017).
Influence of P. gingivalis Infection on Immune System
The dental pocket is constantly exposed to oral microorganisms and innate immunity components permanently interact with bacteria. During inflammation, various cell types, including neutrophils and macrophages, migrate to the site of infection, with the former the first line of defense against invading microorganisms. In addition to phagocytosis, exocytosis of granules, release of reactive oxygen species (ROS), and induction of neutrophil extracellular traps (NETs) serve as anti-microbial factors (Segal, 2005). Interestingly, P. gingivalis is able to modify neutrophil activity in a manner that promotes neutrophil survival. It has also been shown that prolongation of neutrophil survival caused by P. gingivalis results in accumulation of neutrophils in adult patients with PD (Gamonal et al., 2003), while later it was reported that neutrophils isolated from the blood of chronic PD patients were highly reactive to stimulation by P. gingivalis LPS, with increased release of the proinflammatory cytokine IL-8 (Restaino et al., 2007). Furthermore, neutrophils from patients with localized aggressive PD produce higher levels of ROS against P. gingivalis, as compared with those from healthy donors, and release of ROS results in secretion of pro-inflammatory cytokines, which can be advantageous to counteract the increased burden of P. gingivalis (Damgaard et al., 2016). As a strategy for escape from the host immune system, it has been shown that P. gingivalis has an ability to invade epithelial cells. Moreover, triggering of an immune response can be beneficial for colonizing deeper tissues of the host (Li et al., 2008). The working group of Hajishengallis noted that direct interaction of P. gingivalis with PMNs resulted in modulation of the neutrophil killing function via MyD88, an adaptor protein of TLR2 and TLR4 receptors (Hajishengallis et al., 2008;Maekawa et al., 2014).
Macrophage functions are also modulated by P. gingivalis. Macrophage migration-inhibitory factor (MIF) is involved in killing of bacteria by recruitment and activation of macrophages. Li et al. (2013) demonstrated that P. gingivalis is able to reduce the expression of MIF mRNA in deep-pocket tissues (Li et al., 2013). Furthermore, in vitro experiments demonstrated that treatment of macrophages with the lysine-specific gingipain Kgp impaired its migration to apoptotic neutrophils and reduced the anti-inflammatory effect of apoptotic cells, resulting a rapid inflammatory response, leading the authors to suggest that P. gingivalis promotes chronic inflammation by a gingipainmediated defect in apoptotic cell clearance and resolution of tissue restoration (Castro et al., 2017).
Following infection of epithelial cells and fibroblasts, P. gingivalis can also indirectly modulate immune cell functions.
In vitro experiments showed that P. gingivalis infection of oral epithelial cells inhibits neutrophil migration (Madianos et al., 1997). Also, exposure of live P. gingivalis strain W83 to fibroblasts from periodontal ligaments in vitro induced a reduction in expression of macrophage colony-stimulating factor (Scheres et al., 2009). Those findings demonstrated that immune cell functions are also indirectly influenced by P. gingivalis infection, with bacterial secreted factors potentially a part of this complex system.
Overviews regarding the interactions of various other oral pathogenic bacteria with eukaryotic animal cells and cells from human sources have been presented (Feng and Weinberg, 2006;Kebschull and Papapanou, 2011). In general, P. gingivalis modifies antimicrobial host response and causes an imbalance in immune responses, leading to prolongation of inflammatory status and continuous damage against periodontal tissues. General bacterial burden, variability of colonizing species, oral hygiene, and other individual risk factors have further impact on host immune response and the subsequent outcome of periodontal disease.
POTENTIAL ASSOCIATION OF PD WITH SYSTEMIC DISEASE
Specific oral pathobionts influence related systemic diseases, such as atherosclerosis, infective endocarditis, diabetes, adverse pregnancy outcome, respiratory diseases, and RA, with various hypotheses based on epidemiological and experimental data presented. First, these systemic diseases and PD share common confounding factors, including lifestyle and/or genetic predisposition, indicating the importance of common host backgrounds. In addition, oral dysbiosis may cause autoimmunity via immune response against oral microbiota and subsequent molecular mimicry/autoantibody generation, as reported in cases of RA, in which T cell subsets are shaped toward pro-inflammatory cytokine-producing cells that drive autoimmunity development (Hooper et al., 2012). Since improved prognosis of patients with systemic diseases, such as coronary heart disease and RA, has been demonstrated following treatment for PD, a periodontal immune response and/or plaque bacteria provide a link for the mutual relationship between PD and those diseases (Montebugnoli et al., 2005;Al-Katma et al., 2007;Ortiz et al., 2009). In the following sections, possible relevance to the etiologies of PD and RA is discussed.
ASSOCIATION OF PD WITH RA
A reciprocal relationship between PD and RA has been reported (de Pablo et al., 2009;Koziel et al., 2014), and is also implied by the fact that early Assyrians 2,500 years prior treated rheumatism by tooth extraction. Even though no reliable records regarding the effect of that treatment exist, the concept of a mutual relationship of chronic joint disease with PD has been noted. However, the exact molecular and cellular mechanisms linking PD and RA are only slowly being unraveled. Based on a review of recent literature, reports supporting the various scenarios mentioned above are introduced here.
Common Predisposing Factors of PD and RA
The first factors linking RA and PD include lifestyle and genetic predisposition as common confounders. Indeed, smoking and aging have been identified as risk factors for both of those diseases (Eriksson et al., 2016). As for a common genetic predisposition, data presented thus far remain inconclusive. While the strongest association for RA has been found among alleles of the HLA-DRB1 * 04 and * 01 haplotype groups, which carry a shared epitope, there is no significant association between HLA class II antigens and PD (Stein et al., 2008). Findings of an epidemiological study indicated that the hypomethylated status of a single CpG in the IL-6 promoter region plays a role in pathogenesis of RA and PD (Ishida et al., 2012). In an investigation of cases with aggressive PD, a large candidate-gene association study found no definite evidence for a genetic link of PD with RA, while their results suggested that IRF5 and PRDM1 are shared susceptibility factors, both of which are involved in interferon-β signaling, and also associated with systemic lupus erythematosus and inflammatory bowel disease (Schaefer et al., 2014).
Oral Bacteria-Mediated Autoimmunity Links PD and RA
The second scenario, which has gained enormous attention, is oral dysbiosis as a prerequisite for pathogenic autoimmunity that leads to the onset of RA. Notably, the presence of P. gingivalis in PD lesions has been indicated as a link between both chronic inflammatory diseases (Rosenstein et al., 2004;Mikuls et al., 2009Mikuls et al., , 2014Bartold et al., 2010). It was initially speculated that bacterial cell and/or toxin/metabolic byproducts can enter the systemic circulation from a clinically asymptomatic localized lesion containing pathogenic bacteria and spread to discrete anatomical sites, thereby initiating disease (Kumar, 2017). However, neither oral live bacteria nor their toxin/metabolic byproducts have been detected in the focus of the rheumatoid joint, though P. gingivalis DNA has been noted in synovial fluid (Reichert et al., 2013). On the other hand, immunological sequelae associated with oral pathobiont infection have gained considerable attention. These are formations of antibodies against citrullinated peptide antigens (ACPA) that precede the development of RA (Schellekens et al., 1998) and a molecular mimicry of bacterial proteins against host proteins, both of which raise autoantibodies.
Development of RA is attributable to production of ACPAs, the presence of which serves as a potent diagnostic marker for RA (Schellekens et al., 2000). Formation of ACPAs prior to RA onset has been the focus of intense research for the past two decades. While citrullination itself is a physiological process, the formation of antibodies against citrullinated peptide antigens is highly specific for RA (Schellekens et al., 1998). However, this specificity remains enigmatic, and whether these antibodies play an active role in the disease process or simply reflect an ongoing immune response has not been fully elucidated. On the other hand, the finding that citrullinated proteins accumulate in the joint provides a basis for interpreting the pathological condition in patients with RA. The pathology can be characterized as dysregulated citrullination, followed by release of neo-epitopes that breach immunological tolerance and trigger autoantibody formation.
Biological Significance of Citrullination in RA Onset
Citrullination is mediated by peptidylarginine deaminase (PAD) enzymes. In humans, there are five different isotypes, PAD1-4, and PAD6, which exhibit a roughly 50-55% sequence similarity, and show distinct distributions in cells and tissues (Vossenaar et al., 2003;Zhang et al., 2004;Bicker and Thompson, 2013). Citrullination is the post-translational hydrolytic conversion of peptidyl-arginine into peptidyl-citrulline via deamination, a process that renders a reduction in the net positive charges of a given protein, thereby leading to increased hydrophobicity, protein unfolding, and altered intra-and inter-molecular interactions . Physiologically, citrullination impacts gene regulation, terminal differentiation, and apoptosis, thus dysregulated citrullination is associated with numerous disorders, including autoimmune and neurodegenerative diseases (Witalison et al., 2015). PAD activity under a physiological condition is regulated by calcium concentration and a reducing environment (Arita et al., 2004). While full PAD activity in vitro requires millimolar amounts of calcium ion, intracellular nanomolar concentrations are likely to limit aberrant citrullination. Likewise, the oxidizing nature of the extracellular environment may provide protection from aberrant extracellular citrullination by PADs that may leak from activated or dying cells . Of note, the citrullinome in RA is comprised of cytoplasmic and extracellular proteins, suggesting that both compartments are prone to dysregulated PAD activity. The major cell type for intracellular protein citrullination in the RA joint is represented by neutrophils, which are also the major source for soluble PAD2 and PAD4 released into synovial fluid (Romero et al., 2013;Spengler et al., 2015;Konig and Andrade, 2016). An important question then is what triggers hyper-citrullination in neutrophils? Among the various stimuli that trigger neutrophil activation and death, pore forming and membranolytic pathways that involve perforin and the complement membrane attack complex have been shown to induce intracellular calcium fluxes, a transient rise in intracellular calcium concentration and subsequent intracellular hyper-citrullination (Romero et al., 2013). Interestingly, the ability to provoke calcium influx-induced hyper-citrullination in neutrophils is definitely possessed by pore-forming immune mechanisms of the host, though that is also shared by bacterial calcium ionophores and pore-forming toxins . Due to limited conditions required for enzymatic activity, robust extracellular citrullination can only be maintained by a constant release of soluble PADs from dying cells and the presence of autoantibodies against PADs. Neutrophil NETosis and necrosis, as well as autophagy also contribute to extracellular hyper-citrullination via release of transiently active PAD enzymes (Spengler et al., 2015). Recently, it was shown that the presence of PAD3/PAD4 cross-reactive autoantibodies, which cause a decrease in calcium concentration required for catalysis, is associated with most erosive disease courses Navarro-Millán et al., 2016). As a consequence of hypercitrullination, generation of neo-epitopes induced by changes in protein antigenicity might raise autoantibodies. Autoantibodies existing in synovial fluid opsonize target antigens and trigger a complement cascade, thus maintaining the vicious cycles of auto-inflammation and hyper-citrullination.
Potential Involvement of Oral Pathobionts in ACP Generation
Bacterial toxins and enzymes have been reported to induce citrullination of host proteins and release of PADs. The poreforming leukotoxin produced by Gram-negative Aggregatibacter actinomycetemcomitans kills human neutrophils and induces hyper-citrullination in neutrophils, thus contributing to dysregulated citrullination . Likewise, as a putative link between PD and RA, the pathobiont P. gingivalis expresses a prokaryotic PAD (PPAD), which is thus far unique among microorganisms. In contrast to human PADs, PPAD does not require calcium ion for its activity of citrullination of C-terminal arginine residues (Rodriguez et al., 2009;McGraw et al., 1999). Furthermore, while PADs are unable to catalyze free L-citrulline, PPAD can citrullinate both free and peptidebound arginine (Abdullah et al., 2013). A cellular PPAD with an approximate size of 75-85 kDa and a secreted PPAD sized 47 kDa have been described (Konig et al., 2014). PPAD is extracellularly secreted or located in the outer membrane of P. gingivalis together with virulence factors, arginine-specific gingipains RgpA and RgpB, which cleave the carboxyl group of arginine residue in their own target proteins. The cleaved products exposing arginine residue at the carboxyl terminus are prone to rapid citrullination by PPAD Maresz et al., 2013).
Prokaryotic PAD also citrullinates human proteins, such as fibrin, vimentin, epidermal growth factor (EGF), fibrinogen, and α-enolase (McGraw et al., 1999;Mangat et al., 2010;Wegner et al., 2010;Montgomery et al., 2016). Therefore, host proteins modified by PPAD may function as antigens that induce generation of ACPAs (Vossenaar and van Venrooij, 2004;Moscarello et al., 2007;Nesse et al., 2012). As a consequence, citrullination of EGF results in defects in cell-cycle modulation. Since EGF activates cell proliferation, migration, repair, and regeneration of gingival epithelial cells, its citrullination hampers regeneration of damaged tissue. As a result, modification of human proteins mediated by PPAD likely induces a biological shift in the local environment (Pyrc et al., 2013). Also, PPAD may be important for interactions of P. gingivalis with eukaryotic cells, including neutrophils, macrophages, and epithelial cells (Quirke et al., 2014). Additionally, it has been shown that monocytes and macrophages exposed to viable P. gingivalis had increased extracellular citrullination levels, while the endogenous PAD level is not affected (Marchant et al., 2013). Bielecka et al. (2014) also showed that PPAD citrullinates the C-terminal arginine residue of the chemoattractant complement factor C5a, resulting in decreased chemotaxis of human neutrophils and release of pro-inflammatory cytokines from immune cells (Bielecka et al., 2014). However, the extent to which PPAD enzymatic activity affects the functions of oral host cells, including immune cells, remains elusive. As a consequence of citrullination of bacterial proteins, auto-citrullination of PPAD and antibodies against gingipains can be detected in both PD patients and healthy individuals. On the other hand, antibodies against citrullinated PPAD are specific to RA, suggesting that citrullinated PPAD is a member of early induced proteins that contribute to ACPA generation.
In the context of mechanisms that induce antibody crossreactivity, P. gingivalis can citrullinate its own α-enolase, which shares an 82% sequence homology with human α-enolase, a finding that provides a convincing argument in terms of epitope spreading and molecular mimicry (Lundberg et al., 2008). Therefore, PD patients colonized with P. gingivalis might produce antibodies against citrullinated bacterial proteins homologous to human proteins and this molecular mimicry of the antigen may elicit an immune response to human tissues. Similarly, PPAD citrullination of host proteins allows for neo-epitope formation that triggers autoimmune responses.
In Vivo Evaluations of PPAD Functions Connecting PD and RA
Due to diverse factors that influence PD and RA, animal models have been utilized to detect a causal relationship between those diseases. Kinloch et al. (2011) demonstrated that citrullinated enolase induces experimental arthritis, and showed that enolase citrullinated by human PAD or P. gingivalis PPAD induces autoantibody production in DR4-IE-transgenic mice (Kinloch et al., 2011). Utilizing a collagen-induced arthritis (CIA) mouse model, Maresz et al. (2013) demonstrated that the ability of P. gingivalis strain W83 to augment CIA was dependent on PPAD activity. Moreover, infection with the wild-type strain, but not its PPAD-null mutant, induced elevated levels of autoantibodies to collagen type II . These in vivo results emphasize the importance of PPAD as a potential virulence factor of P. gingivalis and a key component connecting PD and RA.
Relevance of ACPAs in Pathogenesis of RA
Recent findings have demonstrated that osteoclasts express PAD enzymes at all stages of their development, while detectable citrullinated proteins have also been detected on their cell surface (Harre et al., 2012;Krishnamurthy et al., 2016). ACPAs can therefore bind to osteoclast precursors and induce expression of IL-8, which acts as an autocrine growth factor and drives differentiation into mature bone-resorbing osteoclasts (Kopesky et al., 2014). While those findings account for boneresorption mechanisms to some extent, they are clearly not sufficient to induce chronic synovial inflammation (Catrina et al., 2017). Indeed, animal models have shown that a single ACPA administration does not induce arthritis. However, if mild synovial inflammation already exists, severe joint disease reminiscent of human RA has been found to develop in the presence of ACPAs (Sohn et al., 2015). These findings suggest that ACPAs are important but not alone sufficient for inducing chronic inflammation, though they apparently play an active role in the disease processes leading to RA.
SUMMARY
Oral pathobionts are constituents of a complex ecosystem and provide a mutually trophic metabolism together in a state of equilibrium with host factors and the immune system. Dental biofilms in each anatomical site are characterized by a distinct composition of bacterial species. A continuous existence of periodontal biofilm exacerbates host inflammatory response and drives a shift in the periodontal microbiome, which leads to onset of PD. As a red complex member, P. gingivalis affects the functions of various host cells and manipulates antimicrobial host response, thereby posing dysbiosis and prolongation of an inflammatory status along with periodontal tissue damage. Furthermore, recent findings indicate that particular periodontal pathobionts, such as P. gingivalis, have an exacerbating role for generation of ACPAs, which has been confirmed by epidemiological data showing an interrelationship between PD and RA. ACPAs activate immune response, including complement activation, and thus facilitate local hypercitrullination, while they also activate osteoclasts to absorb bone and provide the basis for RA development. Imbalances in the oral microbiome shape the pro-inflammatory axis of the cytokine network, which offers a broad framework to comprehend the pathogeneses of autoimmunity and chronic inflammatory diseases (Hooper et al., 2012). However, the exact mechanisms by which oral pathobionts have effects on both the cytokine network and autoimmunity remain obscure. Further research is needed to evaluate the involvement of genetic/environmental susceptibility factors, citrullination, and cytokine networks in the reciprocal relationship of PD and RA. As shown in this review, it is currently not possible to finally conclude if oral pathobionts like P. gingivalis can simply be classified as bystander microbiota or if they are disease initiators, with the consequence that perhaps early prophylactic treatment could prevent systemic and chronic diseases, such as atherosclerosis, infective endocarditis, diabetes, adverse pregnancy outcome, respiratory diseases, and RA.
AUTHOR CONTRIBUTIONS
All authors conceived the concept for this review article and participated in writing the manuscript. Each equally contributed to reading, editing, and reviewing the manuscript.
FUNDING
Research performed in the laboratory of BM-H was supported by intramural funding (FORUN 889023). The work of MN was financed by JSPS KAKENHI (grant number 15KK0306). | 9,896.6 | 2018-01-30T00:00:00.000 | [
"Medicine",
"Biology"
] |
The Alternaria alternata StuA transcription factor interacting with the pH-responsive regulator PacC for the biosynthesis of host-selective toxin and virulence in citrus
ABSTRACT The tangerine pathotype of Alternaria alternata produces a host-selective toxin termed Alternaria citri toxin (ACT). The molecular mechanisms underlying the global regulation and biosynthesis of ACT remain unknown. In the present study, the function of an APSES transcription factor StuA was investigated. StuA was shown to be required for ACT biosynthesis and fungal virulence. StuA was found, for the first time, to physically interact with a pH-responsive transcription regulator PacC using yeast two-hybrid, bimolecular fluorescence complementation, and GST pull-down assays. Functional analyses revealed that StuA and PacC regulate the expression of genes involved in toxin biosynthesis and virulence. Mutation of stuA via targeted gene deletion or silencing pacC yielded fungal strains that decreased the expression of seven toxin biosynthetic genes (ACCT) and toxin production. EMSA analyses revealed that PacC could bind to the promoters of ACTT6 encoding an enoyl-CoA hydratase and ACTTR encoding an ACT pathway-specific transcription factor. Site-directed mutagenesis of five potential protein kinase A (PKA) phosphorylation sites in StuA revealed that none of the sites was involved in ACT production, indicating that the function of StuA in the regulation of ACT gene expression is not dependent on phosphorylation. Overall, our results confirmed that PacC is one of the key regulators interacting with StuA for the biosynthesis of ACT. Environmental pH may play a decisive role during A. alternata pathogenesis. Our results also revealed a previously unrecognized (StuA-PacC)→ACTTR module for the biosynthesis of ACT in A. alternata. IMPORTANCE In this study, we used Alternaria alternata as a biological model to report the role of StuA in phytopathogenic fungi. Our findings indicated that StuA is required for Alternaria citri toxin (ACT) biosynthesis and fungal virulence. In addition, StuA physically interacts with PacC. Disruption of stuA or pacC led to decreased expression of seven toxin biosynthetic genes (ACCT) and toxin production. PacC could recognize and bind to the promoter regions of ACTT6 and ACTTR. Our results revealed a previously unrecognized (StuA-PacC)→ACTTR module for the biosynthesis of ACT in A. alternata, which also provides a framework for the study of StuA in other fungi.
tangerine pathotype produces and secretes an HST named Alternaria citri toxin (ACT).ACT is extremely toxic to susceptible citrus cultivars.It has been demonstrated that ACT at a concentration as low as 2 × 10 −8 M can kill citrus cells of susceptible cultivars.In contrast, resistant cultivars can tolerate ACT up to 2 × 10 −4 M (5-7).ACT causes rapid electrolyte leakage and cell death in susceptible citrus cultivars (8,9).The ability to produce ACT is absolutely required for A. alternata pathogenesis because fungal mutants carrying the deletion in genes required for the biosynthesis of ACT fail to cause any symptoms (10).
ACT is a low-molecular-weight secondary metabolite containing three moieties, 9,10-epoxy-8-hydroxy-9-methyl-decatrienoic acid (EDA), a valine, and a polyketide (4).The ACT gene cluster consisting of 25 genes is located on a small, condition ally dispensable chromosome (11)(12)(13).Of them, nine genes have been functionally characterized to be required for the biosynthesis of ACT.Five genes, ACTT1 encoding an acyl-CoA ligase, ACTT2 encoding a hydrolase, both ACTT3 and ACTT6 encoding an enoyl-CoA hydratase and ACTT5 encoding an acyl-CoA synthetase are involved in the biosynthesis of the EDA moiety (14)(15)(16).Two genes, ACTTS2 encoding an enoyl-reductase and ACTTS3 encoding a polyketide synthase, are responsible for the elongation of the polyketide chain (17).ACTS4 encoding a nonribosomal peptide synthetase has recently been demonstrated to be involved in the biosynthesis of ACT in the tangerine pathotype (18).A Zn2Cys6 transcription factor encoded by ACTTR is a pathway-specific regulator required for the regulation of the genes in the cluster (18).
The biosynthesis of fungal secondary metabolites is regulated or impacted by a wide variety of proteins, signals, and environmental factors (19,20).Studies in the tangerine pathotype have identified components in the mitogen-activated protein (MAP) kinase pathways, peroxisome complex activities, and autophagy-related processes that are required for ACT biosynthesis and virulence (21)(22)(23)(24).A basal transcription factor II H subunit (tfb5) and a GATA transcription factor (AreA) have also been shown to be required for ACT biosynthesis, sporulation, and virulence in the tangerine pathotype (25,26).However, how those proteins and components are coordinated to form a global network regulating the biosynthesis of ACT remains elusive.
Transcription factors belonging to the family of APSES (Asm1p, Phd1p, Sok2p, Efg1p, and StuA) contain a basic helix-loop-helix (bHLH) DNA binding domain capable of binding to the specific stress response element (STRE) with the consensus sequence (A/T)CGCG(T/A)N(A/C) (27,28).APSES often forming homo-or heterodimers (29,30) play diverse biological functions including development, stress responses, and biosynthesis of secondary metabolites in fungi (31,32).The function of StuA homologs has been studied in some pathogenic fungi with a hemibiotrophic lifestyle.A MoStu1 homolog of Magnaporthe oryzae contributes to conidiation, mycelial growth, and appressoriummediated infection in rice (33).StuA of Aspergillus fumigatus is required for the biosyn thesis of secondary metabolites (34,35).StuA of Ustilago maydis plays key roles in dimorphism, virulence, and sporulation (36,37).StuA of F. graminearum is required for life cycle transitions and virulence (28).The biological function of StuA remains unknown in fungi with a necrotrophic lifestyle.In the present study, we found that the A. alternata StuA physically interacted with a pH-responsive transcription factor (PacC) for the biosynthesis of ACT and fungal virulence.
Identification of stuA and construction of stuA mutants in A. alternata
The stuA homolog was identified at the AALT_10904 locus from the A. alternata genome database (GCA 001572055.1)using the Aspergillus nidulans StuA amino acid sequences as a query.stuA has an open reading frame of 1,893 bp encoding a putative polypeptide of 631 amino acids.StuA possesses an APSES-type DNA binding domain similar to the other StuA transcription factors of pathogenic fungi (Fig. S1a).Phylogenetic analysis revealed that StuA is the most related to the Exserohilum turcica EtStuA (Accession no.XP_008027222.1),sharing 82% sequence similarity (Fig. S1).stuA was deleted in the wild-type Z7 strain using homologous recombination, and the deletion transformants were identified by PCR and Southern blot analysis (Fig. S2a through S2c).A complemen tation strain designated ΔstuA-C was generated by introducing the wild-type stuA gene with its native promoter into protoplasts prepared from ΔstuA.
stuA is required for filamentous growth and virulence
After being grown on potato dextrose agar (PDA) for 3 days, both wild type and ΔstuA-C produced grayish colonies.In contrast, ΔstuA produced whitish colonies and reduced growth by 15% compared to the wild type (Fig. 1A and B).Microscopic examination revealed that ΔstuA hyphae were stubby and swollen with frequent branching, but hyphae of ΔstuA-C and Z7 grew relatively straight with few branching (Fig. 1A).ΔstuA produced significantly fewer conidia than Z7 and ΔstuA-C, reducing by 94% after a 7-day incubation (Fig. 1A and C).Infection assays by placing mycelial plugs on detached leaves of a susceptible citrus cultivar Hongjv revealed that necrosis lesions induced by Z7 or ΔstuA-C developed at the inoculation point and spread rapidly 3 days post-inoculation (dpi).No necrosis was observed on leaves inoculated with ΔstuA 3 dpi (Fig. 1D).
stuA is required for toxin biosynthesis
A leaf necrotic bioassay conducted on detached leaves (Citrus reticulata Blanco, cv.Hongjv) revealed that samples purified from culture filtrates of ΔstuA-C and Z7 resul ted in dark brown necrotic lesions 3 days after treatment (Fig. 2A).However, samples prepared from culture filtrates of ΔstuA failed to induce visible lesions.HPLC analyses of samples purified from culture filtrates of ΔstuA-C and Z7 identified a peak with a retention time of 32 min, which was completely absent in samples prepared from culture filtrates of ΔstuA (Fig. 2B).Analyses of RNA samples by quantitative RT-PCR revealed Research Article Microbiology Spectrum that the transcript levels of ACTT2, ACTT3, ACTT5, ACTT6, ACTTS2, ACTTS3, and ACTTR were significantly downregulated in ΔstuA compared with the Z7 strain (Fig. 2C).The expression levels in ΔstuA-C were not significantly different from the Z7 strain.
StuA interacts directly with the pH-responsive transcription factor PacC
Searching yeast interactome database (https://thebiogrid.org/) with the yeast SOK2 (a homolog of StuA) as a query identified 248 proteins that potentially interacted with StuA (Table S3).Of those proteins, the yeast Rim101, orthologous to PacC in filamentous fungi, was found to likely interact with SOK2.The A. alternata PacC coding sequence was used to construct a vector for yeast two hybridization (Y2H) assays.Mixing Y2H Gold strains carrying pGADT7-StuA (bait) and pGBKT7-PacC (prey) allowed the yeasts to grow on a selective medium without His, Leu, Trp, and Ade (Fig. 3A).Similar results were obtained in reverse by pairing pGADT7-PacC (bait) and pGBKT7-StuA (prey).To further confirm the interaction between StuA and PacC, recombinant PacC ZF -MBP, glutathione StuA APSESglutathione S-transferase (GST), and GST (as a negative control) were independently expressed in an Escherichia coli BL21 strain.Pull-down assays revealed that PacC ZF -MBP could be detected in the StuA APSES -GST fraction but not in the GST fraction (Fig. 3B).BiFC assays also revealed that co-transforming StuA-NYFP and PacC-CYFP constructs into the protoplasts of wild type resulted in yellow fluorescent spots resembling that of DAPI staining (Fig. 3C).
pacC is also required for ACT biosynthesis and virulence in A. alternata
As with all fungal PacC homologs, PacC contains three highly conserved Cys2His2 zinc finger DNA binding domains in A. alternata (Fig. S4a).However, phylogenetic analyses revealed that PacC was distant from other fungal PacC homologs (Fig. S4b).
Several attempts to disrupt pacC using homologous recombination failed to obtain successful mutants after screening more than 100 transformants by PCR using three different sets of primers.Thus, RNA interference (RNAi) was performed to evaluate the function of pacC in A. alternata (Fig. S4c).Transforming the pSilent-1-pacC construct into protoplasts prepared from the Z7 strain identified three transformants carrying pSilent-1-pacC.Quantitative RT-PCR analyses revealed that all transformants reduced the expression levels of pacC to varying degrees.One transformant, designated pacC-s-3, was found to reduce the expression of pacC by 75% compared to the Z7 strain (Fig. S4d).The growth of pacC-s-3 was reduced particularly under acidic and alkaline conditions (Fig. 4A).Virulence assays on Hongjv leaves revealed that the pacC-s-3 strain induced much smaller necrotic lesions than the wild type (Fig. 4B).Noticeably, some leaf spots inoculated with pacC-s-3 showed no necrotic lesions.HPLC analysis revealed that pacC-s-3 decreased ACT production by 67% compared to the wild type (Fig. 4C).Quantitative RT-PCR analyses revealed that the expression of ACTT2, ACTT3, ACTT5, ACTT6, ACTTS2, ACTTS3, and ACTTR was significantly downregulated in pacC-s-3 compared to the Z7 strain (Fig. 4D).
PacC binds to the promoters of ACTT6 and ACTTR
Analysis of sequences upstream of the putative ATG translational start codon of all 25 ACT genes identified a 5′-GCCARG-3′ motif, a putative binding site of PacC, only in the promoter regions of ACTT6 and ACTTR (Fig. 5A).Electrophoretic mobility shift assay (EMSA) using the purified PacC ZF -MBP protein (a polypeptide containing a PacC binding domain tagged with MBP) was performed to determine whether or not PacC containing three Cys2His2 zinc finger DNA binding domains could recognize and bind to 5′-GCCARG-3′.Mixing PacC ZF -MBP with the 5′ DNA fragments of ACTT6 and ACTTR resulted in a DNA mobility shift (Fig. 5B).Adding an excess of the wild-type unlabeled DNA fragment abolished binding as no DNA mobility shift was detected.However, adding an unlabeled but mutated DNA fragment containing 5′-GAACCG-3′ resulted in a mobility shift.
DISCUSSION
APSES transcription factors play important roles in a wide range of biological processes in saprophytic, human pathogenic, and phytopathogenic fungi (31,35,38,39).StuA is the key member of the APSES family.Its function as a transcription factor has been studied in many filamentous fungi; however, its regulatory action on PacC-mediated signaling has never been demonstrated.In the present study, we demonstrated, for the first time, that StuA could interact with the pH-responsive transcription factor PacC (Fig. 6).Furthermore, we provided evidence that PacC affects ACT production and virulence of A. alternata by directly regulating the expressions of genes required for ACT biosynthesis.Experiments were also conducted to demonstrate that PacC can physically bind to the promoter regions of ACTT6 encoding an enoyl-CoA hydratase and ACTTR encoding an ACT pathway-specific transcription factor.These findings uncover an important protein-interaction module (StuA-PacC)→ACTTR→ACTT that controls ACT production and virulence in the tangerine pathotype of A. alternata.
StuA is required for A. alternata growth.Growth reduction has been reported in stuA mutants of F. graminearum and Arthrobotrys oligospora (28,32).Moreover, deleting stuA resulted in a drastic decrease in conidia production.StuA is also required for conidiation in Magnaporthe grisea, F. graminearum, Stagonospora nodorum, and A. nidulans (27,28,40,41).The results confirm the important role of StuA in hyphal differentiation and the development of fungi.Although growth and conidiation are impaired in ΔstuA, the major deficiency that impacts the ΔstuA virulence is its inability to produce ACT and infect citrus plants.ACT is a secondary metabolite, which is released during spore germination and host colonization, and is absolutely required for the virulence of A. alternata (4,42).StuA impacts the biosynthesis of ACT by transcriptionally regulating the expression of ACT biosynthetic genes located on the ACT gene cluster.This regulatory function was demonstrated to be mediated via PacC in this current study.Thus, we concluded that StuA is one of the key virulence determinants in the tangerine pathotype of A. alternata.Similarly, StuA has been shown to be required for virulence in S. nodorum (41), Leptosphaeria maculans (43), U. maydis (37), and A. oligospora (32).However, deleting stuA has no impact on the virulence of F. oxysporum (44).Those studies indicate that StuA has different pathological functions in different fungi.
The biosynthesis of fungal toxins is controlled by genes commonly found in clusters scattered throughout the genome (45).Global transcription factors responding to environmental stimuli and developmental signals are essential regulators for the biosynthesis of fungal toxins.StuA is one of the key regulators for the biosynthesis of fungal secondary metabolites.For example, the expression of genes involved in the biosynthesis of epipolythiodioxopiperazine (ETP) gliotoxins is StuA-dependent in A. fumigatus (34).In Fusarium verticillioides, StuA is required for the production of fumonisin and the expression of genes involved in the biosynthesis of fumonisin and fusarin C (46).StuA is involved in the production of a mycotoxin (Alternariol) in S. nodorum (41).Despite StuA has been shown to be essential for producing a number of secondary metabolites in filamentous fungi, the underlying mechanism remains elusive.As demonstrated in the current study, StuA is required for the biosynthesis of ACT and virulence.Five potential PKA phosphorylation sites were found in StuA; however, none of the sites impacts ACT production after site-directed mutagenesis.The results indicate that the StuA activity in relation to ACT production is not regulated by cAMP/PKA-induced phosphorylation.Through computer prediction and experimental verification, we have demonstrated that StuA physically interacted with a pH-responsive transcription factor PacC, and the expression level of pacC was decreased in the ΔstuA strain but not ΔstuA-C T110A , suggesting that pacC can be regulated by StuA and is independent of the T110 phosphorylation site of StuA (Fig. S4d).Functional characterization demonstrated that StuA regulates the production of ACT in A. alternata by regulating the ambient pH-responsive regulator pacC.The results also indicate that sensing environmental pH is important for gene expression and toxin production.
PacC has been shown to be required for development, toxin production, and virulence in various fungi (47).During the hemibiotrophic stage, the rice blast fungus M. oryzae depends on PacC-mediated sensing to manipulate host cellular pH (48).The postharvest pathogenic fungus Penicillium expansum secrets ammonia to activate a PacC homolog and increases the accumulation of the mycotoxin patulin (PAT) during fruit colonization (49).PacC positively regulates 15 cluster genes involved in the biosynthesis of PAT in P. expansum as all 15 genes are downregulated in a PacC knockout mutant (50).The promoter regions of nine PAT cluster genes contain one or more PacC-binding consensus sequences, suggesting that PacC likely affects the biosynthesis of PAT by regulating the PAT gene cluster (50).PacC acts as a transcriptional activator to regulate gene expression by binding to the consensus sequence (5′-GCCARG-3′) (51,52).In the present study, we have demonstrated that the purified PacC could directly bind to the promoter regions of ACTT6 and ACTTR, which are known to be critical components in the biosynthesis of ACT.Silencing pacC resulted in markedly reduced production of ACT, supporting the important role of PacC in the biosynthesis of ACT (10,18).Given that environment pH is a common stress that A. alternata might encounter, especially at the initial stage of infection, it remains unclear how pH stress would affect the PacC-medi ated regulation for the production of ACT during fungal colonization.
In summary, this study established a close link between StuA and PacC for the biosynthesis of ACT in A. alternata.Both StuA and PacC also are required for growth, conidiation, and virulence.We demonstrated that StuA physically interacts with PacC, which in turn transcriptionally regulates the expression of seven ACTT genes required for the biosynthesis of ACT.PacC recognizes and binds to the promoter regions of ACTT6 and ACTTR.No consensus sequence (5′-GCCARG-3′) in the promoter region of other ACT genes, indicating that PacC regulates the pathway-specific regulator ACTTR, which in turn regulates the expression of other genes in the ACT biosynthetic gene cluster.Because the APSES transcription factors usually regulate downstream genes through the formation of homo-or heterodimers (29,30), further investigation will allow us to identify other components that interact with StuA and also play a key role in the biosynthesis of ACT.
Fungal strains and culture conditions
The wild-type Z7 strain of A. alternata (deposited at China General Microbiological Culture Collection Center under the accession number CGMCC3.18907) was isolated from an infected citrus (Ougan) leaf in Zhejiang, China (53,54).Unless otherwise indicated, fungal strains were grown on PDA (Solarbio, Beijing, China) at 26°C.Conidia were harvested from fungal cultures grown on PDA for 8 days.PDA powder was buffered with 0.2 M Na 2 HPO 4 and 0.1 M citric acid to pH 4.0, 7.0, or 9.0 and used in the pH shift experiments.Mycelium was obtained from 36-h liquid cultures at 26°C for DNA, RNA, and protein extraction as well as microscopic observation.Fungal strains were grown in a flask with 300-mL Richard's solution at 26°C for 7 days to induce the production of ACT toxin.
Gene deletion and complementation
Gene deletion mutants were generated using fungal transformation system descri bed elsewhere (10).Briefly, a 2-kb upstream flanking sequence fragment and a 2kb downstream flanking sequence were amplified with the primer pairs stuA-up-F/ stuA-up(hph)-R and stuA-down(hph)-F/stuA-down-R, respectively, from Z7 genomic DNA.The two fragments were joined together with the hygromycin resistance gene cassette (HPH) by overlapping PCR to produce two split HPH fragments.The fused fragments were purified and transformed into Z7 protoplasts.The protoplast-mediated transformation was carried out following the protocol described by Lin and Chung (55).Transformants were selected on PDA supplemented with 100 µg/mL hygromycin (Roche Applied Science, Indianapolis, IN, USA) and verified by PCR and Southern blot hybridi zation.For complementation, a 3,625-bp DNA fragment including the 5′ untranslated region (1.5 kb) and the full-length stuA gene was amplified with the primer pair (neo1300) stuA-F/stuA (neo1300)-R and cloned into the plasmid pNeo1300 containing a G418-resistance gene to yield pNeo1300-stuA.The plasmid was transformed into protoplasts prepared from a ΔstuA.Putative transformants were examined by PCR and validated further by Southern blot analysis.All primers used in this study are listed in Table S1.
RNAi vector construction and transformation
Two 300-bp pacC fragments were amplified with the primer pairs Psilence-pacC-LF/Psilence-pacC-LR (for sense fragment) and Psilence-pacC-RF/Psilence-pacC-RR (for antisense fragment) from the cDNA library of A. alternata.The fragments were cloned into a pSilent-1 vector following the homologous recombination ligation method (56) to yield an RNAi pSilent-1-pacC vector containing the hygromycin-resistance gene.
Site-directed mutagenesis
The S67, T110, S370, S411, and T601 residues of StuA were independently substituted by alanine according to the manual of ClonExpress Ultra One Step Cloning Kit (Vazyme, Nanjing, China) using the pNEO1300-StuA plasmid as the template and the respective primer pairs.Point mutations were confirmed by sequencing using the primers listed in Table S1.
Virulence assays and HPLC analysis
Virulence assays were performed by placing 5-mm mycelia plugs on 7-15 days old detached healthy Hongjv leaves and kept in a moist plastic box at 26°C for 2 or 4 days.For toxicity assays, fungal strains were cultured in a 300-mL Richard's solution at 26°C for 28 days on a rotary shaker set at 60 rpm.The pH of culture filtrates was adjusted to 5.5 using 10% phosphate buffer (NaH 2 PO 4 ,) and the solution was mixed with 10 mL Amberlite XAD-2 resin with magnetic stirring for 2 h.The resin was filtered with filter paper, and the crude ACT extract was eluted with 40 mL methanol.About 10 μL of crude extracts were placed on the detached Hongjv leaves, and the leaves were kept in a plastic box for 3 or 5 days.All tests were done at least three times, each with three replicates.
ACT toxin was analyzed by high-performance liquid chromatography (HPLC) analysis as previously described.In brief, the 40 mL toxin crude extract was concentrated to 1 mL in a rotary evaporator.ACT was separated by an XbridgeTM18.5 column (4.6 × 250 mm 2 ) connected to the Waters 880-PU HPLC system (Japan Spectroscopic, Tokyo, Japan) using methanol/0.1% acetic acid gradient solvent system at a flow rate of 1 mL/min.ACT was detected by the absorbance value at 290 nm.Extracts with a peak retention time of 30-35 min were collected and used for bioactivity analysis.Richard's medium with blank PDA plugs was extracted in a similar manner and used as a negative control.
Protein-protein interactions
For yeast two-hybrid screening (Y2H), the coding sequence of each gene was amplified from the cDNA of Z7 with primer pairs listed in Table S1.Full-length cDNA fragments were independently cloned into the vector pGADT7 (bait) or pgbkt7 (prey).The resultant clones were co-transformed into the Saccharomyces cerevisiae Y2H Gold strain.Plasmids were sequenced to ensure error-free.The plasmids pGBKT7-53 and pGADT7-T were used as the positive control.Transformants were grown at 30°C for 3 days on a synthetic medium (SD) lacking Leu and Trp, serially diluted, and transferred to SD without His, Leu, Trp, and Ade to assess binding activity.Three independent experiments were performed for each Y2H assay.
For GST pull-down assays, cDNA containing the StuA APSES domain was cloned into pGEX-4T to generate GST-tagged StuA APSES -GST protein, and cDNA containing PacC ZF was cloned into PMAL-c5x to generate an Mbp-tagged PacC ZF -MBP protein.To test in-vitro binding between MBP-and GST-tagged proteins, the GST-tagged protein or GST (negative control) was mixed with glutathione beads, incubated at 4°C for 1 h, mixed with the MPB-tagged protein, and incubated for an additional 3 h with shaking.The beads were washed five times with phosphate-buffered saline, and GST pull-down proteins were examined by western blot analysis using the monoclonal mouse Anti-GST (EM08071, HuaAn bBiotech, Hangzhou, China, 1:5,000 dilution) and the monoclo nal mouse anti-MBP antibody (AE016, ABclonal, Wuhan, China, 1:1,000) antibodies.
The experiment was repeated three times.BiFC analysis was performed as previously described (57).The fusion constructs were generated by cloning the corresponding cDNA fragments into PKD2-YFPN and PKD5-YFPC vectors.Plasmids YFPN and YFPC were co-transformed into the wild-type strain, and transformants were screened for resistance to both hygromycin and chlorimuron.The resulting transformants were analyzed by PCR.Epifluorescence microscopy was performed using a Zeiss LSM780 confocal microscope (Gottingen, Niedersachsen, Germany).
Microscopy
Fungal hyphae were examined with a Nikon microscope equipped with an LV100ND image system (Nikon, Japan).For the proteins tagged with YFP, each strain was cultured in PDB on a shaker set at 160 rpm, 26°C for 36 h.The confocal microscopy images were obtained using LSM780 (Gottingen).Fungal nuclei were stained with 1 mg/mL DAPI (Sigma, St. Louis, MO, USA).Each experiment was repeated three times.
RNA extraction and quantitative RT-PCR
Fungal strains were grown on CM agar medium for 3 days, transferred to Richard's medium for 7 days, and mycelium was harvested for RNA isolation.Each strain had three biological replicates.Total RNA was extracted from the mycelia of each sample with TRIzol (Takara, Biotechnology, Dalian, China), and reverse transcription was performed with a HiScript II 1st Strand cDNA Synthesis Kit (Vazyme Biotech, Nanjing, China).The relative expression level of each gene was determined by quantitative RT-PCR with HiScript II Q RT SuperMix (Vazyme Biotech).The expression of the actin gene was included as a reference.The experiment was repeated three times.
Electrophoretic mobility shift assay
EMSA analysis was performed using a LightShift Chemiluminescent EMSA Kit 20148 (Thermo Fisher, USA).The cDNA encoding the StuA APSES domain was amplified and cloned into pGEX-4T and PMAL-c5x vector to generate a GST-tagged protein and an Mbp-tagged protein, respectively.The resulting constructs were individually transformed into the E. coli strain BL21 (DE3) and purified for the verification of the cDNA sequence.The recombinant protein PacC ZnF_C2H2 -MBP was generated as described above.DNA probes were prepared by labeling biotin at the 3̍ ′-end with annealing complemen tary oligonucleotides.The purified MBP was used as a negative control.Protein-DNA complexes were separated on 6% native polyacrylamide gels in 0.5 × TBE and transferred to a positively charged nylon membrane (Millipore, Burlington, MA, USA).Biotin-labeled probes were detected according to the instructions of the manufacturer.
7 FIG 1
FIG 1 StuA regulates vegetative growth, sporulation, and pathogenicity in A. alternata.(A) Vegetative growth, hyphal morphology, hyphal tip growth, and sporulation of ΔstuA, Z7, and the complementation ΔstuA-C strains on PDA.Conidia produced by the ΔstuA strain are indicated by red arrows.(B) Quantification of colony diameter.Error bars represent standard deviations.Different letters indicate a statistical significance according to the one-way ANOVA test (P < 0.05).(C) Quantification of conidia production.(D) Inoculation of Z7, ΔstuA, and ΔstuA-C by placing mycelial plugs on detached Hongjv leaves.A blank agar plug was used as the mock.Necrotic lesions were recorded 3 days post-inoculation (dpi).
FIG 2
FIG 2 StuA plays a vital role in ACT biosynthesis of A. alternata.(A) Detached Hongjv leaves were inoculated with 10 µL sterile culture filtrates of each strain to test the toxicity of ACT.Leaves were kept in a plastic box for lesion development.Necrotic lesions were recorded at 6 dpi.(B) HPLC analysis of ACT toxin purified from culture filtrates of each strain.The peak representing ACT toxin is indicated by a black arrow.(C) The relative expression level of the ACT biosynthetic genes in the Z7 and the ΔstuA strains.The actin gene was used as an internal control.Expression of each of the ACCT genes in Z7 was set at 1 and used for statistical analysis to determine the relative expression of each gene in ΔstuA.Error bars represent standard deviations from three biological replicates.Different letters represent statistical significance according to the one-way ANOVA test (P < 0.05).
FIG 3
FIG 3 StuA physically interacts with PacC.(A) Y2H analysis reveals the interaction of StuA with PacC.Serial dilutions of yeast cells (cells/mL) transferred with the bait and prey constructs indicated in the figure were assayed for growth on SD/−Ade/−His/−Leu/−Trp plates.Pairing pGADT7-StuA (bait) and pgbkt7-PacC (prey) resulted in the growth of the yeast strain on medium without Ade, His, Leu, and Trp.Similar results were obtained in reverse by pairing pGADT7-PacC (bait) and pgbkt7-StuA (prey).Pairing pGBKT7-53 and pGADT7 was used as a positive control (B).The APSES domain of StuA (StuA APSES ) interacted with the ZnF_C2H2 domain of PacC (PacC ZF ) in-vitro by GST pull-down assays.StuA APSES and PacC ZF were fused to GST and MBP tags, respectively.StuA APSES -GST or GST-bound resin was incubated with crude protein extracts containing PacC ZF -MBP and analyzed by western blot analyses.Proteins were detected by staining with Coomassie Brilliant Blue (CBB).(C) The interaction of PacC with StuA in the nucleus was visualized by BiFC assays.YFP signals were observed in vegetative hyphae of the transformant harboring YFPN-StuA and YFPC-PacC.Scale bar = 5 µm.
FIG 4
FIG 4 PacC is required for pathogenicity and ACT biosynthesis.(A) Growth of the silencing mutant pacC-s-3 under different pH conditions compared with Z7.All strains were cultured on CM medium for 3 days and transferred to PDA plates with different pH values (4, 7, or 9).(B) Necrotic lesions formed on detached Hongjv leaves by the pacC mutants and Z7 strains 3 dpi.(C) HPLC analysis of ACT toxin purified from culture filtrates of pacC-s-3 and Z7.ACT toxin is indicated by a black arrow.(D) The relative expression level of ACT biosynthetic genes in Z7 and pacC-s-3 strains.The actin gene was used as an internal control.Expression of each of the ACCT genes in Z7 was set at 1 and used for statistical analysis to determine the relative expression of each gene in the pacC mutants.Error bars represent standard deviations from three biological replicates.Different letters represent statistical significance according to the one-way ANOVA test (P < 0.05).
FIG 5
FIG 5 PacC directly regulates transcription of ACT biosynthetic genes.(A) Physical position of ACT biosynthetic cluster.Red arrows correspond to the genes having a PacC binding motif 1 kb upstream of the first ATG codon.(B) Validation of the interaction between PacC and putative binding motif in the promoter of three ACT biosynthetic genes by electrophoretic mobility shift assay (EMSA).A schematic diagram shows the predicted binding motif of PacC in the promoter region of ACT biosynthetic genes (upper panel).The promoter of each gene was incubated with purified PacC ZF -MBP or MBP at 28°C for 20 min.
FIG 6 A
FIG 6 A proposed model for StuA-mediated regulation of ACT biosynthesis.Deleting stuA results in severe vegetative growth, sporulation, ACT production, and pathogenicity of A. alternata.In addition, StuA interacts with a pH-responsive transcription factor PacC, which increases the transcript levels of the ACT biosynthetic genes ACTT6 and ACTTR by directly binding to its promoter. | 6,763.4 | 2023-10-09T00:00:00.000 | [
"Biology"
] |
Higher Protein Kinase C ζ in Fatty Rat Liver and Its Effect on Insulin Actions in Primary Hepatocytes
We previously showed the impairment of insulin-regulated gene expression in the primary hepatocytes from Zucker fatty (ZF) rats, and its association with alterations of hepatic glucose and lipid metabolism. However, the molecular mechanism is unknown. A preliminary experiment shows that the expression level of protein kinase C ζ (PKCζ), a member of atypical PKC family, is higher in the liver and hepatocytes of ZF rats than that of Zucker lean (ZL) rats. Herein, we intend to investigate the roles of atypical protein kinase C in the regulation of hepatic gene expression. The insulin-regulated hepatic gene expression was evaluated in ZL primary hepatocytes treated with atypical PKC recombinant adenoviruses. Recombinant adenovirus-mediated overexpression of PKCζ, or the other atypical PKC member PKCι/λ, alters the basal and impairs the insulin-regulated expressions of glucokinase, sterol regulatory element-binding protein 1c, the cytosolic form of phosphoenolpyruvate carboxykinase, the catalytic subunit of glucose 6-phosphatase, and insulin like growth factor-binding protein 1 in ZL primary hepatocytes. PKCζ or PKCι/λ overexpression also reduces the protein level of insulin receptor substrate 1, and the insulin-induced phosphorylation of AKT at Ser473 and Thr308. Additionally, PKCι/λ overexpression impairs the insulin-induced Prckz expression, indicating the crosstalk between PKCζ and PKCι/λ. We conclude that the PKCζ expression is elevated in hepatocytes of insulin resistant ZF rats. Overexpressions of aPKCs in primary hepatocytes impair insulin signal transduction, and in turn, the down-stream insulin-regulated gene expression. These data suggest that elevation of aPKC expression may contribute to the hepatic insulin resistance at gene expression level.
Introduction
The normal physiological responses to insulin stimulation in the liver include the increase of glycolysis and lipogenesis, and the reduction of gluconeogenesis [1,2]. In hepatic parenchymal cells, insulin initiates a signaling cascade upon binding to its receptor on the cell membrane, which is followed by the activation of insulin receptor substrates (IRSs) [3]. Tyrosine overexpression of PKCz or PKCι/λ in ZL primary hepatocytes impaired the insulin-regulated gene expression.
Animals and diets
Zucker rats were bred and housed under constant temperature and humidity in the animal facility on a 12-hour light-dark cycle. Male ZL (fa/+ or +/+) or ZF (fa/fa) rats at weaning (3 weeks old) were kept on Teklad rodent chow ad libitum (#8640, Harlan Laboratories, Indianapolis, IN) for 8 weeks before liver tissue collection and primary hepatocyte isolation. All procedures were approved by the Institutional Animal Care and Use Committee at the University of Tennessee at Knoxville (Protocols #1256, 1642 and 1863).
Liver tissue collection, total protein preparation and total RNA extraction The rat was euthanized by primary carbon dioxide asphyxiation, and then secondary cervical dislocation according to regulations. The procedure for liver tissue collection was reported elsewhere [21,22]. In brief, a 10ml syringe with a 21G × 1½" hypodermic needle was used to drain blood from the liver via the inferior vena cava. The liver was excised, sliced, then snap-frozen in liquid nitrogen, and store at -80°C before further analysis. A small portion of the liver tissue was homogenized in 10 volumes of cold whole-cell lysis buffer (1% Triton X-100, 10% glycerol, 1% IGEPAL CA-630, 50mM Hepes, protease inhibitors, pH 8.0), and then centrifuged to remove insoluble matters [21,22]. The protein concentration of the supernatant was determined with PIERCE BCA protein assay kit. Another small portion of the liver tissue was homogenized in 10 volumes of cold STAT-60. Total RNA was extracted according to the manufacturer instructions.
Cloning of the rat Prkci cDNA and subcloning of the rat Prkcz cDNA Based upon the rat Prkci mRNA sequence (GenBank: EU517502.1), sense 5'-ATC CCC TCA GCC TCC AGC GG-3' and antisense 5'-ACT GTG ACC GGG CTA ACG GT-3' primers were designed using Primer-BLAST tools from the National Center Biotechnology Information. For the complete coding sequence of rat Prkci cDNA, PCR was carried out using cDNA derived from total RNA of ZL primary hepatocyte as the template. For subcloning of rat Prkcz, pEYFP-N1 vector containing its complete coding sequence (generous gift from Dr. Ralf Kubitz) was used as the template for the PCR amplification of Prkcz with sense primer 5'-ACC TCG AGA TGC CCA GCA GGA CCG AC-3' and antisense primer 5'-GTG AAT TCA CAC GGA CTC CTC AGC AGA C-3'. The amplicons containing the complete coding sequences of Prkci and Prkcz cDNA sequences were ligated into pCR2.1 vector through TA Cloning Kit (Invitrogen) according to the manufacturer's protocol.
Generation of Ad-Prkcz and Ad-Prkci recombinant adenoviruses
The inserts containing the complete coding sequences of Prkcz and Prkci were subcloned into pACCMV5 vector to make pACCMV5-Prkcz and pACCMV5-Prkci, respectively. To generate Ad-Prkcz and Ad-Prkci recombinant adenoviruses, pACCMV5 plasmid containing the complete coding sequence of atypical Prkc sequences was co-transfected with JM17 into HEK293 cells using FuGENE 1 6 Transfection Reagent (Roche) according to manufacturer instructions. After the formation of the viral plaques, the crude lysate was collected and stored at -80°C. The amplification and purification of adenovirus were carried out according to published protocols [23]. The optical density (OD) at 260nm of the adenoviral suspension was determined to estimate the plaque forming units (pfu) of the purified recombinant adenoviruses. We used that 1 OD equals to 1 × 10 12 pfu/ml. The purified recombinant adenoviruses were stored at -80°C until being used.
Infection of recombinant adenoviruses and treatments of primary hepatocytes
In experiments using recombinant adenoviruses, purified Ad-Prkcz and Ad-Prkci were added in the medium A (5,000 pfu/cell) to allow the overexpression of PKCz and PKCι/λ in the primary rat hepatocytes during the pretreatment period, respectively. The pretreated hepatocytes were then washed once with PBS, and treated with medium A containing indicated concentrations of insulin (0nM to 100nM) for 6 hours before total RNA extraction, or for 15 minutes before whole cell lysate preparation.
Immunoblot analysis of proteins
Protein samples (40μg) of the whole cell lysates of primary hepatocytes or liver tissues were resolved in 8% SDS polyacrylamide gel, and then transferred to BIO-RAD Immuno-Blot PVDF membrane (Hercules, CA) as described [23]. Membranes were blocked by 8% non-fat milk, and then probed with specific antibodies (1:1 000 dilution). After gentle wash, membranes were incubated with goat anti-rabbit IgG conjugated with horseradish peroxidase (1:5,000 dilution). After washing, antigen-bound antibody was detected using ECL Western Blotting Substrate (Thermo Scientific), and subsequently exposure to X-ray films. The films were scanned, and the images were stored for the densitometry analysis using ImageJ software (NIH). Densitometry data for each protein were normalized to β-actin levels in each sample.
Statistical Analysis
Statistical analyses were performed using SPSS 19.0 software. Student t-test was used to compare the means between two groups. One-way ANOVA with LSD post-hoc test was used to compare the means among three or more groups. Data were presented as means ± S.E.M. The number of experiments indicates hepatocyte isolations from different animals. A p value less than 0.05 is considered statistically significant.
Results
PKCζ expression is elevated in the liver of ZF rats Fig. 1A shows that, despite the well-reported hepatic insulin resistance, the levels of phospho-AKT at Ser473 and Thr450 in the liver of ZF rats were higher than that of ZL rats (please see densitometry analysis in S1 Fig.). The phospho-AKT at Thr308 in the liver tissues could not be detected in the current experimental condition, probably due to non-maximal activation of AKT in ad libitum state, relative less AKT protein amount from hepatocytes in the liver tissue lysate or rapid de-phosphorylation of AKT at Thr308 during the process of euthanasia of the animals. This observation shows that the AKT phosphorylation at Ser473 and Thr450 in ZF rat liver tissue is unimpaired, suggesting the consequence of hyperinsulinemia. In addition, ZF rat liver had higher protein level of FAS, indicating elevation of hepatic lipogenesis.
On the other hand, the protein level of PKCz, but not PKCι/λ, in the liver of ZF rats was higher than that of ZL rats (Fig. 1A). The level of phospho-PKCz/λ at Thr410/403, which indicated the phosphorylation of both PKCz and PKC ι/λ, was higher in the liver samples of ZF rats than that of ZL rats (Fig. 1A). The mRNA level of Prkcz, but not that of Prkci, was also elevated in the ZF liver (Fig. 1B). These data collectively demonstrated that the aPKC expression levels changes with the development of hepatic insulin resistance in the ZF liver.
Short-term insulin treatment did not alter protein levels of total and phospho-aPKCs in primary rat hepatocytes Fig. 2 shows that protein levels of FAS, ACC and phospho-ACC at Ser79 in hepatocytes of ZF rats were higher than that of ZL rats. This result demonstrates that the hepatic lipogenic capability remains elevated in ZF hepatocytes cultured for 20 hours. Insulin treatment dose-dependently induced the levels of phospho-AKT at Ser473 and Thr308 in ZL and ZF primary hepatocytes similarly (Fig. 2). The level of phospho-AKT at Thr450, which was insulin-independent, was slightly higher in ZF hepatocytes than in ZL hepatocytes (Fig. 2). These data demonstrated that the insulin-induced AKT phosphorylation is comparable in both ZL and ZF rat hepatocytes. The impaired insulin-regulated gene expression in ZF hepatocytes may be caused by changes in other components of insulin signaling pathway. On the other hand, the aPKC Overexpression and Hepatic Gene Expression 15-minute insulin treatment did not significantly induce or suppress the protein levels of PKCz and PKCι/λ and phospho-PKCz/λ at Thr410/403 in ZL and ZF primary hepatocytes (Fig. 2). There was a slight induction of the total PKCz when fresh M199 was added. ZF primary hepatocytes had higher level of PKCz than ZL primary hepatocytes (Fig. 2), which was in line with the finding from the liver tissues ( Fig. 1).
Overexpressions of PKCζ and PKCι/λ in ZL primary hepatocytes
To investigate whether elevated aPKC protein levels contribute to the hepatic insulin resistance at gene expression level, we constructed recombinant adenoviruses Ad-Prkcz and Ad-Prkci for their overexpressions in ZL primary rat hepatocytes. The expression level of Pckcz or Prkci mRNA was normalized to that of ribosomal gene 36B4, which was stable across all treatment groups (S2 Fig.). In hepatocytes transfected with Ad-β-gal, the relative abundance of Prkcz mRNA (-ΔCt of -7.90) was much lower than that of Prkci mRNA (-ΔCt of -4.20). The overexpression fold of Pckcz or Prkci was calculated using their levels in the Ad-β-gal group as 1. Ad-Prkcz and Ad-Prkci respectively caused overexpressions of Prkcz mRNA (-ΔCt of 3.13,~2 000 fold, Fig. 3A) and Prkci mRNA (-ΔCt of 3.78,~250 fold, Fig. 3B, Ã ).
Interestingly, 0.1-100nM insulin treatments significantly suppressed the expression levels of Prkcz mRNA by 40-60% in hepatocytes transfected with Ad-Prkci (Fig. 3A, black columns). Acute insulin treatment did not alter protein levels of total and phospho-aPKCs in primary hepatocytes from ZL and ZF rats. The primary hepatocytes from ZL and ZF rats (chow diet, 8 weeks) were pre-treated as described in the Materials and Methods for overnight, and then incubated in fresh medium A with increasing concentrations of insulin (0nM to 100nM) for 15 min. The hepatocyte group remained in pre-treatment medium was indicated by "C". Immunoblot analysis with antibodies against the indicated proteins was performed. The figure shows the representative result of three parallel experiments obtained from three sets of primary hepatocytes independently isolated from ZL and ZF rats fed chow diet ad libitum. On the other hand, in hepatocytes transfected with Ad-Prkcz, the expression levels of Prkci mRNA in 1-100nM insulin treatment groups were significantly lower than that in hepatocytes with β-GAL overexpression at the corresponding treatments (Fig. 3B, striped columns, #). Fig. 4 shows that PKCz, but not PKCι/λ, protein level was overexpressed by~9.3 folds in hepatocytes transfected with Ad-Prkcz compared to that with Ad-β-gal. Overexpression of PKCz did not significantly increase phospho-PKCz/λ Thr410/403 level. Fig. 5 shows that primary rat hepatocytes transfected with Ad-Prkci manifested~4.0 fold overexpression of PKCι/λ protein and increased phosphorylation of PKCz/λ at Thr410/403, compared to that with Ad-β-gal. Unfortunately, we tried and failed to develop a method for the determination of activities of PKCz and PKCƖ/λ using substrates as reported and available antibodies.
Overexpression of PKCζ or PKCι/λ attenuated the insulin signaling cascade in ZL primary hepatocytes Fig. 4 shows that the levels of insulin-induced phosphorylation of AKT at Ser473 and Thr308, and insulin-independent phosphorylation of AKT at Thr450, were diminished in hepatocytes overexpressing PKCz, but not β-GAL. There was no change of total AKT. The expression levels of IRS1 were also lowered upon the overexpression of PKCz. Interestingly, based on the densitometry analysis, there was a slight but significant elevation in the levels of PKCι/λ, FAS, ACC or phospho-ACC at Ser79 in the hepatocytes overexpressing PKCz compared with Ad-β-gal group (S3 Fig.). Fig. 5 shows that PKCι/λ overexpression did not significantly change the expression levels of PKCz and ACC in ZL primary hepatocytes. The levels of FAS and phospho-ACC Ser79 were marginally but significantly elevated (S4 Fig.). PKCι/λ overexpression did not change the total AKT level compared with Ad-β-gal group. Similar to that in PKCz overexpression groups, the levels of IRS1, insulin-induced phospho-AKT at Ser473 and Thr308, and insulin-independent phosphorylation of AKT at Thr450 were significantly decreased in primary rat hepatocytes overexpressing PKCι/λ in comparison to the Ad-β-gal group. These data collectively demonstrate that overexpression of aPKC attenuates the activation of insulin signaling cascade in primary hepatocytes from ZL rats.
Overexpression of PKCζ or PKCι/ζ impaired the expression levels of insulin-regulated genes in primary hepatocytes from ZL rats
Insulin dose-dependently induced the expression levels of Gck (Fig. 6A) and Srebp-1c (Fig. 6B) transcripts in Ad-β-gal group. Overexpression of PKCz or PKCι/λ increased the basal levels of Gck and Srebp-1c in ZL primary hepatocytes. However, overexpression of PKCz or PKCι/λ abolished the insulin-induced Gck and Srebp-1c expressions in ZL hepatocytes.
Discussion
Here we show that the expression levels of PKCz protein and Prkcz mRNA are higher in the liver of ZF rats than ZL rats. This elevation remains intact in primary hepatocytes that have been cultured for more than 20 h. Since the insulin-regulated gene expression is impaired in primary hepatocytes from ZF rats, we hypothesize that the alteration of aPKC expressions may play a role in this impairment. Therefore, PKCz and PKCι/λ are successfully overexpressed using recombinant adenoviruses in ZL primary hepatocytes. The adenoviral-mediated overexpression of PKCz or PKCι/λ in primary hepatocytes from ZL rats leads to the impairment of insulin-regulated Gck, Srebp-1c, Pck1, G6pc and Igfbp1 expressions (Fig. 6). Overexpression of either PKCz or PKCι/λ is sufficient to alter basal and insulin-regulated gene expression in hepatocytes from ZL rats.
The impairment of the insulin-regulated gene expression might be attributed to the reduction of of AKT phosphorylations in cells with overexpression of PKCz or PKCι/λ., Maximal activation of AKT needs phosphorylations of AKT at both Ser473 and Thr308 as indicated in [4]. Therefore, detection of its phosphorylation using specific antibodies have been a indicator of AKT activation status. It was interesting to note that we could not detect phosphorylation of AKT at Thr308 in the liver tissue lysates of both ZL and ZF rats. Here are the possible reasons. First, the in vivo activation of AKT may be moderate, not maximized, which allows further regulation in response to stimuli. Second, there might be relative less AKT protein amount from hepatocytes in the liver tissue preparations, which requires a larger amount of protein per sample for the detection. Third, the phosphorylation of AKT at Thr308 in the liver may be rapidly de-phoshorylated during the time when the animals were euthanized for the collection of tissue samples. Nevertheless, we demonstrated here that in ZL hepatocytes of Ad-β-gal group, insulin dose-dependently induces the levels of phospho-AKT (Ser473 and Thr308) and IRS1 protein.
On the other hand, PKCz or PKCι/λ overexpression attenuates the insulin-induced phosphorylation of AKT, and decreases the protein level of IRS1, which suggests a diminished signaling cascade initiated from IRS1 upon insulin treatment (Figs. 4 and 5). Without insulin treatment, the expression level of total AKT and p-AKT seem to be not different between the Ad-β-gal control and aPKCs overexpression groups (Figs. 4 and 5). However, higher mRNA levels of Gck, Pck1, Srebp-1c and Pklr were observed in the aPKCs overexpression groups compared to those in the Ad-β-gal control (Fig. 6). This indicates that other mechanisms may contribute to the elevation of the basal expression of those genes in hepatocytes overexpressing PKCz or PKCι/λ. GCK catalyzes the phosphorylation of glucose into glucose-6-phosphate, which is the ratelimiting step of glycolysis in the liver [24]. The induction of the hepatic Gck expression depends on the activation of insulin signaling pathway in the liver [11]. PKCz (Fig. 4) or PKCι/λ (Fig. 5) overexpression diminished phosphorylation of AKT at Ser473 along with the decrease of IRS1 protein expression. It has been reported that aPKCs probably act as negative feedback regulators of insulin signaling pathway by promoting serine phosphorylation and tyrosine dephosphorylation of IRS proteins both in vitro and in cell lines [25][26][27]. Thus, the reduction of the activation of insulin signaling pathway due to PKCz or PKCι/λ overexpression may impair the insulin-regulated Gck expression. Additionally, the insulin-induced Gck expression has been suggested to be mediated by transcription factors, such as hepatocyte nuclear factor 4, forkhead box protein O1, SREBP-1c, liver x receptor α, and peroxisome proliferator-activated receptor γ [28][29][30][31]. Since the knowledge about the non-kinase activities and endogenous substrates of PKCz or PKCι/λ is limited, it is possible that PKCz or PKCι/λ directly regulate Gck expression through controlling the activation of certain transcription factors at its promoter, which increases the Gck basal mRNA level.
PEPCK catalyzes the rate limiting enzymatic reaction in gluconeogenesis, by which oxaloacetate is converted into phosphoenolpyruvate [32]. In the liver, the insulin-suppressed Pck1 expression has been thought to be purportedly mediated through three pathways: (1) the phosphorylation and inactivation of forkhead box proteins by insulin [33,34]; (2) antagonizing arbitrarily set to 1 for that gene. The mRNA levels of (A) Gck, (B) Srebp-1c, (C) Pck1, (D) Igfbp-1, (E) G6pc and (F) Pklr were expressed as fold inductions (mean ± SEM; n = 4; all p<0.05; a<b, c<d, e>f, g>h, i>j, k>l, m>n, o>p, q>r for comparing the insulin treatment groups of hepatocytes overexpressing Adβ-gal, Ad-Prkcz or Ad-Prkci using one-way ANOVA; * for comparing fold inductions among Ad-β-gal, Ad-Prkcz and Ad-Prkci groups at the indicated insulin concentrations using one-way ANOVA). doi:10.1371/journal.pone.0121890.g006 aPKC Overexpression and Hepatic Gene Expression the glucagon-induced PPARγ co-activator 1 expression [35]; (3) the phosphorylation of CREBbinding protein via aPKCs [36,37]. Interestingly, insulin-induced activation of aPKCs was shown to suppress Pck1 expression via this phosphorylation of SREBP-binding protein [36]. However, our results show the induction of Pck1 expression upon overexpression of PKCz or PKCƖ/λ in the absence of insulin. The activities of aPKC isoforms are elevated in the liver of individuals with type 2 diabetes [17]. Treatment of hepatocytes from these individuals with aPKC inhibitors decreases the basal transcript levels of gluconeogenic genes [17]. Hepatic inhibitors of PKCƖ have been show to correct the metabolic abnormalities in rodents [18]. Here, aPKCs overexpression not only increases the basal transcript of Pck1 mRNA, but also abrogates the insulin-suppressed Pck1 expression. Additionally, aPKCs overexpression attenuates the insulin-suppressed G6pc expression. Therefore, our data suggest that aPKCs are prominent positive regulators of hepatic gluconeogenesis.
SREBP-1c is a master regulator of the hepatic expression of lipogenic genes [38]. The insulin-induced Srebp-1c expression requires the activation of PI3K, and the mTORC2-mediated phosphorylation of AKT at Ser473 [39]. Based on the use of pharmacological inhibitors, the activation of mTORC1 downstream of AKT is identified as an indispensible step in the induction of Srebp-1c [39]. On the other hand, the activation of PI3K subsequently activates aPKC in the liver, which promotes Srebp-1c expression and lipogenesis [12,13]. Despite the diminished activation of AKT and the abolished insulin-induced Srebp-1c expression, PKCz or PKCι/λ overexpression increases the basal transcript level of Srebp-1c in the primary hepatocytes from ZL rats (Fig. 6C). Interestingly, PKCz or PKCι/λ overexpression only marginally induced the protein levels of FAS and ACC (Figs. 4 and 5). One possibility is that the elevation levels of transcripts might not have been translated in the presence of overexpression of PKCz or PKCι/λ. Alternatively, diminished insulin signaling pathway results in decreased activation of p70 S6-kinase, which is required for the SREBP-1c processing to generate nuclear SREBP-1c [40]. Moreover, there might be a lag time between the elevation of Srebp-1c mRNA and the increase of SREBP-1c protein, and the down-stream targets. Nevertheless, the protein levels of ACC and are maintained at a high level in the hepatocytes and liver samples of ZF rats (Figs. 1 and 2), demonstrating the metabolic changes associated with these insulin-resistant animals. Further investigations are needed to delineate the sequential events associated with the elevation of the hepatic lipogenesis upon insulin stimulation and activation of AKT and aPKC isoforms.
It is interesting to note that the Prkcz mRNA and PKCz protein, but not that of Prkci mRNA and PKCι/λ protein levels are elevated in the liver of ZF rats. Insulin at 1nM and up significantly induces the expression levels of Prkci mRNA, a process that is blunted in the presence of PKCz overexpression. On the other hand, insulin does not affect the expression of Prkcz mRNA. Additionally, PKCι/λ overexpression alone does not affect Prkcz mRNA expression. However, PKCι/λ overexpression introduced the insulin-mediated suppression of Prkcz mRNA expression. These data indicate that there are crosstalks between PKCz and PKCι/λ pathways, which may mutually regulate the expression of each other at mRNA levels in response to hormonal or nutritional stimuli. The alteration of this mutual regulation mechanism probably is associated with the development of the impairment of insulin signaling cascade. Further studies are warranted.
It is worth noting that, in ZL primary hepatocytes, the overexpression of aPKCs and the attenuated insulin signaling cascade lead to abolishment of insulin-regulated expression of Gck, Pck1 and Srebp-1c (Fig. 6). In contrast, ZF rats showed unimpaired phosphorylation of AKT at Ser473 and Thr450 despite increased PKCz levels (Figs. 1 and 2). This difference may explain that the insulin-regulated expression of Gck and Pck1 is impaired rather than abolished in ZF primary hepatocytes [41]. Additionally, in high-fat diet fed mice, increased basal AKT phosphorylation was observed in the liver with indication for the development of insulin resistance [42]. Additionally, elevated aPKC activity was shown to attenuate AKT-dependent FOXO1 phosphorylation in the liver of diet-induced obesity mice, despite concurrent elevated hepatic AKT activity [43]. These data collectively suggest that activation statuses of both AKT and aPKC are critical in the regulation of hepatic insulin sensitivity.
In summary, we have shown that the expression level of PKCz is elevated in insulinresistant animals. Overexpression of PKCz or PKCƖ/λ in ZL primary hepatocytes reduces the activation of AKT probably through the reduction of IRS1 expression, demonstrating the attenuation of the activation of insulin signal pathway. This alteration of the expression levels of aPKC leads to the impairment of the insulin-regulated gene expression in primary hepatocytes. Our data demonstrate the critical roles of aPKCs in the regulation of hepatic insulin-sensitivity at gene expression level, and provide insights into the development of the insulin resistance and type 2 diabetes.
Supporting Information S1 Fig. Densitometry analysis of target proteins in the livers of ZL and ZF rats. Densitometry analysis was performed using ImageJ software (NIH). The data for each protein were normalized to β-actin levels in each sample. One-way ANOVA with LSD post-hoc test was used to compare the means. Error bars represent S.E.M. Ã indicates a p value less than 0.05. (DOCX) S2 Fig. Ct number of the ribosomal gene 36B4 in ZL hepatocytes transfected with Ad-β-gal, Ad-Prkcz and Ad-Prkci, respectively. Primary hepatocytes of ZL rats fed chow for 8 weeks were isolated and seeded onto dishes as described in the Materials and Methods. Purified Adβ-gal, Ad-Prkcz and Ad-Prkci were added in medium A during the overnight pretreatment period to allow the overexpression of β-GAL, PKCz and PKCι/λ, respectively. The primary hepatocytes were then incubated in fresh medium A with increasing concentrations of insulin (0 nM to 100 nM) for 6 hours before total RNA extraction. Total RNA was extracted, and then subjected to real-time PCR analysis with primer pairs for 36B4. The data were the raw Ct number of 36B4 for 0 nM insulin treatment groups (mean ± SEM; n = 4). (DOCX) S3 Fig. Densitometry analysis of target proteins in ZL hepatocytes transfected with Ad-βgal or Ad-Prkcz. Densitometry analysis was performed using ImageJ software (NIH). The data for each protein were normalized to β-actin levels in each sample. Two-way ANOVA was used to determine the contribution of insulin treatment and adenovirus overexpression to total variance. Error bars represent S.E.M. Ã indicates that adenovirus overexpression accounts for a significant percentage of total variance with a p value less than 0.05. a<b, c<d<e, which indicate insulin treatments account for a significant percentage of total variance with a p value less than 0.05. (DOCX) S4 Fig. Densitometry analysis of target proteins in ZL hepatocytes transfected with Ad-βgal or Ad-Prkci. Densitometry analysis was performed using ImageJ software (NIH). The data for each protein were normalized to β-actin levels in each sample. Two-way ANOVA was used to determine the contribution of insulin treatment and adenovirus overexpression to total variance. Error bars represent S.E.M. Ã indicates that adenovirus overexpression accounts for a significant percentage of total variance with a p value less than 0.05. a<b, a'<b', c<d, e<f, which indicate insulin treatments account for a significant percentage of total variance with a p value less than 0.05. (DOCX) | 5,922 | 2015-03-30T00:00:00.000 | [
"Biology"
] |
Surface plasmon resonance modulation in nanopatterned Au gratings by the insulator-metal transition in vanadium dioxide films
Correlated experimental and simulation studies on the modulation of Surface Plasmon Polaritons (SPP) in Au/VO2 bilayers are presented. The modification of the SPP wave vector by the thermallyinduced insulator-to-metal phase transition (IMT) in VO2 was investigated by measuring the optical reflectivity of the sample. Reflectivity changes are observed for VO2 when transitioning between the insulating and metallic states, enabling modulation of the SPP in the Au layer by the thermally induced IMT in the VO2 layer. Since the IMT can also be optically induced using ultrafast laser pulses, we postulate the viability of SPP ultrafast modulation for sensing or control. ©2015 Optical Society of America OCIS codes: (050.2770) Gratings; (240.0310) Thin films; (240.6680) Surface plasmons; (310.6860) Thin films, optical properties; (310.6628) Subwavelength structures, nanostructures; (310.6845) Thin film devices and applications. References and links 1. N. I. Zheludev and Y. S. Kivshar, “From metamaterials to metadevices,” Nat. Mater. 11(11), 917–924 (2012). 2. L. Wang, K. Yang, C. Clavero, A. J. Nelson, K. J. Carroll, E. E. Carpenter, and R. A. Lukaszew, “Localized surface plasmon resonance enhanced magneto-optical activity in core-shell Fe-Ag nanoparticles,” J. Appl. Phys. 107, 09B303 (2010). 3. X. Huang, I. H. El-Sayed, W. Qian, and M. A. El-Sayed, “Cancer cell imaging and photothermal therapy in the near-infrared region by using Gold nanorods,” J. Am. Chem. Soc. 128(6), 2115–2120 (2006). 4. W. L. Barnes, A. Dereux, and T. W. Ebbesen, “Surface plasmon subwavelength optics,” Nature 424(6950), 824– 830 (2003). 5. A. Cavalleri, C. Tóth, C. W. Siders, J. A. Squier, F. Ráksi, P. Forget, and J. C. Kieffer, “Femtosecond structural dynamics in VO2 during an ultrafast solid-solid phase transition,” Phys. Rev. Lett. 87(23), 237401 (2001). 6. F. J. Morin, “Oxides which show a metal-to-insulator transition at the Neel temperature,” Phys. Rev. Lett. 3(1), 34–36 (1959). 7. S. Kittiwatanakul, J. Laverock, D. Newby, K. E. Smith, S. A. Wolf, and J. Lu, “Transport behavior and electronic structure of phase pure VO2 thin films grown on c-plane sapphire under different O2 partial pressure,” J. Appl. Phys. 114(5), 053703 (2013). 8. S. Lysenko, A. Rúa, V. Vikhnin, F. Fernández, and H. Liu, “Insulator-to-metal phase transition and recovery processes in VO2 thin films after femtosecond laser excitation,” Phys. Rev. B 76(3), 035104 (2007). 9. M. Rini, Z. Hao, R. W. Schoenlein, C. Giannetti, F. Parmigiani, S. Fourmaux, J. C. Kieffer, A. Fujimori, M. Onoda, S. Wall, and A. Cavalleri, “Optical switching in VO2 films by below-gap excitation,” Appl. Phys. Lett. 92(18), 181904 (2008). 10. B. Wang and G. P. Wang, “Plasmon Bragg reflectors and nanocavities on flat metallic surfaces,” Appl. Phys. Lett. 87(1), 013107 (2005). 11. H. Raether, Surface Plasmons on Smooth and Rough Surfaces and on Gratings (Springer-Verlag, 1986). 12. J. Nag, “The solid-solid phase transition in vanadium dioxide thin films: synthesis, physics and application,” Ph.D. thesis (Vanderbilt, Nashville, TN, 2011). 13. J. Y. Suh, E. U. Donev, R. Lopez, L. C. Feldman, and R. F. Haglund, “Modulated optical transmission of subwavelength hole arrays in metal-VO2 films,” Appl. Phys. Lett. 88(13), 133115 (2006). #232977 $15.00 USD Received 22 Jan 2015; revised 1 May 2015; accepted 4 May 2015; published 12 May 2015 © 2015 OSA 18 May 2015 | Vol. 23, No. 10 | DOI:10.1364/OE.23.013222 | OPTICS EXPRESS 13222 14. M. J. Dicken, K. Aydin, I. M. Pryce, L. A. Sweatlock, E. M. Boyd, S. Walavalkar, J. Ma, and H. A. Atwater, “Frequency tunable near-infrared metamaterials based on VO2 phase transition,” Opt. Express 17(20), 18330– 18339 (2009). 15. K. G. West, J. Lu, J. Yu, D. Kirkwood, W. Chen, Y. Pei, J. Claassen, and S. A. Wolf, “Growth and characterization of vanadium dioxide thin films prepared by reactive-biased target ion beam deposition,” J. Vac. Sci. Technol. A 26(1), 133–139 (2008). 16. L. Wang, E. Radue, S. Kittiwatanakul, C. Clavero, J. Lu, S. A. Wolf, I. Novikova, and R. A. Lukaszew, “Surface plasmon polaritons in VO2 thin films for tunable low-loss plasmonic applications,” Opt. Lett. 37(20), 4335–4337 (2012). 17. E. Radue, E. Crisman, L. Wang, S. Kittiwatanakul, J. Lu, S. A. Wolf, R. Wincheski, R. A. Lukaszew, and I. Novikova, “Effect of a substrate-induced microstructure on the optical properties of the insulator-metal transition temperature in VO2 thin films,” J. Appl. Phys. 113(23), 233104 (2013). 18. Grating Solver Development Co.”, retrieved http://www.gsolver.com/. 19. E. D. Palik, Handbook of Optical Constants of Solids (Academic, 1998). 20. H. W. Verleur, A. S. Barker, and C. N. Berglund, “Optical properties of VO2 between 0.25 and 5 eV,” Phys. Rev. 172(3), 788–798 (1968).
Introduction
Current interest in SPP technology is focused on the development of nanoscale optical devices to control the propagation of light in sub-wavelength geometries [1][2][3][4]. The use of photons and electrons together in such technologies is desirable for developing optoelectronic protocols to speed up information processing and transmission, as well as for biological sensing and new imaging techniques. There are a number of materials and structures with unique properties that can be exploited for these applications, including vanadium dioxide (VO 2 ). VO 2 is well-known as a material exhibiting an insulator-to-metal phase transition (IMT) that can be optically [5], thermally [6], and electrically [7] induced, and optical switches using VO 2 have gained attention due to their extremely fast switching speeds (<100 fs) [8] and very low switching energies (on the order of 1 picoJ/μm 2 ) [9]. In alloptical devices, SPPs enhance the local optical field intensity in the region of sub-wavelength structures [10], producing a strong non-linear effect, thus allowing for new ways to control light propagation. As such, it is of interest to combine these plasmonic effects with the optical transition of VO 2 .
When exciting SPPs using only light, the wave vector of the incident light is smaller than that needed to excite the SPPs, i.e. k i < k sp , where k i = k 0 sinθ is the in-plane wave vector of incident light in vacuum, k || = ω/c, and θ is the incident angle. The dispersion relations of the incident light and the SPPs are shown in Fig. 1(a). To overcome the difference in wave vectors, k || can be increased to match k sp by using optical couplers such as a prism in the socalled Kretschmann configuration or by using diffraction gratings [11]. In the latter case, k || can be enhanced to match k sp by adding integer multiples of a grating wave vector g as k sp = k || + mg = k 0 sinθ (1), where g = 2π/a, m is an integer representing the diffraction order, and a is the pitch of the grating. The SPP is observed as a sharp minimum in the m = 0 order reflection when the angle of incidence satisfies Eq. (1). Note that in this equation all three vectors -k || , k sp , and g -must be collinear; the direction of the grating grooves must be perpendicular to the plane of incidence and the illuminating light must be p-polarized ( Fig. 1(b)) This is the configuration that we used in the experiments presented here. If instead the direction of the grating grooves is at an angle ψ≠90° with respect to the plane of incidence, then the above equation should be replaced by a ψ-dependent equation, and, under certain condition, spolarized light can also excite the SPPs [12].
Although combinations of nanostructured noble metals and VO 2 layers have been studied and reported in recent times [13,14], here we present the first correlated experimental investigation and simulations of SPPs on Au gratings patterned onto a VO 2 thin film thus enabling tailoring such structures for greatest benefit. Schematic of the SPP excitation on gratings. The plane of incidence is perpendicular to the direction of grating grooves.
Sample structure
The crystalline VO 2 thin films studied were ~70 nm thick and grown on quartz using reactive based target ion beam deposition [15]. The surface morphology and crystalline structure of these films have been previously characterized and discussed elsewhere [16].
Temperature-dependent infrared (IR) optical transmission studies showed a significant transmission decrease from the room temperature (RT) insulating state to the high temperature (340 K) metallic state, showing the thermally induced IMT of the VO 2 films [15]. A 65 nm Au layer was then evaporated onto the VO 2 film and e-beam lithography was used to pattern a diffraction grating on the Au surface, with the grooves piercing through the Au layer so that optical transmission measurements would be possible. Atomic force microscopy (AFM), shown in Figs. 2(a) and 2(c), was carried out to characterize the morphology of the Au gratings. The gratings were found to have a width of approximately 400 nm and a pitch of 2.5μm. In addition, line scans were performed along the grating vector direction (green lines) to provide more detailed information about the gratings, shown in Figs. 2(b) and 2(d). These scans showed an additional structure at the edges of the rectangular grooves, which previous experience indicates is likely due to residual Au material remaining along the edge of the grooves after the lift-off step of the lithography process, as demonstrated in Fig. 3(a). AFM scans in multiple directions show this residual Au, evidence that the structure is real and not simply due to AFM tip overshoot; a schematic of the sample structure is shown in Fig. 3(b). In our simulations, we considered rectangular grooves both with and without the added Au structures.
Experimental setup
The experimental setup described here and shown in Fig. 4(a) was used to investigate the effect of the thermally-induced IMT of VO 2 on the SPP excitation of the Au gratings. The film was mounted on a pierced thermoelectric cooler (TEC) stage to allow for temperaturedependent measurements and to allow both IR reflectivity and transmission measurements. The sample on the TEC was then mounted on a custom-built goniometer system such that the grating vector was parallel to the plane of incidence. The goniometer stage was computercontrolled to measure the sample's reflectivity as a function of the incident angle with 0.01° resolution. A p-polarized red He-Ne laser (λ = 632 nm) modulated with a 503 Hz optical chopper mounted on one arm of the goniometer illuminated the sample grating over a range of incident angles. A glass window was used to pick off a fraction of the incident laser beam so that the incident power could be monitored throughout the experiment; Si photodetectors and lock-in amplifiers were used to measure both this reference beam and the reflected beam. The sample stage was carefully aligned, bringing the grating surface into coincidence with the stage's axis of rotation so that the incident beam did not move on the sample surface as the incident angle was varied. In addition to the reflectance measurements, we performed IR transmission measurements using an IR He-Ne laser (λ = 1520 nm) at normal incidence. The transmitted intensity was measured using a Ge photodetector with an IR filter. The sample was kept at a constant temperature using the TEC as the red laser light was stepped through a range of angles; once this sweep was done, the temperature was increased and the process was repeated for each new temperature.
Measurements
As discussed above, temperature-dependent IR transmission of the VO2 film was measured through the Au gratings grooves at normal incidence in order to determine the IMT properties of the film, as shown in Fig. 5 (left axis). A significant IR transmission decrease takes place during the IMT, indicating that the deposition and etching of the Au did not significantly affect the thermally-induced IMT properties of the VO2, although the transition temperature was observed to be slightly lower than the well-known value of 340 K. This is mainly due to the strain inherent in thin film crystalline structures as reported elsewhere [17]. Also plotted in Fig. 5 (right axis), the reflectance of the red laser from the Au grating at a fixed incident angle was measured as a function of temperature. The incident angle was scanned over a range of angles around the SPP excitation angle (also referred to as the critical angle θ c , determined to be θ c = 44.75°. It is worth noting that the reflection of the red laser from the Au gratings shows a similar decrease as the IR transmission of the VO 2 during the IMT, which indicates that the SPP excitation on the Au gratings is modulated by the IMT in the VO 2 . To study this change in the SPP properties of the Au gratings under the VO 2 IMT, the reflection of the red laser was measured as a function of incident angle and temperature. The experimental results are shown in Fig. 6(a). The reflection was measured with the VO 2 in both the insulating state (T = 303 K) and the metallic state (T = 331 K), indicated by the two stars in the IR transmission curve in Fig. 5. To confirm the SPP resonance angle θ c , which occurs around 45° based on simulations, we first used the sample stage goniometer to measure reflection through a wide angular spectrum from 20° to 75° at RT. Precision measurements were then taken from 42° to 47°, where, as seen in Fig. 6(a), a reflection minimum evidencing an SPP resonance occurred at θ c = 44.75° due to the SPP excitation on the Au gratings. We have carried out preliminary studies on transmission and reflection geometries in thin Au and Ag films deposited on gratings, confirming that these precision measurements show a surface plasmon resonance absorption at 44.75°. The resonance was seen at 44.75° at both T = 303 K and T = 331 K, although it is clear that the SPP resonance is stronger when the VO 2 is in the metallic state (T = 331 K).
Simulations
The optical response in this thin film/grating structure was simulated using the GSolver grating simulation software [18], which describes the reflection of light from a periodic grating structure by solving Maxwell's equations using a class of algorithms known as Rigorous Coupled Wave (RCW) analysis. In our simulations, the sample's structure was simulated using rectangular shaped Au gratings first without and then with additional Au structures at one end on top of a continuous layer of VO 2 . The optical properties for the sample layers were applied to the simulations using constant optical properties for the Au layer (n = 0.1984, k = 3.0875) [19], while using optical properties for the VO 2 layer in the insulating (n = 2.85, k = 0.341771) and metallic (n = 2.22 k = 0.600000) states based on measurements of similar VO 2 thin films [20]. The results from both simulations are shown in Figs. 7(a) and 7(b) for comparison, although only the simulation with the added Au structure (Fig. 7(b)) is used. showcasing a possible path for further enhancement via tailoring the design and the processing steps to create it.
Conclusion
In conclusion, we have studied the SPP excitation in an Au grating/VO 2 thin film structure for potential photonic/plasmonic applications. We found that deposition and etching of the Au gratings did not degrade the quality of the thermally-induced IMT contrast in the VO 2 thin film. The reflectance measurements showed the SPP excitation in the Au gratings at θ c = 44.75° for both the low temperature insulating and high temperature metallic state of the VO 2 and that the Au SPP absorption resonance is made stronger when the VO 2 is transitioned to the high temperature metallic state, evidence that the SPP resonance is extremely sensitive to changes in the underlying layer. In addition, the optical reflectance simulations of the bi-layer grating structure are in excellent agreement with our experimental measurements. Our results demonstrate a method for tailoring the SPP modulation in the Au by the IMT in the underlying VO 2 film. These findings have broader impact when considering that the IMT in VO 2 can also be optically induced on a sub-picosecond timescale using ultrafast laser pulses [11], pointing to the great potential for an ultrafast SPP modulation scheme for all-optical detection and switching. This holds particular interest for defense applications involving optical limiters. | 3,789.8 | 2015-05-18T00:00:00.000 | [
"Physics",
"Materials Science"
] |
Self-healing Ability of Epoxy Vitrimer Nanocomposites Containing Bio-Based Curing Agents and Carbon Nanotubes for Corrosion Protection
Epoxy is extensively used for anti-corrosion coatings on metallic materials. Conventional epoxy coatings have a permanent crosslink network that is unable to repair itself when cracks and damages occur on the coating layer. This study aims to develop self-healing epoxy vitrimer/carbon nanotube (CNTs) nanocomposite for coating. Two bio-based curing agents viz., cashew nut shell liquid (CNSL) and citric acid (CA) were employed to create covalent adaptable networks. The 0–0.5 wt% CNTs were also incorporated into epoxy/CNSL/CA matrix (V-CNT0-0.5). Based on the results of our study, thermomechanical properties of V-CNT nanocomposites increased with increasing CNTs content. The bond exchange reaction of esterification was thermally activated by near infrared (NIR) light. The V-CNT0.5 showed the highest self-healing efficiency in Shore D hardness of 97.34%. The corrosion resistance of coated steel with V-CNT0 and V-CNT0.5 were observed after immersing the samples in 3.5 wt% NaCl for 7 days. The corrosion rate of coated steel with V-CNT0.5 decreased from 9.53 × 102 MPY to 3.12 × 10–5 MPY whereas an increase in protection efficiency of 99.99% was observed. By taking advantages of the superior self-healing and anti-corrosion properties, V-CNT0.5 could prove to be a desirable organic anti-corrosion coating material.
Introduction
Nowadays, organic coatings are widely used in various industries to prevent corrosion of metal surfaces. The organic coatings have attracted much attention to replace heavy metals for corrosion prevention due to environmental concerns and safety [1][2][3]. Epoxy is one of organic coating materials used in industry because of its effectiveness and ease of application [4,5]. However, defects such as cracks and pores of epoxy coatings during application may result in direct contact between the metal surface and the prevailing aggressive environment leading to corrosion [6]. Recently, smart polymers for coatings with self-healing properties have been developed to prolong service life and reduce maintenance cost of materials [7,8]. Vitrimers are smart thermosets that have cross-linked polymer networks with reversibility through thermally activated bond exchange reactions (BERs) [9][10][11]. The exchange reactions allow vitrimers to rearrange their topology leading to the abilities of reshaping, welding, healing and reprocessing via thermal treatment [12]. Li et al. [9] developed the epoxy vitrimer nanocomposites. The incorporation of ZnAl-layered double metal hydroxide into epoxy can enhance mechanical properties and self-healing efficiency. Jouyandeh et al. [13] studied organic coatings based on epoxy vitrimer nanocomposites containing modified cellulose with halloysite nanotubes (HNT-C). The epoxy containing 0.3 wt% HNT-C showed a good self-healing behavior with an approximately 56% increase in tensile strength compared to neat epoxy.
Bio-based curing agents for epoxy have become a topic of interest because of their biocompatibility and eco-friendliness [14,15]. Cashew nut shell liquid (CNSL) is an inexpensive bio-based curing agent for epoxy that is obtained as the by-product of cashew nut production [16][17][18][19]. Three main components of CNSL are cardanol, cardol and anacardic acid. The two main kinds of crosslinked networks between epoxy and CNSL are ether and ester linkages [18]. Kasemsiri et al. [14] studied the properties of epoxy phenolic novolac (EPN) cured with CNSL, and the shape reconfigurability and self-welding properties of the vitrimer were observed. However, the reconfigured shape of EPN/CNSL composite could not be completely fixed due to insufficient ester linkages. Esterification in the epoxy network can be promoted by adding natural carboxylic acids such as citric acid (CA). A vitrimer from epoxy cured with CA and catalyzed by imidazole had 99% reconfigurability of new shape after heating at 160 °C for 1 h via transesterification [20]. Based on previous studies [14,20], the use of epoxy cured with bio-based curing agents such as CNSL combined with CA presents an attractive smart polymer candidate for organic coatings.
To improve properties and create new functions for specific coating applications and other properties, the incorporation of nanoparticles with and without modification [21][22][23] viz., carbon nanotubes (CNTs) into epoxy matrices has been reported [24,25]. CNTs have mild chemical stability, high electrical conductivity, and good mechanical properties [26]. Deyab and Awadallah [27] observed that adding CNTs into epoxy can reduce corrosion rates by forming uniform coating layers. The resistance obtained from equivalent circuit showed that the charge transfer resistance of the epoxy coating containing CNTs was 11-20 times greater than that of the neat epoxy. The presence of CNTs in epoxy decreased electrolyte diffusion towards the metal substrate. The incorporation of CNTs into polymer matrices not only improves anticorrosion but also enhances the selfhealing ability of polymer coatings. The healing process of most self-healing CNTs incorporated polymers can be activated by heat and near infrared (NIR) light [14,28]. CNTs are photothermal fillers that can convert the absorbed NIR light into heat. Guan et al. [29] studied the self-healing ability of epoxy cured with 4-aminophenyl disulfide (EP-AFD) containing CNTs. They found that self-healing efficiency of EP-AFD containing CNTs increased from 85 to 90% when the temperature was increased from 130 to 160 °C during NIR irradiation for 1 min.
To our knowledge, there have been only few reports on the properties of vitrimer nanocomposites based on the use of bio-based curing agents and CNTs for coating application. Therefore, the aims of this study were to develop novel bio-based vitrimer based epoxy cured with CNSL and CA and incorporated with CNTs. The curing reactions, thermomechanical properties, NIR light-activated self-healing and anticorrosion efficiency were investigated.
Preparation of Vitrimer Nanocomposites
The epoxy/CNSL/CA matrices at various equivalent ratios of AEW of CA to EEW (R ratio) were prepared for preliminary test of self-healing efficiency, as summarized in Table S2. The CA was dissolved in ethanol and then mixed with CNSL for 10 min using a magnetic stirrer. The epoxy was added into the mixture of CA and CNSL and then stirred for another 10 min. The obtained solution was casted onto the steel plate using doctor blade. The sample was cured at 60, 80, 100, and 120 °C for 1 h each followed by 200 °C for 2 h. The suitable epoxy/CNSL/CA was found at R ratio of 0.35 as depicted in Figure S1. This R ratio showed the selfhealing on surface without any scratch and was subsequently used to prepare the vitrimer nanocomposites for coating.
For vitrimer nanocomposite preparation, the CNTs were dispersed in ethanol for 5 min by probe sonication using a UCD-P01-250 W ultrasonic cell disrupter from Biobase Biodustry (Shandong) Co., Ltd. Sonication was done at 50% vibration amplitude with 3-s on and 1-s off pulses. The CNTs/ethanol suspension was mixed with the CA/ CNSL solution and stirred for 5 min. The resulting CNT/ CA/CNSL suspension in ethanol was mixed with epoxy for 15 min by sonication and stirred for another 30 min. Finally, the obtained mixture was casted on the steel plate and cured in an air-circulated oven following the heating 1 3 steps as described above. The thickness of the prepared coatings on the steel plates was 200 ± 35 µm. The vitrimer nanocomposites with CNTs contents of 0, 0.1, 0.3, and 0.5 wt%, were named as V-CNT0, V-CNT0.1, V-CNT0.3, and V-CNT0.5, respectively.
Characterization
Attenuated total reflectance-Fourier transform infrared (ATR-FTIR) spectrometry (Tensor 27, Bruker) was carried out in absorbance mode. The wavelength ranged from 4000 to 600 cm −1 at a resolution of 4 cm −1 over 32 scans. Dynamic mechanical analysis (DMA) was conducted using TA Q800 dynamic mechanical analyzer to study the thermomechanical properties. The size of the nanocomposite films for all DMA tests was 30 mm × 5 mm × 0.7 mm. The multi-frequency-strain tension mode was used at a frequency of 1 Hz and 0.01% strain from -30 to 120 °C with a heating rate of 2 °C min −1 . The stress relaxation mode was applied with 0.1% strain at different temperatures, which were 70, 80, 90, and 100 °C. The cross-sectional morphologies of the samples were observed using a field emission scanning electron microscope (FEI Helios NanoLab G3 CX). To evaluate the self-healing ability, the nanocomposite specimens were scratched using a razor blade with 10 µm edge width and exposed to NIR light (120 W, EVE) with light intensity of 1.6 mW·cm −2 for 1 h. The images of the specimens before and after NIR light exposure were taken using a Nikon Measurescope 20 stereo microscope and analyzed by ImageJ software for the change in the size of the scratch. The self-healing ability was calculated using Eq. 1.
where L i refers to an initial width of the damaged area and L h represents the width after healing for different healing times.
Shore D hardness tests were performed on the vitrimer coating using a handheld durometer according to ASTM D2240 [30]. The self-healing efficiency of the coatings was calculated using Eq. 2.
Electrochemical corrosion tests were performed using a palmsen. 4 electrochemical workstation with three electrodes. Platinum and silver/silver chloride were used as auxiliary electrode and reference electrode, respectively, while the coated sample with exposure area of 0.283 cm 2 functioned as working electrode. The sample was immersed in 3.5 wt% NaCl aqueous solution. Potentiodynamic curves were collected at a scan rate of 0.005 mV.s −1 . X-ray diffraction (XRD) analyses were performed using PANalytical, EMPYREAN at 2θ = 20°-80° with a step of 0.02°, voltage of 45 kV, and current of 40 mA.
Characterization of Chemical Structure
The FTIR spectra of V-CNT containing different CNTs contents are displayed in Fig. 1. The characteristic peaks of epoxy, CNSL and CA are depicted in Fig. 1a. The main (1) Self -healing ability = L h -L i L i × 100% (2) Self -healing efficiency of coating = Hardness value after healing Initial hardness value × 100% Fig. 1 Fourier-transform infrared spectroscopy spectra of a epoxy monomer, CNSL and CA and b vitrimer nanocomposite characteristic peak of epoxy was observed at 915 cm −1 , which was attributed to epoxide rings. The C−H and C−CH 2 stretching vibrations of epoxide rings were also found at 2864 and 1453 cm −1 , respectively [31]. The absorption bands at 1007 and 912 cm −1 were assigned to the phenolic group of CNSL. The C=O stretching of anacardic acid and citric acid appeared at 1650 cm −1 and 1714 cm −1 , respectively [18,32]. Figure 1b shows FTIR spectra of samples after the curing process. All samples display similar FTIR spectra. The peak intensity at 915 cm −1 remarkably decreased, which indicates the opening of the epoxide rings followed by crosslinking (16). The new characteristic peak of epoxy/CNSL occurred at 1727 and 1106 cm −1 which was assigned to C=O stretching of ester and C-O-C stretching of ether formation, respectively [14,33,34] whereas epoxy/ CA showed the strong peak at 1730 which was assigned to ester linkages [35]. For V-CNT, the characteristic peaks of ester and ether were found in the same region at 1734 cm −1 and 1112 cm −1 , respectively. The ester linkage of V-CNT was formed by esterification between epoxy and the carboxyl groups of CA and anacardic acid [33] while ether linkage was created by the etherification between epoxy and hydroxyl groups of cardanol and cardol in CNSL [14,18]. Figure 1b also shows the peak shift from 3444 cm −1 to 3425 cm −1 after adding CNTs at various contents. This shift was due to H-bonding interaction between C=O group of CNTs and the OH groups of epoxy matrix during the ring opening polymerization reaction [36]. The possible reactions of epoxy, CNSL, CA and CNTs are depicted in Scheme S1.
Thermomechanical properties of vitrimer nanocomposites
The relationships between storage modulus and temperature of V-CNT are revealed in Fig. 2. The storage moduli at glassy state (0 °C) of V-CNT0, V-CNT0.1, V-CNT0.3, and V-CNT0.5 vitrimers were 5773, 6658, 7366 and 8602 MPa, respectively. This increase in storage modulus with increasing CNTs content was attributed to the reinforcing effect of the homogeneous dispersion of CNTs nanofiller [14]. The cross-sectional micrographs of all samples were used to confirm the dispersion of CNTs without aggregation as shown in Fig. 3. Saha and Bal [37] suggested that the carboxylic groups on CNTs could form covalent bonds with epoxy which enhanced the interfacial stress transfer and positively improved dispersability of CNTs. Figure 4 depicts the Tg which was determined from the peak loss modulus. The Tg of the V-CNT increased from 27.3 to 30.6 °C when 0-0.5 wt% CNTs were incorporated in the epoxy matrix. The increase in CNTs content improved the glassy storage modulus and Tg of the nanocomposites, which could be explained by the effect of CNTs reinforcement. Uniformly dispersed CNTs in the network obstructed the mobility of the polymer chains, leading to a higher resistance for deformation [38]. Furthermore, the strong covalent bonding between epoxy and COOH groups on CNTs promoted dissipation of energy between CNTs and matrix, enhancing the thermal properties of the epoxy composite [36]. Figure 5 depicts the relationship between the normalized relaxation modulus (G/G 0 ) and time at 70, 80, 90, and 100 °C for the V-CNT specimens with different CNTs contents. A relaxation time for viscoelastic fluid can be defined at the G/G 0 of 0.37, according to Maxwell model [39]. It can be observed that all specimens can reduce the G/G 0 value to 0.37, indicating the completion of BER via esterification to transesterification in the specimens. Figure 6a shows the linear relationship between relaxation time and temperature. The transesterification activation energy can be obtained from the slope of the plot according to Eq. 3 [40]. Moreover, the topology freezing transition temperature (T v ) of the vitrimer can be evaluated using Eq. 3 and Eq. 4. The value of τ* can be determined by Maxwell relation according to Eq. 4 [41,42]. The E´ at rubbery plateau of V-CNT was 0.14 MPa and ln(τ*) value was 16.88.
Stress Relaxation of Vitrimer Nanocomposites
where τ* is relaxation time (s), T is temperature (K), E a is the activation energy (kJ/mol), and R is the universal gas constant.
where η is viscosity of 10 12 Pa·s and G can be calculated from elastic modulus (E´at rubbery plateau/3).
Fig. 2 Storage modulus of vitrimer nanocomposites
Legrand and Soulie -Ziakovic [43] suggested that the stress relaxation of vitrimers containing functionalized fillers depended on two main phenomena, i.e. (i) the bond exchange of linkages between filler and polymer matrix and rearrangement and (ii) rearrangement of the filler in the polymer matrix. Figure 6(b) shows the activation energy and Tv of V-CNT with different CNTs contents. The E a values ranged from 55.7 to 62.4 kJ/mol and increased with CNTs content. The increase of CNTs in the network hindered the polymer chain mobility, hence a higher energy was required for transesterification. This observation was in good agreement with a previous report on polymethacrylate vitrimer nanocomposites [44]. The Tv values of V-CNT0 -V-CNT0.5 were in the range of -51.68 to -37.15 °C. This Tv was also observed to increase with CNTs in the specimens, which was related to the hindrance effect of CNTs on the chain mobility. The delay in transesterification rate due to the increasing CNTs therefore led to the increase in Tv.
NIR Self-Healing Ability of Vitrimer Nanocomposites
Dynamic thermosetting is a new class of polymers that has been developed to resolve the limitations (unrepairable and unrecyclable) of conventional thermosetting polymers. Figure 7 demonstrates the self-healing ability of V-CNT specimens before and after NIR exposure. After NIR exposure for 1 h, the scratch sizes were clearly reduced, particularly for V-CNT0.5. Since the photo-activation resulted in a higher thermal energy, transesterifications in the vitrimer network were initiated to dissociate the crosslinked network. This phenomenon allows the polymer chain diffusion to fill the scratch, resulting in self-healing [45]. Without CNTs, the V-CNT0 showed self-healing because of the π-π interactions between polymer chains [46], but the self-healing ability was only 31%. The addition of CNTs improved self-healing ability of V-CNT up to 93%, three times as high as that of V-CNT0, as shown in Fig. 8. The addition of CNTs in the vitrimer network directly promoted the photothermal conversion effect, leading to the higher thermal energy activating transesterifications [47]. In addition, CNTs promoted heat dissipation in the polymer matrix to improve heat transfer throughout the network, according to their high thermal conductivity [48]. The self-healing results in this work demonstrate the opposite, which is likely due to the functionalization of the CNTs. The CNTs in this work contain COOH groups, which increase the compatibility to bond with epoxide groups to create reversible linkages that participate in the self-healing process [49][50][51]. It should be noted here that an increase in CNTs content in the vitrimer composites could physically retard the chain mobility and decreased the dynamic bond exchange, resulting in poor self-healing ability [44]. The effect of CNTs addition at high content will be evaluated in further study.
Self-Healing Efficiency of Vitrimer Nanocomposite Coating
The Shore D hardness was measured to investigate the ability of the coatings to resist indentation using a durometer scale in range of 0-100, as shown in Table 1. A higher value represents a harder coating material. The hardness values of virgin samples increased with CNTs content and ranged from 27.53 to 30.13, which were comparable to those of bio-based epoxy cured with anhydride and organic acids [52] and epoxy/polycaprolactone copolymer [53]. After healing, the hardness values slightly decreased for all samples. The self-healing efficiency of the coating increased from 94.91% for coated steel with V-CNT0 to 97.34% for coated steel with V-CNT-0.5. It is postulated that adding CNTs into the epoxy matrix was beneficial to the self-healing efficiency. Since V-CNT0.5 exhibited the highest performance of self-healing in coating, the vitrimer with this composition was used to coat steel plates for anti-corrosion tests. Figure 9 illustrates the Tafel plots of bare steel and steel samples coated with V-CNT0 and V-CNT0.5 after immersion Fig. 7 Images of damaged area on a surface of vitrimer nanocomposites before and after exposed under NIR light in 3.5 wt% NaCl for 7 days. The current (I corr ), corrosion potential (E corr ), protection efficiency (P.E.) and corrosion rate (R corr ) from the Tafel plots are summarized in Table 2. The P.E. and R corr values can be evaluated following Eqs. 5 and 6 [54].
Corrosion Properties of vitrimer Nanocomposite Coating
where I o corr and, I c corr (ampere cm −2 ) are the corrosion current of steel plate and coated steel plate with vitrimer nanocomposite, respectively.
where R corr is corrosion rate in milli-inches per year (MPY), E w is equivalent weight of sample (g equivalent −1 ), K is a constant value (1.288 × 10 5 milli-inches per year/ ampere -cm-year), A is the surface area of sample (cm 2 ), d is density of the sample (g cm −3 ) and I corr is the corrosion current (ampere). The two main parameters, I corr and E corr , were used to analyze the anticorrosion performance. The I corr represents a corrosion dynamic rate involving the cathodic reduction of oxygen and anodic dissolution of metal ions whereas E corr indicates the corrosion resistance performance [55]. The negative shift of I corr and positive shift of E corr implies the enhancement in corrosion resistance with low corrosion dynamic rate [56]. The anticorrosion performance was in the following order: coated steel with V-CNT0.5 > coated steel with V-CNT0 > uncoated steel. The higher P.E. and lower R corr of coated steel with V-CNT0.5 were also observed. The hydrophobic character of V-CNT0.5 (see supplementary information in Figure S2) repelled the NaCl solution, resulting in diffusion away from the steel substrate. In addition, the uniform dispersion of CNTs can change and hinder the diffusion pathway of oxygen molecules, chlorine ions and H 2 O molecules in the epoxy matrix [55]. Harb et al. [57] suggested that the negatively charged CNTs in a polymer matrix acted as a repulsive agent for chloride anions resulting in an increase in ionic resistance. Furthermore, the presence of CNTs with high aspect ratio increased the surface area for oxygen cathodic half reaction resulting in a decrease in over-potential for the reduction of oxygen molecule. At a low over-potential, the passive film formation might occur due to an increased metal dissolution at the anode in low corrosion current density [58].
The corrosion products of steel coated with V-CNT0 and V-CNT0.5 after immersion in 3.5 wt% NaCl solution for 7 days were characterized using XRD, as shown in Fig. 10. was observed in steel coated with V-CNT0 whereas this peak was clearly seen in the steel coated with V-CNT0.5. The Fe 3 O 4 peaks were derived from the steel substrate and corrosion products [6,59]. This oxide film acted as a barrier to prevent oxygen from invading into the mediate layer [59]. The proposed corrosion protection mechanism of the steel coated with V-CNT0 and V-CNT0.5 is illustrated in Scheme 1.
Conclusion
In this work, epoxy vitrimer nanocomposites were successfully developed and applied as anticorrosion coatings. The prepared epoxy vitrimer at R = 0.35 provided a uniform and self-healing coating film. Hence, this epoxy vitrimer formula was used to prepare vitrimer nanocomposites. The incorporation of CNTs at 0-0.5 wt% enhanced the storage modulus from 5773 to 8602 MPa and increased the Tg from 27.3 °C to 30.6 °C. The V-CNT0.5 showed the highest selfhealing ability of 93%. The Shore D hardness increased from 27.53 to 30.12 when 0.5 wt% CNTs was incorporated into the epoxy matrix. After healing, V-CNT0.5 showed the highest Shore D hardness of 29.33. The uniform dispersion of CNTs in the epoxy matrix improved the BER capability under NIR light irradiation. The coated steel with V-CNT0.5 showed good anti-corrosion performance, which decreased the corrosion rate from 9.53 × 10 2 MPY to 3.12 × 10 -5 MPY. Based on the obtained results, the V-CNT0.5 has good potential to be used as an organic anti-corrosion coating. | 5,076.4 | 2021-04-23T00:00:00.000 | [
"Materials Science"
] |
HAWC+ Far-Infrared Observations of the Magnetic Field Geometry in M51 and NGC 891
SOFIA HAWC+ polarimetry at $154~\micron$ is reported for the face-on galaxy M51 and the edge-on galaxy NGC 891. For M51, the polarization vectors generally follow the spiral pattern defined by the molecular gas distribution, the far-infrared (FIR) intensity contours, and other tracers of star formation. The fractional polarization is much lower in the FIR-bright central regions than in the outer regions, and we rule out loss of grain alignment and variations in magnetic field strength as causes. When compared with existing synchrotron observations, which sample different regions with different weighting, we find the net position angles are strongly correlated, the fractional polarizations are moderately correlated, but the polarized intensities are uncorrelated. We argue that the low fractional polarization in the central regions must be due to significant numbers of highly turbulent segments across the beam and along lines of sight in the beam in the central 3 kpc of M51. For NGC 891, the FIR polarization vectors within an intensity contour of 1500 $\rm{MJy~sr^{-1}}$ are oriented very close to the plane of the galaxy. The FIR polarimetry is probably sampling the magnetic field geometry in NGC 891 much deeper into the disk than is possible with NIR polarimetry and radio synchrotron measurements. In some locations in NGC 891 the FIR polarization is very low, suggesting we are preferentially viewing the magnetic field mostly along the line of sight, down the length of embedded spiral arms. There is tentative evidence for a vertical field in the polarized emission off the plane of the disk.
measurements. In some locations in NGC 891 the FIR polarization is very low, suggesting we are preferentially viewing the magnetic field mostly along the line of sight, down the length of embedded spiral arms. There is tentative evidence for a vertical field in the polarized emission off the plane of the disk. Subject headings: galaxies: ISM, galaxies: magnetic fields, galaxies: spiral, galaxies: individual (M 51, NGC 891), polarization
INTRODUCTION
A face-on and an edge-on galaxy each provides the observer with a unique advantage that enhances the study of the properties of spiral galaxies in general. For a faceon galaxy, there is far less confusion caused by multiple sources along the line of sight, a minimum column density of gas, dust and cosmic ray electrons, and a clear view of spiral structure. For an edge-on galaxy, the vertical structure of the disk is easily discernible, vertical outflows and super-bubbles can be seen, and the fainter, more diffuse halo is now more accessible. M51 and NGC 891 provide two well studied examples of nearly face-on (M51) and edge-on (NGC 891) galaxies. We are interested in probing the magnetic field geometry in these two systems to compare far-infrared (FIR) observations with optical, near-infrared (NIR) and radio observations, and to search for clues to the mechanism(s) for generating and sustaining magnetic fields in spiral galaxies.
Over the past few decades, astronomers have detected magnetic fields in galaxies at many spatial scales. These studies have been performed using optical, NIR, CO and radio observations (see Kronberg 1994;Zweibel & Heiles 1997;Beck & Gaensler 2004;Beck 2015;Montgomery & Clemens 2014;Jones 2000;Li & Henning 2011, for example). In most nearly face-on spirals, synchrotron observations reveal a spiral pattern to the magnetic field, even in the absence of a clear spiral pattern in the surface brightness (Fletcher 2010;Beck & Gaensler 2004). If magnetic fields are strongly tied to the orbital motion of the gas and stars, differential rotation would quickly wind them up and produce very small pitch angles. The fact that this is clearly not the case is an argument in favor of a decoupling of the magnetic field geometry from the gas flow due to diffusion of the field (Beck & Wielebinski 2013), which is expected in highly conductive ISM environments (e.g. Lazarian et al. 2012).
Radio observations measure the polarization of centimeter (cm) wave synchrotron radiation from relativistic electrons, which is sensitive to the cosmic ray electron density and magnetic field strength (Jones et al. 1974;Beck 2015). Li & Henning (2011) measured the magnetic field geometry in several star forming regions in M33 by observing CO emission lines polarized due to the Goldreich-Kylafis effect (Goldreich & Kylafis 1981), although there is an inherent 90 • ambiguity in the position angle with this technique. Studies of interstellar polarization using the transmission of starlight at optical and NIR wavelengths can reveal the magnetic field geometry as a result of dichroic extinction by dust grains aligned with respect to the magnetic field (e.g., Jones & Whittet 2015) where the asymmetric dust grains are probably aligned by radiative alignment torques (Lazarian & Hoang 2007;Andersson et al. 2015). However, polarimet-Current address: Lincoln Laboratory, Massachusetts Institute of Technology, Lexington, Massachusetts 02421-6426 ric studies at these short wavelengths of diffuse sources such as galaxies can be affected by contamination from highly polarized, scattered starlight. This light originates with stars in the disk and the bulge, that subsequently scatters off dust grains in the interstellar medium (Jones et al. 2012). The optical polarimetry vector map of M51 (Scarrott et al. 1987) was claimed to trace the interstellar polarization in extinction and does indeed follow the spiral pattern. As we will see later in the paper, it also demonstrates a remarkable degree of agreement with our HAWC+ map of the magnetic field geometry. A more recent upper-limit to the polarization measured at NIR wavelengths appeared to rule out dichroic extinction of starlight as the main polarization mechanism ). The scattering cross section of normal interstellar dust declines much faster (∼ λ −4 between 0.55 and 1.65 µm) than its absorption, which goes as ∼ λ −1 (Jones & Whittet 2015). It is therefore possible that the optical polarization measured by Scarrott et al. (1987) is due to scattering, rather than extinction by dust grains aligned with the interstellar magnetic field, since polarimetric studies at these short wavelengths of diffuse sources such as galaxies can be affected by contamination from highly polarized scattered light (Wood & Jones 1997;Seon 2018). Nevertheless, the similarity we will find between the optical data and FIR results is striking, but if they are both indicating the same magnetic field, then the non-detection in the NIR is a mystery. Note that we will find a similar dilemma in comparing the optical and FIR polarimetry of NGC 891.
Observing polarization at FIR wavelengths has some advantages over, and is very complementary to, observations at optical, NIR and radio cm wavelengths for the following reasons. 1) The dust is being detected in polarized thermal emission from elongated grains oriented by the local magnetic field (see the review by Jones & Whittet 2015), not extinction of a background source, as is the case at optical and NIR wavelengths. 2) Scattering is not a contaminant since the wavelength is much larger than the grains, and much higher column densities along the line of sight can be probed. 3) Faraday rotation, which is proportional to λ 2 , must be removed from radio synchrotron observations, and can vary across the beam, is insignificant for our FIR polarimetry (Kraus 1966). 4) The inferred magnetic field geometry probed by FIR polarimetry is weighted by dust column depth and dust grain temperature, not cosmic ray density and magnetic field strength, as is the case for synchrotron emission. In this paper we report observations at 154 µm of both M51 and NGC 891 using HAWC+ on SOFIA (Harper et al. 2018) with a FWHM beam size of 560 and 550 pc respectively. In all cases, we have rotated the FIR polarization vectors by 90 • to indicate the implied magnetic field direction. This rotation is also made for synchrotron emission at radio wavelengths, but is not made for optical and NIR polarimetry where the polarization is caused by extinction (unless contaminated by scattering), not emission, and directly delineates the magnetic field direction. The polarization position angles are not true vectors indicating a single direction, but the term 'vector' has such a long historical use that we will use that term here to describe the position angle and magnitude of a fractional polarization at a location on the sky. The polarization is a true vector in a Q,U or Q/I,U/I diagram, but this translates to a 180 • duplication on the sky.
FAR-INFRARED POLARIMETRIC OBSERVATIONS
The 154 µm HAWC+ observations presented in this paper were acquired as part of SOFIA Guaranteed Time Observation program 70 0609 and Director's Discretionary Time program 76 0003. The HAWC+ imaging and polarimetry -resulting in maps of continuum Stokes I, Q, U -used the standard Nod Match Chop (NMC) observing mode, performed at 4 half-wave plate angles and sets of 4 dither positions. Multiple dither size scales were used in order to even the coverage in the center of the maps.
The M51 data were acquired during two flight series, on SOFIA flights 450, 452, and 454 in November 2017 and on flights 545 and 547 in February 2019. The chop throw for the Nov. 2017 observations was 6.7 arcminutes at a position angle of 105 degrees east of north. For the Feb. 2019 observations, the chop throw was 7.5 arcminutes in the east-west direction. The total elapsed time for the M51 observations was 4.6 hours. The observations with telescope elevation > 58 • at the end of flight 547 were discarded due to vignetting by the observatory door. Otherwise, conditions were nominal.
The NGC 891 data were acquired on flight 450 and on flights 506 and 510 in September 2018. The chop throw for all observations was 5.0 arcminutes at a position angle of 115 degrees east of north. The total elapsed time for the NGC 891 observations was 3.2 hours. Four dither positions with telescope tracking problems during flight 450, which did not successfully run through the data analysis pipeline, were discarded. Otherwise, observing conditions were nominal.
Data Reduction
All HAWC+ imaging and polarimetry were reduced with HAWC+ data reduction pipeline 1.3.0beta3 (April 2018). Following standard pipeline practice, we subtracted an instrumental polarization {q i , u i }, calibrated with separate 'skydip' observations, having a median value of q 2 i + u 2 i of 2.0% over the detector array. The final uncertainties were increased uniformly by ∼ 30−40% based on the χ 2 consistency check described by Santos et al. (2019). We applied map-based deglitching as described by Chuss et al. (2019). Due to smoothing with a kernel approximately half the linear size of the beam, the angular resolution in the maps (based on Gaussian fits) is 14 FWHM at 154 µm. Since both galaxies are well out of the Galactic plane, reference beam contamination is minimized.
The flux densities in the maps were calibrated using observations of Solar System objects, also in NMC mode. Due to the lack of a reliable, calibrated SOFIA facility water vapor monitor at the time of the observations, the version 1.3.0 pipeline uses an estimate of far-IR atmospheric absorption that is dependent on observatory altitude and telescope elevation, but is constant in time. For all observations, we used the default pipeline flux calibration factor, for which we estimate 20% absolute uncertainty. For each galaxy, the maps from the two flight series, analyzed separately, show flux calibration consistency to within 5% . For M51, we adjusted the coordinates of the Feb. 2019 map (with a simple translation in both axes) prior to coaddition with the Nov. 2017 map. The relative alignment of the per-flight-series maps for NGC 891 was within a fraction of a beam without adjustment.
Alignment of the coordinate system for M51 supplied by the pipeline was checked against VLA 3.6cm, 6.2cm, and 20.5cm (Fletcher et al. 2011), Spitzer 8 µm (Smith et al. 2007), and Herschel 160 µm maps (Pilbratt et al. 2010). We did this by matching 6 small, high surface brightness regions between our 154 µm map and the maps at the other wavelengths. We found that the HAWC+ map was consistently 4 ± 1 south relative to the comparison maps. For this reason, we have added an offset of 4 N to our maps of M51. Since we are not making any comparisons of NGC 891 with high resolution maps at other wavelengths, we made no adjustment to the coordinate system for that galaxy.
Polarimetry Analysis
For both galaxies we computed the net polarization in different synthetic aperture sizes, depending on the signal-to-noise (S/N) in the data. The pixel size is 3.4 , or ∼ 1/4 a FWHM beam width. In all cases we used the I, Q and U intensity and error maps to form the polarization vectors. The results reported here were obtained by placing different sized synthetic apertures on the images and computing intensities from the sums of individual pixels and the errors from the sums of the error images in that aperture in quadrature. The errors and intensities in the individual pixels are not statistically independent, since they were created by combining intermediate images in the data processing and then smoothed with a truncated Gaussian with FWHM = 2.04 pixels (6.93 ). We determined the effect of the Gaussian kernel on the computed errors by applying it to maps with random noise. As a result of this exercise, we increased the computed error by factors of 1.69 for the 2 × 2 pixel (half beam), 2.27 for the 4 × 4 pixel (one beam) and 2.56 for the 8 × 8 pixel (two beam) synthetic apertures.
An additional concern is spatially correlated noise such as might be due to incomplete subtraction of atmospheric noise and other effects. A thorough investigation into the possibility of correlated noise in our data is beyond the scope of this paper and will be addressed in a later paper, but we report the results of a simple test for spatially correlated noise carried out by the HAWC+ instrument team (Fabio P. Santos) in 2017 on B and C observations of HL Tau. This analysis showed that an approximate quadrupling of the sky area being combined causes the noise in the data (compared to what would be expected from uncorrelated noise) to increased by a factor of 1.06. Specifically, results were compared for a Gaussian smoothing kernel of 4 FHWM truncated at an 8 diameter and one having 7.8 FHWM with truncation at a 15.6 diameter.
For this reason we have made extra cuts in Stokes I (total intensity) at a S/N of 50:1 for M51 and 30:1 for NGC 891, and increased the error for the largest synthetic aperture of 8 × 8 pixels by a factor of 1.06. We are particularly concerned about the scientifically important inter-arm and halo regions, which have low intensity and need to use the larger synthetic aperture. Q and U are intensities, and small spurious values will adversely influence the net polarization derived for regions of low intensity, but not high intensity. For example, at a contour level of 100 MJy sr −1 between the arms, a 1 MJy sr −1 value for Q that is due to a glitch, a bad pixel, or residual flux from image subtraction will produce a 1% polarization that is not real. In the arm where the intensity is ∼ 800 MJy sr −1 , this would contribute no more than 0.12%. The final computed polarization was then corrected for polarization bias (Wardle & Kronberg 1974;Sparks & Axon 1999) and cuts were made at a fractional polarization for a final S/N of ≥ 3 : 1 and S/N between 2.5:1 and 3:1.
To further guard against systematic errors in the I, Q and U maps at lower intensities, we made a cut using the total intensity error I err map at σ > 0.003 Jy/pixel. This removed the outer regions of the images where there was incomplete overlap in the dithered images. This final cut made little difference in the M51 polarimetry results where less than 10% of the image was removed. But, for NGC 891 about 20% of the image was removed and the northern and southern extremes of the disk in NGC 891 were excluded. Note that the edge-on disk in NGC 891 is at least 10 long, and our HAWC+ image spans only about 5 along the disk, centered on the nucleus. In an upcoming paper we will be working with existing and new HAWC+ data on M51 and will create smoothed images starting with the raw data.
M51
3.1. Introduction M51 is not only a face-on spiral galaxy but also a twoarm, grand design spiral (e.g. Rand et al. 1992), at a distance of 8.5 Mpc (McQuinn et al. 2016). It is clearly interacting with M51b and tails and bridges in the outer regions of the two galaxies are shared, while in the inner regions of M51 the spiral structure appears to be unaffected by the companion. Our observations did not reach far enough from the center of the galaxy to include M51b. Because of its low inclination, M51 shows well defined spiral arms and well separated arm and inter-arm regions. This makes M51 an excellent laboratory to study how the magnetic field geometry changes from arm to inter-arm regions due to the effect of spiral density waves and turbulence. Star formation in M51 is located mostly in the spiral arms and in the central region, but some gas and star formation are also detected in the inter-arm regions (e.g., Koda et al. 2009). Molecular gas is strongly correlated with the optical and infrared spiral arms and shows evidence for spurs in the gas distribution (Schinnerer et al. 2017). The magnetic field geometry M51 was studied at radio wavelengths by Fletcher et al. (2011), who find that the overall geometry revealed in the polarization vectors follows the spiral pattern, but there is depolarization in their larger 15 20.5 cm beam. They find that the 6.2 cm polarized emission is probably strongly affected by sub-beam scale anisotropies in the field geometry. Our HAWC+ observations allow us to study the magnetic field geometry as measured by dust emission instead of cosmic ray electrons, and thereby sample the line of sight differently, and also probe denser components of the ISM than is possible at optical and NIR wavelengths.
Magnetic Field Geometry
The polarization vector map of M51 is shown in Figure 1, where the polarization vectors have been rotated 90 • to show the inferred magnetic field geometry. Fractional polarization values range from a high of 9% to a low of 0.6%, about 3σ above our estimated limiting fractional polarization of 0.2% . Clearly evident in Figure 1 is a strong correlation between the position angles of the FIR polarimetry and the underlying spiral arm pattern seen in the color map. This can be better visualized in Figure 2, where all the polarization vector lengths have been set to unity, and only the position angle (PA) is quantified.
In spiral galaxies, the spiral pattern is often fitted with a logarithmic spiral (e.g. Seigar & James 1998;Davis et al. 2012; ?) Shetty et al. (2007) found a pitch angle of 21.1 • for the bright CO emission in the spiral arms. Hu et al. (2013) suggested 17.1 • and 17.5 • for each of the two arms using SDSS images, and Puerari et al. (2014) determined the pitch angle of 19 • for the arms from 8 µm images. Also, several investigators find that the pitch angles are variable depending on the location (e.g., Howard -Geometry used to de-project the polarization vectors so that their individual pitch angles can be calculated. The inclination with respect to the plane of the sky is 20 • and the major axis (labeled Y) of the ellipse (a circle in projection) is 170 • east of north. We are assuming the magnetic field vectors in the disk of M51 have no vertical component when computing the deprojection. The polarization vector is shown relative to a circle (in projection), which has a pitch angle of zero. & Byrd 1990;Patrikeev et al. 2006;Puerari et al. 2014).
M51 is not perfectly face-on, but rather is tilted to the line of sight. Shetty et al. (2007) used the values for the inclination of 20 • and a position angle for the major axis of 170 • from Tully (1974) in their analysis of the spiral arms seen in CO emission. This geometry is illustrated in Figure 3. Using these same parameters and assuming the intrinsic magnetic field vector has no component perpendicular to the disk, we can de-project our vectors and compute their individual pitch angles using the geometry from Figure CO observations for the model spiral arms, and compute ∆θ. We will call this Model 1. A normalized histogram of ∆θ is shown in Figure 4. We simulated the expected distribution in ∆θ under the assumption that the vectors and the spiral arm pitch angle were the same, and only errors in the FIR polarization data were responsible for the dispersion in the angle difference. We generated simulated data assuming the errors in polarization position angle are Gaussian distributed for each vector and ran a Monte Carlo routine that generated simulated distributions, repeating 1000 times. Since the simulated data are assumed to follow the arm exactly, the peak of the distribution function is set at ∆θ = 0. When the observational data and simulation are compared, the distribution of observed ∆θ is broader than the simulated one with a standard deviation of σ = 23 • compared to σ = 9 • for the simulation. The observational data shows greater departure from a single pitch angle than can be accounted for by errors in the FIR polarimetry vector position angles alone.
Next we modeled the spiral features with two pitch angles, with a change in pitch angle chosen to fit the FIR intensity data by eye. We will call this Model 2. The resulting model spiral arms are shown in Figure 5 where the inner spiral arms at a radial distance of 137 from the center retain the 21.1 • pitch angle based on the CO observations for part of the arms, and then a much tighter pitch angle of 3.9 • is used for the outer arms. Following the same procedure as before, we computed the angle difference between the pitch angles of the polarization vectors and the spiral arms and ran a simulation of these differences, assuming they are intrinsically the same, and only observational errors are responsible for the dispersion in the differences. For this two pitch angle case, the results are plotted in Figure 6. Even with the two pitch angle model, the dispersion in ∆θ is much greater than can be accounted for by the observational errors with nearly identical standard deviations to Model 1. To explore the spiral pattern in our polarimetry vectors in more detail, we separated the magnetic field vectors into arm, inter-arm, and center regions. These regions are classified according to the mask given in Figure 1 of Pineda et al. (2018), where the center region is roughly the inner 3 kpc (in diameter). Note that we are interpolating both models into the inter-arm region (see the blue line in Figure 5). The distribution of ∆θ for -Model 2 geometry using two spiral arm pitch angles (shown in grey) that we used to compute the distribution of ∆θ for this case. The inner part has the pitch angle of 21.1 • , and the outer part a pitch angle of 3.9 • . The green dashed and dotted lines are the inner resonance and the co-rotation radii respectively, described in Tully (1974). The angle φ is used to define a measure of distance along a spiral 'feature'. That is, we assume the basic two pitch angle model (shown in grey) extends between the arms (shown in blue). Figure 4, but using Model 2, which has two pitch angles. Grey and black represent the simulation and observations respectively. In the right hand figure, the observation are subdivided into arm, inter-arm, and central regions (see text), which are indicated by blue, orange, and red color, respectively. The locations of the different regions are defined in Pineda et al. (2018). Although very similar in appearance, the left panel is not identical to Figure 4 these separate regions is shown in the right hand panel of Figure 6. The vectors in the center group have a distinct positive mean offset of 17.4 • , which means a more open spiral pattern compared to the model pitch angle. The inter-arm and arm groups have no clear offset from zero, but the dispersion is still much larger than can be explained by measurement errors alone.
In Figure 5 we define φ, a measure of the angular distance along a spiral feature, increasing from zero clockwise around the galaxy (along the spiral features). We define a spiral feature for each point in the map (see Figure 5), and extrapolate back to the central region to determine the angular distance φ. The pitch angle, av- (top), the deviation of these pitch angles from the spiral arms (middle), and the fractional polarization (bottom) depending on φ, an angular distance along the arm defined in 5, assuming Model 2 with the two pitch angles for the spiral arms. Vertical bars represent the standard deviation of the data within each bin, not an error in measurement. Red, blue, and orange represent the center, arm, and inter-arm group, respectively. eraged over intervals of φ = 40 • , as a function of angular distance along a spiral model line, is illustrated in Figure 7. The top panel is the pitch angle of the FIR polarization vectors. The middle panel plots ∆θ, the difference between Model 2 and observed pitch angles. The lower panel shows the trend in fractional polarization with φ. We find no statistically significant difference in the trends of fractional polarization with φ when comparing the arm and interarm regions. The dispersion for ∆θ in the interarm region is large, and departs from the trend seen in the arm in the last data bin.
Overall our FIR vectors follow the spiral arms in M51, but with fluctuations about the spiral arm direction that are greater than can be explained by measurement errors alone. Stephens et al. (2011) found no correlation between the magnetic field geometry in dense molecular clouds in the Milky Way and Galactic coordinates, and this may add a random component to the net position angles we are measuring in our large 560 pc beam. However, the relative contributions of emission from dense (n H > 100 cm −3 ) and more diffuse regions in M51 to our 154µm flux has not been modeled. The FIR vectors in the central region indicate a more open spiral pattern than seen in the molecular gas (Shetty et al. 2007), opposite to what one would expect if the magnetic fields were wound up with rotation. Although our data in the inter-arm region are relatively sparse, the fractional polarization is statistically similar to the that in the arms, which are delineated by a higher FIR surface brightness. Houde et al. (2013) used the position angle structure function (Kobulnicky et al. 1994;Hildebrand et al. 2009;Houde et al. 2016) to characterize the magnetic turbulence in M51 using the radio polarization data from Fletcher et al. (2011). See section 3.4 for a comparison with the radio data. Analyzing the galaxy as a whole and using a 2D Gaussian characterization of the random component to the magnetic field, they found the turbulent correlation scale length parallel to the mean field was 98 ± 5 pc and perpendicular to the mean field was 53 ± 3 pc. This indicates that the random component has an anisotropy with respect to the spiral pattern, and could be interpreted as due to shocks in the spiral arms (Pineda et al. 2020) compressing anisotropic turbulence in a particular direction (Beck & Wielebinski 2013). We will explore the position angle structure function in a later paper with new SOFIA/HAWC+ observations that will allow us to measure fainter regions due to increased integration time. Houde et al. (2013) also found that the ratio of random to ordered strengths of the magnetic field was tightly constrained to B r /B o = 1.01 ± 0.04, and this ratio is consistent with other work (e.g., Jones et al. 1992;Miville-Deschênes et al. 2008). Assuming the spiral pattern represents the geometry of the ordered component, the addition of a random component may explain our broad distribution of position angles with respect to the spiral structure. Broadening of the distribution of ∆θ by a random component depends on the number of turbulent segments in our beam. If we use the 100 pc turbulent correlation scale determined by Houde et al. (2013), there are > 25 segments in our beam, which will largely 'average out' relative to the ordered component (see Figure 8 in Jones et al. (1992)). A simple broadening of the distribution due to this spatially small random component would not produce the number of position angles differing by 60 − 90 • from the spiral pattern seen in Figure 6. However, all of the vectors that depart by more than 60 • are in the inter-arm region and have S/N only between 2.5:1 and 3:1. The distribution of ∆θ for the arm region (only) is much more similar to the simulation, with a mean value of only 5 • . The dispersion, however, is still a factor of 2 greater. Given the uncertainty in the contribution of a random component to the magnetic field, the FIR vectors in the arms (blue colored bars in Figure 6) could be consistent with the spiral pattern we defined in Figure 5, but without a better determination of the turbulent component, we can not make a better determination. Even with these uncertainties, there remains a clear shift in the mean pitch angle for the center region to a more open (greater pitch angle) pattern than seen in the CO and star formation tracers. More sensitive observations, in particular for the inter-arm region, will be necessary to better define the correlation between the FIR vectors and the spiral pattern.
Using broadband 20 cm observations with the VLA, Mao et al. (2015) studied the rotation measures in M51 in detail. They find that at 20 cm most of the observations are consistent with an external uniform screen (halo) in front of the synchrotron emitting disk. The disk itself produces synchrotron emission that is partially depolarized on scales smaller than 560 pc (which is our beam size), with most of the polarized flux originating in the top layer of the disk, then passing through the halo. The scale length for the rotation measure structure function in the halo is 1 kpc, which is consistent with blowouts and superbubbles from activity in the disk. Our FIR observations are tied to the warm dust in the disk and are Figure 1 were used. The grey solid line is a linear fit to the data with a slope of log I p154 µm = 0.43 log I 154 µm (α = −.57) calculated by an orthogonal distance regression (ODR) weighted by the squares of errors using scipy.odr module. Each dashed line of different color represents the 2.5σ observation limit estimated from the errors in Q and U in each bin size. The grey dash-dotted line in the upper left corner shows the maximum value of Ip corresponding to a maximum fractional polarization of 9% (see text), and has a slope of +1.0 (α = 0). The horizontal dotted line corresponds to an empirical upper boundary seen in the data at Ip = 25 MJy sr −1 and corresponds to α = −1. Finally, the line in the lower right hand corner shows the estimated ±0.2% limit in fractional polarization precision we can achieve with HAWC+ polarimetry in an ideal data set.
largely insensitive to the magnetic field geometry in the halo, but should be sensitive to the formation of superbubbles which have their origin in the disk. We will be exploring the position angle pattern in more detail in a later paper.
Polarization -Intensity relation
In our previous FIR polarimetry of galaxies Lopez-Rodriguez et al. 2019) we found that the fractional polarization declines with intensity and column depth, and can often be characterized by a power law dependency p ∝ I α . This trend is also common in the Milky Way (e.g., Planck Collaboration et al. 2015), in particular in molecular clouds, and is commonly plotted as log(p) vs. log(I) (e.g., Fissel et al. 2016;Jones et al. 2015a;Galametz et al. 2018;Chuss et al. 2019). In our previous papers we have used fractional polarization p, but because of selection effects due to intensity cuts, the minimum measurable fractional polarization and a physical maximum in the fractional polarization are difficult to discern in that type of a plot. Instead, here we adopt plotting the polarized intensity I p as a function of intensity or column depth. For comparison, a slope of α = −0.5 in log(p) vs. log(I) (or column depth) is equivalent to a slope of +0.5 in log(I p ) vs. log(I). This can easily be seen through the relation I p = pI.
For M51, this comparison is shown in Figure 8. The column density was computed assuming a constant temperature for the dust, and is therefore a simple multiplicative factor of the intensity. We used an emissivity modified blackbody function assuming a temperature of 25K (Benford & Staguhn 2008). The dispersion in de-rived temperature found using Herschel data was only ±1.0K, confirming that variation in temperature across M51 will not affect our results. We define an emissivity, , which is proportional to ν β using a dust emissivity index, β, of 1.5 from Boselli et al. (2012). We made use of the relation of the hydrogen column density, N(H + H 2 ) = /(kµm H ), with the dust mass absorption coefficient, k, of 0.1 cm 2 g −1 at 250 µm (Hildebrand 1983), and the mean molecular weight per hydrogen atom, µ of 2.8 (Sadavoy et al. 2013). The maximum expected fractional polarization of 9% at ∼ 150 µm is taken from Hildebrand et al. (1995) and is within the range of dust models computed by Guillet et al. (2018) that were based on Planck observations. This upper limit nicely delineates the boundary seen in the maximum I p measured at low column depths in M51.
Note that the lowest polarized intensities are associated with the larger 27.2 × 27.2 aperture (labeled twobeam), and averaging over this aperture could artificially reduce the computed polarization if there is significant variation in position angle of the ordered component (not the random component) to the field within the aperture. However, even a 45 • variation in position angle for the ordered component across the aperture would only reduce the net polarization by 1/ √ 2, yet the mean for the two-beam I p is at least a factor of 3 lower than for the half-beam data. Also, the large aperture results are concentrated well away from the nucleus where the spatial variation in position angle is less. The primary cause of the vertical separation between the different beam sizes in Figure 8 is S/N, rather than beam averaging. A simple linear fit (in log space) to all of the data in Figure 8 has a slope less than +0.5. This translates to a slope more negative than α = −0.5 in a log(p) vs. log(I) plot. Note that selection effects such as our minimum detectable polarized intensity are easy to delineate in Figure 8, as shown by the horizontal lines. Due to concerns about the effect the minimum detectable fractional polarization on the data points in the lower right of Figure 8, we will concentrate on examining the upper envelope of the data rather than the best-fit slope.
The upper limit in Figure 8 has a slope of +1 (p = constant) up until N(H + H 2 ) ∼ 3.5 × 10 20 cm −2 . The slope then changes and becomes flat (I p = constant), and I p = 25 MJy sr −1 at greater column depth. This flat slope corresponds to a slope of α = −1, as discussed above. For M51, the change in slope for the upper limit in polarized intensity occurs at approximately 1/3 the value of N(H + H 2 ) ∼ 10 21 cm −2 found by Planck for polarization in the Milky Way (see Figure 19 in Planck Collaboration et al. (2015)). As mentioned above, a strong decline in fractional polarization with column density was also found for FIR polarimetry of M82, NGC 253 ) and NGC 1068. Note that NGC 1068 has a powerful AGN which could create a more complex magnetic field, but most of the FIR polarimetry samples only the much larger, surrounding disk. Lopez-Rodriguez et al. (2019) suggested three possible explanations for the decline in fractional polarization with column depth, assuming the emission is optically thin. Polarization may be reduced if there are segments along the line of sight where 1) the grains are not aligned with the magnetic field, 2) the polarization is canceled because of crossed or other variations of the magnetic field on large scales, or 3) there are sections along the line of sight that contain turbulence on much smaller scale lengths than in lower column density lines of sight, contributing total intensity, but little polarized intensity. Lopez-Rodriguez et al. (2019) considered the contribution of regions that are sufficiently dense that their higher extinction may prevent the radiation necessary for grain alignment from penetrating. These regions make a very small a contribution to the FIR flux in the HAWC+ beam, simply because they are small in angular size and very cold. Although these dense cores probably experience a loss of grain alignment, they cannot have any effect on our observations of external galaxies. An additional explanation is the loss of the larger aligned grains due to Radiative Torque Disruption (Hoang 2019) in very strong radiation fields, although any connection of this process with higher column depth is not clear.
The magnetic field in the ISM is often modeled using a combination of ordered and turbulent components (e.g., Planck Collaboration et al. 2016;Miville-Deschênes et al. 2008;Jones et al. 1992). The trend of fractional polarization with column depth (Hildebrand et al. 2009;Houde et al. 2016;Jones et al. 2015a;Fissel et al. 2016;Planck Collaboration et al. 2016, 2018Jones 2015b) provides an indirect measurement of the effect of the turbulent component. For maximally aligned dust grains along a line of sight with a constant magnetic field direction, the fractional polarization in emission will be constant with optical depth τ in the optically thin regime. This case would correspond to a line in Figure 8 with a slope of +1.0 (α = 0). If there is a region along the line of sight with some level of variations in the magnetic field geometry, this will result in a reduced fractional polarization. Using a simple toy model, Jones (1989) and Jones et al. (1992) showed that if the magnetic field direction varies completely randomly along the line of sight with a single scale length in optical depth τ (not physical length), then p ∝ τ −0.5 (or, I p ∝ τ +0.5 ). (See Planck Collaboration et al. (2016, 2018 for a very similar model). In real sources, more negative slopes of α = -1/2 to -1 are found in many instances ranging from cold cloud cores to larger molecular cloud structures to whole galaxies (e.g., Galametz et al. 2018;Fissel et al. 2016;Chuss et al. 2019;Lopez-Rodriguez et al. 2019). In more recent work employing MHD simulations, King et al. (2018) and Seifried et al. (2019) find that the ordered and random components are more complicated than modeled by Jones et al. (1992). While Jones et al. (2015a) argued that a slope of α = −1 indicated complete loss of grain alignment due solely to loss of radiation that aligns grains by radiative torques (Lazarian & Hoang 2007;Andersson et al. 2015), King et al. (2019) find that including a dependency on local density for grain alignment efficiency can help explain these trends seen in large molecular clouds.
In our large (560 pc FWHM) beam, we are averaging over many molecular clouds and associated regions of massive star formation. This complicates any effort to understand the flat slope for the upper limit in Figure 8 in terms of observations and modeling for individual molecular clouds in the Milky Way. Note that the upper limit in Figure 8 at larger column depths is dominated by the lower polarization in the central 3 kpc (diame-ter) region (see Figure 7). One possibility is that the field in this region has a strong component perpendicular to the plane (along our line of sight), reducing the fractional polarization. This is unlikely, given the planer field geometry seen in the central regions of edge-on spirals such as NGC 891 (this paper; Jones 1997; Montgomery & Clemens 2014), NGC 4565 (Jones 1997) and the Milky Way (e.g., Planck Collaboration et al. 2015). Starburst galaxies such as M82 Jones 2000) and NGC 4631 (Krause 2009) can show a vertical field geometry in the center, but there is no indication of a massive central starburst in M51 (Pineda et al. 2018). A more likely explanation is that lines of sight through higher column density paths have segments with high turbulence on smaller scale lengths ( 560 pc) than other lower density lines of sight. In this scenario, there are segments along the line of sight that add total intensity, but add correspondingly very little polarized intensity due to turbulence in the field on scales significantly smaller than our beam (see Figure 2 in Jones et al. (1992)).
The model in Jones et al. (1992) assumes that the optical depth scale at which magnetic field is entangled is the same through the entire volume. This may not always be true. First of all, the injection scale of the turbulence depends on the source of turbulent motions. The motions arising from large scale driving forces, whether from supernovae or magnetorotational instabilities, may have a characteristic scale comparable with the scale height of the galactic disk. The local injection of turbulence arising from local instabilities or localized energy injection sources, whatever they are, can have significantly smaller scales. These significantly smaller scales form the random component that would decrease the fractional polarization compared to the simple model.
We also point out another important effect that affects the polarization. Even if the turbulence injection scale stays the same, the scale at which the magnetic field experiences significant changes in geometry may vary due to variations in the turbulence injection velocity. To understand this, one should recall the properties of MHD turbulence (e.g., Beresnyak & Lazarian 2019). If the injection velocity V L is larger than the Alfven velocity V A , the turbulence is superAlfvenic. Magnetic forces at the injection scales are too weak to affect the motion of at large scales and at such scales the turbulence follows the usual Kolmogorov isotropic cascade with hydrodynamic motions freely moving and bending magnetic fields around. However at the scale l A = LM −3 A , where L is the turbulence injection scale and M A = V L /V A , the turbulence transfers to the MHD regime with the magnetic field becoming dynamically important (Lazarian 2006). The scale l A is the scale of the entanglement of magnetic field. This scale determines the random walk effects on the polarization in the Jones et al. (1992) model. Evidently, l A varies with the media magnetization and the injection velocity. These parameters change through the galaxy and this can affect the observed fractional polarization at high column depths. 1 To explore the nature 1 In the presence of turbulent dynamo one might expect that I A eventually reaches L. However, the non-linear turbulent dynamo is rather inefficient (Xu & Lazarian 2016) and therefore the temporal variations in the energy injection and in Alfven speed are expected to induce significant variations of l A . of the turbulent component further, we next compare the radio synchrotron polarimetry with our FIR polarimetry.
Radio Comparison
The magnetic field geometry of M51 seen in synchrotron polarimetry has been also been extensively studied (Beck et al. 1987;Fletcher et al. 2011). We can compare the FIR emission with the synchrotron radiation at 20.5 cm and 6.2 cm using the data from Fletcher et al. (2011), which we obtained from ATLAS OF GALAX-IES at Max Planck Institute for Radio Astronomy 2 . We rotated the 6.2 cm radio vector position angles by 90 • to obtain the inferred magnetic field direction and made no correction for Faraday rotation (Fletcher et al. (2011) found no statistically significant difference in fractional polarization between 3.6 cm and 6.2 cm wavelengths). The beam sizes at 20.5 cm and 6.2 cm are 15 and 8 (Fletcher et al. 2011), while our beam size at 154 µm is 14 . First, in Figure 9, we compare the total intensity at 154 µm and at 20.5 cm, which has a similar beam size to that at 154 µm. We have convolved the 154 µm beam to the slightly larger beam at 20.5 cm assuming a Gaussian form for the beam shape. To be conservative in our comparison, we use only regions where all the pixels in the 154 µm image have I/I err > 5. In Figure 9 we show the color coded intensity ratio on a logarithmic scale, log(I 154 µm /I 20.5cm ) along with the intensity contours at 154 µm and 20.5 cm.
Overall, the synchrotron emission and the FIR emission closely follow the grand design spiral pattern seen at other wavelengths. The arms are brighter than the interarm region at both wavelengths. However, the 154 µm emission shows greater contrast between the arm and inter-arm regions compared to the 20.5 cm emission, in many locations by up to a factor of 3 greater contrast. This contrast ratio is highest in the arm to the southeast of the center, and in the arms near (but not directly at) the center of the galaxy. Basu et al. (2012) compared Spitzer 70 µm with 20 and 90 cm radio fluxes for four galaxies and found a greater FIR/radio flux ratio in the arms compared to the inter-arm region using 90 cm radio fluxes, but not for 20 cm fluxes. Based on our 154 µm fluxes and the 20.5 cm data of M51, the FIR and radio measurements are not sampling volumes along the line of sight in the same way.
To first order, the dependence of synchrotron emission on cosmic ray electron density and magnetic field strength is I syn ∝ n ce B 2 (e.g., Jones et al. 1974), where I syn is the synchrotron intensity and n ce is the cosmic ray electron density. Crutcher (2012) finds that the line of sight component (only) of the magnetic field strength (typically 2 − 10 µG) in the diffuse ISM of the Milky Way shows no clear trend with hydrogen density up to n H ∼ 300 cm −3 , a density typical for photo dissociation regions and the outer edges of molecular clouds (Hollenbach & Tielens 1999). At even higher densities the field strength increases with density as B ∝ n k H with the exponent k between 2/3 and 1/2 (e.g. Tritsis et al. 2015;Jiang et al. 2020), but these regions occupy a small fraction of the total volume of the ISM (Hollenbach & Tielens 1999). We interpret our results as due to the synchrotron emission in M51 arising mostly in the more diffuse ISM, with denser regions contributing a smaller fraction. Assuming equipartition between the cosmic ray energy den- Fig. 11.-Plot of the 154 µm position angle against the 6.2 cm position angle. 180 • has been added to some position angles to account for the ambiguity at 0 • and 180 • . The Pearson correlation coefficient for each region is higher than 0.75 and the p-values are smaller than 10 −4 . The ODR best fit line weighted by the squares of errors to all the data has a slope of 0.85 ±0.12 at the 1 − σ confidence interval. The contours show the probability density of 0.3, 0.6, and 0.9 estimated by Gaussian kernel density estimation (KDE) using scipy.stats.gaussian kde module. KDE is a way to estimate the probability density function by putting a kernel on each data point, and we used Scott's Rule to determine the width of a Gaussian kernel. The normalization factor was 9% at 154 µm and 70% at 6.2 cm (see text). The symbols and contours are the same as in Figure 11. The Pearson correlation coefficients and p-values for the arm, inter-arm, and center are [0.38, 0.02], [-0.06, 0.82], and [0.68, 10 −5 ] respectively. The correlation coefficient for the entire data set is 0.61 with a p value of 10 −9 . The slope of the best fit line to all the data is 0.87 ± 0.22. sity and the magnetic field energy density, Fletcher et al. (2011) find a moderately uniform magnetic field strength of 20 − 25 µG in the arm and 15 − 20 µG in the inter-arm regions of M51, suggesting the synchrotron emission is more dependent on n ce than magnetic field strength in those regions. In the denser star forming regions located in the spiral arms, the ratio of FIR to radio intensity must be dominated by emission from warm dust in a volume that does not contribute as much proportionally to the total synchrotron emission as it does to the FIR emission. Note that the very center of M51 has a synchrotron emission peak (Querejeta et al. 2016) due to a Seyfert 2 nucleus (Ho et al. 1997) emitting a relatively low luminosity of L bol ∼ 10 44 erg s −1 (Woo & Urry 2002), but the FIR emission peaks outside this region in the inner spiral arms (see Figure 5), and the AGN contributes very little to the FIR flux.
For comparison of the radio and FIR polarization, we used the observations at 6.2 cm instead of 20.5 cm because depolarization in the beam by differential Faraday rotation is less (Fletcher et al. 2011). We first convolved the 6.2 cm I, Q and U maps to a 14 beam. We used the rms fluctuations in the convolved Q and U maps well off the galaxy to estimate the error in Q and U. Assuming these errors, the fractional polarization could then be computed and debiased in the same manner as our FIR polarimetry (p debiased /p err > 3), except no cut was made in the synchrotron total intensity. In Figure 10 we plot the resulting 6.2 cm radio and FIR polarization vectors overlayed on a map indicating radio intensity. The polarization vectors at both wavelengths clearly delineate the grand design spiral. There is good agreement in po-sition angle at most locations where there is significant overlap, with one exception. At 13 h 30 m 02 s +47 • 12 30 the 6.2 cm vectors angle away from the arm along the bridge of emission connecting to M51b, but the FIR vectors continue to follow the spiral pattern.
The polarization position angles are compared quantitatively in Figure 11, and show a strong overall correlation between the radio and FIR polarization vectors. Even though the emission mechanisms are completely different, and the ISM in the respective beams is being sampled differently, we find that the inferred magnetic field geometry is essentially the same in a global sense. In other words, the FIR polarization position angle weighted by dust emission (at varying temperatures) integrated along and across the line of sight is very similar to the synchrotron position angle weighted by cosmic ray density and field strength (squared), integrated along the same paths in most locations.
Our goal in this section is to investigate whether the synchrotron observations can shed light on the underlying cause of the strong decline in fractional polarization with intensity found at FIR wavelengths. For example, consider the hypothesis that there are segments across the beam and along a line of sight associated with dense gas and dust that have field geometries highly disordered in our beam relative to the larger scale field, adding significant FIR total intensity but very little polarized intensity. In lower column depth lines of sight, these segments (perhaps giant molecular clouds) may be absent or relatively rare, making proportionally less of a contribution to the total FIR intensity, and have less effect on the fractional polarization. Since the synchrotron polarimetry is sampling the same line of sight differently, these segments may contribute differently to the polarized synchrotron emission.
We compare the polarized intensity between the FIR and the radio in Figure 12 and the fractional polarization in Figure 13. Although this may seem redundant, there are important differences between the polarized intensity and the fractional polarization. In the diffuse ISM there is no clear dependence of dust grain alignment on magnetic field strength (Planck Collaboration et al. 2015;Jones 1989Jones , 2015b. Thus, in the FIR neither polarized intensity nor fractional polarization are dependent on magnetic field strength, but they are strongly dependent on the magnetic field geometry (Planck Collaboration et al. 2018Collaboration et al. , 2016Jones et al. 1992). For synchrotron emission, the polarized intensity is dependent on magnetic field strength and the magnetic field geometry, but the fractional polarization is dependent only on the field geometry, as is the case in the FIR. Thus, we should expect no correlation between polarized intensity at the two wavelengths, but there should be a correlation between their fractional polarization if they are indeed sampling the same net magnetic field geometry.
In Figure 12, there is no correlation seen between the polarized intensity at FIR and 6.2 cm wavelengths for the higher surface brightness central region (red contours), the arm region (blue contours), or the inter-arm region (orange contours). For fractional polarization ( Figure 13), we have normalized both the FIR and 6.2 cm polarization with respect to their maximum expected values. We used p max = 70% at 6.2 cm based on computational results in Jones & Odell (1977). There is a modest cor-relation for the entire data set, with the greatest correlation in the center region. Note again that the central region has very weak fractional polarization at both wavelengths.
For the arms (see Figure 7), we do not see a significant difference in fractional polarization for our FIR observations when compared to the inter-arm region. At radio wavelengths, Fletcher et al. (2011) found that the interarm region has a greater fractional polarization than the arms (see their Table 2), which they attribute to a more ordered field in the inter-arm region. This difference between FIR and radio observations suggests variations in the magnetic field geometry are similar between the arm and inter-arm regions as sampled by FIR polarimetry, but that the greater column depth in the arms may have caused enough Faraday depolarization across the beam to further reduce the fractional polarization at 6.2 cm. Finally, the high surface brightness central region shows very weak fractional polarization at both wavelengths. Here the radio and FIR beams must sample a more complex magnetic field geometry with highly turbulent segments across the beam and along individual lines of sight within the beam. This more complex magnetic field geometry reduces the net fractional polarization at both FIR and radio wavelengths with, perhaps, added Faraday depolarization in the beam at 6.2 cm. Polarized emission in this region is sampled differently at the two wavelength regimes, hence producing uncorrelated polarized intensities. Yet the net position angles strongly agree, the fractional polarizations are moderately correlated, and both techniques yield the same net magnetic field geometry in the beam. We will explore this interpretation more carefully in a later paper.
NGC 891
4.1. Introduction At a distance of 8.4 Mpc (Tonry et al. 2001), NGC 891 presents an interesting case for an edge-on galaxy that is a late type spiral with similar mass and size compared to the Milky Way (Karachentsev et al. 2004). Like the Milky Way, NIR polarimetry of NGC 891 reveals a general pattern of a magnetic field lying mostly in the plane (Jones 1997;Montgomery & Clemens 2014). Radio synchrotron observations are also consistent with this general field geometry, but extend well out of the disk into the halo (Krause 2009;Sukumar & Allen 1991). According to models by Wood & Jones (1997), highly polarized scattered light may be a contaminant affecting the optical and NIR polarization in edge-on systems producing polarization null points at locations along the disk, well away from the nucleus. Montgomery & Clemens (2014) do not find evidence for the predicted null points along the disk, but do find null points at other locations that they associate with an embedded spiral arm along the line of sight. Optical polarimetry (Scarrott & Draper 1996) revealed (unexpected) polarization mostly vertical to the plane, with only a few locations in the NE showing polarization parallel to the disk. The optical polarimetry was attributed to vertical magnetic fields, but Montgomery & Clemens (2014) argued that the optical polarimetry was contaminated by scattered light. Scattering in the halo of light from stars in the disk and the bulge, as modeled by Wood & Jones (1997) and Seon (2018), may be a more likely explanation for the optical polarization. Note that the NIR and FIR polarimetry penetrate much deeper into the disk than is possible at optical wavelengths.
The Planar Field Geometry
Our 154 µm polarimetry of NGC 891 is shown in Figure 14 where the colors and symbols are the same as described for M51. To show the magnetic field geometry more clearly, we set the fractional polarization to a constant value in Figure 15. Along the center of the edge-on disk, the vectors align very close to the plane of the disk everywhere except in the extreme NE. There, a few vectors are perpendicular to the disk, suggesting a vertical magnetic field, which will be discussed below. Clearly evident in both the NIR polarimetry (Jones 1997;Montgomery & Clemens 2014) and the radio synchrotron polarimetry (Krause 2009;Sukumar & Allen 1991) is an ∼ 15 • tilt for many of the polarization vectors relative to the galactic plane to the NE of the nucleus. Figure 8 in Montgomery & Clemens (2014) best illustrates this offset, and it is not seen in the FIR vectors.
The distribution of ∆θ between the position angle of our rotated polarization vectors and the major axis is shown in Figure 16. We used 21 • as the position angle for the major axis of the galaxy (Sofue et al. 1987). In an identical manner to M51, we simulated the expected distribution under the assumption that the polarization vectors intrinsically follow the major-axis of the galaxy and only observation error causes any deviation. In Figure 16 the grey solid line shows the distribution for all the data whereas the solid, light grey bars show the distribution only for regions with intensity higher than 1500 MJy sr −1 , which isolates the bright dust lane (see Figure 14). When constrained to the bright dust lane, the simulated distribution and the observed distribution are very similar, with a formal p-value for this comparison is 0.97.
Although more penetrating than optical polarimetry, NIR polarimetry at 1.65 µm still experiences significant interstellar extinction in dusty, edge-on systems (e.g., Clemens et al. 2012;Jones 1989). In a beam containing numerous individual stars mixed in with dust, the NIR fractional polarization in extinction will saturate at A V ∼ 13, or A H ∼ 2.5, (Fig. 4, Jones 1997). At 154 µm, the disk is essentially optically thin (τ ∼ 0.05 for A V = 100, Jones et al. (2015a)), thus the FIR polarimetry penetrates through the entire edge-on disk. One interpretation of our FIR polarimetry is that the NIR is sampling the magnetic field geometry on the near side of the disk, where the net field geometry shows a tilt in many locations, perhaps due to a warp in the disk (Oosterloo et al. 2007). The FIR polarimetry is sampling the magnetic field geometry much deeper into the disk, where the net field geometry is very close to the plane. The radio synchrotron polarimetry at 3.6 cm from Krause (2009) used a much larger beam of 84 , and could be influenced by strong Faraday depolarization in the small portion of their beam that contains the disk, which has a much greater column depth than is the case for the face-on M51. Their net position angles may be sensitive only to the field geometry in the rest of the beam, also possibly influenced by the warp. Whatever the explanation, the FIR polarimetry along the disk within 2 of the nucleus clearly indicates that the magnetic field direction deep inside NGC 891 lies very close to the galactic plane.
There are two regions of enhanced intensity in the disk about 1 on either side of the nucleus, designated by colored outlines in Figure 14. These locations also correspond to intensity enhancements seen in a radio map of the galaxy made by combining LOFAR and VLA observations (Mulcahy et al. 2018), and in PACS 70 µm observations as well (Bocchio et al. 2016). Those studies attribute such enhancements to the presence of spiral arms and the enhanced star formation associated with them, but do not present a model of the emission from the disk. These features are 3 − 4 kpc from the center, not untypical for spiral arms. For example, rotate M51 about a N-S axis to create an edge-on spiral, and there would be enhancements in FIR emission on either side from the center at this distance. The polarization is very low in the southern region, at the limits of our detection. The polarization is also quite low in the northern bright spot. As with M51 and discussed below for NGC 891, the fractional polarization is anti-correlated with intensity, so this may not be unexpected, but the polarization in the southern spot in particular is exceptionally low. Montgomery & Clemens (2014) also found regions along the disk where the NIR polarimetry was very low. They suggested the observer was looking down along a spiral arm, where the magnetic field is largely along (parallel to) the line of sight, which results in much lower polarization (e.g., Jones & Whittet 2015). This could be the explanation for the very low polarization in our two bright spots, and could also explain the origin of the enhancement in intensity, since a line of sight down a spiral arm will pass through more star forming regions. However, the regions of low polarization seen at NIR wavelengths and FIR wavelengths are not coincident, rather the NIR null points are located further out from the center of the galaxy. Given the greater penetrating power of FIR observations, it is possible we are viewing more deeply embedded spiral features than is accessible by NIR polarimetry, which is more sensitive to the front side of the disk. Five of these vectors are consistent with a vertical magnetic field geometry, in strong contrast to the disk. At optical wavelengths, Howk & Savage (1997) imaged vertical fingers of dust that stretch up to 1.5 kpc off the plane, also suggestive of a vertical field extending into the halo. Optical polarimetry of the NE portion of the disk (Scarrott & Draper 1996) has a few vectors parallel to the plane, but the majority are perpendicular to the plane. Although the optical polarimetry was interpreted as evidence for vertical magnetic fields by Scarrott & Draper (1996), the NIR polarimetry from Montgomery & Clemens (2014) and modeling by Wood & Jones (1997) and Seon (2018) indicate that scattering of light orig-inating from the central region can be a major effect. Without significant dust to shine through (causing interstellar extinction), it is difficult to produce measurable interstellar polarization in extinction (Jones & Whittet 2015).
The optical polarization vectors in Scarrott & Draper (1996) are typically 1-2 % in magnitude ∼ 20 off the plane using a 12 beam. Based on our 154 µm contours, this corresponds to about 400 MJy sr −1 , or A V ∼ 0.4. The historically used empirical maximum for interstellar polarization in extinction at V is p(%) = 3A V (Serkowski et al. 1975), but recent work shows this can be as high as p(%) = 5A V for low density lines of sight out of the Galactic Plane (Panopoulou et al. 2019). For an optimum geometry of a screen of dust with a uniform magnetic field geometry entirely in front of the stars in the halo, a maximum fractional polarization of ∼ 2% would be expected. For a mix of dust and stars along the line of sight and turbulence in the magnetic field, the expected fractional polarization would be even less. Although Howk & Savage (1997) estimated A V ∼ 1 within some of the vertical filaments, which are only 2−3 wide, considerable unpolarized starlight emerging between the filaments would be contributing as well. At optical wavelengths it is not clear there is enough extraplanar dust to shine through to cause significant polarization in extinction ∼ 20 off the disk, but plenty of dust to scatter light (a mean τ sc ∼ 0.3 at V) from stars in the disk and bulge. As with M51, the striking similarity between the optical polarimetry vectors and our FIR vectors can not be denied, and remains a mystery when the non-detection at NIR wavelengths is considered.
Polarimetry at FIR wavelengths is measuring the emission from warm dust, and generally the fractional polarization is observed to be highest at low FIR optical depths Planck Collaboration et al. 2015;Fissel et al. 2016), but there must be enough warm dust in the beam to produce a measurable signal. For our observations of NGC 891, a vertical scale height of 1.5 kpc corresponds to 36 , or 2.7 beamwidths for our 154 µm observations. The surface brightness at this vertical distance for most of the disk is ∼ 100 MJy sr −1 (A V ∼ 0.1), which is near the limit of our detectability of statistically significant fractional polarization. At 1.5 beams (20 ) off the plane, the surface brightness ranges from 300 MJy sr −1 to 500 MJy sr −1 , a range in which 5% polarization is easily detectable. Note, if NGC 891 were face-on, this halo dust emission would contribute very little to the total flux in our beam compared to the disk.
We draw the tentative conclusion that the several 154 µm vectors in the halo that are perpendicular to the disk are indicative of a vertical magnetic field geometry in the halo of NGC 891. No evidence for vertical fields was found in radio observations by Krause (2009), but they had a very large 84 beam. Using a 20 beam, Sukumar & Allen (1991) find hints of a vertical field on the eastern side of the southwest extension of the disk, just east of the region outlined in green in Figure 14, where we suggest we are looking down a spiral arm. Mora-Partiarroyo et al. (2019) made radio observations of NGC 4631, an edge-on galaxy with an even more extended halo than NGC 891, using a 7 beam. They find the magnetic field in the halo is characterized by strong vertical components. Examination of the Faraday depth pattern in the halo of NGC 4631 indicated large-scale field reversals in part of the halo, suggesting giant magnetic ropes, oriented perpendicular to the disk, but with alternating field directions. Our FIR polarimetry, which is not affected by Faraday rotation, cannot distinguish field reversals (since the grain alignment is the same), and would reveal only the coherent, vertical geometry, such as we see in our observations in the halo of NGC 891. Brandenburg & Furuya (2020) present numerical results of mean-field dynamo model calculations for NGC 891 as a representative case for edge-on disk systems, but our observations do not have enough vectors for a detailed comparison. Figure 17 plots the polarized intensity against the intensity and column depth for NGC 891. Other than using a temperature of 24 K for the dust (Hughes et al. 2014), the procedure for calculating the column depth from the surface brightness at 154 µm is the same as for M51. NGC 891 shows a clear trend in I p vs. I, with a similar slope to that found for M51, and shows evidence for a horizontal upper limit as well. However, unlike M51, the decrease in polarization in the bulge is not quite as strong, and more of the very low fractional polarization values are located in the disk away from the nucleus. Also unlike M51, the data at lower column depth in ei- -Distribution of ∆θ between the position angle of our polarization vectors and the major-axis the galaxy. A positive value means counter-clockwise rotation from the major-axis. The Grey solid line shows the distribution of all data and the grey shaded region that of the data only in the region with intensity higher than 1500 MJy sr −1 . The black solid line indicates a simulation made under the assumption that the polarization vectors follow the major-axis of the galaxy and only errors in the data contribute to the dispersion. ther the disk or the halo generally lie well below the upper limit of p = 9% in Figure 17, although this may be partially due to smaller number of vectors compared to M51. Presumably, the more complex line-of-sight magnetic field geometry through an edge-on galaxy reduces the net polarization compared to the face-on geometry for M51. Spiral structure seen edge-on can present a range of projected magnetic field directions along a line of sight, crossing nearly perpendicular to some arms, but more down along other arms in our beam.
Polarization -Intensity Relation
The two regions with low polarization delineated in Figure 14 by green and blue outlines are shown in Figure 17 using the same colors. These are the two regions we speculated were lines of sight down a spiral arm, reducing the fractional polarization. There is only one detection in these regions and all the rest of the data points are 3σ upper limits, indicating a low fractional polarization compared to the general trend. Until a model of the spi- ral structure in NGC 891 is developed, we can only identify these two locations as potential indicators of spiral features.
CONCLUSIONS
In this work we report 154 µm polarimetry of the faceon galaxy M51 and the edge-on galaxy NGC 891 using HAWC+ on SOFIA with projected beam sizes of 560 and 550 pc respectively. We have drawn the following conclusions: 1. For M51, the FIR polarization vectors (rotated 90 • to infer the magnetic field direction) generally follow the spiral pattern seen in other tracers. The dispersion in position angle with respect to the spiral features is greater than can be explained by observational errors alone. For the arm region, the position angles may be consistent with the spiral pattern, but uncertainties in the contribution of a random component to the magnetic field prevents us from making a more definitive statement. The central region, however, clearly shows a more open spiral pattern than seen in the CO and dust emission.
2. Even though the FIR (warm dust) and 6.2 cm (synchrotron) emission mechanisms involve completely different physics and sample the line-of-sight differently, their polarization position angles are well correlated. The ordered field in M51 must connect regions dominating the synchrotron polarization and the FIR polarization in a simple way.
3. Both the 6.2 cm synchrotron and FIR emission show very low fractional polarization in the high surface brightness central region in M51. There is a moderate correlation in fractional polarization between the two wavelengths, yet the polarized intensity shows no correlation anywhere in the galaxy. The low polarization is likely caused by an increase in the complexity of the magnetic field and a greater contribution from more turbulent segments in the beam and down lines of sight within the beam. The lack of correlation between polarized intensity at both wavelengths indicates that the magnetic field strength, which influences the polarized intensity at 6.2 cm, but not in the FIR, is not the cause of the low fractional polarization at FIR wavelengths. Lack of grain alignment can also be ruled out. We conclude that along individual lines of sight, different segments must be contributing to the total and polarized intensity in different proportions at the two wavelengths.
4. Within the arms themselves, we find a similar fractional polarization to the inter-arm region in dust emission, unlike the synchrotron emission, which has a lower fractional polarization in the arms relative to the interarm region. This suggests the turbulent component to the magnetic field (as sampled by FIR emission) is similar to that in the inter-arm region, but that the synchrotron emission may be additionally influenced by some Faraday depolarization in the arms.
5. For NGC 891, the FIR vectors within the high surface brightness contours of the edge-on disk are tightly constrained to the plane of the disk. Dispersion in position angle about the plane can be explained by errors in the measurements alone. This result is in contrast to radio and NIR polarimetry which show a clear departure from planar at many locations along the disk. We are probably probing deeper into the disk of NGC 891 than is possible with NIR and synchrotron polarimetry, revealing a very planar magnetic field geometry in the interior of the galaxy.
6. There are two locations along the disk of NGC 891 that show very low polarization and may be locations where the line of sight is along a major spiral arm, resulting in lower fractional polarization. These two locations line up with FIR intensity contours, but do not correspond to nulls in the NIR polarimetry, thought to be due to the same cause. Likely, the NIR is sensitive to spiral features that are closer to the front side of the disk due to extinction obscuring such features deeper into the disk.
7. There is tentative evidence for the presence of vertical fields in the FIR polarimetry of NGC 891 in the halo that is not present at NIR wavelengths and is only hinted at in radio observations. At FIR wavelengths there is dust above and below the disk in emission, but this dust may not be enough to produce polarization in extinction at optical or NIR wavelengths.
These data are the first HAWC+ observations of M51 and NGC 891 in polarimetry mode. The brighter regions within the spiral arms of M51 and the disk of NGC891 are well measured. However, the inter-arm regions in M51 and the halo of NGC 891 are less well measured, and these two regions will require deeper observations to better quantify the arm-inter-arm comparison in M51 and the presence of vertical fields in NGC 891. | 17,542 | 2020-08-18T00:00:00.000 | [
"Physics"
] |
Coulomb Drag Study of Non-Homogeneous Dielectric Medium: Hole-Hole Static Interactions in 2D-GaAs DQW
. The induced (drag) resistivity ( 𝜌𝜌 𝐷𝐷 ) is calculated numerically in low temperature, large interlayer separation and weak interactive regime for 2D hole-hole (h-h) static interactions using the RPA method, with the geometry of non-homogeneous dielectric medium. Exchange-correlations (XC) and mutual interaction effects are considered in low/high density regime for analysing the drag resistivity. It is found that the drag resistivity is found inhanced on using the XC effects and increases on increasing the effective mass. In Fermi-Liquid regime, drag resistivity is directly proportional to r 2 /n 3 at low temperature. Temperature (T), density (n), interlayer separation (d) and dielectric constant ( ϵ 2 ) dependency of drag resistivity is measured and compared to 2D e-e and e-h coupled-layer systems with and without the effect of non-homogeneous dielectric medium.
Introduction
Coulomb drag (CD) is a transport phenomenon that occurs in coupled layer structure, when a current is allow to flow by an active layer and a drag/induced voltage is detected in passive layer, without the tunnelling effect [1][2][3][4][5][6][7][8][9][10][11][12][13]. An insulated wall seperates both the layers electrically. Drag resistivity ( ) is a general phenomena of momentum and energy transfer rate by the interaction of electron-electron (e-e), electron-hole (e-h), hole-hole (h-h), pasmons and phonons etc. With the Fermi-liquid state, the phase space give the dependency of drag resistivity, as ∝ 2 / 3 at low temperature regime.
Consequently, coulomb drag effect was devoted to the numerically and quantitatively results, for measuring the strength of the screening due to induced field [14,15], electron-electron bilayer system of 2D-3D [7,16], two dimensional (2D) electron-electron (e-e) [16][17][18][19][20]. A vast range of curiosity have emerged for studying transport based properties. Mutual coulomb scattering can theoretically be realised cause of the exchange of momentum (ℏ ) and energy (ℏ ) between the coupled layers system. Following the ground breaking experimental work in the AlGaAs/GaAs double quantum wells (DQW) [17,20]. Drag effect became an important part for measuring the many body properties. It had been used to analyze the properties of electrons, holes, phonons, plasmons. Interactions in low density regime [21,22], excitons impact of e-h double layers systems [23][24][25].
Theoretically, a vast and explored field of CD phenomena which suggested several extensions and generalizations in the field of non-homogeneous dielectric medium. The theory of CD effect has extended to multi-layer between two 2D electron and/or hole system, which is an fascinating electronic system of 2D-GaAs DQW [2,3,6,8,[11][12][13][26][27][28][29][30]. For the simplest such structure, CD in bilayer systems is a very interesting phenomenon. We consider two 2D-GaAs distanced by a insulated wall of 2 3 , with the case of a large interlayer separation limit ( ≫ 1), where d is a distance of the wall and is the Fermi wave vector of 2D-sheets. It is a general and well known results for , which present the dependency to temperature and interlayer separation, as 2 −4 in low temperature, large separation and weak interaction limit.
Theoretical Formalism
With considering a double layer systems of 2D-GaAs, which contain holes in both the layers. The numerical results of drag resistivity is measured by using the solution of RPA method in weak interaction (~1), at low temperature ( ≫ ), and large interlayer distance limit ( ≫ 1). The interlayer screening ( 12 ( )) caused of hole-hole (h-h) static interaction is considered in this manuscript. In low concentration regime ( ≫ 1), the RPA method don't able to hold consistency cause of exchange correlations (XC) effects are not considered. The total screening between the electron and/or hole extracted enhanced results because of XC effects. The LFC introduced by the HA and STLS approximation, which consider the XC effect into account.
Interlayer Interaction [ ( )]
To evaluating the screening effects of holes in the double layer systems, a typical toolbox of Dyson equation is taken into consideration within the random phase approximation (RPA) method [2,3,6,8,[11][12][13][26][27][28][29][30]. This finally presents the standard equation of interlayer interaction as, The screening effects is measured by the Eq. (3a) for interactive weak field cause of stationary point charge source is present at active layer and drag the carriers in other layer. Where (q) is the dielectric function. 0 ( ) and 0 ( ) are known as bare intra and interlayer potential, respectively and local form factor (LFF) ( ) are key equations. To evaluating the form factor ( ), [3,8,[11][12][13]32]. The form factors for non finite width [3,8,[11][12][13]32], Where F 22 (d)may be obtained by replacing ϵ 1 ↔ ϵ 3 in Eq. (4a). The method of RPA with including the XC effects is a implications of measuring the required results. The LFC of using the HA and STLS approximation are commonly used approximations for considering the XC effects.
is enhanced on using the LFC, as seen in Eq. (3). The LFC is evaluated by the solution of the static structure factor S(q) by using the fluctuation dissipation theorem [3][4][5]13,33,34].
is found 14.47, 16.18 Ω for e-e and e-h interactions [11,13] for the parameters such as, the carrier concentration ∼ 3 − 20 × 10 10 −2 , temperature is t = 0.5 k, the interlayer separation is d = 30 nm, the dielectric constant of the barrier is 2 = 9, interlayer distance d = 30 nm, effective mass of hole is 0.45 . For these parameter, h-h interactions found enhanced results compare to e-e and e-h interactions, as seen in Fig. (1a, 1b) for the RPA method with and without including the LFC effects.
Articles on in coupled-layer structure measured inhanced results of stationary and weak screening with including the LFC, as seen in Fig. (1a, 1b). It is found consistently better results for h-h interactions than RPA method, because of LFC based on HA and STLS approximation. The RPA model does not account for XC impacts, which are taken into account by LFC. The expressions of LFC impact the effect of interlayer interaction effective and cause of this, is found enhanced [3,[11][12][13]. This is because the coupling between the charge carriers has increased as a result of the increase in coupling, as seen in Fig. (1a, 1b).
Conclusion
In this section, its aim is to demonstrate the improvement of as a result of h-h screening, as well as the impact of LFC, compare to e-e and e-h screening of weakly coupling condition. is measured for density = 1.154-2.979, at low temperature and large interlayer distance in a double layer structure, separated by a thick wall. The measured results of are consistently better compare to the system of a simple double layer systems. | 1,547 | 2022-04-05T00:00:00.000 | [
"Physics"
] |
Planning for communication resources
Abstract : For many human team activities, ranging from military operations through to emergency rescue or large entertainment events, communications resources must be assigned to different teams or team members. These assignments must reflect the capabilities of the available communication devices and avoid conflicting use of communications channels already in use in the local environment. In general, finding and assigning available communication channels for short-term use is a task performed manually by human operators. Operators, using generic tools, such as spreadsheets and database manipulation programs, access government databases to obtain information on frequency usage and then manually attempt to locate suitable unused channels. This process is time intensive, prone to error, and 'mechanistic' in nature. In this paper, we describe the CommPlanner, a new fully implemented system developed to automate this assignment procedure and thereby speed up and make more reliable the process. We describe the algorithms used by the CommPlanner, and the underlying issues that, while not always obvious, must be addressed in the processes of assigning frequency usage.
Introduction
In many human team tasks, assigning open and available communication channels to each agent in the team is critical to the team success. Traditionally, for both civilian and military applications, such channel assignment is performed by hand in a painstaking, laborious process. The task, however, is a mechanistic one that is ripe for automation. In this paper, we describe CommPlanner, a prototype software package for automating the assignment of free communication channels to teams operating in a known geographic area.
As technological progresses, finding free communication channels for a given device becomes increasingly complex. Usage of the available spectrum is becoming increasingly dense as new devices, which typically require less bandwidth than older devices, 'chop up' the available bandwidth. With the advent of spread spectrum devices, which are more immune to jamming but make more complex use of the spectrum, the situation becomes even more convoluted. Secondly, across a geographic area, channels can be reused or overlap, and can vary in power significantly. Typically, there exist government databases that store frequency usage, or allotment, information. Thus, finding and assigning free channels to a team of agents operating in an area is a process of accessing the databases to retrieve the relevant frequency usage, and determining what parts of the spectrum are available and reachable by the communication devices to be used.
Invariably, an expert operator using spreadsheet and database access tools performs this task manually. It is both a time intensive, laborious operation, and a mechanistic one that could conceivably be automated by specialized software. Indeed, in this work we describe the CommPlanner, a prototype software package developed to perform exactly this task. The CommPlanner makes only modest use of state-of-the-art planning technology, indeed the task is performed manually by humans in a mechanistic way that is procedural rather than inventive, however it fulfils a need that is currently unfulfilled by any commercial or Open Source package that we are aware of.
There are many potential advantages to be gained from automating the frequency assignment task. The assignment process, which normally takes on the order of days, can vastly hastened. Furthermore, as it is a laborious task performed by a computer rather than by a human, the potential for errors to creep into the process is significantly reduced. Graphical visualizations of frequency usage, and frequency usage over space, can impart the viewer with an easy understanding how the spectrum is being used. This knowledge can illuminate project decisions and policy decisions for governing bodies.
The potential advantages are more profound than this, however, particularly if the approach is taken to its natural conclusion by automating the entire communications resource allotment problem. To begin with, even very large projects involving hundreds or thousands of channels could be assigned automatically within minutes thereby greatly easing the planning requirements for such projects. For high-risk endeavors in dynamic environments, for example disaster rescue, communications resources can be replanned on the fly within the constraints of the already assigned resources as the controlling constraints evolve over time (e.g. more people are required, certain devices do not work in the conditions etc.). Within a project, or across multiple projects if the data is combined, the usage of frequencies and devices can be monitored and analyzed to aid in determining how devices are best used, and what devices will be required in the future. If centralized communication planners are used in a client/server format, then multiple organizations can conduct operations in the same area without any explicit coordination between the planning bodies of the organizations. Finally, in situations where a particular transmitter needs to be suppressed the request could be automatically generated and sent to the network operators. The work we describe here is addresses the fundamental task of frequency assignment, but promise for more exciting developments This paper is structured in the following way. The following section describes the frequency assignment problem and also reviews some of our early work on graphical visualization techniques. We then discuss the CommPlanner, our prototype server software for assigning frequency bands using government furnished databases. We then discuss the results of the CommPlanner performance, and conclude with a discussion of the future directions we are pursuing to achieve the goal of complete communication planning.
The Problem
In this section, we describe specifically the problem of communication channel assignment. Although many scenarios are possible, we focus on the task of finding free channels meeting device specifications for a team of agents operating in a geographical area.
As mentioned in the introduction section, the ability to graphically visualize represents a very useful step towards understanding the frequency assignment problem, and how the frequency spectrum is used over a geographical area. In our early work, we developed a tool using OpenGL to perform just this task. Figure 1 shows the output of this visualization for some (around 200 transmitters) frequency usage in the Pittsburgh, Pennsylvania area. Essentially the process works by plotting in 3D space the use of frequencies (the z axis) over a geographic area (the x-y axes, respectively). The colors in this diagram are representative of different transmitter types. Note that this tool makes no account of line-of-sight limitations. Rather, it estimates the range of each transmitter from the power of that transmitter. As it runs within OpenGL, the user can 'fly' around the frequency space to examine it. Figure 2 shows an overlay of the transmitter location and range on a 2D map. In this case, color represents the transmitter frequency. Although this example only shows part of the frequency usage, it is readily apparent that channels accumulate in bands and overlap in a haphazard fashion. The frequency assignment task translates to finding open gaps in this 3D frequency-area space within the area of interest and the device capabilities. In short, we are looking for a cube or cylinder of the frequency-area space that is unused. Thus, we must consider what transmitters are in the local vicinity, over what range and with what frequencies they operate. Within the area of interest and device capabilities we must then attempt to find unused portions and assign those channels to the users as required.
More concretely, for each type of device used we must assign a number of useable channels as requested by the user. Each device has a list of properties that determine what parts of the frequency space are relevant along with usage rules for that device in different environments. In particular, we consider devices to have a limited effective transmission range, limited bandwidth, frequency centers, and a range of settable frequencies within the device bandwidth. To assign useable channels for the device, we must first determine what channels are freely available for use in the environment. In short, our program must consult an oracle of known transmitters that impact on the area of interest and then determine what channels are useable for the device in question.
Within the US, and in most countries around the world, permanent transmitters must be registered with the local government authorities. Thus, there are usually commercial and government databases for known transmitters. These databases provide a set of records encoding the transmitter geographic location (usually in latitude and longitude coordinates), its frequency usage, power, owner, and sometimes purpose (e.g. commercial, military, emergency and so on).
Each agent within the team needs to be able to communicate to various networks of agents. Typically, there will be a network for each agent (so that one-to-one communication is possible) along with a hierarchy of larger networks for wider broadcasts. The hierarchy of larger networks will often represent the hierarchical structure physically present within the team (ie. sub-teams) as well as several other requests, including in particular the need for additional wide broadcast channels for emergency announcements.
CommPlanner Implementation
In this section we describe our approach to the communication assignment problem. Specifically, we describe the details of the CommPlanner program that we have developed to perform automatic frequency and call sign assignment.
3,1 Overview
Given the resource management nature of the problem, our approach to the CommPlanner uses a client-server model. The CommPlanner program operates as a server. It manages database information, accepts and executes requests for new communication plans from clients, and sends the results back to the client in a standardized format. Figure 3 shows a block diagram of this approach. With the client-server format, following the usual approach, the CommPlanner communicates to its clients via TCP sockets. Requests, and responses are both sent over the sockets as XML files where tokens in the file delimit the information for the request and the resulting frequency assignments that are returned.
The main loop of the CommPlanner server consists of waiting for clients and then receiving their XML file requests as they connect. In our current implementation, each request is processed independently, although this need not be the case and is a simple extension of our current work to allow multiple requests for the same or overlapping frequency-area space. Upon receiving each request, the CommPlanner requests information on transmitters that operate in the local frequency-area space to the request area and device from its database resources. Additionally, it will log all transactions to log files for documentation and later analysis. Upon receiving the results from the database queries, the CommPlanner will generate the list of available frequencies of suitable bandwidth for each device in question. An allocation process follows this where bands are assigned using some algorithm. We have used both random assignment and bottom up assignment but other alternatives are possible. Upon assigning the bands, a call sign is attached to each band. An XML file encoding the results is then transmitted back to the client. If any errors occur during the process, then an error signal is sent back instead.
In the following paragraphs we describe the details of each portion of the CommPlanner system.
File Formats
There are three XML file formats used for the CommPlanner: request files, result files, and the configuration file. Request files encode a request for frequency usage through a series of rectangular geographic areas with a list of devices, and number of channels per device, to be used in that area. Table 1 shows an example portion of a file.
Here the area is delimited by two latitude and longitude coordinates that create a bounding box (i.e. upper left corner and lower right corner). The list of devices must be known by the server, which obtains this information through its configuration process. In the current formulation, the frequency-area space for each device-area request must be disjoint and nonoverlapping. This is because each device-area request is performed independently of the others. In practice this is the approach followed by human planners.
Table 2. The output file. (M denotes MHz.)
Likewise configuration information is specified via an XML format. The configuration file contains database locations, the file location for the callsign list, default values for unspecified fields such as transmitter power, debugging configuration options, as well as the listing of devices and their parameters.
The device list defines the properties of each assignable device. The current implementation includes the unique device name used to recognize the device, its operating frequency range (device bandwidth), the frequency range for each available channel on the device (channel bandwidth), and the frequency centers the channels operate on (channel centers). Most devices have a number of pre-assigned channels on set frequency divisions rather than being capable of arbitrary frequency within the device frequency range. Thus, an operating frequency for the device is / modulo / c , where f c is the frequency center division. The number of channels for the device can of course be reconstructed as: This current work focuses primarily on narrow spectrum transceivers and does not specifically address the issue of spread spectrum transceivers. In such a situation, the level of interference will vary depending upon the method of spread spectrum communication -direct sequence, frequency hopping, chirp, or time hopping. Future work will address this issue to provide probability of jams, which will depend upon the transmission technique for the device and the known transceivers in the area.
The Assignment Process
For each request a number of assumptions are made. As described above, we assume that different devices-area is disjoint meaning the device operates in a unique portion of the frequency-area space without overlaps. Thus, devices used in separate device-areas are essentially non-conflicting. We also ignore line of sight considerations. Following the usual procedure, we assume each device is capable of operating over the area it is assigned to. In future work, we will remove this assumption and generate the operational range of the device using line-of-sight calculations.
As a consequence of these assumptions, each area and device request can be processed separately without consideration for the whole. Hence, these assumptions, although apparently limiting, greatly simplify the processing steps and in practice are not a great hindrance. Indeed, frequency planning which is done by hand has evolved in such as way to make these assumptions viable. Table 3 shows psuedo code for the assignment process. The key to doing the frequency assignment is to determine what frequencies are open and available in the selected portion of frequency-area space and then using some assignment policy to select frequencies. To obtain this information, we make use of a frequency database (or databases). These databases are available for commercial enterprises as well as for military ones. They essentially contain a multitude of information regarding frequency usage (ie. transmitter/receiver geographic location, power/sensitivity, bandwidth, frequency centers etc.). The task for the CommPlanner is then to query these database(s) to retrieve the records relevant to the device and area in question and then to determine what parts of the spectrum are then free for use. Table 3. The open band generation process.
To access a database the CommPlanner follows the usual Microsoft Windows approach and uses the data access objects (DAO) library. The DAO library provides a simplified interface to access and query a range of different database formats (DBase, ODBC, etc.). Using the Microsoft Foundation Classes (MFC) wrappers that encapsulate the DAO access methods, we can generate an SQL query to retrieve the relevant records and process them accordingly. There is a catch that needs to be accounted for, however. Although the essential content of the query is independent of the database in question, the specifics of the query are not. In particular, different databases, while containing essentially the same information (or some subset thereof), may make use of different tables as well as different table and column names. To account for this, the major components of the SQL query, which remain constant, are stored in the configuration XML file and it is only the query values (the latitude/longitude coordinates etc) that are modified.
Using the DAO MFC class, the database is opened and queried to retrieve the records relevant to the area and frequency of interest. The area is derived directly from the area specification in the request. The frequency of interest is derived from the device bandwidth, which is widened by a default value to account for boundary cases. This default value, which increases the bandwidth at both limits, is also stored in the configuration file.
Upon retrieving the records of interest the core of the assignment algorithm comes into play. The key idea is to identify which frequencies are free and then to make the channel assignments from within these "open bands". To identify the open bands, we first assume that the entire frequency range of the device is available for use. As we iterate on the retrieved records, we remove the portions of the open band(s) that overlap with retrieved record. At the end of this process, we have a list of ranges that are viable.
Concretely, an open band is a frequency range°k = Ulow'Jhighl
Thus, the set of non-overlapping open bands O is
The set of open bands encodes the partitioned range of available frequencies. Graphically, O corresponds to a series of non-overlapping ranges as shown in Figure 4, which are vertical slices of frequency-area space.
Figure 4. Graphic visualization of open bands
Each retrieved record encodes a used frequency range as: ** j = L/ ~" /l J bandwidth* / "*" /2 Jbandwidth! where f and/bandwidth are the frequency center and bandwidth, respectively. In some cases the bandwidth may not be defined. In such situations a nominal default value is used instead, which is obtained from the configuration file. We first expand the range of Fj to account for the device channel bandwidth, thereby saving a later check. Thus: The band assignment itself merely requires creating a new assigned band B k and adding it to the list of assigned bands B. In addition to determining the frequency center of the band, we need to assign it an identifier and also a call sign. The CommPlanner performs call sign assignment randomly using a list of useable call signs. Alternative methods, based on generation rules of some kind, are possible but are not the focus of the current work. Thus each new band is: Once generated it is an easy process to generate the XML output, which is just an enumeration of the set B for each area/device token. If successful, the results are sent back to the client via the TCP socket.
Results and Discussion
The CommPlanner performs its required task as designed as will shortly enter in the field user testing. In its current form it is able to satisfy requests of say a few hundred channels for a city-sized area in a matter of seconds (the actual response time varies depending upon the machine and version of Microsoft Windows). Additionally, the CommPlanner maintains access logs of all its activities for monitoring purposes. Figure 6 shows the debugging output of an example planning process for 150 channels for a UHF device in the greater Pittsburgh area. Note the frequency output here is plotted on a logarithmic scale. It is readily apparent that the technology used here is systematic, and yet, to our knowledge, this is the first type of application of this kind. To date, all communication planning is performed manually, if mechanistically, using simple tools such as spread sheets and manual database accesses. Given the amount of data handled, it is a time consuming and error prone task. In many cases, communication planning is a task that requires specialized training (although it should be noted that this training covers more than the specific task described here). Thus, the entire communication planning process, which is a critical part of many team activities, hinges on the operations and knowledge of a few individuals. Clearly, automating this process is of paramount importance to reduce the chance for errors, speed up the allocation process, and also provide a mechanism for wider distribution.
As mentioned above, the CommPlanner is about to enter user testing and should hopefully be in practical operation in the near future. Future enhancements to the CommPlanner include adding security measures to ensure only valid clients can request frequencies, adding line-of-sight considerations to boost the area based approach, and allowing for multiple requests in overlapping portions of frequency-area space.
Conclusions
In conclusion, this paper has described the CommPlanner program, a tool for automating the frequency assignment process for teams operating in areas with known static frequency usage. To date, this task is performed manually in a tedious and error prone process by operators with specialized training. The process used by these operators is mechanistic, and therefore ripe for automation. We have presented our working prototype solution to this problem, its details and inherent problems and limitations therein. The CommPlanner, which is about to enter user testing, represents a first step towards the ultimate goal of completely automating communication planning for human teams performing mission-like tasks. | 4,855.2 | 2003-05-01T00:00:00.000 | [
"Computer Science"
] |
Seeing Islam as a Social Fact: Hermeneutic Approach to the Quran in Abu Zayd's Thought
This study aims to discuss Nasr Hamid Abu Zayd's thoughts on the Quran as scripture. Using the type of literature study research, this article explains Abu Zayd's thoughts as contained in several of his writings. This study's findings state that Abu Zayd's argument that the Quran as a cultural product and cultural producer (muntaj tsaqafi and muntij tsaqafi) departs from Abu Zayd's approach, which sees Islam as a social fact. There is a big difference between Islam, studied as a social fact, and Islam, seen as a religious doctrine. Islam as a social fact demands objective and positive scientific work. Abu Zayd took a new approach to the Quran by shifting the direction of study from a vertical dimension to a horizontal dimension. The horizontal dimension referred to by Abu Zayd is the canonization in which the teachings of the Quran were spread gradually by the Prophet Muhammad. This article also concludes that Abu Zayd sees it as a text for studying the Quran. It is at this point that Abu Zayd uses a hermeneutic approach. It is at these two points that Abu Zayd said that his study of the Quran is based on the human aspect. What Abu Zayd means by studying the Qur'an, which departs from the human aspect with a horizontal dimension is also what is meant by Islam as a social fact in this article. In the order of writing, the article sequentially discusses the following points. The first part of this article reviews a brief biography of Abu Zayd, along with the social and political conditions that influenced his thinking. The second part explains the Quran as a text. And in the last section describes the Quran as a product as well as a cultural producer.
Introduction
So far, there has been some literature discussing Abu Zayd's thoughts. Most of these works discuss Abu Zayd's thoughts on revelation and al-Quran hermeneutics (e.g., Campanini 2011, pp. 52-62;Kermani 2006;Rahman n.d.). Some scholars discuss the humanistic approach of Abu Zayd's thought (e.g., Sukidi 2009). Some scholars also compare Abu Zayd's thinking with other Islamic thinkers, especially those with an Islamic reformist spirit, such as Fazlur Rahman and Muhammad Arkoun (e.g., Völker 2015). Research also explores pluralism from Abu Zayd's perspective (e.g., Zohouri 2021). In addition, several scientific works support the Egyptian court's decision against Abu Zayd; in other words, this scientific work is here to criticize Abu Zayd's thinking. Traditionalist scholars write most scholarly works on this model (e.g., Bälz 1997;Najjar 2000).
Previous researchers have not looked at Abu Zayd's thinking comprehensively. They did not see the political conditions that influenced Abu Zayd, nor did they see that each of Abu Zayd's thoughts departed from an objective philosophical paradigm. The point is that Abu Zayd is aiming to see religion as a social fact, including his view of the Quran as a cultural product. Instead of trying to see first the paradigm used by Abu Zayd in his thinking, these scholars who criticized Abu Zayd immediately justified Abu Zayd. They let go of specific factors that influenced Abu Zayd. For example, the research by Abu Hadi in his Ph.D. thesis reviews the thoughts of Abu Zayd in understanding the texts of the Quran and Hadith (Abu Hadi 2011). Apart from that, the research conducted by Mustafa also did the same thing in criticizing Abu Zayd (Mustafa 2014). They did not see the essential factors why Abu Zayd had thoughts like that. Different from the studies mentioned above, this article will position itself as a study in viewing the thoughts of Abu Zayd, who sees religion as a social fact, not a matter of theology and faith. Not only that, using the type of literature study research, this article explains Abu Zayd's thoughts as contained in several of his writings. The previous researchers were still trapped in studies that saw Abu Zayd's work as the work of theology and faith. The first part of this article reviews a brief biography of Abu Zayd, along with the social and political conditions that influenced his thinking. The second part explains the Quran as a text.
And in the last section describes the Quran as a product as well as a cultural producer. Nasr Hamid Abu Zayd is an Egyptian scholar who authored several works in Arabic literature, the Quran, and Islamic studies. He received his Ph.D. at Cairo university in 1981 with the thesis title Ibn Arabi. During his lifetime Abu Zayd was known as a controversial It is essential to explain that since the beginning Nasr Hamid Abu Zayd has said that his study of the Quran is based on the human aspect. Abu Zayd wants to present a study of the Quran by moving the direction of study from the vertical to the horizontal dimension (Abu . What Abu Zayd meant by studying from the horizontal dimension was canonization, namely the gradual dissemination of the message of the Quran by the Prophet Muhammad. Moreover, a basis for this study is to see the Quran as a text. It is referred to in this article by Islam as a social fact. What is meant by Abu Zayd with the study of the Quran, which departs from the human aspect with a horizontal dimension, is also what Islam means as a social fact in this article. It is essential to understand in advance so that there are no misperceptions that it confuses the study of faith and the study of social facts.
Methodology
This research is library research, a series of activities or processes of obtaining, finding, and selecting written sources or data regarding a problem in a particular aspect or field, which becomes the object of research reasonably through systematic, directed, and accountable work procedures. In other words, this research uses library materials as research objects. For primary sources, the author refers to and relies on analysis and synthesis of almost all of Abu Zayd's scientific writings on the Quran, both books and various articles.
While secondary sources consist of written works on Abu Zayd's thoughts on the Quran.
To analyze these sources, descriptive-explanatory is used. Descriptive research is aimed at analyzing and presenting an analysis and synthesis of the construction of thought systematically so that the conclusions put forward later are always clear on a factual basis and can always be returned directly to the source of the data obtained. Meanwhile, explanatory research explains the socio-historical background which is the reason emergence of thoughts, as well as exploring the extent of the relevance and implications of thought in contemporary discourse and context. With these two analyses, the author seeks to read, see, understand, and present Abu Zayd's thoughts on the Quran and its studies.
Nasr Hamid Abu Zayd and His Political Culture
At the age of 11, Nasr Hamid Abu Zayd began to join the movement group al-Ikhwān al-Muslimun (1954) was published in 1983, after which many of Abu Zayd's works appeared. One is Mafhūm an-Nash by Nasr Hamid Abu Zayd in response to his study of the Mu'tazilah group. He began to argue that the study and interpretation of a text must be viewed "objectively," namely by using scientific methodologies and approaches such as linguistics and hermeneutics as a scalpel to the text (Abu Zayd 2008). According to Abu Zayd, the two approaches, namely hermeneutics and linguistic studies, are two sets of science that can produce a product of contextual and progressive interpretation of the Quran (Abu Zayd 2004).
Since Islamic thinkers such as Muhammad Abduh and Rashid Rida emerged, Egypt has always been a mecca for developing Islamic world thought. The study discourses widely discussed is the study of the Quran. Various types of methods of understanding the Qur'an emerged from Egyptian intellectuals. From Muhammad Abduh, Rashid Rida, Amin al-Khulli, and Thaha Husain. Nasr Hamid Abu Zayd is further considered the latest figure in this field (Abdul Rahman 1999).
Since childhood, Nasr Hamid Abu Zayd experienced hot and tense times and situations in Egyptian history. After being free from the shackles of French colonialism, Egypt was immediately faced with efforts to create self-government. At the same time, several movements and struggles of competing ideological thoughts emerged, some of which were religious and secular. The dominant ideology and the majority at that time was the ideology of secular Arab nationalism (separating religion and state affairs). At that time, the ideology of Gemal Abdul Nasser's regime was official (Haydar 2015).
"My academic experience in the United States was quite fruitful. I did much reading on my own, especially in philosophy and hermeneutics. Hermeneutics, the science of interpreting texts, opened up a brand-new world for me and has opened up a whole new world for me)."
Undeniably, the influence of colonialism from Western countries on Islam has caused Islam to be divided, giving birth to a European-style social elite. Apart from that, the education sector is deliberately infiltrated by state interests and structured duping.
According to Said Masykur (Masykur2015), there were at least several attempts to make the Western imperialists or colonialists try to deceive, which ultimately resulted in the infertility of the Islamic world.
First, from the beginning, the colonialists intended to divide Islamic countries. They know that the solid ideological-cultural power of Islam is difficult to defeat if united. The imperialists know that such a force will be easily defeated if conditions are divided.
Second, the imperialists tried to destroy the order of life starting from their penetration into the traditional agricultural sector, the system of ownership, exchange, production, and other public works carried out by traditional societies. The imperialists demand that the natives dress, eat and drink, build houses, and even educate their children in the Western way.
So that an assumption is embedded in the minds of the natives that their civilization is ancient and Western civilization is advanced. From there, gradually, the natives began to be anti-local products and switched to Western products.
Third, they, the imperialists, entered Islamic sciences and traditional educational institutions. The imperialists encouraged the natives to enter European universities and then formed a notion that the parameters of truth must follow the Western-style modernist way of thinking.
Fourth, the imperialists deliberately created intrigues against minorities (non-Muslims) which became the embryo of the birth of schisms and conflict between traditional schools of thought. Because according to them, the imperialists must make such an effort to facilitate the mission of forming a modern society.
In general, there are at least two factors behind the rise of Muslim intellectuals in the mid-20th century; first, the restlessness of the clergy against the presence of imperialism.
Finally, the Aqidah purification movement emerged, which became known as the reform movement. Second, the dominance of the West in the field of Islamic politics ultimately encourages Muslims to carry out political movements to create a balance of power against the progress of the West.
Al-Quran as a Text
Since the beginning of his discussion, Nasr Abu Zayd sees that the study of religion is currently filled with theological views that efficiently justify (Harb 2005 analytical study by returning religious texts to their original context in which the text was born as a product of human history. In this way, the central core of religious teachings will be known. To achieve this type of study like the model above, the first step that researchers must consider is first to place an aura of sanctity from the text and then return it to its original context in the context of plunder. Researchers in this momentum need to guarantee a broad and open mind space to reach the context of a complete historical understanding. Openmindedness can make it easier for researchers to understand society's condition when the text is present with them. The text (in this case, the Quran) must be understood as part of the reality of society as a party that interacts with the text. Using text, understanding it, and solving specific problems. That is the spirit of study that Abu Zayd wants to bring.
According to Nasr Hamid Abu Zayd, the Quran is a sacred text that remains in terms of pronunciation. However, it becomes different when humans interact, so it loses its sacredness. Permanent from the side of God's absolute revelation but becomes relative when humans try to interact with him (Abu For Abu Zayd, religion is "a collection of divine texts embodied in history," while religious thought is human interpretation or understanding of these texts (Abu Zayd 1996 Zayd). According to him, historically, religion is a fixed collection of sacred texts, while thinking about religion as a human effort (ijtihād) to understand religious texts. If so, interpretation is human understanding of the revealed text; therefore, it is relative and cannot be an absolute truth that all humans must follow. Nasr Hamid Abu Zayd sees nash (text) and mushaf (book) as different. According to him, "Text" requires explanation, understanding, and interpretation, whereas mushaf is more to writing whose form has been formed into an object (book) or a particular corpus of manuscripts (Abu Zayd 1990 To respond to this fact, according to Abu Zayd, we must look at the treasures of classical Islamic thought from various perspectives. One way is with hermeneutics. Historically, hermeneutics has been used by Muslim scientists according to the knowledge they have studied since the beginning of the course of Islamic thought. The holy text they believe in and have faith in is the Quran. History tells us that the Quran cannot be separated from the development of other Islamic studies such as Islamic law, ushul fiqh, philosophy, and Sufism (Abu Zayd 1994). Al-Qur'an hermeneutics is more than ulum al-Qur'an as traditionally understood. More than that, it is a study that is transformed into a multiinterdisciplinary. Contemporary Qur'anic hermeneutic studies inspired by the social sciences and humanities must be addressed. That was what Abu Zayd wanted to do.
To arrive at that stage, Abu Zayd first deconstructs the concept of revelation (Tabī'at al-Naṣṣ). This step is the beginning of entering the contextual interpretation methodology initiated by Abu Zayd. According to Nasr Hamid Abu Zayd, the Qur'an was revealed in two stages. The first is the process of the vertical descent of the text of the Quran from Allah to the angel Gabriel, and this stage is called tanzil (Abu Zayd). Second, the process of the angel Abu Zayd argues that the Quran is described as a treatise (message) because the Quran is a message (Abu . Then, of course, there is a communication relationship between the Sender and the recipient of the message through a language code or system. However, because the Sender cannot be the object of scientific study, the entry point for scientific study is reality and culture. The reality that gave birth to human movement as the target of the text and governed the recipient of the text, namely the Prophet Muhammad. and culture in the form of grammar. what is contained in the structure of the Quran and has been manifested during the communication process between the Quran and its audience . This dimension is the dimension of Al-Qur'an discourse. The discourse dimension is a living and dynamic area; it manifests in the context of everyday life so that it is not only conveyed in Arabic, which only focuses on the place where the revelation was sent down, but also has an influence on the thinking and culture of the recipient. It can be seen in how the Qur'an enters and influences everyday human life. Abu Zayd mentions in his writing "the Qur'an in Everyday Life" that the Quran greatly influences Muslims (Abu Zayd 2000). Abu Zayd sees this in the implementation of the pillars of Islam, zakat as the concern of Muslims towards social conditions, the area of Islamic philanthropy, rules regarding food and drink, the nature of women, the language of communication used daily, and so on which are found in the spiritual life of Muslims. Departing from that thought, Abu Zayd began studying the text of the Quran by placing the Quran as a cultural product of muntaj al-tsaqāfat, as well as producing a culture of muntij li al-tsaqāfat. It can be proven historically to have occurred in two phases, the first is the openness phase (marhalah al-tasyakkul) in more than 20 years when the Qur'an formed itself structurally in the underlying cultural system, and the second is the phase of forming a new culture, namely when the text of the Quran forms and rebuilds its cultural system.
Al-Quran as Muntaj and Muntij Tsaqafi
In his basic view, Nasr Hamid Abu Zayd said that the nature of the Quran is a language text (nash lughawi), a cultural product (muntaj tsaqafi), and a historical text (nash Tarikhi) (Abu Zayd 1990). Some researchers emphasize that Yusuf Rahman, for example, said that Abu Zayd's statement was since the text of the Quran appeared together with a cultural structure in which there was a central signifying system (Rahman n.d.). Even though the text's origin is from a holy divine source, it cannot be separated from the bounds of space, time, and certain social-historical conditions. From there, the Quran is a product of culture because it interacts with humans as cultural actors, and at the same time, the Quran is also a cultural producer because it presents a new culture. The new culture can be completely new that has never been found before, or the tradition is new through changes to old traditions that have been changed and modified.
To strengthen his theory of the Quran as a cultural product (muntaj al-tsaqāfah) and, simultaneously, a maker of a new culture (muntij li al-tsaqāfah) in the two phases above, he applies a semiotic study of the Quran (Abu Zayd 2004). According to him, an accurate text is a text that can free itself from the bonds and shackles of the initial context in which it was produced; according to Abu Zayd, an accurate text can bring out its vitality, apart from norms from outside it. Text, on the one hand, is an object in the form of a cultural product where it is born and joined in, and on the other hand, it is also a subject that changes the socio-cultural system itself.
Text enters the level of "semiotics" when it becomes a subject that can carry out transformations and changes at a new structural level. Therefore, Abu Zayd sees that the power of the text of the Qur'an as a miracle (i'jāz al-Qur'ān) is not to be returned to its divine source because, indeed, from the beginning, the source was extraordinary; more precisely, the mu'jizatan side of the Qur'an lies in the privilege of its literature which far surpasses other texts and brings extraordinary changes to the form of civilization. authority, and power except for the epistemological area (as-sultah al-ma'rifiyyah) (Abu Zayd 1990), namely the text's authority only as a text to be applied at a certain epistemological level. Every text can bring out its new epistemological side, departing from the assumption that it renews the texts that make it new. However, the authority of this text does not change and metamorphose into cultural-sociological authority unless it is carried by certain groups that carry it with an ideological framework.
Therefore, Abu Zayd voiced efforts to liberate from the power of the text (tahrīr min sultah al-nusus); he called for an effort to understand, analyze, and interpret texts objectively and authoritatively. The point is a scientific interpretation based on language studies without getting stuck in text authoritarianism. Authoritarian interpretation (interpretative despotism) occurs when the reading model has tendentious, ideological, and political nuances (al-qirā'ah al-mugridah).
Conclusion
Discussing a thinker like Nasr Hamid Abu Zayd must start with who Abu Zayd was, how the political situation influenced his thinking, and from what point of view he studied religion. Abu Zayd is not in the mood to see Islamic studies as mere theological studies.
However, Abu Zayd sees it deeper from the point of view of objective social facts. The model of religious studies that Abu Zayd is studying is a form of effort to advance Islamic studies, which were initially known to be backward. It was motivated by the social conditions in Egypt at that time, where Abu Zayd lived.
One of the results of Abu Zayd's thoughts departing from this way of thinking is regarding the Quran. Abu Zayd sees the Quran as a cultural product as well as a cultural producer. It is again important to emphasize that Abu Zayd's argument departs from the point of view of religious studies, which are objective and historical, not theological. Thus, according to Abu Zayd, a vital research task, today is a critical and analytical study by returning religious texts to their original context in which the text was born as a product of human history. In this way, the central core of religious teachings will become known.
Abu Zayd sees the Quran as a text that communicates between God and his creatures.
God wanted to respond to the social conditions of Arab society at that time through the Quran. In this way, the Quran is present as a response to the culture and social conditions of the Arab community. Furthermore, the response of the Quran to Arab social conditions sometimes comes in the form of a complete abolition of old traditions or a slight change in the | 4,995.2 | 2023-07-15T00:00:00.000 | [
"Philosophy"
] |
Rapid Motion Segmentation of LiDAR Point Cloud Based on a Combination of Probabilistic and Evidential Approaches for Intelligent Vehicles
Point clouds from light detecting and ranging (LiDAR) sensors represent increasingly important information for environmental object detection and classification of automated and intelligent vehicles. Objects in the driving environment can be classified as either dynamic or static depending on their movement characteristics. A LiDAR point cloud is also segmented into dynamic and static points based on the motion properties of the measured objects. The segmented motion information of a point cloud can be useful for various functions in automated and intelligent vehicles. This paper presents a fast motion segmentation algorithm that segments a LiDAR point cloud into dynamic and static points in real-time. The segmentation algorithm classifies the motion of the latest point cloud based on the LiDAR’s laser beam characteristics and the geometrical relationship between consecutive LiDAR point clouds. To accurately and reliably estimate the motion state of each LiDAR point considering the measurement uncertainty, both probability theory and evidence theory are employed in the segmentation algorithm. The probabilistic and evidential algorithm segments the point cloud into three classes: dynamic, static, and unknown. Points are placed in the unknown class when LiDAR point cloud is not sufficient for motion segmentation. The point motion segmentation algorithm was evaluated quantitatively and qualitatively through experimental comparisons with previous motion segmentation methods.
Introduction
LiDAR systems are rapidly becoming an integral part of automated and intelligent vehicles for environmental awareness. The price of LiDAR sensors is reducing, and automakers are increasingly installing LiDAR in production vehicles for advanced intelligent functions [1,2]. LiDAR measures the distance and direction of the surrounding environment by emitting laser pulses in certain directions and measuring the time-of-flight (ToF) of each laser pulse reflected by the environment. The directions and distances can be converted to a digital 3D representation called a point cloud to express the spatial information of the surrounding environment. Because LiDAR uses light waves, the measured point cloud can represent spatial information very accurately. In addition, LiDAR point clouds can be fused with data from other sensors, such as radars, and cameras, to gain more meaningful information about the vehicle's environment.
All objects in the driving environment are classified as dynamic or static according to the moving conditions. Therefore, the LiDAR point cloud from detected objects can also be classified as dynamic or static point-wise depending on the motion state of the object. Such point-wise classification of point cloud states can be used for safety and convenience functions in automated and intelligent vehicles. For instance, points classified as static are measured from the surfaces of static objects, such as curbs, poles, buildings, and parked vehicles. Such a static point cloud can be applied to various automated and intelligent driving functions, such as mapping, localization, and collision avoidance systems [3][4][5]. Points classified as dynamic are detected from objects that have speeds above a certain level, such as nearby moving vehicles, motorcycles, and pedestrians. These points can be used for object tracking or motion prediction, which are necessary functions for automated and intelligent vehicles for tasks such as autonomous emergency braking (AEB), lane keeping, traffic jam assistance, and adaptive cruise control (ACC) systems.
As shown in the previous examples, the information of the point cloud state is classified according to the motion is useful for automated and intelligent vehicles. This paper proposes an algorithm to rapidly segment the motion states of a point cloud detected by LiDAR in real-time. The overall process of the proposed algorithm is shown in Figure 1. The rapid motion segmentation algorithm has inputs of LiDAR's 3D point cloud and the 3D pose (position and direction) of the LiDAR sensor. The sensor pose can be estimated from an inertial measurement unit (IMU) or the vehicle's on-board motion sensors (such as wheel speed sensors or steering angle sensor). Then, point motion segmentation is performed by applying the laser beam characteristics to the pose correlation between consecutive LiDAR point clouds. A combination of probability theory and evidence theory is applied to accurately and reliably update the motion state of points. The algorithm performs point-wise segmentation of the point cloud into three states: dynamic, static, and unknown. dynamic information is detected from an object moving above a certain speed, and static information is detected from a stationary object. If there are insufficient consecutive LiDAR point clouds for motion classification, some points are classified as unknown. The performance of the proposed algorithm was evaluated quantitatively and qualitatively through comparison with existing methods.
This research has three main contributions: (1) reflecting the laser characteristics of LiDAR, (2) applying a combination of probabilistic and evidential approaches to update the motion state of points, and (3) online motion updated for real-time applications. This paper focuses on the characteristics of lasers, such as multi-echo, beam divergence, and horizontal and vertical resolution, so that it can segment the motion of points more accurately than existing algorithms, such as occupancy grid mapping. In addition, when updating the state information, a combination of probabilistic and evidential modeling is applied to more accurately reflect the actual LiDAR characteristics to update motion in a point-wise manner. Because all the proposed updating processes are real-time, they are suitable for real-time application in automated and intelligent vehicle systems.
Previous Studies
With the development of autonomous vehicles, LiDAR is being more widely used, and many studies of LiDAR are accordingly being conducted. However, there have not been many studies that classify the point-wise motion of LiDAR measurements themselves in real-time. In the field of autonomous and intelligent vehicle systems, studies related to the proposed algorithm aim to generate static environment maps or remove nonstatic points based on tracking results. An occupancy grid map is a typical method for generating a static environment map using LiDAR measurements. The occupancy grid map divides the environment around the vehicle into a 2D grid or 3D voxel cells with uniform size. The occupancy level of each cell can be updated through the laser's ray tracing. The occupancy level of a grid (or voxel) cell passing by the laser becomes lower because the physical space of the cell is likely to be free. Conversely, the levels of cells located on the reflecting surface become higher. Based on these principles, static objects are classified as occupied when the occupancy level exceeds a certain threshold. Contrarily, for a moving object, the occupancy level of the cell is not constantly accumulated, so it is not classified as occupied. The numerical value of each cell's occupancy level is updated based on probability theory [6] or belief theory [7]. Static maps in large-scale traffic environment are constructed using LiDAR sensors with probabilistic and belief approaches [8][9][10]. Moras et al. presented an occupancy grid framework that generates a global static map and classifies local moving objects simultaneously [11][12][13]. Classification of traffic objects (such as vehicles, pedestrians, road curbs, and poles) is used to classify the motion of a point cloud [14][15][16].
The advantages of occupancy grid-based static point cloud classification are that its implementation is relatively straightforward and its performance is stable because it has been studied for a long time in various applications. However, occupancy grid mapping has several disadvantages for use in real-time automated and intelligent vehicle applications. Large memory is required because the driving environment must be represented by a grid or voxel cells. Also, the ray tracing method takes a long time to update all cells related to all LiDAR beams. In addition, because space is represented by discrete cells, a discretization error occurs when the resolution is coarse.
Research on the detection and tracking of moving objects using LiDAR has been conducted to recognize the driving environment of automated and intelligent vehicles [17,18]. Object detection algorithms detect surrounding objects by clustering the point cloud and generate a bounding box for the each detected object. Tracking algorithms generate tracks for detected objects to estimate their position, direction, velocity, and acceleration. Using the tracking results, we can classify the motion of a point cloud into dynamic and static states. Points in a track bounding box above a certain speed are classified as dynamic, and the remaining points are classified as static. This tracking-based point motion classification is straightforward, and the tracking results can be reused. However, it has some limitations. The point cloud clustering groups the detected points on the same object in the object detection step. Although many clustering methods have been studied, it is difficult to obtain accurate results using point cloud information alone. Incorrect clustering causes incorrect point motion classification. In addition, because the tracking has an initialization time to generate new track, it struggles to satisfy real-time motion classification. Furthermore, because it is difficult for tracking to accurately estimate the speed of slow objects, the point motion of objects is likely to be misclassified near the threshold speed.
The point motion classification algorithm presented in this paper has many advantages over previously proposed ones. First, the proposed method directly segments the point cloud into dynamic and static states using the laser beam model. Therefore, there is no chance of misclassification due to discretization error in the occupancy grid method and erroneous clustering of the tracking-based method. In addition, the proposed method does not require a large amount of memory like the occupied grid approach because it simply buffers recent point clouds. Finally, the proposed method is able to satisfy the real-time requirements of point motion classification because it does not need to update all of the gird cells to initialize a new track.
System Architecture
The objective of the proposed point motion segmentation algorithm is to classify the latest LiDAR point, z n,m t , into motion states, motion n,m t , in real-time. The notation of the LiDAR point and motion states are z index,order time and motion index,order time , respectively. The n describes the index of the laser beam from 1 to N. The m represents the order of multi-echo for the laser beam and usually has a value of 2 or less. The t represents the time for the LiDAR scan measurement. The "motion state" has three possible values: motion = {dynamic, static, unknown}. The dynamic state indicates the points detected as moving objects, and the static state represents points detected as stationary objects. The unknown state means that there is not sufficient evidence to classify the motion state as dynamic or static.
The basic principle of the proposed algorithm is to classify the motion state of the current point cloud by applying the laser characteristic model to the registration relationship between the previously measured point cloud and the current point cloud. The inputs of the proposed algorithm are the current point cloud, Z t = {z 1,m t , z 2,m t , · · · , z N,m t }; the previously buffered point clouds, Z t−1 , · · · , Z t−W+1 , Z t−W ; and the sensor pose, x t , x t−1 , · · · , x t−W+1 , x t−W , for each point cloud. W denotes the time window size of the previous data buffer to be used for motion classification of the current point cloud. There are several methods for obtaining the sensor pose x t , · · · , x t−W , such as inertial measurement unit (IMU) dead reckoning, scan matching, a high-definition (HD) map-based localization, and simultaneous localization and mapping (SLAM). To avoid loss of generality, we assume that the sensor's pose and its uncertainty are provided. The output of the algorithm is the motion state The point motion segmentation algorithm consists of two steps: (1) probabilistic modeling of point motion and (2) evidential point motion classification. In the first step, the probability of Motion (t−k)→t , which is a motion classification of Z t against Z t−k , is updated. Figure 2 illustrates the concept of the probability update of Motion (t−k)→t based on a geometrical relationship between (Z t , x t ) and (Z t−k , x t−k ). The likelihood field of the motion can be updated using (Z t−k , x t−k ) and the characteristics of the laser (such as beam divergence and multi-echo). The Dynamic probability for z m,1 t and z m+3,1 t will be higher when they are located in the path of the laser (green region) for the previous point cloud Z t−k , and the Static probability for z m+2,1 t will be higher if it is located near the previous point (red region). In the second step, the probabilities of each motion classification Motion (t−1)→t , · · · , Motion (t−W)→t are integrated to estimate the final motion classification Motion t , as shown in Figure 3. However, the probability of Motion (t−k)→t cannot be updated by previous points if the current points are not in the likelihood field of the previous points. In this case, it should be classified as unknown. However, because unknown cannot be expressed clearly using probability theory, evidence theory, which can handle the unknown state explicitly, is employed. The probabilities of Motion (t−1)→t , · · · , Motion (t−W)→t are converted into mass (degree of belief) with consideration of LiDAR and sensor pose uncertainty and then integrated into a mass of Motion t using Dempster's combination rule. The motion states
Characteristics of LiDAR Point Cloud
LiDAR uses rotating laser beams to measure the distances and angles from surrounding objects. A laser pulse is emitted at a specific angle, and the distance to the object for that angle can be measured using the time-of-flight (ToF) principle, as demonstrated in Figure 4. ToF represents the difference between the time the laser pulse is emitted from the diode and the time it returns to the object. The distance is calculated by multiplying this time by the speed of the laser light. Using the horizontal-vertical emitted angles and the measured distances, 3D information of surrounding objects can be reconstructed in the form of point data. The actual LiDAR's laser is not emitted in a straight line, as shown in the Figure 4a. The laser has a characteristic of "beam divergence", which increases the beam's cross section over the distance. Because of this characteristic, the farther away from the laser source an object is, the wider the area objects can be detected in. In addition, beam divergence enables multi-echoing of the emitted laser pulse, as shown in Figure 4b, and the multi-echo allows simultaneous measurement of distances to various objects. Another important characteristic of LiDAR is the uncertainty of the distance and angular measurement. Despite the use of lasers, the distance measurement is not infinitely accurate. The distance measurement accuracy is proportional to the measuring capability of the time the laser pulse takes to return. Conversely, the angular accuracy (vertical and horizontal) is discretely accurate because LiDAR is able to control and configure the emission angle. Many previous studies that used LiDAR measurements did not properly account for the above-mentioned characteristics of LiDAR (i.e., beam divergence and distance uncertainty); they treated LiDAR measurements as points with no volume and constant 3D Gaussian uncertainty. To classify LiDAR point motion with high accuracy and reliability, the proposed algorithm accurately reflects these characteristics of LiDAR.
The LiDAR point cloud measurement Z t = {z 1,m t , z 2,m t , · · · , z N,m t } must be representable in both spherical and Cartesian coordinates for processing by the point motion segmentation algorithm. The measurement can be represented in spherical coordinates as Z rθφ,t = {z 1,m rθφ,t , · · · , z i,m rθφ,t , · · · , z N,m rθφ,t }. } based on the W-buffered point clouds Z t−1 , · · · , Z t−W+1 , Z t−W and the sensor pose x t , x t−1 , · · · , x t−W+1 , x t−W of each point cloud. The sensor pose x t , · · · , x t−W can be obtained using several methods, such as an IMU dead reckoning, scan matching, HD map-based localization, and SLAM. However, the proposed algorithm assumes that the sensor's pose information and its uncertainty is abstracted regardless of the type of pose estimation method.
Probabilistic motion modeling of the point cloud Z t can be estimated using the LiDAR sensor pose x t of Z t and the previously detected LiDAR point cloud Z t−k and its sensor pose x t−k , as shown in Figure 2. The probabilistic motion model of Z t to x t , Z t−k , and x t−k can be described as p(Motion (t−k)→t ). The probability p(Motion (t−k)→t ) can be represented by a conditional probability for the given conditions, the past and present point cloud pairs (Z t , Z t−k ), and their sensor pose (x t , x t−k ), as represented in Equation (1).
Motion t is composed of each independent point motion {motion 1,m t , motion 2,m t , · · · , motion N,m t }, so p(Motion (t−k)→t ) can be represented by the set of conditional probabilities of each point, as described by motion i,m t consists of two states {dynamic, static}, and the sum of p(dynamic) and p(static) is always one.
The conditional probability of one point motion can be reorganized by the Bayes rule, as represented by Equation (3).
p(motion i,m conditional probability of one LiDAR point motion being expressed by the likelihood of the given point motion. Therefore, the motion probability estimation problem is converted to a likelihood estimation problem for LiDAR point cloud.
Likelihood of LiDAR Point Measurement
We know that the point cloud motion probability p(Motion (t−k)→t ) can be obtained from the likelihood of the point cloud p(z i,m t |motion i,m t , Z t−k , x t , x t−k ), as described by Equation (5). The likelihood p(z i,m t |motion i,m t , Z t−k , x t , x t−k ) represents a statistical state when motion i,m t is determined as static or dynamic for given Z t−k , x t , and x t−k . The likelihood field of one point z i,m t can be represented intuitively, as shown in Figure 5. The likelihood field of the LiDAR point z i,m t , measured at sensor pose x t at time t, is represented by points z j,l t−k in point cloud Z t−k measured at pose x t−k at the previous time t − k. The intensity of the green color indicates the likelihood of the point z i,m t in the green region being in dynamic motion, and the intensity of the red region represents the likelihood of the point z i,m t in the red region being in static motion. The likelihood field is represented in the local spherical coordinates of the previous sensor pose x t−k . The point cloud Z t−k is represented in spherical coordinates as Z t−k = Z rθφ,t−k = {z 1,l rθφ,t−k , · · · , z j,l rθφ,t−k , · · · , z J,l rθφ,t−k }. The likelihood field for the point z i,m t is constructed in a triangular-pyramid form by each previous point z j,l rθφ,t−k with consideration of the beam divergence characteristics of the LiDAR laser, as shown in Figure 5.
The 3D likelihood distribution of p(z i,m t |motion i,m t , Z t−k , x t , x t−k ) can be divided by two 2D likelihood fields. The first one is a likelihood field for the distance-horizontal angle (r − φ) plane, , and the second is a likelihood field of the distance-vertical angle (r − θ) plane, p(r i,m t , θ i,m t |motion i t , Z t−k , x t , x t−k ). Figure 6 shows the 2D likelihood field in the (r − φ) and (r − θ) planes for the previous measurements z The cross-section of the likelihood field for one laser beam z j,l t−k in Figure 6a can be represented by the likelihood value in Figure 7. Through this figure, we can more accurately analyze the distribution of likelihood for each motion. Figure 7a shows the likelihood p(z i,m t |motion i,m t = static, z j,l t−k , x t , x t−k ) when a measured point z i,m t is static for given previous measurement z j,l t−k and given poses x t and x t−k . Here, the previous measurement z j,l t−k can be expressed in spherical coordinates as {r The region where the previous LiDAR point was detected is likely to be static. LiDAR is measured using ToF, so the uncertainty of the measured distance, r i,m 1 σ √ 2π e −(r−r i,m t ) 2 /2σ 2 (6) σ is the standard deviation of the distance measurement, which are different depending on the LiDAR. Figure 7b shows the likelihood p(z i,m t |motion i,m t = dynamic, z j,l t−k , x t , x t−k ) when a measured point z i,m t is dynamic for given z j,l t−k , x t , and x t−k . The region where the previous LiDAR beam z j,l t−k passed is likely to be free, and the location of the current LiDAR point z i,m t in the region means that this point is more likely to be detected from a dynamic object. This characteristic can be represented by the following equation: through Equation (5). However, if the point z i,m t is located outside of the likelihood field, we cannot obtain the likelihood using the above equations. The area outside of the likelihood field must be dealt with as unknown, but probability theory cannot handle the unknown state explicitly. Therefore, in the next chapter, we apply evidence theory to deal with the unknown state explicitly.
Evidential Modeling of LiDAR Point Motion
The process of point-wise probabilistic motion estimation achieved through the likelihood field was described in the previous section. However, there is a limitation to the probabilistic method if the point z i,m Both probabilistic and evidential approaches are based on the concept of assigning weights to the hypothesized states of the measurement. However, the evidential approach allows sets of alternatives, which means new states can be created by combining existing states. The probabilistic approach deals with the two states {static, dynamic}. In the evidential approach, the two states form a frame of discernment Ω = {static, dynamic}. Dempster-Shafer theory can manage more states explicitly (Ω, φ) by extending the frame of discernment Ω to the power set 2 Ω = {static, dynamic, Ω, φ}. Ω is the set Ω = {static, dynamic}, which means that the point motion is static or dynamic. However, because the point motion cannot be static and dynamic simultaneously, the state Ω indicates an unknown state. φ is an empty set, which means that the point motion is not both static and dynamic. However, because this situation is physically impossible, the state φ indicates a con f lict situation. For each state of the power set 2 Ω = {static, dynamic, unknown, con f lict} in the evidential approach, a mass function Mass is used to quantify the belief of the hypothesis. The mass functions of Mass i,m t (static) and Mass i,m t (dynamic) represent the belief of point z i,m t being static and dynamic, respectively. The mass function of Mass i,m t (unknown) is the union of the beliefs of static and dynamic, and Mass i,m t (con f lict) represents the belief that the point is conflicted by different measurements. The sum of mass functions for the power set must be one based on its definition in the evidential framework.
Based on the evidential approach, we can explicitly handle the points located outside the likelihood field for the given point z j,l t−k as an unknown state. The boundary between the inside and outside is kσ, as shown in Figure 8. k is the tuning factor, which determines the size of the likelihood boundary, and we used k = 3. The point z * t is located inside the likelihood field, but z # t and z $ t are located outside the likelihood field. The mass of point z i,m t motion for the given z j,l t−k , x t , and x t−k is denoted by mass j,l→i,m (t−k)→t (state) for each state = {static, dynamic, unknown, con f lict}. mass j,l→i,m (t−k)→t (state) can be calculated based on whether the point z i,m t is located inside or outside of the likelihood field using the following equation. The mass values of the static and dynamic states are calculated by the motion probability p(motion i,m t |z i,m t , z j,l t−k , x t , x t−k ) and its confidence, λ k . The confidence λ k can be determined using Equation (10).
λ reg describes the confidence of the pose registration between x t and x t−1 . This value is determined by the performance of the registration method, such as IMU, scan matching, and HD mapping. If the registration is very accurate, the value is close to one; however, if it is not good, it is close to zero. The confidence λ k is also affected by the time difference k. Because the confidence of the probabilistic model decreases as the time difference k increases, the confidence λ k also decreases by exp(−k/τ), where τ is the time constant that determines the decay rate.
Point Motion Segmentation by Integrating the Point Motion Masses
For the given point z j,l t−k and the given poses x t and x t−k , the point motion of z i,m t can be described by mass function mass j,l→i,m (t−k)→t (state). For all given previous scan points Z t−k = {z 1,l t−k , · · · , z j,l t−k , · · · , z N,l t−k } and the given poses x t and x t−k , several mass functions mass 1,l→i,m (t−k)→t (state) · · · mass N,l→i,m (t−k)→t (state) can be calculated. We must integrate the mass functions into one mass function mass i,m (t−k)→t (state). In addition, for the previously buffered point clouds Z t−1 , · · · , Z t−W+1 , Z t−W in the time window W, several mass functions Mass i,m (t−1)→t (state), · · · , Mass i,m (t−W)→t (state) can be obtained, and these mass functions should be integrated into a single mass function Mass i,m t (state) to represent the motion of one point z i,m t . To integrate two different mass values from different laser scans and times, Dempster's combination rule (Equation (11)) is applied.
Dempster's combination rule is based on the conjunctive combination rule described by Equation (13).
Experimental Environments
An autonomous vehicle (A1) was used for the experiment to evaluate the proposed algorithm. A1 was equipped with two LiDARs (Velodyne VLP-16) and an IMU, as shown in Figure 9. The LiDARs provided point cloud data with a 10 Hz sampling frequency and their maximum detection range is 100 m. LiDAR lasers beams have beam divergence, which means the beam cross section is increased over time. We designed a beam divergence model based on the specification of the LiDAR sensors, as shown in Figure 10. The horizontal and vertical beam divergence characteristics were different. The standard deviation σ of the distance accuracy was set to 3 cm. Because the distance accuracy can vary based on factors such as temperature and target reflectivity, the selection of the standard deviation σ for the probabilistic LiDAR model must consider the uncertainty. The horizontal field of view (FoV) was 360 • , and the horizontal angular resolution was set to 0.2 • . The vertical FoV was 30 • , and the vertical resolution was 2 • . Because the LiDAR controls the laser emitting angle, it was assumed that the angular uncertainties (vertical and horizontal) were negligible. Dead reckoning was implemented using the IMU to estimate the LiDAR pose for each time step. For the experiments, raw IMU data (acceleration and gyro) from the RT3002 sensor were used without real-time kinematic (RTK) GNSS correction. The specifications of the MEMS IMU are listed in Table 1. Although the IMU was not sufficiently accurate to estimate the long-term pose of the LiDAR sensor, it can provide stable performance in short windows of approximately 50 or less (five seconds or less). In addition, the evidential integration algorithm was able to account for the inaccuracy of the IMU-based pose estimation by tuning the registration confidence λ reg . By considering the installed MEMS IMU, we set λ reg to 0.9. The synchronization between the LiDAR, IMU, and point motion classification algorithm was measured by a pulse per second (PPS) signal from an RT3002. The LiDARs and IMU were precisely calibrated to be located in the same coordinate system.
Segmentation Performance Evaluation through Comparative Analysis
To evaluate the performance of the point-wise motion segmentation, experiments were conducted under various scenarios (e.g., cities and highways). The total length of the experiment road is more than two kilometers. Figure 11a shows single scene of the experimental condition, where moving cars and stationary road structures were mixed. The result of segmentation through the proposed algorithm is shown in Figure 11b. The RGB value for each point is set using the proposed motion belief algorithm. The red values represent static state belief, the green values represent dynamic state belief, and the blue values represent unknown state belief. Therefore, the objects that have a high probability of stopping will appear red, moving objects will appear green, and unsegmented objects will appear blue. As shown in Figure 11b, traffic signs and roadside trees are segmented as red, and moving cars are classified as green. due to incorrect bounding boxes and inaccurate speed estimation by the tracker. The segmentation accuracy of the proposed algorithm is better than that of the tracking-based segmentation algorithm when the time window W is 20, 30, 50, and 100, as shown in Table 2. The segmentation performance for a time window of 50 is better than that of 100 because the drift error of pose estimation affects the segmentation performance.
Real-Time Performance Evaluation
The algorithm was verified in an RTMaps environment with a QuadCore Intel Core i5-3570K, 3600 MHz (36 × 100) CPU. Hard real-time performance could not be fully evaluated because it is not an embedded environment, but it can be optimized later by checking the soft real-time performance in the RTMaps environment. To evaluate the real-time performance of the algorithm, the occupancy grid map-based segmentation algorithm was compared with the proposed algorithm. As shown in Figure 12, the algorithm based on occupancy grid maps took a long time because all cells in the area the LiDAR beam passed were constantly updated. The larger the window size of the proposed algorithm, the more computation is required. The most appropriate time window setting for the proposed algorithm is 50, as illustrated by the confusion matrix, and its computation time is below the sampling period of the Velodyne LiDAR (100 milliseconds).
Conclusions
This paper proposed a segmentation algorithm to rapidly classify the motion states of a LiDAR point cloud in real-time. The motion segmentation algorithm requires inputs of point clouds and 3D pose (position and direction) of the LiDAR sensor. The point-wise motion segmentation is performed based on the laser beam characteristics and the 3D pose correlation between consecutive LiDAR points. A combination of probability and evidence theory is used to accurately and reliably segment the motion state of points into dynamic, static, and unknown.
(1) The point motion segmentation algorithm considers the characteristics of the LiDAR laser beam, such as multi-echo, beam divergence, and horizontal and vertical resolution. Therefore, the point motions are segmented more accurately and reliably than by conventional algorithms (e.g., occupancy grid mapping and tracking-based segmentation algorithm).
(2) To update the motion state of each LiDAR point, a combination of probability theory and evidence theory is applied to point motion modeling to accurately reflect the LiDAR characteristics. Probability theory is used to model the likelihood fields of LiDAR point clouds, taking into account the uncertainty of LiDAR measurements. Evidence theory is used to incorporate multipoint motion probabilities, taking into account pose uncertainty and the unknown state.
(3) The proposed point motion segmentation algorithm was evaluated experimentally. The segmentation accuracy was 86% for a time window of W = 50. This is better than the accuracy of the tracking-based algorithm (73%). Because the proposed algorithm can handle one LiDAR point clouds in a one-step process, when operating under 100 ms, it is suitable for real-time applications in automated and intelligent vehicle systems.
The performance of the algorithm is related to the positioning algorithm that estimates the pose of the LiDAR sensor. In future research, we plan to analyze the quantitative relationship between the LiDAR sensor positioning and the proposed algorithm performance, and to study the SLAM algorithm that classifies the point motion and simultaneously estimates the pose of the LiDAR sensor. | 7,744.8 | 2019-09-23T00:00:00.000 | [
"Computer Science",
"Engineering",
"Environmental Science"
] |
Spacetime Discreteness : Shedding Light on Two of the Simplest Observations in Physics
Loop quantum gravity is considered to be one of the two major candidates for a theory of quantum gravity. The most appealing aspect about this theory is it predicts that spacetime is not continuous; both space and time have a discrete nature. Simply, space is not infinitely divisible, but it has a granular structure, and time does not flow continuously like a smooth river. This paper demonstrates a review for two missed (unnoted) observations that support the discreteness of the spacetime. The content of this paper does not validate the specific model of quantized geometry of the spacetime which is predicted by the theory itself. Instead, it proves that time does not flow continuously. But it flows in certain, discrete steps, like a ticking of a clock, due to a simple observation which is absence of any possible value of time that can exist between the present and the future. Regarding space, it validates the spatial discreteness, and the existence of spatial granules (space quanta) due to a simple observation which is the existence of the origin position in a coordinates system. All of this is achieved by reviewing the concept of discreteness itself, and applied directly to the observations.
Introduction
The problem of quantum gravity represents one of the biggest problems in physics today.Mainly, the problem does not arise from the lack of working theories in this field, but it arises-till now-from absence of any experiment or observation that can validate any theory in this field.The problem comes from the fact that the theories of quantum gravity work on a very small length scale ), that is beyond our current experimental reach.Therefore, no evidence has been obtained to validate any theory in this field.Loop quantum gravity (LQG) is considered to be one of the two major candidates for a theory of quantum gravity [1] [2] [3], and String theory is the other candidate.Mathematically, the framework of String theory [4] requires the spacetime to have extra dimensions beside the four dimensions that we currently observe (length, width, height, and time).Also, it demands the existence of a specific type of symmetry for the spacetime, which is called supersymmetry.This symmetry implies that each elementary particle should have another particle as a partner (super-partner).A fermion should has a partner boson and vice versa.But the problem with all this is that none of the above until now has been validated by any experiment or observation, not even after the launch of the Large Hadron Collider (LHC) in Europe.Technically, the experts require more time and effort on the experimental research in order to reach their final conclusions.But for now, I think this should raise many questions.
On the other hand, LQG in its current theoretical framework does not require the unobservable extra dimensions or the undiscovered supersymmetry.Also, its experimental future is much more promising.A few years ago, a group of scientists from America and France have proposed a new approach to test LQG [5].
Their proposal depends on detecting the radiation that is emitted from the black holes.Historically, the idea that a black hole can radiate was introduced in 1974 by the British physicist Stephen Hawking [6].According to his theoretical model, the radiation does not come directly from the black hole itself.But it comes from the quantum effects-due to the uncertainty principle-that exists near the event horizon.Emitting this radiation causes a black hole to loss energy (mass), and from here it is also called black hole evaporation.In their proposal [5], by including LQG in the picture, the process of radiation should reveal footprints that are distinct from the usual outcome which is expected from the process of Hawking radiation.But a major challenge for this test is that the process of black holes radiation is just a hypothesis.Until now, this radiation has not ever been detected.Therefore, in order to verify LQG by this approach, first, we need to prove that this radiation does really exist.Then, we shall look for the characteristic footprints in the radiation process that distinguishes LQG.I hope this can be achieved in the near future.
Theoretically, the most appealing aspect about LQG is that it predicts that spacetime is a discrete entity; space is not infinitely divisible, but it has a granular structure and time flows in a discrete pattern like a ticking of a clock.LQG draws an accurate geometry for the spacetime at the very small scale (Planck length).The theory has its early beginning in the mid-twentieth century.It has been developed by a number of physicists including; Carlo Rovelli, Abhay Ashtekar, and Lee Smolin.It was built in order to merge Einstein's idea of gravity with the quantum theory.In Einstein's notion of gravity [7] [8], the gravitational field is just a curvature in the space-time itself.Therefore, the space-time (our A. Isam background) is an active field, and not just a passive entity where other interactions happen within.In the quantum theory, the traditional description of a field (like Electromagnetic field) is built to rely on a passive background, and here a spontaneous problem emerges.Because Einstein's theory of general relativity tells us that the universe is built by fields on fields interactions, and not by fields interactions on a passive background (fixed, inactive spacetime).The gravitational field does not require a background to rely on, it is the background itself.
From here, a need for a new concept becomes prominent in order to merge gravity within the quantum realm.LQG is concerned with this.It describes a new quantum field for gravity that can interact with the other forces, with no fixed background to rely on.
It describes space as network of intersecting loops Figure 1(a).These loops are not located within space, but they are the space.They are excitations of the gravitational field at a very small scale (Planck length).These loops interact with our ordinary particles (like electron), and their effect is manifested as gravitational interaction.
Loops intersect with each other to form a network, which is called spin network.When this network is observed over time, it is called spin foam.There are two values in this network which are important; nodes and links.They are related to elementary values of volume and area respectively Figure 1(b).A node stands for an elementary quantum (chunk) of volume, and similarly, a link stands for an elementary quantum of area.There are minimum values for volume and area that can exist within the framework of this theory.By a rough approximation, they are about one cubic Planck length, and one square Planck length respectively.This constitutes the granular aspect of the space or simply, the spatial granules.But despite of its precise description for the geometry of the space- time at the Planck length scale, LQG is just a hypothetical approach that has not been yet validated by any experiment or observation.However, in this paper, we Figure 1.(a) Loops of LQG, they do not rely on space, they are the space; (b) Loops intersect with each other to form a network.This network is described by nodes and links.shall discuss two missed (unnoted) observations that prove the existence of discrete structure for the spacetime.But they cannot validate the precise shape of the quantized geometry which is predicted by LQG.
Defining a Simple Approach to Detect Discreteness
From our daily experience, we are familiar with two types of quantities; discrete and continuous quantities.To explain them, let us take the following example.
Consider two types of bags.The first is a usual bag that is used to carry weight.
The second one is a ball bag (mesh sack) which is used to carry soccer balls.The maximum capacity of the first bag is 10 kilograms, and for the second are 10 soccer balls Figure 2. Regarding the first bag which carries weight, we are familiar with the fact that matter is composed of discrete entities; molecular and sub-molecular particles.But this discreteness does not appear in our ordinary macroscopic measurements, as in this example.Therefore, just for the sake of demonstration, we shall consider it a continuous quantity.Later on, we shall discuss two accurate examples.Now, let us guess the amount (quantity) that each bag carries without looking at them.Our possible answer regarding the first bag is any value from zero (empty bag) to 10 kgs (full capacity).All the possible values from zero to ten kilograms are expected, which include values like 0.3 kg, 2.5 kgs, and 7.9 kgs.
Concerning the second bag, our answer will be different.There are only eleven possibilities.Our possible answer will be any value in the range from zero (empty bag), one ball, 2 balls, 3 balls… up to 10 balls (maximum capacity).Only 11 answers are allowed.Values like 2.5 balls or 7.4 balls are not possible, because there are no 0.5 ball or 0.4 ball.
We shall use the term "spectrum" which refers to a set (or a continuum) of the possible values for each bag, and here you can simply notice the difference between the two spectrums of these bags.The spectrum of the second bag (ball bag) Figure 2. Two bags, the first one carries weight, which is considered from our daily macroscopic experience as a continuous quantity.The second bag, carries balls, which are discrete entities.Balls are discrete entities, and this discreteness is directly reflected as emptiness within the spectrum between the successive values Figure 3.This emptiness exists because the number of balls which is carried by this bag is a discrete quantity.
On the other hand, the spectrum of the first bag is continuous, and there is no emptiness between the successive values Figure 3.There is always a possible value between any two successive values.From here, we consider it as a continuous quantity.Let us take another example, which is the one dimensional quantum harmonic oscillator.The energy of this oscillator is a discrete quantity, and it is given by the equation , where ( ) v is the oscillator's frequency and h is Planck's constant.In this section, we define the spectrum for a physical quantity as a set (or a continuum) of the possible values that this quantity can take or obtain.From this definition, and by using the previous equation, we can write part of the spectrum for the oscillator's energy as: the spectrum is infinite.Hence, the spectrum is continuous everywhere between the three successive values, and at any site between them there is a possible value.
No emptiness exists, and from here, we call it a continuous quantity Figure 4.
Our final example is the electric charge.It is a discrete quantity.For a positively charged particle, its spectrum is Regarding the quantum harmonic oscillator, as you can see, nothing exists between the successive values within the spectrum; there is a complete absence of any possible value between them.We call it emptiness within the spectrum.On the other hand, the spectrum of a continuous quantity lacks the emptiness within it; each small dot refers to a possible value within the spectrum.
A e do not exist.From here, there is emptiness in the spectrum between 2e and 3e.The emptiness stands for the complete absence of any possible value within the spectrum between 2e and 3e.Its existence is a direct reflection (consequence) for the discreteness of this physical quantity.
We can consider further examples for discrete quantities, but our conclusion will always be the same; a discrete physical quantity has a discontinuous spectrum of possible values.The discontinuity appears as emptiness between the spectrum's successive values.Emptiness by itself stands for the complete absence of any possible value within the spectrum.You can notice this in Figure 3 and Emptiness does not exist, and the spectrum is continuous everywhere.This differentiates a discrete quantity from a continuous quantity, and from here, we can simply detect the discreteness of a physical quantity by observing the emptiness that exists within its spectrum.
Time as a Physical Quantity, Does Its Spectrum Contain Emptiness?
The nature of time is a site of controversy and ambiguity.But from an objective aspect, in physics, it is usually defined by its measurements; time is a quantity that is measured by a clock.In the universe, both space and time are merged together into one single entity called spacetime ( ) , , , x y z t , which is an inevitable consequence, regarding the theory of relativity Initially, in order to obtain a spectrum for time, we shall consider an observer within a specific frame of reference.In this frame of reference, time measurements are differences within time (∆t), and these differences are measured or taken at the same spatial location within the frame of reference.Therefore, there should be no difference in the spatial coordinates ( ) only difference in time.From here, regarding time measurement, we shall consider one axis; time axis (ct) Figure 5.
As you can see from the figure, we consider our observer at specific moment (t A ).Then, he (she) will have three components of time; the past, the present, and the future.The present is the time that exists between the future, and the past; it is the moment of now.The past is the period of time before the present.
The future is the time that will come after the present.From a simple approach, the future can be visualized as a continuum of futuristic moments.A futuristic moment is defined as the moment of the present after a flow of a specific time interval.For example, from Figure 5, (t B ) and (t C ) are two futuristic moments that will be the present for our observer after a flow of ( B A t t − ), and ( C A t t − ) respectively.
In the previous section, we have defined the spectrum for a physical quantity as a set (or continuum) of the possible values that this quantity can take.For our that exist between the present (t A ) and the future is zero; there is a complete absence of any possible value in the spectrum between the future and the present Figure 6.
The future is a continuum of futuristic moments.From here, the emptiness that exists between the present and the future is just an emptiness between the present and a futuristic moment; call it ( ) Regarding, our observer, when time flows by the amount ( ) . Then, ( ) t α will not be a futuristic moment any more.It is now the present ( ) , and similarly as the previous ( ) there will be emptiness between it, and the future.Again, the future is just a continuum of futuristic moments.Therefore, the emptiness that exists between the present, ( ) t α and the future is just emptiness between the present and a futuristic moment, call it ( ) By repeating the previous analysis, we can get ( ) From here, one minute is divided to 60 seconds, and approximately, each second is divided to 10 43 elementary time quanta, therefore, time discreteness-as we currently observe-does not play any apparent role in our ordinary macroscopic activities.Its significance becomes obvious at the scale, where the theory of quantum gravity works.At that length scale, as we have discussed earlier, LQG visualizes the space by a spin network, which describes the quantized microscopic geometry of the space by using nodes and links.This spin network, when it is observed over time, its name changes to a spin foam.An important point that should be mentioned is that the geometry of this microscopic space is not fixed.But it changes with time for a number of reasons which include matter movement and the quantum effects of the uncertainty principle (quantum fluctuations).These geometrical changes appear as rearrangements within the patterns that nodes and links can take within the spin network (e.g.multiple nodes may combine to form a single node).At the level of these events, the discreteness of time becomes important.Because it implies that these rearrangements which happen within the spin network, will not occur in a smooth, continuous pattern, because time does not flow continuously.Instead of this, it will occur in discrete abrupt steps, since time advances in a discrete pattern.From this perspective, and at this small level, time can be defined by the sequence of distinct moves that rearrange the network.More precisely, I quote the following words from the American physicist Lee Smolin, one of the theorists who developed the LQG "Time in our universe flows by the ticking of innumerable clocks-in a sense, at every location in the spin foam where a quantum "move" takes place, a clock at that location has ticked once" [3].
Position as a Physical Quantity, Does Its Spectrum Contain Emptiness?
It is logical to assume that time discreteness should be associated with space discreteness, since space and time are intimately connected in nature, and they constitute one physical entity (spacetime), where time is a dimension within its structure.However, spatial measurements within the spacetime are concerned with the differences between the spatial coordinates.These differences are measured or taken at the same moment, which means there is no difference in time (∆t = 0), but only differences in (∆x, ∆y, ∆z) which are the spatial component of the spacetime.Now, if we consider position as a physical quantity, we shall ask ourselves an important question.Is it continuous or discrete?To answer this question, we start with one spatial dimension (a line) for simplicity.
Positions in this line are represented with axis (x).The position spectrum represents all the possible values that a position can take.It can take a positive or a negative value (to the right or to the left with respect to the origin), or it can be the origin itself.It is illustrated on the axis by labeling the axis with position units, as shown in Figure 7 below.
By looking at the spectrum above, there are three facts that are noticeable; the First fact from Figure 7, the positive part of the spectrum represents a continuum of positions that are directed in the positive direction with respect to the origin.The negative part of the spectrum represents a continuum of positions that are directed in the opposite direction with respect to the origin.Therefore, the word "positive" or "negative" only refers to the direction.Positive means to the right and negative means to the left with respect to the origin.The second fact; existence of the origin position in the spectrum which is the position that is located outside the positive and the negative parts of the spectrum.Therefore, it is neutral (null vector).The final fact; in the spectrum above, the number of possible values (positions) between the positive part and the negative part of the spectrum is one, which is the origin itself, but the number of possible values (positions) between the positive part of the spectrum and the origin is zero.Regarding observations in physics, by looking at the previous figure.I believe this is one of the simplest observations in physics that can be used to validate a proposed hypothesis (spatial discreteness).It is one of the simplest, because you do not have to measure anything to observe the emptiness in the spectrum that exists between the origin and the positive (or the negative) part of the spectrum Figure 7.As you can see in Figure 7, between the origin and the positive part of the spectrum, the total number of possible values (positions) is zero; there is a complete absence of any possible value within the spectrum between the origin and the positive part.It is emptiness within the spectrum Figure 8.
Since the positive part of the spectrum is merely a continuum of positive positions.Then, the emptiness that exists between the origin and the positive part of the spectrum is an emptiness between the origin and a positive position, call it position (x 1 ).The origin position is a relative position and not an absolute.
Therefore, position (x 1 ) can be considered as an origin position.Then there will be emptiness between (x 1 ) and the following position in the spectrum, just like the previous one, call it position (x 2 ).Also position (x 2 ) can be considered as an origin, since the origin is a relative concept.Therefore, there will be emptiness between (x 2 ) and the following position in the spectrum, call this position (x 3 ), and so on.Therefore, the position spectrum in the positive direction will take the form: 0, x 1 , x 2 , x 3 , x 4 … which is discrete and not continuous, and the number of possible positions in axis interval (△x) is limited, and not infinite, but what does it mean?
Initially it means that position is a discrete physical quantity.It is important to note that our analysis above can only prove the discreteness of the position as a physical value.But it cannot answer whether the successive positions are equally spaced or not, but it only shows that they are spaced.Classical geometry defines line as a continuum of infinite number of points spreading in one dimension.
This definition makes any given value of length (∆x) infinitely divisible Figure 9.
In this classical definition, any point in the line refers to a position in space.
Therefore, existence of infinite number points in any given value of length (∆x) means existence of infinite number of positions in that length.This clearly Figure 8.As you can see in the spectrum (axis), nothing exists between the origin (represented by the black dot) and the positive part of the spectrum (represented by the straight green line).There is a complete absence of any possible value between them; it is emptiness within the spectrum.Also, emptiness exists between the origin and the negative part of the spectrum.contradicts the discreteness of position as a physical value.Because discreteness implies that the number of positions in any length value is limited, as we have shown in the analysis above.From here, a new definition of line is required to solve this contradiction.By redefining line as a continuum of quanta instead of points, the problem is simply solved.Each quantum will represent-or refer to-a position in space and since the quantum has a non-zero value of length, their number in any length interval (∆x) is limited.This in turn results in existence of limited number of positions in that interval, which is consistent with its discreteness concept Figure 10.
The quantum means an elementary value of length.The word "elementary" means it is not divisible, just like elementary particles are not divisible.Therefore, observation of space below the level of the quantum is not possible, because it results in the divisibility of the quantum itself, and this cannot happen, since it is elementary.
Existence of elementary value for length implies the existence of elementary
A. Isam values for area and volume too.We shall consider the following example.From the previous discussion, existence of the origin position illustrates discreteness in space structure, but it does not illustrate a specific or certain shape of microscopic geometry at the discreteness length scale.This creates a problem when trying to extend the previous conclusion about space discreteness to include two and three spatial dimensions.However, the problem is solved by using a large length scale relative to the scale of discreteness.Because at this large scale, the microscopic discrete geometry is reduced to the classical macroscopic geometry as an approximation (just as the classical mechanics is used as an approximation for quantum mechanics at the macroscopic length scale).
Therefore by choosing a large macroscopic length scale, the classical Cartesian coordinates system is used as an approximation, but it is important to bear in mind that the axes (x), (y) and (z) are discrete and not continuous, since they contain an origin position.By considering areas, an additional spatial dimension (y) is required, and it is discrete just like (x), since it contains an origin position.
The "classical" definition of area is that it represents a two dimensional continuum of infinite number of points, and this definition makes any given value of area infinitely divisible.This definition contradicts the conclusion regarding position discreteness as will be illustrated below.Let us take a circle as an example, Since the number of points inside the circle is infinite, this results in infinite Figure 11.By taking a large scale, classical geometry is used as an approximation for the quantized, microscopic geometry.Just as the classical mechanics is used as an approximation for the quantum mechanics at the macroscopic length scale.
number of positions in (△x) and (△y) intervals, as illustrated in Figure 12 below.
The previous result makes a contradiction with the result regarding position discreteness, because the number of positions in (△x) and (△y) intervals will be infinite and not limited as we have shown before.By redefining area as a continuum of quanta, instead of points, the spatial quantum represents an elementary value of area (not divisible).
The quantum of area is an elementary value.In physics elementary values are not divisible, just like elementary particles.From here, the quantum cannot refer to more than one position in space.Because observation of the space below the quantum's level is not possible.It will result in the divisibility of the quantum itself, and this cannot happen, since it is an elementary value.Therefore, every quantum will refer to a single position inside the circle.Since the quantum has a non-zero value of area, the number of quanta, and therefore positions inside the circle will be limited.Now, since the Cartesian coordinates are used as an approximation, every position in the circle is approximated to a position in (x) and (y) axes.Position number (1) in the circle is approximated to position (x 1 , y 1 ) in the axes, position number (2) in the circle is approximated to position (x 2 , y 2 ) and so on, just like the idea from Figure 12.The number of positions inside the circle will be limited.This results in a limited number of positions in (△x) and (△y) intervals which bound the circle's area.This is consistent with the fact of position discreteness.By considering volumes, the same concept used in dealing with area is hold here, but with additional dimension (z) because volume is a three dimensional quantity.This leads to redefining volumes as a continuum of three dimensional quanta, instead of points.
Existence of elementary values for space, which we can call it spatial granules may appear strange, but let us recall a historical similarity.Before the 19 th century, many scientists believed that matter is infinitely divisible, which means that it is not composed of elementary constituents The idea of elementary constituents (elementary particles) of matter was a strange too.But today, we know that matter is composed of atomic and subatomic particles.Space has the same argument.Therefore, in science, we have to follow the evidence, regardless of its impression.On the other hand, the existence of the spatial granules solves one of the major problems in physics, which is the infinities problem.This problem was originated in 1927, when the German physicist Werner Heisenberg had discovered a new principle, which limits our knowledge in certain measurements.It is called the uncertainty principle.By considering the uncertainty principle, infinities in calculations arise from physical interactions that occur in spatial points, which lead to results that contradict our practical observations.But now, this problem disappears very simply.Because we know that these infinities do not exist because there are no spatial points.The space has elementary spatial granules that restrict the uncertainty principle, and preventing it from blowing up.
Generally speaking, and apart from the picture that is given by LQG for the granular space (links and nodes).Regarding space discreteness as an idea, once I was asked a question; if the space has a discontinuous structure which means that it consists of elementary quanta or granules.This discontinuity indicates that the space will end at the limits of each spatial quantum or granule.This in turn results in existence of gaps or a space-less physical entity that will separate these granules, just like the elementary constituents of matter (particles) are separated by space.These gaps should not have any space; since they separate the space itself (separate the spatial granules).From here, they are space-less physical entities Figure 13.Then, how can we move from one place to another through the space?Or, how can physical entities like waves move or pass through these physical gaps that contains no space (space-less entity), when they propagate from one location to another ?Because movement as a physical concept is concerned with space.It is the change of position with time, and position exists within the space, but not outside the space.Therefore, we can only move through the space (spatial granules), but we cannot move outside the space (space-less entities).
I think this question can be answered from a simple perspective [9].The value of space (length, width, and height) in the space-less physical entity (gaps) is Figure 13.Imaginary description of the granular space.The space-less physical entity which separates the spatial granules is dimensionless; it has no length, height or width.It is represented in this figure as gaps (or spaces) between the spatial granules just for the sake of demonstration.
zero, since it has no space.Therefore, it is a dimensionless entity.We know from our classical geometry that dimensionless entities are spatial points.Because in geometry (and also physics), points do not have length, width or height.Therefore, they are dimensionless.From here, spatial points and the space-less entity are physically indistinguishable.Both share the same concept.Therefore, the statement that says "the spatial granules are separated by gaps or space-less entity" is physically equivalent to the description that implies" the spatial granules are separated by virtual spatial points".We call it "virtual", because they do not play any role with the physical interactions.Because as we know, physical interactions occur within space, which means within the spatial granules.Bu they do not occur outside the space (the space-less entity), which appears to us as spatial points.Therefore, their only apparent role is to allow a linkage or connection between the space granules, which in turn allow us to move through a discrete space without any problem.
This also may explain why the space appears so smooth and continuous on the large scale, although it has a discontinuous structure.For the sake of clarification, a volume of frozen water (ice) appears so smooth and continuous at the large macroscopic level, although microscopically, it consists of discrete, discontinuous molecules.The reason behind this is that these separate, discontinuous molecules are linked to each other through the intermolecular forces, which hold these molecules as one continuous unity at the large level.By a "rough" resemblance, the space can be visualized from a similar perspective.Microscopically, it consists of discrete, discontinuous granules.These granules are linked or connected to each other through virtual spatial points (space-less physical entity).
But at the large macroscopic level, the value of one spatial granule (volume and area) becomes extremely small to be notable.Therefore, it can be reduced to a spatial point as sort of approximation; an approximate point.These approximate points (spatial granules) will not be discrete or discontinuous, because they will be linked to each other through the virtual spatial points.From here, at the large scale, the discrete space can be visualized as a continuum of spatial points (approximate and virtual), just like the classical continuous space is considered to be a continuum of spatial points.
Discussion and Conclusion
I believe in this paper, we have proven the existence of a discrete structure for the spacetime by a very simple approach.Time flows in discrete steps, and space is not infinitely divisible.Therefore, the major prediction of LQG is validated.
This represents a successful aspect for this theory, and shows that LQG is on the right direction, because it yields spacetime discreteness as a prediction from its theoretical framework, and does not use it as an assumption or a postulate.But our problem is that our evidences do not validate the (physical) geometrical features for the discrete spacetime which is given by LQG.Therefore, the following question is problematic; do the spatial granules have the same geometrical fea- tures that are provided to us by the LQG?The possibility that one day, another theory may emerge to the surface, and contains a discrete picture for the space, but with geometrical features that are entirely different from that of LQG.At that moment, unless we have an experiment or observation that can tell us the accurate description of this microscopic, quantized geometry.Then, the validity of each one of them will be questionable.I hope we can find an answer soon.But for now, I am excited, because we know that the spatial granules do exist, although we do not know their geometrical (physical) features.Their existence alone can explain how the uncertainty principle and the general theory of relativity exist in our universe without their usual conflict.Certainly, this is promising.
Figure 3 .
Figure 3. Discrete quantity versus continuous quantity, the difference between them is the existence of emptiness within the spectrum of a discrete quantity.
, 2 ,3 , 4 ,5 , e e e e e .where "e" is the elementary charge (≈1.6 × 10 −19 Coulombs).The sub-nucleic particles (Quarks) have smaller charges, but they are only found in combination.Now, in the spectrum above, you can notice that the number of possible values between any two
Figure 4 .
Figure 4.Regarding the quantum harmonic oscillator, as you can see, nothing exists between the successive values within the spectrum; there is a complete absence of any possible value between them.We call it emptiness within the spectrum.On the other hand, the spectrum of a continuous quantity lacks the emptiness within it; each small dot refers to a possible value within the spectrum.
Figure 4 .
Figure 4. On the other hand, a continuous quantity has a continuous spectrum.
Figure 5 .
Figure 5.Time axis; the past, the present, and the future for our observer.We consider our observer at the moment t A (the present for him), t 0 is defined as the moment at which observations have been started regarding this frame of reference.
Figure 6 .
Figure 6.As you can see in the time spectrum (axis), nothing exists between the present (represented by the black dot) and the future (represented by the straight green line);there is a complete absence of any possible value between them.Emptiness also exists between the present and the past.But this has no physical significance or meaning.Because time as we currently know advances to the future, and it does not flow reversely to the past.
Figure 7 .
Figure 7.A line with Position Spectrum.
Figure 9 .
Figure 9. Line as a continuum of points.It is infinitely divisible.Infinite number of positions in the length interval (∆x) results from the infinite number of points.
Figure 10 .
Figure 10.Line as a continuum of spatial quanta results in limited divisibility.Limited number of positions results from limited number of quanta in the interval (∆x).
from the classical definition of area; it represents a continuum of infinite number of points spreading in two dimensions.Since every point in the circle refers to ⎯or represents⎯ a position in space, existence of infinite number of points in the circle result in existence of infinite number of positions inside it.This in turn results in existence of infinite number of positions in (△x) and (△y) intervals which bound this circle as illustrated in Figure 11.This clearly contradicts the fact of position discreteness.This happens because every position in the circle refers to a position in (x) and (y) axes, for example, position or point number (1) in the circle refers to position (x 1 , y 1 ) in the axes.Position or point number (2) in the circle refers to position (x 2 , y 2 ), and point or position number (3) will refer to position (x 3 , y 3 ) in the circle and so on.
Figure 12 .
Figure 12.Existence of infinite number of positions inside the circle, results in infinite number of positions in (∆x) and (∆y) intervals.Since every position in the circle refers to a position in the intervals.
hv do not exist.In other words, there is a complete absence of any possible value in the spectrum between the two successive values.From here, we call it emptiness in the spectrum As you can see in the spectrum, emptiness exists between any two successive values, and its existence is a direct consequence for the discreteness of this physical quantity Figure4.On the other hand, a continuous quantity has a continuous spectrum which lacks the emptiness within it.
γ are three successive values within the spectrum.As you can see, the spectrum lacks the emptiness within it.Between any two successive values, there is always a possible, since (v) is classically considered as a continuous quantity.Therefore, the number of possible values between the two successive values | 8,475.6 | 2018-06-06T00:00:00.000 | [
"Physics"
] |
Correlation dimension of self-similar surfaces and application to Kirchhoff integrals
For s urfaces (cid:3) generated (cid:3) by (cid:3) a (cid:3) class (cid:3) of (cid:3) asymptotically (cid:3) self-similar (cid:3) processes (cid:3) we (cid:3) define (cid:3) a (cid:3) probability (cid:3) measure, (cid:3) supported (cid:3) by (cid:3) the (cid:3) surface. (cid:3) We (cid:3) show (cid:3) that (cid:3) the (cid:3) correlation (cid:3) dimension (cid:3) of (cid:3) that (cid:3) surface (cid:3) measure (cid:3) is (cid:3) linked (cid:3) to (cid:3) the (cid:3) self-similarity (cid:3) exponent (cid:3) almost (cid:3) surely. (cid:3) This (cid:3) result (cid:3) is (cid:3) applied (cid:3) to (cid:3) the (cid:3) Kirchhoff (cid:3) integral (cid:3) well (cid:3) known (cid:3) in (cid:3) scattering (cid:3) from (cid:3) rough (cid:3) surfaces. (cid:3) We (cid:3) show (cid:3) that (cid:3) a (cid:3) certain (cid:3) average (cid:3) of (cid:3) the (cid:3) scattered (cid:3) intensity (cid:3) exhibits (cid:3) almost (cid:3) surely (cid:3) a (cid:3) scaling (cid:3) that (cid:3) allows (cid:3) us (cid:3) to (cid:3) recover (cid:3) the (cid:3) self-similarity (cid:3) index (cid:3) of (cid:3) the (cid:3) surface (cid:3) in (cid:3) an (cid:3) experiment (cid:3) involving (cid:3) only (cid:3) one (cid:3) sample (cid:3) of (cid:3) the (cid:3) surface.
Introduction
Random fractal models are often used to describe natural rough surfaces that exhibit structures over a wide range of scales, such as sea surfaces, soils, mountains or rough deposits. A powerful tool for the remote characterization of such media is wave scattering. This method is now commonly used in various domains such as oceanography, geophysics or physics and chemistry of solid surfaces. We refer to [1] for a general introduction to fractal geometry and to [2] and [3] for some surveys on scattering from fractal surfaces. The wavenumber of the interrogating wave is chosen in accordance with the spatial frequencies to be probed in the material. The typical observation is a power-law dependence of the scattered intensity upon the wavenumber, with a non-trivial exponent which is interpreted as a fractal dimension of the surface. Such results can be well explained in the framework of the small-perturbation model, i.e in a regime where the roughness is small with respect to the illuminating wavelength. 3 On leave from CNRS. In that case, the scattered amplitude is a mere Fourier transform of the surface and the mean scattered intensity yields directly its power spectrum. However, the interpretation of the scattering data becomes problematic outside the perturbative regime. Very often, the scattering amplitude is then approximated by an oscillatory integral of the following form: A(k, q) = R n χ(r) e i(k·r+qh(r)) dr. (1.1) Here h(r), r ∈ R n is a continuous function representing the rough interface, χ is a cut-off function that delimits the illuminated area, A is the scattered amplitude for a fixed sourcereceiver geometry and (k, q) ∈ R n × R are the horizontal and vertical frequency variables, respectively. They are related to the components of the incoming and outcoming wave vectors by k = k out − k in and q = q in + q out (see figure 1). For simplicity, χ will be chosen as the (normalized) characteristic function of the unit sphere. We call an integral of type (1.1) a 'Kirchhoff integral'. This terminology originates from the so-called Kirchhoff approximation [4], which leads to this expression for the scattering amplitude, apart from trivial geometrical factors. The Kirchhoff approximation amounts to replacing the surface by its local tangent plane in the calculation of the unknown field at the boundary. This simplifying assumption is justified for smooth surfaces but is in principle incompatible with fractal surfaces, where such a tangent plane does not even exist. Many authors have chosen to work with band-limited fractal surfaces, but still the use of the Kirchhoff approximation is questionable since the frequency domain where the method applies and the fractal frequency regime do not necessarily overlap. A second instance where this integral appears is the case of two homogeneous media separated by a rough interface h and illuminated from one side, say the upper medium. In the limit of small contrast the scattering amplitude is accurately given by the so-called Born approximation, which amounts to replacing the total field in the lower medium by the incident field. The Born approximation in this context takes on the form of a Kirchhoff integral. Scattering from fractal surfaces under the Kirchhoff approximation has been extensively studied in the physical literature (e.g., [5][6][7][8][9][10][11][12]). Typically, the diffraction diagram retains the fractal properties of the surface and a dependence of the scattered intensity upon some fractal dimension is evidenced numerically or analytically in different regimes.
The aim of this work is to establish a precise and quantitative link between the fractal properties of h and the asymptotic behaviour of the Kirchhoff type oscillatory integral A as (k, q) → ∞. As we want to show now, the theoretical results of this paper allow us to recover the self-similarity exponent of an asymptotically self-similar random surface with probability one by a scattering experiment, where only the intensity is measured. This is important, since it shows that only one sample of the surface is needed (instead of a characteristic ensemble). The physical problem corresponds to n = 1 or n = 2, i.e. when h actually describes a profile or a surface.
However, the results of this paper hold and will be presented in arbitrary dimension n 1. Note that A is the (n + 1)-dimensional Fourier transform of the measure which is supported by the graph of h. Thus, at the mathematical level we are interested in how the high-frequency behaviour of the Fourier transform of μ is related to small-scale properties of h. The link will be established through the correlation dimension of the graph measure. The main results will be exposed in detail in the following section.
The graph measure
Consider the graph ⊂ R n+1 of a continuous real valued function z = h(r) defined on the unit ball r 1 in R n , = {(r, z)|z = h(r), r 1}. A natural question is to understand how fractal properties of h are mirrored in fractal properties of . When h is fractional Brownian motion (fBm) over R it is known that almost surely the Hausdorff-Besicovitch dimension D HB of the graph is simply related to the Hurst exponent H by D HB = 2 − H [13].
In this paper we will prove a similar result for the correlation dimension of the measure supported by which is defined as follows. Consider the extension to all of R n+1 of the pullback of the Lebesgue measure to under the projection π : → R n (r, z) → r.
More precisely, for any Borel set B ∈ R n+1 we set (with a normalization factor) where |·| denotes Lebesgue measure and V n is the measure of the unit ball in R n . Since h is continuous, this defines a measure whose support is precisely . Because of the normalization it is a probability measure. Formally we can write where V n χ is the characteristic function of the unit ball in R n . Note that this makes sense, if we agree to integrate over z first. We can also view this measure as the push forward of Lebesgue measure on R n by the mapping r → (r, h(r)). Thus to draw a random point according to μ, we pick a random point r uniformly in the unit ball in R n and go to the graph above r.
The main theorem of the paper can be formulated as follows. For the precise definitions of the employed terms see below.
Theorem 2.1. Let h be a version of (isotropic) fractional Brownian motion over R n with continuous trajectories. Let α be its self-similarity exponent. Let μ be the graph measure associated with each trajectory. Then the upper and lower correlation dimensions of μ are random variables which are constant almost surely with: Actually we will show this theorem for a larger class of processes, which are only asymptotically self-similar.
The correlation dimension
Here we want to recall the basic facts of correlation dimension. The correlation dimension d c (μ) of a probability measure μ in R m is defined in the following way [14]. Consider the so-called correlation integral at scale : where B(R, ) is the ball of radius around the point R. If a power-law behaviour is observed at small scales, then the corresponding exponent d c is termed the correlation dimension of the measure μ.
More precisely, the upper and lower correlation dimensions are defined as Note that X( ) is the probability that two points randomly chosen in R m according to the probability measure μ are within a mutual distance of , thereby justifying the denomination. This becomes obvious on rewriting: where H is the Heaviside function. The correlation dimension is related to the Hausdorff-Besicovitch dimension D HB ( ) of the support of μ by the general inequality: A useful property of the correlation dimension is its relation to the Fourier asymptotic of the measure. Define the Fourier transform of μ by and consider the Cesaro average: Then it was shown in [15] that the scaling of C(K) is governed by the correlation dimension of μ in the sense that In view of these results, theorem 2.1 can be given the following equivalent form. For a vector (k, q) in R n+1 and the graph measure μ we note that for almost all realizations of h.
The projected measure
Take a probability measure μ in R m . For a fixed direction v ∈ R m with v = 1 consider the projection along v ⊥ defined as follows: The push forward of μ under L defines a positive measure on R, which we denote by μ v . More precisely, for any Borel set B ⊂ R denote by L −1 B its pre-image under L. We then have Let a ⊂ R m , a ∈ R be the affine hyperplane defined by (v, R) = a. Then we can formally write If now μ is the graph measure considered previously it seems clear that this measure again should contain information about the fractality of h. However, as it turns out, it is hidden in a more subtle way, since it has a trivial correlation dimension. We did not succeed in proving a corresponding version of theorem 2.1 for the measure μ v . However, we will prove the following weaker result:
The wavelet correlation dimension
In the case that a probability measure ν in R m has a square integrable density f the correlation integral always exhibits a trivial scaling in the sense that d c [ν] = m, as follows from However, some sub-scaling may still reveal fractal properties of such a measure. The waveletcorrelation dimensions d ± w are a natural generalization of the standard correlation dimension [16,17]. For this paper it is enough to give the following definition: This fractal exponent can again be seen in the Fourier domain For the projected measure μ v we will show the following result. As a consequence, the projected measure has a non-trivial wavelet correlation dimension in the sense that: This will be applied to the scattering problem in the following way. For any direction v ∈ R n+1 , v = 1, the restriction of μ to av, a ∈ R, can be interpreted as the one-dimensional Fourier transform of the projected measure μ v . Indeed for R ∈ u we have av · R = au and we can write upon splitting the integral Thus, the scattered amplitude in a fixed geometry (k, q) = av is simply linked to the Fourier transform of the projected measure: with the following consequence:
Application to Kirchhoff scattering
The theoretical results have obvious applications to Kirchhoff scattering as we shall make explicit. We call a variable geometry scheme a configuration where the source and the receiver are allowed to vary freely in space and a fixed geometry scheme a configuration where the position of the latter is fixed but the frequency can vary. The former case corresponds to (k, q) varying freely in R n+1 while the latter case corresponds to a set (ak, aq), a ∈ R, for a fixed direction v = (k, q). We call I (k, q) = |A(k, q)| 2 the intensity. Note that no phase information is needed for this quantity, which makes it easier to assess in experiments than the full amplitude A. Moreover, we define the average intensity at maximum frequency K:
Asymptotically self-similar Gaussian random surfaces
We will prove the main theorems for a class of Gaussian random surfaces which include the fractional Brownian surfaces as particular case. Let C α be the class of centred Gaussian fields h(r) on R n satisfying the following properties: • The increments h(r 1 ) − h(r 2 ) are (wide sense) stationary and isotropic, with correlation and structure function • The structure function is asymptotically self-similar at small scales, that is for some constants 0 < α < 2 and σ > 0 and a continuous positive bounded function ϕ such that ϕ(0) = 1. • The first two derivatives of ϕ satisfy: The parameter σ in (3.16) is a vertical dilation parameter and the factor 2 is introduced for convenience. A finite non-zero value for ϕ(0) expresses the asymptotic self-similarity of the surface at small scales. Finally, the requirement (3.17) on the derivatives of ϕ is a nonoscillating condition for the correlation function. Isotropic fractional Brownian motion (fBm) on R n is paradigmatic of the class C α , and is obtained for ϕ = 1. Stationary processes with covariance of the form, where F is twice differentiable, are another instance of the class C α (a usual example is the Weilbulian covariance Eh(r 1 )h(r 2 ) = exp(− r 1 − r 2 α ). Another example can be found in the family of 1/f processes, i.e. stationary processes whose power spectrum (ξ) has an asymptotic power-law: (ξ) ∼ ξ −α−1 , ξ → ∞.
The graph measure
Let again μ be the graph measure associated with a realization h of a process in class C α . We will prove the following theorem.
Theorem 3.1. Let h be a version of a process in class C α over R n with continuous trajectories.
Let μ be the graph measure associated with each trajectory. Then the upper and lower correlation dimensions of μ are random variables, which almost surely are the following constants: Proof. Let X( ) be the correlation integral (2.2) associated with μ. By definition of the measure μ we have 4 and thus Now the increment h(r 1 ) − h(r 2 ) is a centred Gaussian process with p.d.f: and thus where we have introduced the kernel: and the error function: In view of the inequality this implies: With the change of variables (r 1 , r 2 ) → (r 1 − r 2 , r 1 + r 2 ) we obtain for < 1/2 (s n is the surface of the unit sphere in R n ): Next we show that the standard deviation decreases faster than the mean. From (3.18) we have (EX( )) 2 = dr 4 χ(r 1 )χ (r 2 )χ (r 3 )χ (r 4 )U ( r 1 − r 2 ; )U ( r 3 − r 4 ; ) (r 1 , r 2 , r 3 , r 4 ; ) where we have introduced the kernel U (r 1 , r 2 , r 3 , r 4 ; ): Note that the effective domain of integration in the above integrals is reduced to a region r 1 − r 2 < , r 3 − r 4 < , as follows clearly from the definition of the kernel U.
To estimate the kernel we need the p.d.f of the Gaussian vector with simplified notation: Then by definition of (3.25) r 2 , r 3 , r 4 , u, u ).
Now the p.d.f can be factorized as
where p(r 1 , r 2 ; u) is the p.d.f (3.19), so the kernel U (r 1 , r 2 , r 3 , r 4 ; ) may be rewritten as which in view of the inequality in lemma 4.2 yields U (r 1 , r 2 , r 3 , r 4 ; ) U ( r 1 − r 2 ; )Ũ (r 1 , r 2 , r 3 , r 4 ; ) with a modified kernel: Since Var X( ) = To estimate this quantity we introduce the function: for which we observe that Applying the mean value theorem to f and noting that it has a decreasing derivative we obtain the following estimation: Since ρ is not bounded away from 1 we have to distinguish two domains. For any 0 < β < 1 we define two complementary domains D and D c by On the domain D c we have r 1 −r 2 + r 3 −r 4 r 1 −r 3 2 1−β < 1 for small enough, so we may apply the lemma 4.1 in the appendix: with a positive exponent. On D we use the simple fact that U ( r 3 − r 4 ) U (r 1 , r 2 , r 3 , r 4 ; ) Ũ (r 1 , r 2 , r 3 , r 4 ; ) 1, together with the uniform estimation: Altogether this yields: Now we can chose the optimal β. A direct computation shows that the optimal exponent is We conclude with a standard argument using Tchebyshev inequality and Borel-Cantelli lemma that X( ) ∼ EX( ) a.s. as → 0. In view of (3.24) this completes the proof.
The projected measure
Let us now prove theorem 2.3 for asymptotically self-similar processes.
Theorem 3.2.
Let h be a version of a process in class C α over R n with continuous trajectories. For fixed v(k, q) ∈ R n+1 let μ v be the projection of the graph measure along v ⊥ associated with each trajectory. Then we have Proof. The proof is very similar to that of theorem 3.1 so we will make it short. Let X( ) be the correlation integral (2.2) associated with μ v . To pick a point uniformly according to the measure μ v , we pick a point r uniformly on the unit ball of R n and form k · r + qh(r). We thus have Similar calculations as previously yield It is easy to check that the above function is differentiable with respect to > 0, with a finite derivative in zero. This shows that EX( ) ∼ → 0. (3.28) Now from Jensen's inequality we have log EX( ) E log X( ) which, combined with (3.28), implies Ed ± c [μ v ] 1. Since 1 is the maximum reachable value for a one-dimensional probability measure, this implies the stated result.
Remark.
A correlation integral similar to (3.27) has been considered in [18] for R d -valued (d 1) random processes over R and for k = 0. Specializing the results of this paper to d = 1 and to fractional Brownian motion one obtains d c = 1 almost surely. As a consequence, the projected measure has a non-trivial wavelet correlation dimension in the sense that:
Let us decompose this expression in EI (ak, aq) = EI (0, aq) + R n ×R n dr 2 (e iak · (r 1 −r 2 ) − 1) and start with the evaluation of the first term on the right-hand side. Using (3.23) again and performing the change of variables (r 1 , r 2 ) → (r 1 − r 2 , r 1 + r 2 ) we obtain The above integrands are radial functions and thus we also have where s n is the surface of the unit sphere in R n . Using the asymptotic behaviour S(r) ∼ r α we obtain after straightforward calculations: The contribution of the remaining term can be estimated with the same change of variables as previously, leading to a 2n/α |EI (ak, aq) − EI (0, aq)| a 1−2/α a → ∞.
Since 2/α > 1, this shows (3.29). The second part of theorem is a consequence of definition (2.11) and Jensen's inequality log E I (ak, aq) E log I (ak, aq).
The conclusion follows from the mean value theorem. Proof. Consider two cases. First suppose |b| < a. Then by a simple change of variables: where the concavity of the error function on R + has been invoked for the last inequality. In the alternative case |b| a we have This proves the lemma. | 4,870.6 | 2003-08-12T00:00:00.000 | [
"Mathematics"
] |
Comprehensive performance of a low-cost spring-assisted mechanism for digital light processing
In additive manufacturing, separation is an important issue in constrained-surface digital light processing. A force higher than the force peak and a sharp increase in force increase the printing failure rate. This study comprehensively evaluated the performance of a low-cost spring-assisted separation mechanism. The Taguchi method was used to confirm the correlation between the inputs of the spring-assisted mechanism (number, coefficient, working height, free height of spring, and length of working arm) and obtain the parameters that minimize the separation force and time. Compared to the pulling-up and tilting mechanisms, the spring-assisted mechanism reduces the difference between the maximum and minimum separation forces for different geometric shapes and areas by 2.4 and 3 times, respectively. In addition, the spring-assisted mechanism solves the problems of the pulling-up mechanism, which has two separation force peaks, and the tilting mechanism, which has a sharp increase in force before the final separation. Finally, the separation force of specific geometric shapes and areas was predicted by the linear regression equation, and the error rate was maintained within 5%, which helped to significantly reduce the calculation costs and time.
Introduction
Additive layer manufacturing technology forms threedimensional (3D) models by stacking materials layer by layer. It can easily build high-quality models with complex structures while minimizing time and material costs. It has influenced industrial manufacturing to gradually shift from traditional to nonconventional processes [1]. Many incremental processes have been used, with the main differences among them being the stacking method and the materials used. Some processes layer by melting or softening the material, such as selective laser melting (SLM), direct metal laser sintering (DMLS), directed energy deposition (DED), and fused deposition modeling (FDM) [2][3][4].
Stereolithography (SLA) is another commonly used technique that uses a light source to irradiate a photosensitive liquid resin with a fixed thickness that is stacked layer by layer [5][6][7]. The key to successful manufacturing is that the light source must provide sufficient energy to cure the material. For example, SLA uses a laser, whereas digital light processing (DLP) uses a projector. DLP can be further divided according to the modeling direction of the build plate: constrained surface (i.e., bottom-up) [8] and free surface (i.e., top-down) [9]. Constrained-surface DLP has become the mainstream approach because of its good material filling rate, low amount of material waste, and short processing time [10]. However, a constrained-surface DLP has a separation failure problem. This is because the space between the cured layer and resin tank is close to the vacuum state; thus, they can only be separated by an external force [11]. An excessive separation force can easily be generated when large areas are printed, damaging the cured layer, or separating the cured layer from the build plate. This can result in printing failure, limiting the printing area.
There are many ways to reduce the separation force, such as optimizing the experimental parameters [12][13][14], changing the contact area between the bottom of the resin tank and the cured layer [15][16][17], or adding materials [18][19][20]. The development of an innovative separation mechanism is a practical approach that may provide additional benefits. Wang et al. [21] proposed an active separation bottom-up stereolithography method, which adds a layer of watercontaining Teflon film in the resin tank and pumps water to reduce the separation force on the cured layer. Another practical approach is to change the pulling-up mechanism, which fixes both sides of the resin tank during separation [22]. Wu et al. [23] found that the tilting mechanism improved the uniformity of photosensitive liquid resin. Changing the separation speed and using polydimethylsiloxane (PDMS) film can reduce the stress on a structure with a large separation force to avoid damaging the printed item. Jin et al. [24] and Xu et al. [25] proposed using vibrations to reduce the separation force without damaging the printed item or affecting the dimensional accuracy. They developed a vibrationassisted system and built a mechanical-analysis model. Lin et al. [26] built a spring-assisted mechanism to reduce the separation force by using the additional pre-stress provided by the compressed spring.
Another approach to varying the separation force is to change the printing geometry. Determining the relationship between the geometry and separation force is valuable for improving manufacturing efficiency. Pan et al. [12] found that a porous shape can increase the effective separation area of the cured layer and obtained an approximate polynomial relationship for predicting the separation force of a porous round. Khadilkar et al. [27] used deep learning to predict the separation stress distribution in cured layers. Their method can be used to determine the stress distribution of each layer and is more efficient than the finite element analysis. Yadegari et al. [28] changed the printing parameters to obtain the separation force-time curve of a resin tank containing a PDMS film. Their experimental results revealed that a fracture mechanics model can be used to predict the maximum separation force. Gritsenko et al. [29] used a fluid mechanics model to simulate the separation force of cylindrical parts, and concluded that optimizing the transient parameters can reduce the separation force. In addition, increasing the printing speed/rate and maintaining a constant height were found to reduce processing time. He et al. [30] used a Siamese neural network to predict the printing speed and performed physical model simulations to obtain the appropriate speed range and best speed for printed items. Wang et al. [31] proposed using a neural network to predict the separation force for molding five symmetric geometric shapes. They used the finite element method (FEM) to verify the prediction model of a neural network. However, their model generated relatively large errors when dealing with complex geometric shapes and could only predict reasonable trends.
The literature review above reveals that separation and mechanism costs are important issues in DLP machines. This study demonstrated the effectiveness of a low-cost spring-assisted mechanism for the separation of DLP. The Taguchi method was used to confirm the correlation between the inputs of the spring-assisted mechanism and obtain the parameters that minimize the separation force and time. Different geometric shapes and areas were printed to compare the manufacturing stability of the proposed method and those of the pulling-up and tilting mechanisms. In addition, the separation issue of more than one force peak and sharpness was also reviewed. Finally, a linear regression equation was established to predict the separation force in a specific geometric shape and area to significantly reduce the calculation cost and time.
The remainder of this paper is organized as follows. Section 2 presents the spring-assisted mechanism, separation force measurement equipment, Taguchi method, and linear regression. Section 3 presents experimental results verifying the effectiveness of the spring-assisted mechanism. Section 4 discusses the performance of the spring-assisted mechanism and its application. Finally, Section 5 presents the conclusions and possible future directions of research.
Spring-assisted mechanism
The schematic of the spring-assisted mechanism is shown in Fig. 1a. The right side is a fulcrum, and the left side is free to move up but subjected to an initial force (Fr) by a compressed spring. According to Hooke's law, Fr can be calculated based on the spring compression, spring constant, and number of springs. When a cured layer starts to separate, the build plate rises and generates a separation force Fp. Both Fp and Fr increase with the lifting distance of the build plate until the cured layer separates. After separation of the cured layer, Fr returns to its initial value until the next cycle. To implement the spring-assisted mechanism, this study uses Titan 2 (Kudo3D, Inc.) [32] to build the prototype, as illustrated in Fig. 1b. One side of the resin tank, which is close to the Z axis of Titan 2, is a fulcrum. The compressed spring mechanism is located on the other side. It is fixed by a stainless steel bar, two purple hollow plastic tubes, and two long screws that can be locked on Titan 2. Moreover, it is sandwiched between two purple plastic bars with holes in them. The upper one is buckled on the stainless steel bar, whereas the lower one is buckled on the resin tank.
According to Hooke's law and the moment theorem, parameters that may affect the separation force include the number of springs, spring coefficient, working height of the springs, free height of the springs, and length of the working arm. Table 1 presents the ranges of values of these five parameters obtained in a feasibility experiment. The springs required for the experiment were purchased from a Japanese company (Misumi) [33] and the material is SWP-A of JIS G 3522 (Japanese Industrial Standard piano wire). The printed items have geometric shapes, including a regular triangle, square, pentagon, hexahedron, round, and donut. The items were printed from an ABS-like resin (3DM Company) [34].
Separation force measurement system and separation force-time diagram for separating the cured layer
The separation force measurement system comprised an LC201 load cell, DMD4059 signal conditioner (OMEGA Engineering), and USB-6002 data acquisition card (DAQ Device, National Instruments) [35,36]. Because the load cell measured the voltage, the separation force was obtained using the following conversion: where x is the mass (g), and y is the voltage. Figure 2 shows the diagram of the separation force versus time for a single separation process. Points A and B represent the exposure and molding of photosensitive resin, respectively. At point B, the light source stopped projecting the light, and the separation process began. At points B (1) x = (y − 0.0291)∕ − 0.0004 and C, the build plate started to move upward and separate from the resin layer at point D, where the separation force was at its maximum. Therefore, the separation distance was from B to D, and the separation time was represented by t. At point E, the build plate returned to the position for printing the next layer after a waiting interval.
Taguchi method
The Taguchi method uses experimental planning and statistical techniques to reduce the overall number of experiments and determine the factors that affect manufacturing quality to stabilize the quality and reduce costs. The Taguchi method involves the following steps: (1) selecting the quality characteristics; (2) determining the ideal function of the quality characteristics; (3) listing all factors that affect the quality characteristics; (4) determining the levels of signal, control, and noise factors; (5) selecting the appropriate orthogonal array and plan the experiment; (6) conducting the experiment and collect data; (7) analyzing the data; and (8) verifying the experimental results. Quality characteristics have corresponding ideal target values, which can be divided into the following types: nominal best, smaller better (STB), and larger better. These have target values of zero, infinity, and threshold values. A control factor is a parameter set by the user to improve the robustness. The signal factor is a parameter that the user adjusts to change its quality characteristics. The noise factor is a parameter that affects the quality characteristics that cannot be controlled by the user. An experimental array is considered orthogonal if a direct intersecting relationship exists between any two rows. The Taguchi method provides many useful orthogonal arrays, the most significant advantages of which are simplified data analysis and reduced experimental costs. The signal-to-noise ratio (SNR) is a measurement of quality that considers the average value and standard deviation of a quality characteristic. A characteristic is considered to be of good quality when the average value is consistent with the target value and the standard deviation is small.
In this study, the STB was adopted to minimize the separation force. The SNR, mean value, and standard deviation are calculated as follows: where y is the average value, y i is the measured value of the quality, and n is the number of samples. Analysis of variance (ANOVA) was performed to determineobtain the influence and error degree of error of each experimental parameter on the entirewhole process. This required the calculationcalculating of the correction number (CF), sum of squares between groups (SSA), total sum of squares, mean square (MSA), and contribution percentage (P): where n is the number of calculated items, p is the number of levels of control factors, m is the number of levels used, and df is the number of levels of control factors minus one.
An L18 orthogonal array was used in this study. To avoid interaction between parameters, the number of springs, spring coefficient, working height of springs, free height of springs, and length of the working arm were set to rows 1, 3, 4, 5, and 6, respectively. An experimental plan was developed according to the upper, intermediate, and lower limits of the control ranges for each parameter in Table 1. The values are presented in Table 2.
Linear regression model
The Pearson correlation coefficient is typically used in data display and curve integration to measure the linear dependence between two variables. The degree to which this coefficient approaches 1 is related to the number of datasets. A smaller number of datasets increased the fluctuation of the correlation coefficient, and the absolute value of the correlation coefficient approached 1. Increasing the number of datasets decreased the absolute value of the correlation coefficient. Therefore, if the number of samples is relatively small, a large correlation coefficient is insufficient to indicate a close linear relationship between the variables X and Y. The Pearson's correlation coefficient of the samples is calculated as follows: where S x is the standard deviation of x, S y is the standard deviation of y, and C x,y is the covariance of the variables x and y.
To confirm a linear relationship, the significance of the sample correlation coefficient must be verified. The verification data is obtained using This value is compared with the value (degrees of freedom: n-2) as given in Appendix Table 10. The correlation coefficient is considered significant if the following condition is satisfied: This indicates that a linear relationship between query values can be explained (significance level a = 0.05). If all data meet the condition in Eq. (15), a linear correlation between x and y can be confirmed. The coefficients are calculated as follows: The linear regression model is given by Figure 3 shows a representative printed item: a 20 × 20 mm square. Table 3 presents the separation force data obtained in the experiments and the SNR calculated using Eqs. (1)-(3). Minimizing the separation force increased the SNR. Table 4 and Fig. 4 indicate that the optimal parameter combination for minimizing the separation force was one spring, a spring coefficient of 0.1 N/mm, a spring working height of 14.5 mm, a free height of the spring of 15 mm, and an axle base of 165 mm. As Table 5, the ANOVA results indicate that the most important parameter affecting the separation force was the axle base, followed by the free height of the spring and spring coefficients. Figure 5 shows the 3D object produced according to the optimal parameters. This object demonstrates the manufacturing stability of the spring-assisted mechanism. To fully analyze the characteristics of the spring-assisted mechanism, the printing parameters with the minimum separation time were verified experimentally. The experimental data, SNR, factor response table, factor response diagram, and ANOVA results are presented in Appendix Tables 11, 12 and 13 and Fig. 12. The optimal parameter combination for minimizing the separation time was two springs, a spring coefficient of 0.3 N/mm, a spring working height of 13.0 mm, a free height of 25 mm in the spring, and an axle base of 165 mm. The ANOVA results revealed that the most important parameter affecting the separation time was the axle base, followed by the free height of the spring and the spring coefficients. Figure 6 summarizes the experimental results for the minimum separation force and time of the spring-assisted, pulling-up, and tilting mechanisms. By adjusting the printing parameters, the spring-assisted mechanism can be used to vary the separation force and time. Because the separation force is the key factor that affects printing quality, the subsequent experiments and performance comparisons in this study mainly focused on optimizing the separation force.
Comparison of separation mechanisms for printed items with different geometric shapes and areas
The separation characteristics of the spring-assisted, pulling-up, and tilting mechanisms were compared for different geometric shapes and areas. Figure 7 shows the geometric shapes of the printed items: square, regular triangle, pentagon, hexahedron, round, and donut. Each geometric shape had a printing area of 400 mm 2 . Figure 8 shows the measured separation forces for the three mechanisms. Table 6 lists the experimental results. For the different geometric shapes, the average separation force of the spring-assisted mechanism was 3.03 N with a standard deviation of 0.07. This was 33.3% lower than the separation force of the pullingup mechanism and only 5.2% higher than that of the tilting mechanism. The spring-assisted mechanism had the smallest standard deviation in the separation force, indicating that it provided the highest stability during printing. Figure 9 shows the separation forces of the three mechanisms for squares with printing areas in the range of 100-2500 mm 2 . Table 7 indicates that the separation force of the springassisted mechanism increases linearly with the printing area, with values between those of the pulling-up and tilting mechanisms.
Linear regression model for predicting the separation force according to the geometric shape
To confirm that linear regression could be used to predict the separation force of the spring-assisted mechanism, an equation was constructed, and the calculated values were compared with those of the printing experiments. First, the separation force equations were constructed based on square areas of 100, 400, 900, 1600, and 2500 mm 2 . The experimental data of the separation forces are presented in Appendix Tables 14, 15, 16, 17, 18, 19, 20 and 21.
The experimental data revealed that the separation force was clearly higher for the first 10 layers than for the last 15 layers, which can be attributed to the surface tension between the build plate and photosensitive liquid resin [37]. To maintain the consistency of the samples, only data for the 11 th -25 th layers were used for statistical analysis. The difference between the average and median values of the separation force data was less than 10%, indicating a normal distribution. A statistical analysis was performed using Eqs. (10)- (13), and the resulting data are presented in Table 8. The Pearson correlation coefficient was 0.99, and all statistical values met the condition in Eq. (15), indicating that the correlation coefficient was significant. This implies that the linear regression equation for the separation force of the printed squares of different areas is meaningful. Figure 10 shows the linear regression trend line for the separation force of the printed square. This is expressed by where x is the area (mm 2 ). The prediction accuracy of the separation force was determined by the error rate between the predicted separation force and experimental data. Six squares with different areas were selected for verification. Table 9 lists the predicted separation force, experimental data, and error (19) y(x) = 1.433 + 0.003114x Fig. 9 Separation forces depending on the printing area: a spring-assisted mechanism, b pulling-up mechanism, and c tilting mechanism Fig. 11. The equations are as follows: where x is the area (mm 2 ).
Discussion
The design concept of the spring-assisted mechanism was similar to that proposed by Wang et al. [21]. This provides a counteracting force during the separation process, which (20) y(x) = 1.09 + 0.005867x reduces the separation force. However, it has a lower cost than introducing oxygen as an inhibitor [38] or using a water pumping mechanism [21]. Figure 6 shows that the curve of the minimum separation force or separation time of the spring-assisted mechanism was between those of the pulling-up and tilting mechanisms. This demonstrates the flexibility of the spring-assisted mechanism in adjusting the parameters to vary the separation force and time. An ANOVA was performed to determine the relationship and significance of each parameter considered in this study, following the methods of Zamheri et al. [39] and Putra et al. [40]. Comparing the optimal parameters for minimizing the separation force and separation time of the spring-assisted mechanism, both require the longest axle base. In addition, reducing the separation time required to apply a relatively large pre-stress before the object is separated based on Hooke's law requires an increase in the number of springs and a reduction in the free height of the spring. Meanwhile, minimizing the separation force requires a decrease in the spring coefficient and length of the working arm. Figure 8 shows that the spring-assisted mechanism reduced the separation force generated from printing triangles for a given printing area. Table 6 indicates that the spring-assisted mechanism printed different geometric shapes with the smallest standard deviation of the separation force and reduced the difference between the maximum and minimum forces by 2.4 and 3 times compared with the pulling-up and tilting mechanisms, respectively. This helped to reduce the probability of defects or failures on the printed items and control the printing quality. Pan et al. [12] and Yadegari et al. [41] proposed that porous geometric shapes would increase the effective separation area, which would increase the separation force. This characteristic was observed when the spring-assisted mechanism printed the donut, which had the maximum separation force.
For printing items with different areas, Fig. 9 shows that the spring-assisted mechanism eliminated the problems of the pulling-up mechanism, which had two separation force peaks, and the tilting mechanism, which had a sharp increase in the force before separation. This helped improve the manufacturing stability. Yadegari et al. [41] experimentally demonstrated that the separation force increased with the cross-sectional area of the printed item. The spring-assisted mechanism fits this trend and exhibits a linear relationship similar to that of the pulling-up mechanism. Therefore, linear regression was used to predict the separation force, which could provide a reference for large-area printing. The linear regression model accurately predicted the separation force of the printed squares with different areas, which supports the arguments of Yadegari et al. [41] and Gritsenko et al. [29] that mathematical models can be used to predict the separation force. Finally, if the standard deviation of the separation force of the round shape is taken as a benchmark, the results indicate that the separation force increases with the complexity of the geometric shape. This result is the same as that of Wang et al. [31]. However, the linear regression model can still predict a reasonable trend for such shapes.
Conclusions
This study demonstrated the superior separation performance of a low-cost spring-assisted mechanism. The contributions of this study are summarized as follows.
1. The most important parameter affecting the separation force and separation time of the spring-assisted mechanism was the axle base, followed by the free height of the spring and spring coefficients. An axle base of 165 mm can minimize both the separation force and separation time. 2. The spring-assisted mechanism printed different geometric shapes with the smallest standard deviation of the separation force and reduced the difference between the maximum and minimum forces by 2.4 and 3 times compared to the pulling-up and tilting mechanisms, respectively. In addition, the spring-assisted mechanism solved the problems of the pulling-up mechanism, which has Fig. 11 Linear regression of separation force: a regular triangle, b pentagon, c hexahedron, d round, and e donut two separation force peaks, and the tilting mechanism, which exhibits a sharp increase in force before the final separation. This helped reduce the probability of defects or failures in the printed items and control the printing quality. 3. A linear regression trend was obtained between the separation force of the spring-assisted mechanism and printing area. In addition, equations were constructed to quickly predict the separation force with different geometric shapes and areas, which greatly reduced the calculation cost and time. The prediction error rate gen-erally remained less than 5%, except for an area of 676 mm 2 .
For future research, dimensional tolerance can be incorporated as an evaluation objective for multi-objective optimization. In addition, carbon emissions from energy consumption and the impact of volatile organic compounds produced during the manufacturing process on human health can also be considered. Finally, a fixture that enables the spring-assisted mechanism to be installed on a commercial printing mechanism of the printer is proposed to speed up commercialization. | 5,674.2 | 2023-02-04T00:00:00.000 | [
"Engineering"
] |
The mass range of hot subdwarf B stars from MESA simulations
Hot subdwarf B (sdB) stars are helium core burning stars that have lost almost their entire hydrogen envelope due to binary interaction. Their assumed canonical mass of $\rm M_{\mathrm{sdB}}\sim0.47 M_{\odot}$ has recently been debated given a broad range found both from observations as well as from the simulations. Here, we revise and refine the mass range for sdBs derived two decades ago with the Eggleton code, using the stellar evolution code MESA, and discuss the effects of metallicity and the inclusion of core overshooting during the main sequence. We find an excellent agreement for low-mass progenitors, up to $\sim2.0 \rm M_{\odot}$. For stars more massive than $\sim2.5 \rm M_{\odot}$ we obtain a wider range of sdB masses compared to the simulations from the literature. Our MESA models for the lower metallicity predict, on average, slightly more massive sdBs. Finally, we show the results for the sdB lifetime as a function of sdB mass and discuss the effect this might have in the comparison between simulations and observational samples. This study paves the way for reproducing the observed Galactic mass distribution of sdB binaries.
INTRODUCTION
Hot subdwarf B (sdB) stars are core helium-burning stars on the horizontal branch with a very thin hydrogen layer.Due to the interaction with a binary companion they have lost almost their entire envelope during their earlier evolution, but managed to ignite helium after the envelope was ejected (see Heber 2009, for a comprehensive review).
Most studies of sdB stars assume a canonical mass of M sdB ∼ 0.47 M ⊙ , which corresponds to the core mass for a ∼solar-mass giant at the tip of the red giant branch (RGB), where the helium core flash occurs.However, the observational constraints of the masses reveal a rather broad mass range.For example, Fontaine et al. (2012) obtained an empirical range of M sdB ∼ 0.35 − 0.63 M ⊙ from asteroseismology of 15 pulsating sdBs, and a somewhat broader range of M sdB ∼ 0.29 − 0.63 M ⊙ if eclipsing sdB binaries are also included.A similar mass distribution was recently found by Schaffenroth et al. (2022) for a larger population of sdBs in close (post-common envelope) binary systems with low-mass main sequence (MS) or brown dwarf companions, as well as by Lei et al. (2023) for single-lined hot subdwarf stars in The Large Sky Area Multi-Object Fiber Spectroscopic Telescope spectra survey (LAMOST).
Theoretically, the range of possible masses for sdBs was calculated by Han et al. (2002), who showed that the resulting mass depends on the progenitor mass and other assumed parameters and physical processes, such as metallicity, core overshooting, etc.While the derived masses were restricted to a rather small range ★ E-mail<EMAIL_ADDRESS>† E-mail<EMAIL_ADDRESS>of M sdB ∼ 0.45 − 0.48 M ⊙ (i.e.close to the canonical value) for low-mass progenitors with initial masses < ∼ 1.3 M ⊙ , progenitors with larger initial masses can lead to smaller sdB masses (as low as ∼ 0.32 M ⊙ ) but also to larger masses if the progenitor was more massive than ∼ 3 M ⊙ .
Having accurate and reliable constraints of the sdB masses, both from an observational and theoretical point of view, is crucial to test the evolutionary paths toward these stars.The work from Han et al. (2002) was based on the stellar evolution code from Eggleton (1971).We here redo these calculations and refine the grids using the most updated and flexible open source code Modules for Experiments in Stellar Astrophysics (mesa 1 , Paxton et al. 2011Paxton et al. , 2013Paxton et al. , 2015Paxton et al. , 2018Paxton et al. , 2019;;Jermyn et al. 2023) to provide updated mass ranges for sdB stars, depending on the mass and metallicity of the progenitor, and the sdB phase duration.This code stands out due to its collaborative, open-source nature, which ensures it is continuously being tested and updated.It incorporates modern numerical techniques for efficient and accurate stellar evolution simulations, and its modular design and flexible microphysics enable customization and versatility, supporting a wide range of astrophysical processes.The inlist files for mesa are also included, allowing for the reproduction of our calculations across a wide range of initial masses, metallicities, and overshooting strengths.This is particularly valuable for further binary population synthesis models and comparison with observational distributions.We compare the results from mesa with those obtained by Han et al. (2002) with the Eggleton code, and study the effects of including core overshooting during the MS phase of the progenitors of the sdBs.
MODELS WITH MESA
Previous works have already used the stellar evolution code mesa to simulate sdBs.For example, Schindler et al. (2015) calculated a series of sdB stellar evolution models with mesa to compare with observational results derived from spectroscopy (Green et al. 2008) and with previous sdB models that use different stellar codes (Charpinet et al. 2002;Bloemen et al. 2014).They assumed initial masses between 1.0 and 2.5 M ⊙ and applied the built-in tool from mesa called 'Relax Mass' to artificially remove the envelope at the tip of the RGB, in order to reproduce the sdB phase.Later, Ghasemi et al. (2017) used mesa models to study the effects of overshooting by comparing their models with the evolutionary and asteroseismic properties of the observed sdB pulsator KIC 10553698.They modeled the evolution of a star with an initial mass of 1.5 M ⊙ from the pre-main-sequence phase until the core helium depletion, and also used the 'Relax Mass' tool at the tip of the RGB to remove the envelope in order to obtain an sdB star, with different values of the overshooting parameter.
Following these works, we here used the single star module from mesa to evolve stars with different initial masses and metallicities, and the 'Relax Mass' tool to artificially remove the envelope during the RGB.The maximum sdB mass for a given progenitor was obtained by removing the envelope at the tip of the RGB, defined by the onset of helium burning in the core.The minimum sdB mass, on the other hand, was obtained by removing the envelope at different stages after the MS, but before the tip of the RGB, in order to find the minimum mass that manages to ignite helium in the core after removing the envelope.For low-mass progenitors, in which the core becomes degenerate during the RGB phase and ignition of helium in the core occurs explosively (experiencing core helium flash), we further required that stable helium burning was achieved, thus leading to a developed carbon core.Otherwise, the star was considered as a 'failed sdB star'.
We used a grid of initial masses from 0.8 to 6 M ⊙ and two different metallicities ( = 0.02 and 0.004, following Han et al. 2002).Core overshooting was included during the MS for both metallicities (see Sec. 2.1).Models without overshooting where also considered but only for = 0.02, in order to allow for a direct comparison with the same models from Han et al. (2002).For each possible progenitor in our grid, the modeling process was divided into three steps: i) evolving the star from the pre-main-sequence phase to the terminal age main sequence (TAMS); ii) evolving the star from the TAMS to the tip of the RGB; iii) loading the star at different evolutionary stages after the TAMS and rapidly removing the envelope.In what follows, we detail some important parameters considered in each of these steps.
From pre-MS to TAMS
The predictive mixing scheme was used to establish the limit between the radiative and convective zones during the MS (see Section 2.1 in Paxton et al. 2018).The mixing length alpha ( ) parameter, defined as the local pressure scale height, was set to 1.8 (e.g., Ostrowski et al. 2021).
The inclusion or not of core overshooting is important during the MS, because the mass of the helium core at the TAMS depends on mixing processes, therefore affecting the derived mass range for the sdBs.There is growing evidence that models without overshooting do not reproduce observations for giant stars initially more massive than ∼ 2 M ⊙ (e.g., Constantino & Baraffe 2018).Therefore, we decided to include overshooting in our standard mesa models.Some models without overshooting were also calculated for = 0.02, in order to analyse the effect that overshooting has on the sdB mass, but also to allow for a more direct comparison with the masses obtained with the Eggleton code, presented by Han et al. (2002).
In our standard models, where core overshooting was included during the MS phase, we adopted the exponential diffusive overshooting scheme from mesa (Herwig 2000;Paxton et al. 2011), in which overshooting is treated as a diffusion process with an exponentially decreasing diffusion coefficient (see, e.g., Zhang et al. 2022).The free parameter ov sets the extent of the overshoot region.Here we adopted the recommended value from Herwig (2000), ov = 0.016, based on fits to the stellar models from Schaller et al. (1992), which is roughly equivalent to ov,step = 0.2 in the step overshoot scheme (also implemented in mesa).
The models were saved when the TAMS was reached, defined by the depletion of hydrogen in the center (when the central lower limit for hydrogen is 10 −4 ).
From TAMS to the tip of the RGB
The total power of the helium burning reactions was used to stop the simulation at the tip of the RGB, with the limiting value set to 10 L ⊙2 .This condition corresponds to the rapid increase in helium burning luminosity following helium ignition at the tip of the RGB.For the minimum sdB mass the models should be stopped before reaching the tip of the RGB.This was obtained simply by stopping the simulation at a specific model number, which corresponds to a helium core mass smaller than the core mass at the tip of the RGB.Whether this core mass ignites helium after removing the envelope was tested in the third step of the modelling process.
Removing the envelope after the TAMS and evolving until the white dwarf cooling track
For each progenitor, we loaded a previously saved model either from the tip of the RGB (to derive the maximum sdB mass), or with different helium core masses after the TAMS (to derive the minimum sdB mass) and applied the 'Relax Mass' process from mesa to remove the envelope.A small hydrogen envelope was left around the helium core, part of which was incorporated into the core before helium ignition.We found that leaving an envelope of 0.01 M ⊙ for the 'Relax Mass' process resulted in a hydrogen envelope mass of the order of ∼ 10 −2 − 5 × 10 −4 M ⊙ during the sdB phase (e.g., Schindler et al. 2015).
From this point, having already removed the envelope, we allowed the star to evolve until the luminosity drops below log (/ ⊙ ) = −3.5, when the star is already on the white dwarf cooling track.We clarify here that, while we were not interested in the white dwarf phase, setting such a low luminosity as the stopping condition in mesa was needed to ensure that we really obtain the minimum sdB masses.This is because, as we will see in the next Section, when the envelope is removed before the tip of the RGB, the star first needs to contract, following the white dwarf cooling track, until enough compression allows for helium ignition.
An example of a mesa inlist file for each of the three steps in the modelling process can be found in Appendix A.
Finding the minimum sdB masses
To evaluate whether a model has succeeded in burning helium in a stable way, i.e. going through the sdB phase, we looked at the mass of the carbon core.If the model had a final carbon core mass larger than 0 it was assumed that it experienced a phase of core helium burning as an sdB star.We chose this condition, instead of a condition based on helium burning, because some models that managed to ignite helium under degenerate conditions did not reach a stable helium burning phase and moved directly to the white dwarf cooling sequence.In those 'failed sdBs', ignition most likely did not reach the center and the remnant was a helium-core white dwarf, i.e. with a final carbon-core mass of 03 .Initially, we applied the 'Relax Mass' process loading models after the TAMS with increasing core masses, in steps of 0.01 M ⊙ , until we found a model that went through the sdB phase.Later, we used finer steps of 0.001 M ⊙ between this core mass and the previous model, that did not create an sdB star, in order to get an accurate minimum sdB mass.
RESULTS
Here we present the results we obtained with the mesa code.First, we focus on the details of the evolution for a given initial mass all the way to the white dwarf cooling track for the two limiting cases, i.e. for the maximum and minimum sdB mass.Then we present and discuss the sdB mass ranges obtained in this work.
Evolution before and during the helium ignition
Figure 1 shows an example of the evolution in the HR diagram for a typical sdB progenitor with an initial mass of 1.5 M ⊙ , = 0.02 and including core-overshooting during the MS phase.The left panel shows the evolution when the envelope was artificially removed at the tip of the RGB phase, which results in the maximum sdB mass for this initial mass (M sdB = 0.4708 M ⊙ ).In the right panel, on the other hand, the envelope was removed when the core mass was the minimum needed to ignite helium after losing the envelope, leading to the minimum sdB mass for the same progenitor mass (M sdB = 0.449 M ⊙ ).The gray dotted lines show the whole evolution, while the black and red dots correspond to steps of 1 Myr and the red dots are highlighting the sdB phase.
We can see that by removing the envelope at the tip of the RGB the star manages to ignite the helium core very quickly, because the necessary conditions for mass, pressure and temperature were already almost reached.On the other hand, for the minimum mass, the star first needs to contract and heat to reach the necessary conditions to ignite helium.This can be seen in the right panel of Fig. 1, where the star first goes towards the white dwarf cooling track until enough compression is achieved and helium can be ignited, moving the star back up in the HR diagram to become an sdB.A similar behaviour was obtained by Byrne et al. (2018).
The sdB phase corresponds to the phase of stable helium burning in the core of the naked star, which was identified due to the presence of a convective core after the removal of the envelope.Before helium ignition, the core is either degenerate or radiative (for low-and highmass progenitors, respectively).The ignition of helium in the core provides a sufficient energy flow to ensure convection.When helium at the very center is depleted the core again becomes radiative (and later degenerates).
Given the mass of the initial star in these models, helium is ignited under degenerate conditions in a series of rapid helium flashes that cause the loops in the luminosity and effective temperature seen in Fig. 1 just before the sdB phase.In Fig. 2 we show the evolution of the luminosity (top), effective temperature (second panel from top), radius (third panel from top) and location of the largest (by mass) convective region (bottom) for the 1.5 M ⊙ star after the envelope was removed either at the tip of the RGB (left panel) or when the core mass corresponds to the minimum mass that ignites helium after the envelope is ejected (right panel), focusing on the first few Myr after removing the envelope, to see the behaviour of important stellar parameters during the helium flashes.The blue dashed regions in the bottom panel show the location of the largest convective zones while the black line represents the convective core mass.From the bottom panel of this figure, it can be observed that the ignition of helium starts off center, generating a series of flashes and the development of a convective zone that moves towards the center of the star with each subsequent flash.Each of these flashes causes a contraction of the whole star and a drop in luminosity, while the effective temperature increases.The only exception is for the first flash in the model with the minimum sdB mass (right panel), because the star was already a white dwarf when the first flash occurred, and helium ignition resulted in an increase in radius and luminosity, and a decrease in effective temperature.The effect of the flashes on the surface are more evident for the first and more external flash, while the subsequent ones are progressively less intense, closer to the center, of longer duration and with less effect on the surface of the star.For this particular case, ignition of helium reaches the center during the sixth flash for the sdB with the maximum mass (left panel) and during the fifth flash for the sdB with the minimum mass (right panel), setting the beginning of the sdB phase (vertical red dotted line) when a convective core appears.These results are strongly consistent with those obtained by Ostrowski et al. (2021, their Fig. C1) for a 1 M ⊙ star after removing the envelope at the tip of the RGB.
In Fig. 3, we show the same four panels as in Fig. 2 but for the whole sdB and post-sdB phases until the star is again on the white dwarf cooling track.We also show in the top panels, the power of helium burning (as the logarithm of the total thermal power from triplealpha process, excluding neutrinos, in solar luminosities), given by the dotted gray lines.The two red dashed vertical lines indicate the beginning and end of the sdB phase, which last ∼ 150 Myr and ∼ 180 Myr for the models with the maximum and minimum sdB mass for the given progenitor mass, respectively.Both the mass of the convective core as well as the duration of the sdB phase are also consistent with the results obtained by Ostrowski et al. (2021, their Fig. 6).They found that for an initial mass of 1.0 M ⊙ , and using the predicting mixing scheme, the duration of the sdB phase was 147.9 Myr when the envelope was removed at the tip of the RGB.
We can see from Fig. 3 that the sdB that descends from the tip of the RGB phase (left panel) has a larger radius and a lower effective temperature, compared to the sdB with the minimum mass (right panel) for a 1.5 M ⊙ progenitor.This is a consequence of the hydrogen envelope that remains during the sdB phase.Although in both cases we left the same hydrogen envelope of 0.01 M ⊙ around the heliumcore during the 'Relax Mass' process, the sdB that descends from the tip of the RGB ignites helium very quickly (only ∼ 17 000 yr after the envelope removal), while hydrogen was still being burned and incorporated into the core.The ignition of helium halts the burning of hydrogen and results in an sdB with a larger hydrogen envelope (of ∼ 9 × 10 −3 M ⊙ ).On the other hand, for the model with the minimum sdB mass, the hydrogen burning phase lasts for ∼ 140 000 yr after the envelope removal, while ignition of helium occurs ∼ 2.8 Myr after removing the envelope, when the star was already a white dwarf and the hydrogen envelope was smaller (∼ 5 × 10 −4 M ⊙ for this model).
It can also be seen from this figure that after the end of the sdB phase, when helium core burning stops, the whole star experiences a rapid contraction phase, decreasing the radius while increasing the luminosity and effective temperature.The power of helium burning quickly recovers (see dotted gray line in the top panels), increasing the luminosity, due to the ignition of helium in a shell around the inert core (small convective zone just after the end of the sdB phase in the bottom panels).This shell is initially convective but quickly becomes radiative, as the radius increases.The behaviour of the radius and effective temperature during the shell helium burning phase is very different for the two models.For the maximum sdB mass, the radius increases causing the effective temperature to decrease slightly.The star moves up and slightly to the right on the HR diagram while remaining in the same temperature range of B-type stars, due to the thicker hydrogen envelope.On the other hand, for the minimum sdB mass, the radius decreases slightly and the effective temperature increases, moving the star up and to the left on the HR diagram, ascending towards the phase where hot subdwarf O (sdO) stars reside.During the helium shell burning phase, part of the remaining hydrogen envelope is burnt and converted to helium, leading to very similar hydrogen envelope masses (∼ 4 × 10 −4 M ⊙ ) for the resulting white dwarfs in both cases.However, given that there was a larger hydrogen envelope around the more massive sdB, a peak in the luminosity and a rapid increase in effective temperature, associated with hydrogen shell burning, is observed in the left panel just before entering the white dwarf cooling track.Given the small envelope left, the stars fail to reach the asymptotic giant branch (which is true for all the sdBs we simulated, regardless of the progenitor mass).The post-sdB phase, with shell helium burning, lasts for ∼ 10 − 20 Myr, after which the power of helium burning drops dramatically, considerably decreasing the luminosity, effective temperature and radius, following the cooling track of a carbon/oxygen or a hybrid helium/carbon/oxygen white dwarf (see, e.g., Zenati et al. 2019, for a discussion on the formation of hybrid white dwarfs).The formation of hybrid white dwarfs with helium shells larger than 0.01 M ⊙ are of crucial interest for modelling the transient resulting from merging white dwarf binaries (Perets et al. 2019).
We note that the differences just outlined for the two sdBs, with the maximum and minimum sdB mass for a 1.5 M ⊙ progenitor, are a direct consequence of having left the same envelope mass after the envelope removal, which might not be realistic.It might well be that removing the envelope closer to the tip of the RGB is more efficient, given the lower binding energy of a more extended envelope.This might translate into an initially smaller hydrogen envelope around the core compared to the case in which the envelope is ejected earlier on the star's evolution.Therefore, one should not conclude from our results that sdBs descending from more evolved progenitors are colder and larger due to their larger hydrogen envelope, or that they do not pass thought the location of sdO stars during the shell helium burning phase.Given that the detail mechanism of the mass loss to form an sdB is not entirely understood, the mass of hydrogen that remains around the helium core after removing the envelope is unknown, albeit it should be small.As we mention earlier, we have chosen to leave 0.01 M ⊙ following the work of Schindler et al. (2015).
It is out of the scope of this paper to review in detail the evolution for different progenitor masses, and the example just shown was meant to illustrate the results that can be obtained with mesa for a typical progenitor mass.However, for more massive progenitors ( > ∼ 2.0 − 2.5M ⊙ ) helium ignites smoothly and at the very center of the star.Therefore, a convective core appears immediately after helium ignition.The rest of the evolution is similar to the cases we have shown here.However, we note that as the mass of the progenitor is increased, the difference between the minimum and maximum sdB mass is greater (as will be seen in Sec.3.2).This leads to larger differences in the HR diagram during the sdB phase, especially with respect to the luminosity, which can differ by more than an order of magnitude (more massive sdBs being more luminous), while the effective temperature remains in a similar range.Furthermore, the duration of the sdB phase strongly depends on the mass of the sdB star, as we will discuss in Sec.4.4.
SdB masses
Figure 4 shows the maximum (black) and minimum (gray) sdB masses as a function of the zero age main sequence (ZAMS) mass for the models with = 0.02, with overshooting.The hydrogen envelope of 0.01 M ⊙ left around the helium-core during the 'Relax Mass' process is considered as part of the sdB mass.As we explained in the previous section, a fraction of this hydrogen is burnt after the 'Relax Mass' process, resulting in hydrogen envelopes of 10 −2 − 5 × 10 −4 M ⊙ during the sdB phase.Typically, we obtained that a more massive hydrogen envelope remains around the sdB if the progenitor was closer to the tip of the RGB, i.e. for those that become sdBs faster after the envelope was removed.As we mentioned before, this is a direct consequence of leaving the same hydrogen envelope mass in the 'Relax Mass' process for progenitors at different evolutionary stages, which might not be realistic.However, given that 0.01 M ⊙ is considered as the upper limit for the H envelope mass in sdBs (e.g., Heber 2016), the sdB mass range will not be strongly affected by this.
We note that the more massive stripped stars predicted here (with M sdB > ∼ 0.6 M ⊙ ) will most likely be located in the region of the HR diagram that corresponds to sdOB or sdO stars, instead of sdBs (see e.g., Fig. 6 in Götberg et al. 2018).
For stars with M ZAMS ∼ 0.8 − 1.5 M ⊙ the maximum sdB mass is very close to the canonical value of ∼ 0.47 M ⊙ .These stars have a high level of degeneracy in their cores during the RGB phase, which translates into a similar core mass at the tip of the RGB, regardless of the total mass.For larger progenitor masses, the maximum sdB mass decreases rapidly reaching a minimum value of 0.34 M ⊙ for M ZAMS ∼ 2.0 − 2.1 M ⊙ .This is because the level of degeneracy in the core during the RGB phase decreases abruptly for ZAMS masses around 1.7 − 1.9 M ⊙ , which facilitates compression and heating of the core during the RGB, therefore allowing the star to reach the conditions for helium ignition at a smaller core mass.For initial masses above ∼ 2.1 M ⊙ the level of degeneracy in the core can be neglected and the maximum sdB mass starts to increase for an increasing ZAMS mass as a direct consequence of the more massive core mass at the end of the MS.
The level of degeneracy is determined in mesa by the dimensionless electron degeneracy parameter ∼ E F /k B T. According to Paxton et al. (2011), this parameter corresponds to the ratio of the electron chemical potential to k B T. In principle, any value of > 0 indicates some level of electron degeneracy, while for = 4 the elec- tron degeneracy pressure is roughly twice that of an ideal electron gas.In mesa, = 4 is used to determine whether a region of the star is degenerate or not (for example in the default TRho_Profile plot).In the top panel of Fig. 5, we show as a function of enclosed mass ( r ) for different ZAMS masses at the tip of the RGB.Only regions with > 0, i.e. close to the center, are shown.For all the models, the level of degeneracy decreases from the center outward, as expected due to the decrease in density.Even for a star with ZAMS = 4.0 M ⊙ there is a very small level of degeneracy at the center.However, only stars with ZAMS < ∼ 2.0 M ⊙ have zones where > 4 and are expected to ignite helium under a highly degenerate condition, i.e. experiencing a helium core flash.One can also see from this figure that stars with ZAMS masses up to ∼ 1.5 M ⊙ have cores that converge to nearly identical levels of degeneracy.The value of at the center starts to slightly decrease for ZAMS = 1.6 M ⊙ and then decreases more rapidly for larger masses, with an abrupt change between the models with ZAMS = 1.8 M ⊙ and 1.9 M ⊙ .This is exactly the range of initial masses where we observe the abrupt decrease in the predicted sdB masses in Fig. 4.This is more evident in the bottom panel of Fig. 5, where we show r for which there is some level of degeneracy ( > 0) or strong degeneracy ( > 4), at the tip of the RGB, as a function of initial mass.It is evident from this figure that the decrease in core mass at the tip of the RGB (and therefore on the maximum sdB mass) is strongly related to the enclosed mass for which > 4.
For low-mass progenitors (M ZAMS < ∼ 2.0 − 2.1 M ⊙ ), the behavior for the minimum sdB masses is the same as for the maximum sdB masses, but only ∼ 0.02 M ⊙ below.For stars initially more massive, the level of degeneracy in the core during the RGB phase is negligible, i.e. the ignition of helium goes smoothly, and the maximum sdB mass starts to grow again.The minimum sdB mass, on the other hand, remains constant at 0.327 M ⊙ for initial masses up to ∼ 3 M ⊙ .For larger initial masses, the helium-core mass at the TAMS is already larger than this value, and we found that helium was ignited, after removing the envelope, for any core mass we chose after the TAMS.It is highly unlikely that an sdB star can result if the envelope is removed during the MS, as there is still hydrogen in the core.Therefore, the minimum sdB mass was set to the helium-core mass at the TAMS plus the 0.01 M ⊙ of hydrogen envelope left, i.e. applying the 'Relax Mass' process to the first model at the base of the subgiant branch.
Even though sdBs can be produced from massive progenitors if they lose their envelope at the base of the subgiant branch, the large expansion of the radius during the RGB phase allows binary systems with a larger range of initial periods to experience mass transfer.Also, the population of sdBs in wide binaries with un-evolved companions is expected to descend from progenitors that filled their Roche lobes during the RGB phase (e.g., Vos et al. 2019Vos et al. , 2020)).For the case of sdB stars that belong to close binary systems (e.g., Schaffenroth et al. 2022), with orbital periods of the order of hours to a few days, the current orbital configuration can only be understood if the initial binary system experienced a common-envelope phase (Paczynski 1976), in which the orbital distance was dramatically reduced.It is more likely that the mass transfer process became dynamically unstable, entering a common envelope phase, if the donor filled its Roche lobe when the envelope was already deeply convective, i.e. during the RGB phase.Therefore, we decided to also show in Fig. 4 the minimum sdB mass that is obtained for massive progenitors at the base of the RGB phase (dotted line).Although mesa does not distinguish the different evolutionary phases, the base of the RGB can be defined assuming that a certain percentage of the envelope is already convective.Here we assumed that the star was at the base of the RGB when 1/3 of the envelope was already convective, based on the definition used in the single star evolution code (sse) from Hurley et al. (2000).The gray shaded area in Fig. 4 represents the whole range of possible sdB masses obtained from mesa for the models with = 0.02, while the dashed area highlights the region for sdBs descending from progenitors on the subgiant branch phase, which are less likely.
The same results are shown in Fig. 6 for the models with = 0.004 and overshooting, where the behaviour is very similar to the model with a larger metallicity.The tables with the results obtained from mesa for our different models are presented in the appendix B.
DISCUSSION
Here we analyse the effects of metallicity and overshooting.We also compare the results from mesa with the ones calculated by Han et al. (2002).Finally, we show the results for the sdB lifetime as a function of sdB mass and discuss the effect this might have in the comparison between simulations and observational samples.
Metallicity effect
In Fig. 7 we compare the maximum (black) and minimum (gray) sdB masses as a function of the ZAMS mass for the models with = 0.02 (solid lines) and with = 0.004 (dashed lines).It is clearly seen that, for most initial masses, the whole range is slightly shifted towards larger sdB masses when a lower metallicity is assumed.This is, in part, a consequence of having more hydrogen and helium available, but is mainly related to the different opacity.Metal poor stars have a lower opacity, which allows their cores to cool easier, and therefore more mass needs to be accumulated before igniting.A lower metallicity implies a more massive, and therefore hotter, helium-core at the TAMS.
For more massive progenitors (M ZAMS > ∼ 2 M ⊙ ), where the level of electron degeneracy of the core during the RGB is sufficiently small to ignite helium smoothly, the maximum sdB masses are again slightly larger for the models with a lower metallicity, mainly as a consequence of the drop in the opacity, as we explained before.However, the minimum sdB masses remain smaller for lower metallicity models in the range of M ZAMS ∼ 2 − 3 M ⊙ .This is a consequence of having a hotter helium-core, where the level of degeneracy is smaller and therefore can be more compressed and heated after losing the envelope.Above ∼ 3 M ⊙ , the minimum sdB masses are again larger for the models with lower metallicity.As it was mentioned in Sec.3.2, any core mass we took after the TAMS ignited helium after the envelope removal for massive progenitors.Therefore, the minimum sdB masses for progenitors more massive than ∼ 3 M ⊙ come from the core mass at the TAMS plus the hydrogen envelope left, and lower metallicity models have a more massive core at the TAMS.
Comparison with the work from Han et al. 2002
The Eggleton code used by Han et al. (2002) includes an approach for core overshooting based on stability criteria called the ' ov prescription' (Pols et al. 1997), which is not available in mesa.This makes it difficult to compare the results from both codes using models that include overshooting.Therefore, we decided to compare our results with those of Han et al. (2002) only for the models without overshooting.Given the long computational time it takes to obtain the minimum sdB masses, we only calculated these for initial masses up to 2.2 M ⊙ in these models, following Han et al. (2002).
In Fig. 8 we show the comparison of the maximum (black crosses) and minimum (gray crosses) sdB masses calculated with mesa for the case with = 0.02 and without overshooting, with those from Han et al. (2002, blue squares and cyan triangles, respectively).The values obtained with mesa agree very precisely with those obtained by Han et al. (2002) with the Eggleton code, both for the maximum and minimum sdB masses.The only exception is for the more massive progenitor calculated by Han et al. (2002) for this model, i.e. the one with M ZAMS ∼ 2.1 M ⊙ , for which we obtained smaller masses.However, by having a much finer grid of progenitors in the mesa models, it was possible to smooth the drop in the curve that is obtained at the transition from stars that develop a degenerate core during the RGB to those that do not.This implies that using a linear interpolation within the masses derived by Han et al. (2002) would have resulted in an underestimation of the sdB masses for progenitors with masses within ∼ 1.6 and ∼ 2.0 M ⊙ .
Given than Han et al. (2002) did not simulate more massive progenitors in the models without overshooting, we can only conclude that the mesa and Eggleton code predictions are comparably good for stars up to M ZAMS ∼ 2.0 M ⊙ .Despite not being able to make a direct comparison for more massive progenitors, given the difference in the overshooting prescriptions used in both codes for massive stars, we can still infer from Table 1 in Han et al. (2002) that the range of sdB masses obtained with mesa is wider for stars initially more massive than ∼ 2.5 M ⊙ , even if sdB descending from progenitors at the base of the RGB are considered for a more likely minimum mass.
The effect of core overshooting during the MS
Figure 9 compares the results for the maximum sdB mass as a function of initial mass for the mesa models with = 0.02, with (black) and without (grey) overshooting, respectively.For stars with M ZAMS < ∼ 1.5 M ⊙ the maximum sdB mass is very close to the canonical value of ∼ 0.47 M ⊙ , and including core overshooting during the MS does not make any difference, as expected for stars with radiative cores.For more massive stars, the shape of the two curves remains very similar, but with the curve that includes overshooting shifted to the left.In both cases the sdB mass decreases rapidly with increasing the initial mass, reaching a minimum value of ∼ 0.33 M ⊙ for an initial mass of ∼ 2.1 M ⊙ and ∼ 2.4 M ⊙ for the cases with and without overshooting, respectively.The difference comes from the fact that considering overshooting results in a more massive and hotter core at the end of the MS, reducing the maximum initial mass for which the core becomes strongly degenerate during the RGB phase.For more massive stars, where helium is ignited in the core under non-degenerate conditions, the maximum sdB mass increases with the progenitor mass.A similar result was recently obtained with mesa by Ostrowski et al. (2021, their Fig. C4).
The duration of the sdB phase
The duration of the sdB phase is an important aspect to consider when comparing simulations with observations.In Fig. 10, we show the duration of the sdB phase obtained from the mesa models as a function of the resulting sdB mass for the models with overshooting and with a metallicity of = 0.02 (left panel) and = 0.004 (right).The black crosses correspond to sdBs descending from progenitors that lost their envelopes at the tip of the RGB, while the gray crosses represent the minimum sdB masses derived for each initial mass.Regardless of the metallicity, the duration of the sdB phase is strongly dependent on the sdB mass, decreasing for more massive sdBs, both for the maximum and minimum sdB masses.This result is not surprising, as more massive sdBs should be hotter, and therefore burn faster, which results in a shorter lifetime.
The calculated lifetimes fit extremely well with a linear fit in the log ( sdb ) − log( sdB ) plane (represented by the solid line in each panel) given by: for = 0.02 and: for = 0.004.The three shortest and the three longest duration were excluded to obtain the fit for each metallicity, but they are still pretty consistent with the derived fit.The main difference between the two models is that in the low-metallicity case the least massive sdBs are 0.02 M ⊙ less massive than in the models with = 0.02, so that the longest lifetimes exceed 1 Gyr for = 0.004.However, for the same sdB mass, the low metallicity models predict a slightly shorter duration of the sdB phase.The dashed line in the left panel represents the sdB lifetime as a function of sdB mass derived by Yungelson (2008), with the Eggleton code, for a (0.35 − 0.65) M ⊙ mass range 4 .While the shape of the fits is very similar, the lifetimes derived by Yungelson (2008) are slightly larger than the ones we obtained with mesa.
Even considering that low-mass sdBs spend considerably more time in the core-helium burning phase, observational evidence seems to indicate that the population is not dominated by the lower mass sdBs.From the initial mass function (e.g., Kroupa 2001) we know that low-mass stars are much more likely to form.Therefore, most sdBs should descend from progenitors with initial masses < ∼ 1.5 M ⊙ , resulting in an sdB mass distribution that peaks at the canonical value, with a tail towards lower mass sdB (descending from progenitors with initial masses of ∼ 2−3 M ⊙ ) and a smaller tail towards more massive sdBs (that descend from the more massive stars).This is consistent with the mass distribution from Fontaine et al. (2012), although it 4 Although these authors did not explicitly give the assumed metallicity, they mentioned that the mass fractions of helium is = 0.98 for homogeneous helium models, from which we infer = 0.02.should be taken with caution due to the low number statistics (only 22 systems), but also with the most recent mass distribution derived from observational data for a larger population of sdBs in close binary systems with unevolved companions from Schaffenroth et al. (2022) and for single-lined hot subdwarf stars in LAMOST from Lei et al. (2023).
Implications for binary modelling
The sdB models in this study are an important possible ingredient for the detailed modelling of the observed sdB population.Such modelling also requires performing binary population synthesis.Here, we list the key ingredients that should be considered in the binary population synthesis of sdB binaries based on our models.
1) The range of possible core masses that result in an sdB star after losing the envelope, translates into a range of radii where a star can fill its Roche lobe to become an sdB, constraining the range of initial orbital periods.
2) The lifetime of an sdB strongly depends on its mass, as we discussed above, with low-mass sdBs spending more time in this phase.
3) The age of the population is also a crucial factor to consider in simulations that attempt to compare with observational samples of sdBs.Low-mass stars need much longer to leave the MS and become giants, i.e. possible sdB progenitors, compared to more massive stars.Also, old (low-metallicity) stars evolve faster than their younger (high-metallicity) counterparts.Therefore, a very old and low metallicity population, for example from the halo of the Milky Way, will probably have an sdB mass distribution strongly peaked at the canonical mass.This peak should become less pronounced as we move into younger populations.
4) As we discussed in Sec.4.1, the metallicity also has a small but not negligible effect on the sdB masses.However, the metallicity strongly affects the minimum initial mass that can evolve from the MS at a given age.For example, a star of 0.8 M ⊙ with solar metallicity does not have enough time to leave the MS within the Hubble time, but if the same star has a very low metallicity (e.g., Z = 0.0005) it will evolve faster and will be able to leave the MS within the Hubble time (based on simulations with the sse code, Hurley et al. 2000).
5) The inclusion of core overshooting during the MS, increases the resulting sdB mass for progenitors more massive than ∼ 1.5 M ⊙ .However, different prescriptions for overshooting are available, without a general consensus on the more suitable.Also, the extent of the overshoot region ( ov ) is another parameter that remains poorly constrained.Although ov = 0.016 seems to fit well for stars more massive than 2 M ⊙ , Claret & Torres (2016, 2017, 2018, 2019) suggest that there is a dependence of the strength of overshooting on stellar mass, with a sharp increase between ∼ 1.2 and ∼ 2 M ⊙ .This conclusion, however, is still under debate (e.g., Constantino & Baraffe 2018).
6) The likelihood of a star to have a binary companion, which is a necessary ingredient to form sdB stars, should also be considered.There is observational evidence that the overall binary frequency is an increasing function of stellar mass (e.g., Raghavan et al. 2010).
7) It should also be taken into account whether the observed population consists of single sdBs or sdBs in wide or close binary systems.Close sdB binaries are most certainly the result of common envelope evolution, in which the envelope is rapidly ejected during the RGB phase.On the other hand, sdBs in binaries with periods of hundreds of days probably descend from a previous phase of stable mass transfer.A considerably larger timescale is needed in order to eject the envelope in the later scenario.The different timescales involved might have a measurable effect on the resulting sdB star, for example on the mass of the hydrogen envelope that is retained.For the population of single sdBs, which are expected to descend from a merger process, the internal structure of the sdBs is probably affected, and the mass distribution for this population might be completely different.
8) SdBs in binary systems can be paired with different types of companions, that might also influence the sdB masses.For example, Schaffenroth et al. (2022) suggests that the observed sdBs in close binaries show a different sdB mass distribution if they are paired with un-evolved low-mass companions, i.e.M dwarfs or brown dwarfs, or with white dwarf companions.While the sdBs with M dwarf or brown dwarf companions show a peak around the canonical mass, the peak of the distribution is shifted to lower masses for sdBs with white dwarf companions.The latter probably underwent two mass transfer phases, with the first phase caused by the evolution of the white dwarf progenitor, which must have affected the orbital period of the system before the second mass transfer phase and/or the mass of the sdB progenitor.
Having taken all these aspects of stellar population into account one might obtain a realistic mass distribution of the present Galactic sdB population.
SUMMARY
In this article we have revised and refined the sdB mass range as a function of initial mass for two different metallicities, = 0.02 and = 0.004, using the stellar evolution code mesa.We found that the lower metallicity models predict, on average, slightly more massive sdBs (0.01 − 0.02 M ⊙ larger).The effect of including core overshooting during the MS is more evident for progenitors more massive than ∼ 1.5 M ⊙ , as expected, decreasing the maximum initial mass for which the core becomes strongly degenerate during the RGB phase, and increasing the sdB mass for progenitors that ignite helium under non-degenerate conditions.
We also compared our results with the ranges for sdB masses derived more than two decades ago by Han et al. (2002), and found an excellent agreement for low-mass progenitors, up to ∼ 2 M ⊙ , in the models without overshooting.For more massive progenitors, a direct comparison was not possible due to the different prescription for overshooting these authors used, which is not available in mesa.However, we found that in general the mesa models result in a wider mass range compared to the simulations performed by Han et al. (2002) with the Eggleton code.
The duration of the sdB phase was also calculated, finding a strong anti-correlation with the sdB mass, in agreement with previous results from Yungelson (2008).The lifetime for sdBs at the lower end of the mass distribution (with M sdB ∼ 0.3 M ⊙ ) is five times larger than for sdBs with the canonical mass, and at least an order of magnitude larger than for the more massive sdBs (with M sdB > ∼ 0.55 M ⊙ ).Finally, we discussed several factors that might affect the sdB mass distribution and should be considered in binary population synthesis models that aim to compare with observational samples.One of the most important factors is the age of the population where the sdBs reside, as it will constrain the minimum progenitor mass.While older populations should exhibit a strong peak at the canonical mass, corresponding to sdBs that descend from low-mass progenitors, this peak should become less pronounced or even disappear for younger populations.The evolutionary path that leads to an sdB star also needs to be considered, as we do not expect to see the same mass distribution for single (post-merger) sdBs, as for sdBs in close or wide binaries.Finally, observations seem to indicate that the sdB mass distribution is not the same for sdBs with un-evolved (MS or brown dwarf) and evolved (white dwarf) companions, at least for close binaries.Therefore, the type of companion should also be considered in population models.
APPENDIX A: MESA INLISTS
Here we give one example for each of the three inlist files.The star in all these samples has the same initial mass (1 M ⊙ ) and metallicity (Z = 0.004) and the envelope was extracted when the core mass was c = 0.452 M ⊙ .We used the mesa version -r15140.
A1 Pre-MS to TAMS
! inlist to evolve a 1 M ⊙ star from the pre-MS to the TAMS.
APPENDIX B: TABLES
In the following tables we present the results obtained whit mesa.
For each initial mass (M ZAMS ) we listed the minimum and maximum sdB masses (M min sdB and M max sdB , respectively) and the corresponding duration of the sdB phase for both limits (t min sdB and t max sdB ).
Figure 1 .
Figure1.HR diagram evolution for a 1.5 M ⊙ star where the envelope was artificially removed either at the tip of the RGB (left) or when the core mass on the RGB corresponds to the minimum mass that ignites helium after the envelope is ejected (right).The gray dotted lines show the whole evolution while the black and red dots are in steps of 1 Myr with the sdB phase highlighted as red dots.
Figure 2 .
Figure 2. Evolution of the total luminosity (top), effective temperature (second panel from top), radius (third panel from top) and location of the largest convective region (bottom) for a 1.5 M ⊙ star after removing the envelope at the tip of the RGB (left panel) or when the core mass on the RGB corresponds to the minimum mass that ignites helium after the envelope is ejected (right panel).In the bottom panel, the black line represents the mass of the convective core while the blue dashed regions show the location of the largest convective zone.The time was set to 0 after the envelope was fully removed to facilitate visualization.We show only the first few Myr after the envelope ejection, to focus on the phase of helium flashes, where the convection region is approaching the center with each flash.The red dashed line indicates the time when convection reaches the center, setting the beginning of the sdB phase.
Figure 3 .
Figure 3. Same as in Fig. 2 but including the sdB and post-sdB phases.The helium luminosity is also included in the top panel as a dotted gray line.The two dashed red lines indicate the beginning and end of the sdB phase.
Figure 4 .
Figure 4. Maximum (black) and minimum (gray) sdB masses as a function of initial mass for the mesa models with = 0.02 and core overshooting during the MS.The gray area corresponds to the whole range of possible sdB masses, while the dashed area highlights sdBs descending from progenitors on the subgiant branch phase.
Figure 5 .
Figure 5. Top: The mesa dimensionless electron degeneracy parameter ∼ E F /k B T, which indicates the level of degeneracy, as a function of enclosed mass for different ZAMS masses (color coded), at the tip of the RGB.Bottom: Enclosed mass at which > 4 (i.e. the total mass of strongly degenerate material, black dots) or > 0 (mass of the material with some level of degeneracy, open circles) at the tip of the RGB as a function of ZAMS mass.Both panels correspond to our reference model (Z=0.02 with core overshooting during the MS).
Figure 6 .
Figure 6.Same as in Fig. 4 but for the models with = 0.004 and core overshooting during the MS.
Figure 7 .
Figure 7. Maximum (black) and minimum (gray) sdB masses as a function of initial mass for the models with = 0.02 (solid lines) and with = 0.004 (dashed lines).
Figure 8 .
Figure 8. Maximum (black crosses) and minimum (gray crosses) sdB mass as a function of initial mass for the mesa models with = 0.02 without overshooting.The values derived by Han et al. (2002) for the same model are shown as blue squares and cyan triangles for the maximum and minimum sdB masses, respectively.
Figure 9 .
Figure 9. Maximum sdB mass as a function of initial mass for the mesa models with = 0.02, with (black) and without (gray) overshooting.
Figure 10 .
Figure 10.Duration of the sdB phase as a function of the sdB mass for the models with = 0.02 (left) and = 0.004 (right), with overshooting.The black crosses correspond to the maximum sdB masses (i.e.removing the envelope at the tip of the RGB), while the gray crosses are for the minimum sdB masses.The solid line in each panel corresponds to a linear fit to the data in a log − log scale, while the dashed line in the left panel represents the lifetime derived by Yungelson (2008).
Table B1 .
Model with Z = 0.02 and with overshooting.The last four values for the minimum sdB masses (highlighted with *) correspond to progenitors that lost their envelopes at the base of the subgiant branch, as explained in Section 3.M ZAMS [M ⊙ ] M min sdB [M ⊙ ] M max sdB [M ⊙ ] t min sdB [Myr] t max sdB[Myr]
Table B2 .
Model with Z = 0.004 and with overshooting.As in TableB1, the values with * correspond to progenitors at the base of the subgiant branch.
Table B3 .
Models with Z = 0.02 and without overshooting.For progenitors more massive than 2.2 M ⊙ we did not calculate the minimum sdBs masses (see Section 4.3). | 12,637.8 | 2023-12-15T00:00:00.000 | [
"Physics"
] |
An overview of data mining in medical informatics: Bangladesh perspective
Recently using information technology in the health care system is an important issue. Medical informatics is the combination of information science, computer science, and health care. As population is increasing rapidly, it is obvious to use medical informatics to save human lives and to treat people in efficient way. Therefore, we tried to explore an overview of the necessities and practical uses of data mining in administrative, clinical, research as well as educational aspects of medical informatics in Bangladesh. It is one the most populous countries in the world and the health care system including data mining in medical informatics is not so handy. Besides, for the effect of monsoon weather people of this country are affected by various diseases but poor investment and weak implementation make these diseases a burden. The study focuses on the needs of clinical data warehousing and the practice of examining these databases in order to improve various aspects of medical informatics in Bangladesh. The study suggests that government and private health care organizations need to take account to store their data and create a research wing in every hospital in Bangladesh as well as other developing countries in the world so that researchers and doctors may be able to find out the solution of their problems. For the greater benefits of the people, more research on medical informatics is essential and implementation of the research outputs must be done in medical treatment.
Introduction
Medical informatics is in its most sensational period because nowadays it is easy to handle big data by computer. Medical informatics is the study of information engineering and how to exercise it in health care field. This disciplines works jointly with information science, computer science, social science, behavioral science, management science, and others (Nadri et al., 2017). As the population is increasing rapidly in the world mostly in developed countries and so the cost of health care, governments and other health organizations are mostly rely to save time, money and human lives on the use of health Informatics (Raghupathi, 2010). From an annual report of United States 44000 to 98000 patients died in hospitals because of medical error (Institute of Medicine (US) Committee on Quality of Health Care in America, 2000). A study shows that due to drug the adverse events and ranging from minor side effects to death of an individual costs $136 billion per year only in United States (Johnson and Bootman, 1995). Electronic patient records, technology based alerting, reminder, predictive system for administration of hospitals, and education and training for nurses and doctors can reduce medical error and financial costs of healthcare system (Raghupathi, 2010). Data produced in healthcare system is an example of "Big data". The term "Big data" is designated newly but the process of analyzing large dataset is old enough (Everts, 2016). In hospitals, large amount of data generates from each patients such as datasets including MRI images or gene microarrays for each patient (Herland et al., 2014). As of 2011, health care organizations had generated over 150 Exabyte's of data (one Exabyte is 1000 petabytes). It is noted that as Health informatics generates large and growing amount of data, the healthcare industry can be protected by using data mining in Medical Informatics up to $450 billion each year just in the United States (Herland et al., 2014;Yuan et al., 2013). Since 1971 the healthcare system of this country does not have any significant development though there is mass population. Government have taken many steps and set up medical forums to develop the healthcare system as effective and accountable. Though there is adequate number of qualified medical personnel still people is getting improper treatments, unsatisfactory care and losing confidence over this system. Lack of medical equipment, lack of adequate training of health personnel, lack of information about the patients and diseases make it more challenging (Al Mahdy, 2009). For the improvement of healthcare service it is necessary to establish clinical data warehousing system and the practice of examining these databases in various aspects of medical informatics. In this modern era developed countries like USA, Canada etc. can provide proper treatment to patients; retrieve patients' historical data, assist disease predictive and prevention system through this data mining process in health informatics (Alkhatib et al., 2015). In this study we discuss the present situation of applying data mining in various aspects of medical informatics in Bangladesh.
Applications of data mining on medical informatics in Bangladesh
Medical informatics contains mainly four subfields: Clinical care, Administration of health services, Medical research and Education & Training. Here we present the details of each subfield in Bangladesh perspective.
Clinical care
Normally doctors or nurses prescribe new medicines to the patients on the basis of the information of previous medical reports. But in Bangladesh most of the physicians prescribe medicines without checking previous history and family records as there is no centralized electronic patient record database system. Even most of the time they only prescribe medicines based on symptoms without rechecking those symptoms via tests. If health care system of Bangladesh uses centralized patients record database, by applying data mining in that database physicians can go beyond what is more appropriate to treat a patient. For similar case a new physician could refer more effective medicine to patient by query for the decision based on the centralized database (Raghupathi, 2010). Clinical Decision Support System (CDSS) is a computer based software which is designed to help health care provider to make health care decisions. Search capabilities for medical queries, monitor inputs and check them for predetermined triggers, reminders for periodic tasks, suggestions based on medical knowledge and different prediction models like diagnosis and prognosis are the implementations of CDSS (Raghupathi, 2010). But CDSS is not available in Bangladesh yet. Knowledge based system and data mining based system are the main types of CDSS. A hybrid of these two systems attains high performance. This hybrid CDSS was proposed for rural Bangladesh but it has not applied yet (Iqbal, 2012). Figure 1 shows how CDSS works. CDSS have some subsections like Health Evolution through Logic Processing (HELP) system, The Acute Physiology and Chronic Health Evolution (APACHE) series of models and the pneumonia Severity of Illness Index are currently using in clinics and hospitals of developed countries (Raghupathi, 2010). It is high time to use these technology in Bangladesh.
Administration of health services
In health care organizations administrators take large number of steps to make better the health status of the population. Those steps totally depends on the data related to the incidents for which decisions are going to be made. Administrators are going to take decisions about arranging more tools and equipment's for a specified time such as disease outbreak and decide whether or not any additional services requires in terms of cost and benefit. For taking these type of decisions in developed countries they use a computer based system to predict their needs accurately (Raghupathi, 2010). But in Bangladesh health care institutions does not have such kind of system and sometimes they cannot handle the pressure of patients and cannot treat them effectively because of scarcity of tools and beds at that period of time.
To detect disease outbreaks, Izadi and Buckeridge developed a system POMDPs (Partially Observable Markov Decision Processes) which can suggest a better solution to overcome the disease outbreaks in terms of cost and effects and predict about outbreaks though the amount of false detection of outbreaks by this system is not affordable for practical use (Raghupathi, 2010). In Bangladesh administrators need such kind of system to improve the health care facilities and to give better care to the patients but that kind of system need to have small fraction of error. 2.3. Medical research Most basic areas covered by medical research include cellular and molecular biology, medical genetics, immunology, neuroscience, and psychology. Researchers aim to establish an understanding of the cellular, molecular and physiological mechanisms underpinning human health and disease. Data mining methods can help researchers to obtain penetrative patterns, cause and effect relationships, and prognostic scoring system from available data of medical patients. If clinical and administrative decision support systems are starts to apply in various clinics, hospitals, and research centers, this method could apply on those small or scatter data from clinics, hospitals and research centers (Raghupathi, 2010). In Bangladesh, Bangladesh Medical Research Council (BMRC) a focal organization for health research established in 1972 as an autonomous body under the Ministry of Health and Family Welfare. Its target is to focus on the problems and issues related to medical and health sciences and determine priority areas in research on the basis of healthcare needs, fields of medicine, public health, reproductive health and nutrition and also make sure to arrange the application and utilization of the results of these researches. BMRC publishes a quarterly bulletin; a journal called Research Information and Communication on Health (RICH) twice a year; a newsletter twice a year; and a journal titled Current Awareness Service (CAS) once a year.
Education and training
Education and training fourth subfield of medical informatics is the application of Data Mining techniques to educational data. To provide knowledge about medical informatics to healthcare professionals like doctor, nurse, paramedic etc., to retrain them and to keep them up to date with modern technologies of medical science it is seen as the emerging interdisciplinary research area in the field of e-learning (Raghupathi, 2010;Romero and Ventura, 2010). Trainee, instructors and administrators can be benefitted from the data mining techniques as it monitor the learning paths, resources, materials of learning tasks, discover learning patterns, web based educational system, intelligent tutorial system and e-learning (Raghupathi, 2010;Sachin and Vijay, 2012). University of Alberta developed a centralized e-learning system and internet community named HOMER which provides medical students free online lifetime access with various learning materials of advanced medical research and knowledge (Raghupathi, 2010). A Naive Bayesian approach is applied by Leonard et al. to find cross-references between the symbol of genes and proteins and Medline articles which case study is considered as an overview of a relatively new data mining technique to find relevant reference articles for particular genes and proteins (Leonard et al., 2002). Flow chart of education and training system is shown in Figure 2. But in Bangladesh medical students and trainee do not have any modern technological education advantages as if they can be benefitted with more successful learning experiences. As a result they always lag behind and administration deprived of the effectiveness of the modern educational program. So medical professionals treat with back dated medical treatment systems. Bangladesh has achieved admirable progress in healthcare sector and socioeconomic development over the last few decades. As a developing country Bangladesh have quite insufficient budget resource allocation in healthcare sectors compared to other developing economies. A report shows that in 2014 less than 6% of the total expenditure was for health care sector. In 2017-18 fiscal years the budget allocation for health and family welfare sector was 5.2% which is much lower than the 15 percent budgetary allocation recommended by World Health Organization (WHO). For achieving the objectives of the Sustainable Development Goals (SDG), the expenditure for health is too inadequate. It also represents a poor amount of allocation for a country of over 160 million people. According to WHO, the ratio of doctors to nurses to technologists should be 1:3:5 but in Bangladesh the ratio stands at 1:0.4:0.24. This inappropriate ratio is the result of poor and insufficient allocation of budget in health care system.
Data warehouse
Data warehouse (DW) is a large, central electronic storage system of oriented, integrated, well-decorated and with all significant parts of data. Basically data warehousing was first used for business intelligence in context of sales, production, planning and in order to keep information of employees and was invented in 1988 by IBM researchers Barry Devlin and Paul Murphy. Typically in any organization data is kept for a short period of time but in the field of medical informatics in DW (Data Warehouse) data must be kept for the people's lifetime and till the cure of diseases and with time-variant (Pedersen and Jensen, 1998). It is helpful for getting information easily, in research purposes, and for making better decisions in healthcare sector. This system also can help to reduce the total cost on healthcare. Simultaneous access by multiple medical personnel and patients from multiple reliable sources, online processing for administrative and clinical initiative, the expenses for clinical proceedings, enhancing the quality of data and compatibility, security etc. are the advantageous features of data warehousing (Raghupathi, 2010;Khan and Haque, 2015). There are some characteristics a proper DW should carry on: (1) This should be central, easily accessible, (2) Should have all significant parts of data or information, (3) Properly oriented by subject, (4) Perfectly integrated, (5) Well-decorated as any sector related with this data house could be benefited like researchers or others and (6) Confidential and security should maintain hardly. In Bangladesh healthcare systems are weak and not properly oriented like other developed country. As we are on a journey to build a digital Bangladesh so healthcare sector should also be developed and on the contrary central data warehouse with healthcare information is the first and foremost step on this. For having this the great rate of progress in the field of information technology, data mining, computing, information security and also with great approach and expansion of World Wide Web (WWW) are highly necessary. This can help to get information for all like junior or senior doctors, medical students, patients (limited as they required) and researchers also. To establish this whole storage this may could be a long and critical procedure and also may take time a little longer but this is highly needed to develop the healthcare system of this country. We are suggesting a software based data warehouse system where information can be input directly into a software from heterogeneous sources as shown in the above flow chart of Figure 3. This software will be designed in such a way that the raw data will be converted into an oriented form and this organized information will be stored into different folders in a database. Peoples from different levels with respect to health care system e.g. patients, doctors, nurses, administrators, medical students, researchers etc. can query for necessary information from that database. This database system can reduce higher cost of implementing health data warehouse system.
Big data and data mining in health informatics
Data mining is a process of collecting useable information from large data set which is known as big data. In this process one or more software will be used to analysis these large raw dataset. In the middle of 1990's data mining was emerged as a new concept and new approach of analyzing data and knowledge discovery. In 1995 the first ACM conference on knowledge discovery and data mining was held in the USA. But in late 2009 the term "Data Mining" was first registered for the 2010 Medical Subject Headings (MeSH1) (Yoo et al., 2012). Big data has some characteristics which are: Volume, Velocity, Variety, Veracity and Value. Volume means size of gathered data; Velocity refers the new data generation speed; Variety pertains to the norm and nature of the data; Veracity measures the precision of the data and Value appraises the quality of the data in terms of the aimed result (Hilbert, 2016;2015). Data for Health informatics research has some of the characteristics of big data. Large amount of data generates from each patients such as datasets including MRI images or gene microarrays for each patient. Big Velocity occurs when data generated at high speed which can be seen at time of observing real time events whether that can be observing a patient's existent condition through medical equipment or trying to track an epidemic through incoming web posts (such as from Facebook, Twitter etc.) Big Variety pertains to that datasets which includes large amount of different types of independent variable, datasets that collected from different sources or any dataset which is complex and thus need to be evaluate at many level of data throughout Health informatics. High Veracity of data arises in Health Informatics because of faulty clinical censors, gene microarrays or from individual patients data stored in database. High value of data can be gathered by traditional methods such as clinical settings (Herland et al., 2014). The field of Health Informatics has various sub-fields including Bioinformatics, Image Informatics (e.g. Neuroinformatics), Clinical Informatics, Public Health Informatics, and also Translational Bioinformatics (TBI). Bioinformatics is not usually supposed as a part of Health Informatics but it becomes a vital source of health information. It is a field which uses various tools and develops method to describe biological data. It uses the knowledge of computer science, statistics, biology and engineering to analyze the biological data and interpret them to improve the health care condition (Lesk, 2011). Bioinformatics data such as DNA sequence of thousands of organisms is incessantly increasing which is the great example of Big Volume. McDonald created khemr which is a Bioinformatics suite of software which seeks to solve hardware computational problem and it helps to preprocess Big Volume genomic sequence by converting it into short fragmented sequence which can be stored in Bloom filter-based hash table to analyze the data effectively and efficiently (McDonald and Brown, 2013). Neuroinformatics is a subfield of Health Informatics which investigated only the brain image data to know how brain works, connection between various part of body and brain and find relation between brain image data and medical events information. Neuroinformatics works mainly with neuroscience and informatics research to improve and uses computer based tools in understanding the function and structure of brain. This subfield covers mainlytools for analyzing, visualizing nerve system, theoretical, mathematical and simulation environments for describing the structure and function of brain. Clinical Informatics helps to predict that can assist a physician to make a faster and more accurate decision about patients by analyzing their data. According to a research, the clinical research and the implication of that research in practice has about 152 years distance (Bennett and Doub, 2011). Nowadays decisions are mainly based on previous information or on what have been found by experts but if physicians embrace the findings of recent research which defines a new ways then the decisions would be more accurate and reliable about patients. Public health Informatics deals with population level data to achieve acuteness of medical state. Population level data collects from population via social media, poll or from hospitals, doctors, clinics and this type of data has Big Volume, Big Velocity and Big Variety. Translational Bioinformatics is a subfield of health informatics which tries to converge molecular bioinformatics, biostatistics, statistical genetics and clinical informatics. It mainly works with informatics methodology to rapidly increasing biomedical and genomic data to improve medical tools for efficiently using it in health care by doctors, experts etc. (https://www.amia.org/applications-informatics/translationalbioinformatics). For using data mining to medical data firstly we have to know the algorithms or techniques of data mining. Usually there are two classifications of algorithms of data mining. Descriptive data mining for measuring the similarity between objects and identify the patterns in data so that a big data easily can understand. This includes clustering, association, summarization, and sequence discovery. Predictive data mining applies the methods to unpredicted data including classification, regression, time series analysis, and prediction (Yoo et al., 2012). To create an optimal result from a big data by data mining, there are 5 techniques those could apply, like; classify different data in different classes(classification analysis), using proper methods to identify relations between variables in database (association rule learning), detect or identify the observations that are failed to complete the expected requirements for having a place in the dataset (anomaly or outlier detection), clustering the observations by putting same characteristics observations in the same group or cluster and run analysis based on clustering which may help to find the degree of association between to objects and also helpful for making personal profile of patients (clustering analysis) and using of regression analysis to find out the dependency of variables and so on (regression analysis).
Recommendations
We recommend that it needs to increase the allocation of budget for health and also improve rural health care services and resources, revise and develop health policies, develop existing technologies and include new necessary technologies as people's expectations to ensure relatively quick and proper health care service. In order to solve the problem arising with medical treatment and to improve the quality of health care system data mining has potential impact on medical informatics.
Conclusions
In this study we discoursed about medical informatics and its different levels, status of Data Warehousing, Data mining and its technique in Bangladesh. From this study we established that various research work has been done on different aspects of medical informatics but the finding of these research work doesn't use in real life applications. Data formed in health care sector should be easily accessible for students and researchers as they can improve the quality of findings. And also Government should consider about these findings to apply these in relative sectors. | 4,939.2 | 2020-02-03T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
PREPARATION AND CHARACTERIZATION OF FICUS LACOR METALLIC PARTICLES BASED NANOGEL FOR WOUND HEALING ACTIVITY
.
INTRODUCTION
A wound is described as a detriment or destruction to the typical anatomy and physiology.Wounding, regardless of the cause or nature, injures the tissue as well as interrupts the nearby surrounding within it.Normal wound healing takes place in a dynamic and intricate way encompassing a sequence of coordinated events such as hemostasis, inflammation, proliferation and tissue remodelling stages [1].When normal wound healing fails as in persistent injuries that don't heal and would be extremely expensive and reduced quality of life in the society [2].As per a 2018 retrospective analysis study of Medicare enrollees revealed that 8.2 million people had wounds with or without infections.The estimated costs to Medicare for both acute and chronic wound therapy were between $28.1 billion and $96.8 billion [3].Looking into the therapy, wound healing treatment has gone through a great revolution.The various methods are used in treating wounds, such as debridement, skin substitutes, growth factors, wound dressings, gene therapy, stem cell therapy, antiseptics, antibacterial agents [4], traditional therapy-based herbal and animal-derived compounds, living organisms [5].
Another treatment option that is grabbing greater attention is nanotechnology.The study of incredibly small structures is known as nanotechnology.The preceding "Nano" is a term from Greek that refers to "dwarf" [6].Nanoparticles (NPs), which typically have dimensions between 1 and 100 nanometers (nm), are unique from their bulk counterparts.Their unique physicochemical, optical, and biological properties can be tailored to suit specific applications [7].By utilising nanomaterials, nanotechnology has opened a new chapter in the treatment of wound healing by offering strategies for accelerating wound healing.The drug could be produced at a nanoscale to function as a self-contained "carrier," or nanomaterials could be used as drug delivery vehicles [8].The primary types of nanomaterials utilized in wound therapy include scaffolds, coatings, nanocomposites, and nanoparticles.Because of its beneficial effects on treating and preventing bacterial infections, as well as speeding up wound healing, metallic nanoparticles are becoming a more popular kind of nanomaterial.Additional advantages include low frequency of dressing changes, ease of use, and a continuous moist wound environment [9].Silver NPs (AgNPs), gold NPs (AuNPs), and copper NPs (CuNPs) are examples of metal-based NPs with antimicrobial and wound-healing properties that have been reported.In addition, metal oxide NPs such zinc oxide NPs (ZnO NPs), titanium dioxide (TiO2), cerium oxide (CeO2), yttrium oxide (Y2O3), etc.Among these, silver nanoparticles draw a lot of attention due to their remarkable qualities, which include their large surface area to volume ratio, low tendency to develop resistance, and exceptional antimicrobial activity and wound healing properties.According to published reports, silver nanoparticles can regulate the release of anti-inflammatory cytokines and promote quick healing while minimizing scarring.Through keratinocyte proliferation, they also contribute to epidermal re-epithelization [10].The synthesis of nanoparticles has generally been accomplished through the use of three distinct methodologies: chemical, biological, and physical.To overcome the shortcomings of physical and chemical methods, biological methods have emerged as feasible options.Biologically mediated synthesis of nanoparticles is a simple, cost-effective, dependable, and environmentally friendly approach using various biological sources like bacteria, fungi, plant extracts, and small biomolecules like vitamins and amino acids [11].These biological sources contain active compounds, such as enzymes, proteins, polyphenols, flavonoids, and terpenoids, which can act as catalysing, reducing, stabilising, or capping agents for onestep synthesis [12].The other advantages of biological methods are the availability of a vast diversity of biological resources, a decreased time requirement, high density, stability, and the ready solubility of prepared nanoparticles in water.Moreover, biological methods allow for additional proficiency in the control of shape, size, and distribution of the produced nanoparticles by optimization of the synthesis methods, including the number of precursors, temperature, pH, and the amount of reducing and stabilising factors [11].
The synthesis of AgNPs was reported in several studies through biological methods using Turmeric extract [13], Arnebia nobilis root extract [14], Catharanthus roseus and Azadirachta indica extracts [15], Glucuronoxylan (GX) isolated from seeds of Mimosa pudica (MP) [16].Like the above plants, Ficus lacor is one such plant that is traditionally known for its great medicinal values.Ficus lacor Linn.as shown in fig. 1 is a large, fast-growing foliaceous, deciduous tree.It is about 20 metres tall and has a finely shaped crown.It is widely distributed throughout tropical and subtropical regions of the world.It is available globally in Southeast Asia, Australia, India, Myanmar, Bhutan, Nepal, and Burma/Indochine.Due to its diverse chemical composition, it has long been utilised as a treatment for a wide range of illnesses, including dysentery, hay fever, typhoid, ulcers, wounds, and gastric issues [17].Plaksha (Ficus lacor) is mentioned as having potential woundhealing properties in ancient literature, such as Bhavprakash nighantu, Charak Samhita, and Sushrut Samhita.Acharya Charaka has written about using it externally on a variety of wounds, including ulcers that don't heal.The majority of the time, wound infections on human skin are caused by aerobic, anaerobic, Gram+ve (S. pyogenes, S. aureus), Gram-ve (P.aeruginosa) microorganisms.
Fig. 1: Ficus lacor plant leaves
The antibacterial properties of F. lacor bark have been demonstrated against S. aureus and E. coli [18].Additionally, recently, pharmacological activities such as antioxidant, antiinflammatory, anti-diabetic, and anti-arthritic properties have been reported.Phytochemical screening demonstrated that Alkaloids, tannin, flavonoids, saponins, phenolic compounds, sterols, glycosides, coumarins, triterpenoids, amino acids, and carbohydrates are abundant in plants and these compounds are essential for the synthesis of silver nanoparticles [12,17].
As silver and Ficus lacor plants both are having good antimicrobial and wound healing properties, our objective is to prepare silver nanoparticles using Ficus lacor plant extract to enhance a woundhealing activity along with green synthesis benefits.So our aim of this study is to prepare a topical gel using the Ficus lacor-silver nanoparticles and evaluate its effectiveness in an animal excision wound model.
Collection of plant
The leaves of Ficus lacor were gathered in January 2022 from the local areas of Pune and an identity and authentication were obtained for the plant specimen by Dr. Rajashekaran, Taxonomist, Indian horticulture research centre, Bengaluru.
Preparation of plant extract
The freshly collected leaves were washed thoroughly in running water to take away the dirt and dust on the surface of the leaves.
Preparation of silver nanoparticles
A 1 mmol silver nitrate (AgNO3) solution is made by dissolving 0.1699g of AgNO3 in 1000 ml of water.5 different concentrations of plant aqueous and hydroalcoholic extracts were taken 1 ml, 2 ml, 3 ml, 4 ml, 5 ml, each mixed with 10 ml of 1 mmol AgNO3 solution.Synthesis of silver nanoparticles was carried out in an ambient temperature where the plant extract was added dropwise to the AgNO3 solution on the magnetic stirrer at 100 rpm, maintaining basic pH, room temperature.Then the mixed solution was incubated for 24 h at room temperature and kept in dark condition to prevent agglomeration.Later the formation of silver nanoparticles was indicated by the colour shift from yellow to dark brown.Then they were analysed using UV-visible spectroscopy, the concentration which gave the highest peak was considered for the larger preparation of nanoparticles.Then the same above procedures followed for the bulker preparation by adding 100 ml of aqueous extract to 250 ml of silver nitrate solution and 75 ml of hydroalcoholic extract to 250 ml of silver nitrate solution and obtained AgNPs were separated and cleaned by frequent centrifugation for 15 min at 5000 rpm.Then the liquid supernatant was disposed of, and pellets were dried and kept for additional inspection, as shown in fig. 3.
Visible colour confirmation
After 24 h of incubation, the solution of 1 mmol silver nitrate+5 different concentrations of plant extract (1, 2, 3, 4, 5 ml) turned to dark brown, indicating and confirming the formation of silver nanoparticles as shown in fig. 3.
UV-visible spectroscopy
The prepared nanoparticles were analysed using UV-spectrometer in the range of 350-700 nm.For aqueous extract, the (4 ml plant extract+10 ml 1 Mm AgNO3) sample gave maximum absorbance of 3.510 observed at wavelength 391.6 nm and for hydroalcoholic extract, the (3 ml plant extract+10 ml 1 Mm AgNO3) sample gave maximum absorbance of 4.00 observed at wavelength 398.8 nm and it remain constant for all higher concentrations.It confirmed that the peak for the sample is the characteristic of surface plasmon resonance of silver nanoparticles so this concentration was utilised for further study.
Particle size distribution analysis
The particle size distribution was measured using a Malvern Nano ZS 90 (Malvern Instruments, UK) after appropriate dilution with double distilled water.The mean particle size and distribution were measured based on photon correlation spectroscopy (PCS), dynamic light scattering, (DLS) technique, which is a powerful and versatile tool for estimating the particle size distribution of fine-particle materials ranging from a few nanometres to several micrometres.Light scattering was measured at 25 °C and with an angle of 90 °.
The particle size distribution is reported as a Polydispersity Index (PDI).The range for the PDI is from 0 to
Preparation of topical nano gel
The topical Ficus lacor-silver nanoparticle-based gel was prepared using the carbopol 940 as a base.Where three different formulations were prepared using the 3 different concentrations of carbopol 940.
• 100 ml of distilled water with 0.5 g of carbopol 940 is Gel A (5%) • 100 ml of distilled water with 0.75 g of carbopol 940 is Gel B (7.5%) • 100 ml of distilled water containing 1.0 g of carbopol 940 is Gel C (10%) The weighed amount of carbopol-940 was mixed with distilled water, gently stirred, and left for twenty-four hours to swell.After that, 12 g of glycerine is added, and triethanolamine is used to neutralise it until a transparent gel forms.The gel is then allowed to stabilise for 24 h at room temperature.Using slow mechanical mixing (25 rpm) for 10 min, 1 ml of 169.9 µm/ml ethanol extract Ficus lacor-silver nanoparticles are finally added to 50 g gel to create the final formulation.Later, the homogeneity, consistency, pH, viscosity, and color of all three formulations were assessed.
Physical examination and pH measurement
To ensure that the gel remained stable at the skin's pH of 5.5, the pH was also reassessed.Gel formulations were discovered to be homogenous, translucent, and odorous of ethanol.Table 6 illustrates that the gels' pH was below allowable bounds.
Rheological evaluation
Brookfield viscometer was used to determine the viscosity of prepared gels.The gels of 0.5, 0.75 and 1 gm were dissolved in 25 ml of purified water allowed to stand for 24 h then their viscosities were determined.The values showed that viscosity increased along with an increase in the concentration of carbopol-940 as shown in table 6.By the above evaluation of different formulations, the Gel A was found to give an ideal required properties so it was considered for further study.
Wound healing study in animal model
In the current investigation, albino wister rats weighing 150-180 g of either sex were employed.The Mallige College of Pharmacy's animal house provided the animals.The institutional animal ethical committee granted approval for the use of animals in experiments (Approval number: MCP 104/2021-22).According to the guidelines of the Committee for Control and Supervision of Experiments on Animals (CPCSEA), animals were kept in laboratory settings with regulated humidity and temperature.They will be provided with a standard water ad libitum.
Wound induction-excision wound model
The 12 rats were divided into 3 groups (standard, sample, gel base) each containing 4 rats.All rats were anaesthetised using ketamine hydrochloride (50 mg/kg i. p body weight) before creating a wound.Then the animal fur in the dorsum was shaved with a sterilised electric razor and after disinfection of the skin with Dettol liquid fullthickness, round wounds of 6 mm in diameter were excised under aseptic conditions with the help of a sterile dermal biopsy punch followed by marking the initial wound area by placing transparent polythene sheet then the area is measured by using millimetre based graph paper.Later the rat's wounds were bandaged to avoid infection.
Then, the formulations under study were applied daily to three groups until the complete healing.(Group 1-standard drug silver nitrate gel [Silverex ionic, Sun Pharmaceutical Industries Ltd, Mumbai, India], Group 2-Test drug Ficus lacor-based silver nano gel, Group 3-gel base without active ingredient).
In this study, the effect of Ficus lacor-based silver nanogel activity will be evaluated by measuring wound contraction and epithelialization period.The wound area is measured and marked on days 0, 2, 4, 6, and 8 for all groups.Wound contraction was measured every 2 nd day until complete wound healing and represented as a percentage of the healing wound area.Wound contraction was measured as percent contraction and was calculated using the below formula.
Excision wound model: wound healing activity
The effect of Ficus lacor-based silver nanogel activity on the excision wound model was evaluated by measuring wound contraction and epithelialization period.The wound area was measured and marked on days 0, 2, 4, 6, and 8 for all groups.Wound contraction was measured every 2 nd d till 8 th d and represented as a percentage of the healing wound area as shown in table 2 and the pictures wound healing shown in fig.The results showed that % wound contraction of Group 1 (standard) rats was 1.24% to 76.66%, Group 2 (sample) rats was 1.24% to 76.24%, Group 3 (gel base) rats was 0.41% to 58.33% from day 2 to day 8.
The obtained data indicated that Group 1 (treated with silver nitrate gel) and 2 (treated with Ficus lacor-silver nano gel) showed almost similar wound healing activity with a difference of 0.42% and the group 3 treated with gel base alone showed lesser wound healing activity than group 1 and 2.
DISCUSSION
Our study aimed to synthesise silver nanoparticles using Ficus lacor leaf extract to enhance wound healing with the advantages of green synthesis.We successfully prepared the nanogel and examined its effects on excision wounds.The production methods were meticulously selected from various studies, and our results demonstrated that the Ficus lacor-based silver nanoparticle gel was comparable in wound healing activity to a standard marketed product.Remarkably, Group 1 (treated with silver nitrate gel) and Group 2 (Ficus lacor-silver nanogel) exhibited almost identical wound healing activity, differing by just 0.42%.In contrast, Group 3, treated with gel base alone, showed inferior wound healing activity.The similarity in effectiveness between Ficus lacor silver nanoparticle gel and the standard product underscores the potential of green synthesis in wound healing solutions.
Whereas the studies conducted by Roy P Kr et al. on Picrasma javanica extract silver nanoparticles for wound healing activity [19] showed that the formulated plant extract nanogel exhibited enhanced wound healing activity than the standard used in study.
The Ficus lacor-silver nanogel offers advantages over silver nitrate, such as being environmentally friendly, biocompatible, and derived from a natural source, potentially reducing adverse effects when applied to wounds.The natural nanoparticles from Ficus lacor may have a broader spectrum of activity and reduced chances of bacterial resistance.Additionally, the plant extract may provide bioactive compounds for tissue regeneration and inflammation reduction.
The study has a few limitations that need to be mentioned.Due to budget constraints, comprehensive nanoparticle characterization using methods like FTIR, SEM, and XRD was not possible.The focus on a single wound model, the excision wound model, might not encompass the full range of wound healing effects.Histopathology parameters were not considered, which could have provided tissuelevel insights.Gender-specific analysis was omitted, potentially overlooking gender-based variations.Preliminary phytochemical screening was not conducted, missing valuable information about plant extract constituents.
In light of these limitations, future research can address these issues by thoroughly characterizing Ficus lacor-silver nanoparticles, exploring various formulations, and expanding the scope to include a wider range of wounds and experimental models.Comparative studies against existing herbal wound healing preparations can highlight unique advantages.These future directions hold promise for advancing our understanding of Ficus lacor-silver nanoparticles' applications in wound healing and related fields.
CONCLUSION
With passing years, the burden of wounds is increasing due to the rise in treatment costs and antimicrobial resistance.Within available treatment options, nanotechnology is found to have greater benefits, especially metallic nanoparticles prepared using biological methods have overcome many existing problems.Silver and Ficus lacor both are traditionally well-known for their ability to heal wounds, but the effect of green synthesised silver nanoparticles using the Ficus lacor on wound healing was not studied.Therefore in our present study, we demonstrated the wound healing activity of green synthesised Ficus lacor-silver nanoparticles topical gel on an animal excision wound model where the study drug Ficus lacor-silver nanogel exhibited similar wound healing activity in comparison with standard marketed product hence we can conclude that Ficus lacorsilver nanogel has the potential to be a good fit for wound healing.
Then, they were kept drying under shade for 15-20 d.Later the dried plant leaves were crushed and made into a coarse powder.Then the extraction was carried out through the maceration process, where the aqueous and hydro-alcoholic extracts were taken by adding 25 g of leaves powder into each 250 ml of sterile water and 70% alcohol in two separate conical flasks and kept for 3-4 d with occasional stirring.After 3-4 d the plant extract was filtered and the plant liquid filtrate was collected as shown in fig. 2 and stored at-20 degrees Celsius in a tightly closed vessel for future use.
1 .
The values closer to zero indicate the homogenous nature of the dispersion and those greater than 0.5 indicate the heterogeneous nature of the dispersion.The results were found as Z-Average of 329.3 d. nm, PDI of 0.485 and intercept of 0.887.
Table 2 : Showing size, % intensity, St Dev of ficus lacor-silver nanoparticles Peak number Size
In our studies, a PDI value of 0.485 indicates the homogenous nature of dispersion showing greater particle stability and the absence of aggregated nanoparticles.Z-Average value of 329.3 d nm indicating the average size of particle size distribution.Intercept of 0.887 indicating the data produced by the best system. | 4,111.8 | 2024-01-15T00:00:00.000 | [
"Medicine",
"Materials Science"
] |
Analytical Estimations of Limit Cycle Amplitude for Delay-differential Equations
The amplitude of limit cycles arising from Hopf bifurcation is estimated for nonlinear delay-differential equations by means of analytical formulas. An improved analytical estimation is introduced, which allows more accurate quantitative prediction of periodic solutions than the standard approach that formulates the amplitude as a square-root function of the bifurcation parameter. The improved estimation is based on special global properties of the system: the method can be applied if the limit cycle blows up and disappears at a certain value of the bifurcation parameter. As an illustrative example, the improved analytical formula is applied to the problem of stick balancing.
Introduction
Delay-differential equations (DDEs) appear in various fields of science.Examples include neural networks [18], human balancing [13], epidemiology [6,15], control theory [12,17], wheel shimmy [22], and machine tool vibrations [1,21], just to mention a few.Investigating the dynamics of systems with time delay is therefore an important field of research.The mathematical analysis of DDEs is complicated by the fact that their phase space is infinite dimensional.The infinite-dimensional nature often yields rich dynamics, including the possibility of periodic, quasi-periodic, and even chaotic solutions [2,21].
In this paper, we focus on the computation of periodic solutions (limit cycles) of nonlinear DDEs occurring via Hopf bifurcation.An analytical approach is proposed to give an accurate estimation of the amplitude of limit cycles.The whole concept is based on the center manifold reduction technique [3,8,11], by which a two-dimensional center subsystem can be decomposed from the infinite-dimensional DDE.With the two-dimensional subsystem at hand, the normal form theory of ordinary differential equations (ODEs) can be applied to deduce a polar-form equation, which determines the amplitude of the periodic solutions arising from the Hopf bifurcation.We remark that this polar form can be obtained by other methods as well (see e.g. the method of multiple scales [16]).Based on the polar-form equation, the standard analytical estimation of the limit cycle amplitude is given by a square-root function of the bifurcation parameter [7].Although the square-root function provides good approximation in the vicinity of the Hopf bifurcation, its accuracy may be insufficient if the bifurcation parameter is far from the bifurcation point.In this paper, a special hyperbolic function is proposed for the limit cycle amplitude by considering special global properties of the DDE.This way, a more accurate analytical prediction of large-amplitude periodic solutions of DDEs can be given.
We consider nonlinear autonomous DDEs of the form The evolution of the system is described by the shift defined in the Hilbert space H of continuously differentiable vector-valued functions.The integral on the right-hand side of Eq. (1.1), which accounts for the linear terms, is a Stieltjes integral with η : [−σ, 0] → R n×n being a matrix-valued function of bounded variation.The nonlinearities in the system are included in the continuous functional f : H → R n .In what follows, we assume that a Hopf bifurcation is associated with Eq. (1.1), and analyze the arising limit cycle.The rest of the paper is organized as follows.Section 2 gives an introduction to center manifold reduction and shows the polar-form equation determining the amplitude of the limit cycle.Section 3 proposes an analytical approach by which the amplitude can be approximated by a higher-order function of the bifurcation parameter.Section 4 demonstrates the results through an example: periodic solutions are computed for the single-degree-of-freedom model of an inverted pendulum subjected to a nonlinear feedback control.Section 5 summarizes the conclusions of the paper.
Flow on center manifold
In this section, we shortly revise a possible method to derive the polar-form equation that determines the amplitude of limit cycles arising from the Hopf bifurcation associated with Eq. (1.1).The analysis is based on the center manifold reduction technique discussed in [3,8,11], which allows us to characterize the long-term dynamics of infinite-dimensional timedelay systems undergoing Hopf bifurcation.The center manifold reduction uses an abstract representation of system (1.1) given by the operator differential equation (OpDE) form ẏt = Ay t + F (y t ) , (2.1) where A, F : H → H are the linear and the nonlinear operators, respectively, defined by We assume that system (2.1) has a trivial equilibrium y(t) ≡ 0.Then, the associated linear system is described by operator A, and the eigenvalues of A (called as the characteristic exponents) determine the stability and bifurcations of the equilibrium.Let p denote the bifurcation parameter and assume that a Hopf bifurcation takes place at p = p H .When the equilibrium of system (2.1) loses stability via Hopf bifurcation, a complex conjugate pair λ = ±iω of eigenvalues lies on the imaginary axis (i 2 = −1, ω > 0), whereas all the other infinitely many eigenvalues of operator A are located in the left half of the complex plane.Accordingly, a two-dimensional center manifold embedded in the infinite-dimensional phase space attracts the solutions of the differential equation.The long-term dynamics of the system is therefore determined by the flow on the two-dimensional center manifold.The flow on this manifold can be analyzed by decomposing the two-dimensional center subsystem.This procedure is referred to as center manifold reduction, and it can be done using the decomposition theorem given by Eqs.(3.10) and (3.11) in Chapter 7 of [8].
Here, we do not present all the details of the decomposition, we just remark that the center manifold reduction technique uses the operator A * , which is formally adjoint to operator A relative to a certain bilinear form.We rather focus on the analysis of the center subsystem.The center manifold reduction allows us to decouple the two-dimensional center subsystem from the infinite-dimensional time-delay system to obtain the form where o : R → H is a zero operator, O : H → R is a zero functional.Here, z 1 and z 2 denote the two local coordinates on the center manifold, whereas y tn represents the remaining infinitedimensional component of y t transverse to the center subspace.The first term on the righthand side of Eq. (2.4) is linear, while g 1 , g 2 : R × R × H → R and G : R × R × H → H contain all nonlinear terms.Parameter ω gives the approximate angular frequency of the arising periodic solutions.Note that the two-dimensional center subsystem described by z 1 and z 2 is decoupled only on the linear level from the remaining infinite-dimensional stable subsystem described by y tn .There is still a coupling through the nonlinear terms g 1 and g 2 .In order to fully decouple the two-dimensional center subsystem in the first two rows of Eq. (2.4), the dynamics must be restricted to the center manifold.This manifold is embedded in the infinite-dimensional phase space and can be given in the form y tn = y CM tn (z 1 , z 2 ).The expansion of the center manifold y CM tn (z 1 , z 2 ) into Taylor series in terms of z 1 and z 2 allows us to construct a thirdorder approximation of the decoupled center subsystem in the form That is, the nonlinearity is approximated only by quadratic and cubic terms, which is necessary and suitable for bifurcation analysis.Here, we do not investigate the effect of higher-order nonlinearities.
Finally, using near-identity transformation, the center subsystem (2.5) with quadratic and cubic nonlinearity can be transformed into a simple polar form.In the vicinity of the Hopf bifurcation taking place at p = p H , the polar-form system reads ṙ =r σ(p) + δ(p)r
.8)
The limit cycle is stable (attractive) if the bifurcation is supercritical and unstable (repelling) if it is subcritical.
Analysis of limit cycles
As indicated in Eq. (2.8), the limit cycle amplitude r is a function of the bifurcation parameter p.In this section, we propose methods to accurately estimate this function.The standard approach [7] is to expand the parameter σ(p) into Taylor series in terms of p up to first order and approximate the parameter δ(p) by constant (by the PLC).According to Eq. (2.8), this yields a linear function for r 2 (p) and hence a square-root function for the amplitude r(p): where prime indicates differentiation with respect to p, subscript H refers to the substitution p = p H , and we used σ H = 0.The calculation of σ H (and also the higher derivatives of σ) is possible via the implicit differentiation of the characteristic equation Ker(A − λI ) = {0} ⇔ D(λ) = 0 defining the characteristic exponents, where I is the unit operator and D(λ) is the characteristic function.After the implicit differentiation of D(λ) = 0 with respect to p, and after the substitution of p = p H and λ = ±iω, the derivative σ H = (λ H ) can be expressed analytically.The PLC δ H can also be calculated by a closed-form formula, see [9].From this point on, we refer to Eq. (3.1) as standard analytical estimation as indicated by the superscript of r.The standard estimation is accurate only for p ≈ p H , and it may become inaccurate if p lies farther from p H .The standard estimation can be improved if the following special global properties hold for the system.
1. Without loss of generality, let us assume that the point where the special properties hold is at p = 0. Note that p = 0 does not have to lie in the vicinity of p = p H .
2. Assume that r stan (0) exists and is real, but the time-delay system (1.1) in fact does not have a periodic solution for p = 0.That is, the periodic solution vanishes when changing p from p H to 0, which is not described by the standard analytical estimation (3.1).
3. Furthermore, assume that the Hopf bifurcation is unique in the sense that no other Hopf bifurcation takes place between 0 and p H .
4. Finally, assume that the actual amplitude r(p) is a bijective function of the bifurcation parameter.
These assumptions imply that the periodic solution can only vanish at p = 0 by fulfilling Now we propose two methods by which the standard analytical estimation (3.1) can be improved if the above assumptions hold.
The first method to improve the standard estimation is by means of the expansion of σ(p) and δ(p) into higher-order Taylor polynomials.It requires the calculation of the derivatives of σ and δ at p = p H .The higher derivatives of σ can be calculated by implicit differentiation in a similar manner to σ H , however, the derivatives of δ cannot easily be obtained.Still, we can use this approach by taking advantage of property (3.2).The simplest way to improve the analytical estimation is the approximation of both σ(p) and δ(p) by a linear Taylor polynomial: The unknown coefficient δ H can be determined so that property (3.2) is fulfilled.Equivalently, the denominator on the right-hand side of Eq. (3.3) must be zero for p = 0.This yields the coefficient δ H = δ H /p H and the improved analytical estimation The same result can be obtained by a slightly different approach.This second method also incorporates the global property (3.2), and fits a bifurcation curve to the limit cycle amplitude by considering the behavior away from the bifurcation (at r → ∞).However, now we approximate r 2 (p) by a hyperbola as rather than expanding σ(p) and δ(p) into polynomials.The core idea is that Eq. (3.5) is the simplest function, which ensures the automatic fulfillment of Eq. (3.2).The coefficients a 0 and a 1 can be calculated based on Eq. (2.8) and using σ H = 0, which give This way, Eqs.(3.5), (3.6), and (3.7) imply ) which yields the improved analytical estimation Note that estimation (3.10) is exactly the same as that in Eq. (3.4).
Applications
The idea of the improved analytical estimation (3.4) for the amplitude of limit cycles originates in the problem of regenerative machine tool vibrations in metal cutting [4,20], where Hopf bifurcation occurs due to the nonlinearity of the cutting force characteristics and gives rise to an unstable limit cycle in the vicinity of the linearly stable equilibrium.Correspondingly, an unsafe zone exists in the plane of the technological parameters, where the equilibrium is only locally (but not globally) stable.From practical point of view, it is important to avoid these unsafe parameters, therefore in [14] we used the above approach for the special case of machine tool vibrations to accurately estimate large-amplitude periodic solutions and to accurately compute the unsafe zone.The present paper generalizes the results of [14] for a class of DDEs and provides the theoretical background, as the approach is not restricted to the problem of machine tool vibrations.In the metal cutting example, the difference of the delayed and the actual variable appears in the governing equation.Similar expressions can be found for example in the Pyragas control strategy [17] and in the Duffing oscillator with delayed feedback [23].The method in Section 3 may help in the bifurcation analysis of these systems.As an additional example, now we demonstrate the application of formula (3.4) on a model of stick balancing: an inverted pendulum subjected to a nonlinear proportional-derivative (PD) controller is investigated.The equation of motion related to the single-degree-of-freedom model of an inverted pendulum with delayed PD control can be written in the form [10] Accordingly, the inverted pendulum is represented by an unstable second-order system (a > 0) on the left-hand side.The expression on the right-hand side is originated in the control force exerted by the PD controller in order to maintain the stick at its upright position (at the equilibrium x(t) ≡ 0).Parameters p and d are the proportional and the derivative control gains, respectively, whereas τ represents the delay in the control loop.The proportional term on the right-hand side has a cubic nonlinearity, which can be considered as a simplified smooth model of sensory dead zone [13].
Stability and bifurcation analysis of Eq. (4.1) was carried out for τ = 1, a = 0.1, and d = 1, using p as bifurcation parameter.The equilibrium x(t) ≡ 0 is linearly stable for a < p < p H .At p = a = 0.1, a fold bifurcation occurs, whereas at p = p H = 0.5983, a subcritical Hopf bifurcation takes place.Due to the subcritical Hopf bifurcation, an unstable limit cycle exists for 0 < p ≤ p H .The bifurcation scenario is shown in Fig. 4.1 (a).Solid line indicates the numerical result r num (p) for the limit cycle amplitude obtained by the continuation software DDE-Biftool [5].Here, r num stands for the half of the peak-to-peak amplitude of the nonharmonic periodic solution.According to numerical continuation, the limit cycle blows up at p = 0, which implies that property (3.2) holds and, along with this, the bijective property of r(p) and the uniqueness of the Hopf bifurcation for 0 < p ≤ p H is fulfilled.Therefore, the improved analytical estimation r impr (p) given by Eq. (3.4) can be computed as shown by the dash-dot line in the figure.The parameters σ H = 0.6976 and δ H = 0.3173 were determined numerically by DDE-Biftool, although they can also be derived analytically.Dashed line shows the standard analytical estimation r stan (p) given by Eq. (3.1).As shown in Fig. 4.1 (a), the standard analytical estimation is accurate and agrees well with the numerical results only in the vicinity of the Hopf bifurcation at p = p H .For zero proportional gain (p = 0), the nonlinearity on the right-hand side of Eq. (4.1) vanishes.Since the resulting linear system does not have a periodic solution, the limit cycle vanishes (blows up) at p = 0.The standard analytical estimation fails to capture this phenomenon as r stan (p) exists and is real at p = 0.In contrast, the improved analytical estimation captures the blowup phenomenon at p = 0, and is everywhere in a very good agreement with the numerical results: the dash-dot curve overlaps with the solid line.
Similarly, Fig. 4.1 (b) shows the bifurcation diagram with the same color scheme for a = 0.1 p = 0.5, d = 1, and with bifurcation parameter τ.The corresponding parameters are τ H = 1.0959, σ H = 0.6776, and δ H = 0.2668.Again, the periodic solution blows up at τ = 0, since the corresponding delay-free system does not have a limit cycle.The standard analytical estimation again fails to capture this phenomenon and the dashed curve deviates from the numerical bifurcation curve, while the improved analytical estimation agrees well with numerics.
Consequently, we can use the improved analytical estimation (3.4) to give an accurate analytical prediction of the unstable oscillations, even at large amplitudes.The unstable largeamplitude oscillations are important, because they determine the basin of attraction of the stable equilibrium and affect global stability.
Conclusions
In this paper, the analytical estimation of the amplitude of limit cycles arising from Hopf bifurcation was addressed for nonlinear DDEs.The standard analytical estimation (3.1) known from the literature describes the amplitude as a square-root function of the bifurcation parameter.Here, an improved analytical estimation (3.4) was proposed to estimate the amplitude more accurately for large-amplitude vibrations outside the vicinity of the bifurcation.The method uses the concept of center manifold reduction and the polar-form equation of the two-dimensional center subsystem.From the polar-form equation, the limit cycle amplitude is expressed as a special hyperbolic function of the bifurcation parameter, whose coefficients can be determined using some special global properties of the system.Namely, the limit cycle must disappear at a certain value of the bifurcation parameter, where the amplitude according to the standard analytical estimation exists and is real.The application of the method was demonstrated through the example of the inverted pendulum subjected to a nonlinear feedback control.
are functions of the bifurcation parameter p. Actually, σ(p) is the real and α(p) is the imaginary part of the eigenvalues λ = σ(p) ± iα(p) that cross the imaginary axis during the Hopf bifurcation.Consequently, at the Hopf bifurcation, i.e., at p = p H , the real part is zero: σ H := σ(p H ) = 0, and the imaginary part is equal to parameter ω: α(p H ) = ω.Besides, parameter δ(p) is related to the so-called Poincaré-Lyapunov constant (PLC): δ H := δ(p H ). The criticality of the Hopf bifurcation is determined by the sign of the PLC: the bifurcation is subcritical for δ H > 0 and supercritical for δ H < 0. A limit cycle arising from the Hopf bifurcation is associated with the nontrivial equilibrium of Eq. (2.6): | 4,303.6 | 2016-01-01T00:00:00.000 | [
"Mathematics"
] |
A dual-criterion optimization for sulfonated asphalt-assisted aqueous phase exfoliation to prepare graphene suitable for protective coating of aluminium
In this work, we aimed to prepare graphene with high concentration (CG) or quality (indicated by ID/IG) by sonication-assisted exfoliation in aqueous sulfonated asphalt (SAS) solution. The highest CG can reach 0.181 mg ml−1, while the smallest ID/IG is only 0.331 in the investigated range. Meanwhile, we observed that CG and ID/IG changed in the opposite trend with the increase of SAS concentration, and reached their extreme values simultaneously. This was attributed to SAS’s agglomeration-induced redistribution of total energy absorbed by graphite between exfoliation and crushing. The graphene size was mainly within 100–400 nm and most of layer number was <5. The stabilization of graphene dispersion comes from the electrostatic repulsion between negatively charged SAS groups adsorbed on the graphene sheets. As the protective coating of aluminium, the graphene with relative small (for H2SO4 solution) or large (for NaCl solution) size, relative high defect content and annealing at proper temperature can improve the anticorrosion performance of graphene.
Introduction
All the time, graphene (G) has attracted extensive attention since its birth due to its unique hexagonal honeycomb structure and the resultant excellent physical and chemical properties [1][2][3]. After more than ten years of development, with the application of graphene in more and more fields, the demand for graphene is increasing day by day. However, up to date, the price of high-quality graphene is still expensive, and the preparation process is accompanied by the problems of pollution and safety [4]. As a result, the preparation technology of high-quality graphene with cost saving, scalability, and non-pollution increasingly becomes a research focus [1,5]. So far, the preparation methods of graphene mainly includes micromechanical cleavage [6], liquid phase exfoliation (LPE) [7], chemical vapor deposition [8], reduction of graphene oxide [9,10], etc Among these methods, LPE is thought to be a promising method for industrialization due to its simple equipment, scalability, low cost, and high quality of obtained graphene [5,11,12]. In the LPE, graphite (Gi) is exfoliated into graphene in liquid media under ultrasound, microwave, or shear force, etc [5] As far as we know, three liquid media have been used for exfoliation, namely water, organic solvents, and ionic liquids [5,13]. Obviously, water is more advantageous in large-scale preparation because of its environment-friendly and costeffective features. However, due to super hydrophobicity of graphene, the addition of dispersants/stabilizers into water is indispensable in the exfoliation process. Many substances have been used as dispersants, including graphene oxide [14], carbon dots [15], alkaline lignin [12], fulvic acid [5], various surfactants [16], and some biomolecules such as nucleotides, DNA, RNA, proteins/peptides, polysaccharides, plant extracts, and bilesalts [17]. However, relatively low graphene concentrations [5,12,16], high cost or/and toxic stabilizers [14][15][16][17] hinder the industrialization of the method. Thus, it is valuable to explore new dispersants to assist the exfoliation of graphite into graphene in aqueous solution.
Asphalt is a black-brown complex mixture, mainly classified as coal tar asphalt, petroleum asphalt, and natural asphalt depending on their source [18,19]. Asphalt is mainly composed of alkane, cycloalkane, aromatic hydrocarbon, polycyclic aromatic hydrocarbon, and heterocyclic hydrocarbons containing sulfur/nitrogen [19]. The aromatic groups in asphalt make it possible to form ,π-π, interactions with graphene. Unfortunately, asphalt is insoluble in water and cannot be used as dispersants. Yet sulfonated asphalt (SAS) prepared from asphalt is soluble in water and is suitable for the exfoliation of graphite. In industry, SAS, mainly as a component of drilling fluid, has been produced on a large scale [20], which makes SAS economically competitive as dispersants.
For the large-scale production of graphene by LPE method, high graphene concentration (C G ) is beneficial to improve the economy of process, and the defect content (indicated by I D /I G , where I D and I G are peak intensities of D and G peaks in Raman spectrum) affects greatly its performance in application. Therefore, it is desire to prepare graphene with the highest possible concentration and defect contents suitable for specific applications. Therefore, based on C G and I D /I G to obtain optimized operation parameters (called as double-criterion optimization), the production of graphene, which is beneficial economically and suitable for specific applications, can be realized.
In this work, graphene was prepared by SAS-assisted aqueous-phase exfoliation of graphite under sonication. First, the operation parameters, i.e., subfraction of SAS, power (P), concentration of SAS (C SAS ), and sonication time (t s ), were optimized based on C G or I D /I G . Meanwhile, the mechanism of the effect of C SAS on C G and I D /I G (figure 1) was explored to guide graphene production. Then the quality of as-exfoliated graphene was evaluated, and the stability mechanism of graphene dispersion was elucidated. Finally, the as-exfoliated graphene was used as anticorrosion coating of aluminium, and the effects of graphene size, defects and annealing on anticorrosion performance were investigated in H 2 SO 4 or NaCl solutions.
2.2. Subfraction preparation of SAS with different molecular weight SAS with different molecular weight has different interaction with graphene. Therefore, SAS was separated into several subfractions with different molecular weights by centrifugation. In a typical procedure, 200 ml of aqueous SAS solution with a concentration of 10 mg ml −1 was sonicated in water bath for 2 h. After a two-step centrifugation, the obtained precipitate was dried for the next research.
Preparation of graphene dispersion and powder
Before preparation, graphite is first pre-treated to remove possible impurities and very small graphite sheets by 30 min water-bath sonication, 600 rpm centrifugation (Relative centrifugal force (RCF): 5013) for 30 min, the Figure 1. Schematic illustration of mechanism in of the effect of C SAS on C G and I D /I G in sonication-induced exfoliation. Note: the upward arrow with a short horizontal line indicates an increase, and that with two short horizontal lines indicates a significant increase. resultant precipitate washed with deionized water for three times, and dried in turn for use below. The preparation procedure of graphene dispersion and powder is similar to those described in our previous work [5]. In a typical operation, a certain amount of graphite and SAS were added into 200 ml deionized water in a 250 ml beaker, then an ultrasonic cell disruptor (Model: ULD43-1200, 1 cm 2 Ti horn, Ningbo Sincere Ultrasonic Equipment Technology Co., Ltd, China) was used to dispose the mixture at 20 kHz and a certain amplitude with 1200 W and impulse 2 s on and 2 s off for a certain period of time, and the treated mixture was laid aside for 12 h. Then, the liquid mixture was centrifuged at 2 and 5 krpm (the corresponding RCF is 557 g and 3481 g for the used centrifuge) for 1 h, respectively. The obtained sediment was re-dispersed in deionized water and recentrifuged at 5 krpm to wash out residual SAS triple. The final sediment was freeze-dried to obtain black graphene powder.
Determination of graphene concentration in dispersion
Graphene concentration in dispersion can be determined through the Lambert-Beer law: A=α×C G ×l, where A is absorbance, α is absorption coefficient, C G is graphene concentration, and l is the optical path length (0.01 m, here) [21]. The value of α was determined from the slope of linear relation of A and C G (figure S1 is available online at stacks.iop.org/MRX/6/1250j2/mmedia), from which the absorption coefficient was 2396 l g −1 m −1 . It should be pointed out that the content of residual SAS in graphene need to be deducted from the mass of graphene powder due to the unavoidable residue of SAS in the graphene powder. The content of residual SAS in the graphene powder can be determined by an annealing method (see below) proposed in our previous work [5].
Preparation of corrosion samples
The as-exfoliated graphene was used as an anticorrosion coating of aluminium. The preparation of corrosion samples is similar to our previous work [5] and described simply as follows. First, a square aluminium foil with a side length of 1.5 cm was polished with 500, 1500, and 2000 mesh sandpapers, respectively, then sonicated in ethanol and water in turn for 10 min to remove debris from the surface, and dried. Next, 0.1 mg ml −1 graphene dispersion was prepared by bath sonication for 15 min 10 ml of the dispersion was coated uniformly on both sides of the aluminium foil by a spin-coating apparatus (Model: KW-4A, Institute of Microelectronics, Chinese Academy of Sciences, China). The spin coating was done at 500 rpm for 10 s and subsequently at 1000 rpm for 30 s. The spin-coated samples were dried in vacuum at 80°C for 12 h to remove residual water. Some coated samples were annealed at 200, 400, 600°C, respectively, for 2 h. All the graphene-coated aluminium foils were used for electrochemical measurements to test their anticorrosion performance. Each experiment was done in parallel three times, and their average values were recorded.
Electrochemical measurements
An open three-electrode system was used to test the corrosion performance of the graphene-coated aluminium foils. A calomel electrode, a platinum wire, and a graphene-coated Al foil were used as the reference, counter, and work electrode, respectively. 0.5 M H 2 SO 4 or 0.6 M NaCl was used as the electrolytic solution. An electrochemical analyzer (model: CHI660d, CH Instruments, Inc., USA) was used to measure potentiostatic polarization curves at room temperature. The open-circuit potential was first measured, and when the opencircuit potential was no longer changed with time, the polarization curves of dynamic potentials were measured. Three parallel measurements for each electrode were done and their average values were recorded.
Characterization
Ultraviolet visible absorption spectra were recorded in a spectrometer (Model: TU-1900, Beijing Persee General Instrument Co., Ltd, China). The incident light is set at 660 nm. Graphene dispersion was diluted by 5-10 times to avoid exceeding the instrument range. The size and Zeta potential of SAS aqueous dispersion were measured by the dynamic light scattering (DLS) method with an instrument (Model: Zetasizer Nano ZS, Malvern Instruments Co. Ltd, UK).
TEM and SEM images were obtained by a transmission electron microscope (Model: JEM-2100, Electronic Co., Ltd, Japan) and a field emission scanning electron microscope (Model: SU8010, Hitachi Hi-Tech Co., Ltd Japan), respectively. AFM images were obtained by an atomic force microscope (Model: MultiMode 8, Bruker Daltonics Inc., USA).
X-ray photoelectron spectroscopy (XPS) measurements were conducted with a spectrometer (Model: ESCFAAB250Xi, Thermo Fisher Scientific Co., Ltd, USA). Raman spectra were obtained by a Raman confocal spectrometer (Model: HORIBA XploRA ONE, with 532 nm excitation wavelength, HORIBA Jobin Yvon, France). The preparation of graphene films for Raman spectra were similar to that described in the reference [22]. The 100-fold diluted graphene dispersion was first filtered through a mixed cellulose membrane (pore diameter: 100 nm) and then the membrane removed by dissolving in acetone. It should be noted that the thickness of graphene films should be limited to a maximum of 20 nm to avoid large aggregation [23]. Fluorescence emission spectra were obtained by a fluorescence spectrometer (model: Perkin Elmer LS-55, American PerkinElmer Ltd, USA) with an excitation wavelength of 660 nm. Fourier transform infrared (FTIR) spectra were obtained by an infrared spectrometer (Model: Nexus 670, Thermo Nicolet Co. Ltd, USA).
The sample for the measurement of conductivity were prepared by compressing freeze-dried graphene powder into a wafer, and its conductivity was measured by the linear four-probe method with an instrument (Model: ST2253, Suzhou Jingge Electronic Co., Ltd, China). Resistivity was determined directly by this instrument and the conductivity is calculated by the following equation: σ=1/(R s d), where R s is the sheet resistance (Ω sq −1 ) and d is the wafer thickness (m).
Optimization of exfoliation process
In the optimization, the absorbance of the supernatants obtained after 2 or 5 krpm centrifugation were measured, respectively. Therefore, the graphene concentration (C G ) can be calculated based on Lambert-Beer law with the difference of two absorbance values above. Furthermore, the I D /I G of graphene was determined by Raman spectra.
Here, based on C G and I D /I G , four parameters, i.e., subfractions of SASx (x=1, 2, 3, 4 and 5) obtained by two-step centrifugation (table 1), sonication power (P), SAS concentration (C SAS ), and sonication time (t s ) were optimized. Incidentally, for the subfraction SAS1-5, their molecular weight arranges in a descending order. It should be pointed out that the subfractions with molecular weight higher than SAS1 and smaller than SAS5 were discarded and not studied due to the very small amount.
The effect of above four parameters on C G and I D /I G are shown sequentially in figures 2(a)-(d). Obviously, from figure 2(a), C G reaches the highest value with SAS1 as the dispersant. This can be attributed to the highest molecular weight of SAS1 leading to the strongest interaction of SAS1 with graphene. Interestingly, the lowest values of I D /I G , meaning the lowest defect content, were also obtained with SAS1 as the dispersant. This may be the reason of the best protection against aggregation during sonication process due to the strongest interaction of SAS1 with graphene. It has been noted that the errors of I D /I G are relatively large, but it does not affect the conclusion. In the follow optimization, SAS1 is used as the dispersant for further research and is still marked as SAS for the sake of convenience.
From figure 2(b), when the P exceeds 480 W, the increase of C G becomes slow, but the I D /I G values increase sharply. Therefore, the optimal P was fixed at 480 W for subsequent study.
From figure 2(c), when C SAS =0.3 mg ml −1 , C G reaches the highest value. The similar results were also found in the case of other dispersants [5,12]. At the same time, the I D /I G reaches the minimum at the same C SAS . Therefore, an interesting phenomenon can be observed that C G and I D /I G values change in opposite trend with C SAS , with a maximum for C G and a minimum for I D /I G at the same time.
The observed phenomenon can be explicated as follows. Firstly, we assume that the total energy absorbed by graphite is a constant under a fixed P and t s in exfoliation processes. When C SAS was very low (<0.3 mg ml −1 ), the amount of SAS available for exfoliation (namely forming complex with graphene, see below) was very small, resulting in a low concentration of graphene. In another word, in the total energy absorbed by the graphite, only a small proportion was used for exfoliation and a large proportion was used for making defects (leading to high I D /I G ). With the increase of C SAS , more SAS would take part in the exfoliation, and therefore the energy used for exfoliation increased and the energy used for making defects decreased accordingly (decreased I D /I G ). When Table 1. Centrifugation speeds and the corresponding relative centrifugal force (RCF) in the two-step centrifugation to produce subfractions of SAS (SASx).
Low speed centrifugation
High speed centrifugation C SAS increased up to 0.3 mg ml −1 , C G reached a maximum value. At this time, the highest proportion of energy was used for exfoliation and the lowest proportion of energy was used for making defects, leading to the lowest I D /I G . As C SAS increased continuously to exceeding 0.3 mg ml −1 , SAS molecules would agglomerate into larger particles (figure S2), which is disadvantageous to exfoliation due to the decreased interaction with graphene [5,12]. Thus, the proportion of absorbed total energy for exfoliation was decreased and proportion for defects was increased accordingly. The phenomenon has also been observed in the graphene exfoliation process with lignin or sodium humate as the dispersant (figures S3-4). Therefore, this should be a general law in liquid phase exfoliation by sonication. This mechanism of the effect of C SAS on C G and I D /I G in sonication-induced exfoliation is illustrated in figure 1.
Finally, the effect of t s on exfoliation is shown in figure 2(d). With the increase of t s , both C G and I D /I G increased. The highest C G reached 0.181±0.004 mg mL −1 at t s = 5 h. It should be pointed out that sonication time of exceeding 5 h is not suitable for large-scale preparation due to reduced efficiency [24]. Further, the I D /I G values increased faster when t s >2 h. This may be attributed to the fact that when t s <2 h and t s >2 h, different types of graphene defects have been produced. When t s <2 h, edge defects are dominant, and when t s >2 h, bulk defects are dominant [25]. Therefore, if the graphene with less defect contents (I D /I G =0.331±0.021) was required, the sonication time t s =2 h should be a better choice.
It should be noted that each experiment of exfoliation was conducted triple and the average values were reported with the maximum relative errors less than 10% for C G and I D /I G .
As a result, under P=480 W, C SAS =0.3 mg ml −1 , C Gi =15 mg ml −1 , the I D /I G value is as low as 0.331±0.021 (C G =0.103 mg ml −1 ) with t s =2 h, and the C G value can reach 0.181±0.004 mg ml −1 (I D /I G =0.410) with t s =5 h. Therefore, t s at 2 or 5 h are more suitable for low-defect or high-concentration production of graphene, respectively.
In addition, the C G of the supernatant after 2 krpm centrifugation was 0.138 mg mL −1 (t s =2.0 h) or 0.235 mg mL −1 (t s =5.0 h), respectively. Therefore, our graphene accounted to 74.6 % (t s =2.0 h) and 77.0 % (t s =5.0 h) of full-size graphene, respectively. Incidentally, due to less sonication time used in our work, the concentration of our graphene is slightly lower than some literature values [5,26]. In the following study, the graphene obtained under the optimization conditions (P=480 W, C SAS =0.3 mg ml −1 , and t s = 2 h) was used unless otherwise stated.
It should be noted that the yield of our graphene is low (0.68% with t s =2 h). Recycling of sediment graphite for exfoliation is an effective method to raise the yield to 3.6% with five-time cycles (see Supplementary Material: 2. Re-cycling of sediment graphite).
Characterization of graphene
As is known to all, the size, layer number, and defect content of graphene would affect its performance in applications. Thus, the estimation of the size, layer number, and defect content of graphene is essential. The size of graphene can be determined by SEM, TEM, and AFM; the layer number can be estimated by AFM, TEM, and Raman spectra; and the defect content can be evaluated by the ratio I D /I G in Raman spectra. Furthermore, the content of residual dispersant in graphene is an important index to affect graphene quality because it affects performance of the graphene in use [5,12]. Figures 3(a)-(c) shows typical SEM, TEM, and AFM images of the graphene, respectively. From these images, we can observe that the thickness of the graphene is very thin, and the size is within a few hundred nanometers, which is much smaller than that of its precursor graphite (44 μm). The greatly size reduction of graphene may be attributed to the effect of sonication-induced crushing [27]. SEAD pattern of TEM (inset of figure 3(b)) shows the characteristic of circular regular hexagon pattern with uniform distribution and the same bright diffraction spots at the innermost layer and the second inner layer, indicating a typical characteristic of monolayer graphene sheet with good crystal [7,12].
In the AFM images with their height profiles shown in figure 3(c) and figure S5, the graphene thickness of (0.7-1.4) nm, (1.5-1.8) nm, (2.4-2.9) nm, (3.2-3.8) nm, and (4.0-4.2) nm marked with (1)-(3) may be regarded as a monolayer to 5 layers, respectively [5,21]. This existence of monolayer is consistent with the result of TEM. To determine the size and layer number distribution of the graphene, we randomly selected 238 graphene sheets obtained under t s =2 and 5 h in the AFM images as the statistical samples. The resultant normalized histograms of the length, width, and layer number distribution are shown in figure S6. From these histograms, the length and width was mainly in the range of 100-400 nm, and the proportion of the graphene with 1-4 layers reached 98% for t s =2 or 5 h.
Raman spectroscopy is a versatile tool for studying the properties of graphene [28,29]. It can provide information about not only the defect content but also the number of layers per flake [28]. Raman spectra of graphene are characterized by a D-band (∼1350 cm −1 ), a G-band (∼1582 cm −1 ) and a 2D-band (∼2700 cm −1 ) [21]. In Raman spectra, the intensity ratio of D-band and G-band (I D /I G ) can be used to characterize the defect content of graphene, as illustrated in Subsection 3.1; and the position and shape of 2D peak can be used to evaluate the layers number of graphene [14]. Raman spectra of the graphene obtained in optimization condition (t s =2 or 5 h) are shown in figure 4(a). Spectra of graphene oxide (GO), reduced GO (RGO), and raw Gi are also provided in figure 4(a) for comparison. In the spectra of our graphene, the existence of 2D peak indicates that the quality of our graphene was much better than that of the GO and RGO (for GO and RGO, no 2D peak was observed) [30]. The 2D-band of less than 2700 cm −1 (532 nm laser excitation) with the symmetrical shape indicates that the graphene obtained was few-layer (<5 layers) [14], consistent with the results of AFM. The ratio of I D /I G for graphite, graphene obtained for t s =2 h and 5 h, GO, and RGO was 0.083, 0.331, 0.410, 0.94, and 1.10, respectively. The much lower I D /I G values of the graphene for 2 and 5 h than those of GO and RGO indicates that the defect content of our graphene was much lower than that of the graphene obtained by chemical method (such as Hummers method and its various improved methods) [5,12]. Furthermore, the lower I D /I G value of the graphene obtained for t s =2 h than that for t s =5 h shows that a longer t s would produce more defects [26]. It should be pointed out that the I D /I G value was an average of ten parallel measurements of freezedried samples, with a relative deviation of less than 10%.
Besides the layer number, size, and defect content of graphene, the determination for the content of residual dispersants in graphene is also an important aspect of characterization. The residual dispersant in graphene is difficult to be removed, and its content depends on the type of dispersants and the separation procedure of graphite, sulfonate asphalt, and graphene from dispersions [7,17,21]. Figure 4(b) shows the C1s XPS spectra of graphite, sulfonate asphalt, and graphene, revealing the existence of several types of carbon: C=O, C-S/N/O, and C-H in the prepared graphene [12,30,[32][33][34]. Compared with the C1s spectra of graphite and SAS, the graphene sample had several types of carbon derived from SAS. This proves the existence of SAS in the graphene sample. Here, the content of residual SAS in graphene samples was determined by the annealing method proposed in our previous work [5]. According to the weight change of SAS, graphene, and graphite before and after the annealing at 600°C for 2 h, the content of residual SAS in the graphene can be calculated to be 11.4 wt%, which is similar to that of other dispersant residues in the graphene obtained by the filtration method [5,7,21,35,36].
Conductivity is also an important property of reflecting the quality of graphene. The conductivity of our graphene is 10134 S·m −1 , which is one of the highest conductivities for the graphene prepared by the LPE method [5,13]. The conductivity of the graphene by annealing at 600°C for 2 h under N 2 atmosphere can increase to 23781 S m −1 , which is among the highest values [13]. The graphitization of SAS molecules absorbed on the surface of graphene at 600 ºC, through the pyrolysis of oxygen/sulfur-containing groups (as indicated in figure S7), may be a reason for the great increase of conductivity.
Exfoliation and stabilization mechanism
The weak alkalinity (pH=8.2) and negative zeta potential (ζ=−43.3 mV for 0.3 mg ml −1 SAS solution) of aqueous solution of SAS indicated that SAS was dissociated into large negatively charged ions and counter medium ions in water. The large negatively charged ions of SAS play an important role in the stabilization of graphene dispersion.
The exfoliation process of graphite in aqueous SAS solution can be described as follows: First of all, sonication leads to a cavitation of water, i.e. a large number of small bubbles occur, grow up or suddenly burst. The resulting strong impact force on graphite would enlarge the spacing in graphite layer instantaneously. In the meantime, some negatively charged SAS ions enter the interlayers and are adsorbed on the surface of the graphene sheets due to their strong interactions, thus isolating the graphite layers on both sides. Therefore, the graphite layers cannot restore even if the ultrasound stops. Furthermore, the distance of graphite layers further increases due to the electrostatic repulsion between the negatively charged SAS ions adsorbed on the graphite layers, resulting in the complete exfoliation of adjacent graphite layers. Finally, the negatively charged graphene sheets continually attract cations in aqueous solution to form the stable electric double layer. The exfoliation and stability mechanism of graphite into graphene in aqueous SAS solution are also supported by the results reported in the literature [5,12,21].
Whether the above mechanism works or not depends on the formation of the complex of SAS with graphene (G-SAS). Only when SAS and graphene form a complex, there be energy transfer between them, leading to the graphene negatively charged. The formation of the G-SAS complex can be confirmed by their fluorescence and infrared spectra in figure 5. Figure 5(a) shows that compared with pure SAS, the fluorescence of the SAS adsorbed on the surface of graphene (G-SAS) was quenched. This proves that there is an energy or electron transfer between graphene and SAS, which is a strong evidence for the formation of G-SAS complex [12,37]. The red shift of characteristic absorption peaks of SAS absorbed on the surface of graphene (see figure 5(b)) also provides an evidence for the formation of the complex. For SAS, there are four absorption peaks at 536.1, 626.8, 1051.0, and 1130.0 cm −1 derived from sulfonic acid group [38]. Yet for the residual SAS absorbed on the surface of graphene, there are similar peaks at near the four locations (503.3, 624.8, 1041.0, and 1028.0 cm −1 ), but all of them have a red shift, thus also confirming the formation of G-SAS complex.
In addition, the stability of the G-SAS complex was indicated by its high zeta potential (ζ=−38 mV) in the dispersion, whose absolute value exceeds the accepted stable value (|ζ|=25 mV). After a month, the zeta potential was kept to ζ=−36 mV, further also confirming the good long-term stability of the graphene dispersion.
Effect of graphene quality on the anticorrosion of aluminium
Aluminium is one of the most widely used metals, but it is also one of the metals with the lowest electrode potential, which makes it easier to be corroded in applications [39]. Therefore, the study of aluminium anticorrosion is interesting in practical application.
One of the most widely used anticorrosion methods of aluminium is to coat its surface with a layer of material to isolate it from the environment. As far as we know, many materials have been used to coat Al surface for anticorrosion, such as metals (Au, Pt, Ni, Ti, and their alloy) [40,41], polymers and their mixtures with carbon materials (e.g. carbon nano-tube and graphene) [42,43], and stearic acid-modified ZnO nanoplates [44]. Among all these materials, graphene is a good candidate due to ultrathin coating, excellent thermal stability, good chemical inertia, high mechanical strength, etc [45] Graphene oxide (GO) [45], polyvinyl alcohol-reduced GO (PVA-rGO) complex [46], and monolayer graphene prepared by the CVD method [6] have also been successfully used as anticorrosion coating to protect aluminium effectively.
In our previous work, we used the graphene obtained by the fulvic acid-assisted exfoliation as an anticorrosion coating of Al [5]. In this work, with the prepared graphene as an anticorrosion coating of Al, we studied systematically the effect of the size, defect content of graphene and the annealing on the anticorrosion of Al. Herein, the electrochemical measurements were done in 0.5 M H 2 SO 4 and 0.6 M NaCl aqueous solutions, simulating the electrolytes in lead-acid battery and seawater, respectively. It should be noted that each experiment of anticorrosion was performed triple and the average values were recorded.
It should be noted that, when Al foils contact with water saturated with oxygen, hydroxyl groups are formed [46]. When the graphene was spin-coated onto the Al surface, the oxygen-containing functional groups of residual SAS would form covalent bonds with hydroxyl groups. Therefore, the existence of the residual SAS in the graphene is beneficial for improving anticorrosion.
Before studying the effect of graphene quality and the annealing on aluminium anticorrosion, the graphene materials and corrosive samples were named for the convenience of the discussion below. In the markers of G−t s −(l + h) or a-G(T a )−t s −(l + h), G and a-G(T a ) refer to graphene and annealed graphene at T a (°C), and (l + h) refers to the low and high centrifugation speed (krpm) in two-step centrifugation (see subsection 2.3 above). The markers of Al/G−t s −(l + h) or Al/a-G(T a )−t s −(l + h) refer to the corresponding graphene-coated Al foil samples.
For the surface of Al/G−2−(2+5) sample, the representative SEM and AFM 3D images with scanning areas of 2×2 μm 2 and 10×10 μm 2 were shown in figures S8(a)-(c). From figure S8(a), the whole surface has been covered completely, indicating the effectivity of the spin-coated method. From the figures S8(b)-(c), the average roughness (R a ) and root-mean-square roughness (R q ) of the scanning areas 2×2 μm 2 or 10×10 μm 2 were 5.14 and 3.86 nm, or 50.7 and 36.1 nm, respectively. Compared with the SEM image, the 3D images of AFM were more intuitive to show the corresponding morphology. It should be pointed out that the great fluctuation of the surface was attributed to the grooves caused by polishing.
First, two graphene materials of G−2−(2+3.5) and G−2−(3.5+5) were used to study the effect of the graphene size on Al anticorrosion. Tafel curves of Al/G−2−(2+3.5) and Al/G−2−(3.5+5) or Al/G−5 ) [47,48]. The coating formed by the graphene with smaller size would provide more ion channels and thus is more likely to be pierced by smaller Cl − instead of larger SO 4 2− . As indicated at subsection 3.1, the defect content of G−2−(2+5) is smaller than that of G−5−(2+5) (their I D /I G is 0.086 and 0.123, respectively). Here, the graphene samples of G−2−(2+5) or G−5−(2+5) were used as the corrosion inhibitor to investigate the effect of the defect content on the anticorrosion of Al. [25,49]. This may be why the corrosion resistance of Al foils coated by the higher defect content graphene was improved.
It should be pointed out that the effect of size and defect content of graphene on the corrosion resistance of coated aluminium cannot be arbitrarily extended to a wider range of size and defect content. If the size and defect content are far beyond those of our graphene, the conclusion needs to be re-validated.
Next, the effect of the annealing temperature on the corrosion resistance was investigated. Here, G−2 −(2+5) or G−5−(2+5) were used as the coating to investigate the effect of the annealing on the corrosion resistance of Al. Tafel curves of the graphene-coated Al foils annealed at 200, 400, and 600°C for 2 h are shown in figure 7, and the relative corrosion parameters are also listed in table 2. With the increase of annealing temperature, the j corr for the used two materials increased in 0.5 M H 2 SO 4 solution, and the j corr for the used two materials decreased at first and then increased, with a minimum at 200°C in 0.6 M NaCl solution. The effect of the annealing on the corrosion current density can be explained as follows.
First of all, we noted that the high temperature of annealing led to SAS decomposition, while the graphene didn't. The graphene (i.e. G-SAS complex) attached to the surface of Al goes through two changes with the increase of annealing temperature. At low temperature range (<230°C, see TG curve of SAS in figure S7), the trace bound water and/or absorbed small gas molecules in the G-SAS complex would evaporate to remove. The Table 2. Corrosion potentials (E corr ) and corrosion current densities ( j corr )) in 0.5 M H 2 SO 4 and 0.6 M NaCl. Notes: Al/G−t s −(l−h) or Al/a−G(T a )−t s −(l + h) refer to the unannealed or annealed graphene-coated Al foils samples, respectively, in which T a , t s , l and h refer to annealing temperature, sonication time, low and high centrifugal speeds (unit: krpm) in the preparation of graphene, respectively. Based on three parallel experiments, the maximum relative errors of corrosion potential and corrosion current are less than 10%. evaporation would not cause desorption of the graphene from the Al surface, rather, would cause the graphene to be adhered more closely to the Al surface due to the remove of trace water and gas molecules. This is beneficial to improve the corrosion resistance (Factor I). As the temperature continues to increase (>230°C), SAS would be pyrolyzed (figure S7), resulting in the escape of a large number of small gas molecules from the gap between the graphene and Al, which were produced due to these reactions of the cyclization, aromatization, and/or polycondensation of SAS pyrolysis products [50]. Unlike the case in low temperature, in high temperature the escape of a large number of small gas molecules would disrupt the adhesion of the graphene to Al surface, which is disadvantageous to the improvement of corrosion resistance (Factor II). But these reactions of cyclization, aromatization and/or condensation of pyrolysis products increase the graphitization degree of SAS, and thus result in an enhancement of the π-π interaction between graphene and SAS, which can be confirmed by IR spectra of the aromatic ring (1400-1600 cm −1 wavenumber) of annealed SAS at 200, 400, and 600°C, respectively (see figure 5(b)). The graphitization is beneficial to improve the corrosion resistance of the samples (Factor III). Therefore, the corrosion resistance of the samples after annealed at high temperature depends comprehensively on the three factors above. In 0.5 M H 2 SO 4 , the j corr increases with the annealing temperature, indicating that Factor II is dominant. Further, the higher the temperature (>230°C), the greater the contribution from Factor II. This is because higher temperature causes more gas molecules to escape, and thus makes greater disturbance to the graphene absorbed on the surface of Al foils. In 0.6 M NaCl, for the annealing temperature of less than 200°C, the corrosion current density decreases with the increase of annealed temperature until 200°C. Therefore, in the temperature range (<200°C), the corrosion current density is dominated by Factor I instead of Factor III, which can be confirmed by a slightly reduced weight at 200°C in TG curve of SAS shown in figure S7. When the temperature is in the temperature range (> 200°C), Factor II dominates, which is supported by the significantly decreased weight under>200°C (see figure S7). In addition, in H 2 SO 4 and NaCl solutions, the annealing temperature has different effect on anticorrosion of Al foils, which may be due to different penetration abilities of SO 4 2− and Cl − ions. Of all the tested samples of our graphene as the anticorrosion coating, Al/G−5−(2+5) and Al/a−G200−5 −(2+5) shows the best corrosion resistance in 0.5 M H 2 SO 4 ( j corr =7.20×10 -7 A cm −2 ) and in 0.6 M NaCl ( j corr =7.30×10 -7 A cm −2 ) solution, respectively, which both are superior to the U.S. DOE target ( j corr <10 -6 A cm −2 ) [41]. Furthermore, they indicated high anticorrosion efficiencies (87.6% in 0.5 M H 2 SO 4 and 99.4% in 0.6 M NaCl, respectively). In addition, not only the anticorrosion efficiencies of the graphene prepared in this work are higher than those reported in our previous work [5], but also the operation is much simpler due to only a single coating required.
Conclusions
In summary, a new dispersant (SAS) was used successfully to exfoliate graphite into graphene in aqueous solution under sonication. A called double-criterion optimization method was proposed, i.e. the C G and I D /I G of as-exfoliated graphene, which are both used as two optimization criterions simultaneously. Under the optimized conditions, the highest C G can reach 0.181 mg ml −1 , and the lowest defect content was 0.331. The asexfoliated graphene sheets were <5 layers and mainly within 100-400 nm. The stabilization of the graphene dispersion comes mainly from the electrostatic repulsion between negatively charged graphene flakes. When the graphene was used as the protective coating of aluminium, relatively small size in H 2 SO 4 solution or large size in NaCl solution, relatively high defect content and proper annealing temperature were beneficial to the improvement of anticorrosion performance. The new dispersant (SAS) and the proposed two-criterion optimization would promote the development of the large-scale liquid-phase exfoliation of graphite into the graphene for the applications where specific defects are required. | 8,934.8 | 2020-02-12T00:00:00.000 | [
"Materials Science"
] |
Using the TAP Component of the Antigen-Processing Machinery as a Molecular Adjuvant
We hypothesize that over-expression of transporters associated with antigen processing (TAP1 and TAP2), components of the major histocompatibility complex (MHC) class I antigen-processing pathway, enhances antigen-specific cytotoxic activity in response to viral infection. An expression system using recombinant vaccinia virus (VV) was used to over-express human TAP1 and TAP2 (VV-hTAP1,2) in normal mice. Mice coinfected with either vesicular stomatitis virus plus VV-hTAP1,2 or Sendai virus plus VV-hTAP1,2 increased cytotoxic lymphocyte (CTL) activity by at least 4-fold when compared to coinfections with a control vector, VV encoding the plasmid PJS-5. Coinfections with VV-hTAP1,2 increased virus-specific CTL precursors compared to control infections without VV-hTAP1,2. In an animal model of lethal viral challenge after vaccination, VV-hTAP1,2 provided protection against a lethal challenge of VV at doses 100-fold lower than control vector alone. Mechanistically, the total MHC class I antigen surface expression and the cross-presentation mechanism in spleen-derived dendritic cells was augmented by over-expression of TAP. Furthermore, VV-hTAP1,2 increases splenic TAP transport activity and endogenous antigen processing, thus rendering infected targets more susceptible to CTL recognition and subsequent killing. This is the first demonstration that over-expression of a component of the antigen-processing machinery increases endogenous antigen presentation and dendritic cell cross-presentation of exogenous antigens and may provide a novel and general approach for increasing immune responses against pathogens at low doses of vaccine inocula.
Introduction
Major histocompatibility complex (MHC) class I molecules are highly polymorphic cell-surface glycoproteins, which function to bind peptides for presentation to cytotoxic lymphocytes (CTLs) following microbial infection or cell transformation [1][2][3]. In humans, a number of genes located mainly in the MHC class II region of Chromosome 6 are responsible for the generation, assembly, and transport of MHC class I molecules, referred to as the antigenprocessing pathway. These genes include (1) the proteasome components, low molecular-weight polypeptides (LMPs): LMP2, LMP7, and LMP10; (2) transporters associated with antigen processing (TAP): TAP1 and TAP2; (3) the chaperone proteins: calnexin, calreticulin, and tapasin; and (4) MHC class I heavy chain and b2-microglobulin [4][5][6][7][8]. Peptide antigens are transported into the endoplasmic reticulum (ER) by TAP and are loaded onto the MHC complex with the aid of the chaperone proteins. The functional MHC class I complexes are, in turn, transported to the cell surface and presented to T lymphocytes. Precursors of CTLs, through the T cell receptor, recognize foreign peptides derived from pathogens, and begin a cascade of activities leading to stimulation of specific immune responses against pathogen-infected cells. Overall, the expression of stable MHC class I molecules on the cell surface is regulated by the peptides supplied by TAP [1,9]. Antigen-presenting cells (APCs) derived from TAP1 À/À mice cannot transport antigenic peptides from the cytoplasm into the lumen of the ER for MHC class I binding and eventual presentation on the cell surface. Therefore, these cells lack the capacity to direct the priming of antigenspecific T cells [8].
The development of vaccine adjuvants that promote immunity at low doses of inocula is one approach to generate protection in individuals who would otherwise respond adversely to the administration of standard doses of inocula. Adverse responses to standard doses of inocula are a problem encountered in vaccination against viruses such as smallpox and, as a result, conventional vaccines cannot be administered to a significant fraction of the population who are either immune suppressed or who would otherwise react adversely to the established vaccine protocols [10]. As vaccination against a variety of pathogens becomes more widespread, there will be a greater need to increase the efficiency of the inocula while reducing the sizes of the batches of vaccine required for vaccination of an entire population. This would be particularly important during times of acute need, when rapid responses are required during an emergent epidemic. To increase vaccine potency and efficiency, a variety of adjuvants have been developed. These either suffer from substantial toxicity or cannot be implemented because their mode of action is obscure [11]. Here we report that a profound increase in Tcell-mediated immune responses to several infectious viruses was achieved using an immunization strategy involving a combination of both the infectious agents and a recombinant vaccinia virus (VV) containing a TAP-gene construct. This unexpected observation suggests that recombinant TAP may be used as a novel adjuvant to increase vaccine efficacy and potency.
Results
TAP Expression Is Required for Antigen-Specific H-2 (Mouse Major Histocompatibility Complex) K b Surface Expression In Vitro T2-K b cells express H-2K b but lack both TAP1 and TAP2 [12,13] and have a very low expression of MHC class I on the cell surface owing to inefficient antigen processing. In a CTL assay, vesicular stomatitis virus (VSV)-specific effectors were able to lyse T2-K b cells coinfected with VV containing minigene for VSV-NP 52-59 (VV-NP-VSV) and recombinant vaccinia virus carrying human TAP1 and TAP2 (VV-hTAP1,2) in a dose-dependent manner. This indicated that a functional TAP complex was formed by the VV-hTAP1,2 infection, leading to high levels of H-2K b -vesicular stomatitis virus nucleoprotein (VSV-NP) surface expression ( Figure 1A). In contrast, T2-K b targets coinfected with VV-NP-VSV and VV encoding the plasmid PJS-5 (VV-PJS-5, negative control vector), or with VV-NP-VSV alone, were strongly resistant to lysis, and the level of lysis was similar to that in uninfected targets. These targets did not respond to increasing doses of effectors in the assay, indicating that the surface expression of H-2K b -VSV-NP 52-59 is below the threshold for CTL recognition. The levels of lysis in the targets coinfected with VV-NP-VSV and VV-PJS-5, or with VV-NP-VSV alone are small (3%-5%) relative to the targets infected with VV-hTAP1,2 which are associated with up to 45% lysis.
Increased TAP Expression Increases Immune Responses to VSV and Sendai Virus Infection In Vivo
Splenocytes from mice coinfected with the low dose of VSV and VV-hTAP1,2 showed dramatic increases in CTL activity against RMA cell targets ( Figure 1B). The response was specific to the expression of TAP rather than to the VV vector alone, since coinfection with VV-PJS-5 did not increase CTL responses to VSV. To exclude the possibility that the effect seen with VV-hTAP1,2 was the result of VV-dependent alteration of proteasome function interacting with TAP over-expression, VV-NP-VSV was used in the coinfections as the source of antigen instead of VSV [14]. The epitope generated by VV-NP-VSV does not require degradation by the proteasome, and therefore any increase in CTL activity can be attributed to TAP over-expression. Mice coinfected with VV-NP-VSV and VV-hTAP1,2 exhibited dramatic increases in vaccinia virus carrying the vesicular stomatitis virus nucleocapsid protein minigene (VSV-NP 52-59 )-specific CTL responses when compared to coinfection with VV-NP-VSV and VV-PJS-5 ( Figure 1C).
Another well-characterized CTL epitope, from Sendai virus nucleoprotein (SV-NP), was investigated to provide further evidence that TAP over-expression can augment antiviral responses. The infectious dose of Sendai virus (SV) required to achieve minimal and maximal CTL responses was also first determined by titration (data not shown). As was the case for VSV, mice coinfected with SV and VV-hTAP1,2 exhibited an increased specific immune response against SV-NP epitope when compared to responses elicited by the vector controls ( Figure 2A).
Increased TAP Expression Increases Immune Responses to VV Infection In Vivo
The augmentation of a specific immune response against a virus by increasing TAP expression in APCs may require that
Synopsis
The development of protective vaccines against infectious diseases such as AIDS, SARS, and West Nile virus has become a societal priority but remains a scientific challenge. In recent years, the threat of bioterrorism agents such as anthrax and smallpox has heightened the need for the rapid development of effective new vaccines. One of the major stumbling blocks to the implementation of any vaccine is the toxic side effects on the vaccine candidate. For example, a significant number of doses of a new vaccine against smallpox have been commissioned, but approximately 20% of the individuals targeted to be inoculated will suffer toxicity due to vaccination. Furthermore, an additional difficulty in the production of vaccines is the creation of sufficient doses to vaccinate a large population. The authors have identified a novel approach that appears to address these issues. They demonstrate that the inclusion, in low doses of vaccines, of a normal component of the antigen-processing pathway, the transporter associated with antigen processing (TAP), confers protective immunity against lethal viral loads during viral challenges. This new paradigm is shown to be applicable to many viruses, including poxviruses, and could significantly advance the creation of new vaccines and improve those that already exist. viral infection and the over-expression of TAP occur in the same cell. To address this, we generated CTLs directed against antigen(s) derived from VV. VV-specific CTL activity in mice infected with a low dose of VV-hTAP1,2 was much greater than that from mice infected with an equivalent low dose of the control, VV-PJS-5 ( Figure 2B).
Increased TAP Expression Increases Endogenous Antigen Processing
We examined whether TAP over-expression could increase endogenous antigen processing. A VV-specific CTL assay was used to compare H-2K b and H-2D b VV-specific antigen processing in naïve splenocytes infected with VV-hTAP1,2 or VV-PJS-5. Naïve splenocyte targets infected with VV-hTAP1,2 were more susceptible to killing by VV-specific effectors than naïve splenocytes infected with VV-PJS-5 ( Figure 2C).
Increased TAP Expression Increases the Frequency of VSV-Specific CD8 þ VSV-NP/K b -specific tetramer analysis compared the proportion of splenocytes specific for H-2K b VSV-NP 52-59 among CD8 þ splenocytes between VSV-infected mice and mice coinfected with VV-hTAP1,2 and VSV ( Figure 3). Mice coinfected with VV-hTAP1,2 and a low dose of VSV elicited a higher frequency of VSV-NP 52-59 -specific CD8 þ splenocytes (17.5% of CD8 þ splenocytes) than mice coinfected with VV-PJS-5 and low-dose VSV (12.8%), or with a low dose of VSV alone (8.3%). The differences between a combined VV-PJS-5 and VSV low-dose vaccination, on the one hand, and a combined VV-hTAP1,2 and VSV low-dose vaccination, on the other hand, were highly significant (z-statistic ¼ 4.701, p , 0.0001). The maximum VSV-specific CD8 þ frequency (27.6%) was observed with the highest dose of VSV infection.
Peptide-Transport Activity and Human TAP Expression
Human TAP1 protein was detected in splenocytes by immunoblotting ( Figure 4A), and RT-PCR analysis showed that both human TAP1 and human TAP2 mRNA were present ( Figure 4B). Human TAP1 mRNA levels were quantified with real-time RT-PCR in splenocytes from mice 4, 8, and 24 h after intraperitoneal (i.p.) infection. Calculations based on the threshold cycle number indicated that the abundance of human TAP1 mRNA was equal to mouse TAP1 mRNA 4 and 8 h after infection, but decreased to 2% of endogenous mouse TAP1 mRNA by 24 h after infection. Immunocytochemistry showed that 7% of total splenocytes were positive for human TAP1. Double staining for cellspecific antigens and human TAP1 showed that 3.0% of B cells, 2.4% of macrophages, and 1.9% of dendritic cells (DCs) were positive for human TAP1 expression ( Figure 4C). A peptide-transport assay clearly showed that splenocytes from VV-hTAP1,2-infected mice had a higher capacity to transport a 125I-labeled peptide library into the lumen of the ER than splenocytes from normal mice or from the mice infected with VV-PJS-5 ( Figure 4D). Figure 1. VV-hTAP1,2 Restores Antigen Processing in the TAP-Deficient Cell Line T2-K b and Increases Immune Responses to VSV (A) A standard chromium-release assay was performed to establish the ability of VV-hTAP1,2 to restore antigen processing in the TAP-deficient cell line T2-K b . T2-K b cells coinfected with VV-hTAP1,2 and VV-NP-VSV were used as targets, and splenocytes from VSV-infected mice were used as effectors. Targets coinfected with both VV-PJS-5 and VV-NP-VSV or infected with VV-NP-VSV alone, or uninfected cells, were used as negative controls for VV-hTAP1,2. (B) A standard chromium-release assay was performed to measure the ability of VV-hTAP1,2 to increase specific CTL activity in immunized mice. RMA cells pulsed with VSV-NP 55-59 peptide were used as targets, and effectors were obtained from mice coinfected with VV-hTAP1,2 and lowdose VSV. Effectors from mice coinfected with VSV and VV-PJS-5 or a low dose of VSV alone were used as negative controls for the presence of VV-hTAP1,2 in the coinfections. Effectors from mice infected with a high dose of VSV demonstrated maximal CTL activity and were used as a positive control. (C) A standard chromium-release assay was used to confirm that the increase in immune responses was due to TAP-dependent transport of NP-VSV rather than to nonspecific effects of VV infection on antigen processing. RMA cells pulsed with VSV-NP 55-59 peptide were used as targets, and effectors were obtained from mice coinfected with VV-hTAP1,2 and VV-NP-VSV. Effectors from mice infected with a high dose of VSV were used as positive controls for maximal CTL activity. Effectors from mice coinfected with VV-PJS-5 and VV-NP-VSV or from mice infected with VV-NP-VSV alone were negative controls for the presence of VV-hTAP1,2. Values represent the mean of triplicate measurements 6 standard error of the mean. DOI: 10
Increased TAP Expression in DCs Increases MHC Class I-Restricted Presentation of Exogenous Antigens
A subpopulation of splenocytes corresponding to the splenocyte DCs showed a 7.4% increase in total H-2K b following infection with VV-hTAP1,2 when compared to V-PJS-5 (data not shown). This effect on DCs led us to investigate the cross-presentation of ovalbumin (OVA), an exogenously derived antigen. Treatment of DCs with VV-hTAP1,2 or recombinant vaccinia virus carrying mouse TAP1 (VV-mTAP1) significantly increased the amount of H-2K b -SIINFEKL expression when compared to DCs infected with VV-PJS-5 (p , 0.01). The assay was repeated three times, with each assay showing statistically significant increases in H-2K b -SIINFEKL and total H-2K b expression by DCs infected with VV-hTAP1,2 or VV-mTAP1 (p , 0.01) ( Figure 5A and 5B). DCs infected with VV-hTAP1,2 or VV-mTAP1 and incubated with OVA expressed significantly higher numbers of total H-2K b complexes compared to VV-PJS-5-infected DCs (40% and 30% increase, respectively; p , 0.01) ( Figure 5C and 5D).
VV-hTAP1,2-Vaccinated Mice Resist Lethal Viral Challenge
A viral-challenge experiment determined whether TAPdependent increases in immune function are significant in generating protective immune responses in vivo. Weight change was monitored in groups of mice, following vaccination with escalating doses of VV-hTAP1,2, VV-PJS-5, or The percentage of CD8 þ splenocytes specific for H-2K b -VSV-NP 52-59 was determined by flow cytometry using double labeling with an anti-CD8 þ antibody and a VSV-NP-specific tetramer. The value in the upper-right quadrant of the scatter-plots represents the percentage of CD8 þ cells specific for H-2K b -VSV-NP 52-59 for mice infected with a low dose of VSV and VV-hTAP1,2. The mice coinfected with both VSV and VV-PJS-5 or with a low dose of VSV, or uninfected mice, were used as negative controls for VV-hTAP1,2. The mice infected with a high dose of VSV alone were used as a positive control. DOI: 10.1371/journal.ppat.0010036.g003 Figure 2. VV-hTAP1,2 Increases Antigen Presentation and Immune Responses to SV and VV in Mice (A) A standard chromium-release assay was used to determine the ability of VV-hTAP1,2 to increase immune responses to SV. RMA cells pulsed with SV-NP peptides were used as targets, and effectors were obtained from the mice coinfected with a low dose of SV and VV-hTAP1,2. The mice coinfected with a low dose of SV and VV-PJS-5 or with a low dose of SV alone were used as negative controls. Effectors from the mice infected with a high dose of SV were used as positive controls for maximal SVspecific CTL activity. (B) A standard chromium-release assay was used to determine the ability of VV-hTAP1,2 to stimulate VV-specific CTL responses. RMA cells infected with VV-PJS-5 were used as targets, and effectors were obtained from the mice vaccinated with a low dose of VV-hTAP1,2. Effectors from the mice vaccinated with an equivalent low dose of VV-PJS-5 were used as negative controls, and effectors from the mice vaccinated with a high dose of VV-PJS-5 were used as positive controls for maximal CTL activity. (C) A standard chromium-release assay was used to measure the ability of human TAP expression to increase antigen presentation in normal mouse splenocytes. Naïve splenocytes, which had been stimulated overnight with LPS (LPS blasts) and infected with VV-hTAP1,2, were used as targets for VV-specific effectors; VV-specific effectors were obtained from mice infected with VV-PJS-5. LPS blasts infected with VV-PJS-5 were used as negative controls. Values represent mean of triplicate measurements 6 standard error of the mean. DOI: 10 PBS, and then administration of a lethal vaccinia virus Western Reserve strain (VV-WR) challenge [15][16][17][18]. Five out of six mice receiving the lowest VV-hTAP1,2 vaccine doses survived the challenge with minimal weight loss (less than 5%) and returned to normal, pre-challenge weight within 6 d. All the mice vaccinated with the intermediate and high doses of VV-hTAP1,2 survived challenge without weight loss. In contrast, the mice vaccinated with the lowest dose of VV-PJS-5 suffered significant morbidity (approximately 20% weight loss) and high mortality (four out of six mice died). The mice vaccinated with the intermediate dose of VV-PJS-5 also experienced high morbidity and one death. These mice were unable to regain weight to the pre-challenge level until 14 d after viral challenge. Mice receiving the highest dose of VV-PJS-5 were completely protected. All sham-vaccinated mice (PBS) were dead by 7 d post-challenge, consistent with a 1e5 plaque-forming unit (PFU) intranasal challenge ( Figure 6A and 6B). VV-hTAP1,2 vaccination provided significantly greater protection than vaccination with VV-PJS-5 (p , 0.05) in a dose-dependent manner (p , 0.01). Mice vaccinated with VV-hTAP1,2 at the lowest doses were able to resist a lethal challenge equivalent to the highest vaccine dose of VV-PJS-5. This represents a 100-fold increase in the efficacy of VV-hTAP1,2 to generate a protective immune response compared with VV-PJS-5.
Discussion
Contrary to expectation, we observed that increasing TAP expression in mice augments the cellular immune response to a variety of viral pathogens including VSV, SV, and VV at infectious doses that normally do not generate significant CTL activity. These responses do not appear to be the result of a reversal of viral immuno-evasion mechanisms with respect to antigen presentation since VSV and SV are not known to down-regulate MHC class I surface expression. The correlation of immune responses with increasing levels of viral infection in mice reflects the fact that CTL priming requires a threshold level of relevant viral peptides to be expressed on the surface of APCs [19][20][21][22]. The increase in CTL activity in mice coinfected with VV-hTAP1,2 and with low infectious doses of VSV, SV, and VV-NP-VSV shows that this increase is dependent on the expression of TAP activity alone.
It is unlikely that this augmentation is due to the more efficient translocation of VSV-derived peptides by human TAP and/or interspecies (human/mouse) TAP heterodimers compared to mouse TAP complexes. It has been reported that human TAP preferentially transports peptides containing either hydrophobic or positively charged amino-acids at their C-terminus, while mouse TAP is slightly more restrictive and favors peptides with hydrophobic amino-acids at their Cterminus [23]. The VSV-NP and SV-NP are two murine K brestricted epitopes that contain the same hydrophobic residue (leucine) at the C-terminus. The transport of these peptides by human TAP would compete with an additional peptide pool containing positively charged C-terminal residues. This might lead to a reduced amount of VSV-NP or SV-NP entering the ER lumen for surface presentation through human TAP heterodimers.
In addition, the SV-NP epitope has an aromatic residue (phenylalanine) at the peptide position 1 (N-terminus), and this has a strong deleterious effect for human TAP binding [24] and for transport. Therefore, we conclude that the transport of SV-NP by human TAP would be no better than by mouse TAP. For interspecies TAPs, the transport of peptides is restricted to those with hydrophobic C-terminal residues, similar to mouse TAP [25]. This would imply that (D) ATP-dependent TAP activity was measured in splenocytes taken 24 h after the mice were infected with VV-hTAP1,2 or VV-PJS-5 (negative control). Active transport activity was measured in the presence or absence of ATP by a peptide-transport assay that determined the translocation of radioactive peptides from the cytosol into the ER. Normal uninfected mice, uninfected TAP À/À mice, and mice infected with VV-PJS-5 were used as negative controls when assessing the effect of VV-hTAP1,2 infections on peptide-transport activity. The bars represent the mean value 6 standard error of the mean of triplicate measurements. when mouse TAP1 or TAP2 associates with its human TAP counterpart, they play a dominant role in selecting the peptides for transport. Furthermore, once the transportpermissive peptide, for example SV-NP, binds to interspecies TAPs, the phenylalanine residue at the N-terminus that is in contact with the human TAP counterpart may limit its binding capacity and, therefore, its transport. For these reasons, we conclude that the augmentation of the CTL responses against viruses in our experiments is justified by TAP over-expression rather than by the increased efficiencies of interspecies TAP heterodimers.
The priming of T cells requires cell-to-cell contact, and therefore APCs adjacent to T cells play a critical role [2,9]. The increase in VSV-specific CD8 þ cells observed with VV-hTAP1,2 coinfections indicates a TAP-dependent increase in APC cross-priming and cross-presentation activity and is explained by an increase in TAP expression and peptidetransport activity in the APCs of the spleen. To reconcile the increase in peptide-transport activity observed in the translocation assays with the frequency of human TAP-positive splenocytes, we estimate that the human TAP-positive splenocytes need to express 12 to 17 times more human TAP1 mRNA than endogenous mouse TAP1. This was confirmed by the high expression of the human TAP gene early in the infection.
DCs generate virus-specific CTLs via cross-presentation of exogenously acquired viral antigens in the context of MHC class I molecules [26]. This is achieved by a TAP-dependent process although additional TAP-independent pathways have been described recently [27][28][29]. Increased expression of mouse TAP1 appeared to be as effective as increased expression of human TAP1 and TAP2 in raising the level of MHC class I complexes (H-2K b -SIINFEKL) on the cell surface. Expression of TAP1 alone has been shown to stabilize TAP2 protein and TAP2 mRNA in TAP-deficient cells and could explain the effectiveness of TAP1 alone [30]. We conclude that supra-normal expression of TAP increases endogenous antigen processing (see Figure 2C) and the levels of both total and cross-presented MHC class I antigens on the surface of DCs, thereby leading to greater CTL responses in vivo.
Under normal conditions, it is estimated that only onethird of all TAP molecules translocate peptides actively. During an acute viral infection, however, TAP activity increases significantly owing to the rapid increase in the available intracellular peptide pool [31]. Therefore, the supply of peptides to MHC class I molecules is theoretically the limiting factor in antigen presentation. It is known that increasing the delivery of peptides into the ER by the artificial creation of a signal-sequence peptide fusion increases antigen presentation and immune responses [32]. However, these experiments bypass TAP altogether, and the relative amounts of peptide generated in the two pathways are difficult to equilibrate and, therefore, to compare. Our results indicate that during a viral infection there is competition between self and viral peptides in the MHC class I binding-peptide pool, limiting the amount of viral epitopes reaching the lumen of the ER and subsequently the cell surface. Increased TAP activity leads to more viral epitopes presented on the cell surface by MHC class I, resulting in increased immune responses. In DCs, over-expression of TAP has the additional effect of increasing the cross-presentation pathway for MHC class I, a pathway believed to be unique to these cells. A framework for how cross-presentation may operate has been published recently [26,27,33,34]. Endocytosed or phagocytosed exogenous antigens gain access to one or more types of vesicles where loading of antigenic peptide onto nascent MHC class I molecules is thought to occur. MHC class I molecules gain access to the endolysosomal compartment by virtue of a tyrosine-based sorting signal in their cytoplasmic domains [26]. It has been suggested that TAP molecules reside in this compartment and that a fusion event with the ER may make antigen processing more efficient for both exogenous and endogenous antigens. Over-expression of TAPs in the endogenous and the exogenous antigen-processing compartments of DCs could increase the efficiency of both pathways, leading to enhanced specific immune responses without stimulation of detectable autoimmune responses (D. Waterfield, unpublished data).
The presence of TAP genes in VV provides protection against lethal viral challenges at 100-fold lower amounts of inocula than VV without TAP genes. The inclusion of TAP in vaccination regimens acts as a gene-based adjuvant to boost immune responses against viral antigens, thereby allowing for reduced vaccine doses. Increased immune responses in response to low-dose vaccinations are desirable in the elderly and the very young, in whom immune systems may be compromised [35]. An additional advantage of including TAP as an adjuvant is its ability to increase peptide transport of a number of immunogenic peptides simultaneously, thereby aiding in the delivery of diverse peptides for binding to most HLA (human major histocompatibility complex) class I alleles expressed in the immunized population. TAP could be used as an adjuvant in peptide vaccines, but its use does not have to be restricted to viral vectors. For example, it could also be injected in other forms, such as in DNA plasmids attached to gold particles, or in any other system that inserts the TAP complex directly into the cell's protein-processing pathway [36]. Finally, the use of TAP as an adjuvant has the advantage that we have a solid intellectual understanding of its mechanism of action. This appears to be lacking in the case of many other generalized adjuvants [37]. We have shown here, in an animal model, that TAP over-expression can augment cell-mediated immunity against the cowpox virus (vaccinia), a close relative of smallpox (variola). It is conceivable that this approach could have applications in augmenting responses against smallpox in humans.
Recently, clinical trials for some promising HIV-vaccine candidates have been suspended because of poor cellular immune responses [38]. Inclusion of TAP in such vaccines may be able to improve their efficacy. Future clinical experiments will help to further establish whether the inclusion of TAP in vaccine regimens has advantages over existing protocols and whether other components of the intracellular antigen-processing pathway(s) are also limiting in healthy individuals. Nonetheless, the approach that has been discovered is novel and may have tremendous potential for vaccination of humans and animals against a variety of infectious diseases. Cytotoxicity assays. Cytotoxic activity was measured in standard 4h 51Cr-release assays using T2-K b cells, RMA cells, or naïve splenocytes as targets. T2-K b targets were infected with VV-NP-VSV alone or in combination with VV-hTAP1,2 or VV-PJS-5 (multiplicity of infection [MOI] ¼ 10) for 6 h. The RMA target cells were pulsed with VSV-NP 52-59 peptide (5-25 lM) or SV-NP 324-332 peptide (5-25 lM) for the relevant CTL assay. For VV antigen-specific killing, the RMA targets were infected overnight with VV-PJS-5 (MOI ¼ 0.34). For the measurement of endogenous antigen processing, targets were generated by the in vitro stimulation of naïve splenocytes (2e7 cells) for 2 d with lipopolysaccharide (LPS) (Escherichia coli J5 LPS [1 lg/ ml], Calbiochem, San Diego, California, United States), followed by overnight infection with either VV-PJS-5 or VV-hTAP1,2 (5e6 PFU).
Detection of human TAP expression in splenocytes. Human TAP1 expression in splenocytes from the mice infected with VV-hTAP1,2 was determined by SDS-PAGE and Western blot. The blots were probed for human TAP1 with rabbit anti-human TAP1 antiserum (Stressgen Biotechnologies, Victoria, British Colombia, Canada) and visualized by enhanced chemiluminescence (Amersham Biosciences, Little Chalfont, United Kingdom).
Quantitative RT-PCR for human and mouse TAP1 in splenocytes. RT-PCR was used to detect human TAP1 and TAP2, and mouse TAP1 and TAP2, in spleens 24 h after infection. In addition, a time-course quantification of human TAP1 and mouse TAP1 was performed using quantitative real-time PCR (QRT-PCR). Total RNA was extracted (RNeasy Midi Kit, Qiagen, Valencia, California, United States) from mouse spleens 4 h, 8 h, and 24 h after i.p. infection with VV-hTAP1,2 (3e4 PFU). QRT-PCR reactions (Sigma-Genosys Canada, Oakville, Ontario, Canada) were performed in duplicate using a Light Cycler (Roche Diagnostics, Mannheim, Germany) for mouse TAP1 and for human TAP1, plus a ribosomal small subunit S15. The sequences of the primer pairs used in the RT-PCR and the QRT-PCR are listed in Table 1.
The threshold cycle (CT) above background was determined for mouse TAP1 and human TAP1 and normalized to the lowest S15 CT value among the reactions. The differences in CT were used to calculate the abundance of human TAP1 relative to mouse TAP1. Relative abundance ¼ 2 (CTmTAP1-CThTAP1). The CT values represent the average of three mice per time point.
Visualization of human TAP expression in splenocytes. Visualization of human TAP expression in spleen-derived antigen presentation cells from mice infected with VV-hTAP1,2 or VV-PJS-5 was performed using confocal fluorescence microscopy. Splenocytes were double labeled with rabbit anti-human TAP1 antiserum (Stressgen Biotechnologies) and one of the following cell-surface markers: rat anti-mouse B220 (B cell marker, BD Biosciences Pharmingen), rat anti-mouse MAC-1 (macrophage marker, BD Biosciences Pharmingen), or rat anti-mouse NLDC-145 antibodies (DC marker) (gift from Ralph Steinman, the Rockefeller University, New York, New York, United States). The presence of human TAP1 was determined in approximately 300 cells per surface marker in 20 randomly chosen fields. Splenocytes from VV-PJS-5-infected mice were used as negative controls.
Transport activity of human TAP in mouse splenocytes. TAP heterodimer activity, in the presence or absence of adenosine triphosphate (ATP), was detected by a streptolysin-O-mediated peptide-transport assays in splenocytes harvested 24 h after mice were infected with VV-hTAP1,2 or VV-PJS-5 using a radio-iodinated peptide library containing a glycosylation site (NXT) (125I, specific activity, 10 Ci/mmol) [41,42]. Splenocytes from uninfected normal mice and TAP À/À mice were used as controls.
VV-WR-challenge experiments. VV-hTAP1,2 and VV-PJS-5 viruses were demonstrated to replicate equally in cell culture. Forty-two mice were randomized into seven groups (six mice per cage) and were vaccinated with three different doses of VV-hTAP1,2 or VV-PJS-5 (3e3, 3e4, and 3e5 PFU in 300 ll PBS i.p.) or PBS. Fourteen days later, mice were weighed and challenged with a lethal dose of VV-WR (1e5 PFU in 20 ll of clarified cell lysate delivered intranasally) under isoflurane anesthesia. Weight was recorded daily for 14 d, and any mouse falling below 25% of pre-challenge weight was euthanized. Mean weight going forward was calculated from the remaining survivors.
Statistical analysis. The z-statistic was calculated to determine the statistical significance of the differences in the proportions of H-2K b , VSV-NP, and CD8þ cells generated by the vaccination protocols. A two-way ANOVA, after square-root transformation of the data, was used to analyze the main effects of dose and the recombinant VV on mouse weight 5 d after VV-WR challenge. Bonferroni procedure corrected p-values for multiple comparisons. A chi-square test (univariate comparison, using FlowJo 3.7.1 [Treestar, http://www. treestar.com]) compared flow-cytometry histograms for differences in total H-2K b or H-2K b -SIINFEKL complexes. p-Value of , 0.01 (99% confidence interval) was considered significant, and T(X) . 10 was empirically determined as a cut-off value. | 7,238 | 2005-12-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
Dysregulation of Gene Expression in the Artificial Human Trisomy Cells of Chromosome 8 Associated with Transformed Cell Phenotypes
A change in chromosome number, known as aneuploidy, is a common characteristic of cancer. Aneuploidy disrupts gene expression in human cancer cells and immortalized human epithelial cells, but not in normal human cells. However, the relationship between aneuploidy and cancer remains unclear. To study the effects of aneuploidy in normal human cells, we generated artificial cells of human primary fibroblast having three chromosome 8 (trisomy 8 cells) by using microcell-mediated chromosome transfer technique. In addition to decreased proliferation, the trisomy 8 cells lost contact inhibition and reproliferated after exhibiting senescence-like characteristics that are typical of transformed cells. Furthermore, the trisomy 8 cells exhibited chromosome instability, and the overall gene expression profile based on microarray analyses was significantly different from that of diploid human primary fibroblasts. Our data suggest that aneuploidy, even a single chromosome gain, can be introduced into normal human cells and causes, in some cases, a partial cancer phenotype due to a disruption in overall gene expression.
Introduction
During cell division, errors in chromosomal segregation result in the loss or gain of chromosomes in daughter cells, which is referred to as aneuploidy. An extra or missing chromosome, known as trisomy and monosomy, respectively, is observed in people with developmental disabilities, mental retardation, and cancer. In addition, various types of cancer have karyotypes with complex numerical aberrations [1][2][3]. Although increasing evidence that aneuploidy is a hallmark of cancer, the causal relationship between aneuploidy and tumorigenesis remains unclear.
The addition of a single chromosome has been reported to have various transcriptional effects in several cell types [4][5][6]. An increase in the average transcriptional activity of a trisomic chromosome has been observed in trisomic primary mouse cells, human trisomic colorectal cancer cells, immortalized trisomic mammary epithelial cells, and acute myeloid leukemia (AML) cells (with an additional chromosome 8; derived from an AML patient). One chromosome affected not only the gene expression levels on the trisomic chromosome but also a large number of genes on other diploid chromosomes [5]. Apoptosis-regulating genes were significantly down regulated in AML cells containing an additional chromosome 8 that were derived from an AML patient [6].
Despite increased information about the characteristics of aneuploid human cells, the data reported thus far have been obtained from immortalized or cancer-derived cells. Normal human cells have a limited life span and ultimately enter a nondividing state called senescence [7,8]. It remains unclear whether aneuploidy renders cells immortal or if immortalization induces aneuploidy in cells. To determine the role(s) that aneuploidy plays in human cancer, it will be indispensable to design artificial aneuploid model cells derived from normal human cells.
Trisomy is a simple model of aneuploidy with the gain of a single chromosome. Trisomy of chromosome 8 is the most commonly observed trisomic chromosomal aberration, as has been demonstrated in fibroblastic/myofibroblastic tumors (Fig. S1) [9][10][11][12]. In this study, we used normal human diploid embryonic cells (HE35) as a donor for chromosome transfer, and chromosome 8 was chosen as the introduction chromosome. We succeeded in isolation of multiple clones that chromosome 8 became trisomic (trisomy 8 cells). All of the trisomy 8 cells expressed transformed cell-like phenotypes, such as a loss of contact inhibition, regrowth after senescence, and chromosome instability. The overall gene expression profile, determined by microarray analysis, was significantly different from that of the diploid HE35 cells. Our results suggest that aneuploidy is a key factor in tumorigenesis, as demonstrated by disrupted gene expression.
Generation of human primary cells bearing three copies of chromosome 8 (trisomy 8 cells)
To generate the trisomy 8 cells, we used microcell-mediated chromosome transfer (MMCT) to introduce an additional chromosome 8 into HE35 cells of culture passage 7, which is a line of normal human diploid primary cells [13]. In this study, inactivated viral envelope proteins of the haemagglutinating virus of Japan (HVJ-E) were used. We isolated three independent clones (HE35tri8-1, -2, and -3) in 35-mm diameter dish, and cultured them by stepwise scale up until reaching confluence in three diameter 100 mm dishes (P100). Then cells were stocked at a total population doubling level of 30 (TPDL 30) for subsequent analysis.
To confirm that the extra chromosome was maintained in each cell clone, we performed Multicolor FISH, which identifies each chromosome by a unique fluorescent color (Fig. 1). The proportion of cells with three copies of chromosome 8 was 90% in HE35tri8-1 and HE35tri8-3 cells and 79% in HE35tri8-2 cells, indicating that each clone had an additional chromosome 8 ( Fig. 1).
Proliferation defects, loss of contact inhibition, and regrowth in the trisomy 8 cells Aneuploidy causes a proliferative disadvantage in mouse cells [4]. To investigate whether this phenotype could be observed in normal human cells, we examined the proliferative capacity of the artificial trisomy 8 cells in culture. Three artificial trisomy 8 cells had decreased proliferation compared to the diploid HE35 cells (Figs. 2A, S2A-B), indicating that the presence of an additional chromosome inhibits cell proliferation in culture.
Normal cell growth is arrested when cells contact each other in culture and in tissues. This phenomenon, known as contact inhibition, prevents uncontrolled cellular proliferation [14]. In contrast, transformed cells pile densely upon one another [15]. Growth arrest was observed upon cell contact for the normal diploid HE35 cells (Fig. 2B). However, the growth of the trisomy 8 cells (HE35tri8-1, -2, and -3) was not arrested when the cells made contact in vitro, and the cells piled densely on one another (Fig. 2B).
Normal human cells undergo a finite number of cell divisions and ultimately enter a non-dividing state called senescence, whereas neither transformed nor cancer cells undergo senescence when they become immortalized. To understand the effects of a single chromosome gain on senescence, we investigated whether the artificial trisomy 8 cells reached senescence and became immortalized. Three clones of normal diploid HE35 (HE35-1, -2, and -4) ultimately entered senescence (Figs. 2C, S3). On the other hand, the trisomy 8 cells (HE35tri8-1, -2, and -3) temporarily exhibited senescence-like characteristics, but after 4-6 weeks of this senescence-like phenotype, a small portion of the trisomy 8 cells had regrown and formed colonies (Figs. 2C and 2D, Table 1). These colonies were made with relatively small sized cells, and the characteristic of the colony morphology, such as piled-up and criss-cross, was the same as those of malignant cells (Fig. 2D). The colony grew up to approximately 5 mm in diameter, but was not able to finally get infinite growth ability.
Micronucleus assays have emerged as the preferred method to assess chromosome damage because micronuclei provide an index of both chromosome breakage and non-disjunction. Micronuclei are the origin of lagging whole chromosomes and acentric chromosome fragments during anaphase [21]. The micronucleus frequencies were no difference between the trisomy 8 cells and the diploid HE35 cells (Fig. 3B).
Chromosome aberrations are distinctive features of tumors [22]. To investigate whether trisomic conditions bring about chromosome aberrations, we analyzed metaphase chromosome aberrations. Diplochromosomes (chromosomes that have undergone DNA replication but have not segregated) were detected only in the trisomy 8 cells ( Table 2, Fig. S6). Chromatid-type aberrations were observed in both the diploid HE35 cells and the trisomy 8 cells, whereas chromosome-type aberrations were found only in the trisomy 8 cells (Table 2). Although the trisomy 8 cells tended to induce both chromosome aberrations and DNA DSBs more than the diploid HE35 cells, there was not the statistical significance.
Disruption of global gene expression patterns in the trisomy 8 cells
We compared the gene expression profiles of the trisomy 8 cells by microarray analysis. A clustering analysis of genes that exhibited statistically significant changes revealed a pattern that clearly separated the trisomy 8 cells from the diploid HE35 cells (Fig. 4A). The complete dataset is available at the NCBI GEO database (http://www.ncbi.nlm.nih.gov/projects/geo, accession number GSE28076).
The number of genes expressed on each chromosome was examined using a genome-wide transcript expression analysis. Total 127 genes were significantly altered in expression in the trisomy 8 cells compare with the diploid HE35 cells (Table S1). 72% of genes (91 genes) altered in expression were up regulated and the rest (36 genes) were down regulated. We did not yet identify a pathway associated with these genes. The pattern of genes expressed on each chromosome was similar in three clones of the trisomy 8 cells (HE35tri8-1, -2, and -3). The level of genes expressed on trisomic chromosome 8 was increased an average of 115% in each of the trisomy 8 cells compared to those of the diploid HE35 cells (Fig. 4B). In contrast, the average gene expression level on the other chromosomes was significantly decreased (Fig. 4B).
The trisomy 8 cells showed the malignant morphology, which occurs for a decrease of the cell adhesion ability. Therefore, to identify significantly deregulated cell adhesion genes that are involved in the loss of contact inhibition, microarray analyses were performed. As shown in Fig. 4C, Cell adhesion molecule 1 (CADM1) was dramatically down regulated in the trisomy 8 cells. Wilms tumor 1 (WT1), a known oncogene, was markedly up regulated in the trisomy 8 cells (Fig. 4C), but the relationship between WT1 and the phenomena in the trisomy 8 cells remained unclear. Although we have also done the pathway analysis, unfortunately there was no pathway related with 127 genes that changed more than 10 times.
Discussion
Our analysis of normal human cells containing an additional chromosome 8 (trisomy 8 cells) revealed that all trisomy 8 cells share similar characteristics such as altered gene expression, proliferation defects, loss of contact inhibition, and regrowth after senescence. A small portion of the trisomy 8 cells regrew and formed colonies after 4-6 weeks of exhibiting a senescence-like state (Figs. 2C-D, Table 1). The trisomy 8 is the most commonly observed chromosome number aberration in fibroblastic/myofibro-blastic tumors (Fig. S1) [9][10][11][12]. Transformed cells lose contact inhibition [15] and do not undergo senescence. Our data showed that introducing a chromosome 8 into the normal diploid cells causes expression of transformation-associated phenotypes, such as chromosome insta-bility, and malignant morphological characters. However, all trisomy 8 cells did not finally immortalize. We previously reported that human embryonic cells rapidly deplete telomerase activity associated with the significant shortening of telomeres, and then reached senescence. However, rodent embryo cells retained telomerase activity and the long telomeres (19-50 kb) during the long-term cultures, and cells immortalized. [23,24]. It is likely that trisomy of chromosome 8 might not succeed reactivation of telomerase of human cells. However, unfortunately, because cell number collected from the regrowth colony was too few in the present study, we could not measure telomerase activity yet.
The number of foci where c-H2AX and 53BP1 co-localized, and micronucleus were no difference between the trisomy 8 cells and the diploid HE35 cells (Fig. 3A-B). However, structural chromosome aberrations were increased in the trisomy 8 cells ( Table 2). The number of foci where c-H2AX and 53BP1 colocalized and the micronucleus frequencies were indirect method of DSB observation, whereas structural chromosome aberrations were direct method. This suggests that the trisomy does not cause the genetic instability due to DNA DSBs. And also these data suggested that DNA DSB is not a cause of becoming trisomy, but is a result of trisomy.
Diplochromosomes were found only in the trisomy 8 cells, suggesting that trisomy of chromosome 8 causes production of tetraploidy. Although chromatid-type aberrations were seen in both the diploid HE35 cells and the trisomy 8 cells, chromosome-type aberrations were observed in the trisomy 8 cells and not in the diploid HE35 cells. Increased chromosome-type aberrations, but not chromatid-type aberrations, have been associated with an increased cancer risk [25]. The present results revealed that trisomy of chromosome 8 causes other numerical and structural chromosomal aberrations that contribute to the relationship between increased chromosome instability and subsequent cancer risk.
Previous reports have shown that the average level of gene expression on trisomic chromosomes is increased in mouse cells, human cancer cells, and immortalized human cells compared to diploid cells [4][5][6]. Our artificial trisomy cells derived from primary human cells had greater average gene expression on the additional chromosome 8 (Fig. 4B). Surprisingly, the average gene expression level on all non-trisomic chromosomes was all decreased; moreover, the profile of each clone was similar (Fig. 4B). Total 127 genes were significantly altered in expression in the trisomy 8 cells compare with the diploid HE35 cells (Table S1). However, it is not clear whether this phenomenon is event that is specific for chromosome 8. The results of our pilot study show that similar change of gene expression is obtained in trisomy of chromosome 1, 6 and 7 (data not shown). These results strongly suggest that gaining even a single chromosome disrupts the expression levels on the trisomic chromosome as well as on the other chromosomes.
Each chromosome occupies a non-random and confined space in the interphase nucleus of higher eukaryotes [26][27][28]. There is increasing evidence that the positioning of genomic regions in the nuclear space is important for gene regulation [29]. One possible explanation for the general disturbances in the gene expression levels in the trisomy cells is alterations in chromosomal territory.
CADM1 was markedly down regulated in the trisomy 8 cells. The tumor suppressor CADM1 is involved in cell adhesion and is preferentially inactivated in invasive cancers [30][31][32]. CADM1 is expressed universally in human tissues and is frequently silenced in a variety of human carcinomas [33,34]. A recent study further showed that hypermethylation of the CADM1 promoter induced gene silencing [33,[35][36][37][38][39][40]. We speculate that a down-regulation of the CADM1 gene in the trisomy 8 cells results from hypermethylation of the CADM1 promoter region.
In contrast, WT1 was significantly up regulated in the trisomy 8 cells. The WT1 gene was isolated as the gene responsible for a childhood renal neoplasm, Wilms' tumor, which is thought to arise due to the inactivation of both alleles of the WT1 gene located at chromosome 11p13 [41][42][43]. The WT1 gene was originally defined as a tumor suppressor gene [41,[44][45][46][47][48], but recent studies suggest that the WT1 gene is highly expressed in leukemia and solid tumors and likely plays an oncogenic role in leukemogenesis and tumorigenesis [49,50]. It is possible that aneuploidy results in an epigenetic modification of WT1. In conclusion, our findings strongly suggest that the addition of a single chromosome causes chromosome instability and extensive disruption of gene expression. Such critical and extensive changes in gene expression can produce parts of transformation-associated phenotypes in aneuploid cells.
Cells and cell culture
Human embryonic (HE35) cells were obtained from 7-to 8week-old human embryos previously described [51]. HE35 cells were cultured in Eagle's Minimum Essential Medium (MEM) supplemented with 10% fetal bovine serum at 37uC with 5% CO 2 in a humidified environment. The trisomy 8 cells (HE35tri8-1, -2, and -3 cells) were cultured in Eagle's Minimum Essential Medium (MEM) containing 800 mg/ml G418 supplemented with 10% fetal bovine serum at 37uC with 5% CO 2 in a humidified environment. Briefly, both the diploid cells derived from HE35 (HE35-1, 2, and 4) and the artificial trisomy 8 cells (HE35tri8-1, -2, and -3) at TPDL 30 were plated at 5610 5 cells per T75 flask. Subconfluent cells were trypsinized and counted to determine the number of cells per T75 flask, and the cells were then replated at 5610 5 cells per T75 flask. Medium change has done every three days of all cultures. This process was repeated until there were either insufficient cells for plating or immortalization (as determined by increased cell proliferation).
Microcell-mediated chromosome transfer
Donor mouse A9 cells containing human chromosome 8 were established by Kugoh et al. [52]. A9 cells grown in three T-25 flasks at 7610 5 cells/flask in medium containing 800 mg/ml G418 (Nakarai Tesque, Kyoto, Japan). To generate normal human cells containing an additional chromosome 8, we used microcellmediated chromosome transfer (MMCT) procedure [13]. Briefly, the cells were incubated with 0.05 mg/ml colcemid in medium plus 20% fetal bovine serum for 48 h (to induce micronucleus formation), and then centrifuged in the presence of 10 mg/ml cytochalasin B (Sigma, MO, USA) at 8,000 rpm for 1 h at 34uC to isolate the micronuclei. The micronuclei were then purified by sequentially filtering through sterile filters of pore size 8-, 5-, and 3mm. The purified micronuclei were suspended in fusion buffer and then HVJ-E was added (Genomone-CF; Ishihara Sangyo, Osaka, Japan) to the recipient cells (HE35 at TPDL7), which were kept on ice for 5 min, and then incubated at 37uC for 15 min. The supernatant was aspirated and MEM containing 800 mg/ml of G418 and 10% FBS was added.
We isolated three independent trisomy 8 cells (HE35tri8-1, -2, and -3) in 35-mm diameter dish (P35), and cells were cultured with doing stepwise scale up using P35, 60-mm diameter dish (P60), and 100-mm diameter dish (P100). Then, cells were cultured until reaching confluence in three P100 dishes. At this point, because total cell number in three dishes is approximately 8610 6 , cells of each clone had divided at least 23 times (8610 6 <2 23 ) during cloning process. Then, cells were stocked in liquid nitrogen until doing assay. Therefore, cells at 30 TPDL (7PDL+23PDL) were used for subsequent assay.
Karyotype analysis
To prepare the metaphase chromosome, 5610 5 cells were seeded in P100 dishes. After incubating for 48 h, colcemid (Gibco, CA, USA) was added at a final concentration of 0.06 mg/ml, and the cells were treated for 2 h. Mitotic cells were collected and treated with 0.075 M potassium chloride for 25 min at room temperature. The cells were fixed in Carnoy's solution (methanol: acetic acid, 3:1) and spread on glass slides using the air-drying method. After the cells were stained with a 3% Giemsa solution, the number of chromosomes with at least 50 metaphases per sample was scored.
Multicolor fluorescence in situ hybridization (M-FISH)
Multicolor FISH was performed according to the manufacturer's protocol (Cambio, Cambridge, UK). A chromosome slide was aged on a hot plate at 65uC for 90 min, and the samples were denatured in a solution (70% formamide in 2X SSC) at 65uC for 2 min. After the reaction was quenched in ice-cold 70% ethanol for 4 min, the slides were dehydrated by washing for 5 min each in 70% ethanol and 100% ethanol and then dried at 37uC. An aliquot (10 ml) of the M-FISH probes was denatured at 65uC for 10 min and applied to the chromosome slide. Hybridization was performed at 37uC for 48 h in a humidified atmosphere. After hybridization, each slide was washed twice for 5 min each in washing solution (50% formamide in 0.5X SSC) at 45uC, followed by two incubations of 5 min each in 1X SSC at 45uC. Each slide was then incubated for 4 min in detergent washing solution (0.05% detergent DT in 4X SSC) at 45uC. An aliquot (125 ml) of the detection reagent was applied to the slides, which were then covered with parafilm and subsequently incubated in a humidified atmosphere for 20 min at 37uC. After the parafilm was removed, the slides were washed three times for 4 min in detergent washing solution at room temperature. Finally, the DNA was stained with 49, 6-diamino-2-phenylindole (DAPI) in antifade solution. Chromosome images were captured and analyzed using the Leica CW4000 system. Proliferation assay Exponentially growing diploid HE35 cells (HE35-1, -2, and -4) at TPDL 37 and the trisomy 8 cells (HE35tri8-1, -2, and -3) at TPDL 37 were plated at a density of 5610 4 cells in individual wells of multiple 6-well plates. All cells were plated in a final volume of 3 ml of medium. Cells were incubated in humidified 5% CO 2 incubator at 37uC. The medium was replaced with fresh medium every three days throughout the experiment. Cells were harvested by trypsinization and cell number was counted by hemocytometer every day.
Immunofluorescence detection
Both the diploid HE 35 cell and the trisomy 8 cells at TPDL 37 were fixed in 4% formaldehyde in PBS 2 for 15 min, permeabilized for 10 min on ice in 0.5% Triton X-100 in PBS 2 , and then washed extensively with PBS 2 . Then, the coverslips were incubated with anti-phosphorylated histone H2AX at serine 139 (Upstate Biotechnology, NY, USA) and 53BP1 (Bethyl Laboratory, TX, USA) in TBS-DT (20 mM Tris-HCl, 137 mM NaCl, pH 7.6, containing 50 mg/ml skim milk and 0.1% Tween-20) for 2 h at 37uC. The primary antibodies were washed with PBS 2 , and Alexa Fluor 488labeled anti-mouse IgG antibody and Alexa Fluor 594-labeled antirabbit IgG antibody (Molecular Probes, CA, USA) was added. The coverslips were incubated for 1 h at 37uC, washed with PBS 2 , and sealed on glass slides with 0.05 ml of PBS 2 containing 10% glycerol. The cells were examined by fluorescence microscopy.
Micronucleus assay
Cells at TPDL 37 were treated with 2 mg/ml cytochalasin B for 24 h in a T25 flask. They were then harvested and treated with 3 ml of hypotonic (0.1 M) KCl for 20 min, and fixed with 3 ml of methanol/ acetic acid (5:1). The cell suspensions were centrifuged at 1,200 rpm for 5 min. Then, the cells were suspended in 4 ml methanol/acetic acid solution and incubated on ice for 5 min. After centrifugation, the supernatant was removed and a 0.5-1 ml methanol/acetic acid solution was added to the cells. The cell suspensions were dropped onto slides and stained with 7.5% Giemsa for 40 min. The number of micronuclei per 1,000 binucleated cells was counted.
Transcript array and date analysis RNA was isolated from cells at TPDL 37 using an RNeasy Mini Kit (Qiagen, Tokyo, Japan). Five hundred nanograms of total RNA was then reverse-transcribed and labeled with a Quick Amp Labeling Kit, as recommended by the manufacturer (Agilent Technologies, CA, USA) and hybridized to Human Whole Genome Arrays (Agilent Technologies, CA, USA). Chips were analyzed and the data were extracted for examination using GeneSpring GX 11.5 Software (Agilent Technologies, CA, USA). To identify significantly related genes, GeneSpring GX 11.5 was used to perform a t-test.
Accession number
The microarray data reported herein are available at the NCBI GEO database (http://www.ncbi.nlm.nih.gov/projects/geo, accession number GSE28076). | 5,156.2 | 2011-09-29T00:00:00.000 | [
"Biology"
] |
Brain Isoform Glycogen Phosphorylase as a Novel Hepatic Progenitor Cell Marker
An appropriate liver-specific progenitor cell marker is a stepping stone in liver regenerative medicine. Here, we report brain isoform glycogen phosphorylase (GPBB) as a novel liver progenitor cell marker. GPBB was identified in a protein complex precipitated by a monoclonal antibody Ligab generated from a rat liver progenitor cell line Lig-8. Immunoblotting results show that GPBB was expressed in two liver progenitor cell lines Lig-8 and WB-F344. The levels of GPBB expression decreased in the WB-F344 cells under sodium butyrate (SB)-induced cell differentiation, consistent with roles of GPBB as a liver progenitor cell marker. Short hairpin RNA (shRNA)-mediated GPBB knockdown followed by glucose deprivation test shows that GPBB aids in liver progenitor cell survival under low glucose conditions. Furthermore, shRNA-mediated GPBB knockdown followed by SB-induced cell differentiation shows that reducing GPBB expression delayed liver progenitor cell differentiation. We conclude that GPBB is a novel liver progenitor cell marker, which facilitates liver progenitor cell survival under low glucose conditions and cell differentiation.
INTRODUCTION
Pluripotent progenitor cells are critical elements in regenerative medicine. Many progenitor cells were developed for various tissues including the liver: oval cells [1][2][3], liver epithelial cells [4][5][6][7][8][9] and small hepatocyte-like cells [10]. Advances in liver progenitor cell research may lead to new cell therapies and facilitate the development of new drugs [11][12][13]. However, many of the liver progenitor cells were very hard to isolate due to limited liver progenitor cell markers. Thus, a proper liver progenitor cell marker is highly desirable to accelerate the development of liver regenerative medicine.
To identify potential liver progenitor cell markers, we took advantage of a monoclonal antibody Ligab previously generated in our lab using whole Lig-8 cells [17]. The Ligab antibody reacts with the liver progenitor cells Lig-8 but not mature hepatocytes, suggesting that the Lig-8 cells express certain Ligab antigens specific to liver progenitor cells. Moreover, the expression of the Ligab antigens in the Lig-8 cells decreased when the cells underwent SB-induced cell differentiation [17]. Thus, the Ligab antigens could be potential liver progenitor cell markers. Using proteomics, we identified brain isoform glycogen phosphorylase (GPBB) in a protein complex of the Ligab immunoprecipitates from the Lig-8 cells. Immunoblotting showed that GPBB was expressed in the Lig-8 and WB-F344 cells and the levels of GPBB in these cells decreased upon SB-induced cell differentiation, consistent with GPBB as a liver progenitor cell marker. GP is the first enzyme required for glycogenolysis [19]. Our shRNA-mediated GPBB knockdown followed by functional assays shows that GPBB facilitates liver progenitor cell survival under low glucose conditions and SB-induced cell differentiation.
Immunoprecipitation and electrophoresis
As previously described, the Ligab antibody reacts specifically with the Ligab antigen in a nondenaturing protein extraction buffer [17]. Therefore, we prepared Lig-8 cell protein extracts by dounce-homogenizing the cells in a non-denaturing protein lysis buffer containing 1% v/v Triton X-100, 50 mM Tris (pH 7.4), 300 mM NaCl, 5 mM EDTA, 0.02% w/v sodium azide, 1 mM phenylmethylsulfonyl fluoride, and 1% v/v protease inhibitor cocktail (Sigma-Aldrich, St. Louis, MO, USA). The protein extracts were cleared by centrifugation at 12,000 ×g at 4°C for 30 minutes and the supernatants were further subjected to ultracentrifugation (Beckman Optima XL-90 Ultracentrifuge, Global Medical Instrumentation Inc., Ramsey, MN, USA) at 226,000 ×g at 4°C for 1 hour to separate the cytosolic fraction (S2) from the precipitated membrane fraction (S3). The S2 fraction was further separated into S2.1 (MW > 30 kDa) and S2.2 (MW < 30 kDa) by using a centricon tube (Millipore, Billerica, MA, USA). The S3 membrane precipitates were re-suspended in a non-denaturing lysis buffer containing 0.01% dodecyl-beta-D-maltoside (DDM; Sigma-Aldrich, St. Louis, MO, USA) at 4°C on a rotator overnight followed by centrifugation at 2,000 rpm at 4°C for 10 minutes. The supernatants were then subjected to dialysis to remove the detergent DDM. The protein concentration of each fraction thus obtained was determined using a protein assay dye reagent (Bio-Rad, Hercules, CA, USA).
Each protein fraction thus obtained was incubated with 5 μg of the Ligab antibody and 30 μL of 50% protein G slurry (Invitrogen, Carlsbad, CA, USA) at 4°C on a rotator overnight. The protein G slurries were washed 3 times in the protein lysis buffer and thereafter subjected to standard SDS-PAGE. The gels were silver-stained using SilverSNAP (Thermo Scientific, Rockford, IL, USA).
Protein identification by using liquid chromatography combined with tandem mass spectrometry The bands of interest on the silver-stained gels were excised and diced into approximately 1-mm 3 pieces. The diced gel pieces were washed in microcentrifuge tubes with 100 mM ammonium bicarbonate, dehydrated with 50% acetonitrile (Sigma-Aldrich, St. Louis, MO, USA), and dried completely in a Speed-Vac. The gels were rehydrated and reduced with 50 μL of 10 mM dithiothreitol in 50 μM ammonium bicarbonate. Cysteine residues were alkylated by treating the gels with 50 μL of 55 μM iodoacetamide (Sigma-Aldrich, St. Louis, MO, USA). After the supernatant was washed and decanted, 50 μL of acetonitrile was added to the gels and the gels were dried in a Speed-Vac. The gels were rehydrated in 50 μM ammonium bicarbonate containing sequencing grade trypsin (Roche Diagnostics Ltd., Lewes, UK) at a concentration of 13 ng/μL and incubated at 37°C for 1 hour, and then at room temperature overnight. The final step was to re-dissolve the gel pieces by using a mixture containing 1 μL of 1% formic acid (Sigma-Aldrich, St. Louis, MO, USA) and 9 μL of 50% acetonitrile. The re-dissolved peptide mixture solution was then injected online to a column for liquid chromatography (LC Packings Ultimate, Dionex, Sunnyvale, CA, USA) and thereafter subjected to mass spectrometry (QSTAR XL, Applied Biosystems, Foster City, CA, USA). Protein identification was performed using the MASCOT database search (Matrix Science, Boston, MA, USA). MASCOT scores greater than or equal to 38 indicate identity or extensive homology (p < 0.05).
Extraction of cellular proteins and immunoblotting analysis
For immunoblotting, Lig-8 and WB-F344 cellular proteins were extracted in a buffer containing 50 mM Tris (pH 8.0), 0.5 mM EDTA, 150 mM NaCl, 0.5% NP-40, and 1× protease inhibitor cocktail (Sigma-Aldrich, St. Louis, MO, USA). Each protein extract (50 μg) in a sample buffer (50 mM Tris (pH 6.8), 2% SDS, 0.05% bromophenol red, and 10% glycerol) was boiled for 10 min, separated on 10% SDS-PAGE, and transferred to a nitrocellulose membrane (Hybond-C extra, Amersham Biosciences, Piscataway, NJ, USA). The membranes were blocked for 4 hours at room temperature in PBS (phosphate-buffered saline: 137 mM NaCl, 2.7 mM KCl, 10 mM Na 2 HPO 4 , 2 mM KH 2 PO 4 ) containing 0.05% Tween 20 and 5% skim milk and then incubated overnight at 4°C with primary antibodies (Table 1) in PBS containing 0.05% Tween-20 and 5% skim milk. The membranes were thereafter washed and incubated with horseradish peroxidase-conjugated secondary antibodies at a dilution of 1:10000 for 1 hour at room temperature and then developed using the Western Blot Luminol Reagent Kit (Santa Cruz, Santa Cruz, CA, USA). Protein band intensities in the immunoblots were analyzed using the Image J software (National Institute of Health, Bethesda, MD, USA). For comparison purposes, protein band intensities were normalized against that of the housekeeping protein β-actin.
Construction of lentivirus-expressing GPBB-specific short hairpin RNA
Because of the high degree of mRNA homology (87%) among GPBB, GPMM, and GPLL [21][22][23][24], we carefully compared the 3 mRNA sequences and selected 2 sequences (nucleotide 3119-3137 and nucleotide 3759-3777), each containing 19 nucleotides that were specific to GPBB. Two pairs of sense and antisense oligonucleotides were synthesized ( Table 2). The annealed oligonucleotide pairs were inserted into the BamHI and EcoRI cloning sites of a dual-marker short-hairpin RNA (shRNA) expression vector pGreenPur (System Biosciences, Mountain View, CA, USA). These recombinant vectors were cotransfected with a packaging plasmid, psPAX2, and an envelope plasmid, pMD2G, into HEK293 cells by using JetPEI transfection reagent (Polyplus-transfection Inc., New York, NY, USA) to produce recombinant lentiviruses. The transfection units of the lentiviruses were determined using flow cytometry based on the reporter copGFP. Ten microliters of concentrated viruses were used to infect 10 5 WB-F344 cells per well of 6-well culture plates. The positively infected cells were selected by treating them with puromycin at 2 μg/mL for 3 days.
Examination of the Ligab antigen in liver progenitor cells WB-F344 and Lig-8 by flow cytometry
Lig-8 cells have been shown specifically to react with Ligab by flow cytometry and confocal microscope [17]. Here we examined whether WB-F344 cells could also react with Ligab antibody using H4IIE and MF cells as controls. WB-
Apoptotic cell analysis
Apoptotic cell analysis involving propidium iodide (PI) staining was performed using flow cytometry. Lig-8 and WB-F344 cells were cultured in high glucose (4.5 g/L), low glucose (1.0 g/ L), or glucose-and-pyruvate-deprived medium at 37°C for 24 hours. For each condition, cells in suspension and cells attached to the dish were harvested, mixed and fixed in 70% ethanol at -20°C for 1 hour. Thereafter, the cells were washed with PBS and stained with PBS containing 20 μg/mL PI, 0.1% Triton X-100, and 0.2 μg/mL DNase-free RNase A in the dark at 4°C for 1 hour. These cells were washed with PBS three times and then analyzed using flow cytometry.
Statistical analysis
Data were expressed as the mean ± standard error of means from triplicate measurements. Statistical significance were based on Student's t test in Microsoft Excel, set at P < 0.05 ( Ã ) or P < 0.01 ( ÃÃ ).
Lig-8 and WB-F344 liver progenitor cells express Ligab antigen
Before using the monoclonal Ligab antibody to identify potential liver progenitor cell markers, we examined whether this antibody could differentiate liver progenitor cells (Lig-8 and WB-F344) from non-progenitor cells (MF and H4IIE) using flow cytometry. When stained with the Ligab antibody (Fig. 1A, red line), 98.8% of the liver progenitor cells Lig-8 showed staining intensity greater than 10 whereas no apparent staining was found in the Lig-8 cells stained with a nonspecific IgG (grey dotted line). Similarly, 95.3% of the liver progenitor cells WB-F344 were stained by the Ligab antibody (Fig. 1B). In contrast, the two non-progenitor cells MF and H4IIE were not stained by the Ligab antibody (Fig. 1C, D). These results suggested that the Ligab antigens expressed in the Lig-8 and WB-F344 cells could be potential markers of liver progenitor cells.
WB-F344 and Lig-8 cells express brain isoform glycogen phosphorylase
Because there are 3 GP isoforms (GPBB, GPLL and GPMM) in mammals [21][22][23][24]29], we generated isoform-specific antibodies against GPBB and GPLL and acquired an antiserum against GPMM [24] and used them to examine the isoforms expressed in WB-F344 and Lig-8 cells with immunoblotting. As seen in Fig. 3A, only GPBB at about 97 kDa was detected in the WB-F344 and Lig-8 cell lysates whereas GPLL and GPMM were not detectable, indicating that the liver progenitor cells express specifically the brain isoform of glycogen phosphorylase GPBB. In contrast, the non-progenitor cells MF and H4IIE did not express GPBB (Fig. 3B). These results suggested that GPBB is a potential liver progenitor marker. Note the mass difference between GPBB (97 kDa) and the Ligab-immunoprecipitated band (38 kDa) submitted to protein identification.
GPBB is likely present in a protein complex of the Ligab immunoprecipitate
To examine whether GPBB was a specific Ligab antigen, we knocked down GPBB in WB-F344 and Lig-8 cells using shRNA-based method followed by immunoblotting with the GPBB antibody. As seen in Fig. 4A, two shRNA sequences (shRNA3119 and shRNA3759) successfully reduced GPBB protein levels to about 17% and 7% in WB-F344 cells. Similarly, the two shRNA sequences reduced GPBB protein levels to about 36% and 13% in Lig-8 cells. However, GPBB knockdown did not affect reactivity of the Ligab antibody to the GPBB knockdown cells. As shown in Fig. 4B, nearly 100% of the Lig-8 and WB-F344 cells with or without GPBB knockdown still reacted to the Ligab antibody. There was no difference in the mean fluorescence intensity either regardless GPBB knockdown. Thus, GPBB was unlikely a direct Ligab antigen. The reason that GPBB was identified could perhaps be due to its presence in a protein complex immunoprecipitated by the Ligab antibody. In line with this, the Ligab immunoprecipitates from GPBB knockdown cells still showed presence of GPBB albeit at lower amounts (Fig. 5). However, GPBB was not detectable in the control IgG immunoprecipitates, consistent with its presence in the protein complex immunoprecipitated by the Ligab antibody.
GPBB expression decreased upon sodium butyrate induced cell differentiation
To verify GPBB as a potential liver stem cell marker, sodium butyrate was used to induce WB-F344 cell differentiation followed by examining levels of GPBB protein along with mature cell makers CK19 and GPLL with immunoblotting. Upon sodium butyrate addition to the WB-F344 cells, the mature cell markers CK19 and GPLL were detected on day 3 and continuously increased on day 5 (Fig. 6), indicative of cell differentiation. During the cell differentiation process, GPBB protein levels decreased in a time-dependent manner to about 71% on day 3 and 56% on day 5 compared to those on day 1.
GPBB knockdown delayed sodium butyrate induced cell differentiation
To investigate potential roles of GPBB in cell differentiation, GPBB was knocked down in WB-F344 cells followed by sodium butyrate induction of differentiation. Fig. 7 shows the flow cytometry analysis results. In the control knockdown cells, sodium butyrate increased the percentage of mature cell marker CK19 positive cells from barely detectable to about 28% on day 2. In contrast, the percentage of CK19 positive cells was only about 8-9% in the GPBB knockdown cells on day 2. After the delay in expressing the CK19 marker on day 2, the percentage of CK19 positive cells in the GPBB knockdown cells reached about 57-63% on day 3 to a level similar to that of the control knockdown cells. These observations suggested that GPBB knockdown delayed sodium butyrate induced WB-F344 cell differentiation.
GPBB knockdown rendered WB-F344 and Lig-8 cells vulnerable under glucose-deprived conditions
Given GP a key enzyme in glycogenolysis, we examined whether GPBB plays a role in liver progenitor cell survival in low glucose medium. Control and GPBB knockdown cells were cultured in medium containing various levels of glucose concentrations followed by flow cytometry analysis for apoptosis. As shown in Fig. 8A, the percentages of the cells undergoing apoptosis were similar (~10%) for the control and GPBB knockdown Lig-8 cells in the medium containing 4.5 g/L glucose. When the glucose levels were reduced to 1.0 g/L, the percentages of apoptotic cells significantly increased (19.31% ± 3.77% by shRNA3119 and 20.17% ± 3.70% by shRNA3759 vs. 10.81% ± 1.73% by shControl). The percentages of apoptotic cells increased even higher when glucose and pyruvate were both deprived from the medium (21.72% ± 4.24% by shRNA3119 and 24.05% ± 2.41% by shRNA3759 vs. 14.41% ± 1.03% by shControl). These results suggest that the liver progenitor cells depend on GPBB for survival under low glucose conditions. Similar results were observed in GPBB knockdown WB-F344 cells under low glucose conditions. The percentages of apoptotic cells were about 4% in medium containing 4.5 g/ L glucose (Fig. 8B). In medium containing 1.0 g/L glucose, the apoptotic percentage increased to 10.94% ± 0.52% by shRNA3119 and 10.60% ± 0.06% by shRNA3759 compared to 2.36% ± 0.77% by shControl. When glucose and pyruvate were deprived, more than 70% of the GPBB knockdown WB-F344 cells underwent apoptosis (79.78% ± 1.07% by shRNA3119 and 72.45% ± 3.86% by shRNA3759 vs. 49.93% ± 4.47% by shControl). Note that the apoptotic percentage of the control cells was reduced, from 18.03% in high glucose medium (4.5 g/L) to 2.36% in low glucose medium (1.0 g/L).
DISCUSSION
Despite of many existing liver progenitor cells, the advances in liver regenerative medicine was hampered in part due to the lack of proper cell markers. Taking advantage of a liver progenitor cell specific antibody Ligab generated in our laboratory, we discovered a novel liver progenitor cell marker, GPBB, by analyzing a protein complex of the Ligab immunoprecipitates with LC-MS/MS-based proteomics. GPBB is expressed at a high level in the Lig-8 cell as well as the well-known liver progenitor cell WB-F344 (Fig. 1). Upon SB-induced cell differentiation, GPBB expression levels significantly decrease in WB-F344 cell (Fig. 6), consistent with a role of GPBB as a liver progenitor cell marker. GPBB knockdown followed by glucose deprivation shows that GPBB is required for Lig-8 and WB-F344 cells survival under low glucose conditions (Fig. 8). GPBB knockdown followed by SB-induced cell differentiation shows that GPBB knockdown delays SB-induced cell differentiation (Fig. 7). Our data indicate that GPBB is a liver progenitor cell marker that helps liver progenitor cell survival under low glucose conditions and promotes differentiation. Although GPBB was identified in the Ligab immunoprecipitates, GPBB is unlikely a direct Ligab antigen for a number of reasons. 1) The Ligab antibody most likely recognizes an unidentified protein in the plasma membrane of the cells because of its ability to detect Lig-8 and WB-F344 whole cells without the use of detergent to permeabilize the plasma membrane ( Fig. 1) [17]. GPBB is a cytosolic protein and hence is unlikely a direct Ligab antigen. 2) GPBB knockdown in the Lig-8 and WB-F344 cells did not affect Ligab recognition of the GPBB knockdown cells (Fig. 4). 3) The apparent mass difference between the 38 kDa band submitted to mass spectrometry (Fig. 2) and the 97 kDa GP band detected by our antibody (Fig. 3) suggests an alternative GPBB splice variant. However, our RT-PCR results found no evidence of smaller GPBB splice variants with a protein size close to 38 kDa (data not shown). Thus, the major protein in the 38 kDa band is unlikely GPBB with 97 kDa. How could we identify a 97 Survival of GPBB knockdown WB-F344 and Lig-8 cells under low glucose conditions. Control or lentivirus expressing shRNA3119 or shRNA3759 targeting GPBB was used to infect Lig-8 (A) and WB-F344 (B) cells. After puromycin selection, stably knockdown cells were cultured in mediums with various levels of glucose: high (4.5 g/L), low (1.0 g/L) or zero glucose/pyruvate for 24 hours. All cells in the supernatant and those attached to the plate were harvested and mixed. The nuclei were stained with propidium iodide before flow cytometry analysis. Cells with nucleus staining intensity less than those of the G 0 /G 1 phase cells were considered apoptotic. Values are mean ± SEM. * P < 0.05 and ** P < 0.01, Student's test against values of the control knockdown (shControl).
doi:10.1371/journal.pone.0122528.g008 kDa protein in a 38 kDa gel piece? Per our experiences in mass spectrometry, it is not uncommon that proteins are identified in gel pieces that do not correspond to the molecular weights of the proteins [25,26]. This is particularly the case when mild detergents were used in immunoprecipitation experiments. Since GPBB is present in the Ligab immunoprecipitates but not the control IgG immunoprecipitates (Fig. 2), it appears that the Ligab might bind to an as yet unidentified membrane protein of the Lig-8 cells and GPBB somehow co-precipitated along with the Ligab bound protein complex.
The identification of GPBB in the liver progenitor cells was interesting in that GPBB is a fetaltype GP expressed at a high level in early embryonic stages i.e. undifferentiated stages where progenitor cells are in. As the embryos develop, GPBB was gradually replaced by mature forms of GPBB, GPLL and GPMM in the brain, liver and muscles, respectively [21][22][23][27][28][29]. We observed a similar switch in the GP isoforms in the liver progenitor cells undergoing SB-induced cell differentiation. GPBB was expressed at a high level in the WB-F344 cells before SB-induced differentiation (Fig. 6). Upon addition of SB, GPBB started to decrease in the differentiated WB-F344 cells while the mature cell marker GPLL and CK19 began to express (Fig. 6). Consistent with this switch from GPPB to GPLL expression in the differentiated WB-F344 cells, it was interesting to note that GPBB is the predominant GP in poorly differentiated and rapidly growing hepatoma [21] whereas GPLL is the predominant GP in the adult liver cells [30].
GPBB does not seem to maintain stemness of the liver progenitor cells as GPBB knockdown did not induce cell differentiation into either hepatocyte or cholangiocyte lineage (data not shown). Since GP catalyzes the rate-limiting step in glycogenolysis in animals by releasing glucose-1-phosphate from the terminal alpha-1,4-glycosidic bond [28], we suspected roles of GPBB in glycogen metabolism in liver progenitor cells. Our results show that GPBB helps liver progenitor cell survival under low glucose conditions (Fig. 8). GPBB may also facilitate liver progenitor cell differentiation because GPBB knockdown delayed SB-induced cell differentiation (Fig. 7). These results suggest that extracellular glucose concentrations affect liver progenitor cell survival and differentiation. Similar observations were made in human tenocytes where extracellular glucose concentrations were reported to determine cell fate following oxidative stress [31]. GPBB expression, however, is not necessarily a favorable factor for the liver progenitor cells under high glucose conditions which cause a high percentage of apoptotic cells (Fig. 8B). Similarly, high glucose concentrations were reported to cause apoptosis of mesenchymal stem cells [32].
We conclude for the first time that GPBB is a novel liver progenitor cell marker. Expression of GPBB in the liver progenitor cells appears to play dual roles in facilitating liver progenitor cell survival under low glucose conditions and cell differentiation. | 4,956 | 2015-03-31T00:00:00.000 | [
"Biology",
"Medicine"
] |
Robust priors for regularized regression
Induction benefits from useful priors. Penalized regression approaches, like ridge regression, shrink weights toward zero but zero association is usually not a sensible prior. Inspired by simple and robust decision heuristics humans use, we constructed non-zero priors for penalized regression models that provide robust and interpretable solutions across several tasks. Our approach enables estimates from a constrained model to serve as a prior for a more general model, yielding a principled way to interpolate between models of differing complexity. We successfully applied this approach to a number of decision and classification problems, as well as analyzing simulated brain imaging data. Models with robust priors had excellent worst-case performance. Solutions followed from the form of the heuristic that was used to derive the prior. These new algorithms can serve applications in data analysis and machine learning, as well as help in understanding how people transition from novice to expert performance.
Introduction
Inference from data is most successful when it involves a helpful inductive bias or prior belief. Regularized regression approaches, such as ridge regression, incorporate a penalty term that complements the fit term by providing a constraint on the solution, akin to how Occam's razor favors solutions that both fit the observed data and are simple. By incorporating such constraints or prior beliefs, the hope is that models will better predict future outcomes.
What makes a good prior belief or inductive bias? In the case of ridge regression, the norm of the regression coefficients is shrunk toward zero (Hoerl & Kennard, 1970;Tikhonov, 1943) to control model complexity and reduce overfitting. However, in many domains, zero is not a reasonable a priori guess for the true association between variables. For example, it would be strange to a priori predict the quality of a new home would be unaffected by the experience of the workers, quality of materials, reputation of the architect, etc. Because the world is somewhat predictable, a prior centered on the origin (Fig. 1) is inappropriate.
If not zero, where does one turn for a useful prior? One answer is to look to human behavior. Humans use an assortment of clever strategies for learning and decision-making that perform well even in conditions of low knowledge. Simple heuristics that are fast and frugal (Czerlinski, Gigerenzer, & Goldstein, 1999) excel when training examples are scarce (Parpart, Jones, & Love, 2018). People can also shift to more complex strategies when resources are available (Rieskamp & Otto, 2006). With increasing experience and expertise, humans often acquire a sophisticated understanding of domains.
Although heuristics are efficient and robust models in their own right, we propose they are a useful starting point or prior for more complete characterizations of domains. Advantages of heuristics include their ecological validity (Czerlinski et al., 1999; S. Bobadilla-Suarez et al. ≠ ⃖ ⃗ ). Using priors based on a heuristic (i.e., a constrained model) can increase robustness and interpretability. Eq. (1) is shown at the bottom of panel B. The other equation simply drops , equivalent to standard notation for ridge regression, here termed the Zero prior model.
Fig. 2. TAL and TTB decision heuristics. (A)
A hypothetical decision -choosing between a rural (−1) or urban (+1) home based on cues ordered by cue validity: low pollution, low price, and proximity to museums. Each cue is coded as −1 when favoring the left option (rural), +1 for the right option (urban), and 0 when the two options are equal on that cue. TAL sums cue values, choosing the rural home. TTB chooses based on the best cue (measured bŷ, see Eq. (4)) that distinguishes the options. Here, TTB would choose the urban home based solely on proximity to museums. (B) The covariance of the cues with the criterion (urban or rural home), whicĥmeasures. (C) The covariance of the cues with one another; TAL and TTB heuristics disregard this information. (D-F) Illustrations of OLS, TAL and TTB. Here, OLS strikes a balance for correlated cues; low price and proximity to museums are (negatively) correlated. Thus, low pollution presents a higher weight 1 . TAL and TTB equate the absolute value of all weights. TTB additionally ranks and scales predictors according to their predictive valuêin a non-compensatory way (multiplying cues by powers of 2). Mata et al., 2012) and robustness across decision problems. Their weakness is insensitivity to aspects of the data due to their rigid inductive bias (Geman, Bienenstock, & Doursat, 1992;Parpart et al., 2018). This weakness is ameliorated when heuristics function as priors within more complex models because priors can be overcome by additional data, much like how human experts develop more complex and nuanced knowledge with increasing experience in a domain. When data are abundant, the encompassing model would master the subtleties of the domain, whereas when data are scarce the heuristic prior would help guide predictions and increase robustness. Because the heuristics themselves are interpretable models, the solution of the encompassing model could be understood in terms of deviations from the heuristic prior. Accuracy and normalized entropy for the 20 datasets in Application 1. Training set size was fixed at 50. (A) Test set accuracy across penalty values for the Obesity dataset. At low penalties, the models all agree with OLS (unpenalized). Under strong penalties, the Zero prior model (standard ridge regression) converges to chance performance as weights shrink toward ⃖ ⃗ . TAL-prior and TTB-prior models converge toward their respective heuristics and are robust. (B) Test set accuracy for all 20 datasets for best and worst performing penalty values for each model (see SI). The OLS permuted prior model is a penalized regression model with a permuted OLS solution as prior. Heuristic prior models are most robust. (C) Normalized entropy (Eq. (17)) for the Professors' Salaries dataset, which measures how compensatory the weights are. The TAL-prior model becomes maximally compensatory as penalty increases, unlike the TTB-prior model. (D) Normalized entropy for all 20 datasets across all penalty values, which orders as TAL-prior, Zero prior, TTB-prior. For B and D, violins represent density estimates for the respective metric. Each dot is one of 20 datasets. For A and C, shaded areas represent 1 standard deviation.
Robust priors based on decision-making heuristics
We used two well-known heuristics, tallying (TAL) and take-the-best (TTB) (Czerlinski et al., 1999), as priors in regularized regression models. These heuristics predict which of two options is preferable. For example, TAL or TTB could predict whether a rural or urban home is preferable based on several cues ( Fig. 2A). Each cue is ternary valued and indicates whether the left (−1) or right (+1) option is preferred on that dimension, with 0 for a tie. TAL is a simple majority voting rule whereas TTB bases its decision on the single most predictive cue that can discriminate between the alternatives (both heuristics explained below; also, see Fig. 2).
To use a heuristic as a prior, we use a two-step model-fitting procedure (cf. (Zou, 2006)). In step 1, we fit the heuristic to the training data. The resulting point estimate for the weight vector provides the penalty term (weighted by the penalty parameter ) within a regularized regression model in step 2. The penalty term shrinks regression coefficients toward the heuristic solution, as opposed to ⃖ ⃗ as in ridge regression (Fig. 1). Increasing increases the strength of the prior, eventually pushing the regression solution to fully agree with the heuristic (cf. (Parpart et al., 2018)).
Our approach integrates heuristics with full-information (regression) models in a principled way that applies to a broad class of heuristics. The approach is to subtract a carefully constructed vector inside the penalty term of the well-known 2 cost function used in ridge regression. The cost function for standard ridge regression iŝ where ‖ ⋅ ‖ 2 is the Euclidean ( 2 ) norm, is the dependent variable [ 1 , … , ] , is an × matrix with one column for each of the predictor variables , is a column vector of zeros ⃖ ⃗ = [0 1 , … , 0 ] , is a vector of estimated regression coefficients [ 1 , … , ] , and ≥ 0 is a tunable penalty parameter. 1 Note that implicit priors are also found in regularized regression with all other norms ( ) too, including LASSO regression ( 1 ). Thus, our insight generalizes to all other norms as well.
S. Bobadilla-Suarez et al. The first term inside the arg min of Eq. (1) promotes goodness-of-fit in the model, whereas the second term -known as the penalty term -promotes smaller weights . As increases, the weights tend to in the limit (Fig. 1). (The derivation of the optimal weights * is included in the SI.) However, when = 0, the model is equivalent to ordinary least squares (OLS) regression. OLS regression estimates coefficients without the penalty term: Normally, is not included in Eq. (1); it is implicit in more standard specifications of ridge regression, where the penalty term is written simply as ‖ ‖ 2 2 . Nevertheless, instead of a ⃖ ⃗ vector, one can generalize ridge regression with alternative constructions of ( Fig. 1). As argued above, choosing this vector intelligently might improve learning of̂by imposing a more sensible inductive bias (Geman et al., 1992). Although the decision-making literature has traditionally proposed that humans use certain classes of heuristics due to cognitive limitations (Bobadilla-Suarez & Love, 2018;Kahneman, Slovic, & Tversky, 1974;Simon, 1978), heuristics have also been justified from their ecological validity (Czerlinski et al., 1999;Dawes, 1979;Parpart et al., 2018). That is, the inductive biases they embody agree with the statistical structure of many natural environments, thus leading to better performance. Taking inspiration from the TAL and TTB heuristics, both in their success in describing human decision making (Bobadilla-Suarez & Love, 2018;Bröder, 2003;Otworowska, Blokpoel, Sweers, Wareham, & van Rooij, 2018) and in application to real world statistical problems (Czerlinski et al., 1999;Parpart et al., 2018), we propose a construction of based on these heuristics.
Below we discuss how to construct priors from TAL and TTB. We then report three applications. In the first application, we compared generalization performance (test set accuracy) and interpretability of model solutions on 20 classical datasets previously used in the decision-making literature (Czerlinski et al., 1999;Parpart et al., 2018). There, the decision-making problem was to choose the better item within a pair (see Fig. 2A and below). In the second application, we evaluated our approach within a classification paradigm in which a single item is assigned to one of two classes (e.g., friend or foe?). In the final application, we demonstrated the generality and benefits of our approach by analyzing simulated brain imaging data where the prior is derived from a technique (Mumford, Turner, Ashby, & Poldrack, 2012) that seeks to minimize collinearity amongst predictors in a manner that parallels how we derive heuristic priors.
TAL and TTB heuristics
TAL and TTB do not adapt their form or complexity in light of the data. For example, TAL is an equal-weights algorithm that uses only the signs of the coefficients (Czerlinski et al., 1999;Dawes, 1979): the estimated weightŝare constrained to be either −1 or 1 (Fig. 2E).
The Tallying decision rule (TAL) is defined aŝ = sign wherê= A cue's estimated cue validitŷis defined as the difference between the numbers of correct predictions and incorrect predictions , divided by the total number of predictions across all observations + (Martignon, Hoffrage, et al., 1999). 2 Observations that do not present a prediction (i.e., = 0) are ignored. Notably, cue validities depend only on the relationship between each cue and the outcome, and not on covariance between cue. Thus, the definition of̂is what makes heuristics insensitive to cue covariance. When assessing model performance, the validitieŝare estimated for each training set.
The Take-the-Best (TTB) decision rule iŝ Whereas TAL sums the signs of the predictors to determine its response, TTB relies on the top predictor that differentiates the two options. When there is no evidence for either option, both TAL and TTB choose randomly (with probability = 0.5 for each option). This occurs for TAL when Eq. (3) yields 0 and for TTB when every equals 0. We now define based on the TAL heuristic, referred to as . First, we determine a scalar coefficientŝ hared across all predictor variables : This equation is the same as Eq.
(2) except that the vector has been replaced by the product of a scalar and a column vector , which has cue directionalitieŝ= sign(̂). Using this scalar, we define:
=̂̂.
(8) To construct , we build from the intuition that TTB is equivalent to a noncompensatory weight vector such as 2̂, wherê is a vector of ascending ranks for the absolute cue validities,̂= |{ ′ ∶ |̂′ | < |̂|}|. Paralleling the definition of the , for the , we also determine a shared scalar: However, we have a new design matrix in Eq. (9), defined by =̂ (11) witĥbeing a diagonal matrix This transformation has the effect of encoding cue validity directly into the design matrix by scaling each regressor according to a geometric progression. In order for to function appropriately as a , the original design matrix is replaced with in Eq. (1). When instead ∶= 1, we recover the TAL prior. Note also that this entire procedure is nearly equivalent to working with the original design matrix and taking to be proportional tô̂(the vector of signed exponentiated ranks,̂) rather than tô, except that the weights are differentially penalized according to their ranks. With these priors defined, we can now formally specify two regularized regression models. The TAL-prior model is defined by Eq. (1) The TTB-prior model is defined by Eq. (1) with = and with replaced by . In contrast to OLS, the use of the common scalar for all cues in the prior for both TAL-prior and TTB-prior highlights that both heuristics are insensitive to cue covariance information (see Fig. 2E,F). For TAL-prior, the common scalar reflects the fact that TAL is a (fully) compensatory strategy, whereas the design matrix in TTB-prior, , reflects the fact that TTB is a non-compensatory strategy. Later, we will evaluate how these differing priors affect the nature of penalized regression solutions.
Logistic ridge regression
The first two applications reported here use logistic ridge regression (Le Cessie & Van Houwelingen, 1992;Schaefer, Roi, & Wolfe, 1984;van Wieringen, 2015). To estimate weights for penalized logistic regression,̂( ) , we first obtain a scale parameter for an unpenalized logistic regression via maximum likelihood, where as above the weight vector is constrained to be proportional to the cue directionalities: The likelihood for logistic regression is as usual: where ⋅ denotes the th row of . We then define as We then insert into our final objective function for regularized logistic regression: See Supplementary Information (SI) for an approximation of̂( ) for regularized logistic regression via the Newton-Raphson iterative algorithm.
Application I: Heuristic decision making
Regularized regression models with heuristic priors were evaluated on the 20 datasets that have been previously used to compare heuristics with ordinary least squares (OLS) regression (Czerlinski et al., 1999;Katsikopoulos, Schooler, & Hertwig, 2010;Parpart et al., 2018). For each of the 20 problems, the cues for the two options on each trial were binary valued (see Methods below for more details), which leads to ternary-valued inputs according to our coding scheme (see Fig. 2A).
Methods
The preprocessed data were retrieved from an Open Science Foundation (OSF) repository (Parpart, Jones, & Love, 2019), used to evaluate the half-ridge and COR models by Parpart et al. (2018). In accord with previous research, cue attributes were dichotomized by median split (Czerlinski et al., 1999;Parpart et al., 2018).
The data were transformed into a format appropriate for decision-making problems where all pairwise comparisons between observations were encoded as the signed differences in (binary) attributes (possible values: −1, 0, and +1). The decision in our coding scheme is −1 for the left choice and +1 for the right choice, which was mapped to 0 and 1 for logistic regression: e.g., in the homestead example ( Fig. 2A), rural is coded as −1 (or 0 for logistic models) and urban is coded as +1. This is common procedure in the decision-making literature (Czerlinski et al., 1999;Katsikopoulos et al., 2010;Parpart et al., 2018). Formally, this consists of training pairs ( 1 , 1 ), … , ( , ) with ∈ {−1, 0, 1} . Training sets consisted of 50 training pairs from which the priors were learned from. All results were computed for 1000 iterations (i.e., different partitions into training and test sets) for all penalty values.
As the penalty parameter increases, the penalized regression models with the and converge to their corresponding heuristics (Fig. 3A). As a sanity check, the TAL-prior and TTB-prior models were validated on simulated data in the SI (Figures S1 and S2, respectively) by tracking their agreement with OLS predictions. Effectively, agreement with OLS is higher for low penalty values and agreement with TAL-prior or TTB-prior is higher for high penalty values. Individual plots for each of the twenty datasets are also included in the SI (Figures S3 and S4).
Although regression models are interpretable in that each feature's importance follows from its weight, the heuristic penalty terms make clear how the prior shapes the solution and how the solution differs from the prior, which itself is an interpretable solution. To evaluate how the form of the solution changes as a function of the prior, we calculated normalized Shannon entropy defined as: ∶= 1 for TAL-prior and ∶= 2 for TTB-prior, and ‖ ⋅ ‖ 1 is the 1 norm, such that̃∈ [0, 1] for any number ( ) of predictors. Eq. (17) provides an intuitive measure of how compensatory a solution is. The measure will peak at 1 when the predictive force of the weights is uniform, as in the TAL heuristic.
Results
As predicted, these models are robust across the range of values because they converge to a reasonable estimate (i.e., a sensible heuristic). In contrast, while ridge regression performs well overall, its performance suffers at higher penalty values as its weights are pulled toward ⃖ ⃗ . The robustness of the penalized regression models with heuristic priors held across the 20 datasets (Fig. 3B). Notice that regularization using any nonzero prior is not sufficient for robustness -an ad hoc nonzero prior (OLS Permuted Prior) was not robust. The OLS permuted prior model is a penalized regression model with a permuted OLS solution as prior (i.e., where the weights from the OLS solution have been permuted).
We confirmed that̃for the TAL-prior model would converge to 1 with increasing penalty , in contrast to the TTB-prior model. We also predicted ridge regression's̃would be somewhat lower than TAL-prior's. That is, convergence to a ⃖ ⃗ weights vector for standard ridge regression is nonsensical and effectively resisted in the optimization, providing more heterogeneous weights than otherwise expected. These predictions held (Figs. 3C,D).
The results presented here hold under an alternative training scheme where we also evaluate OLS as a prior itself (Splitting training data in the SI). OLS performs worse as a prior on the majority of datasets ( Figure S7 in the SI) and also shows higher variance overall ( Figure S8 in the SI).
In these 20 decision problems, models using priors based on TAL and TTB were robust across the entire range of prior strengths. These penalized regression models shrunk to a reasonable prior based on a simple heuristic that discards covariance information amongst predictive cues. The forms of the solutions were interpretable and followed from the priors.
Application II: Breast Cancer classification
In this application, we conducted the same analyses as in Application I, but for a classification problem as opposed to a forced choice between two options. We applied the models to the Breast Cancer Wisconsin (Diagnostic) Data Set from the UCI data repository (Blake, Keogh, & Merz, 1995). In this task, models predicted whether an item was cancerous or not based on binary features (see Methods for more details). The predictors were discrete as in Application I, though the identical approach would apply to continuous predictors or to a mixture of discrete and continuous predictors. 3 S. Bobadilla-Suarez et al. The key finding is that models with heuristic priors are most robust. (B) Normalized entropy (Eq. (17)) averaged across the range of penalty values reflects how compensatory a model's predictions are, led by TAL-prior, followed by the Zero prior, and finally the TTB-prior model. Each dot represents one of the tested penalty values averaged over 1000 train-test splits. The gray violins represent the respective density estimates in both panels.
Methods
The data comprises nine cues ( = 9) that describe characteristics of the cell nuclei present in digitized images of fine needle aspirate (FNA) of breast masses (Blake et al., 1995). Data points with missing cue values were removed, resulting in a total of = 478 observations. All variables were binarized by median split.
In an analogous fashion to how we constructed in Application I, here we transformed the original data by median splits. For each cue, if the value was equal to the median, it received a value of 0, if it was above the median it was equal to +1 and if it was below the median it was equal to −1. Formally, this also produces training pairs ( 1 , 1 ), … , ( , ) with ∈ {−1, 0, 1} . Training sets consisted of 100 training pairs from which the priors were learned from. However, we did not construct a matrix of pairwise comparisons of observations as before. The dependent variable was binary, ∈ {−1, +1}, coding for malignant and benign tumors, respectively. This preprocessing of the data is closer to the way regression models are calculated for everyday applications. Both the mean test accuracy (Fig. 4A and Figure S5 in the SI) and the mean normalized entropy (Fig. 4B and Figure S6 in the SI) were averaged over 1000 iterations for each penalty value.
Results
The results were in accord with Application I. The models with a heuristic prior were robust across the range of values (Fig. 4A). As in Application I, the priors shaped the form of the solution in the predicted manner ( Fig. 4B) with the TAL-prior model having the most compensatory solutions.
Application III: Estimation in brain imaging analyses
In Applications I and II, the task was to generalize from training items to make decisions about test items. In Application III, the objective was to estimate the weights themselves. We considered simulated functional magnetic resonance imaging (fMRI) time series that allowed for comparing estimates to ground truth.
Brain imaging datasets are challenging to analyze because they measure the brain's hemodynamic response, which is a temporal and spatially autocorrelated, high-dimensional, noisy, and time-lagged signal. The signal is composed of thousands of voxels (voluminous pixels) with coordinates in space ( , , ) and time ( time-points). Correlations across space and time due to psychological (e.g., Visscher, Kahana, and Sekuler (2009)), neurovascular (Boynton, Engel, & Heeger, 2012) and physical (Smith et al., 1999) effects complicate the independence and linearity assumptions used to model the signal in each voxel. Furthermore, the observed blood-oxygen-level dependent (BOLD) signal is only indirectly related to the outcome variable of interest (neural activity), via the hemodynamic response function (HRF), which is normally modeled as a double gamma function.
In task fMRI, the BOLD time series for a voxel is modeled by weighting events, such as a sequence of pictures (e.g., dog, truck, face, etc.) presented to a study participant. In addition to nuisance regressors, one typically estimates a beta weight for each event (convolved with the HRF). We refer to this standard method as least squares all (LSA), which is unpenalized and plays a role analogous to OLS in Applications I and II.
However, for the reasons discussed above, collinearity in the time series can compromise parameter estimation (Mumford, Poline, & Poldrack, 2015), particularly in rapid event designs (e.g., trial duration of one or two seconds). One proposed solution, which we refer to as least squares separate (LSS), is to estimate a separate model for each event rather than a single model for all events (Rissman, Gazzaley, & D'esposito, 2004). Each model estimates one beta weight for the target event (i.e., trial) and a second shared beta weight for all other events (Turner, 2010). In practice, LSS produces better (less variable) estimates by being less sensitive to collinearity in the time series (Mumford et al., 2012).
We view LSS as analogous to the heuristics considered in Applications I and II. The TAL and TTB heuristics are insensitive to cue covariance. Specifically, cue validity and cue direction are estimated individually for each predictor. Moreover, we implemented these heuristics in a regression framework with a single beta weight (e.g.,̂) to derive a prior. In both models, simplification is achieved by forcing multiple predictors to share a single regression weight. Analogously, each LSS model forces all but the target event (out of potentially hundreds of events) to share a common beta weight.
Like TAL and TTB, we predicted that LSS would provide an effective prior for a penalized regression model because it provides a reasonable and robust starting point to move from when the data warrant. We predicted that a penalized regression model with an LSS prior would outperform both LSS (high ) and the LSA approach ( = 0).
Methods
To build a continuum of models between LSA and LSS, we include the weights derived from LSS as a target (i.e., prior) in the penalty term within a regularized LSA model. Thus, the weights from the LSS-prior model are estimated with the following objective: Paralleling our treatment of the decision heuristics as priors, Eq. (19) specifies a continuum of models ranging from LSA ( = 0) to LSS ( → ∞). For all models, is the activation time series for a single voxel; with spatial indices (i.e., coordinates in brain space) its notation is . Both LSA and LSS are known as massive univariate GLMs, since they model each voxel independently. For a given voxel, LSA estimates weights aŝ where is the BOLD response time series for a voxel and is the × design matrix with number of columns equal to the number of trials , with only one event per trial. (Each is an event for LSA, but this changes for LSS.) This means that a column in models a single event in the experiment. The number of brain scans or time-points is usually larger than the number of trials (events) ( > ) because more than one brain scan is acquired per trial. Quite commonly, a regressor models an event (such as stimulus presentation) with a boxcar function, that models the duration of the stimulus, convolved with a double gamma HRF (Boynton et al., 2012). We will not focus here on how the regressors that model the BOLD signal are constructed. Instead, we focus on the GLMs that receive those regressors as input.
The LSS model differs from the LSA model in that the matrix is replaced with a set of matrices 1 , … , which results in one GLM per trial: Each has dimensions × 2, where is the same as before. Each weight̂is selected as the first coefficient from its respective GLM, via multiplication by = [1 0]. Each is constructed as mentioned above, with the first predictor variable modeling a single experimental trial of interest (i.e., the th trial) and the second predictor being a nuisance variable [0 1] modeling all other trials in the experiment (i.e., all − 1 trials excluding trial ). The LSS-prior model in the main text useŝaŝ.
Simulated fMRI data
There were 1000 simulations performed for each of 9 different designs (see below) with varying levels of signal-to-noise ratio (SNR) and interstimulus intervals (ISI; time between events). The simulations were performed on modified code from the rsatoolbox , which can be consulted at: https://github.com/bobaseb/rsa_toolbox_lss/tree/develop/LSS_project Each simulation consisted of a cluster (i.e., region of interest) of task-sensitive signal voxels with observed data generated for all trials by weights ∈ R × × × , where = and each spatial dimension = 7. The weights were embedded in an array tripled along each spatial dimension ∈ R ×3 ×3 ×3 (i.e., the simulated brain). The weights for non-task-sensitive voxels in (i.e., those not in ) were set to zero. Scanner noise ∈ R ×3 ×3 ×3 had entries drawn i.i.d. from a centered normal distribution (0, 2 ), where 2 = 10 000, and was added to to generate the observed signal: Thus, for a single voxel described by a set of spatial coordinates , , , we have data across time in ∈ R ×3 ×3 ×3 , represented as . For observations , the subset corresponding to voxels that are task-sensitive is denoted as . Notice the use of to generate simulated data instead of . In fact, there is no straightforward way to construct weights (embedded in ) to multiply with the set of matrices . To simulate spatiotemporal correlations in the data, the scanner noise was smoothed along its four axes for each run (two runs total, see below), using a Gaussian spatiotemporal smoothing kernel with full width at half maximum (FWHM) equal to 4 mm for the three spatial dimensions and 4.5 s for the temporal dimension. (Voxel size was set in millimeters at the default value of 3 × 3 × 3.75 in the rsatoolbox.) For each simulation, each coordinate of the effect center ( , , , as defined in the rsatoolbox) -where the signal voxels were placed inside the simulated brain -was uniformly sampled between 1 and 11 inclusive. Two separate runs ( = 2) were simulated on each of the 1000 iterations and each run had 20 repetitions of each of two stimulus types ( = 20 s and = 2). Simulating more than one run and stimulus type contributes to the ecological validity of the simulation, especially for studies that focus on classification (MVPA) where one run is used for training and another for testing. Repetition time (TR; duration for obtaining one full brain scan) was set to 1 s and event duration (ED, the duration of a stimulus on the screen in the MRI scanner) was set to 1.5 s.
A trial's duration is given by + . There are also ⌈ ∕3⌉ null epochs, randomly interspersed with the trials, where no stimulus is shown, each with a duration of + seconds. This kind of experimental design is common because it further helps reduce collinearity between trials and aid in the estimation of . Thus, for each run ⪆ 4 3 ×( + )+ , where is a temporal slack after the last trial that allows the BOLD signal enough time to decay. The exact number of time-points depends on the HRF model that was used (Boynton et al., 2012). This information is encoded in the design matrix . To sample the data-generating weights with correlations between (task-sensitive) voxels, we did the following: For each of the ∕ = 20 trials per stimulus, we sampled from ( , ). Each entry 1 , … , 3 in mean vectors 1 and 2 (for each stimulus, respectively) was i.i.d., drawn from a normal distribution (0, 2 ) for three levels of SNR ( 2 ∈ {10, 15, 20}). These were sampled for each iteration (a thousand iterations total) in each of the nine designs (Fig. 5) but kept constant across runs. The covariance matrix , with dimensions 3 × 3 , induces the correlations between task-sensitive voxels and was kept constant across runs but resampled on different iterations. It was drawn from a scaled Wishart distribution ( , )∕ with degrees of freedom = 3 . The symmetric positive definite matrix was constructed with ones on the diagonal and 0.7 for all off-diagonal values, representing a high degree of correlation between task-sensitive voxels. As presented in Fig. 5, the 3 × 3 design of the simulations had three levels of ISI ∈ {2, 3, 4} (in seconds) and three levels of SNR (as mentioned above).
After sampling all × 3 weights for a run, we have the object ∈ R × 3 . This matrix of weights (trials by voxels) was permuted along the temporal dimension and was arbitrarily mapped to the spatial coordinates of -and by implication, of too -such that → → , before applying Eq. (22).
Model scoring
Our evaluations of the models were done with the root mean squared error (RMSE) of eacĥfor each model (i.e., LSA, LSS, LSA-prior and LSS-prior models) with respect to the ground truth of each vector in : averaged across all the weights for task-sensitive voxels in and the 1000 iterations of simulated data:
Results
Our main prediction held (see Fig. 5). Across a range of task conditions, our penalized regression approach with outperformed both LSS (equivalent to large ) and LSA (equivalent to = 0) for intermediate penalty values of (Fig. 5). LSS provided an effective prior for our penalized regression. Replicating previous work, RMSE was lower for LSS than LSA, akin to less-is-more effects in which decision heuristics can best OLS (e.g., TTB in Fig. 3A).
General discussion
We looked toward human decision making to identify an effective prior for regularized regression and found that decision heuristics which disregard cue covariance information offer a number of advantages, such as robustness and interpretability. These heuristics offered a sensible starting point compared to the usual way of defining for most ridge regression applications (i.e., as the zero vector).
Here we have presented three different types of applications in over twenty different datasets, germane to the fields of decision making, fMRI analysis, and statistical modeling. We have validated the utility of using heuristics like TAL and TTB to construct , as well as using other algorithms which lack a normative foundation and parallel the operation of heuristics -like LSS in the case of fMRI time series modeling.
Three main benefits of no-covariance priors are worth highlighting. First, predictions using a no-covariance prior are likely to provide at least as good, if not better, predictions than a vector of zeros as coefficients. Examples of the TTB-prior model outperforming other models are seen in Fig. 3A and in the lower RMSE values obtained in Application III.
Second, catastrophic failure of the model is avoided for extremely high values of , whereas in normal ridge regression, convergence to the zero vector for high penalty values results in essentially random guessing for the comparison and classification tasks presented here. Convergence toward very small weights may also create implementation issues on digital computers which have limited precision. For example, differences in how floating point numbers are represented in supporting software libraries could reduce the reproducibility of results.
Third, this class of priors has theoretical significance. On the one hand, the model class introduced here further integrates the notions of heuristic decision making and full information algorithms along one continuum of models (as in (Parpart et al., 2018)). Choosing heuristic priors that contrast compensatoriness of the environment, like TAL and TTB do, helps us interpret both the solutions of our models and the environment itself in an easier way than is possible with OLS or the Zero prior model. Likewise, the solution of the encompassing model could be understood in terms of deviations from the heuristic prior. Other informative comparisons could be made to the OLS solution, including how it diverges from the heuristic prior. Finally, our framework provides a way to simulate fMRI data with LSS weights, previously not possible due to the arbitrariness of defining weights for the LSS nuisance variables (see Application III).
The theoretical contribution of this model class is worth emphasizing since it also provides a lens on why heuristics are useful in the first place. The priors offered by heuristics confer robustness; unlike the Zero prior, they embody a sensible inductive bias. This dovetails with why heuristics can operate defeasibly. Speculatively, humans and other cognitive agents may have evolved to implement these priors as a rule. Like Occam's razor, humans also show bias toward simple solutions for many decisionmaking tasks (Gigerenzer, Todd, & TAR, 1999;Kahneman et al., 1974). With expertise (i.e., acquiring more data), the solutions can change (Hornsby & Love, 2014), but initially, very general strategies like assuming independence among covariates have been documented (Bröder, 2000;Gigerenzer & Goldstein, 1996). Of course, this is only one notion of expertise. Other notions could include less effort during inference or rule application, finding appropriate features of a domain and ease of searching for new strategies or creating new ones. Furthermore, experts are not even guaranteed to perform better than statistical techniques (cf. (Meehl, 1954)).
Instead of being all-or-none, heuristic use may move along a continuum (Newell, 2005) as a function of prior strength and experience. Indeed, heuristic use in human decision making is not without its caveats (Newell, Weston, & Shanks, 2003), as is their supposed frugality (Bobadilla-Suarez & Love, 2018;Dougherty, Franco-Watkins, & Thomas, 2008). What is clear is that no heuristic will be best in all environments (cf. no free lunch theorem). Instead, each heuristic is best suited to certain environments and can be seen as embodying a prior that reflects beliefs about the environment.
Of course, this non-universality raises the critical question of how does one choose which heuristic to use? This question closely mirrors the inductive challenge of choosing a prior for a Bayesian model. A general solution to choosing the best heuristic is computationally intractable (Rich et al., 2021), though effective solutions have been offered (Rieskamp & Otto, 2006;Scheibehenne, Rieskamp, & Wagenmakers, 2013). Intuitively, if one believed, for whatever reason, that an environment was governed by numerous additive factors, then a heuristic like TAL would be a good strategy to adopt. The problem of strategy selection closely relates to the problem of meta-learning, or learning to learn, in which one determines how to choose hyperparameters, architectures, general strategies, etc. that will perform well in a task (Schweighofer & Doya, 2003). With enough data one can test which heuristic performs best on a sub-sample. However, in the low data regime this might not be possible. Our results show the differences between TTB and TAL used as a prior may not be too significant but future work should explore this angle.
With reference to models of human decision making, this class of algorithms has further potential. Referring back to the roots of regularized regression, Tikhonov (1943) initially constructed this type of regularization in a more general form: where has been replaced with a matrix . This enables the implementation of different penalty values for different directions in weight space. Admittedly, choosing would require knowledge of the data. Our results suggest there might be some advantage in this kind of stepwise approach, where one model's output provides another model's prior. From a psychological point of view, this would enable modeling attention through the scaling of dimensions (Nosofsky, 1986). Although empirical studies show humans usually employ attention solely along individual dimensions (i.e., the diagonal of (Jones, Love, & Maddox, 2005;Kruschke, 1993)), other applications (like our fMRI example) could benefit from this generality (Bobadilla-Suarez, Ahlheim, Mehrotra, Panos, & Love, 2020). Generalizations of such regression algorithms include adding a matrix that puts weights on observations themselves (van Wieringen, 2015) or even using heuristic regularizers for more complex models like neural networks. As in all modeling endeavors, the researcher should make clear how the model is intended (Jones & Love, 2011). For instance, a penalized regression approach could be proposed and evaluated as a normative account of what should be done, a high-level description of what people actually do, or an algorithmic account of the processes people engage in. We suggest further work on expertise (e.g., transitioning from novice to expert) could engage with any of the mentioned modeling strategies.
Furthermore, the models presented here provide only point estimates of̂, but there is also no obstacle in expanding them to the Bayesian setting to obtain the full posterior distribution as well. In fact, it is well known that ridge regression, like LASSO regression, has a Bayesian interpretation (Friedman, Hastie, & Tibshirani, 2001;Parpart et al., 2018;Tibshirani, 1996). Our twostep approach engages in a double counting of the data (cf. (Zou, 2006)) which could suffer from bias and undue confidence in predictions. A Bayesian formulation could address this potential issue, providing new insights on why our two-step approach works and in which environments. This exciting future direction could expand the reach of our approach by placing it on a normative footing, enabling inquiry into the models' confidence.
In conclusion, we find that priors motivated by decision heuristics are valuable both methodologically and theoretically. Assuming independence among predictor variables offers a reasonable prior or starting point in most situations. These priors are themselves data-informed models that perform robustly when the penalty value (i.e., prior) is overly strong. Although ridge regression may not routinely suffer from extreme penalty values in practice, use of the TAL and TTB priors do not appear to have any significant downside and may be judged a more sensible choice and perhaps more akin to how people learn than ridge regression's null vector prior. Linking insights across fields as disparate as decision making and advanced methodologies for fMRI data analysis, we are confident that these robust priors for regularized regression will find even further utility in other fields, surpassing the theoretical contributions that we have hinted at here.
CRediT authorship contribution statement
Sebastian Bobadilla-Suarez: Coded all the analyses and simulations, derived the objective function for Equation 16 along with its Newton-Raphson estimate, Wrote the initial draft, Interpreted the results and provided critical comments on the manuscript. Matt Jones: Interpreted the results and provided critical comments on the manuscript. Bradley C. Love: Developed the study concept and derived the objective function for Equation 1, Interpreted the results and provided critical comments on the manuscript. | 9,791.6 | 2020-10-06T00:00:00.000 | [
"Computer Science",
"Mathematics"
] |
A novel feature of the ancient organ: A possible involvement of the subcommissural organ in neurogenic/gliogenic potential in the adult brain
The subcommissural organ (SCO) is a circumventricular organ highly conserved in vertebrates from Cyclostomata such as lamprey to mammals including human. The SCO locates in the boundary between the third ventricle and the entrance of the aqueduct of Sylvius. The SCO functions as a secretory organ producing a variety of proteins such as SCO-spondin, transthyretin, and basic fibroblast growth factor (FGF) into the cerebrospinal fluid (CSF). A significant contribution of the SCO has been thought to maintain the homeostasis of CSF dynamics. However, evidence has shown a possible role of SCO on neurogenesis in the adult brain. This review highlights specific features of the SCO related to adult neurogenesis, suggested by the progress of understanding SCO functions. We begin with a brief history of the SCO discovery and continue to structural features, gene expression, and a possible role in adult neurogenesis suggested by the SCO transplant experiment.
Introduction
The subcommissural organ (SCO) is a circumventricular organ with a long history but is still enigmatic (Oksche et al., 1993;Meiniel et al., 1996;Rodriguez and Yulis, 2001;Kiecker, 2018). From Cyclostomata such as lamprey to mammals, the SCO is substantially conserved among vertebrates. The SCO is located at the boundary between the third ventricle and the entrance to the aqueduct of Sylvius (Figures 1A, B; Duvernoy and Risold, 2007;Kiecker, 2018;Corales et al., 2022). Its importance to maintaining the homeostasis of cerebrospinal fluid (CSF) dynamics has drawn attention to its functions (Perez-Figares et al., 2001;Guerra et al., 2015). The SCO has been hypothesized to play a role in neurogenesis (Guerra et al., 2015), although extensive research on dynamics of CSF and neuropathology of hydrocephalus has been conducted. Recent advancement in our understanding of the function of the SCO have led us to hypothesize that the SCO possesses features relevant to adult neurogenesis. In this review, we overview how the SCO is discovered before discussing its structural characteristics, gene expression, and discuss a potential involvement in adult neurogenesis suggested by the SCO transplant experiment.
A brief history of the SCO The first clear description of the SCO appears in the 1900's (Dendy, 1902). In an anatomical study on ammocoetes, a New Zealand Lamprey (Geotria australis), the SCO was described as "a pair of ciliated grooves" (Figure 1C), which in earlier study was referred as the epithelial layer (Edinger, 1892;Studnicka, 1900). The epithelium of the SCO is distinct from those in other brain regions due to its cylindrical structure. Dendy described it as follows; "They are most conspicuous beneath the commissure itself (figs. 1, 2), in which region they are lined by a sharply defined epithelium of very long columnar cells, totally different in appearance from the epithelium which lines the remainder of the brain-cavity." (Dendy, 1902). The SCO function was speculated, at that time, as making the circulation of the brain fluid due to its ciliated form and location. The term "Sub-Commissural Organ" was first proposed by Dendy and Nicholls in Dendy and Nicholls (1910). The same report also mentioned that the SCO exists in higher vertebrates such as mice, cats, and chimpanzees. At this time, "a pair of ciliated grooves" or "the epithelial layer beneath posterior commissure" was established as the "Sub-Commissural Organ."
Structural feature of the SCO
The SCO surface is sparsely ciliated and covered with microvilli, contrasting with the other ependymal areas, which are composed with multiciliated cells (Collins and Woollam, 1979;Rodriguez et al., 1998). The SCO consists of an inverted U-shape ependymal layer(s) lining the third ventricular side of the posterior commissure in the coronal section (Collins and Woollam, 1979;Rodriguez et al., 1998;Figures 2A, B). Later, it was proposed that the SCO is composed of two layers, ependyma and hypendyma, in vertebrates (Rodriguez et al., 1984). A small number of hypendymal cells are observed in amphibia and reptiles such as frogs, lizards, and snakes. In contrast, the hypendymal layer is more distinct in larger mammals such as bovines and primates, but its existence is species-dependent (Rodriguez et al., 1984). It appears that the presence of hypendymal cells is species-specific and be uncertain in animals with smaller brains. For example, a hypendymal layer is observed in rats but not in mice ( Figure 2C; Corales et al., 2022).
As stated in the previous section, the SCO has been considered a secretory organ due to its cylindrical structure ( Figure 1D). The SCO cells have an elongated shape, compared to typical ependymal cells, a large nucleus at the basal side adjacent to the posterior commissure (PC), the endoplasmic reticulum and the Golgi apparatus containing secretary molecules such as SCOspondin and transthyretin (TTR). The apical pole of the SCO cells is exposed to the ventricular cavity. Apparent zonulae adherence is observed to connect adjacent cells. SCO-spondin secreted from the apical side forms the Reisner's fiber (RF) (Meiniel, 2001). This unique structure is also confirmed by a study using electron microscopy in detail (Meiniel, 2007).
Gene expression related to brain development in the SCO cells
Our knowledge on molecules expressed in the SCO is limited because of its restricted region, although a study reported a systematic DNA-chip analysis of circumventricular organs including SCO (Szathmari et al., 2013).
Secretory proteins
Due to their feature as secretory cells, two major secretory proteins, SCO-spondin and TTR, have been well studied in the SCO (Gobron et al., 1996;Montecinos et al., 2005).
Transthyretin is a carrier for thyroid hormones in CSF (Alshehri et al., 2015;Richardson et al., 2015). It has previously been considered that only the choroid plexus produces TTR in the brain, although the SCO is shown to secrete TTR into the CSF (Montecinos et al., 2005). TTR might contribute to adult neurogenesis by regulating thyroid hormone homeostasis (Kapoor et al., 2015) since adult neural stem cell cycling in vivo requires thyroid hormone and its alpha receptor (Lemkine et al., 2005). Cell division and apoptosis are affected at the neural stem cell niche in the TTR null mice (Richardson et al., 2007).
The adult rat SCO shows strong expression of FGF2 (also known as a basic fibroblast growth factor, b-FGF) (Cuevas et al., 1996). FGF2 is critical in maintaining adult neurogenesis in the neurogenic niches (Mudo et al., 2009;Woodbury and Ikezu, 2014). The SCO also expresses Wnt1, a secretory glycoprotein critical for morphogenesis during brain development. Mutation in Wnt1 caused abnormal differentiation of the SCO in the mouse embryo Frontiers in Neuroscience 02 frontiersin.org ( Louvi and Wassef, 2000). Wnt1 expression is observed in the SCO at the RNA level based on Allen Mouse Brain Atlas. 1 Wnt signaling pathway has been reported to regulate neural differentiation from neural stem/progenitor cell (NSPCs) in early embryonic brain development (Hirabayashi et al., 2004;Machon et al., 2007;Munji et al., 2011;Inestrosa and Varela-Nallar, 2015). These secretory proteins from the SCO could contribute to neurogenesis in the embryonic brain and adult neurogenesis (discussed later). In addition, these secretory proteins could also 1 https://mouse.brain-map.org/experiment/show/112644522 Histological structure of the SCO. (A) The SCO region of mouse brain. The SCO is composed of an ependymal cell layer(s) with an inverted U-shape lining the third ventricular side of the posterior commissure (Allen Brain Atlas: Mouse Brain, https://atlas.brain-map.org/). (B) The SCO region of human brain (BrainSpan Atlas of the Developing Human Brain, https://atlas.brain-map.org/). The SCO regions are surrounded by red dotted lines. (C) Immunostaining of SCO-spondin in the SCO region of the adult mouse and rat brain. The SCO-spondin staining pattern in the adult rat SCO shows a weak staining pattern in the nuclear region compared to the adult mouse SCO allowing better visualization of hypendymal cells between the ependymal layer and the posterior commissure. Arrows indicate hypendymal cells. Scale bars: 50 µm (Corales et al., 2022). 3V, third ventricle; Ep, ependymal cells; NR, nuclear region; PC, posterior commissure.
contribute to maintaining the SCO functions in an autocrine manner since the FGF2 receptor is expressed in the SCO (Szathmari et al., 2013).
The SCO development is consistent during embryonic stages among species, but its maintenance appears species dependent. For example, the SCO begins to differentiate at embryonic day 12.5 (E12.5) in mice, well developed by E16.5 (Estivill-Torrús et al., 2001), and appears to be maintained through postnatal stages (Corales et al., 2022). On the other hand, the SCO is well developed at embryonic stages (3-to 5-month-old fetuses) but gradually regresses the SCO structure after 5-month-old fetuses . The SCO cells are significantly reduced in 1-yearold infants and lose their secretory features in the adult stages .
Transcription factors
It has been reported that several transcription factors are critical for the formation of the SCO. Ectopic expression of Engrailed 1 (En1), a homeobox transcription factor related to Wnt1 signaling, interferes with the differentiation of circumventricular organs, including the SCO (Louvi and Wassef, 2000). En1 expression is also confirmed in the SCO at the RNA level based on Allen Brain Atlas: Mouse Brain. 2 Deficiency in the function of Pax6, an essential transcription factor for neurogenesis in the embryonic and adult brain (Osumi et al., 2008), results in impairment of the SCO differentiation in early brain development (Estivill-Torrús et al., 2001). The SCO's structural and functional features as a secretory organ are completely lacking in the Pax6 Sey/Sey mutant mouse, suggesting that Pax6 is critical for SCO formation. Msx1, a homeodomain transcription factor, is expressed in the SCO of the mouse embryonic brain, and a mutation in Msx1 gene causes a defect in the SCO formation (Bach et al., 2003;Ramos et al., 2004). However, since these transcription factors are related to regionalization in early brain development, it remains unknown whether abnormality in the SCO might be directly caused by defects in these transcription factors or secondarily induced by compartmentation deficits.
Expressions of RFX3 and RFX4, members of the regulatory factor X gene family related to ciliogenesis, are also reported in the SCO. Their mutation or misexpression causes a severe hydrocephalus phenotype in mice, possibly by malformation of the SCO (Blackshear et al., 2003;Baas et al., 2006;Xu et al., 2018). Functional expression of CREB was also confirmed in the isolated SCO cells in an aspect to the cAMP-PKA pathway (Nurnberger and Schoniger, 2001;Schoniger et al., 2002).
As mentioned above, the secretory proteins from SCO could be associated with several signaling pathways, such as integrin, Wnt/β-catenin, and Notch signaling pathways ( Figure 3B). Recent studies suggested that these signaling pathways are involved in multiciliated cell differentiation or hydrocephalus (Failler et al., 2021;Lewis and Stracker, 2021;Liu et al., 2023), emphasizing critical SCO roles on ciliogenesis and hydrocephalus caused by its malfunction.
Proliferation and differentiation potential of SCO cells: Neurogenic/gliogenic and embryonic/adult A significant role of the SCO is maintaining CSF homeostasis by forming RF, which consists of SCO-spondin. However, several lines of evidence suggest its contribution to adult neurogenesis. Recently, we have reported that the SCO cells have a unique feature as immature neuroepithelial cells in the adult mouse brain (Corales et al., 2022). The SCO cells in the adult brain expressed known NSPC markers, i.e., Pax6, Sox2, and vimentin, and a proliferating marker, PCNA (Figure 4). Neither expression of another proliferation marker Ki67, indicating a G2/M phase, nor incorporation of BrdU, an indicator for DNA synthesis in the S phase, are undetectable, suggesting that the SCO cells have a potential for proliferation but are quiescent for cell division in the adult. The SCO cells also express other neuroepithelial cell markers, 2 https://mouse.brain-map.org/gene/show/13576 such as Nestin, as well as Notch 1, Hes1 and Hes3, Occludin, E-cadherin, MSI-1, Sox9, and BMI-1 at the RNA level based on Allen Brain Atlas. 3 These data demonstrate that the adult SCO cells maintain neuroepithelial cell characteristics, suggesting that the SCO is a possible adult neural stem cell niche.
The quiescent SCO might express its neurogenic activity by appropriate stimuli. For example, tanycytes in the adult hypothalamus are a subtype of ependymal cells with a long radial process . It has been shown that subpopulation of the tanycytes contain stimuli-responsive NSPCs (Lee et al., 2012;Cheng, 2013;Robins et al., 2013;Maggi et al., 2014). A recent study has reported that tanycyte-like ependymal cells in circumventricular organs (CVOs) and the central canal (CC) show a neural stem cell-like phenotype in the adult mouse brain (Furube et al., 2020). In the study, tamoxifen-induced EGFP labeling under the control of Nestin-CreERT2 transgene has identified NSPCs and shown that the EGFP-labeled ependymal cells distribute in the organum vasculosum laminae terminalis (OVLT), subfornical organ (SFO), CC, and the arcuate nucleus (Arc) of the hypothalamus. Furthermore, EGFP-labeled ependymal cells increased by stimulation with FGF-2/EFG (Furube et al., 2020). The EGFP-labeled tanycyte-like ependymal cells of the OVLT and SFO express both GFAP and Sox2 but not Pax6, while the cells of the CC express those three marker proteins (Furube et al., 2020), which is similar to our results in the SCO (Corales et al., 2022). However, no EGFP-labeled ependymal cells in the SCO were mentioned in the study (Furube et al., 2020). Possibly, the SCO might be activated by the other factors than FGF-2/EFG. In addition, SCO cells are distinct from the tanycytes by structural features. The SCO cells have primary cilia while tanycytes or tanycyte-like cells are unciliated or unciliated/bi-ciliated, respectively (Langlet et al., 2013;Mirzadeh et al., 2017). The cytoplasm of the tanycyte often shows smooth endoplasmic reticulum (Wittkowski, 1998), while the cytoplasm of SCO cells are filled with rough endoplasmic reticulum , consistent with their secretory function. Therefore, even though the SCO cells and tanycytes share a part of NSPC marker expression, they are different subtypes of ependymal cells.
Although the SCO cells share many properties with NSPCs in the gene expression, it is difficult to clarify whether they could express proliferation activity and whether they are neurogenic RG-like cells and/or gliogenic progenitors. At this point, what kind of cells could be produced from the SCO remains unknown. Considering its location near the posterior commissure and the existence of oligodendrocyte precursor cells (OPCs) at the periphery of the SCO (Corales et al., 2022), the SCO might be a niche for oligogenesis rather than neurogenesis.
Another possible involvement of the SCO in adult neurogenesis is obtained by transplantation of SCO cells into a lateral ventricle. In the study by Rodríguez et al., tissue blocks containing the SCO and the PC were transplanted into the left lateral ventricle in 2-3 months-old Sprague-Dawley rats (Rodriguez et al., 1999). It is observed that the grafted SCOs keep an ultrastructure similar to those of the SCO in situ and have the ability of production and secretion of SCO-spondin, resulting in the RF in the explanted lateral ventricle. The transplantation using bovine Structure of SCO-spondin and its downstream signaling pathways. (A) A cartoon of SCO-spondin structure. EMI, elastin microfibril interface; vWF-D, von Willebrand factor type-D; FA5/8C, Factor V/Factor VIII type C; LDLrA, low-density lipoprotein receptor class A; TIL, trypsin inhibitor-like; TSR thrombospondin type I repeat; vWF-C, von Willebrand factor type-C; EGF, epidermal growth factor; CTCK, C-terminal cystine knot-like. (B) Possible interactions of domains contained in SCO-spondin with soluble factors. FGF2, fibroblast growth factor 2; TGF-β, transforming growth factor-β; VEGF, vascular endothelial growth factor; HSPG, heparan sulfate proteoglycan; LPR, LDLr-related protein. Expression of NSPC markers in the SCO. Immunostaining of NSPC markers in the mouse SCO (Corales et al., 2022).
SCO explants also support that the secretory active SCO can induce cell proliferation. The bovine SCO cultured in vitro for a few weeks express and secrete the SCO-spondin and TTR into the culture medium (Schobitz et al., 2001;Montecinos et al., 2005). Xenografts of bovine SCO explants into a lateral ventricle of rats indeed promote cell proliferation in the ipsilateral than contralateral SVZ niche (Guerra et al., 2015), suggesting that the SCO might contribute to the proliferation of NSPCs and possibly leading to neurogenesis through secreted factors CSF circulation in the adult brain.
One possible interpretation for the increased PCNA positive cells by the SCO transplant is that neuroprotective factors secreted by the grafted SCO might enhance or maintain cell proliferation, as suggested in the previous study where the SCO secretes SCO-spondin or TTR (Schobitz et al., 2001;Montecinos et al., 2005). This possibility can be supported by the contribution of the SCO to the early embryonic brain development. Especially, the SCO has been reported to show its contribution to neurogenesis during embryogenesis. In the embryonic stages, the SCO seems to regulate cell proliferation and neuronal differentiation through the secretion of the SCO-spondin (El-Bitar et al., 2001;Vera et al., 2013Vera et al., , 2015. SCO-spondin knockdown experiments using chick embryos have demonstrated that the protein released into embryonic CSF is required for neurogenesis and regulation of neuroepithelial cell proliferation/neuronal differentiation (Vera et al., 2013). A subsequent study showed that low-density lipoprotein (LDL) and SCO-spondin form a complex and that this interaction is essential in modulating the neuroepithelium differentiation generated by both molecules (Vera et al., 2015). However, in ovo inhibition of SCO-spondin using shRNA in the chick embryo has reduced neuronal cell number and increased PCNA-positive cells (Vera et al., 2013). A similar result was obtained by in vitro culture with the explanted optic tecta in the SCO explant conditioned medium and SCO-spondin-depleted embryonic CSF (Vera et al., 2015).
These results are inconsistent with those from SCO transplantation experiments. Cellular response to the SCO-spondin might be different between embryonic and adult brains, or among species.
Conclusion
In this review, we highlighted a possible role of the SCO in adult neurogenesis: as a neurogenesis/gliogenesis niche or a neurogenesis/gliogenesis regulatory region through secretory factors. Our knowledge about functions of the SCO in adult neurogenesis remains limited due to lacking SCO specific conditional knockout animals. Only a few studies related to functional disruption of the SCO, especially with SCO-spondin deficient animals, mainly focused on the relationship between the RF formation, CSF flow, and hydrocephalus (Perez-Figares et al., 2001;Sepulveda et al., 2021). For example, immunological blockage of the SCO function was able to induce a hydrocephalus phenotype in adult rats (Vio et al., 2000). Recently, a series of studies using zebrafish showed that a mutation in the SCO-spondin gene (sspo) causes a phenotype with an abnormal ventral curvature of the body axis and idiopathic scoliosis (Cantaut-Belarif et al., 2018;Lu et al., 2020;Rose et al., 2020). However, the effect of SCO disruption on adult neurogenesis has not been investigated. There are still enigmas on the SCO function on CSF homeostasis and neurogenesis. Comprehensive transcriptome analysis of genes expressed in the SCO and systematic analysis using SCO-specific conditional knockout animals of the related genes would be essential to elucidate the SCO contribution to adult neurogenesis.
Author contributions
HI was involved in writing the first manuscript. HI, LC, and NO contributed to the manuscript revision, reviewed, and approved the submitted version of the manuscript. All authors contributed to the article and approved the submitted version.
Funding
This work was supported by the Grants-in-Aid for Scientific Research (C) (17K08486) and (B) (19H03318) to HI and NO, respectively, from the Ministry of Education, Culture, Sports, Science and Technology of Japan (MEXT). | 4,419.8 | 2023-03-07T00:00:00.000 | [
"Biology"
] |
Hyphenated LC-ICP-MS/ESI-MS Identification of Halogenated Metabolites in South African Marine Ascidian Extracts
Extracts of 13 species of marine ascidian collected in Algoa Bay were analyzed by LC-ICP-MS/ESI-MS. This technique allows parallel analysis of the molecular species and the presence of certain elements. The LC-ICP-MS/ESI-MS technique was used to target iodinated metabolites in this study. Three ascidian species afforded the known 3,5–diiodo-4-methoxyphenethylamine (12), which was confirmed by the isolation of this metabolite from Aplidium monile. MS also suggested the presence of the known 3,5–dibromo-4-methoxyphenethylamine (10) and the new 3-bromo-5–iodo-4-methoxyphenethylamine (11) in the A. monile extract. The presence of the known 3,5-dibromotetramethyltyrosine (21) and the new 3-iodotetramethyltyrosine (23) in extracts of an unidentified Didemnum species was similarly proposed from MS evidence. This is the first report of the occurrence of iodinated metabolites in South African marine invertebrates.
As part of an ongoing search for new halogenated metabolites from the diverse ascidian populations of Algoa Bay, South Africa, we have expanded our search to target naturally occurring iodinated metabolites.Iodide and iodate ion concentrations in the ocean (c.60 ppb) significantly exceed those in fresh water (c.0.03-6 ppb). 3,4Of the 182 known iodinated secondary metabolites reported in a recent review of naturally occurring organoiodine compounds, more than 80 % are marine. 5Given that over 5000 halogenated metabolites have been isolated from natural sources, organoiodines can be considered to be rare in nature. 5he first naturally occurring organoiodine to be identified, 'jodgorgosaure' (3,5-diiodotyrosine, 8), was originally isolated in 1896 from the marine octocoral (sea fan), Gorgonia cavolinii. 6,5-Diiodotyrosine and its mono-iodo analogue, 3-iodotyrosine (9), have also been reported from other eukaryotic marine phyla, e.g.kelp (order Laminariales). 4 Kelp species can accumulate iodide in concentrations 300 000 times higher than the iodide concentration in the surrounding seawater, 4 and as much as 10 % of the iodine sequestered by kelp is incorporated into the organoiodines, 8 and 9. 5 A possible endocrine hormone role for 8 and 9 has been postulated in which these two compounds may mediate cell to cell communication in algae and control developmental processes in other eukaryote species.5,7 Rapid identification of iodinated metabolites, and their brominated congeners, in Algoa Bay ascidian extracts was facilitated by access to the hyphenated LC-ICP-MS/ESI-MS facilities at the Marine Biodiscovery Centre at the University of Aberdeen.The University of Aberdeen hyphenated LC-ICP-MS/ESI-MS facility has successfully been used to, inter alia, explore the distribution of naturally occurring organoarsenic compounds, [8][9][10][11] metal chelated ascidian metabolites, [12][13][14][15][16] and recently identifying new secondary metabolites containing heteroatoms, e.g.iodinated metabolites in marine extracts.17 Hyphenated LC-ICP-MS/ESI-MS, in which a high performance liquid inductively coupled plasma (ICP) and high resolution electrospray ionization (ESI) mass spectrometers arranged in parallel (Fig. 2), provides an opportunity to simultaneously acquire elemental and molecular information for the individual peaks separated by HPLC.High-resolution ESI-MS data then reveals possible molecular ions of the separated organic compounds from which molecular formulas can be predicted.In a marine bioprospecting context this technique can afford both the rapid de-replication of known compounds and the discovery of new secondary metabolites.
RESEARCH ARTICLE
C.L. Bromley
General
LC-ICP-MS/ESI-MS analyses were carried out using an Agilent 1100 series HPLC with ESI-MS performed on a Thermo Orbitrap mass spectrometer whilst ICP-MS was performed on an Agilent 8800 triple quad ICP-MS, with micro-flow PFA nebulizer and Pt cones, and 7 % O 2 reaction gas.Chromatography for LC-ICP-MS/ESI-MS analyses were carried out by injecting 100 µL of sample onto an analytical C18 Waters SunFire column with a solvent flow rate of 1 mL min -1 and gradient profile of 100 % H 2 O to 100 % MeOH over 20 min.HPLC solvents which were used for chromatography were made up with 0.1 % formic acid (FA).
Collection of Ascidian Material
The ascidian Aplidium monile Monniot, F., 2001 18 Aplousobranchia: Polyclinidae was collected by SCUBA from a depth of 12-15 m at Bell Buoy Reef, Algoa Bay, South Africa (33.9833°S, 25.6987°E), on 18 November 2011 and given the voucher code TIC2011-032.Polycitor sp.(suborder Aplousobranchia, family Polycitoridae) was collected by SCUBA at a depth of 21 m from Haarlem Reef, Algoa Bay, South Africa (33.9889°S, 25.6984°E), on 23 July 2004 and given the voucher code SAF2004-068.Leptoclinides sp. was collected by SCUBA from the White Sands Reef, Algoa Bay, South Africa, (33.9961°S, 25.7072°E), from a depth of 21 m, on 13 July 2004 and given the sample code SAF2004-015.Didemnum sp.2 (suborder Aplusobranchia, family Didemnidae) was collected by SCUBA at a depth of 18 m at White Sands Reef, Algoa Bay, South Africa (33.9986°S, 25.7096°E), on 20 July 2004 and given the sample code SAF2004-61.After collection in the field all ascidian samples were carefully separated, cleaned of epibionts and frozen immediately as whole specimens of individual species and kept at -20 °C until extracted.
Extraction of Frozen Ascidian Samples for LC-ICP-MS/ESI-MS Analysis
All glassware and laboratory equipment used in the preparation of extracts for LC-ICP-MS/ESI-MS screening were acid washed using 10 % HNO 3 and subsequently rinsed using MilliPore ® water.HPLC grade solvents were used and to prevent contamination.Extracts of internal portions of each species of ascidian (~15 g wet mass) were made using HPLC grade MeOH and CH 2 Cl 2 .Extractions were carried out in the normal way except that all glassware and tools used were acid washed and an iced water bath was used when sonicating the material.The crude ascidian extracts were stored at -20 °C until LC-ICP-MS/ESI-MS screening.Aliquots of methanol solutions of the crude organic extracts were used for the LC-ICP-MS/ESI-MS speciation studies.
Results and Discussion
Representative specimens of 13 ascidian species from five different families (Clavelinidae, Didemnidae, Holozoidae, Polyclinidae and Polycitoridae), belonging to the suborder Aplousobranchia, were collected by SCUBA from Algoa Bay over the period 2004-2011, carefully separated from any epibionts and frozen separately as whole specimens of individual species (-20 °C) immediately after collection.Portions of the frozen material were carefully extracted with methanol and dichloromethane for LC-ICP-MS/ESI-MS analysis following an established protocol 16 that minimized the possibility of metal ion cross-contamination.The mass spectrometry data revealed that four of the ascidian extracts: Aplidium monile, Polycitor sp., Leptoclinides sp., and Didemnum sp. 2 (Fig. 3) contained both iodinated and brominated metabolites.Iodinated metabolites have not previously been reported from South African marine invertebrates.
The ICP-MS extracted ion chromatograms (EIC), selected for the 127 I and 79 Br isotopes, together with the matching HRESI EICs and the corresponding mass spectra for the methanolic extract from A. monile are presented in Fig. 4. The isotopic ratios of the pseudo molecular ion (M+H) peaks in the ESI mass spectra of 10 (1:2:1) and 11 (1:1) suggested di-and monobromination in these two compounds, respectively.Mass fragmentation, consistent with the loss of 127 I, in the mass spectra of 11 and 12 corroborated the presence of iodine in these compounds.The loss of 17 atomic mass units (M+H-NH 3 ) from the pseudomolecular ions in the ESI mass spectra of 10-12 indicated the probable presence of a common amino functionality in these compounds.
Partitioning of the A. monile extract between 70 % aqueous methanol and dichloromethane, followed by further fractionation of the methanol partition fraction on a C18 SepPak ® cartridge afforded 12 as the only metabolite in the 40 % aqueous acetonitrile fraction.The 1 H and 13 C NMR data of 12 were consistent with published data for this compound.Further exhaustive semi-preparative HPLC of the C18 SepPak ® fractions failed to yield either 10 or 11 suggesting that these two compounds may be present in trace amounts in the ascidian.The detection of very low concentrations of 10 and 11 in the presence of the major metabolite 12 further highlights the sensitivity and value of the LC-ICP-MS/ESI-MS technique.
Three decades ago Ireland and Sesin reported the isolation of 12 together with its urea derivative 13 from an unidentified species of Didemnum ascidian. 19This was the first isolation of these two compounds from a natural source.Compound 11 was later isolated as the major compound in two ascidians, an Indonesian Didemnum sp. and Palauan specimens of D. rubeum. 23,24ore recently the chemistry of D. rubeum was revisited and an expanded series of iodinated tyramine derivatives were isolated from this ascidian that included 12 and 13 and six new analogues 14-19. 20While we report here the first isolation of 12 from an African ascidian, an Axinella sponge collected off the coast of Ghana recently afforded the related iodotyramine analogue dakaramine (20). 17ompound 10 appears to be less common, in the marine environment, than its iodo congener.The only previously reported isolation of 10 was by Ireland and co-workers 25 from the Indonesian ascidian Eudistoma sp..6][27] In a biological evaluation of a series of synthetic bromotyramines Schoenfeld et al. found 10 exibited potent antifouling and cytotoxic properties. 28he C3 monohalogenation and C3, C5 dihalogenation of the phenyl ring in halogenated marine tyramine and tyrosine analogues appears ubiquitous, and without exception (Fig. 1), thus suggesting that other possible regioisomers within this cohort of halogenated natural products are unlikely.Biosynthetic arguments were therefore used to support the C3, C5 dihalogenation pattern proposed for 10 and 11.Interestingly, albeit speculative from high-resolution mass data alone, this is the first report of an asicidian yielding 10 and 12 and the previously unreported 11.Similar hyphenated LC-ICP-MS/ESI-MS examination of the extracts of the Polycitor sp. and Leptoclinides sp. (Fig. 3b,c) also revealed the presence of 12 in these extracts suggesting that this metabolite is relatively common in Algoa Bay ascidian species with 25 % of the small cohort of 13 ascidians screened in this study containing this compound.Although the LC-ICP-MS/ESI-MS mass data suggested the presence of further iodinated and brominated metabolites in both the Polycitor and Leptoclinides extracts, the structures of these compounds could not be resolved from the mass data.The paucity of ascidian material in hand prevented their isolation and identification by other spectroscopic techniques.
The ICP-MS EICs, selected for the 127 I and 79 Br isotopes, together with the matching HRESI EICs and the corresponding mass spectra of selected peaks from LC-ICP-MS/ESI-MS of the methanolic extract from Didemnum sp 2 are presented in Fig. 5. 26 and was found to be inactive in both anti-fouling and anti-parasitic bioassays. 30,31he iodine ( 127 I) ICP-MS EIC (Fig. 5a) revealed two major peaks M + precursor ion (m/z 304.9661,M + -N(CH 3 ) 3 ; m/z 258.9614,M + -HCOOH-N(CH 3 ) 3 ; m/z 178.0624, M + -I-N(CH 3 ) 3 ).A search of the chemical literature 21,22 revealed that 23 has not been previously reported from nature.Regrettably, the paucity of Didemnum sp. 2 available precluded the chromatographic isolation of 21 and 23 for further spectroscopic analysis.
Conclusion
This preliminary survey of the distribution of halogenated metabolites in a small subset of the ascidian fauna in Algoa Bay South Africa suggests that iodinated and brominated tyrosines and tyramines may be relatively common in aplousobranch ascidians.The potential of the LC-ICP-MS/ESI-MS technique to detect these metabolites in trace amounts is clearly apparent.
Figure 2
Figure 2 Schematic diagram of the hyphenated LC-ICP-MS/ESI-MS technique.Circa 85 % of the HPLC eluent is diverted to the ESIMS and the remainder to the ICPMS.
(TR 8 .
64 and 13.99 min).Unfortunately, no ESI mass spectrum was observed at T R 13.99 min and the source of this peak in the 127 I EIC is unknown.A molecular formula of C 13 H 19 O 3 NI (M + m/z 364.041Dmmu -0.3) emerged for 23 from the closest fit molecular formula simulation. 29With the putative structure of 21 in hand the 3-iodotetramethyltyrosine structure for 23 was proposed.Further mass spectroscopic evidence in support of the chemical structure of 23 (Fig. 5b) emerged from the fragment ion (m/z 237.0949,M + -127 I) in the HRESI mass spectrum and the product ions from tandem mass spectrometry (MSMS) of the RESEARCH ARTICLE C.L. Bromley, A. Raab, S. Parker-Nance, D.R. Beukes, M. Jaspars and M.T. Davies-Coleman, 115 S. Afr.J. Chem., 2018, 71, 111-117, <http://journals.sabinet.co.za/content/journal/chem/>. | 2,751 | 2018-01-01T00:00:00.000 | [
"Environmental Science",
"Chemistry"
] |
Optimization of Intelligent English Pronunciation Training System Based on Android Platform
Oral English, as a language tool, is not only an important part of English learning but also an essential part. For nonnative English learners, effective and meaningful voice feedback is very important. At present, most of the traditional recognition and error correction systems for oral English training are still in the theoretical stage. At the same time, the corresponding high-end experimental prototype also has the disadvantages of large and complex system. In the speech recognition technology, the traditional speech recognition technology is not perfect in recognition ability and recognition accuracy, and it relies too much on the recognition of speech content, which is easily affected by the noise environment. Based on this, this paper will develop and design a spoken English assistant pronunciation training system based on Android smartphone platform. Based on the in-depth study and analysis of spoken English speech correction algorithm and speech feedback mechanism, this paper proposes a lip motion judgment algorithm based on ultrasonic detection, which is used to assist the traditional speech recognition algorithm in double feedback judgment. In the feedback mechanism of intelligent speech training, a double benchmark scoring mechanism is introduced to comprehensively evaluate the speech of the speech trainer and correct the speaker’s speech in time. The experimental results show that the speech accuracy of the system reaches 85%, which improves the level of oral English trainers to a certain extent.
Introduction
As the most mature language in globalization, English has become an indispensable communication tool in people's daily life. Oral English, as an important part of English learning, its corresponding pronunciation, the language environment of nonnative English-speaking countries, and the teachers of oral English are the factors that restrict the development of oral English [1,2]. With the rapid development of intelligent speech recognition technology, professional machines and corresponding learning software related to oral English training emerge in endlessly. However, such software often has only some single functions, which can only assist oral English practitioners to carry out some simple oral pronunciation training, thus lacking effective feedback on some learners' oral English pronunciation, so its corresponding functions are limited. e training effect is often unsatisfactory [3][4][5]. At the same time, the traditional speech recognition technology itself is vulnerable to the interference of the surrounding environment and other problems. At the same time, the professional spoken English training machine is too complex and huge and it does not have the characteristics of portable, so it is difficult to achieve civil and lightweight [6]. erefore, how to design a piece of intelligent oral English training equipment and design a reasonable and efficient training algorithm is very important and meaningful.
Based on this, a large number of scientists and related research institutions have carried out research on the oral English training algorithm. Traditional oral English training mainly focuses on the research of speech recognition algorithm, which mainly depends on the rapid development of computer technology, artificial intelligence technology, and information and communication technology [7][8][9]. In the continuous development of speech recognition technology, there are mainly speech signal linear prediction coding technology, dynamic time planning adjustment technology, linear prediction cepstrum technology, and dynamic time warping technology [10][11][12]. In the application of speech technology in computer-aided language learning, it is mainly the application of information technology to combine speech recognition technology with oral English training courses, so as to create a real oral English learning environment for oral English learners, and on this basis, in order to promote the virtuous circle of oral English learning, we should increase the corresponding multimedia technology and improve the interest of English learners [13][14][15]. Relevant scientific research institutions have designed different language training algorithms and equipment based on mature speech recognition systems, such as the language assisted learning system of Carnegie Mellon University in the United States, the education assisted learning system of relevant countries in Asia, and the language assisted teaching system of player in relevant European and American countries [16][17][18][19]. Based on the above research, the traditional oral English pronunciation practice relies too much on the realization of certain speech recognition technology, and it also depends on the development of related multimedia technology. Based on this, this paper will develop and design the spoken English assisted pronunciation training system based on Android smartphone platform. On the basis of in-depth research and analysis of spoken English pronunciation correction algorithm and speech feedback mechanism, this paper proposes a lip movement judgment algorithm based on ultrasonic detection, which is used to assist the traditional speech recognition algorithm to make double feedback judgment. In the feedback mechanism of intelligent speech training, a double benchmark scoring mechanism is introduced to evaluate the pronunciation of speech trainers in an all-round way and correct the pronunciation of speakers in time.
e experimental results show that the pronunciation correction rate of the proposed system reaches 85%, which improves the level of oral English trainers to a certain extent.
Based on the above research background and significance analysis, the structure of this paper is as follows: in Section 2, we will analyze the related research literature of intelligent English pronunciation training system and give the disadvantages of the current related system; in Section 3, we will focus on the research and analysis of the lip movement judgment algorithm based on ultrasonic detection under the technology of speech recognition algorithm and give some suggestions. Section 4 will be based on an Android mobile phone development test and analysis of the corresponding experimental results; finally, this paper is summarized in Section 5.
Related Work Analysis: Research and
Analysis of Intelligent Oral English Pronunciation Training e main difficulty of intelligent spoken English pronunciation training system lies in the realization of recognition algorithm and corresponding feedback evaluation mechanism algorithm [20][21][22]. In the aspect of speaker speech recognition algorithm, the main recognition algorithms are focused on the speaker speech recognition algorithm. A large number of researchers and research institutions have studied and analyzed this kind of algorithm. e fluency pronunciation training system designed by researchers from European and American countries adopts the automatic speech recognition technology based on Sphinx engine. e system can correct the errors of syllables and prosody of oral English learners. However, the system points out that the errors of relevant syllables are too limited, and the whole system is too mechanical to analyze the learners' pronunciation based on words. Japanese research institutions have developed an oral English assisted pronunciation training system, which is mainly designed for Asian nonnative English-speaking countries. It has its own limitations. It only accepts nonnative accents and only provides feedback and visualization for the corresponding accurate information. e model used in the system is also a specific model; the relevant research and development institutions in the United States have designed a software development kit for oral English pronunciation training, which can realize the oral evaluation of English speakers and also provide the pronunciation spectrum and the corresponding duration scoring function, but the system depends on professional auxiliary training equipment. It is not portable and light. At the level of computer-aided pronunciation, computer-aided pronunciation assessment technology can make learners know their pronunciation level and ability at any time, so as to learn more targeted and train in the right direction. On the basis of speech evaluation technology based on statistical speech recognition, related research institutions and linguists have studied the core algorithm of speech evaluation, the adaptive method of acoustic model of speech evaluation, the application of duration and speed in speech evaluation, and the scoring mapping model of speech evaluation system, and many reliable algorithms are given.
Analysis and Research on Lip Motion Judgment Algorithm of Ultrasonic Detection Based on Speech Recognition Technology
is section will focus on the analysis of spoken English pronunciation training core algorithm design. is paper focuses on the original speech recognition algorithm based on the addition of ultrasonic detection lip motion judgment algorithm, in order to assist the accurate recognition of the whole system. e corresponding algorithm core architecture of speech recognition technology is shown in Figure 1, from which we can clearly see the operation mechanism of the whole core algorithm.
Analysis of Dual Detection Algorithm for Spoken English
Pronunciation. In this paper, the core algorithm of spoken English speech recognition algorithm is the hybrid detection method which combines the conventional speech recognition algorithm with the auxiliary lip detection algorithm. e 2 Complexity algorithm is mainly composed of speech signal preprocessing module, speech feature and speaker lip feature extraction module, speech dynamic time adjustment recognition algorithm, and speech recognition feedback evaluation mechanism algorithm. e corresponding recognition principle block diagram is shown in Figure 2, and the technical details of the corresponding modules are as follows.
Speech signal preprocessing module: when the speaker carries out oral English training, the corresponding voice is processed through the speech preprocessing module. e corresponding processing steps mainly include digital processing of speech signal, speech endpoint detection, speech framing detection, speech windowing detection, and speech pre-emphasis detection.
In the preprocessing part of speech digitization, the analog signal corresponding to speech is mainly converted into the corresponding digital signal, and the corresponding digitization process includes speech analog sampling and digital quantization processing. In the actual system design, this paper uses the unique platform of Android mobile phone platform for sampling and reasonably analyzes the corresponding digital sampling rate and corresponding quantization in the digitization process to achieve the current input voice configuration requirements. After sampling and digital quantization, it is necessary to preemphasize the current voice file, so as to enhance the highfrequency signal of the input voice signal and filter out the low-frequency signal in the corresponding voice signal, so as to make the signal spectrum more flat. In this module, the corresponding digital filter is installed. e core calculation formula of the corresponding digital filter is shown as formula (1), where W is the corresponding cutoff frequency and I is the order of the filter. After the completion of digital filtering, the corresponding digital voice file needs to be processed by frame and window. In the process of processing, the corresponding voice input occurred in a very short time is defined as a steady-state input signal, and it is regarded as a fixed signal. For the long-time signal, the shorttime signal will be used as the frame reference for segmenting processing, so as to achieve the goal of long-time voice input reasonable analysis and preprocessing; based on this, the framing function used in this paper corresponds to formula (2), in which the corresponding frequency is set to 50 Hz, the corresponding sampling point is designed to about 5000 points, the corresponding window function in the formula is fn, the corresponding voice signal length is set to N, the corresponding "inc" represents the adjustment length, and the corresponding "overlap" represents the overlap part.
In the aspect of voice signal endpoint detection technology, there are mainly two kinds of algorithms. e voice endpoint detection part is mainly based on the short-term energy detection technology to process the voice part, while the action detection algorithm is mainly used to judge the lip action components corresponding to the speaker. In the actual judgment, the "and" calculation results will be combined with the judgment results of the two components, only when the results are not correct. When there is a problem, make a speech recognition judgment. In the short-term energy detection part of the speech part, it is mainly based on the rule that the speech energy changes with time. e corresponding core calculation formula is shown as formula (3) frames of the speech signal, the corresponding energy is represented by W n , and the corresponding b (n) represents the window function: In the motion detection algorithm, it is mainly based on the speaker's lip frequency shift detection technology, which mainly includes two detection steps, corresponding to the boundary scan and the corresponding secondary frequency peak retrieval. In the corresponding boundary scan technology level, it is mainly to determine the range of the frequency spectrum of the corresponding lip reflection, realize the fast Fourier transform of the signal, and quickly transform the corresponding time-domain signal. After the corresponding spectrum signal is processed, the corresponding main frequency peak is selected as the center point of the transmission frequency, and the frequency points in the positive and negative directions are scanned. In the corresponding secondary frequency peak retrieval part, the main purpose is to determine the secondary frequency peak of the corresponding spectrum result.
In the aspect of speech signal feature extraction, it mainly depends on two aspects, namely, speech feature extraction and lip motion feature extraction. In the corresponding speech feature extraction level, it mainly includes preprocessing technology, fast Fourier transform technology, spectral line energy processing technology, and DCT cepstrum algorithm. In the corresponding preprocessing stage, it is essentially similar to the preprocessing module of the whole system. In the fast Fourier transform level, it is mainly to quickly extract the energy information of speech signal and quickly determine the corresponding energy distribution. In addition, the corresponding spectral line energy is calculated based on the corresponding fast Fourier transform results, and finally the speech features are extracted by the DCT cepstrum algorithm (the main speech features include voiceprint, volume, and tone). For lip motion feature recognition, the main task is to cut the lip reflected signal, extract the mouth features of the corresponding unit based on the unit, and query the corresponding pronunciation table (based on the pronunciation table of 10 basic mouth types) to segment the corresponding signal, extract the corresponding frequency shift features. e schematic diagram of speech signal feature extraction is shown in Figure 3, from which we can clearly see the principle process diagram of speech recognition and lip recognition algorithm feature extraction. e dynamic time adjustment algorithm is mainly to solve the difference between the actual input speech and the speech evaluation system. In this paper, the dynamic time algorithm is used to stretch or compress the corresponding speech input until it is consistent with the reference. e core principle of the dynamic time adjustment algorithm used in this paper is shown in Figure 4. It can be seen from the figure that when the dynamic time adjustment algorithm is running, it mainly selects the corresponding reference speech template and the corresponding test speech template for score analysis. It directly projects the corresponding frame number into the corresponding coordinate axis x and the corresponding coordinate axis y; then the corresponding frame number that corresponds to the each intersection point in the corresponding grid can be defined as the corresponding matching degree. At this time, the corresponding dynamic time warping algorithm is to find the best path for the starting point and the endpoint to pass through each intersection point at the same time and ensure that the corresponding measure of the frame distance of each intersection point on the corresponding path is the minimum.
In the corresponding speech evaluation mechanism, two different standard pronunciation templates (corresponding to template 1 and template 2) are used for feedback comparison. e corresponding schematic diagram is shown in Figure 5. It can be seen from the figure that the spoken English speaker's corresponding pronunciation is test 3. In the actual feedback comparison, the frame matching distance of the corresponding characteristic parameters between the two is mainly compared, and the corresponding frame matching distance is compared. e frame matching distances in the graph are D1, D2, and D3, respectively. e frame distance between the corresponding learner and the corresponding standard pronunciation can be expressed by the average distance. Based on the corresponding average distance, it is converted into the corresponding evaluation score, and the corresponding pronunciation level of the speaker is calculated.
Based on the analysis and research of the above principles, the core algorithm of oral English pronunciation training in this paper is basically constructed. At the same time, the problems of poor anti-interference ability of traditional speech recognition algorithms are solved. e corresponding dual speech feature extraction technology uses a hybrid algorithm of conventional speech recognition and lip motion recognition, which improves the accuracy of the whole speech recognition. e dual benchmark evaluation mechanism can significantly improve the judgment of the speaker's speech level, and the corresponding judgment standard can also help to improve the speaker's own oral level.
Research and Design of Oral English Pronunciation
Training Based on Android Smartphone Platform. In order to verify the realizability of the algorithm, this paper designs and verifies the algorithm based on Android mobile platform. e main purpose is to realize the intelligent English 6 Complexity pronunciation training system in the form of animation, sound assistance, pictures, and text through the Android mobile platform. e corresponding system organization module diagram is shown in Figure 6. It can be seen from the figure that the corresponding hardware design level mainly includes I/O module design, scoring module design, feedback module design, and user interface design. e corresponding I/O module design mainly uses the built-in microphone and headset of Android mobile phone system to realize the input, sampling, and processing of voice signal; the corresponding scoring module design level mainly includes the double benchmark scoring mechanism, in which the corresponding scoring parameter generation and the corresponding score conversion mechanism are very Complexity important; in the corresponding feedback module design level, two hardware processing modules are mainly used, which are, respectively, the Fourier transform processing of speech signal and the lip movement processing of the speaker, and the comprehensive signal is processed in the corresponding way through the specific processing mechanism. Graphical form is sent to Android mobile platform for display; at the same time, based on this, the formant contrast diagram of mixed processing signal is generated by using chart engine open source software; the corresponding user interface is mainly developed based on Android platform, and the corresponding function keys of voice assistant system are mainly set on the corresponding functions.
At the software level, the corresponding development environment is mainly implemented in the eclipse integrated environment.
e corresponding development environment is as follows: the PC uses Windows XP system; the development components use JDK 6; the platform hardware environment uses a brand of Android smartphone; the software platform uses Android OS2.2; and the programming language uses Java. Taking the development of image reference function in the corresponding scoring mechanism as the template, the corresponding core code is as follows, and the corresponding chart activity is the graphic drawing class function.
Experiment and Analysis
In order to verify the advantages of this algorithm compared with the traditional spoken English pronunciation training algorithm, this paper carries out a comparative implementation, and the corresponding experimental platform is an Android mobile phone. e corresponding experimental level is mainly from the three levels of comparative analysis that are speech recognition rate test, environment anti-interference test, and oral English pronunciation auxiliary training effect test.
Speech Recognition Rate Test Experiment.
Before the experiment, the recorded standard speech database was put into the system, and 10 volunteers were used for speech test. In this test, the corresponding environmental variables were controlled (the environmental noise level was controlled to keep the same level). e average speech recognition accuracy of the test samples is shown in Figure 7. It can be seen from the figure that the corresponding speech recognition accuracy of this algorithm is significantly higher than that of traditional speech recognition, which shows that the speech recognition algorithm combining speech recognition with lip motion detection has obvious advantages.
Environmental Anti-Interference Performance Test.
In this experiment, 60 dB music is selected as the environmental noise, and the corresponding noise source distance is consistent in the corresponding environment. Based on this environment, the environmental antijamming performance of this algorithm and the traditional algorithm is tested, and the evaluation index is the TPR value of the system. e contrast curve of the environment anti-interference performance corresponding to the test samples is shown in Figure 8. It can be seen from the figure that the TPR value of the corresponding system under the proposed algorithm is about 75%, which is far higher than the traditional algorithm.
Oral English Pronunciation Training Test.
e purpose of this test is to test the effect of oral English improvement for oral English speakers. e corresponding training effect is transformed into the corresponding oral English test scores for quantitative evaluation. Ten volunteers from nonnative English-speaking countries are selected for the test. e experiment is based on the improvement of oral English scores of the trainees before and after the test and the corresponding experimental week. e training period is set at 30, 40, 50, and 60 days. e average oral English scores of the volunteers before and after the training are shown in Table 1. It can be seen from the table that the volunteers have made great progress in different indicators of oral English after the training. Figure 9 provides a detailed comparison chart of the effect of oral English pronunciation training in four cycles. From the chart, it can be seen that the algorithm in this paper has obvious advantages, and these advantages will become more and more obvious with the growth of training time.
Comprehensive analysis shows that the proposed algorithm and the designed system have obvious advantages in improving the oral pronunciation effect compared with the traditional system.
Summary
is paper mainly analyzes the disadvantages of traditional oral English pronunciation practice and points out that the corresponding core algorithm has serious noise problems. Based on the analysis of the current research status of oral English pronunciation assistance, this paper develops and designs oral English pronunciation assistance training system based on Android smartphone platform. Based on the indepth research and analysis of oral English pronunciation correction algorithm and voice feedback mechanism, this paper proposes a lip movement judgment algorithm based on ultrasonic detection, which is used to assist oral English pronunciation training. e traditional speech recognition algorithm makes double feedback judgment. In the feedback mechanism of intelligent speech training, a double benchmark scoring mechanism is introduced to evaluate the pronunciation of speech trainers in an all-round way and correct the pronunciation of speakers in time.
e experimental results show that the pronunciation correction rate of the proposed system reaches 85%, which improves the level of oral English trainers to a certain extent. In the follow-up research, this paper will focus on the application of the speech recognition hybrid algorithm in other language learning and further promote the use of it.
Data Availability
e data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
e authors declare that they have no known conflicts of interest or personal relationships that could have appeared to influence the work reported in this paper. | 5,299 | 2021-01-01T00:00:00.000 | [
"Computer Science"
] |
Information Fusion for Assistance Systems in Production Assessment
We propose a novel methodology to define assistance systems that rely on information fusion to combine different sources of information while providing an assessment. The main contribution of this paper is providing a general framework for the fusion of n number of information sources using the evidence theory. The fusion provides a more robust prediction and an associated uncertainty that can be used to assess the prediction likeliness. Moreover, we provide a methodology for the information fusion of two primary sources: an ensemble classifier based on machine data and an expert-centered model. We demonstrate the information fusion approach using data from an industrial setup, which rounds up the application part of this research. Furthermore, we address the problem of data drift by proposing a methodology to update the data-based models using an evidence theory approach. We validate the approach using the Benchmark Tennessee Eastman while doing an ablation study of the model update parameters.
I. INTRODUCTION
systems accompany the operators during the machinery operation by providing assessment during decision-making.These systems support the operators with (real-time) information on the process in terms of production, machine condition, and recommendations to handle faults or to improve the machine's performance.Assistance systems have typical com-ponents such as a (real-time) data collection system, a (fault) detection system, a knowledge base, a computing engine, and an (interactive) user interface [1] [2] [3].Due to their high performance, data-based models are a popular choice when selecting a detection system with reported applications in medicine [4], industry [1] [3], road infrastructure [5], and agriculture [2].Usually, the data-based models are trained using a specific dataset presenting good results.However, not all data-based models can handle new upcoming faults in the data.Hence, an anomaly detection system must have a mechanism to recognize an upcoming anomaly and the capability to learn upcoming data that differs from the original training data.Equally important is the system's capability to adapt or retrain the data-based models automatically.The retraining or automatic update of the models must consider a minimum size of training data that assures that the models capture the essential patterns to be learned.
Systems composed by the combination or fusion of several individual models often present better results and robustness than individual models (e.g., bagging and boosting).Though data-based models attain high performance, alternatively, expert-centered knowledge-based models provide versatile features, which are production context and expert domain knowledge.The challenge here lies in how to combine a data-based model and a knowledge-based model.Thus, a common framework is required to perform a fusion of both systems.Such a framework must provide not only a way to combine the models' outputs but to quantify the uncertainty.The uncertainty provides information regarding how reliable the combined system output is.
We propose a novel methodology for assistance systems that rely on information fusion in production assessment, in which several information sources can be combined into a more robust system output.The novelty of this paper is presenting a common framework that allows the fusion of several information sources on the decision level using the evidence theory.Besides, we quantify the uncertainty of the system output to provide a better assessment of system output reliability.An essential contribution of this paper is the ability of the databased model to handle unknown fault cases in the data, which allows the model to update the models automatically.
The individual contributions of this paper are: • A methodology for the automatic model update of ECSs, while feeding up data with unknown fault cases.The methodology includes an uncertainty monitoring strategy that improves the anomaly detection of the EC, stores the data of the unknown condition, and retrains the pool of classifiers of the EC.We present the parameters of the automatic update module: threshold size, window size, and detection patience.The automatic update methodology is rounded up with experiments using the benchmark dataset Tennessee Eastman.The EC is tested using different fault class scenarios, in which we test the impact of a window during anomaly detection.Moreover, we present a detailed analysis of the automatic update parameters regarding retrained EC performance.• A general framework to combine n number of information sources on the decision level to generate a robust system prediction.The framework uses the Dempster-Shafer evidence theory.Besides, the framework quantifies the uncertainty of the prediction, which can be used to assess the reliability of the system prediction.• A methodology to combine a multiclass EC with an expert-centered knowledge-based model, in which we apply the general framework of the information fusion.The system architecture shows the components of each model, namely, the inference model and model update module.The application of the information fusion system is tested with the data of an industrial setup using a small-scaled bulk good system.The performance of the individual models (EC and knowledge-based) is compared with the combined system.
This paper is structured as follows: Section II presents a literature survey on the main topics of this paper.The theoretical background is described in section III.Our proposed approach is detailed in section IV.Section IV-C and section IV-D present the methodology for information fusion and model update, respectively.Section V portrays a use case for retraining the EC using the benchmark Tennessee Eastman.Whereas section VI presents a use case for information fusion using the data of a bulk good system laboratory plant.Finally, section VII summarizes the conclusions and future work.
II. RELATED WORK
This section reviews the literature related to information fusion, update of data-based models, and assistance systems.
A. Assistance Systems
Assistance systems provide valuable information for the users.They can be whether non-invasive or with direct control of the process.The assistance can range from recommendation systems [6] [7], interactive systems [8], or even systems that prevent actions from the user.Architectures of assistance systems commonly contemplate the modules: data collection, a condition detection engine, a knowledge base, and an (interactive) user interface [9].The (fault) condition detection engine is vital to identify the current state of the machinery or process.The engine is usually powered either by a knowledgecentered model [9] or a data-based model [10].The knowledge base plays a crucial role in the assistance system because it provides the information that supports the user when a (faulty) condition is active [9].There are different ways to build a knowledge base, namely using ontologies [9] [11], knowledge graphs [8] [12], or static databases.The proposed architectures of assistance systems contain the primary modules to support the users.However, there are factors to be considered, such as the update of the condition detection engine and the knowledge base, and the quantification of the system uncertainty.The challenge lies in a holistic architecture that addresses these factors and proposes the interactions of the primary systems.This research differentiates from the state-of-the-art, in which we propose a holistic methodology using information fusion for assistance systems with a special focus on production assessment.In this sense, the methodology addresses the major components of the assistance system architecture.We propose a novel architecture based on the evidence theory that can combine n number of information sources while quantifying the uncertainty of the resulting system prediction.For this purpose, we provide a detailed description of the architecture in terms of components and their relationships, with a special focus on the role of uncertainty.
B. Information Fusion
Information fusion is a popular approach to combining several sources of information because the combined system often yields better performance and robustness.Information fusion on the decision level is a common practice using data-based models (e.g., supervised classifiers in the case of bagging) [13].The use of information fusion and databased models is reported in [14] [15], in which evidence theory combines models at the decision level.Information fusion using evidence theory provides an additional feature: the uncertainty quantification [10].The uncertainty serves to assess the output reliability of the combined system [16].
Alternatively, knowledge-based models are expert-centered approaches containing valuable expert domain and environment context [17].Different knowledge-based approaches can be found in the literature using case-based reasoning (CBR) and natural language processing (NLP) [3], ontologies, and assistance systems [9].Though combining the strength of databased and knowledge-based models might be considered a logical step to follow, finding a common framework to perform the fusion is challenging.Besides, knowledge-based models often have a low number of input features in comparison with data-based models.The last aspect requires special attention while performing an inference of the primary systems before performing an information fusion.Current research methodologies cover the information fusion of data-based models [14] [15].However, existing literature does not report the fusion of data-based and knowledge-based models, though the heterogeneity of the sources could improve the overall result.We propose a methodology for the information fusion of a data-based model with an expert-centered model, in which we use the Dempster-Shafer evidence theory as a general framework for the fusion.Besides, we test the feasibility of the methodology using data from an industrial setup.
C. Update of Data-based Models
The ability of data-based models to handle data with unknown fault cases has grown interest in the research community [18] [19].A primary step is identifying the unknown fault case or anomaly from the upcoming data.There are different approaches reported in the literature to detect anomalies, which propose the use of evidence theory [20] and unsupervised learning [21] [22].After identifying the anomaly from the data, the next step is updating the model.In this sense, some methodologies are focus on concept drift detection [23] [24], incremental learning [25] [26], emerging classes or labels [27] [28] [29], and incremental class [28].Thus, detecting an anomaly is followed by an update or retraining of the databased model.However, there are challenges associated with the retraining or updating of models: the size of the training data sufficient to capture the essence of the upcoming fault.An essential factor to consider is the performance evaluation of the retrained models.A careful study of the parameters is required because only some upcoming faults might be handled with the same set of retraining parameters.Existing literature addresses the anomaly detection [20] [21] [22], and even the identification of emerging classes (or unknown conditions) [27] [28] [29].However, the model update using uncertainty remains unexplored.To this end, we propose a methodology for updating data-based models using DSET, in which we monitor the uncertainty of the fusion to trigger a model update.We focus on the model update of databased models, specifically for ensemble classification using evidence theory.Besides, we perform an ablation study of the retraining parameters while showing their impact on the model performance.We demonstrate the robustness of the model update using the benchmark Tennessee Eastman.
III. THEORETICAL BACKGROUND
This section presents the basic theory for performing information fusion and the transformation of model predictions using an evidential treatment.The equations of this section are applied during the development of the sections IV-C and IV-D.
A. Evidence Theory
Dempster-Shafer [30] defined a frame of discernment Θ = {A, B} for the focal elements A and B. The power set 2 Θ is defined by 2 Θ = {ϕ, {A}, {B}, Θ}}.The definition of a basic probability assignment (BPA) is given by: m: 2 Θ → [0, 1], in which the BPA must comply with m(ϕ) = 0, and A⊆Θ m(A) = 1.The last equation represents the sum of BPAs.The focal elements of Θ are mutually exclusive: The Dempster-Shafer rule of combination (DSRC) defines how to perform the fusion of two mass functions (e.g., sources of information) using the equation: where m DS (A) is the fusion of the mass functions m 1 and m 2 .The conflicting evidence b k is defined by: It is important to remark that, while using DSRC, the conflicting evidence is distributed by each focal element.Yager [31] defined an alternative rule of combination, which in contrast to DSRC, assigns the conflicting evidence to the focal element Θ.The Yager rule of combination (YRC) is defined by the equation: where m Y (A) is the fusion of the mass functions m 1 (B) and m 1 (C).The focal element Θ of the mass function m Y (A) is defined by: m Y (θ) = q(θ) + q(ϕ), where q(ϕ) represents the conflicting evidence.Likewise DSRC, the conflicting evidence q(ϕ) is represented by: In the case of multiple fusion operations, the mass functions are combined using the following equation: where m(A) is the fusion of the n mass functions, and N ∈ N.
B. Evidential Treatment of Model Predictions
We consider models with a common frame of discernment Θ = {L 1 , L 2 , ..., L N }, where N represents the number of labels or classes, , and N ∈ N. The power set is represented by 2 Θ = {ϕ, {L 1 }, {L 2 }, {L 1 , L 2 }.The last term represents the overall uncertainty U .Each model (e.g., classifier or a rulebased system) provides a prediction in the form of a unique label p = L 1 or as an array, p = [L 1 , L 2 , ..., L n ].In section III-A, the sum of BPAs is defined as A⊆Θ m(A) = 1.In [10], we proposed a strategy to transform a prediction into a mass function.This operation plays an essential role in the fusion of different information sources.We presented a sum of BPA that considers the weights of each focal element w m , and the quantification of the overall uncertainty U : S wbpa = N j=1 m j • w mj + U = 1, where n ∈ N and w m is the weight of the evidence m.The following conditions must be fulfilled: ∀m j .m j > 0 and w mj → [0, 1].The overall uncertainty is defined as U = 1 − N j=1 m j • w mj , in which a high value of U represents a high uncertainty on the body of evidence (e.g., lack of evidence).We consider that the focal elements are mutually exclusive, which means that only one label is active at the time, which transforms S wbpa into S wbpa = m Rj • w m R j + U = 1.However, we adapted the sensitivity to zero approach of Cheng et al. [32], using the equation [33]: k = 1 − 10 −F , where k ∈ R, F ∈ N, and F ≫ 1.Thus, we transform S wbpa into: where m ′ pj represents the j th focal element, and is defined using: where k is the approximation factor, N is the number of focal elements of Θ, and N ∈ N. The active prediction p can be transformed into a mass function m using: m = m ′ p .w p .The mass function can be represented as a row vector using the following equation: and the uncertainty U is defined as:
IV. INFUSION: INFORMATION FUSION FOR ASSISTANCE SYSTEMS IN PRODUCTION ASSESSMENT
This research proposes an INformation FUsion approach for asSIstance systems in productiON assessment (INFU-SION).This section covers the topics: theoretical background, prediction systems, information fusion, model update of the prediction system, and the assistance system.
As a first insight into this theme, we present a general system overview as seen in Fig. 1.
The general system is conformed by n systems used as information sources.The motivation behind this is the creation of a more robust system.The general system overview is composed of the blocks: • The batch data is the numerical representation of the physical behavior of a machine.The data is split in three categories: training data D T r , validation data D V a , and testing data D T e .The data is used during the training and inference processes of the models.• The modules form the production assessment system: - -The assessment module matches each ensemble prediction with its corresponding assessment.-The knowledge base has the assessment for each ensemble prediction.
• The assessment is presented to the user (operator) through a user interface.A primary motivation of this paper is the integration of databased and knowledge-based models because the combined outcome profits from the strengths of both models.Therefore, the n systems of Fig. 1 are transformed into two major systems: an ensemble classifier (EC) that groups different databased models and a knowledge-based model.Section IV-A details both systems.
A. Prediction Systems
As presented in Fig. 1, a (prediction) system is conformed by an inference model and an update module.The trained model represents the physical system and is used to predict the system's answer while feeding data to it.The inference model can be data-based (e.g., a supervised classifier), an ensemble classifier (EC) formed by several models, a model built on equations representing the physical system, an ontology, or a knowledge-based model.The model update module adapts the system when the initial conditions have changed (or unknown events occur).The update is performed automatically or manually, depending on the module strategy.
A model M i is trained using a training dataset D T r (in the case of data-based models), or is modeled using the relationships between the process variables and thresholds (in the case of a knowledge-based model).A training dataset D T r contains N o T r number of observations, N f T r number of features, and N c T r number of classes.A frame of discernment Θ is formed by all the labels (or classes) that the model can predict: Θ = {C 1 , ..., C N }, where N ∈ N.
where ŷi ∈ Θ.The prediction ŷi is transformed into the mass function m i using equations ( 6)-( 9): where w Mi represents the (confidence) weights for each class predicted by the model M i .
We focus this research on a prediction system using EC and rule-based knowledge models.Previous research deepened in these two topics separately [20] [10].Fig. 2 details the INFUSION system, where the prediction systems are adjusted to a data-based and knowledge-based model.Thus, the databased model is represented by the EC using the ensemble classification and evidence theory (ECET) approach [20], and the knowledge-based model is built using the knowledge transfer framework and evidence theory (KLAFATE) methodology [10].It is important to remark that each system has an inference model and a model update module.It is important to note that ECET is an EC formed by n systems, specifically the n supervised classifiers.ECET presents a similar structure from Fig. 1 for the system's prediction, except for the model update module.
The model update module of KLAFATE is manual because it relies on the expertise of the team expert.The methodology is explained in detail in [10].The automatic model update module of ECET is introduced in this research and is explored in detail in section IV-D.The main blocks of this module are: • The pool of classifiers and the list of hyperparameters reported in [20].• The anomaly detection module which monitors the ensemble uncertainty U E and the anomaly prediction ŷAN of ECET, and the system uncertainty U Sys and the system prediction ŷSys .1) ECET Prediction System: In [20], we presented an approach of ensemble classification using evidence theory (ECET), in which we propose the use of information fusion to combine the predictions of N number of classifiers.In this paper, we extend the contribution of [20] by formalizing the approach theoretically.This theoretical formalization plays a crucial role in section IV-C and section IV-D, which correspond to the methodologies of information fusion and model update, respectively.Thus, given a n number of classifiers, each classifier produces an output ŷi using equation (10), where ŷi ∈ Θ.The output is subsequently transformed into a mass function m i using equations ( 6)-( 8).The ensemble classifier (EC) is obtained by combining all the classifiers, specifically using the DSRC on the mass function of each classifier prediction.As described in equation ( 5), the DSRC can be used for multiple fusion operations.However, the fusion is performed in pairs.For instance, in the case of three classifiers, the fusion of m 1 (corresponding to the output ŷ1 of model C 1 ) and m 2 is performed first, the result of this fusion m 1 ⊕ m 2 is then combined with m 3 .The fusion of the pair of mass functions m i and m Di−1 is represented using: where i ∈ N, m i is the mass function of the current classifier, and m Di−1 is the fusion of the previous mass functions.After where i ∈ N. The last element of the fusion F Di , which is a row vector, corresponds to the uncertainty U Di : , where N is the cardinality of the frame of discernment Θ, and N ∈ N.After performing the last fusion, the system prediction ŷEN is calculated using: where ŷEC ∈ Θ.The system uncertainty is calculated using: A similar procedure is performed when using the YRC to calculate the fusion F Yi , the previous mass function m Di−1 , and the uncertainty U Yi .It is important to remark that the current mass function m i is used for DSRC and YRC.
2) KLAFATE Prediction System: In [10] we presented a knowledge-based model using the knowledge transfer framework using evidence theory (KLAFATE) [10].The knowledge was extracted from a failure mode and effects analysis (FMEA) and modeled in rules.Thus, a knowledge rule R i is defined as the function: where V 1 represents a process variable, T 1 is a threshold or limit value of the process value, N V is the number of process variables, N T is the number of thresholds, N V and N T ∈ N. The knowledge rules are mutually exclusive: R i ∩ R i+1 = ϕ.The knowledge model is represented as a set of rules [10]: where L T R i represents the approximated rule R i , m is the number of knowledge rules, m ∈ N, and L T R , R i ∈ Θ.The active rule is obtained using equations ( 6)-( 9): where k is the approximation factor, N is the cardinality of Θ, k ∈ R, and N ∈ N. Thus, the mass function is defined using equation ( 8): where w R1 is the (confidence) weight of the rule R 1 , and U is the overall uncertainty.The uncertainty U is calculated using the equation ( 9): The (confidence) weight w Rj is defined using the equation [10]: The mass function m Ri is transformed into the prediction ŷKE using: where ŷKE ∈ Θ.
B. Assistance System
The assistance system provides an interactive source of assessment for the user while receiving the process data.It provides the current status of the system (e.g., system prediction and uncertainty), the assessment (e.g., troubleshooting through the FMEA knowledge base) in the case of a fault case, and notifies in case of an unknown condition for the consequent model update.
The knowledge of the FMEA is stored as a knowledge tuple T U i [10]: where F M represents a failure mode, P is a process, SP a subprocess, C a set of causes, E a set of effects, RE a set of recommendations, and i ∈ N. A set of recommendation is also represented as: where N RE ∈ N. The latest representation applies to the sets of effects and causes.
In the assessment context, the rule R corresponds to the system prediction ŷSys , and the confidence weight w R to the system weight w ŷSys , where R, ŷSys ∈ Θ Sys , and w Sys = 1.It is important to remark, that each system prediction ŷSys is linked to a knowledge tuple T U i, a failure mode F M , and to a weight w Sys : ŷSys ⇐⇒ T U i , ŷSys ⇐⇒ F M , and ŷSys ⇐⇒ w ŷSys .In contrast, a system prediction ŷSys can be associated to a set of causes C, effects E, and recommendations RE.The assessment module is modeled through a matching function that associates a system prediction ŷSys to the rest of the knowledge of the tuple T U i : where i ∈ N. The matching function f M a provides the assessment while feeding the system prediction ŷSys , specifically returning the troubleshooting information associated with the failure mode: the process P , the subprocess SP , the set of causes C, the set of effects E, and the set of recommendations RE.The assistance system was described in detail in a previous work [10].
C. Information fusion
Information fusion has a growing research interest because it improves robustness while combining different models.To this end, we propose a novel framework for combining n number of models using DSET.Moreover, this framework is used for the fusion of a data-based model and a knowledgebased model.
Thus, as presented in Fig. 1, the system is formed by n number of subsystems.The system mass function m Sys is obtained after applying the information fusion to all subsystems: where n ∈ N, and m Sys (A) ∈ Θ Sys .The system mass function m Sys is also referred as F Sys .It is important to remark that all the systems share the same frame of discernment: Θ KE = Θ EC = Θ Sys , and where C 1 represents the first class (or fault case), N Sys is the number of classes (or fault cases), and N Sys ∈ N.
The equation (22) can also be represented as: where i, N Sys ∈ N.This paper adapts the system to two main subsystems: a data-based model M EC and a knowledge-based model M KE .
As a first step we obtain the outputs ŷEC and ŷEC by feeding data to the models M KE and M EC : and where D T e is the testing data.
The predictions ŷEC and ŷKE are transformed into the mass functions m EC and m KE respectively, using equations ( 6)-( 9): and where w Mi = 1 ∀i, and i ∈ N. The next step is to obtain the system fusion F Sys by applying either DSRC or YRC.Thus, the system fusion F D Sys is calculated using DSRC and applying the equations ( 1), ( 2), ( 22), (24): Likewise, the system fusion F Y Sys is calculated using YRC and applying the equations ( 3), ( 4), ( 22), (24): . The system uncertainty U D is calculated using the last DSRC fusion F Di : ŷSys using: where F Di [|Θ Sys |] corresponds to the overall uncertainty of the system fusion F Di .Likewise, the system uncertainty U Y is calculated using the last YRC fusion F Yi : where F Yi [|Θ Sys |] corresponds to the overall uncertainty of the system fusion F Yi .The last step is the calculation of the system mass function m Sys and the system uncertainties using DSRC U D and YRC U Y .The system mass function m Sys is obtained from the last DSRC system fusion F D Sys : m Sys = F Di .The mass function m Sys , then, is transformed into the prediction ŷSys using: where ŷSys ∈ Θ Sys .Algorithm 1 describes the steps for the information fusion of N Sys number of subsystems while feeding the testing data D T e , where N Sys ∈ N. Algorithm 1 is an updated version of the algorithm presented in [20].
D. Model Update
The anomaly detection functionality is crucial in the model update because it identifies when an unknown condition is present.We present an (automatic) model update for ECET based on uncertainty monitoring.The (manual) model update of KEXT was proposed in [10].The model update is a sequence of five steps: anomaly detection, collection of unknown data, data isolation using a window, retraining, and inference.
1) Model Update for ECET: Performing ECs are usually the result of a suitable dataset that fits the patterns of the existing data.However, the occurrence of new unknown fault cases might undermine the performance of the ECs, leading to a retraining procedure of the models.To this end, our methodology provides the theoretical basis for updating the data-based models using DSET, in which we monitor the uncertainty of the fusion to trigger a model update.The model Algorithm 1 Information Fusion of N Sys Systems [20] 1: procedure INFORMATION FUSION 2: for j = 1 to N Sys do ▷ N Sys Subsystems 3: ŷi ← M j (S i ) ▷ by Eq. ( 25) 5: ▷ by Eq.( 6)-( 9), ( if i = 1 then 7: else 11: 12: 13: 16: 17: ŷSys = arg max Θ m Sys ▷ by Eq.( 33) 19: return ŷSys , U D , U Y update of ECET is performed automatically using an anomaly detection strategy, in which the uncertainty is monitored.However, The model update can be set as semi-automatic (e.g., the user receives a notification from executing the model update module) in case the unknown condition needs to be analyzed in detail first.Algorithm 2 describes the sequence of the model update.
Algorithm 2 Model Update of ECET.
1: procedure MODEL UPDATE 2: ŷEC ← M EC (S j ) 3: m EC ← f m ( ŷEC , w EC ) ▷ by Eq.( 6)-( if C A = T rue then ▷ by Eq. (36) 5: D T empj ← collect data(X A , ŷA ) if C S = T rue then ▷ by Eq. ( D A ← D T emp 10: 11: 12: 13: MT r ← retrain(M, D ′T r ) 14: M T r ← MT r ▷ Replace old models We proposed an anomaly detection strategy using ECET in [20], in which an unknown condition A K was detected: where ŷA is a parallel prediction to the EC prediction ŷEC , A K ∈ Z, and K ∈ N. The condition for anomalies C A is defined as: where The terms b k and q(ϕ) are calculated using the equations ( 1)-( 2), and ( 3)-( 4), respectively.
In this paper, we propose the monitoring of the EC uncertainties U D EC and Y D EC , as well as the system uncertainties U D Sys and Y D Sys .The condition for anomalies from equation ( 35) is transformed into: where C A EC and C A Sys represent the condition for anomalies of EC and system, respectively.Thus, the anomaly detection of the system is defined as: The data collection of (unknown) conditions needs to satisfy the condition C D : where C S is the condition that satisfies a minimum number of consecutive data samples.The condition C S is defined as: where i A is the number of consecutive data samples, S M n is the minimum number of consecutive data samples, and i A , S M n ∈ N.
The collected data of the unknown condition D A has the same features f T r of the (old) original data D, such as f A = f T r .In contrast, the number of observations o A might differ from that of the original data o T r .Thus, the data D A is represented by a number of observations N o A , in which each observation is composed by the features X A = f A and the associated label (or class) ŷA .
The data D A is represented as: where S M n is the minimum number of consecutive samples of the unknown condition, N f A is the number of features, The data D A is split into training D T r A and testing data D T e A : The training data is split The next step is to integrate the existing data D with the collected data D A using the following equations: The EC prediction ŷEC usually has not a constant steady value because of the diversity of the classifier's predictions.For this reason, we propose a window on the EC prediction ŷEC that can ease the data isolation of the unknown condition.The window smoothes the EC output because it considers a window of N w number of the last samples for the calculation of the windowed EC output ŷEC : where ŷECi w ∈ Θ Sys .A graphical representation of the window procedure is exemplified in Fig. 3. Having the data and the frame of discernment updated, we can proceed with the retraining of the pool of classifiers.The retraining is performed using the training methodology presented in [20].
The last step is to test the EC using the testing data D T e .For this purpose, we first update the frame of discernment Θ Sys : where Θ Sys Old is the old frame of discernment, A K is the new focal element, and N, K ∈ N. Thus, the updated Θ Sys is transformed into: 2) Model Update for KLAFATE: Though knowledge-based models contain valuable expert-domain knowledge, the modeling process is time-consuming and requires frequent updates to avoid knowledge obsoleteness.To this end, our methodology provides the theoretical framework for uncertainty monitoring using DSET, which can be used to trigger the update of the knowledge model by the team of experts.The model update of KLAFATE is triggered by an uncertainty rise, either on the system or the knowledge model.Thus, the expert team is gathered to analyze the possibility of an unknown condition.Consequently, the expert team recommends adding information sources by including signals, process variables, or hardware to capture new physical signals.The latest purpose is to ease the identification of unknown conditions to create new knowledge rules in the FMEA.Once the expert team analyzes the acquired knowledge, the knowledge rules are validated using key performance indicators (KPI) in the short and long term.The process to create a rule-based system is described in [10].
V. USE CASE: MODEL UPDATE FOR ENSEMBLE CLASSIFICATION USING TENNESSEE EASTMAN DATASET
As described in section IV-D, the approach's novelty is a methodology for updating data-based models while injecting unknown fault cases in the data.The methodology uses primarily an uncertainty monitoring approach based on DSET.This section presents the results of the improved anomaly detection approach and the model update methodology.The robustness of the approaches is tested using the benchmark Tennessee Eastman.We present a description of the dataset.We describe the experiment design explaining the defined scenarios and the performance metrics.The subsection results provide the performance of the experiments.A discussion subsection closes this section by presenting the findings and limitations of the approach.The model update for the data-based model (ECET) and knowledge-based model (KLAFATE) are green highlighted in Fig. 2.
A. Description of the Tennessee Eastman Dataset
The benchmark Tennesse Eastman (TE) was created by Down and Vogel with the motivation to provide an industriallike dataset based on the Tennesse Eastman chemical plant [34].The TE chemical plant have five principal process components: condenser, reactor, compressor, separator, and stripper.The dataset is amply used in literature to compare the performance of data-based models.The dataset models a chemical process considering 21 fault cases and a normal operation case.The dataset is divided into training sets and testing sets.The training set consists of 480 rows of data containing 52 features for each fault.In contrast, the training set of the normal condition contains 500 rows of data.The testing set consists of 960 rows of data, in which the first 160 rows belong to the normal condition and the rest 800 rows belong to the fault case.Given the prediction difficulty, the fault cases are usually grouped into three categories: easy cases (1, 2, 4, 5, 6, 7, 12, 14, 18), medium cases (8,10,11,13,16,17,19,20) and hard cases (3, 9, 15 and 21) [35].A detailed dataset description can be found in [34] [20].
B. Experiment Design
We followed the procedure proposed in [20], in which we used the benchmark TE to test the performance of the proposed approaches.Besides, we considered a pool of ten classifiers (e.g., five NN-based models and five non-NN-based models) as the basis of the ECs.We considered only experiments using ML-based ECs, and Hybrid ECs (a combination of non-NNbased classifiers and NN-based classifiers).The procedure is documented in detail in [20].We trained the classifiers of the ECs using the fault cases (0,1,2,6,12) as the basis of the experiments.We defined two experiment scenarios: data isolation using a window and an update of ECs.We develop the approach using the IDE Anaconda and the libraries Scikitlearn and PyTorch [36] [37] [38].We perform the experiments on a Ubuntu 20.04.3 LTS environment using a CPU i7-7700 @3.60GHz x 8, 32GB RAM, and a GPU NVIDIA GeForce GTX 1660 SUPER.
1) Data isolation using a window: We selected the MC ECs M3 and H5-2 from the previous work [20] with the best performance criteria.The EC M3 consists of non-NN classifiers, whereas the EC H5-2 is hybrid.We compared the results obtained by performing a variation on the window size.The base classifiers' and ECs' hyperparameters are detailed in [20].
2) Update of ECs: We selected the ML-based ECs M3, M4, and M5 to perform the experiments and comparisons.Given the constraint of limited retraining data, we discard NN-based and Hybrid ECs.The procedure consists of two data batches for each experiment.The first batch contains the known fault cases (0,1,2,6,12) and one anomaly case (e.g., fault case 7).The EC identifies the anomaly through uncertainty monitoring, collects the anomalous data, and retrains the EC if the data is sufficient.We assign the anomaly data with the arbitrary label 30.The second batch contains testing data of the fault cases (0,1,2,6,12) and the anomaly (e.g., fault case 7).For comparison purposes, the original label 7 is changed by the new label 30.We defined three main experiments, namely, the retraining of the ECs using all the fault cases (1,...,21), the study of the retraining parameters (e.g., threshold size, window size, and detection patience) using the fault cases (7,8,15), and the fine-tuned retrained ECs using all the faults (1,...,21).We selected the fault cases (7,8,15) as anomalies to have a case for each primary data group (easy, medium, and hard).
3) Performance Metrics: We use the performance metrics F1-score (F1) and fault detection rate (FDR, also known as recall).F1 and FDR are detailed in [39].
C. Results
This subsection presents the experiment results of the model update approach.For this purpose, the experiments are divided into two parts: data isolation using a window and a model update of EC.
1) Data Isolation using a Window: We perform experiments using different window sizes to study their impact on the EC performance.We compare the effects of using no-window (w = 0) and a window (w = 20, w = 50).
Table II presents the F1-scores of the BIN EC M5 and MC EC H5-2.The hyperparameters of the base classifiers and ECs were reported in detail in [20].The BIN EC M5 presents comparable results while varying the window size with average F1-scores of 0.6%, 0.64%, and 0.65% for the window sizes (0, 20, 50), respectively.In contrast, the MC EC H5-2 presented higher results using a window (20,50) compared to no-window w = 0.The MC EC H5-2 presented average F1-scores of 0.63%, 0.81%, and 0.88% for the window sizes (0,20,50), respectively.Fig. 4 presents the plots of the MC EC H5-2 trained with fault cases (0,1,2,6,12) and using the anomaly fault case (7) while doing a variation on the window size (0, 20, 50).Figures 4a, 4b and 4c show the confusion matrices for the window sizes w = 0, w = 20, and w = 50, respectively.The confusion matrices for the window sizes w = 20 and w = 50 present better results than the confusion matrix with window size w = 0.The predictions plots of figures 4d, and 4d confirm the results of the confusion matrices, in which the predictions (blue) are closer to the ground truth (red) for EC using the window sizes w = 20 and w = 50.The anomaly case (7) is represented as the label (-1) in the predictions plot.It is important to remark that the approach using a window smooths the EC predictions.
2) Model update of EC: We perform three different experiments in this subsubsection: the model update of the EC (retraining), the study of the variation of the retraining parameters, and finally, selecting a fine-tuned retrained EC.
We test the model update of the EC using all the fault cases of the TE dataset.For this purpose, we selected the MC ECs M3, M4, and M5.The hyperparameters of the base classifiers and ECs were reported in detail in [20].Table III presents the F1-scores of the MC ECs M3, M4, and M5 trained with the fault cases (0, 1,2,6,12).The MC ECs M3, M4, and M5 present comparable results with an average F1score of 0.39, 0.36, and 0.37, respectively.The EC MC M3 detected the anomalies (7,17) with F1-scores higher or equal to 0.43 and the anomalies (13,14) with F1-scores higher or equal to 0.33 and less than 0.43.The EC MC M4 detected the anomalies (8,14,17) with F1-scores higher or equal to 0.67 and the anomalies (7,10,11,15) with F1-scores higher or equal to 0.38 and less than 0.54.Alternatively, the EC M5 detected the anomalies (14,18,20) with F1-scores higher or equal to 0.54 and the anomalies (8,17) with F1-scores higher or equal to 0.43 and less than 0.54.Fig. 5 presents the plots of the MC ECs M3, M4, and M5 trained with fault cases (0,1,2,6,12) and using the anomaly fault 7. Figures 5a, 5b and 5c show the confusion matrices for the ECs M3, M4, and M5, respectively.The confusion matrix of the MC EC M5 presents better results than the confusion matrices of the other ECs.Alternatively, the prediction plots of figures 5d, 5e and 5f present mixed results, in which M3 identifies the anomaly better, but the case ( 12) is confused with the anomaly.In addition, M5 presents a better prediction of the known fault cases but has a lower anomaly detection.The uncertainty quantification (UQ) using DSET is presented in figures 5g, 5h and 5i for the MC ECs M3, M4, and M5, respectively.The MC EC M5 presents steadier values than the MC ECs M3 and M4, which confirms the prediction pattern.The latest can be enunciated as the lower the uncertainty, the better the classification performance (likeliness).The next step is the study of the retraining parameters.For this purpose, we test the effects of the threshold size, window size, and detection patience.We chose the MC EC M3 to perform the experiments and selected the threshold sizes (150,250,350) and anomalies (7,8,15).a) Effects of the threshold size: Table IV presents the F1-scores of the MC ECs M3, M4, and M5 trained with the fault cases (0,1,2,6,12).The retraining parameters window size and detection patience are fixed with values of ws = 20 and pt = 15, respectively.The MC EC M3 presented higher results using a threshold size th = 150 with an average F1-score of 0.81 for the anomaly (7), compared with the values of 0.57 and 0.50, corresponding to the threshold sizes (250, 250).The MC EC M3 presents comparable results for the anomaly (8) with average F1-scores of 0.81, 0.82, and 0.82 for the threshold sizes (150, 250, 350), respectively.In contrast, the MC EC M3 presented higher results using a threshold size th = 350 with an average F1-score of 0.74 for the anomaly 15, in comparison with the values of 0.54 and 0.55, which correspond to the threshold sizes (150, 250), respectively.Fig. 6 displays the EC M3 performance for each class while effectuating variations on the threshold size (150,250,350) for the anomalies (7,8,15).The best performance corresponds to the anomaly (8), in which the EC M3 detects the fault cases (0,1,2,6,12) often correctly, and it has limited anomaly detection.In contrast, the EC M3 presents a lower performance while applying the anomalies (7,15).b) Effects of the window size: Table V presents the F1scores of the MC ECs M3, M4, and M5 trained with the fault cases (0,1,2,6,12).The retraining parameters threshold size and detection patience are fixed, with values of th = 250 and pt = 15, respectively.The MC EC M3 presented average F1-scores higher than 0.84 using window size (10,50) for the anomaly (7).Alternatively, the MC EC M3 presented average F1-scores higher than 0.72 for the anomaly (8) using the window size (20,50).In contrast, the MC EC M3 presented higher results using a window size ws = 50 with an average F1-score of 0.74 for the anomaly (15), in comparison with the values of 0.50 and 0.55, which correspond to the window sizes (150, 250), respectively.Fig. 7 displays the EC M3 performance for each class while effectuating variations on the memory size (10,20,50) for the anomalies (7,8,15).The best performance corresponds to the anomaly (8) using a window size ws = 20, in which the EC M3 detects the fault cases (0,1,2,6,12) mostly correct, and it has a limited anomaly detection.In contrast, the EC M3 presents a lower performance while applying the anomalies (7,15).c) Effects of the detection patience: Table VI presents the F1-scores of the MC ECs M3, M4, and M5 trained with the fault cases (0,1,2,6,12).The retraining parameters threshold size and window size are fixed with values of th = 250 and ws = 20, respectively.The MC EC M3 presented an average F1-scores of 0.84 using detection patience of pt = 5 and pt = 30, respectively, compared to the average F1-score of 0.57 for pt = 15.In the case of anomaly (8), the MC EC M3 presented higher results using detection patience pt = 15 with an average F1-score of 0.82, in comparison with the values of 0.78 and 0.58, which correspond to the detection patience (5,30), respectively.The MC EC M3 presented average F1scores higher than 0.73 for the detection patience (5,30), while the average F1-score of 0.55 is obtained with the detection Table IV: Anomaly detection results of MC EC M3 using the fault cases (0,1,2,6,12), the anomalies (7,8,15), thresholds variations (150,250,350), window size (20), patience (15), and F1-score.Fig. 8 displays the EC M3 performance for each class while effectuating variations on the detection patience (5,15,30) for the anomalies (7,8,15).The best performance corresponds to the anomaly (8) using detection patience pt = 15, in which the EC M3 detects the fault cases (0,1,2,6,12) mostly correct and has a limited anomaly detection.In contrast, the EC M3 presents a lower performance while applying the anomalies (7,15).Finally, we present the performance of the ECs with the tuned retraining parameters.Table VII presents the F1-scores of the MC ECs M3, M4, and M5 retrained with the fault cases (0,1,2,6,12) and the respective anomaly.In this case, the anomalies cases are all fault cases except for the original training cases.The retraining dataset contains the original fault cases and the detected data from the anomaly (unknown fault case from the data).The retraining parameters are threshold size th = 250, window size ws = 20, and detection patience The MC EC M3 detected the anomalies (7,11) with F1-scores higher or equal to 0.55 and the anomalies (9,13,17) with F1-scores higher or equal to 0.34 and less than 0.42.The MC EC M4 detected the anomalies (8,14,17) with F1-scores higher or equal to 0.67 and the anomalies (7,10,11,15) with F1scores higher or equal to 0.38 and less than 0.54.Alternatively, the EC M5 detected the anomalies (14,18) with F1-scores higher or equal to 0.68 and the anomalies (7,11,15,17,20) with F1-scores higher or equal to 0.31 and less than 0.54.
D. Comparison with Literature
Though the current approach can automatically update the models while detecting unknown fault cases from the data, the stored data to retrain the models might be insufficient for some fault cases.Thus, the stored data for some fault cases might not capture the essential patterns to identify the Table VI: Anomaly detection results of MC EC M3 using the fault cases (0,1,2,6,12), the anomalies (7,8,15), patience variations (5,15,30), threshold (250), memory size (20) condition.In contrast, the contributions of literature presented in the comparison consider all the extent of the testing data.
Table VIII compares the anomaly detection results between the proposed approach and the literature.The multiclass ECs M3, M4, and M5 are originally trained using the fault cases (0,1,2,6,12).The testing data consists of the fault cases (3,9,15,21), which represent unknown conditions to the ECs.For this purpose, each EC is retrained with one fault case at a time.We use the F1-score as a performance metric to compare the proposed approach with other literature contributions.It is essential to mention that the MC EC H5-2 from a previous work [20] uses the full extent of testing data, as well in the case of Top-K DCCA [21].The results of the ECs M3, M4, and M5 present lower results with average F1-scores of 20.36%, 3.50%, and 2.59%, respectively.The results of H5-2 and Top-K DCCA present general scores of 63.69% and 50.04%, respectively.Only M3 presents a score of 31.07%for the fault case 21, which still lies under the better performance results of H5-2 and Top-K DCCA with scores of 63.1% and 50.05%, respectively.respectively.
E. Discussion
The ECs improved the anomaly detection capability after implementing the window size.In the case of the MC EC M5, the general F1-score improved from 0.6 to 0.65 using a window of w = 50 for the latest score.In the case of H5-2, the results are remarkable, in which the general F1-score score improved from 0.63 to 0.88 using a window of w = 50 for the latest score.However, a side effect of the window is a delay effect on the ensemble prediction, which is reflected while comparing Fig. 4d and Fig. 4f.
There are remarkable effects on the EC M3 performance while doing variations on the retraining parameters, namely, threshold size, window size, and detection patience.The results are mixed, and the average performance depends on the studied anomaly.However, from the results, it is possible to identify that a threshold of T h = 150 presented the best average results for anomaly 7.In contrast, a threshold of T h = 350 presented the best results for anomaly 8.Alternatively, the plots of Fig. 6 visualize the performance of each class while doing variations on the threshold.The MC EC M3 presents an overall good performance while applying anomaly 8, in which the EC classifies the known cases mostly correctly and has a limited detection of the anomaly.In contrast, the anomaly detection feature decreases the performance of the known fault cases for some fault cases, which is visually represented in Fig 6a while applying anomaly 7. Variation of the window size reported favorable average performance results for a window of me = 50 while considering all the anomalies (7,8,15).In contrast, the plots of Fig. 7 show that the best results correspond to the window size me = 20 while applying anomaly 8, in which the EC classifies known cases properly, and it has a limited detection of the anomaly.Likewise the threshold experiments, a similar effect of decreasing classification performance of the known cases is detected.Generally, a patience of pt = 5 presented the best average results for all the anomalies (7,8,15).In contrast, the plots of Fig. 8b show that the best results correspond to the patience pt = 15 while applying anomaly 8, in which likewise the window size experiment, the EC classifies the known cases mostly correctly, and it has a limited detection of the anomaly.Likewise the threshold and window size experiments, the performance of the EC is affected by some faults while using the anomaly detection approach.
The performance of the retrained MC ECs presented mixed results.For instance, the EC M3 detected the anomaly cases (4,5,7,11,13) with FDR scores higher than 77% and the anomalies (10,20,21) with FDR scores higher than 53%.However, the results of the retrained ECs presented a lower performance than other literature contributions.The average FDR scores of M3, M4, and M5 are 50.18%,43.60%, and 51.44%.It is important to remark that the retrained models only use 250 samples as training data (only 52% of the available data), in which other fault cases might be included as a side effect of the parameter patience.
VI. USE CASE: PRODUCTION ASSESSMENT USING INFUSION ON A BULK GOOD SYSTEM
As described in section IV-C, the approach's novelty is a methodology for the information fusion of data-based and knowledge-based models.The methodology primarily uses a novel framework for combining n number of models using DSET.
This section presents the results of the information fusion approach and an ablation study considering the different system configurations.The system configurations consist of the detection system using: the data-based model, the knowledge model, or a hybrid model (data-based model together with a knowledge model) using information fusion.We test the approach using a dataset of an industrial setup, namely, a bulk good system laboratory plant.We describe the testbed and the dataset.We present the results and a discussion of the findings.Fig. 2 displays the main blocks of this section: the data-based model (ECET), the knowledge-based model (KLAFATE), and the outer module for the information fusion of both models.
A. Description of the Bulk Good System Laboratory Plant and Dataset
The bulk good system (BGS) laboratory plant is an industrial setup used for testing production and fault detection experiments.The BGS consists of four stations that represent standard modules of a bulk good handling system on a small scale: loading, storing, filling, and weighing stations.A detailed description of the BGS and applications can be
B. Experiment Design
This subsection presents the methodology followed for the ECET and INFUSION experiments using the BGS dataset.Besides, we describe the performance metric used to compare the experiments.
1) ECET using the BGS Data: We followed the same methodology of [20] for the creation of MC ECs using the BGS data, which includes the pool of base classifiers, the grid of hyperparameters of each classifier, and the grid of hyperparameters for each EC.We used the data-based models: decision tree (DTR), K-nearest neighbors (KNN), AdaBoost (ADB), support vector machine (SVM), and naive Bayes (NBY).For this purpose, we first trained the pool of classifiers using only ML models, which implies the search for the proper hyperparameters for each model.The second step is creating the ECs, using the EC hyperparameters.The last step presents the inference results of the ECs while injecting the BGS data.
2) INFUSION using the BGS data: The knowledge-based model KEXT was presented in [10], in which we describe the knowledge rules.We only use the failure modes fm1, fm2, and fm3 for the INFUSION experiments.We present a comparison table using knowledge, data fusion, and knowledge and data fusion models.The KEXT model represents the knowledge model.The data fusion models are represented by the ECET ECs models and a fusion of two data-based models.Lastly, the knowledge and data fusion models are represented by the combination of the SVM-KNN-KEXT models and the INFUSION models composed of an MC EC and the KEXT model.
3) Performance Metrics: We use the F1-score as the main performance metric to compare the different experiments.Panda et al. [39] present a detailed description of the F1-score calculation.
C. Results
This subsection presents the results using the BGS data for the ECET and the INFUSION architectures.For this purpose, we present the F1-score results of the models or ECs.Besides, we display the confusion matrix, classification predictions, and uncertainty for the different architectures.
1) ECET using the BGS Data: The first is to train the pool of base classifiers, which we performed using the module grid search of scikit-learn.Table X presents the hyperparameters of the base classifiers trained with the cases (1,2,3), which corresponds to the failure modes (fm1, fm2, fm3), respectively.The next step is applying the ECET methodology to find the most performing MC ECs.We obtained the ML-based MC ECs, shown in Table XI.The hyperparameters expert (Exp), diversity (Div), version of diversity (Ver), and pre-cut (PC) are set to False.score of 1.00, whereas the base classifiers DTR, KNN, and ADB have values of 1.0, 1.0, and 0.96, respectively.Fig. 9 presents the plots of MC ECs M3, M4, and M5 trained using the cases (1,2,3), which correspond to the failure modes (fm1, fm2, fm3), respectively.Fig. 9a, 9b, 9c show the confusion matrices for the MC ECs M3, M4, and M5, respectively.The confusion matrices present the same performance for the MC ECs M3, M4, and M5.Fig. 9d, Fig. 9e, Fig. 9f display the predictions in blue color compared with the ground truth in red color for the MC ECs M3, M4, and M5, respectively.Likewise, in the previous case, the prediction plots are identical for the MC ECs M3, M4, and M5.Fig. 9g, Fig. 9h, Fig. 9i present the DSET UQ for MC ECs M3, M4, and M5, respectively.In contrast to the previous plots, the uncertainty is reduced as the ensemble size increases.In the case of the MC EC M5, the model presents the clearest plot, except for the fm3, which has a noisy behavior.
2) INFUSION using the BGS Table XIII presents the F1-scores of the knowledge-based model, the fusion of databased models, and the fusion of data-based and knowledgebased models.The knowledge-based model is represented by the model using the KEXT methodology.The fusion of databased models is represented by the models using the ECET methodology (M3, M4, M5) and an additional case performing a DSET fusion of the data-based models KNN and SVM (without the ECET methodology).The fusion of data-based models and the knowledge-based model is represented by the models using the INFUSION methodology (IFS3, IFS4, IFS5) and an additional case performing a fusion of the models KNN, SVM, and KEXT.The KEXT model presents an average F1-score of 0.75, whereas the individual cases (1,2,3) presented values of 0.95, 0.79, and 0.52, respectively.The ECET and INFUSION models (IFS3, IFS4, IFS5) present the best average F1-score with a value of 1.00.The fusion of SVM and KNN presents an average F1-score of 0.96, whereas the fusion of KEXT, SVM, and KNN presents an improved average F1-score with a value of 0.98.It is important to remark on the INFUSION robustness, in which we perform the fusion of a high-performing ECET with a low-performing KEXT.The low performance of KEXT for some fault cases did not affect INFUSION's performance.INFUSION performance presents a steady high performance while examining table XIII and the confusion matrix from Fig. 10c.Alternatively, a detailed examination of the uncertainty provides an additional perspective on INFUSION's performance, in which the uncertainty presents areas with high values.Thus, uncertainty monitoring can be used to evaluate ECET and KEXT to determine the causes of low performance.
D. Discussion
The knowledge-based model KEXT presented mixed results, in which some faults are well identified or predicted.However, the strength of this approach relies on how well the rule represents a machine condition.Representing knowledge rules is a challenging task and often time demanding.An additional positive characteristic of the knowledge-based model relies on its explainability: an expert user can directly observe the logic and transform the rules.
Alternatively, the data-based models using ECET outperformed the knowledge-based model, which is clearly reflected in the F1-scores of Table XIII.However, the relationships between the features and outputs are often hidden (except for data-based models such as DTR, where the rules can be observed).It is important to remark on the number of features the models use, in which the knowledge-based models are built using less than ten features.In contrast, the ECET models are built using 133 features.
The fusion of data-based and knowledge-based models slightly improved the overall system's performance.The fusion model SVM-KNN-KEXT presented an improvement of fault 3 to the fusion model SVM-KNN, with scores of 0.95 and 0.92, respectively.In the case of INFUSION, the ECET results were already outstanding, resulting in a predominant effect on the fusion.The poor performance of some fault cases of KEXT did not affect the system performance.
The INFUSION methodology performed a fusion of the KEXT knowledge-based model and the ECET data-based models.No performance changes were reported since the ECET data-based models (M3, M4, and M5) presented already
• The (re)-training pool of classifiers module, which is formed by the blocks: train model using either the prior training data D T r , or using the re-training data D T r ′ .model validation either the prior validation data D V a , or the new validation data D V a ′ .uncertainty quantification
Figure 4 :
Figure 4: Anomaly detection using different window sizes for the MC EC H5-2 trained with the known cases 0,1,2,6,12, and using the fault case (7) as an anomaly.The confusion matrices of H5-2 are displayed in (a)-(c), and the predictions in (d)-(f).
( a )
Bar chart for M3 using A7 (b) Bar chart for M3 using A8 (c) Bar chart for M3 using A15
Fig. 10
presents the plots of the main models: the KEXT knowledge-based model, ECET data-based model (M3), and the INFUSION model (fusion of KEXT and ECET).
Fig. 10a, 10b, 10c show the confusion matrices for the models KEXT, ECET (M3), and INFUSION (IFS3), respectively.The confusion matrices with the best performance correspond to the models ECET and INFUSION.In contrast, KEXT presents a poor performance by detecting fm3.Fig. 10d, Fig. 10e, Fig. 10f display the predictions in blue color compared with the ground truth in red color for the models KEXT, ECET, and INFUSION, respectively.The clearest plots correspond to the ECET and INFUSION models, whereas the KEXT model presents a noisy plot.Fig. 10g, Fig. 10h, Fig. 10i present the DSET UQ for the models KEXT, ECET, and INFUSION, respectively.In the case of KEXT, the plot presents a continuous line since the expert team can only change the uncertainty's value.In contrast, ECET presents an extremely noisy plot for the fm3.In the case of INFUSION, the plot presents a steadier uncertainty.
Table I :
List of symbols and abbreviations.
Table II :
Anomaly detection results of selected ensemble multiclass classifiers using all the fault cases, and F1-score.
Table III :
Classification results of the ECs after retraining using all the fault cases, and F1-score.The retraining parameters are threshold size th = 100, window size ws = 20, and detection patience pt = 15.
Table VII :
Classification results of the RT ECs after retraining using all the fault cases, and F1-score.The retraining parameters are threshold size th = 250, window size ws = 20, and detection patience pt = 15.
Table
IX compares the anomaly detection results between our approach and the literature.We use the FDR to compare our results with the literature results.The retrained MC ECs M3, M4, and M5 present lower results with average FDR scores of 53.02%, 41.68%, and 35.04%, respectively.The MC ECs M3 and H3-4 present FDR scores of 87.97% and 73.76%, respectively.The approaches DPCA-DR, AAE, and MOD-PLS have FDR scores of 83.51%, 78.55%, and 83.83%,
Table VIII :
Classification results of the ECs after retraining using all the fault cases, and F1-score.The retraining parameters are threshold size th = 250, window size ws = 20, and detection patience pt = 15.
Table IX :
Classification results of the ECs after retraining using all the fault cases, and FDR.The retraining parameters are threshold size th = 250, window size ws = 20, and detection patience pt = 15. | 14,553 | 2023-08-31T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
HLE-UPC at SemEval-2021 Task 5: Multi-Depth DistilBERT for Toxic Spans Detection
This paper presents our submission to SemEval-2021 Task 5: Toxic Spans Detection. The purpose of this task is to detect the spans that make a text toxic, which is a complex labour for several reasons. Firstly, because of the intrinsic subjectivity of toxicity, and secondly, due to toxicity not always coming from single words like insults or offends, but sometimes from whole expressions formed by words that may not be toxic individually. Following this idea of focusing on both single words and multi-word expressions, we study the impact of using a multi-depth DistilBERT model, which uses embeddings from different layers to estimate the final per-token toxicity. Our quantitative results show that using information from multiple depths boosts the performance of the model. Finally, we also analyze our best model qualitatively.
Introduction
SemEval-2021 Task 5: Toxic Spans Detection (Pavlopoulos et al., 2021) consists in detecting which spans make a text toxic. This is quite relevant for nowadays lifestyle in which, aggravated by the COVID-19 pandemic, online conversations have become key to communicate with our family, friends and job mates, or socialize through social networks and streaming chats. Being able to moderate all this digital content is crucial in order to promote healthy online conversations and discussions.
To tackle this problem, in HLE-UPC we have used a BERT-based model with a fully-connected layer on top to perform Named-Entity Recognition and Classification (NERC), with the goal of tagging each word as either toxic or not. Moreover, we have studied and proved that the use of information from different-depth layers enriches the final classification.
Our contributions to Toxic Spans Detection are: • The proposal of an ensemble of three different multi-depth DistilBERTs, achieving an F1score of 68.54% and being ranked 14th out of 91 teams in the challenge, just 2.29% below the best performing model.
• The study of multi-depth BERT-based models in the task of Toxic Spans Detection, showing an improvement on the performance compared to non-multi-depth architectures.
• A qualitative analysis presenting some ethical concerns regarding racial bias.
2 Related work Toxicity The task in which we are participating is not the first one to focus on text toxicity. Without going any farther, in last year's edition of SemEval we can find Task 12, also known as OffensEval 2020 (Zampieri et al., 2020), in which the goal was to identify offensive language in multilingual social media data. In the previous year's competition, SemEval 2019, Task 6 (Zampieri et al., 2019) was also tackling the identification and categorization of offensive language in social media.
NERC All the mentioned models approach the task as Sequence Classification, this is, encoding a whole sentence and providing a unique prediction for it. Toxic Spans Detection, however, goes a step further by asking participants to detect toxic spans, the exact characters or words that make a text toxic. For this reason, instead of modelling the task as sentiment analysis or document/comment classification, it seems more natural to approach it as token classification, generating an output for each token. More specifically, this task could be seen as a Named Entity Recognition and Classification (NERC) task, in which the goal would be to output the most probable sequence of labels (toxic or not) given an input sentence.
In the field of NERC, we also find some interesting models. The state-of-the-art today are attentionbased models, usually stemming from transformers such as BERT (Devlin et al., 2019), which can be easily converted into a token classifier by adding a simple linear layer on top of the per-token output. However, we can also find some other attentionbased models using CNNs (Baevski et al., 2019) or even recurrent architectures such as Jiang et al.
Data Description
For this task, the organizers provide us with the Toxic Spans Detection (TSD) dataset, also presented in Pavlopoulos et al. (2021), containing phrases and comments that may contain toxic spans. Together with each comment, there is the set of indices of the characters that are considered toxic.
The TSD dataset is split into three subsets: trial, train and test sets with approximately 700, 8000 and 2000 comments respectively. All the models presented in this work have been trained exclusively on the TSD training set, while the trial set has been used to validate our systems. Finally, the test set has served to evaluate the performance of our final models using the available limited submissions for the competition.
The TSD dataset contains very diverse comments. Some of them seem quite simple, but others may be ambiguous, require context knowledge or an understanding of tone, which makes the task extremely challenging. There are also some words that have been written in an ingenious way, to avoid naïve toxic detectors, or that are bleeped or censored. Following we present a couple of examples, where toxic characters are underlined: • This is a stupid example, so thank you for nothing a<EMAIL_ADDRESS>• I bet you can't wait to see him behind bars.
Data Cleaning
With a simple data exploration, it can be seen that approximately 90% of the toxic spans exactly match with word boundaries, but in the remaining cases we find strange cases such as the following ones: 1. You are an idiot: There is a whitespace as a toxic span boundary.
2. You are an idiot: A random singleton character is marked as toxic.
3. You are an idiot: "Y" is not marked as toxic but "ou" is.
The majority of these inconsistencies are already known by the organizers of the task and other participants. However, they should still be tackled to provide the best data possible to our models. For this reason, we have cleaned the data using three simple steps and following the idea of toxicity coming from complete words but not from single characters. For each group of consecutive annotated toxic offsets: 1. Iteratively remove the first or last toxic offset if it belongs to a whitespace. This solves the first type of inconsistencies.
2. Remove the toxic offset if it is a singleton: a single consecutive character marked as toxic. This helps in the second type of strange cases.
3. Iteratively left-expand the range of toxic offsets if the previous character is alphanumeric (so it belongs to the same word). Same for right-expansion. This solves the third problem by including the offsets of the whole word as toxic whenever more than one character is marked as so. 1 After cleaning the data, almost the totality of the annotations matches word boundaries. On one side this confirms our hypothesis that toxicity comes from words or expressions but not from characters. On the other side, this enables a word by word analysis in a consistent and robust manner. Nevertheless, the task remains challenging given the subjectivity of the annotations.
Preprocessing
Once data is cleaned and before feeding it to the models, we lower case the text and tokenize it using WordPiece (Wu et al., 2016), the tokenizer used by BERT-based models, which splits text into (usually) sub-word units. Each of these units has its associated token embedding at the first layer of the respective models.
In this step, we also use the information of the already-cleaned toxic offsets to create a per-token binary label regarding its toxicity.
Models
LSTM Long Short-Term Memory was introduced in 1991 by Hochreiter and Schmidhuber (1997) as an extension of recurrent neural networks (RNNs), providing them with the ability to capture and memorize long-term dependencies and therefore help prevent the vanishing/exploding problems (Bengio et al., 1994;Pascanu et al., 2013).
We use an LSTM tagger as our baseline model to determine the lower bound performance that we should compare with. We use it as a first approach to solve the task, even though we know that the sentences of the dataset might be too long for the network to memorize and capture all long-term dependencies and the entire sentence context. As input for this model, we use pre-trained word embeddings from GloVe (Pennington et al., 2014).
Attention-based models In 2018, Google Research released Bidirectional Encoder Representation from Transformer (BERT) (Devlin et al., 2019) which achieved many state-of-the-art results on different NLP tasks. This success led to the creation of a lot of new models and improvements based on the BERT architecture: DistilBERT, RoBERTa, ALBERT, ... This architecture uses the same multihead transformer structure presented by Vaswani et al. (2017), which is basically composed of several stacked Transformer blocks/encoders, including self-attention and feed-forward modules. These help the model obtain richer word representations by finding correlations with other tokens in the sentence.
For our task, we use two BERT-based models, BERT and DistilBERT, with a token classification head -a linear layer on top of the hidden state output of the last Transformer encoder-. These models are pre-trained on huge corpus from different sources and fine-tuned for our downstream task.
Multi-depth models Based on the previously presented BERT-like models, we implement a modification that consists in feeding the classification layer an augmented embedding for each token. This augmented embedding is formed by concatenating the hidden outputs of different Transformer blocks, instead of using the last output directly as done in common models for token classification. The empirical results show that using embeddings from different layers provides better representations and boosts the model's performance.
Postprocessing
Once a model outputs its predictions, we loop through them and, for those tokens predicted as toxic, we take their offsets and add them to the final set of toxic spans for that sentence.
Additionally, we add a postprocessing step to increase the correctness of our predictions regarding white characters. These are not returned as tokens by the tokenizer but occupy a character offset. For this reason, for each pair of consecutive tokens predicted as toxic, we also include to the final set the offsets of any white characters in between.
Results
All the results presented in this section have been calculated using the official metric, the F1-score on the predicted toxic offsets. For detailed information please refer to Pavlopoulos et al. (2021).
Model Comparison
In Table 1 we report the results for the best configuration of each of our models both in the official trial and test sets. In these results, we can first note that all the models clearly outperform our baseline. Moreover, using the information of multiple layers is proved to be beneficial for this task, as it improves each of the respective base models, by 0.64% -1.42%. Finally, note that although BERT is larger and more powerful than DistilBERT, it performs poorer in the test set. This might be due to the fact that we select our hyperparameters based on the F1-score on the trial set, which is relatively small and may not be representative of the test data. For this reason, our submitted model is a multi-depth DistilBERT, as it provides better generalization within this task and data.
Layer Selection
In this study, we have trained a multi-depth Distil-BERT using the outputs of different layers or trans- former blocks to study its impact on the model's performance. Table 2 shows the results for different experiments in which we have concatenated the outputs of the last N layers of DistilBERT before feeding these enlarged hidden states to the fully connected layer that performs classification.
Results show that performance can be improved by adding different block's outputs, but can also degrade when using too many. For DistilBERT, which has 6 transformer blocks, the sweet spot seems to be using the last 3 layers. Using all 6 also provides good results, which may imply that the first layer's output is also quite informative for this task in which words themselves already help predicting their toxicity.
Ablation Study
In these experiments, we took apart one component of our system at a time to see its effect on the system's performance. The main components of our method are presented in Section 3, and details about our implementation can be found in Appendix A. Table 3 shows the results for this study, in which we can easily see that all components work towards the performance of our model. Apart from the multi-depth component, which has already been studied, Dropout has been key for our giant model to generalize and prevent overfitting the small data.
Using Label Smoothing has also helped, letting the model adapt to the intrinsic subjectivity of the annotations.
Regarding data preparation, it can be seen that the cleaning step has been crucial for the good performance of our system, supporting the known quote "Garbage in, garbage out". Finally, our simple postprocessing stage has also provided some tenths to the final performance.
Ensemble
Given the results we obtained with single models, we found it interesting to mix some of them to see if they were focusing on different parts of data and could improve the predictions while working together.
Following this idea, we created a simple majority-voting ensemble using the multi-depth models with "last N layers" for N = 1, 3, 6; this is, a base DistilBERT, a model that concatenates the output of the last 3 transformer blocks and another one that uses all 6 layers of DistilBERT.
The final result for this ensemble is 69.34% in the trial set -used as validation-and 68.54% in the test set, our best submission. Note that although being worse than our best single model in the trial set, it has better generalization skills and boosts the performance in the unseen test set.
Qualitative
Apart from the quantitative analysis done before, we analyze in a qualitative manner the performance and behaviour of our best model, to see how well detects offensive and toxic words and in which cases it fails.
Below we present some examples of sentences in the dataset together with their ground truth spans and the detection done by the model. The ground truth toxic words appear underlined while the prediction is shown in red.
Correct predictions We observe how our system is highly capable of identifying toxic and offensive words, both when they appear alone and in multi-word expressions.
• Billy, are you a complete idiot, being thick headed or just not reading what people...
• People insist on being dumb. No other explanation.
• Could you please kill yourself?
Wrong predictions However, our system also fails in some challenging comments. As seen below with the word "poorly", our method misses some words marked as toxic which are not very offensive or disrespectful but can become toxic due to the context.
• People don't buy that poorly built Russian houses...
In other cases, our system identifies toxicity when it is not annotated, although under our perspective the prediction seems correct. This could be due to the ambiguity of the task or inconsistencies in the annotations. An example of it is the expression "freaking donkeys": • These freaking donkeys all need to be removed from office. I'm so sick and tired of... Finally, our model fails to detect connectors such as "of" and "and" in between toxic words. In the dataset there are several annotation philosophies: some annotations tend to mark entire expressions as toxic and some others are more word-oriented, excluding connectors between words.
• Are these some of those Russian pieces of crap that they seem to be building all over Alaska.
Ethical concerns While doing the qualitative analysis we found several examples indicating that there could be racial bias in the predictions of our model, and although it is beyond the scope of the challenge, we found important to pay attention to it. For this reason, we took some examples from the trial set containing comments about races and changed the words referring to races or origin by others. Below we show an example. The first comment belongs to the competition dataset, while the other is a modification of it, with words "black" changed for "white" and vice versa, and "Mexican" changed for "American".
• Black folks built this nation and got lynching for the work. Heck, white folks can be so mean that when they lost their slaves they invited illegal Mexican immigrants to do the work black slaves use to do.
• White folks built this nation and got lynching for the work. Heck, black folks can be so mean that when they lost their slaves they invited illegal American immigrants to do the work black slaves use to do.
We observe that in both cases the system identifies the word "black" as toxic, but not "white", even when these non-toxic adjectives are the only difference between them. Furthermore, the system only identifies "immigrants" as toxic when appearing next to "Mexican" but not with "American".
This undesired discrimination happens because there are lots of racist comments in the dataset, which are obviously annotated as toxic. Given that it seems there are more comments against some specific ethnic groups than others, the system associates certain racial references with racism and thus with toxicity.
This is a problem that comes from the data, including the one used in the pre-training phase of BERT models. However, there are several de-bias techniques in the literature (Manzini et al., 2019;Sun et al., 2019;Liang et al., 2020) that could be applied to our model to alleviate it.
Conclusion
In this work, we have presented a solution for the SemEval-2021 Task 5: Toxic Spans Detection competition, which is a challenging task due to the subjectivity of toxicity and the requirement of context knowledge.
During the development of our solution, a multidepth DistilBERT model, we have proved the power of pre-trained models and transfer learning to a downstream task with limited data, at the same time that we have demonstrated the benefits of combining the outputs of multiple BERT models' layers for token classification.
With an F1-score of 68.54% the presented model ranks 14 out of 91 participating teams in the competition and, although it presents some racial bias that could be corrected, from the qualitative results we conclude that it has a very good performance, hence being able to be used in real-life applications. | 4,198.6 | 2021-04-01T00:00:00.000 | [
"Computer Science"
] |
The Aurora-A inhibitor MLN8237 affects multiple mitotic processes and induces dose-dependent mitotic abnormalities and aneuploidy.
Inhibition of Aurora kinase activity by small molecules is being actively investigated as a potential anti-cancer strategy. A successful therapeutic use of Aurora inhibitors relies on a comprehensive understanding of the effects of inactivating Aurora kinases on cell division, a challenging aim given the pleiotropic roles of those kinases during mitosis. Here we have used the Aurora-A inhibitor MLN8237, currently under phase-I/III clinical trials, in dose-response assays in U2OS human cancer cells synchronously proceeding towards mitosis. By following the behaviour and fate of single Aurora-inhibited cells in mitosis by live microscopy, we show that MLN8237 treatment affects multiple processes that are differentially sensitive to the loss of Aurora-A function. A role of Aurora-A in controlling the orientation of cell division emerges. MLN8237 treatment, even in high doses, fails to induce efficient elimination of dividing cells, or of their progeny, while inducing significant aneuploidy in daughter cells. The results of single-cell analyses show a complex cellular response to MLN8237 and evidence that its effects are strongly dose-dependent: these issues deserve consideration in the light of the design of strategies to kill cancer cells via inhibition of Aurora kinases.
INTRODUCTION
The Aurora-A kinase is a major regulator of cell division and operates in distinct processes required for spindle assembly: in human cells it regulates separation and maturation of centrosomes at mitotic entry, mitotic microtubule (MT) nucleation [1][2][3] and the integrity of spindle poles [2,4,5]. Recent data also indicate a role of Aurora-A in central spindle assembly at telophase [6,7]. The highly homologous Aurora-B kinase also operates in control of the fidelity of chromosome segregation, by regulating chromosome condensation, correction of improper attachments between MTs and kinetochores, spindle checkpoint function, cytokinesis and abscission [8].
As other mitotic regulators, Aurora kinases are often abnormally expressed in tumor cells and are being investigated as targets of anti-mitotic compounds for cancer therapy [9,10]. Many efforts have converged in the last years to develop Aurora inhibitors: molecules acting as ATP-competitors have been identified and some of them are currently in clinical trials [11]. Only a few of those molecules discriminate Aurora-A vs Aurora-B and may thus prove useful both in clinical studies for comparing the efficacy of anti-tumor responses and for dissecting the functions of Aurora kinases in mammalian cells. MLN8237 (Alisertib) is a second generation Aurora inhibitor currently undergoing Phase-I/III clinical trials [11-16; www.clinicaltrials.gov]. Thus far, it is one of the molecules displaying highest specificity for Aurora-A over Aurora-B (300-fold in in vitro assays and 200-fold in HCT116 colorectal carcinoma cells [17]). Most pre-clinical studies based on whole cell population analyses in tumor cell lines showed cell growth inhibition, accumulation of polyploid cells over time, as well as induction of cell death [17][18][19]. Anti-tumor activity was also demonstrated in xenograft mouse models [17,20,21].
Available data on MLN8237-treated cells were mostly obtained from asynchronous cultures analyzed in bulk populations. This approach reveals the predominant cellular behaviour after long exposure to Aurora-A inhibition (24 to 96 hours) but can miss out transient phenomena and so mask the unfolding of relevant processes. In addition, inhibition of as pleiotropic a kinase as is Aurora-A, yields multiple phenotypes over time, making it difficult to dissect distinct functional roles within a bulk population. Microscopy-based single cell analyses are proving of critical importance to visualize the array of possible cell responses to anti-mitotic drugs [22]. Here we have coupled high resolution microscopy and high-throughput analysis of single cells treated with increasing doses of Aurora-A inhibitor to investigate the possible fates of cells with inactive Aurora-A.
A protocol was set up for treating pre-synchronized cultures when they reach G2 and analyze progression through G2 and mitosis as soon as Aurora-A inhibition is achieved. Because MLN8237 induces spindle pole abnormalities [23], we assessed the occurrence of chromosome mis-segregation events and aneuploidy induction, which would represent undesirable effects of the treatment in anti-cancer therapy. Our results highlight a partial specificity of MLN8237 in the U2OS cell line, with multiple cellular responses in a dose-dependent manner. The single cell analysis enabled us to depict a fraction of cells with defective spindle orientation, a defect that was not appreciated in previous studies of Aurora-A inhibition in human cells. In addition, we find that low and high MLN8237 concentrations yield mild and massive aneuploidy, respectively, representing a tumor-inducing or a tumor-suppressing condition [24]. Collectively, these results draw attention to the variability and the nature of cellular responses to the loss of Aurora kinase function, which may represent potential caveats deserving consideration when designing and interpreting clinical trials.
MLN8237 displays dose-dependent target selectivity on Aurora kinases
Prior to analyzing mitotic division in cells with inhibited Aurora-A, we sought to precisely define the specificity of MLN8237 inhibition in dose-response assays. We used the U2OS osteosarcoma cell line for its ease of cytological analysis, which renders it especially suitable for high-resolution single-cell microscopy analysis, and employed in our previous studies of RNA interference-mediated Aurora-A inactivation [4,5,23].
We set up a protocol by pre-synchronizing U2OS cells at the G1/S transition by thymidine treatment, then releasing from arrest into G2 and mitosis ( Figure 1A). MLN8237 was added 6 hours after thymidine release (late S-phase/early G2) and cells were harvested after further 4 hours. Aurora-A activity was measured at the single cell level by anti-Aur-A-phospho-Thr288 immunofluorescence (IF) staining in dose-response assays ( Figure 1B, left panels). Aurora-A auto-phosphorylation was significantly inhibited at concentrations ranging from 5 nM to 250 nM. With concentrations higher than 20 nM the residual signal at spindle poles was below 15% compared to controls. In Western blot analysis, no phospho-Thr288-Aurora-A was detectable in mitotic extracts from cultures treated with 20 and 50 nM MLN8237 for 4 hours, while some residual amount was present after 1 hour ( Figure 1C).
Previous reports indicated that MLN8237 above 100 nM also inhibits Aurora-B activity in other cell lines [19,[25][26][27]. We therefore assessed the specificity of MLN8237 by measuring Aurora-B activity using anti-Aur-B-phospho-Thr232 antibody ( Figure 1B, right panels). Surprisingly, we noticed that Aurora-B activity is already significantly compromised by 50 nM MLN8237; that was not evident when using anti-phospho-Histone-H3 (Ser10) as a reporter of Aurora-B activity (Supplementary Figure S1), possibly reflecting kinase redundancy or delay in detecting modulation of phosphorylation of downstream targets vs auto-phosphorylation.
Our single-cell analysis in U2OS cultures delimits therefore a narrow MLN8237 concentration window (20-50 nM) yielding effective and specific Aurora-A inhibition.
MLN8237 delays mitotic entry and prolongs mitotic duration in a dose-dependent manner
We investigated the influence of MLN8237 on mitotic entry: after 4 hours of treatment, we found a significantly lower percentage of mitotic cells in MLN8237-treated cultures compared to controls ( Figure 2A). This effect is dose-dependent, appearing at ≥ 20 nM, and is stronger at 250 nM MLN8237. To clarify whether cells were arrested in the G2 phase or rather delayed in progression through the G2/M transition, we recorded cultures in time-lapse experiments from the treatment start up to 16 hours later ( Figure 2B). The peak of interphases entering mitosis in control cultures (DMSO) was between 4 and 8 hours from the treatment start ( Figure 2B) and was not significantly affected by partial Aurora-A inhibition (5-10 nM MLN8237). Entry into mitosis was instead delayed above 20 nM MLN8237. In the 16 hours of time-lapse recording, 70-80% of the interphase cells entered mitosis in all treated cultures (about 60% with 250 nM MLN8237): thus, the majority of cells exposed to MLN8237 are delayed in G2, yet mitotic onset is not prevented.
Extending the time-lapse recording to 30 hours indicated that MLN8237 prolonged the duration of mitosis in a dose-dependent manner (about 300 minutes with 50 nM, compared to about 80 minutes in control cells; Figure 3). Importantly, although slowed down, cells eventually exited mitosis. Together, the results indicate that both the G2-to-mitosis transition and the overall duration of the mitotic process are strongly dependent on Aurora-A.
Inhibition of Aurora kinases yields impaired MT nucleation, disorganized spindles and multipolar or failed cell division
We next analyzed spindle structure in cells that entered mitosis with different degrees of Aurora-A and Aurora-B inhibition.
With the highest MLN8237 concentration (250 nM) a strong impairment of MT nucleation was evident, with 70% of prometaphases displaying no MTs ( Figure 4); this was associated with a prolonged prometaphase duration in time-lapse recording experiments, yielding an accumulation of prometaphase figures over all mitoses in fixed samples (Supplementary Figure S2). The MT nucleation defect was strongly dose-dependent and appeared in a relevant fraction of mitotic cells treated with 50 nM MLN8237 or above.
In cells in which MT nucleation was not visibly affected, spindles were highly disorganized (affecting 30% to 60% of all prometaphases with 10 nM or above; Figure 4); a fraction of these prometaphases displayed spindles with multiple poles, consistent with previous observations [17,23]. A non-significant fraction of monopolar spindles was present at 10-20 nM MLN8237.
The influence of these defects on the global execution of mitosis was examined in depth in timelapse movies of MLN8237-treated cells. Figure 5 B. Quantification of IF signals for active pThr288-Aurora-A (left, mean intensity at poles) or active pThr232-Aurora-B (right, sum intensity at chromosomes) in control (DMSO) or MLN8237-treated prometaphases is shown in the box-plots (center lines show the medians; box limits indicate the 25th and 75th percentiles as determined by R software; whiskers extend 1.5 times the interquartile range from the 25th and 75th percentiles, outliers are represented by dots). Fluorescence intensity is shown in arbitrary units (a.u.). **: p<0.0001, unpaired t test or Mann-Whitney test. n=90 spindle poles (p-Aurora-A) or 50 prometaphases (p-Aurora-B) from 3 experiments. Representative IF images are shown. Scale bars: 10 µm. C. p-Aurora-A (active) levels decrease in mitotic extracts (shake-off) from MLN8237-treated (1 or 4 hours before harvesting) compared to DMSO-or nocodazole (NOC)-treated (4 hours) U2OS cultures. Total Aurora-A levels are also shown; actin is used as loading control. p-Aurora-B was not assessed in Western blot due to the lack of a suitable antibody for this application. originating two asymmetric daughter cells ( Figure 5, third row). Lower MLN8237 doses (10-20 nM), which yielded disorganized spindles in fixed prometaphases (Figure 4), do not yield multipolar divisions, suggesting that in those cultures a bipolar or pseudo-bipolar spindle is eventually assembled before anaphase.
In cultures treated with high MLN8237 doses we observed prolonged prometaphase (average duration 150-200 minutes) followed by a complete lack of chromosome segregation and cell division ( Figure 5, fourth row): cells eventually re-adhered to form a single large or multinucleated interphase, often preceded by repeated "blebbing" movements ( Figure 5, fifth row; Supplementary Movie S3). The "lack of division" phenotype appeared in a small fraction of mitoses treated with 50 nM MLN8237 and became predominant (about 90% of mitoses) with 250 nM. Similar phenotypes were previously observed in other human cell lines treated with high MLN8237 concentrations [26,27]. Since the activity assays in Figure 1 indicate dual inhibition of both Aurora-A and -B under high doses, it was important to establish the contribution of each individual kinase to the no-division phenotype appearing above 50 nM. Smallscale time-lapse recording of cell cultures subjected to individual (Aurora-A or Aurora-B) or combined (Aurora-A+Aurora-B) RNA interference were set up to clarify this issue (Supplementary Figure S3): the no-division phenotype was not recorded in mitoses with selective inactivation of Aurora-A alone, yet appeared in a small fraction of Aurora-B defective cells (15,4%) and was amplified by the concomitant inactivation of both kinases (41,9%). Mitotic index (MI) from control (DMSO) and MLN8237-treated cultures (protocol as in Figure 1A In summary, these results show that spindle organization is the most sensitive process affected by MLN8237 and is readily altered by even a partial reduction of Aurora-A activity, associated with unbalanced chromosome segregation. Stronger inhibition of the kinase induces impairment in MT nucleation, associated with prolonged prometaphase duration. The highest MLN8237 dose leads to a complete failure of cell division, largely ascribable to the concomitant inhibition of Aurora-B.
Aurora-A inactivation induces defects in the orientation of cell division
The time-lapse recording experiments also revealed that a fraction of cells did not divide parallel to the growing surface: that was already evident under conditions of partial Aurora-A inhibiton, occurring in about 15% of mitoses in U2OS cultures treated with 5, 10 or 20 nM MLN8237 ( Figure 6A-B and Supplementary Movie S4). These cells took a longer time to reach the stage of chromosome segregation from mitotic round-up (on average 65 minutes with 5-10 nM and 110 with 20 nM MLN8237) compared to control cells (35 minutes). The time from the onset of chromosome segregation to re-formation of daughter interphase cells was instead unaltered, indicating that the process of chromosome segregation per se was not disrupted. However, we often recorded a delay between re-adhesion of the lower and the upper cell (see the example in Figure 6A and Supplementary Movie S4). No significant induction of mis-oriented division was observed above 50 nM MLN8237 ( Figure 6B). In the case Figure 3 were recorded by timelapse from treatment start for the following 24 hours. DIC images were acquired with a 40x objective every 5 minutes; representative single photograms are shown; time from round-up is indicated. First row: normal mitosis; second and third rows: multipolar mitoses (a, b and c indicate daughter cells); fourth and fifth rows: mitoses passing directly from prometaphase to defective interphase, with (lower) or without (upper) a "blebbing" phase. Defects are quantified (%) in the table below; number of recorded mitoses (n) and independent experiments per condition are indicated. *: 0.01<p<0.02; **: p< 0.001, χ2 test. Scale bar: 10 µm.
Figure 4: Spindle defects in MLN8237treated mitoses. Cultures harvested 4 hours
after MLN8237 treatment (protocol as in Figure 1A) were stained for DNA and alphatubulin. Histograms represent the percentage of prometaphases displaying normal or defective spindles (IF panels on top). 250-300 counted cells per condition from 3 experiments; s.d. are shown. Scale bar: 10 µm. www.impactjournals.com/oncotarget of 250 nM MLN8237 treatment, this is consistent with the predominance of the no-division phenotype (see Figure 5). The absence of mis-oriented divisions with 50 nM, which induced highly disorganized spindles and/or defective MT nucleation (see Figure 4), suggests that MTs are required and that spindle mis-orientation drives the abnormally oriented divisions.
To gain better resolution we used an U2OS cell line derivative stably expressing fluorescently labelled alpha-tubulin and histone H2B in video recoding assays (Supplementary Figure S4). MLN8237-treated mitoses that did not divide parallel to the culture dish displayed spindle rotation during prometaphase, such that often only one of the two poles was visible ( Figure 6A, lower panels). To define the spindle orientation axis, we analyzed fixed samples and measured the angle formed between the centrosome-centrosome axis and the growing surface ( Figure 6C). This analysis was performed in prometaphases from cultures treated with 5, 10 or 20 nM MLN8237 compared to controls. The average angle in control prometaphases was 11° and almost doubled (19°, mean value) in MLN8237-treated cells (p<0,01), reaching a >30° distortion in about 20% of prometaphases. Thus, the inhibition of Aurora-A under conditions that do not impair spindle formation altogether influences the proper orientation of the spindle axis and hence of cell division.
Induction of aneuploidy in the progeny of MLN8237-treated mitoses
Some of the defects observed in MLN8237-treated mitoses suggest the possibility that genetically imbalanced daughter cells are generated.
To address this issue we treated pre-synchronized cells with MLN8237 in G2 as described, fixed the cells after 24 hours and screened defects in interphase cells presumably representing the progeny of treated mitoses ( Figure 7). Cells were stained with combinations of DAPI and antibodies against alpha-tubulin and pericentrin (examples in Figure 7B, upper panels) or lamin-B1 and alpha-or gamma-tubulin (middle and lower panels) and categorized: cells with loss or gain of 1 or few chromosomes (1-2 micronuclei), polyploid (1 large nucleus with 4 pericentrin or gamma-tubulin signals), binucleated, or multinucleated.
Indeed, following treatment with 250 nM MLN8237, most interphases appeared to have undergone chromosome mis-segregation (>65% multinucleated cells); a smaller fraction (about 10%) became polyploid. Only less than 10% of interphases were apparently normal ( Figure 7C). Abnormalities were also observed in cells generated during treatment with 50 nM MLN8237: some 20% were multinucleated or polyploid and about 15% were binucleated. Interestingly, in about one third of the binucleated cells the nuclei were not equivalent in size, suggesting that they represent aberrant products of the multipolar/asymmetric divisions (see Figure 5). Timelapse imaging of fluorescently labelled U2OS cells showed that some interphases remained very close after division with a connecting alpha-tubulin bridge (Supplementary Figure S4), possibly representing intermediate figures before binucleation. 20 nM and 50 nM MLN8237 also yielded a significant induction (14-15%) of cells with micronuclei, indicative of mild aneuploidy ( Figure 7C). To assess whether micronuclei reflected genuine chromosome loss events, and hence aneuploidy, we assessed whether they contained whole chromosomes by staining with CREST antibodies to human centromeres ( Figure 8A). We observed a 6-and 17-fold increase of CREST-positive micronuclei in 20 and 50 nM MLN8237-treated cultures, respectively, compared to controls. Consistent with this, recording of GFP-labelled chromosomes dynamically visualized chromosome bridges and micronuclei formation; interestingly, these defects were always present in mitoses with multipolar spindles (Supplementary Figure S4).
These results indicate different extents of ploidy alterations in MLN8237-treated cultures, which we decided to investigate directly by employing FISH (Fluorescence In Situ Hybridization) analysis. We counted hybridization signals produced by chromosome-specific centromeric probes (chromosome 7 and 11) in interphases originated from MLN8237-treated mitoses (protocol in Figure 7). Most (about 85%) control interphases displayed 3-4 signals for both chromosome 7 and 11 ( Figure 8B Together these analyses indicate that MLN8237 treatment yields variable levels of aneuploidy in daughter cells, in a dose-dependent manner.
Long term fate of MLN8237-treated cells
The time-lapse analysis thus far indicates that MLN8237 (5-250 nM range) induces no significant cell death in U2OS mitotic cells. We investigated the longterm outcome of the treatment in high-throughput timelapse analyses for a length of time (48 hours) roughly corresponding to 2 division cycles. Using the fluorescently labelled U2OS cell line, we analyzed the data with an automated method, the CellCognition software [28], trained to classify cells as interphasic, mitotic, dead, multinucleated or polyploid (examples in Figure 9A). In the 48 hours of the recording time, the cell number increased threefold in control cultures, yet dose-dependent growth inhibition was observed in cultures treated with MLN8237, with almost no increase with 250 nM MLN8237 ( Figure 9B, left panel). Consistent with this, the number of normal interphase cells was dramatically reduced by the treatment (Figure 9B, right panel): this effect could result either from an increase in mitotic cells (due to prolonged mitotic duration) or from the generation of abnormal interphase cells. The kinetics of appearance of mitotic cells depicted two waves of division in control cultures ( Figure 9C). In MLN8237-treated cultures the first mitotic peaks were shifted in time and appeared broader. Both effects were dose-dependent, consistent with our data on mitotic entry and duration (Figures 2 and 3). At 250 nM inhibitor, no second wave of division was observed. Concomitantly, we observed a strong increase of multinucleated cells at 250 nM and a milder effect at 50 nM MLN8237. Some polyploid cells appeared under these conditions (below 3%). Detection of multinucleated and polyploid cells was therefore consistent with the results from fixed samples (Figures 7 and 8). Importantly, the induction of cell death remained below 3% throughout the recording time ( Figure 9C, bottom right panel), and remained at a similar level in one time-lapse experiment extended to 65 hours (not shown).
Together, the high-throughput data indicate that MLN8237 induces a dose-dependent lengthening of mitotic progression and the generation of abnormal daughter cells that are impaired in re-dividing, with a cytostatic effect, while low cytotoxicity is observed within the first 48 hours of treatment.
We also performed proliferation/cell death analyses after several days ( Figure 10) by cell counting and FACS measurement of the DNA content. After 48 hours of treatment, corresponding to the end of the high-throughput video-recording, a slight decrease in the total cell number was observed with 5-10-20 nM MLN8237 compared to control cultures ( Figure 10A); the overall proliferation trend remained comparable to controls over 96 hours. Cell growth inhibition was instead appreciated with 50 nM MLN8237 and even more effectively with 250 nM MLN8237 ( Figure 10A).
To assess cell death we measured cells with a sub-G1 DNA content by FACS analysis (Figure 10B). That revealed a generally higher level of cell death compared to the microscopy analysis, possibly reflecting technical specificities in the methodology: indeed, detached dead cells are counted by cytofluorimetry, while being preferentially lost in microscopy analysis. Nevertheless, by FACS analysis, only 250 nM MLN8237 induced remarkable cell death (about 30% of sub-G1 cells after 48 hours of treatment, increasing to about 50% after 96 hours, compared to about 10% in control cultures; Figure 10B).
Thus, MLN8237 treatment in concentrations that genuinely inhibit Aurora-A does not efficiently promote cell death in U2OS cells; a toxic effects is only observed after very long exposure times to high doses that simultaneously target Aurora-B.
DISCUSSION
Here we have used the MLN8237 small molecule inhibitor to investigate mitotic roles of the Aurora-A kinase in osteosarcoma U2OS cells; this molecule is under clinical trial in several cancer types including osteosarcoma.
We first determined the extent and specificity of kinase inhibition by assessing Aurora-A and Aurora-B self-phosphorylation. MLN8237 was effective over Aurora-A in the 20-50 nM range, but also inhibited Aurora-B above 50 nM. The loss of selectivity with high doses was previously reported [25][26][27]. The inhibition of Aurora-B identified here with 50 nM MLN8237 (a very close condition to that required for complete Aurora-A inhibition) was instead not noticed before using histone H3 phosphorylation as a reporter, indicating that the selection of activity reporters is critical to ascertain the selectivity of kinase inhibitors. It also evidences that the window at which MLN8237 fully inhibits Aurora-A, without concomitantly affecting Aurora-B, can be very narrow in some cell lines, indicating that selective inhibition of Aurora-A vs Aurora-B remains a critical issue, even with the best performing ATP-competitors.
MLN8237 treatment prolongs the G2 phase duration. The G2 delay was under-appreciated in previous studies using MLN8237 in asynchronous cultures, yet was observed when Aurora-A was inhibited by either antibody microinjection or RNA interference [1,29], or by conditional ablation in mouse embryonic fibroblasts (MEFs; [30]). No permanent arrest is however induced, suggesting that Aurora-A functions in G2 are important, but can be taken over by other kinases. Indeed, the G2 delay is more severe with 250 nM, under which condition Aurora-B is also inhibited. These findings raise the possibility that centrosome-and/or MT-associated defects induced by Aurora-A inactivation in G2 evoke a checkpoint response that delays the transition towards mitosis onset.
Dose-response analyses of MLN8237-treated cells that entered mitosis revealed processes that are differentially sensitive to Aurora-A inactivation, as depicted in the schematics in Figure 11. Complete Aurora-A inhibition impaired MT nucleation; consistent with this, previous data showed that partial vs complete RNA interference-mediated depletion of Aurora-A differentially affects maturation of centrosomes, required for mitotic MT nucleation [1,2]. Partial Aurora-A inhibition (10 nM MLN8237) instead yielded multipolar or disorganized spindles, the frequency of which increased in a dose-dependent manner up to 50 nM. Defective spindle assembly was associated with longer prometaphase duration compared to untreated cells. The highest frequency of induction of spindle organization defects (50 nM) was associated with multipolar divisions. 250 nM MLN8237 also prolonged prometaphase duration, after which cells re-adhered and eventually exited mitosis without division. The failure of chromosome segregation could be appreciated in timelapse experiments and was consistent with previous results obtained from concomitant inactivation of Aurora-A and B kinases by high MLN8237 doses in Hep3B or HeLa cells or, to a lesser extent, following Aurora-B inhibition in HeLa cells [26,27]. Our own recording experiments of interfered mitoses for Aurora-A, or -B, or both, confirm a mild effect of Aurora-B inactivation alone and a synergic effect of the inactivation of both kinases. Complementary functions of Aurora-A and B in chromosome segregation also emerged in chicken DT40 Aurora-A KO cells treated with an Aurora-B specific inhibitor [31]. None of the approaches used to inactivate Aurora-A alone in human cultured cells yielded chromosome segregation failure [2,27,32]. The latter was instead described in Aurora-A-null MEFs [30,33], suggesting specific modes of action of the two kinases in this cellular context, which may reflect a different stoichiometry between Aurora-A and B and/or their substrates.
An interesting finding from this study is the induction of mis-oriented divisions by low MLN8237 concentrations (5-20 nM); the inhibitor is selective for Aurora-A at these doses and, as recalled above, does not impair MT nucleation: this directly implicates the lack of Aurora-A in the mis-orientation phenotype and suggests that MTs are required. Interestingly, Aurora-A is implicated in spindle orientation in asymmetric cell divisions in Drosophila [34] via phosphorylation of Pins [35]. Excess Aurora-A can also influence the spindle orientation and the cell fate in human mammary epithelium stem/progenitor cells [36]. In our time-lapse assays using a U2OS cell line with fluorescent MTs, we actually visualized spindle rotation movements: thus, Aurora-A activity is required for pathways that determine the mitotic spindle orientation, raising the possibility that Aurora-A inhibition influences the fate of asymmetrically dividing cells and/or tissue architecture.
Time-lapse experiments also revealed that MLN8237 generates chromosome mis-segregation. At high concentrations, when both Aurora-A and Aurora-B are inhibited, massive aneuploidy is observed, with the generation of multinucleated daughter cells. Lower doses that inhibit Aurora-A alone yield mild aneuploidy, with increased number of CREST-positive micronuclei (schematics in Figure 11). FISH analysis depicted the differential generation of mild and massive aneuploidy by low and high MLN8237 doses, respectively. This is relevant, given the pro-or anti-tumorigenic effects of aneuploidy depending on the extent of chromosome missegregation [24]. Consistent with our observations, Yang and colleagues [37] described the generation of an 8N population following simultaneous RNA interferencemediated inactivation of Aurora-A and Aurora-B, but not of Aurora-A alone, in U2OS cells. Segregation defects (lagging chromosomes and chromatin bridges) were observed in Aurora-A-null MEFs, with increased ploidy over time [30,33], again evidencing a crucial contribution of Aurora-A to chromosome segregation in this system.
We also observed binucleated cells in the progeny of MLN8237-treated mitoses; in time-lapse analyses, telophase cells often remained in closer proximity to one another compared to controls, and tubulin bridges persisted, a phenotype that may possibly evolve into a binucleated cell. The observation of binucleated cells with 20 nM MLN8237 would support a recently proposed direct role of Aurora-A in late steps of cell division [6,7].
Importantly, MLN8237 failed to induce cell death in mitosis nor did it cause a highly effective elimination of the cellular offspring within the first 48 hours of treatment. Recent time-lapse studies with MLN8237 reported variable results regarding mitotic cell death: no mitotic toxicity was observed in Hep3B cells [26], whereas some death from mitosis was recorded in HeLa cells [27]. Aurora-A inactivation therefore appears to require a set of concomitant conditions, as yet elusive, for mitotic cell death activation. The highly aneuploid progeny generated in our assays at 50 nM (and, to a higher extent, 250 nM) MLN8237 originates cells impaired in further cell division, hence unviable in the long term. The mechanisms underlying the cytostatic effects of Aurora-A inactivation are controversial: a dose-and time-dependent induction of apoptosis was described in different cell lines treated with MLN8237 [18,19,38,39], while in other cases the MLN8237-induced cytostatic effect is attributed to senescence [40] consistent with results described in Aurora-A-null MEFs [33], or to induction of differentiation pathways [41]. These observations suggest that both the treatment parameters and the cellular background contribute to determine the long-term outcome of MLN8237-treated cultures.
In conclusion, the broad variability in the U2OS cell response to MLN8237 highlighted in this study is an important issue in the light of the use of this compound in anti-cancer therapy. In the human organism undergoing treatment, the compound dose cannot be constant and is expected to rise and fall over time. It will be important to extend these studies and shed light on the pathways driving the response towards one or another cell fate, with the perspective to modulate such choice and drive cells towards death pathways.
Cell cultures, synchronization protocols and treatments
The human U2OS osteosarcoma cell line (ATCC: HTB-96) was grown at 37 0 C in a 5% CO 2 atmosphere in DMEM, supplemented with 10% faetal bovine serum.
Cell counting and FACS analysis
Cells were harvested after 48 and 96 hours from treatment. One sample was harvested when cultures were released in thymidine-free medium (t=0h) as reference for the initial number of cells and for verifying the efficacy of the thymidine arrest. For counting the number of cells a Z1 Coulter Particle Counter (Beckman Coulter) was used. For FACS analysis samples were permeabilized with 0.1% TritonX-100 in PBS. Cell cycle phase distribution was analyzed after incubation with propidium iodide (PI, Sigma P4170, 0.04 mg/ml) using a flow cytofluorimeter Epics XL apparatus (Beckman Coulter). Parameters SS and FL-3 were acquired in a linear amplification scale, FS and FL-2, in a log scale. Cell aggregates were gated out on the bi-parametric graph FL-3lin/Ratio. Apoptosis was determined as the proportion of cells exhibiting a DNA content lower than that of G1 cells after gating out cell debris on the bi-parametric graph FS/SS, using the WinMDI software.
Quantitative analysis of IF signals.
Signals were measured using Nis Elements AR 4.2 (nd2 file format). Analysis of endogenous Aurora-A and Aurora-B activity in mitotic cells was performed as follows: a) p-Thr288-Aurora-A staining: average pixel intensity at spindle poles, corrected for external background; b) p-Thr232-Aurora-B staining: sum intensity at chromosomes, corrected for external background. Images for quantification of mitotic signals were Maximum Intensity Projections from z-stacks (0.6 µm, ranging over a 5-10 µm). Box-plots were generated using the web-tool BoxPlotR.
For measuring the angle between the centrosomecentrosome axis and the culture surface z-stacks serial images were used. The "arctan(xy/z)" formula for calculating the angle of a right triangle was applied with "xy" being the distance between centrosomes in xy in the maximum intensity projection, and "z" being the distance between centrosomes along the z-axis. A schematization is shown in Figure 6C. Values were statistically analyzed using the InStat3 software, using either (i) the unpaired t test (for Gaussian distributions), applying the Welch correction when required, or (ii) the Mann-Whitney test, when the populations did not follow a Gaussian distribution.
Fluorescent in situ hybridization (FISH)
Cells grown on coverslips were fixed at room temperature with 3:1 methanol:acetic acid, air dried and stored at −20°C for at least one day. Before hybridization, coverslips were again incubated in 3:1 methanol:acetic acid, 1 hour at room temperature, then heated 2 hours at 65 0 C and dehydrated in 70%-90%-100% cold ethanol. Denaturation of probes in the hybridization mix (Fluorescein-labeled Chromosome 7 Satellite Probe, cat: PSAT0007-G; rhodamine-labeled Chromosome 11 Satellite Probe, cat: PSAT001-R; Hybridization Buffer QB007, all from Q-BIOgene) was performed at 96 0 C for 10 minutes. When the mix was applied on coverslips a co-denaturation step was performed (2 minutes, 72 °C), followed by the hybridization incubation, performed at 37 0 C, overnight. Coverslips were then washed in SSC, at 37 0 C and 60 0 C, and counterstained with DAPI (0.2 µg/ ml) in SSC, 10 minutes at room temperature. Coverslips were mounted with Vectashield and analyzed with a Nikon Eclipse 90i microscope, using a 20x objective (Plan Fluor, 0.5 N.A.). Signals per nucleus were counted using the Object count function of Nis Elements AR 4.2 (nd2 file format), setting the parameters using the Spot detection function.
Imaging of the U2OS cell line stably expressing H2B-GFP and RFP-alpha-tubulin was performed with the 60x objective: images were acquired every 7 minutes in the 3 channels. z-stacks of the fluorescent channels were acquired every 1 µm over a range of 8 µm, attenuating the fluorescence lamp intensity to 1/32. Under these conditions, time-lapse acquisition, starting 6 hours after the treatment, was extended for 8 hours only to avoid phototoxic effects.
For high-throughput experiments, cells were seeded in 96-well plates. Images were acquired with a ScanR microscope (Olympus) using a 10x objective, an Olympus DBH1 camera and the ScanR acquisition software. Temperature and CO 2 were kept constant by an incubation system on the microscope. Acquisition was performed every 30 minutes using Phase contrast and fluorescence imaging, for a total duration of 48-65 hours.
Automated analysis of high-throughput data
Data from high-throughput experiments were analyzed by the CellCognition software (v 1.3.3-28, [28]). The software was used to segment the cells, extract features and classify them using a support vector machine. The classes defined were: interphases, prometaphases, metaphases, ana-telophases, multinucleated interphases, poliployd cells, dead cells; an additional class of debris or of cells with non homogeneous morphology which could not be included in any of the other categories was created and was then excluded from subsequent data elaboration to avoid artifacts. The classifier was trained with images from independent experiments with a training set comprising a minimum of 100 cells per class (with the exception of the polyploid class, of which only 33 examples were found). A confusion matrix and a classification test were used to assess the quality of classification before the analysis. The output data were further analyzed using Microsoft Excel; a single class for Mitoses was generated by pooling the prometaphase, metaphase and ana-telophase classes. Values for each well were pooled from 6 imaging fields; average values from 4 replicates within one experiment are shown in Figure 9. A similar trend was observed in 2 independent experiments (6 additional replicates). | 7,845.6 | 2014-07-09T00:00:00.000 | [
"Biology",
"Medicine"
] |
Collection of Simulated Data from a Thalamocortical Network Model
A major challenge in experimental data analysis is the validation of analytical methods in a fully controlled scenario where the justification of the interpretation can be made directly and not just by plausibility. In some sciences, this could be a mathematical proof, yet biological systems usually do not satisfy assumptions of mathematical theorems. One solution is to use simulations of realistic models to generate ground truth data. In neuroscience, creating such data requires plausible models of neural activity, access to high performance computers, expertise and time to prepare and run the simulations, and to process the output. To facilitate such validation tests of analytical methods we provide rich data sets including intracellular voltage traces, transmembrane currents, morphologies, and spike times. Moreover, these data can be used to study the effects of different tissue models on the measurement. The data were generated using the largest publicly available multicompartmental model of thalamocortical network (Traub et al., Journal of Neurophysiology, 93(4), 2194–2232 (Traub et al. 2005)), with activity evoked by different thalamic stimuli.
Introduction
The complexity of experimental protocols in neuroscience grows with technology. This enables more voluminous data collection and their analysis requires increasingly sophisticated approaches. Such methods of analysis remain speculative unless tested properly. Striving to extract knowledge from experimental data we are often forced to apply analytic methods beyond their proven applicability domains. For example, consider multielectrode recordings of extracellular electric potential which is a common method to investigate brain activity. Accurate interpretation of recorded signals is a challenging task, due to the complex relationship between electric field and the neuronal activity. The high frequency component of the extracellular signal is dominated by the spiking activity of neurons near the recording electrodes (multiunit activity, MUA), while the low frequency part 1 (LFP) is believed to reflect mainly postsynaptic activity, although other non-synaptic events such as action potentials, calcium spikes, or glial activity, also affect this signal (Buzsáki et al. 2012).
In the analysis of extracellular potentials we may wish to use signal decomposition methods, such as principal or independent component analysis for signals coming from coupled neural populations (Di et al. 1990;Łęski et al. 2010;Makarov et al. 2010), or other more complex methods which take into account the physiology, such as laminar population 1 LFP came about as the acronym of 'local field potential', which, as we know today, is a misnomer. It was shown by many (see Buzsáki et al. 2012;Einevoll et al. 2013 for an overview) including us Łęski et al. (2007) and Hunt et al. (2011) that due to the long-range nature of the electric field the same sources are visible in the rat's brain on distances of the order of the whole brain, which makes LFP a very non-local quantity. This is why we suggest to drop the name local field potential and read LFP as 'low frequency part' of extracellular potential, which is the definition of LFP. analysis (LPA) (Einevoll et al. 2007;Głąbska et al. 2016). We may wish to localize the neural activity through reconstruction of current sources from the LFPs (Mitzdorf 1985;Pettersen et al. 2006;Łęski et al. 2007Potworowski et al. 2012), or combine different methods in more complex protocols (Łęski et al. 2010;Głąbska et al. 2014), and so on.
The results of such an analysis of experimental data might be consistent with our expectations, common sense, and literature. But how can we be sure that they are not accidental? They may be a consequence of fortuitous selection of a problem where a confluence of factors makes our analysis plausible even though, in fact, it is incorrect. Or how can we tell which method gives the best results, for example, in spike sorting or CSD analysis (Wójcik 2015)? To answer these questions we must properly validate the methods of analysis before their application to the data of interest (Denker et al. 2013). In mathematics we prove the applicability of a technique, however, in real world situations this is usually not feasible. We believe the correct approach is to use simulated ground truth data with models as close as possible to the systems being studied and including the models of measurement.
By saying ground truth data we imply that we have access to the complete state of the system for any simulated moment, that is, we can access any variable, such as membrane potential, transmembrane current passing through any channel type, complete set of spike times, etc. In neuroscience, so far, ground truth data were mostly considered in the context of spike sorting (Harris et al. 2000;Quian Quiroga et al. 2004;Gold et al. 2007). There one typically considers benchmark data consisting of a set of simulated or recorded extracellular potentials accompanied by independent information on spike trains, coming from a simulation or from an independent, more direct recording, such as intracellular or juxtacellular (Rossant et al. 2016;Neto et al. 2016). For a broader discussion of ground truth data in validation of spike sorting methods, calcium imaging, LFP and CSD data analysis, as well as for related concepts, see Denker et al. (2013).
If we require validation of every analytic protocol with the ground truth data we are faced with the task of building complex network models for systems of interest combined with models of different experimental modalities. Modeling the measurement has been a factor largely ignored in the literature, yet the fact that we are forced to make inference on the behavior of thousands or millions of cells from tens of extracellular potential recordings, in our view, requires building testable links through complex models between the system's activity and its measurement. We may believe in population (field) models providing adequate representations of measurement but then again, how do we verify this postulate in the first place?
In practice, to model the extracellular potential or other measurement modalities using complex compartmental models we have two basic approaches. We may specify the model, define electrode positions, and compute the potential on the fly. This is the usual approach used, for example, by Gold et al. (2006), Lindėn et al. (2013), and Parasuram et al. (2016). The advantage is that one avoids extensive storage of compartmental data. The disadvantage is that if we need the potential at a new point we must repeat the whole simulation, which may be difficult if we run a complex network where each simulation takes hours of runtime. An alternative, which we follow here, is to record the complete state of the whole system throughout the duration of the simulation. The disadvantage is that of large storage demands, however, we can use such data post hoc to compute multiple measurement modalities, test different models of field propagation in tissue, etc, without the need to repeat the simulation. Especially if one wants to use such data as ground truth for validation of methods of data analysis from arbitrary multielectrode setups, clearly, one cannot a priori specify all possible setups. We thus believe that for this kind of applications the approach we advocate here is superior, or at least, a useful alternative. For such data to be truly useful they must be publicly available, well documented, citable, and easily accessible.
Generating ground truth data requires significant modeling experience, time to prepare, run and document the simulation, and access to high performance computers. This whole exercise is often impractical for someone who would just want to validate applicability of a specific method of data analysis and apply it to her experimental data. To facilitate validation and comparisons of different methods of data analysis here we present a collection of data generated using a thalamocortical network model based on Traub et al. (2005), which is the most comprehensive publicly available model of early sensory systems available at the time of writing. The data provided here were used to test a combination of kernel Current Source Density method with Independent Component Analysis (Głąbska et al. 2014), to study the propagation of electric fields in a cortical slice (Ness et al. 2015), and to validate the generalized Laminar Population Analysis method (Głąbska et al. 2016). The data provided here include intracellular voltage traces, transmembrane currents, spike times, and morphologies, which can be used to calculate different measurement modalities. We also provide a collection of scripts to to compute the extracellular potential at arbitrary electrode positions. We intend these data to serve as a proxy for experimental ground truth data and as benchmarks for validation and comparisons of different methods of neural data analysis.
These datasets are provided in the Neuroscience Simulation Data Format (NSDF) (Ray et al. 2016). NSDF is an Hierarchical Data Format version 5 (HDF5) subspecification (The HDF Group 1997), providing specific internal organization for neural simulations. We believe that providing data in a standardized format will further aid its scientific merit, as visualization tools and analytic methods can assume a common interface facilitating their generalization.
Thalamocortical Column Model
The data provided here were generated with a network model of a single cortical column receiving inputs from thalamic neurons based on the work by Traub et al. (2005). The model consists of 3560 multicompartment neurons in fourteen populations: twelve cortical populations from four cortical layers, and two thalamic neuron populations. The structure of the model is described in Table 1, see also Traub et al. (2005) and Głąbska et al. (2014). The original model was tuned to experimental data from the rat's auditory cortex (in vitro) and the barrel cortex (in vivo) and provided in IBM Fortran (ModelDB, accession number 45539). To simulate extracellular potentials, where the placement of neural morphology in space is meaningful, we combined the versions in NEURON (ModelDB, accession number 82894) which was well parallelized, and the NeuroML version (ModelDB, accession number 127353) from which we took the 3D shapes of neurons. In defining the multi-compartmental models, we retained the specification from the NEURON version, where each section consisted of exactly one segment. Finally, we added mechanisms in every segment of every cell to facilitate tracking of the transmembrane currents which are essential to compute the extracellular potentials and made the necessary modifications to store these data on an IBM Blue Gene Q.
The axonal gap junctions from the original Fortran model were turned off for two reasons. First, the NEURON implementation of the Traub's model was not tested sufficiently with gap junctions which could have been a consequence of significantly greater simulation times. Second, we were unable to use active gap junctions in the variable time step simulations which are necessary for precise computation transmembrane currents when using NEURON 7.1 and 7.2 versions on an IBM Blue Gene Q.
Spatial Organization of the Network
The contribution to the extracellular potential from a current source is proportional to its amplitude and inversely proportional to the distance between the source and the electrode. Therefore, the spatial organization of the sources, in this case the positions of all the segments, is essential to compute the extracellular potentials. In the previous versions of the Traub's model (ModelDB, accession numbers 45539, 82894, 127353), the spatial location of neurons was not specified. To allow computation of extracellular potential we placed the cells so that somas of a given population were distributed uniformly in cylinders of diameter 400 μm Note that in this model there is one segment per section. In most of the datasets (1-23 of Table 2), all the above 3560 cells were used. In some datasets (24-28 of Table 2), only 10 % of the cells from each population were used and height corresponding to the vertical extent of the layers as described in Table 1.
Simulations
The simulations were carried out with the NEURON simulator (Hines and Carnevale 1997) version 7.2 on a Blue Gene Q computer utilizing 512 cores. Random ectopic axonal action potentials were turned on Traub et al. (2005). To achieve high precision of computation (monitored by tracking the sum of all the currents) we used the variable time step integration implemented with CVODE (Cohen and Hindmarsh 1996) in NEURON. The values thus obtained at variable time steps were linearly interpolated with a NEURON function with 0.1 ms time step and saved. In several datasets we used one tenth (in terms of the number of cells) of the model, henceforth referred to as the small model, to decrease the size of the data accumulated.
To bring layer 5 and layer 6 pyramidal cells to fire we injected a constant depolarization current of amplitude 1 nA (or 0.5 nA for the small model) to the layer 5 pyramidal cell somas and 0.75 nA (or 0.375 nA for the small model) to the layer 6 pyramidal cell somas (Table 3, Datasets 1-8, 17-28). Without such a depolarization these cells would remain silent even after prominent input. These values were consistent with the default NEURON version of this model and These are described in detail in Section "Datasets". For all the datasets here, recordings begin at 0 ms. In the column Size, 100 % size corresponds of 3560 cells and 10 % size to 356 cells. Next columns define: Stop -the end of the recordings as well as the end of the simulation; Stim. delaythe beginning of the stimulus; Stim. duration -the duration of the stimulus. Depol pop5 -a constant depolarization current injection to layer 5 pyramidal cell somas to depolarize these cells to fire, and likewise Depol pop6 -a constant depolarization current injection to layer 6 pyramidal cell somas, for them to fire. The following abbreviations are applicable here: depol -depolarization current injection, TCR -thalamocortical relay cells, pop23 -pyramidal cells in layer 2/3, pop4 -pyramidal cells in layer 4, pop5 -pyramidal cells in layer 5, pop6 -pyramidal cells in layer 6, Na -Sodium correspond to the parameter awake set to 1. (or to 0.5 for the small model). For comparison, we also provide datasets without this current injection (Table 2, Datasets 9-16).
In addition to the transmembrane currents and the membrane potentials from all the segments, we recorded spike times of all the cells. In some simulations of the small model we also recorded the contributions to the currents from different channel types (potassium, sodium, calcium) and other sources (through synapses GABA A, NMDA and AMPA; capacitive currents, passive currents). For details of channel mechanisms included in specific cells consult the original paper (Traub et al. 2005) and the provided code.
Stimuli
During the first 50 to 60 ms (in some cases even longer) the network exhibits a transient turn-on behavior following which the spiking activity settles down. We then stimulated the network with two types of stimuli: 1) a sinusoidal current injection to thalamocortical relay (TCR) cells (oscillations), or 2) a short constant current injection to TCR cells (pulse). The sinusoidal stimulus was used to study the properties of the network and to increase the diversity of the datasets, while the pulse stimulus emulates an evoked response in the network, e.g. a response of the rodent barrel cortex to deflection of a few whiskers. Further details are provided in Section "Datasets".
Transmembrane Currents
Following the Kirchoff's current law, the sum of all transmembrane currents in a cell must be zero. However, this does not hold true for contributions from individual channel types. This makes analysis of such contributions challenging, nevertheless, there is some interest in such analysis in the community (Reimann et al. 2013). To allow studies of contributions to extracellular potential from different partial currents (passive, active, synaptic, etc.) we tracked the capacitive and passive currents as well as the currents through every channel present in the Traub's model (sodium, potassium, calcium currents, NMDA, AMPA, GABA A, anomalous rectifier currents, two types of low threshold T type currents Traub 2003;Traub et al. 2005), as well as steady bias and ectopic currents (Traub et al. 2005).
Calculation of Extracellular Potential
Our main goal with these simulations was to provide a collection of datasets to validate different methods of analysis of extracellular recordings (LFP, multi-unit activity, spike trains). Since we cannot foresee specific arrangements of electrodes needed, specific bands to filter signals, models of field propagation, etc, we provide the recorded currents with a Python script to compute the extracellular potential at required positions. Figure 1 shows an example plot of extracellular potential in 2D plane spanned by a regular grid of 16 x 20 electrodes placed 25 μm away from the cylindrical axis of the cortical column, as in a multielectrode array (MEA), at 301 ms after the start of recorded simulation, i.e., just after the onset of stimulus.
The script can be easily modified to indicate arbitrary electrode positions, use selected cortical cell populations, or even select specific currents to compute their contributions to the simulated recordings. For example, one can evaluate the contribution to the extracellular potential from capacitive currents of all pyramidal cells, etc. In these computations, we assume infinite homogeneous resistive extracellular medium recorded by ideal point electrodes and use the point source formula (Nunez and Srinivasan 2005): where N is the number of all the segments in the cortical part model, I n is the transmembrane current from the n-th current source positioned at x n , σ is the extracellular conductivity. We assumed σ = 0.3S/m. The point sources were placed at the centers of every segment. This script is provided only as a starting point for exploration. The users may want to consider more complex models of extracellular potential computation, such as the line source model for LFP (Holt and Koch 1999), or more complex models of tissue, such as a cortical slice in a multielectrode array dish (Ness et al. 2015), or frequency dependence of field propagation (Gomes et al. 2016), and the provided data could still be used.
Sample Scripts Accompanying the Data
To show how to access the data we provide several example scripts in the Github repository https://github.com/ Neuroinflab/Thalamocortical. In folder figures we provide scripts to generate the figures from this manuscript.
Here, in the folder analysis_scripts we provide four scripts performing several basic tasks. These scripts are specific to the presented data but they can be easily extended to generic NSDF files.
lfp_parameters.py
Here you select the dataset to be used, 1D, 2D, or 3D geometry of electrode setups probing the field generated by the cortical column, and the cell populations and the model size to be used for computing the LFP in the next steps. The default parameters used by the rest of the files are set here.
calc_lfp.py Computes the extracellular potentials using the transmembrane currents from the selected populations of We also provide here a function to convert the LFP calculated here into a NEO object list (Garcia et al. 2014), which can be used, e.g., in elephant (http://neuralensemble.org/ elephant/) or other compatible software. In this script we use the point source approximation, i.e., the segments are treated as point sources placed at the mid point of the segments. We also provide here the low pass filter function we used to compute the LFP from extracellular potentials (2nd order Butterworth filter).
create_plot.py Plots the measured potentials for 1D and 2D electrode setups. It also shows the midpoints of the segments used in the LFP computation, marks the electrode positions, and shows interpolated potentials recorded using the 1D and 2D probes. The plot displayed for 1D case (laminar probe) shows potentials (y axis) versus time (x axis), and for the 2D case it shows interpolated potential in the plane of simulated MEA at a selected time point.
raster_spikes.py Shows the raster plot for the whole network. The different colors represent different neuron types.
Up arrow indicates an excitatory neuron. Down represents an inhibitory neuron.
These scripts do not constitute a toolbox although they may grow in the future. We provide these scripts to facilitate the uptake of the data provided here and implementation of other models of measurement. For more information read the provided Readme file and the scripts.
File Format
The datasets are available in the Neuroscience Simulation Data Format (Ray et al. 2016), version 1.0. NSDF is a subspecification of Hierarchical Data Format version 5 (The HDF Group 1997), which introduces an organization of data within HDF5 in a way useful for storing the results of neuroscience simulations. HDF5 itself was developed particularly for storing scientific data, it is flexible, hierarchical, self describing, and allows efficient reading, writing, and storage of data. According to the NSDF specification the data must include some essential information about the simulation as attributes. These include units, start time, and time step of the simulation, units of the time step etc. Additionally, NSDF datasets must have meta-data attributes, which include the software used, the methods used, the name of the creator of the dataset, license and so on.
Each dataset includes the following data: 1) morphology information i.e., segment geometry and position, stored as HDF5 compound arrays under data/static/morphology/ of the NSDF file; 2) spike times (1-24 in Table 2) and/or input spikes (17-20 and 25-28 in Table 2) stored as variable length arrays stored under data/event; and 3) transmembrane currents and membrane potentials stored in data/uniform/pop_name. For some datasets (24-28 in Table 2) individual ionic current contributions to transmembrane currents are also included here. The abbreviation used here for pop_name are from Table 1. For instance, the total transmembrane currents for all the pyramidal regular spiking layer 2/3 cells are located in a 2D array at data/uniform/pyrRS23/i. In these 2D arrays, the rows correspond to the unique segment id (for arrays in data/static and data/uniform), or the cell name (for arrays in data/event). These unique id's are stored as lists in map. The connection between the arrays in data and map is via the HDF5 standard Dimension Scales specification as per NSDF. For example, the array in data/static/morphology/pyrRS23 are row-wise mapped onto the elements in map/static/pyrRS23_names using Dimension Scales, and likewise /data/uniform/pyrRS23/i, and /data/event/pyrRS23/spikes to map/uniform/ pyrRS23_names and map/event/pyrRS23_spikes, respectively. This organization is illustrated in Fig. 2. For the sake of simplicity, this figure shows the NSDF file architecture only for the down-scaled version of the model for two cell populations. The units for the arrays are included as array attributes according to the NSDF specification, for example /data/uniform/pyrRS23/i array attribute for "unit" is "nA".
Datasets
In total, we present 28 datasets as listed in Table 2, 22 of these are responses to oscillatory input currents of different Fig. 2 An illustration of the hierarchical storage of data in NSDF for an example subset of two cell populations from the down-scaled model. Note that the datasets we provide include data from all the cell populations. The left side indicates the paths to the data arrays under the data. The unique id's of the segments or cell names on the right are stored as arrays under map. For instance, the morphology information for pyramidal regular spiking layer 2/3 cells is stored as a compound array at data/static/morphology/pyrRS23. Each row in this compound array is a segment whose unique id is located in the respective row of the /map/static/pyrRS23_names array. Here, the columns as indicated, x0, y0, ..., d compose the header for the compound array; correspond to the proximal (x0, y0, z0) coordinate, the distal (x1, y1, z1) coordinate and the diameter d of the segment. The connection between the arrays in data/ and map/ is given by the HDF5 Dimension Scales according to the NSDF specification. Likewise, data/uniform/pyrRS23/i contains transmembrane currents through each segment (rows) over time (columns) and map/uniform/pyrRS23_names has the respective segment ids. Similarly, data/event/pyrRS23/spikes is a variable length array where each row corresponds to a cell and has the instances when it fired. The units for all the arrays are attached as the corresponding array attributes frequencies (oscillations), and the other 6 are responses to pulse stimulus (pulse).
Datasets 1-16
A sinusoidal current of amplitude I inj = 2 nA was injected into all of the TCR cells t 0 = 100 ms after the onset of simulation. This caused an oscillatory response in the cortex. The frequency of these stimuli was f = 200, 100, 50, 25, 12.5, 8, 4 or 2 Hz in the different datasets. Here, (t) is the Heaviside function. In datasets 1-8, during the simulation, the somas of layer 5 and layer 6 pyramidal cells were depolarized with 1 nA and 0.75 nA current, respectively. These data were used to study the meaning of independent components of current source density reconstructed from LFP (Głąbska et al. 2014). They were also used to validate the generalized Laminar Population Analysis (Głąbska et al. 2016). The activity of the network recorded in the dataset 5 (12.5Hz oscillatory stimulus with depolarized infragranular pyramids) is shown in Fig. 1 of Głąbska et al. (2016), while the dataset 2 is used in Figs. 2 and 5A of Głąbska et al. (2014). Datasets 9-16 correspond to sets 1-8, except they were simulated without depolarization currents in infragranular pyramidal cells and are provided for comparison.
Datasets 17-21
Generalized Laminar Population Analysis (Głąbska et al. 2016) uses LFP and MUA from laminar recordings to decompose network activity into physiologically meaningful components. This can be interpreted as contributions to extracellular activity arising from the cells active in a specific layer. To validate this method we started with the simulation leading to the dataset 5 (12.5 Hz oscillatory input with depolarized infragranular pyramids). We then disabled all the connections and used the spiking activity from individual populations in dataset 5 to activate the network. In this way we generated four datasets corresponding to inputs Fig. 3 Dataset 24. a raster plot of the network activity. Up and down pointing triangles are excitatory and inhibitory neurons, respectively, the black vertical line shows the stimulus onset (b-l) contributions to the extracellular potentials as recorded by 28 electrodes with interelectrode distance of 92.6μm, left y-axis shows electrode number, right y-axis shows depth; (b) LFP and contributions to LFP (extracellular potential filtered below 100 Hz using second order Butterworth filter) from specific currents, respectively: (c) NMDA + AMPA (d) GABA (e) capacitive, (f) potassium, (g) passive, (h) calcium, (i) sodium (j) two kinds of calcium low threshold T type currents not causing [Ca2+] influx (k) anomalous rectifier, (l) all other currents, such as ectopic currents and depolarizing currents. Note that the scales used in different panels differ to emphasize the individual contributions from pyramids in layer 2/3 (dataset 17), spiny stellate cells in layer 4 (18), pyramids in layer 5 (19) and pyramids in layer 6 (20). To get a baseline LFP, another dataset (21) was generated with no population input. The spiking activity and the LFPs obtained from these simulations are shown in Fig. 5 in Głąbska et al. (2016).
Dataset 22
These are the data from a simulation similar to that of the dataset 5, except that only 20 % of the TCR cells received the oscillatory input, (Eq. (2)), here. This resulted in less correlated network activity.
Dataset 23
To simulate the cortico-thalamic responses to a whisker deflection in a rodent we injected a constant current pulse of amplitude 3 nA for a duration of 2 ms into the TCR cells, 70 ms from the start of the simulation. Layer 5 and 6 pyramidal cells' somas were depolarized with 1 and 0.75 nA currents, respectively, during the simulation. Such a stimulus caused a brief (about 5 milliseconds) activation of the TCR cells. The activity then propagates to the spiny stellate cells in layer 4, the deep basket interneurons in layers 5-6, and the nucleus reticularis in the thalamus. Then, the activation appears in the fast rhythmic bursting cells in layer 2/3 and several milliseconds later in tufted pyramidal intrinsic bursting and regular spiking cells in layer 5, pyramidal regular spiking cells and interneurons in layer 2/3. Finally, the stimulus reaches the rest of the cortical cells: nontufted pyramidal regular spiking neurons in layer 6 and the interneurons in layer 5/6. After 50 ms from the onset of the stimulus the network settles down again. In this case we used the whole model and recorded only the spike times, the membrane potential and the total transmembrane current in every segment. The simulationends at 180 ms when the evoked activity dies out. Figure 2 in Głąbska et al. (2014) shows the raster plot for this kind of simulation.
Dataset 24
This is similar to the dataset 23, however, only 10 % of the cells were used and layer 5 and layer 6 cells were depolarized by 0.5 nA and 0.375 nA respectively to obtain firing rates consistent with the full model. The stimulus was applied 300 ms from simulation onset after initial transient died out. In this dataset we recorded the sum of transmembrane currents, membrane potential, as well as different contributions to the transmembrane currents: current flowing through synapses GABA A, NMDA and AMPA, capacitive, passive, potassium, sodium, calcium, two kinds of calcium low threshold T type currents (not causing [Ca2+] influx), anomalous rectifier, and other currents (e.g. steady bias + ectopic currents). Figure 3 shows the raster plot from this simulation, the LFP (extracellular potential filtered below 100 Hz using second order Butterworth filter), and the components of the LFP which originate from the different types of recorded current sources.
Datasets 25-28
The simulations in which these data were obtained are all similar to those leading to the dataset 24. They were obtained to investigate how the transmembrane currents through active channels are reflected in extracellular recordings. For this we performed simulations with active channels turned off in: every segment (dataset 25), in somas and axons (26), only in axons (27), or we turned off only fast sodium currents in every segment (28). Since closing these channels silenced the network, we stimulated the system with spikes recorded in dataset 24. Thus these datasets can be used to identify the differences in LFP where the same synaptic stimuli are provided while different combinations of the active channels are closed. The LFP resulting from these simulations are presented in Fig. 4. The stripy structures visible in panel A (unfiltered extracellular potential) are the extracellular signatures of spiking (not shown but can be visualized with the provided data).
Discussion
Traub's model of the thalamo-cortical column (Traub et al. 2005) is one of the largest and most popular conductance based multi-compartmental models that is publicly available. It serves as an important benchmark for every simulator and its established position is apparent by the fact that the original FORTRAN model has been translated into NEURON, MOOSE, and NeuroML versions. This model has its limitations, some of which were discussed in the original paper (see Discussion in Traub et al. 2005), some were discovered later on Gleeson et al. (2013), some are consequences of translations to new platforms (see Fig. 10, Gleeson et al. 2010). A number of difficulties with different translations of this model were documented and tested by the research community outside the laboratories where the model was originally developed (Gleeson et al. 2013). This model is a good starting point for modeling thalamocortical system, as it illustrates the complexity of such large scale modeling studies.
Here we presented 28 simulation datasets of Traub's thalamo-cortical model. In all the datasets positions of the cells, cell morphologies, membrane voltage potential, transmembrane currents, and the spiking information are provided. In three datasets, 22-24, we also provide contributions from individual channel types to the transmembrane current.
Some of these datasets were used or are equivalent to those used in our previous research. In particular, datasets 1-8 and 23 or equivalent were used in a study of physiological meaning of independent components of current source density reconstructed from LFPs (Głąbska et al. 2014). In a study of the effects of physical and geometrical properties of a cortical slice in saline on extracellular potentials recorded with multielectrode arrays and for reconstruction of current source density in such a setup (Ness et al. 2015) we used data from the dataset 3. We also used the datasets 1-8 and 17-21 to validate the generalized Laminar Population Analysis (Głąbska et al. 2016).
The data provided here can be used to test and compare different spike detection algorithms, or to test the validity of hybrid methods for calculating the extracellular potentials, where combinations of point neurons with single multicompartmental neuron models are used. These data may also be used to investigate the relationship between the actual transmembrane currents and the reconstructed CSD (e.g. Głąbska et al. 2014, Fig. 4), serve as a ground truth for analysis methods based on the reconstructed CSD, or used to compare different methods of current source density estimations using the extracellular potentials they generate.
Due to the latest advancements in microelectrode technology, sophisticated configurations of electrode placements are possible. With the help of the provided data it is possible to model simultaneous extracellular potential recordings in these configurations. For example, it is possible to compare the recordings of a laminar probe placed next to the cortical column, on a 2D electrode grid of multishank electrode, but one could also investigate contributions to ECoG or EEG. However, this would require more complex models of field propagation taking into account geometry and conductivity of the cranium, scull and scalp. We hope that the availability of these data would facilitate understanding of the relationship between the network activity and measurement, would help with the interpretation of the results of specific analytic methods and lead to new insights.
The data are provided in NSDF (Ray et al. 2016), a well documented subspecification of HDF5, developed for storage of the data from simulations. Any visualization or analysis tools developed to support HDF5 in general, such as HDFView, will be applicable to these datasets.
The first 16 datasets show responses of the network to 8 different stimuli in two network states (with extra depolarization of infragranular pyramids and without). To facilitate validation of more involved methods of data analysis we performed additional simulations for injection of 12.5 Hz sinusoidal current. Datasets 17-21 attempt to uncover what part of the whole network activity is driven by specific populations (Głąbska et al. 2016). These contributions cannot be obtained experimentally, as they require the use of spikes from a population functioning in a fully connected network to drive the network of disconnected cells. In experiment, even if we were able to silence connections within the network except from those coming from a particular population, system activity would change, so the network response would also change.
Driving all the TCR cells with the same oscillatory stimulus imposes strong correlations on the network activity. Since the correlations in spiking activity and input currents affect the extracellular potential power spectrum and the spread of the signal (Łęski et al. 2013), we generated additional datasets where only 20 % of the TCR cells were driven (dataset 22). That was enough to observe a response in the whole network which was less correlated.
Datasets 24-28, can be used to test different hypotheses and interpretations of LFP on the model data, study the relation of LFP to postsynaptic currents, identification of currents contributing the most to the LFP, relation between spiking and the LFP, etc. To facilitate investigations of such questions, we recorded all of the transmembrane currents separately. Since the size of these datasets is substantial, we decided to run a down-scaled version of the Traub model with only 10 % of the cells. Interestingly, the largest contributions to the extracellular potential comes from excitatory synaptic currents as well as from the active currents, and they are an order of magnitude larger than the final LFP, see Fig. 3. However, due to extensive but nontrivial cancellations, all the currents, including passive and capacitive currents, contribute significantly to the LFP, which makes interpretation of the LFP signal a challenging task. In the Fig. 4 Potentials as recorded by 28 electrodes with inter-electrode distance of 92.6μm distance, left y-axis is electrode number, right y-axis is its depth (a) Extracellular potential from dataset 24 (b) the same as A but filtered below 100 Hz (LFP) using second order Butterworth filter. The following panels show not filtered extracellular potential from data sets (c) 25 (d) 26 (e) 27 (f) 28. The stripy structures visible in panel A (unfiltered extracellular potential) are the extracellular signatures of spiking (not shown but can be visualized with the provided data) datasets 25-28 we prevent spiking behavior of the network using different approaches, (for details see Section "Datasets" or Table 1), but we provided the same synaptic stimulus as in the dataset 24. In every case the high frequency signal in extracellular potential was reduced. With the exception of the case where we blocked all the active channels (dataset 25, Fig. 4 c), the low frequency part of the potential remained similar as in the original simulation (dataset 24, Fig. 4 b). This is a strong evidence that the LFP is evoked by synaptic activity, while at the same time we see that, at least in these model data, active channels in the dendrites play a critical role in setting up the LFP signal.
To obtain the data presented here, powerful computational resources are needed, that are not easily accessible to every researcher. Even if these computational resources are available, it is impractical and wasteful to duplicate efforts to set up and run such large simulations for a single laboratory use. For example, we performed the computations on the IBM Blue Gene Q computer at the Interdisciplinary Center of Modeling, University of Warsaw, using 64 nodes, each equipped with 16 cores and 16 GB of memory. The wait time to avail of these resources was typically 1-3 days and the simulations lasted typically 8-10 hours. We hope that if the interest in this model sustains and other researchers perform new simulations with parameters different from those listed here, a more extensive collection of ground truth data will be established, to further facilitate studies of relations between system internals and measurements, and for validation of complex methods of data analysis, called for by the results of the present day experiments. The data provided here were generated with public resources and the results of such an endeavor rightfully belong to the community at large.
Information Sharing Statement
The complete collection of datasets provided here is available at http://dx.doi.org/10.18150/repod.6394793, hosted by RepOD (RRID:SCR_014697). These files are available under the Open Database License (ODbL 1.0 license). We provide the NEURON (RRID:SCR_005393) code used to generate these datasets at https://github.com/Neuroinflab/ Thalamocortical. We also provide here Python scripts to convert IBM Blue Gene Q computer output of NEURON to NSDF file format, and some Python based analysis scripts to generate LFP for 1, 2, and 3 dimensional electrode layouts, raster plots, NEO objects, etc, for an NSDF file. These scripts are available under the GNU GPL 3.0 License. | 9,023 | 2016-11-11T00:00:00.000 | [
"Computer Science",
"Biology"
] |
Hiding Missing Energy in Missing Energy
Searches for supersymmetry (SUSY) often rely on a combination of hard physics objects (jets, leptons) along with large missing transverse energy to separate New Physics from Standard Model hard processes. We consider a class of ``double-invisible'' SUSY scenarios: where squarks, stops and sbottoms have a three-body decay into two (rather than one) invisible final-state particles. This occurs naturally when the LSP carries an additional conserved quantum number under which other superpartners are not charged. In these topologies, the available energy is diluted into invisible particles, reducing the observed missing energy and visible energy. This can lead to sizable changes in the sensitivity of existing searches, dramatically changing the qualitative constraints on superpartners. In particular, for m_LSP>160 GeV, we find no robust constraints from the LHC at any squark mass for any generation, while for lighter LSPs we find significant reductions in constraints. If confirmed by a full reanalysis from the collaborations, such scenarios allow for the possibility of significantly more natural SUSY models. While not realized in the MSSM, such phenomenology occurs naturally in models with mixed sneutrinos, Dirac gauginos and NMSSM-like models.
I. INTRODUCTION
With the successful operation of the LHC at 7 and 8 TeV energies, experimental results have now probed the energy regime well above the weak scale. While the incredible agreement of the Standard Model is a major success of particle physics, the absence of any clear signs of new physics challenges our basic assumptions about naturalness. In particular, it is expected that a top partner should be present to cancel the leading quadratic divergence to the Higgs mass. As a consequence, a hadron collider such as the LHC should be capable of copiously producing such top partners and any other associated colored particles. Specific arguments within supersymmetry for a stable R-parity odd particle, and more generally for a stable T-parity odd particle [1] motivate a robust search strategy for jets+missing energy. Such searches have shown no sign of the excesses expected of squarks at several hundred GeV (see, e.g., [2][3][4][5][6][7]). As a consequence, there is a greater movement to reconsider naturalness entirely [8][9][10][11][12][13][14][15][16].
Technically natural models can still be found by restricting the low energy spectrum to the minimal content needed in order to avoid fine-tuning of the electroweak scale (generally stops and Higgsinos with a cutoff) [17,18]. While such scenarios can achieve technical naturalness, they are often ad hoc in removing other particles from the spectrum (such as unflavored squarks).
The weak scale may still be generically natural, however, if these jets+MET signals are hidden within Standard Model backgrounds. Since large missing transverse energy (MET) is what generally distinguishes these signal events from multijet backgrounds, the simplest possibility is to deform this class of signals by converting MET into visible energy, and hadronic energy in particular. This is realized simply through hadronic R-parity violation, for instance [18,19]. Detailed questions of flavor violation and baryon number conservation constrain these models [20], but even more pertinent are the con-straints from high jet multiplicity searches [21,22] on how well such models hide SUSY.
A second approach is to kinematically suppress missing transverse energy with the presence of nearly degenerate states. This could arise by squeezing the spectrum of squark and bino, for instance, through an accidental degeneracy of the spectrum. Alternatively, "stealth" SUSY models [23,24] invoke an approximately supersymmetric dark sector to achieve this degeneracy. Both of these approaches attempt to suppress the missing energy by converting as much of the available energy into a visible form. This is successful in suppressing the efficiency of jets + MET searches, but can make other (often dedicated) searches more sensitive, such as [25][26][27].
In this Letter we will consider an alternative possibility -that one can "dilute" the final state energy into many invisible particles, and in doing so, obscure signals of New Physics. Momentarily counterintuitive, a brief reflection on the kinematics of the process will make it clear why this suppresses the sensitivity of existing jets+MET searches.
A. Hiding Missing Energy in Missing Energy
The most conventional scenarios in SUSY involve cascades that conclude with a neutralino LSP. In such cases, these cascades generally end with only a single invisible particle -e.g., a single squark will cascade to a single R-parity odd neutralino and (mostly) visible energy otherwise. However, this "single-invisible" aspect of SUSY is particular to scenarios like the MSSM where the LSP only carries a single quantum number or parity (in this case R-parity). If the LSP carries a second conserved quantum number not shared by the mother particle, then, to conserve that, there must always be a second stable particle in the cascade (for instance, the R-parity even partner of the LSP). If this particle is invisible, the total amount of missing energy can be increased. A simple example of this exists already in the MSSM: the sneutrino. Cascades must always conclude with not only the sneutrino, but also an associated lepton. In the case where that lepton is a neutrino, there are two invisible particles in every cascade. Considering the decay of a squark in particular, we can haveq → qB followed byB →νν. In this case, with an on-shell Bino decaying invisibly, there is no phenomenological difference with simply having a Bino LSP.
In contrast, if the Bino is off-shell, the squark will undergo a 3-body decay,q → qνν, where the energy is now shared with two invisible particles. The simplified model that one can consider is one that simply replaces the single invisible decay with a multi-body decay with two invisible particles. We refer to such a scenario and related simplified models as "double-invisible." While one might think that increasing the multiplicity of invisible particles in the final state would increase the sensitivity of jets+MET searches, the opposite is actually true. This is because the extra invisible states dilute the energy of the visible particles. Since MET (E T ) is a vector-sum of visible energy, the increase in missing (scalar-sum) energy leads to a decrease in missing (vector-sum) energy. We can see an example of this in Fig. 1. These changes naturally have a significant impact on SUSY searches.
II. EXPERIMENTAL SENSITIVITY ON DOUBLE-INVISIBLE SIMPLIFIED MODELS
Generically, SUSY searches for colored superpartners are optimized for standard (single-invisible) MSSM decays. That typically entails hard cuts on missing energy, hadronic energy and leading jets' transverse momenta. Such cuts substantially reduce backgrounds without compromising sensitivity to standard topologies. However, hard requirements on kinematics can lead to a significant reduction of signal efficiency for double-invisible topologies, as suggested by the distributions on Fig. 1.
In this section, we will attempt to recast [28] the lim-its from ATLAS and CMS SUSY searches to the doubleinvisible scenario. As we shall see, they are significantly weakened, by our estimates by almost an order of magnitude in cross section at times. Before we lay out our goals, we should emphasize that our limits should not be taken as precise limits, but as our best current estimates, and as motivations for the experiments to properly recast these limits themselves. Secondly, we would argue that these limits motivate new analyses, more optimized for these kinematics. As 13 TeV data may be more challenging to apply to these low masses, such analyses should be a high priority prior to the next LHC run.
We generate Monte Carlo events for double-invisible simplified models and survey their constraints from relevant ATLAS and CMS searches. In order to validate our simulation and calculation of the experimental efficiencies, we first attempt to reproduce the experimental limits quoted by the searches. We only present our estimated limits for analyses we were able to validate, i.e., whose results we were able to reproduce to within a factor of two.
We simulate pair-production of colored superpartners in Madgraph 5 [29], which are decayed, showered and hadronized in Pythia 6 [30]. For a crude simulation of detector response, we use PGS4 [31]. For searches requiring b-jets, we have modified PGS's b-tagging efficiency as a function of the b-jet's transverse momentum and rapidity in order to more closely match the working point used by the relevant searches.
For squarks and gluinos, we validated and recast the searches in [5,6]. The validated and recast analysis for third generation squarks were [2- 4,7]. Other potentially relevant searches will not be discussed in this note either because we have found that they were not competitive with the analyses listed above, or because we were not able to validate their limits to a satisfactory degree. Instances of the former category are α T , razor and monojet searches. We expect a lower sensitivity of the CMS α T analysis in [32] due to its lower luminosity (11.7 fb −1 ) and hard requirements on the transverse energy of the two leading jets (E j1,j2 T ≥ 100 GeV). The CMS razor analyses, at the time of writing of this letter, have not been updated with the 8 TeV data. Even though we might expect non-trivial 7 TeV razor limits to our scenario, we do not expect that they will be stronger than other 8 TeV hadronic analyses with higher energy and four to five times more integrated luminosity. As for the monojet analyses, ATLAS has a dedicated search for compressed stops decaying to a charm quark and a neutralino [33], excluding the very compressed region with mt 230 GeV. Their limits can be straightforwardly recast to eight compressed squarks of the 1st and 2nd generations, being roughly mq 360 GeV. We expect this search to have a reduced efficiency on non-compressed double-invisible topologies, and therefore can be ignored for our purposes for not being competitive with the CMS limits from [5]. The second category of searches not included in this study, i.e., those we cannot validate, spans searches that use multivariate analyses, neural networks, boosted decision trees, etc., for which we do not have enough information or tools to reproduce. Fig. 2 shows our recast limits from [5] on degenerate 1st and 2nd generation squarks with 3-body decaỹ q → qXX, where X andX are invisible and m X = 0. Gluinos are assumed to be decoupled. In the left plot, we set mX = 100 GeV and show the limit on the production cross section as a function of the squarks' mass (red line). The shaded yellow band corresponds to an ad hoc factor of two uncertainty in our estimates. We also show the reference NLO-QCD production cross section (black line) computed with Prospino 2 [34], the official CMS limits on the standard two-body topology (purple line) and our validation of the CMS limits (blue line). One can see that for most of the squark mass range, the cross section limits we find on the double-invisible topologies are reduced by roughly a factor of 5 relative to their singleinvisible counterparts. Squark mass limits are weakened from mq 800 GeV to mq 450 GeV assuming mX = 100 GeV, and disappear for mX 160 GeV, as shown on the plot on the right, which contrasts double and single-invisible constraints in the mq −mX plane. Interestingly, our recast of the ATLAS jets+MET search [6] on this topology yielded no constraint on squark masses, regardless of mX . That can be explained by the tight cuts applied to the event selection, in particular to the leading jet transverse momentum (p j1 T ≥ 130 GeV). Fig. 3 shows our estimated limits for 3rd generation squarks in the mb /t − mX plane. Again we assume m X = 0 for the purpose of illustration and add an ad hoc factor of two uncertainty in our estimates, delimited by the yellow region. We only display the limits from [3,7], which are the most sensitive to the topologiest → tXX andb → bXX (constraints from other 3rd generation searches are shown in the Appendix). These plots again suggest that bounds on stops and sbottoms are substantially reduced for double-invisible topologies, even disappearing for mX 120 GeV.
As previously mentioned, the limits just discussed assume decoupled gluinos. If gluinos are kinematically accessible, one has to consider additional colored production, such as pp →gg,gq ( * ) andqq (the later being enhanced via t-channel gluino). That can substantially increase the constraints on squarks, for instance mq 1380 GeV for mq = 0.96×mg. For mq = 500 GeV, the gluino must be heavier than ∼ 2.5 − 3 TeV. Such a separation could be natural if gluino and squark masses are generated at a low scale, with m 2 q two-loop suppressed relative to Mg (as occurs with Dirac gauginos [35]).
III. MODEL REALIZATIONS OF DOUBLE-INVISIBLE SUSY
Model realizations of double-invisible SUSY are straightforward (but not trivial) to construct. There are two essential elements for the model: first, the LSPX must carry some additional charge or parity (not shared by other superpartners) so that it is always accompanied Fig. 2, the (shaded) yellow band corresponds to an ad hoc factor of two uncertainty in our estimated limits.
by an additional particle X carrying that same charge or parity. Moreover, this additional particle must be neutral. [43] Having the appropriate final state is not enough, obviously, as the 3-body decayq → qXX must be the dominant decay mode. If the only R-parity-odd and kinematically open channel is XX, then the double-invisible phenomenology is realized fairly trivially. However, this dictates a somewhat specific class of spectra, with squarks the next-to-lightest sparticles. We would be interested in exploring whether models can exist with additional light sparticles but retaining the double-invisible phenomenology.
It is fairly clear that for two-body decays to be suppressed, the gauginos must be heavier than the squarks. As discussed in Sec. II, for light squarks (mq ∼ 500 GeV), the gluino must satisfy mg 2.5 − 3 TeV. Such a separation between squarks and gluinos is most natural in the context of Dirac gauginos, where the loop corrections to the squark masses squared are "supersoft", or finite to all orders [35]. Moreover, in this scenario the gluino t-channel contribution to squark pair-production is suppressed [37,38], further reducing limits on squark production. Because Dirac gauginos seem to provide the natural basic framework in which such phenomenology is viable, we shall focus our model building efforts there.
We add to the MSSM Lagrangian terms where W α = θD is an effective D-term spurion (which may arise from the D-term of a hidden sector U (1) or from a composite vector D 2 D α X † X = θF 2 ). We assume the first term provides the dominant contribution to the Bino mass. Note that while we have included a mass term for X, the vev for S induced after EWSB will generate a small X mass in the absence of an explicit mass term. Note that we use ∼ to denote the R-parity odd state here, but there is a choice whether that is the scalar or fermion state (or, equivalently, whether to expand the definition of R-parity to include the X-charge).
Assuming sleptons are kinematically accessible, the partial width for leptonic decays scales as Γq →qll ∝ g 4 Y m 5 q /m 4 B , while the double-invisible decay scales as The different scaling is due to the fact that the Dirac mass insertion on the Bino propagator flips to a right-handed state that has no couplings to SM leptons [39]. Consequently, the branching ratio to charged leptons will fall as Br(q → qll) ∼ (g 2 Y m 2 q )/(y 2 m 2 B ) and will be sufficiently suppressed for mB O(TeV) and y ∼ O(1), allowing the doubleinvisible phenomenology to dominate.
A. Displaced Scenarios
If squarks are the next-to-lightest R-parity odd superpartners (X being the LSP), another intriguing possibility arises, namely that of displaced decays. Since the decay arises from a higher dimension operator, displaced decays can be quite natural.
Rather than decaying the squarks through the Bino portal as above, one can consider the Higgs portal, by adding to the MSSM Lagrangian the terms The decayq → qXX will proceed either via mixing with the Higgsino (and thus with an amplitude proportional to y, λ, and the fermion's Yukawa, y f ) or via the Bino through its Higgsino mixing, and thus proportional to y, λ and m Z /mB. This raises the possibility that the squark decay will be displaced. The phenomenology will be similar to that in "mini-split" scenarios [14,16], where the gluino decays through a dipole operator to a gluon and a neutralino. Here, however, such signals arise at a lower energy scale, and the cross section magnitude is set by squark pair-production, rather than gluino pairproduction.
B. N-Invisible SUSY
While we have focused so far on double-invisible SUSY, it is straightforward to extend the scenario to a multibody decay with N invisible final-state particles. As multibody decays are inevitably from higher dimension operators, the displaced scenario is much more likely here. Putting that aside for the moment (assuming the intermediate states are sufficiently light to allow prompt decays), we can consider a modification to the above model with the additional terms If the decayq → qXX is kinematically forbidden because, say, the scalar X is too heavy, then the decaỹ q → qXY Y will be the only allowed one, realizing a "triple-invisible" scenario.
Admittedly, this particular model realization is somewhat contrived, and adding additional fields to achieve four and five invisible particles in the final state may be more so. Nonetheless, these are still logical possibilities and warrant a recast of existing analyses, if not dedicated analyses.
IV. CONCLUSIONS
The successful run of the LHC at 7 and 8 TeV has significantly constrained a large number of scenarios for physics beyond the Standard Model. In particular, most conventional SUSY models are tightly constrained unless the majority of colored particles are above O(TeV).
Such limits can be dramatically alleviated in "doubleinvisible" supersymmetric scenarios, in which squarks 3-body decay into a quark and two invisible particles, rather than a single neutralino. Such scenarios are natural if the LSP carries a new conserved quantum number (or parity) such that it must be produced with an Rparity even partner.
In those scenarios, the total energy carried away by the invisible particles is increased, diluting the visible energy in the final state. While a (naive) paradox, this increased invisible energy decreases the measured missing energy, thus lowering the sensitivity of existing searches to squarks decaying double-invisibly. In particular, our recasts of the existing ATLAS and CMS searches indicate that for m LSP 160 GeV and mg 3 TeV, (un-flavored) squarks, sbottoms and stops lack any robust LHC constraints (in large contrast with the strongly constrained parameter space of their single-invisible counterparts). Non-trivial limits still hold for lighter LSP masses, m LSP 160 GeV, though substantially reduced. This goes counter to the conventional wisdom that colored particles decaying into jets+MET are tightly constrained, unless a kinematical tuning suppresses the missing energy. At a minimum, this warrants a proper analysis of these scenarios by the ATLAS and CMS collaborations and, should those be as weak as our study suggests, dedicated searches should be performed taking into account the modified kinematics. We emphasize that the two limiting cases (m X = 0 and m X = mX ) have no additional parameters beyond the usual simplified models of squarks and neutralinos, making a thorough study viable. Models with "multi-invisible" phenomenology can be constructed easily, but in particular find a natural home with Dirac gauginos. While the Dirac gaugino framework has its own issues [35,[40][41][42], the possibility of light squarks and a genuinely "natural" weak scale remains, motivating further study.
Regardless of whether the phenomenology presented in this Letter is realized in nature, it highlights the importance of not assuming that the few-hundred GeV scale has been thoroughly explored for colored particles. Especially as the LHC moves on to even higher energies, it is essential to remain critical of existing searches to make sure some subtlety has not caused us to miss New Physics under our noses. [4], which was originally interpreted in thet → bχ + topology, and for which reason we do not display our validation limits. | 4,699.8 | 2013-12-17T00:00:00.000 | [
"Physics"
] |
The use of membrane technologies in boiler houses of heat and power enterprises
. The article is about the use of membrane technologies in water treatment cycle for boiler houses which use water of superficial reservoirs for operation. Working process of a flat membrane element is described in the article. Movement scheme of purified water streams is introduced. The layer of undesirable deposits which are slowing down membrane operation is formed on membrane surface during its operation. This formed layer is a result of the arising concentration polarization. Impact of low-frequency fluctuations on saturation water stream is offered to decrease concentration polarization which leads to decline in production of membrane devices. Fluctuations were brought in a stream by means of the vibrator. Steady standing waves are in the membrane device as a result of vibration effect. These waves reduce the formation of undesirable deposits layer on a membrane surface. Results received by mathematical modeling and confirmed by experimental studies state that process productivity of ultrafiltration increased by 30% in frequency range of 60-70 Hz. Frequency range is in standing wave.
Introduction
Boiler-houses' water treatment in various enterprises is an important technological process allowing to ensure effective functioning of boilers. This is a guaranty of their accident-free operation and other thermal equipment of the boiler house during all production cycle in heating season. Now the majority of boiler houses have material expenditures on water treatment at the expense of annually growth tariffs for water use, deterioration in water quality indicators in sources suitable for industrial use, standards' toughening for quantitative and quality indicators for dumped drains, requirements' increase of conditioned water quality used in technological cycle. [1] Various artificial water intakes or natural sources such as rivers, ponds are sources of water pumped to the boiler house. The main negative characteristic of these waters is their impurity by various mechanical inclusions, low and high-molecular connections, rigidity salts. Impurity of water is changing according to source type. Surface water has more mechanical inclusions, high-molecular connections (HMC), low-molecular connections (LMC) and has less hardness. Hardness of surface water is subjected to noticeable seasonal fluctuations. Maximum value is usually at the end of winter and minimum value is in high water period. In this period it is diluted with soft rain and melt water.
Composition of ground waters is characterized by high content of salts, little content of HMC and LMC and by absolutely low content of mechanical inclusions.
The following shortcomings are noted in water treatment system of heat and power enterprises of Russia: [2] -extremely rare modernization of water purification filters of boiler houses; -softening of water for boilers is carried out by ineffective outdated ion-exchange material such as sulphated coal; -low qualification of staff in boiler houses; -depreciation of filters in boiler houses. Different versions of water treatment schemes are used for water preparation of boilers with various powers. The scheme can include the following operations depending on quality of initial water and feed water criteria: -preliminary water purification (decrease in content of organic substances, suspensions, iron, if necessary reagent softening), -sodium cation exchange, hydrogen cation exchange with "hungry regeneration", parallel and consecutive hydrogen-sodium-cation exchange, -ion-exchange desalting; -desalting by reverse osmosis method; -decarbonization and deaeration; -complex treatment; -correctional treatment for corrosion and sediment's prevention, and for hardness scaling; -other methods and their various combinations. Membrane technologies are basis of modern approaches for water treatment in boiler houses. These technologies allow excluding a number of above-mentioned operations and this leads to decrease in material expenditures for receiving water of required quality.
Theory and experimental methods
Modern technologies allow creating a wide range of membranes which have good mechanical, thermal and chemical properties. Membrane devices and installations are also various. They are getting better and better from year to year.
However, there is one problem when using membranes of any type. This problem is formation of deposit layer on membrane surface. This layer is slowing down or completely stops membrane operation.
A formed layer of deposits is a result of concentration polarization. Preliminary purification of solutions which is carried out before membrane separation leads to increase in hardware registration of a production cycle and consequently also leads to increase in prime cost of an output product.
There are ways for prevention or decrease in impact of concentration polarization process. These ways are generally connected with design features of devices. [3] So research of various constructive decisions and processing methods of impact on membrane processes for the purpose of their intensification is up to date.
Flat membrane devices are widely used in water treatment processes. These devices are simple in maintenance. Their advantage is the possibility of replacement only of that module where membrane is damaged. Replacement of all membrane surface is necessary if membrane surface is damage in devices of other types. Figure 1 shows membrane operation of flat membrane device. Fabric substrate (2) is tensed on a mesh framework (1) of membrane element. Membrane (3) is above membrane substrates. It is put to an active layer of saturation solution.
Similar elements can have various positions in flat framework devices. They can be located in parallel order, in chessboard order and etc. Let's describe the mechanism of flat membrane element operation. Figure 2 shows the scheme of streams' movement which is formed when the device is operating Purified water stream moves in above membrane space (1). It moves with Wx speed in the mode of ideal replacement. Purified water streams come to submembrane space (4) through the top and low membranes (2). Streams mix up and with Wy speed a general stream moves to an orifice (5). And it is taken away from the device through this orifice.
Mathematical model specification of two-component saturation solution is described as multiphase continuous mixture. Saturation solution consists of liquid phase and mechanical inclusions. These inclusions are granules of various sizes. This is a mixture with variable concentration. A segment of intermembrane channel is studied in flat 0Х1 Х2 coordinate system. The mix having concentration х1 moves to the allocated area through А1В1 and C1D1 and comes out through АВ and CD with х2 concentration. Liquid component of the solution partially passes through semi permeable membrane ВВ1 and СС1. And it is removed on permeable line ВС if membrane thickness is δ → 0. As pore size of d membrane is much less than the sizes of mechanical inclusions there is deposits' layer formation of a disperse phase under ultrafiltration. This interferes liquid phase passing through the membrane (fig. 3).
Differential equation system of multicomponent medium was used to simulate ultrafiltration problem of two-component solution [4]. The first two equations are respectively laws of momentum and mass conservation. The third equation is rheological. It states connection between stress tensor and strainrate tensor, average volume of solution concentration during the separation process and temperature mix. L =L/mL is volume concentration of L material (L = 1,2); m1 is liquid density; m2 is material density of dispersed phase particles; РL is stress tensor; DL is solution strain rate; For a flat problem in the projection on 0Х1Х2 coordinate system axis, equations for the liquid phase will be as follows [13]: For the second phase of solid inclusions on 0Х1Х2 coordinate system axis, these equations will be as follows: Inflow ratio γ is changed due to membrane pollution. Inflow ratio of the first material is inversely proportional to inflow ratio of the second material. Analyzing the data obtained during the experiment, we came to a conclusion that inflow ratio of a liquid phase of twocomponent solution depends on density of deposits' layer on a membrane. Inflow ratio decreases with the increase in thickness of deposits' layer.
Such mathematical transformations confirm surface increase of membranes in a device. This can be done by means of various layouts which use membrane elements in a module. It should be done for more efficiency of separation process.
Increase of membrane elements space, their form modification, use of various arrangement schemes assumes creation of an additional stream turbulization in the membrane device. This leads to unstable standing waves formation which can destroy deposits' layer.
Experimental study
An additional processing method is necessary for steady standing waves creation. This method allows keeping stationary mode of their operation for a long time without design change of membrane devices.
Such processing method is vibration impact on the separation solution. [5] Introduction of resonance frequency vibrations into the separation solution allows decreasing negative impact of the formed deposits' layer on ultrafiltration process productivity.
Water flow moved to intermembrane channels of the device under excessive pressure. Pressure pulsations and mass-flow rate perturbation are introduced into separation solution by vibrating device. Pulsations have maximum amplitude and resonant frequency. These parameters are chosen experimentally on membrane productivity correlation on a permeata from vibrations' frequency. Figure 4 shows the scheme of experimental installation. Electrodynamic vibration generator was installed to study influence of vibrations on ultrafiltration. The generator introduces separation vibration stream. Power and control system provides control of key parameters defining reproduced vibration. In this case key parameters were amplitude and frequency changes of alternating current in moving coil of vibration generator 5.
Electric fluctuations generator of sound frequency range was used for power supply of vibration generator. Applied frequency range was 5-5000 Hz. Vibration impact on circulating solution from electrodynamic generator was transmitted through a special vibrotransferring knot 4. Rod vibration was done by means of sinusoidal or harmonious law. Rod vibration is mechanical oscillations of certain frequency and amplitude.
Fluctuations' intensity was determined by acceleration amplitude ratio to gravity acceleration where is angular frequency (rad / s). It can be expressed through an oscillation period T (s) and oscillation frequency f (Hz).
Results and discussions
After mathematical modeling and experimental research it is possible to say that ultrafiltration process productivity increases by 30% in frequency range of 60-70 Hz. And productivity peak is at 67 Hz. ( figure 5). [5] According to these results we came to a conclusion that vibrations brought into a stream of purified water promote intensive turbulization increase of the stream which destroys a layer of formed deposits. where ∆V is oscillating speed amplitude. It leads to standing wave formation which destroys undesirable deposits which are formed on the membrane surface. The use of membrane technologies with vibration impact on cleaned water stream which passes through membrane device can increase operation of boiler houses. These technologies should be used in the system of water treatment in boiler houses using surface water for their operation. Saved funds for various reagents and processing equipment acquisition can increase economy of an enterprise operation and improvement of ecological indicators. | 2,476.6 | 2019-01-01T00:00:00.000 | [
"Engineering"
] |
Synthesis of rigidified flavin–guanidinium ion conjugates and investigation of their photocatalytic properties
Flavin chromophores can mediate redox reactions upon irradiation by blue light. In an attempt to increase their catalytic efficacy, flavin derivatives bearing a guanidinium ion as oxoanion binding site were prepared. Chromophore and substrate binding site are linked by a rigid Kemp’s acid structure. The molecular structure of the new flavins was confirmed by an X-ray structure analysis and their photocatalytic activity was investigated in benzyl ester cleavage, nitroarene reduction and a Diels–Alder reaction. The modified flavins photocatalyze the reactions, but the introduced substrate binding site does not enhance their performance.
Introduction
Flavins are redox-active chromophores [1][2][3][4][5][6] and represent one of the most abundant classes of natural enzyme co-factors [7][8][9]. Recently, the photo redox properties of flavins have been used to catalyze chemical reactions . A general drawback of photochemical processes in homogeneous solution is the limited preorganization of the reactants and the chromophore, which may lead to low selectivities and slow conversions in diffusion controlled reactions. To overcome this problem, Kemp's acid [31] derivatives have been used as sterically defined templates enhancing the efficiency and selectivity of photoreactions [32][33][34][35][36][37][38][39][40][41][42][43]. Flavins with geometrically defined substrate binding sites have not been reported so far and we expected that the close vicinity of substrate and flavin should enhance the rate of photoinduced electron transfer processes, which strongly depend on distance [44]. We present here the synthesis of geometrically defined flavin-guanidinium ion conjugates based on a Kemp's acid skeleton (Scheme 1). The guanidinium moiety should serve as a hydrogen bonding site for oxoanions or carbonyl groups [45][46][47][48][49]. The structure of the new flavins was determined in solid state and in solution and their photocatalytic properties were tested.
Synthesis
The synthesis of the potential photocatalysts 1 and 2, consisting of the flavin chromophore, the guanidinium substrate binding site and a Kemp's acid derived rigid linker, starts from Kemp's acid anhydride (5) [50][51][52]. The anhydride 5 was allowed to react with previously prepared flavins 4 and 8 [21] in the presence of DMAP as catalyst. The amide formation of the carboxyl group with Boc-protected guanidine was achieved using standard peptide coupling conditions. Boc-deprotection with hydrogen chloride in diethyl ether yielded the guanidinium chloride salts 1 and 2 (Scheme 2). The guanidinium salts are soluble in water and methanol, but also in chloroform and acetonitrile.
Structural investigations
The structure of compounds 1, 2, 6, and 9 was examined in the solid state and in solution. Figure 1 shows the X-ray crystal structures of 6 and 9. The planar flavin chromophore is turned outward relative to the Kemp's acid. Intermolecular π-π-interactions between the flavin heteroarenes are observed.
The structure of compound 1 in the solid state ( Figure 2) shows an almost identical orientation of the flavin group to that of the acid 6. The acyl guanidinium ion group is almost planar and in a parallel orientation relative to the Kemp's acid imide group. 2-D NMR spectra of compounds 1 and 2 revealed several NOE contacts, but the flexibility of the molecule did not allow the determination of preferred conformations.
The most stable conformer of compound 1 in the gas phase was determined by computational methods (semi-empirical AM1, Spartan program package, Figure 3, see also Supporting Information File 1) [53]. In this structure the flavin is turned towards the guanidinium ion forming a hydrogen bond between the flavin carbonyl oxygen atom and the guanidinium moiety (distance ~2.1 Å). However, simple gas phase calculations overestimate the effect of hydrogen bonds [54][55][56][57][58] and in solution the flavin chromophore is expected to rotate freely around the C-C single bonds of the ethane linker.
Photocatalytic reactions
Compounds 1 and 2 were tested as photocatalysts in three different reactions and their performance was compared to tetraacetyl riboflavin 3 or compound 8. Dibenzyl phosphate esters are oxidatively cleaved by blue light irradiation (440 nm) in the presence of compounds 1 and 2 (Scheme 3). The acceleration of the reaction in acetonitrile by 1 and 2, bearing a guan-idinium ion binding site with phosphate affinity, is significantly larger ( Table 1, entries 1+2) in comparison to the ammonium salt 8 (entry 3). In water, however, the accelerating effect is not observed (entries [5][6][7][8]. The presence of the photocatalyst is essential in all cases, as the non-catalyzed hydrolysis is slow under the reaction conditions (<5% conversion). In the presence of sacrificial electron donor substrates, such as aliphatic amines, flavins can photoreduce nitro arenes to anilines under blue light irradiation (Scheme 4). 4-Nitrophenyl phosphate was used as a substrate for photoreduction in water and in acetonitrile. The results summarized in Table 2 show that 10 mol% of flavin 2, the same amount of tetraacetyl riboflavin (3) or compound 8 catalyze the photoreaction equally well. The guanidinium ion binding site of 1 and 2 does not lead to a more effective conversion. was probed by UV/vis and emission spectroscopy in acetonitrile and buffered aqueous solution. The emission intensity of the chromophore of 2 decreased slightly in the presence of the anions in acetonitrile indicating a weak interaction. In aqueous solution the presence of the anions did not induce significant changes of the emission properties suggesting affinity constants smaller than 10 3 L/mol.
Photo Diels-Alder reactions in the presence of a sensitizer and light have been described [59][60][61][62][63][64]. Therefore flavins 1 and 2 were tested as catalyst for the cycloaddition of maleimide to anthracene in toluene (Scheme 5). Table 3 summarizes the results. A significantly higher yield of the cycloaddition product was obtained after 8 h at 40 °C in the presence of compound 2 (entry 3), if compared to the control reaction (entry 6). Upon irradiation with blue light the yield after 8 h reaction time increased further (entry 2) and was significantly higher as in the absence of a photocatalyst (entry 5). However, a comparison with tetraacetyl riboflavin (3) under identical reaction conditions showed an even more pronounced acceleration of the reaction (entry 4). Blue light irradiated flavins accelerate the anthracene maleimide cycloaddition significantly, but flavins 1 and 2 do not provide additional benefit if compared to tetraacetyl flavin 3.
Conclusion
We have prepared new flavin derivatives that bear an acyl guanidinium group, which is linked to the chromophore via a rigid Kemp's acid spacer. The connectivity and expected relative geometry of 1 and of the carboxylic acids 6 and 9 was confirmed by X-ray structure analysis. Guanidinium cations are known to bind oxoanions, such as phosphates, via hydrogen bonds. Therefore a benefit to the photocatalytic activity of 1 and 2 was expected, as the binding site could keep reaction substrates in close proximity to the redox active chromophore, facilitating photoinduced electron transfer processes. Initial exemplary photocatalytic experiments showed that flavin-derivatives 1 and 2 catalyze oxidative benzyl ether cleavage, nitro arene reductions and Diels-Alder reactions. However, no significant gain in photocatalytic performance by the guanidinium ion substrate binding site was observed in comparison to flavins lacking the binding site and the rigid Kemp's acid skeleton. The primary interaction between the aromatic substrates and the heteroaromatic flavin chromophore seems to dominate the formation of the substrate-catalyst aggregate. Hydrogen bonds between the substrate and the acylguanidinium group are not decisive for their interaction. The rigidity of the Kemp's triacid skeleton is not effectively transferred in 1 and 2 to the relative flavin-guanidinium ion orientation, which is due to the flexible ethane linker between imide and flavin. Derivatives with a more constrained conformation of the flavin chromophore and the substrate binding sites may lead to chemical photocatalysts with better performance.
Experimental General
The , 1 H, N-H | 1,764 | 2009-05-28T00:00:00.000 | [
"Chemistry"
] |
Correlates of support for international vaccine solidarity during the COVID-19 pandemic: Cross-sectional survey evidence from Germany
During the COVID-19 pandemic, many residents of high-income countries (HICs) were eligible for COVID-19 vaccine boosters, while many residents of lower-income countries (LICs) had not yet received a first dose. HICs made some efforts to contribute to COVID-19 vaccination efforts in LICs, but these efforts were limited in scale. A new literature discusses the normative importance of an international redistribution of vaccines. Our analysis contributes an empirical perspective on the willingness of citizens in a HIC to contribute to such efforts (which we term international vaccine solidarity). We analyse the levels and predictors of international vaccine solidarity. We surveyed a representative sample of German adults (n = 2019) who participated in a two-wave YouGov online survey (w1: Sep 13–21, 2021 and w2: Oct 4–13, 2021). International vaccine solidarity is measured by asking respondents preferences for sharing vaccine supplies internationally versus using that supply as boosters for the domestic population. We examine a set of pre-registered hypotheses. Almost half of the respondents in our sample (48%) prioritize giving doses to citizens in less developed countries. A third of respondents (33%) prefer to use available doses as boosters domestically, and a fifth of respondents (19%) did not report a preference. In line with our hypotheses, respondents higher in cosmopolitanism and empathy, and those who support domestic redistribution exhibit more support for international dose-sharing. Older respondents (who might be more at risk) do not consistently show less support for vaccine solidarity. These results help us to get a better understanding of the way citizens’ form preferences about a mechanism that redistributes medical supplies internationally during a global crisis.
Introduction
In an increasingly globalized world, pandemics have the potential to spread to far more people than in the past. High levels of vaccination across the globe will be key to containing a number of them [1,2]. However, some countries are better equipped than others to vaccinate their population. During the COVID-19 pandemic, we saw a situation in which many residents of high income countries (HICs) were eligible for COVID-19 vaccine boosters, while many residents of lower income countries (LICs) had not yet received a first dose. HICs were able to secure contracts for a disproportionate amount of vaccine supply. A new normative literature emphasizes the importance of international vaccine solidarity [1,3,4]. The development of international vaccine transfer initiatives such as the COVID-19 Vaccine Global Access (or COVAX) represents an important step in trying to address these disparities in practice [e.g . 5].
A key component of HICs being able to contribute to schemes that redistribute vaccines internationally is marshaling public opinion in favor of providing it.
Our analysis contributes to the empirical literature which focuses on levels of public support for international vaccine solidarity and the factors that shape citizens' preferences [6][7][8][9][10][11]. We add empirical evidence by investigating the correlates of directing vaccine supply abroad. This is important to understand where public support and opposition to such schemes is coming from. This has policy implications for situations that require a process in which medical supplies are redistributed internationally. Germany is an interesting country case because it is one of the largest donors of foreign aid [12]. It has a robust welfare system with moderate levels of redistribution as well as support for domestic redistribution [13,14].
One of the central issues we shed light on is whether attitudes in HICs towards sending COVID-19 vaccines to LICs have the same etiology as attitudes toward foreign aid. Citizens could treat sending vaccines as fundamentally different from foreign aid. If a pandemic is raging throughout both the donor and recipient countries, the decision to deliver life-saving vaccines may appear unusually zero-sum and therefore rather unlike foreign aid (funds) which are often a small share of a nation's budget. Any vaccine that goes into the arm of a low-income country recipient is not going into the arm of a recipient in one's own country. Yet, some citizens already seem to see foreign aid as a zero-sum game. For instance, countries lacking robust welfare systems contain people who both want increased domestic spending and decreased allocation to foreign aid [15]. This implies that wanting one means discounting the other. If people already see foreign aid as zerosum, the correlates of support for sending vaccines abroad will be fairly similar. People are also more willing to give foreign aid when they see their country as in a better position to lose resources [16]. An unequal international vaccine distribution might be approached by voters as a special and particularly acute case of global inequality, making it likely for citizens' views to be driven by similar factors. We assume that this is the case even if foreign aid might invoke monetary transfers more directly than the redistribution of vaccine doses.
Existing work on support for foreign aid finds a few empirical regularities. For instance, when the public sours on foreign aid, foreign governments tend to invest less in it [17]. This happens both as a result of governments taking cues from voters and changes in party control that come from elections [18,19]. People with the capacity to trust, identify with, and otherwise empathize with potential aid recipients tend to support aid more [20][21][22][23]. This is especially the case if the recipient seems "deserving" of that aid, whether due to material need or stereotypes about the recipient's agency [22,24]. Additionally, as a consequence of low numeracy, people tend to overestimate the level of foreign aid their country gives [25] and underestimate how good of a position their countries are in to give aid [16]. Once they know how little their country spends on foreign aid relative to its resources, people tend to be more supportive of foreign aid [16,25,26].
In recent years, both surveys as well as conjoint experiments have attempted to describe and untangle factors underlying vaccine solidarity preferences. Surveys have generally found overall support to be high; a plurality of people in a wide range of HICs support various international vaccine solidarity schemes aimed at LICs, albeit there is important variation [6,10,11]. Indeed, some studies even found preference for vulnerable populations in LICs over the respondent's co-national recipients [9]. On the other hand, though, some studies have shown Germans in particular to display a preference for sharing schemes that include only HICs [8] or co-national recipients [9]. Preferences also appear to differ along ideological lines and related orientations [7,11], with leftwing orientations being associated with higher support than rightwing ones in the US and Germany (but see [8]). Lastly, it seems self-interest may play a role [9], especially in terms of older respondents exhibiting less support for redistribution [9,11].
We tested a set of pre-registered hypotheses on the correlates of international vaccine solidarity using a large, nationally representative panel survey in Germany. The survey was fielded before an increase in infections in the fall of 2021, though at a time when the supply of booster shots was limited [27,28]. Our results demonstrate that public attitudes towards international vaccine solidarity are consistent with broader views concerning (global) inequality. We found that support or opposition to sharing vaccine doses with LICs is associated with similar factors to those predicting redistribution and foreign aid preferences [e.g. 15,23,29]. That is, citizens seem to understand the question of unequal COVID-19 vaccine distribution as a specific, critical instance of the broader question of inequality in the distribution of economic resources rather than a unique phenomenon.
Hypotheses
What shapes redistribution attitudes more broadly? Research suggests worldviews, personality traits, and self-interest each likely contribute [22,[30][31][32][33]). Delton et al. [30], for instance, identify a mixture of ideology, compassion, and self-interest as key determinants of public support for such policies. Similarly, we would also expect these broad set of factors to predict preferences related to an international redistribution of vaccines. Here, we specify which particular political orientations, worldviews, personality traits, and markers of self-interest we expect to play a role. We present a series of hypotheses regarding factors associated with international vaccine solidarity below (pre-registration link: https://osf.io/4umzv).
Political orientations and worldviews
Research in political economy sees citizens' views on domestic redistribution as an expression of their personal income situations. Those with no or low incomes (i.e., who stand to benefit from redistribution) prefer more redistribution than those with high incomes (who stand to lose out financially from redistribution) [34]. However, citizens' preferences for redistribution can also or in part be an expression of their considerations about inequality [15,35]. Since those who support domestic redistribution prefer a more equal distribution of resources, we expect them to be more supportive of redressing current global inequalities in vaccine distribution, even if this does not have any bearing on their personal income level (in our preregistration, we use the term ideological positioning in the context of this hypothesis.) We also test this logic with a more general measure for citizens' political positions, namely their left-right selfplacement (but we do not insert both items into the same model to avoid multicollinearity). We expect that how people position themselves on a broader left-right political spectrum would be associated with support for vaccine dose sharing: those further to the left will tend to be more supportive.
Similarly, cosmopolitanism strongly shapes attitudes towards redistribution at the international level. Cosmopolitans generally see themselves as citizens of the world [36,37]. People tend to allocate resources to in-group members more generously than outgroup members [38]. This matters also in the context of COVID-19, e.g. in a situation of scarce vaccines, citizens prioritise natives over immigrants [39]. Cosmopolitans are more likely to include people outside their country as ingroup members. This is because they appreciate "other human beings irrespective of their national origin" [36, p. 1762]. Hence, cosmopolitans are more willing to redistribute resources to countries in need, including poorer EU member states [36,[40][41][42] and poorer countries in general [23,43]. As such, we expect cosmopolitans to be more supportive of international dose sharing.
Empathy and self-interest
We expect that the psychological trait of empathy is linked to support for international dosesharing. Empathy is characterised by experience, understanding, and interest in the feelings or welfare of other people [22,44]. Higher empathy predicts higher support for foreign aid [22]. Therefore, we expect higher empathy to predict higher support for international vaccine solidarity.
Conversely, though, we pre-registered a hypothesis on the role of age. We believe age may function as a proxy for self-interest regarding COVID-19 vaccinations. Sharing doses with other countries limits the number of doses available domestically. It stands to reason that, to the extent that citizens factor in their own self-interest in forming their attitudes, those who stand to benefit most from vaccination will be least likely to support sending those vaccines overseas. Restricting the domestic vaccine supply might be seen as particularly risky among populations vulnerable to COVID-19, particularly older citizens [45,46]. Therefore, we expect older citizens to oppose international vaccine solidarity.
Research questions
In addition to these main hypotheses, we also explore an additional set of pre-registered research questions. First, we examine an additional worldview that has particular relevance to vaccine attitudes-conspiratorial thinking. Conspiratorial thinking captures an individual's propensity to assume conspiratorial intent behind various events and policies. Citizens high in conspiratorial thinking tend to be more vaccine-hesitant [47]. Since vaccine hesitancy captures perceptions about the effects of vaccines for people in general, one might not expect a link between conspiratorial thinking and attitudes towards international dose sharing. However, conspiratorial thinking is concomitant with higher skepticism towards international organisations [48], which may preclude support for international collaboration regardless of personal beliefs about vaccines. We thus consider the possibility of an association between conspiratorial thinking and international vaccine solidarity.
Lastly, we examine a potential interaction between cosmopolitanism and empathy [31,33]. The discussions above point to the possibility that the role of empathy might be dependent on the role of cosmopolitanism, and vice versa. Even though empathy predicts higher support for foreign aid, vaccines, unlike foreign aid, can benefit both the in-group and the out-group. Even highly empathetic people can choose to withhold their empathy from out-groups in favor of the ingroup, driving polarisation in perceptions of opposing partisans [49]. One such ingroup can be the national community. Therefore, among those lower in cosmopolitanism, who are less apt to count those in other countries as in-group members, empathy may lead to lower support for international dose-sharing. Conversely, among cosmopolitans that count people in other countries as in-group members, empathy may not lead to this kind of parochialism. We explore this possibility.
Materials and methods
We conducted a two-wave online survey in Germany (Wave 1: September 13-21, 2021, N = 2,801; Wave 2: October 4-13, 2021, N = 2,019). In the second wave, respondents were 50.22 years old on average (SD = 17.07), 51% female, and 26% university educated. Ethical approval for this study was obtained from a UK Russell Group university (approval ID 489681). Informed consent was recorded before participants began the survey. Respondents were shown information about the study on an introductory screen that ended with the following statement: I voluntarily agree to participate and to the use of my data for the purposes specified above. Respondents expressed written consent by selecting an "I agree to participate" button. The authors did not have access to information that could identify participants. Questions relating to vaccine solidarity were asked in wave two. The survey fieldwork was conducted by YouGov with a representative sample. All results reported below were based on a weighted sample, using weights provided by YouGov. The composition of the unweighted sample can be found in S1 File.
Measures
Our outcome measure was a variable that measures attitudes towards vaccine solidarity. Given the absence of an established measure when we prepared our survey questionnaire, we developed the following new question: "Coping with the COVID-19 pandemic requires difficult decisions. By the end of September, about 64 percent of people eligible for vaccination had been vaccinated at least once. What do you think is the more important priority now for the use of Germany's vaccine stocks: offering a third vaccine dose ("booster vaccination") to people in Germany or giving vaccine stocks for first and second vaccine doses to less developed countries?" Our survey was fielded when the number of daily COVID-19 cases was decreasing. This was also at a point well before the Omicron variant was discovered. This is important to note, because the trade-off that respondents faced was one between improving protection against COVID-19 (for themselves or others) or some initial protection for individuals in LICs. This was obviously different from a situation in which cases were rising or when a new variant spread, which might have meant that two doses offered little protection and a third dose was more essential. Nevertheless, the supply of booster shots was still limited both within Germany and much more so globally [27,28]. Booster shots were only available to larger groups of the population later in Germany. We believe however that these issues would affect baseline levels of public support for international vaccine solidarity, though, not the factors that shape citizens' views (which is the focus of our analysis).
We measured ideological orientations in two ways. Our main measure (support for domestic redistribution) asked respondents whether the government should do more to reduce income inequality on a 5-point scale from 1 (strongly agree) to 5 (strongly disagree) [50,51]. Our secondary measure required respondents to place themselves on an 11-point scale from 0 (very left) to 10 (very right).
To measure cosmopolitanism, we asked respondents whether they believe that globalization threatens Germany's identity on a five-point scale from 1 (strongly agree) to 5 (strongly disagree) [52]. We also used immigrant sentiment and authoritarianism as alternative measures of cosmopolitanism in S1 File.
To measure empathy, we assessed respondents' agreement with an item from a common empathic concern scale [53], with responses ranging from 1 (strongly agree) to 5 (strongly disagree): "When I see a person being taken advantage of, I want to protect them".
We divided respondents into four age groups: 18-24, 25-44, 45-54, and 55+. The youngest cohort was our reference category in all models, but we examined the robustness of findings to changes in how we coded age.
To measure conspiratorial thinking, we assessed respondents' agreement with three items [54] and a 5-point scale from 1 (strongly agree) to 5 (strongly disagree). An example item is: "Much of our lives are being controlled by plots hatched in secret places".
All question wordings and coding decisions (also for the control variables gender, education, partisanship, and social class) can be found in S1 File.
Descriptive results
Overall, we find that 48 percent of respondents prioritized giving available doses to citizens in LICs, 33 percent preferred these doses to be used as boosters domestically, and 19 percent of respondents selected the "don't know" category (after employing weights).
We tested our hypotheses using logistic regression models. While our pre-registration specified OLS regressions, we relegated these to S1 File (see S3 Table in S1 File) in favor of logistic regressions with odds ratios. Table 1 shows results for four models. The leftmost column shows our main model (model 1). Each of the other models built on the main model save for making a single change. In the second model, left-right self-placement was used as a measure of ideological orientation instead of support for domestic redistribution. In the third model, there was an added cosmopolitanism x empathy interaction. The fourth model added controls for party affiliation (which other literature has linked to citizens' views on international vaccine solidarity [7,11]).
In line with our hypothesis, we found that empathy predicts higher willingness to share doses internationally (OR = 1.272, SE = .069, p < .001). We also found that cosmopolitanism and support for domestic redistribution predict higher support for international dose sharing (cosmopolitanism: OR = 1.279, SE = .061, p < .001; support for domestic redistribution: OR = 1.307, SE = .067, p < .001). These results held also when we used attitudes toward immigrants (OR = 1.250, SE = .050, p < .001, see S9 Table in S1 File) or authoritarianism, a reverse measure (OR = 0.582, SE = .090, p = .003, see S11 Table in S1 File), instead of cosmopolitanism.
In our main model, those above age 55 showed less support for vaccine solidarity (OR = 0.581, SE = .363, p = .024). However, this finding was not robust to controlling for ideology (OR = 0.788, SE = .481, p = .346). Since the reference category (those aged 18-24) was small (4.5% of the sample), we estimated models using a continuous age parameter and found a negative relationship between older age and international vaccine solidarity (ps.010, see S12 Table in S1 File). We found the same when we re-estimated the model with a single indicator for whether respondents were aged 55 and above (ps.023, see S13 Table in S1 File). Therefore, while we mostly found support for our hypothesis on the role of age, future research should examine whether or not personal risk is a persistent predictor that decreases vaccine solidarity, and how perceptions of personal risk are associated with age.
In only one of our models did we find that conspiratorial thinking predicted higher support for international dose-sharing. However, in most specifications, it was not systematically related to international vaccine solidarity (ps.094). We examined if the role of empathy was conditional on the community conception of respondents-that is, whether the effect differed between non-cosmopolitans and cosmopolitans. The interaction between cosmopolitanism and empathy was not significant (p = .491). Empathy's effect on support for international dose-sharing was positive and significant at all levels of cosmopolitanism. Cosmopolitanism's effects on international dose-sharing were not significant at the lowest levels of empathy (p = .217), but were significant at all other levels of empathy (ps < .038). We treat these results with caution given the low number of respondents at the lowest levels of empathy (n = 36).
Support for international vaccine solidarity
Finally, we also examined the role of party identification but we did not register hypotheses on these associations. Respondents who identified with the Social Democrats (SPD; winner of the election prior to our data collection) were the reference category. We found, relative to SPD supporters, that respondents who identified with the conservative CDU showed less support for international vaccine solidarity (OR = 0.610, SE = .087, p = .003). Individuals who identified with the Green Party, meanwhile, showed more support for dose sharing (OR = 1.721, SE = .291, p = .008), reinforcing results found in related work [7].
Our control variables (education, gender, and social class) were not associated with support for international vaccine solidarity.
As per our pre-registration, our analysis focused on the factors associated with support for or opposition to international vaccine solidarity, and therefore we excluded respondents who selected the "don't know" category in the main analysis. We used listwise deletion in case of missing data. To verify the robustness of our results, we also conducted OLS and multinomial logit analyses that tested whether our results held when respondents were not discarded who selected the "don't know" category of our main outcome. Results were substantively identical, and can be found in S1 File, where we also show the factors associated with a "don't know" response. Additional sample and questionnaire details are also available in S1 File.
Discussion
In the context of the COVID-19 pandemic, a new normative literature and public debate revolved around the willingness of citizens in HICs to donate vaccines to LICs. Our analysis contributes empirical insights on support for international vaccine solidarity and the characteristics of individuals who exhibit more vaccine solidarity. We see this as an instance where aid decisions might be perceived as zero-sum: any vaccines that go into the arms of people in other countries would not make it into the arms of people in the country.
Based on a population-based survey in Germany, we found that a plurality prefer sharing doses of the COVID-19 vaccine internationally over keeping them in the host country. This is in line with other recent findings [6,10,11] and highlights that politicians might have some room to manoeuvre and fulfil international vaccine sharing pledges.
Our result is particularly noteworthy given that international vaccine sharing, at the time of the survey, was not a prominent part of public discourse, which was mostly focused on national vaccine uptake. It is also important to note that almost one in five respondents had no view, leaving room for opinions to crystallise. In sum, there seems to be potential for more international vaccine sharing and for elite communication that increases the salience of the issue, which could mobilize further support [7].
We also showed that those individuals who have been found to generally support foreign aid in the literature are more likely to support vaccine sharing in the current context: for instance, just as cosmopolitans are more supportive of foreign aid, they are more supportive of sharing doses with LICs. Moreover, those who score higher on empathy and left leaning citizens are more inclined to support redistributing vaccines internationally. This suggests that the German public, to the extent they think about vaccine solidarity, treated it like a typical foreign aid issue.
Seemingly in contrast to other work that linked views on domestic redistribution to those on international redistribution [15], we found that citizens who support domestic redistribution also tend to support foreign redistribution. However, this may be a product of using a different level of analysis (while aggregate support for foreign aid was used elsewhere [15], we used individual level data). Moreover, other work [41] also found a positive relationship between support for domestic redistribution and international redistribution at least among some voters (at the individual level). The reasoning is that these voters might support redistribution primarily because they want to see a reduction of inequality rather than because they would gain from redistribution personally. This logic could apply also in the case of international vaccine solidarity.
Although we draw on pre-registered analyses of a large, nationally representative panel survey of a notable case (one of the largest donors of foreign aid), this research includes limitations. A noteworthy limitation is that our cross sectional survey only provides a snapshot of citizens' attitudes towards international vaccine solidarity at a particular point in time. Citizens' attitudes on this issue might in fact be very volatile in nature and driven also by context conditions. Our survey was fielded before a major increase of COVID-19 infections in the autumn of 2021 in Germany. It was also fielded before the discovery and spreading of the Omicron variant. Sharing doses internationally is likely to be seen as a different trade-off depending on the infection risks that citizens face and the protection that previous vaccine doses provide. When we fielded the survey, getting a booster shot was less essential for protection (among those who are not in a high-risk group) than in a context in which a new variant is spreading. We believe that our results still include an important message for public policy. COVID-19 infections are apt to rise and fall repeatedly over the long run, new variants are likely to appear, and in fact, we might face pandemics resulting from different viruses entirely. Our findings show that there is substantial public support among citizens to share doses internationally at least when infection rates are at a modest level and falling. Moreover, while levels of support for dose sharing might change as a result of the domestic risk situation, we argue that the factors that shape citizens' views on this issue are likely to remain the same-though the magnitude of their effects is of course likely to vary. This is important for the public debate, as it tells us how citizens understand the topic, where support for this policy comes from, and where opposition is likely to be large.
The role of context conditions has not been the focus of our analysis, but it is undoubtedly an important one. Future research should analyse the volatility of public support for an international redistribution of medical supplies such as vaccine doses and what role domestic infections play for the willingness of citizens to share their medical supply internationally.
Another limitation relates to the measures that we employed in our analysis. We relied on single-item measures for several complex concepts (e.g., support for vaccine solidarity, empathy, cosmopolitanism) rather than multi-item scales. Our robustness checks reinforced our results, but future research could use multi-item batteries in order for the analysis to better capture individual differences. Future research could also collect data on support for foreign aid side by side with support for international vaccine solidarity. Moreover, while we used the literature on public support for foreign aid to derive hypotheses, our study does not test the correlation between citizens' views on both issues.
Finally, it should be noted that the extent to which COVID-19 posed a real or perceived risk and affected citizens is likely to vary in ways beyond the issues we could account for. Individuals who consider themselves to be less at risk or who are less affected-for other reasons-might be more willing to share vaccine shots. Future research might address these factors in a more detailed fashion. | 6,347.6 | 2023-06-23T00:00:00.000 | [
"Economics",
"Medicine",
"Political Science"
] |
The microwave properties of simulated melting precipitation particles: sensitivity to initial melting
A simplified approach is presented for assessing the microwave response to the initial melting of realistically shaped ice particles. This paper is divided into two parts: (1) a description of the Single Particle Melting Model (SPMM), a heuristic melting simulation for ice-phase precipitation particles of any shape or size (SPMM is applied to two simulated aggregate snow particles, simulating melting up to 0.15 melt fraction by mass), and (2) the computation of the singleparticle microwave scattering and extinction properties of these hydrometeors, using the discrete dipole approximation (via DDSCAT), at the following selected frequencies: 13.4, 35.6, and 94.0 GHz for radar applications and 89, 165.0, and 183.31 GHz for radiometer applications. These selected frequencies are consistent with current microwave remotesensing platforms, such as CloudSat and the Global Precipitation Measurement (GPM) mission. Comparisons with calculations using variable-density spheres indicate significant deviations in scattering and extinction properties throughout the initial range of melting (liquid volume fractions less than 0.15). Integration of the single-particle properties over an exponential particle size distribution provides additional insight into idealized radar reflectivity and passive microwave brightness temperature sensitivity to variations in size/mass, shape, melt fraction, and particle orientation.
Introduction
Present methods of passive and active microwave remote sensing of precipitation have a key problem: the uncertainty of the physical and associated radiative properties of iceand mixed-phase snowflakes.In nature, ice particles mani-fest themselves in an extraordinarily diverse variety of sizes, shapes, and habits -ranging from simple crystals such as needles or plates to complex aggregates and rimed particles.
Microwave radiation is sensitive to the presence of liquid water on ice-phase precipitation (e.g., Klassen, 1988), due, primarily, to the large difference in the dielectric constants of the two materials at microwave frequencies.However, the relationship between the early stages of melting and incident microwave radiation has not been well described in the literature.With this in mind, the present research seeks to quantify the sensitivity of microwave scattering and extinction cross sections to the onset of melting (melt fractions less than 0.15), with an emphasis on ultimately improving the forward-model simulation of the physical and radiative properties of realistically shaped mixed-phase precipitation particles.The onset of melting is generally believed to represent the most rapid changes in the scattering and extinction properties of hydrometeors.Bohren and Battan (1982) made direct measurements and simulation of a normalized radar backscattering cross section for spongy ice spheres with various liquid water contents.Most notable is that they show a a rapid change in the backscattering cross section between 0 and about 7 % liquid water volume fraction.Willis and Heymsfield (1989) also noted that the onset of melting represents a critical point where the radar reflectivity increases rapidly.A comprehensive examination of the microwave sensitivity to the entire range of melting is left for future research.
The model described for the first time here, named the Single Particle Melting Model (hereafter SPMM), is a heuristic model designed to provide a basis for simulating the physical description of melting individual ice crystals having an Published by Copernicus Publications on behalf of the European Geosciences Union.
B. T. Johnson et al.: Microwave properties of melting snowflakes
arbitrary shape.By using simple rules and nearest-neighbor interactions, the melting process is simulated with a reasonable facsimile of reality.In SPMM, there are no explicit thermodynamic or physical properties, other than the 3-D shape and the relative positions of liquid and ice constituents.SPMM is an extremely computationally efficient algorithm for creating a series of melted particles ranging from unmelted to completely melted, only requiring several minutes on a normal desktop computer for a single particle.Thermodynamic melting-layer models can easily be employed to determine bulk meltwater generation (Mitra et al., 1990;Olson et al., 2001), and the SPMM-melted particles can be mapped into an appropriate particle size distribution according to the layer-averaged melting properties.
While the general physical properties and thermodynamics of melting snowflakes are fairly well understood, the complex interaction between realistically shaped melting snowflake aggregates and incident microwave radiation has been sparsely examined.Recently, radar properties of melting aggregates at 3.0 and 35.6 GHz have been simulated by Botta et al. (2010).There are no known simulations of realistically shaped melting aggregate hydrometeors at frequencies above 35.6GHz, for either passive or active microwave remote-sensing applications.For dry realistically shaped aggregates, Petty and Huang (2010) compared simulated reflectivities from aggregates and soft spheres at Ku, Ka, and the passive microwave response 18.7, 36.5, and 89.0 GHz, and Kulie et al. (2014) has examined the Ku-, 35,and 94 GHz) response to variations in shape and size, but neither paper consider melting.
In this study, the discrete dipole approximation (DDA), using DDSCAT (Discrete Dipole Scattering) version 7.3 (Draine and Flatau, 1994), is employed to compute the scattering and extinction efficiencies of individual particles with mass Ftmelt fractions ranging from from 0.0 (unmelted) to 0.15 (lightly melted).Due to the significant computational requirements of DDSCAT when liquid water is present, we have limited this study to two selected aggregate particles, shown in Fig. 1.The chosen microwave frequencies -13.4, 35.6, 89.0, 94.0, 165.0, and 183.31GHz -are relevant to current passive and active microwave sensors, such as Cloud-Sat (Stephens et al., 2002), the recently launched Global Precipitation Measurement mission (GPM; Hou et al., 2014), and instrumentation employed during aircraft-and groundbased precipitation validation experiments over the past decade (e.g., MC3E -Petersen and Jensen, 2012;GCPEx -Hudak et al., 2012;and IPHEx -Barros, 2014) and the upcoming OLYMPEx field campaign in winter 2015/16 (Mc-Murdie et al., 2015).
The following sections describe the melting simulation methodology, the single-particle scattering and extinction properties at standard radar and radiometer center frequencies, and the particle size distribution averaged properties with implications for remote-sensing applications.
Physical description of melting
In an actual melting snowflake, the distribution of water on the surface of a melting ice crystal is governed primarily by the local amount of liquid water.According to Oraltay and Hallett (2005), in the initial stages of melting, meltwater distributes more or less evenly over the surface of the ice crystal.At a locally critical liquid water content, surface tension effects take over and convex water droplets form at branch points and other energetically favorable regions on the surface of the underlying ice crystal.As the crystal shape changes due to additional melting, these water droplets coagulate and tend to collect toward a common center under the influence of surface tension.Previous studies have examined the effects of modifying the distribution of meltwater on oblate spheroids (Tyynelä et al., 2014), and one has explicitly examined melting realistically shaped snowflakes, but with ad hoc placement of water spheres to simulate melting (Botta et al., 2010).In reality, tumbling, breakup, shedding, collision/aggregation, evaporation/condensation, and other physical interactions often significantly alter growth and melting processes.Due to the complexities of such simulations, we do not explicitly consider these effects in the present physical model -although aggregation is simulated through the creation of aggregate snowflakes.
Single Particle Melting Model
The Single Particle Melting Model, developed by the primary author and described for the first time here, performs physical particle simulations on an integer-indexed three-dimensional Cartesian grid.Each occupied point represents a small, finite unit of mass of either ice or liquid.This assemblage of ice and water points constitutes the particle mass and volume; Fig. 1 illustrates this for the two selected aggregate shapes used in this study.In this figure, blue regions represent solid ice and red regions represent meltwater (red is used for visual clarity).The maximum melt fraction is 0.15 for the present study, in order to assess the microwave sensitivity to the initial stages of melting.Throughout this melting range, casual inspection reveals that very little structural changes occur in either the dendrite aggregate (DA) or needle aggregate (NA).
The basis for melting and meltwater movement in SPMM occurs through nearest-neighbor interactions.In a 3-D domain, any given point has 26 nearest neighbors (including all diagonals).The interaction distance is limited to one neighbor, simplifying the computational requirements of the algorithm.Figure 2 conceptually illustrates the melting process in two dimensions; the same logic applies to the threedimensional model.The melting simulation proceeds iteratively until all ice is melted and a nearly spherical droplet is formed.
The SPMM proceeds with the following steps: 1. Populate a 3-D Cartesian grid with "ice" points, the ensemble of which comprise the entire volume of the simulated snowflake or aggregate.
3. The ice points having the fewest ice neighbors are melted (see Fig. 2a).A stochastic control factor is employed to control the rate of melting.
4. After each melt iteration, a movement check is applied to those liquid points having zero ice neighbors.Prohibiting the movement of liquid points that do have ice neighbors simulates a "coating" effect, whereas liquid points with no ice neighbors are able to move (Fig. 2b and c).
5. Movement is a weighted random walk, subject to certain constraints.The walk is weighted toward total particle center of mass, simulating the coalescence of liquid water.The movement phase iterates until all moving liquid cannot move to an open space closer to the center of mass than the current position.Return to step 2.
In this simple model, ice structure collapse, breakup, or water shedding are not explicitly simulated -any orphaned droplets created during melting will naturally migrate towards the total center of mass as cohesive droplets.
Although the present study considers melt fractions up to 0.15, SPMM provides melting simulations for any arbitrary particle shape until it is completely melted.In principle, it can be applied to the melting of any material, where surface tension is a dominant factor in the liquid phase.The melting increments and distribution of meltwater can be finely tuned to suit the application.On a modern desktop computer, a particle having 200 000 ice points ("dipoles" in DDA parlance) can complete the entire melting process in less than 5 min using a single processor core.The Fortran 90 codes and MAT-LAB codes for SPMM are freely and indefinitely available upon request from the primary author.
Single-particle scattering and extinction properties
In the previous section we described the method for simulating the melting of realistically shaped precipitation hydrometeors using SPMM.Given these two example sets of melted ice particles, the single-particle extinction cross section, backscattering cross section, and asymmetry parameter properties were computed at 23 melting steps between 0.0 (unmelted) and 0.15 (lightly melted; see Fig. 1).To compute the single-particle radiative properties, the DDSCAT 7.3 model is used (Draine and Flatau, 1994).
Numerical scattering calculations
DDSCAT is an implementation of the DDA method for solving Maxwell's equations for linearly polarized electromagnetic plane waves incident upon an arbitrarily shaped dielectric material consisting of up to nine different dielectric materials; here we only use two discrete constituent materials: ice and liquid water, neglecting the dielectric constant of air and water vapor.For satisfactory convergence, DDSCAT requires that |m|kd<0.5,where |m| is the magnitude of the complex index of refraction of ice (Warren and Brandt, 2008) or liqwww.atmos-meas-tech.net/9/9/2016/Atmos.Meas.Tech., 9, 9-21, 2016 uid water (Liebe et al., 1991), k is the wave number of the incident radiation, and d is the minimum spacing between adjacent dipoles.In each of the following calculations, specific care was taken to ensure that this convergence criterion was adequately satisfied.In practice this was accomplished by having a sufficiently large number of dipoles representing the shapes, minimizing the dipole spacing d compensating for the increase in |m| imposed by the increasing amount of liquid water present in the melting simulation.DDSCAT requires the following inputs: the polarization and wavelength of incident radiation, the 3-D Cartesian position and index of refraction for each dipole point, the effective radius of the entire particle (aeff: the radius of a sphere having equal mass), and 3-D rotation angles of the target in the reference frame.Here, the effective radius (aeff) acts as a proxy for particle mass, independent of the particle shape.
The outputs of interest for this study are extinction cross section, scattering asymmetry parameter, and radar backscattering cross section at 30 effective radius intervals -ranging in log scale from 50 microns to 2500 microns.For each effective radius value, the input shape (relative dipole positions) remain the same, but each dipole mass (and consequently effective volume) is scaled appropriately.We also examine the extinction and backscattering efficiencies, Q ext and Q bck , which are simply the respective cross sections (e.g., C ext ) divided by the cross-sectional area of the equivalent sphere (defined in DDSCAT by Q * = C * /π a 2 eff ).It should be noted that scaling particles in this manner does not create a mass-dimension relationship that is consistent with observations when these particles are used together in an ensemble.Each particle and its associated scattering and extinction properties are intended to be considered as independent particles if they are being used in an actual retrieval framework, and it is up to the researcher to ensure that the mass-dimension relationship they prefer to use is followed.Leinonen and Moissev (2015) found that the method of producing aggregates can result in significantly different optical properties, depending on the ultimate mass-dimension relationship.in the present study, the purpose of choosing the shape-preserving scaling approach was so that the the melting morphology would be preserved for a given base shape across all ranges of masses, enabling a consistent comparison independent of shape changes.
To simulate randomly oriented hydrometeors, an average over multiple orientations of the aggregate relative to a fixed direction of incident radiation is computed.This provides an orientation-averaged set of scattering and extinction properties.Although not shown here, our sensitivity studies suggest that 75 discrete orientations, sampling a full 3-D rotation, are sufficient to provide a reasonably precise orientationally averaged set of scattering and extinction calculations.This trade-off keeps the computational requirements tractable: for a single effective radius, single shape, and single frequency, it requires 24 h to perform one set of calculations running in a parallel implementation a 24-core Intel(R) Xeon(R) CPU X5670 at 2.93 GHz and requires up to 32 GB of allocated RAM per DDSCAT process.For 6 frequencies, 2 shapes, and 30 effective radii, it requires computation times on the order of 200 days of continuous calculations (with 3 processes running in parallel).In addition the orientation-averaged quantities, the scattering and extinction properties are also tabulated for each individual orientation, which has important implications for exploring the polarization of scattered radiation, but this is left for future research.
In the current study, the single particle scattering and extinction calculations are divided into two frequency groups:
Backscattering and extinction at radar frequencies
In Fig. 3, the 13.4 GHz extinction and radar backscattering cross sections are computed for the DA in panels (a) and (b) respectively, and for the NA panels (c) and (d).Colors represent the particle effective radius (values on the color bar are in microns), which is directly related to the particle mass.Both extinction and backscattering cross sections are most strongly influenced by changes in size.At the smaller sizes, however, the onset of melting has a strong influence on extinction (panels a and c).This indicates that the onset of melting is characterized by a rapid increase in extinction, while the backscattering tends to exhibit a more linear increase.This is consistent with the integrated backscattering and extinction properties, shown in Sect. 4.
Figure 4 shows the single-particle DDSCAT calculations of the extinction efficiency (Q ext ) for the DA and NA at 50-micron effective radius for (a) 13.4,(b) 35.6, and (c) 94.0 GHz.Shaded regions represent the range of variations in Q ext due to the range of orientations of the particle.Black lines are the equal-mass ice sphere Q ext values at 1, 10, 50, and 100 % densities (100 % is equivalent to a density of 917 kg m −3 ).The extinction increases rapidly with the onset of melting.For every 0.05 increase in melting fraction, the extinction nearly doubles at all three frequencies.Both the needle aggregate and dendrite aggregate extinction efficiency exhibit roughly similar behavior, but it is obviously different from the spherical properties and often outside of the range that could be captured by spherical particle shapes alone (e.g., Liao et al., 2013).
Figure 5 is the same as Fig. 4 except that the effective radius has increased to 2500 microns.The behavior at (a) 13.4 and (b) 35.6 GHz shows a similar sensitivity to the onset of melting, but not quite as rapid as the smaller-particle case.Panel c (94 GHz) shows a relative insensitivity of extinction to onset of melting; in this case it is due to a trade-off between a rapidly decreasing scattering contribution and equally increasing absorption contribution to the total extinction.Also of note in panel c is that the needle aggregate now shows an overall larger extinction than the dendrite aggregate, different from the other frequencies.This is indicative of the increased sensitivity of the smaller wavelength (approximately 3.2 mm at 94 GHz) to the finer-scale structures present in the needle aggregate.We believe that this is the first evidence in the existing literature that points towards this behavior; however additional research will be required on this topic.
Following the same approach as Figs. by π a 2 eff (Bohren and Huffman, 1983).Integrating the singleparticle backscattering cross section over a distribution of particle sizes yields the radar reflectivity, which is discussed in Sect. 4. Figure 6 is the backscattering efficiency for particles of a 50-micron effective radius, whereas Fig. 7 is the same at 2500-micron effective radius.
At 50-micron effective radius, the scattering is well into the Rayleigh regime at all of the radar frequencies considered here.The response is a relatively gentle increase in backscattering over the 0 to 0.15 melt fraction range.The 94 GHz backscattering efficiency is roughly 3 orders of magnitude larger than the 13.4 GHz efficiency.Also of note is that the variable density spheres (black lines) consistently underestimates the total radar backscattering for all densities compared to the two dendrites, implying that no modification of the density could reproduce the backscattering obtained by the non-spherical particles.
Figure 7 at 2500 microns shows a more complex relationship to melting and particle orientation.The backscattering efficiency exhibits a large variance due to particle orientation (the shaded regions) in all panels.The spheres at 13.4 GHz encompass the range of variability and exhibit a general in-crease in backscattering with melting.However this breaks down at 35.6 and 94 GHz, where the Mie resonance effects start to have a stronger influence on the computed backscattering.Similar to what was seen in Fig. 5, the reversal of the backscattering roles occurs at 35.6 and 94 GHz, with the needle aggregate exhibiting a higher backscattering than the dendrite aggregate.It is also notable that at 94 GHz the backscattering decreases with increasing melt fraction.
Scattering and extinction at passive microwave frequencies
At passive microwave frequencies commonly employed for snowfall retrieval (e.g., 89, 165, and 183.31GHz), the particle interaction with microwave radiation comes primarily through extinction (scattering + absorption) and emission.Due to space constraints, only one effective radius (1500 microns) is shown here in order to understand the general sensitivity of the single-particle extinction, single-scattering albedo and asymmetry parameter to the onset of melting.In Fig. 8 we have computed the extinction efficiency for the NA and DA vs. melt fraction.Generally the spheres do not adequately cover the observed ranges of extinction for the non-spherical particles.At 89 GHz (panel a), the DA extinction is higher than NA; this role reverses at 165 and 183.31GHz.Roughly speaking, 165 and 183.31GHz exhibit similar sensitivities to melting at all effective radii (only 1500-micron effective radius is shown here), suggesting that, for rough estimates, computing the 165 GHz properties may be sufficient for capturing the scattering and extinction behaviors at 183.31 GHz (and nearby channel offsets employed on the GPM microwave imager, and other imagers and sounders).In actual remote-sensing applications, the difference in water vapor emission/absorption at 165 vs. 183.31GHz is likely to dominate the signal, except in the driest of atmospheric profiles (Skofronick-Jackson and Johnson, 2011).
The single-scattering albedo (the ratio of scattering to the total extinction) presented in Fig. 9 tells us the primary story of interest with regard to the onset of melting.Specifically we see that a small change in melt fraction yields a significant linear decrease in single-scattering albedo, which is an indicator of the rapidly increasing contribution of absorption to the total extinction.This, in turn, drives the thermal emis-sion (according to Kirchhoff's law) that we will later observe in Sect. 4.
Finally, in Fig. 10, the scattering asymmetry parameter (cosine-averaged scattering contribution over all angles) describes the degree to which incident radiation is forward scattered (g>0) or backward scattered (g<0).At a 1500-micron effective radius, very little sensitivity to the onset of melting was observed.Similar results were found at other effective radii.This is consistent with the notion that the scattered radiation depends primarily on the shape of the particle.Over this range of melting, the actual particle shape has changed very little, so the degree of forward scattering is not expected to change much.If melting were to continue beyond a melt fraction of 0.15 (not shown), the asymmetry parameter would change as the shape of the particle changes.
Overall, the single-scattering properties show a marked sensitivity to the onset of melting for scattering and extinction, with the exception of the asymmetry parameter.The spherical particle approximation does not produce scattering and extinction properties that have similar behavior to the non-spherical particle, particularly as the frequencies www.atmos-meas-tech.net/9/9/2016/Atmos.Meas.Tech., 9, 9-21, 2016 change.In some cases, the spherical particle properties do not bracket the non-spherical particles' properties, suggesting that under the current formulation no amount of modifying the density parameter could result in a reliable substitute for more physically realistic shapes when one considers all of the scattering and extinction properties of interest to passive and active remote-sensing applications.
Integrated properties
In the previous section, we examined a subset of the singleparticle scattering and extinction properties.Of interest for atmosphere radiative transfer and remote-sensing applications is how these quantities behave in an ensemble of particles.
Radar response
The equivalent radar reflectivities (Z e ) at each frequency are calculated using the single-particle radar backscattering cross sections, C bck (D), and integrating these over a given particle size distribution, N (D): where D is the mass-equivalent diameter (twice the effective radius), λ is the wavelength of incident radiation with the same units as D, and K w = (m 2 w − 1)/(m 2 w + 2) is the dielectric factor computed from the refractive index of water, m w .We note that in the present case we specifically chose |K w | 2 ≈ 0.93 for all wavelengths and melt fractions as a comparative convenience, so that only the backscattering properties vary with melt fraction.In actual radar reflectivities, the response to the dielectric properties of the ice/snow/mixed-phase particles within the radar range gate will be different, and completely dependent on the individual constituents shape, composition, temperature, and frequency of incident radiation.
For lack of a suitable alternative, we have assumed that the melt fraction of each size of particle is the same fraction across all particles in the distribution.This provides a constant melt fraction quantity, independent of mass, so that radar sensitivity to variations in the melt fraction can be read- ily examined without confusion.Future research will explore the variation of melt fraction for particles of different sizes in a given volume of the atmosphere.
For the snowfall particle size distribution, we choose the exponential size distribution from Sekhon and Srivastava (1970) and Johnson (2007).In each of the following calculations, all comparisons are made using an equal particle mass distribution.The particle size distribution N (D) is given by where N 0 is the intercept parameter and D 0 is the "characteristic" diameter of the PSD.N 0 and D 0 are related as follows: when liquid-equivalent precipitation rate, R, is given in units of millimeters per hour (mm hr −1 ).Figures 11 and 12 show the simulated equivalent radar reflectivity, averaged over all particle orientations, vs. melt fraction for the NA (green line) and DA (blue line), along with the mass-equivalent variable-density spheres (black lines).In Fig. 11, we have selected the characteristic diameter D 0 = 0.11 cm (corresponding to an ice water content of approximately 0.13 g m −3 ) for analysis.At all three frequencies, the reflectivity increases by a few decibels over the range of melting.With a judicious choice of density, spheres could reasonably approximate the simulated reflectivities of the non-spherical particles, except for the dendrite aggregate at 13.4 GHz (Fig. 11a).
In Fig. 12, the orientation-averaged reflectivities are computed for D 0 = 0.55 cm, corresponding to an ice water content of approximately 3 g m −3 , which is roughly 15 mm h −1 liquid-equivalent snowfall rate -an exceptionally high snowfall rate with very large snowflakes, representing an upper limit.Consequently, the simulated equivalent radar reflectivities are as high as 47 dBZ for DA at 13.4 GHz.The increase in reflectivity over the onset of melting is, similar to Fig. 11, only a few decibels per Z (dBZ) from 0 to 0.15 melt fraction.
We note that because D 0 = 0.55 cm is greater than the upper limit of the individual particle size, truncation errors in the particle size distribution integration are potentially significant.So these quantities should be interpreted with this in mind.Due to computational limitations at the time of research, the individual melt particle sizes cannot be reliably computed above 2500-micron effective radius (5000-micron effective diameter).
Although not shown here, continued melting does not significantly increase the reflectivity beyond this point.This result may appear to be inconsistent with radar observations of melting precipitation (i.e., the radar bright band) (Klassen, 1988).This suggests that the changing dielectric properties alone do not create the observed rapid increase in radar reflectivities associated with melting.It is postulated, without proof, that enhanced aggregation of particles at or near the melting point causes a rapid increase in total particle size, enhancing the D 4 diameter dependence of the reflectivity factor (e.g., Fabry and Zawadzki, 1995).A more detailed analysis of enhanced aggregation is beyond the scope of the current research.
Figures 13 and 14 are physically consistent with Figs.11 and 12, respectively, but now show the volume extinction coefficient.The most apparent feature is the rapid increase in extinction with the onset of melting.This behavior was observed in the single particle scattering properties (e.g., Figs.3-5).The integrated extinction suggests that attenuation of the radar beam starts to accumulate rapidly with only a modest increase in reflectivity.As before, the range of extinction of the spherical particle approximation does not always encompass the extinction of the two selected nonspherical particles in Fig. 13a and c.
Passive microwave response
In addition to the radar sensitivity to melting, there is also an interest in understanding the sensitivity of passive microwave brightness temperatures to the onset of melting.In reality, the melting layer of a precipitating cloud is likely to be obscured by an overlying ice region, obscuring it partially or wholly, depending on the wavelength of radiation.For the following simplified analysis, a single layer of melting hydrometeors is simulated, with no atmospheric gases or other layers intervening.To compute brightness temperatures, a twostream approximation is used, which in past studies (Johnson et al., 2012) has been found to be remarkably accurate at describing thermal emission from a plane-parallel slab viewed at the 55 • incidence angle, typical of a satellite microwave imager.The details of the two-stream approximation are provided in Petty (2008).Under the assumption of an infinitely thick slab, the transmittance through the slab is zero.Consequently the resulting upwelling brightness temperature from the two-stream approximation is only a function of the layer temperature, T a , and the layer reflectivity, r ∞ : where Here ω and g have been appropriately integrated over the exponential particle size distribution (Eq.2).In this simplified model, r ∞ is uniquely determined given the following physical properties: the wavelength of incident radiation, melting fraction (i.e., the dielectric properties), the particle orientation, and the particle size distribution.The utility of the two-stream model should not be oversold: it is a useful method for understanding the bulk sensitivity of upwelling microwave TBs to modifications in the underlying physical properties of hydrometeors, but it ignores all other contributions that would normally be present in an actual remotesensing scene.In this sense, it provides a "worst-case" scenario for the influence of microphysical properties on the upwelling brightness temperature.
Figure 15 shows a surface plot of two-stream brightness temperatures computed at 89, 165, and 183.31GHz.The y axis is D 0 , and the x axis is the melt fraction from 0 to 0.15.Given a fixed D 0 (i.e., a fixed ice water content), the onset of melting can have a rapid and dramatic impact on the simulated TBs.For example, with D 0 around 0.03 cm, pro- ceeding from an unmelted particle to a melt fraction of 0.01 (only 1 % melted), the brightness temperatures increase by almost 100 K, whereas for larger particles (higher ice water contents) the sensitivity to melting is not as rapid but nevertheless significant throughout the range of D 0 values.The onset of melting, from 0.0 to 0.15 melt fractions, leads to increases of up to approximately 125 K.In order to better understand the impact of this, further analysis is needed wherein a 1-D vertical or 3-D simulation is used to simulate a more realistic atmospheric column.It is expected that an ice layer overlaying the melting layer would partially or wholly obscure emission through scattering at the frequencies examined here -this is left for future studies.
Conclusions
In this paper, the Single Particle Melting Model was introduced as a novel and computationally efficient method to simulate the melting of an arbitrarily shaped ice hydrometeor.SPMM uses a novel nearest-neighbor method for determining when a particular point will melt and when previously melted points can move.It is easy to implement and map into any existing thermodynamic/melting-layer model, where a simple mapping between meltwater generation in a thermodynamic model and the melt fractions generated by the melting simulation are linked as appropriate.This also provides finer control over the particle size distribution within the melting layer.
A limited study of the onset of melting, for melt fractions ranging from 0 to 0.15, was performed in order to quantify the sensitivity of microwave radiation scattering and extinction.Two snowflake aggregate shapes were selected: one composed of needles and the other composed of dendritic crystals.For comparison with past studies, the scattering and extinction properties of spherical particles having 1, 10, 50, and 100 % volume fractions of ice (relative to air) were simulated.Single particle calculations using the discrete dipole approximation, via the DDSCAT code, were made, highlighting the individual particle and sizedistribution integrated particle properties.These calculations were made at 13.4, 35.6, and 94 GHz, consistent with the GPM dual-frequency precipitation radar, and at 89, 165, and 183.31GHz, consistent with the GPM microwave radiometer.We believe that this study represents the first simulation of the scattering and extinction properties of realistically shaped melting hydrometeors for a wide range of microwave frequencies and particle sizes.
There is a significant sensitivity of the computed extinction and scattering properties to the base hydrometeor shape and to the onset of melting.We found, in particular, that the spherical particle assumption was unable to capture the range of computed scattering properties from the non-spherical particles and did not provide consistent relationships between scattering and extinction throughout the onset of melting.The conclusion one could draw from this is that from a modeling perspective it appears that spherical particles (no matter how the density/mass is modified) cannot fully represent the range of uncertainties in the absence of of knowledge of the hydrometeors present in a given remote-sensing scene.Capturing this behavior in physical models is critical for accurately computing uncertainty estimates in forward model simulations and retrieval algorithms.
Validation of these simulations could be, in future work, performed by examining, for example, radar observations of stratiform melting layers -especially in cases where in situ observations of particle shape are available.The present model is currently being adapted to simulate melting for existing ice hydrometeor databases consisting of tens of thousands of particles, and it will allow for more realistic comparisons with observational data.
Figure 1 .
Figure 1.Selected steps in the onset of melting for (a) needle aggregate (NA) and (b) dendrite aggregate (DA).Blue regions are ice, and red regions are liquid water; mass fraction of meltwater is indicated at each step.Particle mass is conserved throughout the melting process.
Figure 2 .
Figure 2. A simplified diagram depicting the steps in the SPMM.At each step, ice (blue) points are converted to liquid (red) following nearest-neighbor rules (see text).Numbers depict the number of nearest ice neighbors.Panel (a) indicates the first iteration of the melting algorithm; (b) shows the second iteration and the allowable move conditions; and (c) shows the third iteration, where more ice has melted and three points are now free to move into the green regions.
Figure 3 .
Figure 3. Overview of 13.4 GHz extinction and backscattering: dendrite aggregate (row 1) (a) mean extinction cross section vs. melt fraction and (b) mean backscattering cross section vs. melt fraction for various values of effective radius in microns (color bar).Needle aggregate (row 2) (c) mean extinction cross section vs. melt fraction and (d) mean backscattering cross section.
Figure 4 .
Figure 4. Single-particle DDSCAT calculations of the extinction coefficient (Q ext ) for the dendrite aggregate (DA) and needle aggregate (NA) at 50-micron effective radius.Shaded regions represent the range of Q ext due to various orientations of the particle relative to the direction of incident radiation.Black lines are the equivalent ice sphere Q ext values at 1, 10, 50, and 100 % densities (100 % is the bulk density of ice, 917 kg m −3 ).(a) 13.4,(b) 35.6, (c) 94.0 GHz -note different vertical axis ranges on each panel for optimal visualization.
Figure 5 .
Figure 5. Same as Fig.4except effective radius is now 2500 microns (50-times-larger radius; 125 000-times-larger mass).As before, the vertical axes are on different scales.Notice the swap at 94 GHz, where the needle aggregate exhibits a higher extinction than the dendrite aggregate.
Figure 6 .
Figure 6.Similar to Fig. 4, the single-particle backscattering efficiency coefficient (Q bck ) is shown for 50-micron effective radius.Both aggregates show consistently larger backscattering efficiencies for the same mass as melt fractions compared to the spheres.Note the difference in the vertical axes on each panel.
Figure 7 .
Figure 7. Same as Fig. 6 except at 2500-micron effective radius.The needle aggregate at 35 GHz has a very large range due to particle rotation, whereas the dendrite aggregate exhibits about half the same range.Similar to Fig. 6c, the needle aggregate exhibits a larger backscattering compared to the dendrite aggregate efficiencies at 35.6 and 94 GHz.Panel (a) shows behavior consistent with spheres; however, in (b) 35.6 and (c) 94 GHz there is no consistent behavior.Note that the backscattering efficiency in panel (b) for DA (blue region) is completely encompassed by NA, and there is significant overlap between the two in panel (c).
Figure 8 .
Figure 8. Extinction efficiencies (Q ext ) computed for both aggregates and spheres at an effective radius of 1500 microns for (a) 89, (b) 165, and (c) 183.31GHz.Similar to previous plots, shaded regions represent the ranges of Q ext due various orientations of the particle relative to incident radiation.Vertical axes are on the same scale.
Figure 10 .
Figure 10.Following Figs. 8 and 9, the scattering asymmetry parameter (indicating the degree of forward scattering) is shown here.At all frequencies, the asymmetry parameter is relatively insensitive to the early stages of melting, consistent with the fact that the overall structure of the melting particles does not change too much over this range.In panel (a), the asymmetry for NA (blue region) is fully encompassed by the asymmetry from DA, with partial overlaps in (b, c).
Figure 11 .
Figure 11.Simulated radar reflectivities at (a) 13.4,(b) 35.6, and (c) 94 GHz.Reflectivities were computed by integrating over the orientationaveraged single-particle backscattering cross sections, assuming an exponential particle size distribution, for both aggregates and spheres.The reflectivities are presented only for one value of D 0 = 0.11 cm (i.e., the liquid-equivalent median volume diameter).N 0 and D 0 are computed following Sekhon and Srivastava, 1970 (see text).Note different scaling on the vertical axes.
Figure 12 .
Figure 12.Similar to Fig. 11 except for a larger D 0 = 0.55 cm.Notice the flip in the computed reflectivities at 94 GHz, similar to what was observed in the single-particle backscattering properties (Fig.7).Note: the results in this figure may be subject to large particle diameter truncation, as discussed in the text.
Figure 13 .
Figure 13.Volume extinction coefficient for D 0 = 0.11 cm at (a) 13.4,(b) 35.6, and (c) 94.0 GHz.The orientation-averaged single-particle extinction cross section was integrated over the same particle size distribution as was used in Fig. 11.Note different scaling on the vertical axes, including the exponents.
Figure 14 .
Figure 14.Same as Fig. 13 except for D 0 = 0.55 cm.Note the different scaling on the vertical axes, including the exponents; the results in this figure may be subject to large particle diameter truncation, as discussed in the text.
Figure 15 .
Figure 15.A surface plot of simulated brightness temperatures (TBs) for D 0 vs. melt fraction assuming an "infinite" layer of melting hydrometeors at (a) 89, (b) 165.0, and (c) 183.31GHz.A two-stream radiative transfer model (see text) was used to compute the TBs.Particle size distributions used here are the same as those used in the reflectivity computations. | 8,678.6 | 2016-01-15T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
Visualization of the Motion of Textiles through a Waste Water Pump at Different Operating Points
In this paper, the motion of textiles through a waste water pump is studied by aid of vision technologies. The steel volute of a commercial pump is replaced with a similar volute made in acrylic glass, which allows recording the motion of textiles inside the pump. Recordings are made at four different operating points to investigate the influence of rotational speed of the impeller and flow rate on the passage of textiles through the pump. The experiments show that the textiles flow rapidly through the pump when the pump is operated near the best efficient point for both high and low impeller speed. The textiles tend to stay inside the pump when the pump is operated at part load for both low and high impeller speed. At low impeller speed, the textiles often stick to the tongue in the pump casing. At higher impeller speed, the textiles flow multiple rounds in the volute. For fail-safe operation, it is recommended not to operate waste water pumps far away from the best efficiency point.
Introduction
At the time being, waste water systems around the world are challenged by an increased amount of synthetic products in the waste water.Items like flannels, sanitary towels, cotton buds, condoms and wet wipes are flushed in toilets, and afterwards they flow through pipes and pumps towards the wastewater treatment plant, where they are sorted out in mechanical filters.On their way, especially the passages of wastewater pumps are critical as the synthetic solid parts of the wastewater can cause stoppage due to clogging of impellers.Pöhler et al. [1] H. Sørensen, A. L. Jensen DOI: 10.4236/jfcmv.2018.6100440 Journal of Flow Control, Measurement & Visualization reports that more than 12,000 failures are seen yearly in Berlin, where the majority of these failures are due to clogging.Universities and pump manufactures collaborate in research on wastewater topics to gain a better understanding of the clogging and to be able to solve these problems on pumping wastewater containing solid objects.One way of getting a better understanding of how to avoid these problems is through a comprehensive experimental program like presented by Thamsen [2], where different scenarios are studied.This approach is expensive and often limited to a single or maximum a few pump designs.As a result of this, an effort is made to develop reliable computational models, where simulations will replace the experimental programs.For clean water applications simulations have proven to be an accurate and powerful tool for pump development [3], while the development in simulation tools for complex flows including a carrier phase (water) and foreign objects are under rapid development.Boundary conditions for the latter type of simulations can be obtained like in Jensen et al. [4], where the influence of operating point on the shape and position of textile material at the inlet pipe to a dry-installed wastewater pump is investigated.Having the right boundary conditions for these types of simulations, the next step is to gather information to validate the new models.Gerlach et al. [5] did a comprehensive study of vortex pump impeller designs, where the clogging behavior was studied through experiments utilizing artificial wastewater and a pump with a transparent housing.Recently, Tan et al. [6] reported an experimental study of pass-through and collision characteristics of coarse particles Since a significant number of pumps are wet-installed [7], there is also a need for further research involving this type of pumps.The aim of this paper is to provide detailed information on how textile flows through a wet-installed pump operated at the best efficiency point (BEP) and part load (55% of BEP).
Experimental Setup and Instrumentation
The movement of dust cloths flowing through the pump is filmed by an industrial camera placed outside the test rig with its optical axis normal to the rotational plane of the impeller.The camera view angle is shown in Figure 1 (middle).The impeller is rotating counter clock wise seen from the camera view.All images are stored as jpeg-files on a personal computer for later analysis.Specifications of the optical components are presented in Table 1.
In this experiment a prototype pump with an acrylic pump casing is used.The impeller is an asymmetric single channel S-tube impeller made in cast iron.The free passage through the impeller is 80 mm.The impeller geometry is shown in maximum flow rate is 133 m 3 /h and the maximum head is 11.6 m.The best efficiency point is located at 116 m 3 /h.The pump has a relative high specific speed of 81.The installation of the pump is illustrated in Figure 1 (left).The pump is connected to a variable speed drive which makes it possible to adjust the impeller speed to a given rpm.For flow measurements the test rig is equipped with an electromagnetic flow meter (Danfoss MAG3000).For the test reported in this work the operating points of the pump were adjusted using 48 mm and 80 mm flow restrictors.
The textile material used in these experiments is Aro super dust cloths made for house hold cleaning.The research group led by Thamsen at Technical University Berlin has found that the Aro super dust cloth can be used as an artificial substitution of waste water.The dust cloths are 220 mm × 300 mm and have a thickness of 0.77 mm.The material is non-woven polyester (90%) and polypropylene (10%).The material structure of the dust cloths is shown in Figure 1 (right).To ensure the right wetting of the dust cloths, they were stored in water for minimum 24 hours before the tests were carried out.
Experiment Cases
For the experiments included in this paper the impeller speed was set to 600rpm or 1200rpm.Flows at best efficiency point (Q BEP ) and heavy part load (55% of flow at Q BEP ) were tested for both impeller speeds.This parameter variation gives in total 4 different test cases (see Table 2), where 50 runs were carried out for each case.In total 298,677 images were recorded for this experiment.
Experimental Procedures
The operating points of the pump were adjusted by mounting the right flow re- All acquired images were streamed to the hard drive of a computer
Data Treatment
The image sequences are afterwards analyzed through an inhouse vision program, where the dust cloth is identified inside the pump.The image processing steps are shown in Figure 3.Primary information as time, size and position is stored.The retention time in the volute is then calculated as the time between cloth entered and left the pump casing.The time for one rotation in the pump casing is also calculated for cloths taking more than one round in the pump casing.
Flow Measurement
Measurements by aid of Particle Image Velocimetry were carried out by Bjerg et al. [8]
Results
The output of the experiments contains almost 300,000 images, which are impossible to present on paper.Instead informative selected sequences are extracted and presented.
Operation near Best Efficiency Point at 600 rpm
The cloths flow relatively smoothly through the pump when operated near best efficiency point at low impeller speed (600 rpm).Based on the 50 runs the average retention time in the volute is calculated to 0.8s.Inspection of the 50 cases shows that in 19 out of the 50 runs the textiles shortly sticks to the tongue for 0.5 -2.0 s.In two cases, the textile sticks to tongue for a longer period of 7 -8 s.Figures 5(a)-(h) shows an example of how a cloth flows out of the volute.Figure 5(g) shows how the cloth shortly sticks to the tongue before leaving the volute.
Operation at Part Load and 600 rpm
The cloths tend to be trapped at the tongue for longer time when the pump is
Discussion
This experiment does not show any results which may give causes for concerns when the pump is operated at a flow near the best efficiency point.But when the pump operates in part load (55% of Q BEP ) the cloth tends to stay inside the pump.For low impeller speed the cloth sticks to the tongue and for high impeller speed the cloth simply flows around at the circumference of the impeller.An causes the cloth to be dragged away from the outlet of the volute towards the tongue region.The findings in this research support the conclusions in [2] where it is stated that clogging occurs in heavy part load and low speeds.
Conclusion
For both high and low impeller speed, the wastewater pump tested in this study is working as intended, when the operation point is around the Q BEP .For part load (55% of Q BEP ), the cloth stays inside the pump for several seconds, which increases the risk of damages of the cloth.This can lead to unraveled textile which can flow into cavities or groves and cause serious problems over time.
( 6 -
15 mm) in a radial impeller mounted in a transparent pump facility.Common for the research done by Jensen et al., Gerlach et al. and Tan et al. is that the experiments were carried out in dry-installed pumps and without focus on the movement of textiles.
Figure 2 .
Figure2.The pump is equipped with a 2.9 kW 4-pole motor.At nominal speed, the
Figure 1 .
Figure 1.Left: Wastewater pump with acrylic pump casing used in the experiments.Middle: The pump casing seen from camera angle.The outer contour of the volute is marked with a red line.The tongue is the triangle region marked with red.Seen from camera angle the impeller is rotating counter clock wise.Right: Zoomed view of the Aro Non-woven synthetic dust cloth used in the experiment.
Figure 2 .
Figure 2. Left: Plane sketch of the prototype volute and impeller for mid sectional view.The impeller rotates counter clock wise.Right: Sketch of prototype S-type impeller.Table 2. Impeller speed and flow settings for the experimental test cases.Q/Q BEP = 100% Q/Q BEP = 55% Head (Impeller speed 600 rpm) 0.82 m 1.35 m Head (Impeller speed 1200 rpm) 3.27 m 5.39 m
Figure 3 .
Figure 3. Image processing steps in NI Vision for detection of the textile inside the volute.
Figure 4 .
Figure 4. Schematic view of PIV setup for flow measurements inside the volute of the pump.
operated at low speed and part load (55% of Q BEP ).In total 26 out of 50 runs show the cloth trapped on the tongue.In 18 of the 26 cases the experiment was stopped after 30s for removal of the cloth.Inspection of the recorded image series shows that the cloths are trapped within one or two revolutions in the volute.The average time for one rotation of the cloth in the volute is 435 ms.Figures 6(a)-(d) shows an example of how a cloth running with an impeller speed H. Sørensen, A. L. Jensen DOI: 10.4236/jfcmv.2018.6100444 Journal of Flow Control, Measurement & Visualization
Figure 5 .
Figure 5. Sequence of images showing the dust cloth passing through the volute of the pump running with an impeller speed of 600 rpm at best efficiency point.The selected pictures (a)-(h) have an inter frame time of 40 ms.
Figure 6 .
Figure 6.Sequence of images showing the dust cloth sticking to the tongue in the volute.The pump is running with an impeller speed of 600 rpm at part load (55% of Q BEP ).The selected pictures (a)-(h) have an inter frame time of 40 ms.
Figure 7 .
Figure 7.In two cases, the cloth is attached to the leading edge of the impeller for less than one revolution.
Figure 7 .
Figure 7. Sequence of images showing the dust cloth passing through the volute.The pump is running with an impeller speed of 1200 rpm at best efficiency point.The selected pictures (a)-(h) have an inter frame time of 20 ms.
3. 4 .
Operation at Part Load and 1200 rpmIn the experiment where the pump was operated in part load at 1200 rpm the cloth tends to stay inside the volute for multiple rotations.In 42 out of 50 runs the cloth took more than 15 revolutions (retention time > 2.7 s).The average time for one rotation of the cloth in the volute is 179 ms.In 4 out of 50 runs the cloth sticks to the tongue for a short period before being dragged out in the volute and continuing in the flow for multiple rotations as seen in Figures8(a)-(h).For all the 50 runs carried out at this operating point the cloth succeeded to leave the volute within 20 seconds from the time of arrival.
explanation of these two phenomena is that the flow in the near tongue region is changing when the pump is operated in part load.Measurements by aid of Particle Image Velocimetry done in advance by Bjerg et al. [8] reveal the flow change.Compared to a high flow rate through the pump a relative larger amount of the fluid is just moving around in the volute, when the pump is operated at lower flow rates.The physical explanation of this change is that for lower flow rates the pressure at the pump outlet increases and thereby the pressure difference across the tongue region also increases.The relative change of the flow is shown in Figure 9 (right).This change in local velocity distribution
Figure 8 .
Figure 8. Sequence of images showing the dust cloth sticking to the tongue in the volute.The pump is running with an impeller speed of 1200 rpm at part load (55% of Q BEP ).The selected pictures (a)-(h) have an inter frame time of 40 ms.
Figure 9 .
Figure 9. Left: Flow pattern for 60% of Q BEP .Middle: Flow pattern for 90% of Q BEP .Right: Relative flow around the tongue when changing the flow from 90% of Q BEP to 60% of Q BEP .
Table 1 .
Specification and settings of the camera used in the experiments. | 3,197.6 | 2018-01-26T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
T HE DETERMINANTS OF I NDIAN FDI IN A FRICA : A STRUCTURAL EQUATION APPROACH
Much has been written in economic circles about the rising investment of the BRICS countries in Africa, yet there is scant literature on the determinants of FDI from these countries to Africa, and no studies have reported on that from India. In 2012, Indian FDI surpassed that of China, making India the largest developing country that is a direct investor in Africa. This study focuses on understanding the determinants of Indian FDI in Africa using structural equation modelling (SEM), which includes factor analysis and regression estimations. The specific determinants that influence the number of Indian FDI deals in Africa include government effectiveness, control of corruption, crude oil price, school enrolment and exports. The value of the investments is influenced by government effectiveness and rule of law. We conclude that India’s increasing involvement in Africa is driven by trade and resources. It is, however, differentiated through a strong focus on good governance.
INTRODUCTION
The current economic debate regarding development in Africa is fuelled by the growing influence of the BRICS (i.e.Brazil, Russia, India, China, South Africa) countries across Africa and the perceived threat they pose to the historical investors such as the United States, the United Kingdom, France, Germany and Japan.At the forefront of this debate is the role of FDI by these countries.A decade ago, none of these countries were viewed as important investors in Africa.Today, China and India have risen to become two of the top five investors in Africa, with India surpassing China to become the number four investor in 2012 (Financial Times, 2012).
According to Broadman (2011), most research focuses on Chinese involvement in Africa.Countries such as India and Brazil are mostly omitted from studies on investment in Africa, although these countries also have extensive interests in Africa.Studies focusing on the rise of BRICS investment in Africa are scarce, especially India, and therefore this study further investigates the rise of Indian FDI in Africa.
Growing collaboration between India and Africa was signalled by the inaugural Indo-African Forum Summit in New Delhi in 2008.Its core focus was India's interest in Africa.India is striving to change the perception the world has of it by becoming more of a development donor than a recipient (Kragelund, 2010).At this first summit, issues such as agriculture, trade, industry, investment, peace and security, good governance and information technology were discussed (Bhatt, 2008).During this summit, it also became clear that the main objective is the expansion of South-South cooperation between equal partners.Africa's ability and need for development have proved to be a great opportunity for India and its prospects, since the African image of war and poverty has been superseded by many seeing Africa as an opportunity with great promise as well as a growing economy.This is supported by Clark's (2012:16) statement that 'Most economic interpretations of the past and present presume that "Africa is poor".Not true.Africa is not really "poor" as portrayed: it is poorly managed and yet to be developed.Its inherent natural wealth is yet to be fully unlocked, leading many to expect a better future, and there is promise in this direction'.Several African countries are among the fastest-growing economies in the world, which are boosted by FDI with high riskadjusted returns (PricewaterhouseCoopers, 2011) Africa's increase in demand for investment opportunities has led to tremendous changes and growth.The United States and United Kingdom are still the largest providers of FDI in Africa, but the emerging Asian markets have become significant players in FDI to Africa (Price Waterhouse Coopers, 2011).
This study focuses on understanding the determinants of Indian FDI in Africa by profiling these determinants through a review of literature and using structural equation modelling (SEM) to establish the specific determinants.The rest of the paper consists of a literature review, an overview of Indian FDI in Africa, a discussion on the data used and the data limitations, the method used, and estimation results, where after the paper is concluded.
LITERATURE REVIEW
Some researchers aver that FDI can play a significant role in economic development and integration.Other researchers argue that FDI is harmful to local markets and has the risk of only extracting natural resources without developing the country (OECD, 2008;Stefanović, 2008).The state of FDI flows and stock between and in countries has undergone changes, and has received scrutiny, opposition and support.World trends in FDI flows demonstrate large shifts to emerging markets.These shifts have motivated new research to examine FDI and possible effects and causes (Sauvent et al., 2009).FDI is broadly split into two methods of entry, namely Greenfields FDI and Mergers and Acquisitions (M&As).Greenfield FDI entails a new venture that requires new operational facilities to be established, while an M&A is the acquisition of a significant executive share in an existing enterprise in the host economy (Investopedia, 2013).
The reasons and motivations for multinational enterprises (MNEs) to invest are supported by the different types of FDI.Market-seeking FDI, resource-seeking FDI and efficiency-seeking FDI are the main types of FDI.Market-seeking FDI indicates MNEs that obtain market access in a foreign market; market-seeking FDI can replace local domestic capabilities in regions; and resourceseeking FDI involves MNEs that require certain resources that are available in foreign markets.Nigeria -an oil-rich country -is a good example, attracting plenty of resource-seeking FDI.It has significant growth and increasing FDI flows that are for the most part motivated by resource-seeking FDI (Te Velde, 2006).Efficiency-seeking FDI focuses on assembling the best and most affordable production process factors, and can transform production structures and therefore the growth performance of firms.FDI inflows are affected and determined by a variety of factors and risks.The determinants can be divided into two classification groups, i.e. micro-and macro-economic factors (Naude & Krugell, 2003).The micro-economic determinants include market size and growth, labour cost, host government policies, tariffs and trade barriers, taxes, transport cost, agglomeration effects and environmental factors.The macro-economic factors include openness and exports, exchange rates, inflation, budget deficits, investment and infrastructure, political instability and natural resource availability (Naude & Krugell, 2003;Bezuidenhout, 2007).
The determinants are not the only influence on an investor's decision, according to White and Fan (2006).Risks also influence FDI flows.These can be classified as global, country, industry and enterprise risks.African markets are sensitive to these risks, especially to country or sovereign risk, which includes political considerations.These should be considered when the determinants of FDI are profiled.Broadman (2011) and Katakey (2010) describe the need for resources by an ever-expanding Indian economy.Global resource prices or commodity prices therefore also warrant further investigation as determinants for Indian interests in Africa.To accommodate commodity influences, we include the following factors: agricultural prices, various metals prices (see TABLE 1) as well as the crude oil price as global factors.
TABLE 1 provides an overview of the different determinants of FDI and their categorisation as micro-economic, macro-economic, risk, political (good governance) and global factors.The specification of these categories of determinants distinguishes the behavioural patterns of Indian FDI in Africa on whether it is market-seeking, efficiency-seeking and/or resource-seeking FDI.FIGURE 2 shows that overall FDI from India to Africa proportionately increased during the past decade and rose rapidly in 2011, reaching a high of US$45 billion.FDI flows from India to Africa have expanded geographically, increasing by 837% over the period.Indian firms are doing business in over 32 African countries today (Financial Times, 2012).Bhattacharya (2011) states that the steep rise in Indian FDI to Africa clearly indicates that Indian companies see potential in the African market and that it may well be the market of the future.
African countries that received the most FDI from India are mostly those with high levels of development and larger GDP values.These countries are also among the top FDI recipients of FDI in Africa from across the globe.Nigeria receives most of India's Greenfields FDI to Africa, with the majority being in the coal, oil and gas sectors (Nyagah, 2009).Nigeria has abundant resources of oil, which is the reason it is expected to become one of the largest economies in the world.Large amounts of Indian FDI also flow to the chemical industry in Nigeria.Mozambique and Egypt also benefited from large Indian FDI inflows, the major sectors also including coal, oil, gas and chemicals (Nyagah, 2009).Zimbabwe's metal sectors attract the majority of Indian FDI in this country, while South Africa's coal, oil, gas and metal industries attract the most Indian FDI (Nyagah, 2009).The top five African countries that attract Indian Greenfields FDI are mainly those with abundant natural resources.), with the majority going to the petroleum industry.M&As taking place in South Africa are mainly in the metals and telecommunications industries.In Zambia, Indian firms have invested mainly in the telecommunication sector.Mauritius, a gateway for financial services, because of its low tax regime, has attracted Indian investment in machinery and financial services.Indian FDI in Mozambique is mainly to acquire hard-coal production.
Indian MNEs in Africa tend to be in the private firms; they prefer to acquire existing firms and help to encourage engagements in vertical integration improving the African socio-economic network (Broadman, 2011).Indian as well as Chinese firms have many similarities with regard to their African operations.The increasing presence of the Chinese and Indians in Africa seems to have a positive correlation with trade as shown by Africa's exports (Broadman, 2011).
TABLE 3 indicates that most of Indian FDI flows are to the natural-resource industries, such as petroleum, coal, oil and gas.Most of the mergers and acquisitions are in the resource sector, while the Greenfields FDI includes the private sector.The dominance of the sectors such as software, IT and telecommunication motivates the fact that India focuses on sectors in which it has a competitive advantage.These sectors are also to Africa's advantage since FDI may lead to knowledge as well as skills transfers into the African market.The nature of Indian FDI can be summarised as resource-seeking FDI with a changing focus on market-seeking and efficiency-seeking FDI.Private sector firms are playing a more prominent role in Indian FDI flows than Chinese enterprises, which indicates less direct political influence.India is set to become the second-largest economy in the world and its increasing beneficial influence in Africa can be an advantage to the development and growth of African markets (Schneidman & Lewis, 2012).
DATA AND DATA LIMITATIONS
Data for Indian FDI in Africa is not widely available.We use the Financial Times FDI Markets database for Greenfields FDI as a source of specific Indian FDI in African countries on a deal-bydeal basis.The data covered by the Financial Times database is all recorded FDI deals from 2003 to 2012 for all African countries.This implies an observation-based population rather than a time series.
The main limitations of the database are that some deals may not be recorded due to a lack of information and secondly the data cannot be compared to FDI totals regularly published by the World Bank and the IMF (this data is only recipient country totals and does not include source country information).It should also be pointed out that not all countries received Indian FDI every year and the vast majority of deals are skewed towards the coal, oil and natural gas and metals sectors.This limits the application of econometric techniques, as the short sample period and irregular investment intervals per country and sector (these generate missing values) will render time series sample size too small for satisfactory analysis.We use SEM to accommodate the nature of the data, by looking at the population of FDI deals.
Variables that reflect the determinants of Indian FDI are linked to each individual observation.
METHODOLOGY
Standard econometric techniques are not relevant to accommodate the data limitations and irregularity of FDI in country-specific settings.We have therefore introduced SEM to answer the research question of which determinants are significant for Indian FDI in Africa and to estimate their relevance.SEM is widely used in the behavioural sciences.It is a large-sample observationbased technique that allows a combination of factor analyses and regressions.SEM also allows for multiple dependent variables, whereas correlation between explanatory variables does not affect the results (Arbuckle, 2011).Another advantage of using SEM is that it is based on measurement of error, as opposed to regular regression, which assumes perfect measurement.
SEM is a confirmatory technique that requires a predetermined model specification based on theory, to be estimated in order to confirm the theory.It tests the measurement and structural relationship of variables simultaneously.The basis of a SEM model is covariance, which tests the strength of the association between the variables.This helps to explain the pattern of correlation among the variables in the model and explains as much of the variables' variance as possible.The use of observation-based data rather than time series makes SEM, as a general statistical modelling technique, more suited to estimate possible relationships between variables in this case (Pearl, 1998).The theoretical specification therefore consists of the theoretical determinants of FDI being factors to unobserved determinant categories such as micro-and macro-determinants, and the identified determinants being regressed on the number of FDI inflows and FDI value.
There are three different types of SEM specifications: confirmatory factor analysis (CFA), path analysis with observed variables, and path analysis with latent variables.This study uses the CFA method because it is more deductive than inductive.This implies that it is a more logical approach since it is a bottom-up strategy, which indicates that conclusions are derived empirically.A top-down approach is one where a conclusion is developed based on theory.The SEM is estimated based on the theoretical model of the determinants of FDI.These models are the empirical part, and depending on the goodness of model fit, conclusions are derived from the empirical results (Arbuckle, 2011).
In this study, models are estimated to indicate factor groupings; factor analysis and SEM allow for correlation between the different factors (Pearl, 1998).We then further our investigation by estimating another set of models that determine the linear relationship between factor variables and FDI; these models also allow for correlation among the independent variables.Once we have established the relevant determinants for each of the groupings specified by theory, we include them in a final model where only the relevant factors are included and regressed against the number of deals and the investment value.
According to Arbuckle (2011), there are two main classes of model fit, namely absolute fit and relative fit.These classes enable the evaluation of the goodness of fit and the general acceptability of the findings.The absolute fit of the model can be established by evaluating the chi-square, the minimum discrepancy for the model based on chi-square (CMIN), root mean squared error of approximation (RMSEA) and goodness of fit (GFI) measures.These indices provide a clear perspective on the ability of a model to duplicate the covariance matrices.Two models are compared by observing the absolute fit as well as the relative fit.The relative fit of a model is established by a comparison between the theoretical model and a baseline model.The baseline model has a standard specification, with an assumption that there is no relationship among variables; therefore, the relative fit of a model will indicate whether or not the estimated model is better than one with no correlation between variables.The Normed Fit Index (NFI), Incremental Fit Index (IFI), the Comparative Fit Index (CFI), Bolen's Relative Fit Index (RFI) and its derivative the Tucker-Lewis Coefficient (TLI) are mostly used as further measures to indicate the relative fit of a model (Arbuckle, 2011, Pearl, 1998).The biggest limitation on using measures of fit in this study is linked to the recursive nature of the factor analysis, in which case the probabilities of the individual variables are also taken into account to ensure the correct specification.
SPECIFICATIONS
For the purposes of this study, we make a basic assumption that the number of FDI deals that each country receives includes a risk decision and a set of determinants, although the actual value of the FDI deal might depend on a completely different set of determinants.The value or level of the investment is determined by capital cost and the availability of capital inputs in the market and sector; each industry will have a different profile.These two FDI variables are used as the dependent variables in a regression to derive the determinants of Indian FDI in Africa.
To identify the determinants as accurately as possible, variable groups are established and processed in a factor analysis.This allows identifying the factors with the most significant influence and greatest weights more accurately.The variable groups consist of micro-economic, macro-economic, risk, political (good governance) and global factors.The factors that are identified in the factors analysis are those that are included to be tested as determinants of Indian FDI in Africa in the final model.
The risk factor analysis is conducted first in order to determine the main risk areas that require focus in the estimation of the determinants.The factor analysis is then expanded to a regression model that tests the structural relationship between the estimated determinants and the population of FDI deals as well as the value of FDI inflows.The determinants of Indian FDI are correctly identified by testing the relationships separately in the different variable groups.This allows for more accurate correlation between variables.By placing variables in an appropriate group, those that create instability in the model can be identified and removed from the analysis.
The empirical analysis is done in steps.Factor analysis removes factors with low weights that do not exercise a significant influence, whereas regression analysis identifies the variables that have a significant influence.The process that is followed allows for accurate results and eliminates problems with high correlation between variables, such as endogeneity.
TABLE 4 provides a detailed list of the variables used during the estimation process.These include the actual variables used to represent the theoretical determinants based on the FDI literature as discussed by Blonigen (2005), Naudé and Krugell (2003), Bezuidenhout (2007) and Asiedu (2002;2006) Masca and Demirhan (2008) highlight the problematic nature of labour costs in FDI estimations, especially when dealing with multiple countries.No uniform variable exists for dealing with African labour costs.Labour force participation rates do not reflect cost directly, but together with skills, risk and growth give an indication.
ESTIMATION RESULTS
An iterative process is first used to investigate the different groupings of FDI determinants in order to derive a final specification model.Based on the limitations imposed by the data, the interpretation of results is deliberately conservative.The findings of the iterative process are reported in full in the Appendix and can be interpreted as follows: All factor analysis models have a good fit, except for macro-economic factors.
The analysis of estimated risk factors indicates factors such as finance, government effectiveness, foreign trade and payment, infrastructure, labour force, legal and regulatory issues, and tax policy as notably influential.We conclude that the role of government and a country's financial position play a significant role in the decision on whether to invest.The 'good governance' factors represent a country's political climate.The variables found to be significant also include control of corruption, government effectiveness and rule of law.
The global estimates consist of prices of the different global commodities as indexed by the World Bank (World Bank, 2012b).Significant commodities in the factor analysis include maize, metals and minerals, crude oil, precious metals and agricultural commodities.On the other hand, the only commodity that indicates a significant influence on the FDI flows is the crude oil price.This implies the importance of oil in Africa.This fact also reinforces the conclusion that India's investment has a strong element of 'resource seeking'.
The significant macro-economic factors include the exchange rate, crude oil price, government effectiveness and exports.The regression model augments this to crude oil, government effectiveness and exports.This not only emphasises the importance of oil and good governance, but also highlights the 'market-seeking' element in the relevance of exports.Overall, the macroeconomic model has the worst fit and this indicated that macro-factors by themselves do not represent a significant influence; only when viewed with the other groups do they contribute.
The micro-economic variables explain the influence of industry and firm-level factors.Factor analysis found population growth and school enrolment to be the significant factors in the micro-economic cluster.These variables indicate the vital importance of the labour force size and human capital in a country, which emphasises an 'efficiency-seeking' element in Indian FDI to Africa.
The determinants identified as being significant among the variables are used to derive the final model.There are two models that are estimated: one is to identify the determinants that influence the Indian investor's decision to invest, which is illustrated by the number of deals taking place.The other model estimates the determinants that influence the value or scale of the investment from Indian firms.Government effectiveness has a relatively high positive coefficient that indicates that Indian firms tend to prefer more efficient political regimes.One could query the result on control of corruption, which has the only negative coefficient.The implication is that higher levels of corruption (a negative value) will lead to more deals.Different arguments can be made surrounding the statistical validity based on the severe data limitations.We suggest that the strong Indian investment in the coal, oil and gas sector, along with the reputation of the governments involved, provide enough of a counter-argument.A more in-depth sector study will resolve this issue once more data becomes available.
The relevance of the oil price as a determinant of Indian FDI advances the previous argument and confirms the strong investment repertoire of Indian firms in the coal, oil and gas sector.The oil price is a global determinant that cannot be controlled by African countries.
School enrolment and exports as determinants of Indian FDI extend our understanding of Indian multinational behaviour in that there is a significant distinction between resource-seeking FDI and both the efficiency-seeking and market-seeking forms of FDI, which require level of education and higher levels of trade.Exports are also seen as market seeking; this is important because it indicates the openness of the country, in other words, the manner in which a country interacts and operates with other nations.The measurements that are illustrated in Table 5 indicate a good-fitting model.A chi-square of 1.7 implies a good-fitting model and indicates that the covariance is close to the sample covariance.Given the weaknesses in the data, a zero value for perfect results cannot be expected.The RMSEA indicates a close-fitting model with a significant value that is close to 0. NFI, IFI and CFI values are above 0.9, which indicates the model is a better fit than the default model with no correlation.The final model therefore indicates accurate interpretation and results, since it is accepted as having a good fit.
FIGURE 4: Final regression analysis output on the level of investment
Source: Authors' calculations The model that illustrates the determinants of the level or amount of FDI that Indian investors make is conducted in two parts.The variables indicate high covariance and similarity and therefore two separate models are estimated in order to determine a more accurate picture of the FDI flows.Both models are demonstrated in Figure 4.Both have negative coefficients that indicate that higher levels of efficiency and law enforcement negatively impact the level of investment.This confirms the arguments previously regarding the oil, coal and gas sector.
The determinants of the level of investment are government effectiveness and rule of law.These determinants are highly correlated with one another and therefore estimated in separate models.Both determinants indicate a negative relationship: in other words, the higher the levels of adherence to political regulations the less an investor will invest.Nigeria, Algeria, Angola, Gabon and Equatorial Guinea represent the bulk of the oil sector investments that have negative indices values, which leads to the negative coefficient.
The political stability (risk) of an African country is indicated as being an important factor that influences an investor's decision on the amount to invest.The political status of African countries is rather weak and therefore a significant factor that needs attention and improvement in order to sustain FDI flows to Africa.
CONCLUSION
The objective of this study was specifically to explore the determinants of Indian FDI inflows into Africa.The intention was to determine, with the help of the literature and empirical evidence, an accurate profile of the specific determinants that are relevant in the debate regarding Indian FDI inflows into Africa.Very few studies exist on FDI flows to Africa, let alone determinants from developing countries.
We provide an empirical foundation to assess the relevant determinants of Indian FDI inflows to Africa.The primary result is that Indian firms are mostly attracted to effective, stable governments in efficiency-and market-seeking FDI; however, indications exist that large-scale investments in the oil, coal and gas sector prefer the opposite.This study furthermore makes a contribution to the scarce literature on FDI to Africa.
Speculation on the relativity of traditional determinants of FDI to Africa is extended to the sphere of developing countries investing in Africa.The results confirm that patterns differ from traditional patterns in the literature, with a strong emphasis on government actions to protect the investment and the exports of goods and services.We conclude that traditional determinants provide the overall framework for an investment decision concerning Africa, but a different approach is needed when investing in individual countries.The results obtained are of interest to African countries looking to attract FDI through favourable investment policies.African governments need to identify and implement sustainable policies on good governance, trade promotion and human skills development in order to strengthen their investment climate.
It can be concluded that if an Indian company considers investing in any African market the role of the recipient country's government is the single-biggest determinant, with government effectiveness, rule of law and control of corruption playing the most significant roles.Oil being a major area of investment for Indian firms in African markets indicates an important rule; this is mainly due to the high demand for oil in India due to its own shortage of the resource.The level of development in terms of the labour force in any African country indicates that a significant proportion of Indian FDI is efficiency seeking.The significance of exports indicates that the openness of a country is important for many Indian investors.The more exports take place, the better a country's ability to operate in a multi-stage international supply chain.This indicates a growing relevance for market-seeking FDI.
Recommendations for further research include a comparison with Chinese FDI and a further expansion to traditional partners.As more data becomes available, sectoral studies will become possible and should be pursued.Research could also be conducted on investment into individual African countries in order to gain an in-depth insight into which determinants of FDI inflows are significant to that country.Country analyses will provide a different insight from the general overview that was used in this study.
For a study involving many African countries, the major constraint is the availability of adequate data.Although the situation is changing with more and more international bodies and private entities collecting more specific data, the need still exists for African governments to produce more detailed and transparent data, especially data concerning M&As and infrastructure projects.
APPENDIX: (All results are authors' own calculations)
Factor analysis
Risk factor analysis Good governance factor analysis
FIGURE
FIGURE 1: India's relative position as an FDI source country 2003 to 2012 Source: Financial Times (2012) Between 2003 and 2012, India lagged the United States and the United Kingdom in FDI, but in most years surpassed China.In 2012, India was Africa's largest source of FDI (FIGURE 2).
FIGURE 2 :
FIGURE 2: India's relative performance as an FDI source country 2003 to 2012 Source: Financial Times (2012)
FIGURE 3
FIGURE 3 illustrates the results of the final model that reflects the determinants of Indian FDI in Africa.These factors, which are estimated in each variable group, are included in the estimation; only the factors with a significant influence are included in the final model.The factors are identified in the factor analysis and regression model.The factors that are identified as significant determinants include government effectiveness, control of corruption, crude oil, school enrolment and exports.These determinants indicate a significant influence on the decision to invest.The figure also shows the high levels of covariance (arching bi-directional arrows) between the factors that would make estimation using other techniques less accurate.
Figure 3 :
Figure 3: Final regression analysis output on the decision to investSource:Authors' calculations
TABLE 2
further indicates that Sudan received the highest amount of Indian Mergers and Acquisitions (M&A
TABLE 3 : Top African sectors receiving FDI from India, 2003-2012
(Broadman, 2011)dian and Chinese investments in Africa are based on natural resources, MNEs are diversifying into other sectors, including infrastructure, IT, software, services and telecommunications(Broadman, 2011).In 2010, a third of all acquisitions made in sub-Saharan Africa were made by Indian firms (Bureau van Dijk, 2012) in terms of the total value of deals.Indian companies prefer to enter the market via acquisitions, even though the value of Greenfields FDI exceeds them by far.Large investments have been made in the telecommunications industry -the Indian firm Bharti Airtel bought Zain Africa for US$10.7 billion in 2010(The Financial Times, 2012).The other notable large investments by Indian companies were companies such as Tata motors, the Mahindra Group, Cipla and Ashok Leyland Risk variables from the Economist Intelligence Unit (EIU) operational risk model and the World Bank development indicators are used as determinants.These variables are discussed in the Specification section later on in the article.The EIU model covers 42 African countries from 2006 to 2012, but only 22 from 2003 to 2005.The World Bank covers all African countries, although not all data is available for all African countries. | 6,830.6 | 2014-10-31T00:00:00.000 | [
"Economics"
] |
Nonlinear Dynamics of Adaptive Gun Head Jet System of Fire-Fighting Monitor
The working medium of the adaptive gun head jet system of fire-fighting monitor is generally water containing a little bit of air. During the operation, the pressure pulsation of the fluid will cause the fluctuation of the equivalent stiffness of the gas-liquid mixed fluid, so that the motion of the fluid in the jet system has obvious nonlinear characteristics. In this paper, the nonlinear dynamic model of the jet system is established. The analytical expressions of the nonlinear vibration response of the jet system are derived via the multi-scale method. The main resonance and combined resonance of the jet system are determined. The results show that the external excitation frequency is the dominant frequency of the main resonance response of the jet system, and the combined frequency between the natural frequency of each order and the equivalent stiffness fluctuation frequency of the fluid unit has a small effect on the main resonance, and the maximum amplitude is 0.2592mm; the dominant frequency of the combined resonance response of the jet system is the combined frequency between the natural frequency of each order and the equivalent stiffness fluctuation frequency of the fluid unit, the system amplitude in combined resonance is smaller than that in the main resonance, and the maximum amplitude is 0.002532mm; the main resonance and the combined resonance will adversely affect the dynamic characteristics of the jet system. This research can provide a theoretical basis for the dynamic optimization of the adaptive gun head jet system of the fire-fighting monitor.
I. INTRODUCTION
The nozzle opening of the adaptive gun head of the firefighting monitor can be adjusted according to the inlet flow and pressure of the jet system, so that the fire-fighting monitor can operate in optimal condition under various flows and extinguish large fires quickly and efficiently [1]. The working medium of the fire-fighting monitor is generally water containing a little bit of air. When the fire pump converts the mechanical energy of the prime mover into the kinetic energy of the fluid, it is likely that the air will be released because the pressure is lower than the air apart pressure. Meanwhile, a certain amount of air foam is often mixed at the entrance or exit of the fire pump to enhance the effect of The associate editor coordinating the review of this manuscript and approving it for publication was Lei Wang. fire extinguishing. Therefore, the jet fluid of the fire monitor is actually water containing a little bit of air, i.e. a gasliquid mixed fluid. During the operation, the pulsation of both the flow and pressure of the fire pump inevitably causes the bulk modulus and stiffness of the jet fluid to constantly change. The stiffness of the gas-liquid mixed fluid determines the natural frequency of the fluid transmission system and directly affects the static and dynamic performance [2]. Since the fluid state parameters change periodically caused by the fluid compressibility and pressure pulsation, the jet system is a non-autonomous system and has obvious nonlinear characteristics.
The pressure pulsation of the jet system mainly comes from the pulsation of the fire pump itself. The fluid in the pump has two different types of pressure pulsations, namely turbulent pulsations that ignore the compressibility of the fluid and pulse pulsations that ignore the viscosity of the fluid. It is generally considered that turbulent pulsations are random pulsations close to white noise, and pulse pulsations include harmonic signals, which are mainly composed of pulsations whose frequency is blade-passing frequency and harmonics, and pulsations whose frequency is rotation frequency and harmonics. The pressure pulsation of the fire pump is not only related to the rotation of the pump blades and shafts, but also affected by the cavitation and turbulence of the fluid. However, even the same fire pump still has pressure pulsations with different properties under different working conditions. Therefore, it is difficult to predict the pressure pulsations theoretically. With the popularization of computer technology, especially based on the development of computational fluid dynamics, modern intelligent fault diagnosis, digital signal processing and fast Fourier transform algorithm [3], [4], the multi-mode simulation and pulsation spectrum analysis of the internal flow field of the centrifugal pump have become a reality. Li et al. calculated the unsteady flow characteristics of the mixed-flow pump, analyzed the pressure pulsation characteristics of the mixed-flow pump under near stall conditions, and revealed the stall propagation mechanism [5]. Gao et al. analyzed the unsteady pressure pulsation and internal flow characteristics of the centrifugal pump, expounded the root cause of the periodic pressure pulsation, and pointed out that the interaction between the impeller and the volute tongue has a significant effect on the unsteady pressure pulsation of the centrifugal pump [6]. Zhang experimentally studied the effect of blade cutting on pump performance, especially the effect of pressure pulsation, and discussed the relationship between internal flow and pressure pulsation by numerical calculation [7]. Through experiments and numerical simulations, Appiah et al. verified that the instability of internal flow field during the rotation of the centrifugal pump impeller was the main factor causing the pressure pulsation, and pointed out that the rotor-stator interaction generated the highest pressure pulsation distribution at the volute tongue [8].
Aiming at the nonlinear dynamics of non-autonomous systems, scholars focus on applying nonlinear dynamic theory and methods to study the nonlinear vibration law, such as bifurcation and chaos, when random parameters change [9], [10]. The perturbation method, including the Krylov-Bogolubov-Mitropolsky (KBM) method, the harmonic balance method, and the multi-scale method, is often adopted in parametric vibration research, which can be directly used to solve the system's nonlinear differential equations [11], [12]. Moreover, the homotopy analysis method based on the continuously changing topological theory, is often used in analysis of strong nonlinear systems [13]. Liu et al. studied the nonlinear damped vibration of a fabric membrane under impact loading through the KBM perturbation method [14]. Keleshteri and Jelovica applied the harmonic balance method along with the direct iterative approach to the research of the free vibration response of functionally graded porous (FGP) cylindrical panels [15].
Wang et al. established the nonlinear free vibration model of a cantilever beam considering the effects of gravity, and analyzed super-harmonic resonances by the time-domain multi-scale method and harmonic balance method [16]. Sadri et al. applied multi-scale method to analyze the primary and secondary resonance conditions of a cantilever beam with intermediate lumped masses [17]. Armand et al. analyzed the effect of fretting wear on the nonlinear dynamic behaviors of assembly structures by the multi-scale method [18]. Hao researched the forced responses, the main resonances, and the superharmonic resonances of electromechanical integrated magnetic gear (EIMG) system via the multi-scale method, and found that when the wave frequency was close to the natural frequency or twice/half the natural frequency of the derived EIMG system, strong resonance occurred [19], [20]. Odibat proposed an optimal homotopy analysis approach, which accelerated the convergence of series solutions and was expected to be adopted in nonlinear problems in fractional calculus [21]. In order to deal with the common nonlinear problems of fluid transmission systems, scholars have analyzed the nonlinear dynamic characteristics through amplitude-frequency diagrams, time histories, Fourier spectra, phase portraits, and Poincare maps, and studied the instability mechanism [22], [23]. Some researchers employed several control strategies, such as adaptive robust control [24], [25], active disturbance rejection control [26], and multilayer neural-networks [27], to improve the control accuracy of the transmission system, thereby reducing the impact of vibration on the dynamic performance of the system. There are also scholars who proposed a novel pump and valve combined electro-hydraulic system [28] to improve the static and dynamic performance of the transmission system via principle innovation.
In summary, there is a periodic pressure pulsation during the operation of the fire pump, and the pressure pulsation directly affects the stability of the jet system. Research on the parameter vibration of the adaptive gun head jet system of fire-fighting monitor considering the pressure pulsation of the fire pump, however, has not been carried out. Based on the assumption that the fluid pressure pulsation is a harmonic function, a dynamic model of the adaptive gun head jet system of fire-fighting monitor is established, and the main resonance and combined resonance response under the parameter vibration of the jet system are determined by the multi-scale method, which can provide a theoretical basis for dynamic optimization design of the jet system.
II. DYNAMIC MODEL OF THE JET SYSTEM
The structure of the adaptive gun head is shown in Fig. 1. The inlet of the gun head is on the left side and the outlet is on the right. The adaptive mechanism consisting of the spray core, the end cap, the core rod, and the spring is the core component of the adaptive gun head. The end cap and the core rod are fixedly connected to the enclosure through the regulator. The spray core can slide in the axial direction. The left side of the spring acts on the spray core and the right VOLUME 8, 2020 side acts on the end cap. At the initial moment, the spring is in a pre-compressed state. Meanwhile, the spray core is closely attached to the inner nozzle and the nozzle opening is zero. When the nozzle inlet flow increases, the force on the left side of the spray core increases. When the fluid force is greater than the spring force, the spray core moves to the right, and the nozzle opening increases. In contrast, when the inlet flow decreases, the spray core moves to the left, and the nozzle opening decreases. The adaptive mechanism enables the adaptive gun head to automatically adjust the nozzle opening according to the changes of inlet flow and pressure, so that it can achieve better jet performance under various operating conditions, and extinguish large fires quickly and efficiently.
The internal structure of the adaptive gun head of the firefighting monitor is shown in Fig. 2, and the pipe sections and their cross-sectional dimensions are shown in Table 1.
In Fig. 2, the red line with arrows is the flow line of the fluid. It can be known from the direction of the line that after flowing through the section n-n, where the nozzle opening locates, the fluid is reflected by the internal surface of the outer nozzle and converges at the front end of the gun head to form a jet. The regulator, installed at the fifth section of the gun head as shown in both Fig. 2 and Table 1, can turn the radial velocity of the fluid into the axial velocity, make the flow line more regular, and improve the stability of the jet. At the center of the spray core guidance surface, there are circular holes distributed evenly along the circle. During the operation of the fire-fighting monitor, the fluid enters the interior of the spray core through circular holes and forms a certain hydrostatic pressure.
In order to facilitate theoretical modeling and analysis, the dynamic model of the adaptive gun head jet system of fire-fighting monitor makes the following assumptions: 1. Except the fluid unit and the spring, the parts such as the spray core and the enclosure are considered to be rigid bodies and their deformation is not considered.
2. The spray core and the fluid are only subjected to the axial force, and the force of the fluid on the spray core is simplified to the spring force along the axial direction.
3. The damping between the fluid unit and the solid element is equivalent to the axial linear damping, and the damping formed by the uniformly distributed small hole on the nozzle is equivalent to the axial linear damping.
4. Processing and installation errors of all parts are ignored. The established dynamic model of the adaptive gun head jet system of the fire-fighting monitor is shown in Fig. 3.
In Fig. 3, F is the pulsating excitation force caused by the pressure fluctuation of the fire pump. m 1 , m 2 , and m 3 are the masses of the fluid unit 1, the spray core and the fluid unit 2 in the jet system, respectively. The fluid unit 1 is the fluid contained by the inlet flow cross section, the outer surface of the spray core, and the flow cross section n-n of the jet system, along with the internal surface of the fire-fighting monitor parts. The fluid unit 2 is the fluid contained by the inner surface of the spray core and the left end surface of the end cap. k f1 is the stiffness of fluid unit 1. k f21 is equal to k f22 , the total stiffness obtained by paralleling the two is the stiffness of fluid unit 2. k 1 is the stiffness of the mechanical spring inside the spray core. c 1 is the equivalent linear damping between the pipe wall and the outer wall of the monitor and the fluid unit 1 in the jet system. c 2 is equal to c 3 , and the total damping obtained by paralleling the two is equivalent to the structural damping of the orifice of the spray core.
III. DERIVATION AND SOLUTION OF VIBRATION EQUATION OF JET SYSTEM A. TIME-VARYING EQUIVALENT STIFFNESS OF FLUID
In the dynamic model shown in Fig. 3, the stiffness of the fluid unit needs to be calculated equivalently. According to the bulk modulus theory of the gas-liquid mixed fluid, let Bf represents the bulk modulus of the gas-liquid mixed fluid, then we can get: Assuming that the average area of the flow section of the fluid unit is S a and the axial length of the fluid domain is l, then the volume of the fluid domain is: The definition of stiffness is: Combining (1), (2), and (3), the fluid equivalent stiffness can be expressed with the fluid bulk modulus as: During the operation, the pulsation of the flow and pressure of the fire pump is inevitable, so the fluid density and equivalent stiffness constantly change. Under actual working conditions, the fluid pressure consists of two parts: average pressure and pulsating pressure. Assuming that the pulsating pressure varies by cosine, according to Euler's theorem, the time-varying pressure pulsation can be expressed as: where p is steady pressure, p is the pressure pulsation amplitude (Pa), and ω o is the pressure pulsation angular frequency (rad/s). Under a certain initial gas content, the fluctuation of the equivalent fluid stiffness is consistent with the fluid pressure and can be expressed as: where i = 1, 2,k fi is the steady equivalent stiffness of the fluid unit (N/m), k fi is the equivalent stiffness fluctuation of the fluid unit (N/m), ε is a small parameter and ε = k fi 2k fi , ω f is the time-varying equivalent stiffness angular frequency of the fluid unit (rad/s).
B. PARAMETRIC VIBRATION EQUATION OF JET SYSTEM
Based on the lumped parameter method, k f2 represents paralleled k f21 and k f22 , and the parametric vibration equation obtained from the dynamic model of the jet system shown in Fig. 3 is: where the mass matrix is: The damping matrix is: The stiffness matrix is: Under stable operation, the equivalent stiffness of the two fluid units and the external force on the jet system vary by cosine. According to Euler's formula, the time-varying external excitation of a jet system can be expressed as: The stiffness fluctuation matrix is: The regular mode ψ and the spectral matrix of the system are known, after the regularization of (7), we can obtain:η where η is the regular displacement vector, C N is the regular damping matrix, Q is the regular external excitation vector and K N is the regular equivalent stiffness fluctuation matrix of the fluid unit. Among them, C N is: Q can be expressed as: And K N is: Based on the multi-scale method, the quadratic approximate solution and small parameter are introduced: where T 0 = t, T 1 = εt, and values of i and j are 1, 2, and 3, respectively. Substituting the above equations into (13), we can get the zero power equation of ε: And the first power equation of ε is: where, cc is the complex conjugate. Let the solution of (18) be: where, i = 1, 2, and 3.
The following equation can be obtained by substituting (20) into (19): When the external excitation frequency of is close to the first natural frequency of the jet system, the main resonance of the jet system will occur. After introducing the tuning parameter σ , ω o can be expressed as: After substituting (22) into (21), we need to eliminate the secular term, so: Equation (23) can be solved by method of variation of constant, and the solution is: where C i (i = 1, 2, 3) is a constant determined by the initial conditions of the jet system.
The first formula in (23) can be simplified by combining trigonometric function, complex number, and Euler's formula: The term C i e − c Nii 2 T 1 in (24) will gradually approach zero over time, so the steady zero-order approximate analytical solution of the jet system can be obtained by substituting (24) into (20): The steady first-order approximate analytical solution of the jet system can be obtained by substituting (24) and (25) into (21): Then the steady response of the main resonance of the jet system in rectangular coordinates is: When the external excitation frequency approaches the second and third natural frequencies of the jet system respectively, the main resonance response can be obtained referring to the above solution process.
From the above results, it can be known that the main resonance response of the jet system includes multiple frequency components, including the first natural frequency, the second natural frequency, the third natural frequency, the fluid equivalent stiffness fluctuation frequency, and the combination frequency of the natural frequency and the stiffness fluctuation frequency.
D. APPROXIMATE ANALYTICAL SOLUTION OF THE COMBINED RESONANCE OF JET SYSTEM
Similar to the derivation of the main resonance, the multiscale method is also used to derive the combined resonance response of the jet system. The quadratic approximate solution and small parameter are introduced, then: where, T n = ε n t and values of i and j are 1, 2, and 3, respectively. Substituting (29) into (13), we can get the zero power equation of ε: where cc is the complex conjugate. And the first power equation of ε is: Let the solution of (30) be: where, D i = The following equation can be obtained by substituting (32) into (31): According to (33), except that when the external excitation frequency is close to the natural frequencies of the jet system, the main resonance of the jet system will occur. When the external excitation frequency is close to the combined frequency between the natural frequency of each order and the equivalent stiffness fluctuation frequency of the fluid unit, the combined resonance of the jet system will also occur. The emergence of the main resonance and combined resonance makes the vibration of the jet system more complex and diverse.
When the external excitation frequency of is close to the first natural frequency of the jet system ω n1 and the equivalent stiffness fluctuation frequency of the fluid ω f , after introducing the tuning parameter σ , ω o can be expressed as: After substituting (34) into (33), we need to eliminate the secular term, so: Equation (35) can be solved by method of variation of constant, and the solution is: where E i (i = 1, 2, 3) is a constant determined by the initial conditions of the jet system, and θ = arctan( c N11 2σ ). The steady zero-order approximate analytical solution of the jet system can be obtained by substituting (36) into (32): The steady first-order approximate analytical solution of the jet system can be obtained by substituting (36) into (33): Then the steady response of the combined resonance of the jet system in rectangular coordinates is: When the external excitation frequency approaches the combined frequency between the natural frequencies of the second and third order and the equivalent stiffness fluctuation frequency of the fluid unit respectively, the combined resonance response can be obtained referring to the above solution process.
IV. ANALYSIS OF PARAMETRIC VIBRATION RESPONSE OF JET SYSTEM
The time-domain response of the jet system can be determined based on the time-domain theory, and the amplitudefrequency characteristics of the jet system can be determined by Fourier transform. The parameters of adaptive gun head jet system of the fire-fighting monitor are shown in Table 2.
A. ANALYSIS OF THE MAIN RESONANCE RESPONSE OF JET SYSTEM
When the external excitation frequencies are close to the stable values of the natural angular frequency of each order of the jet system, the time domain response and the frequency domain response of the jet system are shown in Fig. 4 and Fig. 5, respectively.
It can be seen from Fig. 4 that when the external excitation frequency approaches steady values of the first, second, and third natural frequency, the main resonance of the jet system is strong, and the amplitudes of the fluid unit 1, the spray core and the fluid unit 2 reach the maximum respectively, which is closely related to the modal characteristics of the jet system. It can be seen from Fig. 5 that when the main resonance occurs, the external excitation frequency is the dominant frequency. From (25) and (26), it can be known that the main resonance response also includes the combined frequency between the natural frequency of each order and the equivalent stiffness fluctuation frequency of the fluid unit. Since the external excitation frequency is smaller than the combined frequency, the combined frequency has less effect on the main resonance of the jet system, but it still has a regulating effect. When the external excitation frequency is equal to the first natural frequency, the amplitude of the main resonance is the largest, which is 0.2592 mm.
B. ANALYSIS OF THE COMBINED RESONANCE RESPONSE OF JET SYSTEM
When the external excitation frequency is close to the sum of the stable values of the natural angular frequency of each order and the equivalent stiffness fluctuation angular frequency of the fluid unit, the time domain response and the frequency domain response of the jet system are shown in Fig. 6 and Fig. 7, respectively.
It can be seen from Fig. 6 that when the external excitation frequency is close to the combined frequency, the combined resonance occurs in the jet system and the amplitude is relatively large, but the amplitude is smaller than that in the main resonance. It can be seen from Fig. 7 that when the combined resonance occurs in the jet system, the dominant frequency is the combined frequency, and the natural frequency of each order has effects of regulation. Meanwhile, it can be found that when the external excitation frequency is the sum of the first and second natural frequency and the fluid stiffness fluctuation frequency, respectively, the displacements of the fluid unit 1 and the spray core reach the maximum, and when the external excitation frequency is the sum of the third natural fluid stiffness fluctuation frequency, the amplitude of the combined resonance is the largest, which is 0.002532 mm.
V. EXPERIMENTAL VERIFICATION
In order to verify the accuracy of the dynamic model of the adaptive gun head jet system of the fire-fighting monitor, a modal test is required, which can be divided into two parts, namely the fluid pressure pulsation modal analysis and the jet system modal analysis.
A. MODAL ANALYSIS OF THE FLUID PRESSURE PULSATION
Due to the pressure pulsation of the fluid caused by the centrifugal pump, the bulk modulus of the fluid unit in the jet system changes dynamically, which exacerbates the VOLUME 8, 2020 complexity of the dynamics of the jet system. In order to determine the pressure pulsation frequency of the input fluid during the experiment, the amplitude-frequency characteristics of the fluid pressure signal are analyzed, and the cause of frequency components is not discussed here. The amplitudefrequency characteristics of the fluid pressure pulsation are shown in Fig. 8.
It can be seen from Fig. 8 that the amplitude-frequency characteristic curve of the fluid pressure pulsation has a peak at 46.8 Hz, and this frequency corresponds to the shaft frequency (2800 RPM) of the centrifugal pump, so the pressure pulsation frequency of the fluid in the jet system is 46.8 Hz.
In addition, the turbulence in the jet system introduces white noise components, leading amplitude-frequency characteristic curve to be very complicated as a whole.
B. MODAL ANALYSIS OF JET SYSTEM
A platform for the dynamic experiment of the jet system was built and the modal test was carried out by the hammering method. The experiment platform is shown in Fig. 9.
The fast Fourier transform is used to analyze the acceleration signal output from the jet system in the frequency domain. The amplitude-frequency characteristics of the acceleration signal of the jet system under the superposition of the fluid pulsation excitation and the hammer step excitation are shown in Fig. 10.
It can be seen from Fig. 10 that the jet system has peaks at 19.5 Hz and 46.8 Hz. Since the excitation of the jet system at this time is the superposition of the hammer step excitation and the fluid pulsation excitation, and the fluid pulsation frequency is 46.8 Hz, the experimental value of the first natural frequency is 19.5Hz. Since the natural frequencies of the other orders of the jet system are relatively high, and are far from the external excitation frequency, and the highfrequency part is chaotic under the influence of turbulence, the natural frequencies of other orders are not analyzed for the time being. The experimental data and theoretical value of the first natural frequency of the jet system are shown in Table 3.
It can be seen from Table 3 that the theoretical value of the first natural frequency of the jet system is very close to the experimental one, and the error is small, which illustrates the validity and accuracy of the theoretical analysis of the dynamics of the jet system.
VI. CONCLUSIONS
Due to the pressure pulsation of the fluid, the adaptive gun head jet system of the fire-fighting monitor is a typical parametric vibration system. When the pulsation excitation frequency is close to the natural frequency of the jet system or the combined frequency between the natural frequency and the equivalent stiffness fluctuation frequency of the fluid unit, the main resonance or combined resonance of the jet system will occur. Resonance will seriously deteriorate the dynamic behaviors of the jet system, and the jet system has the following vibration characteristics: 1. The dominant frequency of the main resonance response is the external excitation frequency of the jet system, and the combined frequency has a small effect of regulation on the main resonance of the jet system. When the external excitation frequency is equal to the first natural frequency, the amplitude of the main resonance is the largest, which is 0.2592 mm.
2. When the external excitation frequency is close to the combined frequency, the amplitude is smaller than that in the main resonance. The dominant frequency is the combined frequency, and the natural frequency of each order has effects of regulation. When the external excitation frequency is the sum of the first and second natural frequency and the fluid stiffness fluctuation frequency, respectively, the displacements of the fluid unit 1 and the spray core reach the maximum. When the external excitation frequency is the sum of the third natural frequency and fluid stiffness fluctuation, the displacement of the fluid unit 1 is the largest. When the external excitation frequency is the sum of the first natural frequency and the fluid stiffness fluctuation frequency, the amplitude of the combined resonance is the largest, which is 0.002532 mm.
3. A platform for the dynamic experiment of the adaptive gun head jet system of the fire-fighting monitor was built and the modal test was carried out. The experimental value of the first natural frequency of the jet system is very close to the theoretical one, and the error is small, which illustrates the validity and accuracy of the theoretical analysis of the dynamics of the jet system. | 6,620.6 | 2020-01-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
A Best-Fitting B-Spline Neural Network Approach to the Prediction of Advection–Diffusion Physical Fields with Absorption and Source Terms
This paper proposed a two-dimensional steady-state field prediction approach that combines B-spline functions and a fully connected neural network. In this approach, field data, which are determined by corresponding control vectors, are fitted by a selected B-spline function set, yielding the corresponding best-fitting weight vectors, and then a fully connected neural network is trained using those weight vectors and control vectors. The trained neural network first predicts a weight vector using a given control vector, and then the corresponding field can be restored via the selected B-spline set. This method was applied to learn and predict two-dimensional steady advection–diffusion physical fields with absorption and source terms, and its accuracy and performance were tested and verified by a series of numerical experiments with different B-spline sets, boundary conditions, field gradients, and field states. The proposed method was finally compared with a generative adversarial network (GAN) and a physics-informed neural network (PINN). The results indicated that the B-spline neural network could predict the tested physical fields well; the overall error can be reduced by expanding the selected B-spline set. Compared with GAN and PINN, the proposed method also presented the advantages of a high prediction accuracy, less demand for training data, and high training efficiency.
Introduction
At present, industry and scientific research extensively involve various forms of physical fields and their computation and simulation, such as flow field, temperature field, etc.The traditional way, which applies various numerical methods to solve these complex mathematical models, usually encounters problems related to complex methods, poor convergence, and the high cost in terms of time, and presents poor adaptability, especially for systems with strong nonlinearity and whose model parameters are difficult to determine [1][2][3][4][5][6].In recent years, neural networks represented by deep learning have been successfully applied to the learning and prediction of many complex systems and scenarios, and applying machine learning to predict fields has become a hot topic [7][8][9].However, a field has the characteristics of continuous spatial distribution, and field prediction needs to consider each spatial point of the huge amount of field data rather than one or more individual features.If a traditional fully connected neural network is directly used to predict the field, the number of nodes in the network output layer will be very large, making network training extremely difficult.By selecting an appropriate set of basis functions, fitting the huge amount of field data as low-dimensional vectors, and then training an ordinary neural network on the low-dimensional data, a better neural network model can
Theory 2.1. Best-Fitting B-Spline Function
A one-dimensional B-spline function set with an order of k and an element number of m can be defined as follows: where {t j } is a knot sequence, {t j |j = 1, 2, . .., m + k + 1}.Assuming that x goes from x min to x max , the knot sequence for a uniform B-spline function can then be provided in ascending order as follows: Here, t j (j < k + 1 or j > m + 1) are external knots, t j (j > k + 1 or j < m + 1) are internal knots, and the other two are end knots.For a higher-dimension B-spline function, all the one-dimensional functions defined in each dimension need to be multiplied, as in Equation (1).Giving a two-dimensional function ϕ(x, y), we can have approximation ϕ*(x, y) composed of a linear combination of B-splines and a weight vectors a, as follows: Here, m x and m y are the numbers of B-spline functions in the x and y dimensions, respectively, and k x and k y are their corresponding orders.Rewriting this formula in terms of matrices, we have: and where N lj = N k x j (x)N k y l (y).In practice, function ϕ(x, y) is often given by n x × n y discrete points, ϕ(x i , y p ), for which we have a fitting error: Applying the least square method to minimize the fitting error E, we have a best-fitting weight vector a: Here and By using the best fitting, a function ϕ(x, y) or its discrete version ϕ*(x, y) can be represented by a weight vector and a pre-selected B-spline function set; as a consequence, a data set including amounts of ϕ*(x, y) can be degraded into a series of corresponding weight vectors along with a known B-spline function set.In general, the scale of ϕ*(x, y) is far greater than that of the corresponding weight vector, which is determined by the number of B-splines in the selected function set.Additionally, B-spline fitting can filter the high-frequency modes in ϕ*(x, y); the extent of this can be modified by changing the number or the order of the B-splines in the function set.Since the B-spline function is piecewise, the B-spline fitting can well approximate the high-frequency modes through the addition of more low-order B-splines to the function set instead of higher-order basis functions.
Combination of Neural Network and B-Spline Function
The architecture of the bBSNN is shown in Figure 1, which combines a fully connected neural network and the B-spline function fitting.A data set {OUT: ϕ(x, y)} with r samples along with r corresponding control vectors {IN: p} can be used as an example.
The training and prediction of this method can be illustrated as follows (Please refer to the Supplementary Materials for the bBSNN code.): (1) Use the numbers m x and m y and the orders k x and k y , and calculate the knot sequence in each dimension according to the domain boundaries of the data set, then generate the B-spline function set; (2) Calculate the best-fitting weight vector a using the least square method; thus, the data set can be replaced by: (3) Train an ordinary fully connected neural network whose input and output are now as follows: {In (4) Once the neural network is well trained, a new control vector p can be used to predict a new weight vector a; predicted function ϕ*(x, y) is then restored by Equation (3).
Numerical Experiments and Discussions
In this section, we focus on the numerical experiments conducted on general advection-diffusion physical fields, using absorption and source terms to examine the performance of the bBSNN, in which the boundary conditions and the scalar field are
Numerical Experiments and Discussions
In this section, we focus on the numerical experiments conducted on general advectiondiffusion physical fields, using absorption and source terms to examine the performance of the bBSNN, in which the boundary conditions and the scalar field are regarded as the input control vector {IN: p} and the output data {OUT: ϕ(x, y)}, respectively.Here, we consider the physical fields as follows: where ϕ is a scalar field and u is a vector field, such as the velocity field.a, b, c, and d are the coefficients of the diffusion, convection, absorption, and source terms, respectively.In the numerical experiments, a vortical velocity field with an average velocity of 1 is implanted into the PDE and expressed as follows: Here, L x and L y are the dimensions of the domain; u = u(x, y) and v = v(x, y) are the velocity components.The physical fields can be characterized by three parameters: (1) α = a/b, representing the diffusion rate of the scalar field; (2) Pécklet number Pe = LU/α, a dimensionless number representing the ratio of the convection intensity to the diffusion intensity of the scalar field; (3) My = L 2 C/a, a dimensionless number representing the ratio of the absorption intensity to the diffusion intensity of the scalar field.Here, L, U, and C are the characteristic length, velocity, and absorption rate of the system, respectively.(Please refer to the Supplementary Materials for the finite volume method calculation code of the above physical field.)
Numerical Experiment Setup and Preliminary Verification
For the numerical experiment setup, a square domain is configured for the physical fields, and two independent and variable conditions and two fixed conditions are set at the four boundaries of the domain for preliminary verification, as shown in Figure 2, yielding an input control vector p = [B 1 , B 2 ] T , whose values are slightly smoothed in the vicinity of their junction points by a hyperbolic tangent function.The parameters of the physical fields are given as a = 10, b = 200, and c = 0.5, d = 1000, which yields a field state with α = 0.05, Pe = 20, and My = 0.05.The numerical result of the scalar field ϕ(x, y) is obtained with a grid resolution of 128 × 128 using a finite volume method.The configuration of the B-spline function set is k x = k y = 2 and m x = m y = 10, determining a weight vector a with a size of 100 × 1.For the preliminary tests and verification, we generated a data se to train the neural network, where the input control vectors p were ra and B1, B2 ∈ [0, 100].By computing the DSSA physical fields corresponding field data ϕ(x, y) can be obtained, which yields 100 cor For the preliminary tests and verification, we generated a data set with 100 samples to train the neural network, where the input control vectors p were randomly generated and B 1 , B 2 ∈ [0, 100].By computing the DSSA physical fields numerically, 100 corresponding field data ϕ(x, y) can be obtained, which yields 100 corresponding weight vectors a when using the best fit and the selected B-spline function set.Three hidden layers with 8, 10, and 12 nodes, respectively, and one output layer were configured for the fully connected neural network, while two S-shaped tangent functions and two pure linear functions were selected for the corresponding layers.After the network was trained, we verified the bBSNN method via predicting a case with [B 1 B 2 ] T = [80 50] T , which is a new sample that has not appeared in the training data set.Figure 3 shows the ground truth and predicted fields of this case, as well as the error between them.To evaluate the error field, we calculated the averaged value µ, standard deviation σ, and the averaged absolute error µ |ϕ*−ϕ| .When comparing Figure 3a with Figure 3b, they are shown to be extremely similar.Figure 3c shows the error field, and it can be seen that most of the errors are small, except those reaching about 10 at the corners or close to the boundaries.The average error µ = 0.0131, the standard deviation σ =1.4080, and the averaged absolute error µ |ϕ*−ϕ| = 0.6952.This quantitatively indicates that the agreement between the prediction and the ground truth is good.For the preliminary tests and verification, we generated a data set with 100 samples to train the neural network, where the input control vectors p were randomly generated and B1, B2 ∈ [0, 100].By computing the DSSA physical fields numerically, 100 corresponding field data ϕ(x, y) can be obtained, which yields 100 corresponding weight vectors a when using the best fit and the selected B-spline function set.Three hidden layers with 8, 10, and 12 nodes, respectively, and one output layer were configured for the fully connected neural network, while two S-shaped tangent functions and two pure linear functions were selected for the corresponding layers.After the network was trained, we verified the bBSNN method via predicting a case with [B1 B2] T = [80 50] T , which is a new sample that has not appeared in the training data set.Figure 3 shows the ground truth and predicted fields of this case, as well as the error between them.To evaluate the error field, we calculated the averaged value µ, standard deviation σ, and the averaged absolute error µ|ϕ*−ϕ|.When comparing Figure 3a with Figure 3b, they are shown to be extremely similar.Figure 3c shows the error field, and it can be seen that most of the errors are small, except those reaching about 10 at the corners or close to the boundaries.The average error µ = 0.0131, the standard deviation σ =1.4080, and the averaged absolute error µ|ϕ*−ϕ| = 0.6952.This quantitatively indicates that the agreement between the prediction and the ground truth is good.Qualitatively, it can be seen from Figure 3 that where the gradient is large, the error is likely to be large.To further reduce the error, we added more B-splines to the function Qualitatively, it can be seen from Figure 3 that where the gradient is large, the error is likely to be large.To further reduce the error, we added more B-splines to the function set to increase the resolution of the high-frequency components of the field when finding the best fitting.Figure 4a shows the prediction results when using the B-spline function set with k x = k y = 3 and m x = m y = 15.The error field in Figure 4b shows that the averaged error µ = −0.0012, the standard deviation σ = 0.3589, the averaged absolute error µ |ϕ*−ϕ| = 0.1160, and the maximum error is about 6.This indicates that the error can be greatly reduced by increasing the number of functions of the selected B-spline set.However, the computational cost and complexity of the training are also greatly increased.
the best fitting.Figure 4a shows the prediction results when using the B-spline function set with kx = ky = 3 and mx = my = 15.The error field in Figure 4b shows that the averaged error µ = −0.0012, the standard deviation σ =0.3589, the averaged absolute error µ|ϕ*−ϕ| = 0.1160, and the maximum error is about 6.This indicates that the error can be greatly reduced by increasing the number of functions of the selected B-spline set.However, the computational cost and complexity of the training are also greatly increased.
Effect of the Size of the Input Control Vectors
To examine the performance of the bBSNN method when predicting more complicated DSSA physical fields, we add more independent and variable boundary conditions to expand the size of the input control vectors.In this subsection, two numerical experiments are carried out, where the configuration of the selected B-spline function set is still so that kx = ky = 2 and mx = my = 10, while the independent and variable boundary conditions are increased to 4 and 6, respectively.The data sets of these two numerical experiments are given 150 and 200 random samples, respectively, with the control vector Bi ∈ [0, 100].For the architecture of the fully connected neural network, we only change the nodes of the input layer to match the size of the control vector and keep the hidden layers, the output layer, and the activation functions the same as those in Section 3.1.
Figure 5a shows the configuration of the numerical experiment with four independent boundary conditions, and the DSSA physical fields with the same parameters used in Section 3.1 were still computed at a grid resolution of 128 × 128, obtaining the ground-truth field under a tested condition [B1 B2 B3 B4] T = [80 50 25 60] T , as shown in Figure 5b.After the neural network was well trained, the predicted field obtained under the same conditions by the bBSNN is shown in Figure 5c, which is also very close to the ground-truth field.Figure 5d shows the error field, where µ = −0.0061,σ = 0.8270, µ|ϕ*−ϕ| = 0.3991, and the maximum error is about 12. Compared with the experiment with two independent boundary conditions, the characteristics of the error field are similar, while the error statistics are even smaller.This is likely due to the smaller field gradient under the tested condition.
Effect of the Size of the Input Control Vectors
To examine the performance of the bBSNN method when predicting more complicated DSSA physical fields, we add more independent and variable boundary conditions to expand the size of the input control vectors.In this subsection, two numerical experiments are carried out, where the configuration of the selected B-spline function set is still so that k x = k y = 2 and m x = m y = 10, while the independent and variable boundary conditions are increased to 4 and 6, respectively.The data sets of these two numerical experiments are given 150 and 200 random samples, respectively, with the control vector B i ∈ [0, 100].For the architecture of the fully connected neural network, we only change the nodes of the input layer to match the size of the control vector and keep the hidden layers, the output layer, and the activation functions the same as those in Section 3.1.
Figure 5a shows the configuration of the numerical experiment with four independent boundary conditions, and the DSSA physical fields with the same parameters used in Section 3.1 were still computed at a grid resolution of 128 × 128, obtaining the ground-truth field under a tested condition [B 1 B 2 B 3 B 4 ] T = [80 50 25 60] T , as shown in Figure 5b.After the neural network was well trained, the predicted field obtained under the same conditions by the bBSNN is shown in Figure 5c, which is also very close to the ground-truth field.Figure 5d shows the error field, where µ = −0.0061,σ = 0.8270, µ |ϕ*−ϕ| = 0.3991, and the maximum error is about 12. Compared with the experiment with two independent boundary conditions, the characteristics of the error field are similar, while the error statistics are even smaller.This is likely due to the smaller field gradient under the tested condition.
Similarly, Figure 6 shows the configuration of the numerical experiment with six independent boundary conditions and the ground-truth field, as well as that predicted under a tested boundary condition [B 1 B 2 B 3 B 4 B 5 B 6 ] T = [80 50 25 60 100 40] T .Figure 6d shows the error field, where µ = −0.0138,σ = 0.9471, µ |ϕ*−ϕ| = 0.4792.Compared with the experiment with four independent boundary conditions, the error slightly increases, while it is still less than that of the experiment with two.These numerical experiments indicate that the size of the input control vector has little effect on the accuracy of the bBSNN in predicting the steady-state DSSA physical fields, and the error variations are mainly due to the differences in the local field gradients.
while it is still less than that of the experiment with two.These numerical experiments indicate that the size of the input control vector has little effect on the accuracy of the bBSNN in predicting the steady-state DSSA physical fields, and the error variations are mainly due to the differences in the local field gradients.while it is still less than that of the experiment with two.These numerical experiments indicate that the size of the input control vector has little effect on the accuracy of the bBSNN in predicting the steady-state DSSA physical fields, and the error variations are mainly due to the differences in the local field gradients.
Effect of the Field Gradient
The aforementioned numerical experiments also imply a potential great influence from the field gradient on the accuracy of the bBSNN method.To further examine the effect of the field gradient, we change the value range of the boundary conditions to generate two data sets with greatly different field gradients in the physical fields with two independent boundary conditions, where B1, B2 ∈ [0, 300] and B1, B2 ∈ [0, 900], respectively.Additionally, the configuration of the domain, the selected B-spline function set (kx = ky = 3 and mx = my = 15), and the architecture of the fully connected neural network are the same as those used in the second test in Section 3.1.For these two data sets, we randomly generated 150 samples in their corresponding ranges.After these two networks were trained, the cases with conditions [B1 B2] T = [240 150] T and [720 450] T were predicted, respectively.Figure 7a-c
Effect of the Field Gradient
The aforementioned numerical experiments also imply a potential great influence from the field gradient on the accuracy of the bBSNN method.To further examine the effect of the field gradient, we change the value range of the boundary conditions to generate two data sets with greatly different field gradients in the physical fields with two independent boundary conditions, where B 1 , B 2 ∈ [0, 300] and B 1 , B 2 ∈ [0, 900], respectively.Additionally, the configuration of the domain, the selected B-spline function set (k x = k y = 3 and m x = m y = 15), and the architecture of the fully connected neural network are the same as those used in the second test in Section 3.1.For these two data sets, we randomly generated 150 samples in their corresponding ranges.After these two networks were trained, the cases with conditions [B 1 B 2 ] T = [240 150] T and [720 450] T were predicted, respectively.Figure 7a-c = 0.0254, σ = 3.2288 and µ|ϕ*−ϕ| = 1.0429, and the maximum error is about 55.It can be seen that the error fields of these two cases are fairly similar except for their exact values.These results indicate that the prediction error of the bBSNN method is sensitive to the local field gradient.Specifically, comparing these two cases and the second case in Section 3.1, where B1, B2 ∈ [0, 100], we can further conclude that the error of the bBSNN when predicting the steady-state DSSA physical fields is approximately proportional to the field gradient.
Effect of the Field State
In this subsection, we further examine the performance of the bBSNN in predicting the physical fields with various field states, which are governed by the coefficients a, b, c, and d of Equation (13).Here, we carry out two representative numerical experiments: (1) a = 1, b = 1, c = 1, and d = 1, yielding the system parameters α = 1, Pe = 1, and My = 1; (2) a = 1, b = 100, c = 10, and d = 10, yielding the system parameters α = 0.01, Pe = 100, My = 10.The system parameters indicate that the conditions are moderate for the first experiment while they are much more severe for the second one.Specifically, the second experiment will suffer from a much lower diffusion rate alongside much stronger advection and absorption effects, resulting in much higher local field gradients.Additionally, the domain configuration, B-spline function set (kx = ky = 3 and mx = my = 15), and the architecture of the fully connected neural network are kept the same as those used in the second test in Section 3.1, where 100 samples are used for each data set to train the network.The tested conditions for these two numerical experiments are both [B1 B2] T = [80 50] T .Their ground-truth and predicted results, as well as the error fields, are shown in Figure 8.It can be seen that the pattern agreements between the predicted and the ground-truth fields for these two experiments are very good; however, the patterns of the error field are quite different, although the maximum errors are very close to each other.We calculated the error statistics for these: µ = 0.0069, σ = 0.2960, µ|ϕ*−ϕ| = 0.0781 for the first one and µ = 0.5788, σ = 1.0534, µ|ϕ*−ϕ| = 0.9219 for the second one.This indicates that the error ranges for these two experiments with different field states are similar, while the error statistics of the experiment with severe conditions are much larger than those of the one with moderate conditions.
Effect of the Field State
In this subsection, we further examine the performance of the bBSNN in predicting the physical fields with various field states, which are governed by the coefficients a, b, c, and d of Equation (13).Here, we carry out two representative numerical experiments: (1) a = 1, b = 1, c = 1, and d = 1, yielding the system parameters α = 1, Pe = 1, and My = 1; (2) a = 1, b = 100, c = 10, and d = 10, yielding the system parameters α = 0.01, Pe = 100, My = 10.The system parameters indicate that the conditions are moderate for the first experiment while they are much more severe for the second one.Specifically, the second experiment will suffer from a much lower diffusion rate alongside much stronger advection and absorption effects, resulting in much higher local field gradients.Additionally, the domain configuration, Bspline function set (k x = k y = 3 and m x = m y = 15), and the architecture of the fully connected neural network are kept the same as those used in the second test in Section 3.1, where 100 samples are used for each data set to train the network.The tested conditions for these two numerical experiments are both [B 1 B 2 ] T = [80 50] T .Their ground-truth and predicted results, as well as the error fields, are shown in Figure 8.It can be seen that the pattern agreements between the predicted and the ground-truth fields for these two experiments are very good; however, the patterns of the error field are quite different, although the maximum errors are very close to each other.We calculated the error statistics for these: µ = 0.0069, σ = 0.2960, µ |ϕ*−ϕ| = 0.0781 for the first one and µ = 0.5788, σ = 1.0534, µ |ϕ*−ϕ| = 0.9219 for the second one.This indicates that the error ranges for these two experiments with different field states are similar, while the error statistics of the experiment with severe conditions are much larger than those of the one with moderate conditions.
Comparison with Analytical Solutions
The general form of the two-dimensional diffusion equation is as follows: where Φ is the scalar field and D is the diffusion coefficient.If the above-mentioned diffusion equation has an analytical solution, it usually has specific boundary conditions and initial conditions.Therefore, we assume a case where there is an instantaneous point diffusion source in the region, as shown in Figure 3, and the initial conditions are as follows: Here, δ is the Dirichlet function.Under such conditions, there exists an analytical solution for Equation (15), as follows: Set D = 0.2, take the field distribution at t = 1 s as the target, take any point in the region shown in Figure 3 as the diffusion source, and record its coordinates (x0, y0) as the control
Comparison with Analytical Solutions
The general form of the two-dimensional diffusion equation is as follows: where Φ is the scalar field and D is the diffusion coefficient.If the above-mentioned diffusion equation has an analytical solution, it usually has specific boundary conditions and initial conditions.Therefore, we assume a case where there is an instantaneous point diffusion source in the region, as shown in Figure 3, and the initial conditions are as follows: Here, δ is the Dirichlet function.Under such conditions, there exists an analytical solution for Equation (15), as follows: Set D = 0.2, take the field distribution at t = 1 s as the target, take any point in the region shown in Figure 3 as the diffusion source, and record its coordinates (x 0 , y 0 ) as the control vector.Randomly generate 100 sets of control vectors and their corresponding field distributions as training data sets for the bBSNN, where the network structure and parameter settings are the same as in Section 3.1.Finally, the prediction effect is verified with the diffusion source coordinates of (0.7, 0.8) and the verification results are shown in Figure 9.
It can be seen from Figure 9a,b that the bBSNN prediction performance is good, the ground truth and prediction are extremely similar, and the error distribution is very average.The average error µ = 0.0187, the standard deviation σ = 0.0471, and the averaged absolute error µ |ϕ*−ϕ| = 0.0398.This shows that the bBSNN also shows great potential for predicting the diffusion field distribution with analytical solutions.
Entropy 2024, 26, x FOR PEER REVIEW 12 of 16 vector.Randomly generate 100 sets of control vectors and their corresponding field distributions as training data sets for the bBSNN, where the network structure and parameter settings are the same as in Section 3.1.Finally, the prediction effect is verified with the diffusion source coordinates of (0.7, 0.8) and the verification results are shown in Figure 9.It can be seen from Figure 9a,b that the bBSNN prediction performance is good, the ground truth and prediction are extremely similar, and the error distribution is very average.The average error µ = 0.0187, the standard deviation σ =0.0471, and the averaged absolute error µ|ϕ*−ϕ| = 0.0398.This shows that the bBSNN also shows great potential for predicting the diffusion field distribution with analytical solutions.
Method Comparision
To further demonstrate the effectiveness of bBSNN in predicting diffusion-advection-absorption-source physical fields, we compare it with GAN [18] and PINN [25].The network structure of GAN is set up as shown in Figure 10a, mainly comprising a generator and discriminator.The generator is composed of seven transposed convolutional layers, and the discriminator is composed of six convolutional layers and one fully connected layer.The network structure of PINN is shown in Figure 10b, where the loss function includes three parts: partial differential structure loss (PDE loss), boundary value condition loss (BC loss), and real data condition loss (data loss), and the hidden layer is composed of six fully connected layers.The numerical example presented in Section
Method Comparision
To further demonstrate the effectiveness of bBSNN in predicting diffusion-advectionabsorption-source physical fields, we compare it with GAN [18] and PINN [25].The network structure of GAN is set up as shown in Figure 10a, mainly comprising a generator and discriminator.The generator is composed of seven transposed convolutional layers, and the discriminator is composed of six convolutional layers and one fully connected layer.The network structure of PINN is shown in Figure 10b, where the loss function includes three parts: partial differential structure loss (PDE loss), boundary value condition loss (BC loss), and real data condition loss (data loss), and the hidden layer is composed of six fully connected layers.The numerical example presented in Section 3.1 is used to train the GAN and PINN, and the boundary condition of the test group is also set as [B 1 B 2 ] T = [80 50] T .vector.Randomly generate 100 sets of control vectors and their corresponding field distributions as training data sets for the bBSNN, where the network structure and parameter settings are the same as in Section 3.1.Finally, the prediction effect is verified with the diffusion source coordinates of (0.7, 0.8) and the verification results are shown in Figure 9.It can be seen from Figure 9a,b that the bBSNN prediction performance is good, the ground truth and prediction are extremely similar, and the error distribution is very average.The average error µ = 0.0187, the standard deviation σ =0.0471, and the averaged absolute error µ|ϕ*−ϕ| = 0.0398.This shows that the bBSNN also shows great potential for predicting the diffusion field distribution with analytical solutions.
Method Comparision
To further demonstrate the effectiveness of bBSNN in predicting diffusion-advection-absorption-source physical fields, we compare it with GAN [18] and PINN [25].The network structure of GAN is set up as shown in Figure 10a, mainly comprising a generator and discriminator.The generator is composed of seven transposed convolutional layers, and the discriminator is composed of six convolutional layers and one fully connected layer.The network structure of PINN is shown in Figure 10b, where the loss function includes three parts: partial differential structure loss (PDE loss), boundary value condition loss (BC loss), and real data condition loss (data loss), and the hidden layer is composed of six fully connected layers.The numerical example presented in Section All the numerical cases were computed on PYTHON 3.6 with PYTORCH 1.10 and a computer with the configurations of Intel Xeon<EMAIL_ADDRESS>and an RAM of 128 GB.To compare the performances of the bBSNN, GAN, and PINN, we summarized the training (4) The data set used to train the bBSNN can be very small, even for the case with six independent boundary conditions, the bBSNN trained by a data set with only 200 random samples can yield a good predicted field, and the training efficiency is also very high.(5) Compared with GAN and PINN, bBSNN presents obvious advantages in terms of training efficiency and prediction accuracy.
Consequently, we believe the bBSNN method can be used as a good surrogate model to predict advection-diffusion physical fields under relatively complex conditions, and could also be used to accelerate the numerical computation of such physical fields.Although this work is carried out for steady-state advection-diffusion physical fields, the bBSNN method, if combined with a recurrent neural network, may have great potential to deal with transient physical fields, which could be our next step in the near future.
Figure 1 .
Figure 1.A schematic of the architecture of the best-fitting B−spline neural network.
Figure 1 .
Figure 1.A schematic of the architecture of the best-fitting B−spline neural network.
opy 2024 ,
26, x FOR PEER REVIEW configuration of the B-spline function set is kx = ky = 2 and mx = my = weight vector a with a size of 100 × 1.
Figure 2 .
Figure 2. Configurations of the numerical experiments.
Figure 2 .
Figure 2. Configurations of the numerical experiments.
Figure 2 .
Figure 2. Configurations of the numerical experiments.
Figure 3 .
Figure 3. (a) Ground-truth steady-state field obtained by computing the DSSA physical fields using the finite volume method on a grid resolution of 128 × 128; (b) result predicted by the bBSNN with second-order 10 × 10 functions; (c) the error field between the ground truth and the prediction, where µ, σ, and µ|ϕ*−ϕ| are the averaged error, standard deviation, and the averaged absolute error, respectively.
Figure 3 .
Figure 3. (a) Ground-truth steady-state field obtained by computing the DSSA physical fields using the finite volume method on a grid resolution of 128 × 128; (b) result predicted by the bBSNN with second-order 10 × 10 functions; (c) the error field between the ground truth and the prediction, where µ, σ, and µ |ϕ*−ϕ| are the averaged error, standard deviation, and the averaged absolute error, respectively.
Figure 4 .
Figure 4. (a) The result predicted by the bBSNN with the second-order 15 × 15 functions and (b) the error field.
Figure 4 .
Figure 4. (a) The result predicted by the bBSNN with the second-order 15 × 15 functions and (b) the error field.
Figure 5 .
Figure 5. (a) The configuration of the numerical experiment with four independent boundary conditions, (b) the ground-truth field under the tested conditions [B1 B2 B3 B4] T = [80 50 25 60] T , (c) the predicted field, and (d) the error field of steady-state DSSA physical fields.
Figure 5 .
Figure 5. (a) The configuration of the numerical experiment with four independent boundary conditions, (b) the ground-truth field under the tested conditions [B 1 B 2 B 3 B 4 ] T = [80 50 25 60] T , (c) the predicted field, and (d) the error field of steady-state DSSA physical fields.
Figure 5 .Figure 6 .
Figure 5. (a) The configuration of the numerical experiment with four independent boundary conditions, (b) the ground-truth field under the tested conditions [B1 B2 B3 B4] T = [80 50 25 60] T , (c) the predicted field, and (d) the error field of steady-state DSSA physical fields.
Figure 6 .
Figure 6.(a) The configuration of the numerical experiment with six independent boundary conditions, (b) the ground-truth field under the tested conditions [B 1 B 2 B 3 B 4 B 5 B 6 ] T = [80 50 25 60 100 40] T , (c) the predicted field, and (d) the error field of a steady-state DSSA physical fields.
show the ground-truth field, predicted field, and the error field under the tested condition [B 1 B 2 ] T = [240 150] T , respectively, where µ = 0.0103, σ = 1.0766, and µ |ϕ*−ϕ| = 0.3525, and the maximum error is about 18. Figure 7d-f show the corresponding results under the tested conditions [B 1 B 2 ] T = [720 450] T , where µ = 0.0254, σ = 3.2288 and µ |ϕ*−ϕ| = 1.0429, and the maximum error is about 55.It can be seen that the error fields of these two cases are fairly similar except for their exact values.These results indicate that the prediction error of the bBSNN method is sensitive to the local field gradient.Specifically, comparing these two cases and the second case in Section 3.1, where B 1 , B 2 ∈ [0, 100], we can further conclude that the error of the bBSNN when predicting the steady-state DSSA physical fields is approximately proportional to the field gradient.
Figure 9 .
Figure 9. (a) Ground truth obtained by computing the diffusion fields using the analytical solutions; (b) the result predicted by the bBSNN with the third-order 15 × 15 functions; (c) the error field between the ground truth and the prediction, where µ, σ, and µ|ϕ*−ϕ| are the averaged error, standard deviation, and the averaged absolute error, respectively.
Figure 10 .
Figure 10.Schematics of the architectures of the used GAN and PINN.
Figure 9 .
Figure 9. (a) Ground truth obtained by computing the diffusion fields using the analytical solutions; (b) the result predicted by the bBSNN with the third-order 15 × 15 functions; (c) the error field between the ground truth and the prediction, where µ, σ, and µ |ϕ*−ϕ| are the averaged error, standard deviation, and the averaged absolute error, respectively.
Figure 9 .
Figure 9. (a) Ground truth obtained by computing the diffusion fields using the analytical solutions; (b) the result predicted by the bBSNN with the third-order 15 × 15 functions; (c) the error field between the ground truth and the prediction, where µ, σ, and µ|ϕ*−ϕ| are the averaged error, standard deviation, and the averaged absolute error, respectively.
Figure 10 .
Figure 10.Schematics of the architectures of the used GAN and PINN.Figure 10.Schematics of the architectures of the used GAN and PINN.
Figure 10 .
Figure 10.Schematics of the architectures of the used GAN and PINN.Figure 10.Schematics of the architectures of the used GAN and PINN. | 8,904 | 2024-07-01T00:00:00.000 | [
"Physics",
"Environmental Science",
"Computer Science"
] |
Remarks on the Behavior of an Agent-Based Model of Spatial Distribution of Species
: Agent-based models have gained considerable notoriety in ecological modeling as well as in several other fields yearning for the ability to capture the emergent behavior of a complex system in which individuals interact with each other and with their environment. These models are implemented by applying a bottom-up approach, where the entire behavior of the system emerges from the local interaction between their components (agents or individuals). Usually, these interactions between individuals and their enclosing environment are modeled by very simple local rules. From the conceptual point of view, another appealing characteristic of this simulation approach is that it is well aligned with the reality whenever the system is composed of a multitude of individuals (behavioral units) that can be flexibly combined and placed in the environment. Due to their inherent flexibility, and despite of their simplicity, it is necessary to pay attention to the adjustments in their parameters which may result in unforeseen changes on the overall behavior of these models. In this paper we study the behavior of an agent-based model of spatial distribution of species, by analyzing the effects of the model parameters and the implications of the environment variables (that compo se the environment where the species lives) on the models’ output. The presented experiments show that the behavior of the model depends mainly on the conditions of the environment where the species live, and the main parameters presented in life cycle of the species.
Introduction
Agents have their own behaviors and act in order to accomplish a purpose. Agent-based models (ABM) describe individuals (agents) as a unique and autonomous entities that normally interact with each other and their environment [1]. ABM are considered computational models that show how the dynamics of a system have emerged resulting from the interactions of its entities (agents) in a shared environment [2]. ABM have been applied in several areas such as ecology, biology, engineering, climate change, and many other fields [3][4][5]. In the ecological modeling field, agent-based models (also referred as individual-based models) are simulation models that consider agents or individuals as unique and discrete entities with proprieties that change during its life cycle [6]. Normally, four classification criteria are taken into account to distinguish classical models and agent-based models in ecology: (1) the individuals life cycle reflected in the model, (2) the considered resources (like food, www.aetic.theiaer.org habitat quality), (3) the representation of population size, (4) the variability of individuals of the same age that is considered [7]. Agent-based model (ABM) brings to the ecological modeling field the ability to simulate ecological phenomena (such as distribution of species) in more realistic ways [8], allowing management and conservation of species more suitable. Several studies have shown how ABM have helped ecological modelers to create and simulate species distribution models in certain study areas analyzing and comparing its results [9][10][11]. However, uncertainty related to the ABMs outputs and the production of more realistic models output remains a challenge for modelers [3].
This paper presents the results of the analyses performed to study the effects that model parameters have on the behavior of an agent-based modeling approach which has been designed to study spatial distribution of species in actual and foreseen environmental scenarios. With that purpose, a series of simulations are run by modifying colonization scenarios in a simple heterogeneous environment.
The remaining of this paper is organized as follows: In the second section we characterize our model by describing the model purpose and behavior, as well as the life cycle adopted by the model; In the third section we perform three experiments in order to analyze and compare the results of the model in three different scenarios that mimic common real situations; In the fourth section we discuss the results of our study and present the main conclusions.
Characterization of the Model
Agents can represent several entities that have behaviors and react according to their states and their environment in different granularities (different levels of observation of the environment) [12]. A single agent from the outside of a system could be used to define a set of agents. For example, when simulating a complex system composed with another small systems, each small system can be defined as an agent, but internally representing a set of agents.
This study considers an agent as a colony of individuals (instead of one particular individual or species), that depends on the suitability of the environment to establish itself. Notice that the purpose of this model is to analyze the spatial distribution of species in a heterogeneous habitat. Suitable environment can be seen as places (habitat units) with appropriate environmental conditions and enough resources for the species to survive and reproduce. The environment consists of habitat units or cells characterized by their location (x,y) coordinates, the quantity of species in that location, and a suitability value of the cell. The suitability value of each cell has values between zero and one. Cells with values close to one are more suitable for the specie to survive and reproduce. An artificial environment was set in a grid, with a dimension of 200 × 200. From the practical point of view, characteristics of agents can be defined as follows [13][14]: -An agent is an identifiable, discrete, or modular individual with a set of characteristics and rules that drive his behavior and decision-making ability. Since we are interested in studying the spatial distribution of species, we conceptualized the agent as a square area in a geographical map. Each area has a number (may be zero) of individuals of a given species. Each species has a number of attributes such as birth rate, death rate and spread rate.
-An agent is autonomous and self-directed. An agent can function independently in your environment, and interacting with other agents, for a limited range of situations of its interest. In our model, the agents interact with their environment (habitat units or grid cell) in a way that a percentage of the species population is transferred to their neighboring cells (agents) in each iteration.
-An agent is social, interacting with other agents. Agents have interaction protocol as well as communication between agents. Agents have the ability to recognize and distinguish the particularities of other agents. In our model each agent (cell) has access to the suitability of the neighboring cells as well as to what amount of the cells are filled with the species population. Each agent exchanges material with its neighbors.
-An agent is situated in an external environment with which it interacts, in addition to other agents. In this work the agent's interaction with the environment is closely coupled with the maps of environmental variables that determine the suitability of the external environment.
www.aetic.theiaer.org -An agent can be directed by objectives, having goals to achieve in relation to his behavior. This allows an agent to compare their results with the goals they want to achieve. In our model the main objective of the agent is mostly encoded in the spread rate of the species: a greater value means that the species tries to colonize the entire environment whereas a smaller one means that the species wants to establish colonies and settle in place.
-An agent is flexible in the sense of having the ability to learn and adapt his behaviors based on experience (this requires some kind of memory). On the other hand, an agent may have rules that modify its behavior. In our model the rules of behavior depend entirely on the values of the environmental variables. Those determine the suitability of the surrounding environment hence species tend to survive and reproduce more widely in locations (cells) considered suitable. In the less suitable locations, the content of the cell will be depleted. During the implementation we followed the ODD (Overview, Design concepts, Detail) protocol [15][16][17] for the description of the model. Some of its main components are briefly presented in the following.
Process Overview
The goal of the species is to move and establish itself (survive and reproduce) in more suitable places, where the suitability values are closer to one. In this process three main parameters are taken into account: birth rate, death rate and the spread rate. These three parameters are independent of each other. However, the composite effect of these parameters on the model's output is observed. Algorithm 1 emphasizes the species' life cycle and how it was implemented in this work.
Underlying the model, it is assumed that there is a description of the suitability of the environment, its determination (a modeler's task) is outside the scope of this article. After setting the parameters of the model, the environment is initialized with the map of suitability. Similarly, the population of species is initialized. In this specific case, a random number of species is set in a randomly chosen cell. In each iteration (tick) is applied a birth and death rates for each quantity of species in each cell. The birth and death of a species are affected by the suitability of the cell. Therefore, the quantity of species in suitable places (higher suitability) grows in greater quantities compared to the places with low suitability having the same birth rate. Likewise, the death rate has a higher incidence in less suitable places. After that, the species tries to expand and colonize the neighboring cells. Each neighbor cell will receive a percentage of the quantity of species (determined by the spread rate).
Design Concepts
The species life cycle consists of three main steps: (1) at each time (tick) the species reproduce according to the birth rate and the conditions of its cell (suit-ability value); (2) an amount of species dies according to the death rate and the suitability of the cell (low suitability, more death); and (3) neighboring cells receive each one an amount of individuals according to the spread rate, see The model uses as input data, environmental variables (maps) that influence the behavior of the species. These environmental values are disposed in a grid of cells, that contains in each cell a value normalized to the unit interval. The suitability map will be composed by these environmental variables, see Figure 2.
Selected Experiments and Results
In the reported experiments only one cell in the environment is initialized (species' origin), with a random quantity of species, the remaining cells are depleted of individuals. The location of the origin is randomly chosen between the cells with suitability values close to one (places where the species have more probability to survive and reproduce). The model considers 1000 as the maximum relative quantity of species in each cell.
Before drawing any conclusions regarding the model's behavior, several parameters combination will be tested, and its results will be compared. Combinations are made between birth rate, death rate and spread rate. For the birth and death rates the following values were chosen: 0.1, 0.3, 0.5, 0.7 and 0.9, and for the spread rate: 0.03, 0.05, 0.07, 0.09.
Three different experiments are reported. The first experiment presented in [18] shows the effects of the main parameters of the model in a setup where only one environmental variable is considered as the determinant for the species' suitability. The environment is assumed to be smoothly changing from an area of high suitability (a level next to 1) towards a hostile area (a suitability level next to 0). Departing from a small population in a suitable area, the propagation in the environment is compared to the suitability map after an equilibrium states reached.
The next experiment introduces a second environmental variable as a way to mimic the presence of migratory routes or otherwise corridors which are propitious to the development of a given species.
The third setup shows the combined effect of two environmental variables, each one with a graduation from suitability to non-suitability in different directions. It worthwhile to mention that these environmental variables are artificial, and were created only for experimental purposes, however their distribution, at least at a local level, are not far away from some situations that occur in the real environments. Due to the high number of simulated scenarios, only a selected set of results is presented.
One fundamental aspect related to these experiments was the exact time to stop each simulation (stop criteria). We run several simulations in order to find the point where the system reached the stabilization -no noticeable change be-tween two consecutive states of the model. We analyzed the differences between two sequential states of the system (time t and time t-1). In our model, the difference between one state of the system and another, lies in the quantity of species presented in each cell. Thus, we calculate the sum of the Cell-by-Cell differences in these sequential states of the www.aetic.theiaer.org system. The simulation was interrupted when this difference was maintained below a small threshold for several ticks. Figure 2 depicts an environment that is gradually changing from an area of high suitability (a level next to 1 in the bottom) towards a non-suitable area (suitability level next to 0 over the top). After randomly placing the origin of species in a suitable environment, the simulation starts with a random quantity of individuals and then the model evolves according to their life cycle. In the following we present the results off our simulations scenarios varying the spread rate for values equals to (A) 0.03, (B) 0.05, (C) 0.07 and (D) 0.09 and keeping fixed the birth rate (0.7) and the death rate (0.1). These values were used not only for illustrative purposes, but also to analyze the effect of the spread rate on the results of the model. Figure 3 shows the output of the model for different spreading ratios after reaching stability. As can be seen in Figure 3, species tend to establish themselves in locations where the environmental conditions are suitable to them in order to survive and reproduce. Excluding the scenarios where the species cannot survive neither re-produce, model outputs often follow the same pattern, although the capacity of species to expand varies according to the three parameters (birth rate, death rate and spread rate). In a first approach, doing a visual comparison between these results ( Figure 3) and the suitability map (Figure 2), it is possible to verify similarities between them. Model www.aetic.theiaer.org output follows the transition (gradation) presented in the environment map. However, a visual comparison is not enough to draw conclusions related to the model's behavior. Often, species did not survive when the values of birth rate and death rate are equal, and in scenarios where the value of birth rate is less than death rate. In order to analyze the output of the model in these different parameters' combination, Figure 4 depicts the comparison of the model's output in all scenarios with the suitability map (see the environment map in Figure 2). We converted the model output to the same scale (0, 1) of the environment map to facilitate comparison. The overall comparison technique adapted from [19] was performed for each model output. In Figure 4 it is possible to observe the scenarios with the lowest differences. The combination (birth rate = 0.5, death rate = 0.1, spread rate = 0.09) presented the lowest difference, followed by the combination (0.9, 0.3, 0.09), and the combination (0.5, 0.1, 0.07) in the same order of the rates. According to Figure 4 for values of death rate greater or equal than 50% even with a birth rate of90% the chances of the species to survive are remote. On the other hand, at a birth rate less than 20% species have few chances to survive and expand. In this regard we can say that for higher spread rates -subsumed to the hypothesis on the suitability of the species -the model can achieve a more consentaneous filling of the environment. Figure 5 shows the number of iterations necessary to reach a stability state for four different spread rates (everything else being equal). Observing the Figure 5 we notice that at the beginning of the simulation the difference between two sequential states increases very quickly. We verify the increase of the difference until a certain number of iterations and then these differences start to decrease to the point that it stabilizes. Another interesting finding is that in our model a higher spread rate promotes a quicker stability.
Conceptualization of a Suitability Corridor
For this experiment we considered the synthetic environmental variable presented in the previous section, see Figure 6-A, and we introduced a second environmental map, Figure 6-B, representing a suitability corridor (we can think of it as a migratory route, for instance). The combined suitability cell values were obtained by summing up the values of the two environmental variables and sequent normalization to the unit interval, see Figure 6-C. According to Figure 7 species tend to colonize all the environment. Unlike the previous maps (Figure 3), where there were no conditions for the species to expand on the top, in this particular case, there are a set of suitable cells that allows species do expand. Another factor that influences the expansion of the species to the top of the map is the suitability corridor (the vertical line). This www.aetic.theiaer.org corridor allows species to reach to the less suitable locations. The difference between birth rate and death rate (0.7 and 0.1) also has a significant impact in the colonization effect, and we can observe a larger filling of the map when the spread rate is lower, see Figure 7-A. Comparing Figure 7 with the suitability map (Figure 6 C) we can observe the same pattern between them. The transition (gradation) and the vertical line presented in suitability map are also observed in the model results. Figure 8 shows the Cell-by-Cell comparison between the model output (smooth gradation + suitability corridor) and the environment map. These model results were converted in the scale (0, 1) in order to facilitate the comparison with the suitability map ( Figure 6). Doing the comparison between each simulation results (output) and the suitability map (Figure 8), we can verify that the combination (death rate = 0.1, birth rate = 0.5, spread rate = 0.09), presented the lowest difference, followed by the combination (0.1, 0.3, 0.03), and the combination (0.1, 0.5, 0.07) in the same order of the rates. Observing Figure 8, at death rate greater or equal than 70% species do not survive, even with a birth rate greater or equal than 90%. At the birth rate less than 20% the chances for the species to survive are remote. Contrary to the first experiments, the three best results were obtained with different spread rates namely: 0.09, 0.03 and0.07.
Different of the first experiment, the best results were obtained in different spread rate (0.09, 0.03, 0.07). Figure 9 shows the number of iterations necessary to reach a stability state for four different spread rates. As in the first experiment, we observe in Figure 9 that the differences between two sequential states start to grow quickly, until reach its peak. These differences start to decrease until the point of stabilization. Comparing with the previous experiment, this experiment takes longer to converge due to a greater heterogeneity of the environment resulting from the combination of two environment variables. These results show that the simulation with spread rate equal to 0.03(A) takes much longer to converge; It allows a larger filling of the map when the combination of birth rate and death rate is suitable for the species (for example: birth rate equal to 0.7 and death rate equal to 0.1).
Compound Effect of two Environmental Variables
In this experiment we consider the environmental variable presented in the first experiment, see Figure 10-A, and we introduced a second variable by rotating 90ºthis map resulting in a similar gradation but with different orientation, see Figure 10-B. The suitability map was obtained by combining these two environmental variables, see Figure 10-C. In Figure 11 species occupy the most suitable places for them to stabilize and reproduce. Species tend to disappear in locations where the suitability values are low. We can observe in each Figures (A, B, C and D) the gradation pattern presented in suitability map. The impact of the spread rate is highly noticeable. In the resulting suitability map (Figure 10-C) the least suitable places for the species to survive are located at top left. Therefore, species do not reach these places. As we saw in previous experiments, species colonize in more abundance for the scenarios where the spread rate is lower. www.aetic.theiaer.org Figure 12 shows the Cell-by-Cell comparison between the model output (compound effect of two environmental variables) and the environment map. The simulations results allow us to verify that for this experiment, the combinations (death rate = 0.1, birth rate = 0.5, spread rate = 0.09) presented the lowest difference with respect to the suitability map, followed by the combination (0.1, 0.5,07), and the combination (0.3, 0.9, 0.03) in the same order of rates. In Figure 12 we can verify that there are no chances for the species to live neither reproduce at birth rate equal than death rate. The lowest differences can be observed for the four spread rates: 0.03, 0.05, 0.07 and 0.09. Figure 13 shows the number of iterations necessary to reach a stability state for four different spread rates. As the simulation proceeds, the differences between two sequential states gradually increase until reach its pick. Then, the differences start to decrease until reach a point in the simulation where the differences remain in a very low range, that corresponds to the stabilization point, see Figure 13. As we observed in the previous experiments, the lower the spread rate, the longer the simulation will take to converge. www.aetic.theiaer.org
Concluding Remarks
In this study we analyzed the effects of an agent-based model's parameters in the spatial distribution of species, by implementing an ABM able to deal with a heterogeneous environment represented by a combination of (environmental)variables of interest. We performed a parametric study in order to find the parameters combination that fits the purpose of our model. The results showed that in addition to the environmental conditions, the combination of the model parameters has a significant impact on its results. Our study is limited in the sense that the environment of our model was not real, however the initial conditions of the presented experiments are well aligned with a number of real local environmental constraints that we intend to explore in future studies for the prediction of the geographical distribution of biological species (both flora and fauna) with economical interest in a setup of environmental uncertainty. Model behavior and model outputs are deeply coupled with the chosen parameters and the selected environment. The parameters of the reported model are completely independent of each other in the sense that any adjustment made to any parameter does not affect the value of the remaining parameters. However, any small change in a subset of parameters can result in drastic changes on the overall behavior of the model; the same happens if we change the environmental conditions.
In order to better understand the model's behavior, it is necessary to perform a thorough parameter analysis, and verify which are the environmental variables that compose the environment and its values. It is a well-known fact that com-prehensive analysis of the output to input variability is an important step during the development of an agent-based model [20]. Model parameterization allows the model to produce more realistic results [21]. Parameters analyzed in our study have each one its effect in the model. However, we cannot consider only these parameters individually but instead we must consider the effect that the combination of the different parameters has in the model's output. Discarding the effect of either one of the parameters will jeopardize the ability to explain the output of the model.
One of the aspects to take into account is the distinction between birth rate and death rate. In order to observe reproduction, it is important to have a significant distinction between birth rate and death rate, fixing the values of birth rate al-ways greater than death rate. This is the only case that the species can survive and reproduce. However, without spread rate there is no way for the species to expand (colonize) to other cells in the environment. Once chosen the birth and death rates, the spread rate determines if the species have propensity to consolidate the occupied places or if instead, they have a greater predisposition to colonize new territories.
The choice of parameters will always constraint the desired results. When using a model as the one described in this work one must analyze several scenarios in order to find the parameters' combination that answers the purpose of the reference model. | 5,775 | 2021-04-01T00:00:00.000 | [
"Computer Science",
"Environmental Science",
"Biology"
] |
Trap Exploration in Amorphous Boron-Doped ZnO Films
This paper addresses the trap exploration in amorphous boron-doped ZnO (ZnO:B) films using an asymmetric structure of metal-oxide-metal. In this work, the structure of Ni/ZnO:B/TaN is adopted and the ZnO:B film is deposited by RF magnetron sputtering. The as-deposited ZnO:B film is amorphous and becomes polycrystalline when annealing temperature is above 500 °C. According to the analysis of conduction mechanism in the as-deposited ZnO:B devices, Ohmic conduction is obtained at positive bias voltage because of the Ohmic contact at the TaN/ZnO:B interface. Meanwhile, hopping conduction is obtained at negative bias voltage due to the defective traps in ZnO:B in which the trap energy level is lower than the energy barrier at the Ni/ZnO:B interface. In the hopping conduction, the temperature dependence of I-V characteristics reveals that the higher the temperature, the lower the current. This suggests that no single-level traps, but only multiple-level traps, exist in the amorphous ZnO:B films. Accordingly, the trap energy levels (0.46–0.64 eV) and trap spacing (1.1 nm) in these multiple-level traps are extracted.
Introduction
Zinc oxide (ZnO) is an attractive material for the applications of semiconductor devices [1]. It has a direct and wide band gap in which the value is 3.37 eV at room temperature and increases to be about 3.44 eV at 4.2 K. This property makes ZnO transparent in visible light and enables optoelectronic applications in blue and ultraviolet region, such as light emitting devices, laser diodes and photosensors [2]. Additionally, the large free-exciton binding energy of 60 meV in ZnO, compared with 25 meV in GaN, is of interest to achieve excitonic stimulated emission for the realization of low-threshold lasers at or even above room temperature [3,4]. One interesting feature of ZnO is the ability of bandgap engineering by its alloying with CdO (E g = 2.3 eV) or MgO (E g " 7.7 eV). Namely, bandgap energy of 2.99 eV (Cd y Zn 1´y O, y = 0.07) can be achieved by doping with Cd 2+ , while Mg 2+ increases the bandgap energy to 3.9 eV (Mg x Zn 1´x O, x = 0.33) [5][6][7]. ZnO can be used for phosphor applications because of a strong luminescence in the green-white region of the spectrum. The n-type conductivity of ZnO enables the applications in vacuum fluorescent displays and field emission displays [1,8].
In general, ZnO with a wurtzite structure is an unintentional n-type semiconductor because of the deviation from stoichiometry. The background free electrons basically result from the shallow donor levels related to the presence of native defects such as oxygen vacancies and/or zinc interstitials [2]. To achieve higher n-type conductivity of ZnO films, intentional n-type doping can be implemented by the substitution of Group III elements (B, Al, Ga, and In) on the Zn sites or Group VII elements (F and Cl) on the O sites [2]. After doping of Group III elements, ZnO is favorable for replacing tin oxide (SnO 2 ) or indium tin oxide (ITO) as a transparent conducting electrode in liquid crystal displays or solar cell devices due to the advantages of abundant raw material, low synthetic temperature, available large single crystal, amenable wet chemical etching, simple manufacturing process, competitive optical and electrical properties, nontoxic and stable in plasma, and radiation hardness [1,9]. Note that the Group III elements used for doping ZnO to enhance conductivity are substituting Zn atoms in the host lattice. Although there are some literature studies advocating that suggestion, there is no conclusive evidence yet. The Group III elements may exist as interstitials instead of substituting the Zn atoms in the host lattice [1]. Recently, ZnO-based diluted magnetic semiconductors showed ferromagnetism in ZnO by doping with boron or a transition metal, which is promising to achieve practical Curie temperature for future spintronic devices [3,4]. In addition, transparent boron-doped ZnO (ZnO:B) films sandwiched between two tungsten electrodes showed memristive behavior, which is attractive to overcome the physical limitations of traditional Flash memory for the next generation nonvolatile memory applications [10].
Because the defect issue is generally critical in the ZnO:B-based devices, the carrier trapping characteristics in ZnO:B films is important. For thermal budget reduction, the as-deposited amorphous ZnO:B films grown by RF magnetron sputtering are concerned. In this work, the trap exploration in the amorphous boron-doped ZnO films was studied. An asymmetric metal-oxide-metal (MOM) structure with ZnO:B was fabricated and investigated for the studies of metal/oxide interface property and defect trap nature in ZnO:B films. The structure of Ni/ZnO:B/TaN was used. Based on the analysis of current-voltage (I-V) characteristics, Ohmic conduction is obtained at positive bias voltage due to the low work-function electrode TaN which forms Ohmic contact at the TaN/ZnO:B interface. Whereas, hopping conduction dominates at negative bias voltage due to the defective traps in ZnO:B in which the trap energy level is lower than the energy barrier at the Ni/ZnO:B interface. Lower energy obstacle leads to higher carrier transport, and therefore dominates the conduction current through the oxide. In the hopping conduction, the temperature dependent I-V characteristics reveal that the higher temperature, the lower current. This implies that the current decrease at higher temperature results from the multiple-level traps existed in the amorphous ZnO:B films.
Results and Discussion
To examine the microstructure of boron-doped ZnO (ZnO:B) films, the X-ray diffraction (XRD) patterns were measured at room temperature using a powder diffractometer (Cu target, 45 kV, 40 mA, scanning speed = 3˝/min, scanning ranged from 2θ = 20˝to 2θ = 80˝, (PANalytical, Almelo, The Netherlands). According to the experimental results, carbon contamination level was extremely low in our films deposited from the ZnO:B source material and was below the detection limit of the Auger electron spectroscopy system (ULVAC-PHI , PHI 700, Kanagawa, Japan). Figure 1 depicts the indexed XRD spectra for as-deposited and annealed ZnO:B films. According to the XRD spectra, the intensity related to (002) and (103) planes for the as-deposited and 400˝C-annealed samples is too weak to deduce the polycrystalline phase in ZnO: B film. Thus, this figure reveals that the as-deposited ZnO:B film is amorphous and becomes polycrystalline when annealing temperature is above 500˝C. The polycrystalline ZnO:B film has a strong (002) peak and a weak (103) peak, similar to the results in a previous work [9]. These X-ray peaks result from the hexagonal wurtzite structure of ZnO with preferred orientation along c-axis. The mean grain size of both 500˝C-and 600˝C-annealed ZnO:B films estimated by Scherrer formula according to the full-width at half-maximum (FWHM) of (002) peak in Figure 1 is around 11 nm [9]. Herein, the trap investigation and conduction mechanism concentrated primarily on as-deposited amorphous ZnO:B films. Aside from the peaks of ZnO:B in Figure 1
Results and Discussion
To examine the microstructure of boron-doped ZnO (ZnO:B) films, the X-ray diffraction (XRD) patterns were measured at room temperature using a powder diffractometer (Cu target, 45 kV, 40 mA, scanning speed = 3°/min, scanning ranged from 2 = 20° to 2 = 80°, (PANalytical, Almelo, The Netherlands). According to the experimental results, carbon contamination level was extremely low in our films deposited from the ZnO:B source material and was below the detection limit of the Auger electron spectroscopy system (ULVAC-PHI , PHI 700, Kanagawa, Japan). Figure 1 depicts the indexed XRD spectra for as-deposited and annealed ZnO:B films. According to the XRD spectra, the intensity related to (002) and (103) planes for the as-deposited and 400 °C-annealed samples is too weak to deduce the polycrystalline phase in ZnO: B film. Thus, this figure reveals that the as-deposited ZnO:B film is amorphous and becomes polycrystalline when annealing temperature is above 500 °C. The polycrystalline ZnO:B film has a strong (002) peak and a weak (103) peak, similar to the results in a previous work [9]. These x-ray peaks result from the hexagonal wurtzite structure of ZnO with preferred orientation along c-axis. The mean grain size of both 500 °C-and 600 °C-annealed ZnO:B films estimated by Scherrer formula according to the full-width at half-maximum (FWHM) of (002) peak in Figure 1 is around 11 nm [9]. Herein, the trap investigation and conduction mechanism concentrated primarily on as-deposited amorphous ZnO:B films. Aside from the peaks of ZnO:B in Figure 1 Since metal-oxide interface plays an important role in current conduction in a metal-oxide-metal structure, different metal electrodes are adopted to investigate the carrier transportation in this work. Nickel (Ni) is a high work-function metal and its value is 5.15 eV [11]. Whereas, tantalum nitride (TaN) is a low work-function electrode and its value is 4.15 eV [12]. Hence, the asymmetric structure of Since metal-oxide interface plays an important role in current conduction in a metal-oxide-metal structure, different metal electrodes are adopted to investigate the carrier transportation in this work. Nickel (Ni) is a high work-function metal and its value is 5.15 eV [11]. Whereas, tantalum nitride (TaN) is a low work-function electrode and its value is 4.15 eV [12]. Hence, the asymmetric structure of Ni/ZnO:B/TaN capacitors were fabricated in this work. Because ZnO:B is an n-type semiconductor and its electron affinity is 4.1-4.2 eV [13,14], Ohmic TaN/ZnO:B contact can be obtained due to low work-function of TaN. On the contrary, the Ni/ZnO:B interface yields an energy barrier due to high work-function of Ni [15]. Figure 2 shows the temperature dependence of I-V characteristics in Ni/ZnO:B/TaN capacitors. Under positive bias, linear I-V behaviors are observed, as indicated in the inset of Figure 2. The current conduction yields the Ohmic nature as a consequence of Ohmic contact at the TaN/ZnO:B interface. Meanwhile, under negative bias the non-linear I-V behaviors are obtained because of the electron energy barrier at the Ni/ZnO:B interface. In this case, there are a number of conduction mechanisms that may all contribute to the conduction current through the ZnO:B film at the same time. To distinguish these conduction mechanisms, measuring the temperature dependence on conduction current may afford us some valuable information to know the constitution of the conduction currents because several conduction mechanisms depend on the temperature in different ways [16]. Generally, a certain conduction mechanism may dominate the conduction current and the dominant conduction mechanism can be usually discovered after some typical analyses. In this work, the temperature dependence on I-V characteristics in Ni/ZnO:B/TaN structure is shown in Figure 2. According to the I-V characteristics at negative bias in Figure 2, the current level is lower as the temperature is higher. This nature is quite different from the normal I-V characteristics in oxide films in which the higher the temperature is, the larger the current has. Furthermore, the breakdown voltage of ZnO:B is around 4 V, namely, the breakdown field of ZnO:B is around 1.5 MV/cm. To investigate the current conduction mechanism in Ni/ZnO:B/TaN structure, oxide current simulations and typical plots of characteristic dependence on current density (J) and electric field (E) can be adopted [16]. The simulation results exhibit that the experimental data measured at negative bias match the theory of hopping conduction very well when the electric field is larger than about 0.2 MV/cm, as shown in Figure 3. Hence, the dominant conduction mechanism in Ni/ZnO:B/TaN structure at negative bias is the hopping conduction. The hopping conduction can be expressed as [16]: where q is the electronic charge; a is the hopping distance (i.e., mean trap spacing); n is the electron concentration in the conduction band; v is the frequency of thermal vibration of electrons at trap sites; E is the applied electric field; T is the absolute temperature; k is the Boltzmann's constant; and Φ t is the energy level from the trap states to the bottom of the conduction band (E C ). In this work, the electron concentration is about 10 18 cm´3 in the ZnO:B films according to the Hall measurement. Based on Equation (1), the mean trap spacing can be determined by the slope of the linear part of log(J) versus E at each temperature. Hence, the trap spacing in ZnO:B films is extracted to be 1.1˘0.1 nm according to Figure 3. In hopping conduction, the electron energy is lower than the maximum energy of the potential barrier between two trapping sites, as shown in the inset of Figure 4. Therefore, the electron transport in ZnO:B films results from the tunneling effect in oxide films. Based on Equation (1), the hopping conduction current depends mainly on both the field energy (Φ E ) induced from qaE and the trap energy level Φ t in oxide films. If Φ E > Φ t , then the hopping conduction current decreases with increasing temperature. On the contrary, the hopping conduction current increases with increasing temperature when Φ E < Φ t . Taking the conditions of the largest electric field (i.e., breakdown field 1.5 MV/cm) and average hopping distance (1.1 nm), the maximum field energy Φ E max is around 0.165 eV. This indicates that the hopping conduction current will increases with increasing temperature when Φ t is larger than 0.165 eV and the other parameters are fixed. However, the device current exponentially decreases with temperature in this work, as shown in Figure 3. Thus we consider that the trap energy level in ZnO:B films is not a constant but increases with temperature. Before the simulation work for determining the trap energy levels in ZnO:B films, the electron concentration and the frequency of thermal vibration of electrons at trap sites need to be resolved. In this work, the electron concentration is about 10 18 cm´3 in the ZnO:B films according to the Hall measurement. Moreover, the frequency of thermal vibration of electrons at trap sites can be qualitatively represented by the frequency of optical phonons in the solid [17]. The phonon notion is generally associated with a super-lattice structure (polycrystalline materials). Although the range order in amorphous solids is smaller than that in polycrystalline ones, the phonon concept is also used in this work. The literature on the energy of optical phonon in ZnO is approximately within the range of 300-600 cm´1 [5]. Namely, the frequency of optical phonons in ZnO is around 1-2ˆ10 13 Hz. In this work, the frequency of 1ˆ10 13 Hz was assumed. The deviation induced from the uncertainty of optical phonon frequency in ZnO:B films is smaller than 0.02 eV. This value in the determination of trap energy level can be neglected in this work. Hence, the temperature dependence of trap energy levels in ZnO:B films were obtained, as shown in Figure 4. According to the simulation work, the trap energy level increases with temperature. This suggests that there are traps with deeper energy level incited by the elevated temperature. Accordingly, these incited traps lead to the exponential decrease in current at higher temperatures. This phenomenon is also observed in W/ZnO:B/W and Pt/MgO/Pt structures in which the resistive switching behavior was revealed [9,18]. ZnO films are 0.46, 0.5, and 0.56 eV, respectively [19]. This result suggests that the defects of Zn i , Zn i + , and V Zn 2´m ay play the important roles in the current conduction in ZnO:B films. These defects may be introduced during the ZnO:B deposition process. Consequently, not single-level but multiple-level traps were found in the amorphous ZnO:B films. Note that defects, such as interstitials and vacancies, are imperfections in the crystal lattice. Interstitials signify extra atoms occupying interstices in the lattice. Meanwhile, vacancies signify missing atoms at regular lattice positions. In addition, current flow through the oxides will be raised at higher temperature for oxide films with the single-level traps. Based on our previous study [20], the single-level traps with 0.46 eV exist in the non-doped ZnO films in which the current density in a high resistance state increases with increasing temperature. However, Figure 3 shows that the current density decreases with increasing temperature in the boron-doped ZnO films due to the multiple-level traps. Because these two device fabrication processes are similar except the doping condition, we consider that the multiple-level traps come from the boron doping process in this work and are the origin of current reduction at higher temperatures in Ni/ZnO:B/TaN structure under negative voltage bias. According to the study of current conduction mechanisms in this work, we revealed that the trap energy levels in ZnO:B films are around in the range of 0.46-0.64 eV below the conduction band edge (E C ). To explore the chemical defects in the amorphous ZnO:B films, the X-ray photoelectron spectroscopy (XPS) spectra were used to examine the chemical states of zinc and oxygen. Thermo Fisher Sceientific Theta Probe XPS system (with Al K α source, Waltham, MA, USA) was used to collect the photoelectron spectra of the samples with a take-off angle of 90˝relative to the sample surface. The vacuum pressure was below 10´9 torr during spectra data acquisition and using high resolution scans (0.02%"2%). In order to obtain meaningful binding energies, charge referencing was performed for the XPS measurements. In the beginning of the XPS measurements, the binding energy of the photoelectron was calibrated by assigning 284.8 eV to the C1s peak corresponding to adventitious carbon. For detecting the binding energies in the middle part of ZnO:B films, XPS spectra were collected after sputter-cleaning with 1-keV Ar + ions for 1.2 min. Figure 5 shows the XPS spectrum of B 1s in ZnO:B film. The binding energy peak located at approximately 192 eV is associated with the B 3+ in B 2 O 3 structure, which provides the evidence for the incorporation of boron into the zinc oxide [21]. Figure 6 shows the O 1s XPS spectrum of the amorphous ZnO:B films. The profile of the O 1s spectrum was fitted using the Lorentzian-Gaussian functions. The binding energy peaks located at 529.7 [22] and 531.1 eV are attributed to lattice oxygen (ZnO) and nonlattice oxygen (oxygen vacancy) ions, respectively. Figure 7 shows the Zn 2p double spectra of the amorphous ZnO:B films. The binding energies of Zn 2p 1/2 and 2p 3/2 for Zn 2+ correspond to the peaks at 1044.7 and 1021.5 eV, respectively [23,24]. Meanwhile, the peaks of binding energies of Zn 2p 1/2 and 2p 3/2 for nonlattice zinc ions are located at 1043.9 and 1021.1 eV, respectively. According to Figures 6 and 7 the peak intensity (Zn 2p 1/2 and 2p 3/2 ) of nonlattice zinc ions is much more obvious than that (O 1s) of nonlattice oxygen ions. Namely, the peak area ratio of nonlattice to lattice zinc ions is much higher than that of nonlattice to lattice oxygen ions. This implies that the number of zinc deficient states is much larger than that of oxygen deficient states. As a consequence, the defects related to nonlattice zinc ions play the more important role in the current conduction in the amorphous ZnO:B films. A literature report pointed out some defect energy levels regarding nonlattice zinc and oxygen ions [19]. The defects related to nonlattice zinc ions include neutral zinc interstitials (Zn i ), singly charged zinc interstitials (Zn i + ), and doubly charged zinc vacancies
Experimental Section
In this work, the metal-oxide-metal (MOM) capacitors were fabricated. The boron-doped ZnO (ZnO:B) thin films were deposited on TaN/SiO 2 /Si substrates by radio frequency (RF) magnetron sputtering in argon ambient at room temperature using a ceramic ZnO:B target. The boron doping concentration of ZnO:B films was about 0.8 wt %. The flow rate of argon was 20 standard cubic centimeters per minute (sccm). The working pressure during deposition was 4 mTorr. The RF power was 60 W. The deposited ZnO:B film thickness was 27 nm. To investigate the crystal properties of ZnO:B films, rapid thermal annealing (RTA) was performed in N 2 for 30 s at temperatures ranging from 400˝C to 600˝C. To achieve the MOM structure, a nickel (Ni) top electrode was deposited by thermal evaporation with a round area of 3.14ˆ10´4 cm 2 patterned by the metal shadow mask. The electrical characteristics of the fabricated Ni/ZnO:B/TaN capacitors were measured by an semiconductor parameter analyzer (Agilent 4156 C, Hachioji, Japan). During the measurement, the voltage bias was applied on the top electrode (Ni) with the bottom electrode (TaN) grounded. All the measurements were performed under dark condition.
Conclusions
In conclusion, the trap properties in amorphous boron-doped ZnO films were studied using the structure of Ni/ZnO:B/TaN. The Ohmic conduction dominates at positive bias voltage; meanwhile, the hopping conduction dominates at negative bias voltage. Based on the analysis of current conduction mechanism, we revealed that the defects with multiple trap levels exist in the amorphous ZnO:B films. These defects are related to the nonlattice zinc ions and play the key role in the current conduction. Simulation results show that the defect trap spacing between the defect sites is around 1.1 nm. Furthermore, the defect trap level increases with increasing temperature. | 4,816 | 2015-08-31T00:00:00.000 | [
"Materials Science",
"Physics",
"Engineering"
] |
Testing of WS 2 Nanoparticles Functionalized by a Humin-Like Shell as Lubricant Additives
Nanoparticles of transition metal dichalcogenides (TMDC) have been known to reduce friction and wear when added to oil-type liquid lubricants. Aggregation limits the ability of the nanoparticles to penetrate into the interface between the two rubbing surfaces—an important factor in friction reduction mechanisms. Doping has been successfully used to reduce agglomeration, but it must be done in the production process of the nanoparticles. The use of surface-functionalized nanoparticles is less common than doping. Nonetheless, it has the potential to reduce agglomeration and thereby improve the reduction of friction and wear. In this study, we present the results of preliminary tribological ball-on-flat tests performed with WS2 nanoparticles functionalized by a humin-like conformal shell, as additives to polyalphaolefin-4 (PAO-4) oil. We tested WS2 inorganic nanotubes (INTs) and two grades of inorganic fullerene-like nanoparticles (IFs). The shell/coating was found to improve friction reduction for IFs but not for INTs through better dispersion in the oil. The thicker the coating on the IFs, the less agglomerated they were. Coated industrial-grade IFs were found, by far, to be the best additive for friction reduction. We suggest the combination between reduced agglomeration and poor crystallinity as the reason for this result.
Introduction
In the last few decades, nanoparticles (NPs) have been widely studied and used to reduce both friction and wear.Inorganic NPs with lamellar anisotropic structure are good candidates for lubrication.They provide high compression strength and low shear strength.Examples of such materials are chlorides, borates and oxides of transition metals.Within this group, the transition metals dichalcogenides (TMDC) excel at tribological applications.TMDCs are sulfides, selenides and tellurides of the transition metals tungsten, molybdenum, tantalum, titanium, and niobium.Molybdenum disulfide (MoS 2 ) and tungsten disulfide (WS 2 ) are by far the most studied in tribology, somewhat in their nanotube (INT) form, but much more extensively in their fullerene-like particle (IF) form.These polyhedral onion-like nanostructures were first discovered by Tenne et al. in 1992 [1] and have constantly shown the ability to improve the tribological properties of different systems ever since.Lower friction and wear can be achieved by adding TMDC NPs to oils [2][3][4][5][6][7][8] or by incorporating them in solid matrices [6,[9][10][11][12] and in coatings [13][14][15][16][17].
The mechanisms of friction reduction by TMDC NPs are also a widely researched topic.They have been studied both by theoretical-computational techniques and by experimental-analytical methods such as electron microscopy, atomic force microscopy (AFM), and X-ray photoelectron spectroscopy (XPS) [6,7,10,[16][17][18][19][20][21][22][23][24][25][26][27][28].Generally, there are three main mechanisms considered.The first friction reduction mechanism is the sliding motion of the NP, where it maintains its shape.The NPs can penetrate into the grooves between two rough surfaces and serve mainly as a separator between the two rubbing surfaces.The second mechanism occurs when pressure increases and a rolling motion begins to take place, causing the nanoparticle to act as a tiny ball bearing.A further increase in pressure causes the nanoparticle to reach the limits of its elasticity, and the third mechanism emerges.Layers are exfoliated from the nanoparticle, forming a protective layer, or a tribofilm, on the surface.
Among the important parameters that influence the friction reduction provided by TMDC nanoparticles is their ability to penetrate the interface between the rubbed surfaces [18].These nanoparticles, especially WS 2 IFs [19], tend to come in agglomerated forms.The agglomerates are hard to disperse in oil.Disturbing the particles through sonication or extended mixing times can sometimes help to disperse the NPs in the oil, but not always.As a result, the nanoparticles will have limited access to the rubbed surfaces.If the aggregate is larger than the gap between the surfaces, the NPs will not even be able to enter, and so will be unable to reach the interface and aid in lubrication [20,24].
One strategy to increase the stability of TMDC NPs-in-oil dispersion is doping.Preparation of doped TMDC nanoparticles is not new to the literature.For example, nanostructures of WS 2 and MoS 2 doped with carbon [29], titanium [30], or niobium [31] were reported more than a decade and a half ago.Later, the preparation of rhenium-doped TMDC IFs was reported [32].Doping seems to have a positive effect on the tribological properties of the NPs.Tannous et al. [33] tested a series of MoxW1-xS2-type of IFs as oil additives using a pin-on-flat setup in order to determine their effect on friction and wear.They found that 0.5 < x ≤ 0.8 and x = 1 IFs gave the best results.A possible explanation is the presence of lattice defects, which facilitate exfoliation.In another study, the doping of MoS 2 IFs with less than 1 at.% of rhenium led to better dispersion in oil and to ultra-low friction coefficient (Coefficient of Friction, CoF) values.Here, negative surface charges caused by the doping may explain repulsion between the NPs and, in turn, an improved tribological behavior [34].Recently, Cammarata and Polcar [35] used ab initio density functional theory techniques to study how structural and electronic features of TMDC NPs affect their tribology on the macroscopic scale.They also used their design to engineer a titanium-doped MoS 2 NP that is expected to have enhanced tribological properties.A possible technical drawback to doping is the fact that it must be done as a part of the synthesis of TMDC NPs.
Another strategy to reduce agglomeration is to functionalize non-doped NPs.As they are relatively chemically inert in their bare form, surface functionalization of TMDC NPs, resulting in nanocomposite material formation, can potentially help in achieving better dispersion ability and in reducing agglomeration [24].Still, compared to the doping method, there are few studies found in the literature that address the tribological properties of surface-functionalized NPs.Shahar et al. [36], who performed tribological measurements on silane-coated WS 2 IFs, found that alkyl-silane functionalization increased the stability of oil-nanoparticle dispersions.In tests done one and two hours after the sonication of the functionalized nanoparticles in oil, the CoF values decreased by 33% compared to non-functionalized nanoparticles.A more recent work was reported by Yegin et al. [37], who worked with silica NPs as additives to an ionic liquid-type of lubricant.They found that octadecyltrichlorosilane-functionalized NPs reduced the CoF by 37.2% compared to bare ionic liquids, whereas non-functionalized silica NPs reduced the CoF by only 16.7%.Moreover, the functionalized NPs combined the advantages of a hard core and a soft shell.
For this work, our main goal was to get a preliminary idea about the tribological behavior of WS 2 nanostructures functionalized with a humin-like conformal shell, which we recently reported the preparation and characterization of [38].More specifically, we wanted to determine if the conformal shell leads to an improvement in the friction reduction by providing the NPs better access to the rubbed surfaces.Other parameters of interest were the effects of the shell thickness and the type of coated NP on the tribological properties.
Preparation of Humin-Like Shell-Coated WS 2 NPs
Humin-like shell-coated WS 2 NPs were prepared based on a procedure we previously published [38], scaled up for a 100-mL round-bottom flask (for quantities of the reagents, see Table 1).Multi-walled WS 2 INTs/IFs were purchased from NanoMaterials Ltd. (Yavne, Israel).For the samples IF-IND-1 and IF-IND-2, industrial-quality IFs were used.For the rest of the samples, laboratory-quality INTs and IFs were used.NPs were dispersed in 25 mL of chloroform (ACS reagent, Carlo Erba, Milan, Italy).A solution of glucose pentaacetate (98%, Acros Chemicals, Geel, Belgium, Cat.No. 119910250) in 25 mL was then added, followed by BF 3 •Et 2 O (Sigma-Aldrich, Rehovot, Israel, Cat.No. 175501).The reaction flasks were heated to 60 • C and stirred for 24 h.After cooling to room temperature, the samples were washed and dried (for a more detailed procedure, refer to [38]).
Tribological Tests
All tests were performed using a reciprocal ball-on-flat setup (Figure 1).The tests were performed using a load of 2.8 N and a sliding velocity of 1.5 mm/s over 3000 cycles.A steel-bearing ball (AISI 50100) with a diameter of 5 mm was moved against a stainless steel plate (AISI 316) with a hardness of 23-24 HRc.The flat plates were ground up to Ra = 0.1 µm.Three drops of polyalphaolefin-4 (PAO-4) oil (viscosity 18 mPa•s at 40 • C; Paz Oil Company Ltd., Haifa, Israel) with or without 1% (wt %) WS 2 NP additives were added at the beginning of each 1000 cycles at the interface between the ball and the plate.The CoF and the wear (width of track) were measured during the tests.The experimental setup was developed and built in Holon Institute of Technology (HIT, Holon, Israel).All tests were performed at room temperature (23-24 • C, humidity 45-55%) and repeated 3 times for each sample.
Preparation of Humin-Like Shell-Coated WS2 NPs
Humin-like shell-coated WS2 NPs were prepared based on a procedure we previously published [38], scaled up for a 100-mL round-bottom flask (for quantities of the reagents, see Table 1).Multiwalled WS2 INTs/IFs were purchased from NanoMaterials Ltd. (Yavne, Israel).For the samples IF-IND-1 and IF-IND-2, industrial-quality IFs were used.For the rest of the samples, laboratory-quality INTs and IFs were used.NPs were dispersed in 25 mL of chloroform (ACS reagent, Carlo Erba, Milan, Italy).A solution of glucose pentaacetate (98%, Acros Chemicals, Geel, Belgium, Cat.No. 119910250) in 25 mL was then added, followed by BF3•Et2O (Sigma-Aldrich, Rehovot, Israel, Cat.No. 175501).The reaction flasks were heated to 60 °C and stirred for 24 h.After cooling to room temperature, the samples were washed and dried (for a more detailed procedure, refer to [38]).
Tribological Tests
All tests were performed using a reciprocal ball-on-flat setup (Figure 1).The tests were performed using a load of 2.8 N and a sliding velocity of 1.5 mm/s over 3000 cycles.A steel-bearing ball (AISI 50100) with a diameter of 5 mm was moved against a stainless steel plate (AISI 316) with a hardness of 23-24 HRc.The flat plates were ground up to Ra = 0.1 µm.Three drops of polyalphaolefin-4 (PAO-4) oil (viscosity 18 mPa•s at 40 °C; Paz Oil Company Ltd., Haifa, Israel) with or without 1% (wt %) WS2 NP additives were added at the beginning of each 1000 cycles at the interface between the ball and the plate.The CoF and the wear (width of track) were measured during the tests.The experimental setup was developed and built in Holon Institute of Technology (HIT, Holon, Israel).All tests were performed at room temperature (23-24 °C, humidity 45-55%) and repeated 3 times for each sample.It is important to state that this work is our first with such functionalized WS2 NPs, and the tribology tests performed here were meant to get a preliminary idea about their potential as additives compared to non-functionalized ones.There are more parameters to be tested within the experimental setup, like changes in the speed and load.In addition, conditions such as mixing time It is important to state that this work is our first with such functionalized WS 2 NPs, and the tribology tests performed here were meant to get a preliminary idea about their potential as additives compared to non-functionalized ones.There are more parameters to be tested within the experimental setup, like changes in the speed and load.In addition, conditions such as mixing time and temperature are to be evaluated by sedimentation tests, similarly to our previous work with non-functionalized WS 2 IFs [39].
Electron Microscopy Characterizations
HRSEM images and elemental EDS mappings were acquired using a Magellan 400L high-resolution scanning electron microscope (FEI, Hillsboro, OR, USA) equipped with an EDS detector.After the friction test, the plates were carefully rinsed with hexane (95%, Sigma-Aldrich, Rehovot, Israel, Cat.No. 296090) and air-dried.
Transmission electron microscopy (TEM) images were acquired by a Tecnai Spirit Bio-Twin microscope (FEI, Hillsboro, OR, USA) equipped with a 1 × 1 k CCD camera (Gatan, Pleasanton, CA, USA).Samples for TEM analysis were dispersed in ethanol, placed on a formvar/carbon film on a 400-mesh copper TEM grid (FCF400-Cu, Electron Microscopy Sciences, Hatfield, PA, USA) and dried.
Preparation of Humin-Like Shell-Coated WS 2 Nanostructures
TEM images (Figure 2) show WS 2 NPs coated with a conformal humin-like shell of different thicknesses, similarly to a previous report [38].Here, too, no electron diffraction is obtained from the shell, meaning it is amorphous.Additionally, as expected, the shell thickness increases with an increasing glucose pentaacetate/WS 2 ratio (as seen in images b, c and e-g).Another point worth noting is the visual difference between the two grades of WS 2 IFs.The non-coated laboratory-quality IFs (d) have a well-defined, pebble-like shape, and their walls seem to be intact.The non-coated industrial-quality IFs (h) seem to be more damaged and some of their external walls are exfoliated.
Lubricants 2018, 6, 3 4 of 14 and temperature are to be evaluated by sedimentation tests, similarly to our previous work with nonfunctionalized WS2 IFs [39].
Electron Microscopy Characterizations
HRSEM images and elemental EDS mappings were acquired using a Magellan 400L highresolution scanning electron microscope (FEI, Hillsboro, OR, USA) equipped with an EDS detector.After the friction test, the plates were carefully rinsed with hexane (95%, Sigma-Aldrich, Rehovot, Israel, Cat.No. 296090) and air-dried.
Transmission electron microscopy (TEM) images were acquired by a Tecnai Spirit Bio-Twin microscope (FEI, Hillsboro, OR, USA) equipped with a 1 × 1 k CCD camera (Gatan, Pleasanton, CA, USA).Samples for TEM analysis were dispersed in ethanol, placed on a formvar/carbon film on a 400-mesh copper TEM grid (FCF400-Cu, Electron Microscopy Sciences, Hatfield, PA, USA) and dried.
Preparation of Humin-Like Shell-Coated WS2 Nanostructures
TEM images (Figure 2) show WS2 NPs coated with a conformal humin-like shell of different thicknesses, similarly to a previous report [38].Here, too, no electron diffraction is obtained from the shell, meaning it is amorphous.Additionally, as expected, the shell thickness increases with an increasing glucose pentaacetate/WS2 ratio (as seen in images b, c and e-g).Another point worth noting is the visual difference between the two grades of WS2 IFs.The non-coated laboratory-quality IFs (d) have a well-defined, pebble-like shape, and their walls seem to be intact.The non-coated industrial-quality IFs (h) seem to be more damaged and some of their external walls are exfoliated.
Friction and Wear Experiments
The results of the friction and wear tests are presented in Table 2.It can be seen that the highest values for both parameters were obtained for additive-free PAO-4: 0.17 and 235 µm, respectively.For friction tests with additives, the CoF values are relatively close to each other and do not follow any specific trend.Coated INTs do not improve the friction and wear properties compared to non-coated INTs.The case is different for IFs: the width of wear track decreases for the group of laboratory-quality WS 2 IFs, the thicker the humin-like shell is.Coated industrial-quality WS 2 IFs give the lowest values for both CoF and width of wear track: 0.075 and 70 µm, respectively.These values make them the most preferred additive to PAO-4 for friction and wear reduction.
To explain the results of the friction and wear tests, we first explore the morphology of the rubbed surfaces by examining the optical microscope (OM) images.Figures 3 and 4 show OM images of the wear spots on the surface of the balls and the wear tracks after 3000 cycles of friction.
It can be seen that for the tests with IFs additives (Figure 3), transferred films and thin plowing marks are observed in the wear tracks.When looking at the balls, similar plowing marks and transferred films around the wear spot are visible only for the tests with non-coated IF-LAB-1 nanoparticles (ai).Clearly, the distribution of the nanoparticles around the wear track plays an important role in friction reduction for the IF-LAB series.In fact, non-coated WS 2 IFs are spread in relatively large aggregates of nanoparticles both on the surface of the ball and on the flat plate (ai,aii).For the coated sample IF-LAB-3, the aggregates are almost absent (av,avi).Visible aggregation decreases with increasing coating thickness, and this sits well with the wear track values shown in Table 2.
A difference in aggregation between the coated and non-coated samples cannot be concluded solely from these OM images for the tests with industrial-quality WS 2 IFs (bi-biv).There seems to be a concentration of IFs in the track margins in image bii compared to small clusters of IFs in image biv, but a deeper look into the wear track is required, particularly for sample IF-IND-2 which gave the best results for friction and wear reduction.For the tests with INTs additives (Figure 4), OM images of the contact surfaces during friction agree with the results in Table 2: the CoF and width of the wear track seem to be the lower for the non-coated INTs compared to the coated ones.It was found that the non-coated INTs form a thin film on both contact surfaces (a,b).For coated INTs (c,d), no transferred film is observed on the ball surface, and big clusters of INTs appear around the wear track.As the coating does not improve the friction reduction ability of WS 2 INTs, we will present a deeper examination of the IFs tests.We will, however, further discuss the different friction reduction mechanisms of IFs and INTs in the discussion section.To better understand the role of the coating and the differences between IFs grades regarding friction and wear reduction, HRSEM images of the wear tracks were analyzed (Figure 5).
Lubricants 2018, 6, 3 7 of 14 discussion section.To better understand the role of the coating and the differences between IFs grades regarding friction and wear reduction, HRSEM images of the wear tracks were analyzed (Figure 5).HRSEM images give us a hint about the role the coating plays in reducing the friction.The images imply that coated IFs tend to be less aggregated than non-coated IFs, allowing them better access into surface features.The images of the plates tested with IF-LAB-3 and IF-IND-2 additives demonstrate this well (Figure 5f,j).To get a better idea of the difference between the two IFs grades, EDS elemental mappings were used.We compared the abovementioned samples with their non-coated equivalents IF-LAB-1 and IF-IND-1, respectively.Figure 6 shows elemental EDS mappings and compositional data for the plates tested with laboratory-quality IFs as additives to PAO-4 oil.HRSEM images give us a hint about the role the coating plays in reducing the friction.The images imply that coated IFs tend to be less aggregated than non-coated IFs, allowing them better access into surface features.The images of the plates tested with IF-LAB-3 and IF-IND-2 additives demonstrate this well (Figure 5f,j).To get a better idea of the difference between the two IFs grades, EDS elemental mappings were used.We compared the abovementioned samples with their noncoated equivalents IF-LAB-1 and IF-IND-1, respectively.Figure 6 shows elemental EDS mappings and compositional data for the plates tested with laboratory-quality IFs as additives to PAO-4 oil.On the plate tested with non-coated IFs (a), tungsten and sulfur signals are found over the entire tested area with no preference regarding the wear track.On the plate tested with coated IFs (b), the tungsten signal is clearly stronger within the wear track.A higher magnification (inset image ii) shows stronger tungsten signals in the scratches and dents, implying that the coated IFs entered these surface features.Generally speaking, even though the coated IFs did access the wear track, it does not contain a significant amount of them: looking at the compositional data (c), percentages of tungsten, sulfur and carbon are only slightly higher for the IF-LAB-3 sample compared to IF-LAB-1.In addition, out of the elements contained in the coated IFs, tungsten was the only one detectable in the mapping.The following comparison to industrial-quality IFs will emphasize this point.On the plate tested with non-coated IFs (a), tungsten and sulfur signals are found over the entire tested area with no preference regarding the wear track.On the plate tested with coated IFs (b), the tungsten signal is clearly stronger within the wear track.A higher magnification (inset image ii) shows stronger tungsten signals in the scratches and dents, implying that the coated IFs entered these surface features.Generally speaking, even though the coated IFs did access the wear track, it does not contain a significant amount of them: looking at the compositional data (c), percentages of tungsten, sulfur and carbon are only slightly higher for the IF-LAB-3 sample compared to IF-LAB-1.In addition, out of the elements contained in the coated IFs, tungsten was the only one detectable in the mapping.The following comparison to industrial-quality IFs will emphasize this point.
Figure 7 shows elemental EDS mappings and compositional data for the plates tested using industrial-quality IFs as additives to PAO-4 oil.On the plate tested with non-coated IFs (a), both tungsten and sulfur signals appear on the entire tested area.The tungsten signal, however, (ii) is slightly stronger within the wear track than in the rest of the tested area.On the plate tested with coated IFs (b), the contrast in the tungsten mapping (ii) is stronger compared to (aii).The same goes for sulfur and carbon.A closer look into the wear track on this plate (inset images bii-bv) and the compositional data (c) implies a high "concentration" of the coated IFs in the wear track.Additionally, the coated IFs are filling the scratches and dents in the wear track so well that an iron "absence" in the filled parts is clearly noticeable (inset image bv).
Lubricants 2018, 6, 3 9 of 14 Figure 7 shows elemental EDS mappings and compositional data for the plates tested using industrial-quality IFs as additives to PAO-4 oil.On the plate tested with non-coated IFs (a), both tungsten and sulfur signals appear on the entire tested area.The tungsten signal, however, (ii) is slightly stronger within the wear track than in the rest of the tested area.On the plate tested with coated IFs (b), the contrast in the tungsten mapping (ii) is stronger compared to (aii).The same goes for sulfur and carbon.A closer look into the wear track on this plate (inset images bii-bv) and the compositional data (c) implies a high "concentration" of the coated IFs in the wear track.Additionally, the coated IFs are filling the scratches and dents in the wear track so well that an iron "absence" in the filled parts is clearly noticeable (inset image bv).To further verify the connection between the presence of the shell on the WS 2 IFs and the effect of disaggregation which plays a role in friction reduction, we compared between 1% (wt %) dispersions of coated and non-coated IFs in paraffin oil, under an optical microscope (Figure 8).The dispersions were prepared by stirring the particles with the oil for one hour at room temperature, using a magnetic stirrer.Then, three drops of the dispersions were placed on clean glass slides and observed under the microscope.
Lubricants 2018, 6, 3 10 of 14 To further verify the connection between the presence of the shell on the WS2 IFs and the effect of disaggregation which plays a role in friction reduction, we compared between 1% (wt %) dispersions of coated and non-coated IFs in paraffin oil, under an optical microscope (Figure 8).The dispersions were prepared by stirring the particles with the oil for one hour at room temperature, using a magnetic stirrer.Then, three drops of the dispersions were placed on clean glass slides and observed under the microscope.For both the laboratory-and industrial-quality IFs, there is a clear difference between the dispersions of non-coated IFs (a,d) and the dispersions of the coated IFs (b,c,e).While the non-coated IFs are present in the oil as large (tens of microns long) "islands" or aggregates, the coated IFs form much smaller clusters.In the image of industrial-quality non-coated IFs (d), exfoliated walls from the nanoparticles are clearly observed in the background.Another point worth noticing is the difference between the laboratory-quality IFs with different coating thicknesses (b,c).Both types of coated particles form small clusters, but the clusters of the thinly coated nanoparticles (b) are much more densely packed compared to the particles in image (c), with the thicker coating.A possible explanation is that the thin coating might help separation between the particles, but does not induce sufficient repulsion interactions to in comparison to the thicker coating.Out of all of the IFs groups, the coated industrial-quality WS2 IFs (e) appear to be dispersed in small clusters with the largest spaces between them.
Discussion
Both WS2 INTs and IFs showed some improvement in reducing friction compared to additivefree PAO-4 oil, as expected.When comparing INTs and coated IFs, it appears that the coating affects each of them in opposite ways: coated IFs reduce friction better than non-coated IFs.In contrast, coated INTs do not reduce friction as well as the non-coated INTs.As mentioned in the introduction section, the use of metal dichalcogenide IFs for lubrication is well-studied in the literature.The use of INTs for lubrication, however, is much less common.With this being said, INTs do reduce friction and wear to some extent [40,41].The friction reduction mechanisms of the INTs and IFs in relation to this study will be discussed here.
Kalin et al. [40], in their tribological tests done with MoS2 nanotubes, suggested two main mechanisms for friction reduction.The first mechanism is exfoliation of the nanotube walls, leading to formation of tribofilm in the wear track.The second mechanism is the aggregation of compacted and accumulated nanotubes, mainly at the margins of the wear track, acting as a thick film barrier For both the laboratory-and industrial-quality IFs, there is a clear difference between the dispersions of non-coated IFs (a,d) and the dispersions of the coated IFs (b,c,e).While the non-coated IFs are present in the oil as large (tens of microns long) "islands" or aggregates, the coated IFs form much smaller clusters.In the image of industrial-quality non-coated IFs (d), exfoliated walls from the nanoparticles are clearly observed in the background.Another point worth noticing is the difference between the laboratory-quality IFs with different coating thicknesses (b,c).Both types of coated particles form small clusters, but the clusters of the thinly coated nanoparticles (b) are much more densely packed compared to the particles in image (c), with the thicker coating.A possible explanation is that the thin coating might help separation between the particles, but does not induce sufficient repulsion interactions to in comparison to the thicker coating.Out of all of the IFs groups, the coated industrial-quality WS 2 IFs (e) appear to be dispersed in small clusters with the largest spaces between them.
Discussion
Both WS 2 INTs and IFs showed some improvement in reducing friction compared to additive-free PAO-4 oil, as expected.When comparing INTs and coated IFs, it appears that the coating affects each of them in opposite ways: coated IFs reduce friction better than non-coated IFs.In contrast, coated INTs do not reduce friction as well as the non-coated INTs.As mentioned in the introduction section, the use of metal dichalcogenide IFs for lubrication is well-studied in the literature.The use of INTs for lubrication, however, is much less common.With this being said, INTs do reduce friction and wear to some extent [40,41].The friction reduction mechanisms of the INTs and IFs in relation to this study will be discussed here.
Kalin et al. [40], in their tribological tests done with MoS 2 nanotubes, suggested two main mechanisms for friction reduction.The first mechanism is exfoliation of the nanotube walls, leading to formation of tribofilm in the wear track.The second mechanism is the aggregation of compacted and accumulated nanotubes, mainly at the margins of the wear track, acting as a thick film barrier and reducing friction.With INTs, unlike IFs, penetration into surface features is less relevant as a mechanism for reducing friction.This is due to the rod-like shape and micron-scale lengths of the nanotubes.In the case of our work, the conformal coating supposedly has a negative effect on both mechanisms: it most likely protects the nanotubes from exfoliation and it makes the nanotubes less likely to aggregate.Therefore, it makes sense that the non-coated WS 2 INTs were more capable of reducing friction than the coated WS 2 INTs.
The story is different for IFs.As mentioned, WS 2 IFs commonly form large aggregates, leaving exfoliation of the IFs walls to become the dominant friction reduction mechanism.On one hand, the conformal coating we present here might protect the IFs from in situ exfoliation, strongly reducing its role as a friction reduction mechanism.On the other hand, however, surface functionalization of WS 2 IFs helps to improve their dispersion, therefore reducing aggregation.We are assuming the reason for this is that the humin-like conformal shell may make the surface of the NPs somewhat lipophilic and hence more easily dispersed in an oily medium [24].Smaller aggregates of IFs show a better ability to reduce friction and protect the rubbed surface from direct metallic contact, thus decreasing the wear on the surface [36].There is also a role played by the thickness of the coating.We previously reported that as the coating thickness increases, coated IFs powder is finer and more free-flowing [38].This leads to better dispersion of the IFs in oil as the coating thickness increases.This effect is well demonstrated in OM images of the particles-in-oil dispersions, and the SEM images of the IFs plates after friction tests.
Coated industrial-quality IFs gave the best results in our friction tests.The reason for this is most likely the combination of the two parameters of friction reduction mechanisms described above for IFs: tribofilm formation and easy penetration into the interface.It has been reported that poorly crystalline MoS 2 IFs presented better tribological properties then their perfectly shaped, closed-caged equivalents, as surface defects facilitate exfoliation and the formation of tribofilm [25,33,42].The industrial-quality WS 2 IFs in our work are poorly crystalline and highly exfoliated to begin with, which is an advantage in friction reduction when comparing these two grades.In addition to the pre-exfoliated walls, which make a tribofilm formation easier, the coating adds a disaggregating effect, making the surface penetration mechanism more efficient.Hence, it is not surprising that the test performed with coated industrial-quality IFs resulted in significantly lower values for CoF and track width compared to the rest of the tested samples.
Conclusions
In this work, we tested WS 2 INTs and IFs coated with a conformal humin-like shell as additives to PAO-4 oil for friction reduction using a ball-on-flat setup.Two grades of IFs were tested, and different coating thicknesses were examined.From this research, the following conclusions can be drawn: Coated WS 2 INTs did not show any improved friction and wear properties compared to the non-coated ones.The reason is that the formation of a tribofilm is the dominant mechanism in friction reduction for INTs.This mechanism is assisted by exfoliation and accumulation, which are both interrupted by the presence of the coating.
2.
In contrast, coated WS 2 IFs showed an improved ability to reduce friction and wear compared to the non-coated ones.The improved tribological properties of the coated IFs are explained by the reduced aggregation the coating provides, allowing better dispersion of the IFs in the oil phase.This leads to a better penetration of the IFs into the interface, providing easy shearing at thin surface layers.As the coating thickness increases, so too does the friction reduction provided by the NPs, because the aggregation decreases.
3.
Coated industrial-quality WS 2 IFs, used as a PAO-4 additive, gave the best friction reduction results.This is associated with a combination of two mechanisms: the presence of pre-exfoliated walls facilitates tribofilm formation, and the disaggregating effect of the coating makes it easier for the NPs to penetrate the interface.
Performing sedimentation tests is a significant part of our future work with the functionalized NPs.Optimized mixing conditions and experimental setup may lead to an additional improvement in the friction reduction enabled by the coated IFs.
Another interesting aspect to test is IFs with various other chemically different conformal coatings as lubrication additives.In any case, it is our hope that these preliminary findings will be of help with further improvement of metal dichalcogenides nanostructures tribological properties.
Figure 1 .
Figure 1.Schematic illustration of the ball-on-flat rig.
Figure 1 .
Figure 1.Schematic illustration of the ball-on-flat rig.
Figure 2 .
Figure 2. Transmission electron microscopy (TEM) images of the non-coated and coated WS2 INTs (a-c), laboratory-quality WS2 IFs (d-g), and industrial-quality WS2 IFs (h,i), tested as oil additives for friction reduction.
Figure 2 .
Figure 2. Transmission electron microscopy (TEM) images of the non-coated and coated WS 2 INTs (a-c), laboratory-quality WS 2 IFs (d-g), and industrial-quality WS 2 IFs (h,i), tested as oil additives for friction reduction.
Figure 4 .
Figure 4. OM images of wear spots on the surface of the ball and wear track after friction tests in PAO-4 oil with WS2 INTs samples used as additives: non-coated (a,b) and coated (c,d).
Figure 4 .
Figure 4. OM images of wear spots on the surface of the ball and wear track after friction tests in PAO-4 oil with WS2 INTs samples used as additives: non-coated (a,b) and coated (c,d).
Figure 4 .
Figure 4. OM images of wear spots on the surface of the ball and wear track after friction tests in PAO-4 oil with WS 2 INTs samples used as additives: non-coated (a,b) and coated (c,d).
Figure 5 .
Figure 5. High resolution scanning electron microscope (HRSEM) images of wear tracks after friction tests for coated and non-coated: laboratory-quality WS2 IFs (a-f) and industrial-quality WS2 IFs (g-i) tested as oil additives for friction reduction.
Figure 5 .
Figure 5. High resolution scanning electron microscope (HRSEM) images of wear tracks after friction tests for coated and non-coated: laboratory-quality WS 2 IFs (a-f) and industrial-quality WS 2 IFs (g-i) tested as oil additives for friction reduction.
quality IF-IND-2 nanoparticles were found to be the most efficient for friction reduction.Image i shows very low wear on the tested plate and even smaller clusters of IFs compared to the IF-IND-1 sample.A closer look into the wear track (j) shows that single IFs or clusters of a few IFs can be seen.
Table 1 .
Data for preparation of coated WS 2 NPs.
Table 1 .
Data for preparation of coated WS2 NPs.
Table 2 .
Typical coefficients of friction (CoF) and widths of wear tracks for additive-free oil and oil with additives. | 7,833 | 2018-01-04T00:00:00.000 | [
"Materials Science"
] |
A High-Throughput Method to Analyze the Interaction Proteins With p22 Protein of African Swine Fever Virus In Vitro
African swine fever virus (ASFV) has been identified as the agent of African swine fever, resulting in a mortality rate of nearly 100% in domestic pigs worldwide. Protein p22 encoded by gene KP177R has been reported to be localized at the inner envelope of the virus, while the function of p22 remains unclear. In this study, p22 interacting proteins of the host were identified by a high-throughput method and analyzed by Gene ontology terms and Kyoto Encyclopedia of Gene and Genomes (KEGG) pathways; numerous cellular proteins in 293-T that interacted with p22 protein were identified. These interacting proteins were related to the biological processes of binding, cell structure, signal transduction, cell adhesion, etc. At the same time, the interacted proteins participated in several KEGG pathways like ribosome, spliceosome, etc. The key proteins in the protein–protein interaction network were closely related to actin filament organization and movement, resulting in affecting the process of phagocytosis and endocytosis. A large number of proteins that interacted with p22 were identified, providing a large database, which should be very useful to elucidate the function of p22 in the near future, laying the foundation for elucidating the mechanism of ASFV.
INTRODUCTION
African swine fever (ASF) is caused by African swine fever virus (ASFV), a linear, large, doublestranded DNA virus that is the only member of the Asfarviridae family (1). ASFV is an enveloped DNA virus with genome length of 170-193 kbp (1). The genome encoded 151-167 open reading frames. ASFV is an icosahedral symmetric virus that replicates in the cytoplasm of infected cells. Warthogs, Bush pigs, and soft ticks are natural hosts of the virus, which can persist to infect without any signs of disease (2). Once introduced into domestic pigs, ASFV is a highly pathogenic virus and could spread directly among pigs, resulting in nearly 100% mortality (3). Typical clinical symptoms include high fever, cyanosis, hemorrhagic lesions, anorexia, and ataxia (4). The lesion tissues display severe pathological vascular changes, such as renal ecchymosis, skin erythema, and diffuse hemorrhages in lymph nodes, kidneys, lungs, and urinary bladder; pulmonary edema; disseminated intravascular coagulation; and thrombocytopenia (5).
The disease has caused serious economic losses to the pig industry and has a severe impact on the world. Especially in 2018, the breakout of ASF in China spread quickly over the country, threatening the pig industry severely (6). Recent studies have shown that some viral proteins are involved in the adhesion and entry of ASFV. Some encoded structural proteins are involved in genome replication and virus infection (7). It is reported that 15 of the 26 virus-encoded proteins were detected in the virus proteome with predicted transmembrane domains (8). However, some detected proteins remain uncharacterized. Among the detected proteins on the membrane, protein p22 (pKP177R) has been predicted to be externally located in the virion (9). Some studies have reported that p22 was localized around the virus factories rather than at the cell surface (10,11). In another study, it was tricky that protein p22 was weakly detected throughout the cytoplasm, including the virus factories, but could be detected at the periphery of assembling and mature icosahedral particles. Protein p22 was localized at the inner envelope (12). Other viral membrane proteins, like p17, pE183L, p12, and pE248R, were also at the cell surface but were localized at precursor viral membranes and intracellular icosahedral particles within the viral factories (13)(14)(15)(16). Some structural proteins have been reported to be involved in virus entry, like p12, pE248R, and pE199L; some are required for the assembly process, like protein p17 and pE183L (17). These proteins are localized at the membrane of the virus, helping the entry or assembly process of the virus. However, the receptors of ASFV are still unclear. In recent study, p22 was proven not to be involved in virus replication or virulence in swine by KP177R gene deletion in recombinant virus (18). It might be due to the potential replacement of the KP177R gene by one of the L101L genes; there might be a potential overlapping in the function of these two genes. Therefore, the function of p22 is still unknown.
In this study, we studied the proteins that interacted with p22 of ASFV by proteomics analysis. Although a large number of structural protein studies have been performed, further researches on the function and molecular mechanism are desperately in need and will help prevent and control the spread of the disease.
Sample Preparation
Gene KP177R (p22) of the ASFV and tagged by hemagglutinin (HA) at C terminus was synthesized into the plasmid pcDNA-3.1(+) by the GeneScript Corporation (Shanghai, China) and sequenced correctly. The 293-T cells were grown into 80% confluency in Dulbecco's modified Eagle's medium, supplemented with 10% fetal bovine serum (FBS) and antibiotics (penicillin/streptomycin) (Thermo Fisher, MA, USA) in tissue culture plates. Cells were maintained at 37 • C in a humidified atmosphere and supplemented with 5% CO 2 . The cells were then separately transfected with pcDNA3.1(+)-p22-HA and pcDNA3.1(+) (1 µg each) by Lipofectamine 3000 according to the instruction of the manufacturer (Thermo scientific, MA, USA) and were proven successfully expressed in 293-T cells. At 24 h post-transfection, p22-expressed or mock cells were washed once in cold phosphate-buffered saline (PBS) and suspended in 1 ml of cold immunoprecipitation (IP) buffer (Beyotime, Shanghai, China) (50 mM Tris-HCl, pH 7.4, 150 mM NaCl, 1 mM EDTA) supplemented with 0.5% Nonidet P-40 Substitute (NP-40, Fluka Analytical) on ice with 1% protease inhibitor cocktail (Roche, Shanghai, China). Cells were lysed for 30 min at 4 • C with constant rotation, and the lysates were cleared by centrifugation at 5,000 × g for 5 min; lysate was removed for Western blot analysis (whole-cell lysate fraction). The remaining lysate was incubated with 1 µg of anti-HA antibody (Santa Cruz, Shanghai, China) overnight at 4 • C, and then pre-coupled to 40 µl of A/G Plus agarose beads for 4 h at 4 • C according to the instruction of the manufacturer. The immune complexes were precipitated, washed, and subjected to SDS-PAGE gels and Western blotting analysis.
Liquid Chromatography Tandem Mass Spectrometry (LC-MS/MS) Analysis and Data Processing
The preparation of peptides for MS of triplicate samples of each group and LC-MS/MS were all performed by the Shanghai Applied Protein Technology Company, and the LC-MS/MS was executed on an Q Exactive HF mass spectrometer (Thermo Scientific, MA, USA). In order to exclude possible contaminants, the databases were deleted in p22-immunoprecipitated proteins, which were obtained in mock immunoprecipitated proteins (the brief procedures are seen in Figure 1).
Western Blot
293-T cells were transfected with 1 µg of pcDNA3.1-p22-HA or mock plasmid for 24 h. At 24 h post-transfection, 293-T cells were lysed with lysis buffer containing 1% protein inhibitor. The cell lysates were subjected to SDS-PAGE and transferred onto 0.22-µm nitrocellulose membranes (Pall, Port Washington, NY, USA). Then, the membranes were incubated with 5% defatted milk at room temperature for 2 h, washed with PBS containing 0.05% Tween 20 three times, followed by anti-HA rabbit polyclonal antibody incubation at 4 • C overnight, washed with PBST three times, and then incubated with horseradish peroxidase-conjugated goat anti-rabbit IgG secondary antibody at room temperature for 1 h. After washing three times with PBST, detection was performed using the ECL Kit (Thermo Fisher Scientific).
Indirect Immunofluorescence Assay (IFA)
293-T cells were transfected with 1 µg of pcDNA3.1(+)-p22-HA or mock plasmid for 24 h. At 24 h post-transfection, cells were fixed in 4% paraformaldehyde at 4 • C for 30 min, and then the cell membranes were permeabilized with PBS containing 0.2% Triton X-100 for 5 min. The cells were incubated with 1:200 diluted anti-HA antibody at 37 • C for 1 h. Then, the cells were incubated with Alexa Fluor 555-conjugated goat anti-rabbit IgG at 1:400 dilution at 37 • C for 1 h and washed with PBS three times before examination.
Gene Ontology (GO) Enrichment and Kyoto Encyclopedia of Gene and Genomes (KEGG) Pathway Analysis of p22 Interacting Proteins
GO is the concept of the combination of gene-gene functions and is designed to detect cell biological functions via a systematically dynamic and computational interpretation of genes, RNA, and proteins. It covers three main areas (19) including cellular components, molecular function, and biological processes. The KEGG database aims to systematically analyze genes and their related gene functions with an interacting network of molecules in the cells in a hierarchical order (20). GO enrichment and KEGG pathway analysis of p22 interacting proteins were conducted. DAVID (http://david.abcc.ncifcrf.gov/) used in this study is short for Database for Annotation, Visualization, and Integrated Discovery.
Protein-Protein Interaction (PPI) Network Construction
The PPI plays an extremely important role in understanding cellular or systemic processes of cell growth, reproduction, and metabolism (21) and provides a platform for the annotation of functional, structural, and evolutionary properties of proteins. To further investigate the molecular mechanism of p22 of ASFV, PPI networks of p22 interacting proteins were constructed through the STRING database (http://www.string-db.org/). STRING is an online database that includes experimental as well as predicted interaction information and comprises >1,100 completely sequenced organisms. To select core genes from the PPI network, we analyzed the top biological structure of the network and obtained the proteins that directly interact with the target protein in the network. We selected the PPIs to construct the PPI network for visualization and analysis.
Sample Identification
Protein p22 was expressed in the 293-T cells by Western blotting analysis; the band was shown as the predicted size of 25 kDa (Figure 2A). The sample of p22-HA and its mock immunoprecipitated for LC-MS/MS were also identified ( Figure 2B). IFA analysis also proved the p22 protein expressed in 293-T cells in both the nucleus and plasma ( Figure 2C).
Enriched GO Terms Analysis
In this study, to establish the host cell proteins or pathways that have been enriched in the p22 interacting partners' interaction networks, we performed gene ontology annotation and analysis of the target proteins in the p22 protein expressed 293-T cells to predict the biological function. There were 578 p22-interacted partners screened out compared with control samples in total (the data that repeated with mocked-interacted proteins were deleted to exclude the background contamination). From the GO map, thousands of enriched GO terms were obtained, and their corresponding proteins are shown in Supplementary Tables 1, 3. Go terms mainly covered three parts: biological process, molecular function and cellular component. A total of 359 proteins were related to the biological process (Figure 3). The top two enriched GO terms of the biological process were cellular process and metabolic process, followed by biological regulation, cellular component organization or biogenesis, etc. A total of 463 proteins were related to molecular function Frontiers in Veterinary Science | www.frontiersin.org ( Figure 3). Major enriched GO terms of molecular function were binding (as high as 378 proteins were included), catalytic activity, structural molecule activity, etc., indicating that p22 may play an important role in virus entry. For the cellular component, 374 proteins were involved. The GO terms analysis of the interacted proteins mainly included cell part, organelle, protein-containing complex, and membrane-enclose lumen membrane, indicating that p22 interacting partners may participate in cell structure maintenance. Collectively, the GO annotation and analysis of target proteins inferred that the p22 protein may participate in several processes such as protein binding, catalytic activity, and metabolism.
KEGG Pathways Analysis
To further predict the cellular pathways and signal transduction of p22-interacted host protein candidates, the KEGG and the top 20 enriched pathways with the highest representation of each term were enlisted (Figure 4A). A total of 165 KEGG pathways were screened out, and their corresponding protein numbers are shown in Supplementary Tables 2, 3. According to the result, the KEGG pathways in which the p22-related proteins were involved were ribosomes ( Figure 4B) and spliceosome (Figure 4C), wherein the involved protein numbers were as high as 31 and 23, respectively. Furthermore, enrichment analysis also indicated that the proteins might participate in pathogenic Escherichia coli infection, tight junction, necroptosis, ribosome biogenesis in eukaryotes, RNA transport, regulation of actin cytoskeleton, cardiac muscle contraction, adrenergic signaling in cardiomyocytes, etc. It is noteworthy that KEGG pathway analysis showed that seven related proteins participated in endocytosis (Figure 4D), and six proteins were involved in cyclic GMP-dependent protein kinase (cGMP-PKG) signaling pathway and focal adhesion. Minor proteins (four proteins) participated in the cAMP signaling pathway and AMP-activated protein kinase (AMPK) signaling pathway. The KEGG enrichment analysis suggested that pathways involved in immune response, regulation of necroptosis, ribosome biogenesis, and endocytosis were preferentially targeted.
PPI Network
The p22-interacted proteins were placed in the STRING database for PPI analysis and visualization in Cytoscape software. The selected proteins that interacted with p22 protein in the endocytosis process were connected as a network. The proteins included myosin-9 (MYH9), actinrelated protein 2/3 complex subunit 2 (ARPC2), actin-related protein 2, actin-related protein 2/3 complex subunit 1B, ADP-ribosylation factor 6 (ARF6), beta-actin-like protein 2 (ACTBL2), alpha-actinin-4 (ACTN4), clathrin heavy chain A (CLTC), and Ras-related protein-10 (RAB10). The PPI network contained eight nodes and 20 edges. All the hub proteins were at key positions in the interaction network. The nodes represented the interacted proteins, and the edges represented the interactions between these proteins. The selected proteins in the PPI network might relate to p22 more closely in the process of endocytosis. In addition to endocytosis, other PPI networks were also involved, including regulation of actin cytoskeleton, DNA replication, spliceosome, tRNA ligases, and mitochondria, implying that p22-interacted proteins functioned widely (Figure 5).
DISCUSSION
As the protein localized at the inner envelope of ASFV, the function of p22 is rarely known. In an attempt to acquire the p22 function, p22-interacted proteins of the host were identified by a high-throughput method and analyzed by GO terms and KEGG pathways; numerous cellular proteins in 293-T that interacted with p22 protein were identified. Although the facility to transfect 293-T cells made us select this cell type as the target cell, the interacted partners with p22 protein in the host cell derived from pig will be further investigated in the future. This study provides a large database and a useful tool to figure out the function of p22.
In this study, GO terms mainly covered three parts: biological process, molecular function, and cellular component. The top two enriched GO terms of the biological process were cellular process and metabolic process, implying that p22 might utilize the host proteins directly or indirectly to affect cell growth, function, and stability. Main enriched GO terms of molecular function were binding, catalytic activity. GO analysis revealed that the most significant ontology categories of molecular function of p22 interacting proteins is binding, suggesting a role of p22 as the protein at the inner envelop in virus binding and entry into the cell. The interesting result would inspire us to dig out the real function of p22 in virus entry. The GO term analysis of the p22 interacting proteins in the cell component mainly included cell part, organelle, proteincontaining complex, membrane-enclose lumen, and membrane; the results further verified the conclusion that p22 located at the membrane of the viron might participate in virus structure maintenance and contact with the host membrane via the binding and endocytosis process. Of course the suspicion needs further to be proven.
For KEGG pathways analysis, a large number of KEGG pathways were screened out (as high as 165); the KEGG pathways that p22 interacting proteins participated in mainly were ribosomes and spliceosome. Ribosomes are essential nanomachines for protein production and protein synthesis. The initial steps of ribosome biogenesis take place in the cell compartment. Spliceosome executes eukaryotic precursor messenger RNA (pre-mRNA) splicing to remove non-coding introns. It depends on the interactions of RNA-RNA, RNAprotein, and protein-protein. It is composed of several nucleoproteins and has the function of recognizing 5 ′ splicing site, 3 ′ splicing site, and branching point of mRNA precursor. It indicated that p22 interacting proteins mainly participated in the process of gene expression in the host cells, gave us a hint that p22 affected the gene and protein expression of cell host, and directly or indirectly affected the function of the biological process.
Furthermore, it is interesting that p22 interacting proteins were involved in pathogenic E. coli infection, tight junction, necroptosis, ribosome biogenesis in eukaryotes, RNA transport, regulation of actin cytoskeleton, cardiac muscle contraction, adrenergic signaling in cardiomyocytes, etc. The widely affected pathways reflected the wide range functions of p22 or its related proteins.
It is noteworthy that KEGG pathway analysis showed that seven of p22 interacting proteins participated in endocytosis. The results of the GO analysis indicated that a large number of p22 interacting proteins participated in binding. Above all, p22 was predicted to be involved in the entry process at the envelop of the virus. FIGURE 5 | PPI network of key proteins that interacted with p22 in the endocytosis process. The size of each node in the PPI networks presents the connect degree of each gene. Those nodes that were not connected to any node were omitted in the network. The selected proteins that interacted with the p22 protein in the endocytosis process were connected as a network. The network was generated in the STRING database. At last, it was possible that other pathways had an important influence on the progression of ASFV entry via some biological process, such as cGMP-PKG signaling pathway, cAMP signaling pathway, and AMPK signaling pathway, which were screened out by KEGG analysis. cGMP is the intracellular second messenger that mediates the action of nitric oxide (NO) and natriuretic peptides (NPs), affecting a wide range of physiologic processes (22). cGMP/PKG signaling pathway was associated with the replication of some viruses (23,24). cAMP is also one of the most common and universal second messengers; cAMP regulates pivotal physiologic processes including metabolism, secretion, calcium homeostasis, muscle contraction, cell fate, and gene transcription (25). AMPK is a central regulator of cellular energy homeostasis, regulating growth and reprogramming metabolism, as well as in cellular processes including autophagy and cell polarity (26). cAMP and AMPK are also closely connected with virus replication (27, 28). These involved pathways put forward the possibility that p22 and its interacting proteins might affect the replication of ASFV.
In those hub proteins connected in the PPI network, the ADP-ribosylation factor (Arf) protein family is part of the large Ras superfamily that encompasses small GTPases (29). Among this family, ARF6 stimulates actin polymerization, drives phagocytosis through multiple mechanisms, and assists autophagy as well (30). Other than Arf6, RAB10 also influences the GTPase activity (29), Rab10 is located on both Golgi and early endosomal/recycling compartments and plays an important role in lysosome exocytosis and plasma membrane repair (31). Alpha actinin belongs to the spectrin gene superfamily, which represents a diverse group of cytoskeletal proteins. Alpha actinin is an actin-binding protein. In non-muscle cells, it is involved in actin binding to the membrane. In skeletal, cardiac, and smooth muscle isoforms, it is localized to the Z-disc and analogous dense bodies and participates in anchoring the myofibrillar actin filaments. ACTN4 encodes a non-muscle, alpha actinin isoform, which is concentrated in the cytoplasm and involved in metastatic processes (32). MYH9 is involved in several important functions, including cytokinesis, cell motility, and maintenance of cell shape (33). ARPC2, actin-related protein 2/3 complex subunit 2, contains seven subunits, of which Arp2 and Arp3 belong to actin-related proteins (34). The activation of Arp2/3 complex could promote the synthesis of F-actin in the suitable condition (35). The Arp2/3 complex is involved in the rearrangement of the macrophage cytoskeleton and affected the phagocytosis of macrophages (36). Knockout of the Arp2/3 complex APC2 gene in mouse macrophages results in a decrease in F-actin polymerization and subsequent reduction in phagocytic capacity (37). In summary, the key proteins mentioned above and other hub proteins in PPI network were closely related to actin filament organization and movement, resulting in affecting the process of phagocytosis and endocytosis. Additional studies on the role of p22 in the process of endocytosis should be conducted. In addition to endocytosis, other PPI networks including regulation of actin cytoskeleton, DNA replication, spliceosome, tRNA ligases, and mitochondria were screened out, indicating that p22 interacting proteins functioned widely and participated in several biological processes.
CONCLUSIONS
Although several studies have been reported to elucidate the pathogenesis of ASFV, the viral protein function remains unclear. In this research, the proteins in the host cells interacted with p22, and the signaling pathways they might participate in were screened out by a high-throughput method, laying the foundation to elucidate the function of p22. For the pig industry, it would also be advantageous to study the pathogenesis of the disease and to monitor and predict the outcome to control the disease in the near future. | 4,832.8 | 2020-12-08T00:00:00.000 | [
"Biology",
"Medicine"
] |
A proposal for the EI index for fuzzy groups
In this article, a measure that quantifies the relational structure within and between groups is proposed, comprising not only the analysis of disjoint or non-disjoint groups, but also of fuzzy groups. This measure is based on the existing measure known as EI index. The current EI index is a measure of homophily applied to networks with the presence of disjoint groups, although disjoint groups on a large scale rarely exist in many empirical networks. In addition, the combination of edge and node weights in the evaluation of the EI index is also proposed. We tested the measure in two networks in different contexts. The first is a co-authorship network, where researchers, actors in the network, are divided according to the time of Ph.D. completion. The second network is formed by trade relations between countries of the American continent, where countries are grouped according to the Human Development Index. The application of the proposed measure in these two networks is justified by the imprecision of the information or by the difficulty of allocating nodes in a specific group, being necessary to define affiliation levels. Therefore, the new measure allows expanding the analysis of social networks, for different types of attributes, thus generating previously unexplored knowledge.
Introduction
In general, a social network is a structure formed by nodes (actors) and edges (interactions) used in studies of the relationships between individuals, groups or organizations. Focused essentially on topological structure, social networks studies apply a set of methods and measures to identify, visualize and analyze social networks looking for patterns of interactions and their implications (Newman 2001b, a).
In several networks, it is common to observe that actors tend to have affinities or similarities (attributes) with their peers. According to Crandall et al. (2008) there are two mechanisms of reasons for this, for example, actors can modify their behavior to make them more in line with the behavior of their peers, a process known as social influence (Friedkin 2006). Another distinct reason, an effect termed homophily, is that actors tend to form relationships with others who are already like them. In other words, in homophily, individual characteristics drive the formation of links, while in social influence, the links existing in the network serve to engage actors' characteristics. Kim and Altmann (2017) mention that the nature of homophily is shown in many empirical and theoretical studies. The study of these authors also concluded that homophily affects network formation. Homophily is the term used for the preference of actors to connect with other actors who share common attributes (McPherson et al. 2001). In studies on homophily, we seek to know if the nodes of a network disproportionately establish links with others that resemble them in some way, that is, we want to verify the occurrence of a higher incidence of relations between actors that have similar attributes.
However, actors can belong to many associative groups simultaneously, with various levels of affiliation, and distinct disjoint groups rarely exist on a large scale in many empirical networks (Leskovec et al. 2008). (Saha et al. 2014) also mentions that people participate in a wide variety of groups. In addition, Lee and Brusilovsky (2017) point out that society is currently goaded by information and knowledge, what generates new homophily dimensions. Information, knowledge and some attributes such as economic blocks in commercial networks; communities on social networks such as Facebook, Twitter, among others; and other attributes linked to behaviors, tastes and attitudes generate non-disjoint groups. Currently, publications that use the E I index as a measure of homophily are concentrated in disjoint or mutually exclusive groups. Situations in which network actors are present in more than one group are not commonly explored. One of the barriers found in the analysis of non-disjoint groups is the absence of a measure, since the E I index is defined for disjoint groups (Andrade and Rêgo 2019).
Motivated by this gap, Andrade and Rêgo (2019) suggest a method that generalizes the E I index developed by Krackhardt and Stern (1988). This method quantifies the relational structure within and between groups that encompasses the analysis of both disjoint and non-disjoint groups. Furthermore, we observe that the process of social influence has already been studied in the context of fuzzy groups (Li and Wei 2019;Khalid and Beg 2019).
In this context, the objective of this work is to expand the generalized metric suggested by Andrade and Rêgo (2019), adapting it to also cover groups where the nodes present several levels of affiliations, fuzzy groups. We can highlight as advantages of study, for example, the ability to address networks that analyze political behavior, studying relationships between voters with different positions in the political spectrum and networks of friendships with bilingual speakers, analyzing the relationships between speakers with different levels of language fluency. In our work, we analyzed two networks. A co-authorship network formed by researchers with a Ph.D. in production engineering, where the time of Ph.D. completion, defined the fuzzy groups. The other network is formed by trade relations between American countries, in which we use the Human Development Index (HDI) to form fuzzy groups. This paper is organized as follows. In Sect. 2, we briefly present the E I index proposed by Krackhardt and Stern (1988), which measures homophily in networks with disjoint groups. Then, in Sect. 3, we present our measure, which is a generalization of the current E I index, encompassing fuzzy groups. Two applications of the proposed measure are made in Sect. 4. Finally, we discuss the results of the applications in Sect. 5 and present conclusions.
EI index
The E I index, proposed by Krackhardt and Stern (1988), essentially quantifies the relational structure within and between groups (Everett and Borgatti 2012;Krackhardt 1994). The E I index was implemented in the popular social network analysis package UCINET (1999) as a measure for homophily. This measure analyzes the tendency of people to connect with others similar to them, as well as social insertion, i.e., how a node or group of nodes decides to connect to other nodes in a network Hanneman and Riddle (2005).
Homophily is one of the most widespread and robust trends in human interaction, describing how people tend to seek out and interact with others who are more like themoften characterized as "birds of a feather" named by McPherson et al. (2001). As a mechanism of social relations, it can explain the group composition in terms of social identities ranging from ethnicity to age (Lazarsfeld et al. 1954). Indeed, ethnicity, along with geography and kinship, are the main motivating factors behind homophilic practices (McPherson et al. 2001). Everett and Borgatti (2012) are among the researchers who treat the E I index as a measure of homophily and heterophily, where smaller values (internal connections) indicate greater homophily and larger values (external connections) indicate lower homophily or greater heterophily. The E I index as a measure of homophily is essentially used to quantify the individuals' propensity to interact with similar actors (Burt 1991;McPherson et al. 2001). In addition, the E I can be used as a segregation measure (Sweet and Zheng 2017), where segregation is defined as the "unequal" distribution of two or more groups of people in different units or social positions (Bojanowski and Corten 2014).
The E I index is defined as the difference between the intergroup and intragroup ties divided by the total number of ties for normalization. It is a simple and attractive measure of homophily because it does not depend on the density of the network (Everett and Borgatti 2012). Formally, the E I index is given by where E L is the number of external links (links between nodes belonging to different groups); I L is the number of internal links (links between nodes belonging to the same group). The E I index ranges from -1 (all bonds are internal) to +1 (all bonds are external). The index can be calculated for the entire network, for each group or for each individual actor.
Although commonly used in an unweighted network, some authors like Andrade and Rêgo (2018) and Danchev and Porter (2016) have also used the E I index in weighted networks. In weighted networks, the E I index is calculated using the weight of the edges, this way E L is the sum of the weights of the edges that connect different cells of the partition and I L is the sum of the weights of the edges that connect actors of the same cell of the partition. As with the unweighted network, the E I index for weighted networks assumes values between −1 and +1. Generally, the weight of an edge represents the frequency or strength of the relationship. Therefore, when the value of the E I index approaches −1, it means that the internal relations are stronger or more intense. As the index approaches +1, it shows that external relations are stronger or more intense.
In recent years, the inclusion of numerical attributes has been observed in the analysis of social networks. Attributes are resources of nodes and are used to give weight to them, representing their importance or contribution in the network (Andrade and Rêgo 2018;Liu et al. 2015;Benyahia and Largeron 2015). In this work, we will also consider the nodes' weights and insert them in the topological structure of the network. For this, we use the method proposed by Andrade and Rêgo (2018). By this method, the edge weight is equal to the frequency or strength of the relationship between two nodes multiplied by the average weights of the nodes. The intuition is that, in cases where information about quantitative features of nodes is available, the weight of a link should not only depend on the strength of the connection (original edge weight), but also on the average importance of the connected nodes. Formally, if v i is the weight of node i and w i j is the original weight of the link between nodes i and j, then, including the nodes' weights, the new edges' weights are given by The inclusion of the nodes' weights contributes to a more efficient analysis of the network by combining factors inherent to the network with external factors (Andrade and Rêgo 2018). External factors attribute a certain "status" to individuals in the network and through the E I index it is possible to verify whether this status also influences the formation of relationships. However, this conclusion is only reached by comparing it with the E I index without considering external factors.
EI index: fuzzy case
Every day, when describing certain phenomena (characteristics), we use degrees that represent qualities or partial truths.
As an example, let us consider the group of elderly people. There are at least two approaches to mathematically formalize this set. The first, distinguishing from which age the individual is considered elderly. For example, A = {x : x ≥ 65}, where x is an individual age measured in years. In this case, the set is well-defined. The second, less conventional, occurs in such a way that individuals are considered elderly to a greater or lesser extent, that is, there are ele-ments that would belong more to the elderly class than others. This means that the younger the individual, the lower his or her degree of belonging to that class. Thus, we can say that individuals belong to the elderly class, with greater or lesser intensity. Mathematically, we call fuzzy sets the sets to which the elements have degrees of membership. As opposed to the traditional sets where elements belong or not to them, to define a fuzzy set, B, we need to specify a membership function, μ B : → [0, 1], where μ B (w) represent for an element w of the universe, , to what extent w belongs to B and higher values of μ B (w) indicate a higher membership degree. The formalization of fuzzy sets was presented by Zadeh (1996) as an extension of the classical notion of sets.
To explore cases of fuzzy groups, we have developed a new metric to obtain the E I index, which is an adaptation of the metric proposed by Andrade and Rêgo (2019) to generalize the original E I index measure for use with overlapping groups.
Let A be the set of all attributes for nodes in a social network with n nodes. For X ∈ A, let μ X (v i ) be the membership level of node v i to a given group, 0 ≤ μ X (v i ) ≤ 1. Moreover, for a generic set of nodes, S, consider the following sets of indices Thus, the number of external and internal links for a generic set of nodes, S, is given, respectively, by: where in the unweighted case x i j is 1 or 0 depending on whether there is or not a link between nodes v i and v j , in the case of only edge weights x i j = w i j and in the case of edge and node weights x i j = z i j . Alternatively, for X ∈ A, we can define the number of external and internal links for the group of nodes, S X , which has attribute X , respectively, as follows: and where x i j is defined exactly as before.
Since membership functions by definition assume values between 0 and 1 and the definitions of external and internal links involve products of membership functions, in order to avoid overestimating the external links, we recommend the use of trapezoidal membership functions. In order to obtain the trapezoidal membership functions, we suggest performing the following steps: (i) Determine the highest value before which the degree of membership is known to be null. (ii) Determine the lowest value from which it is known for certain that the degree of membership is null. (iii) Determine the lowest value with degree of membership 1. (iv) Determine the highest value with degree of membership 1.
To better explain our proposed method, we present here a simple example to explain how the new metric works on a specific network. Suppose there is a network with four nodes that belong with different membership levels to two groups, A and B (as show in Figure 1). In the network, let us consider calculating the E I index for the set of nodes {1, 2}. Note that nodes 1 and 2 have no connection and that node 0 is connected to both of them. Disregarding the edges' and nodes' weights, we have x 10 = 1 or x 01 = 1 and x 20 = 1 or and Fig. 1. It is easy to verify that the proposed metric is a generalization of the E I index proposed in Krackhardt and Stern (1988) in the sense that if groups are disjoint and the membership functions are either 0 or 1, then it coincides with (1).
Homophily in co-authorship and trade networks
In this section, we apply the proposed method in two networks studied in previous publications. These networks present the fundamental element for our approach, which is the presence of fuzzy groups, in addition to information about the nodes' weights. As a means of comparison, we also analyze the cases of disjoint (Everett and Borgatti 2012) and non-disjoint (Andrade and Rêgo 2019) groups. In this way, the E I index will be obtained for 4 situations: without considering the weight of edges and nodes, unweighted (UW); regarding only the nodes' weight, Z_unweighted (ZU); considering only the edges' weight, weighted (W); taking into account both weights, Z_weighted (ZW).
To evaluate whether the E I index for a given group is compatible with what is expected when connections occur randomly, i.e., without preference of members for external or internal relations, for the unweighted and the Z_unweighted cases, we calculate the expected E I index for each one of the analyzed cases considering the average of 5000 randomly generated binomial graphs with the same density and size as that of the original graphs. We also added a probability, p-value, which expresses how unlikely it is to obtain an E I index at least as extreme as that observed in the randomly generated binomial graphs. We considered one-sided p-values calculated by the relative frequency of times that the simulated E I obtained a value greater (resp., smaller) than or equal to the observed E I , when the expected E I is smaller (resp., larger) than the observed one.
Data
To implement the proposed E I index, we use data from two real networks. Next, we give some details about these networks.
PQ network
First, we show how the arbitrary choice of disjoint groups, according to the Ph.D. completion time, affects the E I index of these groups. We delimit three cases of disjoint groups (T1, T2 and T3) varying the limits of the groups, Table 2, in the fuzzy regions, Table 3. Figure 2 shows the E I index for the entire network, for each of the arbitrary limits. As expected, the result is heavily dependent on these limits. The definitions of the groups formed according to the Ph.D. completion time for the disjoint, non-disjoint and fuzzy case, followed the criteria in Table 3. For the disjoint case, we consider the intermediate case T2.
We use the researchers' h-index as the node weights. The h-index is a measure that combines, in a simple way, the number of publications and the impact of publications and is given by the maximum value h such that a researcher has published h works and each of these works has been cited h or more times Hirsch (2010). Figure 3 shows how the relationships between researchers occur. In general, most nodes in the non-disjoint case have an E I index of −1 (60%). In the fuzzy case and in the disjoint case, the nodes present similarity in relation to the proportion of E I index higher and lower than zero; however, in the fuzzy case, the distribution of the E I index is more uniform. Figure 4 shows the E I index for the entire network. In general, when the nodes belong to non-disjoint groups, it is observed that the E I indexes are smaller, with a predominance of in-group relationships. On the other hand, when the groups are disjoint, the network has higher but still negative Fig. 5. In general, when nodes belong to nondisjoint groups, it is observed that the E I indexes are smaller. In the case of disjoint and fuzzy groups, the E I indexes are The experienced group's E I indexes are negative, especially in the non-disjoint case. This shows that the internal connections of this group are larger than the external ones. The youth and senior groups have a positive EI index, with the youth being superior to seniors. This shows that the external relations of these surpass the internal ones. Therefore, we can conclude that the experts cooperate with each other while young and senior Ph.D. are more open to cooperating with other groups. It is worth mentioning that the E I indexes obtained do not reveal a tendency towards homophily or heterophily, as they do not differ significantly from the results obtained by the random simulated network, since the p-values are all greater than 0.05. Note that the edge weighting affected more the E I index of the disjoint case, making the relationships more heterogeneous. This is most noticeable in the case of experienced groups.
We also analyzed the behavior of groups of researchers with the same level of scholarship in relation to the experience level group attributes. The scholarship level in order of importance and the total number of researchers are: 1A (8%), 1B (5%), 1C (8%), 1D (19%) and 2 (59%). The analyses of the EI index of these groups are shown in Fig. 6 for the cases of disjoint, non-disjoint and fuzzy groups, and studying the UW, ZU, W and ZW networks. In general, when nodes belong to non-disjoint groups, it is observed that the E I indexes are smaller, with in-group relationships predominating. On the other hand, when the groups are disjoint or fuzzy, the network has higher E I indexes.
As for scholarship levels, there is a different behavior of the E I indexes for the different connection types, weighted or not. Level 1A has the highest E I indexes in the unweighted network, without or with the inclusion of the node weights and in the weighted network considering the node weights. Level 1A, the highest level of the scholarship, concentrates the most productive and influential researchers in the research area, being composed of 10 exclusively senior researchers and 2 exclusively experienced researchers. Although most are seniors, the in-group relationship is predominant in the non-disjointed case and external relationships are more common when the group is fuzzy or disjoint. Level 1A E I indexes are all negative in the weighted network. Level 1C, an intermediate scholarship level, also does not include young researchers. In the weighted network, with and without node weights, as well as in the unweighted network (only in the non-disjoint case), the E I index of the level 1C is the small- est and negative. Therefore, for researchers at this level, most connections occur between researchers in the same experience level group. It is noteworthy that the E I indexes obtained do not reveal a tendency towards homophily or heterophily, as they do not differ significantly from the results obtained by random simulated networks since the p-values are all greater than 0.05.
Trade of American countries network
We use the Human Development Index (HDI) to form groups and first show how the arbitrary choice of disjoint groups, according to the HDI, affects the E I index of these groups. We delimited three cases of the disjoint groups (T1, T2 and T3) varying the thresholds of the groups, Table 4, in the fuzzy regions, Table 5. Figure 7 shows the E I index for the entire network, for each of the arbitrary thresholds. As expected, the result is heavily dependent on these limits.
The definitions of the groups formed according to the HDI for the disjoint, non-disjoint and fuzzy case, followed the criteria in Table 5, where the intermediary case T2 was used for the disjoint case. Figure 8 shows the E I index at the individual level of the 30 countries. In general, countries have positive E I indexes, that is, intergroup relations higher than in-groups. In the non-disjoint case, it is possible to notice that some countries predominate in-group relations. The in-group relationship is also more visible when the network is unweighted. Figure 9 shows the E I index for the entire network. In general, when nodes belong to non-disjoint groups, it is observed that the E I indexes are smaller. On the other hand, when the groups are fuzzy, the network has higher E I indexes. The E I indexes are positive, except the E I index in the case of Fig. 10. In general, the low and medium groups have the highest E I indexes, close to 1. The countries of these groups have intergroup relations higher than in-groups, the E I indexes are statistically significant, that is, these groups are prone to heterophilia. The group with high HDI has the lowest E I indexes in the unweighted network, being the one with the highest ingroup relationship, but the E I indexes increase significantly in the Z_Unweighted, weighted and Z_Weighted networks. Thus, the relationships are stronger with other groups in these networks. The group of countries with very high HDI has the lowest E I indexes in the weighted network, with and without the node weights, revealing a closer relationship between countries in the group. The E I indexes of the groups with high and very high HDI do not differ statistically from those presented by the random simulated network.
We also analyzed the behavior of groups of countries by region in relation to the HDI group attributes. The regional divisions are north, south and central, with 3, 12 and 15 countries, respectively. The analyses of the E I indexes of these groups are shown in Fig. 11 for the cases of disjoint, nondisjoint and fuzzy groups, and studying the UW, ZU, W and ZW networks. In general, when nodes belong to non-disjoint groups, it is observed that the E I indexes are smaller. On the other hand, when the groups are disjoint or fuzzy, the regions have higher E I indexes.
As for the regions, there is a behavior different from the E I index depending on the connection type, weighted or unweighted. The northern region has the highest E I indexes on the UW and ZU networks. The northern region's E I indexes decrease in the weighted network, indicating that northern region have stronger relations with countries in the same HDI group. The southern region in the UW network has the lowest E I indexes, positive in the disjoint and fuzzy case, and negative in the non-disjoint case. In weighted networks, with and without node weights, the E I indexes are positive and higher in the southern region, indicating that the forces of relations are more intense between countries of different HDI groups. The E I indexes of the regions do not reveal a tendency towards homophily or heterophily, as they do not
Conclusion
In this work, we have proposed a new network measure, which is a generalization of the E I index to measure homophily in cases of fuzzy groups. Fuzzy groups are particularly important when actors may belong to many associative groups simultaneously and with various levels of affiliation. Therefore, for a better understanding of the structure of networks, the measure developed allows the analysis of multiple associations and different levels of association. We also show that incorporating node weights into the analysis can give us more insights into the homophily of relations.
We explored two networks with the new measure. In a coauthorship network, the Ph.D. completion time was used to form groups. In a commercial network among countries, we use the Human Development Index (HDI) to form groups. We obtain the E I index for the networks considering the cases of disjoint, non-disjoint and fuzzy groups, and analyzing different relational forces, unweighted, weighted, without and with node weights. As we have seen in these networks, the proposed measure allows expanding the analysis of social networks. Through a homophily analysis, it is possible to identify whether a certain group of nodes has a tendency to work together or not.
In general, it is clear that fuzzy groups generate more homogeneous cooperation or commercial relations. This was already expected due to the fact that the actors present multiple associations with the same degree of association, equal to 1. In the co-authorship network, we noticed that the researchers allocated as experienced are the ones that cooperate the most with each other. These relationships are favored because there are more experienced researchers. The smaller number of young and senior researchers also justifies the predominance of external relations by these researchers. In the trade network, we noticed that relations between countries with different levels of development are more common. In the case of the groups with low and medium HDI, we note that the E I index close to 1 is statistically significant, revealing the tendency towards heterophilia in these two groups, revealing their dependency on more developed countries.
In addition to the two examples of networks used to illustrate the measure, other networks also present actors that belong to different groups of attributes and that, due to the imprecision or limitations of the information, it is necessary to resort to the fuzzy system. Thus, we expect that many other studies may benefit from this measure. Data availability Enquiries about data availability should be directed to the authors.
Conflict of interest
The authors have not disclosed any competing interests. | 6,519.8 | 2021-11-08T00:00:00.000 | [
"Computer Science",
"Mathematics"
] |
Discrepancy estimates for variance bounding Markov chain quasi-Monte Carlo
Markov chain Monte Carlo (MCMC) simulations are modeled as driven by true random numbers. We consider variance bounding Markov chains driven by a deterministic sequence of numbers. The star-discrepancy provides a measure of efficiency of such Markov chain quasi-Monte Carlo methods. We define a pull-back discrepancy of the driver sequence and state a close relation to the star-discrepancy of the Markov chain-quasi Monte Carlo samples. We prove that there exists a deterministic driver sequence such that the discrepancies decrease almost with the Monte Carlo rate $n^{1/2}$. As for MCMC simulations, a burn-in period can also be taken into account for Markov chain quasi-Monte Carlo to reduce the influence of the initial state. In particular, our discrepancy bound leads to an estimate of the error for the computation of expectations. To illustrate our theory we provide an example for the Metropolis algorithm based on a ball walk. Furthermore, under additional assumptions we prove the existence of a driver sequence such that the discrepancy of the corresponding deterministic Markov chain sample decreases with order $n^{-1+\delta}$ for every $\delta>0$.
Introduction
Markov chain Monte Carlo (MCMC) simulations are used in different branches of statistic and science to estimate an expected value with respect to a probability measure, say π, by the sample average of the Markov chain. This procedure is of advantage if random numbers with distribution π are difficult to construct.
When sampling the Markov chain the transitions are usually modeled as driven by i.i.d. U(0, 1) s random variables for some s ≥ 1. But in simulations the driver sequences are pseudo-random numbers. In many applications, if one uses a carefully constructed random number generator, this works well. Instead of modeling the Markov chain with random numbers, or imitating random numbers, the idea of Markov chain quasi-Monte Carlo is to construct a finite, deterministic sequence of numbers, (u i ) 0≤i≤n in [0, 1] s for all n ∈ N, to generate a deterministic Markov chain sample and to use it to estimate the desired mean.
The motivation of this conceptual change is that carefully constructed sequences may lead to more accurate sample averages. For example, quasi-Monte Carlo (QMC) points lead to higher order of convergence compared to plain Monte Carlo, which is a special case of MCMC. Numerical experiments for QMC versions of MCMC also show promising results [LS06,Lia98,OT05,Sob74,Tri07]. In particular, Owen and Tribble [OT05] and Tribble [Tri07] report an improvement by a factor of up to 10 3 and a better convergence rate for a Gibbs sampler problem.
In the work of Chen, Dick and Owen [CDO11] and Chen [Che11] the first theoretical justification for Markov chain quasi-Monte Carlo on continuous state spaces is provided. The authors show a consistency result if a contraction assumption is satisfied and the random sequence is substituted by a deterministic 'completely uniformly distributed' sequence, see [CDO11,CMNO12,TO08]. Thus the sample average converges to the expected value but we do not know how fast this convergence takes place.
Recently, in [DRZ13] another idea appears. Namely, the question is considered whether there exists a good driver sequence such that an explicit error bound is satisfied. It is shown that if the Markov chain is uniformly ergodic, then for any initial state a deterministic sequence exists such that the sample average converges to the mean almost with the Monte Carlo rate.
However, in [CDO11] and [DRZ13] rather strong conditions, the contraction assumption and uniform ergodicity, are imposed on the Markov chain.
We substantially extend the results of [DRZ13] to Markov chains which satisfy a much weaker convergence condition. Namely, we consider variance bounding Markov chains, introduced by Roberts and Rosenthal in [RR08], and show existence results of good driver sequences. In the following we describe the setting in detail and explain our main contributions.
Main results
The MCMC sampling can be represented via X i+1 = ϕ(X i , U i ) for i ≥ 2, with X 1 = ψ(U 1 ) and the U i ∼ U(0, 1) s are i.i.d. The state X i is an element in G ⊆ R d , the function ϕ : G × [0, 1] s → G is called update function and ψ : [0, 1] s → G is called generator function. The update function corresponds to a transition kernel, say K. For f : G → R let E π (f ) = G f (x)π(dx) be the desired mean and P f (x) = G f (y)K(x, dy) be the Markov operator induced by the transition kernel K. We assume that the transition kernel is reversible with respect to the distribution π and that it is variance bounding, see [RR08]. Roughly, a Markov chain is variance bounding if the asymptotic variances for functionals with unit stationary variance are uniformly bounded. Equivalent to this is the assumption that Λ < 1 with Λ = sup{λ ∈ σ(P − E π | L 2 )} (1) where σ(P − E π | L 2 ) denotes the spectrum of P − E π on L 2 . For example let us consider the two state Markov chain which always jumps from one state to the other one. It is periodic and satisfies Λ = −1, thus it is variance bounding. With this toy example in mind let us point out that the Markov chain does not need to be uniformly or geometrically ergodic, it might even be periodic, and the distribution of X i , for i arbitrarily large, is not necessarily close to π. By a deterministic sequence (u i ) i≥0 we generate the deterministic Markov chain (x i ) i≥1 with x 1 = ψ(u 0 ) and The efficiency of this procedure is measured by the star-discrepancy, a generalized Kolmogorov-Smirnov test, between the stationary measure π and the empirical distribution π n (A) = 1 where A denotes a certain set of subsets of G. By inverting the iterates of the update function we also define a push-back discrepancy of the driver sequence (the test sets are pushed back). We show that for large n ∈ N both discrepancies are close to each other.
The main result, in a general setting, is an estimate of D * A ,π (S n ) (Theorem 2) under the assumption that we have an approximation of A , for any δ > 0, given by a so-called δ-cover Γ δ of A with respect to π (Definition 5). The proof of the main result is based on a Hoeffding inequality for Markov chains. After that we prove that a sufficiently good δ-cover exists if π is absolutely continuous with respect to the Lebesgue measure and the set of test sets is the set of open boxes restricted to G anchored at −∞, i.e. we consider the set of test sets with f H 1 defined in (24). Thus a bound on the discrepancy leads to an error bound for the approximation of E π (f ). We show for all n ≥ 16 that there exists a driver sequence u 0 , . . . , u n−1 ∈ [0, 1] s such that S n = {x 1 , . . . , x n } given by where dν dπ is the density of ν = P ψ (the probability measure induced by ψ) with respect to π and Λ 0 = max{Λ, 0} with Λ is defined in (1). For the details we refer to Corollary 4 below. This implies, by the Koksma-Hlawka inequality, that the sample average converges to the mean with O(n −1/2 (log n) 1/2 ).
Additionally we might take a burn-in period of n 0 steps into account to reduce the dependence of the initial state in the discrepancy bound. Roughly, the idea is to generate a sequence x 1 , . . . , x n 0 +n by the Markov chain quasi-Monte Carlo procedure and to consider the discrepancy of the last n 0 states, i.e. of S [n 0 ,n] = {x n 0 +1 , . . . , x n 0 +n }. Under suitable convergence conditions of the Markov chain, for example the existence of an absolute L 2 -spectral gap (see Definition 1), the density d(νP n 0 ) dπ is close to 1, see Subsection 4.3. If we further assume that one can reach every state from every other state within one step of the Markov chain, then we prove that there exists a driver sequence such that the discrepancy converges with O(n −1 (log 2 n) (3d+1)/2 ). We call the additional assumption 'anywhere-to-anywhere' condition. The result shows that in principle a higher order of convergence for Markov chain quasi-Monte Carlo is possible. Note that, many well studied Markov chains satisfy such a condition, for example the hit-and-run algorithm, the independent Metropolis sampler or the slice sampler, see for example [Liu08]. However, it is not clear how to obtain suitable driver sequences which yield such an improvement. We provide an outline of our work in the following.
Outline
In the next section the necessary background information on Markov chains is stated. Section 3 is devoted to the study of the relation of the discrepancies. The Monte Carlo rate of convergence for deterministic MCMC is shown in Section 4. There we also provide results for the case when a burn-in period is taken into account. Section 5 deals with the set of test sets which consists of axis parallel boxes, see B above. We show the existence of a good δ-cover and how the discrepancy bounds can be used to obtain bounds on the error for the computation of expected values of smooth functions. This yields a Koksma-Hlawka inequality for Markov chains. To illustrate our results, we provide an example of a Metropolis algorithm with ball walk proposal on the Euclidean unit ball. A special situation arises when the update function of the Markov chain has an 'anywhere-to-anywhere' property, see Section 6. In this situation we show that a convergence rate of order almost n −1 can be obtained.
Background on Markov chains
Let G ⊆ R d and let B(G) denote the Borel σ-algebra of G. In the following we provide a brief introduction to Markov chains on (G, B(G)). We assume that K : G × B(G) → [0, 1] is a transition kernel on (G, B(G)), i.e. for each x ∈ G the mapping A ∈ B(G) → K(x, A) is a probability measure and for each A ∈ B(G) the mapping x ∈ G → K(x, A) is a B(G)-measurable realvalued function. Further let ν be a probability measure on (G, B(G)).
Then let (X n ) n∈N , with X n mapping from some probability space into (G, B(G)), be a Markov chain with transition kernel K and initial distribution ν. This might be interpreted as follows: Let X 1 = x 1 ∈ G be chosen with ν on (G, B(G)) and let i ∈ N. Then for a given X i = x i , the random variable X i+1 has distribution K(x i , ·), that is, for all A ∈ B(G), the probability that X i+1 ∈ A is given by K(x i , A).
Let π be a probability measure on (G, B(G)). We assume that the transition kernel K is reversible with respect to π, i.e. for all A, B ∈ B(G) holds This implies that π is a stationary distribution of the transition kernel K, i.e. for all A ∈ B(G) holds G K(x, A) π(dx) = π(A). (3) We assume that the stationary distribution π is unique. Let L 2 = L 2 (π) be the set of all functions f : G → R with The transition kernel K induces an operator acting on functions and an operator acting on measures. For x ∈ G and A ∈ B(G) the operators are given by where f ∈ L 2 and ν is a signed measure on (G, B(G)) with a density dν dπ ∈ L 2 . By the reversibility with respect to π we have that P : L 2 → L 2 is self-adjoint and π-almost everywhere holds P ( dν dπ )(x) = d(νP ) dπ (x) . For details we refer to [Rud12].
In the following we introduce two convergence properties of transition kernels. Let the expectation with respect to π be denoted by E π (f ) = G f (y)π(dx). Let L 0 2 = {f ∈ L 2 : E π (f ) = 0} and note that L 0 2 is a closed subspace of L 2 . We have P − E π L 2 →L 2 = P L 0 2 →L 0 2 , for details see [Rud12, Lemma 3.16, p. 44].
Definition 1 (absolute L 2 -spectral gap) We say that a transition kernel K and its corresponding Markov operator P has an absolute L 2 -spectral gap if β = P L 0 2 →L 0 2 < 1, and the absolute spectral gap is 1 − β.
Let us introduce the total variation distance of two probability measures ν 1 , ν 2 on (G, B(G)) by Note that for a Markov chain (X n ) n∈N with transition kernel K and initial distribution ν holds P ν,K (X n ∈ A) = νP n−1 (A), where ν and K in P ν,K indicate the initial distribution and transition kernel. Then we obtain the following relation between the absolute L 2 -spectral gap and the total variation distance. The result is an application of [Rud12, Corollary 3.15 and Lemma 3.21].
Proposition 1 Let ν be a distribution on (G, B(G)) and assume that there exists a density dν dπ ∈ L 2 . Then The next convergence property is weaker than the existence of an absolute spectral gap.
Definition 2 (Variance bounding or L 2 -spectral gap) We say that a reversible transition kernel K and its corresponding Markov operator P is variance bounding or has an L 2 -spectral gap if where spec(P | L 0 2 ) denotes the spectrum of P : L 0 2 → L 0 2 .
For a motivation of the term variance bounding and a general treatment we refer to [RR08]. In particular by [RR08,Theorem 14] under the assumption of reversibility our definition is equivalent to the one stated by Roberts and Rosenthal. Note that the existence of an absolute L 2 -spectral gap implies variance bounding, since We have the following relation between variance bounding and the total variation distance.
Lemma 1 Let the transition kernel K be reversible with respect to π and let n ∈ N with n ≥ 2. Further, let P be variance bounding. Then the Markov operator P n = 1 n n−1 j=0 P j has an absolute L 2 -spectral gap. In particular, if ν is a distribution on (G, B(G)) with dν dπ ∈ L 2 , then Proof. By the spectral Theorem for bounded self-adjoint operators we have for a polynomial F : spec(P | L 0 2 ) → R that For details see for example [Rud91] or [Kre89, Theorem 9.9-2]. In our case The last inequality is proven by spec(P |L 0 2 ) ⊆ [−1, 1] and the following facts: For λ ∈ [−1, 0] holds 1−λ n n·(1−λ) ≤ 1 n and for λ ∈ [0, 1] the function 1−λ n n·(1−λ) = 1 n n−1 j=0 λ j is increasing. The estimate of the total variation distance follows by Proposition 1. ✷ The next part deals with an update function, say ϕ, of a given transition kernel K. We state the crucial properties of the transition kernel in terms of an update function. This is partially based on [DRZ13].
Let λ s denote the Lebesgue measure on R s . Then the function ϕ is an update function for the transition kernel K if and only if where P is the probability measure for the uniform distribution in [0, 1] s .
Note that for any transition kernel on (G, B(G)) there exists an update function, see for example [Kal02, Lemma 2.22, p. 34]. For x ∈ G and A ∈ B(G) the set B(x, A) is the set of all random numbers u ∈ [0, 1] s which take x into the set A using the update function ϕ with arguments x and u.
We consider the iterated application of an update function. Let ϕ 1 (x; u) = ϕ(x; u) and for i > 1 with i ∈ N let Thus, x i+1 = ϕ i (x; u 1 , u 2 , . . . , u i ) ∈ G is the point obtained via i updates using u 1 , u 2 , . . . , u i ∈ [0, 1] s , where the starting point is x ∈ G.
Proof. The proof follows by induction on i. ✷ is the set of all random numbers u 1 , u 2 , . . . , u i ∈ [0, 1] s which take x into the set A using the ith iteration of the update function ϕ, i.e. ϕ i with arguments x and u 1 , u 2 , . . . , u i .
In [DRZ13] we considered the case where the initial state is deterministically chosen. The following assumption is useful to work with general initial distributions.
Assumption 1 For a probability measure ν on (G, B(G)) we assume that ψ : [0, 1] s → G is a generator function, i.e. ψ satisfies For a probability measure ν on (G, B(G)) let Assumption 1 be satisfied.
is the set of possible sequences to get into the set A with starting point ψ(u 0 ) and i updates of the update function. The next lemma is important to understand the relation between the update function, generator function, transition kernel and initial distribution.
Lemma 3 Let K be a transition kernel and ν a distribution on (G, B(G)). Let ϕ be an update function for the transition kernel K. Let (X n ) n∈N be a Markov chain with transition kernel K and initial distribution ν. Further, let Assumption 1 for ν be satisfied. Let i ∈ N and F : G i → R. The expectation of F with respect to the joint distribution of X 1 , . . . , X i is given by whenever one of the integrals exist.
Proof. By Assumption 1 we have and by Lemma 2 we obtain By iterating the application of Lemma 2 the assertion is proven. ✷ Note that the right-hand-side of (8) is the expectation with respect to the uniform distribution in [0, 1] is .
Proof. By Lemma 3 we have which completes the proof. ✷
On the push-back discrepancy
Let A ⊆ B(G) be a set of test sets. Then the star-discrepancy of a point set S n = {x 1 , . . . , x n } ⊆ G with respect to the distribution π is given by Assume that u 0 , u 1 , . . . , u n−1 ∈ [0, 1] s is a finite deterministic sequence. We call this finite sequence driver sequence. Further, let ϕ : G × [0, 1] s → G and ψ : [0, 1] s → G be measurable functions. Then let the set S n = {x 1 , . . . , x n } ⊆ G be given by where x 1 = ψ(u 0 ). Note that ψ might be considered as a generator function and ϕ might be considered as an update function. We now define a discrepancy measure on the driver sequence. We call it push-back discrepancy. Below we show how this push-back discrepancy is related to the star-discrepancy of S n .
Definition 4 (Push-back discrepancy) Let U n = {u 0 , u 1 , . . . , u n−1 } ⊂ [0, 1] s and let C i,ψ (A) for A ∈ B(G) and i ∈ N ∪ {0} be defined as in (7). Define the local discrepancy function by Let A ⊆ B(G) be a set of test sets. Then we define the discrepancy of the driver sequence by The discrepancy of the driver sequence D * A ,ψ,ϕ (U n ) is a 'push-back discrepancy' since the test sets C i,ψ (A) are derived from the test sets A ∈ A from the star-discrepancy D * A ,π (S n ) via inverting the update function and the generator.
The following theorem provides a relation between the star-discrepancy of S n and the push-back discrepancy of U n , this is similar to [DRZ13, Theorem 1].
Theorem 1 Let K be a transition kernel and ν be a distribution on (G, B(G)). Let ϕ be an update function for K and let us assume that ν satisfies Assumption 1 with generator function ψ. Further, let U n = {u 0 , u 1 , . . . , u n−1 } ⊂ [0, 1] s be the driver sequence, such that S n is given by (10). Let A ⊆ B(G) be a set of test sets. Then Proof. For any A ∈ A we have by (9) that λ (i+1)s (C i,ψ (A)) = νP i (A). Thus The inequality follows by the same arguments. ✷ Corollary 2 Assume that the conditions of Theorem 1 are satisfied. By P denote the Markov operator of K. Further, let K be reversible with respect to π, let P be variance bounding and let dν dπ ∈ L 2 . Then where Λ 0 = max{0, Λ} and Λ is defined in (4).
Proof. With P n = 1 n n−1 Thus, the assertion follows by Lemma 1 and Theorem 1. ✷ Remark 1 For the moment let us assume that we can sample with respect to π. For any initial distribution ν with dν dπ ∈ L 2 , for all x ∈ G and A ∈ B(G) we set K(x, A) = π(A), hence Λ = 0. Thus Note that the discrepancies do not coincide. The reason for this is that the initial state is taken into account in the average computation. However, if ν = π, then for any reversible transition kernel with respect to π we obtain D * A ,π (P n ) = D * A ,ψ,ϕ (U n ).
Monte Carlo rate of convergence
In this section we show the existence of finite sequences U n = {u 0 , u 1 , . . . , u n−1 } ⊂ [0, 1] s , which define S n by (10), such that converge to 0 approximately with order n −1/2 if the transition kernel or the corresponding Markov operator is variance bounding. The main result is proven for D * A ,π (S n ). The result with respect to D * A ,ψ,ϕ (U n ) holds by Theorem 1.
Useful tools: delta-cover and Hoeffding inequality
The concept of a δ-cover will be useful (cf. [Gne08] for a discussion of δcovers, bracketing numbers and Vapnik-Červonenkis dimension).
The following result is well known for the uniform distribution, see [HNWW01, Section 2.1] (see also [DRZ13, Remark 3] for the particular case below).
Proposition 2 Let Γ δ be a δ-cover of A with respect to π. Then, for any Z n = {z 1 , . . . , z n }, holds Instead of considering the supremum over the possibly infinite set of test sets A in the star-discrepancy we use a finite set Γ δ and take the maximum over C ∈ Γ δ by paying the price of adding δ.
For variance bounding Markov chains on discrete state spaces, i.e. the second largest eigenvalue of the transition matrix is less than 1, in [LP04] a Hoeffding inequality is proven. In [Mia12] this is extended to non-reversible Markov chains on general state spaces. The following Hoeffding inequality for reversible, variance bounding Markov chains follows by [Mia12, Theorem 3.3 and the remark after (3.4)].
Proposition 3 (Hoeffding inequality for Markov chains) Let K be a reversible transition kernel with respect π and let ν be a distribution on (G, B(G)) with dν dπ ∈ L 2 . Let us assume that the Markov operator of K is variance bounding. Further, let (X n ) n∈N be a Markov chain with transition kernel K and initial distribution ν. Then, for any A ∈ B(G) and c > 0, we obtain with Λ 0 = max{0, Λ} and where Λ is defined in (4).
We provide a lemma to state the Hoeffding inequality for Markov chains in terms of the driver sequence. We need the following notation. Let Lemma 4 Let K be a transition kernel and ν be a distribution on (G, B(G)). Let ϕ be an update function of K and let us assume that ν satisfies Assumption 1 with generator function ψ. Further, let (X n ) n∈N be a Markov chain with transition kernel K and initial distribution ν. Then, for any A ∈ B(G) and c > 0, holds where P denotes the uniform distribution in [0, 1] ns and P ν,K denotes the joint distribution of X 1 , . . . , X n .
Discrepancy bounds
We show that for any s ∈ N, for any update function of the transition kernel K, for every initial distribution ν with dν dπ ∈ L 2 and every n there exists a finite sequence u 0 , u 1 , . . . , u n−1 ∈ [0, 1] s such that the star-discrepancy of S n , given by (10), converges approximately with order n −1/2 . The main idea to prove the existence result is to use probabilistic arguments. We apply a Hoeffding inequality for variance bounding Markov chains and show that for a fixed test set the probability of point sets with small ∆ n,A,ϕ,ψ , see (12), is large. We then extend this result to all sets in the δ-cover using the union bound and finally to all test sets. The result shows that if the finite driver sequence is chosen at random from the uniform distribution, most choices satisfy the Monte Carlo rate of convergence of the discrepancy for the induced point set S n .
Theorem 2 Let K be a reversible transition kernel with respect to π and ν be a distribution on (G, B(G)) with dν dπ ∈ L 2 . Assume that P , the Markov operator of K, is variance bounding and that ν satisfies Assumption 1 with generator ψ. Let A ⊆ B(G) be a set of test sets and for every δ > 0 assume that there exists a set Γ δ ⊆ B(G) with |Γ δ | < ∞ such that Γ δ is a δ-cover of A with respect to π. Further, let ϕ be an update function for K.
Theorem 3 Let K be a reversible transition kernel with respect to π and ν be a distribution on (G, B(G)) with dν dπ ∈ L 2 . Assume that P , the Markov operator of K, is variance bounding and that ν satisfies Assumption 1 with generator ψ. Let A ⊆ B(G) be a set of test sets and for every δ > 0 assume that there exists a set Γ δ ⊆ B(G) with |Γ δ | < ∞ such that Γ δ is a δ-cover of A with respect to π. Further, let ϕ be an update function for K.
We refer to Remark 2 and Lemma 6 for a relation between δ and |Γ δ |. Thus, we showed the existence of a driver sequence with small push-back discrepancy. Note that by using Corollary 2 one could also argue the other way around: If one can construct a sequence with small push-back discrepancy then the star-discrepancy of S n is also small.
Remark 3 Let us consider a special case of Theorem 2 and Theorem 3. Namely, let us assume that we can sample with respect to π. Thus, we set ν = π and K(x, A) = π(A) for any x ∈ G, A ∈ B(G). Then since Λ 0 = Λ = 0. This is essentially the same as Theorem 1 in [HNWW01] in their setting. However, it is not as eloberate as Theorem 4 in [HNWW01], which is based on results by Talagrand [Tal94] and Haussler [Hau95]. We do not know a version of these results which apply to Markov chains (such a result could yield an improvement of Theorems 2 and 3).
Burn-in period
For Markov chain Monte Carlo a burn-in period is used to reduce the bias of the initial distribution. We show how a burn-in changes the discrepancy bound of Theorem 3. Let us introduce the following notation. Let ϕ : G × [0, 1] s → G and ψ : [0, 1] s → G be measurable functions. Let n 0 , n ∈ N, let U n 0 ,n = {u 0 , . . . , u n 0 , u n 0 +1 , . . . , u n 0 +n−1 } ⊂ [0, 1] s and assume that S [n 0 ,n] = {x n 0 +1 , . . . , x n 0 +n } ⊆ G is given by (10), i.e. x 1 ; u 1 , . . . , u i ), i = 1, . . . , n 0 + n − 1, where x 1 = ψ(u 0 ). As before ψ might be considered as a generator function and ϕ might be considered as an update function. We now define a discrepancy measure on the driver sequence where the burn-in period is taken into account. We call it push-back discrepancy with burn-in.
Definition 6 (Push-back discrepancy with burn-in) Let C i,ψ (A) for A ∈ B(G) and i ∈ N ∪ {0} be defined as in (7). Define the local discrepancy function with burn-in by ∆ loc n 0 ,n,A,ψ,ϕ (U n 0 ,n ) = 1 n Let A ⊆ B(G) be a set of test sets. Then we define the discrepancy of the driver sequence by D * n 0 ,A ,ψ,ϕ (U n 0 ,n ) = sup A∈A ∆ loc n 0 ,n,A,ψ,ϕ (U n 0 ,n ) .
We call D * n 0 ,A ,ψ,ϕ (U n 0 ,n ) push-back discrepancy with burn-in of U n 0 ,n .
By adapting Proposition 3 and Lemma 4 to the setting with burn-in we obtain, by the same steps as in the proof of Theorem 2, a bound on the star-discrepancy for S [n 0 ,n] . Further, adapting Theorem 1 and Corollary 2 to the burn-in leads to a bound on D * n 0 ,A ,ψ,ϕ (U n 0 ,n ) for a certain set U n 0 ,n .
Theorem 4 Let K be a reversible transition kernel with respect to π and let ν be a distribution with dν dπ ∈ L 2 . Assume that P , the Markov operator of K, is variance bounding and that ν satisfies Assumption 1 with generator ψ. Let A ⊆ B(G) be a set of test sets and for every δ > 0 assume that there exists a set Γ δ ⊆ B(G) with |Γ δ | < ∞ such that Γ δ is a δ-cover of A with respect to π. Further, let ϕ be an update function for K.
Then there exists a driver sequence U n 0 ,n = {u 0 , u 1 , . . . , u n 0 +n−1 } ⊂ [0, 1] s such that with Λ 0 = max{0, Λ} and Λ defined in (4). If P has an absolute L 2 -spectral gap we have with β = P L 0 2 →L 0 2 , see Definition 1. In particular, by Λ ≤ Λ 0 ≤ β < 1 and |Λ| ≤ β, we deduce Equations (18) and (19) reveal that the burn-in n 0 can eliminate the influence of the initial state induced by ψ under the assumption that there exists an absolute L 2 -spectral gap. A variance bounding transition kernel is not enough, since it could be periodic and then νP n 0 would not converge to π at all.
Application
We consider the set of test sets B which consists of all axis parellel boxes anchored at −∞ restricted to G ⊆ R d , i.e.
In the following we study the size of δ-covers with respect to such rectangular boxes.
We then focus on the application of Theorem 2 and state the relation between the discrepancy and the error of the computation of expectations. The Metropolis algorithm with ball walk proposal provides an example where one can see that the existence result shows an error bound which depends polynomially on the dimension d.
Delta-cover with respect to distributions
We now use an explicit version of a result due to Beck [Bec84], for a proof and further details we refer to [AD, Theorem 1]. We state it as a lemma. Then for any r ∈ N there exists a set Z r = {z 1 , . . . , z r } with z 1 , . . . , z r ∈ suppµ such that Note that log 2 denotes the dyadic and log the natural logarithm.
Proof. The assertion follows by [AD,Theorem 3] This implies a version of [AD, Corollary 1], thus a version of [AD, Theorem 1], with x 1 , . . . , x N ∈ suppµ. ✷ By a linear transformation we extend the result to general, bounded state spaces G ⊂ R d .
Corollary 3 Let G ⊂ R d be a bounded, measurable set and let (G, B(G), π) be a probability space. Let the set of test sets Then for any r ∈ N there exists a set S r = {x 1 , . . . , x r } ⊆ G such that By Lemma 5 we have that there exists a set Z r = {z 1 , . . . , z r } ⊆ supp µ such that (20) is satisfied. Let x i = T −1 (z i ) for i = 1, . . . , r and for z ∈ [0, 1] d let x = T −1 (z). Then Since z 1 , . . . , z r ∈ suppµ ⊂ T (G) and By taking the supremum over the test sets on the right-hand side and using (20) the assertion follows. ✷ As in [DRZ13, Lemma 4] a point set which satisfies a discrepancy bound can be used to construct a δ-cover. The idea is to define for each subset of the point set a minimal and maximal set for the δ-cover, see [DRZ13,Lemma 4]. To simplify the bound of Corollary 3, for any r ∈ N and 0 < ε < 1 we have With this notation we obtain the following result.
Lemma 6 Let G ⊂ R d be a bounded measurable set and let π be a probability measure on (G, B(G)) which is absolutely continuous with respect to the Lebesgue measure. For the test set B = {(−∞, x) G | x ∈ R d }, any 0 < δ ≤ 1 and 0 < ε < 1, there is a δ-cover Γ δ of B with respect to π with where C ε,d is given by (21).
Proof. The proof of the assertion follows essentially by the same steps as the proof of [DRZ13,Lemma 4]. The only difference is that we use the discrepancy bound of Corollary 3 instead of [HNWW01,Theorem 4]. ✷ The dependence of the size of the δ-cover on δ is arbitrarily close to order δ −d in Lemma 6, whereas in [DRZ13, Lemma 4] it is of order δ −2d . Furthermore, the constant in Lemma 6 is fully explicit (one can choose 0 < ε < 1 to obtain the best bound on the size of the δ-cover).
By Theorem 2 and Lemma 6 we obtain the following result.
Corollary 4 Let G ⊂ R d be a bounded set. Let K be a reversible transition kernel with respect to π and ν be a distribution on (G, B(G)) with dν dπ ∈ L 2 . Assume that P , the Markov operator of K, is variance bounding and that ν satisfies Assumption 1 with generator ψ. Let B = {(−∞, x) G | x ∈ R d } be the set of test sets and ϕ be an update function of K.
Integration error
In this section we state a relation between a reproducing kernel Hilbert space and the star-discrepancy. As in [DRZ13, Appendix B] we define a reproducing kernel Q by The function Q uniquely defines a reproducing kernel Hilbert space H 2 = H 2 (Q) of functions defined on R d . Reproducing kernel Hilbert spaces were studied in detail in [Aro50]. It is also known that the functions f in H 2 permit the representation for some f 0 ∈ C and function f ∈ L 2 (R d , ρ), see for instance [SC08, Theorem 4.21, p. 121] or follow the same arguments as in [BD14, Appendix A]. The inner product in H 2 is given by With these definitions we have the reproducing property For 1 ≤ q ≤ ∞ we also define the space H q of functions of the form (23) for which f ∈ L q (G, ρ), with finite norm The following result concerning the integration error in H q is proven in [DRZ13, Theorem 3].
Theorem 5 Let G ⊆ R d and π be a probability measure on G. Further let We assume that 1 ≤ p, q ≤ ∞ with 1/p+1/q = 1. Then for Z n = {z 1 , z 2 , . . . , z n } ⊆ G and for all f ∈ H q we have , for functions f : B d → R which are integrable with respect to π ρ . Note that for an approximation of E πρ (f ) the functions f and ρ are part of the input of a possible approximation scheme. We assume that sampling directly with respect to π ρ is not feasible. We consider the Metropolis algorithm with ball walk proposal for the approximate sampling of π ρ . Let γ > 0, x ∈ B d and C ∈ B(B d ), then the transition kernel of the γ ball walk is where λ d denotes the d-dimensional Lebesgue measure and D γ (x) = {y ∈ R d | x − y ≤ γ} denotes the Euclidean ball with radius γ around x ∈ R d . The transition kernel of the Metropolis algorithm with ball walk proposal is where θ(x, y) = min{1, ρ(y)/ρ(x)} is the so-called acceptance probability. The transition kernel M ρ,γ is reversible with respect to π ρ . Now we provide update functions of the ball walk and the Metropolis algorithm with ball walk proposal. Let S d−1 = {x ∈ R d | x = 1} be the unit sphere in R d . Let ψ : [0, 1] d−1 → S d−1 be a generator for the uniform distribution on the sphere, see for instance [FW94]. Then, ψ γ : [0, 1] d → D γ (0) given by withū = (v 1 , . . . , v d ) ∈ [0, 1] d , is a generator for the uniform distribution in D γ (0) (the Euclidean ball with radius γ around 0). Thus, an update function This leads to an update function ϕ M,γ,ρ : B d ×[0, 1] d+1 → B d of the Metropolis algorithm with ball walk proposal. Let A(x;ū) = min{1, ρ(ϕ W,γ (x,ū))/ρ(x)} then an update function for the Metropolis algorithm with ball walk proposal is where u = (v 1 , . . . , v d+1 ) ∈ [0, 1] d+1 and x ∈ B d . Thus the algorithms are given by the update functions above. We assume that the functions f : B d → R and ρ : B d → (0, ∞) have some additional structure. Let f ∈ H 1 with f H 1 ≤ 1, where H 1 is defined in Subsection 5.2. For α > 0 let ρ ∈ R α,d if the following conditions are satisfied: (i) ρ is log-concave, i.e. for all λ ∈ (0, 1) and for all x, y ∈ B d holds (27) Next we provide a lower bound for Λ γ,ρ , defined as in (4) for the transition kernel M γ,ρ , where the density ρ is log-concave and log-Lipschitz. The result follows by [MN07, Corollary 1, Lemma 13].
Proposition 4 Let us assume that ρ ∈ R α,d . Further let The combination of Proposition 4, Theorem 5, Lemma 6 and Corollary 4 lead to the following error bound for the computation of E πρ (f ) for f ∈ H 1 and ρ ∈ R α,d .
Thus by Corollary 4 and Theorem 5 the assertion follows. ✷ Let us emphasize that the theorem shows that for any ρ ∈ R α,d there exist a deterministic algorithm where the error depends only polynomially on the dimension d and the Log-Lipschitz constant α.
Beyond the Monte Carlo rate
In the previous sections we have seen that there exist deterministic driver sequences which yield almost the Monte Carlo rate of convergence of n −1/2 . Roughly speaking, the proof of Theorem 2 reveals that, if the driver sequence is chosen at random from the uniform distribution the discrepancy bound of (14) is satisfied with high probability. In this section we use a stronger assumption to achieve a better rate of convergence. Again this result is an existence result. We want to point out that the proof of the result does not reveal any information on how to find driver sequences which leads to good discrepancy bounds. Its proof is based on the 'anywhere-to-anywhere' condition and Corollary 3.
Definition 7 Let ϕ : G × [0, 1] s → G be an update function. We say that ϕ satisfies the 'anywhere-to-anywhere' condition if for all x, y ∈ G there exists a u ∈ [0, 1] s such that ϕ(x; u) = y.
Now we use the 'anywhere-to-anywhere' condition to reformulate Corollary 3. We obtain a bound on the star-discrepancy for the Markov chain quasi-Monte Carlo construction.
Corollary 6 Let G ⊂ R d be a bounded, measurable set and let (G, B(G), π) be a probability space. Let the set of test sets B = {(−∞, x) ∩ G | x ∈ R d } be the set of anchored boxes intersected with G. Let ϕ be an update function and assume that ϕ satisfies the 'anywhere-to-anywhere' condition. Let ψ : [0, 1] s → G be an arbitrarily surjective measurable function. Then for any n ∈ N there exists u 0 , u 1 , . . . u n−1 ∈ [0, 1] s such that S n = {x 1 , . . . , x n } given by x 1 = ψ(u 0 ) and The corallary states that if the 'anywhere-to-anywhere' condition is satisfied, in principle, we can get the same discrepancy for the Markov chain quasi-Monte Carlo construction as without using any Markov chain. If the update function and underlying Markov operator P satisfies the conditions of Corollary 2, then a similar discrepancy bound as in Corollary 6 also holds for the driver sequence U n = {u 0 , u 1 , . . . , u n−1 }. Namely
Concluding remarks
Let us point out that the discrepancy results of Subsection 4.2 and Subsection 4.3, in particular, also hold for local Markov chains which do not satisfy the 'anywhere to anywhere' condition and the proof of this bound reveals that a uniformly i.i.d. driver sequence satisfies the discrepancy estimate with high probability. In other words, there are many driver sequences which satisfy the discrepancy bound of order (log n) 1/2 n −1/2 . On the other hand, the choice of the driver sequence depends on the initial distribution ν and the transition kernel. It would be interesting to prove the existence of a universal driver sequence, which yields Monte Carlo type behavior for a class of initial distributions and transition kernels. (For a finite set of initial distributions and transition kernels such a result can be obtained from our results since for any given initial distribution and transition kernel we can show the existence of good driver sequences with high probability.) Another open problem is the explicit construction of suitable driver sequences. The results in this paper do not give any indication how such a construction could be obtained. However, we do obtain that the push-back discrepancy is the relevant criterion for constructing driver sequences. | 10,607.8 | 2013-11-08T00:00:00.000 | [
"Mathematics"
] |
DNA damage response activates respiration and thereby enlarges dNTP pools to promote cell survival in budding yeast
The DNA damage response (DDR) is an evolutionarily conserved process essential for cell survival. Previously, we found that decreased histone expression induces mitochondrial respiration, raising the question whether the DDR also stimulates respiration. Here, using oxygen consumption and ATP assays, RT-qPCR and ChIP-qPCR methods, and dNTP analyses, we show that DDR activation in the budding yeast Saccharomyces cerevisiae, either by genetic manipulation or by growth in the presence of genotoxic chemicals, induces respiration. We observed that this induction is conferred by reduced transcription of histone genes and globally decreased DNA nucleosome occupancy. This globally altered chromatin structure increased the expression of genes encoding enzymes of tricarboxylic acid cycle, electron transport chain, oxidative phosphorylation, elevated oxygen consumption, and ATP synthesis. The elevated ATP levels resulting from DDR-stimulated respiration drove enlargement of dNTP pools; cells with a defect in respiration failed to increase dNTP synthesis and exhibited reduced fitness in the presence of DNA damage. Together, our results reveal an unexpected connection between respiration and the DDR and indicate that the benefit of increased dNTP synthesis in the face of DNA damage outweighs possible cellular damage due to increased oxygen metabolism.
the "checkpoint kinases" is the key component of the DDR (Fig. 1 and Table 1) (6 -9). In the presence of damaged DNA, the sensor kinases (ATM/ATR in mammals and Tel1/Mec1 in budding yeast) become active and phosphorylate the effector kinases (CHK1 and CHK2 in mammals and Chk1p and Rad53p in budding yeast) (10). Rad53p, the yeast ortholog of CHK2, is an essential intermediate kinase in the checkpoint pathway, as it connects the upstream kinases and downstream effectors to mediate an array of cellular outcomes in response to DNA damage (6,11,12). One of the Rad53p targets is another checkpoint kinase Dun1p (13).
Activation of the checkpoint kinases results in cell cycle arrest, activation of DNA repair, and reprogramming of transcription. One of the key outcomes of the DDR in yeast is the enlargement of the deoxyribonucleoside triphosphate (dNTP) pools, which is a prerequisite for effective DNA repair ( Fig. 1) (14,15). The rate-limiting step of dNTP synthesis is the reduction of ribonucleoside diphosphates into corresponding deoxyribonucleoside diphosphates, catalyzed by ribonucleotide reductase (RNR) (16). In most eukaryotes, RNR enzymes are ␣22 heterotetramers, in which the ␣2 homodimer and the 2 homodimer represent the large and small subunits, respectively. In yeast, however, the small subunit is a heterodimer of Rnr2p and Rnr4p; the large subunit is a homodimer of Rnr1p. The catalytic site is contained within the large subunit of both mammalian and yeast RNR enzymes. Both mammalian and yeast RNR genes are regulated transcriptionally, and the enzymes are regulated allosterically (17)(18)(19). In yeast, transcription of RNR2, RNR3, and RNR4 genes is induced following checkpoint activation and Dun1p-mediated phosphorylation and inactivation of the transcriptional repressor Crt1p (20). Transcription of RNR1 is regulated in a cell cycle-dependent manner by the transcriptional complex MBF and by high mobility group-domain protein Ixr1p, but not by Crt1p (21)(22)(23)(24). Dun1p regulates RNR activity and dNTP synthesis by at least two additional mechanisms. Dun1p phosphorylates Dif1p, a protein required for nuclear localization of Rnr2p and Rnr4p. Phosphorylation of Dif1p by Dun1p releases Rnr2p and Rnr4p into the cytoplasm, where they assemble with Rnr1p to form an active RNR enzyme (25)(26)(27)(28)(29)(30). During S phase or after DNA damage, Dun1p also phosphorylates and induces degradation of Sml1p, a protein that binds and inhibits the Rnr1p subunit ( Fig. 1) (31)(32)(33)(34).
Proliferating cells need to maintain a delicate balance between histone and DNA synthesis to ensure correct stoichiometric amounts for chromatin assembly and to avoid genome instability (35,36). Treatment with genotoxic agents that damage DNA or interfere with DNA replication triggers repression of histone genes (37)(38)(39). We have previously shown that a decrease in histone expression induces respiration (40). This poses an intriguing question: does DDR induce mitochondrial respiration? One of the sources of reactive oxygen species (ROS) is the oxidative electron transport chain (ETC) in the mitochondria. It is widely believed that DDR results in downregulation of respiration to protect DNA from endogenous ROS (41)(42)(43). Surprisingly, our data show that DDR and growth in the presence of sublethal concentrations of genotoxic chemicals activate respiration to increase ATP production and to elevate dNTP levels, which are required for efficient DNA repair and cell survival upon DNA damage.
DDR stimulates aerobic respiration
To determine whether DDR stimulates respiration, we used two approaches to introduce DDR. The first approach utilized the genotoxic chemicals bleocin and 4-nitroquinoline 1-oxide . Bleocin belongs to the antibiotic bleomycin family and causes DNA double-strand breaks (44). 4-NQO mimics the effect of UV light and forms DNA adducts (45). Both bleocin and 4-NQO trigger DDR. When compared with control cells, cells grown in the presence of sublethal concentrations of either chemical consumed more oxygen and produced more ATP, two parameters reflecting the activity of aerobic respiration in the mitochondria (Fig. 2, A and B) (40,46). Oxygen consumption of cells treated with bleocin or 4-NQO increased 1.8-and 1.5-fold, respectively, whereas the cellular ATP level increased 2.6-and 2.0-fold, respectively (Fig. 2, A and B).
The second approach to introduce DDR employed rad52⌬ mutation. RAD52 is required for DNA double-strand break repair and homologous recombination. Inactivation of RAD52 renders cells unable to repair DNA strand breaks and thereby triggers DDR (47). Compared with WT cells, rad52⌬ cells consumed 1.6 times more oxygen and displayed 3.2 times increased ATP levels (Fig. 2C).
Checkpoint kinases Mec1p and Rad53p are required for DDRinduced respiration
DDR is mediated through activation of checkpoint kinases and their cellular targets, which coordinate cell cycle arrest and repair of damaged DNA (Fig. 1). To investigate the requirement
DNA damage response activates respiration
of the checkpoint kinases Mec1p, Tel1p, Chk1p, Rad53p, and Dun1p for induction of respiration, we introduced the corresponding mutations into rad52⌬ cells and determined oxygen consumption (Fig. 3A). The oxygen consumption of tel1⌬ cells was elevated compared with WT cells, and introducing the tel1⌬ mutation into the rad52⌬ cells further increased oxygen consumption above the rad52⌬ level. chk1⌬ cells consumed less oxygen than WT cells, and the oxygen consumption of the rad52⌬chk1⌬ cells was attenuated compared with rad52⌬ cells. Because mec1⌬ and rad53⌬ cells are viable only if harboring crt1⌬ or sml1⌬ mutations (20, 31), we measured oxygen consumption in mec1⌬sml1⌬, mec1⌬crt1⌬, rad53⌬sml1⌬, and rad53⌬crt1⌬ strains. Surprisingly, mec1⌬sml1⌬ and rad53⌬sml1⌬ strains displayed increased oxygen consumption compared with WT cells. This increase can be attributed to the sml1⌬ mutation, because sml1⌬ cells also displayed slightly increased oxygen consumption. This finding is not entirely surprising, because sml1⌬ cells display increased copy number of mitochondrial DNA, likely due to the elevated dNTP level (48). The oxygen consumption in rad53⌬crt11⌬ was comparable with WT cells, while the oxygen consumption of mec1⌬crt1⌬ cells was decreased. Importantly, introducing the rad53⌬sml1⌬ and rad53⌬crt1⌬ mutations into rad52⌬ cells completely abrogated the elevated oxygen consumption of rad52⌬ cells. Extremely slow growth of rad52⌬mec1⌬sml1⌬ and rad52⌬mec1⌬crt1⌬ did not allow culturing these cells in sufficient quantity for further analysis. The oxygen consumption of dun1⌬ cells was elevated compared with WT cells, and introducing dun1⌬ mutation into rad52⌬ cells did not diminish the oxygen consumption of rad52⌬ cells. This result suggests that induction of respiration in rad52⌬ cells is not due to degradation of Sml1p and increased dNTP synthesis, because degradation of Sml1p requires Dun1p. The requirement of Rad53p for DDR-induced respiration is consistent with suppression of the elevated ATP levels in rad52⌬ cells by introducing rad53⌬sml1⌬ and rad53⌬crt1⌬ mutations (Fig. 3B).
To corroborate these results, we induced DDR by growing cells bearing deletions of the individual checkpoint kinases in the presence of sublethal concentrations of bleocin. Growth in the presence of bleocin induced respiration in WT, tel1⌬, chk1⌬, and dun1⌬ cells. The induction of respiration was completely absent in mec1⌬sml1⌬, mec1⌬crt1⌬, rad53⌬sml1⌬, and rad53⌬crt1⌬ cells (Fig. 3C). We interpret these results to mean that Mec1p and its downstream effector kinase Rad53p are required, whereas Tel1p, Chk1p, and Dun1p are not required for DDR-induced respiration. As a control, we also included the cyt1⌬ strain. CYT1 encodes cytochrome c 1 , and cyt1⌬ cells are not able to respire (40). The results show that the ETC is responsible for about 95% of the oxygen consumed by the cells growing in the absence or presence of bleocin (Fig. 3C).
To test the possibility that the increased oxygen consumption in the WT cells grown in the presence of genotoxic chemicals or in rad52⌬ cells is caused by a delayed progression through the S phase of the cell cycle, we arrested WT and rad52⌬ cells in G 1 phase with ␣-factor and compared oxygen consumption of arrested and asynchronous cells (Fig. 3D). Because the oxygen consumption in the two cell populations does not significantly differ for both WT and rad52⌬ cells, we conclude that the DDR induces respiration in a cell cycle-independent manner.
DDR down-regulates histone levels through activation of Rad53p
Because the connection between DDR and respiration is not immediately obvious and to some extent is counterintuitive, we asked the following question. What is the molecular mechanism underlying this phenomenon? We have recently reported that decreased histone expression results in reduced nucleosome occupancy across the genome and altered chromatin structure, which triggers respiration (40). To determine whether a similar mechanism is responsible for DDR-induced respiration, we assessed histone expression in cells grown in the presence of sublethal concentrations of bleocin or 4-NQO. Under these conditions, the expression of all four histone genes as well as the protein levels of histone H3 were markedly decreased (Fig. 4, A-C). A very similar trend of decreased histone gene
DNA damage response activates respiration
expression and protein level of histone H3 was observed in rad52⌬ cells, where the DDR is induced genetically (Fig. 4, D-H). Furthermore, down-regulation of histone transcripts and protein levels depended on Rad53p, as introducing rad53⌬ mutation into rad52⌬ cells restored histone transcript and protein levels in rad52⌬rad53⌬crt1⌬ and rad52⌬rad53⌬sml1⌬ cells to the WT levels ( Fig. 4, D-G). Down-regulation of histone transcripts in rad52⌬ cells was not suppressed by introducing tel1⌬ or dun1⌬ mutations into rad52⌬ cells and was only partially suppressed by chk1⌬ mutation.
DNA damage response activates respiration DDR-mediated repression of histone levels alters chromatin structure and induces expression of TCA cycle and ETC genes
Yeast genes can be categorized as growth genes and stress genes, with each group featuring distinct nucleosomal architecture of their promoters (49). The promoter of a growth gene contains a "nucleosome-free region," whereas the promoter of a stress gene is usually occupied by delocalized nucleosomes. As a result, growth genes are constitutively expressed in contrast to stress genes, which are regulated by factors that affect chromatin structure, including abundance of histone proteins. Considering that respiratory genes in Saccharomyces cerevisiae belong to the stress gene category (50) and that reduced histone
DNA damage response activates respiration
expression induces respiration (40), we reasoned that by downregulating histone expression DDR might affect chromatin structure and induce respiratory genes. To test this possibility, we determined histone H3 and RNA pol II occupancy in the promoters of CIT1, IDH1, and QCR7, genes encoding enzymes of the TCA cycle and ETC. The histone H3 occupancy of CIT1, IDH1, and QCR7 promoters was significantly reduced in cells treated with bleocin or 4-NQO compared with control cells, whereas the occupancy of RNA pol II at the same set of promoters was increased (Fig. 5, A and B). Similarly, the histone H3 occupancy of CIT1, IDH1, and QCR7 promoters was decreased, whereas RNA pol II occupancy at the same promoters was increased in rad52⌬ cells (Fig. 5, C and D). The changes in chromatin structure and RNA pol II occupancy were accompanied by increased transcription of the corresponding genes upon treatment with bleocin or 4-NQO and in rad52⌬ cells (Fig. 5E). In this analysis, we also included the COX1 gene that is encoded by the mitochondrial genome. As we have shown previously, decreased histone expression and chromatin changes of the nuclear genome affect transcription of genes encoded by the mitochondrial genome. The corresponding mechanism involves elevated expression of nuclear genes RPO41 and MTF1, encoding mitochondrial RNA polymerase and its associated factor, respectively (40).
Inactivation of RAD52 increased respiration (Fig. 2) and down-regulated histone levels (Fig. 4) in a Rad53p-dependent manner. To determine whether the increased expression of genes required for the TCA cycle, ETC, and OXPHOS in rad52⌬ cells also requires Rad53p or other checkpoint kinases, we determined transcript levels for CIT1, IDH1, QCR7, and COX1 genes in rad52⌬ cells containing deletions of the individual checkpoint kinases (Fig. 6). We found that only inactivation of RAD53 (in rad53⌬sml1⌬ and rad53⌬crt1⌬) but not inactivation of TEL1, CHK1, or DUN1 suppressed the elevated expression of CIT1, IDH1, and QCR7 genes in rad52⌬ cells.
Elevated histone levels suppress respiration in rad52⌬ cells
To test whether the decreased histone levels and altered chromatin structure are indeed responsible for induction of respiration when cells grow in the presence of sublethal concentrations of genotoxic chemicals, we elevated histone levels in WT cells by ectopic expression of extra histones. A high copy number plasmid encoding all four core histones significantly reduced oxygen consumption when cells were grown in the presence of bleocin (Fig. 7, A and B). In addition, overexpression of histones suppressed oxygen consumption in rad52⌬ cells (Fig. 7C).
In another approach, we tested whether the histone levels can be increased in rad52⌬ cells by introducing tom1⌬ or hir1⌬ mutations. Tom1p functions in a pathway responsible for degradation of free histones. Free histones that are not assembled into chromatin are degraded in a pathway that depends on phosphorylation by Rad53p and ubiquitylation by Ubc4p, Ubc5p, and Tom1p (51,52). When TOM1 was deleted in rad52⌬ cells, the protein level of histone H3 was restored almost to the WT level, confirming the usefulness of this approach (Fig. 7D). Hir1p is a subunit of the HIR complex that acts as a histone chaperone and a repressor of the majority of histone genes (53,54). Inactivation of HIR1 induces expression of histone genes (35). Introducing tom1⌬ or hir1⌬ mutations into rad52⌬ cells significantly suppressed oxygen consumption, suggesting that it is indeed the decreased level of histones that is responsible for induction of respiration in rad52⌬ cells (Fig. 7E).
DDR-induced respiration activates dNTP synthesis
Does the DDR-induced respiration provide any advantage to yeast cells? One major outcome of DDR is the increase in activity of RNR, the key enzyme that catalyzes the rate-limiting step in dNTP synthesis (16). The complex regulation of RNR activity suggests that the control of the dNTP pools is very important for maintaining genome integrity and cell survival under genotoxic stress. Indeed, the enlargement of dNTP pools is essential for effective DNA repair (14,15).
In our previous work, we have shown that reduced histone expression and altered chromatin structure induce respiration and significantly elevate cellular ATP levels (40). To determine whether increased respiration and elevated ATP levels can drive up dNTP synthesis, we evaluated the sizes of dNTP pools in swi6⌬ and asf1⌬ cells (Fig. 8A). Swi6p is the transcriptional activation subunit of the SBF and MBF complexes that regulate histone gene transcription (35,36). Asf1p is a histone chaperone involved in chromatin assembly. Both swi6⌬ and asf1⌬ cells display markedly up-regulated oxygen consumption and ATP levels (40). We found that the dNTP pools are significantly increased in swi6⌬ and asf1⌬ cells, and the increase is abolished in swi6⌬cyt1⌬ and asf1⌬cyt1⌬ cells (Fig. 8A). CYT1 encodes the cytochrome c 1 subunit, and deletion of CYT1 inactivates the ETC. When we induced DDR in WT cells either chemically or genetically, the dNTP pools were also significantly increased. However, blocking respiration by introducing the cyt1⌬ mutation appreciably decreased dNTP levels in rad52⌬ cells or WT cells treated with bleocin or 4-NQO (Fig. 8B). These results indicate that decreased histone expression and the defect in chromatin structure or growth in the presence of genotoxic chemicals activate respiration and increase ATP and dNTP levels.
Survival of yeast cells in the presence of DNA damage depends on the availability of dNTPs for effective DNA repair (14,19). Consistently with this role of increased dNTP pools for effective DNA repair, cyt1⌬ cells that cannot up-regulate respiration and are thus unable to effectively enlarge their dNTP pools are more sensitive to bleocin or hydroxyurea (Fig. 8C). Inactivation of SML1 does not significantly change growth on bleocin or hydroxyurea, but sml1⌬cyt1⌬ cells are more sensitive to bleocin or hydroxyurea than sml1⌬ cells. Interestingly, sml1⌬cyt1⌬ cells appear to be more sensitive to hydroxyurea than cyt1⌬ cells. The difference in sensitivity to hydroxyurea between cyt1⌬ and sml1⌬cyt1⌬ cells is quite subtle but reproducible. Because both ATP and Sml1p are allosteric regulators of RNR (55), this observation may indicate that under conditions of direct inhibition of RNR by hydroxyurea, the absence of Sml1p sensitizes RNR to low ATP levels.
To test whether the lethality of rad53⌬ mutation can be suppressed by increasing dNTP synthesis by up-regulating respiration and ATP synthesis, we introduced mbp1⌬ mutation into rad53⌬ cells. MBP1 encodes the DNA-binding subunit of the DNA damage response activates respiration DNA damage response activates respiration MBF transcriptional factor. We have shown previously that mbp1⌬ cells display increased respiration and ATP levels (40). The rad53⌬mbp1⌬ cells are viable and grow significantly better than the rad53⌬mbp1⌬cyc1⌬ cells (Fig. 8D). CYC1 encodes cytochrome c, and cyc1⌬ cells are not able to respire (56). This result indicates that increasing respiration and presumably ATP and dNTP synthesis represent the major mechanism for suppression of rad53⌬ lethality by mbp1⌬ mutation.
Only long-term but not acute genotoxic stress induces respiration
Several studies found that DDR represses transcription of genes encoding enzymes of the TCA cycle, ETC, and OXPHOS. These studies evaluated acute exogenous genotoxic stress, typically created by 1-2-h cell exposure to genotoxic chemicals (37,60). In contrast, our results show that chronic activation of DDR rendered by rad52⌬ mutation or by growing WT cells in the presence of sublethal concentrations of genotoxic chemicals activates transcription of the TCA cycle, ETC, and OXPHOS genes and elevates oxygen consumption. To deter-mine whether the duration of the genotoxic stress is responsible for the difference between our results and studies that found the inhibitory effect of DDR on transcription of respiratory genes (37, 60), we performed a time-course experiment and measured oxygen consumption and transcription of CIT1, IDH1, QCR7, and COX1 genes for several hours after addition of bleocin (Fig. 9). Indeed, the most significant increase in oxygen consumption (Fig. 9A) and transcription of the respiratory genes ( Fig. 9B) is observed after more than 2 h of growth in the presence of bleocin. These results show that increased respiration is a long-term response to a chronic sublethal genotoxic stress and suggest that long-term survival and growth under chronic genotoxic stress requires increased respiration to support ATP and dNTP synthesis.
Discussion
The key finding of this study is that the DDR activates respiration to increase ATP production and elevate dNTP levels, which are required for efficient DNA repair. Based on our results, we propose a model in which DDR regulates dNTP
DNA damage response activates respiration
synthesis by a bifurcating mechanism (Fig. 10). In one wellestablished branch of the pathway, Dun1p inactivates Crt1p, Sml1p, and Dif1p, leading to increased RNR activity and dNTP synthesis (31)(32)(33)(34). In the second branch of the pathway, Mec1p and Rad53p down-regulate transcription of histone genes (Fig. 10). Decreased histone levels result in altered chromatin structure and induction of TCA cycle and ETC genes required for respiration (40). A direct outcome of elevated respiration is increased production of ATP, a potent allosteric activator for the RNR enzyme (16,18), increased synthesis of dNTPs, and improved cell survival (Fig. 10). This "histone" branch of the pathway does not require Dun1p and inactivation of Crt1p, Sml1p, and Dif1p, because inducing DDR in dun1⌬ cells activates respiration almost to the same level as in the WT cells (Fig. 3, A and C).
Based on chromatin architecture, yeast genes belong to one of two broad groups: growth genes and stress genes (49). Growth genes are expressed rather constitutively, and their promoters feature a nucleosome-free region where transcription factors bind upstream of the ORF. Stress genes are expressed at a lower level, and their promoters are dominated by delocalized nucleosomes rather than by the nucleosome-free region. Consequently, stress genes are regulated by factors that affect the structure of chromatin, including histone level. The respiratory genes in S. cerevisiae belong to the stress category, unlike respiratory genes in higher eukaryotes (50). Consequently, reduced histone expression or a defect in chromatin assembly induces respiration by allowing increased activation of the TCA cycle, ETC, and OXPHOS genes by the Hap2/3/ 4/5p complex (40).
DNA damage response activates respiration
Although it is well-established that DNA replication is coordinated with transcription of histone genes and that DDR represses histone transcription, the role of Rad53p in this process is not fully understood. The expression of histone genes is regulated by two G 1 /S-specific transcription complexes SBF and MBF, in addition to other transcription regulators (35,36). Swi4p/Swi6p and Mbp1p/Swi6p form SBF and MBF, respec-tively. DDR induces Rad53p-dependent phosphorylation of Swi6p, which results in down-regulation of CLN1 and CLN2 transcription and delayed G 1 to S progression (58,59). On the other hand, Rad53p phosphorylates and inactivates Nrm1p, the co-repressor of MBF, which results in activation of MBF targets (23,24). Systematic phosphoproteomics screen identified Swi6p, Swi4p, and Mbp1p as direct targets of Rad53p (60).
DNA damage response activates respiration
Because the transcription of the histone genes is reduced in swi4⌬ and mbp1⌬ cells (61), the simplest explanation for the role of Rad53p in regulation of histone gene transcription is that Rad53p phosphorylates and down-regulates SBF and MBF complexes.
How does elevated respiration and ATP levels regulate dNTP synthesis? The large subunit of both mammalian and yeast RNR contains the catalytic site as well as two allosteric sites (16,17,19). One of the allosteric sites, the "specificity site," binds dTTP, dGTP, and dATP and regulates the appropriate ratios among the four dNTP pools. The second allosteric site, the "activity site," binds ATP or dATP and regulates the total dNTP pool size by monitoring the dATP/ATP ratio. When the cellular ATP-to-dATP ratio increases, the binding of ATP to this allosteric site activates RNR, promoting synthesis of all dNTPs. When the dNTP concentration reaches a certain level, the RNR activity is allosterically inhibited by binding of dATP to the "activity site." DNA replication fidelity requires correct absolute and relative concentrations of the dNTPs (62,63), and mutations in both the "specificity" and "activity" sites of yeast RNR result in significantly reduced replication fidelity (14,63).
In yeast RNR, the allosteric dATP feedback inhibition is more relaxed, allowing the increase of dNTP pools upon DNA damage (14). The increase in dNTP pools significantly improves survival following DNA damage; however, it also results in higher mutation rates (14,57,64). Our results suggest that the DDR-induced expansion of the dNTP pools is partly facilitated by elevated respiration and ATP production.
The relationship between respiratory metabolism and DNA replication and repair is contentious. Leakage of electrons from the ETC is one of the endogenous sources of ROS, which damage cellular structures, including DNA, contributing to the pathogenesis of cardiovascular diseases, inflammatory diseases, and cancer, and a shorter life span (65,66). However, mitochondria can also function as a cellular antioxidant defense, and increased mitochondrial activity enables more efficient operation of the ETC, limiting ROS production and increasing antioxidant capacity (67). In addition, DNA replication and repair are energetically costly (45), and ETC and OXPHOS generate significantly more ATP than glycolysis. This energetic aspect of DNA repair is evolutionarily conserved, as illustrated by increased fatty acid oxidation, oxidative phosphorylation, and oxygen consumption in response to both chronic endogenous and acute exogenous genotoxic stress in mice (68).
DNA damage induced by methylmethane sulfonate (MMS) was reported to suppress respiration (69), down-regulate yeast AMP-activated protein kinase ortholog Snf1p (43), and suppress transcription of genes regulated by the Hap1p and Hap2/ 3/4/5p complex (37, 60). Hap1p and Hap2/3/4/5p activate genes encoding enzymes of the TCA cycle, ETC, and OXPHOS. In one well-established branch of the pathway, Dun1p inactivates Crt1p, Sml1p, and Dif1p, leading to increased RNR activity and dNTP synthesis. In the second branch of the pathway, Mec1p and Rad53p downregulate transcription of histone genes. Decreased histone levels result in altered chromatin structure and induction of TCA cycle and ETC genes required for respiration. A direct outcome of elevated respiration is increased production of ATP, a potent allosteric activator for the RNR enzyme, increased synthesis of dNTPs, and improved cell survival.
DNA damage response activates respiration
The repression of Hap1p and Hap2/3/4/5p targets was independent of checkpoint kinases and was not observed in cells exposed to ionizing radiation (37). The authors concluded that the MMS-induced repression of Hap1p and Hap2/3/4/5p targets was not specific for DNA damage and was a consequence of oxidative stress or other effect of MMS on Hap1p and Hap2/3/ 4/5p signaling (37,60). These results are in agreement with our observations. When we tested different genotoxic chemicals for respiration inducers, we also included MMS. Even with MMS concentrations spanning from 0.0001 to 0.01%, we could not detect any stimulatory effect on oxygen consumption, although we observed consistent and marked stimulation of oxygen consumption in cells treated with bleocin or 4-NQO (Fig. 2). We conclude that the effect of MMS on respiration is not representative of genotoxic chemicals and DDR but rather reflects the particular properties of MMS and/or the specific pathways MMS affects. This conclusion is supported by direct inhibition of respiration in isolated mitochondria by MMS (69).
This study connects respiratory metabolism and DDR, two processes deemed not to be very compatible. We speculate that the benefit of increased ATP and dNTP synthesis for cell survival offsets the deleterious effect of respiratory metabolism on DNA repair.
Yeast strains, media, and plasmid construction
All yeast strains used in this study are listed in Table 2. Standard genetic techniques were used to manipulate yeast strains and to introduce mutations from non-W303 strains into the W303 background (71). Cells were grown at 30°C in yeast extract/peptone/dextrose (YPD) medium containing 2% glucose or under selection in synthetic complete medium containing 2% glucose and, when appropriate, lacking specific nutrients to select for a particular genotype. Cell cycle arrest in G 1 phase by ␣-factor was carried out by adding ␣-factor to 10 g/ml to cells exponentially growing in YPD medium. Following ␣-factor addition, the cultures were incubated for 3 h, and the arrest was monitored by examining cell morphology (72). For construction of plasmid pRS426-HTA1/HTB1-HHF1/ HHT1, the HTA1/HTB1 locus was amplified by PCR using forward primer 5Ј-TTCACACGAGCGAATTCTCTGAAG-3Ј and reverse primer 5Ј-AGCAACAGTGCTCGAGGAACC-TAA-3Ј. The HHF1-HHT1 locus was amplified using forward primer 5Ј-AAATACGAGCTCCGTGTAAGTTACAGAC-3Ј and reverse primer 5Ј-TTTCGAGGGGATCCCCAGGAAAA-3Ј. The HHF1-HHT1 fragment was digested with SacI and BamHI and ligated into pRS426. The HTB1-HTA1 fragment was digested with EcoRI and XhoI and ligated into the pRS426-HTA1/HTB1 plasmid.
Oxygen consumption measurement
Oxygen consumption measurements were performed essentially as described (40,46). Cells were grown to an A 600 nm of 0.6 in YPD medium containing 2% glucose, and 3 A 600 nm units (9 ϫ 10 7 ) of yeast cells were harvested by centrifugation. Cells were resuspended in a buffer containing 10 mM HEPES, 25 mM K 2 HPO 4 , pH 7.0, and incubated at 30°C in an oxygen consumption chamber (Instech Laboratories, Inc.) connected to a Neo-FOX fluorescence-sensing detector using NeoFOX software (Ocean Optics, Inc.). Results were calculated as picomoles of O 2 /10 6 cells/s and expressed as percentages of the WT value. The oxygen consumption rate in WT cells grown in YPD medium was 5.08 pmol/10 6 cells/s and was set as 100%.
Cellular ATP assays
Cellular ATP levels were determined as described (40,46). Cells were grown to an A 600 nm of 0.6 in YPD medium containing 2% glucose, and 3 A 600 nm units (9 ϫ 10 7 ) of yeast cells were harvested, and cells were harvested by centrifugation and lysed in 5% TCA with pre-chilled glass beads. Cell lysate was neutralized to pH 7.5 with 10 M KOH and 2 M Tris-HCl, pH 7.5. ATP levels were measured using the ENLITEN ATP assay (FF2000, Promega) according to the manufacturer's instructions and normalized by the number of cells. The ATP level in WT cells grown on YPD medium was 0.58 mol/10 10 cells and was set as 100%.
Western blotting
Whole-cell lysates were prepared, and Western blotting was performed as described previously (40). Briefly, cells were grown in YPD medium containing 2% glucose to an A 600 nm of 0.6. Four A 600 nm units (1 A 600 nm unit is equal to 3 ϫ 10 7 cells) of yeast cells were harvested and immediately boiled in SDS sample buffer. Anti-histone H3 polyclonal antibody (Abcam, ab1791) was used at a dilution of 1:1000, and anti-Pgk1p (Invitrogen, 459250) was used at a dilution of 1:3000.
dNTP quantitative analysis
Four A 600 nm units (12 ϫ 10 7 ) of yeast cells were harvested and lysed in 5% TCA with pre-chilled glass beads. Cell lysate was neutralized to pH 7.5 with 10 M KOH and 2 M Tris-HCl, pH 7.5. The lysate was used immediately for probe-based quantitative PCR analysis of individual dNTP levels or aliquoted and stored at Ϫ70°C until the time of analysis. The procedure for the fluorescence-based dNTP quantitative analysis was essentially as described previously with minor modifications (74). Specifically, both detection templates (DT) 1 and 2 for each dNTP were used, and the results were compared. Fluorescence signal was recorded every 5 min (1 cycle) for a total of 50 min (10 cycles) to monitor the kinetics of individual dNTP incorporation, and the fluorescence signal after a 40-min incubation (8 cycles) was used for calculation of the normalized fluorescence units. Standard curves using appropriate individual dNTPs were established for assay validation.
Spotting assay
Cells were grown to log phase at 30°C, and 10-fold serial dilutions were spotted on the YPD plates with or without genotoxic chemicals and incubated at 30°C for 48 -72 h.
Statistical analysis
The results represent at least three independent experiments. Numerical results are presented as means Ϯ S.D. Data were analyzed by using an InStat software package (Graphpad, San Diego). Statistical significance was evaluated by one-way analysis of variance, and p Ͻ 0.05 was considered significant. | 7,075.6 | 2019-05-09T00:00:00.000 | [
"Biology"
] |
Torsional Rigidity on Compact Riemannian Manifolds with lower Ricci Curvature Bounds
In this article we prove a reverse H\"older inequality for the fundamental eigenfunction of the Dirichlet problem on domains of a compact Riemannian manifold with lower Ricci curvature bounds. We also prove an isoperimetric inequality for the torsional ridigity of such domains.
Introduction and main results
In 1972; following the spirit of the works of Faber and Krahn [1,2], Payne and Rayner [3,4] proved a reverse Hölder inequality for the norms L 1 and L 2 of the first eigenfunction of the Dirichlet problem on bounded domains D of R 2 : where 1 is the lowest eigenvalue of the fixed membrane problem, and u 1 the corresponding eigenfunction. Equality holds if and only if D is a disc. The work of Payne and Rayner was generalized by Köhler and Jobin [5] for bounded domains of R n ; n 3: In 1982, G.Chiti [6], generalized the reverse Hölder inequality for the norms L q and L p ; q p > 0 for bounded domains of R n ; n 2 0 @ Z D u q dxdy This inequality is isoperimetric in the sense that equality holds if and only if D is a ball.
The main ideas of this paper were early investigated in the PhD thesis of H.Hasnaoui [7], the first one was to establish a generalization of Chiti's reverse Hölder inequality for the norms L q and L p ; q p > 0 for compact Riemannian manifolds with Ricci curvature bounded from below and the second one is a version of the Saint Venant Theorem for such manifolds.
In fact, Modern geometric analysts, including Chavel and Gromov, have identified such manifolds as important, and have related the Ricci bound to many estimates of eigenvalues, as well as to other quantities of interest in differential geometry. Thus, it is very natural to consider similar questions about the Laplacian and the torsional rigidity in this context. Both are isoperimetric results, in the sense that the quantity of interest is dominated by the analogous expression on spheres. Much of the background and many results for the spectrum of such manifolds could be found in [8], [9], [10] and more recently in [11]. In 1856, Saint-Venant [12] observed that columns with circular cross-sections offer the greatest resistance to torsion for a given cross-sectional area. This fact was proved a century or so later by Pólya using Steiner symmetrization [13]. We also note the independent proof by Makai in 1963, one can see [14]. Torsional rigidity is a physical quantity of much interest, see for example [15,16], the recent [17,18], the more recent [19,20] and the classical papers of Payne [21,22], Payne-Weinberger [23]. As we show here, it is also a quantity of much interest for geometers as well.
Parts of our work follow an analysis of Ashbaugh and Benguria [24] for subdomains of hemispheres, one can see also [25]. In the sequel, we will introduce the main results of this paper: Let .M; g/ be a compact connected Riemannian manifold of dimension n 1 without boundary. We denote by the infimum of the Ricci curvature Ri c of .M; g/; here U T .M / is the unit tangent bundle of the manifold .M; g/. Let .S n ; g ? / be the unit sphere of the space R nC1 , endowed with the induced metric, then R.S n ; g ? / D n 1. We suppose, as done in [9], that R.M; g/ is strictly positive, and normalize the metric g so that R.M; g/ R.S n ; g ? / D n 1. In the sequel, we will denote by V .M / D R dv g the volume of .M; g/; where the element of volume is denoted dv g , ! n the volume of the unit sphere .S n ; g ? / andˇD V .M /=! n : Let D be a connected bounded domain of M with smooth boundary, and D ? the geodesic ball of S n centered at the north pole such that Vol.D/ DˇVol.D ? /: We are interested in the comparison of the fundamental solutions u and v of problems .P 1 / and .P 2 / respectively: .
B  1 . / is the geodesic ball of S n centered at the north pole of radius  1 D  1 . /; where is the first eigenvalue for the Dirichlet problem .P 2 /; and denotes indifferently the laplacian operator on M or S n : Let u be the decreasing rearrangement of u and u ? the corresponding radial function defined on D ? the geodesic ball of .S n ; g ? / which has the same relative volume as D; (see Section 2 for notations and details).
As a consequence of Theorem 1.1, we obtain: Corollary 1.2 (Chiti's Reverse Hölder Inequality for Compact Manifolds). Let p; q be real numbers such that q p > 0, then u and v are related by this inequality with equality if and only if the triplet .M; D; g/ is isometric to the triplet .S n ; D ? ; g ? /.
Next, we focus on the torsional rigidity T .D/ of the domain D: Recall that: where w is is the smooth solution of the boundary value problem of Dirichlet-Poisson type ( w is called the warping function of D) w D 0 on @D: We obtain the following result: the equality holds if and only if the triplet .M; D; g/ is isometric to the triplet .S n ; D ? ; g ? /.
And finally, we give a comparison formula for the warping function w which allows us to obtain directly the result of Theorem 1.3.
Preliminary tools
Denote by .D/ the first eigenvalue of the Laplacian for the Dirichlet problem on D, and let u be the positive associated eigenfunction. Therefore, u satisfies The variational formula of .D/ is given by where jdf j is the Riemannian norm of the differential of f . We have equality if and only if f is of class C 2 and is an eigenfunction associated with the first eigenvalue .D/: The co-area formula gives Here, d is the (n-1)-dimensional Riemannian measure in .M; g/. For what follows, we will also denote the .n 1/dimensional Riemannian measure in .S n ; g ? / by d . Since D has bounded measure, the above shows that the function is integrable, and therefore the function V is absolutely continuous. Hence, V is differentiable almost everywhere and for almost all t 2 OE0; u. The function V is then a non increasing function and has an inverse which we denote by u : The function u is absolutely continuous. Now, applying the Cauchy-Schwartz inequality, we obtain Hence Next, we use the following isoperimetric inequality due to M. Gromov [26] which relates the volume of the boundaries of D and D : Lemma 2.1. Under the same hypothesis given above Vol n 1 .@D/ ˇVol n 1 .@D ? /; (14) where Vol n 1 is the .n 1/-dimensional volume relative to g and g ? . Equality holds, if and only if the triplet .M; D; g/ is isometric to the triplet .S n ; D ? ; g ? /: Let The quantity ! n 1 R  0 .sin / n 1 d is the n-volume of the geodesic ball of radius  in S n . If we let, the function L.Â/ denotes the .n 1/-dimensional volume of the geodesic ball of radius  , i.e.
Inequality (14) can then be written as where Â.s/ is the inverse function of A defined in (15). Now, applying inequality (17) to the domain D u .s/ and combining it with inequality (13), we obtain We then apply Gauss Theorem to the Dirichlet problem on D t , using the smoothness of its boundary @D t ; we get Here, we used the fact that the outward normal to D t is given by ru jruj . Remark. 8p 0; we have The change of variables Á D V . / gives Finally combining (20) for p D 1 with equalities (18) and (19), we obtain the following Lemma 2.2. Let u be a solution of problem .P 1 /. Then u , its decreasing rearrangement, satisfies the integrodifferential inequality for almost every s > 0. Let B  1 . / be the geodesic ball of S n centered at the north pole with radius  1 D  1 . /; such that the following problem has a solution. Let v > 0 be the first Dirichlet eigenfunction on B  1 . / then by lemmas 3.1 and 3.2 of [27], we conclude that v depends only on  and is strictly decreasing on OE0;  1 /. Therefore, we denote by v. / the function v. So, in polar coordinates the problem .P 2 / can be rewritten as Integrating the equality (23), we obtain (25) Then, using (24), we can rewrite the left-hand side of (25) as for all s in OE0; A. 1 /. The change of variables D A.˛/ in the right-hand side of (25) gives Finally, from (26) and (27), we obtain v 0 .s/ D .ˇ! n 1 / 2 .sin Â.s// 2 2n
Chiti's Reverse Hölder Inequality
In this section, we will prove the extension of the Chiti's Comparison Lemma, given for domains in the case of R 2 and R n in the original papers of Payne-Rayner [3,4], then, extended by Kohler-Jobin [5] and Chiti [6,28]. In [24], Proof. Using ( Now, an integration by parts in the second member of the last inequality and the fact that give Considering that is the minimum of the Rayleigh quotient on B  1 . / , it follows that this minimum is achieved for u ? , and so u ? is indeed an eigenfunction associated with on B  1 . / . Now, using the simplicity of the fundamental eigenvalue and the hypothesis of our Lemma, we get u ? D v. Since the functions u and v are nonnegative, and A. 1 / Ä vol.D/ (see (30)), it is then clear that We will first prove that v .0/ u .0/. Assume that v .0/ < u .0/. In this case, 9 Ä > 1, such that Ä v .0/ D u .0/. By Lemma 3.1, we have It follows from (22) and (28) that ' satisfies From '; define a radial function in B  1 . / byˆ.
Then,ˆis an admissible function for the Rayleigh quotient on B Â 1 . / . Using this fact, we proceed exactly as in the proof of inequality (35), we obtain It follows that the Rayleigh quotient ofˆis equal to ; therefore,ˆis an eigenfunction for . Consequently, u D v and so u .s/ D v .s/ in OEs 1 ; s 2 , which contradicts the maximality of s 1 and hence completes the proof of the theorem. .v .Á// p dÁ: We complete the argument using the following result Lemma 3.2 ([29]). Let R; p; q be real numbers such that 0 < p Ä q, R > 0; and f; g real functions in L q .OE0; R/.
If the decreasing rearrangements of f and g satisfy the following inequality: From this, it is clear, that for all q p Finally, combining this inequality and equality (1), we obtain the desired result. Now, assume that we have equality in (4), from the normalization for the function v given in (1), we deduce that for all p > 0 R D u p dv g DˇR B Â 1 . / v p dv g ? : Hence, vol.D/ Dˇvol.B Â 1 . / /; and since D ? and B Â 1 . / are geodesic balls of S n centered both at the north pole with the same volume, it yields that D ? D B Â 1 . / : By hypothesis is the fundamental eigenvalue of B Â 1 . / ; hence of D ? ; thus we obtain that 1 .D/ D 1 .D ? / D and this is possible if and only if the triplet .M; D; g/ is isometric to the triplet .S n ; D ? ; g ? /; ( one can see Theorem 5 in [9]). The proof of Corollary 1.2 is thereby complete.
Saint-Venant Theorem
Let .M; g/ be a compact Riemannian manifold of dimension n, without boundary satisfying R.M; g/ n 1, and let D be a bounded connected domain of M with smooth boundary. We are interested in the following geometric quantity where w is the smooth solution of the boundary value problem of Dirichlet-Poisson type The geometric quantity T .D/ is called the "torsional rigidity of D", and it is customary to call the solution w of (53) the warping function. In Theorem 1.3, we will give explicit upper bounds for the torsional rigidity T .D/ which amounts to a version of the Saint-Venant Theorem for compact manifolds. Let C 1 0 .D/ denote the space of C 1 functions with compact support in D. We define the Sobolev space H 1 0 .D/ as the closure of C 1 0 .D/ in H 1 .M / the space of square integrable functions with a square integrable weak derivatives. The variational formulation for T .D/ is given by Indeed, by the scaling property of the functional,ˆ.cf / Dˆ.f / for all c > 0, one can reformulate the minimizing problem of the functionalˆas a minimizing problem of the functional R D jrf j 2 dv g subjected to the constraint R D f dv g D 1. By the above mentioned Lagrange multipliers theorem, this gives the existence of the Lagrange multiplier , such that for any h 2 Hence, f is a weak solution of the equation By standard regularity results, f is unique and smooth. Then w D 1 f is also a critical point ofˆ. Finally, the fact thatˆ.w/ D 1 T .D/ proves the equality. We will now give the proof of the Saint-Venant Theorem.
Proof of Theorem 1.3. The proof follows the steps of Talenti's method (one can see [30]) tailored to our setting. For 0 < t Ä m D supfw.x/; x 2 Dg, we define the set and introduce the following functions The smooth co-area formula gives Then differentiating (58) with respect to t; one obtain Let w be the inverse function of V; w ? the radial function defined in D ? by w ? . / D w .A. //; and Since w ? is a radial decreasing function, its level sets D ? t are geodesic balls with radius r.t / D w ? 1 .t /. Therefore and Differentiating with respect to t , we get and On one hand, multiplying (64) by (65), we obtain On the other hand, the Cauchy-Schwartz inequality gives Next, the fact thatˇZ D ?
In the sequel, we will give a comparison theorem for the warping function in the case of a smooth compact Riemannian manifold. This theorem is based on a method of Talenti ([30]). which is the solution to the problem Then, w ? the symmetrized function of w satisfies where 0 Ä t < m. We introduce the function defined by Then The function ‰ is decreasing in t then, for h > 0; we have Letting h go to zero, we obtain, for the right derivative of ‰.t / Now, for  2 .0;  0 /, integrating this inequality from  to  0 ; we obtain which is the desired result. Now, assume that we have equality in (7), integrating this equality, we get Finally, by applying the Saint-Venant Theorem, we deduce that the triplet .M; D; g/ is isometric to the triplet .S n ; D ? ; g ? /; which completes the proof of the theorem.
Which is the result of Theorem 1.3. | 3,777.2 | 2015-04-10T00:00:00.000 | [
"Mathematics"
] |
IoT Based Electrical Devices Surveillance Control
. Analogue electronics devices are losing ground to digital electronics components in today's technologically evolved society. Another aspect of our increasingly digital society is the home device control system. A lot of people are chatting it up over the cellular phone network. These phone networks rely on WIFI modules as their foundation. A shorthand way to say "global system of mobile communication" is WIFI. In addition to being widely employed in industry, it is also used in several electronics projects undertaken by engineering students. It is possible to operate equipment from afar using WIFI-based projects. For instance, what if you wanted to be able to operate any equipment from afar using only your cell phone? Would it be possible? Sure, I can help you with that. Wi-Fi modules and cell phones make it easy to power on and off various gadgets. The home devices control system was developed to enable the control of home equipment from a mobile phone by using the aforementioned ideas.This document makes use of a regulated 5V, 1A power source. When controlling voltage, a 7805 three-terminal regulator is ideal. When a 230/12V step-down transformer's secondary AC output needs rectifying, a bridge-type full wave rectifier is the way to go.
Introduction
The pathway from power supplier to end user is the primary focus of our article.It used to be standard practice for the power board to dispatch workers to the customer side at the end of each instant to take metre readings.Typically, these workers are hired on a contract basis or are supplied by subordinate authorities on a contract basis exclusively.Because conflicts between authorities might lead to data loss, this task can quickly become tedious and difficult to do.[1] The final bills are produced and distributed to the consumers when the data obtained by the staff is relayed to the power board.This overall method is successful, but it might need some tweaks since it's expensive, time-consuming, and a pain to do.Additionally, due to different geographical and atmospheric circumstances, there are several locations that are difficult for personnel to access.[2] Research and development of smart electrical energy metre technology has been ongoing for almost a decade.A number of methods for quantifying power use have been developed.
After consumers create and deliver energy via one of various ways, the energy board will send them a bill.For instance, most Malaysian homes still utilise antiquated with electromechanical watt metres that do not include any kind of automation in their readings.the third users will be need to wait for their monthly energy bill in order to pay for their energy use.In a typical month, a member of the metre board billing staff will visit each residence to collect metre readings and deliver bills simultaneously.Electricity metres, often called energy metres, are devices that track how much power a home or company uses.The two most common kinds of metres used by domestic ordinary power consumers are threephase and single-phase.The kilowatt-hour (kWh) metre is used by all electrical services to monitor energy use.Afterwards, electronic metres were produced, which the electromechanical ones but used a digital system instead of an analogue one.Users may record the time and date of energy use in addition to voltage, power reading unit, current, and more using this system.[4] Thanks to globalisation, India is becoming a major market for many nations.Because of the rise in industry, the demand for power has skyrocketed.India is now experiencing a massive power outage.The electrical system has made significant progress in the previous forty years of planning, but it still isn't enough to meet demand.This means the nation has been dealing with a power outage for quite some time.Power outages are making already unpleasant summers in India's capital and other cities even worse, as temperatures continue to rise.Our country's businesses and economy are feeling the effects of the power outage.Because they are unable to utilise the electrical equipment that are essential to their daily lives, customers find no solace in power shortages and outages.The majority of people's electrical loads come from necessities like cell phones, fans, lighting, and the like.[5]
Literature Survey
The current setup here is to manage electrical equipment in the business sector by means of a remote control system that operates on radio frequency.By using a radio frequency (RF) based wireless remote-control system, it is possible to turn on or off electrical equipment from anywhere in the home or business, even when there is no direct line of sight.The controlling circuit is composed of a few passive components, an HT12E encoder and decoder, and an RF transmitter and receiver module that operate at 434 MHz.For the decoder, a relay links the four output channels to the appliances, and for the encoder, those same channels act as input switches.The circuit operates on 9 V and uses an amplitude shift keying (ASK) gearbox mechanism.Using radio frequency technology, this project aims to build a circuit that can operate in the absence of direct line of sight and does not need programming skills. [6]
Proposed System
Analogue electronics devices are losing ground to digital electronics components in today's technologically evolved society.Another aspect of our increasingly digital society is the home device control system.A lot of people are chatting it up over the cellular phone network.These phone networks rely on WIFI modules as their foundation.A shorthand way to say "global system of mobile communication" is WIFI.In addition to being widely employed in industry, it is also used in several electronics projects undertaken by engineering students.Projects that rely on WIFI allow users to remotely operate gadgets using mobile devices.[7] is cited.For instance, what if you wanted to be able to operate any equipment from afar using only your cell phone?Would it be possible?Sure, this article can help you with that.Wi-Fi modules and cell phones make it easy to power on and off various gadgets.The home devices control system was developed to enable the control of home equipment from a mobile phone by using the afore-mentioned ideas.[8] the block diagram of the suggested system shown in figure 1.
Wi-Fi module 4.1 Global System for Mobile Communication
WIFI, which stands for Global System for Mobile communications, reigns (important) as the world's most widely used cell phone technology.Cell phones use a cell phone service carrier's WIFI network.WIFI, short for "Global System for Mobile communications," is the de facto standard for mobile phone networks worldwide.By identifying and connecting to nearby cell phone towers, mobile phones are able to access the WIFI network provided by cellular service providers.Wireless cellular data transmission using the World Wide Web Protocol (Wi-Fi) is a universally recognized technology.In 1982, a group of experts came together under the acronym WIFI to establish a standard for mobile phones in Europe.Their goal was to develop requirements for a 900 MHz mobile cellular radio system that could be used throughout the continent.Many nations outside of Europe are expected to become WIFI partners.
Modem Specifications
The WIFI modem shown in figure 2 is SIM300 which is plug-and-play tri-band WiFi module for the suggested system.The SIM300 is a small and power-efficient module that supports voice, text, data, and fax via WIFI/GPRS900/1800/1900Mhz. It also has an industry-standard interface.The SIM300's top characteristics allow it to work with an almost infinite variety of applications, including WLL (Fixed Cellular Terminal) applications , M2M apps, portable devices, and many more.40x33x2.85mm tri-band WIFI/GPRS module Support for customized MMI and keypad/LCD technologies.
Mobile Station
The mobile station (MS) comprises of the radio handset, show, and computerized signal processors, while the SIM is a brilliant card.With the SIM, a client's very own versatility is
Base Station Subsystem
The Base Station Controller (BSC) and the Base Transceiver Station (BTS) make up the Base Station Subsystem.Like the rest of the system, they are able to work with components from various manufacturers because of the defined Abis interface that they communicate over.The Base Handset Station houses the radio handsets of a cell and is responsible for the conventions that interface the Portable Station to the radio organization.It is possible to install a huge number of BTSs in a broad metropolitan area.For a BTS to be considered, it must be affordable, dependable, portable, and tough.The Base Station Regulator is accountable for the radio assets of at least one Base Stations (BTSs).Then, at that point, it deals with setting up radio channels, bouncing frequencies, and taking care of handoffs.The versatile assistance exchanging focus (MSC) is associated with the cell phone through the BSC.In addition to its primary purpose, the BSC may change the voice channel from 13 kbps on the radio connection to the standard 64 kbps on ISDN.
Network Subsystem
Mobile services switching centers (MSCs) are the central processing units (CPUs) of a network.It handles everything needed to manage mobile subscribers, such as authentication, location updates, handovers, and routing calls to subscribers who are traveling.A standard PSTN or ISDN switching node is another one of its uses.The Network Subsystem is the collective name for the several functional units that work together to offer these services.The ISDN utilizes the ITUT Flagging Framework Number 7 (SS7), which is generally used in current public organizations, and the MSC gives the association with the public fixed network for motioning between utilitarian units.
To guarantee proper call routing and the potential worldwide transit of WIFI, the MSC collaborates with the Home Location Register (HLR) and the Visitor Location Register (VLR).For every WIFI network, the HLR stores all the subscriber's administrative data, including the GPS location of their mobile device.Versatile Station Meandering Numbers are standard ISDN numbers that are utilized to course calls to the portable station where the versatile is genuinely found.Despite its potential implementation as a distributed database, conceptually, each WIFI network has one HLR.
The Visitor Location Register keeps track of certain administrative data retrieved from the HLR for each mobile device currently under the authority of the VLR.This information is vital for call the board and the arrangement of membership administrations.Most producers of exchanging hardware utilize one VLR in mix with one MSC, notwithstanding how each useful element might be carried out autonomously.Since the MSC and VLR both handle the same geographical area, signaling may be simplified in this manner.The location records, and not the MSC, include information on individual mobile stations.
The other two registries primarily serve to authenticate and ensure security.The EIR is a database that records all authorized mobile devices on the network.It does this by utilizing the Global Portable Hardware Character (IMEI) to distinguish explicit versatile stations.An IMEI becomes invalid when it has been accounted for taken or isn't type approved.The Validation Place keeps a copy of the mystery key kept on the SIM card of each and every endorser to confirm their character and scramble the radio channel.All database information is protected.
Base Station Subsystem (BSS)
The BSS is made up of two sections: a.The BTS, or Base Transceiver Station, b.The computer that controls the base station.By exchanging data over the designated Abis interface, the BTS and the BSC can facilitate operations involving components manufactured by several vendors.Four, seven, or even nine cells might make up a BSS's radio components.It is possible for a BSS to have many base stations.The BSS uses the Abis interface for correspondence with the BTS and BSC.The next step is to establish a dedicated high-speed connection (T1 or E1) between the BSS and the Mobile MSC.The connection between the BSS and MSC is shown in figure 3. The base station transceiver subsystem (BTS) which is shown in figure 4 houses the radio transceivers that control the protocols for radio connections with the MS and identify a cell.It is possible to install a huge number of BTSs in a broad metropolitan area.Each network cell's transceivers and antennas are represented by the BTS.The typical location for a BTS in a cell is in the middle.The dimensions of a cell are defined by its transmitting power.The quantity of handsets in a BTS goes from one to sixteen, contingent upon the client thickness in the cell.All BTSs are relegated to specific cells.Works like encoding, decoding, multiplexing, regulating, and taking care of RF signs to the recieving wire are additionally essential for it.Different capabilities incorporate transcoding, rate variation, time and recurrence synchronization, voice through full-or half-rate administrations, irregular access identification, timing advances, and uplink channel estimations.[9]
The Base Station Controller (BSC)
At least one BTSs' radio assets are administered by the BSC.Arrangement of radio channels, recurrence jumping, and handovers are completely taken care of by it.For the versatile to speak with the MSC, the BSC should be available.Also, the BSC changes over the radio connection's 13 kbps voice channel to the 64-kbps standard channel utilized by PSDN or ISDN.Schedule openings and frequencies are appointed and delivered by it for the MS.Intercell handover is likewise overseen by the BSC.It directs the BSS and MS power transmission under its locale.The BSC's responsibility is to split the accessible time allotments between the MSC and the BTS.It deals with radio assets and capabilities as an exchanging gadget.Among its different capacities, it has attributes like as the executives of recurrence bouncing, utilizing traffic focus to eliminate MSC line counts, Interfacing the BSS's Tasks and Upkeep Center Exchanging frequencies between BTSs, The synchronization of time and recurrence Power the board, Time-postpone estimations of gotten signals from the MS.
Relay
A switch that is powered by electricity is called a relay.While different working principles are also used by relays, most of them employ an electromagnet to power their switching mechanisms and schematic view of the relay shown by figure 5.At the point when a low-power signal is expected to control a circuit or when a few circuits should be constrained by a solitary sign, transfers are utilized.The first reason for transfers was to copy and retransmit signals over significant distance broadcast associations.The consistent capabilities performed by transfers were generally utilized in early PCs and phone trades.Contractors are an exceptional sort of hand-off that can deal with the monstrous measures of power expected to drive an electric engine straightforwardly.A semiconductor device that is activated by light is used to operate power circuits in solid-state relays, which eliminate the need for moving components.Computerized gadgets alluded to as "security transfers" in the present electric power frameworks complete similar obligations as their simple ancestors, which included aligned working attributes and, at times, various working curls, to protect electrical circuits against over-burden or blunders.
Arduino Uno
With their single-board micro controllers and micro controller kits, the open-source hardware and software business, project, and user community known as Arduino creates digital gadgets and interactive things with physical and digital sensing and control capabilities.The General Public License (GPL) and the Lesser General Public License (LGPL) are used to license its goods, which means that anybody may make Arduino boards and distribute the software.You may get Arduino boards in two different forms like preassembled and as DIY kits.
The designs of Arduino boards make use of a wide range of controllers and microprocessors.Figure 6 shows the front view of the Arduino Uno and figure 7 shows the Back side view.The boards are associate with different circuits and development circuits utilizing the computerized and simple I/O pins that accompany them.A few sorts of boards have sequential communications interfaces, like All Universal Serial Bus (USB), which might be utilized for programming from PCs.The microcontrollers are typically coded utilizing a blend of C and C++ highlights.The Arduino project offers an IDE based on top of the Handling language project, notwithstanding the standard compiler toolchains.In 2003, students at Italy's Interaction Design Institute Ivrea launched the Arduino project with the goal of giving anybody, from complete beginners to seasoned pros, a simple and inexpensive means to build devices that can sense and respond to their surroundings.Common examples of gadgets like this aimed at novice hobbyists include basic thermostats, motion detectors, and robots.The name Arduino is gotten from a bar in Ivrea, Italy, where a couple of the undertaking's organizers would assemble.The bar was called after Arduin of Ivrea, who filled in as both lord of Italy from 1002 to 1014 and margrave of the Walk of Ivrea.
PIN Capability
The Arduino is open-source software and we can also get the equipment reference plans on the Arduino site; they're given under an Innovative Hall Attribution Offer The same 2.5 permit.You may likewise get creation and design documents for specific equipment variations here.The makers have asked that the name Arduino stay remarkable to the first item and not utilized for subordinate items without consent, regardless of whether the product and equipment plans are openly available under copyleft licenses.In the authority strategy proclamation on the utilization of the Arduino name, it is focused on that the undertaking will incorporate work from others into the last conveyance.A number of commercially available Arduino-compatible devices have used names that finish in -Arduino to sidestep the project name.
The 8-bit AVR microcontrollers used on most Arduino boards are manufactured by Atmel.These microcontrollers come in a variety of models, each with its own set of features, pinout, and flash memory capacity (ATmega8,[24] ATmega168, ATmega328, ATmega1280, ATmega2560).Atmel introduced the 32-bit Arduino Due in 2012 and it used the SAM3X8E as its foundation.To make it easier to program and integrate with other circuits, the boards employ female headers or single-or double-row pins.They have the option to link up with supplementary modules called shields.An I²C serial bus allows for the independent addressing of several, and even stacked, shields.Most boards come with a ceramic resonator or crystal oscillator that operates at 16 MHz and a 5 V linear regulator.Arduino microcontrollers are pre-stacked with a boot loader that works with the transferring of projects to their on-chip streak memory.As a default, the Arduino UNO utilizes the Opti boot bootloader.Boards may be programmed by connecting them to an external computer using a serial connection.To transform signals from RS-232 logic levels to TTL levels, a level shifter circuit is included into certain serial Arduino boards.The FTDI FT232 and other USB-to-serial converter chips allow modern Arduino boards to be programmed using the Universal Serial Bus (USB).Some boards may include an AVR chip with USB-to-serial software instead of an FTDI chip; this device may be flashed with fresh settings using its own ICSP header.This is common on later-model Uno boards.Bluetooth, a separable USB-to-chronic converter board, or both are utilized by a few variations, for example, the unapproved Boarduino and the little Arduino Small.While managing customary microcontroller devices rather than the Arduino IDE, standard AVR in-framework programming (ISP) is utilized.
There is a plethora of boards that are either directly or indirectly related to Arduino.Some of them are fully compatible with Arduino and may be used in place of it.To make building buggies and tiny robots easier, several people improve upon the original Arduino by adding output drivers; these boards are often used in educational settings.Others are functionally similar but have a different shape; some of these variants are compatible with shields while others are not.There are versions that employ processors that aren't compatible with each other.Using WIFI, the electrical load management system connection diagram is shown in figure 9.The energy meter comes after the device's connection.A current sensor measures the total load current and sends the microcontroller a voltage signal that corresponds to it.To find the current, the microcontroller will transform the analogue reading into a digital one.After that, it checks the current reading against the default value.Do nothing when the current reading is below the threshold you've defined.The load will lose power as soon as the current reading goes beyond the set limit.Following a drop in load below the threshold, the user must re-engage the relay.
Operation of proposed surveillance system
All the power consumption sites should have a device installed so that we can monitor the power consumption of individual loads at any given moment.Every customer has a certain amount of electricity that they are not allowed to exceed during the power outage.This way, we can meet the needs of all our customers.There is a power deficit, which determines the power limit.Using IoT technology, the device may be informed of the limit's value by SMS.In response to the current limit value, the controller will update the initial limiting value to reflect the current limit.In addition, we are also offering customers the ability to remotely turn their electrical loads on and off using text message.Separate passwords for the customer and the energy supplier guarantee the system's security by preventing the customer from setting the power limit value.At each given consumer point, the power supplier or distribution center may activate or deactivate loads, as well as establish current limit values.
The hardware experiment setup with light and fan control was shown in figure 9.A relay coil, Arduino uno board and power supply, metering equipment, Wi-Fi module with IoT With facility and LCD display were connected as per connection diagram.The experiment was conducted to control and operate the bulb and fan in various modes, and it was showing satisfactory results.The fan and light were operation on the control of proposed surveillance system which is shown in figure 10.
Conclusion
This article proposes a simple and quick process for surveillance system with the help of IoT with Arduino.Additionally, we have mitigated the drawbacks of the existing method using the proposed method.It is possible to set up direct, user-friendly interfaces between distributors and consumers without any third-party influence.Along with this, the system's accuracy improves when the number of mistakes decreases.In light of the above, it is safe to say that a user-friendly, cost-effective, and highly accurate system analysis is within reach and the proposed system provides all the above said features.
Figure 1
Figure 1 Block Diagram of the proposed system
Figure 2
Figure 2 Wi-Fi modem guaranteed, permitting them to get to all membership benefits no matter what the terminal's area or use.A user may access their subscription services, make and receive calls on another WIFI cellular phone, or both only by putting the SIM card into another phone.International Mobile Equipment Identity (IMEI) is a framework that interestingly recognizes cell phones.A International Mobile Subscriber Identity (IMSI) is put away on the SIM card notwithstanding the endorser's very own subtleties and a confirming mystery key.The autonomy of the IMEI and the IMSI considers the opportunity of portability of people.The SIM card might be safeguarded with a PIN or secret word to forestall unapproved use.
Figure 3 Figure 4
Figure 3 BSS and the Mobile MSC
Figure 5
Figure 5 Schematic diagram of Relay
Figure 6
Figure 6 Front view of the Arduino Uno Figure 7 Back side of module
Figure 9
Figure 9 Connection diagram of surveillance system | 5,250 | 2024-01-01T00:00:00.000 | [
"Engineering",
"Computer Science",
"Environmental Science"
] |
Large-Scale Preparation of Long-Chain ADMET Synthons
Abstract We report a convenient process with minimal purification to produce large quantities of α,ω-alkenyl alcohols. These reagents are indispensable precursors in ADMET chemistry. Icos-19-en-1-ol, nonacos-28-en-1-ol, and octatriacont-37-en-1-ol were produced effortlessly in large quantities (up to 45 g in a single batch) from undec-10-en-1-ol. By extension of the method, any desired methylene run length in the ADMET precursor can be achieved. GRAPHICAL ABSTRACT
INTRODUCTION
Acyclic diene metathesis (ADMET) polymerization, first reported in 1991 by the Wagener research group, [1] has been extensively studied since then [2] as a versatile tool to access precision polyolefins. [3][4][5] These polymers feature a functionality positioned at unequivocal intervals along a polymer chain and display enhanced properties as compared to their random counterparts. For instance, in the precision butyl-branched polyethylene, a linear relationship between the branch frequency and fusion temperature of the polymer was reported. [6] Precision polyolefins are synthesized from their corresponding symmetrical a,x-diene monomers. The olefin moiety and the desired number of methylene spacers are in many instances introduced with the use of a,x-alkenyl bromides [7][8][9] or with a,x-alkenyl alcohols. [10] Reducing the functionality frequency along the chain in precision polymers correlates with augmenting the methylene run length in the monomer, and therefore in alkenyl synthons. These intermediates are commercially available or can be synthesized [11] for methylene run lengths up to 10; however, no convenient methodology to produce long-chain a,x-alkenyl synthons has been reported thus far, although there is a synthesis to prepare an a,x-alkenyl bromide precursor containing 36 methylene spacers, with 4 leading to a butyl branch on every 75th carbon (Fig. 1).
Nonetheless, onerous purification processes were precluded from yielding more than milligrams of the polymer of interest, supplementary work exploiting such lengthy spacers has been pursued. This small-scale limitation has also confined precision polymers to spectral and morphologic characterization only. Larger scales are strongly desired for further analyses and mechanical studies, leading to the evident need for a convenient synthetic route to access ADMET precursors with lengthy spacers in large quantities. This is the object of the research, a result never achieved before. We report a dependable synthesis based on a series of well-established reactions, for an iterative and tunable n-carbon homologation of a,x-alkenyl alcohols on the multidecagram scale, thereby offering an easy route to alkenyl synthons of any desired methylene run length.
RESULTS AND DISCUSSION
Preparation of nonisomerized alkenyl synthons is paramount in the design of flawless precision polymers. While internal olefins would remain active toward most metathesis catalysts, [12] albeit at a lower conversion rate, [13] an ill-defined material with a statistical distribution of methylene run lengths would result from the use of an isomerized reagent, thereby destroying the precision character. Detectable isomerization by 1 H NMR or infrared (IR) inevitably disqualifies any starting material. Commercially available synthons must be tested prior to use as some lots may contain up to 5% isomerized reagent, based on our experience. Our new methodology avoids isomerization completely. The synthesis relies on two building blocks: the parent a,x-alkenyl alcohol to be homologated by n carbons, and an a,x-bromoalcohol containing n methylene spacers as a homologation agent. We are interested in synthesizing alkenyl synthons containing 18, 27, and 36 methylene spacers. 9-Bromo-1-nonanol [14] (1) is the sole homologation reagent utilized in this report.
The methodology for the homologation of a,x-alkenyl alcohols is presented in Scheme 1. To simplify our presentation, a,x-alkenyl alcohols containing X methylene spacers are abbreviated XspOH. Initial investigations were conducted on a gram scale and all the intermediates shown in the sequence were isolated while establishing the methodology.
Step-to-step purifications were not necessary.
tert-Butyldimethylsilyl chloride was chosen to protect 1 for its excellent stability in most environments and facile cleavage. [15][16][17][18] Following a recently published high-yield protection procedure by Stawinski and coworkers, [19] the corresponding silyl ether 2 was obtained in 98% yield in only 10 min. Batches as large as 150 g could be purified using simple silica plugs. Commercially available aldehyde 3 can shorten the synthesis although lots containing detectable levels of isomerization must be discarded. Compound 3 was also synthesized by oxidation of undec-10-en-1-ol (9spOH) with pyridinium chlorochromate, affording the desired intermediate in 90% yield. Silyl ether 2 was treated with magnesium, and the corresponding Grignard reagent was reacted with the previously prepared aldehyde 3, affording secondary alcohol 4 in 96% yield. Xanthate 5 was obtained by treatment of 4 with sodium hydride, carbon disulfide, and methyl iodide at 0 C to attenuate the formation of undesired products. [20] Reductive deoxygenation of the xanthate was first attempted under standard Barton-McCombie conditions, [21] but an unidentified by-product had to be arduously removed from deoxygenated intermediate 6.
As the occurrence mentioned previously would have most certainly precluded scale up, an alternative deoxygenation procedure was done. Using hypophosphorous acid [22] instead of tributyltin hydride successfully promoted 6 and alleviated our purification concerns. The presence of free radicals may induce olefin isomerization; [23] however, no detectable levels of isomerization by 1 H NMR resulted from this reaction. Typically, NMR experiments with concentrated samples (400 mmol=mL CDCl 3 ) and large number of transients (512 or more) enable the detection of low levels isomerization (0.5% and above). Subsequent deprotection of the resulting deoxygenated compound was done by refluxing an acetone-water mixture containing 5 mol% of copper(II) chloride dihydrate, as demonstrated by Tan et al. [24] 18spOH was isolated as white flakes with limited solubility with an overall yield of 59% for the first homologation.
The synthesis was repeated on a large scale with no systematic purification after each step. No systematic purification is key to the success we report herein.
LONG-CHAIN ADMET SYNTHONS 2411
In fact, each subsequent reaction was performed using the crude product of the former, with the exception of 3 and 4, which were refined by distillation and flash column chromatography respectively. The desired homologated alcohol 18spOH was obtained pure after recrystallization in pentane (about 5 mL pentane per gram of 18spOH). The synthetic sequence afforded approximately 45 g of 18spOH from 45 g of 9spOH, equivalent to an overall 58% yield.
With the intention of producing 27spOH, the synthetic sequence was similarly repeated with limited purifications (Scheme 2). Compounds 7 and 8 were both purified by flash column chromatography, and 27spOH was recrystallized from hexanes. A yield of 20 g of the latter resulted from 26 g of 18spOH, corresponding to 55% overall yield for the second homologation.
Likewise, 36spOH, which in our previous work was especially difficult to make, was produced with only two purification steps (Scheme 3). Purification of 7 was performed by flash column chromatography using near boiling eluent mixtures (50-55 C) to circumvent the lower solubility of the intermediate. The subsequent xanthate was formed at higher temperatures and reduced concentration as 7 was not soluble in tetrahydrofuran (THF) at 0 C. As for the deprotection, twice as much solvent (in comparison with the deprotection of 6) as well as addition of heptane was necessary to solubilize the parent silyl ether of 36spOH. The unprotected alcohol precipitated as the reaction proceeded, and fully crystallized as white pure flakes upon cooling of the reaction mixture. Nearly 10 g of 36spOH were obtained from 16 g of 27spOH corresponding to 48% overall yield for the third homologation. The reduced overall yield was attributed to the limited solubility of the intermediates, not to the actual chemistry at hand. In most instances, a diminished conversion of the starting materials was observed. 36spOH was synthesized from 10-undecen-1ol after three consecutive 9-carbon homologations. Based on our reported yields, 48 g of 36spOH could be prepared from 100 g of 9spOH.
CONCLUSION
To conclude, we report a successful, convenient, and reliable synthetic route to prepare large quantities of lengthy a,x-alkenyl alcohols without isomerization of the olefin moiety. We effectively prepared alkenyl alcohols containing 18, 27, and 36 methylene units. The alkenyl alcohols can be transformed straightforwardly to their corresponding alkenyl bromides counterparts, building blocks of precision polymers (Scheme 4).
We are convinced that the straightforward and limited number of purifications can indubitably promote a fast production of alkenyl synthons on a much greater scale than we described in this report. Virtually any methylene run length can be accessed by tuning the homologating agent.
Icos-19-en-1-ol (18spOH)
In a 1-L, flame-dried, three-necked, round-bottom flask, 4 (97.2 g, 227.8 mmol) and THF (300 mL) were added. The solution was cooled to 0 C and sodium hydride 60% in mineral oil (15.1 g, 378.1 mmol) was carefully added. After 30 min, the mixture was allowed to slowly warm up to room temperature. After 2 h, the reaction was cooled to 0 C, and carbon disulfide (52.0 g, 683.2 mmol) was added dropwise. After 3 h, the mixture was allowed to slowly warm to room temperature. Eight h later, the reaction was cooled to 0 C, and methyl iodide (48.4 g, 341.6 mmol) was added dropwise to the reaction. After 1 h, the reaction was allowed to slowly warm to room temperature, after which the mixture became more viscous. Four h later, the reaction was cooled to 0 C and was carefully quenched with a saturated ammonium chloride solution. Diethyl ether was added, and aqueous phase was extracted with diethyl ether. The combined organic layers were washed with brine and dried over sodium sulfate. Removal of the solvents in vacuo afforded an impure orange oil (124.3 g). Triethylamine (350 mL, 2.51 mol) and a hypophosphorous acid solution in water Scheme 4. Synthesis of precision polymers from alkenyl alcohols.
LONG-CHAIN ADMET SYNTHONS
(50% w=v, 150.3 g, 588.7 mmol) were added to a 2-L flask equipped with a condenser containing the impure oil and dioxane (770 mL). The mixture was refluxed and an AIBN solution (7.48 g, 45.55 mmol) in dioxane (100 mL) was added continuously over 4 h. After disappearance of the starting material by thin-layer chromatography (TLC) analysis, the solvents were removed in vacuo, and 100 mL of hexanes were added. The resulting mixture was passed through a silica plug using hexanes=ethyl acetate (98:2) as the mobile phase. The solvents were removed in vacuo, affording a slightly yellow oil (68.4 g). Water (82 mL) and copper(II) chloride dihydrate (1.94 g, 11.39 mmol) were added to a 2-L flask equipped with a condenser containing the yellow oil and acetone (1.6 L). The green mixture was refluxed for 2 h, after which the solvents were removed in vacuo. The resulting solid was dissolved in hexanes, and the solution was filtered over a bed of celite. The solvent was removed in vacuo, and the crude solid was recrystallized in pentane (250 mL), affording 18spOH as pure white flakes (45.3 g, 67% from 4).
FUNDING
We gratefully acknowledge the National Science Foundation (DMR-1203136) and the Army Research Office (W911NF-13-1-0362) for the financial support of this research. | 2,508.8 | 2014-05-05T00:00:00.000 | [
"Physics"
] |
Weighted spectral clustering for water distribution network partitioning
In order to improve the management and to better locate water losses, Water Distribution Networks can be physically divided into District Meter Areas (DMAs), inserting hydraulic devices on proper pipes and thus simplifying the control of water budget and pressure regime. Traditionally, the water network division is based on empirical suggestions and on ‘trial and error’ approaches, checking results step by step through hydraulic simulation, and so making it very difficult to apply such approaches to large networks. Recently, some heuristic procedures, based on graph and network theory, have shown that it is possible to automatically identify optimal solutions in terms of number, shape and dimension of DMAs. In this paper, weighted spectral clustering methods have been used to define the optimal layout of districts in a real water distribution system, taking into account both geometric and hydraulic features, through weighted adjacency matrices. The obtained results confirm the feasibility of the use of spectral clustering to address the arduous problem of water supply network partitioning with an elegant mathematical approach compared to other heuristic procedures proposed in the literature. A comparison between different spectral clustering solutions has been carried out through topological and energy performance indices, in order to identify the optimal water network partitioning procedure.
Introduction
Civil engineering networks regard different infrastructures (e.g. transport, energy, phone, internet, water, gas, logistic). Water Distribution Networks (WDNs) are among the most important civil networks, because they deliver drinking and industrial water to metropolitan areas. From a topological perspective, a WDN with multiple interconnected elements may be represented essentially as a link-node planar weighted spatially organized graph for which pipes (and valves) correspond to links m, and nodes/junctions (such as pipe intersections, water sources and nodal water demands) correspond to graph nodes n. Planar graph have vertices whenever two edges cross, whereas nonplanar graphs can have edges crossing but not forming vertices (Boccaletti et al., 2006). WDN belong to the class of networks with nodes occupying precise positions in two or three-dimensional Euclidean space, edges being real physical connections, and strongly constrained by their geographical embedding (Boccaletti et al, 2006), like other spatially organized urban infrastructure systems (Carvalho et al, 2009;Newman, 2003).
In an abstract modelling context, a mathematical graph can be used to express the relationships between groups of linked nodes. An important aspect of spatial networks is that node degrees are constrained, as the number of possible connections to a single node is physically limited. Furthermore, in WDN it is unlikely to have direct connections between very distant nodes, so that significant limitations to the small-world behaviour of such networks arise (Boccaletti et al, 2006). In particular, little variability is observed in the connectivity patterns of the nodes in WDN, no hubs (nodes with much more connections than the others) are present, and most of the nodes have very low degree (usually two or three, and mostly less than five), so in general they present a fairly homogeneous degree distribution (Di Nardo et al, 2015a). Furthermore, such networks are also equally sensitive to random or malicious failures (Barthelemy and Flammini, 2008). WDN can be considered as complex networks for many reasons (Mays, 2000): they are often very large (up to tens of thousands nodes and links); they are buried underground, and thus are not easily accessible for monitoring and maintenance; they are strongly looped; their modelling includes non-linear equations requiring sophisticated numerical resolution methods; they often present severe water losses. Compared to other civil networks (e.g. gas, electricity, transport, telephone, internet), some of these WDN characteristics are peculiar, and make their management arduous, with many operational problems (such as water and energy losses). For all these reasons, in the last decades, the scientific community has proposed different approaches to improve WDNs management, without compromising their main function, i.e. providing water to end users ensuring a minimum level of service.
In this context, the implementation of the paradigm of "divide and conquer" in a WDN allows simplifying the management, defining sub-systems named District Meter Areas (DMAs), by inserting gate valves and flow meters along network pipes, properly selected, in order to define a Water Network Partitioning (WNP). In this way, it is possible to improve water losses identification (Water Industry Research Ltd, 1999), control district pressure (Alonso et al, 2000), and protect users from accidental and intentional contamination (Di Nardo et al, 2015b), because these activities are simpler to achieve if the network is divided in sub-systems. By dividing the water network in DMAs, implementing innovative Information and Communications Technology (ICT) remote-controlled devices and big data analysis, it is possible to change the traditional approach to the management of WDN, transforming the water systems into modern Smart Water Network (SWAN) (Di Nardo et al, 2016a), considered as part of Smart Cities. It is important to underline that, to define a good WNP, it is necessary to satisfy two crucial major requirements for the optimal functioning of a WDN: 1) network connectivity, i.e. each demand node of the water network must be connected to at least one water source, and 2) nodal minimum pressure, i.e. each node must have a pressure equal or higher than the minimum level of service that allows satisfying the water demand of the users. Therefore, the design of a WNP, as any problem of network subdivision, is a complex challenge for operators, because the permanent partitioning changes the original topological layout of water systems. Indeed, network partitioning, achieved by pipe closures, reduces the overall pipe section availability, with the consequent decrease of network water pressure, especially during peak hours, worsening the level of service offered to users. In the last years, different procedures have been proposed in the literature for finding an optimal WNP layout (reviews are given in Di Nardo et al, 2013a;Perelman et al, 2015), essentially based on heuristic algorithms and optimization procedures. Generally, they consist of two different phases: a) clustering, aimed at defining the shape and the dimensions of the network subsets, based on different theories, among which: graph theory algorithms, obtaining the number of independent sectors through connectivity analysis, (Tzatchkov et al, 2006); identifying the pipes along which to insert hydraulic devices by searching minimum dissipated power paths using graph theory principles (Di Nardo et al, 2013a;Alvisi and Franchini, 2014); with an optimization model solved by a simulated annealing algorithm with an objective economic function (Gomes et al, 2012); based on shortest path search with dissipated power weight on pipes and refining through an objective function of the Genetic Algorithm based on network mean pressure (Di Nardo et al, 2013b); spectral approach with spectral clustering algorithm applied to adjacency matrix with different supply constraints (Herrera et al, 2010) or recursive bipartition of the graph through weighted graph Laplacian matrix (Di Nardo et al, 2017); multi-agent approach taking into account multiple interacting agents of WDN (Izquierdo et al, 2011); community structure, based on social network theory and graph partitioning algorithms (Di Nardo et al 2015a) or with an automatic identification of boundaries on the basis of the property that density of edges within communities should be higher than between them (Diao et al, 2013); b) dividing, aimed at physically partitioning the network, by selecting pipes for the insertion of flow meters or gate valves: based on recursive bisection procedure and an algorithm for graph traversal to verify the reachability of each district from the water source and node connectivity (Ferrari et al, 2014); on genetic algorithms implementing an automatic heuristic optimization technique for DMAs definition with minimum hydraulic deterioration (Di Nardo et al., 2015c, 2016b, with the objective of identifying the optimal layout that minimises the economic investment and the hydraulic performance deterioration. Generally, such a two steps approach allows simplifying the water network partitioning, as, once the optimal node clustering is identified, then it becomes the starting point of the subsequent dividing phase. It is worth to highlight that the proposed procedures can be more effective if the clustering phase takes into account some hydraulic features of the network (i.e., energy, geometry), as reported in other studies (Di Nardo et al., 2013a, 2016a) depending on the adopted clustering algorithm. To such aim, in this work, the most important energy parameters are taken into account for the clustering stage.
This paper, extending a previous basic work (Di Nardo et al, 2017), aims at investigating the feasibility of adopting weighted spectral clustering to identify the optimal sub-graphs layout, comparing different weights of pipes and different spectral methods (von Luxburg, 2007), and then, subsequently, to define the optimal water network partitioning not only from a topological but also from a hydraulic point of view.
Methodology
As described above for other approaches, the proposed procedure consists of two distinct phases (Di Nardo et al, 2016a), separately described in the following subsections.
Phase 1: water network clustering As known, considering a simple graph G = (V,E), where V is the set of n vertices v i (or nodes) and E is the set of m edges e l (or links), a k-way graph clustering problem consists in partitioning V vertices of G into k subsets, P 1 , P 2 ,…, P k such that:⋃ k 1 P k ¼ V (the union of all clusters P k must contain all the vertices V i ), P k ∩P t = Ø (each vertex can belong to only one cluster P i ), Ø ⊂ P k ⊂ V (at least one vertex must belong to a cluster and no cluster can contain all vertices) and 1 < k < n (the number k of clusters must be different from one and from the number n of vertices). Clustering is usually defined in terms of weighted, undirected graphs, where weights correspond to either similarity scores, or distances, or, more generally, they express the strength of the link between elements in order to define sub-graphs which take into account proximity and/or similarity between elements.
Graph clustering can be achieved with many procedures aimed to define the optimal layout of each cluster, finding community structures minimizing or maximizing an objective function that emphasizes one of the clustering aims. In literature (wide reviews are provided in Boccaletti et al, 2006;Fortunato, 2010), several procedures were proposed: k-means; Markov cluster algorithm; spectral methods (as optimization algorithm of the cut problem, such as min-cut, ratio-cut, normalized-cut); hierarchical clustering; modularity; multi-level-recursive algorithm, Girvan and Newman algorithm and some other methods.
In recent years, spectral clustering, based on eigenvectors and eigenvalues of the graph Laplacian matrices (defined hereinafter), has become one of the most popular clustering algorithms (Chung, 1997;Saerens et al., 2004;von Luxburg, 2007), because it can be solved by standard linear algebra software developed by the authors in MATLAB™ (SimuLink Reference Books 2006) and so it is easy to implement. So, in this paper, the clustering phase to define sub-graph for the subsequent dividing phase has been achieved with different weighted spectral clustering techniques, investigating the effectiveness of this approach and the optimal choice of weights. As known, the main tools for spectral clustering are graph Laplacian matrices and, in the following, G is assumed as an undirected, weighted graph with weight matrix W ω , where w ij = w ji ≥ 0. In particular, as explained above, different weights have been adopted for the pipes to investigate which of them provides the best results. The choice of pipe weights is crucial, as different weights lead to significantly different layouts of the districts. As aim of the partitioning is to identify a balanced layout of the districts (i.e. districts with similar dimensions) least affecting the hydraulic performance of the network (i.e. minimising the unavoidable increase of head losses), pipe characteristics related to hydraulic resistance have been here tested as weights.
Given a graph G = (V, E), the adjacency nxn matrix A (in the following indicated as W A and corresponding to the no-weight matrix) expresses the connectivity of the graph, where elements a ij = a ji = 1 indicate that there is a link between nodes i and j and a ij = a ji = 0 otherwise.
Three spectral clustering methods have been tested. The first one, which solves relaxed versions of the RatioCut problem (von Luxburg, 2007), is based on the eigenvalues of the unnormalized graph Laplacian L, defined as: where D k = diag(K i ) and K i is the degree of a node i. The other two methods, both solving relaxed versions of the NCut problem (Shi and Malik, 2000), belong to normalized spectral clustering, as they use the eigenvalues of normalized graph Laplacian. In particular, the normalized spectral clustering according to Shi and Malik (2000) is based on the normalized Laplacian L rw , closely related to a random walk (von Luxburg, 2007) and defined as: The third tested method is the normalized spectral clustering proposed by Ng et al. (2001), based on eigenvectors of the normalized Laplacian L sym , a symmetric matrix defined as: The above mentioned three spectral clustering algorithms have been applied to identify the optimal clusters in a WDN. Namely, the tested W ω matrices have been: W A (i.e.no weights are given to the pipes, so to take into account only the connectivity); W D (weight equal to pipe diameter D, related to pipe hydraulic resistance in formulas with exponent close to -5); W 1/L (weight equal to the inverse of pipe length, linearly related to pipe hydraulic resistance); W C (weight equal to pipe conductance, here assumed as proportional to D 5 /L, under the simplifying hypothesis that all the pipes in the network share the same roughness coefficient); W F (weight equal to pipe flow, indirectly related to both pipe hydraulic conductance and water demand distribution at nodes).
Specifically, the clustering phase for the proposed water network partitioning consists of the following steps: 1. abstraction of the water supply network as a graph G = (V, E); 2. definition of adjacency matrix and pipe weight matrices W ω as defined above; 3. computation of the spectrum of unnormalized Laplacian matrix based on adjacency matrix in order to define the best number of clusters, k, according to the k-smallest eigenvalue, as explained below; 4. computation of the first k eigenvectors of unnormalized and of two normalized Laplacian matrices for all weight matrices W ω ; 5. definition, for all the weights and for the three spectral algorithms, of the matrix U nxk containing the first k eigenvectors as columns; 6. clustering the nodes of the network into clusters C 1 ,…,C k using the k-means algorithm applied to the rows of the U nxk matrix; 7. check of the continuity of the obtained clusters C k ; 8. definition of the set of edge-cuts (or boundary pipes) N ec .
The boundary pipes are links for which the start node and the end node belong to different clusters C k.
It is important to highlight that in all three algorithms, an important aspect is to change the representation of the nodes n from Euclidian space to points of the matrix U nxk, that enhances the cluster-properties in the data, so that clusters can be trivially detected in the new representation, in particular, through the simple k-means clustering algorithm (Tibshirani et al., 2001;von Luxburg, 2007).
Phase 2: water network dividing
Phase 1 provides the edge-cut between clusters, i.e. the set of N ec boundary pipes along which gate valves or flow meters must be installed. First, the number N fm of flow meters to be inserted in the network is chosen, so that the remaining boundary pipes N bv = (N ec -N fm ) are closed by inserting gate valves. In order to simplify the water budget computation, it is better to keep N fm as low as possible (Di Nardo et al, 2016b). This problem can be assimilated to a valve placement problem in WDNs. This is a NP-hard problem (Bodlaender et al., 2010) and it requires heuristic algorithms to find optimal solutions (Tindell et al., 1992). In other terms, once defined all the e ij boundary pipes between clusters, those that must be closed must be chosen among all the possible combinations N DL of water network partitioning layouts, expressed by the binomial coefficient: It is important to underline that N DL can be, already for a small water supply network and for a small number k of DMAs, such a huge number that it is often computationally impossible to investigate all the solution space.
However, closing pipes to divide the districts significantly changes the network layout, reducing the topological connectivity and the energy redundancy and, consequently, worsening the hydraulic performance.
Therefore, an optimization technique has been developed, in order to find, once fixed the number of flow meters N fm , the optimal choice of the boundary pipes along which gate valves are to be inserted, by minimizing the alteration of the hydraulic performance and of the level of service for the users. This aim has been achieved by a heuristic procedure carried out with a Genetic Algorithm (GA) developed by the authors (Di Nardo et al, 2016a), maximizing the following objective function: corresponding to the total nodal power of the network (Di Nardo et al., 2013a), in which γ is the specific weight of water, z i , h i and Q i are, respectively, the geodetic elevation, the pressure and the water demand at the i-th node. The GA parameters are the following: each individual of the population is a sequence of N ec binary chromosomes corresponding to the pipes belonging to the edge-cut set; the l-th chromosome is set to 1 if a gate valve is inserted along the corresponding l-th pipe, while it is set to 0 if a flow meter is inserted. The GA has been carried out with 100 generations and with a population consisting of 500 individuals with a crossover percentage equal to P cross = 0.8.
In order to compute the objective function, hydraulic simulations are carried out in the GA. They are carried out using the freeware software EPANET2 (Rossman, 2000), that numerically solves the non-linear hydraulic equations of the water system.
Finally, after the dividing phase, hydraulic simulations are required to compute some performance indices (Di Nardo et al., 2015c) aimed at evaluating the hydraulic performance of WNP and so allowing to compare different layouts.
Case study
The effectiveness of the proposed procedure has been tested for the real case study of the water supply network of Parete, a town with 10,800 inhabitants, located in a densely populated area near Caserta (Italy). The water network has two sources and its main topological and energy characteristics are reported in Tables 1 and 2, respectively.
The network consists of m = 282 links and n = 184 nodes and, from a topological point of view, in agreement with most real systems, it is a sparse network, so it is not fully connected and its number of edges m < <n 2 , with a link density value (i.e. the ratio between the actual number of links and the number of links of a fully connected network with the same number of nodes) q = 0.017. As the number of edges that can be connected to a single node is limited by the physical space in spatial networks (Boccaletti et al, 2006), average node degreeK =3.05 is small. The case study shows a small average path length APL = 8.80, presenting itself as a cohesive and robust network (Yazdani and Jeffrey, 2011) as well as the value of graph diameter Dm = 20 shows that the nodes are mutually and easily reachable and that the network is ordered in a decentralized fashion (Yazdani and Jeffrey, 2010), which is an important aspect for an efficient communication (the flow in the case of hydraulic networks). Concerning the main spectral measurements, the "spectral gap" Δλ (the difference between the two largest eigenvalues of the adjacency matrix) is equal to 0.062 and the "algebraic connectivity" λ 2 (Fiedler, 1973) (the second smallest eigenvalue of the Laplacian matrix) is equal to 0.021. Both these values are small, showing that the graph arrangement can be decomposed into isolated parts (clusters or districts) (Estrada, 2006).
The hydraulic performances of the water supply network of Parete, reported in Table 2, is good in terms of maximum and mean nodal pressure heads, with h max and h mean higher than the design pressure head h * = 25 m (the pressure head required to satisfy water demand at all nodes). Conversely, the minimum pressure head h min is lower than h * , indicating that in some nodes the design pressure requirement is not fulfilled. Consequently, the system shows little energy resilience and so a "low availability" of the water system to be partitioned without a decrease in hydraulic performance (Greco et al, 2012). In Table 2 the value of the input power P A (a global performance index measuring the amount of energy entering the water system through the reservoirs and provided by pumps) is also reported. Following the steps of the proposed methodology for water network partitioning, WDN first can be seen as a graph (step 1) G = (V, E) in which V is the set of n vertices v i (the junction, the delivering nodes and the reservoirs) and E is the set of m edges e l (the pipes connecting the nodes). Then, all the above defined five weight matrices W ω are computed (step 2) and used to choose the most appropriate number k of clusters; this is a common problem in all clustering algorithms. The tool designed for spectral clustering, the eigengap heuristic (von Luxburg, 2007), is applied to all three graph Laplacian matrices, choosing the number of clusters k such that all eigenvalues λ 1 ,…,λ k assume small and similar values, while λ k+1 is relatively larger (step 3). According to this criterion, as shown in Fig. 1 for the case study of the water network of Parete, relatively to no-weight Laplacian matrix, where the first 10 smallest eigenvalues are plotted, the most appropriate number of clusters is three or four. It is worth to note that, as explained in von Luxburg, (2007), the eigengap heuristic works well only if the clusters in the data are very well pronounced, i.e. the more overlapping the clusters are, the less clear is the detection of the number of clusters. However, such a method gives in any case a useful preliminary indication. Once fixed the number k = 4 of clusters into which the network is subdivided, the first phase of the proposed partitioning procedure provides the spectral clustering of the water supply network of Parete. A total number of clustering layouts N CL = 15 is obtained (three algorithms for five weight matrices), as reported in Table 3. The result of the partitioning of the graph is represented in Fig. 2, without loss of generality, for the case of pipe diameter as weight and L rw Table 3, which gives: the number of nodes n k of each cluster, the balanced node index I b (standard deviation of the total number of nodes of the four clusters), the number N ec of pipes of the edge-cut set. For the two last solutions in Table 3 (W C and W F weight matrices with L sym as Laplacian matrix), the continuity of the network is not ensured. For the continuity check, it has been exploited another important property of the Laplacian matrix, namely the multiplicity m a of its zero eigenvalue, that is equal to the number of connected subgraphs of the network. So, the multiplicity of the zero eigenvalue of the unweighted unnormalized Laplacian matrix of the graph subdivided into four clusters has been evaluated. It resulted m a = 4, indicating that the obtained sub-graphs result internally connected, while, if m a > 4, it would mean that the network has been divided into more than four sub-graphs.
It is also evident that the most balanced layouts (i.e. clusters with similar numbers of nodes) correspond to the W A, W D and W 1/L for all three Laplacian matrices: as reported in Table 3, they lead to the lowest values of I b . As expected, the most balanced layout corresponds to no-weight matrix. In fact, without any weight, the spectral clustering leads to sub-graphs containing similar numbers of links that, in a WDN like Parete, implies also similar numbers of nodes (indeed the number of links connected to a node varies only slightly throughout the network). Conversely, when weights are given to pipes, the sum of the weights of the pipes belonging to the clusters is balanced, which does not necessarily imply that the clusters contain similar numbers of nodes. The last step of the clustering phase is the definition of the edge-cut set. This phase must be achieved with the aim of minimizing, in the subsequent dividing phase, the network perturbation and the investment related to the insertion of hydraulic devices. In this respect, it is reasonable that edge-cut sets containing a small number N ec of intra-cluster pipes would be preferable. Also for this index, the optimal solutions correspond to W A, W D and W 1/L for all three Laplacian matrices (they lead to the lowest values of N ec ). Even if the W F with L Laplacian matrix provides the lowest N ec , such a solution should not be considered, as it corresponds to a very unbalanced cluster layout.
The results given in Table 3 should be interpreted considering that, while unweighted spectral clustering provides the edge-cut set with the minimum N ec (compatible with the requirement of obtaining balanced clusters), weighted spectral clustering minimises the sum of the weights of the edges constituting the edge-cut set. Hence, it was expected that such a minimization would have not necessarily led to small values of N ec .
In this respect, it is interesting to note that the application of weights resulted in edge-cut sets less suitable to carry out the following dividing phase (achieved with EPA-NET software embedded in the GA) with little disturbance to the hydraulic performance of the network, if compared to the edge-cut set identified by the unweighted spectral clustering. In fact, for the dividing phase to affect as little as possible the hydraulic performance, the closed pipes should be the ones carrying small flows (pipes with small conductance, i.e. with small diameter), while the pipes remaining open after the dividing phase (i.e. the pipes along which water meters are installed) should be large, as they have to be capable of carrying also the discharge which, before the division, flew through the closed pipes. In other words, the edge-cut set should contain pipes with both small and large diameters (or hydraulic conductance). As reported in Table 4, which gives the diameters of the pipes belonging to the edge-cut sets obtained with different weights, in the case of the WDN of Parete, the edge-cut set provided by the unweighted clustering has such a feature. Conversely, the more the adopted weights are effective (i.e. the weights assume very different values for the various pipes of the network), the more the pipes of the edge-cut tend to be homogeneous (for instance, when the hydraulic conductance is adopted as a weight, the edge-cut set is formed by only very small pipes).
As shown in Table 5, which gives the main hydraulic performance indices of the network after the dividing phase (evaluated through the EPANET software), the number of flow meters has been fixed for all combinations to N fm = 5, which is the minimum possible number that guarantees the hydraulic performance of the network and, at the same time, simplifies the computation of the water budget (Water Industry Research Ltd, 1999), allowing an easier identification of water losses. Clearly, the number of gate valves is in all cases equal to the difference N bv = N ec -N fm . After the first clustering phase, the dividing phase has been carried out, computing all the hydraulic performance metrics reported in Table 5, namely: the dissipated power P D ; the total nodal delivered power P N = P A -P D (Di Nardo et al., 2013a); the minimum, mean and maximum pressure h min , h mean and h max .
Obviously, the results for the two last clustering solutions (W C and W F weight matrices with L sym Laplacian matrix) are not reported, because the continuity of the network is not respected and so the hydraulic simulation needed for the evaluation of the hydraulic performance could not be carried out.
It is important to highlight that the reported results are the optimal for each weight-Laplacian combination, meaning that, for the fixed number of flow-meters N fm = 5 and within the investigated solution space, the power P D dissipated by the system is minimized, and, consequently, the total nodal power P N is maximized.
As expected, the results in terms of hydraulic performance indicate that the best solutions correspond to the W A, W D and W 1/L for all three Laplacian matrices, as they lead to the smallest numbers of closed pipes. Also in this phase, even if the clusters layouts obtained with W F by means of both L and L rw Laplacian matrix seem to correspond to the lowest value of dissipated power P D , they cannot be considered as good solutions, because they are very unbalanced.
For the presented case study, it is clear that, from both topological and hydraulic point of views, the best solutions have been achieved with the normalized L rw Laplacian matrix. Indeed, at the same time it provides the most balanced clusters solutions, an edge-cut set with few pipes (Table 3), the lowest dissipated power, and the highest minimum pressure head. With reference to the weight choice, it looks clear that, even if there is not a great difference between W A, W D and W 1/L , the best solution was achieved with the unweighted matrix, regardless of the adopted Laplacian matrix. In this respect, it is worth to note that this result cannot be generalized, as it depends on the peculiar distribution of pipe diameters of the analysed WDN. In fact, unweighted clustering takes into account only the topological structure of the network, without using any information related to the hydraulic characteristics of the pipes. However, the obtained results are good in terms of nodal pressure and confirm the suitability of spectral clustering for water network partitioning. Further investigation about the choice of the weights is required to define a spectral clustering approach of general validity for the definition of DMAs.
Finally, Fig. 3 shows the WNP of the network of Parete corresponding to the best solution in terms of minimum pressure h min = 22.78, obtained with unweighted matrix and L rw Laplacian. In particular, in the left pane of Fig. 3, the first clustering phase is reported, highlighting the edge-cut set (dashed lines). In the right pane, the second dividing phase is illustrated, highlighting the optimal positioning of devices, which ensures the minimum hydraulic performance deterioration. For comparison, in Fig. 4, the WNP of Parete obtained with conductance-weighted adjacency matrix and L rw Laplacian is reported, highlighting the less balanced obtained layout. The different shapes and dimensions of the obtained clusters, as well as the greater number of edge-cuts are also evident. In both cases (unweighted WNP and conductanceweighted WNP), the gate-valves (closing pipes) have been located along the pipes with smallest diameters by means of the GA algorithm, so to reduce hydraulic performance deterioration.
Conclusions
The division of a WDN into DMAs aims at improving water supply network management and so, consequently, leakage detection and system safety. At the same time, the closure of pipes with gate valves to define the DMAs unavoidably increases the hydraulic head losses, leading to lower pressure at the water delivery nodes, compared to non-clustered layout. So far, although the design of optimal DMA layout is a problem deeply studied in the scientific literature, there is not an established procedure to solve it. In this regard, the paper presents an application to a real WDN of weighted spectral clustering for water network partitioning.
Spectral clustering is based on the eigenvalues of the graph Laplacian matrix of the network, for which three different formulations have been tested. Five different weights have been adopted, chosen among the major characteristics of the pipes: adjacency (in this case the partitioning is based only on the topology of the network), diameter, length, conductance and flow. Aim of the application is to understand which of the considered characteristics provides the best clustering layout, in terms of minimizing the edge-cuts and simultaneously balancing the dimensions of the clusters.
Compared to other heuristic methodologies, weighted spectral clustering allows to take into account either topological, geometrical, or hydraulic information about the system, within the framework of an elegant mathematical formalism.
Simulation results for the analysed case study, carried out with a number of DMAs k = 4, defined through the analysis of the eigenvalues of the unweighted Lapalcian matrix, confirm the effectiveness of the procedure, providing balanced clustering layouts and small numbers of intra-cluster boundary pipes. The latter result may favour the following heuristic dividing phase, consisting in the choice of the positions of flow meters and gate valves along the pipes of the edge-cut. Indeed, the hydraulic performance of the network, measured with several indices, is satisfactorily preserved in most of the weight-Laplacian combinations.
In particular, in this study the best solution was found, with the spectral clustering algorithm, using unweighted matrices. This result is different from previous studies found in the literature, in which different clustering techniques were adopted, and weighted matrices provided the best results. This result directly depends on the distribution of pipe diameters within the considered network, and therefore cannot be considered of general validity. In fact, it can be ascribed to the fact that, when weights related to pipe geometry are minimized, the optimal edge-cut tends to be formed by pipes with similar characteristics. In the dividing phase, instead, the best hydraulic results are obtained by installing gate valves along pipes with small diameter, and water meters along pipes with large diameter. Therefore, further investigation is required to define a weighted spectral clustering approach of general validity for the definition of DMAs. | 8,030.4 | 2017-06-30T00:00:00.000 | [
"Computer Science"
] |
A Hybrid Ensembled Double-Input-Fuzzy-Modules Based Precise Prediction of PV Power Generation
: Background As one of the most widespread renewable energy technologies, photovoltaic power generation system provides great environmental and economic benefits. Uncertain output of the photovoltaic power generation system gets a lot of concern, due to its randomicity and intermittence features. Achieving precise prediction for the PV power generation will greatly improve the quality of electric energy and enhance the stability of power system operation. Inspired by the powerful ability of fuzzy logic to deal with uncertainty and the superiority of machine learning to handle the time series prediction, a hybrid ensembled model consisted of the Double-Input-Fuzzy-Modules (DIFM) and Extreme Learning Machine is proposed in this paper. Firstly, the PV power generation data is taken as the input of each DIFM in order to efficiently handle the uncertainties. Then, the outputs of each DIFM are used as the input of ELM. Moreover, the least square estimation is applied to train the parameters of the hybrid ensembled model to further enhance the predict precision. Finally, the proposed hybrid ensembled model is applied to achieve a real-world PV power generation forecasting. Results The case study and comparisons results indicate that the proposed hybrid ensembled model outperforms other methods, including the adaptive network-based fuzzy inference system, the single-input-rule-modules connected fuzzy model, the support vector regression and the multiple linear regression, in terms of the mean absolute error, the root means square error, the mean absolute error percentage and the mean relative error. Conclusions This study fully took the advantages of fuzzy logic to deal with uncertainty and the superiority of machine learning to handle the time series prediction, then combined the DIFM and Extreme Learning Machine to propose a hybrid ensembled DIFM based PV power generation prediction model. Experimental results demonstrated the superiorities of the proposed model over the other comparisons, in both prediction accuracy and calculation speed.
INTRODUCTION
The PV power generation technology has been regarded as one of the most important technologies for reducing the carbon emission and enhancing sustainable development ability due to their clean, safe, and sustainable characteristics. The statistic report [1] showed that the global installed capacity of new PV power generation system has over 100GW in 2018, accumulated over 500GW. Nevertheless, the high penetration ratio of the PV power generation challenges the reliability and stability of current power system owing to its output with strong nonlinearity and high levels of uncertainty. Hence, it is critical and urgent to accurately predict the PV power generation for better planning and operating the power system.
Recently, various models/methods, including physical models [2,3], data-driven models [4][5][6] and artificial intelligence methods [7][8][9][10], have been presented for forecasting the output of PV power generation. The physical models highly depended on the analysis to the numerous circuit parameters, such as the shunt resistors, diode influence factors and the corresponded coefficients, etc. Therefore, it is hard to widely used among different PV power generation systems, since various characteristic parameters must be identified. On the other hand, such type models mainly considered the system parameters while ignoring the other influenced factors, for example the weather, the end-customer demands. That leads to the poor prediction accuracy of the physical models. With the development of data mining technologies, many researchers proposed the data-driven models using the historical running data of the PV power generation, meteorological data and demanded load data, etc. Unfortunately, the datadriven models hardly depict the nonlinear features of PV output power. Hence, the predicted accuracy is still not high [11]. Although the predicted results of artificial intelligence methods outperform than the above two types models, it significantly sacrifices the computation complexities and prediction speed. Hence, artificial intelligence methods are not suitable to achieve short-term or real-time prediction for the PV output power. On the other hand, such type methods rarely capture the linear features of the PV power generation system. No matter the physical models, the datadriven models or the artificial intelligence methods are single model and hardly meet the predication requirement owing to the complex characteristics of the PV power generation system, e.g., nonlinear, chaotic, intermittence, etc.
Utilizing the respective advantages of different single models/methods, hybrid models have attracted more attention of researchers. According to the way of hybrid, such models can be classified into the following types: 1). Combine single models, mainly are artificial intelligence models, and optimization algorithms. William et al. [12] proposed genetic algorithm-based support vector machine (GASVM) hybrid model for short-term power forecasting of residential scale PV power generation system. In [13], Semero et al. proposed a hybrid model based on a combination of PSO and adaptive neuro-fuzzy inference system (ANFIS) [14] for one-day-ahead hourly PV power generation prediction in microgrid [15], Ni et al. applied the differential evolution algorithm to optimize the combination weight value of Extreme Learning Machine (ELM) and then proposed a hybrid forecasting model based on the ELM and differential evolution algorithms for shortterm PV power generation. Liu et al. [16] applied the chicken swarm optimizer to optimize the weights and the thresholds of ELM, in order to improve the prediction effect and strength the convergence. The hybrid models are superior than the single models for predicting the output of PV power generation. In fact, the optimizers are just used to optimize the parameters of models, aggress the calculation speed, but cannot extracted new features.
2). In order to extract more features from the complex datasets of the PV power outputs, some papers proposed different hybrid models by combining different methods together. Majumder et al. [17] applied the variational mode decomposition (VMD) to extract the features from original nonlinear dataset, and then used the extracted features to train a robust kernel-based ELM (RKELM), in order to strengthen the forecasting accuracy for PV power generation system. Zhang et al. [18] presented a new integrated model based on the improved empirical mode decomposition (IEMD) and autoregressive moving average with exogenous terms (ARMAX), in order to better capture the characteristic of the PV power output. Similarly, Wang et al. [19] proposed a hybrid model combined the convolutional neural network with long shortterm memory network to achieve the prediction of the PV power generation. Moreover, Wang et al. [20], Giorgi et al. [21], Malvoni et al. [22] respectively used wavelet transform (WT) to decompose the original data and extracted different features, and then used different artificial intelligence methods, e.g., LSSVM, improved deep belief network (IDBN), generalized regression neural network (GRNN), ANFIS, etc., to achieve the final forecasting. Although the improvements of the forecasting results are obvious by using the signal decomposition model to extract features firstly, the single artificial intelligence models have the common shortcomings: complex and randomly given parameters, uncertain model structure and hardly to achieve global optimum.
3). In order to overcome the above-mentioned problems, combine the type 1) and the type 2) together, and present the third type hybrid models. The features are extracted by the WT, signal decomposition models or the other methods firstly. Then, the artificial intelligence models are used to achieve the prediction. Finally, some optimization algorithms are applied to optimize the parameters or structure so as to further increase the predict accuracy. Shang et al. [23] applied an enhanced empirical model decomposition (EEMD) to obtain the features, and then selected the improved support vector regression (ISVR) method to achieve the prediction. The optimization algorithm was used to fine tune the related free parameters. Eseye et al. [24] combined WT, particle swarm optimization (PSO) and SVM to improve the short-term predict accuracy of a real microgrid PV power generation system. Zhang et al. [25] proposed an adaptive hybrid model combing with improved VMD (IVMD), autoregressive integrated moving average (ARIMA) and improved DBN (IDBN) to predict the dayahead PV output power. Behera et al. [26] proposed a hybrid model based on the EMD and ELM optimized to achieve the prediction of PV power output. These type models can achieve a better prediction results than the above two type methods. However, the model structure and computation are complex. Moreover, the high volatility, uncertainty and randomicity characteristics of the PV power generation system should be further considered, in order to obtain a more satisfied forecasting result.
On the other hand, fuzzy logic system (FLS) as one of the powerful tools to handle high level of uncertainties have been widely applied in smart grid, intelligent transports and intelligent city, etc. Peng et al. [27,28] applied the FLS in the wireless sensor networks for solving the uncertainties of power allocation and energy consumption. Li et al. [29] combined the FLS and wavelet transform to achieve a short-term building electrical load forecasting. Moreover, the short-term traffic flow prediction was also researched based on data driven FLS [30].
Therefore, this paper will give a hybrid ensembled model based on the Double-Input-Fuzzy-Modules (DIFM), ELM and optimization algorithm, in order to further improve the forecasting accuracy of the PV power generation. The contributions and novelties of this study can be summarized as follows: A hybrid ensembled model is proposed for the PV power generation forecasting.
In the proposed model, the mapping between original data and features is obtained through the DIFM. This can effectively deal with the uncertainties and nonlinear features of PV power generation data. Then the outputs of DIFMs are used as the inputs for ELM and the final forecasting result is obtained. This fully take advantage of ELM to enhance the generalization abilities and solve the overfitting problem. Moreover, the training speed can also be improved. Moreover, the parameters of the model are optimized by least square estimation, in order to further improve it predict performance. Similar to the aforementioned, least square estimation method not only achieves the optimal parameters, but also keep the faster training speed. The proposed hybrid model is applied to predict the output of a real PV power generation system, and the detailed comparisons are also given. Both the real application and the comparisons indicate that the proposed hybrid model outperform other models in terms of the mean absolute error (MAE), the root mean square error (RMSR), the mean absolute error percentage (MAEP) and the mean relative error (MRE). The rest of this paper are organized as follows: the detailed demonstration of the proposed hybrid ensembled model will be presented in the Section II. Both experiments and comparisons will be given in the Section III. Finally, the conclusions will be given in Section IV.
THE PROPOSED HYBRID ENSEMBLED MODEL
In this section, the proposed hybrid ensembled model will be given in detailed. Firstly, the framework of the proposed hybrid ensembled model will be present. Then the detailed design of DIFM will be presented. Furthermore, ELM and model integration also will be provided. Finally, the training of the hybrid ensembled model will be demonstrated.
Framework of the proposed hybrid model
The structure of the proposed hybrid model is illustrated in Fig. 1, where each DFIM consists of two inputs variables selected from the original dataset through a moving window with step length one. The least square estimator is applied to optimize the consequences. Then the outputs of different DIFM are used as the inputs of ELM to obtain the final forecasting result. Similar, the parameters of the model are also be optimized by the least square estimation method. More specifically, the predication processes in this study are listed as follows: First, utilize the DIFM to handle the uncertainties hidden in the original dataset and extract the features. Moreover, a moving window is applied to select the input variable in order to fully mining the depth information. In this part, the least square estimation is adopted to generate the fuzzy rules and optimize the parameters for the DIFM respectively. Second, the features extracted from each of DIFM construct the input variables for the ELM part. Then the final result is obtained from the ELM. Similarly, the parameters of ELM are optimized by the least square estimation in order to further improve the forecasting accuracy.
.1 Double-input-fuzzy-modules
For each DIFM, the triangular membership functions (MFs) are adopted for the two input variables. The MFs for input variables is given in Fig. 2 c is the crisp consequent, m is the number of fuzzy sets for each of input variable. Therefore, the number of fuzzy rules of each DIFM is m 2 .
Finally, the output of DIFM-i can be obtained as follows.
Extreme learning machine
As one of the most popular single hidden layer feedforward neural networks, ELM has demonstrated superior performance items of learning speed and approximation performance. For the ELM, the standard input-output mapping can be expressed as follows.
where L is the number of the hidden neurons and is set before training, where the l y and l y are the l-th predicted outcome and the l-th real sample. Subsequently, the equation (2) can be rewritten as y H (5) where H is the outputs of hidden layers (denoted as training matrix), is the output weights between the hidden layers and the output layers. According to the training data set, they can be obtained as T N y y y y (8) Hence, the output weights can be computed by † Hy (9) where † H is the Moore-Penrose pseudo inverse of the training matrix H.
Models training
For our proposed the ensembled model, the training data set is denoted as 1 1 ,, Then the models training process can be descripted as follows.
Step1: based on the equation (3) Step3: the weight matrix can be computed according to the equation (9) and be optimized by the least square method.
EXPERIMENTS and DISCASSIONS
In this section, the proposed ensembled model will be applied to a real-world PV power generation prediction application. Moreover, it also will be compared with several popular predicted methods, in order to verify its advantage. Hence, several popular predicted methods will be firstly introduced. Then four popular metrics will be given for performance comparisons better. Finally, both experimental results and comparisons will be given in detail.
Several methods for comparison
In the following, the proposed ensembled model will be compared with some popular methods, e.g., ANFIS, the single-input-rule-modules connected fuzzy model (SIRM-FM) [31], SVR and the multiple linear regression (MLR) [32].
ANFIS combines neural network with fuzzy reasoning, then uses mathematical programming (least squares estimation) and a gradient-based algorithm to optimize the parameters. Therefore, it is one of the most popular and powerful prediction approaches and have wildly used in many applications [14][15].
SIRM-FM is one kinds of modular fuzzy models and constructs a fuzzy rule module for each input variable, then aggregate all the single-input-rule-modules for different input variables to obtain a crisp output. It has been wildly applied in various fields, e.g., energy consumption prediction of building [29], traffic flow prediction [30], etc.
SVR is an important variant of SVM. The kernel functions are used for solving the prediction problems, in order to achieve strong generalization ability and good prediction performance.
In MLR, the prediction is achieved by analyzing the mathematical correlation between the model variables and the observed data of the samples. It also has been wildly applied in solar energy prediction [32], river discharge forecasting [33], and so on.
Metrics
In order to evaluate the performance of all methods, the following four metrics, including MAE, RMSE, MSPE and MRE, are used in our study. (15) where N is the number of training data or test samples, the l y and l y are the l-th predicted outcome and the corresponding real sample. For the above-mentioned indices, the greater values of those mean larger gaps between predicted values and real sample and worse prediction performance.
Applied data set
In the experiments, the PV power generation data collected from a real PV power generation system located in an area of Germany. This data set can be obtained from http : //www.elia.be/nl. The sampling cycle is 15 minutes and duration from Jan 1, 2016 to Jun 30, 2016 with a total of 17,472 sample points. The former five months sample data is used as the training data set, and the last months sample data is used as the testing data set.
Experimental results and discussion
The prediction results of five different models are shown in Fig. 3, for instance Fig. 3(a) hybrid ensembled model, Fig. 3 Table 1. The Table 1 demonstrates that no matter the MAE, RMSE, MSPE or MRE of the hybrid ensembled model are better than the others. For better visualization, the corresponding testing results are also drawn in Fig. 4. Moreover, the box-plots of the absolute predict errors are also given in Fig. 5. Fig. 5, it can be clearly seen that the proposed hybrid ensembled model has the smallest forecasting absolute error median and the smaller heights between the bottom and top edges of the box-plots. This verifies the advantages of the proposed hybrid ensembled model again. 5.
CONCLUSION and FUTURE WORK
In order to handle the uncertainties of PV power generation system and improve the prediction performance, this study fully took the advantages of fuzzy logic to deal with uncertainty and the superiority of machine learning to handle the time series prediction, then combined the DIFM and ELM to propose a hybrid ensembled DIFM based PV power generation prediction model. In the proposed hybrid ensembled model, the original data set was handled by the DIFMs aiming at dealing with the uncertainties. Then each output of DIFMs were taken as input of ELM to obtain the final prediction. Furthermore, the least square estimation was adopted to optimize the parameters of the proposed model, in order to further improve the prediction performance. Both experiments and comparisons were given. The comparative results demonstrated that the proposed hybrid ensembled model outperformed other models in terms of MAE, RMSE, MSPE, MRE.
In this study, the data set collected from an area of Germany didn't consider some important factors, e.g., the load changing, the solar changing, the location difference, human activities, etc. Therefore, the prediction performance will be further enhanced by taking the aforementioned factors into the model. On the other hand, the proposed hybrid ensembled model can also be used handle the similar time series prediction problems, e.g., electrical load prediction, traffic flow prediction, indoor temperature prediction, etc. The above mentioned two points will be researched in our future work. | 4,264.4 | 2021-02-08T00:00:00.000 | [
"Computer Science"
] |
An Accurate Spectral Galerkin Method for Solving Multiterm Fractional Differential Equations
This paper reports a new formula expressing the Caputo fractional derivatives for any order of shifted generalized Jacobi polynomials of any degree in terms of shifted generalized Jacobi polynomials themselves. A direct solution technique is presented for solving multiterm fractional differential equations (FDEs) subject to nonhomogeneous initial conditions using spectral shifted generalized Jacobi Galerkin method. The homogeneous initial conditions are satisfied exactly by using a class of shifted generalized Jacobi polynomials as a polynomial basis of the truncated expansion for the approximate solution. The approximation of the spatial Caputo fractional order derivatives is expanded in terms of a class of shifted generalized Jacobi polynomials with , and is the polynomial degree. Several numerical examples with comparisons with the exact solutions are given to confirm the reliability of the proposed method for multiterm FDEs.
Spectral method is one of the principal methods of discretization for the numerical solution of most types of differential equations. The three most widely used spectral versions are the Galerkin, Tau, and collocation methods (see, for instance [26][27][28][29][30][31][32]). Recently, spectral method is a class of important tools for obtaining the numerical solutions of fractional differential equations. They have excellent error properties and they offer exponential rates of convergence for smooth problems. In the present paper we intend to extend the application of Galerkin method based on generalized Jacobi polynomials form solving linear problems to solve multiterm FDEs. To the best of our knowledge, there are not so many results on using this technique to solve such problems arising in mathematical physics. This partially motivated our interest in such a method.
Spectral Galerkin method for the numerical solution of fractional differential equations is characterized by expanding the solution by a truncated series of the trial functions. The unknown coefficients of this expansion will be determined by minimizing the error between the exact and numerical solutions in appropriate weighted space. This method provides exponential rates of convergence. An explicit expression for the derivatives of an infinitely differentiable function of any degree and for any fractional order in terms of the function itself is needed. Doha et al. [16] have obtained such a relation in the case of the basis functions of expansion that are shifted Jacobi polynomials. Another formula for shifted Legendre coefficients is obtained by Bhrawy et al. [17]. Moreover, in [33] the authors expressed explicitly the Caputo fractional derivatives of generalized Laguerre polynomials of any degree in terms of the generalized Laguerre polynomials themselves to solve fractional initial value problems on the half line.
An explicit expression for any Caputo fractional order derivative of the shifted generalized Jacobi polynomials of any degree in terms of the shifted generalized Jacobi polynomials themselves is the first goal of this paper. The fundamental goal of this paper is to develop a direct solution technique based on shifted generalized Jacobi-Galerkin method (SGJG) for solving multiterm FDEs with homogeneous and nonhomogeneous initial conditions. Finally, we present some numerical results exhibiting the accuracy and efficiency of our numerical algorithm.
The next section of this paper is for fractional preliminaries. Section 3 is devoted to proving a formula that expresses the Caputo fractional order derivative of the shifted generalized Jacobi polynomials. In Section 4, we construct and develop algorithms for solving linear FDEs by using shifted generalized Jacobi Galerkin spectral method. In Section 5, several examples are presented. Finally, some concluding remarks are given in the last section.
Preliminaries and Notations
In this section, we present some basic knowledge of fractional calculus, orthogonal shifted Jacobi polynomials, and generalized Jacobi polynomials these are most relevant to spectral approximations.
Similar to the integer-order differentiation, the Caputo's fractional differentiation is a linear operation; that is, where and are constants.
It is convenient to standardize the Jacobi polynomials so that where ( ) = Γ( + )/Γ( ). In this form the polynomials may be generated using the standard recurrence relation of Jacobi polynomials starting from , 0 ( ) = 1 and where = + + 1.
Generalized Jacobi Polynomials.
Recently, Guo et al. [36] presented and developed the generalized Jacobi approximation, in which the parameters and considered in the generalized Jacobi polynomialŝ, ( ) might be any real numbers. In this section, we give some properties of such polynomials. Let̂= We denote the set of integers by Z. For any , ∈ Z, the generalized Jacobi polynomials are defined by (see [36,37]) For our present purposes it is convenient to use the shifted Jacobi polynomials , ( ); let = (0, 1) and , ( ) = (1 − ) . We define the shifted GJPs and separate them into four cases as follows.
Proof. Firstly, Secondly, Thirdly, And lastly, A function ( ), square integrable in (0, 1), can be expressed in terms of shifted generalized Jacobi polynomials as where the coefficients are given by Proof. The analytic form of the shifted generalized Jacobi polynomials ,− ( ) of degree − is given by (26). Using Now, approximating + −] by terms of shifted generalized Jacobi series, we have where , is given from (28) where ] ( , , , ) is given as in (30), and this proves the theorem.
Mathematical Problems in Engineering 5
Shifted Generalized Jacobi Galerkin Method for FDEs
In this section, we are interested in employing the SGJG method to solve the linear multiterm FDE subject to the homogeneous initial conditions where ( = 1, . . . , ) and 0 < 1 < 2 < ⋅ ⋅ ⋅ < −1 < ], − 1 < ] ≤ are constants, ] ( ) ≡ (]) ( ) denotes the Caputo fractional derivative of order ] for ( ), and ( ) is a given source function. Let us first introduce some basic notation that will be used in the upcoming sections. We set where V ( ) ( ) denotes th-order differentiation of V( ) with respect to . Then the shifted generalized Jacobi-Galerkin approximation to (36) is to find ∈ such that By virtue of (31) and making use of the orthogonality relation of shifted generalized Jacobi polynomials (21), and after some rather lengthy calculation, we get Thereby, we can write (41) in the following matrix system form
Illustrative Examples
Several test examples are carried out in this section. The results obtained by the present methods reveal that the present method is very effective and convenient for linear FDEs.
whose exact solution is given by ( ) = 13 .
In Table 2, we present the maximum absolute errors, using SGJG method with various choices of ] and .
In Table 3, we present the maximum absolute errors, using SGJG method with various choices of ] and .
In Table 4, we exhibit maximum pointwise error using SGJG method with two choices of the shifted generalized Jacobi parameters , and = 8, 12, 16, 20, 24. We observe from this table that the suggested algorithm provides accurate and stable numerical results. This numerical experiment demonstrates the utility of the method.
Conclusion
We have derived a new formula expressing explicitly the Caputo fractional derivatives for any fractional-order of shifted generalized Jacobi polynomials of any degree in terms of shifted generalized Jacobi polynomials themselves. We have derived a Galerkin method, involving a specified class of the shifted generalized Jacobi polynomials, which permits us to numerically solve an important class of FDEs. Indeed, in Section 5, we demonstrated that for all parameter shifted generalized Jacobi considered, the method results in rather small errors with relatively few modes are considered. Since the method is rather robust, it is likely that it may be applied to other types of FDEs. For instance, one-and two-dimensional time-dependent FDEs Copyright of Mathematical Problems in Engineering is the property of Hindawi Publishing Corporation and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. | 1,866.2 | 2014-06-12T00:00:00.000 | [
"Mathematics"
] |
Conodont biostratigraphy and correlation of the San Juan Formation at the Cerro La Silla section, middle Tremadocian-lower Dapingian, Central Precordillera, Argentina
This study deals with the conodont biostratigraphy from the uppermost part of La Silla Formation (9.6 m) and the overlying San Juan Formation (264.7 m), at the Cerro La Silla section, Central Precordillera of San Juan, Argentina. The 41 samples of carbonate rocks that were digested for microfossils yielded 11,388 conodont elements corresponding to 78 species. The Paltodus deltifer deltifer Subzone of the Paltodus deltifer Zone from the Baltic biostratigraphic scheme is represented at the top stratum of the La Silla Formation and the basal part of the San Juan Formation (28.4 m), which correlates with the Macerodus dianae Zone (middle Tremadocian) of the Precordilleran and North American schemes. Following upwards, the Paroistodus proteus, Prioniodus elegans, Oepikodus evae, Oepikodus intermedius and Baltoniodus triangularis-Tripodus laevis zones (middle Tremadocian-lower Dapingian) are recorded in the San Juan Formation. The Baltoniodus triangularis-Tripodus laevis Zone is recognized from the second reef level (177.3 m from the base of the San Juan Formation) up to the top stratum in the section, in contrast to previous interpretations that assigned the referred interval to the Baltoniodus navis, Paroistodus originalis and Microzarkodina parva zones of the Baltic biostratigraphic scheme. The division of the Oepikodus evae Zone in subzones, according to its original definition for the Precordillera, is not applicable at the Cerro La Silla section due to the particular species distribution. The conodont elements show a brown alteration color (CAI 2-2.5), which indicates a burial paleotemperature of 60-155°C for the bearer strata.
Introduction
The Precordillera is located between 28°30ʹ and 33° S and 68°15ʹ and 69°45ʹ W, partly covering the La Rioja, San Juan and Mendoza provinces. This geological province includes extensive Paleozoic outcrops and, to a lesser extent, Mesozoic and Cenozoic rock units. On the basis of its stratigraphic and structural characteristics, the Precordillera is subdivided into three morphostructural units: Eastern (Ortiz and Zambrano, 1981), Central (Baldis and Chebli, 1969) and Western Precordillera .
The Central Precordillera is composed mainly of carbonate platform deposits (Cerro Totora, La Flecha, La Silla, San Juan and Las Chacritas formations) that makes up a rock sequence of ca. 2,500 m thick. This represent an apparently continuous cycle deposited under warm to temperate conditions, whose sedimentation began in the Cambrian and continued up to the Darriwilian.
The Cerro La Silla, in the Central Precordillera, is located 15 km southeast of Jáchal City (Figs. 1 and 2), San Juan Province. The study section can be accessed by vehicle through the National Route No. 40. The section of the San Juan Formation at the Cerro La Silla (Fig. 1) is interesting to analyze due to the presence of two reefal structures in its lower and middle parts, where a stratigraphically continuous record of the conodont fauna is represented. The upper stratigraphic section, which occupies approximately one third of its actual thickness, is covered by alluvial sediments in this locality.
The conodont biostratigraphy of the San Juan Formation at Cerro La Silla was partly studied by Lehnert (1995) and Thalmeier (2014). The biostratigraphic analysis of the uppermost part of La Silla Formation and of the overlying San Juan Formation, as well as an updating to the biostratigraphic scheme of the Precordillera ) motivated this work.
Accordingly, the objective of this work is to study the conodont fauna of the San Juan Formation exposed at the Cerro La Silla, including the transitional interval between this unit and the uppermost strata of the underlying La Silla Formation, in order to determine the characteristics and differences of the taxonomic record and to establish the conodont biostratigraphy following the updated biozonal scheme for the Precordillera , as well as its regional and global correlation.
The San Juan Formation
The carbonate sequence of the San Juan Formation (Keller et al., 1994), approximately 330 m thick, is made up of skeletal micritic limestones deposited from the upper Tremadocian up to the middle Darriwilian on a ramp topography, recording two regressivetransgressive cycles (Cañas, 1995).
The boundary between the La Silla Formation and the overlying San Juan Formation marks a major change in the configuration of the carbonate platform, with the passage from subtidal to open platform facies in a carbonate ramp geometry (Pratt et al., 2012). This lithofacial change is accompanied by an important faunal change.
The limestones of the San Juan Formation starts with a transgressive sequence at whose base a reef horizon consisting of calcimicrobials and sponges is developed (Cañas and Carrera, 2003).
Subsequently, high sea level sediments accumulate (mostly bioturbated skeletal wackestones) in a framework of environmental stability, which allows the development of rich subtidal communities dominated by suspension-feeding organisms, especially brachiopods and macluritacean gastropods (Cech and Carrera, 2002). A second reef horizon that consists of microbialites, receptaculitids (Calathium) and mainly stromatoporoids (Zondarella) is located in the middle part of the San Juan Formation close to the base of the Middle Ordovician (Dapingian).
During the Darriwilian, as a consequence of a relative sea level increment that led to the drowning of the platform below the photic zone, the carbonate production is suffocated. Consequently, the carbonate cycle culminates, being followed by the deposition of calcareous-shaly facies towards a predominantly pelitic sequence (Baldis and Beresi, 1981;Baldis et al., 1984), known as the Gualcamayo and Los Azules formations in different localities of the Precordillera.
Framework of the conodont biostratigraphy
at the study area Lehnert (1995) Albanesi et al. (1998). It should be noted that the samples from the basal strata (samples LS 8 and LS 7) were sterile, and the first productive level obtained by the author is located above the first reef level (sample LS 6).
Materials and methods
The field-work consisted in the recognition of the study area and the sampling of the stratigraphic profile corresponding to the San Juan Formation at the Cerro La Silla section. There 41 limestone samples were taken, with a variable weight between 2 and 4 kg, among intervals of interest for the accomplishment of the present study. Thus, 2 samples were taken from the upper part of the La Silla Formation (uppermost 9.6 m) and 39 samples throughout the San Juan Formation (Tables 1-4) Colaptoconus priscus Oelandodus costatus? Table 3 continued.
Species/Samples LSSJ H4 LSSJ H3 LSSJ H2 LSSJ H1 LSSJ H LSSJ G LSSJ F LSSJ D LSSJ B LSSJ A+7 LSSJ tope+
Anodontus longus The laboratory work comprised the processing of rocks for the recovery of microfossils, following the method of Stone (1987) (10% formic acid). For each processed sample, insoluble residue was recovered varying in weight from 20 to 100 g, regarding the composition of the limestone. Over this residue, associated microfossils were picked up, including 11,388 conodonts specimens, which correspond to 78 species (Tables 1-4), which were illustrated by conventional optical photomicrography. The conodonts are housed in the Museo de Paleontología, Universidad Nacional de Córdoba, under repository code CORD-MP.
Conodonts CAI
In the present study, the conodonts recovered from La Silla and San Juan formations present a color alteration index (CAI) around 2 to 2.5, which refers to burial temperatures of 60-155°C (Epstein et al., 1977). These values could be explained by the Niquivil tectonic thrust; that affected the easternmost part of the Central Precordillera (Voldman et al., 2010), as recorded for the San Juan Formation in the Cerro Potrerillo (Albanesi et al., 1998), Cerro Viejo of Huaco (Ortega et al., 2007;Mango and Albanesi, 2018a) and the Río Las Chacritas exposures (Serra et al., 2015).
Macerodus dianae Zone
This zone can be recorded from the lowest sample taken in the upper La Silla Formation, 9.6 m below the contact with the San Juan Formation (LSLS-1) (Fig. 4) to the sample LSSJ P1, where Paroistodus proteus (Lindström) appears in the record. These strata bear the index fossil Paltodus deltifer deltifer (Lindström) (Fig. 5) that allows to recognize the Paltodus deltifer Zone, Paltodus deltifer deltifer Subzone of the biostratigraphic scheme of Baltica, which correlates with the Macerodus dianae Zone of the scheme used in this work. In these samples, the record of Paltodus deltifer deltifer, Colaptoconus priscus (Ji and Barnes) and Colaptoconus quadraplicatus (Branson and Mehl) is frequent, and the record of Variabiloconus bassleri (Furnish) is scarce. The local thickness of this zone is ca. 38 m. Albanesi et al. (1998) reported the Paltodus deltifer Zone in the La Silla Formation at the Portezuelo Yanso section, and correlates its upper part with the Macerodus dianae Zone. Subsequently, studied the upper part of the La Silla Formation at the Cerro Viejo of San Roque, Central Precordillera of San Juan, and recognized the index fossils Macerodus dianae in the Umango section and Paltodus deltifer deltifer from the top stratum of the La Silla Formation in the Portezuelo Jáchal section. This work allowed to recognize the Macerodus dianae Zone.
In the San Jorge Formation, exposed in the central sector of the La Pampa Province, Albanesi et al. (2003) reported conodonts referred to the Baltoscandian upper P. deltifer Zone that correlates with the Macerodus dianae Zone.
Intercontinental correlation
The conodont associations from the Ceratopyge limestone (lower Oelandiano AIII Stage) of Västergötland and Öland, Sweden, would correspond to the Paltodus deltifer Zone (Lindström, 1955(Lindström, , 1971 that in turn correlates with the Macerodus dianae Zone of the Precordillera. Szaniawski (1980) analyzed the chalcedony layers from the Holy Cross Mountains at Poland, and divided the Paltodus deltifer Zone into two subzones, a lower or Paltodus deltifer pristinus Subzone, and an upper or Paltodus deltifer deltifer Subzone; later, Löfgren (1996) analyzed the biostratigraphy of the Orreholmen quarry in Västergötland, Sweden, recognizing this subdivision, where the upper subzone correlates with the Macerodus dianae Zone.
The Prioniodus gilberti Zone defined by Stouge and Bagnoli (1988) for layer 8 of the Cow Head Group at Newfoundland would represent the discussed zone. A correlative interval has also been identified by Smith (1991) in Greenland, Küppers and Pohler (1992) in Montagne Noire, southern France, An et al. (1983) in northern China, and by Nicoll et al. (1993) from the units underlying the Emanuel Formation in the Canning Basin of Australia.
At the Honghuayuan Formation, Guizhou, southern China, Zhen et al. (2007) record an association of conodonts that compares with material from Sweden, concluding that the lower part of that association would correspond to the P. deltifer Zone, that is partly the Macerodus dianae Zone.
Paroistodus proteus Zone
The index fossil Paroistodus proteus is recorded between the samples LSSJ P1 and LSSJ O6 (Fig. 4), immediately below the occurrence of Prioniodus elegans Pander. Its appearance in this stratigraphic interval allows the recognition of the Paroistodus proteus Zone in this section. It shall be noted that it has not been possible to identify the subzones of this zone, because the index fossils that determine these intervals have not been found. The local thickness of this zone is 7.5 m.
The Paroistodus proteus Zone records a low diversity of conodonts at the base, with Colaptoconus priscus and C. quadraplicatus as recurrent species, ranging from the underlying zone. The diversity increases upwards towards the upper section, with the appearances of Lundodus gladiatus (Lindström), Tropodus comptus (Branson and Mehl), Tropodus sweeti (Serpagli), Cornuodus longibasis (Lindström), Protoprioniodus simplicissimus McTavish, Kallidontus corbatoi (Serpagli), Diaphorodus russoi (Serpagli), Paroistodus parallelus (Pander), Protopanderodus leonardii Serpagli, Protopanderodus elongatus Serpagli, Paroistodus originalis (Sergeeva), and Scolopodus krummi (Lehnert). Hünicken and Mazzoni (1994) reported the Paroistodus proteus Zone in the San Juan Formation from Guandacol River area of the northern Precordillera. In turn, Albanesi et al. (1998) determined the Paroistodus proteus Zone from the top strata of the La Silla Formation and the basal part of the San Juan Formation at the Yanso Section, Central Precordillera. Voldman et al. (2016) established the Acodus apex Zone from the top stratum of the Santa Rosita Formation up to the basal part of the Acoite Formation, and the Acodus triangularis Zone from the lower part of the Acoite Formation, at the Chulpíos Creek, Santa Victoria Range, Cordillera Oriental. The Paroistodus proteus Zone of this study mostly correlates with the referred Acodus apex and A. triangularis zones. Löfgren (1993) analyzed conodonts from Hunneberg, Sweden, and recognized four biostratigraphic intervals within of the Paroistodus proteus Zone, increasing the resolution for this part of the Ordovician; later, Löfgren (1994) (An et al., 1983;An and Zheng, 1990;Zhen et al., 2015aZhen et al., , 2016Wang et al., 2018). At the Honghuayuan Formation, Guizhou, southern China, Zhen et al. (2007) recorded an association of conodonts that compared with material from Sweden, concluding that the upper part of that association would correspond to the P. proteus Zone, although they did not record the index species. At slope facies of the Shijiatou and Jingshan formations, southern China, Zhen et al. (2015b) recorded the Paroistodus proteus, Triangulodus bifidus and Serratognathus diversus biozones, which correlate with the Paroistodus proteus Zone of this study.
Prioniodus elegans Zone
The record of Prioniodus elegans (Fig. 6) below the first appearance of Oepikodus evae (Lindström), is verified between samples LSSJ O6 and LSSJ L1 (Fig. 4), allowing for the recognition of the homonymous zone for this interval, whereas its upper limit is demarked at the sample LSSJ K2 where the Oepikodus evae index fossil appears. The Prioniodus elegans-Tropodus sweeti Subzone is recognized from sample LSSJ O6 to LSSJ O1 by the record of Tropodus sweeti not associated with Oepikodus communis (Ethington and Clark Serpagli (1974) suggested the existence of the Prioniodus elegans Zone in the San Juan Formation, Precordillera, and proposed its correspondence with the Fauna A of the Pachaco section, based on the species association, although not determining the nominal species. This species was firstly published by Hünicken and Sarmiento (1980) from the Guandacol section, and the zone was defined by Albanesi et al. (1998) in the Yanso section. Lehnert (1993Lehnert ( , 1995 defined the correlative Prioniodus elegans-Oepikodus communis Association Zone for the basal levels from the San Juan Formation at the Niquivil and Cerro La Silla sections, Precordillera of San Juan.
Regional correlation
In the Acoite Formation, Chulpíos Creek, Santa Victoria, Cordillera Oriental, Argentina, Voldman et al. (2016) established the Acodus triangularis, Gothodus vetus and Gothodus andinus zones, successively upwards. The Prioniodus elegans Zone applied in this work could be correlated with the upper part of the Acodus triangularis Zone, the Gothodus vetus Zone and much of the Gothodus andinus Zone, as defined by the referred authors. Mango et al. (2016) studied samples from the Huaco Anticline, San Juan Precordillera, and recognized the Prioniodus elegans Zone in strata of the lower part of the San Juan Formation, based on the occurrence of the nominal species at the Río Huaco canyon. Löfgren (1978Löfgren ( , 1993Löfgren ( , 1994Löfgren ( , 1996 identified parts of this zone from different localities of Sweden. Layer 9 of the Cow Head Group contains conodonts assignable to the Prioniodus elegans Zone in different sections of Western Newfoundland (Fåhraeus and Nowlan, 1978;Stouge and Bagnoli, 1988;Pohler, 1994). It could also be correlated with the lower part of Fauna E proposed by Ethington and Clark (1971) for the North American biostratigraphic scheme, and with the Oepikodus communis Zone by Ethington and Repetski (1984).
Intercontinental correlation
The Protopanderodus inconstans-Scolopodus subrex Zone of shallow waters and Acodus delicatus-Acodus? primus Zone of deep water environments defined by Ji and Barnes (1994) for the Boat Harbour Formation, Saint George Group, Newfoundland, correlate with the lower Prioniodus elegans Zone. Its upper part corresponds to the lower Parapanderodus carlae-Stultodontus ovatus Zone of shallow-water facies and to the Oepikodus communis-Protoprioniodus simplicissimus Zone of deep-water facies as defined for the Catoche Formation by the same authors. Seo et al. (1994) defined the Paracordylodus gracilis and Triangulodus dumugolensis zones for the upper South Korean Dumugol Formation, which could be partially correlated with the Prioniodus elegans Zone. At slope facies of the Jingshan Formation, southern China, Zhen et al. (2015b) also recorded the Prioniodus elegans Biozone.
Oepikodus evae Zone
This zone is recognized from the sample LSSJ K2 by the occurrence of Oepikodus evae (Fig. 5) to the sample LSSJ J (Fig. 4), where Oepikodus intermedius (Serpagli) appears not associated to Oepikodus evae. In this section, the species Juanognathus variabilis Serpagli and Scolopodus oldstockensis Stouge are recorded throughout the zonal interval, though according to the reference biostratigraphic scheme by Albanesi and Ortega (2016) it would correspond to the Oepikodus evae-Scolopodus oldstockensis Subzone of Albanesi et al. (1998). The division of the zone will be discussed in detail under the discussion part. The local thickness of this zone is 23.9 m.
In this interval Periodon flabellum (Lindström), Drepanoistodus basiovalis (Sergeeva), Oepikodus intermedius, Scolopodus oldstockensis, Texania heligma Pohler, and Paroistodus cf. P. proteus appear in the record. Additionally, the last record of Tropodus comptus, Oepikodus communis, Prioniodus elegans and Periodon primus Stouge and Bagnoli is observed. Lehnert (1993Lehnert ( , 1995 proposed the O. evae and O. evae./O. intermedius zones at the Niquivil section, which correlates with the O. evae Zone established by Albanesi et al. (1998). The later zone was recognized at the Los Gatos Creek, Cerro Viejo of Huaco, Precordillera Central of San Juan, by Mango and Albanesi (2018a). Mestre (2008) reviewed the conodont biostratigraphy of the uppermost San Juan Formation in the Buenaventura Luna Monument area, recognizing the Oepikodus evae Zone regardless previous studies by Lemos (1981). Subsequently, Mango et al. (2016) analyzed conodonts from the Huaco Anticline, recognizing conodonts of the O. evae Zone from the top stratum of the San Juan Formation, and the limestone dropstones that bear the basal strata of the unconformably overlying Guandacol Formation.
Regional correlation
Species of the Trapezognathus diprion Zone are recognized in samples obtained from the Acoite Formation of the Cordillera Oriental, Northwest Argentina by Carlorosi and Heredia (2013), whose lower part correlates with the upper O. evae Zone of Albanesi et al. (1998). Voldman et al. (2016) define the Gothodus andinus Zone, in the upper Acoite Formation at the Chulpíos creek, Santa Victoria, Cordillera Oriental, Argentina. The upper section could be correlated with part of the O. evae Zone as used in this study. At the Suri Formation of the Famatina System (Lehnert et al., 1997;Albanesi and Astini, 2000) and the Niquivil section (Albanesi et al., 2006) the O. evae Zone is well documented.
Intercontinental correlation
The O. evae Zone can be correlated with the upper-middle part of the Oepikodus communis Zone in North American biostratigraphic schemes (Ethington and Repetski, 1984).
The index fossil has an important record in the formational units of Hubei Province in China An et al., 1985), although Stouge and Bagnoli (1988) appreciate difficulty in correlation.
The Precordilleran O. evae Zone can be correlated with the O. evae Zone of Lindström (1971) and the O. evae Zone and the lower Trapezognathus diprion Zone of Bagnoli and Stouge (1997) from the Baltoscandian region.
Oepikodus intermedius Zone
The occurrence of Oepikodus intermedius (Fig. 5) not associated to Oepikodus evae and Tripodus laevis Bradshaw, allows to recognize the homonymous zone from the sample LSSJ J to the sample LSSJ H (Fig. 4), where Tripodus laevis appears in the record. The local thickness of this zone is 30.9 m.
Regional correlation
At the Yanso section (Albanesi et al., 1998) and the Niquivil section (Albanesi et al., 2006) the referred unit would be correlated with the O. intermedius Zone. According to their collection of conodonts, the latter authors demonstrate that the Fauna C of the San Juan Formation (Serpagli, 1974)
Intercontinental correlation
The O. intermedius Zone could be correlated with the O. communis Zone (Ethington and Repetski, 1984;cf. Smith, 1991;Ji and Barnes, 1994), and with the Protoprioniodus aranda-Juanognathus jaanussoni interval of Ethington and Clark (1981) for the middle Wah Wah Formation of the Pogonip Group at Ibex, Utah.
The O. intermedius Zone correlates with the uppermost part of the Oepikodus evae Biozone of southern China and with the Jumudontus ganada Biozone of northern China (Zhen et al., 2015a(Zhen et al., , 2016Wang et al., 2018).
In the Baltic region, it can be correlated with the upper O. evae Zone (Lindström, 1971) and with the Microzarkodina sp. A Zone of Bagnoli and Stouge (1997). Lehnert et al. (2013) studied the biostratigraphy of the Oslobreen Group in the Svalbard archipelago, and recognized in the Valhallfonna Formation, Olenidsletta Member, the Oepikodus intermedius Zone, which would correlate with the O. intermedius Zone and the base of the Baltoniodus triangularis-Tripodus laevis Zone of this work.
Baltoniodus triangularis-Tripodus laevis Zone
This zone is defined by the first appearance of Tripodus laevis (Fig. 6), without being associated with Baltoniodus navis (Lindström), between samples LSSJ H and LSSJ tope+ (Fig. 4), the latter corresponds to the top stratum of the San Juan Formation in the section. At these levels, Tripodus laevis, Drepanoistodus costatus, Stolodus stola (Lindström), Pteracontiodus cryptodens (Mound), Scolopodus striatus Pander and Scalpellodus gracilis (Sergeeva) are recorded. The local thickness of this zone is at least 87.4 m.
Regional correlation
This unit has originally been defined to as Tripodus laevis Zone in the Portezuelo Yanso section of the Cerro Potrerillo (Albanesi et al., 1998); although considering the presence of Baltoniodus triangularis (Lindström) at the same section and at Peña Sombría, Della Costa and Albanesi (2016) and Albanesi and Ortega (2016) emended the original definition by incorporating the latter taxon as composite name to the zone, for a broader reference. At the Niquivil section the referred biostratigraphic unit is correlated with the Tripodus laevis Zone (Albanesi et al., 2006) or Baltoniodus triangularis-Tripodus laevis (Mango and Albanesi, 2018b). At the Los Gatos Creek, Cerro Viejo of Huaco, Precordillera Central of San Juan, Mango and Albanesi (2018a) recorded the Baltoniodus triangularis-Tripodus laevis Zone in the San Juan Formation.
The conodont associations corresponding to Fauna D recorded at the Pachaco section on the San Juan River (Serpagli, 1974) could be assigned to the Baltoniodus triangularis-Tripodus laevis Zone.
Intercontinental correlation
This zone would correlate with the Baltoniodus? triangularis and Microzarkodina flabellum zones of Bagnoli and Stouge (1997), and with the Baltoniodus triangularis Zone of Lindström (1971) and Tolmacheva (2001), from the Baltic biostratigraphic schemes.
In North America, it correlates with the Tripodus laevis Zone of Ross et al. (1997) and with the Microzarkodina flabellum-Tripodus laevis Zone of Ethington and Clark (1981). Ji and Barnes (1994) defined the Parapanderodus retractus Zone, for shallow-water environments, and the Pteracontiodus cryptodens Zone for deep-water environments in the Aguathuna Formation, Saint George Group, Newfoundland, correlatives of the Baltoniodus triangularis-Tripodus laevis Zone.
At the Huanghuachang section, China, it could be correlated with the Baltoniodus triangularis Zone (Wang et al., 2003). Lehnert et al. (2013) studied the biostratigraphy of the Oslobreen Group in the Svalbard Archipelago, and recognized in the Valhallfonna Formation, Olenidsletta Member, the Oepikodus intermedius Zone, whose upper part correlates with the lower Baltoniodus triangularis-Tripodus laevis Zone of this work.
Discussion
The 264.7 m thickness calculated in this work for the section of the San Juan Formation at Cerro La Silla differs from the 320 m estimated by Keller et al. (1994) for the same section, later used by Cañas (1999), Keller (1999), and Buggisch et al. (2003). However, the thickness calculated from the base of the San Juan Formation up to the beginning of the Oepikodus evae Zone in the Cerro La Silla section is 122.5 m in the present work, which coincides with the result obtained by Lehnert (1995).
At the Cerro La Silla, the division of the Oepikodus evae Zone according to its original definition is not possible to apply, due to the distribution of the species recorded. At the Portezuelo Yanso section, Scolopodus oldstockensis appears in the record next to Oepikodus intermedius in the middle Oepikodus evae Zone (Albanesi et al., 1998); instead, at the Cerro La Silla section, Oepikodus intermedius presents its first occurrence towards the middle Oepikodus evae Zone, while S. oldstockensis has its first occurrence in older strata at the base of the mentioned zone. The absence of S. oldstockensis in the lower Oepikodus evae Zone in the Portezuelo Yanso area could be related to facies control or a bias of the laboratory procedure.
Finally, Buggisch et al. (2003) indicated that the upper San Juan Formation at the Cerro La Silla, would correlate with the Baltoniodus navis, Paroistodus originalis, and Microzarkodina parva zones of the Baltic biostratigraphic scheme. However, the present records constrain the deposits located from the base of the second reef level to the top stratum of the San Juan Formation exposed in the section (87.4 m) to the Baltoniodus triangularis-Tripodus laevis Zone.
Conclusions
The San Juan Formation at the Cerro La Silla section presents a total thickness of 264.7 m. This result differs from published measurement that indicate a thickness of 320 m for the exposed strata of the formation, a significant difference possibly due to the field technique applied for measuring the thickness of this section. However, the thickness calculated from the base of the San Juan Formation up to the beginning of the Oepikodus evae Zone at this section is similar to the previously published data, as a reference for the middle part of the section.
The conodonts recovered from La Silla and San Juan formations present a color alteration index (CAI) varying from 2 to 2.5, which refers to burial temperatures of 60-155 °C compatible with the Niquivil tectonic thrust, present in the easternmost belt of the Central Precordillera.
For the upper La Silla Formation and the lower part of the San Juan Formation at the Cerro La Silla, the Paltodus deltifer deltifer Subzone of the Paltodus deltifer Zone from the Baltic biostratigraphic scheme, which correlates with the Macerodus dianae Zone (middle Tremadocian) of the Precordillera and the North American schemes, is determined.
The San Juan Formation at the Cerro La Silla section records conodont species of the Macerodus dianae, Paroistodus proteus, Prioniodus elegans, Oepikodus evae, Oepikodus intermedius and Baltoniodus triangularis-Tripodus laevis zones (middle Tremadocian-lower Dapingian). Recovering of Baltoniodus triangularis-Tripodus laevis Zone from the second reef level to the top stratum in the section contradicts previous interpretations that this interval correlates with the Baltoniodus navis, Paroistodus originalis and Microzarkodina parva zones of the Baltic biostratigraphic scheme.
At the Cerro La Silla section, Scolopodus oldstockensis is recorded from older strata of the Oepikodus evae Zone than at the Portezuelo Yanso section, Cerro Potrerillo, where this zone was defined for the Precordillera, either because of facies control or laboratory bias. This situation precludes the application of the zonal division in the study section. | 6,126.4 | 2020-09-30T00:00:00.000 | [
"Geology",
"Geography",
"Environmental Science"
] |
Loss of Guanylyl Cyclase C (GCC) Signaling Leads to Dysfunctional Intestinal Barrier
Background Guanylyl Cyclase C (GCC) signaling via uroguanylin (UGN) and guanylin activation is a critical mediator of intestinal fluid homeostasis, intestinal cell proliferation/apoptosis, and tumorigenesis. As a mechanism for some of these effects, we hypothesized that GCC signaling mediates regulation of intestinal barrier function. Methodology/Principal Findings Paracellular permeability of intestinal segments was assessed in wild type (WT) and GCC deficient (GCC−/−) mice with and without lipopolysaccharide (LPS) challenge, as well as in UGN deficient (UGN−/−) mice. IFNγ and myosin light chain kinase (MLCK) levels were determined by real time PCR. Expression of tight junction proteins (TJPs), phosphorylation of myosin II regulatory light chain (MLC), and STAT1 activation were examined in intestinal epithelial cells (IECs) and intestinal mucosa. The permeability of Caco-2 and HT-29 IEC monolayers, grown on Transwell filters was determined in the absence and presence of GCC RNA interference (RNAi). We found that intestinal permeability was increased in GCC−/− and UGN−/− mice compared to WT, accompanied by increased IFNγ levels, MLCK and STAT1 activation in IECs. LPS challenge promotes greater IFNγ and STAT1 activation in IECs of GCC−/− mice compared to WT mice. Claudin-2 and JAM-A expression were reduced in GCC deficient intestine; the level of phosphorylated MLC in IECs was significantly increased in GCC−/− and UGN−/− mice compared to WT. GCC knockdown induced MLC phosphorylation, increased permeability in IEC monolayers under basal conditions, and enhanced TNFα and IFNγ-induced monolayer hyperpermeability. Conclusions/Significance GCC signaling plays a protective role in the integrity of the intestinal mucosal barrier by regulating MLCK activation and TJ disassembly. GCC signaling activation may therefore represent a novel mechanism in maintaining the small bowel barrier in response to injury.
Introduction
Guanylyl cyclase C (GCC) is a transmembrane receptor for the endogenous peptides guanylin (GN) and uroguanylin (UGN) and for bacterial heat stable enterotoxin (ST) [1,2]. GCC signaling plays a pivotal role in the regulation of intestinal fluid and electrolyte homeostasis [3]. Activation of GCC leads to increased intracellular cyclic GMP (cGMP) accumulation and activation of the cystic fibrosis transmembrane conductance regulator. Activation in response to the superagonist ST results in secretory diarrhea [4,5]. In addition, GCC signaling regulates the renewal of the intestinal epithelium by restricting the proliferating cell cycle and promoting the transition from proliferation to differentiation along the crypt to villus axis [6,7]. Deregulated GCC action is postulated to result in colorectal tumorigenesis and GCC expression is used as marker for human colorectal cancer metastases [8,9]. Cross talk between activation of GCC signaling and c-src in colonic epithelial cells might also represent a feedforward mechanism of cancer cell proliferation and disease progression in colorectal cancer [10]. Our group has reported that activation of GCC signaling pathway protects intestinal epithelial cells from acute radiation-induced apoptosis [11]. However, it remains unknown whether GCC directly mediates the regulation of intestinal epithelial barrier function.
Barrier function is highly regulated by tight junction proteins (TJPs), allowing the epithelium to control transmucosal permeability to solutes, water, and electrolytes [12,13]. Amongst many components of TJs, occludin, junction adhesion molecule A (JAM-A), and claudins are membrane proteins that connect adjacent cells and build the intestinal barrier [13,14]. Activation of actomyosin contraction, as assessed by phosphorylation of the myosin II regulatory light chain (MLC), regulates the assembly of TJPs [15]. Increased MLC kinase (MLCK) activity has been demonstrated to mediate intestinal epithelial barrier dysfunction induced by tumor necrosis factor a (TNFa) and interferon c (IFNc) [16,17,18]. Cytokine-induced STAT1 activation in IECs mediates the onset of intestinal diseases [19,20]. Conversely, depletion or pharmacological inhibition of epithelial MLCK protected mice from TNFa-dependent intestinal epithelial barrier loss, and improved diarrheal symptoms [21].
Defects in intestinal barrier function have been implicated in the pathogenesis of a number of intestinal diseases, such as sepsis, inflammatory bowel disease (IBD) and irritable bowel syndrome (IBS). In our current studies, employing GCC and GCC ligand deficient mice, we demonstrate that GCC signaling is required for the maintenance of homeostatic intestinal barrier function, identifying a novel pathway as well as a new potential therapeutic target for intestinal barrier dysfunction.
Loss of GCC signaling increases paracelluar permeability in small intestine
As previously shown [8,22,23], we confirmed GCC expression throughout all intestinal segments. However, by qualitative immunohistochemistry, GCC was mainly expressed in IECs (Fig 1 A). We next determined paracellular permeability of intestinal segments in GCC2/2 and WT mice. We found that jejunal paracellular permeability to FD4 was significantly higher in GCC2/2 mice under basal conditions; this was not seen in ileum and colon (Fig 1 B, C and D). Analyzing LY uptake into jejunal villi of live mice by confocal microscopy, we consistently found that the integrity of jejunal epithelia was disrupted in GCC2/2 mice, as shown by significantly increased LY inside villi. This was not apparent in the ileum and colon (Fig 1 E and data not shown). Together, these results demonstrate that loss of GCC signaling in the jejunum leads to mucosal barrier dysfunction. In addition, under basal conditions, compensation for loss of GCC precludes barrier defects in the ileum and colon.
GCC2/2 mice are predisposed to LPS-induced intestinal injury
To further determine the function of GCC signaling in intestinal barrier, we challenged wild type (GCC+/+) mice and GCC2/2 mice with a non-lethal dose of LPS (1 mg/kg), which has been reported to cause a mild and reversible alteration in intestinal barrier function by promoting bacterial translocation and cytokine secretion [24]. LPS challenge (12 hr) did not increase permeability in the jejunum of GCC+/+ mice or the already elevated permeability in GCC2/2 mice ( Fig 1B); however, we found there was remarkably elevated permeability in the ileum of both genotypes after 12-hr LPS challenge and the increase was significantly higher in GCC2/2 mice compared to baseline (Fig 1B & C). Consistently, a significantly higher amount of bacteria translocated to mesenteric lymph nodes (MLN) in GCC2/2 mice relative to GCC+/+ mice after 12-hr LPS challenge (Fig 2A), demonstrating that loss of GCC leads to ileal barrier dysfunction after LPS challenge; also, GCC2/2 mice Figure 1. Loss of GCC signaling increases paracellular permeability in small intestine. A) GCC expression was detected in colon, ileum, and jejunum by immunohistochemistry (IH), inset is a negative control (Neg); original magnification, 6400, bar = 50 mm, n = 5. B, C & D) Jejunal, ileal and colonic paracellular permeability was determined using an everted gut sac in WT (GCC+/+) and GCC knock out (GCC2/2) mice with and without LPS (1 mg/kg) challenge, n = 10. E) Permeability to Lucifer Yellow (LY) was measured in jejunum from GCC+/+ and GCC2/2, n = 5. Relative intensity was determined by the ratio of LY intensity inside villi versus luminal side. Results are expressed as the mean 6 SEM. doi:10.1371/journal.pone.0016139.g001 consequently lost a significantly higher percentage of body weight than WT mice (p,0.01 , Fig 2 B). Furthermore, upon increasing the LPS dose to 4 mg/kg, we found that 90% of the GCC2/2 mice did not survive by 24 hrs after LPS challenge, (p,0.05, see
Loss of GCC signaling increases MLC phosphorylation in IECs and disrupts TJP assembly in small intestine
TJPs regulate intestinal paracellular permeability, controlling the penetration of pathogens and allergens to the submucosa [12,13]. We first studied an up-stream kinase of TJP assembly, MLCK which can be viewed as a final common pathway of acute tight junction regulation in response to a broad range of immune and infectious stimuli [16,25]. We found that loss of GCC led to a significantly increased cellular abundance of pMLC in IECs detected by both immunoblot and immunofluorescence (Fig 3A & B). Subsequently, immunoblotting demonstrated that JAM -A and Claudin-2 were significantly reduced in jejunum from GCC2/2 mice (Fig 3C). We also found a consistently reduced JAM-A abundance in GCC2/2 crypts using confocal immunofluorescence microscopy ( Fig 3D). This indicates that GCC signaling directly regulates the production and/or assembly of TJPs like JAM-A and claudin-2. Together, these data strongly suggest that GCC signaling mediates regulation of intestinal barrier function by regulation of TJP assembly.
Loss of UGN leads to increased paracellular permeability in small intestine
To test whether the role of GCC in maintaining barrier function was ligand-dependent, we studied permeability in the jejunum of UGN2/2 mice. UGN2/2 mice are also relatively deficient in GN [3]. We found that UGN2/2 mice demonstrated significant jejunal barrier dysfunction characterized by increased paracellular permeability and bacterial translocation to MLN along with increased MLC phosphorylation under basal conditions (see Fig 4A, B & C), suggesting that intestinal UGN and GN are also required for intestinal barrier homeostasis.
LPS exaggerates cytokines production in GCC2/2 intestinal mucosa
To further pursue the mechanism of increased barrier dysfunction in GCC2/2 mice, we measured levels of cytokines in the circulation. We found that IFN-c as well as IL12p70, but not TNFa, were significantly elevated in the peripheral circulation in GCC2/2 mice at baseline ( Fig 5A). Using quantitative PCR, we found no significant difference between genotypes in IFNc or TNFa levels in the small intestine at baseline. However, a nonlethal dose of LPS significantly upregulated IFNc mRNA expression in GCC2/2 mice, while TNFa levels were not significantly different (Fig 5B & C). This suggests that GCC null mice have a deregulated immune function, resulting in susceptibility to LPS induced intestinal injury.
Loss of GCC signaling activates IFN-c:MLCK pathway in IECs
Employing laser capture microdissection (LCM), we isolated jejunal IECs. We found there was a dramatically elevated IFN-c mRNA level in the IEC compartment in GCC2/2 mice at baseline compared to wild type ( Fig 6A). STAT1 activation is an important mediator of IFN-c signaling, reflecting intestinal mucosal immune response and inflammation [19,26]. Concomitant with increased levels of IFN-c in IEC compartment, immunohistochemistry results showed that there was increased phosphorylated STAT1 (pSTAT1) staining in jejunal IECs which was sparsely distributed in a patchy fashion ( Fig 6B). After 12 hr-LPS challenge, STAT1 activation in IECs was greatly increased and exhibited a more pervasive distribution in GCC2/2 mice Figure 3. Loss of GCC signaling increases MLC phosphorylation in IECs and disrupts TJP assembly in small intestine. Jejunal IECs were isolated with EDTA, and total protein was extracted from jejunal IECs, A) MLC abundance and phosphorylated MLC (pMLC) were measured by Western blot, densitometry results normalized to total MLC, n = 5. B) pMLC distribution was determined with immunofluorescence; dotted lines encircle staining of pMLC, n = 5. C) Total protein was extracted from jejunal tissue, and JAM-A and Claudin-2 abundance was determined by Western blot, n = 5. D) JAM-A distribution was determined with immunofluorescence, n = 5, original magnification, 6400, bar = 50 mm. Signal intensity was determined by densitometry. Quantitated western blot results are shown as the mean 6 SEM. * p,0.05 versus GCC+/+ mice. doi:10.1371/journal.pone.0016139.g003 ( Fig 6B). This was confirmed to be a significant increase by a semiquantitative counting of pSTAT1 in villus IECs ( Fig 6B). Interestingly, flow cytometry analysis indicated that CD3 + intra-epithelial lymphocytes (IEL) were significantly increased in the intestinal epithelial compartment of GCC2/2 mice (1.8560.15% in WT versus 2.8660.4% in GCC2/2 mice, p,0.01, n = 7); we also observed there was a fair amount of pSTAT1 positive IELs in LPS-treated GCC2/2 (Arrows, Fig 6B). These data suggest that jejunal epithelial barrier has been severely damaged in GCC2/2 mice, and the infiltrated CD3 + IELs might be the resource of elevated IFNc in IEC compartment. As IFN-c primes intestinal epithelia to respond to TNF-a and LIGHT which are associated with induction of the epithelial (long) isoform of MLCK [18,27], we employed real time PCR to quantitate levels of expression in wild type and GCC2/2 mice. We found long MLCK mRNA levels were significantly up regulated in GCC deficient jejunum at baseline, and were potentiated by LPS challenge both in WT and in GCC2/2 mice. Together, GCC may mediate the regulation of IEC barrier specifically through an IFN-c:MLCK pathway.
Reduction of GCC signaling leads to hyperpermeability in IEC monolayer
We grew HT-29 IEC monolayers on Transwell filters and used RNA interference to achieve approximately 70% knockdown of GCC expression. We observed that paracellular permeability in post-confluent HT-29 cell monolayers, assessed by the apical-tobasolateral flux of FD4, was markedly increased by knocking down GCC expression ( Figure S1, p = 0.02). Consistently, TEER was also reduced in GCC knock-down monolayers, compared to transfection reagent control without siRNA (p = 0.006, see Figure S2). Similar results were obtained using the Caco-2 intestinal cell line as well (Fig 7 and data not shown). These data indicate that IEC monolayer permeability could be increased by inhibiting GCC expression in vitro, analogous to results obtained with the GC-C knockout mouse.
Reduction of GCC signaling up regulates MLC phosphorylation and decreases TJPs in IEC monolayer
To mimic in vivo studies, we combined IFNc (10 ng/ml) with TNFa (10 ng/ml) to stimulate the Caco-2 monolayers for 48 hr [17]. Monolayer hyperpermeability, induced by cytokines, was significantly enhanced by GCC knock-down compared to cytokine-treated controls (p,0.01, Fig 7A). We next examined MLC phosphorylation and the abundance of TJPs in Caco-2 monolayers after GCC knock down. TNFa and IFNc cause TJ disruption and epithelial barrier loss by activating MLCK [17,18]. We first measured pMLC and found that pMLC was upregulated by GCC knock down, which was further enhanced by IFNc and TNFa treatment compared to controls (Fig 7B). We then found that JAM-A and Claudin-2, membrane-associated TJPs, were significantly reduced in IEC monolayers with GCC knock down; this was potentiated by IFNc and TNFa administration ( Fig 7B). Interestingly, we did not find apparent alterations of other components of TJPs, for example, claudin-1. Taken together, these data further indicate that GCC signaling may mediate regulation of intestinal epithelial barrier function directly through affecting TJP assembly.
Discussion
Original descriptions of GCC null mice were marked by a paradoxical lack of an obvious phenotype and we and others suggested that the function of GCC would only be revealed by systemic study and perturbation of gastrointestinal function [4,28]. This led to the establishment of roles for GCC in regulation of IEC proliferation, apoptosis, and migration [6,9,11]. Epithelial barrier function is a crucial component of gut homeostasis, and dysregulation contributes to the pathogenesis of many intestinal diseases. In this paper, we have shown that jejunal permeability was increased in GCC2/2 and UGN2/2 mice compared to WT. GCC2/2 mice exhibited ileal hyperpermeability and greater bacterial translocation after LPS challenge, accompanied by increased IFNc levels. The level of phosphorylated MLC in IEC was significantly increased in GCC2/2 and UGN2/2 mice compared to WT; Claudin-2 and JAM-A expression in TJs were reduced in GCC deficient IEC. GCC knockdown in IEC monolayers was associated with increased permeability under basal conditions and enhanced IFNc induced hyperpermeability in IEC monolayers. Our data strongly suggest that GCC signaling plays a role in the integrity of the intestinal mucosal barrier by regulating epithelial MLC phosphorylation and TJ assembly.
Functional TJP strands are located between polarized epithelial cells and characterize highly-differentiated gastrointestinal epithelial cells [29]. GCC is highly expressed in differentiated enterocytes. Waldman and co-workers have shown that loss of GCC is associated with changes in IEC homeostasis, including increased proliferation in the crypt, increased migration along the crypt-villus axis, and increased apoptosis [6,9,30]. The magnitude of these differences decreased from duodenum to colon and parallels the level of hyperpermeability that we saw in different segments of the intestinal tract, with highest disruption of barrier function in the jejunum. It is possible that the changes in barrier function and TJPs that we observed may contribute at least partially to the alteration of homeostatic processes in GCC2/2 intestine. Dedifferentiated IECs can lead to immature production of TJPs, and loss of contact inhibition [31]. Our results show that loss of GCC signaling reduced JAM-A and Claudin-2 in vivo and vitro, which have been determined to be associated with tumor progression [32,33]. Most recently, GCC activation was found a polar pattern of the effects on ion transport; GCC mucosal activation resulted in a potent cGMP-chloride secretion, which may add to its role in the intestinal barrier [34].
GN and UGN are expressed in the highly-differentiated compartment along the crypt-villus axis, associated with the transition from proliferation to differentiation [35]. They exhibit a gradient of expression along the length of the gastrointestinal tract with UGN levels highest in the proximal intestine and GN levels highest in the colon [3]. Consistently, we found that UGN2/2 mice, which also exhibit a significant decrease in intestinal GN protein [3], had a dysfunctional jejunal barrier function at baseline. Together, this suggests that intestinal UGN and GN as well as their signaling via GCC are required for the maintenance of small bowel barrier function. Levels of cGMP are reduced by 50-75% in the intestines of both GC-C2/2 and UGN2/2 mice [3,11] and this reduction may provide a basis for loss of TJ function that remains to be investigated.
IFN-c selectively increases epithelial permeability to large molecules by direct alteration of TJP assembly [36,37]. STAT-1 is an important signaling molecule for IFN-c; and STAT1 activation in IECs leads to downstream proinflammatory gene expression, predisposing IECs to injury [38,39]. In our studies, we found that GCC signaling is involved in a complicated modulation of gut mucosal immunity. An increased level of cytokines (IFNc and IL12p70) was detected in GCC knock out mice at baseline accompanied by significantly elevated jejunal permeability, MLCK expression and STAT1 activation in IECs. An important mechanism through which IFNc drives barrier dysfunction is by increasing expression of TNF and LIGHT receptors on epithelial Figure 6. Loss of GCC signaling activates IFN-c:MLCK pathway in IECs. A) Jejunal IECs were captured by LCM, RNA isolated and the level of IFNc mRNA was assessed by real-time PCR. n = 7. B) Paraffin-embedded jejunal tissue was immunostained by pSTAT1, n = 5, original magnification, 6400, bar = 50 mm. pSTAT1 positive cells were counted by a semi-quantitative method and expressed as average positive cells per villus. Arrows indicate pSTAT1 staining of intraepithelial lymphocytes in tissue from GCC2/2 mice. C) Long MLCK mRNA levels were determined by real-time PCR in jejunal tissue either under basal condition or following LPS challenge, n = 6. Results are shown as the mean 6 SEM. * p,0.05 versus WT group; # p,0.05 versus LPS treated WT group and ' versus LPS treated GCC2/2 group. doi:10.1371/journal.pone.0016139.g006 cells and sensitizing the IEC monolayer to cytokine stimulation [17,18]. Low dose LPS challenge resulted in a further disruption of barrier function in the ileum of GCC null mice along with significantly elevated luminal bacterial translocation. The barrier dysfunction predisposed GCC null mice to LPS induced sepsis and organ dysfunction, and subsequently resulted in increased mortality upon high dose LPS challenge. Our data also indicated that IFNc mRNA expression and IELs was elevated in LCMcaptured jejunal IEC compartment, suggesting that intestinal barrier dysfunction in GCC deficient mice is maintained by a continuous immune activation. These data indicates that loss of GCC signaling may lead to a dysregulation of the mucosal immune system, and triggers intestinal barrier dysfunction and immune activation.
Primary pathophysiologically relevant intestinal epithelial barrier dysfunction can broadly activate mucosal immune responses and accelerate the onset and severity of immune-mediated colitis, but is not sufficient for intestinal disease [25,40]. Cytokine-induced epithelial barrier dysfunction can be mediated by increased MLCK expression and subsequent myosin II regulatory light chain (MLC) phosphorylation; TNFa, IFNc, and LIGHT (a member of the TNFa superfamily) can cause MLCK-dependent barrier dysfunction [21,41]. Furthermore, MLCK upregulation is correlated with IBD disease activity, also suggesting that it may contribute to barrier dysfunction in intestinal disease [16]. Our data confirmed that loss of GCC signaling led to increased MLC phosphorylation, MLCK mRNA expression and IEC barrier dysfunction in mice. In comparison, our GCC knockdown studies in IEC monolayers highlight an increase in permeability accompanied by increased phosphorylation of MLC due only to decreased levels of GCC. However, the manner in which GCC signaling mediates the regulation of MLCK activity needs to be explored in the future. Together, loss of GCC signaling leads to the activation of IFN-c:MLCK pathway in IECs and may be an important initiating event that leads to barrier dysfunction, followed by pro-inflammatory factor production and a predisposition to LPS-induced injury.
We found that reduced JAM-A and Claudin-2 abundance was consistently associated with loss of GCC in both GCC deficient mice and in GCC knock down IEC monolayers. JAM-A has been demonstrated to regulate junctional assembly through recruiting and binding these proteins to its intracellular C-terminus in order to colocalize junctional proteins with the nascent junctions [42]. JAM-A null mice exhibit increased intestinal mucosal permeability, and JAM-A has been determined to regulate epithelial permeability, inflammation, and proliferation [43]. Aberrant expression of Claudin-2 has been linked to SAMP1/YitFc (SAMP) mice, that develop chronic ileitis [44]. Claudin-2 can convert ''tight'' tight junctions into leaky ones, and it was identified as a cation-selective paracellular channel [45]. Upregulation of poreforming claudin 2 leads to altered tight junction structure and pronounced barrier dysfunction in mild to moderately active Crohn's disease [46]. Conversely, reduced levels of claudin-2 or JAM-A may also lead to disrupted barrier function [36]. Therefore, GCC signaling may be relevant to regulation of intestinal barrier function directly through interacting with TJPs.
The precise mechanisms elucidating how GCC signaling is involved in the regulation of TJPs is the basis for ongoing investigation.
Disruption of intestinal barrier function leading to mucosal inflammation and immune activation may be a key factor in the pathogenesis of several diseases, including sepsis, IBD and IBS [47,48,49]. IBS is characterized by an increased small bowel paracellular permeability and an increased load of luminal bacteria [48]. Linaclotide (MD-1100), a GCC agonist, has been shown in animal studies to stimulate intestinal fluid secretion and transit, but not in GCC null mice [50,51]. Linaclotide improved bowel habits and symptoms of IBS patients with chronic constipation although the mechanisms of action downstream of cGMP are uncertain [51]. Our studies for the first time identify a novel GCC:MLCK:TJP pathway that regulates intestinal barrier function. Therefore, augmenting intestinal GCC activation may represent a novel approach for restoring mucosal barrier function in intestinal disorders.
Materials
All chemicals and antibodies were purchased from Sigma-Aldrich (St. Louis, MO) unless otherwise noted. Antibodies specific for MLC and phosphorylated MLC (serine 19) (pMLC) and STAT1 (pSTAT1) were from Cell Signaling Technology (Danvers, MA). Antibodies specific for JAM-A, Claudin 1, and 2 were from Zymed (Carlsbad, CA). Antibody for GCC was from FabGennix (Frisco, TX). ON-TARGET plus SMARTpool for human GUCY2C was purchased from DHARMACON (Chicago, IL).
Animal resources and maintenance
Animal studies were approved by the Cincinnati Children's Research Foundation (CCRF) Institutional Animal Care and Use Committee (Protocol # 8E03019). GCC and UGN deficient mice (GCC2/2 and UGN2/2) have previously been described [3,28]. Mice were inbred for 10 generations to C57BL/6 and maintained in specific pathogen free conditions. To induce a systemic inflammatory response, mice were injected intraperitoneally with Escherichia coli (strain O111:B4, Sigma) lipopolysaccha- ride (LPS, 1 mg/kg or 4 mg/kg) in 0.5 mL of phosphate-buffered saline (PBS). Groups of mice were sacrificed 12 hrs after injection of LPS for intestinal permeability assay or 24 hrs after injection for survival study [24].
Measurement of paracellular intestinal permeability and bacterial translocation
Jejunal, ileal and colonic paracellular permeability to the fluorescent tracer fluorescein isothiocyanate-dextran with a molecular mass of 4,000 Da (FD-4) was determined using an everted gut sac method [52]. Fluorescence was measured using a fluorescence spectrophotometer (Biotek Instruments, VT) at an excitation wavelength of 492 and an emission wavelength of 515 nm. Permeability was expressed as the mucosal-to-serosal clearance of FD-4. Bacterial translocation to mesenteric lymph nude (MLN) was determined as previously described [52].
In vivo measure of local villus permeability
Methods are based on those described previously [53,54]. Briefly, after anesthesia, ,1 cm of jejunum was exteriorized, slit open, and the mucosal surface rinsed with saline. Cell nuclei were stained with Hoechst 33342 (2 mg/kg bw. Invitrogen, Eugene OR) and imaged with 2-photon microscopy (Zeiss LSM510 NLO, Jena Germany). Jejunal permeability was monitored by adding 50 mM Lucifer Yellow (LY; CH lithium salt, Molecular Probes) to the superfusate, and confocal fluorescence imaging (458 nm excitation, 505 nm emission). Post-acquisition image analysis used Metamorph 7 (Molecular Devices, Downingtown, PA) and ImageJ (NIH, Bethesda, MD). The intensity of luminal and tissue LY fluorescence was measured in appropriate regions and expressed as the tissue/lumen ratio.
Immunoblotting (IB), Immunohistochemistry (IH) and immunofluorescence (IF)
Isolated jejunal IECs, jejunal tissue as well as cultured cells were saved. Total cellular protein extracts and cytosolic protein were prepared using cold RIPA buffer and the NE-PER kit per the manufacturers' recommendations (Pierce, Rockford, IL). Expression of claudin1 & 2, junction adhesion molecule (JAM-A) and MLC, and pMLC abundance were detected in total protein and in isolated jejunal IECs. Band intensities were quantified as mean area density using ImageQuant (Molecular Dynamics, Sunnyvale, CA). pMLC was expressed as relative area density corrected by MLC bands intensity. Frozen tissue sections from mouse jejunum (4 mm) were prefixed in paraformaldehyde. Tissue sections were labeled with pMLC and JAM-A. 49,6-diamidino-2-phenylindole dihydrochloride (DAPI) was used for nuclear counterstaining following FITC-conjugated or TRITCconjugated goat anti-rabbit secondary antibodies. GCC expression and phosphorylation of STAT1 were examined in paraffin embedded intestinal sections using VECTASTAIN Elite ABC system (Vector lab, Burlingame, CA). pSTAT1 positive cells were counted by a semiquantitative method and expressed as average positive cells per villus. Images were captured using a Zeiss microscope and Axioviewer image analysis software (Deutschland, Carl Zeiss Corp, Germany) [52,55].
Laser Capture Microdissection (LCM)
Briefly, approximately 200 crypts and adjacent surface epithelial cells from jejunum were captured by a Veritas Microdissection System (Molecular Devices, CA); RNA was isolated with a PicoPure RNA Isolation kit (Arcturus) using our published methods [55,56]. The quality and concentration of RNA was measured by NanoDrop (Thermo Fisher). Total RNA (200 ng) was used to reversely transcribe to cDNA followed by a SYBR Green real-time PCR on the Mx4000 multiplex quantitative PCR instrument (Stratagene).
Real-time PCR
Total RNA was isolated from frozen tissue using Tri Reagent (Molecular Research Center, Inc., Cincinnati, OH) according to the manufacturer's protocol. RNA samples were treated with DNase I (Ambion, Austin, TX) and reverse-transcribed (2 mg) using random decamers (RETROscript, Ambion). PCR reactions using specific gene primers were performed with Brilliant II SYBR Green QPCR mix (Stratagene, La Jolla, CA) in the Mx3000p thermocycler (Stratagene). A relative amount for each gene examined was obtained from a standard curve generated by plotting the cycle threshold value against the concentration of a serially diluted RNA sample expressing the gene of interest. This amount was normalized to the level of b actin RNA. Primer sequences [57,58] are listed in the Table 1.
RNA interference and IEC monolayer permeability assay [59,60] GUCY2C ON-TARGETplus SMARTpool (Chicago, IL) transfection was performed with pre-confluent HT-29 and Caco-2 cells growing on Trans-well inserts (Becton Dickinson, Bedford, MA) according to manufacturer's instructions. Groups with transfection reagents only (TRAN) were chosen as controls for RNA interference. For subsequent experiments we used monolayers of Caco-2 or HT-29 cells (21 days post-confluent). The medium bathing the apical surface of the monolayers was replaced with 200 mL of DMEM complete medium containing FITC-Dextran (FD-4, Sigma) at 25 mg/mL. The medium bathing the basolateral surface was replaced with 500 mL of DMEM complete medium alone or DMEM supplemented with or without IFNc (10 ng/ml) and TNFa (10 ng/ml). Fluorescence in basolateral bathing medium was measured using a fluorescence spectrophotometer (Biotek Instruments, VT). The permeability of the monolayer was expressed as a clearance (C; nl?cm 22 ?h 21 ). Transepithelial electrical resistance (TEER) was measured by E-VOM instrument (World Precision Instruments, Sarasota, FL). Results were expressed as Ohm/cm 2 .
Measurements of Cytokines
Blood was directly collected from heart through diaphragm and serum was prepared and used to measure cytokines and chemokines with Bioplex TM [52].
FACS analysis [61,62] Briefly, mouse jejunum was everted and incubated in calciummagnesium-free HBSS with 1 mM EDTA for 30 minutes at 37uC with gentle shaking to liberate IEC. Cell survival was determined with an Annexin V kit (eBioscience). CD3-/7-AAD-(eBiosciences) was used as a marker for intra-epithelial T lymphocytes. Data were analyzed using FlowJo software.
Statistical Analysis
Results are presented as the mean 6 SEM. Data were analyzed using analysis of variance, 2-tailed Student's t test, and the Mann-Whitney test as appropriate (Prism, GraphPad, San Diego, CA). P values#0.05 were considered significant. Figure S1 Reduction of GCC signaling leads to hyperpermeability in IEC monolayer. HT-29 IEC monolayers were grown on Transwell filters. Paracellular permeability in postconfluent HT-29 cell monolayers was assessed by the apical-tobasolateral flux of FD4 in the presence and absence of GCC siRNA, n = 5. Results are shown as the mean 6 SEM. (TIF) Figure S2 Reduction of GCC signaling leads to hyperpermeability in IEC monolayer. HT-29 IEC monolayers were grown on Transwell filters. Paracellular permeability in postconfluent HT-29 cell monolayers was assessed by TEER in the presence and absence of GCC siRNA, n = 5. Results are shown as the mean 6 SEM. (TIF) | 6,749.4 | 2011-01-31T00:00:00.000 | [
"Biology",
"Medicine"
] |
Improved synchronization analysis of competitive neural networks with time-varying delays
Adnène Arbi, Jinde Cao, Ahmed Alsaedi Higher Institute of Applied Sciences and Technology of Kairouan, University of Kairouan, 3100 Kairouan, Tunisia Tunisia Polytechnic School, University of Carthage, El Khawarizmi Street, Carthage 2078, Tunisia Faculty of Sciences of Bizerta, University of Carthage, BP W, Jarzouna 7021, Bizerta, Tunisia<EMAIL_ADDRESS><EMAIL_ADDRESS>School of Mathematics, Research Center for Complex Systems and Network Sciences, Southeast University, Nanjing 210996, China<EMAIL_ADDRESS><EMAIL_ADDRESS>Faculty of Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia<EMAIL_ADDRESS>
Introduction
Classical concepts of the synchronization phenomenon are based on the notions of closeness of the frequencies or phases of the subsystems generating periodic oscillations.Using the traditional language of dynamical systems with continuous time, one can reveal hat synchronization of periodic oscillations that may be represented as follows.While a stable limit cycle is a geometrical image of such oscillations, an attracting two-dimensional (or n-dimensional) torus is a geometrical image of the oscillations generated by two (or n) uncoupled oscillators in a common phase space.As the parameter of coupling increases, the motions of partial subsystems are no longer independent, and a stable limit cycle is born on the torus that is still an attractor.This corresponds to the transition of the system to synchronization.The analysis of periodic systems incorporating full-time information leads to challenging control problems with a rich mathematical structure.Meanwhile, as a typical complex system, delayed neural networks have been verified to exhibit some complex and unpredictable behaviors such as periodic oscillations, bifurcation and chaotic attractors.Since synchronization of neural networks has been shown to be an important step toward both fundamental science and technological practice, much of the focus has been received and numerous research results have been reported in the literature [6,12,15,26].Many methods have been developed for synchronizing of chaos such as LMI based approach [13], adaptive control [18], passivity feedback control [24].
On the other hand, many researches have been devoted to the dynamics of various classes of neural networks (see [3, 7-10, 14, 27]).Furthermore, there is few works about the so-called competitive neural networks proposed for the first time by Meyer-Baese et al. (see [19,21,22,25]), who used them to model the dynamics of cortical cognitive maps with unsupervised synaptic modifications.The model of competitive neural networks is different from the traditional neural networks with first-order interactions.In [9], based on Lyapunov functional method and Kronecker product technique, the authors proposed some sufficient conditions for global synchronization of neutral-type neural networks with constant and delayed coupling.In [10], there is proposed a simple adaptive coupling enhancement algorithm for the synchronization of two coupled identical time-varying delayed neural networks based on the invariant principle of functional differential equations.
As a continuation of their previous published results, in this paper, we consider a target model with two different state variables: the short-term memory (STM) variable describing the fast neural activity and the long-term memory (LTM) variable describing the slow unsupervised synaptic modifications.In addition, it has been reported that if the parameters and time delays are appropriately chosen, the delayed competitive neural networks can exhibit complicated behaviors even with strange chaotic attractors.Based on the aforementioned arguments, the study of delayed competitive neural networks and its analogous equations have attracted worldwide interest (see [20]).
The remainder of this paper is organized as follows.In Section 2, we present the synchronization problem for CNNs.In Section 3, we introduce preliminaries, notations and hypotheses.The controller design will be proposed in Section 4. In Section 5, we will introduce the new criteria proving the exponential synchronization of CNNs.At last, an illustrative numerical example is given.
Methodology and problem formulation
The competitive neural networks with time-varying delays in this brief are modeled as follows: where i, j = 1, . . ., n; x i (t) is the neuron current activity level; f j (x j (t)) is the output of neurons; m ij (t) is the synaptic efficiency; y i is the constant external stimulus; D ij (t), D τ ij (t) represent, respectively, the connection weight and the synaptic weight of delayed feedback between the ith and jth neurons; B i (t) is the strength of the external stimulus; E i (t) denotes disposable scale; I i (t) denotes the external inputs on the ith neuron at time t; σ = max( τij (t)) < 1 for j = 1, . . ., n and t > t 0 , where σ is constant; α i , β i : R → R are continuous functions.
By setting , where y = (y 1 , y 2 , . . ., y n ) T , m i = (m i1 , m i2 , . . ., m in ) T and, without loss of generality, the input stimulus Y is assumed to be normalized with unit magnitude |y| 2 = 1, summing up the LTM over j, then the above networks are simplified, and we get a state-space representation of the LTM and STM equations of the networks: i = 1, . . ., n.In order to observe the synchronization behavior in the class of delayed functional differential equations, we consider two delayed functional differential equations, where the drive system with state variable denoted by x i drives the response system having identical dynamical equations denoted by state variable z i , and S i drives the response system having identical dynamical equations denoted by state variable W i .However, the initial condition on the drive system is different from that of the response system.The drive system is as follows: https://www.mii.vu.lt/NA i = 1, . . ., n, with the initial condition x 1 (t), . . .x n (t), S 1 (t), . . ., S n (t) = ϕ 1 (t), . . ., ϕ n (t), φ 1 (t), . . ., φ n (t) In practice, the output signals of system (3) can be received by system (4).Therefore, the goal of control is to design and implement an appropriate controller u + i (t) = (u i (t), ũi (t)) for the second system such that the controlled response system can synchronize with the drive system (3).The response system is as follows: where u i and ũi are the control terms respectively for STM and LTM with the initial condition where ϕ i (•) and φ i (•) are the real-valued bounded differentiable functions defined on [−τ * , 0], i = 1, . . ., n.
Preliminaries, notations and hypotheses
In this paper, we always consider the vectorial space R n for n ∈ N * equipped with the Euclidean norm (denoted by • ) in R n .Let R n be n-dimensional Euclidean space.In all that follows, we denote be I n ∈ R n×n and O n ∈ R n×n identity matrix and zero matrix, respectively.For all x, S, y, Z : R → R, we define the zero norm by For convenience, we introduce the following notations: Let us list some assumptions, which will be used throughout the rest of this paper: (H1) The functions α i , β i : R → R + are continuous and positive.
Remark 1.The exponential synchronization problem considered here is to determine the control inputs u i (t) and ũi (t) associated with the state-feedback for the purpose of exponentially synchronizing the two identical chaotic nonlinear neural networks (3) and ( 4) with the same system parameters except the differences in initial conditions.
Controller design
Let us define the synchronization error signal e i (t) = x i (t) − z i (t) and ẽi (t) = S i (t) − W i (t), where x i (t), S i (t) and z i (t), W i (t) are the ith state variables of the drive and response competitive neural networks, respectively.Therefore, the error dynamics between ( 3) and ( 4) can be expressed by for i = 1, . . ., n, where https://www.mii.vu.lt/NAFrom hypothesis (H2) we can have that f i (•) satisfies If the state variables of the drive system are used to drive the response system, then the control input vector with state feedback is designed as follows: where e(t) = (e 1 (t), . . ., e n (t)) T , and Ω = (ω i,j ) n×n ∈ R n×2n is the gain matrix to be determined for synchronizing both a drive system and response system.Besides, if new errors are defined by êi (t) and êi (t) is defined by êi (t) = e ρt e i (t) and êi (t) = e ρt ẽi (t), respectively, the dynamics of ( 5) and ( 6) can be transformed into the following forms: where F j (ê j (t)) = e ρt f j (e j (t)) and
Main results
The exponential synchronization problem of systems ( 3) and ( 4) can be solved if the controller gain matrix is suitably designed.The exponential synchronization condition is established in the following main results.
Proof.To confirm that the origin of ( 7) is globally exponentially convergent, a continuous Lyapunov functional V (t) is defined as follows: F 2 j êj (s) ds.
It is easy to verify that V (t) is a nonnegative function over [−τ * , +∞) and lim ê(t)→+∞ V (t) = +∞.By the expression of f j (e j (t)), F j (ê j (t)) and assumption (H2) we obtain k j e j (t) , F j êj (t) = e ρt F j e j (t) k j e ρt e j (t) = k j êj (t) .
Simulation results
Example 1.The parameters of a two-dimensional nonlinear competitive neural networks with time-varying delays (3) and ( 4) is given by the following system of equations: We choose the activation functions of competitive neural networks as the type of hyperbolic tangent function f j (x) = 0.3 tanh(x).Figures 1-10 show the oscillation of the delayed competitive neural networks (3) and ( 4) with above coefficients and initial values The oscillation of solution of drive system is clearly presented in Figs.1-4.x 1 (t) x 2 (t) It follows from the main theorem that if the controls input u i (t) and ũi (t) are chosen as then the matrix of control is as follows: The oscillation of solution of response system with above coefficients and initial values It is evident that hypotheses (H1)-(H2) hold (k 1 = k 2 = 0.3).By choosing ρ = 0.5 and σ = 0.9 we have Besides, ωi,j = 9.2.
Conditions (i), (ii) and (iii) of Corollary 1 are satisfied.Hence, by using Corollary 1, the drive system (3) can be synchronized by the corresponding response system (4).Figures 11-14 reveal the synchronization error of the state variables between the drive system and the corresponding response system.Remark 3. The major improvement over [20,22] is that in our approach, it is very easy to verify the criteria by simple algebraic calculus.
Remark 4. The competitive neural networks models investigated in [20] and [22] are considered with constant coefficients.However, in this work, we study the model with time-varying coefficients.Furthermore, our system include models in [20,22] and [23] as special cases when Hence, our results have been shown to be the generalization and improvement of existing results reported recently in the literature.
It follows from the main theorem that if the controls input u i (t), ũi (t) are chosen as then the matrix of control is as follows: 2 0 1.2 0 0 9.2 0 1.2 .
Hence, using Corollary 1, the drive system (3) can be synchronized by the corresponding response system (4).Figures 24-29 reveal the synchronization error of the state variables between the drive system and the corresponding response system.Remark 6.However, in most papers, the activation functions are assumed to be monotonically nondecreasing.In this paper, from Theorem 1 the restriction is removed, and thus, the results obtained here extend and improve those in [11,16,17].However the activation functions here are not monotonous in Example 2, the results in [11,16,17] are not applicable.
Remark 7.
The above examples show that the result of the proposed control law ensures exponential synchronization of the competitive neural networks constituting of two or multi-neurons with/without time delays.
Conclusion and future works
In the present work, we demonstrate that two different chaotic nonlinear competitive neural networks with time-varying delays can be synchronized using active control.More precisely, this paper has presented sufficient conditions to guarantee the exponential synchronization of a class of chaotic nonlinear competitive neural networks with time-varying delays.Moreover, the proposed criteria were dependent of the delay parameter, which may possess important significance in the design of chaos of delayed competitive neural networks.A numerical example and its simulation have been given to demonstrate the effectiveness and advantage of the theory results.Furthermore, the synchronization degree can be easily estimated.Finally, an illustrative example has been given to verify the theoretical results.However, to the best of our knowledge, there are few results concerning the exponential synchronization for competitive neural networks.This technique is applicable for the high-order Hopfield neural networks [2].Furthermore, there are no studies investigating the problem of exponential synchronization of competitive neural networks with mixed time-varying delays in the leakage terms [5] and the high-order competitive neural networks with mixed time-varying delays in the leakage terms [1].This is some interesting problems and will become our future investigative direction.Besides, more methods and tools should be explored and developed in this direction.Along, the future work spawning from this paper would be to train this model that can be applied in various areas including pattern recognition, associate memory, cryptography etc. | 3,159.8 | 2018-02-20T00:00:00.000 | [
"Computer Science"
] |
Entanglement quantification made easy: Polynomial measures invariant under convex decomposition
Quantifying entanglement in composite systems is a fundamental challenge, yet exact results are only available in few special cases. This is because hard optimization problems are routinely involved, such as finding the convex decomposition of a mixed state with the minimal average pure-state entanglement, the so-called convex roof. We show that under certain conditions such a problem becomes trivial. Precisely, we prove by a geometric argument that polynomial entanglement measures of degree 2 are independent of the choice of pure-state decomposition of a mixed state, when the latter has only one pure unentangled state in its range. This allows for the analytical evaluation of convex roof extended entanglement measures in classes of rank-two states obeying such condition. We give explicit examples for the square root of the three-tangle in three-qubit states, and show that several representative classes of four-qubit pure states have marginals that enjoy this property.
Entanglement is an emblem of quantum mechanics and the most important component for a broad spectrum of quantum technologies [1,2]. The more a quantum state is entangled, the better will it perform in an information processing and communication task, compared to any unentangled state [3][4][5]. Quantifying entanglement exactly is therefore a significative requirement to develop a rigorous assessment of nonclassical enhancements in realistic applications [6]. With the advent of quantum information theory in the last two decades, a sound machinery has been developed for the characterization and quantification of entanglement as a resource [1, 3,[6][7][8].
An entanglement measure E(|ψ ) defined on pure quantum states |ψ is a positive real function which is 0 iff |ψ is separable. Additionally, every such measure has to be an entanglement monotone, that is, it cannot increase on average under local operations and classical communication (LOCC) [7]. One of the difficulties of quantifying entanglement lies in the fact that entanglement measures as defined above do not typically admit an easy way to extend their scope to all mixed quantum states. The so-called convex roof of a measure of entanglement E(|ψ ) is obtained by finding the largest convex function on the set of mixed states which corresponds to E on pure states [9,10]. One can use this construction to define the extension of an entanglement measure E to mixed states ρ as where the minimization is performed over all sets {p i , |ψ i } such that i p i |ψ i ψ i | = ρ, that is, all convex decompositions of ρ into pure states, with normalized weights i p i = 1. With this definition, E is guaranteed to remain an entanglement monotone over mixed states as well [11]. It is however a formidable problem to find the optimal decomposition in Eq.
A particularly useful class of functions to consider for entanglement quantification are the polynomial invariants, that is, polynomial functions in the coefficients of a pure state |ψ which are invariant under stochastic LOCC (SLOCC). For a system of m qudits, a polynomial invariant of homogeneous degree h is therefore a function P which satisfies for a constant κ > 0 and an invertible linear operator L ∈ SL(d, C) ⊗m representing the SLOCC transformation [26]. The absolute value of any such polynomial with h ≤ 4 defines in fact an entanglement monotone [27,28]. Two common monotones, the concurrence for two qubits [15] and the three-tangle for three [19], are obtained in this way. Out of all possible homogeneous degrees h, degree 2 is of particular significance, as only then the SLOCC invariance of a polynomial entanglement measure E(|ψ ) extends to its convex roof E(ρ) [24]. One can relate the properties of entanglement measures with the geometric representation of quantum states on a hypersphere (a generalization of the Bloch sphere) by considering the so-called zero polytope [20,29]. For a given state ρ and an entanglement measure E, it is defined as the convex hull of all pure states (spanned by the eigenvectors of ρ) with vanishing E. Since entanglement vanishes for any convex mixture of such states but will never vanish for a state lying outside of the convex hull, the zero polytope gives a useful visual representation of the region of the hypersphere with zero entanglement. Various methods for constructing bounds to polynomial entanglement measures in mixed states rely on finding states within the zero polytope and using them to form suitable convex combinations with states outside of it [30][31][32].
In this Letter, we analyze the particular situation when the zero polytope for a given state ρ is reduced to a single point, that is, when there is only one state in the range of ρ with vanishing entanglement. Our investigation is naturally specialized to rank-2 states ρ, which admit an intuitive geometrical representation on a Bloch sphere, and represent ideal testbeds to analyze structural properties of multipartite entanglement in mixed states. We show that, when the zero polytope reduces to a point, any polynomial entanglement measure of degree arXiv:1512.03326v3 [quant-ph] 22 Jan 2016 2 simply corresponds to a measure of distance on the Bloch sphere. This property in turn renders the value of the measure independent of the decomposition of ρ into pure states. Therefore, the convex roof extension of the entanglement measure for ρ becomes trivial, and can be evaluated analytically in any decomposition, e.g. in the spectral one.
Although geometric methods have long been known to be valuable tools in quantum information theory [33], this surprising result relies only on classical Euclidean geometry, which does find use in the study of quantum correlations [34,35], but whose specific application to entanglement quantification went seemingly unnoticed so far.
We begin with the formal definition of the zero polytope. Given a state ρ of rank η and an entanglement measure E(|ψ ) which is the absolute value of a polynomial of degree h in the coefficients of the pure state |ψ , we can write the equation where ω j are complex coefficients and |φ j are the eigenvectors of ρ. The zero polytope is then defined to be the convex hull of all pure states which satisfy the above equation [29].
Noting that the expression in Eq. (3) is in fact a polynomial of degree h in the coefficients ω j , we are interested in the case when the polynomial has a unique root in ω j , that is, there is only one such state |z defined as a linear combination of the eigenvectors {|φ j } such that E(|z ) vanishes. States ρ for which this happens will be labeled as one-root states (shorthand for "one root to rule them all"). Since multivariate complex polynomials have uncountably infinite sets of solutions and there is no straightforward method to investigate their roots [29], our analysis is limited to η = 2. In this case we can represent the rank-2 state ρ as a point (or Bloch vector) r ∈ R 3 in the standard Bloch sphere, with polar points corresponding to the eigenvectors |φ 0 and |φ 1 of ρ, see Fig. 1. We then have, up to normalization, that the root state can be written as |z = |φ 0 + z |φ 1 (assuming E(|φ 1 ) 0) for some z ∈ C, while for any pure state |ω = |φ 0 + ω |φ 1 one has where N is a normalization factor. We note that for h = 2 this expression is proportional to the squared Euclidean distance ω − z 2 between the Bloch vectors associated to |ω and |z . This interpretation of the entanglement measure as a de facto measure of distance allows us to employ the geometrical properties of Euclidean spaces to investigate the behaviour of Ewhich from now on will precisely denote a polynomial entanglement measure of degree 2 -on an arbitrary mixed state ρ with Bloch vector r inside the sphere. We are thus ready to present our central result, which establishes a geometric relation for E across all possible decompositions of ρ.
Theorem 1. Consider an n-sphere with radius R and center located at o. We will indicate by {λ i , p i } a finite set of points p i ∈ R n+1 on the sphere with corresponding weights λ i , normalized so that i λ i = 1. Let us choose a particular point z on the sphere and denote by {α i , a i } a set such that all points according to the standard Euclidean distance. Let g denote the barycenter of the family of points, that is, g = i α i a i . Then, for any other set of points {β j , b j } which lie on the same sphere and share the same barycenter g, the following holds: Proof. Apollonius' formula [36] says that for any set of points {α i , a i } with barycenter g and any chosen point z ∈ R n+1 we can write: for any chosen a l . Applying the same formula to any other set of points {β j , b j } with the same barycenter g gives: by Eq. (6). Since the points {α i , a i } lie on the n-sphere, we where · denotes the standard Euclidean inner product. We can average over this expression with the weights Since the points {β j , b j } lie on the same n-sphere and share the same barycenter g, we can conclude that i α i a i 2 = j β j b j 2 , and therefore Using this in Eq. (7) gives the final result announced in Eq. (5).
Theorem 1 implies that, for every one-root state ρ with Bloch vector r (identifying r with the barycentre g in the statement of the Theorem), any polynomial entanglement measure E of degree 2 has the same value irrespective of the chosen decomposition of ρ into a set of ν ≥ 2 pure states. That is, the measure E is an affine function on the whole Bloch sphere, The evaluation of E(ρ) is thus made easy, and can be carried out exactly in any decomposition. It is particularly instructive to consider a decomposition of ρ such that all pure states {|ψ i } with Bloch vectors {a i } (in the notation of Theorem 1) lie on the secant plane equidistant from the root point z, see Fig. 1. The value of E(ρ) then corresponds to the squared distance from any of these points a i to z, according to Eq. (4). But since this can be done for any other state with a Bloch vector lying in the same plane, the equidistant plane is in fact a plane of constant entanglement. Let us now introduce the radial state ρ c whose vector c is at the centre of the secant plane, i.e. at the intersection between the plane and the Bloch sphere diameter joining z with the antipodal point z . The latter point corresponds to the pure state |z with maximal entanglement E on the sphere. As ρ and ρ c are on the same secant plane, one has E(ρ) = E(ρ c ). The latter can be evaluated by exploiting the affinity of E, Eq. taking the decomposition ρ c = 1 2 c−z |z z|+ 1 2 c−z |z z |. Since E(|z ) = 0, we finally get, for any one-root state ρ in the Bloch sphere, the closed formula where one recognizes the trace distance D Tr (ρ, τ) = 1 2 Tr|ρ − τ| in the second expression. The connection with Eq. (4) is made explicit by using elementary Euclidean geometry, which yields E(ρ) = E(|ψ l ) = N a l − z 2 = 2N c − z = E(ρ c ) , for any index l ∈ {0, . . . , ν − 1}. Comparing with Eq. (9) we find the value of the normalization constant, N = E(|z )/4.
In order to present concrete examples, we begin by investigating the concurrence C of two qubits [15], for which general analytical expressions are known [15,16] and can be compared with the results presented here. Visualizing Fig. 1, we can always apply a change of basis to rotate the Bloch sphere such that the north and south poles are occupied by the root z and the antipodal point z , respectively. We can further identify the root state |z with the computational product state |00 . By imposing z|z = 0 and that the concurrence vanish only on |z , we obtain a complete characterization of two-qubit one-root states (up to local unitaries), specified by the basis: |φ 0 ≡ |z = |00 and |φ 1 ≡ |z = cos γ 2 |01 +sin γ 2 e iδ |10 , with 0 ≤ γ ≤ π, 0 ≤ δ ≤ 2π. All horizontal planes crossing the ball are surfaces of constant concurrence. For any two-qubit state ρ inside the sphere, defined as in Eq. (10), the concurrence computed from Eq. (9) is C (ρ) = 1 2 1 − φ 0 |ρ|φ 0 + φ 1 |ρ|φ 1 C (|φ 1 ) = 1 2 (1 − r cos θ) sin γ, which coincides with the known general solution from [15].
We now focus on the case of three qubits (m = 3) and adopt the square root of the three-tangle √ T [19,24] as a polynomial measure of tripartite entanglement, whose explicit formula for pure states |ψ ∈ C 8 is provided in [19]. Such a measure plays a prominent role in studies of monogamy of entanglement [19,37], yet at present no closed solution exists in general for its evaluation on mixed states, beyond a few special cases [20][21][22][23][24][25]. We can readily construct a representative family of one-root rank-2 states of three qubits in a similar way as for two qubits. We take the poles of the Bloch sphere to be, respectively, the generalized W state |φ 0 ≡ |z = a |001 + b |010 + c |100 with a 2 + b 2 + c 2 = 1 (where a, b, c are chosen real for ease of illustration), and the entangled state |φ 1 ≡ |z = g |000 + t 1 |011 + t 2 |101 + t 3 |110 + e iγ h |111 with g 2 + h 2 + i t 2 i = 1, g ≥ t i , h ≥ 0, and − π 2 ≤ γ ≤ π 2 , as defined in [38]. Imposing the one-root property leads to h = 0 and t 3 = ( √ ct 1 + √ bt 2 ) 2 /a. We can then write any state ρ inside the Bloch sphere as in Eq. (10), which leads us to the exact expression for the square root of three-tangle √ T of ρ, Tripartite entanglement in this 7-parameter class of threequbit states ρ has thus been effortlessly quantified, thanks to their one-root property and its geometric implications.
Beyond specific examples, one can wonder whether a more systematic characterization of one-root three-qubit states is possible, so as to gauge the relevant range of applicability of our exact results. The answer is affirmative. Notice that every rank-2 three-qubit state ρ can be purified to a four-qubit state |Ψ ∈ C 16 , and conversely the set of marginals obtained by tracing out one qubit from arbitrary four-qubit pure states |Ψ completely characterizes the set of rank-2 three-qubit states ρ. We can then aim to identify the one-root three-qubit states in terms of their four-qubit purifications. To this end, we recall that while pure four-qubit states have an infinite number of SLOCC-inequivalent classes [26], that is, subsets of states which cannot be transformed into one another by performing SLOCC operations, from the point of view of classifying their entanglement properties they can in fact be conveniently grouped into 9 classes [39]. Each of these classes then forms a subset Υ µ ⊂ C 16 (for µ = 1, . . . , 9) represented by a generating family |G µ (dependent on at most four continuous complex parameters), such that all the states |Ψ µ ∈ Υ µ belonging to the µ th class are constructed as |Ψ µ = L |G µ / L |G µ , where L ∈ SL(2, C) ⊗4 is a SLOCC operation. The union of all the nine classes 9 µ=1 Υ µ covers the Hilbert space of all four-qubit pure states, up to permutations of the qubits.
We now make a useful observation. When two pure states are SLOCC-equivalent, the ranges of their corresponding reduced subsystems are spanned by SLOCC-equivalent bases, which means that all states in the reduced ranges are related by an invertible linear transformation [40]. Since for any two SLOCC-equivalent states the polynomial entanglement measure E either vanishes on both or is strictly nonzero on both [24], we have that SLOCC operations preserve the number of zero-E states in the ranges of the reduced subsystems. In other words, the number of roots in the zero polytope for the marginals of four-qubit states is a SLOCC-invariant.
It then suffices to check the marginals of the generators |G µ to look for the one-root property. This can be done analytically after some straightforward algebra, and we find as a result that four of the nine classes of four-qubit pure states have three-qubit marginals which can enjoy the one-root property. This, applied to the whole respective SLOCC classes [39], characterizes completely the set of three-qubit one-root states, and entails that for all these rank-2 mixed states ρ we can exactly calculate the convex roof extended entanglement measure √ T thanks to Theorem 1, which is remarkable. Explicitly, the classes whose marginals are generally oneroot are: class 4 (tracing out any one of the four qubits), class 5 (tracing out qubit 2 or qubit 4 only), and classes 7 and 8 (tracing out qubit 2, 3, or 4 only). The corresponding sets of three-qubit one-root states are given therefore by For completeness, we report the relevant (unnormalized) generators: , where a, b ∈ C with Re(a), Re(b) ≥ 0, and |L • refers to the notation of [39]. The square root of three-tangle for all the states in Eq. (12) is given exactly by Eq. (9); if one prefers, it can also be evaluated numerically in any convex decomposition (e.g. the spectral one), with no optimization required.
We finally note that many entanglement bounds for convex roof extended measures will be tight on the one-root states, because of their special properties. For instance, bounds such as the best separable approximation for two qubits [30], the best W approximation for three qubits [31], and the generalized best zero-E approximation [32] are based on finding a convex decomposition for an arbitrary state ρ in terms of states with vanishing entanglement and at most one state with nonvanishing entanglement. However, for one-root states, such a decomposition is possible only in one way: that is, into a pair formed by the root |z and some other state |ω ; hence the entanglement of ρ is trivially given by E(|ω ) with the corresponding weight. Additionally, bounds which use methods such as the conjugate gradient [41,42] are also guaranteed to converge to the right value. By the Schrödinger-Hughston-Jozsa-Wootters theorem [43,44], any two decompositions for a given density matrix ρ are related by applying a unitary matrix, therefore a typical instance of a numerical method of this kind calculates the gradient for a given entanglement measure on the unitary manifold and uses it to reach the minimum in the convex roof. For one-root states, however, any choice of the initial decomposition gives the right entanglement value by Eq. (8), and the value of the gradient of E on the unitary manifold can be verified (numerically) to stay uniformly zero.
In conclusion, we have shown that every polynomial entanglement measure E of degree 2 is affine for any rank-2 state ρ for which there is only one pure state |z in the range of ρ such that E(|z ) = 0. This renders calculating the convex roof of E trivially easy in any such case, as the entanglement of ρ does not depend on its pure-state decomposition. The method applies to many significant mixed states which did not enjoy known formulae before, as is the case for the three-tangle of the marginals of several classes of four-qubit pure states.
The results of Theorem 1 can be used for evaluation of various polynomial generalizations of the tangle in four and more qubits [45,46] whose states obey the one-root property; explicit instances can be readily constructed in analogy to the ones reported here for two and three qubits. Moreover, the geometric approach presented herein is rather powerful and applicable also to higher-dimensional systems using a generalized Bloch vector approach [47], although the properties of the complex polynomials encountered in the definition of the entanglement measures do not seem to allow for a simple generalization of the concept of one-root states. A possible extension of this work would be to find classes of qudit states with equivalent properties, which might lead us to accomplish an even more comprehensive study of multipartite entanglement.
We thank the European Research Council (ERC) Starting Grant GQCOP (Grant No. 637352), for financial support. We acknowledge fruitful discussions with J. Louko, A. Streltsov, A. Winter, W. K. Wootters, and especially K. Macieszczak. | 5,021.8 | 2015-12-10T00:00:00.000 | [
"Mathematics"
] |
Multivariable Heuristic Approach to Intrusion Detection in Network Environments
The Internet is an inseparable part of our contemporary lives. This means that protection against threats and attacks is crucial for major companies and for individual users. There is a demand for the ongoing development of methods for ensuring security in cyberspace. A crucial cybersecurity solution is intrusion detection systems, which detect attacks in network environments and responds appropriately. This article presents a new multivariable heuristic intrusion detection algorithm based on different types of flags and values of entropy. The data is shared by organisations to help increase the effectiveness of intrusion detection. The authors also propose default values for parameters of a heuristic algorithm and values regarding detection thresholds. This solution has been implemented in a well-known, open-source system and verified with a series of tests. Additionally, the authors investigated how updating the variables affects the intrusion detection process. The results confirmed the effectiveness of the proposed approach and heuristic algorithm.
Introduction
The ongoing evolution of science and technology continues to bring new challenges. With the Internet becoming one of the most important inventions of the last century and an integral part of today's world, new threats are emerging [1]. People are increasingly using the Internet for tasks that would have traditionally been done in person. Convenient online payments have convinced many to go online, even for the simplest activities. Remote working is now a common element of corporate network infrastructure. However, conducting our everyday lives online means that less-careful users are at risk of cyberattack [2]. Harmful software, viruses, and many other means of hacking are constantly being developed [3,4]. Vulnerabilities could mean losses for individual users, but they could also extend to millions or even billions of dollars [5,6]. This drives the development of effective security tools. Companies and individual users use a range of security tools to protect themselves against network attacks [7]. These tools should detect unexpected activities in the network and allow users to take appropriate action. The growing number of new and unknown attacks means that new methods of attack detection are required.
Awareness of the importance of cybersecurity encourages organisations to engage in joint defence activities, particularly those operating in the same sector, such as energy, healthcare, etc. By working together, they are able to collect and process data regarding sector-specific attacks and malicious software [8,9]. Multisector and multidomain collaboration is also frequently required. This is consistent with the development of cybersecurity in the EU, where the Horizon 2020 projects brings together partners to establish the European Cybersecurity Competence Network [10]. The ECHO project [11] is a good example of such a broad collaboration.
The federated approach to cybersecurity helps detect new network attacks and protect companies' assets more efficiently. Data collected and processed by federated entities can
Related Work
Two main types of intrusion detection algorithms can be distinguished: an approach based on the predefined attack's signature [12] and methods analysing behaviours to detect anomalies [13,14]. The second group contains heuristic algorithms reviewed by Kenny et al. in [15]. The list includes most types of algorithms proposed in the research on heuristic intrusion detection. Ali and Malebary [16] introduced a solution based on particle swarm optimisation (PSO)-an intelligent phishing website detection in the form of feature weighting. For five out of six common machine learning algorithms, the proposed method achieved better detection accuracy than other feature selection and weighting methods mentioned in the paper. Jacob [17] proposed a tabu search algorithm-automatic signature generation for detected cross-site scripting (XSS) attacks. Although the true positive ratio of the solution is acceptable, the detection algorithm is focused on finding XSS specific keywords instead of looking for injection patterns. Yerong et al. [18] combined two heuristic methods in their research-the support vector machine (SVM) used for intrusion detection was optimised using a genetic algorithm. The optimisation indeed improved detection accuracy and decreased the number of false positives compared to the values obtained by a radial basis function neural network and unoptimised SVM. Jothi et al. [19] provided yet another type of heuristic solution-an artificial neural network (ANN). The authors implemented an accurate machine learning model for the detection of structured query language (SQL) injection attacks, which could be implemented, for example, to prevent attacks during login sessions.
Most of the papers on heuristic intrusion detection have focused on machine learning [20][21][22][23][24][25][26][27][28][29]. The authors of this paper consider a different approach to intrusion detection: packet scoring. The topic has been studied by Subburathinam and Saravanan [30], who proposed calculating the score of the packet depending on different variables, e.g., port number or protocol. At the same time, the conditional legitimate probability was being checked-if either score or probability was an anomaly, the packet was dropped. Murtuza and Asawa [31] introduced the fitness score used for distributed denial of service (DDoS) detection in software-defined networks (SDN). The fitness value for each packet was either incremented or decremented depending on, for example, previous successful connections or protocol. Then, the packet was categorised depending on its score and processed further. Prasath and Perumal [32] also presented a heuristic algorithm for intrusion detection in SDN networks. However, this method is focused on finding anomalies using extracted features of flows, e.g., duration, protocol type, service. It is worth mentioning that the large number of features can decrease the performance of intrusion detection significantly [33,34]. In [35], Mukhopadhyay et al. proposed a lightweight heuristic intrusion detection and prevention system; its decision making engine is based on frame data and source/destination addresses. The decision engine can also take into account selected external data, such as the reputation of a given URL. The solution presented in this paper extends such an approach into different flags regarding suspicious/malicious IP addresses. The idea to include fuzzy entropy as a feature to support intrusion detection based on machine learning methods was introduced by Varma et al. [36]. The feature extraction method based on the regularised correntropy criterion was also proposed by Xing and Ren [37]. However, this paper assumes the direct impact of entropy to the score of a given packet.
Aside from packet scoring, the authors of this paper also focus on a federated approach to intrusion detection. The proposed solution operates on a shared file with malicious addresses, assigned flags, and entropy values, which could be updated by federated entities in the case of new attacks and sent out to all members of the federation to ensure security.
Intrusion Detection
The evolution of malware and emerging new attacks are driving the development of attack detection methods [38]. These methods should provide effective protection to users/companies and their data against intruders [39]. The detection methods can focus on analysing the behaviour of network traffic and detecting anomalies that are known or that could be a new type of attack on network infrastructure. The assumptions of such a solution are highly restrictive, since it should return low numbers of false positives and false negatives. If such defects are high, the system administrator may become complacent and fail to respond to a real attack [40]. Suitable detection methods should also be efficient enough to process network traffic and inform the administrator of any potential threats as soon as possible.
Intrusion Detection Systems
An intrusion detection system (IDS) is a solution that is used to monitor network traffic and able to detect attacks [41]. This solution is mainly used in two ways: as a network-based IDS located behind a firewall to analyse incoming traffic, or as a host-based IDS to analyse traffic targeted to specific host.
It is worth mentioning that IDS conducts a significantly more advanced analysis than a typical firewall, which is a filtering point between a local and external network. The major task of a firewall is to allow or deny network traffic using static analysis. Therefore, a firewall's configuration rules focus mainly on source/destination IP addresses or ports, which may prevent the firewall from detecting malicious traffic [42]. Firewalls frequently do not conduct analysis as advanced as IDS; however, a combination of these two solutions provides more efficient protection to network infrastructure [43].
The placement of an IDS is critical and varies depending on what the user needs to protect. It is crucial that the balance between network performance and the range of IDS operation is maintained. The most obvious placement of an IDS is behind the firewall, allowing for monitoring of the entire network; however, this could create a bottleneck that may decrease the overall throughput of the network. On the other hand, if the IDS is placed deeper inside the network, the performance levels will be maintained, while a part of the network will be left vulnerable [44,45].
There are two main types of IDS: software solutions (e.g., Snort [46] or Suricata [47]) and hardware solutions (e.g., devices developed by Cisco Systems [48] or Palo Alto Networks [49]). Selecting the most appropriate solution depends on infrastructure, budget, and other specific requirements of cybersecurity staff. Optimal IDS deployment and configuration make it possible for the network to stay hidden from attackers while remaining transparent to network users. However, it is a key element of network security, providing a response to any attack.
Detection Methods
The key purpose of IDS is to detect unwanted traffic in the network that could be a potential attack [50]. There are two main types of detection techniques [51]: misuse detection and anomaly detection.
Misuse detection is based on the attack's signature. This type of detection uses a predefined attack signature and compares it with an analysed packet or groups of packets [52]. If the signature or part of it matches the malicious signature, the event is reported. Misuse detection is effective and produces low levels of false negatives and false positives. However, this solution cannot detect new types of attack, which are unlikely to match any known signatures. Therefore, if the attack differs from the signature even slightly, it is not detected. This makes it essential for the producer/vendor of the IDS to update the signatures database frequently.
Anomaly detection, also known as behaviour-based detection, assumes that behaviour that determines the attacker's likely activity is different from the behaviour of a permitted network user [53]. IDS supporting anomaly-based detection is highly effective at finding zero-day attacks; however, it generates high volumes of false positives. We can distinguish two types of behaviour-based attack identification: heuristic analysis and anomaly analysis. The first type is based on potential behaviours, which can occur with different kind of attacks, such as port scanning or unauthorised access to confidential resources. The second type relies on anomaly recognition by detecting unusual activities. For instance, if a user logs into the local database outside of their usual hours and tries to access confidential data, it may be seen as an anomaly. The heuristic approach can also be based on data shared by the federated organisations.
Multivariable Heuristic Approach
A joint approach to attack detection is more effective than an individual approach, encouraging companies working in a given sector to create federations. This approach means each member of the federation has access to a broad knowledge of threats in cyberspace. However, shared data regarding malicious or suspicious entities can be fragmented and covers a range of aspects of the threat. Such inconsistent data should be organised into groups.
Flags
Groups known as flags describe the nature of a given threat. This information can be shared across the federation and is used by a heuristic detection algorithm. The choice of flags was inspired by the common vulnerability scoring system (CVSS) and authors' practical knowledge-including joint work in the H2020 ECHO project [11]. Parameters and their values can be configured depending on the local security policy. Flags and their default values are described below. These values were chosen for the purpose of the test to present functionality of the algorithm.
• dangerous-This flag identifies the severity of the threat associated with an IP address ( Table 1). The value of this flag is subjective and depends on the environment/federation. In some cases, the attack may not be especially harmful. For example, a phishing attack on medical wristband infrastructure is not especially dangerous; on the other hand, the same type of attack on corporate infrastructure can be critical. The value of this flag and the decision of which flag to assign to the given IP address can be based on an analysis of other flags. • attack-This flag specifies the type of attack in which the IP address was recently involved. The value of this flag may differ from its environment because the effectiveness of an attack also depends on the network's purpose and users. Table 2 shows descriptions and default values for attack flags. • range-This flag describes the impact of an attack by an IP address on other network components such as the server, switch, or router. In this case, a given attack may affect only a single attacked network component or spread over a part or all of the infrastructure. Table 3 shows description and default values for range flags. • access-Some attacks (e.g., phishing, malware) require user action within the network, while others (e.g., DDoS, DoS) do not require user response. This type of flag describes the need for user response within the network. Table 4 shows two possible flags: none and user. The first describes a situation when the attack does not require a user response. The second flag describes a situation when the attack requires a user response (e.g., opening an attachment in an email). • availability-Some attacks, such as ransomware, cause a partial or complete loss of access to the unit and data on it. This type of flag describes the impact on the availability of the attacked component. Table 5 shows three levels of impact on the functionality of a given component in the network.
Entropy
Entropy is a concept derived from information theory. Entropy, introduced by Claude Shannon, is the average amount of information carried by a single message [54]. By defining the probability of an event, it can be determined whether the event is recurring or rare. With regard to a computer network, the entropy of a phenomenon can determine whether it is a desired activity in a given network or an anomaly [55,56].
Assume that X is a discrete random variable, with a probability distribution p( For the assumed condition, the entropy takes the Formula (2).
Shared Data
The format of data shared in the federation should be simple and scalable. As we are operating on a relatively small number of addresses, comma-separated values (CSV) format was used. This type of file is concise, readable, and can be formatted and edited easily with many applications, such as LibreOffice Calc, Microsoft Excel, and Notepad. While the CSV format is convenient for operation on small amounts of data, the file type could be changed to more compact and scalable format, e.g., JavaScript Object Notation (JSON) or Extensible Markup Language (XML).
Each record must contain the IP address of the suspicious/malicious entity, defined flags, and entropy value. These sections in one record can be separated by commas. The general structure of a single record is as follows: IP_address, dangerous, attack, range, access, availability, counter, entropy An example list with records in the correct format is presented below. This kind of list (CSV file) can contain thousands of records with suspicious/malicious addresses. The first section defines a malicious IP address. Such an address can be provided by another company that had been attacked and, following forensic analysis, is confirmed that it took part in the attack. The second section determines the severity of the threat associated with a given IP address (e.g., the flag set to High may mean the website where the ransomware was downloaded, while the flag set to Low may mean that the IP address that was involved in a DDoS attack is a bot). The third section describes the types of attacks in which this IP address was involved. The fourth section determines the range of the attack on local network infrastructure. The range flag determines how many stations could be attacked. The fifth section contains access flags, which mark the requirement for user action within the network. The sixth flag describes the impact on the availability of the attacked network component. Finally, the structure includes a counter of the address appearing in the network shown alongside the entropy value of this address in the local network. The default values of counter and entropy are equal to zero.
Detection Algorithm
The proposed multivariable heuristic algorithm should take into account the flags and entropy value. However, the entropy depends on the number of received packets from a given IP address; therefore, the final value should be calculated for each captured packet. Additionally, this value should depend on the value of each flag in the correct proportion. Thus, the following formula for calculating the packet value is used: where PV f is the final packet value and PV i is the initial packet value. Parameters α, β, γ, δ, , and η should be chosen regarding the security policy. The authors suggest that the influence of the entropy value should be limited. Therefore, the η value has been set to 0.5 for calculations during the verification tests. Further, for dangerous and attack flags, it is a subjective assessment of how to evaluate the attack. Therefore, the ratio of 65% of the dangerous flag value and 35% of the attack flag value was adopted for calculations. The default parameters are shown in Table 6.
The final elements of the detection algorithm are related to the selected detection threshold. The heuristic algorithm should generate an alert if this threshold is exceeded. Therefore, three different detection parameters should be defined.
• packet_value-Initial value of the received packet immediately after the packet is captured. This value is the same for each analysed packet. • sensitivity-Lower limit of the packet value. When this limit is exceeded (following analysis), the packet is reported to the console. • entropy-Upper limit of the packet entropy value above which the packet is reported to the console.
Each of these parameters should have default values related to the deployed security policy in the protected network. Table 7 presents the proposed default values, which were selected during the experiments described in the next section.
Verification
The proposed multivariable heuristic detection algorithm must be verified in real-life scenarios. Therefore, this solution was tested in a network environment to detect malicious traffic. Additionally, the authors verified how updates of variables affect the efficiency and accuracy of the detection algorithm.
Methodology and Test Environment
The verification was performed in a Snort environment-an open source IDS network capable of logging and analysing incoming traffic in real-time. Snort is a powerful tool used to detect and prevent intrusions in networks [57]. It has been in use for over 20 years and is one of the most popular open source IDS tools [58]. However, this tool is a signature-based detection system. This means that new heuristic functionalities had to be implemented to verify the proposed multivariable heuristic detection algorithm.
The first step of detecting anomalies in Snort is collecting (sniffing) network traffic and identifying the structure of each packet. This requires a packet capture and filtering engine for acquiring data such as [59] packet capture time, length of the packet, size of the captured packet, and a pointer to the contents of the packet. After capturing the packet, Snort begins decoding: the acquired packet enters the packet decoder depending on the link layer from which it is read. Next, preprocessors expand Snort's functionality by making it possible to easily configure the packet processing modules [60]. The preprocessors are an element of Snort, which is key when it comes to developing a new functionality inside the environmental engine. The authors developed and deployed two new preprocessors: one allowing Snort to collect and update variables regarding malicious IP addresses, and the other to update flags and entropy values in a dynamic manner.
Detection rules are an important element of Snort. A single rule consists of a header and options. The header contains the rule's action, protocol type (currently supporting TCP, UDP, ICMP, and IP), destination IP addresses and netmasks, direction operator (used to indicate the direction of the traffic the rule applies to), and source and destination port information. Options contain alert messages and information that determine whether the rule action should be taken depending on the inspected packet [46]. Snort's detection ability was expanded during the verification process based on rules focused on SQL Injection (SQLi). This type of attack exploits application security vulnerabilities to inject SQL queries into a database.
As mentioned, the new heuristic preprocessors in Snort add new functionality to this environment. The configuration file should contain a path to a CSV file with malicious IP addresses. Each address should have flags and an entropy value assigned to it. Each flag has a value that will be added to the packet rating. The flag values must be negative, so their absolute value will be subtracted instead. The evaluation of packets starts at a predefined packet_value variable. Depending on the flags and entropy assigned to the address, the packet rating is updated (hence, the negative values assigned to the flags). At the end, packet_value is compared to the sensitivity variable, which is a deciding factor in displaying alerts.
Validation of the Algorithm
This section presents the functional verification of the multivariable heuristic algorithm. To show how the algorithm operates in different environments, the selection of flags for IP addresses was random. Listing 1 presents the shared file containing information on malicious IP addresses and flags. During detection, logs related to individual packets can be seen in the console and are saved to a log file. The packets are processed to update the shared data for further usage by the federated entities. Figure 1 shows example logs that appear during the detection process. Each log contains selected flags (type of attack related to the given IP address and dangerous flag associated with the given IP address), package value after calculations based on Equation (3), and current entropy value for the specific malicious IP address. During the verification test, 33,503 packets captured from a local network were analysed (there was no additional network traffic generated because of test's purpose: functional verification of the proposed solution). The test was performed on a personal computer. Figure 2 shows a brief summary. Most of the traffic ran on IPv4. Listing 2 presents the shared file updated immediately after the test. The file contains updated data related to malicious IP addresses, showing significant changes compared with the status before the analysis. In order to verify the algorithm operation, the packet value for the most frequent IP address (which is 192.168.0.103) has been calculated manually and then compared to the value computed by the algorithm. The calculations were based on Equation (3) and the default values of parameters (Table 6). Additionally, Figure 3 shows the log related to a packet from 192.168.0.103, which contains the packet value assigned by the detection algorithm. The functional verification of the heuristic detection algorithm demonstrates that, based on the external shared data (flags and entropy), the packet value can be determined in quantitative way, as both values-calculated and computed-are equal. The decision is made when this value is compared with the threshold. This approach detects malicious traffic.
Updating of Variables
This test verifies how the duration of the detection process affects effectiveness. In this scenario, the authors used network traffic containing SQL Injection attacks (the environment SQLi-LABS [61,62] was used to validate these attacks) and wrote the Iterate_Snort script. As the name suggests, the script contains an iterational algorithm that works alternately with attack detection by Snort, based on its output, and prepares a file of malicious addresses. The main goals of this algorithm is to detect attacks (in this scenario, SQL Injections), collect information about specific IP addresses that have performed an attack, and update variables (e.g., flags) in the shared file.
The created script requires two arguments: iterate, which sets the number of iterations, and timer, which sets the duration of a single scanning iteration by Snort. Snort operates with the appropriate set of rules for SQLi detection and an option that allows the program to log the alerts into a specified folder. Then, another script is run to create updated CSV files based on the collected alerts. Both processes are repeated until the number of completed iterations is equal to iterate.
The authors performed numerous tests to show differences between different configurations of Iterate_Snort arguments; each test had a different number of iterations, but the total operation time was the same. We assumed that in a single full test, exactly 180 attacks should be executed (the script operated for approximately 15 min while performing 180 attacks). Therefore, we chose the duration of packet collection in each scenario because every restart of Snort takes some time; we took into account pauses between iterations and the time when commands are executed. The results are shown in Table 8 and Figure 4.
The comparative analysis shows that the detection algorithm is repeatable in terms of effectiveness when it comes to the same configuration. It is also worth mentioning that there is no significant drop in effectiveness between different scenarios. The difference between sample and ten-iteration tests' effectiveness is lower than 2 percentage points. This means the shared data can be regularly updated and the algorithm would remain effective. While the average number of attacks detected in a sample test (one iteration) is the highest, Snort does not always detect all of the attacks (the minimum number of attacks detected is always lower than 180). The two-iteration tests missed two attacks on average. The standard deviation of the results is less than 1 and there were no significant irregularities in the results (the difference between the maximum and minimum values of attacks detected is equal to 2). This suggests that increasing the number of iterations will lower effectiveness. The average number of attacks detected in the five-iteration test is indeed lower, although the standard deviation values mean the difference is inconclusive. The maximum number of attacks detected by the five-iteration test is 179 attacks. This means that under specific conditions (such as a short delay between starting the scripts or Snort initialisation time lower than usual), the algorithm can perform very well, even with a higher number of iterations. The average number of attacks detected by ten-iteration tests is lower than the result of the two-iteration tests. While the number of lost packets is higher than in previous tests, the algorithm still performs very well: its effectiveness is nearly 98% despite running ten iterations.
Two main conclusions can be drawn from this analysis.
• In most cases, the algorithm cases perform better with a lower number of iterations. Its effectiveness is higher in one-iteration scenarios than in two-, five-, and ten-iteration scenarios. The two-iteration test's effectiveness is also higher than that of the teniteration tests. • The standard deviation of the tests is lower than 1 in each scenario. This means that the algorithm regularly detects attacks, and anomalies such as the minimum number of attacks detected by the ten-iteration test (174 attacks) are rare throughout its operation.
Conclusions
Security in cyberspace is a major challenge of modern IT systems [63,64], driving the development of new ways of detecting and protecting against threats and attacks. This paper proposes a multivariable heuristic algorithm as a new method of intrusion detection. This solution is based on different types of flags and values of entropy set for each suspicious address. Such information about suspicious addresses can be shared between entities in a federated environment. This makes the algorithm flexible and adaptable to different sectors and networks, as the flags are changed within a single CSV file. Depending on the input data, the algorithm calculates the packet value and decides-depending on the sensitivity of the network (set by a variable defined in the shared file)-whether the packet should be reported. Additionally, the authors propose an approach to parametrise the detection algorithm. The authors propose default values of the packet_value, sensitivity, and entropy variables in case these values are not set by the user manually, since they are crucial for the operation of the algorithm.
The effectiveness of the proposed solution was verified through a series of tests with different configurations. Snort-a popular open source IDS tool-was used during the experiments. The authors implement new functionalities in this environment to verify the introduced multivariable heuristic detection algorithm. The testing part consisted of two scenarios: functionality verification and comparative analysis of the algorithm. The total time was the same in each case, but the number of algorithm iterations increased. In each test, the values of the flags and entropy were random to show that the algorithm is effective in different network scenarios. The first scenario validated the algorithm operation: the decision based on the value of the packet counted using the proposed formula was made correctly. The second scenario was performed to check how the changes of detection duration affect the effectiveness. The authors drew two main conclusions: the algorithm performs better with a lower number of iterations, and it is rather repetitive in the same configuration.
With Internet use increasing rapidly every day, solutions such as the multivariable heuristic intrusion detection algorithm are highly desirable on the market. The new algorithm proposed in this paper was tested and its effectiveness demonstrated, although it is still open for future development. Some environments may need additional sectorspecific groups of flags that describe the character of a given threat. Another potential extension that could increase network security is collecting additional information about traffic using network devices, e.g., by monitoring the number of inbound packets on firewalls. These statistics could help prevent DoS and DDoS attacks more effectively. Future work will explore these directions of development of the multivariable-based approach to intrusion detection. The research will also focus on finding the optimal default parameters of the heuristic algorithm for different sectors. Such personalised parameters will increase the effectiveness of threat detection in a given network. It is important that the development of detection methods continues, given the fact that new threats and attacks are constantly appearing in cyberspace. | 7,067.2 | 2021-06-01T00:00:00.000 | [
"Computer Science"
] |
Women’s empowerment in agriculture and agricultural productivity: Evidence from rural maize farmer households in western Kenya
This paper documents a positive relationship between maize productivity in western Kenya and women’s empowerment in agriculture, measured using indicators derived from the abbreviated version of the Women’s Empowerment in Agriculture Index. Applying a cross-sectional instrumental-variable regression method to a data set of 707 maize farm households from western Kenya, we find that women’s empowerment in agriculture significantly increases maize productivity. Although all indicators of women’s empowerment significantly increase productivity, there is no significant association between the women’s workload (amount of time spent working) and maize productivity. Furthermore, the results show heterogenous effects with respect to women’s empowerment on maize productivity for farm plots managed jointly by a male and female and plots managed individually by only a male or female. More specifically, the results suggest that female- and male-managed plots experience significant improvements in productivity when the women who tend them are empowered. These findings provide evidence that women’s empowerment contributes not only to reducing the gender gap in agricultural productivity, but also to improving, specifically, productivity from farms managed by women. Thus, rural development interventions in Kenya that aim to increase agricultural productivity—and, by extension, improve food security and reduce poverty—could achieve greater impact by integrating women’s empowerment into existing and future projects.
Introduction
In many economies in sub-Saharan Africa (SSA), women provide most of the labour force for agricultural production [1,2,3]. In Kenya, for example, women make up between 42% and 65% of the agricultural labour force [4,5], in addition to their traditional domestic responsibilities. Despite women's important role in the agricultural sector, however, empirical evidence shows that they lag behind men with regard to agricultural productivity in SSA due to the PLOS woman is defined as "empowered" if she has achieved adequacy in at least 80% of the weighted indicators (equivalent to four out of the five domains) [28].
Although some empirical studies have used the WEAI to investigate the impact of women's empowerment on food security and nutrition-related outcomes [e.g. 6, 33, 34, 38, 39, 40, and 41], and have generally found that women's empowerment has the potential to improve both such outcomes, thus far only a single study [42] has used the WEAI to explore what effects women's empowerment may have on agricultural productivity. Other related studies have mainly focused on whether and how farm productivity is affected when inputs are redistributed between women and men instead of only among men in households (e.g., [12,43,44,45,46]). However, the findings from such studies may be less useful for effective policy prescription because they do not completely account for women's roles or the extent of women's engagement in the agricultural sector-a limitation that the WEAI overcomes through differentiating among the five domains listed above.
The key finding from our study is that women's empowerment in Kenya agriculture can spur increased maize productivity among smallholder farmer households. Furthermore, whereas all women's empowerment indicators (except workload) significantly increased productivity, the number of production decisions indicators seem to have greatest effect on productivity. The results further show that, female-and male-managed plots experienced significant improvements in productivity when the women who tended them were more empowered. These results suggest that future rural development interventions that aim to increase agricultural productivity in Kenya could achieve greater impact by integrating women's empowerment into existing and future projects, e.g., by focusing on women's access to credit, asset accumulation and community leadership.
The conceptual model and estimation methods adopted in the study are presented in the next section, which is followed by a description of the data and of selected characteristics of the sample households. The fourth section presents and discusses the estimation results, while the fifth and final section concludes the presentation of the study with a discussion of its policy implications.
Model specification
We conceptualise the relationship between women's empowerment and agricultural productivity in terms of a collective model of intra-household bargaining in which households are considered as a collection of discrete individuals, each with his/her own set of preferences, Table 1
Domain Indicator Definition of adequacy (= 1) Weight
Production Input in productive decisions Sole or joint participation in at least one decision related to food and cash-crop farming, livestock farming, and fishery production 1/5 Resources Asset ownership Sole or joint ownership of at least one major household asset 2 /15 Access to and decisions on credit Sole or joint control or participation in decision-making on credit from at least one source 1/15 Income Control over use of income Sole or joint control over income for at least one of food and cash-crop farming, livestock farming, and fishery production 1/5 Leadership Group membership Active member in at least one formal or informal group 1/5 Time Workload Spent less than or equal to 10.5 hours on paid and unpaid work during the previous day 1/5 rather than as a single, monolithic decision-making unit [47,48]. The chosen framework explicitly allows for intra-household differences in preferences and heterogeneity in the impact of bargaining power on household members' ability to negotiate and allocate resources to maximise their respective utilities [36]. Our selection of the collective model was based on the large body of empirical literature (e.g. [49,50,51]) that has demonstrated the inaccurate approximation of intra-household behaviour via the traditional unitary model [52]. For example, studies have shown that the relative bargaining power of women and men within a household largely depends on their relative access to, control over and utilisation of resources [24]. Such relative power may also directly influence agricultural productivity in a household through its effect on household members' ability to allocate and organise productive resources optimally [53,54]. Within this framework, plot-level productivity gains from women's empowerment might be expected to differ, depending on which household member manages the plot in question. While the productivity gains from women's empowerment are expected to be strongest for plots cultivated either solely or jointly by women, it is also possible that the productivityenhancing effects of women's empowerment extend ("spill over") to plots operated by others within the household, e.g., via the sharing of information, pooling of resources, or positive peer pressure. Indeed, a recent study [42] using WEAI data from Bangladesh found that improvements in women's empowerment were associated with higher levels of technical efficiency on all plots cultivated by a household, regardless of whether a woman actively managed the plot or not.
We examine the relationship between women's empowerment and farm productivity by extending the standard productivity or yield function to include a measure of women's empowerment as an additional input in the household maize production function, and we test whether the impact of women's empowerment on yields differs for plots managed jointly by the man and the woman in a household, and individually by either of them. Thus, the maize yield of household i from plot p is as specified in Eq 1: In this equation, Q ip denotes the quantity of maize yield per acre produced by household i from plot p; W i is a measure of women's empowerment status in agriculture, based on the A-WEAI; X i is a vector of household-and community-level explanatory variables that influence production decisions; K ip denotes a vector of inputs used in maize production on plot p; and P ip denotes plot-level attributes of plot p.
We acknowledge that women's empowerment status is potentially endogenous to agricultural productivity. For example, unobserved characteristics such as women's leadership or farm management skills could potentially affect both household production and their empowerment status. Endogeneity may also arise from reverse causation between women's empowerment and farm productivity: on the one hand, women's empowerment may increase agricultural productivity by encouraging a more optimal allocation and organisation of productive resources, while on the other, increased yields may enhance women's share of farm income, their contribution to household food security, or their status within the community. For example, women from more food-secure households are likely to command more respect in their communities and may be more likely to engage in community leadership activities than their counterparts from less food-secure households. We therefore treat women's empowerment (W i ) as an endogenous variable in the maize yield equation (Q ip ) as specified in the following system of equations: where E(u i |Z i ) = 0 and E(u i , ε ip ) 6 ¼ 0 In Eq 3, V denotes a vector of the explanatory variables, while Z denotes a vector of the instrumental variables. The Greek characters denote unknown parameters to be estimated, where ψ captures the effect of women's empowerment in agriculture on maize productivity. The error terms are ε ip and u i . The other elements are defined as above.
We apply an instrumental variable (IV) method described later in this section to correct for the potential endogeneity of women's empowerment using the following six variables as instruments: (1) diversity of associations in village (number of types of association), (2) difference in age between principal male and principal female in household (years), (3) difference in education between principal male and principal female in household (years), (4) whether the woman brought assets into marriage (1 = yes, 0 = no), (5) years of residence in village (women only), and (6) household composition (by age group) (see Table 2 for descriptive statistics). We instrument for the overall empowerment score using all the variables except for diversity of associations in the village. For each of the individual empowerment indicators, the specific variables used as instruments vary. Differences in age and education and whether the woman brought assets into her marriage were used as instruments for number of production, asset ownership, income and group membership decisions indicators. Diversity of associations and differences in age and education were used as instruments for number of credit decisions indicator. Differences in age and education and household composition by age category were used as instruments for the workload indicator.
Residing in a village with a high diversity of associations may help women to develop stronger social networks, which might in turn influence their decision to actively participate in such associations. For example, an earlier study [41], which looked at the impact of women's empowerment on food security in rural Bangladesh, used the number of informal credit sources in a village as an instrumental variable in their analysis. To this same end, our study used a questionnaire to capture what types of association-whether formal or informal- existed in the respondent's village of residence. These associations were found to comprise informal credit groups, input supply groups, development groups, a marketing group, mutual membership groups, a business association, a water association, a women's group, civil groups, and a religious group. As other studies have also shown [50,34], differences in age and education in respect of a household's principal decision-makers may reflect differences in human capital and, hence, could indicate women's relative bargaining position in the household. Similarly, assets brought by a woman into her marriage may also be positively associated with her bargaining position within the household [55,56,57]. In our study area, we noted the practice of relatives giving a woman gifts prior to her marriage. Such gifts ranged from smaller items to larger assets such as livestock and farming equipment.
We used the composition of the household by age group to instrument the workload indicator. We grouped household members into one of five categories, as follows: aged less than 5 years, aged between 5 and 9, aged between 10 and 14, aged between 15 and 19, aged between 20 and 44, aged between 45 and 60, and aged above 60. The proportion household members in each of these age categories served as an instrument to how much work and leisure time a woman had in a household. Additional adults of prime working age or adolescent children in a household, particularly other females, may decrease the domestic work burden of the principal woman in the household-if, for example, care-giving duties are shared among household members-whereas each additional dependent (household members aged below 5 or above 60) might increase demand on the principal woman's time.
The rest of the covariates included in the empirical model were drawn from the empirical literatures on agricultural productivity and women's empowerment in SSA. These covariates fall into four broad categories: (1) household socio-economic characteristics, (2) agricultural inputs and practices, (3) plot-level attributes, and (4) community-level variables. Household socio-economic characteristics constitute sex, age, education and livestock ownership. Agricultural inputs and practices constitute the quantity of fertiliser per acre, other input expenditure per acre (e.g. seed and agrochemicals), labour input (in person-days/acre), use of yieldenhancing practices (e.g. intercropping, crop rotation, and push-pull technology/PPT), a dummy variable for farmer confidence in the quality of agricultural extension and advisory service provision, and a dummy variable for perceived credit constraints in the household (equal to 1 if a household needed credit but were unable to get it, and 0 otherwise). Push-pul technology (PPT) is a cropping system in which cereals such as maize are intercropped with perennial fodder legumes (Desmodium) that repel ('push') stemborers and suppress Striga species (witchweed). The cereal crops are also surrounded by a border of perennial fodder grass (e.g. Pennisetum purpureum/Napier grass or Brachiaria species) that attracts ('pulls') stemborers away from cereal plants [58]. Plot-level attributes constitute the proximity of the household to the farm plot, plot tenure, perceived soil depth (shallow, medium or deep), soil fertility (low, medium or high), slope (gentle, medium or steep), and whether the plot is vulnerable to insect pests and diseases. Community-level variables constitute the distance from the household to the nearest input supply shop, output market and extension offices, and sub-region dummies to control for location-specific effects such as culture and unobserved agroecological attributes. Lastly, we also included a season dummy to capture the effects of seasonal weather variation on maize productivity.
Estimation methods
We estimate two alternative specifications of the yield function. In the first, women's empowerment is operationalised in aggregate, to understand the general maize productivity effect of women's empowerment. More specifically, following [28], we compute the female respondent's individual-level empowerment score, i.e., the weighted sum of her achievements across the six component indicators of the A-WEAI. In the second specification, we estimate separate yield equations for each of these six indicators. This allows us to identify the individual effect each indicator has on improving agricultural productivity. Five of the component indicators enter the yield equation as counts (the number of groups in which a woman is an active member, the number of decisions a woman makes about credit, the number of decisions a woman makes about production, the number of assets over which a woman has control, and the number of decisions a woman makes about household income). For workload, the sixth indicator, we created a dummy variable, which takes a value of 1 if the woman spent less than or equal to 10.5 hours of working on the day prior to the survey interview, and 0 otherwise. In both specifications, we estimate an additional yield function without including potentially endogenous input variables to check for robustness of the results.
We estimate the yield equations using a control function (CF) approach, which is more suitable for nonlinear models with endogenous variables [59] such as those in our case. As in other IV methods, the CF approach involves a two-stage estimation procedure such as twostage least squares (2SLS) method, but it has the advantage of being able to estimate nonlinear models with endogenous regressors. The 2SLS method, for example, which estimates the first stage using the ordinary least square (OLS) method, cannot be used in our case because our first stage involves nonlinear models; although we use 2SLS to test the validity and relevance of our instruments as we are not aware of any other suitable IV diagnostic test method when the first stage is a non-linear model. Thus, in our first stage, women's empowerment is predicted using the various instruments described in section 2.1 above, and the predicted values are included as covariates in the second stage. Our approach resembles the estimation procedures used in previous studies involving the WEAI [40,41]. However, whereas these studies have tended to treat the WEAI as continuously distributed and estimate the first stage using OLS, we properly account for the bounded nature of the empowerment score and estimate the first stage using a fractional response Probit model (FRPM) [60,61]. Failing to treat the empowerment score as bounded-as previous studies have done-and estimating the first stage using OLS may lead to inconsistent estimates and miss some potentially important nonlinearity. Following [62], therefore, we performed the regression specification error test (RESET) for functional misspecification in the first stage, and reject the null hypothesis that the equation has a linear functional form (F[3, 2432] = 2.96).
For the first-stage estimation of all the individual indicators of empowerment in agriculture except workload, we follow a similar CF approach as above, which again is justified by the nonlinear nature of these models, except that we use count regression models. Specifically, the input in productive decisions, asset ownership and control over use of income indicators are estimated using Poisson regression. The group membership indicator is estimated using zeroinflated Poisson because of excessive zeros. The indicator access to and decisions about credit is estimated using a negative binomial model due to over-dispersion. For workload, the firststage regression is estimated using a Probit model. The yield equations are estimated using OLS.
Description of the study area, data and sample household characteristics
The data for this study come from western Kenya, where PPT was developed and tested, and where it is now being promoted to increase maize productivity through controlling Striga species (witchweed) and stemborers as well as improving soil fertility. Farmers in this region have two growing seasons: a long rainy season (March-August) and a short one (September-December). The household and individual survey data were collected by the International Centre of Insect Physiology and Ecology (icipe) between July and August 2016 for the 2015/16 cropping seasons while farmers were harvesting and threshing maize planted in 2016/17. In respect of harvesting and threshing, female invest substantially more labour than male in their households [10].
Due to resource limitations we selected 9 of the 11 counties where PPT was being used by farmers. The selected counties were Bungoma, Busia, Homa Bay, Kakamega, Kisumu, Migori, Siaya, Trans Nzoia and Vihiga (Fig 1). Next, between 3 and 11 villages were randomly selected in each county using the probability-proportional-to-size sampling method. Within each village, between 2 and 21 households were randomly selected, also proportional to their size. In total, 60 villages and 711 farmers operating on 4,494 plots were surveyed. Of the total plots (4,346) cultivated by the households, 2,481 plot observations were planted with maize, which is the focus of this study. After dropping outlier observations and observations that had missing values for some variables, the usable sample amounted to 707 households and 2,248 maize plots.
Separate questionnaires were designed for households and individuals. The survey tool was administered using semi-structured interviews by trained enumerators who spoke and understood the local languages. Respondents' participation in the survey was voluntary. The questionnaire had an introductory statement that sought the respondent's consent to participate in the survey. While the household questionnaire was administered jointly to the principal male and female adult decision-makers in the household, the individual questionnaire was administered to the principal female decision-maker only, via a private interview conducted away from the male adult decision-maker to avoid data/response contamination. While both questionnaires aimed to capture gender dynamics within the household, the individual questionnaire was aimed specifically at eliciting the data required to compute the A-WEAI score for the adult female decision-maker. Due to budget constraints, the A-WEAI -oriented questionnaire was administered to women only, i.e., male adult decision-makers in a household were not interviewed.
The household questionnaire administered to both spouses elicited information on, among other things, household and individual demographic characteristics; crop and livestock production and utilisation data (input data, consumption data, marketing, etc.); ownership of productive assets (labour, land and livestock) by sex; farming practices such as PPT adoption; plot characteristics and management; access to development services (extension, credit and markets); non-agricultural income-generating activities; and social capital and network variables such as spouses' membership of rural institutions. Table 3 offers definitions and summary statistics for all the variables used in the analysis.
Descriptive statistics
The descriptive statistics show that, on average, women achieve adequacy in 64% of the weighted indicators in the A-WEAI. Each indicator takes a value of 1 if a woman achieves adequacy according to cut-offs defined by [28], and 0 otherwise. Per the 80% cut-off utilised in [28], 65.9% of women in the sample would be considered disempowered, which is close to the baseline WEAI reported by [63] for northern Kenya, where 68.4% of women in the area were reported as being disempowered. With respect to the individual indicators that comprise the A-WEAI, women were most likely to achieve adequacy in asset ownership, access to and decisions on credit, and control over use of income, and least likely to achieve adequacy in group membership and workload (see Fig 2). In households with both spouses present, the principal female decision-maker is about seven years younger than her male counterpart, on average, and has attained 0.36 fewer years of education.
Maize is the main staple food crop in western Kenya, providing both food and income to rural households in the area. The crop contributes about 68% of daily per capita cereal consumption, 35% of total dietary energy consumption and 32% of total protein consumption in the country [64]. Households in the sample reported maize harvests of 1,123 kg per acre (2.77MT/ha) on average. While western Kenya is part of a high potential maize production belt, the most important factor contributing to better yields may be the adoption of PPT by households in the study region. The average maize yield on PPT plots was 1,507 kg per acre (3.72 MT/ha), compared with an average on non-PPT plots of 960 kg per acre (2.37 MT/ha). Strengthening this finding is the fact that the sample households' PPT yield is close to the onfarm yield 3.50 MT/ha reported by [58].
Our study further reveals that the probability of adoption PPT is higher for households with empowered women (52%) compared with households with disempowered women (50%).
A stochastic dominance analysis also shows that the maize yield distribution of households with empowered women in production decision appear to dominate that of households with disempowered women (Fig 3). The vertical distance between the cumulative density function of empowered and disempowered is significant at better than the 5% level. Although this should be subjected to rigorous analysis, the result is in line with the existing empirical literature where empowering women or closing gender gaps would lead to higher agricultural productivity [26,43,65].
The pathways for higher yields associated with households with empowered women could be due to a combination of factors, including increased use of improved technologies such as inorganic fertilisers, of agronomic practices such as PPT, and of pesticides and labour use. For example, as illustrated in Figs 4 to 7, sample maize farming households with empowered women appear to stochastically dominate those with disempowered women with respect to seeds and pesticides expenditure (Fig 4), amount of seed planted (Fig 5), fertiliser application rates (Fig 6), and labour allocation (Fig 7). Furthermore, adoption of PPT and improved maize varieties is higher for households with empowered women (52% and 72%, respectively) compared with those for disempowered women (46% and 68%, respectively).
Econometric results
Tables 4 and 5 present the results of the first-stage equations estimated using fractional response, count and Probit regression models along with diagnostic tests for the relevance of the IVs used in this study. Because our primary interest is on the impacts of empowerment, the first-stage regression estimates are not discussed. However, it is worth mentioning that the endogeneity test rejects the null hypothesis that overall women's empowerment and the individual indicators of women's empowerment are exogenous except for workload indicator. We also performed tests for under-and over-identification and reject the null hypothesis that the outcome regression models are under-identified, i.e., the instruments are relevant or correlated with endogenous regressors. However, we fail to reject the null hypothesis that the instruments are valid instruments (overidentification test), or uncorrelated with error terms and that the excluded instruments are correctly excluded from outcome equations. Table 6 presents estimates of the effect of women's overall empowerment scores on maize yield, as derived from the CF-FRPM approach as well as by using OLS and 2SLS estimation methods for comparison purposes. Because we use the predicted values of the endogenous variable in the second-stage estimation, we report bootstrapped standard errors for CF-FRPM to improve the efficiency of our estimates. The results are consistent across the different regression models and show that an increase in women's overall empowerment score significantly improve maize yields, suggesting the importance of improving women's empowerment in Kenya agriculture to reduce food insecurity and poverty. Inputs (fertiliser, value of seed and pesticides) and agricultural practices (adoption of PPT, intercropping and rotation) are less likely to correlate with errors of yield equation because many explanatory variables that influence the former variables are included. Also, the decision to use these variables (i.e. the input and agricultural practice) occurred before the maize harvest period. However, results excluding these variables in the regression models are qualitatively similar to including them in those models. For instance, an increase in women's empowerment by 1% led to a 6.4% increase in maize yield when we used CF-FRPM, and 16.8% when we used 2SLS. Other significant determinants of maize yield in the study area include labour allocation, fertiliser application, expenditure on seed and pesticides, use of push-pull technology on a plot, intercropping, perceived incidence of pests and diseases on a plot, distance to input supply shops and output markets, sex of household head, and credit constraints (i.e., where a household needed but was unable to get credit) ( Table 6). Table 7 presents estimates of the effects of each individual indicator of women's empowerment on maize yields, estimated using a similar approach as in section 4.3 above (The results reported in Tables 6 and 7 are robust to the exclusion of input variables from the yield equation). The results indicate that all six indicators of women's empowerment except workload are positively and significantly associated with maize yields. We note considerable heterogeneity in the magnitude of each indicator's effect on maize yields. Thus, while all indicators of women's empowerment are important, number of production decisions indicator has greater effect on improving agricultural productivity. If the number of production decisions made by women increases by one-unit, maize productivity can increase by 32 percent (This is computed based on 100 × e β − 1, as we log-level regression specification). The same values for income, group membership and assets indicators are 14, 15 and 13 percent, respectively.
Dimensions of women's empowerment that matter most in increasing maize yields
Although not reported, qualitatively comparable results found using the two stages-least square (2SLS). Except the workload indicator, all other indicators have a positive and significant effects on maize yields. Other factor influencing maize yields include labour, fertiliser and seed and chemical use, use of PPT, intercropping, pest and disease occurrence, livestock ownership, credit constraints, extension services, sex of household head, and distance to input supply shops and output markets. Table 8 presents estimates, using the CF approach, of the effects of women's empowerment scores on maize yields according to whether plots were managed jointly by the principal female and male decision-makers, or managed solely by either one of them. The plot manager variables (female-managed plots, male-managed plots and jointly-managed plots) were created based on two questions: "Who in the household makes decisions on crops to be planted, input use and timing of crop activities?" and "Who in the household manages the plot?" The results reveal considerable heterogeneity in the effects of women's empowerment on maize yields across the three types of plot management. More specifically, we find that the increased maize yields due to higher empowerment scores for the women were statistically significant for female-managed and male-managed plots.
Effect of women's empowerment on the productivity of femalemanaged, male-managed and jointly-managed plots
In Table 9 we compare selected attributes of female-, male-and jointly-managed plots to highlight some of the factors that may be critical to closing the gender gap in maize productivity. For example, female-managed plots tend to be less fertile and receive a lower intensity of fertilisers relative to the other plot-manager categories. Most notably, however, stark differences exist in the quantity of labour supplied to female-managed plots, relative to male-managed and jointly-managed ones: on average, female-managed plots receive roughly 32 fewer person-days per acre of total labour compared with their male-managed counterparts, and nearly 38 fewer person-days per acre than jointly-managed plots do.
Several possible explanations exist for this trend. Firstly, given the typical division of labour within SSA households and the heavy domestic workload this places on women, women may have less time than men to devote to cultivating their plots [2,24]. Indeed, as reported earlier (Fig 2), more than 70% of women in the sample fail to achieve adequacy in the workload indicator, i.e., they spent more than 10.5 hours in paid and domestic work during the previous day. Secondly, restrictions on women's mobility-due partly to this heavy burden of labour and partly to prevailing gender norms-may not only prevent them from accessing traditional markets to hire additional labour but could also make it more difficult for them to supervise any workers they did manage to hire.
Conclusion and policy implications
Women's empowerment is widely perceived to be a key factor in closing gender gaps in agricultural productivity. In this paper, we explore the relationship between women's empowerment in agriculture-measured using indicators derived from the A-WEAI-and maize productivity, using smallholder maize farmers in western Kenya as a case study. Controlling for potential endogeneity, we find that women's empowerment leads to increased maize productivity, with the greatest gains derived from increases in women's participation in decision-making on agricultural production. Extending our analysis, we find evidence of heterogeneity in the effects of women's empowerment on maize yields, namely that female-and male-managed plots experience significant improvements in yield. The effects of women's empowerment on maize yields are insignificant for jointly-managed plots. Hence, our findings provide an important piece of evidence showing that women's empowerment may contribute to closing the gender productivity gap.
We contribute to the gender and agriculture literature on two fronts. First, we provide direct evidence that women's empowerment can contribute to closing the gender gap in agricultural productivity that has been widely observed in SSA [12,66], and more generally, that improvements in women's bargaining position may lead to more optimal allocations of the household's productive resources, evidenced by higher productivity. Second, we illustrate how failing to correctly account for the bounded nature of the empowerment score may lead to overestimating the true impact of women's empowerment on agricultural productivity. Furthermore, we demonstrate how to correct for this feature of the WEAI using various econometric procedures.
Our results also offer encouragement with respect to the effectiveness of policies and strategic interventions aimed at stimulating increased agricultural productivity in Kenya through women's empowerment. Although we find that having the power to make important decisions about agricultural production to be the most important driver of maize productivity among the six indicators of women's empowerment we tested, all except for the workload indicator had a significant effect on maize productivity. This speaks to the wide range of ways in which women's empowerment impacts positively on agricultural productivity and suggests a great scope of possible interventions, ranging from financial inclusion mechanisms such as digital savings accounts, affordable mobile-money-based credit schemes and asset-building mechanisms, to programmes facilitating the formation of strong community associations for women.
In conclusion, while our study points towards women's empowerment having a positive effect on maize yield, the cross-sectional nature of our data does not support an examination of the dynamic impacts associated with women's empowerment and maize yield. Furthermore, our data are not nationally representative and, thus, may not reflect women's empowerment status across Kenya. More research, using nationally representative and repeated data from Kenya and elsewhere in SSA, is needed to fully understand the relationship between women's empowerment and maize yield.
Emily Kimathi for producing study area map and Sandie Fitchat for language editing. The views expressed herein do not necessarily reflect the official opinion of the donors or icipe. | 7,588 | 2018-05-31T00:00:00.000 | [
"Agricultural and Food Sciences",
"Economics"
] |
Design and construction of a scanning stand for the PU mini-acoustic sensor
The main objective of this work was to design and construct a device that would facilitate and reduce the intensity of work in the process of scanning surfaces with a Microflown PU Mini acoustic probe. Our proposed device will shorten the total time to scan the surface and improve the quality of data measurements. The device was designed to be able to perform movement along the X and Y axes, thus achieving scanning of the entire sound field. Some components of the device are manufactured strictly according to particular specifications to suit our requirements. The most important step in the design of the scanning device we prepared for the particular acoustic sensor was the design for the linear connection that fulfills a supporting function for the entire device as well as provides movement along the X and Y axes. The final step was the design for the automation of the entire system, for which we selected an Arduino Uno (ATmega328) controller. The actual device was tested in the final stages. The device is designed and tested as functional and able to work on what was intended for and is ready for further modifications or use in practice.
Introduction
Modern acoustics is vastly different from the field that existed as recently as 30 years ago. It has grown to encompass the realm of ultrasonics and infrasonics. Improvements are still being made in the older domains of voice reproduction, audiometry, psychoacoustics, speech analysis, and environmental noise control [1]. Acoustics is defined by ANSI/ASA S1.1-2013 as "(a) Science of sound, including its production, transmission, and effects, including biological and psychological effects. (b) Those qualities of a room that, together, determine its character with respect to auditory effects." ANSI S1.1 was first published in 1960 and has its roots in a 1942 standard published by the American Standards Association, the predecessor of ANSI. The study of acoustics revolves around the generation, propagation, and reception of mechanical waves and vibrations.
History of acoustics
The historical progress of the science of acoustics is surveyed from the earliest recorded phenomena and theories to the present status of the subject [2]. With the sound as a major factor-affecting human life, it was only natural for interest in the science of sound, or acoustics, to emerge.
The honor of being the earliest acousticians probably belongs to the Greek philosopher Chrysippus (ca. 240 BC), the Roman architect-engineer Vitruvius (ca. 25 BC), and the Roman philosopher Severinus Boethius (480-524). Marin Mersenne (1588-1648), a French natural philosopher and Franciscan friar, may be considered to be the "father of acoustics." The "father of modern acoustics," Robert Bruce Lindsay (1900Lindsay ( -1985, developed the "Wheel of Acoustics" to illustrate the many fields of study included in the topic of acoustics [3]. Joseph Sauveur (1653-1713) should also be mentioned here, who suggested the term acoustics (from the Greek word for sound) for the science of sound. Ernst F. F. Chladni (1756Chladni ( -1827, author of the highly acclaimed work Die Akustik is often credited with establishing the field of modern experimental acoustics through his discovery of torsional vibrations and measurements of the velocity of sound with the aid of vibrating rods and resonating pipes. Acoustics also engendered the science of psychoacoustics. Harvey Fletcher (1884-1990), who is regarded as "the father of psychoacoustics," led the Bell Telephone Laboratories in describing and quantifying the concepts of loudness and masking . The outbreak of World War II served to greatly intensify acoustics research at major laboratories in Western Europe and in the United States of America. This research not only took on great proportions but has also continued unabated to this day [4,5]. Acoustics is no longer an esoteric domain of interest to a few specialists in the telephone and broadcasting industries, the military, and university research centers. Legislation and subsequent action have been demanded internationally to provide quiet housing, safe, and comfortable work environments in the factory and the office, quieter airports, and streets, and protection in general from excessive exposure to noisy appliances and equipment [6].
Perspectives of visualization techniques
Sound visualization techniques have played a key role in the development of acoustics throughout history. Many alternative methods and apparatus have been proposed over time [7]. Nevertheless, the current measurement procedures for characterizing sound fields can be classified into three major categories, regardless of the postprocessing techniques applied: step-by-step, simultaneous, and scanning measurements. Each of these techniques can be evaluated simply using three main features: measurement time, flexibility, and total cost of the equipment [5,8,9].
Chladni introduced one of the first methods focused upon the visualization of sound and vibration phenomena at the end of the eighteenth century [10]. The method was based on using sand sprinkled on vibrating plates to show the dynamic behavior of a vibrating body. He generated the so-called Chladni patterns by strewing sand on a vibrating plate excited with a violin bow causing the sand to collect along the nodal lines. The first scanning technique for displaying sound was presented by Kock and Buchta in 1965 [11]. He worked extensively on improving his apparatus, which led him to later publish the book Seeing Sound. In addition, he also developed a subtraction technique for visualizing wave patterns across a sound field. During the 1970s, multichannel microphone arrays were first applied for sound source localization, although the idea of developing such a device was first proposed during World War I. Billingsley invented the microphone antenna or so-called "acoustic telescope," in 1974 [12]. Since 1999, this group of apparatus has been cataloged as "acoustic camera" [13]. As has been mentioned above, there is notable interest in developing tools to assess the behavior of sound in both qualitative and quantitative terms. Generally, in acoustics, it is often necessary to describe not only the characteristics of the location and nature of the sound sources but also the behavior of the sound field that they generate. Consequently, the introduction of a measurement technique, which permits the acquisition of such information in an efficient way without raising the cost or complexity of the measurement setup, has a high potential for a wide range of applications [5,[14][15][16]].
The Microflown PU mini (Scan&Paint)
The Microflown device; see, e.g., ref. [17] is the only one based on the technology of a MEMS (microelectromechanical systems) acoustic sensor that measures the particle velocity instead of the sound pressure, as conventional microphones usually do, providing a new approach for measuring sound intensity. Due to the heating of two microscopic wires placed in parallel, this sensor can quantify the velocity of air particles, which, combined with a pressure microphone, enables us to describe the sound field completely.
There is a sound visualization technique proposed as an alternative sound visualization method called "Scan&Paint" [18]. The acoustic signals of the sound field are acquired by manually moving a single transducer across a measurement plane while filming the event with a webcam. With this probe, it is possible to measure quantities like sound intensity (see, e.g., ref. [19]), sound energy (see, e.g., ref. [20]), and acoustic impedance (see, e.g., ref. [21]) in one direction. In many cases, PU probes have a low susceptibility to background noise and can be used in the entire audible frequency range. Both the sound pressure and the acoustic particle velocity are measured simultaneously [1]. There are different methods to capture and visualize acoustic properties near an object. With PU probes, the velocity from the surface is directly and easily measured.
Material and methods
The basis for the structure of the SSAS (scanning stand for the PU Mini acoustic sensor) was chipboard 20 mm thick, to which were fastened all designed components of the device. Components (bearing platform upper 2 pcs, lower 3 pcs; horizontal rod holder 5 pcs, see Figure 1; servo motor mounting platform 2 pcs), which were designed in a specific size and shape and were made through 3D printing, enabling the reduction of the overall cost of producing the device.
The components were printed using a PRUSA i3 MK3 3D printer and created in Solid Edge 2020. PLA filament 1.75 mm was used for printing, which is one of the most versatile materials for FDM printing technology. PLA is a fully biodegradable material, from corn or potato starch or from sugar cane. It is increasingly used industrially. The time required to print all components was 19 h.53 min. The amount of filament used was 73.58 m, used print quality was 0.20 mm quality. The approximate price of the print was set at 5.48 € [22].
For the supporting function of the moving parts, we chose unsupported polished rods for their lightweight acting on the linear connection. We fixed them to the base plate using horizontal brackets. The linear bearings played an important role in ensuring linear motion. These perform the movement in the X and Y axes by moving around the sliding surface of the rod with minimal friction. For these bearings, we designed housing to facilitate attaching the upper part of the device to the lower linear connections and which also serves as a housing for the PU Mini sensor (1 pcs). Movement along the X and Y axes is provided by two NEMA SX17-1003LQCEF two-phase stepper motors with a 4-ply cable of 70 cm. The engines are fastened to the baseboard with screws on the auxiliary fitting. The motor shaft (5 mm) is a flexible coupling connected to a driven shaft (10 mm).
Arduino Uno R3 is a development board with an AVR ATmega328 microcontroller. The board includes 14 digital and 6 analog I/O connectors for connecting an external power supply, a reset button, and a status LED. Other optional peripherals must be connected separately. The advantage of this device is that it is easy to connect and program. Custom programming is performed in a simple environment, using the Arduino IDE programming language derived from Wiring. The code is very clear, and it separates the programmer from the complex hardware configuration. Arduino is an "Open Source" platform for the easy design and development of electronic programmable devices.
The density of programmable measuring points can vary depending on the nature and complexity of the scanned sound source. It follows that before scanning it is necessary to make test measurements to determine the feed rate and density measurement points of the sensor. This set up is the basis of the quality of measured outputs in the form of acoustic images, which means even better analysis and interpretation of measurement data.
The construction and verification of the scanning acoustics stand
Our proposed SSAS guide frame was made for laboratory research and dimensionally adapted to analyze sound sources from smaller industrial sources or household appliances to a size of 1 m × 1 m. The device was designed to be able to perform movement along the X and Y axes, thus achieving scanning of the entire sound field. The sequence of steps is shown in Figures 2-5.
Step 1: selection of appropriate materials and finish of the desired size, shape, production of subcomponents via 3D printing, workshop finishing of openings. Step 2: engine modifications, mounting on the platform, attaching to the baseboard of the guide frame.
Step 3: (a) mounting components on the base plate (horizontal rod holders, sensors, bearings for nuts, bearings); (b) to avoid accidental damage (curling) of the frame and thus the entire scanning device, the whole baseboard was reinforced by a support frame made from chipboard, adding strength and durability to the whole design; (c) for defining the movement path of the baseboard, there are mountings for the guide and the threaded rod together with the motor. The motion sensor on top of the platform is limited to movement along the X-axis, while the lower threaded rod ensures the movement of the entire device along the Y-axis; and (d) protecting cabling for the two motors from damage from movement using super-quiet energy chains for limited space (E3 system).
Step 4: completing, programming, setup, installing the sensor, start-up, and verification of the device. The device we have designed has the primary purpose of measuring the acoustic characteristics of the close field. The device has to include a probe that is specifically designed for measuring acoustic characteristics in exactly this kind of field. Microflown technology is focused on measuring acoustic quantities. Based on these assumptions, we chose a Microflown PU Min probe as our scanning device. The sound probe combines two sensors: a traditional microphone and a Microflown. The sound pressure and acoustic particle velocity are measured directly in one place. Two complementary acoustic properties, the scalar value "sound pressure" and the vector value "particle velocity" describe any sound field [17].
The parameters of the Microflown sensor that was used in the measurement are as follows: Acoustical properties microflown element: frequency range: 0.1 Hz to 10 kHz ± 1 dB; upper sound level: 125 dB; polar pattern (figure of eight); directivity (directive).
The speed of the sensor feed also affects the quality of the output data from the measurements. We found the optimal movement for scanning using a number of measurements. By verification, it was found that more accurate measurement values are obtained at low movement rates than at higher ones. Therefore, before the actual measurement, it is advisable to know the actual source of the sound (character of the sound), for subsequent optimal adjustment of scanning parameters (movement speed, point resolution, etc.). Regulation of the angular speed of the electric motor, on which the feed speed of the sensor depends can be regulated by the command in the Arduino IDE program; see Figure 6 [23]. The optimal feed rate of the sensor is from 10 to 30 mm/s. At higher feed speeds, inaccuracies in audio and video synchronization occur during subsequent data processing. After trying motors programmed for this purpose, we found that the device is capable of performing the function for which it was designed. Testing separately, we move the deposit sensor to the X-axis and move the upper part of the assembly equipment along the Y-axis. Then, we can test the entire device in laboratory conditions.
In the frequency range 315-630 Hz, measurement without a rear cover shows that noise is generated by the direct-drive motor area; the same effect is not visible in the frequency range 1,000-2,000 Hz (see Figure 7). These outcomes were compared with the results from the other two acoustic cameras from Gfai Tech GmbH and Noise Inspector. From comparative outputs, it can be concluded that the proposed device for the presented PU Mini sensor is suitable for use in laboratories and industrial operations where a short time is needed to locate the cause of the faults and adverse states of machinery and equipment using sound parameters.
Discussion and conclusion
The modern way of life has also brought the problem of noise and its impact on both human health and the quality of life in general. There are many unwanted sources of noise in our industrialized society. Localizing them is challenging. At present, there are several standard techniques. However, there is no universal solution. The Scan&Paint method makes it possible to localize stationary sources, within a few minutes under operating conditions. A simple scan of the surface is recorded by a video camera and synchronized with the audio data. The position of the probe can be recognized by the video and the color map can be calculated in very high resolution. The usage of the Microflown PU probe in combination with the Scan&Paint software tool enables additional information to be captured compared to the traditional methodology, such as the spatial distribution over the sample visualized through color maps, for detecting weak points, leakages, or assembly defects.
Comesaña and Wind [24], in 2011, at the SAE International conference published a paper, which presented the two main algorithms: the point method and the grid method. The results prove a strong agreement between the two methods. It was found that the point method provided higher spatial resolution color maps, whereas the grid method converges to a more accurate answer, mainly due to the spatial averaging applied. The scanning device presented in this article is based on the findings of Comesaña, which means that we designed it using the point method algorithm.
The main aim was to simplify, clarify, and streamline the scanning process for the measurement of sound recording and calculating acoustic images, which are relevant data for subsequent analysis and diagnosis of faults and adverse states of the studied machinery and equipment [25,26].
The main advantages of the designed scanning device particularly include the ability to improve the accuracy of the measured data as making a sound recording is limited by the human factor. A device engineered this way can be used for stationary sound sources such as smaller industrial sources or household appliances. The completed apparatus is not limited in terms of size in the case of major industrial sources of sound. Based on customer needs, the presented equipment can be dimensionally adjusted in terms of the size of the scanning area. The production costs of the scanning system do not exceed the sum of 300 €. These costs, however, do not include the Microflown measurement system. We assume that from the construction point of view, it will be possible to apply the proposed stand also when scanning devices generating electromagnetic fields using E-and B-field sensors. | 4,053 | 2021-01-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
One More Tool for Understanding Resonance and the Way for a New Definition
We propose the application of graphical convolution to the analysis of the resonance phenomenon. This time-domain approach encompasses both the finally attained periodic oscillations and the initial transient period. It also provides interesting discussion concerning the analysis of nonsinusoidal waves, based not on frequency analysis but on direct consideration of waveforms, and thus presenting an introduction to Fourier series. Further developing the point of view of graphical convolution, we arrive at a new definition of resonance in terms of time domain.
Introduction
1.1.General.The following material fits well into an "Introduction to Linear Systems, " or "Mechanics, " and is relevant to a wide range of technical and physics courses, since the resonance phenomenon has long interested physicists, mathematicians, chemists, engineers, and, nowadays, also biologists.
The complete resonant response of an initially unexcited system has two different, distinguishable parts, and there are, respectively, two basic definitions of resonance, significantly distanced from each other.
In the widely adopted textbook [1] written for physicists, resonance is defined as a linear increase of the amplitude of oscillations in a lossless oscillatory system, obtained when the system is pumped with energy by a sinusoidal force at the correct frequency.Figure 1 schematically shows the "envelope" of the resonant oscillations being developed.
Thus, a lossless system under resonant excitation absorbs more and more energy, and a steady state is never reached.In other words, in the lossless system, the amplitude of the steady state and the "quality factor" (having a somewhat semantic meaning in such a system) are infinite at resonance.
However, the slope of the envelope is always finite; it depends on the amplitude of the input function, and not on .Though the steady-state response will never be reached in an ideal lossless system, the linear increase in amplitude by itself has an important sense.When a realistic physical system absorbs energy resonantly, say in the form of photons of electromagnetic radiation, there indeed is some period (still we can ignore power losses, say, some back radiation) during which the system's energy increases linearly in time.The energy absorption is immediate upon appearance of the influence, and the rate of the absorption directly measures the intensity of the input.
One notes that the energy pumping into the system at the initial stage of the resonance process readily suggests that the sinusoidal waveform of the input function is not necessary for resonance; it is obvious (think, e.g., about swinging a swing by kicking it) that the energy pumping can occur for other input waveforms as well.This is a heuristically important point of the definition of [1].
The physical importance of the initial increase in oscillatory amplitude is associated not only with the energy pumping; the informational meaning is also important.Assume, for instance, that we speak about the start of oscillations of the spatial positions of the atoms of a medium, caused by an incoming electromagnetic wave.Since this start is associated with the appearance of the wave, it can be also associated with Figure 1: The definition of resonance [1] as linear increase of the amplitude.(The oscillations fill the angle of the envelope.)The infinite process of increase of the amplitude is obtained because of the assumption of losslessness of the system.the registration of a signal.Later on, the established steadystate oscillations (that are associated, because of the radiation of the atoms, with the refraction factor of the medium) influence the velocity of the electromagnetic wave in the medium.As [2] stresses-even if this velocity is larger than the velocity of light (for refraction factor < 1, i.e., when the frequency of the incoming wave is slightly higher than that of the atoms oscillators)-this does not contradict the theory of relativity, because there is already no signal.Registration of any signal and its group velocity is associated with a (forced) transient process.
A more pragmatic argument for the importance of analysis of the initial transients is that for any application of a steady-state response, especially in modern electronics, we have to know how much time is needed for it to be attained, and this relates, in particular, to the resonant processes.This is relevant to the frequency range in which the device has to be operated.
Contrary to [1], in textbooks on the theory of electrical circuits (e.g., [3][4][5]) and mechanical systems, resonance is defined as the established sinusoidal response with a relatively high amplitude proportional to .Only this definition, directly associated with frequency domain analysis, is widely accepted in the engineering sciences.According to this definition, the envelope of the resonant oscillations (Figure 2) looks even simpler than in Figure 1; it is given by two horizontal lines.This would be so for any steady-state oscillations, and the uniqueness is just by the fact that the oscillation amplitude is proportional to .
After being attained, the steady-state oscillations continue "forever, " and the parameters of the "frequency response" can be thus relatively easily measured.Nevertheless, the simplicity of Figure 2 is a seeming one, because it is not known when the steady amplitude becomes established, and, certainly, the "frequency response" is not an immediate response to the input signal.The envelope of resonant oscillations, according to the definition of resonance in [3,4] and many other technical textbooks.
When this steady state is attained?
The illustration, for = 10, of the resonant response of a second-order circuit.Note that we show a case when the excitation is precisely at the resonant frequency, and the notion of "purely resonant oscillations" applies here to the whole process, and not only to the final steady-state part.
Thus, we do not know via the definition of [1] when the slope will finish, and we do not know via the definition of [3][4][5] when the steady state is obtained.
We shall call the definition of [1] "the "Q-t" definition, " since the value of can be revealed via duration of the initial/transient process in a real system.The commonly used definition [3][4][5] of resonance in terms of the parameters of the sustained response will be called "the "Q-a" definition, " where "a" is an abbreviation for "amplitude." Figure 3 illustrates the actual development of resonance in a second-order circuit.The damping parameter will be defined in Section 3.
The Q-t and Q-a parts of the resonant oscillations are well seen.For such a not very high (i.e., 1/ not much larger than the period of the oscillations) the period of fair initial linearity of the envelope includes only some half period of oscillations, but for a really high it can include many periods.The whole curve shown is the resonant response.This response can be obtained when the external frequency is closing the self-frequency of the system, from the beats of the oscillations (analytically explained by the formulae found in Section 3) shown in Figure 4.
Note that the usual interpretation is somewhat different.It just says that the linear increase of the envelope, shown in Figure 1, can be obtained from the first beat of the periodic out () Figure 4: Possible establishing of the situation shown in Figure 3 through beats while adjustment of the frequency.We can interpret resonance as "filtration" of the beats when the resonant frequency is found.
beats observed in a lossless system.Contrary to that, we observe the beats in a system with losses, and after adjustment of the external frequency obtain the whole resonant response shown in Figure 3.
Our treatment of the topic of resonance for teaching purposes is composed of three main parts shown in Figure 5.The first part briefly recalls traditional "phasor" material relevant only to the Q-a part, which is necessary for introduction of the notations.The next part includes some simple, though usually omitted, arguments showing why the phasor analysis is insufficient.Finally, the third part includes the new tool, which is complementary to the classical approach of [1], and leads to a nontrivial generalization of the concept of resonance.
Our notations need minor comments.As is customary in electrical engineering, the notation for √ −1 is .The small italic Latin "v, " , is voltage in the time domain (i.e., a real value), V means phasor, that is, a complex number in the frequency domain. is the dummy variable of integration in a definite integral of the convolution type.It is measured in seconds, and the difference − , where is time, often appears.
Some Advice to the Teacher
First, we deal here with a lot of pedagogical science-in principle the issues are not new but are often missed in the classroom; as far as we know, no such complete scheme of the necessary arguments for teaching resonance exists.Perhaps this is because some issues indeed require a serious revisiting and time is often limited due to overloaded teaching plans and schedules.That the results of this "economy" are not bright is seen first of all from the already mentioned fact that electrical engineering (EE) students often learn resonance only via phasors and are not concerned with the time needed for the very important steady state to be established.The resonance phenomenon is so physically important that it is taught to technical students many times: in mechanics, in EE, in optics, and so forth.However, all this repeated teaching is actually equivalent to the use of phasors, that is, relates only to the established steady state.
Furthermore, the teachers (almost all of them) miss the very interesting possibility to exhibit the power of the convolution-integral analysis for studying the development of a resonant state.In our opinion, this demonstration makes the convolution integral a more interesting tool; this really is one of the best applications of the "graphical convolution, " which should not be missed in any program.The convolution outlook well unites the view of resonance as a steady state by engineers and the view of resonance as energy pumping into a system, by physicists.The arguments of the graphical convolution also enable one to easily see (before knowing Fourier series) that a nonsinusoidal periodic input wave can cause resonance just as the sinusoidal one does.Thus, these arguments can be used also as an explanation of the physical meaning of the Fourier expansion.Our classroom experience shows that the average student can understand this material and finds it interesting.
Thus, regarding the use of the pedagogical material, we would advise the teacher of the EE students, to return to the topic of resonance (previously taught via phasors), when the students start with convolution.
Finally, the present work includes some new science, which can be also related to teaching, but perhaps at graduate level, depending on the level of the students or the university.We mean the generalization of the concept of resonance considered in Section 5.It is logical that if the convolution integral can show resonance (or resonant conditions) directly, not via Fourier analysis, then this "showing" exposes a general definition of resonance.Furthermore, since mathematically the convolution integral can be seen-with a proper writing of the impulse response in the integrand-as a scalar product, it is just natural to introduce into the consideration the outlook of Euclidean space.
The latter immediately suggests a geometric interpretation of resonance in functional terms, because it is clear what is the condition (here, the resonant one) for optimization of the scalar product of two normed vectors.As a whole, we simply replace the traditional requirement of equality of some frequencies to the condition of correlation of two time functions, which includes the classical sinusoidal (and the simplest oscillator) case as a particular one.
The geometrical consideration leads to a symmetry argument: since the impulse response ℎ() is the only given "vector, " any optimal input "vector" has to be similarly oriented; there simply is no other selected direction.The associated writing inp ∼ ℎ, that is often used here just for brevity, precisely means the adjustment of the waveform of inp () to that of ℎ() by the following two steps.
It is relevant here that for weak power losses typical for all resonant systems, the damping of ℎ() in the first period can be ignored, which should be a simplifying circumstance for creation of the periodic inp ().The way of the adjustment of inp () reflects the fact that the Euclidean space can relate to one period.
Both because of the somewhat higher level of the mathematical discussion and some connection with the theory of "matched filters, " usually related to special courses (which could not be discussed here), it seems that this final material should be rather given for graduate students.However, we also believe that a teacher will find here some pedagogical motivation and will be able to convey more lucid treatment than we succeeded to doing.Thus, the question regarding the possibility of teaching the generalized resonance to undergraduate students remains open.Some other nontrivial points, deserving pedagogical judgement or analytical treatment, appear already in the use of the convolution.This means the replacement of the weakly damping ℎ() of an oscillatory system by the not damping but cut function ℎ (), shown in Figure 11, and the problem of definition of the damping parameter for the tending to zero ℎ() of a complicated oscillatory circuit.A possible way for the latter can be by observation (this is not yet worked out) of some averages, for example, how the integral of ℎ 2 , or of |ℎ|, over the fixed-length interval (, + Δ) is decreased with increase in .
Elementary Approaches
3.1.The Second-Order Equation.The background formulae for both the Q-t and Q-a parts of the resonant response can be given by the Kirchhoff voltage equation for the electrical current () in a series RLC (resistor-inductorcapacitor) circuit driven from a source of sinusoidal voltage with amplitude : Differentiating ( 1) and dividing by ̸ = 0, we obtain with the damping factor = /2 and the resonant frequency For purely resonant excitation, the input sinusoidal function is at frequency = , or at a very close frequency , as defined below in (6).
The Time-Domain Argument.
The full solution of (2) can be explicitly composed of two terms; the first, denoted as ℎ , originates from the homogeneous (ℎ) equation, and the second, denoted as , represents the finally obtained () periodic oscillations, that is, is the simplest (but not the only possible!) partial solution of the forced equation: It is important that the zero initial conditions cannot be fitted by the second term in (3), (), continued backward in time to = 0. (Indeed, no sinusoidal function satisfies both the conditions () = 0 and / = 0, at any point.)Thus, it is obvious that a nonzero term ℎ () is needed in (3).This term is where at least one of the constants 1 and 2 nonzero.Furthermore, it is obvious from (4) that the time needed for ℎ () to decay is of the order of 1/ ∼ (compare to (9)).However, according to the two-term structure of (3), the time needed for () to be established, that is, for () to become (), is just the time needed for ℎ () to decay.Thus, the established "frequency response" is attained only after the significant time of order ∼ .
Unfortunately, this elementary logic argument following from ( 3) is missed in [3][4][5] and many other technical textbooks that ignore the Q-t part of the resonance and directly deal only with the Q-a part.
However form (3) is also not optimal here because it is not explicitly shown that for zero initial conditions not only () but also the decaying ℎ () are directly proportional to the amplitude (or scaling factor) of the input wave.
That is, from the general form (3) alone it is not obvious that, when choosing zero initial conditions, we make the response function as a whole (including the transient) to be proportional to , appearing in (1), that is, to be a tool for studying the input function, at least in the scaling sense.
It would be better to have one expression/term from which this feature of the response is well seen.Such a formula appears in Section 4.
The Phasor Analysis of the Q-a Part.
Let us now briefly recall the standard phasor (impedance) treatment of the final Q-a (steady-state) part of a system's response.We can focus here only on the results associated with the amplitude, the phase relations follow straightforwardly from the expression for the impedance [3,4].
In order to characterize the Q-a part of the response, we use the common notations of [3,4]: the damping factor of the response ≡ /2, the resonant frequency = 1/ √ , the quality factor and the frequency at which the system self-oscillates: Note that it is assumed that 4 2 ≫ 1, and thus and are practically indistinguishable.Thus, although we never ignore per se, the much smaller value / ∼ 2 / can be ignored.When speaking about "precise resonant excitation, " we shall mean setting with this degree of precision, but when writing ̸ = , we shall mean that − = (), and not (/).Larger than () deviations of from are irrelevant to resonance.
It is remarkable that however small is , it is easy, while working with the steady state, to detect differences of order between and , using the resonant curve/response described by (8).
Figure 6 illustrates the resonance curve.Though this figure is well known, it is usually not stressed that since each point of the curve corresponds to some steady state, a certain time is needed for the system to pass on from one point of the curve to another one, and the sharper the resonance is the more time is needed.The physical process is such that for a small the establishment of this response takes a (long) time of the order of which is not directly seen from the resonance curve.The relation 1/ ∼ for the transient period should be remembered regarding any application of the resonance curve, in any technical device.The case of a mistake caused by assuming a quicker performance for measuring input frequency by means of passing on from one steady state to another is mentioned in [2].This mistake is associated with using only the resonance curve, that is, thinking only in terms of the frequency response.
The Use of Graphical Convolution
We pass on to the constructive point, the convolution integral presenting the resonant response, and its graphical treatment.It is desirable for a good "system understanding" of the topic that the concepts of zero input response (ZIR) and zero state response (ZSR), especially the latter one, be known to the reader.
Briefly, ZSR is the partial response of the circuit, which satisfies the zero initial conditions.As → ∞ (and only then), it becomes the final steady-steady response, that is, becomes the simplest partial response (whose waveform can be often guessed).The appendix illustrates the concepts of ZIR and ZSR in detail, using a first-order system and stressing the distinction between the forms ZIR + ZSR and (3) of the response.
Our system-theory tools are now the impulse (or shock) response ℎ() (or Green's function) and the integral response to inp () for zero initial conditions: The convolution integral (10) is an example of ZSR, and it is the most suitable tool for understanding the resonant excitation.
It is clear (contrary to (3)) that the total response (10) is directly proportional to the amplitude of the input function.
Figure 7 shows our schematic system.Of course, the system-theory outlook does not relate only to electrical systems; this "block-diagram" can mean influence of a mechanical force on the position of a mass, or a pressure on a piston, or temperature at a point inside a gas, and so forth.
Note that if the initial conditions are zero, they are simply not mentioned.If the input-output map is defined solely by ℎ() (e.g., when one writes in the domain of Laplace variable out () = () inp ()), it is always ZSR.
In order to treat the convolution integral, it is useful to briefly recall the simple example [5] of the first-order circuit influenced by a single square pulse.The involved Figure 9: The functions appearing in the integrand of the convolution integral (10).The "block" inp ( − ) is riding (being moved) to the right on the -axes, as time passes.We multiply the present curves in the interval 0 < < and, according to (10), take the area under the result, in this interval.When < Δ, only the interval (0, ) is relevant to (10).When > Δ, only the interval ( − Δ, ) is actually relevant, and because of the decay of ℎ(), out () becomes decaying.physical functions are shown in Figure 8, and the associated "integrand situation" of (10) is shown in Figure 9.
It is graphically obvious from Figure 9 that the maximal value of out () is obtained for = Δ, when the rectangular pulse already fully overlaps with ℎ(), but still "catches" the initial (highest) part of ℎ().This simple observation shows the strength of the graphical convolution for a qualitative analysis.
The (Resonant) Case of a Sinusoidal Input Function Acting
on the Second-Order System.For the second-order system with weak losses, we use for (10) As before, we apply Figure 10 builds the solution (10) step by step; first our ℎ() and inp (−) (compare to Figure 9), then the product of these functions, and finally the integral, that is, out () = ().
On the upper graph, the "train" inp ( − ) travels to the right, starting at = 0, on the middle graph we have the integrand of (10).The area inp () = () under the integrand's curve appears as the final result on the third graph.
In view of the basic role of the overlapping of inp ( − ) with ℎ(), it is worthwhile to look forward a little and compare Figure 10 to Figures 14 and 15 that relate to the case of an input square wave.For the upper border of integration in (10) be = (/ ) and for very weak damping of ℎ() the situations being compared are very similar.The distinction is that, in order to obtain the extremes of inp (), we integrate in Figure 15 the absolute value of several sinusoidal pieces (halfwaves), while in Figure 10 we integrate the squared sinusoidal pieces.Since we integrate, in each case, similar pieces (all positive, giving a maximum of out (), or all negative, giving a minimum), the result of each such integration is directly proportional to .
Thus, if = 0, when ℎ() is strictly periodic, from the periodic nature of also inp (), it follows that for any integer , which is a linear increase in the envelope for the two very different input waves, in the spirit of Figure 1.
For a small but finite , 0 < ≪ , the initial linear increase has high precision only for some first few when ∼ ∼ 1/ ≪ 1/, that is, ≪ 1, or − ≈ 1. (The damping of ℎ() may be ignored for these .) Observe that the finally obtained periodicity of out () follows only from that of inp (), while the linear increase requires periodicity of both inp () and ℎ().
The above discussion suggests the following simplification of the impulse response of the circuit, useful for analysis of the resonant systems.This simplification is a useful preparation for the rest of the analysis.
A Simplified ℎ(𝑡) and the Associated Envelope of the
Oscillations.Considering that the parameter 1/ appears in the above (and in Figure 3) as some symbolic border for the linearity, let us take a constructive step by suggesting a geometrically clearer situation when this border is artificially made sharp by introducing an idealization/simplification of ℎ(), which will be denoted as ℎ ().
In this idealization-that seems to be no less reasonable and suitable in qualitative analysis than the usual use of the vague expression "somewhere at "" of order 1/", we replace ℎ() by a finite "piece" of nondamping oscillations of total length 1/.
We thus consider that however weak the damping of ℎ() is, for sufficiently large , when ≫ 1/ ∼ , we have − ≪ 1, that is, the oscillations become strongly damped with respect to the first oscillation.For > 1/ the further "movement" of the function inp ( − ) to the right (see Figure 10 again) becomes less effective; the exponentially decreasing tail of the oscillating ℎ() influences (10), via the overlap, more and more weakly, and as → ∞, out () ceases to increase and becomes periodic, obviously.
We simplify this qualitative vision of the process by assuming that up to = 1/, there is no damping of ℎ(), but, starting from = 1/, ℎ() completely disappears.That is, we replace the function − sin by the function ℎ () = [() − ( − 1/)] sin , where () is the unit step function.The factor () − ( − 1/) here is a "cutting window" for sin .This is the formal writing of the "piece" of the nondamping self-oscillations of the oscillator.See Figure 11.
For ℎ (), it is obvious that when the "train" inp ( − ) crosses in Figure 10 the point = 1/, the graphical construction of (10), that is, out (), becomes a periodic procedure.Figuratively speaking, we can compare ℎ () with a railway station near which the infinite train inp ( − ) passes; some wagons go away, but similar new ones enter and the total overlapping is repeated periodically.
The same is also analytically obvious, since when setting, for > 1/, the upper limit of integration in (10) as 1/, we have, because of the periodicity of inp (⋅), the integral: as a periodic function of .
As is illustrated by Figure 12-which is an approximation to the envelope shown in Figure 3-the envelope of the output oscillations becomes completely saturated for > 1/.
Figure 12 clearly shows that both the amplitude of the finally established steady-state oscillations and the time needed for establishing these oscillations are proportional to , while the initial slope is obviously independent of .
It is important that ℎ () can be also constructed for more complicated functions ℎ() (for which it may be, for instance, ℎ(+/2) ̸ = − ℎ()) and also then the graphical convolution is easier formulated in terms of ℎ ().As an example relevant to the theoretical investigations-approximately presenting the Figure 10: Graphically obtaining the resonant response for a second-order oscillatory system and a sinusoidal input, according to (10).The envelope (not shown) has to pass via the maxima and minima of out () appearing in the last graph.
Figure 11: The simplified ℎ() (named ℎ ()): there is no damping at 0 < < 1/, but for > 1/, it is identically zero, that is, we first ignore the damping of the real ℎ() and then cut it completely.This idealization expresses the undoubted fact that the interval 0 < < 1/ is dominant and makes the treatment simpler.A small change in 1/ which makes the oscillatory part more pleasing by including in it just the (closest) integer number of the half waves, as shown here, may be allowed, and when using ℎ () in the following we shall assume for simplicity that the situation is such.
we can easily reduce, using periodicity of inp () for any oscillatory ℎ() (and ℎ ()), the analysis of the interval (0, 1/) to that of a small interval, as was for (0, / ) in Figure 10.
Nonsinusoidal Input Waves.
The advantage of the graphical convolution is not so much in the calculation aspect.It is easy for imagination (insight) procedure, and it is a flexible tool in the qualitative analysis of the time processes.The graphical procedure makes it absolutely clear that the really basic point for a resonant response is not sinusoidality, but periodicity of the input function.Not being derived from the spectral (Fourier) approach, this observation heuristically completes this approach and may be used (see the following) in an introduction to Fourier analysis.Thus, let us now take inp () as the rectangular wave shown in Figure 13 and follow the way of Figures 9 and 10, in the sequential Figures 14 and 15.
Here too, the envelope of the resonant oscillations can be well outlined by considering out () at instances = / ; first of all / , 2/ , and 3/ , for which we, respectively, have the first maximum, the first minimum, and the second maximum of out ().
There are absolutely the same qualitative (geometric) reasons for resonance here, and Figure 15 explains that if the damping of ℎ() is weak, that is, some first sequential halfwaves of inp (−)ℎ() are similar, then the respective extreme values of () = out () form a linear increase in the envelope.
Figure 16 shows out () = () at these extreme points.Though it is not easy to find the precise out () everywhere, for the envelope of the oscillations, which passes through the extreme points, the resonant increase in the response amplitude is absolutely clear.
Figures 10, 14, 15, and 16 make it clear that many other waveforms with the correct period would likewise cause resonance in the circuit.Furthermore, for the overlapping to remain good, we can change not only inp (), but also ℎ().Making the form of the impulse response more complicated means making the system's structure more complicated, and thus graphical convolution is also a valuable starting point for studying resonance in complicated systems in terms of the waveforms.This point of view will be realized in Section 5 where we generalize the concept of resonance.Thus, using the algorithm of the graphical convolution, we make two more methodological steps; a pedagogical one in Section 4.4 and the constructive one in Section 5.
Let Us Try to "Discover" the Fourier Series in Order to
Understand It Better.The conclusion regarding the possibility of obtaining resonance using a nonsinusoidal input reasonably means that when pushing a swing with a child on it, it is unnecessary for the father to develop a sinusoidal force.Moreover, the nonsinusoidal input even has some obvious advantages.While the sinusoidal input wave leads to resonance only when its frequency has the correct value, exciting resonance by means of a nonsinusoidal wave can be done at very different frequencies (one need not to kick the swing at every oscillation), which is, of course, associated with the Fourier expansions of the force.
Let us see how, using graphical convolution, we can reveal harmonic structure of a function, still not knowing anything about Fourier series.For that, let us continue with the case of square wave input, but take now such a waveform with a period that is 3 times longer than the period of selfoscillations of the oscillator.Consider Figure 17.
This time, the more distant instances, = 3/ , 6/ , and 9/ , are obviously most suitable for understanding how the envelope of the oscillations looks.One sees that also for = 3 , the same geometric "resonant mechanism" exists, but the transfer from = to = 3 makes the excitation significantly less intensive.Indeed, see Figure 18 comparing the present extreme case of = 3/ to the extreme case of = / of Figure 15.
We see that each extreme overlap is now only one-third as effective as was the respective maximum overlap in the previous case.That is, at = 3/ , we now have what we previously had at = / , which means a much slower increase in the amplitude in time.
Since out () is now increased at a much slower rate, but 1/ is the same (i.e., the transient lasts the same time), the amplitude of the final periodic oscillations is respectively smaller, which means weaker resonance in terms of frequency response.
Let us compare the two cases of the square wave thus studied to the initial case of the sinusoidal function.The case of the "nonstretched" square wave corresponds to the input sin , while according to the conclusions derived in Figure 18, the case of the "stretched" wave corresponds to the input (1/3) sin .We thus simply (and roughly) reduce the change in period of the nonsinusoidal function to the equivalent change in amplitude of the sinusoidal function.
Let us now try-as a tribute to Joseph Fourier-to speak not about the same circuit influenced by different waves, but about the same wave influencing different circuits.Instead of increasing , we could decrease , thus testing the ability of the same square wave to cause resonance in the different oscillatory circuits.For the new circuit, the graphical procedure remains the same, obviously, and the ratio 1/3 of the resonant amplitudes in the compared cases of / = 3 and / = 1 remains.
In fact, we are thus testing the square wave using two simple oscillatory circuits of different self-frequencies.Namely, connecting in parallel to the source of the square wave voltage two simple oscillatory circuits with self-frequencies and 3 , we reveal for one of them the action of the square wave as that of sin and for the other as that of (1/3) sin 3 .
Let us check this result by using the arguments in the inverse order.The first sinusoidal term of series (17) roughly corresponds to the square wave with = (i.e., = ), and in order to make the second term resonant, we have to change the self-frequency of the circuit to = 3, that is, make = (1/3) , or = 3 , which is our second "experiment" in which the reduced to 1/3 intensity of the resonant oscillations is indeed obtained, in agreement with (17).
It is possible to similarly graphically analyze a triangular wave at the input, or a sequence of periodic pulses of an arbitrary form (more suitable for the father kicking the swing) with a period that is an integer of .
One notes that such figures as Figure 18 are relevant to the standard integral form of Fourier coefficients.However on the way of graphical convolution, this similarity arises only for the extremes ( out ()) max = | out ( )|, and this way is independent and visually very clear.
A Generalization of the Definition of
Resonance in Terms of Mutual Adjustment of inp () and ℎ() After working out the examples of the graphical convolution, we are now in position to formulate a wider -domain definition of resonance.
In terms of the graphical convolution, the analytical symmetry of (10): means that besides observing the overlapping of inp ( − ) and ℎ(), we can observe overlapping of ℎ( − ) and inp ().
In the latter case, the graph of ℎ(−) starts to move to the right at = 0, as was in the case with inp (−).
Though equality (18) is a very simple mathematical fact, similar to the equalities = and ( ⃗ , ⃗ ) = ( ⃗ , ⃗ ), in the context of graphical convolution, there is a nontriviality in the motivation given by (18), because the possibility to move ℎ(−) also suggests changing the form of ℎ(⋅), that is, starting to deal with a complicated system (or structure) to be resonantly excited.We thus shall try to define resonance, that : Because of the mutual compensation of two half-waves of ℎ(), only each third half-wave of ℎ() contributes to the extreme value of out (), and the maximum overlaps between ℎ() and inp () are now one-third as effective as before.The reader is asked (this will soon be needed!) to similarly consider the cases of = 5 and so forth.
is, the optimization of the peaks of out () (or its r.m.s.value), in the terms of more arbitrary waveforms of ℎ(), while the case of the sinusoidal ℎ(⋅), that is, of simple oscillator, appears as a particular one.
The Optimization of the Overlapping of 𝑓(𝜆) ≡ 𝑓 inp (𝑡−𝜆) and ℎ(𝜆) in a Finite Interval and Creation of the Optimal
Periodic inp ().Let us continue to assume that the losses in the system are small, that is, that ℎ() is decaying so slowly that we can speak about at least few oscillatory spikes (labeled by ) through which the envelope of the oscillations passes during its linear increase.
In view of the examples studied, the extreme points of out () are obtained when are the zero-crossings of ℎ(), because only then the overlapping of inp ( −) with ℎ() can be made maximal.
Comment.Assuming that the parameters of the type / 0 of the different harmonic components of ℎ() are different, one sees that for a nonsinusoidal damping ℎ(), the distribution of the zero-crossings of ℎ() can be changed with the decay of this function, and thus for a periodic inp () the condition or considered for ≫ 1 need not be satisfied in the whole interval of the integration (0 < < ) related to the case of ≫ .However since both the amplitude-type decays and the change in the intervals between the zeros are defined by the same very small damping parameters, the resulted effects of imprecision are of the same smallness.Both problems are not faced when we use the "generating interval" and employ ℎ () instead of the precise ℎ().The fact that any use of ℎ () is anyway associated with error of order −1 ∼ points to the expected good precision of the generalized definition of resonance.
Thus, { }, measured with respect to the time origin, that is, with respect to the moment when inp () and ℎ() arise, is assumed to be given by the known ℎ().Of course, we assume the system to be an oscillatory one, for the parameters and of our graphical constructions to be meaningful.
Having the linearly increasing sequence | out ( )| = belonging to the envelope of the oscillations and wishing to increase the finally established oscillations, obviously we have to increase the factor .
However since and the whole intensity of out () can be increased not only by the proper wave-form of inp (⋅), but also by an amplitude-type scaling factor, for the general discussion, some norm for inp (⋅), has to be introduced.
For the definitions of the norm and the scalar products of the functions appearing during adjustment of inp () to ℎ(), it is sufficient to consider a certain (for a fixed, not too large ) interval ( , +1 )-the one in which we can calculate .This interval can be simply (0, 1 ) or (0, ).
Respectively, the scalar product of two functions is taken as With these definitions, the set of functions defined for the purpose of the optimization in the interval ( , +1 ) forms an (infinite-dimensional) Euclidean space.
For the quantities that interest us, we have from ( 23 Not ascribing to "(⋅)" index "" is justified by the fact that the particular interval ( , +1 ) to be actually used, is finally chosen very naturally.
The basic relation | out ( )| = means that any local extremum of out () is a sum of such scalar products as (23).
Observe that the physical dimensions of ‖ ⋅ ‖ and (⋅, ⋅) are Observe also from ( 22) and ( 23) that and that if we take 2 ∼ 1 , that is, Thus, we finally have the following two points.
(a) We find the proper interval ( , +1 ) for creating the optimal periodic inp ().
(b) The proportionality ∼ ℎ in this interval is the optimal case of the influence of an oscillatory circuit by inp ().
Items (a) and (b) are our definition of the generalized resonance.The case of sinusoidal ℎ() is obviously included since the proportionality to ℎ() requires () to also be sinusoidal of the same period.
This mathematical situation is the constructive point, but the discussion of Sections 5.3 and 5.5 of the optimization of from a more physical point of view is useful, leading us to very compact formulation of the extended resonance condition.However, let us first of all use the simple oscillator checking how essential is the direct proportionality of to ℎ, that is, what may be the quantitative miss when the waveform of () differs from that of ℎ() in the chosen interval ( , +1 ).
An Example for a Simple
Oscillator.Let us compare the cases of the square (Figures 13, 14, and 15) and sinusoidal (Figure 10) input waves of the same period for defined in the interval (0, /2 = /).Of course, the norms of the input functions have to be equal for the comparison of the respective responses.(Note that in the consideration of the above figures, equality of the norms was not provided, and thus the following result cannot be derived from the previous discussions.) Let the height of the square wave be 1.Then, 2 inp ( − ) = 1 everywhere, and according to (22) the norm is obtained as √/.For obtaining the same norm for a sinusoidal input, we write it as sin and find > 0 so that that is, Because of the symmetry of the sinusoidal and squarewave inputs, in both cases inp () = inp ( − ) ≡ () in the interval (0, /).For either of the input waveforms the norm of inp () now equals √/, and for ℎ() = sin of the simple oscillator (the damping in this interval is ignored), we have, according to (24) and (32), as the upper bund.Thus, while for the response to the square wave we have (meaning "cos ≤ 1"), where the equality is obtained only when the vectors are mutually proportional ("" = 0), that is, similarly (may be opposite) directed: The latter relation is obvious, in particular because it is obvious that while rotation of the usual vector (say a pencil) when directing it in parallel to another vector (another pencil), the length of this vector is unchanged.This point is much more delicate regarding the norm of a function being adjusted to ℎ(), which is the "rotation" of the "vector" in the function space.Since the waveform of the function is being changed, its norm can be also changed.
Since our "vectors" are the time functions and the functional analog of ( 40) is (for simplicity, we sometimes write instead of − ) we very simply obtain, by the mathematical equivalence of the function and the vector spaces, condition (31); that is, only an () that is directly proportional to ℎ() can give an extreme value for .
Comments.
One can consider ∼ ℎ to be both a generalization and a direct analogy to the condition = o of the standard definitions of [1,[3][4][5].Then, both of the equalities, = o and = ℎ, appear in the associated theories as sufficient conditions for obtaining resonance in a linear oscillatory system.The norms become important at the next step, namely, regarding the theoretical conditions of system's linearity, which always include some limitations on intensity of the function/process, in any application.For applications, the real properties of the physical source of inp () (e.g., a voltage source) whose power will here be proportional to 2 obviously require to be limited.
The requirement of preserving the norm ‖()‖ during realization of () ∼ ℎ() also necessarily originates from the practically useful formulation of the resonance problem as the optimization problem that requires calculation of the optimized peaks (or rms value) of inp ().
If ℎ( + /2) ̸ = −ℎ(), then the interval in which the scalar products (i.e., the Euclidean functional space) are defined has to be taken over the whole period of ℎ(), that is, as = ∫ 0 inp ( − )ℎ().( = is a necessary condition.) The interval in which we define can be named the "generating" interval.
We can finally write the optimal out () that resulted from the optimal inp () as where the function ℎ (0,)periodic () is ℎ() in the generating interval, periodically continued for > .
We turn now to an informal "physical abstraction, " suggested by the comparison of the two Euclidean spaces.This abstraction leads us to a very compact formulation of the generalized definition of resonance.
A Symmetry Argument for Formulation of the Generalized
Definition of Resonance.For the usual vector space, we have well-developed vectorial analysis, in which symmetry arguments are widely employed.The mathematical equivalence of the two spaces under consideration suggests that such arguments-as far as they are related to the scalar productsare legitimized also in the functional space.
Recall the simple field problem in which the scalar field (e.g., electrical potential) is given by means of a constant vector ⃗ , and it is asked in what direction to go in order to have the steepest change of ( ⃗ ).
As the methodological point, one need not know how to calculate gradient.It is just obvious that only ⃗ , or a proportional vector, can show the direction of the gradient, since there is only one fixed vector given, and it is simply impossible to "construct" from the given data any other constant vector, defining another direction for the gradient.Figure 19: The generalized input includes both generator inputs and initial conditions.(This becomes trivial when the Laplace transform is used to transform a system scheme.)Considering the superposition of the output function of a linear system for such an input, we obtain the associated structure of the solution of the linear system in a form that is different from (3).Respectively, the output function is written as ZIR() + ZSR().
For ZIR. Homogeneous equation (zero generator inputs) plus the needed initial conditions.For ZSR.Given equation with zero initial conditions.
Figure 19 schematically illustrates this presentation of the linear response.
Figure 7 reduces Figure 19 to what we actually need for the processes with zero initial conditions.The logical advantage of the presentation ZIR + ZSR over (3) becomes clear in the terms of the superposition.
The ZSR includes both, the decaying transient needed to satisfy zero initial conditions and the final steady state given in its general form by the following integral (A.8).The oscillations shown in Figure 3 are examples of ZSR.
The separation of the solution function into ZIR and ZSR is advantageous, for example, when the circuit is used to analyze the input signal, that is, when we wish to work only with the ZSR, when nonzero initial conditions are just redundant inputs.
The convolution integral (10) is ZSR.When speaking about system with constant parameters having one input and one output, the Laplace transform of ZSR() equals () inp () where () is the "transfer function" of the system, that is, the Laplace transform of h(t).Each time when we speak about transfer function, we speak about ZSR, that is, zero initial conditions.
It is easy to write out () for our problem.Using the known formula for Laplace transform of periodic function and setting the optimal inp ( − ) ≡ −ℎ(), ∈ [0, ], that is, inp () ≡ −ℎ( − ), ∈ [0, ], where is the period (in the sense of the generating interval) of ℎ(), we have, for the periodically continued inp (), the Laplace transform of our out () as (see (43
Figure 2 :
Figure2: The envelope of resonant oscillations, according to the definition of resonance in[3,4] and many other technical textbooks.When this steady state is attained?
Figure 5 :
Figure 5: The methodological points regarding the study of resonance in the present work.
Figure 8 :
Figure 8: The functions for the simplest example of convolution.(A first-order circuit with an input block pulse.)
Figure 12 :
Figure 12: The envelope of out () obtained for the simplified ℎ() shown in Figure 11.
Figure 13 :Figure 14 :
Figure 13: The rectangular wave at the input.
Figure 16 :Figure 17 :
Figure 16: Linear increase of the envelope (ideal in the in the lossless situation) for the square wave input.Compare to Figures 1, 3, and 12.
Figure18: Because of the mutual compensation of two half-waves of ℎ(), only each third half-wave of ℎ() contributes to the extreme value of out (), and the maximum overlaps between ℎ() and inp () are now one-third as effective as before.The reader is asked (this will soon be needed!) to similarly consider the cases of = 5 and so forth.
) ℎ (0,)periodic () , tio n wit h ℎ( ) Figure19schematically illustrates this presentation of the linear response.Figure7reduces Figure19to what we actually need for the processes with zero initial conditions.The logical advantage of the presentation ZIR + ZSR over (3) becomes clear in the terms of the superposition.The ZSR includes both, the decaying transient needed to satisfy zero initial conditions and the final steady state given in its general form by the following integral (A.8).The oscillations shown in Figure3are examples of ZSR.The separation of the solution function into ZIR and ZSR is advantageous, for example, when the circuit is used to analyze the input signal, that is, when we wish to work only with the ZSR, when nonzero initial conditions are just redundant inputs.The convolution integral (10) is ZSR.When speaking about system with constant parameters having one input and one output, the Laplace transform of ZSR() equals () inp () where () is the "transfer function" of the system, that is, the Laplace transform of h(t).Each time when we speak about transfer function, we speak about ZSR, that is, zero initial conditions.It is easy to write out () for our problem.Using the known formula for Laplace transform of periodic function and setting the optimal inp ( − ) ≡ −ℎ(), ∈ [0, ], that is, inp () ≡ −ℎ( − ), ∈ [0, ], where is the period (in the sense of the generating interval) of ℎ(), we have, for the periodically continued inp (), the Laplace transform of our out () as (see (43)) out () = () inp () = () ∫ 0 inp () − 1 − − inp ( +1 − ) , < < +1 .
36)only, for the response to the input sin we have, for the found, Analogy with the Usual Vectors.In the mathematical sense, the set of functions that can be used for the optimization of is analogous to the set of usual vectors. | 11,659.2 | 2013-02-13T00:00:00.000 | [
"Physics"
] |
N-Acetylglucosamine drives myelination by triggering oligodendrocyte precursor cell differentiation.
Myelination plays an important role in cognitive development and in demyelinating diseases like multiple sclerosis (MS), where failure of remyelination promotes permanent neuro-axonal damage. Modification of cell surface receptors with branched N -gly-cans coordinates cell growth and differentiation by controlling glycoprotein clustering, signaling, and endocytosis. GlcNAc is a rate-limiting metabolite for N -glycan branching. Here we report that GlcNAc and N -glycan branching trigger oligodendrogenesis from precursor cells by inhibiting platelet-derived growth factor receptor- a cell endocytosis. Supplying oral GlcNAc to lactating mice drives primary myelination in newborn pups via secretion in breast milk, whereas genetically blocking N -glycan branching markedly inhibits primary myelination. In adult mice with toxin (cuprizone)-induced demyelination, oral GlcNAc prevents neuro-axonal damage by driving myelin repair. In MS patients, endogenous serum GlcNAc levels inversely correlated with imaging measures of demyelination and microstructural damage. Our data identify N -glycan branching and GlcNAc as critical regulators of primary myelination and myelin repair and suggest that oral GlcNAc may be neuroprotective in demyelinating diseases like MS.
Myelination plays an important role in cognitive development and in demyelinating diseases like multiple sclerosis (MS), where failure of remyelination promotes permanent neuro-axonal damage. Modification of cell surface receptors with branched N-glycans coordinates cell growth and differentiation by controlling glycoprotein clustering, signaling, and endocytosis. GlcNAc is a rate-limiting metabolite for N-glycan branching. Here we report that GlcNAc and N-glycan branching trigger oligodendrogenesis from precursor cells by inhibiting platelet-derived growth factor receptor-a cell endocytosis. Supplying oral GlcNAc to lactating mice drives primary myelination in newborn pups via secretion in breast milk, whereas genetically blocking N-glycan branching markedly inhibits primary myelination. In adult mice with toxin (cuprizone)-induced demyelination, oral GlcNAc prevents neuroaxonal damage by driving myelin repair. In MS patients, endogenous serum GlcNAc levels inversely correlated with imaging measures of demyelination and microstructural damage. Our data identify N-glycan branching and GlcNAc as critical regulators of primary myelination and myelin repair and suggest that oral GlcNAc may be neuroprotective in demyelinating diseases like MS.
Myelination of axons by oligodendrocytes in the central nervous system plays a critical role in normal cognitive development and function and in demyelinating disease such as multiple sclerosis (MS) (1,2). In addition to speeding conduction of the action potential, myelination supports axon health and survival (3)(4)(5). In MS, remyelination of demyelinated axons by oligodendrocytes is often incomplete despite the presence of abundant oligodendrocyte precursor cells (OPC) throughout the brain (6)(7)(8)(9)(10). The molecular mechanisms that block remyelination in MS are incompletely understood, and there is a lack of therapies to promote myelin repair. Failure to adequately remyelinate is influenced by the microenvironment of the MS lesion, where reactive astrocytes, microglia, and macrophages produce various inhibitory factors leading to disruption in OPC differentiation, oligodendrocyte migration, process outgrowth, and attachment to axons (11). Multiple studies have identified molecules that limit OPC differentiation into myelin-producing cells including LINGO-1 (12), various extracellular matrix proteins (13,14), and myelin debris (15). Thus, increasing OPC differentiation has become an important strategy for promoting remyelination in MS and other demyelinating diseases (16).
Given the diverse and pleiotropic effects of N-glycan branching, identifying and manipulating regulatory mechanisms may provide new insights into disease pathogenesis and opportunities for therapeutic intervention. In this regard, metabolism is a critical regulator of N-glycan branching by controlling availability of the sugar-nucleotide UDP-GlcNAc, the substrate used by the Mgat family of N-glycan branching enzymes (19,20,24,37,38). UDP-GlcNAc is generated in the hexosamine pathway de novo from glucose or by salvage from GlcNAc. Extracellular GlcNAc enters cells through micropinocytosis, with supplementation of cells or mice with GlcNAc inhibiting pro-inflammatory T-cell responses and murine models of inflammatory demyelination by enhancing N-glycan branching (19,31,37,38).
Targeted deletion of galectin-3, a ligand for N-glycan branching, leads to decreased production of oligodendrocytes, poor myelination of axons, and reduced ability to remyelinate after injury (39). In humans, loss-of-function mutations in PGM3, a gene required to generate branched N-glycans from GlcNAc, display reduced branching and severe CNS hypomyelination (40). Platelet-derived growth factor-AA plays a critical role in oligodendrogenesis (41), with its receptor (PDGFRa) expressed in oligodendrocyte progenitor/precursor cells (42). In epithelial cells, N-glycan branching deficiency reduces PDGFRa surface expression by enhancing loss via endocytosis, leading to reduced signaling (18). Thus, here we examine the hypothesis that GlcNAc may provide an oral therapeutic to raise N-glycan branching in OPCs, promote myelination, and reduce the potential for neurodegeneration by initiating oligodendrocyte differentiation via enhanced PDGFRa surface expression and signaling in OPCs.
To confirm a role for N-glycan branching in oligodendrogenesis, we first used kifunensine to inhibit N-glycan branching (23) in NSC induced to differentiate by exogenous PDGF-AA. Reducing branching in NSCs using kifunensine significantly reduced PDGFRa surface expression and the number of O4 1 cells induced by PDGF-AA differentiation media (Fig. S1F). To confirm this result genetically, we utilized Mgat5 2/2 and doxycycline-inducible Mgat1 f/f /tetO-cre/ROSA-rtTA mice (17,22). Mgat5 deletion mildly reduces N-glycan branching, whereas Mgat1 deletion completely blocks N-glycan branching (Fig. S1A). In vitro doxycycline treatment of NSC from Mgat1 f/f / tetO-cre/ROSA-rtTA mice readily induced deletion of Mgat1, as measured by loss of L-PHA binding (Fig. S1G). Mgat5-and Mgat1-deleted NSCs displayed decreased surface levels of PDGFRa and were markedly reduced in their ability to differentiate into O4 1 pre-oligodendrocytes in response to PDGF-AA differentiation media ( Fig. 1E and Fig. S1G). In Mgat5 heterozygous NSCs, small reductions in N-glycan branching also inhibited oligodendrocyte differentiation (Fig. 1E). Thus, subtle changes in N-glycan branching can markedly impact oligodendrocyte differentiation from NSC in vitro.
GlcNAc and N-glycan branching promote primary myelination in mice Next, we examined whether oral GlcNAc can cross the blood-brain barrier to promote oligodendrocyte differentiation and myelination in vivo. Adult mice (n = 6) and lactating mothers were provided with/without 13 C-labeled GlcNAc ([U 13 C] GlcNAc) (10) in their drinking water, and metabolites derived from perfused brains were analyzed by liquid chromatography-tandem mass spectrometry (LC-MS/MS). Although this method does not resolve stereoisomers of N-acetylhexosamines (i.e. GlcNAc versus GalNAc), a reversible 4-epimerase equilibrates UDP-GlcNAc and UDP-GalNAc in vivo (19). LC-MS/ MS identified UDP-[U 13 C]-N-acetylhexosamines (UDP-[U 13 C]-HexNAc) in treated adult female mouse brains and in the brains of their suckling pups ( Fig. 2A). This demonstrates that orally delivered GlcNAc is not only able to cross the bloodbrain barrier and be metabolized to UDP-GlcNAc by CNS cells but is also secreted at sufficient levels in breast milk to raise UDP-GlcNAc in the brains of suckling pups.
To assess whether oral GlcNAc promotes oligodendrogenesis in vivo in the absence of inflammation, we examined primary myelination in mice during the early perinatal period. We provided GlcNAc or vehicle to pregnant/lactating female mice from E12.5, postnatal day 3 (P3) or P5 through to P8. Indeed, oral GlcNAc increased N-glycan branching in PDGFRa 1 cells and the number of pre-oligodendrocytes (PDGFRa 1 O4 1 ), immature oligodendrocytes (PDGFRa 2 O4 1 ), and mature oligodendrocytes (MBP 1 ), with little effect on the number of OPCs (PDGFRa 1 O4 2 ) ( Fig. 2B and Fig. S2A). The lack of change in OPC number is consistent with our in vitro data (Fig. 1D) and suggests that GlcNAc promotes OPC self-renewal and/or NSC differentiation to OPC, resulting in a stable number of OPCs. Consistent with increased oligodendrogenesis, oral GlcNAc also increased primary myelination when provided to pups from P3-8, as assessed by increased levels of staining for myelin basic protein (MBP) and myelin (as measured by FluoroMyelin) (Fig. 2C).
To confirm that N-glycan branching promotes myelination in the absence of inflammation in vivo, we generated mice with tamoxifen-inducible deletion of Mgat1 only in OPCs and oligodendrocytes, namely Mgat1 f/f Plp1-cre/ERTc 1 mice. Because proteolipid protein (PLP) promoter-driven Cre expression only becomes restricted to the oligodendrocyte lineage (OPC and oligodendrocyte) at P28 (43), we focused on adult mice. OPCs continue to proliferate and generate significant new myelin in adulthood, with myelination gradually doubling from ;2-10 months (44)(45)(46). Tamoxifen readily induced Mgat1 deletion in O4 1 oligodendrocytes but not O4 2 cells in vivo, as determined by loss of L-PHA binding by flow cytometry (Fig. S2B). Consistent with slow accumulation of new myelin from OPCs during adulthood, 2 weeks following tamoxifen treatment (Mgat1 deletion) adult Mgat1 f/f Plp1-cre/ERTc 1 mice did not display significant differences in brain levels of MBP or myelin (FluoroMyelin) relative to tamoxifen-treated controls (Fig. S2C). However, 8 weeks after initial tamoxifen treatment, Mgat1 deletion resulted in significant reductions in levels of MBP, myelin (FluoroMyelin), and number of total (Olig2 1 ) and mature (Olig2 1 CC1 1 ) oligodendrocytes, along with increased numbers of immature (Olig2 1 CC1 2 ) oligodendrocytes ( Fig. 2D and Fig. S2D). To confirm that these results primarily arose from a defect in new myelin formation from OPCs, rather than a defect in mature oligodendrocytes, we generated Mgat1 f/f Pdgfra-creER 1 mice where tamoxifen induces deletion of Mgat1 in OPCs but not mature oligodendrocytes. Indeed, 8 weeks but not 2 weeks after tamoxifen treatment, deletion of Mgat1 in OPCs significantly reduced levels of MBP, myelin (FluoroMyelin), and number of total (Olig2 1 ) and mature (Olig2 1 CC1 1 ) oligodendrocytes along with increased numbers of immature (Olig2 1 CC1 2 ) oligodendrocytes ( Fig. 2E and Fig. S2E). Tamoxifen has been reported to promote myelination (47); however, Mgat1 deletion reduced myelination despite potential positive effects of tamoxifen. Together, these data demonstrate that GlcNAc and N-glycan branching promote primary myelination in mice by driving OPC differentiation.
GlcNAc prevents damage to demyelinated axons by promoting myelin repair
To explore whether GlcNAc can promote remyelination in adult mice following myelin injury, we utilized the cuprizone model of nonimmune induced demyelination/remyelination on Mgat5 1/2 and WT C57BL/6 mice. Cuprizone at 0.2% induces demyelination in the corpus callosum by 3 weeks, with maximum demyelination at 5-6 weeks. Partial remyelination via maturation of OPCs begins at the height of demyelination and becomes complete ;3-5 weeks after cuprizone withdrawal. Given this, we examined four different treatment regimens ( Fig. 3A). When GlcNAc was concurrently provided during the final 3 weeks of a 6-week cuprizone (0.2%) exposure in WT mice, GlcNAc prevented loss of motor function (as measured using rotarod fall latency) while increasing MBP levels and reducing axonal damage (as measured by reduced accumulation of amyloid precursor protein (APP)) in the corpus callosum (Fig. 3C). To address potential confounding effects of GlcNAc on inhibiting demyelination by cuprizone during concurrent treatment, we initiated treatment of WT mice for 2 weeks or Mgat5 1/2 mice for 1 or 4 weeks of GlcNAc only after cuprizone was stopped (Fig. 3A). This revealed that GlcNAc is the average of fluorescence intensity of the area depicted in red of three brain slices per mouse. One-sided t test. D and E, the indicated adult mice (10 weeks old) were treated with tamoxifen at weeks 0 and 4 and sacrificed at week 8, and brains were analyzed by immunofluorescence microscopy for MBP, myelin (FluoroMyelin), Olig2 1 , and CC1 1 cells (n = 5 (2 male, 3 female), 8 (6 male, 2 female) (D), and n = 5 (2 male, 3 female), 4 (2 male, 2 female) (E); one-sided t test). Each data point in the graphs represents average fluorescence or cell counts of the highlighted area from three (D) or two (E) different brain slices per mouse. enhanced levels of MBP, myelin (FluoroMyelin), and mature oligodendrocytes (CC1 1 Olig2 1 ) while reducing the amount of degraded MBP (dMBP)/myelin degeneration within the corpus callosum (Fig. 3, D-F). dMBP was detected by an antibody that specifically recognizes areas of myelin degeneration (48). EM analysis confirmed these results, revealing that GlcNAc enhanced the number of myelinated axons and the degree of myelination as measured by the g-ratio while also reducing axon loss and the number of degenerating and dystrophic/ swollen axons ( Fig. 3G and Fig. S3A). GlcNAc also enhanced the number of paranodes, which increase with remyelination ( Fig. S3A) (49). Enhancement of myelination by GlcNAc depends on time, because the increase in FluoroMyelin staining in Mgat5 1/2 mice was ;2-fold greater with 4 weeks versus 1 week of GlcNAc treatment (Fig. 3, D and F). Importantly, the subtle reductions in N-glycan branching induced in Mgat5 1/2 mice did not alter baseline levels of myelin, yet they reduced remyelination following cuprizone-induced injury relative to Mgat5 1/1 control mice (Fig. S3, B and C). Together, these data indicate that GlcNAc and N-glycan branching promote myelin repair and provide neuro-protection to axons following demyelination.
A marker of serum GlcNAc inversely associates with imaging markers of myelin-axon damage To explore whether alterations in GlcNAc may impact myelination status in MS patients, we used a cohort of 180 MS patients to correlate endogenous serum HexNAc levels with measures of white matter damage by MRI of the brain. Increased T2w lesion volume and count on brain MRI are measures of the extent and frequency of demyelination, respectively. T2w lesion volume correlated with lower HexNAc serum levels (Fig. 4A, p = 0.020), whereas T2w lesion count did not (p = 0.387). Likewise, patients with contrast-enhancing lesions, a marker of active inflammation in MS, had similar serum Hex-NAc levels to those without (p = 0.866), suggesting that GlcNAc primarily affects the extent of permanent demyelination rather than initiation of inflammatory demyelination. T1w/T2w ratio maps (50) reflect microstructural integrity of myelin/axons in normal-appearing white matter (51) and cortical gray matter (52,53). With age and gender as covariates, low serum HexNAc levels were strongly associated with lower T1w/T2w ratios indicating microstructural damage of myelin/ axons in both normal-appearing white matter (r 2 = 0.18, p = 2.25 3 10 25 ) and gray matter (r 2 = 0.23, p = 1.32 3 10 26 ), (Fig. 4, B and C). Together, these data are consistent with our mouse data and suggest that GlcNAc may promote myelination in MS.
Discussion
Here we report a novel pathway for regulating oligodendrogenesis, primary myelination, and myelin repair by N-glycan branching and GlcNAc. Our data demonstrate that GlcNAc and N-glycan branching are neuroprotective for demyelinated axons by promoting oligodendrogenesis and myelination from OPCs. The association of low endogenous GlcNAc with increased myelin-axon microstructural damage in MS patients suggests that this mechanism is relevant to pathogenesis of MS. This hypothesis is consistent with recent data suggesting that some MS patients are blocked in their ability to generate new myelin from progenitors (9). The mechanisms that drive neurodegeneration in MS are poorly understood, and our data raise the possibility that alterations in N-glycan branching and/or GlcNAc availability may promote neurodegeneration by blocking remyelination. Indeed, we find that low levels of serum GlcNAc in MS patients is associated with a progressive disease course, clinical disability, and multiple neuroimaging measures of neurodegeneration (unpublished data).
Providing oral GlcNAc to lactating female mice increased primary myelination in nursing pups via delivery of GlcNAc in breast milk. In humans, GlcNAc is a major component of breast milk oligosaccharides (;1.5 to ;0.6 mg/ml from term to 13 weeks) (54) and can be released as a monosaccharide by infant microbiota (55). Thus, breastfed newborns consume ;0.5-1.5 g of GlcNAc per day or ;100-300 mg/kg/day for a 5-kg infant. This is similar to the ;160 mg/kg/day dose that we observed to promote myelination in adult mice. In contrast to human breast milk, GlcNAc is not a significant component of commercial baby formula. Breastfed infants display increased myelination and cognitive function compared with formula-fed infants (56,57), but the mechanism is unknown. Our data suggest that GlcNAc in human breast milk may be a major driver of this effect.
GlcNAc and N-glycan branching markedly enhanced cell surface expression of PDGFRa, a critical initiator of OPC differentiation. However, GlcNAc and N-glycan branching likely affect other cell surface receptors/transporters in OPCs to drive myelination and promote axonal health. For example, cell motility is significantly enhanced by N-glycan branching via reduced clustering of integrins (58,59). Such activity in OPCs would enhance their ability to traffic to sites of demyelination and promote myelin repair. N-glycan branching also stimulates glucose transporter surface retention to enhance glucose uptake (19,25). Glucose transporter 1 (GLUT1) in oligodendrocytes promotes axonal health and function by increasing transfer of lactate to axons via increased glucose supply to the glycolytic pathway in oligodendrocytes (60). Thus, part of the neuroprotective effect of GlcNAc following myelin repair may be through enhanced transport of glucose into oligodendrocytes.
GlcNAc and/or N-glycan branching also play important roles in suppressing T cell-and B cell-mediated inflammatory demyelination (17, 28, 30-33, 35, 37, 38). In T cells, GlcNAc and . Oral GlcNAc promotes remyelination and limits axonal injury. A, the ability of GlcNAc to promote myelin repair in vivo was assessed using the cuprizone model, with oral GlcNAc treatment (1 mg/ml in drinking water) during the last 3 weeks of a 6-week cuprizone exposure (I, active phase treatment) or after 5 weeks of cuprizone treatment for 1, 2, or 4 weeks (II, III, and IV, recovery phase treatment). WT (I and III) or Mgat5 heterozygous (II and IV) C57BL/6 mice were used. B, area of the medial corpus callosum (CC) analyzed. C-F, shown is the latency to fall in an accelerated rotarod test and immunofluorescence staining of the corpus callosum for MBP, degraded MBP (dMBP), APP, myelin (FluoroMyelin), and/or CC1/Olig2 from WT mice with active phase GlcNAc treatment (C, n = 5,5, all male), Mgat5 heterozygous mice with recovery phase GlcNAc treatment for 1 week (D, n = 8 (5 male, 3 female), 7 (4 male, 3 female)), WT mice with recovery phase GlcNAc treatment for 2 weeks (E, n = 11,14 for rotarod, n = 5,7 for immunofluorescence, all male), or Mgat5 heterozygous mice with active phase GlcNAc for 4 weeks (F, n = 6,6 with 4 male and 2 female per group). Data points represents average fluorescence from 3-4 different brain slices per mouse. Rotarod p-values by 2-way ANOVA with Sidak's multiple comparisons post-test. Immunofluorescence p-values by one-tailed t test. Scale bar = 50 mM. G, the CC of mice from the 4-week recovery phase treatment group in panel (F) were analyzed by EM (n = 3,3 with 2 male and 1 female per group). Representative electron micrographs in control and GlcNAc treatment groups are shown, scale bar = 1 mM. Filled and empty arrowheads indicate examples of myelinated and unmyelinated dystrophic axons, respectively. Plot of g-ratio versus axon diameter (n = 214 and 222 axons) was counted blindly from two fields (105mm 2 ) per mouse (p-value comparing best fit curves from nonlinear regression, R 2 is the goodness of fit for each group). Numbers of total axons, myelinated axons, and dystrophic axons (axon diameter . 0.7 mM) were counted blindly in six fields (105mm 2 ) per mouse in each treatment group (n = 18, 18, p-value by one-sided t test). All error bars are standard error.
N-glycan branching suppress activation signaling via the T-cell receptor (17,21,22), inhibit pro-inflammatory T H 1 and T H 17 differentiation (24,38), and enhance anti-inflammatory T regulatory cell differentiation (24). B-cell depletion is a potent therapy in MS, predominantly acting by suppressing innate antigen presenting cell function rather than via antibody production (61,62). N-glycan branching in B cells reduces pro-inflammatory innate signaling via toll-like receptors and inhibits antigen presenting cell activity, yet it promotes adaptive responses through the B-cell receptor (28). Thus, oral GlcNAc is uniquely positioned as a therapeutic to reverse three major targets driving MS pathogenesis, namely pro-inflammatory T-cell responses, pro-inflammatory innate B-cell activity, and myelin repair. No current MS therapy has such diverse mechanisms of action.
Concentrations of GlcNAc required to raise N-glycan branching in vivo are markedly lower than those required for in vitro activity (37,38). This is largely driven by GlcNAc entering cells by macropinocytosis, and therefore both time and rate of membrane turnover can influence concentrations of GlcNAc required to raise N-glycan branching (37,38). Thus, short-term in vitro experiments require high concentrations to raise intracellular GlcNAc levels quickly, whereas primary cells remain viable. In contrast, cells can be exposed to GlcNAc over a longer time period in vivo, allowing lower concentrations to be effective at raising N-glycan branching. The rate of macropinocytosis may also be much higher in vivo compared with in vitro.
GlcNAc is also known to be highly safe in humans. In addition to breastfed infants consuming significant quantities, large intravenous doses of GlcNAc (20 g and 100 g) in humans demonstrated no toxicity issues and no alterations in blood glucose or insulin (63,64). Moreover, treatment with insulin had no effect on the serum t 1/2 of GlcNAc (63). Oral GlcNAc (3-6 g/ day) has also been used in 12 children with inflammatory bowel disease for ;2 years without reported toxicities and/or side effects (65). In rats, chronic systematic toxicological studies at doses of 2323-2545 mg/kg/day for up to 114 weeks found no toxicity (66,67). Coupled with availability as a dietary supplement, oral GlcNAc may provide a potent, inexpensive, and safe therapy for MS. Large double-blind placebo-controlled trials are warranted to investigate this hypothesis.
Mouse brain and neural stem cell isolation and analysis
Mice were bred and utilized as approved by the University of California, Irvine Institutional Animal Care and Use Committee. Dorsal forebrain cortical tissue was dissected from the medial ganglionic eminence at embryonic day 12.5 (E12.5) of CD1 mice (Charles River Laboratories) or Mgat5 2/2 C57BL/6 mice and their WT littermates and placed in dissection buffer: PBS, 0.6% glucose, 50 units/ml Pen/Strep. Tissue from multiple embryos within the same litter were pooled, and a subsequent culture from a single litter was considered a biological repeat. The tissue was dissociated using 0.05% trypsin-EDTA at 37°C for 10 min, followed by treatment with soybean trypsin inhibitor (Life Technologies). Dissociated cells were resuspended in proliferation medium containing DMEM, 13 B27, 13 N2, 1 mM sodium pyruvate, 2 mM L-glutamine, 1 mM N-acetylcysteine, 20 ng/ml epidermal growth factor (EGF) (PeproTech), 10 ng/ml basic fibroblast growth factor (bFGF) (PeproTech), and 2 mg/ml heparin, seeded at 150,000 cells/ml (nontissue culturetreated plastic plates), and grown as nonadherent spheres. Cell cultures were passaged approximately every 3 days using enzyme-free NeuroCult Chemical Dissociation Kit (mouse) (Stemcell Technologies). The cultures were passaged at least once prior to experimental use. For experiments, passaged cells were cultured in proliferation media (bFGF and EGF) or differentiation media (bFGF (10 ng/ml) and PDGF-AA (10 ng/ml); Life Technologies) for 48 h with or without the presence of GlcNAc (Ultimate Glucosamine, Wellesley Therapeutics) or kifunensine (GlycoSyn). Neurospheres were dispersed using the enzyme-free NeuroCult kit before being analyzed by flow cytometry using one or more of the following antibodies: anti-
GlcNAc treatment of mouse pups
GlcNAc (1 mg/ml) in drinking water was provided to pregnant PL/J mothers or mothers who recently delivered pups and were nursing their young. After the treatment period, pups were anesthetized with isoflurane and cardiac-perfused with PBS. Pup and fetal brains were removed and homogenized by trituration using glass pipettes in PBS with 5% FBS. The cells were then stained with antibodies and analyzed by flow cytometry using antibodies described above. For immunofluorescence analysis of pup brains, pups were quickly decapitated, and brains were harvested and fixed in 4% paraformaldehyde overnight.
[U 13 C]GlcNAc treatment of mice [U 13 C]GlcNAc was purchased from Omicron Biochemicals and put in the drinking water at 1 mg/ml of female mice aged 8 weeks for 3 days. Fresh solution of [U 13 C]GlcNAc in drinking water was provided each day. After 3 days, mice were anesthetized with isoflurane and underwent cardiac perfusion with 50 ml of PBS. Brains were harvested and snap frozen in liquid nitrogen. Tissues were cut into 0.04-g pieces and crushed mechanically before undergoing extraction as described below ("Targeted LC-MS/MS"). Levels of UDP-[U13C]GlcNAc were measured by LC-MS/MS analysis as described below ("Targeted LC-MS/MS").
Tamoxifen-induced deletion of Mgat1
Mgat1 f/f Plp1-cre/ERTc 1 and Mgat1 f/f Pdgfra-creER 1 were generated by crossing our Mgat1 f/f mice with Plp1-cre/ERTc 1 and Pdgfra-creER 1 lines from The Jackson Laboratory. Tamoxifen was dissolved in corn oil overnight at 37°C at a concentration of 20 mg/ml. Mgat1 f/f Plp1-cre/ERTc 1 and Mgat1 f/f Pdgfra-creER 1 mice (mean age P71.24, S.D. 1.393) and their control Mgat1 f/f littermates were injected intraperitoneally with tamoxifen (75 mg/kg) daily for 3 days starting on day 0 and sacrificed at 2 weeks or retreated with tamoxifen and sacrificed at 8 weeks. Mice were sacrificed following anesthesia and cardiac perfusion with PBS. Brains examined by flow cytometry were first homogenized by trituration using glass pipettes in PBS with 5% FBS. Brains examined for myelin content were dropfixed in 4% paraformaldehyde overnight.
Cuprizone-induced demyelination
Cuprizone at 0.2% induces demyelination in the corpus callosum by 3 weeks, with maximum demyelination at 5-6 weeks (68). 8-week-old C57BL/6 mice purchased from The Jackson Laboratory or 8-week-old Mgat5 1/2 C57BL/6 mice were treated with 0.2% cuprizone (Sigma-Aldrich) mixed into milled rodent chow for 6 weeks for the active phase treatment and 5 weeks for the recovery treatment. During active phase treatment, GlcNAc (1 mg/ml) in drinking water or just drinking water (control) was provided for the last 3 weeks of cuprizone treatment. For the recovery phase treatment, GlcNAc in drinking water or control was provided after cuprizone treatment had been stopped. Mice were anesthetized and underwent cardiac perfusion with 4% paraformaldehyde in PBS or 4% paraformaldehyde plus 0.5% glutaraldehyde in sodium cacodylate buffer for immunofluorescence or electron microscopic analysis, respectively. Brains were then fixed overnight in perfusion solution.
Accelerated rotarod
One day prior to cuprizone treatment, mice were trained on the rotarod by allowing them to run three 5-min trials at a constant 30 rotations per minute (RPM). Mice then underwent weekly testing during cuprizone and GlcNAc treatment on an accelerating rotarod starting at 4 rpm and increasing to 40 rpm over 5 min. Latency for mice to fall was recorded. If a mouse was not running on the rotarod by holding on for three turns, this was considered a fall. For the active phase treatment, one trial was run every week. For the recovery phase treatment, three trials were run for each mouse each week and latencies were averaged. As expected with cuprizone treatment, performance degraded as treatment progressed. Mice whose performance did not drop below a predetermined threshold (200 s) were not used in analysis.
Immunofluorescence analysis
For NSC immunofluorescence, whole neurospheres were seeded onto laminin-coated coverslips (Neuvitro) in proliferation medium. After 24 h, proliferation medium was removed and replaced with differentiation medium (same components as proliferation medium but excluding EGF, bFGF, and heparin) to induce differentiation. For analysis of mouse brains, brains were incubated in 30% sucrose for at least 72 h, embedded in optimal cutting temperature compound ( (69)(70)(71), whereas co-staining for CC1 and Olig2 are markers of mature oligodendrocytes (72,73). To examine the amount of myelin, slices were incubated in FluoroMyelin (1:300; F34651, Thermo Fisher Scientific) for 45 min. Images were acquired on a Keyence fluorescence microscope. Mean fluorescence intensity of the medial corpus callosum was measured using ImageJ.
EM
Three mice from each treatment group (control and GlcNAc) were selected randomly for EM analysis (before other investigations were performed). Portions of these brains from 0 to 21 bregma were rinsed in 0.1 M cacodylate buffer overnight and again for 15 min the next day. 2 3 1-mm blocks of the corpus callosum were dissected out and postfixed with 1% osmium tetroxide in 0.1 M cacodylate buffer for 1 h, rinsed in double distilled H 2 O, dehydrated in increasing serial dilutions of ethanol (70%, 85%, 95%, and 100% 3 2) for 10 min each, put in propylene oxide (intermediate solvent) for 23 10 min, incubated in propylene oxide/Spurr's resin (1:1 mix) for 1 h, and then incubated in Spurr's resin overnight. The blocks were put in a fresh change of resin in flat embedding molds the next day and polymerized overnight at 60°C. The blocks were sectioned at 1 mm using a Leica Ultracut UCT ultramicrotome. Floating sections were stained in toluidine blue (1% toluidine blue and 2% sodium borate in double distilled H 2 O) at 60°C for 3 min., mounted on slides, and cover-slipped. Ultrathin sections were sectioned at 70 nm using a Leica Ultracut UCT ultramicrotome. Sections were mounted on 150 mesh copper grids, stained with uranyl acetate and lead citrate, and viewed using a JEOL 1400 electron microscope. Images were captured using a Gatan digital camera. A blinded rater analyzed images by calculating the g-ratio (ratio of the diameter of the axon excluding the myelin sheath divided by the axon diameter including the myelin sheath) and counting the number of total axons, myelinated axons, dystrophic axons (defined as axon diameter . 0.7 mm), degenerating axons, and paranodes. Degenerating axons were identified as axonal swellings containing more than five clustered dark mitochondria and lysosomes. Paranodes were identified as axons with close proximity of the axolemma with the inner membrane of the myelin sheath with a surrounding cytoplasmic portion of oligodendrocyte.
MS patient cohort
MS patients were recruited from the neuroimmunology clinical trial unit at the NeuroCure Clinical Research Center, Charité -Universitätsmedizin Berlin (Table S1). Inclusion criteria were based on the 2010 revised McDonald criteria, stable immunomodulatory therapy (relapsing-remitting MS) or no treatment (primary progressive MS and secondary progressive MS). Exclusion criteria were acute relapse and/or corticosteroids within 6 months prior to inclusion. Disease course was determined under strict adherence to the 1996 Lublin criteria (74). Blood draws were fasting. The study was approved by the local ethics committee of Berlin (Landesamt für Gesundheit und Soziales (LAGeSo)). All study participants gave written informed consent. Studies were conducted in conformity with the 1964 Declaration of Helsinki in its currently applicable version.
T2w lesion segmentation was performed as previously described (75) using a semi-automated procedure including image co-registration using FLIRT (FMRIB Software Library, Oxford, UK) and inhomogeneity correction as embedded into the MedX v3.4.3 software package (Sensor Systems Inc., Sterling, VA, USA). Bulk white matter lesion load and lesion count of T2w scans were routinely measured using MedX.
For calculation of T1w/T2w ratio maps, MPRAGE, FLAIR, and T2w scans were reoriented to standard space, bias field corrected, and cropped to a robust field of view using FSL 5.0.9 (76). The MPRAGE and FLAIR scans were then linearly co-registered to T2w using FSL FLIRT and then registered to Montreal Neurological Institute space and brain extracted using the Brain Extraction Toolbox (76). T2w lesions were then automatically segmented by applying the lesion prediction algorithm to FLAIR scans, implemented in the Lesion Segmentation Toolbox version 2.0.15 (77) for Statistical Parametric Mapping (SPM). Gray matter, white matter, and brain masks were then extracted from the MPRAGE. The lesion mask was subtracted from these masks to remove any lesion effects. The T1w/T2w ratio was created by dividing the processed MPRAGE scans by the processed T2w scans. Median T1w/ T2w ratios were extracted from the normal-appearing white matter, gray matter, and brain masks.
Targeted LC-MS/MS
Serum samples for metabolomics analysis were prepared as described previously (78). Briefly, 50 ml of serum (stored at 280°C) and 200 ml of ice-cold extraction solvent (40% acetonitrile: 40% methanol: 20% H 2 O) were vortexed for 2 min, then shaken in an Eppendorf shaker (Thermomixer R) at 1400 rpm, 4°C for 1 h and centrifuged at 4°C for 10 min at ;18,000 3 g in an Eppendorf microfuge. Supernatants were transferred to a clean tube and evaporated in a Speedvac (Acid-Resistant Cen-triVap Vacuum Concentrators, Labconco). Dried samples were stored at 280°C. The samples were resuspended in 100 ml of water containing the Internal Standards D 7 -Glucose at 0.2 mg/ ml and H-tyrosine at 0.02 mg/ml. The samples were resolved by LC-MS/MS in negative mode at the optimum polarity in Multiple reaction monitoring (MRM) mode on an electrospray ionization triple-quadrupole mass spectrometer (AB Sciex 4000 QTRAP, Toronto, Ontario, Canada). MultiQuant software (AB Sciex, Version 2.1) was used for peak analysis and manual peak confirmation. The results, expressed as area ratio (area of analyte/area of internal standard), were exported to Excel and analyzed with MetaboAnalyst 3.0. Standard curves were prepared by adding increasing concentrations of GlcNAc or N-acetyl-D-[UL-13 C 6 ]glucosamine ([UL 13 C 6 ]GlcNAc) (Omicron Biochemicals, Indiana) to a 50-ml aliquot of control serum. This way we were able to create a calibration curve for HexNAc serum levels, obtaining absolute values rather than relative concentrations. Analysts were blinded in regard to sample origin (healthy control or MS).
Statistical analysis
Statistical analyses for the in vitro and animal experiments were done with Graphpad Prism by t tests, with analysis of variance (ANOVA) with Sidak's post-test correction, or by comparing best-fit curves from nonlinear regression (Y = Bmax*X/(K d 1 X) as described in the relevant figure legends. Statistical analyses for the clinical part were performed with R Project version 3.5.3. Correlation between serum HexNAc levels and lesion measurements were analyzed using nonparametric Spearman's Rho analyses. Correlations between HexNAc levels and T1w/T2w-ration measurements were analyzed using linear regression models with HexNAc levels as an independent variable.
Data availability
All data are contained within the manuscript. | 7,679 | 2020-09-25T00:00:00.000 | [
"Biology"
] |