Implementing 3D Model LNG Tanker Ship Cargo Handling System Equipment for Training Using Augmented Reality
Learning a vessel's systems is very important for a marine engineer to understand the characteristics of its machinery. This can be done by studying block diagrams, viewing 3D pictures, or directly observing the systems on board in order to gain a complete understanding. Direct observation is difficult due to safety regulations and company confidentiality, especially on LNG tanker ships. To overcome this restriction, this study proposes a 3D model for training that represents the real operational condition of the cargo handling system equipment of an LNG tanker vessel. Augmented reality is used to recreate the system in 3D and to attach the necessary information about each object to the drawing. The augmented reality 3D objects and their information are shown in an Android application by scanning the corresponding augmented reality marker. In this research, users were satisfied with the experience and interface, based on questionnaire data covering the aspects of attractiveness, usability, effectiveness, and user interface experience, and on questionnaire-based measurement of users' understanding of the application.
Introduction
Technology in the maritime industry has become more competitive. Many new technologies have been added to provide a simple and efficient working environment. Digitization plays the main role in improving how work is managed in the maritime industry. Besides improving working efficiency, digitization can improve the safety aspects of the maritime industry. A ship has many systems to operate during a voyage. Three main systems are crucial to ship operation: the machinery system, the electrical system, and the cargo handling system.
On some ships there are restricted areas, or even restrictions on boarding the ship at all, due to safety regulations concerning flammable and hazardous cargo. In this case, an LNG ship cannot be accessed without proper safety equipment and a trained crew, and it has many restricted areas around the cargo area; in particular, visitors cannot access the cargo area, as stated in the IACS class society rules [1].
An LNG tanker ship has its own characteristic cargo handling system and is constructed for transporting liquefied natural gas (LNG). These carriers are purpose-built tank vessels for transporting LNG at sea. During handling, LNG is transferred between onshore storage tanks and ship tanks at high flow rates through single or parallel pipelines, and the process is constantly influenced by outside disturbances. In general, these operations are power-intensive and demand stringent safety considerations; it is extremely necessary to control the temperature and pressure to reduce the risk of accidents [1] [2]. An LNG tanker ship has a system of different complexity because its cargo is liquefied natural gas, and its operation needs to be monitored to control the stability of the cargo [2]. Because of these characteristics, to learn how to operate the system and to diagnose any trouble, ship engineers and crew need to study its diagrams. Mostly, the available system diagrams are on paper, so each engineer has to own a copy to learn the system. Digitizing the diagrams calls for a technology called Augmented Reality (AR). Augmented reality helps engineers study and identify the parts of the system more quickly, providing a 3D model based on the system drawing that can be operated with a mobile phone or tablet.
With augmented reality, a system can be studied clearly and up close without worrying about safety or permission to go on board the vessel: the object can be digitalized, and knowledge about the ship can be improved faster, in real time, with a closer look at the object. A previous survey covers augmented reality applications in design and manufacturing; it introduces the background of manufacturing simulation and early augmented reality developments, describes the hardware and software tools associated with AR, and reports on studies of design and manufacturing activities such as collaborative design, robot path planning, plant layout, maintenance, CNC simulation, and assembly using augmented reality tools and techniques. Augmented reality, a state-of-the-art technology, can bring digital information into the world; going forward, it will be used in advanced engineering applications, and advances in localization technology will enable its deployment in complex environments [3].
Data Collection
The LNG ship used in this research is a 30,000 m3 LNG carrier with 3 cargo storage tanks on board. The ship is of the membrane tank type and has 6 main cargo components for loading and unloading LNG from storage or a shore connection. The cargo handling equipment shown in this work comprises the cargo pipes, cargo pumps, condenser, compressor, cargo tank vents, and manifold. These components are drawn in 3D and attached to a complete ship hull drawing. The ship's specification dimensions are as follows.
3D Drawing Object
The 3D drawing is created using software called Unity. This software can process complex 3D drawings and produce smooth detail for a particular object in a system, so that when the object is projected in the augmented reality application it can achieve a decent result. The final result is a 3D model of the LNG cargo handling system equipment, divided into several objects for detail [4]. (Figure: LNG tanker drawing object.) Other equipment, such as the LNG condenser and LNG compressor, will also be featured as augmented reality 3D objects, so the user gets the complete system equipment for a fuller understanding.
The 3D models are created based on the ship's data and other references, such as material found on the internet; in this way, a close resemblance to the real objects can be achieved.
Results and Discussion
The result of this research is that the 3D object drawings can be shown in augmented reality; this means an application is needed to present the result, and the author uses the Vuforia engine with Unity. Vuforia is an augmented reality software development kit (SDK) for mobile devices that allows the creation of augmented reality applications for phones. It uses computer vision technology to recognize and track planar images and 3D objects in real time. This image registration capability allows developers to position and orient virtual objects, such as 3D models and other media, in relation to real-world objects when they are viewed through a mobile device's camera [4].
The main purpose of the application is to teach users about LNG tanker cargo handling system equipment. Therefore, all the objects available on the ship are included, so the user experiences them as in the real world. The author uses Vuforia engine version 9.8 to produce smooth and decent resolution for the augmented reality objects. Vuforia helps to create a better user interface for the augmented reality application in Android application format; the application can be accessed only on the Android OS.
The augmented reality development uses marker-based augmented reality. This type of AR relies on recognition of a designated object: marker-based AR supplies information about the object once the AR camera is focused on and recognizes the marker. Marker-based AR has different uses depending on the purpose. This AR detects the marker in front of the camera and supplies information about the object on the screen; it can replace the marker in the AR view with a 3D version of the corresponding object. Markers are square and have a thick black border or another type of border; the benefit of the border is that it separates the marker from the background in a captured frame easily. The tracking module is the most important part of this type of AR: it calculates the relative pose of the camera in real time, where pose refers to the six degrees of freedom (DOF), i.e., the position and orientation of the 3D object [5].
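To make the marker-based pipeline concrete, the sketch below illustrates the same detect-then-annotate principle in Python with OpenCV's ArUco module (opencv-contrib-python 4.7 or newer). This is only an illustrative stand-in for the paper's Vuforia/Unity implementation, and the marker IDs and equipment names are hypothetical examples, not the authors' actual markers.

```python
# Illustrative sketch of marker-based AR detection; a stand-in for the
# paper's Vuforia pipeline, not the authors' implementation.
import cv2

# Hypothetical mapping from marker IDs to cargo handling equipment.
EQUIPMENT = {
    0: "cargo pump",
    1: "LNG condenser",
    2: "LNG compressor",
    3: "cargo tank vent",
}

def detect_equipment_markers(image_path: str) -> None:
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Square markers with a thick black border, as described in the text.
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

    corners, ids, _rejected = detector.detectMarkers(gray)
    if ids is None:
        print("No markers found.")
        return

    # In a full AR app, the tracking module would estimate the camera pose
    # (6 DOF) from the marker corners and render the 3D model at that pose.
    for marker_id, marker_corners in zip(ids.flatten(), corners):
        name = EQUIPMENT.get(int(marker_id), "unknown equipment")
        print(f"Marker {marker_id}: show 3D object and info for '{name}'")
        print(f"  corner coordinates: {marker_corners.reshape(-1, 2).tolist()}")

if __name__ == "__main__":
    detect_equipment_markers("marker_photo.jpg")
```

In the actual application, Vuforia performs the recognition and pose tracking, and Unity renders the 3D equipment model and its information panel in place of the marker.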
After the application is created, the user must download the object marker picture; then, by scanning the marker with the augmented reality application, the object is shown as in figures 7 to 10 (LNG ship augmented reality object; LNG ship cargo handling system augmented reality object). Besides the 3D object, there is information about the selected augmented reality object, so the user can learn more about the object being shown. After the application is completely built, the product needs to be tested with users. In the user experience test, the author tests the application with the target users and collects data to measure the capability of the product and user satisfaction with it. The categories to be measured are: application user interface, application performance, quality of the augmented reality objects, quality of the information inside the application, and the application's benefit to the user, along with feedback collected from users. There are often two ways to measure UI and UX, objective methods and subjective methods, but a mixture of both may supply more dependable results. Objective methods supply results using experimental evidence, while subjective methods supply results from the user's point of view. To capture the latter, the author uses a questionnaire, giving a score to each category.
The user test questionnaire is created based on a basic product user test questionnaire flow chart. It must include the aspects of attractiveness, usability, effectiveness, and user interface experience, and it must measure understanding of the application. The author creates the questionnaire flow chart based on the product, so the basic questionnaire flow chart matches the product about to be tested. The questionnaire is divided into two categories, i.e., (i) measuring the user interface quality, and (ii) measuring the application's ability to convey understanding of the augmented reality objects. The user interface quality measurement covers the quality of the displayed object, how close the object is to the real world, and whether the object information is shown. The ability measurement covers user understanding before and after using the app, understanding of the information and objects, and effectiveness. The questionnaire is shown in table 2. The user experience test was conducted by letting users try the augmented reality application and explore it. After exploring the application, each user filled in a questionnaire to measure the application's performance, giving each question a value from one (1) to five (5), where one (1) is the lowest and five (5) is the highest.
The measurement score is based on the basic System Usability Scale (SUS) method, which is usually used to measure the capability and quality of a product. Table 3 shows that most respondents already understand the cargo handling equipment on the ship, although 43% of them do not. The respondents agree that the application interface can be easily understood, which means the basic application interface is retained. Based on table 3, the respondents rate the augmented reality 3D objects as fairly alike, which means the objects need more improvement to look perfectly alike. Most respondents can understand the information about the AR 3D objects, which is provided on each AR 3D object. Respondents can access the application smoothly, although some respondents cannot, due to their phone OS (operating system). Based on table 3, the respondents, as marine students, surveyor trainers, etc., agree that this augmented reality application helps improve learning about the cargo handling system of an LNG ship. The total score of the user experience test questionnaire is 1037 out of 1230, obtained by totalling all the question scores. To evaluate the overall result of the questionnaire, several reference values need to be determined. The score calculations in (3), (4), (5), (6), (7), and (8) give the basic ranges used to categorize the application score: the categories are based on the minimum value, the quartile I value, the median value, the quartile III value, and the maximum value, yielding the ranges bad (246-514), enough (514-783), good (783-1006), very good (1006-1118), and excellent (1118-1230). Based on this scoring categorization, the result of the user experience test, consisting of 6 questions with 41 respondents, for the LNG tanker cargo handling system equipment 3D augmented reality application is very good, because the total score of 1037 out of 1230 (the maximum value) falls within the very good range (1006-1118).
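A minimal sketch of this scoring arithmetic, using the boundary values reported above (the helper names are mine, not the paper's): with 41 respondents and 6 questions on a 1-5 scale, the minimum total is 41 × 6 × 1 = 246 and the maximum is 41 × 6 × 5 = 1230, and the total of 1037 maps to "very good".

```python
# A minimal sketch of the questionnaire scoring described above.
# Category boundaries (246, 514, 783, 1006, 1118, 1230) are as reported
# in the text; function and constant names are illustrative only.
RESPONDENTS = 41
QUESTIONS = 6
MIN_SCORE = RESPONDENTS * QUESTIONS * 1   # 246: every answer rated 1
MAX_SCORE = RESPONDENTS * QUESTIONS * 5   # 1230: every answer rated 5

# Category ranges as reported in the paper (lower bound inclusive).
CATEGORIES = [
    (246, 514, "bad"),
    (514, 783, "enough"),
    (783, 1006, "good"),
    (1006, 1118, "very good"),
    (1118, 1230, "excellent"),
]

def categorize(total_score: int) -> str:
    """Map a total questionnaire score onto the reported categories."""
    if not MIN_SCORE <= total_score <= MAX_SCORE:
        raise ValueError(f"score must be between {MIN_SCORE} and {MAX_SCORE}")
    for low, high, label in CATEGORIES:
        if low <= total_score < high or total_score == high == MAX_SCORE:
            return label
    raise AssertionError("unreachable")

if __name__ == "__main__":
    print(MIN_SCORE, MAX_SCORE)   # 246 1230
    print(categorize(1037))       # very good
```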
"Computer Science",
"Business"
] |
Monitoring in the Intensive Care Unit: Its Past, Present, and Future
Maxime Cannesson
Alain Broccard
Benoit Vallet
Karim Bendjelid
Monitoring in the critical care setting has dramatically improved during the past 50 years and has contributed significantly to improving patients' safety and outcome [1][2][3]. New technologies have allowed the transfer of advances in biology, physiology, and bioengineering to the bedside to support data-driven decision making and continuous monitoring of vulnerable critically ill patients. The most striking advances include the continuous and noninvasive measurement of oxygen saturation by pulse oximeters and of end-tidal CO2, the real-time displays of flow, volume, and pressure-time curves and derived measures by modern ventilators, as well as the development of invasive and, more recently, noninvasive devices that provide beat-to-beat arterial pressure, stroke volume, and cardiac output monitoring.
Despite these advances and the apparent impact made on patients' outcome, there is still a lot of progress to be made to bring monitoring to the level of safety and reliability achieved by industries such as aviation [3,4].
The future of monitoring in the critical care setting probably relies less on global appraisal of descriptive variables and more on functional monitoring of organs. Ultimately, monitoring complex organ function is more informative and will likely be more important than global and/or regional physiological parameters such as organ perfusion and oxygenation. Metabolic monitoring, reflecting the biologic functions of the organs, is starting to emerge [5]. Noninvasive monitors and trend analysis will obviously continue to grow. In addition, more advanced monitoring of pain, sleep, wakefulness, and delirium is very much needed. At the end of the day, decision support systems and automated systems will become instrumental and central in daily monitoring once such systems can provide the high level of accuracy needed to allow health care providers to rely on them [6,7]. In addition, decision support systems will only make sense if they improve clinicians' decision making, not if they merely synthesize clinical algorithms. We expect decision support software that integrates monitoring signals to raise the safety, reliability, and efficiency bar, not to fully replace human beings. Finally, there is still a lot to be learned regarding which variables should be monitored to impact outcome and what constitutes an appropriate, as opposed to a pathological and harmful, response to critical illnesses. Without such understanding, enhanced monitoring has the potential to lead to costly and counterproductive interventions.
Finally, one has to ask whether new monitoring technologies must be evaluated and must clearly demonstrate a positive impact on outcome before being used. There is no easy and universal answer to this question, we believe. Most hospital administrators may require outcome data before purchasing any new and potentially expensive technologies. This approach could, however, delay the implementation of useful technologies. It is indeed possible, and even likely, that initial studies, even when well conducted, will show no impact on outcome [8]. As an example, the pulse oximeter has been shown to have no impact on patients' outcome [9,10] despite the fact that it is considered standard of care. While some in the medical community are still wondering whether pulse oximeters do improve outcome, since the data is lacking, other industries such as aviation do not require evidence-based data before implementing new technologies (monitors, autopilot, simulation), and that industry has now reached an unmatched level of safety. On the other hand, a more thoughtful assessment of clinical indications and better education of physicians regarding the Swan-Ganz catheter and hemodynamic management would have prevented many unhelpful right heart catheter placements over the decades, and possible harm. Clearly, there is not a single simple answer for every technology and/or problem at hand.
In conclusion, monitoring in our specialty has come a long way. We are, however, still facing difficult challenges, and the future holds great promises for our patients [3], particularly if, as a scientific community, we can learn from our past mistakes. This special issue on monitoring of critical patients illustrates some of the current and future challenges we are facing.
"Medicine",
"Engineering"
] |
Gaṅgeśa on Epistemic Luck
This essay explores a problem for Nyāya epistemologists. It concerns the notion of pramā. Roughly speaking, a pramā is a conscious mental event of knowledge-acquisition, i.e., a conscious experience or thought in undergoing which an agent learns or comes to know something. Call any event of this sort a knowledge-event. The problem is this. On the one hand, many Naiyāyikas accept what I will call the Nyāya Definition of Knowledge, the view that a conscious experience or thought is a knowledge-event just in case it is true and non-recollective. On the other hand, they are also committed to what I shall call Nyāya Infallibilism, the thesis that every knowledge-event is produced by causes that couldn't have given rise to an error. These two commitments seem to conflict with each other in cases of epistemic luck, i.e., cases where an agent arrives at a true judgement accidentally or as a matter of luck. While the Nyāya Definition of Knowledge seems to predict that these judgements are knowledge-events, Nyāya Infallibilism seems to entail that they aren't. In this essay, I show that Gaṅgeśa Upādhyāya, the 14th century Naiyāyika, solves this problem by adopting what I call epistemic localism, the view that upstream causal factors play no epistemically significant role in the production of knowledge.
Introduction
This essay explores a problem for Nyāya epistemologists. It concerns the notion of pramā. Roughly speaking, a pramā is a conscious mental event of knowledge-acquisition or learning, i.e., a conscious experience or thought in undergoing which an agent learns or comes to know something. Suppose I see that there is a white picket fence outside my window. In undergoing this perceptual awareness, I learn or come to know that there is a white picket fence outside my window. So, this perceptual awareness is a pramā. Similarly, suppose I see that the sky is overcast and, on that basis, consciously infer that it will rain today. In making that inferential judgement, I may learn or come to know that it will rain today. If that is the case, this inferential judgement is a pramā. Call any such event of knowledge-acquisition a knowledge-event.
The problem is this. Many Naiyāyikas accept: The Nyāya Definition of Knowledge. An awareness-event (jñāna, i.e., a conscious experience or thought) counts as a knowledge-event if and only if it is a true non-recollective awareness-event (yathārthānubhava). 1 But these Naiyāyikas are also committed to: Nyāya Infallibilism. Any knowledge-event is produced by a causal complex (kāraṇasāmagrī or collection of causes) that couldn't give rise to an error.
These two commitments seem to conflict with each other in cases of epistemic luck, i.e., cases where an agent undergoes an awareness-event that is true accidentally or as a matter of luck. 2 Consider four cases that we will discuss throughout this paper.

Footnote 1: Here, I translate the term "jñāna" as "awareness" or "awareness-event." The standard translation of "jñāna" as "cognition" is problematic. In contemporary philosophy of mind and cognitive science, the expression "cognition" is supposed to apply to mental states whose contents can be used for the purposes of theoretical reasoning, verbal reports, and planning action. But, according to some Indian philosophers (e.g., Yogācāra thinkers), "jñāna" can include perceptual states that don't fit this description. So, it's better to use the more neutral terms "awareness" or "awareness-event" instead of "cognition."

Footnote 2: Cases like Mist and Fire are treated as accidentally true or fact-conforming by Indian philosophers themselves. See, for example, Śrīharṣa's (12th century CE) Khaṇḍanakhaṇḍakhādya (KKh 383.23), Gaṅgeśa's (14th century CE) Tattvacintāmaṇi (TCM C IV.2 499.2-3) and Vyāsatīrtha's (15th-16th centuries CE) Tarkatāṇḍava (TT I 151.1-4). In post-Gettier epistemology (i.e., after Gettier's (1963) paper on why knowledge isn't justified true belief), similar cases of epistemic luck have received a lot of attention. The relevant kind of epistemic luck is what Pritchard (2005) calls veritic luck. For a partial survey of this literature, see Shope (2017). However, the concerns of this literature are somewhat different from mine. First, this literature is concerned with the notion of knowledge, while I here focus on the Indian notion of a pramā or a knowledge-event. These two notions are not the same. States of knowing can involve doxastic states that are purely dispositional, but a knowledge-event can only be an awareness-event, i.e., a conscious mental occurrence. A state of knowing can carry information derived purely from memory, but a knowledge-event must be a non-recollective awareness-event whereby one acquires information instead of retrieving information already in one's possession. Second, in the post-Gettier era, the project of analysing knowledge was driven by the aim of finding anti-luck conditions on knowledge, i.e., conditions that would exclude cases of knowledge-destroying epistemic luck. For exceptions, see Hetherington (1999) and Weatherson (2003). But, as we shall see, the later Naiyāyikas weren't invested in the project of proposing anti-luck conditions on knowledge-events.

Mist and Fire. I look at a hill and see what looks like smoke emerging from it. So, I judge that there is smoke on the hill. I am wrong: all I see is a wisp of mist. I had previously observed (in kitchens, etc.) that smoke is always accompanied by fire. On the basis of those observations, I had judged that, wherever there is smoke, there is fire. Now, I remember that generalization. So, I conclude that there is fire on the hill. My judgement is true: there is fire on the hill.
Horns and Cows. From a distance, I see an animal with horns. Earlier, I had observed many cows with horns. On the basis of these observations, I judged that all animals with horns are cows. Now, I recall that generalization. So, I conclude that the animal is a cow. My judgement is true: the animal I see is a cow.
The Mistaken Deceiver. You think that there is no pot in the next room. You want to deceive me. So, you tell me, "There is a pot in the next room." Since I have no reason to distrust you, I take your utterance at face value. So, I judge that there is a pot in the next room. My judgement is true: there is a pot in the next room.
The Parrot. A parrot is hidden behind a door, and it can randomly string together words to form sentences. I don't know this. On this occasion, imitating the voice of a friend, the parrot utters the sentence, "There is a pot in the next room." Since I think that my friend is behind the door and have no reason to distrust her, I take the utterance at face value. So, I judge that there is a pot in the next room. My judgement is true: there is a pot in the next room.
In each case, I undergo a non-recollective awareness-event that is true as a matter of accident or luck. 3 Given the Nyāya Definition of Knowledge, it should count as a knowledge-event. But it's hard to shake off the intuition that the causes of the awareness could easily have led to a mistake. If this intuition is right, then the Naiyāyikas are in trouble. For, if the Nyāya Definition of Knowledge is correct, then Nyāya Infallibilism is false in these cases. This is what I shall call the problem of epistemic luck.
In this essay, I lay out Gaṅgeśa Upādhyāya's (14th century CE) solution to this problem in Tattvacintāmaṇi (TCM). To solve this problem, Gaṅgeśa-as well as his commentators such as Jayadeva Miśra (15th century CE)-adopted a more permissive form of Nyāya Infallibilism. This form of infallibilism involves what I will call epistemic localism, the thesis that upstream causal factors (e.g., a speaker's epistemic standing in the case of testimony) don't play any epistemically significant role in the production of knowledge. 4 By downplaying the epistemic role of such factors, Gaṅgeśa and his commentators were able to treat epistemically lucky inferential and testimonial awareness-events as knowledge-events. This, in turn, allowed them to resolve the tension between the Nyāya Definition of Knowledge and Nyāya Infallibilism.
Why is this significant? First, my discussion reveals that some modern interpretations of Gaṅgeśa are simply wrong. Mukhopadhyay (1992, p. 285) and Phillips (2012, pp. 84-85) think that Gaṅgeśa does not take the judgements in The Parrot and The Mistaken Deceiver to be knowledge-events. In what follows, I show that there is little textual support for this claim. Second, if I am right, Gaṅgeśa's treatment of epistemic luck in TCM reveals a radical shift in the attitudes of Naiyāyikas towards cases like Mist and Fire. While earlier Naiyāyikas seem to rule them out from the class of knowledge-events, later Naiyāyikas (from Gaṅgeśa onwards) certainly do not. This forces these later Naiyāyikas to revise their epistemological commitments quite radically.
This essay has five parts. I will begin by describing the stance of early Naiyāyikas on epistemically lucky awareness-events. As I will show, they accepted a version of Nyāya Infallibilism that would prevent them from recognizing such awareness-events as knowledge-events. I will then turn to Gaṅgeśa. I will consider whether his theory of inference and testimony would allow him to treat my judgements in cases like Mist and Fire, etc. as knowledge-events. The answer, I will argue, is "Yes." Next, I will explain how Gaṅgeśa frames the problem of epistemic luck. I will then lay out the two distinct solutions that he offers to this problem. The second of these is an instance of the approach that I have called epistemic localism. I will go on to show how Gaṅgeśa's commentator, Jayadeva, extends this approach to cases that Gaṅgeśa doesn't address.
Background: Nyāya Infallibilism
Most of Gaṅgeśa's Nyāya predecessors subscribed to an infallibilist conception of epistemic instruments (pramāṇa). For these writers, a knowledge-event is a true, non-recollective awareness-event, and an epistemic instrument is an instrument or means (karaṇa) by which any such awareness-event arises. An instrument by which an effect arises is the maximally efficient (sādhakatama) cause of that effect. Though there is some disagreement amongst these Naiyāyikas on what maximal efficiency actually consists in, many of them agree that a maximally efficient cause of an effect is a cause such that, when it occurs, the effect must immediately follow. In this sense, an instrument that gives rise to an effect or result (phala) is excluded from a lack of connection with its result (phalāyogavyavacchinna). 5 So, if an epistemic instrument is a maximally efficient cause of a knowledge-event, it must be an event or an entity such that its occurrence is immediately (as a matter of necessity) followed by a knowledge-event. Udayana states this conception of an epistemic instrument succinctly in the fourth chapter (stavaka) of Nyāyakusumāñjali (NKu).

Footnote 5: In his commentary Nyāyavārttika (NV) on Vātsyāyana's (4th/5th century CE) Nyāyabhāṣya (NB), Uddyotakara (6th century CE) says that an instrument is the maximally efficient (sādhakatama) cause of an effect and takes this maximal efficiency to be a form of excellence (atiśaya). He then spells out six different ways in which this notion of maximal efficiency (sādhakatamatva) can be understood. Several of these interpretations suggest that an instrument is a cause such that, when it occurs, the effect must immediately follow. In Nyāyavārttikatātparyaṭīkā (NVTṬ), his commentator, Vācaspati Miśra (9th century CE), interprets him exactly along these lines. More significantly for our purposes, in his sub-commentary Nyāyavārttikatātparyapariśuddhi (NVTP), Udayana (10th/11th century CE) points out that the maximal efficiency of an instrument consists in its being excluded from a lack of connection with its result (phalāyogavyavaccheda). This view implies that the instrument must be the cause that occurs last in the causal chain that gives rise to the relevant effect. In Nyāyamañjarī (NM), Bhaṭṭa Jayanta (9th century CE) criticizes this view on the grounds that the entire causal complex underlying an awareness must be treated as its instrument. Moreover, later Naiyāyikas who defined the instrument of an effect as something that produces the effect through the mediation of an operation (vyāpāra) also criticized this view; see Bhavānanda Siddhāntavāgīśa's Kārakacakra and Matilal (1990, pp. 372-378).

According to the view of Gotama [i.e., the author of the Nyāya-sūtra], a knowledge-event is a correct discrimination (samyakparicchitti). Moreover, the status of being a knower (pramātṛtā) consists in possessing that knowledge-event, while the status of being an epistemic instrument consists in being excluded from a lack of connection with that knowledge-event (tadayogavyavaccheda). 6 For these Nyāya writers, what distinguishes an epistemic instrument from other instruments of awareness is that it never gives rise to an error; in this sense, it doesn't err from its object (arthāvyābhicārin). 7 In his sub-commentary on the Nyāyasūtra (NS), Nyāyavārttikatātparyapariśuddhi (NVTP), Udayana explains the idea as follows: An epistemic instrument is simply the instrument for a knowledge-event… A knowledge-event is a non-erroneous apprehension (aviparītopalabdhi)… Instrumenthood is maximal efficiency (sādhakatamatva). However, that is specified simply by a specific event (kriyā). Moreover, in this case, that event has the characteristic of being a knowledge-event. Therefore, being an epistemic instrument consists in not erring, which is characterised as the property of producing non-erroneous non-recollective awareness-events. 8

Footnote 6: Verse 4.5: mitiḥ samyak paricchittis tadvattā ca pramātṛtā | tadayogavyavacchedaḥ pramāṇyaṃ gautame mate || Earlier, in Verse 4.1, he says (NKu 450.8): "a knowledge-event is a true, non-recollective awareness..." (yathārthānubhavo mānam).

Footnote 7: While explaining Vātsyāyana's remark that an epistemic instrument possesses an object (arthavat), Uddyotakara says: "First of all, an epistemic instrument is a discriminator of an object" (pramāṇam tāvat arthaparicchedakam |). Vācaspati Miśra takes this simply to say that an epistemic instrument doesn't err from its objects (arthāvyabhicārin). He explains this notion as follows: "Moreover, this simply is an epistemic instrument's property of not erring from its object: the lack of the disconformity (avisaṃvāda)-relative to a different place and time, or a different state of a person-with regard to the nature and the qualifier of the object as they are presented by that epistemic instrument" (iyam eva cārthāvyabhicāritā pramāṇasya yaddeśakālanarāvasthāntarāvisaṃvādo 'rthasvarūpaprakārayos tadupadarśitayoḥ |). Similarly, Jayanta says (NM I 31.6-7): "An epistemic instrument is a causal complex (sāmagrī), which gives rise to a non-erroneous and doubt-free apprehension of an object, and which may or may not have the nature of an awareness" (avyabhicāriṇīm asandigdhām arthopalabdhiṃ vidadhatī bodhābodhasvabhāvā sāmagrī pramāṇam). Thus, for all these writers, an epistemic instrument doesn't err from its object insofar as it only produces awareness-events that accurately represent their respective objects. For discussions of Nyāya Infallibilism, see the exchange between Dasti and Phillips (2010) and Ganeri (2010).
If Udayana is right in his description of earlier Nyāya, then early Naiyāyikas are committed to Nyāya Infallibilism: since all knowledge-events must arise from some epistemic instrument and no epistemic instruments could fail to yield true awareness-events, the causal complex that gives rise to a knowledge-event couldn't give rise to an error.
This commitment to infallibilism drove some early Naiyāyikas, such as Bhaṭṭa Jayanta and Udayana, towards a virtue-theoretic conception of knowledge-events. 9 On this view, if an agent arrives at a knowledge-event, the causal complex that gives rise to her awareness-event must include certain positive factors-called epistemic virtues (guṇa)-that guarantee the truth of the resulting awareness. 10 Though these early writers are reticent on which epistemic virtues are necessary for which kinds of knowledge-events, their treatment of individual epistemic instruments strongly suggests that, if an instrument of awareness is to serve as an epistemic instrument, it must possess epistemic virtues that are typically absent from cases of epistemic luck. 11 Let's see why.
Footnote 9: For Jayanta's defence of this view, see NM I 442.13-444.2, and for Udayana's defence of this view, see NKu 211.1-2. The Vaiśeṣika philosopher, Śrīdhara (10th century CE), also defends this view in his Nyāyakandalī (NK 519.1-2). It is worth noting a difference between discussions of epistemic virtue in this earlier Indian context and in contemporary epistemology. Contemporary virtue epistemologists treat epistemic or intellectual virtues either as faculties or as traits that promote some intellectual good. Virtue reliabilists, like Sosa (1991), think of intellectual virtues as faculties or qualities that help the agent maximize her surplus of true beliefs over false ones. In contrast, virtue responsibilists, like Zagzebski (1996), treat intellectual virtues as traits of character that promote intellectual flourishing. However, Naiyāyikas typically treat epistemic virtues as causal factors that are necessary for the production of knowledge-events.

Footnote 10: This kind of Virtue Infallibilism faced some opposition from Mīmāṃsakas. Why? If Vātsyāyana and other Naiyāyikas are right, then the Veda can be an epistemic instrument only if its author is trustworthy (āpta) and therefore possesses certain epistemic virtues. But the Bhāṭṭas cannot say this: for them, the Veda is authorless. So, they cannot explain the status of the Veda as an epistemic instrument by appealing to the epistemic virtues of its author. For this reason, they defend the theory of intrinsic knowledgehood (svataḥ-prāmāṇya) with respect to production (utpatti): on this view, a knowledge-event arises simply from the normal causes that give rise to awareness-events of a certain kind (as long as those causes are non-defective); no positive factors such as epistemic virtues are necessary. Bhaṭṭa Kumārila's (7th century CE) commentators-Bhaṭṭa Umbeka, Sucaritamiśra and Pārthasārathimiśra-defend different varieties of the theory of intrinsic knowledgehood in their commentaries on Verse 47 in Ślokavārttika ad Mīmāṃsāsūtra 1.

Footnote 11: In NKu, Udayana claims that, even though the defects that prevent a piece of testimony from generating knowledge-events may be positive entities (bhāva), the mere absence of those defects may not be enough for a knowledge-event to arise. The point is illustrated with the case of inference. Imagine a variant of Mist and Fire, where there is no fire on the hill. In such a case, the falsity of my initial judgement that there is smoke on the hill is the epistemic defect that makes my final judgement come out false. Udayana's point (as explained by commentators like Varadarāja and Vardhamāna) is that, even when such defects are absent, unless epistemic virtues, like the correct awareness of the reason as present in the site or as pervaded by the target, are present, an inferential knowledge-event won't arise. See Udayana's Nyāyakusumāñjali (NKu 215.1 and 216.1-2), Varadarāja's Bodhanī (NKu 216.6-7) and Vardhamāna's Prakāśa (NKu 215.12-13 and 216.16).

Start with inference (anumāna). Suppose I see that there is smoke coming out of a hill. I had judged earlier that, wherever there is smoke, there is fire. Now, I remember that. So, I conclude that there is fire on the hill. Any such inference involves three components: a target (sādhya), a reason (hetu) or an inferential mark (liṅga), and a site (pakṣa). The target is the thing that is inferred; here, it's fire. The reason or the inferential mark serves as evidence for the target; here, it's smoke.
The site is the place where the presence of the target is inferred; here, it's the hill. For Naiyāyikas beginning with Uddyotakara, the process by which an inferential knowledge-event arises involves three steps. 12 First, the agent becomes aware of the site as possessing the reason. For example, in this case, I simply see that there is smoke on the hill. This is called the awareness of the reason as a property of the site (pakṣadharmatājñāna). Then, the agent recalls that there is a relation of pervasion (vyāpti) between the reason and the target. In this case, I recall that, wherever there is smoke, there is fire; thus, I recall that fire pervades smoke. This step is called the recollection of pervasion (vyāptismaraṇa). Finally, the agent combines these two bits of information in a single judgement: she judges that the site contains a reason that is pervaded by the target. For example, in this case, I may judge that the hill possesses smoke that is pervaded by fire. This is called a subsumptive judgement (parāmarśa). This gives rise to the inferential knowledge-event (anumiti) that there is fire on the hill.
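The three-step sequence just described has an almost algorithmic shape, so a toy model may help fix the terminology. The sketch below is purely illustrative (nothing like it appears in the Nyāya texts or in Gaṅgeśa); the function and field names are my own labels for the steps:

```python
# A toy model of the three-step process described above; purely
# illustrative, with names that are my own labels for the steps.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Pervasion:
    """A remembered generalization: the target pervades the reason."""
    reason: str   # e.g. "smoke"
    target: str   # e.g. "fire"

def infer(site: str, observed_reason: str, remembered: Pervasion) -> Optional[str]:
    # Step 1 (paksadharmata-jnana): awareness that the site possesses
    # the reason -- here simply given as `observed_reason`.
    # Step 2 (vyapti-smarana): recollection of the pervasion.
    if remembered.reason != observed_reason:
        return None  # the recalled pervasion doesn't cover this reason
    # Step 3 (paramarsa): the subsumptive judgement combining both,
    # which yields the inferential knowledge-event (anumiti).
    return f"There is {remembered.target} on the {site}."

print(infer("hill", "smoke", Pervasion("smoke", "fire")))
# -> There is fire on the hill.
```

Note that the model says nothing about whether the inputs are themselves knowledge-events; that silence is exactly where the dispute over epistemic localism, discussed below, begins.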
According to these Naiyāyikas, a good or non-defective reason (saddhetu) must have five characteristics: (1) it must be present in the site, (2) it must be present at a similar site (sapakṣa), i.e., a place where the target is observed to be present, (3) it must be absent from a dissimilar site (vipakṣa), i.e., a place where the target is absent, (4) it must be such that the relevant target isn't already proved (by some other epistemic instrument) to be absent from the site, and (5) it must be such that there is no competing (and equally strong) inferential mark that supports the opposite thesis, i.e., that the target is absent from the site. Any inferential mark that fails to satisfy any of these conditions is said to be a pseudo-reason (hetvābhāsa). 13 An inferential mark that is absent from the site is unproved (asiddha or sādhyasama); when it isn't present at any similar site, it is said to be incompatible (viruddha); when it is present at a dissimilar site, it is said to be deviating (savyabhicāra); when the relevant target is proved to be absent from the site, the inferential mark is rebutted (bādhita or kālātīta); finally, when there is an equally strong competing inferential mark, the inferential mark is said to be counterbalanced (satpratipakṣa or prakaraṇasama). Notice that the inferential marks involved in Horns and Cows and Mist and Fire are pseudo-reasons. In Mist and Fire, the smoke is the inferential mark, while the fire is the target. But since the smoke is actually absent from the site, i.e., the hill, the inferential mark ends up being unproved; in particular, later Naiyāyikas call this kind of unproved reason unproved by nature (svarūpāsiddha), because the inferential mark, by its own nature, isn't proved to be present in the site. 14 In Horns and Cows, the horns serve as the inferential mark, while the target is cowhood. Since animals other than cows can have horns, the inferential mark is deviating because it is present at a dissimilar site.

Footnote 12: See NV 41.9-12 and 42.12-21 on NS 1.1.5.

Footnote 13: These pseudo-reasons are listed in NS 1.2.4 and then explained in NS 1.2.5-9. For an accessible introduction, see Matilal (1990, pp. 42-58).
Importantly, early Naiyāyikas, such as Vātsyāyana and his sub-commentators, claimed that a genuine inference-i.e., an episode of reasoning that yields an inferential knowledge-event-must involve an epistemically virtuous inferential mark, i.e., an inferential mark that satisfies (at least some of) the conditions laid out above. In particular, it cannot involve an inferential mark that is either deviating or unproved: an episode of reasoning that involves such a defective inferential mark will merely be a pseudo-inference (anumānābhāsa). 15 This immediately implies that my judgements in Mist and Fire and Horns and Cows cannot be inferential knowledge-events.
Footnote 14: Other Indian philosophers discussed similar cases. For example, the author of Nyāyapraveśa mentions a kind of unproved reason (asiddhahetu) that he calls unproved in virtue of being suspect (sandigdhāsiddha): "A mass of material elements, which is suspected to have the nature of mist, etc. but is stated for the sake of proving fire, is unproved in virtue of being suspect" (bāṣpādibhāvena saṃdihyamāno bhūtasaṃghāto'gnisiddhāv upadiśyamānaḥ saṃdigdhāsiddhaḥ |). In his classification of pseudo-reasons (which, incidentally, matches the classification given in Nyāyapraveśa), the Vaiśeṣika philosopher, Praśastapāda (6th century CE), discusses a similar case as an example of a reason that is unproved as having that nature (tadbhāvāsiddha) (KA 229.7-9): "A reason that is unproved as having that nature is like this. When an awareness of fire is to be brought about by means of the nature (bhāva) of smoke, the mist that is put forward [as a reason] is unproved as having the nature of smoke" (tadbhāvāsiddho yathā dhūmabhāvenāgnyadhigatau kartavyāyām upanyasyamāno bāṣpo dhūmabhāvenāsiddha iti |). Earlier in the text, Praśastapāda clearly mentions a case like this as a case of error.

Footnote 15: This argument occurs in Vātsyāyana's commentary on NS 2.1.37, where he entertains a sceptical objection against the status of inference as an epistemic instrument. The sceptic considers three pseudo-inferences: (1) a pseudo-inference of past rain from the fulness of a river (that is caused due to a dam), (2) a pseudo-inference of future rain on the basis of the movement of ants with their eggs (caused by the destruction of their nests), and (3) a pseudo-inference of the presence of a peacock outside on the basis of a noise that resembles the cry of a peacock (but in fact is made by a human being). The inferential marks in (1) and (2) are deviating, while the one in (3) is unproved. The sceptic's argument is that, since the inferential marks involved in all putative inferences are defective just like these pseudo-reasons, no putative inference can prove anything (NB 80.6-9). Vātsyāyana's response simply is that episodes of reasoning which are based on defective inferential marks such as these aren't genuine inferences (NB 80.12-18 on NS 2.1.38). In all these cases, the inferential mark lacks certain distinguishing characteristics that a genuine reason would have. Vātsyāyana says: "This very fault lies with the inferrer, and not with inference, insofar as he seeks to be aware of an object-which is to be inferred by a specific characteristic of an object-by observing something that lacks that specific characteristic" (so 'yam anumātur aparādho nānumānasya, yo'rthaviśeṣeṇānumeyam artham aviśiṣṭārthadarśanena bubhutsata iti |). This strongly suggests that pseudo-reasons cannot yield inferential knowledge-events. Interestingly, a similar view is found in verses 156-64 of the section called Nirālambanavāda in Kumārila's Ślokavārttika. Kumārila argues that, in cases like Mist and Fire, one cannot arrive at a true inferential judgement (ŚV 182.23-183.6); for some discussion, see Ganeri (2007, ch. 5). We shall return to this view in the next section.

Let us now turn to the case of testimony (śabda). According to NS 1.1.7, "Testimony is the teaching of a trustworthy person." In his commentary, Vātsyāyana explains the notion of trustworthiness as follows: Certainly, a trustworthy person is a teacher who is directly acquainted with existent objects (dharma) and is motivated (prayukta) by the desire to convey things as they have been perceived. The direct acquaintance with an object is the attainment (āpti) of the object. Since he undertakes action on the basis of that, he is trustworthy (āpta). 16 On Vātsyāyana's view, a trustworthy person must have three features. First, she must have been directly acquainted with the content that she wishes to communicate. Second, she must have compassion for other beings to whom she is communicating this content; in other words, her utterances must be motivated by a desire to help others. Third, she must want to communicate how things are exactly the way she herself has found them. Note that this notion of trustworthiness is quite strong: it seems to imply that a palaeontologist who has never encountered dinosaurs but has made lots of good inferences about them still cannot be treated as trustworthy with respect to them. While Uddyotakara seems to largely agree with this characterisation of the trustworthy speaker, Vācaspati Miśra seems to relax, or reinterpret, some of these requirements (perhaps to accommodate a wider class of testimony). Instead of taking the requirement of direct acquaintance literally, he interprets this requirement as follows: "A person who is directly acquainted with, i.e., has determined by means of a firm epistemic instrument, the existent objects (dharma), i.e., the entities (padārtha) that are useful for the attainment of benefits and the avoidance of harms, is said to be so [i.e., trustworthy]." 17 Thus, on Vācaspati's view, even a palaeontologist who has never seen dinosaurs could still count as trustworthy with respect to them.

Footnote 16: NB 14.4-5 on NS 1.1.7: sākṣātkaraṇam arthasyāptiḥ, tayā pravartata ity āptaḥ | āptaḥ khalu sākṣātkṛtadharmā yathādṛṣṭasyārthasya cikhyāpayiṣayā prayukta upadeṣṭā | Vātsyāyana fleshes out this notion of the trustworthy person while defending the status of the Veda as an epistemic instrument. In his commentary on NS 2.1.68, he says (NB 96.16-97.7): "Moreover, what does the status of trustworthy persons as epistemic instruments consist in? Being directly acquainted with existent objects (dharma), compassion towards living beings, and the desire to convey things as they are" (kiṃ punar āptānāṃ prāmāṇyam? sākṣātkṛtadharmatā bhūtadayā yathābhūtārthacikhyāpayiṣeti |).

Footnote 17: NVTṬ 166.20-22: sudṛḍhena pramāṇeṇenāvadhāritāḥ sākṣātkṛtā dharmāḥ padārthā hitāhitaprāptiparihāraprayojanā yena sa tathoktaḥ |
Even if we accept Vācaspati's weakened conception of trustworthiness, this Nyāya view entails that a piece of testimony can serve as an epistemic instrument only if its speaker possesses at least two virtues: she must have gained by means of an epistemic instrument a correct awareness of the content that her utterance conveys, and she must have the desire to sincerely convey the truth. Both of these virtues are (arguably) missing in cases like The Mistaken Deceiver and The Parrot. In both cases, the speaker hasn't determined the content of the utterance to be true by means of any epistemic instrument, and lacks the desire to convey the truth. So, the relevant linguistic utterances cannot be treated as epistemic instruments. Therefore, the resulting judgements cannot be knowledge-events.
The upshot: the early Naiyāyikas' commitment to Nyāya Infallibilism would have prevented them from treating epistemically lucky awareness-events as knowledge-events.
This strongly suggests that the conception of knowledge-events that these Naiyāyikas were working with was closer to our contemporary notion of knowledge. According to a simple account of knowledge, a belief (or, more generally, an information-bearing state) has the status of knowledge just in case it is true but not as a matter of luck. On the early Nyāya view, a knowledge-event is simply a conscious non-recollective experience or thought in undergoing which one non-luckily or non-accidentally acquires true information. Thus, it's plausible to think of knowledge-events as conscious mental events of learning or knowledge-acquisition. This way of connecting this conception of knowledge-events to the notion of knowledge explains at least two aspects of early Nyāya epistemology. First, it explains why these Naiyāyikas thought that the causes underlying knowledge-events couldn't give rise to any error. Second, it also explains why they thought that knowledge-events have to be non-recollective: since reliable recollective awareness-events (typically) only help us retrieve information that we had already acquired earlier, we don't independently acquire any true information through them. Thus, even when they are true (and reliable), they cannot be events of knowledge-acquisition.
What unifies the early Nyāya approach to knowledge-events is a form of epistemic anti-localism. According to the early Naiyāyikas, the production of inferential and testimonial knowledge depends on the transmission of knowledge from other causally upstream awareness-events that belong either to the agent herself or to some other agent. For example, in the case of inference, the production of an inferential knowledge-event depends on whether the initial steps of the relevant cognitive process-the agent's initial awareness of the reason as a property of the site or her awareness of pervasion-are themselves knowledge-events. An inferential judgement can be a knowledge-event only if these initial awareness-events are. So, the epistemic status of these causally upstream awareness-events matters. Similarly, in the case of testimony, the epistemic virtues of the speaker play an important role: unless the speaker undergoes a knowledge-event regarding the content that she wishes to convey, the resulting testimonial awareness cannot be a knowledge-event. Once again, the production of knowledge in this case depends on the epistemic status of a causally upstream awareness-event. This form of anti-localism explains why these early Nyāya authors wouldn't treat epistemically lucky awareness-events as knowledge-events. As we shall see later, Gaṅgeśa rejects this form of anti-localism.
Gaṅgeśa on Inference and Epistemic Luck
In Khaṇḍanakhaṇḍakhādya (KKh), Śrīharṣa showed that the early Nyāya theory of knowledge-events doesn't handle cases of epistemic luck well. On the one hand, since Naiyāyikas like Udayana were committed to the Nyāya Definition of Knowledge, they couldn't rule out epistemically lucky inferential judgments (like my judgements in Mist and Fire and Horns and Cows) from the class of knowledge-events. Yet, given their other commitments, these Naiyāyikas also couldn't treat these as knowledge-events. For none of the characteristic epistemic virtues that are supposed to accompany inferential knowledge-events are present in such cases.
Here, I won't rehearse Śrīharṣa's arguments. 18 In this section, my aim will be to show that Gaṅgeśa partially concurs with Śrīharṣa: he agrees that, if his own preferred version of the Nyāya Definition of Knowledge is right, inferential judgements based on pseudo-reasons cannot be excluded from the class of knowledge-events. In TCM, Gaṅgeśa endorses a version of the Nyāya Definition of Knowledge.
A non-recollective awareness of something at a place where it exists is a knowledge-event. Alternatively, it is an awareness that attributes a certain qualifier to something that possesses that qualifier. The awareness of something at a place where it doesn't exist, or an awareness that attributes a qualifier to something that possesses the absence of that qualifier is not a knowledge-event (apramā). 19 We can state this more precisely.
Gaṅgeśa's Definition of Knowledge. An awareness-event is a knowledge-event if and only if (i) it is a non-recollective awareness, and (ii) if it attributes a qualifier (prakāra) x to a qualificand (viśeṣya) y by a relation R (or presents y as characterised by x in virtue of the relation R), then x is related to y by R.
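Put schematically (the notation here is mine, not Gaṅgeśa's, and is offered only as a rough gloss on the clause just stated): let $K(a)$ say that the awareness-event $a$ is a knowledge-event, $\mathrm{NonRec}(a)$ that $a$ is non-recollective, $\mathrm{Attr}(a,x,y,R)$ that $a$ attributes the qualifier $x$ to the qualificand $y$ by the relation $R$, and $\mathrm{Rel}(x,y,R)$ that $x$ is in fact related to $y$ by $R$. Then:

\[
K(a) \;\leftrightarrow\; \mathrm{NonRec}(a) \,\wedge\, \forall x\, \forall y\, \forall R\, \bigl(\mathrm{Attr}(a,x,y,R) \rightarrow \mathrm{Rel}(x,y,R)\bigr).
\]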
To see how the definition works, consider a case where I see a banana before me as yellow. Here, the banana is the qualificand, while the yellow colour I perceptually attribute to it is the qualifier. My perceptual awareness is a knowledge-event just in case the yellow colour that I perceptually attribute to the banana is actually present in the banana.
Apply this definition to cases like Mist and Fire and Horns and Cows. In these cases, the relevant agent forms inferential judgements on the basis of pseudo-reasons. Can such an inferential judgement be a knowledge-event? Given Gaṅgeśa's definition of knowledge-events, the answer (we might think) has to be a resounding "Yes." In Mist and Fire, for example, if the qualificand of my inferential judgement is the hill and the qualifier is fire, then my judgement indeed is a knowledge-event by Gaṅgeśa's lights. For there is fire on the hill. However, Gaṅgeśa's discussion of these cases reveals that things aren't as straightforward as they appear. Let's see why.
Non-probativity
A good place to begin will be Gaṅgeśa's general definition of pseudo-reasons in Anumānakhaṇḍa. Gaṅgeśa offers three distinct definitions: In that context, the property of being a pseudo-reason (hetvābhāsatva) is (i) the property of being an intentional object of a true awareness which is the counterpositive (pratiyogin) of an absence that serves as a cause of an inferential knowledge-event (anumiti), or (ii) the object such that an awareness of the inferential mark, which has that object as an intentional object, is an impediment (pratibandhaka) to an inferential knowledge-event, or (iii) the property of being something which, when it is being apprehended (jñāyamāna), serves as an impediment to an inferential knowledge-event. 20 The first definition simply says that, if an agent were to correctly judge that the inferential mark of a putative inference had the property of being a pseudo-reason, she wouldn't be able to arrive at the relevant inferential knowledge-event. In this sense, the absence of such a correct judgement (which has as its intentional object the property of being a pseudo-reason) is a cause of the inferential knowledge-event.
The second definition restates that very idea in a slightly different way: it says that, if an agent were aware of an inferential mark as possessing the property of being a pseudo-reason, then that awareness would prevent the relevant inferential knowledge-event from arising. Finally, the third definition says that the property of being a pseudo-reason, when it is being apprehended, itself prevents the relevant inferential knowledge-event from arising. Notice that all of these three definitions gesture at the same idea: namely, that if an agent were to (correctly) judge that a putative inferential mark is a merely apparent or defective reason, then she couldn't arrive at an inferential knowledge-event on the basis of it. In other words, such a judgement serves as a source of defeating evidence, which, in turn, prevents the agent from (rationally) making the inferential judgement that she would have made otherwise. Therefore, a pseudo-reason isn't, by definition, an inferential mark that prevents the agent from arriving at inferential knowledge-events. While this doesn't immediately tell us whether an inferential knowledge-event can be based on a pseudo-reason, it does clear some theoretical space for saying so.
In a later section of Anumānakhaṇḍa-named "asādhakatāsādhakaprakaraṇa"-Gaṅgeśa takes up this question. Suppose that, in the context of a debate, a participant points out that her opponent's argument relies on a pseudo-reason. This reply counts as a good response (saduttara) to that argument. Why is this so? A plausible answer: in pointing out a pseudo-reason, this participant is able to prove that the relevant inferential mark is non-probative (asādhaka), i.e., that it doesn't prove the presence of the relevant target. This, in turn, blocks the opponent's argument. What is non-probativity (asādhakatva)? Before presenting his preferred proposal, Gaṅgeśa considers, and rules out, a number of proposals about what non-probativity could be. We will focus on a proposal to which Gaṅgeśa devotes the greatest amount of attention: namely, that non-probativity is simply the property of not producing a correct awareness of the site as characterised by the target (samīcīnasādhyaviśiṣṭapakṣapratyayājanakatva).

20 TCM C II.1 763.1-2 and 764.1-2: tatrānumitikāraṇībhūtābhāvapratiyogiyathārthajñānaviṣayatvaṃ, yadviṣayatvena liṅgajñānasyānumitipratibandhakatvaṃ, jñāyamānaṃ sad anumitipratibandhakaṃ yat tattvaṃ hetvābhāsatvam | Gaṅgeśa's use of the term "anumiti" is somewhat inconsistent. As we shall see, he sometimes qualifies the expression with adjectives like "yathārtha" or "satya" (both of which roughly mean "true"); in such contexts, he seems to use the term merely to refer to inferential awareness-events (which may or may not be true), and not to inferential knowledge-events. So, whenever he uses such an adjective, I have translated the term as "inferential awareness"; in other cases, I have translated it as "inferential knowledge-event."
Gaṅgeśa notes a troubling consequence of this proposal. If this view is true, and if we agree that defective reasons are non-probative, then one cannot arrive at a true inferential judgement by reasoning from a pseudo-reason. This would imply that, since the inferential mark in Mist and Fire is unproved by nature (svarūpāsiddha), I cannot arrive at a true inferential judgement in that situation. But my final judgement is indeed correct in that case! So, the proposal fails. Gaṅgeśa explains this idea as follows: It [i.e. non-probativity] is also not the property of not producing a correct awareness of the site as characterised by the target. For (i) in cases involving rebutted (bādha), incompatible (viruddha) and unproved (asiddha) reasons, when the site isn't a locus of the target, a true (satya) awareness of the target isn't well-established, and (ii) an inferential awareness of fire (vahnyumiti) in a site that contains fire-which arises from an erroneous awareness of mist as smoke-is true (satya). 21 Gaṅgeśa's argument is this. At least, in cases where the site doesn't contain the target, some pseudo-reasons, e.g., rebutted, incompatible or unproved reasons, cannot yield any true awareness of the site as characterised by the target. However, in a case like Mist and Fire, where the target is genuinely present in the site, even an inferential mark that is unproved in the site (svarūpāsiddha) can give rise to a true inferential awareness.
Gaṅgeśa considers two distinct strategies for resisting this conclusion.
Strategy 1. In Mist and Fire, the inferred target (i.e., the fire) isn't present in the site (i.e., the hill). So, the inferential judgement is false.

Strategy 2. In Mist and Fire, the site (i.e., the hill) or the target (i.e., the fire) appears in the final inferential judgement as connected to the relevant inferential mark (i.e., smoke) in a certain way. But, since the inferential mark is absent from the site, the final judgement is false.
Both these strategies, according to Gaṅgeśa, are unsuccessful.
Strategy 1
Let's begin with Strategy 1.
What makes this strategy attractive from a Nyāya standpoint? The Naiyāyikas are realists about intentional objects of awareness: if anything is an intentional object of a conscious thought or experience, it must exist independently of that thought or experience. This compels them to accept a misplacement theory of error (anyathākhyātivāda). According to this theory, when an agent misperceives an object o as characterised by some property F, the erroneous awareness ascribes to that object a property, i.e., F, which she has earlier veridically perceived elsewhere. So, when I misperceive a mother-of-pearl as silver, my erroneous perceptual awareness ascribes to the mother-of-pearl the property of silverhood that I have encountered elsewhere. Let's now see how this applies to cases like Mist and Fire. In Mist and Fire, I misperceive the hill as containing smoke. On the misplacement theory, the smoke that I ascribe to the hill is smoke that I have seen elsewhere, e.g., in the kitchen. If this is right, one could argue that my final judgement in these cases is false. For, if the smoke that I ascribe to the hill is absent from the hill, then the fire I ascribe to the hill-insofar as it accompanies the smoke I perceptually ascribe to the hill-also cannot be present on the hill. This is precisely the conclusion that defenders of Strategy 1 support.
Partially following Śrīharṣa's treatment of such cases in Khaṇḍanakhaṇḍakhādya, 22 Gaṅgeśa offers two responses to this strategy.
Moreover, it is not the case that it is simply some other fire that appears in that awareness. For, there is no evidence for this, since that fire may be recognized [later], and such an inferential awareness is possible in a case that involves just one individual [as the target]. 23

According to Gaṅgeśa, there is no good reason for us to think that the inferred target in cases like Mist and Fire is in fact missing from the site. In Mist and Fire, for example, after inferentially judging that there is fire on the hill, if I walk up to the fire that is actually present on the hill, I can-at least seemingly-recognize it as the fire that I inferred. If we take this to be a genuine case of recognition, then we must concede that the fire I inferred is indeed the fire that I now see. This argument isn't all that convincing: the opponent could simply deny that this is a genuine case of recognition. But Gaṅgeśa's second response (which is borrowed from Śrīharṣa) is more persuasive: he points out that the proposal in question cannot succeed when it comes to an episode of reasoning which involves just one object as its target. Consider Horns and Cows: here, the target is a universal (sāmānya), i.e., cowhood. 24 The opponent cannot argue that the cowhood that I ascribe to the animal is distinct from the cowhood that is present in the animal that I see. For there is just one such property!

Despite these problems, the opponent might insist that a version of this strategy could still be made to work. In a case like Mist and Fire, I take the inferred fire to be identical to something that pervades the smoke that I saw. Thus, in my final inferential judgement, something that pervades the defective inferential mark (i.e., the smoke) appears as fire. But, if there is really no smoke on the hill, the fire that is present on the hill cannot pervade the defective inferential mark. Similarly, in Horns and Cows, I take cowhood to be identical to something that pervades the possession of horns. Thus, in my final inferential judgement, something that pervades the defective inferential mark (i.e., the possession of horns) appears as cowhood. But, if cowhood doesn't really pervade the possession of horns, the cowhood that is present in the animal cannot pervade the defective inferential mark. So, in each case, the inferential judgement will end up being false.
Once again, Gaṅgeśa thinks that this strategy cannot succeed.
[The opponent:] With regard to the defective inferential mark, one is aware of the identity with something that is pervaded by fire. And, thus, with regard to fire, one is also aware of the identity with something that pervades the defective inferential mark. Otherwise, there wouldn't be an awareness of the defective inferential mark as pervaded by fire. In this manner, something that pervades the defective inferential mark appears as fire. So, the inferential awareness of fire is simply untrue.
[Reply:] No. For, in a case where an inferential awareness [of fire] arises due to the superimposition of smoke on to light that is pervaded by fire, there would be the consequence that the inferential awareness doesn't fail to be true despite there being a variety of unprovedness, because there is an identity between what pervades light and what pervades smoke. 25 Gaṅgeśa is imagining a case like this.
Light and Fire. I look at a hill and mistake what is in fact light to be smoke emerging from the hill. In fact, there is no smoke on the hill. Earlier, on numerous occasions, I had noticed, in kitchens, etc., that smoke goes hand in hand with fire. On the basis of those observations, I had judged that wherever there is smoke, there is fire. Now, I remember that. So, I conclude, "There's fire on the hill." My judgement is true: there is fire on the hill.
In this case, the inferential mark-i.e., smoke-is defective: it suffers from the fault of being unproved by its own nature (svarūpāsiddha), since it is absent from the site. According to the opponent, something that pervades that defective reason appears as fire in my final judgement. But since there is light on the hill, the hill contains fire, and that fire pervades the light present on the hill. Since the fire that pervades the smoke isn't distinct from the fire that pervades the light, the final inferential judgement (which ascribes to the hill a fire that pervades the smoke) will come out true.
Strategy 2
Let's move on to Strategy 2. According to the defenders of this strategy, when I conclude that there is fire on the hill in Mist and Fire, the inferential mark somehow appears as an intentional object in my final inferential judgement. For example, the content of my judgement could be expressed in one of the following two ways: (1) The hill that contains smoke also contains fire.
(2) The hill contains the fire that pervades smoke.
Both these judgements are false. (1) is false because the hill doesn't contain any smoke; (2) is false because the fire that is present on the hill doesn't accompany (and therefore doesn't pervade) smoke.
Gaṅgeśa thinks that this strategy is hopeless. 26 First of all, he thinks that there is no evidence for thinking that the reason actually appears as an intentional object in the final inferential judgement. Second, he invokes a case like this.
Darkness. I can't tell whether darkness is a positive entity (bhāva) like a material object or its size or colour, or a negative entity (abhāva), e.g., a mere absence of light. I have noticed that both positive and negative entities are knowable (prameya). So, despite being uncertain about whether darkness is a positive or a negative entity, I reason like this, "Is it a positive entity or a negative entity? In both cases, it's knowable." Thus, I reason from both those properties-positivity (bhāvatva) and negativity (abhāvatva)-to the conclusion that darkness is knowable.
On one way of reconstructing the reasoning, it involves a conjunctive inferential mark which combines these two mutually incompatible properties. If this were true, then the inferential mark would be unproved by nature (svarūpāsiddha). For nothing is both a positive entity and an absence. But the inferential judgement that darkness is knowable is undeniably true. But, if the opponent were to say that the inferential mark appears as an intentional object of this judgement, she would be forced to say that this judgement is false. That is the problem.
In response, the opponent might argue that the inferential mark in this case isn't a conjunction of both positivity and negativity, but in fact is disjunctive, i.e., the property of being positive or negative (bhāvābhāvānyataratva). In reply, Gaṅgeśa makes two points. First, he notes that an inference like this could be made even by someone who isn't aware of any pervasion between this disjunctive property and knowability. Moreover, he says that including "either…or…" (anyataratva) into the specification of the inferential mark is an unnecessary qualification (vyarthaviśeṣaṇa), presumably because it would not rule anything out from the scope of the inferential mark (since everything is either positive or negative). 27 The opponent also cannot claim that, in this case, the inferential judgement is true because it has the content, "Darkness, which is characterised by some property (either positivity or negativity) that is pervaded by knowability, is knowable." For, the analogous judgement in Mist and Fire, "The hill, which is characterised by some property that is pervaded by fire, contains fire," is also true. 28 In response to all these problems, the opponent might simply point out that there are good reasons for thinking that, in any case of reasoning, the inferential mark does appear as an intentional object of the final inferential judgement.
[The opponent:] The inferential mark is an intentional object of the inferential awareness, (i) because, as a matter of rule, it is the intentional object of any awareness of the reason as a property of the site, (ii) because it is the intentional object of [the awareness of] pervasion, and (iii) because, as a matter of rule, it is the intentional object of the awareness of the qualifier [i.e., the target] which causes the inferential awareness, just like hillhood and like the target. Moreover, [in a case where] there exists a causal complex for the awareness of some other qualifier with respect to something that has been apprehended as possessing a qualifier, there is-in that very case-an awareness of the qualification of a qualified object (viśiṣṭavaiśiṣṭyajñāna). So, with respect to that very thing [i.e., the hill] which is qualified by smoke, there is an inferential awareness of being qualified by fire. 29

There are two arguments here. The first argument is relatively simple. Since the inferential mark appears in every essential step of the inference (the awareness of the site as possessing it, the recollective awareness of pervasion, and the subsumptive judgement), it must also appear in the final inferential judgement. In this respect, it should be on the same footing as the delimitor of sitehood (pakṣatāvacchedaka, i.e., the property that specifies which object plays the role of the site in the inference). In Mist and Fire, the delimitor of sitehood is hillhood (parvatatva). It appears as a qualifier of the hill not only in the initial awareness of the hill as possessing smoke and in the subsumptive judgement, but also in the final inferential judgement. The same is true of the target, i.e., fire. The target appears in the recollection of pervasion as well as in the subsumptive judgement. But it also appears in the final inferential judgement. Given that these two components of the inference appear as intentional objects in the final inferential judgement, why shouldn't the same be true of the inferential mark?

Footnote 27 continued: This point is surprisingly underexplained in all the extant commentaries of the passage. That is why I am forced to reconstruct Gaṅgeśa's rationale for saying this on my own.

28 TCM C II.1 990.9-12: "[The opponent:] In that case [i.e., Darkness], being pervaded by the target is the rule (tantra). And there is no rebutting defeat (bādha) in that respect. [Reply:] If this is right, then, in the case of an inferential awareness that arises from a defective inferential mark (kūṭaliṅga), the site's possessing something that is pervaded by fire is the rule. Moreover, in that case [i.e., Mist and Fire], there is indeed something that is pervaded by fire" (atha sādhyavyāpyatvam eva tatra tantraṃ tatra ca bādho nāstīti cet | tarhi kūtaliṅgakānumitau vahnivyāpyavattvam eva tantraṃ vahnivyāpyañ ca kiñcit tatrāsty eva |).

29 TCM C II.1 990.18 and 991.1-5: atha liṅgam anumitiviṣayo niyamataḥ pakṣadharmatājñānaviṣayatvāt vyāptiviṣayatvāt niyamenānumitihetuviśeṣaṇadhīviṣayatvāc ca parvatatvavat sādhyavac ca | kiñcaikaviśeṣaṇavattvenā jñāte [yatra] viśeṣaṇāntaradhīsāmagrī tatraiva viśiṣṭavaiśiṣtyajñānam iti dhūmaviśiṣṭa eva vahnivaiśiṣtyānumitir iti |
The second argument is different. When some object o is apprehended as qualified by some property F and the causal conditions for a further awareness of o as qualified by some other property G are present, then G should appear in the resulting awareness as the qualifier of an o that is already qualified by F. This is what Gaṅgeśa calls the awareness of the qualification of a qualified object. So, in Mist and Fire, if I am already aware of the hill as qualified by smoke, then, even when I infer the presence of fire on the hill, the fire should appear in my inferential judgement as a qualifier of a hill that is already qualified by smoke. If that happens, my inferential judgement will be false.
In response to these two arguments, Gaṅgeśa gives one final (and I think decisive) response. He appeals to a variant of Mist and Fire:

Mist and Fire Redux. I look at a hill and see what looks like smoke emerging from it. What I see is a wisp of mist. But there is smoke elsewhere on the hill. On the basis of what I see, I judge that there is smoke on the hill. Since I remember that fire always accompanies smoke, I judge, "There's fire on the hill." My judgement is true: there is in fact fire on the hill. 30

In this case, the inferential mark, i.e., smoke, is defective, but not altogether absent from the site. 31 Thus, even if the inferential mark were to appear as an intentional object (i.e., as a qualifier of the hill) in my final inferential judgement, my final judgement wouldn't be false. Gaṅgeśa explains the idea as follows.

Now, let this be true. Even then, when smoke is present by chance (daivāt) on that hill, how can the inferential awareness be untrue even with respect to that part? Therefore, the fire that is brought about by wet fuel is a pervader of smoke, not any other fire. Moreover, it is not the case that, since some other fire pervades smoke in virtue of firehood, that other fire is also a pervader of smoke. For smoke is present even in the absence of that other fire. In this manner, the following is also refuted: "Due to an erroneous awareness of smoke with respect to mist, a fire that pervades smoke appears [in the inferential awareness], and that fire doesn't exist in that case. So, that inferential awareness is not true." For, when the smoke is present by chance, the inferential awareness is true. 32

The point is this. Even if the smoke appears as an intentional object in the final inferential judgement, the judgement could be entirely true when there is in fact smoke as a matter of luck on the hill. The opponent cannot reject this conclusion by arguing that the kind of fire I infer isn't the same kind of fire that is present on the hill. For, given that smoke is only produced due to the combustion of wet fuel, the only kind of fire that pervades smoke is the fire that is produced from wet fuel. In Mist and Fire Redux, since I take the fire that I infer to be a pervader of smoke and there is in fact smoke on the hill, the inferred fire isn't distinct in kind from the fire that is present on the hill. So, the inferential awareness comes out true, and the opponent's strategy fails. 33

I think these passages reveal a significant aspect of Gaṅgeśa's approach to cases like Mist and Fire. Gaṅgeśa seems to concede that, if the Nyāya Definition of Knowledge is right, episodes of reasoning that involve apparent or defective reasons can yield true (satya) awareness-events. In fact, in this very section, when Gaṅgeśa states his own considered view (siddhānta), he accepts all the objections that he himself put forward as part of the prima facie position (pūrvapakṣa). That is why he doesn't take a non-probative inferential mark to be something that doesn't produce any correct awareness about the target. Rather, he defines non-probativity as follows: We reply. Non-probativity is the property of not producing any awareness of the target in a state where there is an awareness of itself [i.e., of non-probativity]. 34

The idea is simple. As we already know from Gaṅgeśa's definition of pseudo-reasons, if one were to judge that an inferential mark is an apparent or defective reason, one wouldn't (rationally) judge-on the basis of the reason-that the target is indeed present in the site. That is precisely what makes such inferential marks non-probative. In the same way, therefore, the non-probativity of an inferential mark (like the misperceived smoke in Mist and Fire) doesn't by itself prevent a true inferential awareness from arising. However, if an agent were to recognize that the reason in question is non-probative, she (if rational) wouldn't infer the target on the basis of it. Thus, this account leaves open the possibility that the inferential judgements that arise in cases like Mist and Fire are knowledge-events.

30 For Śrīharṣa's version of the case, see KKh 389.11-16.

31 For discussion of whether inferential marks of this sort can be treated as unproved (asiddha), see Saha (2003, ch. 4).

32 TCM C II.1 991.1 and 992.1-6: astu tāvad evaṃ, tathā 'pi daivāt tatra dhūmasattve kathaṃ tadaṃśe'py asatyatā | ata eva ārdrendhanaprabhavo vahnir dhūmavyāpako nānyaḥ | na ca vahnitvena vyāpakatvād anyo'pi tathā, tena vinā'pi dhūmasattvāt | evaṃ bāṣpe dhūmabhramāt dhūmavyāpako vahnir bhāsate sa ca tatra nasty eveti na sānumitiḥ satyeti nirastaṃ | daivād dhūmasattve satyatvād iti |

33 Gaṅgeśa's own commentators don't agree with him here. For example, Rucidatta points out that, even if the smoke is present by chance in the hill, it cannot appear as an intentional object of the final inferential judgement. He writes (TCM T II.2 181.9-12): "This is to be considered here. Even when smoke is present by chance on that hill, it is not an intentional object of the inferential awareness, because there is no subsumptive judgement that portrays it as pervaded by fire, and it is accepted that the awareness of an inferential mark [in the final inferential judgement] takes place in virtue of its being presented (upanīta) by the subsumptive judgement. For, otherwise, it couldn't be included amongst good reasons (saddhetu). And, thus, [in Mist and Fire], since mist, under the guise of smoke (dhūmatvena), becomes the intentional object of the subsumptive judgement, only the mist which is presented by that subsumptive judgement becomes the intentional object of the inferential awareness under the guise of smoke. Otherwise, the subsumptive judgement in that case wouldn't also be erroneous. So, how can the inferential awareness be true with respect to that part [which concerns the inferential mark]?" (atredaṃ cintyam | daivāt tatra dhūmasattve'pi sa nānumitiviṣayaḥ | tasya vahnivyāpyatvenāparāmarśāt tadupanītatvena liṅgabhānābhyupagamāt | anyathā tasya saddhetutvenāsaṃgrāhyatvāt | tathā ca bāṣpasya dhūmatvatvena parāmarśaviṣayatvāt tadupanītasyaiva dhūmatvenānumitiviṣayatvam | anyathā parāmarśo'pi tatra bhrānto na syāt iti tadaṃśe katham anumiteḥ satyatvam iti |). The Tirupati edition contains two typographical errors here: it prints "śintyam" instead of "cintyam" and "matvena" instead of the first occurrence of "dhūmatvena." I have corrected those. Rucidatta goes on to suggest that the inferential mark must appear in the final inferential judgement as a delimitor of sitehood (pakṣatāvacchedaka, i.e., a property that specifies which object plays the role of the site) (TCM T II.2 182.10). If the misperceived smoke appears in the final inferential judgement as a delimitor of sitehood, that judgement will be false. While this solution seems to work in Mist and Fire, it doesn't work in cases like Horns and Cows which involve deviating (but not unproved) inferential marks. For example, in Horns and Cows, I correctly take the animal to have horns. So, even if the horns appear in the final awareness-event as delimitors of sitehood, the final inferential judgement will remain true.

34 TCM C II.1 992.6-7: ucyate | svajñānadaśāyāṃ pakṣe sādhyapratyayājanakatvam asādhakatvam |
Gaṅgeśa on Testimony and Epistemic Luck
Gaṅgeśa's stance on cases like The Parrot and The Mistaken Deceiver is much less clear. On the one hand, Gaṅgeśa's own definition of knowledge-events seems to straightforwardly predict that these are knowledge-events: in The Parrot and The Mistaken Deceiver, my judgements ascribe to the next room a pot that it actually contains. How can they fail to be knowledge-events? 35 On the other hand, Gaṅgeśa's own definition of testimony as an epistemic instrument doesn't seem compatible with this verdict: "The epistemic instrument that is testimony is produced by a true awareness (tattvajñāna) about the content (artha), which serves as a cause of the utterance (prayoga)." 36 On a natural interpretation, this says that a linguistic utterance has the status of an epistemic instrument just in case it is produced by a true awareness of its own content. In cases like The Parrot and The Mistaken Deceiver, the linguistic utterances that produce my judgements aren't produced by the speaker's true awareness of their contents. Therefore, Gaṅgeśa's definition of testimony as an epistemic instrument seems to imply that these judgements aren't produced by epistemic instruments, and therefore aren't knowledge-events. In this section, my aim is to resolve this apparent inconsistency.

35 Here's an additional piece of evidence. In his commentary Prakāśa on Nyāyakusumāñjali, Gaṅgeśa's son, Vardhamāna (14th century CE), takes cases like The Mistaken Deceiver to show that the presence of epistemic defects (doṣa), e.g., the desire to deceive, etc., amongst the causes of a testimonial awareness needn't prevent such an awareness from being a knowledge-event: "Since the property of being an epistemic instrument is observed to be present in a sentence uttered by a mistaken deceiver in virtue of its conformity to reality despite the presence of the speaker's defects, a defect is also not conducive (prayojaka) to the absence of the property of being an epistemic instrument (aprāmāṇya)" (vaktṛdoṣe saty api bhrāntavipralambhakavākye saṃvādāt prāmāṇyadarśanād doṣo'pi na aprāmāṇyaprayojakaḥ |). This remark seems to clearly concede that, in cases like The Parrot and The Mistaken Deceiver, my judgement that there is a pot in the next room is a knowledge-event.

36 TCM C IV.1 1.4-5: prayogahetubhūtārthatattvajñānajanyaḥ śabdaḥ pramāṇam | In my translation, I am taking "artha" to mean the content of the uttered sentence, rather than any arbitrary object.

We shall focus here on a section called "śabdaprāmāṇyavādaḥ" in Śabdakhaṇḍa of TCM. At the beginning of that section, Gaṅgeśa's Vaiśeṣika opponent casts doubt on the status of testimony as an independent epistemic instrument. According to this opponent, in cases where an agent comes to know something on the basis of testimony, her knowledge-event is in fact based on an inference. Why? For both Gaṅgeśa and his Vaiśeṣika opponent, the content of a sentence (vākya) is simply a semantic relation (saṃsarga) amongst the referents of different words (pada) that are part of the sentence. The Vaiśeṣika thinks that, on hearing a linguistic utterance, a hearer can correctly infer which semantic relation the speaker intends to convey simply on the basis of certain properties of the utterance. What is the structure of that inference? The Vaiśeṣika explains:
Even then, testimony is not a distinct epistemic instrument. For the semantic relation amongst the referents of words is proved simply on the basis of an inference: namely, "Words like 'Bring the cow with the stick', or words in the Veda, are preceded by an awareness of a semantic relation which (i) is an intentional object of [the speaker's] intention (tātparyaviṣaya) and (ii) holds amongst the recollected referents of those words. For they are a group of words that have syntactic dependency and so on, just like [the words occurring in] 'Bring the pot.'" 37

37 TCM C IV.1 22.1, 23.1, 25.1-2, and 29.1: tathā 'pi śabdo na pramāṇāntaraṃ padārthasaṃsargasyānumānād eva siddheḥ | tathā hi gām abhyāja daṇḍeneti padāni vaidikapadāni vā tātparyaviṣayasmāritapadārthasaṃsargajñānapūrvakāṇi ākāṅkṣādimatpadakadambatvāt ghaṭam ānayetivat |

Let's unpack this. Consider a situation where a speaker utters the sentence, "Bring the cow with the stick!" On Gaṅgeśa's view, such a command can be true or false: it means that the act of bringing a cow can be achieved by means of effort and is a means to some desired outcome, but doesn't bring about any pain which exceeds the pain that is necessary for bringing about that outcome. 38 When a hearer is exposed to a command like this, how does she become aware of its content? The Vaiśeṣika would tell the following story. This string of linguistic expressions satisfies three conditions: (i) syntactic dependency (ākānkṣā), i.e., the dependency of the expressions in virtue of which they together convey a content, (ii) contiguity (āsatti), i.e., the temporal proximity amongst the utterances of the expressions, and (iii) semantic fitness (yogyatā), i.e., the absence of rebutting knowledge-events (bādhakapramā) that show that the content of the utterance is false. When the hearer correctly judges that a string of linguistic expressions satisfies these conditions, she may infer (on that basis) that the relevant expressions were in fact produced by the relevant speaker's awareness of a content which (i) the speaker wants to communicate, and (ii) which consists in a semantic relation amongst the referents of the relevant expressions. On the basis of this inference, the hearer may understand, and judge as true, the content of the relevant utterance. In this inference, the site consists in the words that are uttered. The target is the property of being preceded by the speaker's awareness of a semantic relation which she intends to convey and which holds amongst the referents of those expressions. The reason is the property of being a group of words that satisfy conditions like syntactic dependency, and so on.

38 It might be surprising to see Gaṅgeśa accept the view that commands like, "Bring the cow!" (gām abhyāja) or injunctions like, "One should worship a stūpa" (caityaṃ vandeta) can be assessed for truth or falsity. According to Gaṅgeśa (and many other Indian philosophers), utterances of this sort can motivate an agent to act in virtue of involving verbal endings that exhort the agent to act, e.g., the imperative suffix (loṭ) and the optative suffix (liṅ). In the section of TCM called "vidhivāda," he argues that what motivates the agent to undertake an action in such cases is her awareness of the action as (i) accomplishable by means of effort (kṛtisādhya), (ii) a means to a desired outcome (iṣṭasādhana), and (iii) not giving rise to pain which exceeds the pain that invariably accompanies the relevant desired outcome (iṣṭotpattināntarīyakaduḥkhādhikaduḥkhājanaka). For discussion, see TCM C IV.2 144.2-4 and 174.5-186.1. On the basis of this claim, he concludes that the exhortative verbal endings like the imperative or the optative suffix should refer to all three of these properties. Thus, a sentence like "One should worship a stūpa" or "Bring the cow!" would just mean that the relevant act (of worshipping a stūpa or bringing the cow) is accomplishable by means of effort, is a means to a desired outcome, and doesn't bring about pain that exceeds the necessary amount of pain. If the act in question doesn't have one of these characteristics, then the sentence can be false. If it has all of them, the sentence will be true.
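Laid out in the standard site-target-reason format (this layout is my own schematic summary of the inference just described, not a formulation found in the text):

\[
\begin{array}{ll}
\text{Site (pakṣa):} & \text{the uttered words, e.g., ``Bring the cow with the stick''}\\[2pt]
\text{Target (sādhya):} & \text{being preceded by the speaker's awareness of a semantic relation, which she}\\
& \text{intends to convey, amongst the recollected referents of those words}\\[2pt]
\text{Reason (hetu):} & \text{being a group of words possessing syntactic dependency, contiguity, and}\\
& \text{semantic fitness}\\[2pt]
\text{Corroborating example:} & \text{``Bring the pot''}
\end{array}
\]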
Soon, however, Gaṅgeśa considers a Nyāya objection to this account: namely, that the reason in question deviates from the target, i.e., is present at a place where the target is absent.
[A Naiyāyika:] In the case of a sentence uttered by a deceiver, there is a deviation. For, in that case, there is no awareness of a semantic relation on the basis of the observation of some specific characteristic. One shouldn't say, "Since it is not possible for anyone to construct a sentence without an awareness of a semantic relation, it is possible for that [deceiver] to have a suppositional awareness of a semantic relation (āhāryaṃ tasya saṃsargajñānam)." For, first of all, it is possible for one to construct sentences-just like a parrot-simply on the basis of one's awareness of the relevant words, and the same is the rule even in other cases. 39 The point is this. Suppose you want to deceive me. You know that there is no pot in the next room, but you tell me that there is one. Even though the words that you utter satisfy all three conditions mentioned above, the target won't be present here: since you don't judge (on the basis of any specific piece of evidence) that there is a pot in the next room, your utterance isn't preceded by your judgement that the relevant content is true (i.e., that the semantic relation amongst the referents of the words holds). So, the reason deviates from the target. The Vaiśeṣika might offer the following response: since it's not possible for anyone to construct a sentence without grasping its content, even the deceiver must undergo some sort of suppositional awareness about the content of the relevant sentence before she utters it. But this response fails: just as a parrot can string together expressions without understanding the content of the relevant utterance, so also can the deceiver construct utterances without grasping or reflecting on the content of her utterances.
The Vaiśeṣika replies to this objection as follows.
No, because even that deceiver has an awareness of the semantic relation, since he utters the sentence with the intention (āśaya), "This sentence will convey to this person the semantic relation amongst the referents of the words." Moreover, [the Nyāya objection fails] because there is an absence of semantic fitness. Therefore, in the case of sentences that don't conform to reality and are uttered in the manner of a parrot, there is no deviation. Rather, the awareness of a semantic relation, which arises from testimony, takes place due to an error regarding semantic fitness. 40 The passage contains two arguments. First of all, the Naiyāyika opponent is simply wrong in thinking that, in the case of the deceiver, the speaker lacks an awareness of the content of her utterance. For, the deceiver utters the relevant expressions precisely because she wishes to convey a certain content to the hearer, and she couldn't have that desire without undergoing an awareness regarding that content. So, even if the reason is present in this case, the target isn't absent. Thus, the charge of deviation is avoided. However, this reply isn't robust. Consider a case where a parrot mechanically utters a false sentence, and an agent undergoes an awareness on the basis of it. In such a case, the kind of intention that underlies the deceiver's utterance is missing. So, the problem of deviation will remain intact.
That is possibly why the Vaiśeṣika offers a second argument: when a sentence is false, one of the three conditions mentioned above-namely, semantic fitness-is absent. According to Gaṅgeśa, semantic fitness is the absence of rebutting knowledge-events (bādhakapramāviraha), i.e., roughly, knowledge-events that show that the relevant content is false. 41 If the content of a sentence is false, there always are rebutting knowledge-events-e.g., knowledge-events belonging to Īśvara, an omniscient God-like being-which show that the relevant content is false. Thus, in this case, the expressions uttered by the deceiver lack semantic fitness. So, since the reason is absent in this case, the problem of deviation doesn't arise. Similarly, in cases where a parrot or a child mechanically utters a false sentence, the reason put forward in the Vaiśeṣika inference doesn't deviate from the target, because, in those cases too, semantic fitness is absent. However, in such cases, the hearer may still come to understand what the sentence means, because she mistakenly thinks that the expressions are semantically fit.

41 "And that is an absence-which resides in the semantic relation with a referent of one word-of being the qualificand in a knowledge-event regarding the counterpositiveness of an absence that resides in the referent of another word" (ucyate bādhakapramāviraho yogyatā, sā cetarapadārthasaṃsarge'parapadārthaniṣṭhātyantābhāvapratiyogitvapramāviśeṣyatvābhāvaḥ |). This is somewhat complicated. But the meaning is relatively simple. Consider a sentence like, "He sprinkles it with fire" (vahninā siñcati). The relevant words lack semantic fitness, because there is a rebutting knowledge-event that shows that sprinkling isn't the kind of act that can be performed by means of fire. In this case, the referent of "with fire" is the instrumenthood that resides in fire (vahniniṣṭhakaraṇatā). Normally, the semantic relation with this kind of instrumenthood would reside in the referent of a verb by a relation of determinanthood (nirūpakatva), since the referent of the verb, i.e., an action, determines which object plays the role of an instrument in relation to it. But, we know, the referent of the verb "sprinkles"-sprinkling (seka)-isn't the kind of act that can be performed by means of fire. Therefore, the relevant semantic relation is known to be absent from the referent of that verb. Thus, the semantic relation with the referent of "with fire" is the qualificand of a knowledge-event where it appears as the counterpositive of an absence that resides in the referent of "sprinkle." So, semantic fitness is absent. For this explanation, see Mathurānātha's Rahasya (TCM C IV.1 263.4-7).

This, however, paves the way for a different worry for the Vaiśeṣika. With reference to cases like The Mistaken Deceiver and The Parrot, the Naiyāyika opponent says: There is a deviation in the case of a fact-conforming sentence (samvādivākya) uttered by a parrot or someone else, and a sentence uttered by a mistaken deceiver, which aren't accompanied by any awareness of a semantic relation. Moreover, how can there be a knowledge-event regarding a semantic relation, given that an inference about the awareness of the speaker is impossible? 42
So, the hearer can correctly judge that the relevant expressions satisfy the three conditions mentioned above. Therefore, the reason can be present in the site, i.e., the words occurring in the sentence. But the target is absent. In the parrot example, this is obvious: the parrot simply has no awareness as of there being a pot in the next room, so it couldn't have uttered the relevant expressions on the basis of its awareness of the content of the relevant sentence. Moreover, in a version of the deceiver example where the deceiver is mistaken, if the deceiver doesn't utter the relevant sentence on the basis of any awareness of its content (but merely on the basis of her awareness regarding the relevant words), the utterance won't be preceded by any awareness of that content. Thus, in both cases, despite the presence of the reason, the target will be absent. So, the problem of deviation cannot be avoided.
The Naiyāyika opponent's second remark raises a different problem. The Naiyāyika presupposes that, in The Mistaken Deceiver and The Parrot, since the sentence uttered by the parrot or the deceiver conforms to reality, the hearer's judgement is indeed a knowledge-event. But that isn't something that the Vaiśeṣika can easily accommodate. For the conclusion of the Vaiśeṣika inference could be false in such cases: given that neither the mistaken deceiver nor the parrot may undergo an awareness of the relevant sentential content, the hearer's inferential judgement that the speaker undergoes such an awareness may not be true. So, testimonial knowledge-events cannot be reduced to inferential knowledge-events. The Vaiśeṣika's response to these problems is somewhat cryptic: No. Moreover, it has been said [in "prāmāṇyavāde utpattivādaḥ"] that, if that awareness [which arises from the parrot's or the mistaken deceiver's utterance] were a knowledge-event regarding a semantic relation, the relevant sentence would be comparable to the Veda. 43

Mathurānātha explains the point as follows: "It would be comparable to the Veda." The meaning is that, just as, in the case of the Veda, the target is present due to its being preceded by a knowledge-event of Īśvara, so also is true in the relevant case. 44

We can unpack the thought as follows. If my judgements in The Parrot and The Mistaken Deceiver are knowledge-events, then the Vaiśeṣika will happily say that the relevant sentences are similar to the Veda. In the case of the Veda, the Vaiśeṣika's inference yields a correct conclusion precisely because Īśvara, the omnipotent and omniscient God-like being, has composed the Veda with the intention of communicating its content to us. Similarly, even when the parrot or the mistaken deceiver utters a sentence, Īśvara serves as the agent of the relevant utterance. For he is a cause of every effect. So, we may argue that the relevant utterance is in fact caused by Īśvara's true awareness of the relevant sentential content. Thus, the conclusion of the Vaiśeṣika's inference will come out true.

42 TCM C IV.1 48.2 and 49.1-2: atha saṃsargajñānaṃ vinā śukasyānyasya vā samvādivākye bhrāntapratārakavākye ca vyabhicāraḥ kathaṃ vā tatra saṃsargapramā vaktṛjñānānumānāsambhavād iti cet |

43 TCM C IV.1 49.3: na | yadi tac ca saṃsargapramā tadā vedatulyatety uktam |
Ultimately, Gaṅgeśa rejects the Vaiśeṣika's reduction of testimony to inference. But the Vaiśeṣika's response also contains a hint of a solution to the problem that we started out with. Recall that, for Gaṅgeśa, a piece of testimony can only serve as an epistemic instrument if it is caused by a true awareness of its content. Since, in The Parrot and The Mistaken Deceiver, the speakers needn't have any true awareness of the relevant sentential contents, the relevant sentences cannot straightforwardly be treated as epistemic instruments. But the Vaiśeṣika's response shows us a way out. Following the Vaiśeṣika, we could argue that, in each of these cases, the utterance of the relevant sentence is caused by Īśvara's true awareness of its content. Thus, the sentence can indeed end up having the status of an epistemic instrument. 45 As we shall later see, Gaṅgeśa himself will endorse this solution in the section of TCM called "prāmāṇyavāde utpattivādaḥ."

The lesson is this. If my arguments in this section and the last are sound, then Gaṅgeśa's conceptions of inference and testimony as epistemic instruments don't exclude epistemically lucky awareness-events from the class of knowledge-events. This aspect of Gaṅgeśa's view creates trouble for him. For it cannot easily be reconciled with his commitment to Nyāya Infallibilism. This is precisely the problem that we shall now turn to.

45 However, in such cases, the view doesn't belong to a Naiyāyika, but rather to a Prābhākara, and don't add any new arguments to the discussion.

46 For a translation of this section, see Phillips and Tatacharya (2009, pp. 141-209).
The Problem of Epistemic Luck
Virtue Infallibilism. For any kind K of knowledge-events, there is a kind E of epistemic virtue associated with that kind K, such that, if any awareness belongs to that kind K, then it is produced by an instance of E.
What does this say? For each kind of knowledge-event-perceptual, inferential, analogical, or testimonial-there is a proprietary epistemic virtue that produces knowledge-events of that sort. According to Gaṅgeśa, there is no uniform epistemic virtue that serves as the cause of all knowledge-events.
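In the same schematic spirit (again, the notation is mine and serves only as a rough gloss): let $K$ range over kinds of knowledge-events (perceptual, inferential, analogical, testimonial), let $E_K$ be the epistemic virtue associated with the kind $K$, and let $\mathrm{Prod}(e,a)$ say that $e$ produces the awareness-event $a$. Then Virtue Infallibilism claims:

\[
\forall K\, \exists E_K\, \forall a\, \bigl(a \in K \;\rightarrow\; \exists e\, (e \text{ instantiates } E_K \wedge \mathrm{Prod}(e,a))\bigr).
\]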
We reply. There is no uniform epistemic virtue for every knowledge-event. Rather, depending on the circumstances (yathāyatha), the contact of a sense-faculty with many parts of an object and true awareness-events regarding an inferential mark, similarity, and a sentential content serve as epistemic virtues only individually with respect to each specific [kind of] knowledge-event, because there are positive and negative correlations [between each kind of knowledge-event and each of these virtues]. 47

As Gaṅgeśa goes on to explain, each specific kind of knowledge-event has a corresponding epistemic virtue that gives rise to it. In the case of perceptual knowledge-events, it is observation of specific characteristics, which-in the case of composite material objects-may be mediated by the contact of the relevant sense-faculty with a sufficiently large number of parts of the relevant object. 48 In the case of inferential knowledge-events, the epistemic virtue is a true subsumptive judgement, i.e., a correct awareness of the site as characterised by an inferential mark that is pervaded by the target. Finally, in the case of testimonial knowledge-events (including those produced by the Veda), the epistemic virtue is the (speaker's) true awareness about the content of the relevant sentence. 49

Gaṅgeśa's commitment to Virtue Infallibilism creates trouble for him. On the one hand, none of the epistemic virtues on the list given above are (or have to be) present in cases like Mist and Fire, Horns and Cows, The Mistaken Deceiver, and The Parrot. In Mist and Fire and Horns and Cows, my subsumptive judgement is certainly false, because the reason is either absent from the site or not pervaded by the target. In The Mistaken Deceiver and The Parrot, the basis of the speaker's utterance needn't be her true awareness of the content of the relevant sentence. Thus, if (following Gaṅgeśa's Definition of Knowledge) we treat my judgements in these cases as knowledge-events, Gaṅgeśa's version of Virtue Infallibilism will be really difficult to defend.

47 TCM C I 327.2-7: ucyate | pramāmātre ca nānugato guṇaḥ kintu tattatpramāyāṃ bhūyo'vayavendriyasannikarṣayathārthaliṅgasādṛṣyavākyārthajñānānāṃ yathāyathaṃ pratyekam eva guṇatvaṃ anvayavyatirekāt |

48 TCM C I 327.8-9: "Just as bile, etc. and error about the inferential mark, etc. serve as defects with respect to specific awareness-events that aren't knowledge-events, so also does the observation of a specific characteristic serve as an epistemic virtue with respect to perceptual knowledge-events, since it regularly accompanies them" (tattadaprāmāyāṃ pittādiliṅgabhramādīnāṃ doṣatvavat pratyakṣe viśeṣadarśanam api guṇaḥ tadanuvidhānāt |).

49 TCM C I 341.1-4: "Moreover, since an inferential knowledge-event, etc. doesn't arise from the mere absence of defects such as errors pertaining to an inferential mark, similarity and a sentential content, epistemic virtues such as a true subsumptive judgement are proved. In this manner, since any knowledge-event is produced by an epistemic virtue, a knowledge-event is produced even in the case of the Veda by an epistemic virtue, namely a true awareness regarding the contents of [Vedic] sentences. So, Īśvara is proved as the bearer of that epistemic virtue." (api ca liṅgasādṛṣyavākyārthabhramadoṣābhāvamātrān nānumityadir iti satyaparāmarśādiguṇasiddhiḥ | evaṃ pramāyā guṇajanyatvena vede'pi pramā vākyārthayathārthajñānaguṇajanyeti tadāśrayeśvarasiddhiḥ |).
This is simply an instance of a more general tension between the Nyāya Definition of Knowledge and Nyāya Infallibilism. Here, we will see how this problem is framed by Gaṅgeśa himself.
Let's begin with a standard objection, offered by Bhāṭṭa Mīmāṃsakas, against Virtue Infallibilism. According to earlier Naiyāyikas like Bhaṭṭa Jayanta, Vācaspati Miśra and Udayana, the Veda has the status of an epistemic instrument precisely because its author is Īśvara, who possesses a correct awareness of the contents of Vedic sentences. The Mīmāṃsaka disputes this: Since there is a rebutting defeater for this view in the case of Veda, e.g., the fact that no author of the Veda is recollected, and so on, therefore, even in ordinary practice, a sentence serves as the cause of a knowledge-event simply in virtue of its defectlessness. However, in the case of Veda, even though a speaker is absent, its defectlessness is determined solely on the basis of its permanence. 50 The thought is this. In the case of Veda, no author is recollected. This, according to the Bhāṭṭa Mīmāṃsaka, suggests that the Veda has no author. 51 So, in order to explain the status of the Veda as an epistemic instrument, we cannot appeal to its author's correct awareness of its contents. We can only appeal to the Veda's lack of epistemic defects (i.e., epistemic defects that normally give rise to misleading testimony). Similarly, the Mīmāṃsaka claims, ordinary testimony also serves as an epistemic instrument because it is defectless. So, Virtue Infallibilism is false.
Gaṅgeśa's response here is significant: No, because the rebutting defeater will be [later] refuted extensively, and because that defectlessness is absent from (i) the sentence uttered by a mistaken deceiver and (ii) the defect-induced sentence, "There is a cloth," that is uttered when the sentence, "There is a pot," is to be uttered, both of which are epistemic instruments insofar as they conform to the facts (saṃvādāt). Moreover, if that Mīmāṃsaka view is right, the sentence, "One should worship a stūpa," and a sentence uttered by a parrot, etc. by chance would also be epistemic instruments. For the defects of the speaker are absent in those cases and they are similar to the Veda in virtue of being independent of any epistemic instrument. 52

The passage contains two distinct arguments. First, Gaṅgeśa thinks that the Mīmāṃsaka is simply wrong in thinking that the Veda has no author; later, he will offer arguments against this position in Śabdakhaṇḍa. 53 On the one hand, if the Mīmāṃsaka is right, in a case like The Mistaken Deceiver or in a case where a true sentence is uttered instead of another false utterance due to a slip of the tongue, the relevant sentence wouldn't serve as an epistemic instrument due to the presence of epistemic defects. This is important: the fact that Gaṅgeśa takes this to be a problem for the Mīmāṃsaka clearly suggests that he takes these accidentally true awareness-events to be knowledge-events. Second, it's obvious that a false sentence uttered by a sincere Buddhist, e.g., the sentence, "One should worship a stūpa", or a false sentence uttered by a parrot or a child, cannot be an epistemic instrument. But, if the Mīmāṃsaka is right in thinking that any defectless sentence can be an epistemic instrument, even such a false sentence should generate knowledge-events on the Mīmāṃsaka's view. For such an utterance isn't accompanied by any of the typical epistemic defects, e.g., the desire to deceive, which give rise to misleading testimony.

50 TCM C I 344.5-7: vede kartrasmaraṇāder bādhakāt loke 'pi nirdoṣatvenaiva pramāhetutvam, vede tu nityatvenaiva vaktur abhāve'pi nirdoṣatvam avadhāryata iti cet |

51 See v. 368ab in the section called "vākyādhikaraṇa" in Kumārila's Ślokavārttika (ŚV 668.19).

52 TCM C I 345.1-6: na | bādhākasya bahuśo nirākariṣyamāṇatvāt bhrāntapratārakavākye ghaṭo'stīti vācye paṭo'stīti doṣajanyavākye ca saṃvādāt pramāṇe tadabhāvāt | kiṃ ca daivavaśasampannaṃ caityaṃ vandetety ādikaṃ śukabālādivākyam apy evaṃ pramaṇaṃ syāt, vaktṛdoṣābhāvāt pramāṇāpekṣatvena vedatulyatvāt |
However, the Mīmāṃsaka notices that the Naiyāyika, who is committed to Virtue Infallibilism, also faces a similar problem.
Even for you, how can a sentence uttered by a parrot, etc. or a mistaken deceiver be an epistemic instrument? For it is not produced by any epistemic virtue. Moreover, the following inference does not work: "Since such a sentence is not produced by any epistemic virtue and doesn't have as its intentional object what is intended [by the speaker], it is not an epistemic instrument." For, due to the conformity of the sentence to the facts, the target of this inference is rebutted. 54 As Gaṅgeśa and his predecessors acknowledge, the epistemic virtue that typically explains the status of a sentence as an epistemic instrument is the speaker's correct awareness regarding its content. This, presumably, is not (or needn't be) present in cases like The Mistaken Deceiver and The Parrot. So, if Gaṅgeśa and his predecessors accept Virtue Infallibilism, they cannot accept the sentences uttered in these cases to be epistemic instruments. Given that these sentences aren't produced by the epistemic virtues of the speaker (or given that their contents don't reflect the speaker's intention), why can't Gaṅgeśa and his Nyāya comrades simply accept the conclusion that these utterances aren't epistemic instruments? As Gaṅgeśa's opponent points out, these utterances conform to reality. So, the resulting awareness-events must be knowledge-events. This, in turn, rebuts any argument that seeks to show that the relevant utterances aren't epistemic instruments. That's bad news for Virtue Infallibilism.
However, a Naiyāyika could resist this conclusion in a different way. For example, she could claim that, at least according to a certain conception of semantic fitness, in cases like The Mistaken Deceiver, the relevant sentence lacks semantic fitness. Suppose we define semantic fitness not as the absence of rebutting defeaters for the content of a sentence, but rather as the absence of rebutting defeaters for the sentential content that the speaker is aware of. Since the mistaken deceiver doesn't take the content of her sentence to be true (and, in that sense, lacks awareness of it) and utters the relevant sentence on the basis of an erroneous awareness about how things are in the world, semantic fitness is absent in this case. The Mīmāṁsaka's response goes like this.
However, there is a view that says: "In the case of a sentence uttered by a mistaken deceiver, there is simply no semantic fitness. For semantic fitness is the absence of rebutting defeat for the sentential content that the speaker is aware of, and an intentional object of error is rebutted." That is wrong. For, since the [deceiver's] error-which takes the form, "There is no pot," with respect to something that contains a pot-has a different intentional object from the sentence [uttered by the deceiver], it is not a cause of that sentence. Moreover, for reasons of parsimony, semantic fitness is the absence of rebutting defeat for the content of a sentence. And, in that scenario, the content of the sentence is unrebutted. 55 The Mīmāṁsaka's reply has two parts. In The Mistaken Deceiver, when the deceiver says, "There's a pot in the next room," she incorrectly thinks that there is no pot in the next room. However, her error plays no (direct) causal role in generating her utterance. What explains her utterance is her desire to communicate a certain content to the hearer (which might in turn be explained by her error and her desire to deceive the hearer). So, even if she makes the mistake, why should that prevent an awareness-event based on the utterance from being a knowledge-event? Second, according to the Mīmāṁsaka, there is a simpler (and therefore preferable) notion of semantic fitness-namely, the absence of rebutting defeat for the content of the uttered sentence-which allows us to show that, in this case, the sentence uttered by the deceiver is semantically fit (given that its content cannot be rebutted).
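To make the disagreement vivid, we can regiment the two competing definitions of semantic fitness schematically (the notation is mine, not anything found in the text). Let $c(S)$ be the content of a sentence $S$, let $c_a(S)$ be the sentential content that the speaker is aware of, and let $R(x)$ say that there is a rebutting defeater for $x$:

$$\text{Naiyāyika proposal:}\quad \mathrm{Fit}(S) \equiv \neg R(c_a(S)) \qquad\qquad \text{Mīmāṁsaka proposal:}\quad \mathrm{Fit}(S) \equiv \neg R(c(S))$$

In The Mistaken Deceiver, the deceiver's own awareness has the rebutted content "There is no pot," so the sentence lacks fitness on the first definition; but the content of the uttered sentence is true and unrebutted, so it is fit on the second. The Mīmāṁsaka's appeal to parsimony is then an argument for preferring the right-hand definition.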
A Naiyāyika who doesn't treat a sentence uttered by a parrot as an epistemic instrument might raise a different problem: namely, that, when a hearer is exposed to the utterance of a parrot, she simply doesn't judge the content of the sentence to be true, but merely undergoes an awareness that takes the form, "This person [or animal] says this." Since the hearer doesn't judge the content of the sentence to be true, the relevant awareness can't be regarded as a testimonial knowledge-event. Once again, the Mīmāṁsaka dismisses this worry quite quickly.
It is also not to be said, "An awareness of the content [of the sentence] simply doesn't take place on the basis of a sentence uttered by a parrot, a child, and so on. Rather, there is an awareness of the following sort, 'This being says this.'" This is because one cannot deny the existence of a non-recollective awareness when the causal complex for the awareness of a semantic relation-e.g., syntactic dependency, etc.-is present, and because the resulting awareness is a true non-recollective awareness in virtue of conforming to the facts. 56 The point is simple: given that all the causes necessary for testimonial awareness are present in this case, a testimonial awareness (which has as its content the content of the uttered sentence) cannot fail to arise. And that awareness-event-insofar as it conforms to reality-must also be true. Thus, it will end up being a knowledge-event.
Finally, the Mīmāṁsaka opponent extends this objection to cases like Mist and Fire. As Gaṅgeśa himself says earlier, a true subsumptive judgement is the epistemic virtue that invariably precedes all inferential knowledge-events. However, in Mist and Fire, given that there is no smoke on the hill, the subsumptive judgement that the hill possesses smoke that is pervaded by fire cannot be true. So, given the absence of this epistemic virtue, the resulting awareness cannot be a knowledge-event. But, according to Gaṅgeśa's own definition of knowledge-events, this is a knowledge-event.
If this were so, the inferential awareness of fire in a place that indeed contains fire-based on an erroneous awareness of smoke-wouldn't be a knowledge-event. For it wouldn't be produced by a true awareness about the inferential mark. And it is not the case that some other fire is simply the intentional object in that case. For the fire is recognized, and, in the case of an inference involving just one individual like cowhood, that [i.e., some other individual] is absent. Moreover, there is no superimposition of an identity with something else. For, even though the inferential awareness may be an error-due to a superimposition of an identity-with respect to the part that involves the superimposition (upadhāna) of the inferential mark, it would be a knowledge-event with respect to the part that concerns the target. 57 As we have already seen from Gaṅgeśa's own discussion of such cases, a Naiyāyika cannot escape the conclusion that inferential judgements that arise in cases like Mist and Fire are knowledge-events. As the Bhāṭṭa points out, one cannot argue that the inferred fire is in fact distinct from the fire that is actually present on the hill, because one can perceptually recognize the fire on the hill as the fire that one inferred earlier, and the same strategy of response isn't available in a case like Horns and Cows. The Naiyāyika also cannot argue that, in a case like Horns and Cows, the agent's mistake lies in taking cowhood to be identical to some other property. For, even then, insofar as the agent correctly takes the target, i.e., cowhood, to be present in the animal, the awareness at least will be true (and therefore a knowledge-event) in relation to the part that concerns the target. 56 TCM C I 347.3-4 and 348.1-2: na ca śukabālādivākyād arthabodha eva na bhavati, kiṃ tv evaṃ ayaṃ vadatīty evaṃ prakārā pratītir iti vācyam | ākāṅkṣāder anvayabodhasāmagryāḥ sattve'nubhavānapalāpāt | saṃvādena yathārthatvānubhavāc ca | 57 TCM C I 348.2-5 and 349.1-2: evaṃ dhūmabhramād vahnimaty eva vahnyanumitir na pramā syāt yathārthaliṅgajñānājanyatvāt | na ca vahnyantaram eva tatra viṣayaḥ pratyabhijñānāt gotvādyekavyaktike tadabhāvāc ca | nāpi tatrānyatādātmyāropaḥ, saṃsargāropād liṅgopadhānāṃśe bhramatve'pi sādhyāṃśe pramātvād iti | The general thrust of the Mīmāṁsaka's objection is this. If we accept Virtue Infallibilism, then all inferential and testimonial knowledge-events must be caused by some epistemic virtue. But, in cases like Mist and Fire, The Parrot, and The Mistaken Deceiver, no such virtues seem to be present. Yet, given the Nyāya Definition of Knowledge, it's difficult to avoid the conclusion that these are knowledge-events. So, there is a tension between Virtue Infallibilism and the Nyāya Definition of Knowledge.
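It may help to lay out the structure of the objection as an inconsistent triad (the regimentation, again, is mine rather than the Mīmāṁsaka's):

$$(1)\quad \forall a\,[\mathrm{NonRec}(a) \wedge \mathrm{True}(a) \rightarrow K(a)] \qquad \text{(the Nyāya Definition of Knowledge)}$$
$$(2)\quad \forall a\,[K(a) \rightarrow \exists v\,(\mathrm{Virtue}(v) \wedge \mathrm{Causes}(v,a))] \qquad \text{(Virtue Infallibilism)}$$
$$(3)\quad \exists a\,[\mathrm{NonRec}(a) \wedge \mathrm{True}(a) \wedge \neg\exists v\,(\mathrm{Virtue}(v) \wedge \mathrm{Causes}(v,a))] \qquad \text{(the luck cases)}$$

(1) and (3) together entail that some knowledge-event has no epistemic virtue among its causes, contradicting (2). As we are about to see, both of Gaṅgeśa's solutions proceed by denying (3): the first finds the missing virtue in Īśvara's awareness, while the second redescribes the virtue so that it turns out to be present in the luck cases after all.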
Gaṅgeśa's First Solution: Appealing to Divine Awareness
Gaṅgeśa offers two distinct solutions to this problem. The first of these isn't original: Gaṅgeśa attributes it to "those who know the tradition" (sampradāyavit), i.e., presumably other Naiyāyikas. It involves an appeal to Īśvara's awareness as a cause of both inferential and testimonial knowledge-events.
We reply. In the case of a sentence uttered by a mistaken deceiver and a sentence uttered by a parrot, etc., since the sentences must be uttered by a trustworthy person (āpta) insofar as they are pieces of testimony that are epistemic instruments, the cause of those sentences is simply Īśvara's true awareness regarding the relevant sentential contents just as in the case of the Veda. For he is the agent of every effect. Moreover, even someone who claims that defectlessness makes a piece of testimony an epistemic instrument must admit that the utterance of a parrot, etc. is comparable to the Veda.
[The opponent:] If that were so, pseudo-testimony (śabdābhāsa) would completely disappear, since it would also have Īśvara as its speaker.
[Reply:] No. Since the content of such a sentence is false, it is not an intentional object of the awareness of Īśvara. 58 In framing her objection to Gaṅgeśa's Virtue Infallibilism, the Mīmāṁsaka opponent was assuming that, in The Parrot and The Mistaken Deceiver, even when my testimonial judgements are true, the utterances cannot be caused by the speaker's true awareness about the contents of the relevant sentences. So, if the opponent is right, the kind of epistemic virtue that normally explains the truth of testimonial awareness-events is missing in these cases. Gaṅgeśa wants to deny precisely this. He starts from the simple observation that the sentences uttered by both the parrot and the mistaken deceiver are epistemic instruments insofar as they produce knowledge-events. Now, according to the Naiyāyikas, any piece of testimony that is an epistemic instrument must be produced by a trustworthy speaker. Who is the trustworthy speaker in the case of these utterances? It can't be the deceiver or the parrot. It has to be Īśvara, given that he is implicated in every effect as an agent. So, these utterances too are produced by Īśvara's true awareness of the relevant contents. Thus, the epistemic virtue which, for Gaṅgeśa, explains the status of testimony as an epistemic instrument-namely, the speaker's true awareness of the sentential content-will be present even in these cases. This, however, doesn't mean that there cannot be any misleading testimony-what Gaṅgeśa calls pseudo-testimony-that produces erroneous testimonial awareness-events. For, in cases of misleading testimony, Īśvara's awareness of the relevant sentential content isn't present, since he cannot undergo any false awareness. That explains why, in such cases, a knowledge-event cannot arise.
This story can be easily extended to cases like Mist and Fire. How?
In this manner, even in the case of a knowledge-event produced by a pseudo-reason, the cause is simply Īśvara's awareness of the hill as possessing something pervaded by fire. This is what people who know the tradition say. 59 Recall Mist and Fire. In that scenario, my inferential judgement that there is fire on the hill is true but my subsumptive judgement that the hill contains smoke that is pervaded by fire is false. For there is no smoke on the hill. This, in turn, implies that the proprietary epistemic virtue that explains inferential knowledge-events goes missing in this case. Gaṅgeśa thinks this is wrong. Since Īśvara causes every effect, his awareness of the hill as containing something that is pervaded by fire can serve as the true subsumptive judgement that causes my inferential judgement. Thus, even in Mist and Fire, my inferential judgement can be caused by an epistemic virtue.
This treatment of cases like Mist and Fire seems to have been popular amongst Naiyāyikas of this period. For example, in his refutation of Śrīharṣa, Khaṇḍanoddhāra, Vācaspati Miśra II (14th century CE) appeals to an explanation like this. 60 One of Gaṅgeśa's early commentators, Jayadeva Miśra (15th century CE), offers some insight into the motivation for this solution. 59 TCM C I 350.1-2: evaṃ liṅgābhāsajanyapramāyām api vahnivyāpyavattvajñānam īśvarasyaiva janakam iti saṃpradāyavidaḥ | 60 Vācaspati Miśra II discusses this case in the context of an objection to the later Nyāya view that an instrument (karaṇa) is a cause of an effect that gives rise to that effect through the mediation of an operation (vyāpāra) that it produces. Ordinarily, in the case of inferential awareness-events, this role would be played by the recollective awareness of pervasion, which gives rise to an inferential judgement through the mediation of a subsumptive judgement that it produces. But if the epistemic virtue that explains inferential knowledge-events is Īśvara's subsumptive judgement, it becomes hard to explain how that could be produced by a recollection of pervasion. For Īśvara's mental states are all permanent. Vācaspati poses and then solves the problem in the following manner (KU 58.3-7): "[The opponent:] First of all, you have said that, in a case where there is an inferential awareness (anumiti) that has the status of being a knowledge-event but is produced by a pseudo-reason, its knowledgehood (prāmāṇya) is produced by an epistemic virtue in virtue of an inference (anumāna) that has the nature of Īśvara's subsumptive judgement. In that case, that subsumptive judgement simply cannot be an operation, since it is permanent.
[Reply:] True. Since Īśvara's awareness is the cause of the universe, it serves as a cause in this case too. That is precisely why the inferential awareness is a knowledge-event owing to his virtue insofar as he is the agent of the inferential awareness. However, even in that scenario, the instrument is the recollection of pervasion which belongs solely to the person who makes the inference and which produces the subsumptive judgement" (nanu liṅgābhāsajanyā yatra pramābhūtānumitis tatra tatprāmāṇyam iśvaratṛtīyaliṅgaparāmarśarūpānumānāt guṇajanyam iti tāvad āttha tatra hi sa vyāpārabhūto na bhavati nityatvād iti cet | satyam | jagatkāraṇatvāt tad atrāpi kāraṇaṃ tata eva cānumānakartus tasya guṇād anumitiḥ pramā | karaṇas tu tatrāpi anumitibhāja eva tṛtīyaparamarśajananī vyāptismṛtir iti |).
Rather, a knowledge-event [in these cases] is the virtue, and that has a knowledge-event as its result. Moreover, it is not the case that a suppositional superimposition that accidentally conforms to reality is of that sort. If this were the case, then it would follow that a distinct epistemic instrument [other than the four recognized ones] exists, since-without having a uniform and non-overextended property as its delimitor-something cannot have the status of being an epistemic instrument. 61 Let's flesh out this thought more carefully. On one way of reading Jayadeva, this solution is inspired by a certain general principle about epistemically indirect awareness-events (e.g., inferential, analogical, or testimonial knowledge-events): namely, that an epistemic virtue that gives rise to an epistemically indirect knowledge-event must itself be a knowledge-event. In The Mistaken Deceiver, even though a deceiver's utterance may be based on an accidentally true suppositional awareness about the content of the relevant sentence, her suppositional awareness isn't a knowledge-event. So, it cannot play the role of an epistemic virtue in relation to my testimonial awareness-event.
Why is that plausible? Jayadeva offers an argument. If the Naiyāyika were to treat a suppositional awareness as an epistemic virtue, she would have to revise her account of testimony. She would have to say that a sentence can serve as an epistemic instrument insofar as its utterance is based on either (i) the speaker's knowledge of its content, or (ii) the speaker's true suppositional awareness of its content. This would imply that there is no uniform or non-disjunctive (anugata) property which is shared by all sentences that serve as epistemic instruments. Why is this bad? Each epistemic instrument, according to Jayadeva, is delimited by a uniform property that doesn't extend to other epistemic instruments. If this principle is right, then we shouldn't treat sentences that are uttered on the basis of suppositional awareness-events as instances of the same epistemic instrument as sentences that are uttered on the basis of knowledge-events. So, we will be compelled to posit some new epistemic instruments over and above the four traditional ones. To avoid these two problems, it is better to say that testimonial knowledge-events must ultimately be based on Īśvara's knowledge.
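Jayadeva's constraint can be put schematically (the formalization is mine): for each epistemic instrument $P$, there must be a single property $F$ such that

$$\mathrm{Delimits}(F,P) \wedge \mathrm{Uniform}(F) \wedge \neg\mathrm{Overextends}(F),$$

where $\mathrm{Uniform}(F)$ rules out disjunctive delimitors and $\neg\mathrm{Overextends}(F)$ requires that $F$ not extend to other epistemic instruments. If testimonial instruments were delimited by the disjunctive property of being uttered on the basis of either knowledge or true supposition, uniformity would fail; restoring it would force us to split testimony into two instruments, i.e., to posit a pramāṇa beyond the traditional four. Hence the appeal to Īśvara's knowledge as the uniform virtue.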
This solution seems somewhat ad hoc. The Naiyāyika is forced to recognize Īśvara's true awareness as an epistemic virtue, because she doesn't have any other way of explaining how an accidentally true awareness could be a knowledge-event. 62 Arguably, this is why Gaṅgeśa distances himself from this proposal. In the next section, I will lay out his own solution to the problem of epistemic luck. 61 Āloka in TCMA 167.11-13: vastutas tu pramā guṇaḥ | sā ca pramāphalam | na cāhāryāropaḥ kākatālīyasamvādas tathā | tathā sati prāmāṇāntarāpatter anugatānatiprasaktadharmāvacchedakaṃ vinā pramāṇatvāsambhavāt | 62 Jayadeva's commentary strongly suggests this reading of Gaṅgeśa's attitude towards this solution. See TCMA 167.16-20.
Gaṅgeśa's Second Solution: Epistemic Localism
Gaṅgeśa's second solution to the problem of epistemic luck focuses solely on the case of testimony. Against his Nyāya predecessors, he argues that the production of testimonial knowledge-events doesn't depend on the epistemic virtues of the source of a sentence (e.g., its speaker), but rather on properties of the sentence and its hearer. This strategy is an instance of epistemic localism: by reducing epistemic virtues to local properties of a sentence and its hearer, it downplays the epistemic significance of upstream causal factors in the production of knowledge-events. Gaṅgeśa explains his solution to the problem of epistemic luck in the following passage.
Here, we say the following. In ordinary practice, the true awareness of the speaker doesn't serve as an epistemic virtue with respect to testimonial knowledge-events. Rather, semantic fitness and so on, or a true awareness about those conditions, does. For that is parsimonious, and these conditions are necessary for testimonial knowledge-events. There is no true awareness of semantic fitness in the case of a non-fact-conforming sentence (vākye visaṃvādini) that is produced by error, carelessness, or the desire to deceive. For the content of that sentence is rebutted. The same is true of a non-fact-conforming sentence that is uttered, due to a lack of dexterity in the organs of speech (karaṇāpāṭava), regarding something when something else is to be stated. However, when a sentence conforms to the facts, it is indeed an epistemic instrument. 63 Gaṅgeśa's argument is that, in the case of ordinary speech, the speaker's true awareness of the sentential content doesn't serve as an epistemic virtue. Rather, semantic fitness, etc., or the hearer's true awareness regarding these factors, plays this role. Why does that matter? Two kinds of cases need to be considered. In a scenario where a parrot or child utters a false sentence, the sentence doesn't conform to the facts. So, there is a rebutting defeater (bādhaka) for that content (i.e., a piece of evidence that would decisively show that the content is false). But, as we already know, semantic fitness involves the absence of knowledge about such defeaters. Things are different in cases like The Parrot and The Mistaken Deceiver. In such cases, the content of the relevant sentence conforms to the facts. Hence, there is no such rebutting defeater. As a result, semantic fitness, or a true awareness regarding it, may indeed be present. So, these sentences can serve as epistemic instruments. 64 63 TCM C I 350.3-7 and 351.1: atra brumaḥ | śābdapramāyāṃ loke vaktṛyathārthajñānaṃ na guṇaḥ, kiṃ tu yogyatādikaṃ yathārthatajjñānaṃ vā | lāghavād āvaśyakatvāc ca | bhramapramādavipralipsādijanye vākye visaṃvādini na yathārthayogyatājñānam, vākyārthasya bādhitatvāt | evaṃ karaṇāpāṭavād anyasmin vaktavye anyābhidhāne visaṃvādini, saṃvādini tu pramāṇam eva | 64 At the end of the passage, Gaṅgeśa says (TCM C I 351.1-2): "In some cases, there is simply no awareness of semantic fitness" (yogyatādijñānam eva kvacin nāsti |). Here, Mathurānātha explains that this remark addresses an objection against the proposal that semantic fitness itself is the epistemic virtue when it comes to testimonial knowledge-events. In some cases, even though the linguistic expressions that are uttered may be semantically fit, a testimonial knowledge-event may not arise. For example, if I have misleading evidence for thinking that you are a liar, then, even when you utter a true sentence, I might not be able to rationally judge that what you say cannot be rebutted. So, I won't take your sentence to be semantically fit. As a result, I won't judge the content of your sentence to be true, and therefore won't undergo a testimonial knowledge-event. In such cases, a general cause of testimonial awareness-namely, an awareness of semantic fitness (unaccompanied by any doubt about semantic fitness)-is missing. See Rahasya in TCM C I 350.18-21: "[The opponent:] If semantic fitness is the virtue, then, in some cases, semantic fitness exists in itself [without giving rise to any testimonial knowledge-event]. But why does a testimonial knowledge-event not arise [in such a case]? [Reply:] So, he has said: 'In some cases,…' And, thus, the effect is absent [in such cases] simply due to the absence of a general cause of testimonial awareness, i.e., an awareness of semantic fitness" (atha yogyatā ced guṇaḥ tadā kvacit svarūpasatī yogyatā vartate śābdapramā kathaṃ na jāyate ity ata aha 'kvacid' iti, tathā ca śābdabodhasāmānyakāraṇayogyatājñānābhāvād eva kāryābhāva iti bhāvaḥ |). This second solution to the problem of epistemic luck runs into an objection. Gaṅgeśa claims elsewhere in Śabdakhaṇḍa that "the status of testimony as an epistemic instrument is dependent on [the underlying] intention (tātparya)." 65 According to Gaṅgeśa, the intention underlying an utterance is its having a certain aim (tatprayojanakatva), which could be either an awareness that the speaker wants the hearer to undergo, or an action that the speaker wishes the hearer to perform. 66 So, what does it mean for the status of testimony as an epistemic instrument to be dependent on the underlying intention?
According to the commentator, Mathurānātha, this means that whether a piece of testimony produces a non-recollective awareness depends on the hearer's awareness of the speaker's intention. Gaṅgeśa's second solution contradicts this principle. According to this solution, a correct (or incorrect) awareness of the speaker's intention (tātparya) isn't a necessary condition for any testimonial knowledge-event. But this means that an agent can undergo a testimonial knowledge-event even when she misconstrues the intention underlying an utterance or simply has no clue about what it is. Mathurānātha offers a concrete example of this, which we can spell out as follows. 67 Hari. You are an avowed atheist. Your friend, who believes in the many Hindu gods, wants to dispel what she takes to be your illusion. So, in order to convey the idea that Viṣṇu exists, she says, "Hari exists" (harir asti). But you misconstrue her intention, since "hari" also stands for lions. So, you come to judge that a lion exists. That, of course, is true.
According to Gaṅgeśa's second solution, since you have a correct awareness of semantic fitness, etc., your judgement can indeed be a knowledge-event, and therefore the relevant sentence should be treated as an epistemic instrument. But this contradicts the view that an awareness of the speaker's intention is necessary for testimonial knowledge-events.
Gaṅgeśa addresses this objection as follows.
65 TCM C IV.1 319.2: tātparyadhīnaṃ śabdaprāmāṇyam | Mathurānātha glosses the statement as follows (TCM C IV.1 319.5-7): "The implied meaning of '…is dependent on intention' is that the status of testimony as an epistemic instrument, i.e., as a producer of non-recollective awareness-events, is dependent on intention, i.e., is also dependent on the awareness of intention; the awareness of intention also assists testimony" (tātparyādhīnam iti tātparyādhīnaṃ tātparyajñānāsyāpy adhīnaṃ śabdaprāmāṇyaṃ śabdasyānubhavajanakatvaṃ, tātparyasya jñānam api śabdasya sahakārīti phalitārthaḥ |). 66 TCM C IV.1 325.5-6.
[The opponent:] If this is so, then, in the presence of a true awareness of semantic fitness, etc., even a fact-conforming awareness regarding some other semantic referent (śakya), which arises from a polysemous expression that is meant to convey something else, would be a knowledge-event, either when there is an error with respect to the [underlying] intention or without that error. So, even in that case, that sentence would be an epistemic instrument.
[Reply:] No. For this is accepted. Moreover, since that sentence doesn't produce a knowledge-event with respect to the intentional object of the [underlying] intention, it is not an epistemic instrument [with respect to that object]. In this manner, even in the case of the Veda, only a true awareness of semantic fitness is the epistemic virtue. So, Īśvara isn't proved on the basis of the fact that Veda-based knowledge-events are produced by epistemic virtues. 68 Gaṅgeśa's response is subtle. First, he thinks that this result-namely, that a true judgement that is based on a misunderstanding about the speaker's intention can be a knowledge-event-is acceptable. However, he acknowledges also that there is a sense in which the sentence in question isn't an epistemic instrument. Since it fails to produce a knowledge-event with respect to the content that the speaker in fact intends to convey, it cannot be treated as an epistemic instrument with respect to that content.
In effect, Gaṅgeśa rejects an argument for Īśvara that earlier Naiyāyikas like Udayana put forward. 69 The argument goes like this. The Veda gives rise to testimonial knowledge-events. In the case of all testimonial knowledge-events, the epistemic virtue is (a) the speaker's true awareness of the content of the relevant sentence and (b) her desire to convey that content to the hearer (i.e., the intention). So, the author of the Veda too must be someone who has a true awareness of the content of the Veda and has the desire to convey that content to the hearer. But such an agent cannot be someone like us, since we don't have any independent access to the truths that the Veda communicates. So, the author of the Veda must be Īśvara. Gaṅgeśa thinks that this argument is unsound. For the epistemic virtue in the case of testimonial knowledge-events is neither the speaker's awareness of the relevant sentential content nor her intention. It is simply the hearer's true awareness regarding semantic fitness, etc. So, any attempt to prove the existence of Īśvara by appealing to the epistemic virtues underlying testimonial knowledge-events must fail.
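On this reading, Gaṅgeśa's positive proposal can be summarized in a schematic sufficient condition (a reconstruction in my own notation, not a quotation): where $h$ is the hearer, $S$ the uttered sentence, $c(S)$ its content, $\mathrm{Fit}(S)$ its semantic fitness (and kindred conditions), and $\mathrm{TrueAw}_h(x)$ says that $h$ has a true awareness of $x$,

$$\mathrm{True}(c(S)) \wedge \big[\mathrm{Fit}(S) \vee \mathrm{TrueAw}_h(\mathrm{Fit}(S))\big] \;\Rightarrow\; K_{\mathrm{test}}(h,S).$$

No conjunct on the left-hand side mentions the speaker's awareness or intention; that is precisely what blocks any inference from the Veda's status as an epistemic instrument to a divine author.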
[The opponent:] Let this be true. In the case of the Veda, the speaker's true awareness of the sentential content is also an epistemic virtue. For, in ordinary practice, that kind of awareness is a cause of any testimony that serves as an epistemic instrument. That is why you infer the speaker's awareness in ordinary practice. And, in the same way, the Veda is composed by an autonomous person who possesses a true awareness that has as its intentional object the contents of the relevant Vedic sentences. For it is a piece of testimony that is an epistemic instrument, just like the sentence, "Bring the cow." So, Īśvara is proved.
[Reply:] Don't say this. The status of being a piece of testimony that serves as an epistemic instrument is possible even without being preceded by any true awareness of the sentential content, due to the presence of necessary conditions such as semantic fitness and so on, or due to a true awareness regarding those conditions. This has been explained. So, this condition [namely, the speaker's true awareness regarding the sentential content] isn't conducive (prayojaka) [to the status of testimony as an epistemic instrument].
[The opponent:] Even then, in ordinary practice, [the utterance of] a sentence that is an epistemic instrument is thought to be caused by an awareness of the sentential content. So, how can such a sentence occur without that awareness?
[Reply:] No. For a sentence is uttered for the sake of practical undertakings, etc. on the basis of the awareness, "From such words, he will become aware of the sentential content," with the aim of producing an awareness of the sentential content in a person to be motivated (prayojya). Thus, the speaker's initial awareness of the sentential content is causally superfluous (anyathāsiddha). In fact, it doesn't cause the application of the string of words. For, just as in the case of a parrot, that collection of words is produced (upapatteḥ) simply by the causes of the individual words themselves. 70 Gaṅgeśa denies that the status of any sentence as an epistemic instrument depends on the speaker's true awareness of its content. The last paragraph explains why. Suppose a farmer tells her farmhand, "Bring the cow." The farmhand immediately judges that she is supposed to bring the cow, and complies with the command. In this case, according to Gaṅgeśa, the speaker utters the sentence with the explicit aim of motivating the farmhand to act. But the utterance isn't produced directly by the speaker's true awareness of the sentential content, but rather by her awareness, "From such words, the farmhand will become aware of the content of the sentence." 70 TCM C I 352.5-9 and 353.1-2: syād etat | vede vaktṛyathārthajñānam api guṇaḥ loke pramāṇaśabdaṃ prati tādṛśasya jñānasya hetutvāt | ata eva tava loke vaktṛjñānānumānam | evaṃ ca vedo vākyārthagocarayathārthajñānavatsvatantrapuruṣapraṇītaḥ pramāṇaśabdatvāt gām ānayeti vākyavad itīśvarasiddhiḥ | tathāpi loke vākyārthajñānaṃ pramāṇavākye kāraṇaṃ gṛhītam iti tena vinā kathaṃ tad iti cet-na | pravṛttyādyarthaṃ hi prayojyasya vākyārthajñānam uddiśyaitādṛśapadebhyo vākyārthaṃ jñāsyatīti buddhyā vākyaprayoga ity anyathāsiddhaṃ prathamaṃ vaktur vākyārthajñānam, na tu tādṛśapadāvalīprayoge tasya hetutvam | tādṛśapadasamūhasya pratyekapadahetor eva śukādivadupapatteḥ | So, under Gaṅgeśa's account, it is simply an awareness regarding the relevant words that causes the speaker to utter those expressions. Therefore, her own awareness of the sentential content is causally superfluous (anyathāsiddha) with respect to the utterance: even if she didn't have the relevant awareness, she could still make the utterance on the basis of a similar awareness regarding the individual words. Cases like The Parrot can be explained in exactly the same way: the parrot obviously has no clue about the content of the sentence it utters, but it utters the relevant expressions on the basis of some awareness about each of them. If this is right, then a speaker's true awareness of the sentential content cannot be regarded as a cause of her utterance. Hence, it cannot be treated as an epistemic virtue that causes the resulting knowledge-event.
Note that this argument (which is only intended to show that the speaker's true awareness of the sentential content isn't an epistemic virtue) generalizes to the case of intention. For, in the parrot case, the parrot might have no desire to convey any particular content to the hearer; it might only utter the relevant expressions out of a desire to simply utter those expressions. Since such a desire cannot explain the truth of the resulting testimonial awareness, it cannot be treated as an epistemic virtue.
Suppose this is right. Even then, couldn't Gaṅgeśa's Naiyāyika opponent still argue that the Veda functions as an epistemic instrument with respect to a content that its speaker intends to convey? And, if that is right, wouldn't that be enough to show that Īśvara exists? The opponent states her argument as follows.
The Veda is an epistemic instrument with respect to the intentional object of [the underlying] intention (tātparya). And the intention [underlying the Veda] is its being uttered due to a desire to produce the awareness of that intentional object. Moreover, without the Veda, people like us can't undergo any awareness about the imperceptible content of the Veda, in virtue of which people like us could utter the Veda out of a desire to produce an awareness of that content. Furthermore, it is not the case that such an awareness arises from the Veda itself. For this would lead to mutual dependence (anyonyāśraya). Therefore, that Veda-which is uttered out of a desire to produce an awareness of a certain content by someone who perceives the entire content of the Veda-is an epistemic instrument with respect to that content. So, such a desire is simply the epistemic virtue. Any knowledge-event regarding the content of the Veda is produced by that. Thus, an autonomous being, who is the greatest of all persons, is proved to be the locus of that virtue. 71 The argument starts out from the assumption that the Veda serves as an epistemic instrument with respect to some content that its speaker intends to convey. Who is this speaker? It cannot be a person like us. First, independently of the Veda, people like us aren't aware of the imperceptible truths about dharma that the Veda conveys. Second, the speaker who composes the Veda couldn't become aware of the content of the Veda by means of the Veda itself, since that would lead to a problem of mutual dependence: the existence of the Veda would depend on that person's awareness of its content, and the person's awareness of its content would depend on the existence of the Veda. So, the speaker of the Veda must be someone who (unlike us) has direct epistemic access to its content and who utters the Veda with the desire to produce in us an awareness of that content. This agent is none other than Īśvara. And his desire, according to the opponent, is the epistemic virtue that explains the status of the Veda as an epistemic instrument. 71 TCM C I 353.3-8 and 354.1: atha tātparyaviṣaye vedaḥ pramāṇaṃ tātparyañ ca tatpratītīcchayoccāraṇam, na cāsmadāder vedaṃ vinātīndriyavedārthagocarajñānaṃ, yena tatpratītīcchayoccāraṇaṃ bhavet | na ca vedād eva tat, anyonyāśrayāt | ataḥ sakalavedārthadarśinā yasya vedasya yadarthapratītīcchayoccāraṇaṃ kṛtaṃ sa tatra pramāṇam iti tādṛśecchaiva guṇas tajjanyā vedārthaprameti tadāśrayasvatantrapuruṣadhaureyasiddhir iti |
Gaṅgeśa's response is cautious. On the one hand, he doesn't want to say that the testimonial knowledge-events that arise from the Veda are as arbitrary, or as unconstrained by the speaker's intention, as the knowledge-events that arise from sentences uttered by a parrot. On the other hand, he stands by his previous argument, namely that the status of the Veda as an epistemic instrument cannot help us prove the existence of Īśvara. That is why he now argues that, though the testimonial knowledge that arises from the Veda is indeed constrained by the intention of some speaker, this speaker needn't necessarily be Īśvara.
[Reply:] Don't say this. Since a preceptor (adhyāpaka)-who is aware of the contents of Vedic sentences with the help of all the auxiliary disciplines (aṅga) like Mīmāṁsā, etc.-utters the Veda out of a desire to produce an awareness regarding those specific contents, someone who is truly aware of the content of the Veda indeed has an intention to convey those specific contents. Thus, having become aware of this intention (tatparatva), i.e., the Veda's being uttered by such earlier preceptors out of a desire to produce an awareness regarding those specific contents, future generations (uttarottareṣām) become aware of the content of the Veda. Thus, there is a beginningless sequence of intentions. What is the point of Īśvara?
[The opponent:] In that case, since a preceptor who doesn't know the content of the Veda doesn't utter the Veda out of a desire to produce an awareness of those specific contents, the Veda [when uttered by such a preceptor] isn't an epistemic instrument due to the absence of intention. Neither is there an ascertainment of the content of the Veda on the basis of that.
[Reply:] No. In the beginningless cycle of rebirth, that Veda has been uttered with the desire to produce the awareness of those specific contents at some time by someone who is aware of the content of the Veda on the basis of Mīmāṁsā, etc. The intention [underlying the Veda] obtains simply to that extent. 72 72 TCM I 354.1-6 and 355.1-6: maivaṃ, mīmāṃsādisakalāṅgasācivyād vedavākyārthajñānavatā 'dhyāpakena tattadarthapratītīcchayā vedasyoccāraṇam iti vedārthayathārthavidas tattadarthe tātparyam asty eva | evaṃ pūrvapūrvatādṛśādhyāpakena tattadarthapratītīcchayoccāritatvaṃ tatparatvam avagamyottarottareṣāṃ vedārthapratyaya ity anādis tātparyaparampareti kim īśvareṇa | tarhi vedārthānabhijñādhyāpakenoccāritavedasya na tadarthapratītīcchayoccāraṇam iti tātparyābhāvān na pramāṇaṃ na vā tato'rthaniścaya iti cet-na | anādau saṃsāre tasya vedasya kadācit kenacin mīmāṃsādyadhīnavedārthajñānavatā tatpratītīcchayoccāraṇaṃ kṛtaṃ tāvataiva tatparatvam iti | According to Gaṅgeśa, some sentences that are epistemic instruments don't have that status in virtue of producing knowledge-events with respect to the contents that their speakers intend to communicate. Rather, they have that status in virtue of producing true awareness-events about contents that their speaker didn't intend to communicate. For they are uttered either by a speaker who intends to communicate some other content (e.g., a person who uses a polysemous expression but is misunderstood), or by a speaker who has no such intention (e.g., a parrot or a babbling child). In such cases, the speaker's intention plays no role in explaining the status of the relevant sentence as an epistemic instrument. However, the Veda isn't like this: it serves as an epistemic instrument insofar as it produces awareness-events about certain fixed contents that it is intended to convey. But the agent to whom that intention belongs isn't Īśvara; it is just a preceptor who has understood the contents of Vedic sentences by means of exegetical tools like Mīmāṁsā, etc.
For our purposes, the important question is this. When the Veda produces knowledge-events about the contents that it is intended to communicate, does a true awareness of the speaker's intention play any role in generating the relevant knowledge-event? Gaṅgeśa's response to the objection raised in the passage quoted above suggests that the answer is "No." Suppose my teacher doesn't quite understand the Veda, but I am able to grasp its intended content (perhaps because I am better at Vedic exegesis than my teacher). In that case, even though my teacher may utter the Veda to convey some other content, my awareness of the true content of the Veda-which may be based on a misconstrual of my teacher's intention-would still count as a knowledge-event. Thus, even though I won't be getting my teacher's intention right, I would still gain an awareness of the intended content of the Veda in a looser sense, i.e., in the sense that, at some point in the beginningless cycle of rebirth, someone who correctly understood the Veda uttered the Veda precisely with the intention of communicating that content. Even in this case, therefore, grasping the immediate speaker's intention isn't necessary for a knowledge-event to arise. The hearer's true awareness of semantic fitness, etc. should suffice.
This view is an instance of epistemic localism. For Gaṅgeśa, testimonial knowledge-events are produced not due to the transmission of knowledge from a trustworthy speaker, but rather due to the truth-conducive properties of a sentence (its semantic fitness, etc.) and the hearer's true awareness of those truth-conducive properties. Therefore, the epistemically significant factors which explain testimonial knowledge-events are local to the sentence and the hearer and don't belong to the source of the sentence (i.e., the speaker). Now, we might worry that this localist strategy doesn't fit well with Gaṅgeśa's own remark that the status of testimony as an epistemic instrument depends on the underlying intention. Gaṅgeśa's commentator, Rucidatta, addresses this point with reference to cases like The Parrot.
[Objection:] Since, in the case of a sentence uttered by a parrot, etc., there is a testimonial awareness even though the absence of intention is ascertained, there is a deviation [from the rule that the status of testimony as an epistemic instrument depends on the hearer's awareness of the intention underlying the utterance]. It is not to be said that, since Īśvara's intention is present, this is not so. For it is impossible to apprehend that intention, since there is no means of apprehending it, given that the sentence uttered by a parrot and so on is unconstrained by exegetical rules, etc.
[Reply:] No. For, in that case, it is possible to apprehend that intention even by means of semantic fitness and so on, because, without an apprehension of those factors, an awareness of the semantic relation isn't established. This is what some say. According to others, that awareness of the underlying intention is said to be a cause only in other cases [i.e., in cases that don't involve a sentence uttered by a parrot, etc.]. 73 Rucidatta sketches two solutions on behalf of Gaṅgeśa. First of all, it is possible to argue that, in these cases too, Īśvara's intention brings about the relevant utterance, and it is possible for the hearer to grasp that intention on the basis of various features of the sentence such as semantic fitness, etc. The other strategy is to give up the idea that an awareness of the speaker's intention is actually necessary in cases like this. In other words, the claim that an awareness of the speaker's intention is a cause of testimonial knowledge-events needs to be qualified so that it applies only to some (but not all) testimonial knowledge-events.
Let's sum up. Contrary to some modern interpretations, Gaṅgeśa clearly thinks that the testimonial awareness-events that arise in cases like The Parrot and The Mistaken Deceiver are knowledge-events. 74 According to Gaṅgeśa's preferred proposal, the epistemic virtues that explain the epistemic status of those awareness-events are local to the sentence or to the hearer: it is either semantic fitness, etc., or the hearer's true awareness of semantic fitness, etc. This localist approach to epistemic virtues became influential amongst later Naiyāyikas. For example, in his 73 TCMP p. 12: nanu śukādivākye tātparyavyatirekaniścaye 'pi śābdabodhād vyabhicāraḥ, na ca tatrāpīśvaratātparyasattvān na tathātvam iti vācyam | śukādivākyasya nyāyaprakaraṇādyananurodhitayā grāhakābhāvena tatra tadgrahasyāśakyatvād iti cet, na, yogyatādināpi tatra tadgrahasaṃbhavāt | tadagrahe tatrānvayabodhāsiddher ity eke | tadatiriktasthala eva taddhetutvam uktam ity anye | 74 Modern commentators, such as Mukhopadhyay (1992) and Phillips (2012), think that Gaṅgeśa doesn't recognize the testimonial awareness-events that arise in The Parrot and The Mistaken Deceiver as knowledge-events. This, obviously, contradicts what I have been claiming. So, it's worth examining the views of these authors more carefully. Start with Mukhopadhyay: he thinks that, for Naiyāyikas, a sentence can have the status of an epistemic instrument only if it is produced by a speaker with the right epistemic virtues, so the sentence uttered in The Parrot or The Mistaken Deceiver can't be an epistemic instrument (Mukhopadhyay 1992, p. 285). Mukhopadhyay's argument isn't persuasive. As we have already seen, Gaṅgeśa thinks that, in these cases, the status of the relevant sentence as an epistemic instrument isn't explained by any epistemic virtue of the immediate speaker. The relevant epistemic virtue is either (i) Īśvara's true awareness of the content of the relevant sentence, or (ii) conditions like semantic fitness, etc. or the hearer's true awareness regarding such conditions. So, Mukhopadhyay is misreading Gaṅgeśa. Phillips' argument is slightly more promising: he appeals to the fact that, for Gaṅgeśa himself, the hearer's awareness of the speaker's intention is a necessary condition for testimonial knowledge-events; this is missing in these cases (Phillips 2012, p. 85). Our discussion shows that Phillips is wrong. Gaṅgeśa thinks that an awareness of the immediate speaker's intention isn't necessary for a testimonial knowledge-event. Following Rucidatta, we can respond to Phillips in two distinct ways. We could either say that the intention that we are aware of in cases like The Parrot and The Mistaken Deceiver is Īśvara's intention, or that an awareness of the intention underlying the utterance is simply not required in such cases.
Kārikāvalī, Viśvanātha Nyāyapañcānana says, "In the case of testimonial awareness, the epistemic virtue should be a knowledge-event either regarding semantic fitness or the speaker's intention." 75 Similarly, in Nyāyakaustubha, Mahādeva Puṅatāmakāra (17th century CE) straightforwardly endorses Gaṅgeśa's second proposal: "In this manner, in the case of a testimonial knowledge-event, a knowledge-event regarding semantic fitness is the epistemic virtue." 76 Thus, the second solution that Gaṅgeśa offers to the problem of epistemic luck seems to have been widely accepted in later Nyāya.
Jayadeva's Extension of Localism
Gaṅgeśa's localist solution to the problem of epistemic luck focuses on testimony. Can the story be extended to inference? Gaṅgeśa's commentator, Jayadeva, says, "Yes." In his commentary Āloka, Jayadeva says: It is to be understood that, even in the case of inference, a true awareness of the absence of rebutting defeat (bādha) serves as the epistemic virtue. 77 Just as the hearer's true awareness about semantic fitness, i.e., the absence of rebutting knowledge-events (bādhakapramā), serves as the epistemic virtue in the case of testimony, so also does a correct awareness about the absence of rebutting knowledge-events serve as the epistemic virtue in the case of inference. Thus, we end up with a perfectly systematic solution to the tension between the Nyāya Definition of Knowledge and Nyāya Infallibilism.
However, Jayadeva thinks that this solution doesn't quite work. The problem is expressed in the voice of a Naiyāyika who thinks that Gaṅgeśa's solution works for testimonial knowledge-events, but not for inferential ones.
[The opponent:] The absence of rebutting defeat, simply insofar as it exists in itself, is a constituent element (aṅga) of inference, but not insofar as it is an object of awareness. For the former is parsimonious. This is because, [if things were otherwise], given that the absence of the absence of a target boils down to the target, it would follow that an awareness of the target is the cause of an awareness of the target. And this is not possible, since the fault of proving that which has been proved is an impediment to inferential knowledge-events. 78 The problem is this. If we take a correct awareness about the absence of rebutting defeat to be the cause of inferential knowledge-events, then the agent can only correctly infer a target if she already correctly judges that the absence of the target […] reasons (bādhitahetu). Still, Maheśa thinks Jayadeva ignores this solution because his own solution is much clearer. 82 Jayadeva's solution is based on an account of epistemic virtues and defects. 83 He rejects the conception of epistemic virtues and defects that motivated Gaṅgeśa's first solution to the problem of epistemic luck: the claim that only knowledge-events can serve as the epistemic virtues in cases of inference. Jayadeva thinks that this is wrong. Even though the causes of an awareness may include a knowledge-event, that alone cannot guarantee that the awareness is true. For example, suppose I perceptually recognize Pierre outside a café. When I enter the café, this earlier perceptual awareness may cause me to infer that the café doesn't contain Pierre. But this inference of Pierre's absence, though based on a knowledge-event, may be erroneous (e.g., if Pierre has snuck into the café unbeknownst to me). Analogously, if I correctly infer the absence of Pierre from the café after undergoing an illusion as of him being outside the café, my inferential judgement may constitute a knowledge-event about Pierre's absence from the café (e.g., if Pierre is in fact absent from the café). This shows that, even if knowledge-events (or errors) are included amongst the causes of our awareness-events, they needn't function as epistemic virtues (or as epistemic defects). According to Jayadeva's own proposal, epistemic virtues simply are positive factors (i.e., not mere absences of epistemic defects) which bring about knowledge-events, while epistemic defects simply are positive factors (i.e., not mere absences of epistemic virtues) which bring about instances of error.
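Stripped of the Navya-Nyāya machinery of delimitors and counter-relata (quoted in note 82 below), Jayadeva's proposal can be displayed schematically; the notation and the simplification are mine:

$$\mathrm{Virtue}(v) \equiv \mathrm{Positive}(v) \wedge \mathrm{Causes}(v,\ \text{knowledge-events qua knowledge-events})$$
$$\mathrm{Defect}(d) \equiv \mathrm{Positive}(d) \wedge \mathrm{Causes}(d,\ \text{errors qua errors})$$

The Pierre cases show why the "qua" clauses matter: a knowledge-event can figure among the causes of an error, and an error among the causes of a knowledge-event, so being a knowledge-event is neither necessary nor sufficient for being an epistemic virtue.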
What does this tell us about cases like Mist and Fire? Jayadeva's final explanation is relatively simple. In any inference, if an agent arrives at a true inferential judgement, her final judgement must be based on a correct judgement that the site contains something that is pervaded by the target. Consider Mist and Fire: even though I am wrong to think that the hill contains smoke, my subsumptive judgement that the hill contains smoke that is pervaded by fire is still partially true. For, after all, the hill does contain something that is pervaded by fire! That, in turn, explains why I arrive at a true judgement. Therefore, the epistemic virtue that explains the truth of my inferential judgement (and its status as a knowledge-event) is my correct awareness of the hill as characterised by something that is pervaded by fire. Hence, Jayadeva writes: Therefore, in this case, the awareness of a pervaded object with respect to something that possesses a pervaded object and the awareness of a pervaded object with respect to something that doesn't possess a pervaded object are the epistemic virtue and the epistemic defect respectively. For that is parsimonious. However, [the epistemic virtue] isn't the awareness of a specific pervaded object with respect to something that possesses that very pervaded object, and so on. For that is not parsimonious. 82 See: "We say here. The following distinction doesn't hold: 'Only an error is a defect, and only a knowledge-event is a virtue.' For there is a deviation, because a knowledge-event about a counterpositive of an absence causes an error about the absence, and a superimposition of a counterpositive of an absence causes a knowledge-event about the absence. Rather, virtuehood (guṇatva) consists in having a causehood that is a counter-relatum (pratiyogin) of an effecthood that is delimited by the property of being a knowledge-event, such that it doesn't produce anything in virtue of being an absence which is delimited by a nature that is a delimitor of the property of producing knowledge-events. And defecthood consists in having a causehood that is a counter-relatum of an effecthood that is delimited by the property of being an error, such that it doesn't produce anything in virtue of being an absence whose counterpositiveness is delimited by a nature that is a delimitor of the property of producing knowledge-events" (atra brumaḥ | bhrama eva doṣaḥ, pramaiva guṇa iti na vibhāgaḥ, pratiyogipramāyā abhāva-bhramaṃ prati pratiyogyāropasyābhāvapramāṃ prati janakatvena vyabhicārāt, kintu pramājānakāvacchedakarūpāvacchinnābhāvatvenājanakatve sati pramātvāvacchinnakāryatāpratiyogikakāraṇatākatvaṃ guṇatvam, pramājānakatāvacchedakarūpāvacchinna-pratiyogikābhāvatvenājanakatve sati bhramatvāvacchinna-kāryatā-pratiyogika-kāraṇatākatvaṃ doṣatvaṃ…). I have given a simplified explanation of these definitions above.
And, thus, since something that is pervaded [by fire] must necessarily be present at a place that actually contains fire even when one isn't aware of that specific pervaded object, how can the status of an inferential awareness as a knowledge-event be ruled out even in a case where the smoke, etc. which is apprehended as pervaded by fire is absent from that place? This is because, when there is an awareness of some other pervaded object [i.e., smoke] with regard to a site that actually contains a pervaded object, it is possible for one to have an awareness of the site as possessing a pervaded object. For it is not possible that the awareness of a site only as possessing that very pervaded object which is present in it is the cause of inferential awareness-events, since it has already been said that this won't be parsimonious. Therefore, this [i.e., the status of a true inferential awareness based on a pseudo-reason as a knowledge-event] can be accommodated by appealing to the awareness-events of people like us. What then is the point of admitting the awareness of Īśvara in order to account for that? This is the direction the reader should go in. 84 In a nutshell, Jayadeva proposes the following revision to the Nyāya theory of epistemic virtues (which Gaṅgeśa himself had mentioned earlier). Instead of treating a true subsumptive judgement as the epistemic virtue that gives rise to inferential knowledge-events, we should treat the agent's true awareness of the site's possessing something that is pervaded by the target as the epistemic virtue. Not only is this proposal parsimonious, but it also allows us to explain the truth of such awareness-events without appealing to any kind of divine awareness. Jayadeva's proposal is an instance of the same kind of epistemic localism that Gaṅgeśa himself endorses. Not only did the early Naiyāyikas take the production of testimonial knowledge-events to be dependent on the transmission of knowledge from a speaker to a hearer, but they also took the production of inferential knowledge-events to be dependent on the transmission of knowledge from certain initial awareness-events-e.g., the agent's awareness of the reason as a property of the site and her awareness of pervasion-to a final cognitive state, i.e., her inferential judgement. According to them, if these initial awareness-events weren't knowledge-events, the final inferential judgement couldn't be a knowledge-event either. Following Gaṅgeśa, Jayadeva rejects this picture. For him, the epistemic status of these initial awareness-events doesn't really affect the epistemic status of an inferential judgement. What matters is whether the subsumptive judgement (which is based on these initial awareness-events) is partially true, i.e., whether the agent correctly judges that the site contains something that is pervaded by the target. And this partial truth of the subsumptive judgement is independent of the epistemic status of those initial awareness-events. As long as the subsumptive judgement is partially true in the relevant respect, the final inferential judgement is guaranteed to be true. So, a subsumptive judgement that is correct in this way constitutes the epistemic virtue that produces inferential knowledge-events. This approach to inference makes the relevant epistemic virtue local to the last step of the cognitive process, and therefore downplays the epistemic importance of the initial steps of the process. Thus, this is an instance of epistemic localism.
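Why does this weaker virtue still guarantee a true conclusion? The one-step derivation is worth displaying (the regimentation is mine). Let $\mathrm{Perv}(x,T)$ mean that $x$ is pervaded by the target $T$, i.e., that $\forall s\,[\mathrm{In}(x,s) \rightarrow \mathrm{In}(T,s)]$. Then, for any site $s$,

$$\exists x\,[\mathrm{In}(x,s) \wedge \mathrm{Perv}(x,T)] \;\Rightarrow\; \mathrm{In}(T,s).$$

In Mist and Fire, the agent misidentifies the pervaded object (mist for smoke), so the witness she has in mind for the existential claim is the wrong one; but the existential claim itself is true of the site, and its truth transmits to the conclusion that the site contains fire. This is why the partial truth of the subsumptive judgement suffices, and why the epistemic status of the upstream awareness-events drops out as irrelevant.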
How influential was Jayadeva's solution? A version of this solution seems to have been widely accepted by later Naiyāyikas. In his Kārikāvalī, Viśvanātha Nyāyapañcānana says, "Moreover, in the case of inferential knowledge-events, the epistemic virtue should be a subsumptive judgement with respect to a site that possesses the target." 85 He explains in his commentary, "In the case of inferential knowledge-events, the awareness of being characterised by something pervaded by the target with respect to a site that possesses the target is the epistemic virtue." 86 Similarly, Mahādeva Puṅatāmakāra says: In the case of inferential knowledge-events, a good subsumptive judgement is the epistemic virtue. And a good subsumptive judgement isn't a subsumptive judgement that, by nature, is a knowledge-event. For an inferential knowledge-event is produced even by means of a subsumptive judgement that apprehends something which isn't pervaded or isn't a property of the site as such [i.e., either as pervaded or as a property of the site]. Rather, it is a subsumptive judgement that has as its qualificand [i.e., the site] something that possesses the target. And, thus, in virtue of being a subsumptive judgement which has as its qualificand something that possesses fire, etc., such a subsumptive judgement serves as the cause of an awareness which is delimited by the property of being an inferential knowledge-event that has fire, etc. as its qualifier. 87 85 Kārikāvalī verse 132d-133ab in NSM 484.1-2: atha tv anumitau punaḥ | pakṣe sādhyaviśiṣṭe tu parāmarśo guṇo bhavet | 86 NSM 484.8: anumitau sādhyavati sādhyavyāpyavaiśiṣṭyajñānaṃ guṇaḥ | 87 NKau 69.3-7: anumitipramāyāṃ salliṅgaparāmarśo guṇaḥ | salliṅgaparāmarśaś ca na pramātmaka-parāmarśaḥ | avyāpyāpakṣadharmayos tattvāvagāhinā parāmarśeṇāpi vastugatyā sādhyavati pakṣe pramānumitijananād api tu sādhyavadviśeṣyakaparāmārśaḥ eva | tathā ca vahnyādiprakārakapramānumititvāvacchinnaṃ prati vahnyādimadviśeṣyakaparāmarśatvena hetutā | For both Viśvanātha and Mahādeva, if a subsumptive judgement is to bring about an inferential knowledge-event, it doesn't itself have to be a knowledge-event. It only has to be partially true: it must correctly ascribe to the relevant site the property of possessing something that is pervaded by the target. And this will be true just in case the site in question possesses the target. This strongly suggests that Jayadeva's solution came to be the standard way of reconciling the Nyāya Definition of Knowledge with Gaṅgeśa's Virtue Infallibilism in cases of inference like Mist and Fire.
Conclusion
Let's take stock. Gaṅgeśa resolves the tension between the Nyāya Definition of Knowledge and Nyāya Infallibilism by appealing to a form of epistemic localism, i.e., the view that upstream causal factors play no epistemically significant role in the production of knowledge-events. What forced him to adopt this view?
The early Naiyāyikas reject epistemic localism. They accept a view according to which the production of inferential and testimonial knowledge-events depends on the epistemic status of causally upstream awareness-events, e.g., the agent's initial awareness of the reason as present in the site or the speaker's awareness of the sentential content. They think that, if these awareness-events weren't knowledge-events, the resulting testimonial or inferential awareness-events couldn't be knowledge-events either. This commits these Naiyāyikas to a theory that excludes epistemically lucky awareness-events from the class of knowledge-events. But this commitment is problematic. For it is in tension with their view that any true nonrecollective awareness can be a knowledge-event.
As Gaṅgeśa's first solution to this problem shows, some Naiyāyikas try to solve this problem without embracing epistemic localism. To account for the production of knowledge-events in cases like Mist and Fire, they appeal to Īśvara's knowledge. This solution seems ad hoc. Gaṅgeśa's preferred solution avoids this disadvantage. He argues that the production of testimonial knowledge-events doesn't depend on the epistemic status of the speaker's awareness, but rather depends on the truth-conducive properties of the relevant sentences and the hearer's true awareness regarding those properties. These are the epistemic virtues that produce such knowledge-events. By restricting epistemic virtues to downstream causal factors in this manner, Gaṅgeśa adopts a robust form of epistemic localism. His commentator, Jayadeva, extends this localist approach to the case of inference. Therefore, if the arguments of Gaṅgeśa and Jayadeva succeed, they will have shown that epistemic localism can help us resolve the conflict between the Nyāya Definition of Knowledge and Nyāya Infallibilism.
"Philosophy"
] |
Increasing incidence of hypotension in the emergency department: a 12-year population-based cohort study
Background The epidemiology of hypotension as a presenting symptom among patients in the Emergency Department (ED) has not been clarified. The aim of this study was to describe the incidence, etiology, and overall mortality of hypotensive patients in the ED. Methods Population-based cohort study at a University Hospital ED in Denmark from January 1, 2000, to December 31, 2011. Patients aged ≥18 years living in the hospital catchment area with a first-time presentation to the ED with hypotension (systolic blood pressure (SBP) ≤100 mm Hg) were included. Outcomes were annual incidence rates (IRs) per 100,000 person-years at risk (pyar) and etiological characteristics by means of the International Classification of Diseases, Tenth Revision (ICD-10), as well as 7-day, 30-day, and 90-day all-cause mortality. Results We identified 3,268 of 438,198 (1 %) cases with a mean overall IR of 125/100,000 pyar (95 % CI: 121-130). The IR increased 28 % during the period (from 113 to 152 cases per 100,000 pyar). Patients ≥65 years had the highest IR compared to those aged <65 years (rate ratio for men 6.3 (95 % CI: 5.6-7.1) and for women 4.2 (95 % CI: 3.6-4.9)). The etiology was highly diversified, with trauma (17 %) and cardiovascular diseases (15 %) as the most common. The overall 7-day, 30-day, and 90-day mortality rates were 15 % (95 % CI: 14-16), 22 % (95 % CI: 21-24), and 28 % (95 % CI: 27-30), respectively. Conclusion During 2000-2011 the overall incidence of ED hypotension increased and remained highest among the elderly, with a diversified etiology and a 90-day all-cause mortality of 28 %.
Background
Systolic blood pressure (SBP) is widely used in the initial triage of acutely ill patients and forms a basic part of the initial assessment of the circulation [1]. The presence of hypotension often signifies critical illness, and several large multicenter studies have used the presence of hypotension as an inclusion criterion together with other variables [2]. These studies often focus on highly selected hypotensive patient populations in specialized treatment units, and the evidence gained is a reflection of this selection.
During the past decade, research investigating annual trends in incidence rates (IRs) of potentially hypotensive patients has suggested opposite trends depending on the etiology and population of interest. While the annual IRs of sepsis seem to be increasing [3], the trend for myocardial infarction (MI) has been decreasing [4]. Whether population-based IRs and annual trends of primary undifferentiated hypotensive ED patients demonstrate similar dynamics is not known. Previous estimates of hypotension in EDs rely mainly on hospital data samples that are weighted to extrapolate to national-level estimates and are therefore vulnerable to sampling bias [5]. In general, studies on this topic are limited, either by setting or by selective inclusion criteria and conditions studied [6]. Population-based IRs of hypotension among ED patients are important to quantify, as the presence of hypotension (even transient) is associated with worse outcomes and can therefore not be neglected [7]. The epidemiological knowledge gained can serve as a foundation for future interventional studies in this critical population.
While ED visits in Denmark have been stable through recent years, ED visits among the ageing population are increasing [8]. Furthermore, time-sensitive critical illnesses (i.e. cardiogenic shock, severe sepsis, and the 'golden hour' of trauma) have increased the demand for prompt critical care recognition and delivery in the ED setting. Collectively, this adds to the hypothesis of a possible increasing trend in ED hypotension. We therefore conducted an ED population-based cohort study to examine annual IRs of first-time presentations of hypotension over a 12-year period from 2000 to 2011, and subsequently the etiology and short-term mortality.
Study design and setting
We conducted a population-based cohort study with data from the ED of Odense University Hospital, Denmark, during the period of 1 January 2000 to 31 December 2011 (12 years). Odense University Hospital is a 1,000-bed university teaching hospital representing all specialties, including surgical, neurological, and general internal medical patients. The population served by this ED consists of four well-defined municipalities with a mixed rural-urban population of 290,000 persons. It is the only serving ED in this part of Denmark and provides primary 24-h acute medical care, with 48,000 annual visits.
Selection of participants
Adults (age ≥ 18 years) were considered eligible when presenting to the ED with a SBP ≤ 100 mm Hg registered within 3 h of arrival. Based on a recently published study examining SBP thresholds and mortality in our ED, we defined a SBP ≤ 100 mm Hg as hypotension [9]. We used the Shock Index (SI) as a proxy for acute illness. SI is calculated as the ratio of heart rate to SBP and was included as a categorical variable (<0.7, 0.7-1, ≥1) [10]. If a patient had multiple encounters with hypotension over the study period, only the first was included in the cohort. The date of the primary contact defined the index date. Patients <18 years, patients residing outside the hospital's catchment area at the time of contact, and patients without a Danish personal identification number were excluded. We also excluded patients with a previous presentation of SBP ≤ 100 mm Hg. To minimize left-sided censoring, patients who had visited the ED between 1 January 1998 and 1 January 2000 with hypotension were excluded as well. The background population, from which the cases were retrieved, was composed of the general adult (≥18 years) Danish citizens living in the hospital's catchment area.
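To make the SI binning concrete, here is a minimal illustrative sketch in Python (the study itself used Stata; the data frame and column names here are hypothetical):

```python
# Minimal illustrative sketch (not the authors' code) of the Shock Index
# binning described above; the DataFrame and column names are hypothetical.
import pandas as pd

def shock_index_group(hr: float, sbp: float) -> str:
    """Return the SI category (<0.7, 0.7-1, >=1) for one patient."""
    si = hr / sbp
    if si < 0.7:
        return "<0.7"
    if si < 1.0:
        return "0.7-1"
    return ">=1"

patients = pd.DataFrame({"hr": [110, 72, 95], "sbp": [90, 100, 80]})
patients["si_group"] = [
    shock_index_group(h, s) for h, s in zip(patients["hr"], patients["sbp"])
]
print(patients)
```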
Variables and outcome measures
The primary outcome was the IR of hypotension (SBP ≤ 100 mm Hg) from 1 January 2000 to 31 December 2011 in the ED, both overall and by year. Secondary outcomes were etiological characteristics by means of major ICD-10 codes and the proportion of 7-day, 30-day, and 90-day all-cause mortality. The primary exposure variable was the first recorded SBP value at presentation. SBP was measured with an automated oscillometric device or a manual cuff and sphygmomanometer. HR was measured with ECG, palpation, or pulse oximetry. We also included information on the additional covariates age, gender, and time of contact during the day (07:00-14:59, 15:00-22:59, 23:00-06:59). The Charlson comorbidity index (CCI; 0, 1-2, >2) was used as a marker of comorbid illness.
We defined etiology based on the primary ED diagnoses and the immediate ensuing hospital discharge diagnosis. These were assigned by physicians in the ED at discharge/referral to other departments and based on the International Classification of Diseases, Tenth Revision (ICD-10) (see below).
Data sources and processing
In Denmark, every citizen is assigned a unique 10-digit civil personal registry number (PRN number). This unique PRN number enables accurate linkage between the Danish national registers [11]. True population-based studies are hereby possible, as all patient contacts are registered and linked between all Danish registries using the patient's unique PRN number.
The Danish national patient registry
Since 1995, the Danish National Patient Registry has been covering all inpatient and outpatient clinic contacts at hospitals in Denmark, assembling data regarding dates of admission and discharge, admitting departments, and all primary and secondary discharge diagnoses (ICD-10 code system) from hospitals (except psychiatric departments and hospitals) [11]. At discharge, every unique patient journey is assigned one primary diagnosis and one or more secondary diagnoses (up to 20 diagnoses) classified according to the ICD-10 system. We used discharge diagnoses from the previous 10 years in order to generate a CCI for each enrolled patient at the index contact date as a proxy for comorbid illness.
Database
Since 1996, all patient records from the ED have been registered electronically and are available as record notes from the contact. As a part of the routine procedure, all patients presenting to the ED, except those with minor orthopedic complaints, had their vital signs measured and registered by a nurse at arrival. The record notes are available in text format, in which vital parameters are consistently stated, including the time of admission and the time of the measured SBP and HR. By electronic screening it was possible to identify and retrieve information on all patients with a measured and registered SBP ≤100 mm Hg, as well as the exact values of SBP and HR. The present data-extraction process has been manually validated in 500 files, with a sensitivity of 96 % (95 % CI [91-99]) and a specificity of 100 % (95 % CI [99-100]) for exact SBP, in the study by Kristensen et al. [9].
Data on municipality of residence, migration, marital, and vital status, and date of birth were retrieved from the Danish Civil Registration System and linked to the other registries and databases using the unique PRN number [11].
Other registers and databases
We retrieved information regarding the annual mid-year population of persons 18 years or older living in the hospital's catchment area (accessed September 2014 at the Statistics Denmark website; http://www.statistikbanken.dk).
Analysis
Baseline characteristics were presented as medians and interquartile ranges (IQR) for continuous variables, and as numbers and percentages for categorical variables. We used the Chi-square test for categorical variables and the Kruskal-Wallis equality-of-populations rank test for continuous variables. Patients were followed from the index date until the date of death, completion of 90 days of follow-up, emigration, or December 31, 2011, whichever came first.
The crude annual IRs were calculated as the number of cases per 100,000 pyar (age ≥18 years) with the corresponding 95 % confidence intervals (95 % CI), assuming a Poisson distribution. The annual IRs were adjusted using direct standardization to the sex- and age-distribution of the ED catchment-area municipalities' mid-year population in the year 2000. The population was defined as contributing one person-year at risk per resident per year in the analyses. The incidence rates were estimated and analyzed using a Poisson regression model. Age group, gender, calendar time in years, and the interaction between age group and gender were used in the adjusted model. Calendar time was entered in the model as a continuous variable. Age was divided into two predefined age intervals: 18-64 years and ≥65 years. The Poisson model was assessed using the Hosmer-Lemeshow goodness-of-fit test.
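As an illustration of the crude-IR calculation (the authors worked in Stata, so this is only a sketch), the following Python snippet computes a rate per 100,000 pyar with an exact Poisson confidence interval; the person-years value is back-computed from the reported overall IR and is therefore an assumption used for demonstration:

```python
# Sketch of the crude-IR calculation per 100,000 pyar with an exact Poisson
# 95% CI, as described above. The person-years value is back-computed from the
# reported headline figures and is only an illustrative assumption.
from scipy.stats import chi2

def crude_ir(cases: int, person_years: float, alpha: float = 0.05):
    """Crude incidence rate per 100,000 person-years with exact Poisson CI."""
    scale = 1e5 / person_years
    rate = cases * scale
    lower = chi2.ppf(alpha / 2, 2 * cases) / 2 * scale
    upper = chi2.ppf(1 - alpha / 2, 2 * (cases + 1)) / 2 * scale
    return rate, (lower, upper)

pyar = 3268 / 125 * 1e5  # implied person-years at risk over the 12-year period
print(crude_ir(3268, pyar))  # roughly (125.0, (120.7, 129.4)) per 100,000 pyar
```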
Etiological characteristics were categorized into major ICD-10 groups and calculated as frequencies and proportions based on primary registered conditions at discharge among all hypotensive patients as well as stratified into SBP intervals.
We constructed Kaplan-Meier curves and reported the all-cause 90-day mortality, stratified by SBP intervals. Comparisons between groups were evaluated with a log-rank test. Cuzick's test was used for trends in mortality between SBP intervals. All tests of significance were two-tailed, and p values <.05 were considered significant. Missing values (ICD-codes; n = 2 and HR; n = 128) were excluded in the analysis of the specific variable. Statistical analyses were performed using Stata version 13.1 (Stata Corporation LP ®, Texas, USA).
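For readers who want to reproduce the survival analysis outside Stata, a hedged sketch with the `lifelines` package follows; the toy data and the two SBP groups shown here are hypothetical stand-ins for the study's SBP intervals:

```python
# Hedged sketch of the Kaplan-Meier / log-rank analysis described above, using
# the `lifelines` package rather than Stata; data values are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "days":  [5, 90, 30, 90, 12, 90],   # follow-up, censored at 90 days
    "death": [1, 0, 1, 0, 1, 0],        # 1 = died, 0 = censored
    "sbp":   ["<=70", ">70", "<=70", ">70", "<=70", ">70"],
})

kmf = KaplanMeierFitter()
for group, sub in df.groupby("sbp"):
    kmf.fit(sub["days"], sub["death"], label=f"SBP {group}")
    print(group, "90-day survival:", kmf.survival_function_.iloc[-1, 0])

a, b = df[df["sbp"] == "<=70"], df[df["sbp"] == ">70"]
res = logrank_test(a["days"], b["days"], a["death"], b["death"])
print("log-rank p =", res.p_value)
```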
Ethics committee approval
The study was approved by the Danish Data Protection Agency (J.nr 2008-58-0035) and the Danish Health and Medicines Authority (j.nr. 3-3013-205/1). In accordance with Danish law, observational studies performed in Denmark do not need approval from the Medical Ethics Committee. The study was conducted according to the STROBE statement.
Incidence rate
The annual crude IRs together with the standardized IRs of hypotension (SBP ≤100 mm Hg) and IRs of different levels of hypotension during the period 2000-2011 are shown in Fig. 3
Discussion
This study provides population-based epidemiological characteristics of adult hypotensive patients arriving at a University ED in Denmark. Our results showed that a first-time presentation of hypotension was a common finding in the ED, with an increasing annual trend in IRs throughout the period 2000-2011. By means of discharge diagnoses, the etiology was clearly diversified, and the 90-day all-cause mortality was 28 %.
Our primary aim was to address the IR and trend of hypotension in the ED. In our study we have reported an overall mean IR of SBP ≤100 mm Hg of 125/100,000 pyar. Comparing our IRs with other conditions suggests hypotension to be as common as first-time hospitalization with myocardial infarction (MI) [12] and more common than ST-segment elevation MI [13]. While the IRs of MI have decreased during the past decades [12,13], registered sepsis is on the rise [3]; the IRs of hypotension in our study likewise increased. We found higher IRs among elderly males compared to women. Moreover, 55 % of our cohort represented patients aged 65 years or more, a proportion increasing with decreasing SBP level. Accordingly, a large Canadian study analyzed 34,454 ED visits by older adults (>65 years), accounting for 22 % of the total ED visits, in which 74 % of patient visits were triaged as urgent or emergent [14]. The most common diagnoses (ICD-9 and ICD-10) were nonspecific, relating to "symptoms, signs, and ill-defined conditions" (25 %). Injury and poisoning constituted 17 % of diagnoses, while diagnoses related to the circulatory system and respiratory system constituted 10 and 9 % of diagnoses, respectively [14].
Comparing these proportions with the ICD-10 discharge groups in our study suggests a similar pattern, given the increasing IRs and the dominant proportion of ageing patients. Discharge diagnoses in our cohort were dominated by injury and poisoning, diseases of the circulatory system, and unspecific diagnoses (symptoms and abnormal clinical/laboratory findings). Other studies of undifferentiated non-traumatic hypotension in the ED report sepsis and cardiovascular diseases as common etiologies [15,16]. A similar etiological distribution applies to critically ill hypotensive patients in the ICU [1]. This difference could reflect our use of ICD-10 codes and a population-based setting, while others have applied primary clinical assessments and strict inclusion criteria when categorizing the etiology. Interestingly, infectious and cardiovascular diseases increased with each decile decrease in SBP level, whereas trauma decreased accordingly. Moreover, we found an increase in discharge diagnoses of infectious and respiratory diseases, diseases of the genitourinary system, as well as symptoms, signs, and abnormal clinical and laboratory findings, not elsewhere classified. As a supplementary analysis, we applied validated discharge diagnoses for patients with community-acquired infections presenting to the ED (see Appendix: Table 4 for the ICD-10 codes validated by Henriksen et al. [17]). Using this algorithm we found a proportion of 13 % with a discharge diagnosis of infection, compared to 5 % in the initial analysis. The difference reflects our use of only major ICD-10 groups, as certain infectious diseases (e.g. ICD10-J189, "pneumonia, unspecified") are grouped under respiratory diseases in the ICD-10 system. Applying the validated discharge diagnoses for infections, we found a confirmatory increasing trend (p = 0.011). Although the data source used is considered a unique information source for carrying out epidemiological studies and health service research in our country, the discharge diagnoses among hypotensive ED patients have not undergone validation. The heterogeneous etiological data presented here should therefore be interpreted bearing this in mind.

An important finding in this study is the 90-day all-cause mortality of 28 %. Correspondingly, in-hospital mortality among non-traumatic hypotensive patients (SBP ≤100 mm Hg) is reported to be 10-25 % [16, 18-22], while mortality among traumatic hypotensive populations (SBP ≤100 mm Hg) is 7-24 % [23,24]. As reported by Jones et al., we find that exposure to a single episode of hypotension (<100 mm Hg) in the ED setting portends a possible later adverse outcome [7]. Furthermore, the mortality seems to increase with each decile decrease in SBP, as reported previously [19].
We decided to include patients with a first-time presentation of SBP ≤100 mm Hg measured within 3 h of arrival. Only 85 patients did not meet this eligibility criterion. Moreover, 92 % had their vital values measured within 30 min. The patients in our cohort had a mean SI ≥0.9, suggesting possible acute or critical illness. This could imply that a great proportion of patients presented with clinical symptoms suggesting critical illness, and the ED personnel therefore deemed SBP measurement appropriate in order to delineate the hemodynamic stability. Whether a large proportion of our cohort presented with shock (e.g. organ failure or elevated lactate) is of interest, but this is not feasible to determine based on the available data presented here.
We believe this population-based study provides robust data on the incidence of hypotension in the ED. When hypotension is present, mortality is substantial. Correct diagnosis and resuscitation of patients with hypotension are well-known steps to improve prognosis. Future epidemiological perspectives for research should address the underlying etiology and prognosis of undifferentiated hypotension as this could delineate targeted interventions at ED arrival. At the level of triage, SBP ≤100 mm Hg should be regarded as a critical finding and the cause of hypotension explored. Future prehospital protocolised management by combining e.g. ultrasound, vital parameters and lactate could further expedite resource allocation and triage of these, often critically ill patients.
Study strengths and limitations
The Danish public healthcare system, with its complete, independently and prospectively recorded medical history, reduced the possible risk of information biases, and loss to follow-up was not an issue. With the use of the Danish population-based registries, we were able to compute quite accurate estimates of the outcomes: incidence, mortality, and comorbidity. We chose to use the first contact with hypotension in order to minimize bias from repeated measurements. Furthermore, we excluded patients with residency outside the catchment area and those with a previously reported admission with SBP ≤100 mm Hg in the years 1998-99 in order to avoid possible overestimation of the IRs.

Several issues and limitations should be considered when interpreting our results. Our single-center design limits the generalizability of our findings. Although our ED is the only one serving this part of Denmark, the presence of "market share" along the borders of other ED catchment areas in Denmark is a possibility. We are not able to adjust for possible hypotensive patients living in our catchment area who have had contact with other hospitals. However, we have excluded patients living in municipalities outside of our ED catchment area and thereby minimized this proportion (n = 516, Fig. 1). The blood pressure measurements were registered prospectively, as routine documentation and not necessarily for research purposes. However, a great proportion of cases did not have a SBP measured and registered at arrival (n = 273,794). These patients suffered from minor complaints for which the nurse did not judge a SBP measurement relevant. Patients with medical complaints and trauma severe enough to warrant a SBP measurement are therefore the population of relevance. This must be kept in mind when interpreting our findings.
We acknowledge a possible limitation in the blood pressure measurement, as automatic oscillometric devices and measurement by auscultation can be inaccurate [25]. However, this is still the method used in most clinical and research settings when describing blood pressure, and we therefore find it generalizable.
We further acknowledge the limitations of the etiological characteristics. Ideally, a classification into shock categories could be clinically useful. However, these data were not available in the current dataset. We had missing values on the covariates ICD codes (2 cases) and HR (128 cases), but not on SBP. Of note is the drop in the IR in 2008, which was caused by an organizational change in the electronic registration of vital parameters in that year.
Finally, our study and results can be influenced and confounded by unmeasured variables such as the use of cardio-therapeutic medications known to inhibit the cardiovascular compensatory response in individuals and potentially mask hypotension, which may bias our estimates, especially among elderly comorbid patients using these medications. During the observation period, a physician-staffed mobile emergency care unit was deployed (October 2007) in the pre-hospital setting. Accordingly, increased awareness and changes in treatment algorithms for certain critical conditions have been introduced (the Surviving Sepsis Campaign and percutaneous coronary intervention for myocardial infarction). Although we consider this proportion minimal, we acknowledge the possibility of patients suffering time-dependent illnesses diagnosed prehospitally (e.g. ruptured aneurysm, myocardial infarction) being referred directly to a facility within our hospital (e.g. operating theatre or ICU) and thereby bypassing our ED. Although there was no structural change in the primary care service, a change in general practitioners' interest in assessing acute clinical conditions (due to the increasing specialization and fragmentation of primary care services) is another possibility we acknowledge.

Fig. 6 Kaplan-Meier curves illustrating overall 90-day survival according to different systolic blood pressure levels; below the curves are listed the numbers at risk at the corresponding intervals of survival time.
Conclusion
We conclude that a presentation with hypotension is a common critical finding among ED patients, with an increasing trend. Adverse outcomes are substantial, carrying a 90-day all-cause mortality of 28 %. Using ICD-10 codes, the etiological characteristics are diversified both at ED arrival and at hospital discharge. Prospective risk-stratified protocols should evaluate the use and prognostic impact of hypotension in triage algorithms, both prehospitally and in the ED setting.
"Medicine",
"Biology"
] |
Spectrophotometric Complexation Studies of Some Transition and Heavy Metals with a New Pyridine Derivative Ligand and Application of It for Solid Phase Extraction of Ultratrace Copper and Determination by Flame Atomic Absorption Spectrometry
A new pyridine derivative ligand, (E)-(Pyridine-2-ylmethylidene) ({2-(E)-(Pyridine-2-ylmethylidene) amino] ethyl}, has been synthesized, and the Kf values of its complexes with Cu, Ni, Cd, Zn, Co, Hg, and Ag have been determined spectrophotometrically. The stability of the complexes in acetonitrile was found to vary in the order Cu > Ni > Cd > Zn > Co > Hg > Ag. Because this ligand has good selectivity toward the copper ion, a simple, reliable, and rapid method is presented for the preconcentration and determination of ultratrace amounts of copper using an octadecyl silica membrane disk modified by this ligand, with determination by flame atomic absorption spectrometry. Various parameters, including the pH of the aqueous solution, flow rates, the amount of ligand, and the type of stripping reagent, were optimized. Under optimum experimental conditions, the breakthrough volume is greater than 2000 ml, with an enrichment factor of more than 400 and a 0.054 μg·L⁻¹ detection limit. The capacity of the membrane disks modified by 6 mg of the ligand has been found to be 330.17 μg of copper. The effects of various cationic interferences on the percent recovery of copper ion were studied. The method has been successfully applied to the determination of copper ion in different water samples.
As such compounds may reveal a high tendency to form stable coordination complexes with numerous transition metal ions, particularly those that can be regarded as soft Lewis acids such as Cu(I) and Cu(II), the polyimine ligands (Figure 1) were chosen as suitable building blocks for these complexation reactions. These compounds are flexible about the central C-C bond. Each of these ligands contains four potential sites for coordination to metal ions: the peripheral pyridyl-N as well as the inner imino N-atoms. In this work we focus on a series of new 2,2'-bipyridyl-type organic ligands with added metal coordination functionality along the molecular backbone. The complexation process has been monitored by UV/Vis absorption spectroscopy.
Among the chemical species, copper has a biological action at low doses and a toxic effect when ingested in larger quantities. A concentration of more than 1 μg·mL⁻¹ of copper can impart a bitter taste to water. Large oral doses can cause vomiting and may eventually cause liver damage. The copper concentration in potable water is usually very low (20 μg·L⁻¹) [9]. Determination of copper is usually carried out by flame [10-12] and graphite [13,14] atomic absorption spectrometry, as well as spectrophotometry [15,16], chemiluminescence [17], and electrothermal methods [18,19]. However, due to the presence of copper at low levels in environmental samples and the matrix effects, different separation and preconcentration techniques such as liquid-liquid extraction [20], precipitation [21], ion exchange [22], solid phase extraction [23,24], and membrane filtration [25] improve the analytical detection limit, increase the sensitivity by several orders of magnitude, enhance the accuracy of the results, and facilitate the calibration. Among these techniques, solid phase extraction is preferred by many researchers on account of its speed, simplicity, higher preconcentration factor, rapid phase separation, and time and cost savings [26,27]. A number of supports have been widely used for the preconcentration and separation of trace metal ions from various matrices. Among the absorbents, silica with chemically bonded alkyl chains, such as octadecyl-bonded silica (C18) modified by suitable ligands, has been an excellent extractor of metal ions [28-32]. In this work, a newly synthesized ligand, ((E)-(Pyridine-2-ylmethylidene) ({2-(E)-(Pyridine-2-ylmethylidene) amino] ethyl}) (BPYMH) (Figure 1), is studied as a disk modifier for the Cu(II) ion. Also, we report on the extraction and preconcentration of copper(II) from water samples and determination by atomic absorption spectrometry.
Instruments
All UV-Vis spectra were recorded on a computerized double-beam 2550 Shimadzu spectrophotometer, using two matched 10 mm quartz cells. In a typical experiment, 2.0 ml of ligand solution (5.0 × 10⁻⁵ mol·L⁻¹) in acetonitrile was placed in the spectrophotometer cell and the absorbance of the solution was measured. Then a known amount of the concentrated solution of metal ions in acetonitrile (1.3 × 10⁻³ mol·L⁻¹) was added in a stepwise manner using a 10 μl Hamilton syringe. The absorbance of the solution was measured after each addition. The metallic ion solution was continually added until the desired metal-to-ligand mole ratio was achieved.
The determinations of copper were performed on a GBC SensAA flame atomic absorption spectrometer (air-acetylene flame) with a hollow cathode lamp (HCL), equipped with a deuterium background corrector. The absorbance wavelength was set at 324.7 nm and the spectral bandwidth at 0.5 nm. A Metrohm 827 pH meter was used to measure pH values.
The modified C18 extraction disks were used in conjunction with a standard 47 mm filtration apparatus (Schleicher and Schüell, Dassel, Germany) connected to a vacuum.
Chemicals
Methanol, acetonitrile, and the other organic solvents used were of spectroscopic grade from Merck. All mineral acids were of pro analysis grade from Merck. Analytical-grade standard stock solutions of copper(II), sodium hydroxide, and the nitrate or chloride salts of magnesium, zinc, cobalt, manganese, lead, nickel, cadmium, silver, mercury, sodium, potassium, and calcium (all from Merck) were of the highest purity available. The newly synthesized BPYMH ligand of the highest purity was used as the chelating ligand. Working standards were prepared by appropriate dilution of the stock solution with deionized water.
Estimation of Formation Constants
The formation constants (Kf) and the molar absorptivities (ε) of the resulting 1:1 complexes between the BPYMH ligand and different metallic ions in acetonitrile at 25˚C were calculated by fitting the observed absorbance, Aobs, at various metallic ion/ligand mole ratios to the previously derived equations [33,34] (Equation (1)), which express Aobs as a function of the free and complexed metal ions; the formation constant was evaluated with the non-linear least-squares program KINFIT [35].
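Equation (1) itself is not reproduced in this text, but for a 1:1 complex the observed absorbance follows from mass balance and the definition of Kf. The sketch below shows a KINFIT-style nonlinear least-squares fit of that standard 1:1 model in Python; every numeric value in it is synthetic, not taken from the paper:

```python
# KINFIT-style nonlinear least-squares fit for a 1:1 complex, sketched with
# SciPy under the standard mass-balance model for M + L <-> ML; all numbers
# below are synthetic placeholders, not the paper's data.
import numpy as np
from scipy.optimize import curve_fit

C_L = 5.0e-5  # total ligand concentration, mol/L (fixed, as in the titrations)

def a_obs(c_m, log_kf, eps_l, eps_ml):
    """Observed absorbance vs total metal concentration (1 cm path length)."""
    kf = 10.0 ** log_kf
    b = c_m + C_L + 1.0 / kf
    ml = (b - np.sqrt(b * b - 4.0 * c_m * C_L)) / 2.0  # [ML] from mass balance
    return eps_l * (C_L - ml) + eps_ml * ml

c_m = np.linspace(0.0, 2.0e-4, 20)                     # stepwise additions
rng = np.random.default_rng(0)
a_meas = a_obs(c_m, 5.2, 8.0e3, 1.5e4) + rng.normal(0, 2e-3, c_m.size)

popt, _ = curve_fit(a_obs, c_m, a_meas, p0=[4.0, 5.0e3, 1.0e4])
print("fitted log Kf = %.2f" % popt[0])
```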
Sample Extraction
Extractions were performed with a 47 mm diameter × 0.5 mm thickness Empore high-performance extraction membrane disk containing octadecyl-bonded silica (8 μm particles, 6 nm pore size) from the 3M company. The disks were used in conjunction with a standard Scott Duran 47 mm filtration apparatus.
After placing the membrane in the filtration apparatus, it was washed with 10 ml methanol and then with 10 ml deionized water to remove all contamination arising from the manufacturing process and the environment. After drying the disk by passing air through it for several minutes, a solution of 6 mg BPYMH ligand dissolved in 3 ml acetonitrile was introduced into the reservoir of the apparatus and was drawn slowly through the disk by applying a slight vacuum. The filtration step was repeated until the passed solution was completely clear. Finally, the disk was washed with 25 ml deionized water and dried by passing air through it. The membrane disk modified by the BPYMH ligand was then ready for sample extraction. It is important to note that the surface of the disk was not left to become dry from the time the methanol was added until the extraction of Cu²⁺ ions from water was completed [36].
Then 100 ml of the sample solution containing 10 μg Cu²⁺ was passed through the membrane (flow rate = 5 ml/min). After the extraction, the disk was dried completely by passing air through it for a few minutes. The extracted copper was stripped from the membrane disk using appropriate amounts of a suitable eluent (the best eluent was 1 M nitric acid). This step was done with 5 ml of eluent solution, and the Cu²⁺ was determined with a flame atomic absorption spectrometer.
Spectrophotometric Studies
Spectrophotometric studies of the complexation reaction between the BPYMH ligand and metallic ions in acetonitrile solution revealed that the ligand can form stable 1:1 (metallic ion to ligand) complexes with different metallic ions.
The electronic absorption spectra of the BPYMH ligand (5 × 10⁻⁵ mol·L⁻¹) in the presence of increasing concentrations of Cu²⁺ (1.3 × 10⁻³ M) ions were recorded (Figure 2) in acetonitrile at 25˚C. The resulting complex of Cu²⁺ with the BPYMH ligand is distinguished by a small spectral shift toward longer wavelengths. The stoichiometry of the metal complexes was examined by the mole ratio method. Samples of the resulting plots for all Mn+-L complexes are shown in Figure 3, at 297 nm, and it is evident that 1:1 (metallic ion to ligand) complexes are formed in solution. The formation constants of the resulting 1:1 metallic ion-BPYMH complexes were obtained at 25˚C by absorbance measurements of solutions in which varying concentrations of metallic ions were added to fixed amounts (5.0 × 10⁻⁵ mol·L⁻¹) of ligand solution. All the resulting absorbance-mole ratio data were best fitted to Equation (1), which further supports the formation of ML in solution.
For evaluation of the formation constants and molar absorptivity coefficients from the absorbance vs. [M]/[L] mole ratio data, the non-linear least-squares curve-fitting program KINFIT was used; a sample computer fit of the absorbance-mole ratio data is shown in Figure 4. All of the logKf values evaluated from the computer fitting at 25˚C are listed in Table 1. The data given in Table 1 revealed that, at 25˚C, the stabilities of the complexes vary in the order Cu²⁺ > Ni²⁺ > Cd²⁺ > Zn²⁺ > Co²⁺ > Hg²⁺ > Ag⁺. Thus, considering the observed stability, we decided to use the ligand as a suitable modifier for the selective concentration and extraction of Cu²⁺ ions on the octadecyl silica membrane disks. Some preliminary experiments were undertaken in order to investigate the retention of copper ions by the membrane disk in the presence of the ligand, after the recommended washing, wetting, and conditioning procedures were carried out. It was found that the membrane disk modified by the BPYMH ligand is capable of retaining Cu²⁺ ions from the sample solution quantitatively (the test solution used contained 10 μg copper in 100 ml water at pH 7.0).
Choice of Eluent
In order to choose the most effective eluent for quantitative stripping of the retained ions from the modified disk after extraction of 10 μg Cu²⁺ from a 100 ml sample (in the presence of 6 mg ligand), the ions were stripped with 5 ml of different inorganic solutions, and the resulting data are listed in Table 2.
From the data given in Table 2, it is immediately obvious that, among the different solutions, 5 ml of 1 M nitric acid can accomplish the quantitative elution of copper from the membrane disk, while the other solutions are ineffective for the complete elution of copper.
Effect of Ligand Amount
The optimum amount of the ligand for the membrane disks was studied. The amount of ligand plays an important role in obtaining quantitative recoveries of metal ions, because in its absence the disk does not retain the metal ions. Therefore, the influence of the amount of ligand on the recovery of the copper ion was examined in the range of 5-15 mg using 100 ml of solution containing 10 μg copper ions. The recovery of copper ion reached 100% with 6 mg of ligand (Figure 5). On this basis, all further studies were carried out with 6 mg of the BPYMH ligand.
Effect of Flow Rate and pH
The effect of the flow rates of the sample and stripping solutions through the modified membrane disk on the retention and recovery of copper ion was investigated. It was found that, in the range of 5.0-20 ml·min⁻¹, the retention of copper by the membrane disk is not considerably affected by the sample solution flow rate. Similar results for the extraction of metal ions have already been reported [37].
In this work, quantitative stripping of copper ion from the disk was achieved at a flow rate of 2.0-5 ml·min⁻¹, using 5 ml of 1 M nitric acid. Most chelating ligands are conjugate bases of weak acid groups and, accordingly, have a very strong affinity for hydrogen ions. The pH will therefore be a very important factor in the separation of metal ions by chelation, because it will determine the values of the conditional stability constants of the metal complexes on the surface of the sorbent [38]. In order to investigate the effect of pH on the SPE of copper ion, the pH of the aqueous samples was varied from 2 to 9, using different buffers, and the recommended procedure was followed. As shown in Figure 6, the Cu²⁺ ion can be retained quantitatively in the pH range of 7.0-8.0. For subsequent experiments, pH = 7 was chosen as the working pH. Higher pH values (>9) were not tested because of the possibility of hydrolysis of the octadecyl silica in the disks.
Sorption Capacity
The maximum capacity of the disk was determined by passing 50 ml portions of an aqueous solution containing 500 μg copper ion through the disk modified with the BPYMH ligand, followed by determination of the retained ions using FAAS. The maximum capacity of the disk was found to be 330.2 (±8.1) μg copper ion per 6 mg of ligand.
Breakthrough Volume
Since the breakthrough volume represents the sample volume that can be preconcentrated without loss of analyte during elution of the sample, the measurement of the breakthrough volume is important in solid phase extraction. The breakthrough volume of the sample solution was tested by dissolving 10 μg of copper ion in 50, 100, 250, 500, 1000, and 2000 ml of water, and the recommended procedure was followed. In all cases, the extraction by the modified disks was found to be quantitative. Thus, the breakthrough volume for the method should be greater than 2000 ml. Consequently, considering the final elution volume of 5 ml and the sample solution volume of 2000 ml, an enrichment factor of around 400 was easily attainable.
Limit of Detection
The limit of detection (LOD) of the proposed method for the determination of copper ion was studied under the optimal experimental conditions. The LOD obtained from C_LOD = K_b S_b m⁻¹, for a numerical factor K_b = 3, is 0.054 μg per 1000 ml.
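A tiny sketch of this formula in Python follows; the blank-signal standard deviation and calibration slope are hypothetical placeholders chosen only to illustrate the arithmetic:

```python
# Sketch of the quoted LOD formula, C_LOD = K_b * S_b / m, with K_b = 3;
# the blank standard deviation and calibration slope below are placeholders.
def detection_limit(s_blank: float, slope: float, k_b: float = 3.0) -> float:
    """Limit of detection in the concentration units of the calibration."""
    return k_b * s_blank / slope

print(detection_limit(s_blank=9e-4, slope=0.05))  # -> 0.054 (illustrative)
```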
Effect of Diverse Ions on Sorption of Copper
In order to investigate the selective separation and determination of Cu²⁺ ion from its binary mixtures with diverse metal ions, a 100 ml aliquot of solution containing 10 μg Cu²⁺ and milligram amounts of other cations was taken, and the recommended procedure was followed. The results are summarized in Table 3. The results show that the copper ions in the binary mixtures are retained almost completely by the modified membrane disk.
Analysis of Artificial and Natural Water Samples
To test the applicability of the developed procedure, it was applied to the extraction and determination of copper ions from some water samples. Tap water from the Abhar and Zanjan cities, two fountains in Tarom city, and synthetic water samples were analyzed. The results for this study are presented in Table 4. The recovery of the samples is satisfactory and was confirmed using the standard addition method, which indicates the capability of the system for the determination of these ions. Good agreement was obtained between the added standards and the measured analyte amounts. The recovery values calculated for the added standards were always higher than 90%, thus confirming the accuracy of the procedure and its independence from matrix effects.
Conclusion
A simple, precise, and accurate method was developed for the selective separation, preconcentration, and determination of copper from various complex matrices. The time taken for the separation and analysis of copper in a 100 ml sample is at most 20 min. The method can selectively separate Cu²⁺ ions from various metal ions even when they are present at much higher concentrations. The method can be successfully applied to the separation and determination of copper in real samples.
Figure 4. Computer fit of absorbance versus [Cu²⁺]/[L] mole ratio plot in acetonitrile at 25˚C: (×) experimental point, (O) calculated point, (=) experimental and calculated points are the same within the resolution of the plot.
Figure 5. Effect of ligand amount on extraction recovery of copper ion.
Table 3 note: initial samples contained 10 μg Cu²⁺ and different amounts of various ions in 100 ml water.
Table 4. Determination of copper ions in real water samples. Aqueous phase: 100 ml sample solution, pH = 7 (with 0.01 M of phosphate buffer), amount of ligand = 6 mg, eluent =
"Chemistry"
] |
Analysis and Prediction of COVID-19 using SIR, SEIR, and Machine Learning Models: Australia, Italy, and UK Cases
The novel Coronavirus disease, known as COVID-19, is an outbreak that started in Wuhan, one of the central Chinese cities. In this report, a short analysis focusing on Australia, Italy, and the United Kingdom has been conducted. The analysis includes confirmed and recovered cases and deaths, the growth rate in Australia as compared with Italy and the United Kingdom, and the outbreak in different Australian cities. Mathematical approaches based on the susceptible, infected, and recovered cases (SIR) and susceptible, exposed, infected, and recovered (SEIR) models were proposed to predict the epidemiology in the countries. Since the performance of the classic forms of SIR and SEIR depends on parameter settings, some optimization algorithms, namely the Broyden-Fletcher-Goldfarb-Shanno (BFGS), conjugate gradients (CG), L-BFGS-B, and Nelder-Mead, are proposed to optimize the parameters of the SIR and SEIR models and improve their predictive capabilities. The results of the optimized SIR and SEIR models are compared with the Prophet algorithm and the logistic function as two known ML algorithms. The results show that different algorithms display different behaviours in different countries. However, the improved versions of the SIR and SEIR models have a better performance compared with the other algorithms described in this study. Moreover, the Prophet algorithm works better for the Italian and United Kingdom cases than for the Australian cases, and the logistic function has a better performance than the Prophet algorithm in these cases. It seems that the Prophet algorithm is suitable for data with an increasing trend in pandemic situations. Optimization of the SIR and SEIR model parameters has yielded a significant improvement in the prediction accuracy of the models. Although there are several algorithms for prediction of this pandemic, there is no single algorithm that would be the best one for all cases.
parameters (Susceptible, Infected, Recovered) in the SIR model. The results indicate that the suggested method is precise enough, with low error compared to analytical methods. Mbuvha and Marwala (2020) calibrated the SIR model to South Africa, considering different scenarios for R0 (the reproduction number), for reporting infections and estimating healthcare resources for the next few days. Qi, Xiao et al. (2020) proposed that both daily temperature and relative humidity influenced the occurrence of COVID-19 in Hubei province and in some other provinces. Salgotra, Gandomi et al. (2020) developed two COVID-19 prediction models based on genetic programming and applied these models in India. Findings from the study by Salgotra, Gandomi et al. (2020) show that genetic evolutionary programming models are highly reliable for COVID-19 cases in India.
In January 2020, the first case of COVID-19 was reported in Australia. In this report, a short analysis focusing on Australia is presented, together with simulations for the next few days.
The manuscript is organized in several sections. Section I presents the research methodology. Sections II and III introduce the SIR and SEIR models. Section IV shows the prediction algorithms (logistic function and Prophet algorithm). Section V shows the results. The conclusion and discussion are provided in the last section.
I. Research methodology
The study was carried out in several phases. In the first step, data were collected from the World Health Organization (WHO) and Johns Hopkins University, since these organizations collect data from different sources. After that, the data were analyzed and preprocessed in order to remove any duplicated and missing values. Numerical tests were performed using Python and R and executed on a computer with an Intel® Core i7-4510U 2.0 GHz processor and 8 GB of DDR3 memory (Supplementary file). The flowchart of the research methodology is provided in Figure 1.

II. The SIR model

The SIR model divides a population of size N into susceptible (S), infected (I), and recovered (R) compartments, whose dynamics are given by Equations (1)-(3):

dS/dt = −βSI/N (1)
dI/dt = βSI/N − γI (2)
dR/dt = γI (3)

in which
• S is the number of susceptible individuals at time t.
• I is the number of infected individuals at time t.
• R is the number of recovered individuals at time t.
• β and γ are the transmission rate and rate of recovery (removal), respectively.
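As a minimal sketch of solving these equations numerically (the paper's own runs used Python; the population size here is roughly Australia's, and the rate values are the ones quoted later in the paper, β = 0.378 and γ = 0.14, giving R0 = β/γ = 2.7):

```python
# Minimal sketch of numerically integrating Equations (1)-(3) with SciPy;
# parameter values follow the ones quoted later in the paper, and N = 25e6
# is an illustrative population size (roughly Australia).
import numpy as np
from scipy.integrate import odeint

def sir(y, t, n, beta, gamma):
    s, i, r = y
    ds = -beta * s * i / n
    di = beta * s * i / n - gamma * i
    dr = gamma * i
    return ds, di, dr

n = 25e6
y0 = (n - 1.0, 1.0, 0.0)            # a single initial infected case
t = np.linspace(0, 180, 181)        # days since the start of the outbreak
s, i, r = odeint(sir, y0, t, args=(n, 0.378, 0.14)).T
print("peak infected:", int(i.max()), "on day", int(t[i.argmax()]))
```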
III. The SEIR model
The SEIR model is an extended version of the SIR model (Peng, Yang et al. 2020). It models the movement of people between different conditions: the susceptible (S), exposed (E), infective (I), and recovered (R). The parameters S, I, and R are the same as the parameters in the SIR model, and E presents the fraction of individuals that have been infected but do not show any signs. The SEIR-model diagram is shown in Figure 3 (Peng, Yang et al. 2020). The equations of the generalized SEIR model are defined as follows (Eqs. 4-10):

dS/dt = −βSI/N − αS (4)
dE/dt = βSI/N − γE (5)
dI/dt = γE − δI (6)
dQ/dt = δI − λ(t)Q − κ(t)Q (7)
dR/dt = λ(t)Q (8)
dD/dt = κ(t)Q (9)
dP/dt = αS (10)

where Q, D, and P denote the quarantined, dead, and insusceptible (protected) fractions, α presents the protection rate, β shows the infection rate, γ illustrates the inverse of the average latent time, δ displays the inverse of the average quarantine time, λ(t) denotes the cure rate, and κ(t) is the time-dependent mortality rate (Peng, Yang et al. 2020).
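As a companion to the SIR sketch above, the generalized system of Eqs. (4)-(10) can be integrated the same way; all rate values below are illustrative guesses, and λ and κ are held constant here even though the model allows them to vary in time:

```python
# Companion sketch: integrating the generalized SEIR system (Eqs. 4-10).
# Every rate value is an illustrative guess, and lambda/kappa are held
# constant although the text allows them to be time-dependent.
import numpy as np
from scipy.integrate import odeint

def seir(y, t, n, alpha, beta, gamma, delta, lam, kappa):
    s, e, i, q, r, d, p = y
    ds = -beta * s * i / n - alpha * s
    de = beta * s * i / n - gamma * e
    di = gamma * e - delta * i
    dq = delta * i - (lam + kappa) * q
    dr = lam * q
    dd = kappa * q
    dp = alpha * s
    return ds, de, di, dq, dr, dd, dp

n = 60e6                                         # e.g. Italy
y0 = (n - 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0)
t = np.arange(0, 180, dtype=float)
out = odeint(seir, y0, t, args=(n, 0.03, 0.378, 1 / 5.2, 1 / 3.0, 0.02, 0.005))
print("peak quarantined:", int(out[:, 3].max()))
```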
IV. Prediction
In the present section, some machine learning techniques were used for COVID-19 case predictions in Australia, Italy, and the United Kingdom. Machine learning is a branch of computer science in which data could teach algorithms. The learning process could be done in supervised, unsupervised, and/or semi-supervised learning forms (Mitchell 1997, Arkes 2001, Armstrong 2001, Nikolopoulos, Litsa et al. 2015, Maleki, Mahmoudi et al. 2020). In this section, two approaches used for the prediction of cases (confirmed and deaths) of COVID-19 are described: the logistic function and the Prophet algorithm.

a) Logistic function
The logistic function is defined in Equation (11):

f(t) = L / (1 + e^(−K(t − t0))) (11)

where t0 is the sigmoid's midpoint, L is the curve's maximum value, and K is the logistic growth of the curve.

b) Time-series forecasting with the Prophet algorithm
The Prophet algorithm is an open-source tool developed by Facebook's Data Science team, and its main goal is business forecasting (Taylor and Letham 2017, Taylor and Letham 2018). The Prophet algorithm works well with time-series data that have seasonal effects and is robust in dealing with missing data (Ndiaye, Tendeng et al. 2020). With the Prophet algorithm, the forecast can be written as the decomposition y(t) = g(t) + s(t) + h(t) + ε_t, where g(t) is the trend component, s(t) the seasonal component, h(t) the holiday effect, and ε_t the error term.
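A hedged usage sketch of the Prophet workflow follows; the package name is `prophet` (formerly `fbprophet`), and the cumulative case series here is a placeholder rather than real data:

```python
# Hedged usage sketch of the Prophet workflow described above; `ds` and `y`
# are the two column names Prophet expects, and the series is a placeholder.
import pandas as pd
from prophet import Prophet

df = pd.DataFrame({
    "ds": pd.date_range("2020-03-01", periods=90),  # dates
    "y": [i ** 1.5 for i in range(90)],             # stand-in cumulative cases
})

m = Prophet()
m.fit(df)
future = m.make_future_dataframe(periods=30)        # 30-day forecast horizon
forecast = m.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```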
V. Results
a. Analysis
i. New cases
In this sub-section, the confirmed growth rates focusing on Australia, Italy, and the United Kingdom were calculated for every day from 2020-04-24 to 2020-05-23. Figure 4 depicts the growth rate of confirmed cases in the countries. As can be seen in Figure 4, the growth rate for Australia was always below 0.5 during times of outbreak and just above 0.0 at the end of May, while the rates for Italy and the United Kingdom are generally high. The growth rate for the United Kingdom was almost above 2.0 in April and then dramatically declined in May. The rate for Italy fluctuated between 0.5 and 1.5 in April and May.
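A generic pandas sketch of the per-day bookkeeping used in this analysis follows. The paper's exact growth-rate formula, Equation (14), is only referenced and not reproduced in this text, so day-over-day percentage change is used here as a stand-in; the active-case definition matches Equation (13) in the next subsection, and all numbers are placeholders:

```python
# Generic pandas sketch: daily growth rate and active cases (Eq. 13 below);
# pct_change() is only a stand-in for the paper's growth-rate Equation (14).
import pandas as pd

df = pd.DataFrame({
    "confirmed": [100, 130, 170, 200],
    "deaths":    [1, 2, 3, 4],
    "recovered": [10, 20, 40, 80],
})
df["active"] = df["confirmed"] - df["deaths"] - df["recovered"]  # Eq. (13)
df["growth_rate"] = df["confirmed"].pct_change()
df["active_change"] = df["active"].diff()  # negative = recoveries outpace cases
print(df)
```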
Figure 5 also presents the growth rate of death cases for the above-mentioned countries daily from 2020-04-24 to 2020-05-23. The growth rate for death cases in Australia fluctuated between 0 and 7 in April and May and was 7 at the end of April (higher than Italy and the United Kingdom during the same time), while for Italy the rate was almost below 2.0 during the same time period, and for the United Kingdom the rate was just below 4.0 at the end of April and just above 0.0 at the end of May.

ii. Overall growth rate
This section shows the numbers of active cases in these three countries. The active cases were calculated using the following equation:

Active_cases = confirmed_cases − deaths_cases − recovered_cases (13)

From Equation (13), the overall growth rate could be calculated according to Equation (14), in which the index i presents the day. Figure 6 illustrates the overall growth rate for confirmed cases in the countries. Negative numbers show that people are recovering faster than others are getting sick, and that would be good news. The rate for Australia in the time period was almost always below zero and changed from −15 at the end of April to just below −5 at the end of May, and for Italy it fluctuated between just above −7.5 and just above 0.0, while the rate for the United Kingdom was almost always a positive number in the time horizon (between 0.0 and 3.0). Figure 7 illustrates the number of death cases in Australia compared with the two other countries, and it is clear that the number in Australia is significantly lower than the other two.

With the aim of forecasting, the logistic function defined in Equation (11) was applied to the collected data (time horizon: start of the outbreak in the countries), and the results are illustrated in Figures 9-14. As shown in Figures 9-14, the logistic function fits well while the trend of cases is increasing. To evaluate the performance, the R2 score was used for confirmed and death cases; the results are presented in Table 1. Another metric used in the experiments is the root mean square error (RMSE); the RMSE results are depicted in Table 2. The best RMSE value belongs to the Australian cases (confirmed and deaths).

For the SIR model, values of the parameters β and γ had to be set, since these parameters can be estimated from data. Before the start of the outbreak, the number of susceptible cases is taken to be equal to the number of people in these countries, because no antibodies exist and no vaccines for the disease have been developed. At first, R0 = 2.7 was fixed (reported by the Australian Government: Department of Health) as the median number, with β = 0.378 and γ = 0.14. Figure 19 (a-c) presents the confirmed cases provided by the optimized SEIR model with the above-mentioned descriptions in the three countries (see Figure 18). Real data were used to estimate the values of β and γ. An optimizer was used to find the best estimates of β and γ. The optimization algorithms were the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm (Fletcher 1987), L-BFGS-B (Byrd, Lu et al. 1995), conjugate gradients (CG) (Fletcher and Reeves 1964), and Nelder-Mead (Nelder and Mead 1965). The parameter settings are provided in Table 3. The flowchart of the improved SIR and SEIR versions and the parameter settings for the above-mentioned algorithms are addressed in Figure 18 and Table 4, respectively. Table 5 shows the optimized values obtained by the different algorithms (SIR model). The best values for the parameters were found using the Nelder-Mead algorithm (for the SIR model) and the L-BFGS-B algorithm (for the SEIR model). This method is illustrated in Figure 18. As was mentioned earlier, before the start of the outbreak, the number of susceptible cases was equal to the number of people in these countries, because no antibodies exist and no vaccine for the disease is available. From Wikipedia, the populations of Australia, Italy, and the United Kingdom are 25×10⁶, 60×10⁶, and 67×10⁶, respectively. Table 6 illustrates the RMSE values obtained by the algorithms (for the SIR and SEIR models), showing that the optimization significantly reduces these values.
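The following sketch illustrates the parameter-optimization step just described: choose (β, γ) to minimize the RMSE between reported and simulated infected counts, here with Nelder-Mead as used for the SIR model (L-BFGS-B can be passed the same way for the SEIR variant); the `observed` series is a synthetic stand-in for real data:

```python
# Sketch of the optimization step: minimize the RMSE between reported and
# simulated infected counts over (beta, gamma). `observed` is synthetic.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import minimize

def simulate_infected(beta, gamma, n, days):
    def sir(y, t):
        s, i, r = y
        return (-beta * s * i / n, beta * s * i / n - gamma * i, gamma * i)
    return odeint(sir, (n - 1.0, 1.0, 0.0), np.arange(days, dtype=float))[:, 1]

def rmse(params, observed, n):
    beta, gamma = params
    sim = simulate_infected(beta, gamma, n, len(observed))
    return float(np.sqrt(np.mean((sim - observed) ** 2)))

n = 25e6
observed = simulate_infected(0.378, 0.14, n, 60)  # stand-in "reported" series
res = minimize(rmse, x0=[0.2, 0.1], args=(observed, n), method="Nelder-Mead")
beta_hat, gamma_hat = res.x
print("beta=%.3f gamma=%.3f R0=%.2f" % (beta_hat, gamma_hat, beta_hat / gamma_hat))
```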
The improved versions of the SIR and SEIR models follow three steps (Figure 18):
Step 1: defining initial values for the parameters and variables.
Step 2: solving the SIR and SEIR models numerically.
Step 3: using an optimization algorithm to estimate the best values for the parameters, and estimating R0.

Tables 7-9 present the results of the predicted cumulative confirmed cases obtained using the Prophet algorithm in the three countries. In the presented tables, y represents the true values of confirmed cases, ds is time, and ŷ is the forecasted value, with its lower and upper bounds giving the uncertainty of the forecast. It should be noted that the forecasted values were made between the cutoff and cutoff + horizon. Tables 7-9 are also called cross-validation matrices, which are used to find the error values between y and ŷ, after which the RMSE values can be obtained (Figure 23 a-c). Figures 20-22 visualize the forecasted values obtained using the Prophet algorithm, indicating that the mentioned algorithm fits the cases of Italy and the United Kingdom well but shows errors for Australia.
VI. Conclusion and discussion
COVID-19, caused by a family of coronaviruses, has affected the lives of billions of people worldwide.
The first phase of the paper started with a short analysis of COVID-19, focusing on Australia, Italy, and the United Kingdom. The analysis presents confirmed-case and death growth rates in Australia, a comparison between Australia, Italy, and the United Kingdom, and also a short analysis of the different states of Australia. The analysis shows that, generally, Australia is in a good position compared with the two other countries. However, the situations in the different states of Australia are quite varied; for example, New South Wales has the most confirmed and death cases, while the Northern Territory shows the fewest confirmed and death cases (it is worth mentioning that New South Wales has a larger population).
Mathematical approaches based on SIR and SEIR were proposed to predict the epidemiology in Australia, Italy, and the United Kingdom. Since the classic forms of SIR and SEIR are deterministic, an improved version based on parameter optimization was suggested to improve the prediction.
The results are compared with the logistic function and the Prophet algorithm and are summarized as follows:
• Comparison between the classic form of the SIR model and real data showed a significant gap. However, initializing the parameters of the SIR model significantly improved the prediction.
• The classic form of the SIR model worked better for the United Kingdom, while it was not suitable for the Australian case (regarding RMSE values).
• The logistic function was a good model for the United Kingdom with an r2_score of 0.97, while this score was 0.67 for Australia and 0.95 for Italy.
• The best RMSE value belonged to the Australian cases (confirmed and deaths).
• Optimization of the parameters of the SIR and SEIR models significantly improved the prediction accuracy of the models.
• The improved version of SEIR has a better performance compared with the SIR model (regarding RMSE values and figures).
• The optimized SEIR model gives better predictions for the UK and Italy compared with Australia.
• The best values for the parameters were found using the Nelder-Mead algorithm for the SIR model and the L-BFGS-B algorithm for the SEIR model.
• The Prophet algorithm worked better for the Italian and United Kingdom cases than for the Australian cases.
• The logistic function had a better performance than the Prophet algorithm in these cases.
• The improved versions of the SIR and SEIR models had a better performance compared with the logistic function, the Prophet algorithm, and the classic form of the SIR model.
In this paper, all forecasting was carried out without considering scenarios of social distancing and quarantine, which makes such scenarios a valuable future direction. This paper presents SIR and SEIR as epidemiology models; it would be interesting to test other epidemiology models. Moreover, it is worthwhile to combine the mathematical models with other observations such as policy interventions, human behavior, and constraints.
Compliance with Ethical Standards:
• Sources of Funding: The authors confirm that there is no source of funding for this study.
• Conflict of Interest: The authors declare that they have no conflict of interest.
• Human Participants and/or Animals: None.
Supplementary Files
This is a list of supplementary files associated with this preprint: Supplementarymaterials.docx.

In the SEIR model equations (Peng, Yang et al. 2020), α denotes the protection rate, β the infection rate, γ the inverse of the average latent time and δ the inverse of the average quarantine time; in the logistic function, L is the curve's maximum value and K is the logistic growth rate of the curve.

Time series forecasting with the Prophet algorithm: the Prophet algorithm is an open-source tool developed by Facebook's Data Science team, and its main goal is business forecasting (Taylor and Letham 2017, Taylor and Letham 2018). The Prophet algorithm works well with time-series data that have seasonal effects and is robust in dealing with missing data (Ndiaye, Tendeng et al.). In the Prophet algorithm, the forecast can be written as shown in Equation 5, and the output also provides lower and upper bounds for the forecasted values. It should be noted that the forecasted values were made between the cutoff and cutoff + horizon. Tables 7-9 are also called cross-validation matrices, used to find the error values between y and ŷ, after which the RMSE values can be obtained (Figure 23 a-c). Figures 20-22 visualize forecasted values obtained using the Prophet algorithm, indicating that the algorithm fits the cases of Italy and the United Kingdom well but shows errors for the Australian case.

Figure 1 Flowchart of the current research process
The SEIR diagram (Peng, Yang et al. 2020)
Figure 4 Growth rate (confirmed cases in Australia, Italy, and the United Kingdom)
Figure 5 Growth rate (death cases in Australia, Italy, and the United Kingdom)
Figure 6 Overall growth rate for confirmed cases in Australia, Italy, and the United Kingdom
Figure 8 (a-h) Confirmed versus death cases in each individual Australian state
Figure 9 Prediction of confirmed cases by logistic function (Australia)
Figure 15 Predicted cases in Australia using the susceptible, infected, recovered (SIR) model (blue: real confirmed cases, red: SIR model)
Predicted cases in Italy based on the SIR model (blue: real confirmed cases, red: SIR model)
Predicted cases in the UK based on the SIR model (blue: real confirmed cases, red: SIR model)
Figure 18 Flowchart of the improved version of the SIR and SEIR models
Figure 19 Prediction by the optimized SEIR model
Figure 20 Forecasting by Prophet for the next year (confirmed cases in Australia)
As shown in Figures 9-14, the logistic function is fitted while the trend of cases is still increasing, and the R2 score metric is used to evaluate the performance for confirmed and death cases. Results are presented in Table 2. Another metric used in the experiments is the root mean square error (RMSE); the RMSE results are also depicted in Table 2. The best RMSE value belongs to the Australian cases (confirmed and deaths).
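A minimal sketch of this logistic fit and both metrics follows; the case series is synthetic, standing in for the real data, with L the curve's maximum value, K the logistic growth rate and t0 the assumed midpoint parameter.

```python
# Sketch: fit a logistic curve and score it with r2_score and RMSE.
import numpy as np
from scipy.optimize import curve_fit
from sklearn.metrics import r2_score, mean_squared_error

t = np.arange(60)
cases = 90_000 / (1 + np.exp(-0.2 * (t - 30))) + np.random.normal(0, 500, 60)

def logistic(t, L, K, t0):
    # L: curve's maximum value, K: logistic growth rate, t0: midpoint
    return L / (1 + np.exp(-K * (t - t0)))

popt, _ = curve_fit(logistic, t, cases,
                    p0=[cases.max(), 0.1, t.mean()], maxfev=10_000)
pred = logistic(t, *popt)
print("r2_score =", r2_score(cases, pred))
print("RMSE =", np.sqrt(mean_squared_error(cases, pred)))
```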
Table 1 R2 scores for different countries and different cases
Table 2 Root mean square error (RMSE) values for different countries and different cases
Table 3 RMSE values obtained by the SIR model (before optimization of parameters)
Table 5 Median values of SIR parameters determined by the departments of health in each country
Table 6 RMSE values obtained based on the improved SIR model considering a 0.99 confidence interval
Table 8 Predicted cumulative confirmed cases in the United Kingdom (cross-validation matrix)
"Computer Science"
] |
Numerical Analysis of Dynamic Effects of a Nonlinear Vibro-Impact Process for Enhancing the Reliability of Contact-Type MEMS Devices
This paper reports on numerical modeling and simulation of a generalized contact-type MEMS device having large potential in various micro-sensor/actuator applications, which are currently limited because of detrimental effects of the contact bounce phenomenon that is still not fully explained and requires comprehensive treatment. The proposed 2-D finite element model encompasses cantilever microstructures operating in a vacuum and impacting on a viscoelastic support. The presented numerical analysis focuses on the first three flexural vibration modes and their influence on dynamic characteristics. Simulation results demonstrate the possibility to use higher modes and their particular points for enhancing MEMS performance and reliability through reduction of vibro-impact process duration.
Introduction
Many traditional devices of microelectromechanical systems (MEMS) do not include contacting surfaces. However, in recent years there is an increasing interest in various microsensors and microactuators that employ contact interaction in their normal mode of operation. This trend is determined by the new developments in MEMS technology and new market demands. Among such devices, the fast development of microswitches is very promising. However, insufficient mechanical reliability is one of the main obstacles for wider successful application of these microdevices [1,2]. Interrelated parasitic vibro-impact effects (bouncing) and stiction (a contraction of 'static friction') are one of the major reasons that degrade their reliability [1][2][3][4][5][6][7]. Due to the elastic response of the contacting microstructure of a microswitch, at each on/off cycle its tip bounces over the substrate a number of times upon contact, as first reported by K. Petersen in 1979 [8]. This effect is not unexpected, since these switches are essentially a microscopic copy of mechanical relays, in which contact bounce is a well-known phenomenon. It is harmful since it induces pitting and hardening due to the repeated impacts, and causes severe damage of contact surfaces by mechanical hammering and electrical arcing (especially during "hot switching" at high current densities), thus promoting the initiation and subsequent propagation of subsurface cracks and facilitating material transfer during detachment of the contacting microstructure. Such progressive degradation of the contact interface can eventually lead to stiction and make the device non-functional. Stiction is usually defined as unintentional permanent attachment of compliant microstructure surfaces occurring during contact when restoring elastic forces are unable to overcome adhesive interfacial forces [9][10][11]. Bouncing degrades device operational speed by increasing the actual switching time, defined as the time at which a continuous electric current flow can be achieved. MEMS switches must be capable of operating for billions of cycles during their lifetime. Limiting of bouncing is crucial since it would increase the reliability and improve their performance by reducing switching time. Many researchers emphasize that in order to achieve these goals a deeper understanding is required in the field of vibro-impact interactions [2,6,7,12,13]. Consequently, to enhance the mechanical reliability of microswitches (like those developed by the MEMS research group at Kaunas University of Technology [14]) and other contact-type microdevices, besides a correct selection of the interfacial materials [15], it is of fundamental importance to model and thoroughly analyze characteristic dynamic effects related to complex vibro-impact phenomena. Different research groups throughout the world employ different simulation strategies and numerical models of varying complexity and dimensionality for investigation of contact-type microdevices. The predominant trend is to concentrate modeling efforts on certain aspects of device operation such as electrostatic actuation (e.g., [16]) or viscous air damping (e.g., squeeze-film damping [17]). The other research trend is to pursue development of comprehensive computational models accounting as precisely as possible for all of the major physical processes and coupled-field interactions taking place in operation of contact-type MEMS devices.
In this respect some researchers rely on application of classical beam theories with finite difference schemes to model microswitch dynamics by including electrostatic forces, squeeze-film damping and contact bouncing effects [6,7], simulated either by a simple linear spring approach [7] or by additionally incorporating adhesive interaction into the contact model [6]. The finite element (FE) method is increasingly employed as the multiphysics capabilities of FE software are improving at a rapid pace. A successful example of the latter strategy is a research work by Guo et al. [18], where a complex 3-D FE model is developed within ANSYS, accounting simultaneously for electrostatic actuation, squeeze-film damping, modeled by the compressible Reynolds equation, and nonlinear contact including adhesion based on Johnson-Kendall-Roberts (JKR) theory. The authors analyze the influence of air damping and actuation voltage on the bouncing process and demonstrate how modification of the damping and tailoring of the voltage can be used to mitigate the process. Czaplewski et al. also applied the FE method for generation of a 3-D model of a microswitch including electrostatic actuation but excluding mechanical contact and squeeze-film damping [19]. This approximation is used because the authors focused their attention on electrostatic-structural interaction with the purpose of designing an actuation waveform that would completely eliminate contact bouncing. FE analysis is also used by Lishchynska et al. in an attempt to simulate the bouncing effect in a microswitch [20]. Air damping is not considered by the authors, who simulate electromechanical behavior and propose an effective voltage controller scheme for stabilizing off-stage oscillations. However, the authors emphasize that more research work is still required in the field of bouncing reduction in order to achieve stable dynamic behavior during microswitch closure.
A review of the literature on contact bounce in microswitches suggests that extensive research efforts are still needed in this field and that scientific results on the underlying dynamical aspects of this detrimental phenomenon are relatively scarce. Modification of the electrostatic control mechanism is a predominant approach used for reduction of bouncing; however, we believe that there is still enough undisclosed potential in the mechanical domain alone, which could be beneficial in tackling the considered problem. Therefore, in this paper a contact-type microdevice is analyzed purely from a mechanical point of view, thereby concentrating on intrinsic dynamic properties of elastic structures such as natural vibration modes and their advantageous utilization. Figure 1a illustrates a generalized model of a common electrostatic contact-type MEMS device operating in ambient air. The device is based on a cantilever microstructure, though the fixed-fixed configuration is frequent as well. The goal of the current research work is to focus on the impact process alone and carry out a detailed investigation of important dynamic aspects of this complex phenomenon. Therefore, in this paper electrostatic forces are not considered and it is assumed that the microstructure is operating in vacuum, thus squeeze-film damping is neglected as well (the research of these phenomena has been reported earlier [21][22][23]). Exclusion of the gas environment from the presented numerical model is justified by a preference to avoid ambient gas in device operation since it creates favorable conditions for electrical arcing. For simulation purposes a 2-D modeling approach is applied since: a) flexural vibration modes have a much more significant influence on the vibro-impact process in comparison to torsional modes and b) it is computationally more cost-effective.

Finite Element Model of Impacting Cantilever Microstructure

Figure 1b presents the FE model of the microstructure, which consists of beam elements located in a single layer and j = 1,2,...,k motion limiters or supports (0 < k < 2m) that are located in i = 1,2,...,m nodes. Each beam element has two nodes with three degrees of freedom (DOF) at each one (displacement in x- and y-axis directions and rotation in the x0y plane). The model was meshed manually with the number of finite elements m equal to 50, thereby resulting in 150 total DOFs. The sufficiency of this particular mesh density was confirmed by comparative simulations presented in Section 2. Impact modeling is based on the contact element approach and makes use of the Kelvin-Voigt (viscoelastic) rheological model, in which a linear spring is connected in parallel with a damper: the former represents the impact force and the latter accounts for energy dissipation during impact. After proper selection of generalized displacements in the inertial system of coordinates, model dynamics is described by the following equation of motion given in a general matrix form:

$$\mathbf{M}\ddot{\mathbf{u}}(t) + \mathbf{C}\dot{\mathbf{u}}(t) + \mathbf{K}\mathbf{u}(t) = \mathbf{F}(t) + \mathbf{R}(t),$$

where $\mathbf{M}$, $\mathbf{C}$ and $\mathbf{K}$ are the mass, damping and stiffness matrices, $\mathbf{u}(t)$ is the vector of generalized displacements, $\mathbf{F}(t)$ is a vector representing the sum of external forces acting on the microstructure and $\mathbf{R}(t)$ is the vector of impact interaction between the cantilever microstructure and the support. Since external electrostatic and air pressure forces are not considered here, $\mathbf{F}(t)$ is used as a mechanical load during simulations of free impact vibrations presented in Section 2, with initial conditions $\mathbf{u}(0) = \mathbf{u}_0$, $\dot{\mathbf{u}}(0) = \mathbf{0}$. The components of $\mathbf{R}(t)$ represent the reaction of the impacting microstructure and are expressed as:

$$R_{ij} = \begin{cases} -\left(K_{ij}\,\Delta_{ij} + C_{ij}\,\dot{\Delta}_{ij}\right), & \Delta_{ij} \le 0, \\ 0, & \Delta_{ij} > 0, \end{cases}$$

where $K_{ij}$, $C_{ij}$ are the stiffness and viscous friction coefficients of the support and $\Delta_{ij}$ is the distance from the i-th nodal point of the microstructure to the j-th surface of the support located at the corresponding nodal point. In the case of the considered model the assumption of proportional damping is adequate, therefore internal damping is modeled by means of the Rayleigh damping approach [24]:

$$\mathbf{C} = d_M\,\mathbf{M} + d_K\,\mathbf{K},$$

where $d_M$, $d_K$ are mass and stiffness damping parameters, respectively, that are determined from the following equations using two damping ratios $\xi_1$ and $\xi_2$ that correspond to two unequal natural frequencies of vibration $\omega_1$ and $\omega_2$ [24]:

$$d_M + d_K\,\omega_i^2 = 2\,\xi_i\,\omega_i, \qquad i = 1, 2.$$

The presented FE model of the vibro-impact microsystem was implemented in FORTRAN.
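To make the contact law concrete, the following is a minimal sketch of free impact vibrations with a Kelvin-Voigt support, reduced to a single modal DOF of the cantilever tip; all numerical values are illustrative assumptions, not parameters from the paper.

```python
# Sketch: single-DOF tip impacting a Kelvin-Voigt (spring + damper) support.
import numpy as np
from scipy.integrate import solve_ivp

m, c, k = 1e-9, 1e-7, 10.0    # modal mass [kg], damping, stiffness [N/m] (assumed)
Ks, Cs = 1e3, 1e-4            # support stiffness and viscous coefficient (assumed)

def rhs(t, y):
    u, v = y
    # Contact reaction acts only while the tip penetrates the support (u < 0)
    R = -(Ks * u + Cs * v) if u < 0 else 0.0
    return [v, (-k * u - c * v + R) / m]

# Release from a statically deflected position: free impact vibrations
sol = solve_ivp(rhs, (0, 2e-4), [1e-6, 0.0], max_step=1e-8)
print("final tip displacement:", sol.y[0, -1])
```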
Numerical Analysis of Impact Vibrations of Cantilever Microstructure
Free impact vibrations of elastic microstructures constitute one of the operation modes of contact-type MEMS devices. Complete vibro-impact process consists of free vibrations of the microstructure in the intervals between the impacts and its vibration during the impacts. Therefore, thorough analysis of free and impact vibrations of elastic microstructures is essential. For this purpose special FORTRAN numerical codes were written and used for running detailed dynamic simulations with the developed FE model of the cantilever microstructure that undergoes impacts against the support.
The modes of natural transverse vibrations of the microstructure (Figure 2) consist of transverse displacements Y (Figure 2a) and rotations Φ around the axes perpendicular to the plane of vibrations (Figure 2b). The first five modes (I, II, III, IV, V) were obtained, which form nodal points at the intersection with the axis line. These points are denoted by numbers that express the ratio (x_0/l) between the distance x_0 from the anchor of the cantilever microstructure and its whole length l. The letters Y_ij and Φ_ij denote the values of the maximum amplitudes (deflections) of the flexural and rotational modes.
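These particular points can be cross-checked against closed-form beam theory. The sketch below (an independent check, not the paper's FORTRAN code) locates the eigenvalues of a clamped-free beam and the zero crossings of its first three mode shapes; the nodes are expected near x_0/l ≈ 0.78 (mode II) and ≈ 0.87 (outer node of mode III), consistent with the points used in this study.

```python
# Sketch: eigenvalues and nodal points of a clamped-free (cantilever) beam.
import numpy as np
from scipy.optimize import brentq

def char_eq(lam):
    # Characteristic equation of a clamped-free beam: 1 + cos(l)cosh(l) = 0
    return 1.0 + np.cos(lam) * np.cosh(lam)

# First three eigenvalues, bracketed by sign changes
lams = [brentq(char_eq, a, b) for a, b in [(1.5, 2.5), (4.0, 5.5), (7.0, 8.5)]]

def mode_shape(xi, lam):
    sigma = (np.cosh(lam) + np.cos(lam)) / (np.sinh(lam) + np.sin(lam))
    return (np.cosh(lam * xi) - np.cos(lam * xi)
            - sigma * (np.sinh(lam * xi) - np.sin(lam * xi)))

for i, lam in enumerate(lams, start=1):
    xi = np.linspace(1e-3, 1.0, 2000)
    y = mode_shape(xi, lam)
    nodes = [brentq(mode_shape, xi[j], xi[j + 1], args=(lam,))
             for j in range(len(xi) - 1) if y[j] * y[j + 1] < 0]
    print(f"mode {i}: lambda = {lam:.4f}, nodal points x0/l = {np.round(nodes, 3)}")
```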
The process of free impact vibrations of the microstructure for the case when the support is located at the free end of the cantilever is presented in Figure 3. Free impact vibrations were obtained by: (a) displacing the free end of the microstructure upwards to a certain height (static analysis) and (b) releasing the microstructure from its statically-deformed position, thereby allowing it to impact the support (transient analysis). The obtained complex vibro-impact motion is a result of self-excitation of several vibration modes of the microstructure. The simulated tip trajectory y_l was matched to the experimental one by variation of the contact stiffness and damping values in the viscoelastic impact model. Initial guesses of these values were performed empirically based on the available data on material properties of the microstructure and contact surfaces. Thereby the developed FE model of the impacting cantilever was adjusted until an acceptable level of accuracy was achieved. The accuracy of the model was checked quantitatively by using simulated and experimental values of the period of free impact vibrations T and calculating the relative error $\varepsilon = ((T_{exp} - T)/T_{exp}) \times 100\%$. The simulated vibro-impact process in Figure 3 yields T ≈ 5.1 μs, while the corresponding measured value is equal to T_exp ≈ 4.9 μs. This gives ε ≈ 4%. This discrepancy is sufficiently small and allows us to consider the developed model to be adequate to the physical one.
Temporal characteristics that are most typical for the free impact vibrations are: T_p, duration of the transient vibro-impact process; T, period of free impact vibrations; T_1, duration of vibrations between two impacts; T_2, impact duration. The accuracy of simulation results is significantly influenced by the density of the finite elements mesh (Figure 4). The points x_0/l = 0.78 and 0.87 coincide with the nodal points of the 2nd and the 3rd flexural vibration modes, while x_0/l = 0.67 coincides with the maximum amplitude point of the 3rd mode. These points will be referred to as particular points of natural vibration modes. The subsequent numerical analysis will be confined to the consideration of the first three modes since they significantly influence the dynamic characteristics of the vibro-impact process. In order to clarify the nature of these characteristics it was necessary to determine vibration modes of the microstructure during the impact on the support. Figure 6 provides dependencies of the location of nodal points of the modes on the position of the support for the case of the supported microstructure. The nodal points y_ij and φ_ij of the displacement mode Y_i and rotational vibration mode Φ_i are designated by two indices: i refers to the number of the vibration mode, j to the sequence number of the nodal point with respect to the anchor point of the microstructure. In comparison to the unsupported microstructure, an additional nodal point (j = 0) is added to each mode for the case of the supported microstructure. In Figure 6 the diagonal line represents the shifting of the support from the anchor of the microstructure to the free end. It is obvious that the position of the support determines the position of nodal points of the mode. When the support is located at the anchor, the modes and nodal points coincide with those of the unsupported microstructure, which is demonstrated by the nodal points indicated on the vertical axis x_0'/l. This distribution of locations of nodal points characterizes the microstructure before the impact. However, when the support is shifted, a portion of the nodal points of the displacement modes Y_i are shifted together, though this is not characteristic for rotational modes. This phenomenon is related to the pin-joint support of the microstructure. Simulated curves presented in Figure 6 enable explanation of the cause of changes in the dynamic characteristics of the considered vibro-impact microsystem in the case when the support is located at point x_0/l = 0.87: the 2nd nodal point of the 3rd vibration mode of the supported microstructure coincides with the same point for the case of the unsupported one (x_0/l = 0.87). This implies that in the process of impact vibrations this point does not change its position either before or during the impact. This mode is amplified when the force of impact is applied to this nodal point. Consequently, the amplitude of the 3rd mode increases, resulting in more intensive energy dissipation in the material of the microstructure, since it is considered [24] that the energy dissipated by a structure vibrating in a higher mode exceeds the energy dissipated by the structure vibrating in its fundamental mode as many times as the ratio of natural frequencies of the modes. Thus, the energy dissipated in the microstructure that vibrates in the fundamental mode is nearly $\omega_3/\omega_1 \approx 17$ times less than in the case of vibrations in the 3rd mode.
It is evident that intensification of the amplitude of the 3rd mode by locating the support at its nodal point does not cancel the first two modes. The fact that nodal points y_31 and φ_21 coincide in the case when the support is located at point x_0/l = 0.87 suggests the possibility of amplification of the 2nd mode as well. However, the advantages achieved in the considered case are first of all related to the intensification of the amplitude of the 3rd vibration mode (during the vibro-impact process cantilever vibrations in a wide frequency range are excited). The advantages achieved when the support is positioned at point x_0/l = 0.78 are related to the intensification of the 2nd vibration mode amplitude because this is the point in which the nodal points of the 2nd vibration mode of the supported and unsupported microstructure are located (x_0/l = 0.78). As Figure 6 indicates, the trajectories of the nodal points of the 2nd displacement mode intersect at this support position. The presented explanation is also confirmed by the dependences of the maximal amplitude points of separate vibration modes on the position of the support (Figure 7). The relationship of amplitudes of displacement modes Y_ij with respect to support locations (Figure 7a) demonstrates that when the support is located at point x_0/l = 0.87, the amplitude of the 3rd displacement mode Y_33 is maximal, whereas other amplitudes do not reach their maximal values at this point. Positioning of the support at the point of the maximum amplitude of the 3rd vibration mode (x_0/l = 0.67) amplifies the displacement amplitude Y_32 that coincides with the said point of maximum amplitude. Amplitudes Y_30 and Y_31 are increased as well, whereas amplitude Y_33 is reduced. When the support is positioned at the nodal point of the 2nd displacement mode, the displacement amplitude Y_22 increases, whereas other amplitudes of the 2nd mode decrease. Similarly, the amplitudes of rotational vibration modes Φ_ij are intensified as well (Figure 7b). Due to the impact of the cantilever on the support located at one of the particular points of vibration modes, the associated amplitudes increase even further, thereby amplifying separate vibration modes.
After the performed analysis of the behavior of the nodal points and the points of maximum amplitude with respect to the support location, it is important to investigate the dependence of the frequencies of separate vibration modes on the position of the support. Figure 8 illustrates simulated dependences of the ratio between the circular natural frequencies of the supported microstructure $\omega_i$ and those of the unsupported one $\omega_{i,in}$. It may be observed that the 1st natural frequency of the supported microstructure reaches its maximum value when the support is located at point x_0/l = 0.78, whereas the 2nd and the 3rd natural frequencies reach their maximum values when the support is located in other positions.
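The trend behind Figure 8 can be reproduced with a textbook beam eigenvalue analysis. The following sketch, under assumed geometry and material values (silicon-like, chosen for illustration only), assembles Euler-Bernoulli beam elements, adds a stiff translational spring at a chosen support node (approximating the pin-joint support) and compares natural frequencies of the supported and unsupported cantilever.

```python
# Sketch: FE eigenanalysis of a cantilever with a spring support at x0/l.
import numpy as np
from scipy.linalg import eigh

E, I, rho, A, L, n = 170e9, 1e-24, 2330.0, 1e-12, 100e-6, 50   # assumed values
le = L / n
ke = E * I / le**3 * np.array([[12, 6*le, -12, 6*le],
                               [6*le, 4*le**2, -6*le, 2*le**2],
                               [-12, -6*le, 12, -6*le],
                               [6*le, 2*le**2, -6*le, 4*le**2]])
me = rho * A * le / 420 * np.array([[156, 22*le, 54, -13*le],
                                    [22*le, 4*le**2, 13*le, -3*le**2],
                                    [54, 13*le, 156, -22*le],
                                    [-13*le, -3*le**2, -22*le, 4*le**2]])

def frequencies(support_node=None, Ks=1e4):
    ndof = 2 * (n + 1)
    K, M = np.zeros((ndof, ndof)), np.zeros((ndof, ndof))
    for e in range(n):                       # assemble element matrices
        sl = slice(2 * e, 2 * e + 4)
        K[sl, sl] += ke
        M[sl, sl] += me
    if support_node is not None:
        K[2 * support_node, 2 * support_node] += Ks   # translational spring
    K, M = K[2:, 2:], M[2:, 2:]              # clamp the anchor node
    w2 = eigh(K, M, eigvals_only=True)
    return np.sqrt(w2[:3])                   # first three circular frequencies

w_free = frequencies()
for node in (int(0.67 * n), int(0.78 * n), int(0.87 * n)):
    print(f"x0/l = {node/n:.2f}: omega_i / omega_i,in =",
          np.round(frequencies(node) / w_free, 3))
```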
Therefore, in order to ensure maximum vibrational stability of a contact-type MEMS device containing a supported cantilever microstructure, the support must be positioned at point x_0/l = 0.78. In this case the resonance frequency of the microsystem is maximum and, additionally, it becomes possible to amplify the amplitudes of the 2nd mode of natural vibrations and to dissipate a significant portion of kinematically-transferred energy in the material of the microstructure. Furthermore, when the support is located at point x_0/l = 0.78, the difference between the 1st and the 2nd natural frequencies of the supported microstructure is maximum, and by selecting the stiffness of the support located at the given point, the 1st natural frequency may be brought closer to the 2nd natural frequency, thereby increasing vibrational stability under external kinematical excitation, which may be very important when the microdevice is located on a moving object. Common contact-type MEMS devices incorporate gaps between compliant and fixed microstructures. However, feasible MEMS designs may also be based on usage of prestress of contacting links. Therefore it is crucial to select the prestress in such a way that minimal rebound amplitudes are achieved, resulting in reduced energy consumption during device control. Figure 9 presents simulated maximum rebound amplitudes $z_{\max} = y_{\max}/l$ as a function of prestress ∆/l, when the support is located at the free end of the cantilever microstructure. The diagonal line indicates the position of the support when it is vertically moved from the boundary position to the position of maximum prestress. The dashed lines at zero level represent the equilibrium position of the microstructure (vertical) and zero prestress (horizontal).
As the simulation results in Figure 9 demonstrate, minimum rebound amplitudes with respect to the equilibrium position are characteristic in the case of small prestress magnitudes (point B, when ∆/l = 0.01). By drawing a perpendicular from point B to the diagonal line, minimum amplitudes of the microstructure rebound are determined. Thus, in order to obtain the smallest bouncing that ensures minimum power consumption, the prestress should be selected in accordance to point B.
In addition to the amplitude-frequency characteristics of free impact vibrations, it is essential to determine the velocities and the forces induced during the impact. Figure 10 demonstrates the dependence of the pre-impact velocity (continuous lines) and the original contact pressure force P (dashed line) on the position of the rigid support at zero prestress during the first three impacts (I, II, III) of the microstructure on the support. When the support is located at the particular points of the 3rd flexural mode of the cantilever microstructure, a decrease in the velocity and the original contact pressure force is observed, which is related to the increase in the dissipated energy in the material. Simulation results (Figure 11) also reveal that during the microstructure impact on the support positioned at point x_0/l = 0.87 the contact pressure force is lower in the first stage of impact than in the second one, as compared with the opposite characteristics of the pressure force when the support is located at x_0/l = 1.
Conclusions
In this paper we have presented a 2-D finite element model of cantilever microstructure impacting against viscoelastic support thereby representing a general case of contact-type MEMS devices. The model was developed within FORTRAN environment. Impact is modeled by means of contact-element approach that uses Kelvin-Voigt rheological element taking into account both contact stiffness and damping. Values of these parameters were selected empirically to match experimentally-obtained vibro-impact trajectories. Results of numerical analysis of characteristic vibro-impact process-free impact vibrations-were reported by considering three stages of the studied process: pre-impact, impact and post-impact. Obtained numerical results are provided in a dimensionless form and therefore are applicable across all scales ranging from macro to nano.
Numerical analysis is centered around the consideration of the first three flexural modes of the cantilever microstructure since they have a major effect on dynamic characteristics of the vibro-impact process. Investigation of the influence of support position (along the horizontal axis of the microstructure) on maximum post-impact rebound amplitudes indicates that the smallest values are obtained when the support is located in specific points coinciding with the nodal points of the 2nd and the 3rd flexural vibration modes (x_0/l = 0.78 and 0.87, respectively) as well as with the amplitude peak of the 3rd mode (x_0/l = 0.67). Simulations reveal that support (contact point) positioning in these so-called particular points of vibration modes results in reduction of the transient vibro-impact process, thereby enabling an increase of MEMS device operational speed as well as enhancement of its reliability by diminishing detrimental consequences of this process. In-depth numerical analysis was conducted in order to reveal the physical nature of the aforementioned findings. For this purpose vibration modes of the microstructure during the impact on the support were determined. It is known that the position of the support determines the position of nodal points of the mode: when the support is shifted, a portion of the nodal points of the flexural modes are shifted together. However, it was revealed that in the process of impact vibrations the aforementioned particular points do not change their position either before or during the impact. This implies that the 2nd and 3rd modes are amplified when the force of impact is applied to these points. The effect is particularly pronounced in the case of the 2nd nodal point of the 3rd flexural mode (x_0/l = 0.87). Consequently, the amplitude of the 3rd mode increases, resulting in more intensive energy dissipation in the material of the microstructure (the dissipated energy is approximately $\omega_3/\omega_1 \approx 17$ times larger than in the case of the microstructure vibrating in its fundamental mode). The increase of dissipated energy in the material at this particular point is also confirmed by the observed reduction of the induced velocity and contact pressure force during impact.
Numerical study of the influence of support position on the natural frequencies of separate vibration modes indicates that maximization of vibrational stability of a contact-type MEMS device containing a supported microstructure is achieved by placing the support at x_0/l = 0.78, due to maximization of the 1st natural frequency of the supported microstructure. By selecting the stiffness of the support located at the given point, the 1st natural frequency may be brought closer to the 2nd natural frequency, thereby increasing the vibrational stability.
Obtained results of numerical analysis reveal huge potential of advantageous usage of higher vibration modes with their particular points for suppressing the harmful bouncing process in contact-type microdevices, resulting in improved reliability and performance. Therefore, further research efforts are necessary in this field in order to identify different approaches for control of impact-related processes, thereby enabling designers to develop innovative MEMS sensors and actuators that operate in contact mode.
"Engineering",
"Materials Science",
"Physics"
] |
Extending Radio Broadcasting Semantics through Adaptive Audio Segmentation Automations
The present paper focuses on adaptive audio detection, segmentation and classification techniques in audio broadcasting content, dedicated mainly to voice data. The suggested framework addresses a real case scenario encountered in media services and especially radio streams, aiming to fulfill diverse (semi-)automated indexing/annotation and management necessities. In this context, aggregated radio content is collected, featuring small input datasets, which are utilized for adaptive classification experiments, without searching, at this point, for a generic pattern recognition solution. Hierarchical and hybrid taxonomies are proposed, firstly to discriminate voice data in radio streams and thereafter to detect single speaker voices, and when this is the case, the experiments proceed into a final layer of gender classification. It is worth mentioning that stand-alone and combined supervised and clustering techniques are tested along with multivariate window tuning, towards the extraction of meaningful results based on overall and partial performance rates. Furthermore, the current work, via data augmentation mechanisms, contributes to the formulation of a dynamic Generic Audio Classification Repository to be subjected, in the future, to adaptive multilabel experimentation with more sophisticated techniques, such as deep architectures.
Introduction
The remarkable progress of web technologies and multimedia services has promoted the quick, easy and massive interchange of multimodal digital content. The involvement of plural heterogenic resources and customized user preferences require applicable content description and management mechanisms [1]. Moreover, content recognition, semantic interpretation and conceptualization attempts are currently being deployed, thus generating further difficulties and challenges, especially for time-based media (TBM) [2]. From the media organization point of view, media assets management (MAM) automation and intelligent multimedia processing technologies are needed for proper content archiving with optimum exploitation of both human resources and infrastructures, thus facilitating content reuse scenarios and audiovisual production in general. The same applies to the individual producers, the freelancers that are involved in the media and the contributors to the so-called user-generated content (UGC). In fact, their needs in audiovisual content management and archiving services are even harder to meet [3], considering that they do not usually have at their disposal professional MAM software equipped with radio and audiovisual broadcasting automation utilities. Considering the media consumer site, content classification, summarization and highlighting are pursued for multimedia content indexing, searching and retrieval automation. New trends regarding content recognition refer to topic classification, story understanding and/or enhanced semantic interaction, thus requiring adaptive audiovisual feature extraction and selection engines along with machine learning methods. These services rely on the utilization of extended multivariate databases, usually demanding applicable content annotation and/or semantic tagging. However, there are issues regarding the inhomogeneity of labeling meta-data, while in some cases, ground-truth training pairs are difficult to obtain (or are even completely unavailable). Hence, combinations of supervised, semi-supervised and unsupervised data mining algorithms are utilized to serve the specific necessities of various real-world multimedia semantics [1,[4][5][6][7][8][9].
Sound recognition plays an important role in most of the encountered audio and audiovisual pattern analysis cases, where related content is massively produced and uploaded (i.e., digital audio broadcasting, podcasts and web radio, but also video on demand (VoD), web-TV and multimodal UGC sharing in general). Specifically, there are various pattern recognition and semantic analysis tasks in the audio domain, including speech-music segmentation [8], genre recognition [10], speaker verification and voice diarization [11], speech enhancement [12,13], sound event detection [14], phoneme and speech recognition [15][16][17], as well as topic/story classification [18][19][20], sentiment analysis and opinion extraction [4,5,21], multiclass audio discrimination [22], environmental sound classification [23] and biomedical audio processing [24]. Audio broadcast is generally considered to be one of the most demanding recognition cases, where a large diversity of content types with many detection difficulties are implicated [1]. In addition, audio broadcasted content can be easily accessed, while new productions are massively and continuously created and uploaded/distributed. Hence, smart multi-purpose audio semantic and associated future Web 3.0 services can be built and progressed upon such audio broadcasting scenarios.
The current paper focuses on the investigation of various audio pattern analysis approaches in broadcast audio content. Following the results of previous research [1,25], stress test evaluation procedures and assessment of real-word scenarios are conducted, investigating the impact of the involved feature engines, the windowing configuration and the formulated classification schemes in both supervised and unsupervised strategies. The main target is to highlight the most effective parameterization at each stage of the entire modeling, aiming at assembling hybrid smart systems featuring optimum behavior without excessive computational load and resources demand (like in deep neural architectures).
The rest of the paper is organized as follows. The subsequent section addresses the problem definition and background work, with the corresponding particularities of the current experimental approach in radio productions. The literature state of the art follows, presenting previous research related to the topic under investigation. The implementation section describes the configuration and modeling aspects, including pre-processing actions, definition of classification taxonomies, ground truth data acquisition and feature engine formulation. Thereafter, experimental results of various methods and classification schemes are analyzed with the use of appropriate performance metrics. Finally, validation and optimization aspects regarding the whole implementation are addressed, followed by the discussion and conclusion section.
Background Work and Problem Definition
The current work investigates efficient and easy-to-implement adaptive strategies for voice detection, segmentation and classification in audio broadcast programs. Such audio signals usually comprise multiple segments-events, implicating various patterns, such as speakers' voices, phone correspondences, recorded reports, commercial jingles and other sound effects. Thus, efficient treatment and management of the broadcasted content involve demanding semantic analysis tasks. There are many issues that deteriorate the efficiency of audio recognition, requiring special attention. A common difficulty that must be faced in typical radio programs relies on the temporal overlapping of events and patterns, where music usually coexists with voice components. In addition, background noise and/or reverb contamination (mostly in non-studio recordings) deteriorate the recognition accuracy, while fade in/out operations and similar creative (/mixing) effects further complicate the speech detection task. Other unwanted matters include various recording and preprocessing artifacts, speech rate variability, pronunciation effects and subjective speech degradation issues in general.
Motivated by the results of a previous research on program-adaptive pattern analysis for Voice/Music/Phone [1] and Language Discrimination taxonomies [6], the presented methodology functions as an add-on module towards the formulation of a dynamic Generic Audio Classification Repository. Hence, following already adopted hierarchical classification strategies, new schemes were adapted based on clustering techniques, but also their combination with supervised training methods. In this context, semi-supervised hierarchical and hybrid pattern recognition systems are proposed for light-weighted speech/nonspeech segmentation, noise detection and further discrimination of male/female voices, independently of the involved speakers. Several experiments were conducted for the determination and validation of adaptive audio feature engines at every classification level of the involved hierarchies. Another issue that is addressed in the current work is the investigation of the impact of the temporal segmentation accuracy in the overall classification performance. Several stress tests were conducted in this direction, using different window lengths and segmentation resolution, offering windowing efficiency insights.
Semantic analysis procedures in radio content mainly involve voice detection, speech recognition and speaker identification tasks. Machine learning approaches based on clustering techniques that determine speech/non-speech frames were implemented for voice activity detection via Gaussian mixture models, Laplacian similarity matrices, expectation maximization algorithms, hidden Markov chains and artificial neural networks [26][27][28][29][30]. A more specific and interesting audio pattern that can be detected in audio signals, i.e., in broadcast programs, refers to phone line voices, due to the contained particular spectral audio properties [1,25,26].
It must be noted that an extended study was carried out in [1] for content analysis and description purposes of broadcast radio programs, aiming at the formulation of adaptive classification taxonomies (speech/non-speech and voice/music discrimination, speaker verification, noise detection) with the utilization of various direct, hierarchical and combined hybrid scheme implementations. In this direction, efficient annotation and segmentation processes were applied to the radio signals, formulating the ground truth database for the subsequent corresponding semantic analysis. During supervised classification, based on the developed taxonomies, several algorithms were employed from the statistical domain (i.e., linear, logistic regressions), decision trees (i.e., J48 tree models), support vector machine techniques (i.e., SMO) and artificial neural network modeling. The comparison between the respective classification performances indicated that neural network implementations provided the highest discrimination efficiency in almost all cases/schemes. Moreover, a thorough feature evaluation was conducted in [25] to investigate the saliency of an initial augmented extracted audio feature set. Ranking algorithms were employed, aiming at the detection of the most efficient feature subsets for classification purposes, based on the above speech/non-speech segmentation and speaker discrimination taxonomies. In this context, the current paper aims to extend the previously conducted work, trying to integrate unsupervised classification potentials via clustering techniques in the content of radio programs. Indicative comparisons of the previously and currently employed classification methods are presented in the following sections. Figure 1 presents the proposed methodology for the program-adaptive classification problem. In real-world cases, the sound source could consist of a short-length broadcast signal (for example of 10 min duration), which is thereafter implicated in semantic analysis tasks, based on pattern recognition operations, aiming to formulate efficient/automated content description/labeling mechanisms for archiving purposes. As mentioned above, the conducted work/experiments investigate the feasibility of the proposed architecture via supervised classification and clustering strategies, mainly in voice content. The audio signal that triggers the initiation of the process in Figure 1 could derive from radio streams, either traditionally broadcasted or hosted on web radio platforms. Taking into consideration multiple speakers' voices coexisting in different radio shows with differentiated characteristics/structure, it is anticipated that the initially formed Ground Truth Repository could not function efficiently towards multilayer classification, due to reduced data acquisition. On this basis, the current work proposes an initial experimentation with a small input dataset of a specific radio stream in order to examine the potential voice discrimination rates. Thereafter, the Ground Truth Repository will be gradually/iteratively augmented with other instances of the same radio program (therefore retaining the same content characteristics/structure), reinforcing the confidence in the classification results. In this context, each group of records of the same radio program functions as a sub-(ground truth) dataset in the Generic Repository, justifying the adaptive character of the proposed framework (dotted lines in Figure 1).
The same operation is followed for other radio shows along with their respective diversified instances, in an iterative way, leading to effective data augmentation of the Generic Ground Truth Repository, which subsequently can be utilized for experimentation with more sophisticated deep learning strategies. As anticipated, the most crucial (and demanding) step towards the feasibility of the proposed architecture has to do with the classification effectiveness in the initial reduced-duration radio stream. The aforementioned step constitutes the main research objective of the current work, namely, to investigate whether traditional light-weighted machine learning methods could support efficient discrimination rates before proceeding into more complex methods with increased computational load on augmented data. Analyzing Figure 1, the block diagram of the suggested framework is initiated with the presence of the aforementioned reduced-duration radio signal. Thereafter, the content is subjected to multiple looping operations for window selection/tuning and feature extraction. At this point, the experimentation could begin with unsupervised/clustering methods without any data annotation, based on hierarchical taxonomies for voice discrimination (the classification schemes will be discussed in the next section). The labeling procedure based on specific radio program adaptation leads to confident supervised machine learning modeling in the same taxonomies. Both classification strategies are tested separately and combined in multilayer/hierarchical discrimination schemes, given the investigative character of the presented work. It must be noted that the dataflow retains an iterative character for the determination of optimized windowing in terms of classification rates of stand-alone and combined machine learning topologies in every layer of the hierarchical framework. The multivariate experimentation in window lengths, feature engine and classification algorithms, along with meaningful results extraction, could thereafter lead to the formulation of pretrained models in the Generic Repository before testing in future implementations (i.e., deep architectures).
Data Collection-Content Preprocessing
For the demands of the model under development, audio content from different broadcast shows is combined and transcoded into the common uncompressed PCM WAV format (16 bit, 44,100 Hz). During the conducted experiments, the stereo information is not taken into consideration, since the differentiations in channel properties have been thoroughly studied in previous work [26]. Moreover, the selected audio content involves the typical audio patterns in radio productions, namely, speaker voices, different kinds of noise and music; phone conversations are also included.
Thereafter, the synthesized audio file is segmented into smaller audio frames. In order to quantitatively investigate the impact of the time window duration, especially on the unsupervised classification performance, four different temporal lengths are employed: 1000 ms, 500 ms, 250 ms and 125 ms. Table 1 presents the formulated audio database, including the labels of the samples, of the 3.5 min (210 s) synthesized audio file. In Table 1, the phone conversation samples are notated with P, the voice signal with V and the residual non-speech segments with R. The voice signal includes different male and female voices, V_MV and V_FV, respectively, while V_GV-GV (G = M, F) represents speech overlapping between same-gender speakers and V_MV-FV denotes speech overlapping between different-gender speakers, which commonly occurs in radio productions. The residual signal includes the music content and the noisy interferences. In the music content (M), different music genres are selected, such as rock, lounge, hip-hop and classical music, with both male and female singers, which are usually heard in radio programs. Moreover, the audio content includes representative jingles of radio programs (J). Finally, the noise segments (N) refer to reverb, hiss effects, silence and other noisy interferences. In this way, the collected data contain all the typical audio patterns in radio productions.
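For concreteness, a short sketch of this segmentation step is given below; the file name is a placeholder and soundfile is an assumed I/O library, since the paper does not name its tooling.

```python
# Sketch: split a mono 44.1 kHz PCM signal into non-overlapping frames
# of 1000/500/250/125 ms, as in the windowing experiments above.
import numpy as np
import soundfile as sf   # assumed I/O library for PCM WAV

signal, sr = sf.read("radio_program.wav")   # placeholder file name
if signal.ndim > 1:
    signal = signal.mean(axis=1)            # stereo information is not used

for win_ms in (1000, 500, 250, 125):
    hop = int(sr * win_ms / 1000)
    n_frames = len(signal) // hop
    frames = signal[:n_frames * hop].reshape(n_frames, hop)
    print(f"{win_ms} ms windows -> {n_frames} frames of {hop} samples")
```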
Classification Taxonomies and Ground Truth Data
As Table 1 exhibits, there are many audio patterns that can be investigated/classified in the audio content of radio broadcasting. An initial experimental procedure was conducted in [1], implicating several direct, hierarchical and hybrid classification schemes with the utilization of supervised classification algorithms and the subsequent comparison of results. The current work attempts to extend the semantic analysis process for speech detection/discrimination. In this context, three classification schemes are employed in a hierarchical mode, in order to disintegrate the initial complex pattern recognition problem into more efficient layers. Figure 2 presents the classification schemes with the respective audio content labeling. The first layer includes the voice discrimination of speaker and phone conversations (VPR scheme). It must be noted that phone voice is considered as a distinct audio speech signal because of its specific audio and spectral properties, as [1,25] present. In this way, the voice and phone signals can be classified apart from music, jingles and other noise content (residual signal). The second layer includes a single speaker vs. multiple speakers scheme (SM scheme) that attempts to discriminate single speakers' voices from speech overlapping between them. Finally, the subsequent third layer presents the speaker gender diarization problem, aiming to classify male/female voices (MF scheme). It must be stated that the whole semantic analysis is conducted with both supervised and unsupervised classification algorithms, and a combination of them, but the procedure can also be served solely by the automatic clustering process between the layers/schemes. According to the previously mentioned discrimination schemes, the annotation procedure assigns the corresponding class to the respective segmented audio frames, with the notation of Table 1. In this context, a ground truth database is formulated only for supervised classification purposes, since unsupervised classification utilizes only the initial non-labeled audio samples for clustering detection. The annotation procedure is also essential in order to evaluate the discrimination rates of the unsupervised automated classification, comparing the clustering structures to the assigned classes of the ground truth formulated database.
Feature Engine
In the current paper, 90 different features were initially selected and extracted via bibliographic suggestions, empirical observations and trial and error experiments. Hence, we utilize standard spectral audio features that are frequently used in audio content classification, such as spectral centroid (SPc) and its MPEG-7 perceptual version (audio spectral centroid, ASC), audio spectral spread (ASS), spectral flatness measure (SFM), rolloff frequency (RF), bandwidth (BW), spectral irregularity and brightness, spectral flux (SF) and delta spectrum magnitude (DFi) [1,25].
Similarly, popular time-domain audio features are employed (i.e., low short time energy ratio-LSTER; crest factor-CF; logarithmic expressions of normalized recording level, average power and dynamic range-loudness, pav and DR, respectively), in combination with time and frequency-domain signal statistics (audio entropy, RMS energy, temporal and spectral skewness and kurtosis, etc.) [1,25]. In addition, the first thirteen mel frequency cepstral coefficients (MFCCs) are selected due to their increased discriminative power in speech and speaker recognition. Audio envelope thresholding and peak analysis is also performed for the estimation of additional features, such as global and efficient signal to noise ratio (GSNR/ESNR); envelope level that has been exceeded in 85% of the signal length (E85 [dB]); estimation of the total number envelope peaks (nPeaks), including the average and variance values of their magnitudes and their time distances (PKSavr, PKSvar, PKS-Distavr, PKS-Distvar, respectively); peak transition measure (PTM) and its normalized version (nPTM); and estimation of the number of significant samples (nss) exceeding the envelope threshold level of E85, providing also the average and variance of the length of the silent (insignificant) segments (meanL-SP, varL-SP) [1,25]. Finally, spectral bands' comparison features are extracted by means of FFT and 9-level DWT/UWT analyses. Hence, band energy ratios (BER) of low (LF, <250 Hz), medium (MF, 250 Hz-4 kHz) and high frequencies (HF, >4 kHz) to the overall spectral energy formed using the FFT sequences. Similarly, wavelet average power and crest factor sequences of all the k = 10 formed scales of the WT coefficients are estimated (WPav-k, WCF-k), also allowing the extraction of wavelet power centroid (WPc) and time variance (WPcv), the energy ratio of the lowest band to the total energy expressing the significance of the lowest band (WLBsign) and the energy concentration to the highest level wavelet band (WLBconc).
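As an illustration of how a subset of these descriptors can be computed, the following sketch uses librosa; the paper does not state its extraction toolchain, so this library choice and the file name are assumptions. Frame-level values are averaged per segmentation window.

```python
# Sketch: extract a handful of the named features (13 MFCCs, spectral
# centroid, rolloff frequency, spectral flatness, RMS energy) per window.
import numpy as np
import librosa

y, sr = librosa.load("radio_program.wav", sr=44100, mono=True)  # placeholder

def window_features(frame):
    mfcc = librosa.feature.mfcc(y=frame, sr=sr, n_mfcc=13).mean(axis=1)
    centroid = librosa.feature.spectral_centroid(y=frame, sr=sr).mean()
    rolloff = librosa.feature.spectral_rolloff(y=frame, sr=sr).mean()
    flatness = librosa.feature.spectral_flatness(y=frame).mean()
    rms = librosa.feature.rms(y=frame).mean()
    return np.concatenate([mfcc, [centroid, rolloff, flatness, rms]])

hop = sr   # 1000 ms segmentation windows
X = np.array([window_features(y[i:i + hop])
              for i in range(0, len(y) - hop + 1, hop)])
print("feature matrix:", X.shape)   # (n_windows, 17)
```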
The above initially extracted feature set was engaged and tested in previous experiments in [25], and consequently, useful remarks and conclusions referring to their efficiency emerge from comparisons in the current work. Figure 3 exhibits the main categories of the features.
Another aspect that must be taken into consideration refers to the evaluation of the extracted features, because each audio property contributes at a different level to the discrimination efficiency. Moreover, the exploitation of the whole feature set leads, as it is anticipated, to increased computational load and processing needs and therefore, smaller and efficient subsets are sought in order to resolve these issues. Several experiments have been carried out in [1,25] concerning the saliency and ranking of the feature set while employing supervised classification in different implementations and schemes. The computed cross-correlation matrices and principal component analysis revealed in [25] a feature vector with dimension/rank equal to 36 for supervised classification with artificial neural system implementations. For this reason, the subsequent experiments in the next section employ the salient feature vector that has been determined for supervised classification purposes.
Since unsupervised classification algorithms do not use predefined classes and only investigate clusters of data/values, the whole feature set cannot be evaluated strictly in terms of the implemented scheme (VPR, SM, MF). Consequently, the attributes' saliency is determined by their respective discrimination impact in the classification efficiency. Nevertheless, an initial indicative ranking of the audio properties (the first 30) is presented in Table 2 for each classification scheme, while utilizing the "InfoGainAttributeEval" algorithm on the audio feature values in the WEKA environment. This technique evaluates the importance of each attribute individually by estimating the information gain with respect to the class using entropy measures. Furthermore, in the next section, several subsets of the initially extracted features are tested on the grounds of their effectiveness in the clustering procedure while employing the unsupervised classification algorithm each time.
Classification Techniques and Performance Metrics
Several experiments have been conducted in [1] in order to compare supervised classification techniques (decisions trees, artificial neural systems, regressions, etc.) in terms of their overall and partial discrimination efficiencies in various implementations and schemes. One of the most balanced discrimination rates emerged from the employment of artificial neural training for the development of the supervised classifier. Consequently, in the current work, artificial neural systems (ANS) are solely utilized for supervised classification purposes. Several topologies were tested in order to achieve efficient training performance leading to network implementations of two sigmoid hidden layers and an output linear layer, while an approximate number of 20 nodes are engaged in the hidden layers. Furthermore, the k-fold validation method is utilized, dividing the initial audio samples set into k-subsets and thereafter using the (k − 1) subsets for training requirements and the remaining subset for validation purposes; the whole procedure is iteratively repeated k times. The k-fold validation technique is employed for the performance evaluation of the ANS classifiers and, moreover, favors the formulation of generalization classification rules. For the current experiments, we selected k = 10, ensuring that for each of the 10 iterations, 90% of the audio frames' feature values are engaged in the model training process and 10% for validation purposes.
The performance rates for each parameter combination (window length, k-fold iteration, etc.) are based on the extracted confusion matrix of the respective model. Specifically, the confusion matrix represents an array of values of the correctly classified instances and the misclassified ones. Table 3 presents an example of the output confusion matrix for the temporal length of 125 ms for the taxonomy voice (V), phone (P) and residual (R) according to the VPR scheme. As shown, the correctly classified samples are on the main diagonal of the array, while above and below are the erroneously classified samples. The overall pattern recognition performance PS of the ANS for each of the implemented schemes is evaluated by the % ratio of the number of the correctly classified samples Ncor to the total number of the input samples N. In the same way, the partial discrimination rate PS_Ci of a class Ci is defined as the % ratio of the correctly classified samples Ncci in the class Ci to the total number of samples Nci that class Ci includes. The above definitions are described in Equations (1) and (2):

$$PS = \frac{N_{cor}}{N} \times 100\% \qquad (1)$$

$$PS_{C_i} = \frac{N_{cc_i}}{N_{c_i}} \times 100\% \qquad (2)$$
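The following sketch reproduces this evaluation pipeline with scikit-learn; the network size (two logistic hidden layers of about 20 nodes) follows the description above, while the feature matrix and labels are random placeholders standing in for the real VPR data.

```python
# Sketch: two-hidden-layer sigmoid network with 10-fold cross-validation,
# scored with the overall and partial rates of Equations (1) and (2).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(480, 17))                   # placeholder features
labels = rng.choice(["V", "P", "R"], size=480)   # placeholder VPR labels

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(20, 20),
                                  activation="logistic", max_iter=2000))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
pred = cross_val_predict(clf, X, labels, cv=cv)

cm = confusion_matrix(labels, pred, labels=["V", "P", "R"])
PS = 100.0 * np.trace(cm) / cm.sum()             # Equation (1)
PS_ci = 100.0 * np.diag(cm) / cm.sum(axis=1)     # Equation (2), per class
print("overall PS = %.2f%%, partial PS =" % PS, np.round(PS_ci, 2))
```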
Applying Equations (1) and (2) to our example of Table 3, the numbers of correctly classified samples in each class are N_CCV = 2255 (for class V), N_CCP = 755 (for class P) and N_CCR = 1545 (for class R), while the total number of correctly classified instances in the model is Ncor = 2255 + 755 + 1545 = 4555. Furthermore, as Table 1 exhibits, the input dataset had N = 4800 samples in total, and for each class we have N_V = 2400, N_P = 800 and N_R = 1600. Consequently, the partial recognition rates for each class are PS_V = 2255/2400 × 100% ≈ 93.96%, PS_P = 755/800 × 100% ≈ 94.38% and PS_R = 1545/1600 × 100% ≈ 96.56%, while the overall performance is PS = 4555/4800 × 100% ≈ 94.90%. On the other hand, the clustering process in the current work is implemented through the k-means classification algorithm, which aims to detect the formulation of groups of data/feature values according to a similarity metric. The criterion that defines the integration of a sample into a cluster usually refers to a distance metric such as Euclidean, Manhattan, Chebyshev or min-max distance from the cluster center/average. The experiments carried out in the present work utilize both Euclidean and Manhattan distance in the k-means implementation for additional comparison purposes.
Since the clustering process only detects data groups, the classification performances cannot be directly evaluated with Equations (1) and (2). One of the main objectives of the current work is to investigate the feasibility of automatic unsupervised classification of audio data through clustering methods and to compare the results with the respective ones obtained by supervised ANS. In this way, we can compare the data cluster formations of k-means with the corresponding classes in ANS, solely in order to evaluate the clusters. Table 4 presents the example of the output cluster formation of the k-means algorithm for the same window length of 125 ms. The partial discrimination performance PU Gi of cluster Gi is defined as the % ratio of the number of samples of class Ci that have been classified to cluster Gi to the total number of samples of class Ci. This metric is essentially a % measurement of the resemblance between cluster and class. In addition, the overall discrimination performance of clustering PU is evaluated as the % ratio of the sum of the numbers of samples of each class Ci that have been correctly grouped in cluster Gi to the total number of input samples N. The above metrics are described in Equations (3) and (4). The classification results of the employed supervised/unsupervised techniques in the next section are evaluated based on Equations (1)-(4).
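In symbols, again reconstructed from the verbal definitions above:

P_{U_{G_i}} = \frac{N_{C_i \to G_i}}{N_{C_i}} \times 100\% \qquad (3)

P_U = \frac{\sum_i N_{C_i \to G_i}}{N} \times 100\% \qquad (4)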
Applying Equations (3) and (4) to our example of Table 4, the number of clusters is nclusters = 3 and the distribution of class samples in the respective clusters is N V→G1 = 2240 (for class V in Group 1), N P→G2 = 800 (for class P in Group 2) and N R→G3 = 2240 (for class R in Group 3). Again, as Table 1 exhibits, the input dataset had N = 4800 samples in total and, for each class, we have N V = 2400, N P = 800 and N R = 1600. Consequently, the partial recognition rates for each class are:
Performance Results on Combined Taxonomies
The supervised ANS models and the clustering k-means algorithm are implemented independently in the first classification layer of the VPR scheme. Furthermore, the data mining techniques are optionally combined in the subsequent layers, in order to evaluate either a strictly supervised or unsupervised character of classification or a hybrid one, while moving down the classification schemes/layers. Figure 4 exhibits the combinations of classification methods. It must be noted that the clustering "path" leads to a more automated overall semantic analysis process, since it avoids the ground-truth databases that ANS classifiers demand. In order to follow the successive implementations of the ANS and k-means algorithms, each path in Figure 4 is represented with the initials S (for supervised classification) and U (for unsupervised clustering) for the three layers, i.e., the X-X-X notation where X = S or X = U. For example, the notation S-S-U stands for the combined classification with ANS modeling in the VPR and SM schemes and k-means clustering in the MF scheme.

Table 5 presents the performance rates of ANS and k-means for the first (VPR) classification scheme for several temporal segmentation windows. The overall and partial discrimination rates for supervised classification are quite high for all of the V, P and R classes, even reaching values of 100%. A useful remark concerns the slight decline in performance as the window duration decreases. The unsupervised k-means algorithm also presents high performance rates, i.e., for 1000 ms windowing, 94.76% for overall discrimination, and 96.67%, 100.00% and 93.08% for the cluster formation corresponding to classes V, P and R, respectively. The phone signal class reaches 100% discrimination performance for both algorithms, confirming the initial assumptions about its more specific temporal and spectral audio properties. Moreover, the implementations with Manhattan distance usually lead to slightly increased performance values; on the other hand, shorter temporal windows considerably deteriorate the clustering process, with discrimination values of 71.55% and 63.51% for the 250 ms and 125 ms framing windows, respectively. The 1000 ms segmentation window leads to the highest discrimination rates for both supervised/unsupervised techniques, and the impact of the temporal differentiations is quite obvious, especially in the clustering process.

In order to proceed to the next classification level/layer, the most efficient results, those of the 1000 ms segmentation window, are carried over from the VPR scheme; they provide 100% voice signal discrimination in ANS and 96.67% clustering in k-means. Thereafter, the classification techniques are employed again for the SM scheme, in order to discriminate a single speaker from multiple speakers in the voice signal detected by the VPR scheme. Table 6 exhibits the discrimination rates for the SM scheme. The selection of the temporal window remains crucial for the ANS and k-means implementations in the SM classification scheme as well: the 1000 ms framing leads to more efficient overall and partial discrimination rates for all of the S-S, S-U, U-S and U-U combinations. Moreover, the Manhattan distance metric for k-means results in better clustering performances than the Euclidean distance. Finally, the most useful remark refers to the 100% discrimination and clustering performance in the combined algorithms' implementation for the multiple-speakers class of the voice signal.
Moreover, the U-S and S-S sequences lead to more efficient single-speaker discrimination (97.5%); consequently, the ANS classification is vital in the second layer of the SM scheme. This allows the semantic analysis to proceed to the third hierarchical scheme, MF, for gender classification of the single-speaker voice. Table 7 exhibits the classification rates for the third layer, the MF scheme. As Table 7 shows, the male/female voice is discriminated with high performance for both supervised and unsupervised implementations, with overall discrimination values of about 90%. More thoroughly, the ANS modeling offers slightly better and more balanced classification results (92.50%, 95%, 90%) compared to the k-means clustering rates (90%, 100%, 80%). Furthermore, it is quite useful to note that, in the MF scheme, the ANS implementations yield better overall and partial performances for smaller segmentation windows, while the opposite holds for k-means clustering. Finally, the observation that the Manhattan distance metric is the better choice also holds for the MF classification scheme.
Summarizing the remarks of the overall semantic analysis for hierarchical classification in the three layers of Figure 4, it must be noted that several combinations can be sought for pure or hybrid classification techniques in order to reach efficient discrimination results. The integration of clustering methods in supervised implementations promotes automation and functionality in the whole semantic analysis process.
Validation and Optimization Issues
As mentioned in Section 2.5, the feature evaluation process is crucial for the overall processing load and time, especially for the supervised classification techniques. Even though the clustering k-means algorithm exhibited a reduced computational load in the previous experiments while exploiting the whole extracted feature set, in this section a feature evaluation process is conducted specifically for the clustering method, utilizing the ranking results of Table 2. The k-means algorithm is employed for the three classification schemes VPR, SM and MF, but a different number of audio properties is exploited in each implementation, based on the ranking of Table 2. Figures 5-7 present the overall and partial discrimination rates obtained while utilizing different numbers of audio features.
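The sweep over feature-set sizes can be sketched as follows in Python; the block reuses the kmeans() helper from above, the ranking array is a placeholder for the actual Table 2 ordering, and the majority-vote scoring helper is our own illustrative construction.

import numpy as np

def cluster_accuracy(labels, y, k=3):
    # Map each cluster to its majority class, then score the agreement.
    correct = 0
    for j in range(k):
        members = y[labels == j]
        if members.size:
            correct += np.bincount(members).max()
    return correct / len(y)

X = np.random.rand(4800, 36)            # hypothetical feature matrix
y = np.random.randint(0, 3, 4800)       # hypothetical class labels
rank = np.arange(X.shape[1])            # placeholder for the Table 2 ranking

results = {}
for n_feat in range(4, X.shape[1] + 1, 4):
    labels, _ = kmeans(X[:, rank[:n_feat]], k=3, metric="manhattan")
    results[n_feat] = cluster_accuracy(labels, y)
best_n = max(results, key=results.get)  # feature count with the top rate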
From the above diagrams, the optimum number of features is determined based on the best performance rates. Table 8 presents the number of features and the corresponding discrimination rates for the k-means clustering in the hierarchical classification schemes of Figure 2. Comparing the values of Table 8 with the corresponding performances of Tables 5-7, we can observe the positive impact that adapting the number of features has on clustering, in the context of the overall semantic analysis performance.

Another aspect that must be taken into consideration while employing the pattern recognition analysis is the selection of the segmentation window, which has a crucial impact on the discrimination performances. More specifically, in almost all of the current experiments, as well as previous ones in [1,25], the 1000 ms framing length leads to better classification results. One justification may derive from the fact that a 1000 ms frame contains more information, a determinant factor for classification purposes, especially for heterogeneous audio content (i.e., the VPR scheme). In order to further investigate and validate the selection of the 1000 ms framing length, beyond the comparisons of the previous section with various temporal windows (1000 ms, 500 ms, 250 ms, 125 ms), several experiments are conducted in this section with a sliding 1000 ms segmentation window. In the following analysis, the 1000 ms segmentation begins with 100 ms, 200 ms, 300 ms and 400 ms delays, resulting in successive information loss compared to the initial, accurately annotated frames. Moreover, in real conditions, the whole semantic analysis process may suffer from inaccurate annotation, and consequently the sliding 1000 ms windows may reveal the impact of the selected window, in the sense of a sensitivity analysis of the classification problem.

Table 9 presents the performance values for supervised and unsupervised classification on the VPR scheme with the sliding effect. As Table 9 exhibits, the results with segmentation delay differ from the respective ones without sliding. Nevertheless, the impact of the sliding effect is not very pronounced for 100 ms, 200 ms or 300 ms temporal delays; at 400 ms sliding, however, the classification performances decrease for both the ANS and k-means methods, which indicates a significant information loss. Consequently, the 1000 ms framing appears to be an efficient segmentation window, which also allows for annotation fault margins under real conditions. Furthermore, the same table exhibits the discrimination results for the k-means implementations with 32 features, which are higher than those of the corresponding implementations with the whole feature set. This remark reinforces the previous conclusions on the potential of feature adaptivity in the whole semantic analysis process.
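The sliding experiment amounts to re-framing the signal with a fixed starting offset before feature extraction. A minimal sketch, assuming a 1-D sample array and an illustrative 16 kHz sample rate:

import numpy as np

def frame_signal(x, fs, win_ms=1000, delay_ms=0):
    # Cut x into non-overlapping win_ms frames, starting delay_ms in.
    win = int(fs * win_ms / 1000)
    start = int(fs * delay_ms / 1000)
    n_frames = (len(x) - start) // win
    return np.stack([x[start + i * win: start + (i + 1) * win]
                     for i in range(n_frames)])

x = np.random.randn(16000 * 60)        # hypothetical one-minute signal
for delay in (0, 100, 200, 300, 400):  # the delays examined in Table 9
    frames = frame_signal(x, fs=16000, delay_ms=delay)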
Conclusions and Future Aspects
This paper presents a framework for quick and lightweight adaptive segmentation and classification of broadcast audio content. Several combinations of training techniques were employed in hybrid and hierarchical taxonomies, along with multivariate experiments on feature sets and temporal window tuning. The classification rates (especially for the supervised strategies) revealed that such a methodology enables effective content discrimination once a profile of parameters is chosen in terms of audio properties, window lengths and machine learning techniques. Turning to more specific conclusions drawn from the whole experimental setup, it must be mentioned that traditional machine learning strategies can be exploited when limited data exist, in order to support initial broadcast-data classification for effective radio content description and archiving purposes. The temporal length of the windowing process contributes decisively to the taxonomies' performance, favoring mainly medium lengths (around 1 s duration), while the sensitivity experiments with window sliding revealed that moderate sliding operations leave the general classification rates largely intact compared to stricter segmentation choices. Moreover, it must be highlighted that clustering methods can facilitate quick and effective semi-automated "blind" data discrimination without the data annotation step, especially for the initial voice classification layer. In all cases, the audio signals deriving from broadcast programs can be efficiently processed in hierarchical/hybrid classification implementations, since error propagation is handled better when breaking the content down in multilayer discrimination schemes than in direct ones.
The overarching purpose of this work is the formulation of a dynamic Generic Audio Classification Repository, fed by iterative, radio-program-adaptive classification processes/experimentation. In this context, the audio database is constantly evolving and augmenting, and more taxonomies could be incorporated, either in voice signals (language discrimination, emotion estimation, etc.) or in residual data (music genre classification, noise removal, etc.). In this direction, the semantic analysis process could facilitate more complex and resource-demanding machine learning strategies on rich data content, involving deep architectures (RNNs, 1d/2d CNNs, etc.). The main target of the presented work is to integrate, step by step, all possible classification schemes based on radio content structures, in order to support effective pretrained models and automated solutions independent of adaptive methodologies. | 8,292.2 | 2022-07-18T00:00:00.000 | [
"Computer Science"
] |
Iterative Receiver Based on SAGE Algorithm for Crosstalk Cancellation in Upstream Vectored VDSL
We propose the use of an iterative receiver based on the Space Alternating Generalized Expectation maximization (SAGE) algorithm for crosstalk cancellation in upstream vectored VDSL. In the absence of alien crosstalk, we show that when initialized with the frequency-domain equalizer (FEQ) output, the far-end crosstalk (FEXT) can be cancelled with no more real-time complexity than the existing linear receivers. In addition, the suggested approach does not require offline computation of the channel inverse and thus reduces the receiver complexity. In the presence of alien crosstalk, there is a significant gap between the rate performance of the linear receivers and the single-user bound (SUB). The proposed receiver is shown to successfully bridge this gap while requiring only a little extra complexity. Computer simulations are presented to validate the analysis and confirm the performance of the proposed receiver.
Introduction
"Very-high-speed Digital Subscriber Lines (VDSLs)" is a broadband access technology that uses twisted pairs (TPs) as a medium for high-speed data transmission [1]. It is one of the key broadband technologies for solving the "last mile" problem. With the exploitation of a high bandwidth (in tens of a megahertz), it can provide a bidirectional data rate up to 200 Mbps over short loop lengths [2]. Several TPs corresponding to a number of users are contained in a binder, which ultimately connect the central office (CO) or optical network unit (ONU) to the customer premise equipment (CPE). Because of the electromagnetic coupling among closely packed TPs, a "crosstalk" termed as farend crosstalk (FEXT) is introduced into the far-end signal at each TP. Such crosstalk that arises from subscribers enjoying similar type of services under VDSL systems is referred to as self-crosstalk and degrades the performance significantly, especially for shorter loop lengths [1]. Another major cause of rate degradation is crosstalk that originates from subscribers enjoying other services, and is referred to as alien crosstalk [3]. Such crosstalk exists in practical situations mainly due to the coexistence of broadband over power line (BPL) systems, radio frequency interference (RFI) ingress, and crosstalk from subscribers within the same binder enjoying other DSL services.
With the advent of vectored transmission in [4], which leverages the colocation of receiver modems at the CO, there has been a surge of research interest in receiver designs for crosstalk cancellation [5]. For the design of crosstalk cancelers, computational complexity is an important issue due to the presence of a large number of tones (typically 4096) as well as a number of users per vectored group. Recently, a near-optimal linear zero-forcing (ZF) receiver was proposed in [6] for self-crosstalk cancellation, which requires channel matrix inversion at each tone. Such matrix inversions are frequently required due to changes in user status or variations in crosstalk characteristics [7], and hence increase the computational overhead of the ZF receiver. To avoid this, a low-order truncated series approximation of the inverse channel matrix was considered for downstream transmissions in [8]. But it was shown that such an approach does not provide performance as good as the ZF receiver when the loop lengths are short (which is often the case in VDSL systems). The authors in [9] suggested a pilot-based least-mean-square (LMS) tracking algorithm, which requires a large training overhead for self-crosstalk cancellation.
All the above authors assume the absence of alien crosstalk. The presence of alien noise, originating from an external source, introduces a high spatial correlation among the noise at the receivers of different vectored users [10][11][12]. It was shown in [3] that the noise correlation between twisted pairs is greater than the correlation between tones. In theory, the receivers suggested for self-crosstalk can be applied after a whitening procedure. However, the prewhitening operation applied to the spatially correlated noise destroys the column-wise diagonally dominant (CWDD) characteristic of the channel (leading to poor performance of linear receivers for alien crosstalk cancellation). The authors in [10] suggested a nonlinear successive interference canceler to achieve higher data rates than the ZF receiver. However, this receiver requires a QR decomposition of the channel matrix (here Q denotes a unitary matrix and R an upper triangular matrix) and is computationally quite involved because of the search required for the QR ordering. A computationally expensive turbo receiver based on the minimum mean-square-error (MMSE) criterion for such crosstalk cancellation was suggested in [13]. An alien crosstalk canceler was considered in [14] by assuming perfect symbol estimation after self-crosstalk cancellation. In [3], a joint transmitter-receiver cooperation framework was shown to achieve capacity for alien crosstalk cancellation. However, the proposed algorithm depends on knowledge of the channel at the transmitter and on cooperation between the CPEs, which is not feasible in most situations. An algorithm to mitigate a single interferer from a home local area network using iterative soft cancellation was suggested in [15] for use in downstream DSL. That scheme does not consider vectoring and, therefore, may not be suitable for upstream transmissions.
In practice, there is a need for a crosstalk canceler at each tone which can support both conventional single-user and multiuser detection with limited complexity. Considering a bit loading based on the crosstalk-free signal-to-noise ratio (SNR), the performance after frequency-domain equalization (FEQ) may be degraded in terms of bit error rate (BER). However, it is important to appreciate that the estimated symbols are located within a small radius (of the order of the minimum distance between constellation points) of the true symbol value. When crosstalk is present on the victim line, it can be mitigated by iteratively utilizing the (post-FEQ) estimates of the disturbers to yield a relatively smaller variance of the residual crosstalk, ensuring the required BER level of 10^−7 as per the DSL standard. The specific CWDD property of DSL channels facilitates the achievement of crosstalk-free performance and hence motivates the deployment of an iterative canceler.
With the above motivation, we propose in this paper an iterative receiver based on a space-alternating generalized expectation maximization (SAGE) algorithm for cancellation of the crosstalk in VDSL systems. Basically, the SAGE algorithm is a variant of the expectation-maximization (EM) algorithm [16] that yields performance close to the maximum likelihood (ML) solution in situations where the ML solution is computationally intractable [17]. We initially consider a situation when alien crosstalk is not present. By employing an ordered SAGE (OSAGE), we derive simple bounds on the achievable signal-to-interference-noise ratio (SINR) to help facilitate the performance analysis. Based on the derived bounds, we show that our proposed canceler provides near crosstalk-free performance while eliminating the need for channel inversion. We also show that the proposed receiver cancels the self-crosstalk by initializing the receiver with the FEQ output and requires only a single iteration to come close to the optimal performance. We next consider the case when alien crosstalk is also present. By deriving a new bound on the CWDD parameter of equivalent channel after noise whitening, we show that the convergence conditions continue to be satisfied in most situations of practical interest and can be exploited to cancel the alien crosstalk.
In the presence of alien crosstalk of high power and/or low correlation, the SAGE algorithm is still shown to require only one iteration for approximating the ML solution, though a few additional iterations may be required under the conditions of low power and/or high correlation of alien noise across the TPs. This offers an attractive trade-off between data rate and complexity while canceling the self-crosstalk and mitigating alien crosstalk. Computer simulations are conducted to demonstrate the relevance of the proposed method in practical VDSL deployments.
The organization of this paper is as follows. A description of the system model is given in Section 2. Section 3 presents an iterative receiver based on the SAGE algorithm for self-crosstalk cancellation, while Section 4 describes an alien crosstalk cancellation algorithm with noise prewhitening. The performance of the proposed iterative receiver is assessed in Section 5, based on the derived bounds as well as computer simulations. Finally, conclusions are drawn in Section 6.
Notation. Vectors (matrices) are denoted by boldface lower (upper) case letters. A_ij and [A]_ij denote the ijth element of the matrix A, while x_i denotes the ith element of the vector x. All vectors are column vectors. The variance of a random variable X is denoted by σ²_x. The operators (·)*, (·)^T, and (·)† denote the conjugate, transpose, and conjugate transpose, respectively. The operators E{·}, tr(·), |·|, and det(·) represent expectation, trace, absolute value, and determinant, respectively. We use z and z̄ to differentiate a quantity z corresponding to self- and alien-crosstalk cancellation, respectively.
System Model
We consider a system model for upstream transmissions as shown in Figure 1. As is well known, DMT modulation based on Inverse Fast Fourier Transform (IFFT) is used in VDSL. The available frequency band is divided into a number of parallel subcarriers or tones, and IFFT effectively loads symbols onto the multiple tones. The DMT receivers (at each TP in the CO) ignore the cyclic prefix portion of the received signal and use the Fast Fourier Transform (FFT) for demodulation. Since an adaptive bit loading is used, the QAM constellation size varies according to the receive SNR.
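As a toy numpy illustration of the DMT chain just described (tone loading via the IFFT, cyclic prefix insertion, FFT demodulation), consider the following sketch; the sizes are illustrative rather than the VDSL standard values, and the example stays in complex baseband (a real DMT transmitter would additionally enforce Hermitian symmetry across the tones).

import numpy as np

n_tones, cp_len = 64, 16                  # illustrative, not VDSL values
qam = np.array([1+1j, 1-1j, -1+1j, -1-1j])
symbols = qam[np.random.randint(0, 4, n_tones)]   # 4-QAM symbols per tone

# Transmitter: IFFT loads the QAM symbols onto the tones, then prepends a CP.
time_block = np.fft.ifft(symbols)
tx = np.concatenate([time_block[-cp_len:], time_block])

# (ideal channel omitted) Receiver: drop the CP and demodulate with the FFT.
rx = tx[cp_len:]
recovered = np.fft.fft(rx)
assert np.allclose(recovered, symbols)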
We assume that all users are perfectly synchronized and that the channel impulse response is no longer than the cyclic prefix. We now consider a VDSL system with N vectored users as shown in Figure 1. At the receiver of the ith TP, the kth FFT sample Y_i,k is given by

Y_i,k = H_ii,k X_i,k + Σ_{j≠i} H_ij,k X_j,k + V_i,k, (1)

where X_i,k is the data symbol of the ith user and V_i,k is a component of the additive noise at the kth tone that includes the alien crosstalk and thermal noise. The coefficient H_ii,k arises from the attenuation of the kth tone along the TP, which is usually modeled by transmission line theory. The crosstalk coupling coefficient H_ij,k is the complex channel element at the kth tone from the jth interferer to the ith victim and is modeled as in [18]. The second term on the right-hand side of (1) represents the FEXT. As is well known, this crosstalk is the major factor limiting the performance of DSL systems. We consider signal-level coordination in such a way that the samples Y_i,k at the kth tone for all TPs are processed together (referred to as vectoring in [4]). Therefore, the received vector on the kth tone can be expressed as

Y_k = H_k X_k + V_k, (2)

where Y_k = [Y_1,k, ..., Y_N,k]^T, X_k = [X_1,k, ..., X_N,k]^T and V_k = [V_1,k, ..., V_N,k]^T represent the vectors of received samples, transmitted symbols, and noise samples, respectively. H_k is the channel matrix for the kth tone, whose diagonal elements correspond to the direct paths between the CPEs and the CO while the off-diagonal elements represent the crosstalk. The maximum ratio of a nondiagonal element to the corresponding diagonal element is defined by the parameter α_cwdd:

α_cwdd = max_{i, j≠i} |H_ji,k| / |H_ii,k|. (3)

Equation (3) implies that the magnitude of the crosstalk channel coefficient |H_ji,k| from the ith disturbing transmitter into the jth receiver is always weaker than the magnitude of the corresponding direct channel |H_ii,k|, which reflects the CWDD characteristic of the channel, typically observed in this context [4,6].
In what follows, we omit the index k for notational simplicity, since crosstalk cancellation is carried out tone-wise and the analysis is similar for all tones. Henceforth, we refer to Y_i,k, Y_k, X_i,k, X_k, H_ij,k, and so forth, by Y_i, Y, X_i, X, and H_ij, respectively. In the next section, we assume the vector of noise samples to be spatially white, while the case of spatially correlated noise (which arises due to the alien crosstalk) is discussed in Section 4.
SAGE Algorithm for Self-Crosstalk Cancellation
In this section, we investigate the performance of an iterative receiver based on the SAGE algorithm [16] for the cancellation of self-crosstalk in upstream VDSL. The SAGE algorithm with user ordering is considered in Section 3.1, while its special unitary subset case is dealt with in Section 3.2.
Iterative Receiver with Ordered SAGE.
The magnitude of the FEXT on any tone shows statistical variation from TP to TP because of variations in the characteristics of the twisted pairs, the nature and line lengths of the disturbers, and so forth. Consequently, some TPs are more strongly affected than others at a given frequency. By giving priority to these TPs, we order the users accordingly in the SAGE algorithm to increase the convergence rate, and refer to the result as the "Ordered SAGE (OSAGE)" algorithm. Here we design our iterative technique such that the crosstalk of an ordered subset of users (grouped according to their decreasing SINR) is cancelled sequentially. As such, each step of the OSAGE algorithm updates only one component of each ordered subset at a time, while keeping the estimates of the other components fixed at their previous values. Ordering the N users of the set S = {1, 2, 3, ..., N} into M subsets S_1, S_2, ..., S_M containing N_S1, N_S2, ..., N_SM users, respectively, the algorithm at each iteration is described as follows.
(2) Maximization step: the symbol of each user in the current subset is re-estimated after subtracting the latest interference estimates,

X̂_{i∈SL} = (1/H_ii) (Y_i − Σ_{j∈S, j≠i} H_ij X̂_{j∈S}), (4)

where X̂_{i∈SL} is the symbol update of the ith user of the Lth subset (L indexes the subsets) and X̂_{j∈S} is the latest available estimate of the jth disturber of the set S. Each iteration uses the prior iteration's estimates to generate new estimates of the interference and subtracts these recent estimates from the received signal to produce new estimates with lower interference levels.
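A compact Python sketch of one such pass follows; it assumes a channel matrix H, a received vector Y, an ordered list of subsets, and a hypothetical slicer() that maps soft estimates to the nearest constellation point (none of these names are from the original text).

import numpy as np

def osage_iteration(Y, H, subsets, X_hat, slicer):
    # One OSAGE pass: visit each subset in order, reusing the latest estimates.
    for subset in subsets:          # subsets ordered by decreasing SINR
        for i in subset:
            interference = H[i] @ X_hat - H[i, i] * X_hat[i]
            soft = (Y[i] - interference) / H[i, i]   # FEQ on the cleaned sample
            X_hat[i] = slicer(soft)                  # decision toward QAM grid
    return X_hat

# Initialization with the FEQ output, as described below:
# X_hat = slicer(Y / np.diag(H))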
In the rest of this section, we discuss initialization of OSAGE and its performance analysis (convergence behaviour).
Initialization and SINR after FEQ.
The initialization process in the considered OSAGE algorithm plays an important role in the receiver performance. The algorithm is initialized with the FEQ (a one-tap equalizer) because this has two advantages: firstly, it exploits knowledge of the approximate symbol estimates, and secondly, it makes easy use of the accurately known direct channel coefficients. We divide Y_i in (1) by H_ii to find the FEQ estimate X⁰_i,feq of the desired symbol X_i, as given in (5); there, ξ⁰_i,feq and V̄_i = V_i/H_ii represent the FEXT-plus-noise and the noise terms after FEQ, respectively. The total crosstalk noise on the ith user is the sum of the individual crosstalk contributions. It is noted that the X_j's are independent equiprobable symbols, typically taken from a QAM constellation. We assume that the central limit theorem holds for ξ⁰_i,feq [15], so that it can be modeled as a complex Gaussian random variable. This is reasonable since the FEXT component consists of a weighted sum of independent symbols. The SINR after FEQ, SINR^feq_i, is given in (6). We remark that its value is low when the number of disturbers is large, especially at higher frequencies. This is due to the fact that the crosstalk power depends on both the crosstalk channel gains and the signal strength of the disturbers. SINR^feq_i therefore needs to be maximized by employing FEXT cancellation techniques. In the ideal scenario, all the crosstalk is removed, giving the parameter SNR^awgn_i = σ²_x,i/σ²_v,i as the SNR with only additive white Gaussian noise (AWGN). Higher rate performance is of course indicated by the single-user bound (SUB), the capacity achieved when a single user is assumed to be transmitting and its signal can be detected by all the receivers, through either the direct channel or the coupling paths [6]. However, it has been established in [6] that the SUB is only marginally higher than the data rate obtained with SNR^awgn_i. Therefore, since the proposed OSAGE algorithm is essentially a crosstalk cancellation algorithm, it aims to achieve performance close to the crosstalk-free value, that is, SNR^awgn_i. To quantify the gain of our designed crosstalk canceler, we define in (7) the SNR gain as the ratio of the crosstalk-free SNR to the post-FEQ SINR.

For insight into the SNR gain, we provide computer simulations for 8 users of different line lengths (300 m to 1000 m) within a binder, at different tones (4 and 12 MHz), in Figure 2. It can be seen that the SNR gain per tone for most users is significantly high (about 21 and 19 dB at 4 and 12 MHz, respectively). To understand the impact on data rate, using a rule of thumb of 3 dB per bit for every tone, we can see that FEXT cancellation effects a substantial improvement in data rate for the typically large number of tones used in the upstream VDSL scenario.
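Collected in one place, the post-FEQ quantities used below read as follows (a reconstruction consistent with the verbal definitions above; the numbering follows the references in the surrounding text):

X^{0}_{i,\mathrm{feq}} = \frac{Y_i}{H_{ii}} = X_i + \sum_{j \neq i} \frac{H_{ij}}{H_{ii}} X_j + \bar{V}_i = X_i + \xi^{0}_{i,\mathrm{feq}} \qquad (5)

\mathrm{SINR}^{\mathrm{feq}}_i = \frac{\sigma^2_{x,i}}{\sum_{j \neq i} \lvert H_{ij}/H_{ii} \rvert^{2} \, \sigma^2_{x,j} + \sigma^2_{\bar{v},i}} \qquad (6)

\mathrm{SNR}^{\mathrm{gain}}_i = \frac{\mathrm{SNR}^{\mathrm{awgn}}_i}{\mathrm{SINR}^{\mathrm{feq}}_i} \qquad (7)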
Performance of OSAGE Algorithm.
We now consider the performance of OSAGE by obtaining an expression for the SINR after the qth iteration, for users in each of the M subsets. This is done by first examining the variance ψ¹_i of the estimation error ξ¹_i after the first iteration for the ith user, and then generalizing to the corresponding values ψ^q_i (the variance of ξ^q_i) after the qth iteration, for each of the subsets. To this end, we can use (5), assuming equal transmit signal power. Since H_ij/H_ii and H_jl/H_jj have small amplitudes (in view of the CWDD nature of the channel) and arbitrary phase values, E{ξ⁰_i,feq ξ⁰*_j,feq} for large N is small enough to be considered negligible.
As stated earlier, the OSAGE receiver performs the crosstalk cancellation on a victim user by initializing the iteration with the FEQ output. We use (4) to obtain the estimation error ξ¹_{i∈S1} of the ith user of the first subset (the superscript denotes the iteration step); it is given in (9). The summation term in (9) corresponds to the residual crosstalk after cancellation, while the second term represents the AWGN after frequency equalization. The residual crosstalk is assumed to be Gaussian distributed, which is reasonable by the central limit theorem when the number of users in the binder is large. With this assumption on ξ¹_{i∈S1}, the residual crosstalk power ψ¹_{i∈S1} of the ith user is expressed in (10). Substituting (6) for the jth disturber into (10) yields (11). Interchanging |H_ii|² and |H_jj|² in the denominator of (11), with σ²_{v,i} = σ²_{v,j} (we assume the thermal noise variance on each TP to be equal, but not the noise power after FEQ), we obtain (12). To simplify further, we define two channel parameters in (13), where α is the CWDD parameter corresponding to the longest coupling length of the binder, which can easily be estimated without the binder configuration. It is important to emphasize that the value of α is independent of the binder, provided the maximum reach of VDSL is fixed. By considering (6) and the FEXT on the longest TP (i.e., its maximum value) together with (13) in (12), we obtain the upper bound (14) on the residual crosstalk. Writing SNR^gain_i of (7) as SNR^awgn_i/SINR^feq_i, the crosstalk power after FEQ of (6) can be represented in terms of the SNR gain as in (15). On substituting (15) into (14), we obtain (16); consequently, a lower bound on the SINR for the ith user of the subset S1 (S = 0) is given in (17). Similarly, the residual crosstalk for the ith user of the second subset S2 is expressed in (18), and by applying the same approach as for the subset S1, the residual crosstalk for i ∈ S2 (S = 1) is upper bounded as in (19). Doing so recursively for the subsets, we can represent the residual crosstalk after the first iteration (q = 1) for S ≥ 1 as in (20), and the lower bound on the SINR for the ith user of the subset SL as in (21).
For a general iteration (q ≥ 2), the residual crosstalk of the ith user of the first subset is given in (22), where M denotes the total number of subsets. Similarly, the residual crosstalk of the ith user of the last (Mth) subset is expressed in (23). It can be seen from (22) and (23) that the users of the last subset enjoy the maximum benefit of convergence while those of the first subset the least. This, of course, is acceptable in practical scenarios, since the first subset is associated with users having good SINR. For a general subset (1 ≤ L ≤ M), the residual crosstalk is obtained in (24). Using (24), a lower bound on the SINR after the qth iteration (q ≥ 2) is obtained in (25). Combining (17), (21), and (25) generalizes the analysis of the considered iterative receiver to each user of every subset after any given iteration step. Some of the important implications and conclusions of this analysis are taken up in the following remarks.
Remark 1.
The bound on SINR serves as a performance predictor because our focus is on achieving a high data rate for a given quality of service (the BER is usually fixed at 10^−7). The data rate for a practical DSL system with an SNR gap Γ and K tones of spacing Δf is

R_i = Δf Σ_{k=1}^{K} log₂(1 + SINR_{i,k}/Γ). (26)

It follows that the performance of OSAGE can be assessed by the manner in which SINR^q_{i∈SL} approaches SNR^awgn_i at each tone. It is useful to consider this behaviour through a practical example. Using typical values of α ≈ 10^−2, SNR^gain_i ≈ 20 dB, and N = 25 in (17), the SNR loss (with respect to SNR^awgn_i) for a user of the first subset after the first iteration is (N − 1)α² SNR^gain_i ≈ 0.24 (approximately 0.93 dB), and it eventually approaches zero for the users of subsequent subsets. The resulting effect on the data rate, as computed from (26), is very small. Thus, our proposed algorithm effectively cancels the crosstalk with a single iteration, with an associated computational complexity of O(N²) per tone.
In contrast to the ML receiver, which requires a computational complexity of O(KC^N) (a large constellation size C is often used in VDSL), the complexity of the OSAGE algorithm after q iterations is O(qKN²). With a single iteration, the online complexity of the considered receiver is similar to that of the ZF receiver, which incurs a computational cost of O(KN²). Since the ZF receiver additionally requires computation of the inverse of the channel matrix, the proposed iterative receiver promises to be computationally efficient due to its ability to avoid this offline computation.
Remark 2.
As convergence is an important concern for any iterative technique, we now discuss the convergence condition for our proposed OSAGE-based iterative algorithm. For ease of presentation, we consider the case of a single subset (M = 1) in (22) and apply ψ^q_{i∈S1} < ψ^{q−1}_{i∈S1} to get (N − 1)α² < (SNR^gain_i − 1)/SNR^gain_i as a necessary and sufficient condition for the convergence of the OSAGE algorithm. In a crosstalk-limited DSL system (SNR^gain_i ≫ 1), the condition reduces to (N − 1)α² < 1. It is worth mentioning that this convergence criterion holds even for a large number of users, since α usually takes a small value (typically of the order of 10^−2). In Figure 3, we plot (N − 1)α² for different values of N and tones for various loop lengths. It can be seen that the convergence condition is readily satisfied.
Remark 3. The rapid convergence shown in Remark 2 can also be explained intuitively. Considering an initial bit loading (assuming an AWGN channel) in the presence of crosstalk, the estimated symbols after FEQ may be in error. However, the true symbols lie within a small radius, of the order of the minimum distance d_min between the constellation points. If the crosstalk due to the jth TP is removed using the FEQ estimates (by subtracting H_ij X_j,feq), the variance of the residual crosstalk is a small multiple of |H_ij|² d²_min. This is much smaller than the variance of the original crosstalk (|H_ij|² σ²_x,j), since d²_min is a small fraction of σ²_x,j due to the high bit loading (a bit loading of 13-14 bits per tone is not uncommon in VDSL). This, coupled with the fact that the crosstalk channel coefficients are small due to the CWDD property, accounts for the surprisingly fast convergence of the OSAGE algorithm.

Remark 4. Since the bounds in (17), (21), and (25) are derived by considering the maximum possible values of the CWDD parameter and the FEXT, they are tight in most situations. This can be verified from the fact that the CWDD parameter as well as the FEXT (and hence the SNR gain) is not very sensitive to the variation of line length (except at shorter line lengths, as shown in Figure 2). The tightness of the bound is obvious in the case of a typical VDSL equal-length-lines scenario, as both α and the FEXT remain almost the same for all users. Furthermore, in the near-far problem (which occurs when the binder contains loops with widely varying line lengths), equal FEXT for different users can be achieved by employing upstream power back-off (UPBO), under which the upstream transmitters vary their power spectral densities (PSDs) in accordance with the line lengths.
Remark 5. From a system design viewpoint, our derived bounds are of considerable importance. The main design parameters considered here are the SNR gain and q. The SNR gain can be changed at the designer's discretion, as per the requirement on user data rates, while the iteration parameter q indicates the complexity. The trade-off between data rate and complexity can be realized by varying these parameters. Thus, the bounds in (17), (21), and (25) are useful tools for the theoretical analysis of the performance of the considered canceler. Moreover, the bounds depend only on local parameters such as the binder size, the CWDD parameter, and SNR^awgn_i, and not on the binder configuration. This, together with independent single-user water-filling algorithms on each TP, greatly simplifies the adaptive bit loading for the SRA operation. Here, SRA stands for seamless rate adaptation, whereby the receiver monitors the SNR of the channel and, according to the channel conditions, sends a message to the transmitter to initiate a change in bit loading.
The USAGE Algorithm and Its Performance
From the discussion in the previous section, we observe that near crosstalk-free performance can be achieved even by considering only the first subset in the OSAGE algorithm. Motivated by this observation, we consider in this section a special case of OSAGE, termed the "Unitary-subset SAGE (USAGE)" algorithm, in which all users are grouped into a single subset. With only one subset, every user is updated at the same time to realize a simultaneous cancellation of the crosstalk terms. This offers the advantage of parallel implementation, with savings in computation time, at the price of some performance loss. For the performance analysis, the symbol estimate for the ith user at the qth iteration follows from (4) and is given in (27), where X̂^{q−1}_j is the symbol estimate of a disturber after the (q−1)th iteration. It should be noted that X̂⁰_j = X⁰_{j,feq} is the post-FEQ estimate of the jth crosstalker. Using (27), we obtain the power ψ^q_i of the residual crosstalk in (28). Since E{|X_j − X̂^{q−1}_j|²} is the residual crosstalk at the (q−1)th iteration, (28) can be put in the form of a recursive relation.
Assuming equal thermal noise at each TP, we can represent (31) in terms of σ²_{v,i} (the post-FEQ noise of the ith user) and α (the CWDD parameter of the binder); see (32). Using (30), the residual crosstalk of the ith user after the qth iteration is obtained in terms of the initial (post-FEQ) crosstalk by interchanging |H_ii|² and |H_mm|² and replacing FEXT_m with the FEXT on the longest wire (its maximum value), together with appropriate interchanges of the other diagonal terms, as in (34). By substituting ψ⁰_i = SNR^gain_i σ²_{v,i} from (15) into (34), and using the result along with (32), we obtain a lower bound on the SINR after the qth iteration, given in (35). Although the USAGE algorithm can achieve near crosstalk-free performance with a single iteration in most situations, some performance loss may occur for shorter line lengths; this loss is quantified in (36) and (37). At shorter loop lengths, the loss can be as much as 3 dB for a typical number of users. Improvement in performance can however be realized by increasing the number of iterations, since (N − 1)α² < 1 ensures convergence. This convergence condition (obtained by applying ψ^q_i < ψ^{q−1}_i to (34) and using SNR^gain_i ≫ 1) is the same as that obtained in Remark 2. Under the condition of convergence, the USAGE algorithm achieves a rate performance close to the crosstalk-free rate, since δSNR_dB → 0 as α → 0 and q → ∞, as reflected in (38). The lower bound in (35) is simple and can be used to obtain the data rate without explicit knowledge of the crosstalk coupling coefficients.
Iterative Receiver for Alien Crosstalk Cancellation
So far we have studied the cancellation of self-crosstalk assuming the presence of only white thermal noise. However, as discussed earlier, noise from other external sources (alien crosstalk) may significantly affect the functionality of vectored VDSL systems. In practical scenarios, such crosstalk may arise due to the use of TPs and power lines over the same spectrum for broadband access. Further, the TPs near the customer premises are generally unshielded, so that RFI from nearby radio transmitters couples into these wires and induces alien crosstalk. There is also crosstalk due to the presence of (usually unsynchronized) nonvectored DSL lines within the same binder that carries the vectored lines. This alien crosstalk manifests itself as spatially correlated (i.e., correlated across the TPs) additive noise at each tone at the receiver of the considered VDSL system, and hence degrades its performance even with effective cancellation of the self-crosstalk. The modeling of alien crosstalk is discussed in [3,10]. Alien crosstalk is amenable to cancellation in the vectoring framework precisely because of the spatial correlation between the noise samples across the TPs. In order to exploit this, we study the use of the iterative technique of the previous section for crosstalk cancellation, preceded by a prewhitening filter. Our aim is to develop an effective crosstalk canceler for VDSL systems that achieves a considerable performance improvement in the presence of alien crosstalk. The issue here is whether the prewhitening operation degrades the CWDD property of the effective channel, which is what enables the SAGE algorithms to converge quickly on the whitened signals. We therefore investigate analytically the behavior of the modified channel (the channel together with the noise whitening filter) in the next section, and then the receiver performance based on the modified channel.
CWDD Characteristic of the Equivalent Channel after Whitening
To proceed, we denote the magnitude and phase of the correlation coefficient between the noise samples on the various TPs by ρ and φ, respectively, and the alien noise power by σ²_a. We define a noise covariance matrix R = [r_ij], whose ijth element is given in (39), where j = √−1. We use a noise whitening filter at the receiver which multiplies (2) by R^{−1/2}, to obtain the whitened system Ȳ = R^{−1/2}Y = H̄X + V̄ in (40), where H̄ = R^{−1/2}H and V̄ = R^{−1/2}V, as in (41). Using the definition of the matrix inverse, we represent the elements of R^{−1/2} as in (42), where R^{1/2}_ji is the (N − 1) × (N − 1) submatrix obtained by removing the jth row and the ith column from the square root R^{1/2} of the covariance matrix. In (43), we define γ_r and γ_t as the maximum absolute values of the determinants of the principal and nonprincipal submatrices of R^{1/2}, respectively. It can be shown using (39) that γ_r > γ_t for the given covariance matrix R. By using (42) in (41), along with the results of (43), we obtain in (44) a bound on the absolute values of the elements of H̄, where G = |H_ii|/|det(R^{1/2})| and α is as defined in (13). We use (44) to find an upper bound on the CWDD parameter ᾱ of the equivalent channel, given in (45), where η = γ_t/γ_r. From (45), one can verify that ᾱ = α in the case of uncorrelated noise. Since 0 ≤ η < 1, it follows that ᾱ ≥ α for all values of α. It can, however, be shown that ᾱ < 1 as long as α + η(1 − α) < 1 holds, for α < 1 and 0 ≤ η < 1. Although this does not formally establish the CWDD property ᾱ ≪ 1, it is seen that for typical practical values of N, alien noise powers and correlation parameters, the required convergence conditions of OSAGE and USAGE are satisfied. This is confirmed in the simulation results presented in Section 5.
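A small numpy sketch of the prewhitening step, under an assumed exponential correlation model for the alien noise (the specific model below is illustrative and not necessarily the paper's exact (39)):

import numpy as np

N, rho, phi, sigma_a = 8, 0.9, 0.3, 1.0   # illustrative parameters

# Assumed covariance model: magnitude rho**|i-j|, linear phase phi*(i-j).
idx = np.arange(N)
diff = idx[:, None] - idx[None, :]
R = sigma_a**2 * rho**np.abs(diff) * np.exp(1j * phi * diff)

# R^(-1/2) via the eigendecomposition of the Hermitian covariance matrix.
w, U = np.linalg.eigh(R)
R_inv_sqrt = U @ np.diag(w**-0.5) @ U.conj().T

# Whitening check: the filtered noise has (approximately) identity covariance.
V = np.linalg.cholesky(R) @ (np.random.randn(N, 10000) +
                             1j * np.random.randn(N, 10000)) / np.sqrt(2)
V_white = R_inv_sqrt @ V
print(np.allclose(V_white @ V_white.conj().T / 10000, np.eye(N), atol=0.1))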
Iterative Cancellation of Alien Crosstalk.
Alien crosstalk cancellation after noise whitening follows an analysis similar to that for self-crosstalk. To highlight the differences, we provide the distinct parameters in terms of the characteristics of the alien noise. The CWDD characteristic of the equivalent channel has already been considered in the previous section. The noise variance after whitening is σ²_{v,i} = 1, while after FEQ it is given in (46). Making use of (44), we find an upper bound on the SNR after noise whitening (denoted SNR^white_i) in (47). We provide the bounds on |det(R)|, γ_r, and γ_t in (A.1) and (A.6), (A.2), and (A.10), respectively, with proofs given in the appendix. The FEXT on the ith TP after the whitening filter is represented in (48). Substituting σ²_{v,i} = 1 into (7), along with the result of (47), we can express the SNR gain of the ith user after noise whitening. The SINR bounds for the present case are then obtained by simply invoking the above terms in (17); doing so, we find the SINR of the OSAGE algorithm for the ith user of the first subset after the first iteration, given in (49). Similarly, using (48) along with the result of (32) in (35), a simple lower bound on the performance of the USAGE algorithm in the case of alien crosstalk is obtained in (50). It can be seen from these expressions that the performance depends on the noise correlation and the power of the alien disturbers, since SNR^white_i and ᾱ are functions of these quantities. Specifically, SNR^white_i and ᾱ increase with the correlation, while the former decreases significantly and the latter increases marginally with the alien power. With the help of these facts, the proposed iterative receiver can be designed to achieve near crosstalk-free performance. It may be expected from (49) and (50) that fast convergence (i.e., within one iteration) will be exhibited under low correlation and/or high alien power, whereas additional iterations may be required under high correlation and/or low alien noise power.
Numerical and Simulation Results
In this section, we present numerical results based on the analytical expressions derived here, along with MATLAB simulation experiments, to investigate the performance of the proposed iterative receiver. For the simulations, we adopt the stochastic channel model of [19] and consider a binder of eight lines, so that there are seven disturbers per line. We have simulated the proposed algorithm for various VDSL scenarios in the context of real VDSL deployments. Scenario 1 deals with the distributed case, consisting of eight VDSL users with line lengths varying from 300 m to 1000 m in 100 m increments. Scenario 2 covers the case of equidistant lines. In crosstalk-limited DSL systems, a near-far problem may occur when long lines coexist with short lines in the same binder; this case is considered under Scenario 3, which includes 4 near-end users (at 400 m) and 4 far-end users (at 800 m) from the CO. For the analysis of the OSAGE algorithm, we assume, at each tone, four subsets of sizes 2, 2, 3, 1 with appropriate ordering for the equidistant scenario, and subsets of 2 elements each for the other scenarios, giving priority to the users with shorter line lengths. The band plan 998 ADE is incorporated with 3 upstream bands: US0 (25-138 kHz), US1 (3.75-5.2 MHz), and US2. The other simulation parameters are used in accordance with the DSL standards [2], as listed in Table 1.
Self-Crosstalk Cancellation.
In this section, the performance of the proposed (OSAGE and USAGE) receiver is investigated for self-crosstalk cancellation. We mainly consider the performance after a single iteration, in order to keep the real-time complexity at O(N²) per tone. We consider the SINR and the corresponding data rate characteristics for the numerical studies. The analytical lower bounds, derived in (17) and (21) for the OSAGE algorithm and in (35) for the USAGE algorithm (with q = 1), are also validated through the simulations.
The main observations that can be drawn from the numerical investigations and simulations are summarized below.
(i) First, we consider a practical VDSL scenario in which the binder consists of distributed users with varying line lengths from the CO. For this case, we have simulated the SINR and the corresponding data rate characteristics for the considered algorithms, and the results are illustrated in Figures 4 and 5, respectively. From these figures, we see that both algorithms achieve performance close to the crosstalk-free performance for a broad range of line lengths and bandwidths. It is further observed that, although some SINR degradation appears at the higher-frequency tones (Figure 4), the overall effect on the data rate is negligible over the entire bandwidth, as shown in Figure 5.
(ii) Next, we consider the simplified scenario of equidistant lines, which can be feasible in some deployments. Figures 6 and 7 cover this case and show that the USAGE algorithm works well and achieves close to crosstalk-free performance for longer loop lengths. Even for shorter loop lengths, it gives satisfactory performance at the lower-frequency tones. However, the OSAGE algorithm always outperforms USAGE and achieves a data rate close to the crosstalk-free performance, as shown in Figure 7.
(iii) Finally, we address the performance of our proposed algorithms in a mixed scenario, to consider the impact of the near-far problem. This problem occurs when the binder contains loops with widely varying line lengths, and it can be alleviated in practice by effective transmit power control of the nearby users. However, as shown in Figure 8, our algorithms allow the far-end users to achieve crosstalk-free performance without compromising the data rates of the near-end users, thus potentially avoiding the need for such power controls.
(iv) The ZF receiver exploits the CWDD property of DSL channels and achieves close to self-crosstalk-free performance. As stated earlier, however, it requires the additional computation of channel matrix inversions. Further, it has been shown in many publications [6,20] that linear receivers (ZF and MMSE) experience significant performance degradation in the presence of alien crosstalk. The SAGE algorithm can effectively mitigate the effect of alien crosstalk, as discussed in the next section.
Alien Crosstalk Cancellation.
For this scenario, we investigate the performance of the proposed OSAGE receiver for various alien crosstalk powers and noise correlations, assuming equal-length TPs. We consider correlation coefficient values for the alien noise components in the various TPs in the range 0.50-0.99. We have also taken an alien power of −100 dBm/Hz, as it lies between the levels of −120 dBm/Hz (an alien-free environment) and −80 dBm/Hz (a very strong alien crosstalk) [21]. We assume a known covariance matrix R and obtain numerically the parameters γ_r and γ_t of (43), as well as |det(R)|.
We demonstrate the effect of whitening on the original channel through the CWDD behaviour of the modified channel (after noise whitening) in Figures 9 and 10. Simulation results and the upper bound on the CWDD parameter (derived in (45)) are also shown in Figure 9. It can be observed that the CWDD parameter increases with both the alien power and the correlation, although the effect of the former is insignificant. The CWDD characteristic of the modified channel is weaker than that of the original channel. However, even the weaker CWDD property satisfies the convergence condition (for typical values of N) and hence can be utilized for alien crosstalk cancellation via the OSAGE receiver. Figures 11 and 12 highlight the receiver performance in the presence of alien crosstalk. It is observed from the figures that high spatial correlation can be efficiently utilized in alien crosstalk cancellation. The number of iterations required for such cancellation behaves similarly to the self-crosstalk case, depending on the noise correlation and the alien power. For achieving a higher data rate improvement at shorter loop lengths, the proposed iterative receiver may take 2-3 iterations when the alien crosstalk has low power and high noise correlation. However, a single iteration is sufficient to mitigate crosstalk of high alien power and/or low correlation (which corresponds to a lower data rate improvement after noise whitening).
Conclusions
We have investigated the use of an iterative receiver based on the SAGE algorithm for crosstalk cancellation in upstream VDSL. The proposed receiver was shown to be practically feasible and computationally efficient. By employing user ordering, a lower bound (after each iteration) on the performance of the SAGE receiver was derived. Since the derived bound depends only on the binder size, the longest line, and the crosstalk-free SNR, it is easy for the designer to predict the data rate and thus achieve a specified quality of service. Analytical and simulation results confirm that the SAGE-based receiver operates close to self-crosstalk-free performance with a single iteration (and a complexity comparable to that of the linear ZF receiver), while eliminating the need for channel inversion. An upper bound on the CWDD parameter of the modified channel (after noise whitening) was obtained. The CWDD property was shown to be retained after noise whitening, a fact that was exploited for the cancellation of alien crosstalk. The performance of the proposed receiver was shown to depend on the noise correlation and the alien crosstalk power: mitigation of alien crosstalk requires only one iteration for high alien power and/or low noise correlation, while a few more iterations may sometimes be required in the case of low alien noise power and/or high noise correlation.
Appendix
In this appendix, we present the bounds on the modulus of the determinant of the covariance matrix and of its principal and nonprincipal submatrices. For simplicity, we assume equal alien powers and thermal noises at each TP. For a lower bound on |det(R)|, we first represent the bounds on the minimum and maximum eigenvalues (|λ_R,min|, |λ_R,max|) in terms of the mean (μ_λR) and variance (σ²_λR) of the eigenvalues, along with the trace of the covariance matrix R [22]. An upper bound on the modulus of the determinant of the nonprincipal submatrix can be represented in terms of the eigenvalues (λ_√R) of R^{1/2} [24], as in (A.8). Here, the eigenvalues (λ_√R) of R^{1/2} can be represented in terms of the eigenvalues (λ_R) of R using the eigenvalue decomposition. | 10,312.4 | 2011-01-01T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Strongly Nil-*-Clean Rings
A *-ring $R$ is called a strongly nil-*-clean ring if every element of $R$ is the sum of a projection and a nilpotent element that commute with each other. In this article, we show that $R$ is a strongly nil-*-clean ring if and only if every idempotent in $R$ is a projection, $R$ is periodic, and $R/J(R)$ is Boolean. For any commutative *-ring $R$, we prove that the algebraic extension $R[i]$ where $i^2=\mu i+\eta$ for some $\mu,\eta\in R$ is strongly nil-*-clean if and only if $R$ is strongly nil-*-clean and $\mu\eta$ is nilpotent. The relationships between Boolean *-rings and strongly nil-*-clean rings are also obtained.
Introduction
Let R be an associative ring with unity. A ring R is called strongly nil clean if every element of R is the sum of an idempotent and a nilpotent that commute. These rings were first discovered by Hirano-Tominaga-Yakub [11], where they were referred to as [E-N]-representable rings. In [8] and [9], Diesl refers to this class as strongly nil clean and studies its properties. Studying strongly nil cleanness is also relevant to Lie theory: over a field, the decomposition of a matrix required in the definition of strongly nil cleanness is precisely the Jordan-Chevalley decomposition. An involution of a ring R is an operation * : R → R such that (x + y)* = x* + y*, (xy)* = y*x* and (x*)* = x for all x, y ∈ R. A ring R with involution * is called a *-ring. An element p in a *-ring R is called a projection if p² = p = p* (see [2]). Recently the concept of strongly clean rings has been considered for *-rings. Vaš [15] calls a *-ring R strongly *-clean if each of its elements is the sum of a projection and a unit that commute with each other (see also [14]).
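A standard example, added here for concreteness (it is ours, not taken from this paper): the complex matrix ring with the conjugate-transpose involution, in which projections and idempotents visibly differ.

```latex
% R = M_n(\mathbb{C}) with A^* = \overline{A}^{\,T} is a *-ring:
% (A+B)^* = A^* + B^*, (AB)^* = B^*A^*, (A^*)^* = A. In M_2(\mathbb{C}),
\[
p = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad
e = \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix},
\]
% p satisfies p^2 = p = p^*, so it is a projection, while e satisfies
% e^2 = e but e^* \neq e, so it is an idempotent that is not a projection.
```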
In this paper, we adapt strongly nil cleanness to *-rings. We call a *-ring R strongly nil-*-clean if every element of R is the sum of a projection and a nilpotent element that commute. The paper consists of three parts. In Section 2, we characterize the class of strongly nil-*-clean rings in several different ways. For example, we show that a ring R is a strongly nil-*-clean ring if and only if every idempotent in R is a projection, R is periodic, and R/J(R) is Boolean. In Section 3, we prove a result related to the strongly nil-*-cleanness of a commutative *-ring and its algebraic extension. For a commutative *-ring R with µ* = µ, η* = η ∈ R, the extension R[i] = {a + bi | a, b ∈ R, i² = µi + η} is strongly nil-*-clean if and only if R is strongly nil-*-clean and µη is nilpotent. Foster [10] introduced the concept of Boolean-like rings as a generalization of Boolean rings in the commutative case. We adapt the concept of Boolean-like rings to rings with involution and prove that a *-ring R is *-Boolean-like if and only if R is strongly nil-*-clean and αβ = 0 for all nilpotent elements α, β in R. In the last section, we investigate submaximal ideals ([12]) of strongly nil-*-clean rings; we also define *-Boolean rings as *-rings over which every element is a projection and characterize them in terms of strongly nil-*-cleanness.
Throughout this paper all rings are associative with unity (unless otherwise noted). We write J(R), N (R) and U (R) for the Jacobson radical of a ring R, the set of all nilpotent elements in R and the set of all units in R, respectively. The ring of all polynomials in one variable over R is denoted by R[x].
Characterization Theorems
The main purpose of this section is to explore the structure of strongly nil * -clean rings. A ring R is called uniquely nil clean if, for any x ∈ R, there exists a unique idempotent e ∈ R such that x − e ∈ N (R) [8]. If, in addition, x and e commute, R is called uniquely strongly nil clean [11]. Strongly nil cleanness and uniquely strongly nil cleanness are equivalent by [11,Theorem 3]. Analogously, for a * -ring, we define uniquely strongly nil * -clean rings by replacing "idempotent" with "projection" in the definition of uniquely strongly nil clean rings.
We will use the following characterization frequently.

Proposition 2.2 Let R be a *-ring. Then the following are equivalent: (i) R is strongly nil-*-clean; (ii) R is uniquely nil clean and every idempotent in R is a projection; (iii) R is uniquely strongly nil-*-clean.
Proof If R is strongly nil * -clean, then R is strongly * -clean. For, if x ∈ R, then there exist a projection e ∈ R and w ∈ N (R) such that 2 − x = e + w and ew = we. This gives that x = (1 − e) + (1 − w) where 1 − e is a projection and 1 − w ∈ U (R). If R is strongly * -clean, then every idempotent in R is a projection by [14,Theorem 2.2]. By [11,Theorem 3], the proof is completed.
We note that the condition "every idempotent in R is a projection" in Proposition 2.2 is necessary as the following example shows.
Example 2.3 There is a commutative *-ring R of matrices, with the usual matrix addition and multiplication, which is in fact Boolean. Thus, for any x ∈ R, there exists a unique idempotent e ∈ R such that x − e ∈ R is nilpotent. But R is not strongly nil-*-clean, because the only projections in R are the trivial ones, and no projection e ∈ R realizes the required decomposition for every x.

On the other hand, in [11, Theorem 3] it is proved that R is strongly nil clean if and only if N(R) is an ideal and R/N(R) is Boolean. Also, R is uniquely nil clean if and only if R is abelian, N(R) is an ideal and R/N(R) is Boolean [11, Theorem 4]. Adapting these results to rings with involution, we immediately obtain the following theorem by using Proposition 2.2; we give a new proof of the necessity.

Theorem 2.4 Let R be a *-ring. Then R is strongly nil-*-clean if and only if (1) every idempotent in R is a projection; (2) N(R) forms an ideal; (3) R/N(R) is Boolean.

Proof Assume that R is strongly nil-*-clean. In view of Lemma 2.1, for any x ∈ R there exist an idempotent g ∈ R and a nilpotent element v ∈ R such that x = g + v and gv = vg. Thus x − x² ∈ N(R); write (x − x²)^m = 0, so that x^m ∈ x^{m+1}R. This shows that R is strongly π-regular. According to [1, Theorem 3], N(R) forms an ideal of R. Further, x − x² ∈ N(R), and so R/N(R) is Boolean. The converse is obvious by [11, Theorem 3].
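The nilpotency step used in this proof can be written out explicitly (a short verification under the stated hypotheses, not an addition to the theorem):

```latex
% For x = g + v with g^2 = g, v nilpotent and gv = vg:
\[
x - x^{2} = (g+v) - (g^{2} + 2gv + v^{2})
          = v - 2gv - v^{2}
          = v\,(1 - 2g - v),
\]
% which is nilpotent, since v is nilpotent and commutes with 1 - 2g - v.
```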
A ring R is called strongly J-*-clean if for any x ∈ R there exists a unique projection e ∈ R such that x − e ∈ J(R) [7].

Lemma 2.5 Let R be a *-ring. Then R is strongly nil-*-clean if and only if (1) R is strongly J-*-clean; (2) J(R) is nil.
Proof Suppose that R is strongly nil * -clean. In view of Theorem 2.4, N (R) forms an ideal of R, and this gives that N (R) ⊆ J(R). On the other hand, for any x ∈ J(R), there exists a projection e ∈ R such that x − e ∈ N (R). Then e = x − (x − e) ∈ J(R). This shows that e = 0, and so x is nilpotent. That is J(R) is nil, and so N (R) = J(R). In view of Proposition 2.2, we can see that there exists a unique projection e ∈ R such that x − e ∈ J(R). Hence R is strongly J- * -clean by [7,Theorem 3.3].
Conversely, assume that (1) and (2) hold. In view of [7, Proposition 2.1], R is strongly * -clean. Thus, R is abelian. Let x ∈ R. By virtue of [7,Theorem 3.3], there exist a projection e ∈ R and a w ∈ J(R) such that x = e + w and xe = ex. As J(R) is nil, w ∈ R is nilpotent. Therefore R is strongly nil * -clean.
From Lemma 2.5 and [7, Proposition 2.1], it follows that every strongly nil-*-clean ring is strongly J-*-clean, and every strongly J-*-clean ring is strongly *-clean.
The first inclusion is strict, because, for example, the power series ring Z 2 [[x]] is strongly J- * -clean but not strongly nil * -clean where * is the identity involution by [4, Example 2.5 (5)]. The second inclusion is also strict by [7, Example 2.2(2)].
We should note that a strongly nil clean ring may not be strongly J-clean (see [4, Example on p. 3799]). Hence the strongly nil clean and strongly nil-*-clean classes behave differently relative to the strongly J-clean and strongly J-*-clean classes, respectively.

Lemma 2.6 Let R be a *-ring. Then R is strongly nil-*-clean if and only if (1) every idempotent in R is a projection; (2) J(R) is nil; (3) R/J(R) is Boolean.

Proof Assume that (1), (2) and (3) hold. For any x ∈ R, x + J(R) = x² + J(R). As J(R) is nil, every idempotent in R lifts modulo J(R). Thus, we can find an idempotent e ∈ R such that x − e ∈ J(R) ⊆ N(R). By Lemma 2.1, xe = ex, and so the result follows. The converse follows from Theorem 2.4 and Lemma 2.5.
Recall that a ring R is periodic if for any x ∈ R there exist distinct m, n ∈ N such that x^m = x^n. With this information we can now prove the following.

Theorem 2.7 Let R be a *-ring. Then R is strongly nil-*-clean if and only if (1) every idempotent in R is a projection; (2) R is periodic; (3) R/J(R) is Boolean.

Proof Suppose that R is strongly nil-*-clean. By virtue of Lemma 2.6, every idempotent in R is a projection, J(R) is nil, and R/J(R) is Boolean; periodicity of R then follows. Conversely, if R is periodic, then J(R) is nil (for x ∈ J(R), x^m = x^n with m > n forces x^n(1 − x^{m−n}) = 0, and 1 − x^{m−n} is a unit, so x^n = 0). Therefore the proof is completed by Lemma 2.6.
Proposition 2.8 A *-ring R is strongly nil-*-clean if and only if (1) R is strongly *-clean; (2) U(R) = 1 + N(R).

Proof Suppose that R is strongly nil-*-clean. By the proof of Lemma 2.5, N(R) = J(R) and R is strongly *-clean. Moreover, if u ∈ U(R), write u = e + w with e a projection, w ∈ N(R) and ew = we; passing to R/N(R) shows that e = 1, and so u ∈ 1 + N(R). Conversely, assume that (1) and (2) hold. Let a ∈ R. Then we can find a projection e ∈ R such that (a − 1) − e ∈ U(R) and e(a − 1) = (a − 1)e. That is, (1 − a) + e ∈ U(R). As 1 − (a − e) ∈ U(R), by hypothesis, a − e ∈ N(R). In addition, ea = ae. Accordingly, R is strongly nil-*-clean.
Corollary 2.9
Let R be a *-ring. Then R is strongly nil-*-clean if and only if so is R[x]/(xⁿ), n ≥ 1.

Proof One direction is obvious. Conversely, assume that R is strongly nil-*-clean. Note that R[x]/(xⁿ) = {a₀ + a₁x + · · · + a_{n−1}x^{n−1} | a₀, a₁, · · · , a_{n−1} ∈ R}. Also note that R is abelian. Thus, it can easily be seen that every element in R[x]/(xⁿ) can be written as the sum of a projection and a nilpotent element that commute.
Algebraic Extensions
Let R be a commutative *-ring, and let µ, η ∈ R with µ* = µ and η* = η. Let R[i] = {a + bi | a, b ∈ R}, where i² = µi + η. The aim of this section is to explore the algebraic extensions of a strongly nil-*-clean ring.
Proof Since R is strongly nil-*-clean, it follows from Theorem 2.4 that 2 − 2² ∈ N(R), and so 2 ∈ N(R). For any a + bi ∈ R[i], it is easy to verify that (a + bi) − (a + bi)² is nilpotent in R[i]. This shows that R[i]/N(R[i]) is Boolean. According to Theorem 2.4, we complete the proof.
As an immediate consequence, we deduce that a commutative * -ring R is strongly nil * -clean if and only if so is R[i] where i 2 = −1.
We now consider a subclass of strongly nil-*-clean rings consisting of rings which we call *-Boolean-like. First recall that a ring R is called Boolean-like [10] if it is commutative with unit, is of characteristic 2, and satisfies ab(1 + a)(1 + b) = 0 for every a, b ∈ R. Any Boolean ring is clearly a Boolean-like ring, but not conversely (see [10]). Any Boolean-like ring is uniquely nil clean [10]. Since R is abelian, it follows that xy = yx; hence R is commutative.
The following is an example of a * -Boolean-like ring.
Proof If R[i] is *-Boolean-like, then R is *-Boolean-like. Also, µη ∈ R is nilpotent by Proposition 3.1 and Theorem 3.4. Since µ is a unit and N(R) is an ideal, η is nilpotent.
We end this section with an example showing that a strongly nil clean ring need not be strongly nil-*-clean.
Example 3.7 There is a ring R in which (x − x²)(y − y²) = 0 for all x, y ∈ R but which is not commutative. This implies that R is not a *-Boolean-like ring for any involution *. Accordingly, R is not strongly nil-*-clean for any involution *; otherwise, every idempotent in R would be a projection, a contradiction (see Lemma 2.1).
Submaximal Ideals and * -Boolean Rings
An ideal I of a ring R is called a submaximal ideal if I is covered by a maximal ideal of R. That is, there exists a maximal ideal J of R such that I ⊊ J ⊊ R, and for any ideal K of R with I ⊆ K ⊆ J we have I = K or K = J. This concept was first introduced to study Boolean-like rings (cf. [12]).
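A small illustrative example of a submaximal ideal (ours, not the paper's):

```latex
% In R = \mathbb{Z}, the ideal I = 4\mathbb{Z} is submaximal:
\[
4\mathbb{Z} \subsetneq 2\mathbb{Z} \subsetneq \mathbb{Z},
\]
% and the only ideals K with 4\mathbb{Z} \subseteq K \subseteq 2\mathbb{Z}
% are K = 4\mathbb{Z} and K = 2\mathbb{Z} (any such K is m\mathbb{Z} with
% m \mid 4 and 2 \mid m), so 4\mathbb{Z} is covered by the maximal ideal
% 2\mathbb{Z}.
```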
A *-ring R is called a *-Boolean ring if every element of R is a projection. The purpose of this section is to characterize submaximal ideals of strongly nil-*-clean rings, and to characterize *-Boolean rings by means of strongly nil-*-cleanness. We begin with the following lemma.

Lemma 4.1 Let R be a strongly nil-*-clean ring and let M be an ideal of R. Then M is maximal if and only if (1) M is prime; (2) for any a ∈ R and n ≥ 1, aⁿ ∈ M implies that a ∈ M.
Proof Suppose that M is maximal. Obviously, M is prime. Let a ∈ R with aⁿ ∈ M. If a ∉ M, then RaR + M = R. Thus, (R/M)(a + M)(R/M) = R/M. Clearly, R is an abelian clean ring, and so it is an exchange ring by [5, Theorem 17.2.2]. This implies that R/M is an abelian exchange ring. As in the proof of [5, Proposition 17.1.9], there exists an idempotent e + M ∈ R/M such that (R/M)(e + M)(R/M) = R/M and e + M ∈ (a + M)(R/M). Thus, 1 − e ∈ M. Hence, 1 − ar ∈ M for some r ∈ R. This implies that aⁿ⁻¹ − aⁿr ∈ M, and so aⁿ⁻¹ ∈ M. By iterating this process, we see that a ∈ M, as required.
Conversely, assume that (1) and (2) hold. Assume that M is not maximal. Then we can find a maximal ideal I of R such that M ⊊ I ⊊ R. Choose a ∈ I with a ∉ M. By hypothesis, there exist an idempotent e ∈ R and a nilpotent u ∈ R such that a = e + u. Write u^m = 0. Then u^m ∈ M. By hypothesis, u ∈ M. This shows that e ∉ M. Clearly, R is abelian. Thus eR(1 − e) ⊆ M. As M is prime, we deduce that 1 − e ∈ M. As a result, 1 − a = (1 − e) − u ∈ M, and so 1 = (1 − a) + a ∈ I. This gives a contradiction. Therefore M is maximal.
Let R be a strongly nil-*-clean ring, and let x ∈ R. Then there exists a unique projection e ∈ R such that x − e ∈ N(R). We denote e by x_P and x − e by x_N.

Lemma 4.2 Let I be an ideal of a strongly nil-*-clean ring R, and let x ∈ R be such that x ∉ I. If x_P ∉ I, then there exists a maximal ideal J of R such that I ⊆ J and x ∉ J.
Proof Let Ω = {K | K is an ideal of R such that I ⊆ K and x_P ∉ K}; by hypothesis, I ∈ Ω. Given a chain {K_i} in Ω, set Q = ∪ᵢK_i. Then Q is an ideal of R. If Q ∉ Ω, then x_P ∈ Q, and so x_P ∈ K_i for some i. This gives a contradiction. Thus, Ω is inductive. By Zorn's Lemma, there exists an ideal J of R which is maximal in Ω. Let a, b ∈ R be such that a, b ∉ J. By the maximality of J, we see that RaR + J, RbR + J ∉ Ω. This shows that x_P ∈ (RaR + J) ∩ (RbR + J). Hence, x_P = x_P² ∈ RaRbR + J. This yields that aRb ⊄ J; otherwise x_P ∈ J, a contradiction. Hence, J is prime. Assume that J is not maximal. Then we can find a maximal ideal M of R such that J ⊊ M ⊊ R. Clearly, R is abelian. By the maximality of J in Ω, we see that x_P ∈ M. As x_P R(1 − x_P) = 0 ⊆ J and J is prime with x_P ∉ J, we have 1 − x_P ∈ J ⊆ M. Then 1 = x_P + (1 − x_P) ∈ M, a contradiction. Therefore J is a maximal ideal; moreover, x ∉ J, since x_N ∈ J (as x_N is nilpotent) while x_P ∉ J. This proves the lemma.

Proposition 4.3 Let R be strongly nil-*-clean. Then the intersection of two distinct maximal ideals is submaximal and is covered by each of these two maximal ideals. Further, there are no other maximal ideals containing it.
Proof Let I₁ and I₂ be two distinct maximal ideals of R. Then I₁ ∩ I₂ ⊊ I₁. Suppose I₁ ∩ I₂ ⊆ J ⊊ I₁. Then we can find some x ∈ I₁ with x ∉ J. Write x_N^n = 0. Then x_N^n ∈ I₁. In light of Lemma 4.1, x_N ∈ I₁. Likewise, x_N ∈ I₂. Thus, x_N ∈ I₁ ∩ I₂ ⊆ J. This shows that x_P ∉ J. By virtue of Lemma 4.2, there exists a maximal ideal M of R such that J ⊆ M and x ∉ M. Hence, I₁ ∩ I₂ ⊆ M and I₁ ≠ M. If I₂ ≠ M, then I₂ + M = R. Write t + y = 1 with t ∈ I₂, y ∈ M. Then for any z ∈ I₁, z = zt + zy ∈ I₁I₂ + M = M, and so I₁ = M. This gives a contradiction. Thus I₂ = M, and then J ⊆ M = I₂. As a result, J ⊆ I₁ ∩ I₂, and so I₁ ∩ I₂ = J. Therefore I₁ ∩ I₂ is a submaximal ideal of R. We claim that I₁ ∩ I₂ is semiprime. If K² ⊆ I₁ ∩ I₂, then for any a ∈ K we see that a² ∈ I₁ ∩ I₂. In view of Lemma 4.1, a ∈ I₁ ∩ I₂. This implies that K ⊆ I₁ ∩ I₂. Hence, I₁ ∩ I₂ is semiprime. Therefore I₁ ∩ I₂ is the intersection of the maximal ideals containing it. Assume that K is a maximal ideal of R such that I₁ ∩ I₂ ⊆ K. If K ≠ I₁, I₂, then I₁ + K = I₂ + K = R. This implies that I₁I₂ + K = R, and so K = R, a contradiction. Thus, K = I₁ or K = I₂, and so the proof is completed.
We call a local ring R absolutely local provided that for any 0 ≠ x ∈ J(R), J(R) = RxR.
Corollary 4.4 Let R be strongly nil * -clean, and let I be an ideal of R. Then I is a submaximal ideal if and only if R/I is Boolean with four elements or R/I is absolutely local.
Proof Let I be a submaximal ideal of R.
Case I. I is contained in more than one maximal ideal. Then I is contained in two distinct maximal ideals of R. Since I is submaximal, there exists a maximal ideal J of R such that I is covered by J. Thus, we have a maximal ideal J′ such that J′ ≠ J and I ⊆ J′. Hence, I ⊆ J ∩ J′ ⊊ J. Clearly, J ∩ J′ ≠ J as J + J′ = R, and so I = J ∩ J′. In view of Proposition 4.3, there is no maximal ideal containing I except for J and J′. This shows that R/I has only two maximal ideals covering {0 + I}. For any a ∈ R, it follows from Theorem 2.4 that a − a² ∈ R is nilpotent. Write (a − a²)ⁿ = 0. Then (a − a²)ⁿ ∈ J. According to Lemma 4.1, a − a² ∈ J. Likewise, a − a² ∈ J′. Thus, a − a² ∈ J ∩ J′, and so a − a² ∈ I. This shows that R/I is Boolean. Therefore R/I is Boolean with four elements.
Case II. Suppose that I is contained in only one maximal ideal J of R. Then R/I has only one maximal ideal, J/I. Clearly, R is an abelian exchange ring, and then so is R/I. Let e + I ∈ R/I be a nontrivial idempotent. Then I ⊊ I + ReR ⊆ J or I + ReR = R; likewise, I ⊊ I + R(1 − e)R ⊆ J or I + R(1 − e)R = R. Since both cannot be contained in J (as 1 = e + (1 − e)), we get I + ReR = R or I + R(1 − e)R = R. Thus, (R/I)(e + I)(R/I) = R/I or (R/I)(1 − e + I)(R/I) = R/I, a contradiction. Therefore all idempotents in R/I are trivial. It follows from [5, Lemma 17.2.1] that R/I is local. For any 0 ≠ x + I ∈ J/I, we see that I ⊊ I + RxR ⊆ J. As I is submaximal, we deduce that J = I + RxR. Therefore R/I is absolutely local.
Conversely, assume that R/I is Boolean with four elements. Then R/I has precisely two maximal ideals covering {0 + I}, and so R has precisely two maximal ideals covering I. Thus, we have a maximal ideal J such that I ⊊ J. If I ⊆ K ⊆ J, then K = I or K is maximal, and so K = J. Consequently, I is submaximal. Assume that R/I is absolutely local. Then R/I has a unique maximal ideal J/I. Hence, J is a maximal ideal of R such that I ⊊ J. Assume that I ⊊ K ⊆ J. Choose a ∈ K with a ∉ I. Then J = I + RaR ⊆ K, and so K = J. Therefore I is submaximal, as required.
Corollary 4.5 Let R be strongly nil-*-clean. If I₁ and I₂ are distinct maximal ideals of R, then R/(I₁ ∩ I₂) is Boolean.
Proof Since I₁/(I₁ ∩ I₂) and I₂/(I₁ ∩ I₂) are distinct maximal ideals, R/(I₁ ∩ I₂) is not local. In view of Proposition 4.3, I₁ ∩ I₂ is a submaximal ideal of R. Therefore we complete the proof by Corollary 4.4.
Recall that an ideal I of a commutative ring R is primary provided that for any x, y ∈ R, xy ∈ I implies that x ∈ I or y n ∈ I for some n ∈ N. Clearly, every maximal ideal of a commutative ring is primary. We end this article by giving the relation between strongly nil * -clean rings and * -Boolean rings.
Lemma 4.6 Let R be a commutative strongly nil * -clean ring. Then the intersection of all primary ideals of R is zero.
Proof Let a be in the intersection of all primary ideals of R. Assume that a ≠ 0. Let Ω = {I | I is an ideal of R such that a ∉ I}. Then Ω ≠ ∅ as 0 ∈ Ω. Given any chain of ideals {I_i} in Ω, set M = ∪ᵢI_i. Then M ∈ Ω. Thus, Ω is inductive. By Zorn's Lemma, we can find an ideal Q which is maximal in Ω. It will suffice to show that Q is primary. If not, we can find some x, y ∈ R such that xy ∈ Q, but x ∉ Q and yⁿ ∉ Q for every n ∈ N. This shows that a ∈ Q + (x), and so a = b + cx for some b ∈ Q, c ∈ R. Since R is strongly nil-*-clean, it follows from Theorem 2.7 that there are distinct k, l ∈ N such that y^k = y^l. Say k > l. From y^k = y^l one deduces that y^t = y^{t+(k−l)} for all t ≥ l, and hence, taking s = l(k − l), that (y^s)² = y^s; thus y^s is an idempotent. Write y = y_P + y_N. Then y^s − y_P = (y_P + y_N)^s − y_P ∈ N(R), as every term of the expansion other than y_P^s = y_P contains a factor of y_N. As R is commutative, y^s and y_P are commuting idempotents, so (y^s − y_P)³ = y^s − y_P; combined with nilpotency, this implies that y^s = y_P. Since xy ∈ Q, we see that xy^s ∈ Q, and so xy_P ∈ Q. It follows from a = b + cx that ay_P = by_P + cxy_P ∈ Q. Clearly, y^s ∉ Q, and so a ∈ Q + (y_P). Write a = d + ry_P for some d ∈ Q, r ∈ R. We see that ay_P = dy_P + ry_P, and so ry_P ∈ Q. This implies that a ∈ Q, a contradiction. Therefore Q is primary, a contradiction. Consequently, the intersection of all primary ideals of R is zero.
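The idempotent-power step in this proof can be verified directly (a short check, with s = l(k − l) as above):

```latex
% From y^k = y^l with k > l \ge 1, multiplying repeatedly by y^{k-l} gives
\[
y^{t} = y^{t+(k-l)} \quad \text{for all } t \ge l,
\]
% and hence y^{t} = y^{t+m(k-l)} for all m \ge 1. With s = l(k-l) \ge l,
\[
(y^{s})^{2} = y^{\,s+l(k-l)} = y^{s},
\]
% since the two exponents differ by a multiple of (k-l) and both are \ge l.
```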
Theorem 4.7 A *-ring R is *-Boolean if and only if (1) R is commutative; (2) every primary ideal of R is maximal; (3) R is strongly nil-*-clean.

Proof Suppose that R is a *-Boolean ring. Clearly, R is a commutative strongly nil-*-clean ring. Let I be a primary ideal of R. If I is not maximal, then there exists a maximal ideal M such that I ⊊ M ⊊ R. Choose x ∈ M with x ∉ I. As x is an idempotent, we see that xR(1 − x) ⊆ I, and so (1 − x)^m ∈ I ⊂ M for some m ∈ N. Thus, 1 − x ∈ M. This implies that 1 = x + (1 − x) ∈ M, a contradiction. Therefore I is maximal, as required.
Conversely, assume that (1), (2) and (3) hold. Clearly, every maximal ideal of R is primary, and so J(R) = ∩{P | P is primary}. In view of Lemma 4.6, J(R) = 0. Since R is strongly nil-*-clean, N(R) = J(R) = 0 by Lemma 2.5, and hence every element of R is a projection, i.e., R is *-Boolean.
Corollary 4.8 A ring R is a Boolean ring if and only if
(1) R is commutative; (2) Every primary ideal of R is maximal; (3) R is strongly nil clean.
Proof Choose the involution as the identity. Then the result follows from Theorem 4.7. | 6,686.4 | 2012-11-22T00:00:00.000 | [
"Mathematics"
] |
Adagio for Thermal Relics
A larger Planck scale during an early epoch leads to a smaller Hubble rate, which is the measure for efficiency of primordial processes. The resulting slower cosmic tempo can accommodate alternative cosmological histories. We consider this possibility in the context of extra dimensional theories, which can provide a natural setting for the scenario. If the fundamental scale of the theory is not too far above the weak scale, to alleviate the "hierarchy problem," cosmological constraints imply that thermal relic dark matter would be at the GeV scale, which may be disfavored by cosmic microwave background measurements. Such dark matter becomes viable again in our proposal, due to the smaller requisite annihilation cross section, further motivating ongoing low energy accelerator-based searches. Quantum gravity signatures associated with the extra dimensional setting can be probed at high energy colliders, up to $\sim 13$ TeV at the LHC or $\sim 100$ TeV at FCC-hh. Searches for missing energy signals of dark sector states, with masses $\gtrsim 10$ GeV, can be pursued at a future circular lepton collider.
I. INTRODUCTION
Cosmological observations of light element abundances have led us to conclude that our understanding of the cosmos, based on the Standard Model (SM) and general relativity, can provide a quantitative description of the Universe when it was a few seconds old [1]. This corresponds to the era of Big Bang Nucleosynthesis (BBN), dominated by radiation at temperatures of O(MeV). The agreement between theory and observation implies that the rate of cosmic expansion in this era, given by the Hubble parameter H, is set by a plasma that cannot have a significant contribution from unknown physics.
At the same time, the study of cosmology has provided us with some of the starkest clues that our fundamental understanding of the Universe remains incomplete. Key examples are the mystery of what constitutes dark matter (DM) and how visible matter evaded complete annihilation, i.e. what is the source of the cosmic baryon asymmetry. There is broad consensus in particle physics and cosmology that new fundamental ingredients are needed to address these questions. While a great variety of ideas have been proposed to explain either problem, none has been shown to be a definitive resolution, neither empirically nor via inescapable theoretical imperatives.
The quantitative understanding of the cosmos back to the BBN era illustrates that the predicted rates for the relevant microscopic processes, when compared to the expansion rate of the Universe at the corresponding epoch, are generally correct within margins of error. Similarly, the efficiency of a new physics mechanism that aims to address a cosmological puzzle is measured against the expansion rate set by H. In general relativity, the expansion rate itself is set by the gravitational response of spacetime to various forms of energy density and (assuming zero curvature) H ∝ M_P^{-1}, where M_P ≈ 1.2 × 10^19 GeV [1] is the Planck mass set by Newton's constant G_N = M_P^{-2} (with the reduced Planck constant ℏ and the speed of light c set to unity: ℏ = c = 1).
The above account implies that if gravity had a different strength at very early times, expectations about the viability of a cosmological model would change. In particular, if gravity was much weaker well before the BBN era, certain processes that are deemed too inefficient may have been sufficiently fast. A good example is provided by thermal relic DM with parameters that lead to inefficient annihilation, resulting in its overabundance and conflict with precision cosmological data. One may address this problem in various ways, for example by generating additional entropy later on to dilute the DM density [2]. However, we will entertain a less explored possibility here, namely weaker gravity leading to a slower expansion, which allows a longer time for DM to annihilate before its abundance is set by freeze-out. To realize this 'Adagio' scenario, we will assume that during some early cosmological epoch, the value of the Planck mass was larger, reducing the strength of the gravitational coupling.
The key feature of our scenario is the time variation of the gravitational coupling in the early stages of cosmology, which makes the Planck mass a function of time: M_P = M_P(t). For t < t_*, corresponding to temperatures T > T_*, gravity was weaker, i.e. M_P(t) > M_P(t_*), and we will assume that M_P was constant afterwards. In general, we would like t_* to be early enough that the well-established features of the early Universe are not perturbed significantly. As we will illustrate, this can be achieved as long as t_* is sufficiently small compared to ∼ 1 s (T_* is sufficiently large compared to ∼ 1 MeV), corresponding to the onset of BBN, which we will assume to go through according to the standard theory.
One may simply postulate that M_P had some temperature dependence, M_P → M_P(T), that led to its variation over cosmic time, with the value that we observe today set before the BBN epoch. This is not the way we usually think of gravity, but in fact such a behavior for M_P could be realized in theories with n ≥ 1 extra dimensions (an early suggestion for this possibility can be found in Ref. [3]). In an extra dimensional framework (generally deemed necessary for a proper formulation of quantum gravity in string theory), the observed 4-d M_P is a derived quantity. The true fundamental scale M_F in (4 + n)-d may be much smaller than M_P if the extra dimensions are compact, with a typical size R [4,5], according to M_P² ∼ M_F^{n+2} R^n. On general grounds, one may expect that the extra dimensions are initially small, i.e. ∼ M_F^{-1}, but dynamically grow to become large. This would then translate into a time variation of the 4-d gravitational coupling set by M_P. Given the above, we will adopt (4 + n)-d theories as our basic underlying framework. Such models can in principle alleviate the "hierarchy problem" (i.e., the smallness of the Higgs mass m_H ≈ 125 GeV compared to the implied scale of gravity) by lowering M_F to be not far above m_H.
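A quick numerical sketch of this relation (our illustration; the O(1) geometric factors of the compactification are dropped, and the relation M_P² ∼ M_F^{n+2} R^n quoted above is assumed):

```python
# Size of n equal extra dimensions implied by a given fundamental scale M_F,
# from M_P^2 ~ M_F^(n+2) * R^n with O(1) factors dropped.
M_P = 1.2e19          # 4-d Planck mass [GeV]
hbar_c = 1.973e-14    # conversion: 1 GeV^-1 = 1.973e-14 cm

def radius_cm(M_F_GeV, n):
    """Compactification size R (in cm) for n extra dimensions."""
    R_inv_GeV = (M_P**2 / M_F_GeV**(n + 2)) ** (1.0 / n)  # R in GeV^-1
    return R_inv_GeV * hbar_c

for n in (2, 6):
    print(n, radius_cm(1e4, n))   # M_F = 10 TeV; n = 6 gives R ~ 1e-13 cm
```

For n = 6 and M_F = 10 TeV this gives R of order 10^{-13} cm, i.e. roughly femtometer scale, while n = 2 gives a much larger (sub-millimeter scale) radius.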
As we will discuss in the following, the relative proximity of M_F to the TeV scale generally constrains the maximum reheat temperature of the Universe to ≲ GeV, which would point to masses for thermal relic DM at the GeV scale. This regime of masses has garnered significant attention in recent years, as an alternative to the traditional weak scale models, characterizing dark sectors that are potentially accessible to a wide range of laboratory experiments (see, for example, Ref. [6]). Note that large classes of thermal relic models of DM with a mass ≲ 10 GeV are ruled out by Cosmic Microwave Background (CMB) observations [7,8], making them less motivated as experimental targets. However, if DM annihilation cross sections can be lowered significantly, as in our scenario, those models can become viable again.
Next, we will describe the main features of our (4 + n)-d framework and sketch how the early Universe evolution, leading to variable gravity, unfolds in our scenario. We will then consider the implications of our model for DM production and outline some of its phenomenological consequences. A summary and some concluding remarks are given at the end of this work.
II. MODELS WITH EXTRA DIMENSIONS
In principle, the scale of compactification of n additional dimensions could be very high, which would make the underlying physics inaccessible to low energy measurements. One could instead assume that the fundamental scale of the (4 + n)-d theory is not very far from the weak scale ∼ O(TeV), potentially addressing its origin. This will be the main scenario we consider in what follows, for it can be motivated as a resolution of the hierarchy problem and may be probed in future high energy experiments.
We will adopt the general picture described in Ref. [9] as the starting point of the cosmological evolution. Initially, all spatial dimensions have a size ≳ M_F^{-1}. The basic idea is that one can construct a model of inflation that satisfies key observational requirements, where the initial inflationary era leads to rapid expansion along the visible 3-brane dimensions, while the compact dimensions remain fixed near the fundamental size ≳ M_F^{-1}. After this main inflationary era ends, the size of the extra dimensions, governed by the radion potential, starts to grow. During this time, the non-compact dimensions shrink and the radiation contained on the visible brane blue-shifts. Once the radiation on the brane and the radion potential have equal sizes, the contraction of the visible dimensions stops.
In typical scenarios for which Ref. [9] aims to provide a cosmological framework, the radion eventually reaches the minimum of its potential, where the extra dimensions attain their final stabilized size. After this point, the evolution of the Universe resembles the standard 4-d picture at low energies. A generic problem in this scenario is that the radion ends up as an oscillating modulus which is long-lived on cosmological time scales and can lead to early matter domination and possible conflict with cosmological data. This could be addressed, for example, by a brief period of secondary inflation, diluting the energy density in the radion field [9].
We will consider the framework of Ref. [9], outlined above, but depart from it by adding an interlude to the evolution of the cosmos before it ends up with stable extra dimensions. In our modified scenario, the radion potential initially has a different minimum, corresponding to larger compact dimensions than those arrived at eventually. This intervening cosmological era would then be governed by a 4-d gravitational interaction that could have been significantly weaker than the one observed today. Below, we will argue that the general demands of our proposed scenario do not fit within the specific model assumed in Ref. [9], where the radion potential acts as the main source of inflation.
Assuming that the radion potential V_rad(R) is the source of inflation along the 3-brane dimensions, corresponding to the visible world, the primordial density perturbations are given in [9], where R_I is the size of the extra dimensions during inflation and S is a parameter that needs to be O(10^{-3}) for a consistent inflationary scenario. To avoid significant deviations from scale-invariant perturbations, as required by data, R_I should be approximately constant, R_I ∼ M_F^{-1}, during inflation. As we will discuss below, the largest reheat temperature ∼ O(GeV) consistent with cosmological constraints can be attained for maximal n, and so we will mostly focus on the case with n = 6 extra dimensions and M_F ≳ 10 TeV, for general consistency with experimental bounds that we will discuss later. This implies that in order to have a suppressed Hubble constant during freeze-out, the radion potential may only transition to its "late" Universe minimum, corresponding to the present value of M_P, at T < GeV. This would typically demand that V_rad is governed by scales ≲ O(GeV). Using Eq. (2), the preceding considerations imply that δρ/ρ ≲ 7 × 10^{-8}, which is well below the measured value ∼ O(10^{-5}) [1]. Note that here the spectral index n_s is as given in [9]. Since the measured value is n_s ≈ 0.97 [8], choosing S ≪ 10^{-3} is not a viable option for enhancing the density perturbations in Eq. (2).
Given the above analysis, we then assume that an appropriate brane-localized potential is present for an inflaton Φ, such that it allows for sufficient inflation of ∼ 60 e-folds, or more, to address the large scale features of the cosmos. The density perturbations produced during the slow roll of Φ take the standard form, where H_I gives the inflationary Hubble scale and dΦ/dt is the time derivative of Φ. During inflation, 3H_I (dΦ/dt) ≈ −dV(Φ)/dΦ, which is subject to the slow-roll condition. For M_F R_I ∼ 1, choosing V(Φ)^{1/2} ∼ (100 GeV)² can then easily yield the observed level of density perturbations. Based on the above discussion, we may then assume that the radion potential stays at a minimum that yields R > R₀, where R₀ is today's size of the extra dimensions, until after freeze-out at T < GeV. We note that for an intermediate radius R = κR₀ the value of the Planck mass would be κ^{n/2} times larger, and the corresponding Hubble constant would be smaller by that amount. This would allow consideration of thermal relic DM with O(10) times smaller annihilation cross sections than in the standard picture, for κ ∼ 2 and n = 6.
Constraints on Extra Dimensional Cosmology
During the period in which the compact extra dimensions are changing size, the (4 + n)-dimensional metric is approximately described by Kasner solutions [9], in which a_i and b_i are the initial scale factors for the 3 large and n compact dimensions, respectively, and t_i is the initial time at which the contraction of the compact extra dimensions begins.
For the case where the compact dimensions are contracting, the values of k and l in the exponents are fixed by the Kasner conditions 3k + nl = 1 and 3k² + nl² = 1; note that these are the solutions with the opposite sign from those considered in Ref. [9]. For n = 6 extra dimensions, we obtain k = 5/9 and l = −1/9. This implies that if the compact dimensions shrink by a factor of κ, then the large dimensions will increase in size by a factor of κ⁵. Since the temperature cools as the 3-dimensional universe expands, we require that the anomalous expansion from the Kasner phase ends before the temperature falls below T_BBN ≈ 2 MeV [10,11], to avoid significant deviation from standard BBN. This means the Kasner phase must begin before the temperature falls below a corresponding minimum value T_min. The presence of large extra dimensions allows for the production of light Kaluza-Klein (KK) gravitons in the early Universe, which could cause conflict with observational data. To avoid such problems, one is led to assume that the Universe did not attain a high reheat temperature, which limits the scope of cosmological models considered in this framework. These considerations were revisited in Ref. [12], and the most stringent constraint, based on preserving the products of BBN, bounds the maximum reheat temperature T_max, where r ≈ 6 is a numerical factor entering the bound. To adapt the bound (12) to our Adagio scenario, we multiply its left-hand side by a factor κ^{n/2} to account for the fact that we assume a value of M_P ∝ R^{n/2} during the relevant cosmological era that is κ^{n/2} times larger (to avoid excessively complicated results, we have only considered this factor, which gives the main effect for general n). An additional factor of κ should also be included, to reflect the growth of the graviton KK mass scale by ∼ κ after the extra dimensions shrink to their late Universe size; the more massive the relic that decays, the more stringent the BBN bound on its abundance [12]. With these modifications, we obtain the relation (13) that applies to our scenario. The above bound could be somewhat alleviated if one accounts for the non-hadronic decay channels of KK gravitons [12], but we adopt it to be conservative. As can be seen from Eq. (13), the dependence of T_max on κ ≲ 10 is not very strong. Requiring T_min < T_max leads to a consistency condition on M_F and κ. The temperature at which the radion potential readjusts to its late value will be taken to be well below the freeze-out temperature, which implies T_* ≪ T_max. This allows for a simpler and more transparent treatment of the cosmic evolution in our work. Hence, we can assume that the DM relic abundance is set while the value of M_P is larger than today's value, but constant.
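As a consistency check on the quoted exponents (a sketch; the two Kasner-type conditions used below are the standard ones and reproduce the k = 5/9, l = −1/9 values given in the text for n = 6):

```python
from sympy import symbols, solve

# Exponents for 3 expanding and n contracting dimensions, fixed by
# 3k + n*l = 1 and 3k^2 + n*l^2 = 1 (Kasner conditions).
n = 6
k, l = symbols('k l', real=True)
sols = solve([3*k + n*l - 1, 3*k**2 + n*l**2 - 1], [k, l])
print(sols)   # one branch gives (k, l) = (5/9, -1/9)

# With k = 5/9 and l = -1/9, a factor-kappa contraction of the compact
# dimensions corresponds to a kappa**(k/|l|) = kappa**5 expansion of the
# large dimensions, as stated in the text.
```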
III. DEMONSTRATION WITH DARK PHOTON MEDIATED DARK MATTER
For the purpose of demonstration, we present an analysis using a concrete dark matter model with fermionic dark matter coupled to a dark photon, associated with a dark U(1)_D gauge interaction, which kinetically mixes with the SM photon. We will assume that the dark sector is localized on the same brane as the SM content, making it effectively 4-dimensional. For more general treatments of dark photons and kinetic mixing in extra dimensional models see, for example, Refs. [13-15], where other possible effects may also allow circumvention of the CMB bounds considered here.
The phenomenologically relevant part of the Lagrangian contains the dark matter field χ, of unit U(1)_D charge, the dark photon field A′_µ, the dark photon field strength tensor F′_µν, and the ordinary photon field strength tensor F_µν. The covariant derivative D_µ has gauge coupling e_D, and we define the "dark fine structure constant" α_D ≡ e_D²/(4π). When 2m_χ < m_A′, the annihilation of χ proceeds through an off-shell dark photon to SM charged particles. The s-wave thermal cross section for this annihilation to a charged particle with mass m₀, charge Q, and number of colors n_c is given in the references (see e.g. Ref. [18] for the general formalism, Ref. [19] for the treatment applicable to dark photons, and Ref. [20] for a simplified expression in the limit m₀ ≪ m_χ). In the very early universe, this annihilation cross section, together with the Hubble expansion, sets the relic abundance via thermal freeze-out, and at later times these same annihilations will affect the CMB. On the other hand, the dark photon can be produced on-shell in collisions of SM particles at colliders, with the dark photon then decaying invisibly to dark matter almost 100% of the time. This leads to the search channel of mono-γ plus missing energy at e⁺e⁻ colliders.
To determine the relic density, the standard freeze-out equations from Ref. [21] are used, where x_f = m_χ/T_f determines the freeze-out temperature T_f, g is the number of internal degrees of freedom of the relic, and g_* counts the relativistic degrees of freedom at freeze-out; for s-wave annihilation, j = 0 and σ₀ = ⟨σv⟩. The relic energy density in χ is then given by Eq. (18), where h ≈ 0.67 [1]. For the thermal freeze-out mechanism to work, it is required that freeze-out occur within the allowed temperature range of Eq. (19). The cold dark matter energy density of the Universe is observed to be Ω_CDM h² = 0.12 [1]. In our Adagio scenario, M_P at the time of freeze-out is κ^{n/2} times larger than today's value, where κ is as in Eq. (8) and M_P,0 ≈ 1.2 × 10^19 GeV is the value of the 4-dimensional effective Planck mass today. Since both Eq. (17) and Eq. (18) depend on the product M_P σ₀, a lower cross section can be exactly compensated by a larger Planck mass to produce the same relic abundance at the same freeze-out temperature.
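A minimal numerical illustration of this degeneracy, using a textbook-style s-wave freeze-out estimate in place of the paper's Eqs. (17)-(18), which we assume to carry the same M_P σ₀ dependence:

```python
import numpy as np

# Kolb-Turner-style s-wave freeze-out estimate: both x_f and Omega h^2
# depend on M_P and sigma_0 only through the product M_P * sigma_0.
def freeze_out(m_chi, sigma0, M_P, g=2, g_star=10.75):
    x_f = 20.0
    for _ in range(50):   # fixed-point iteration for x_f
        x_f = np.log(0.038 * (g / np.sqrt(g_star * x_f)) * M_P * m_chi * sigma0)
    omega_h2 = 1.07e9 * x_f / (np.sqrt(g_star) * M_P * sigma0)  # GeV^-1 prefactor
    return x_f, omega_h2

M_P0 = 1.2e19                                  # today's Planck mass [GeV]
std = freeze_out(1.0, 1e-8, M_P0)              # m_chi = 1 GeV, sigma_0 in GeV^-2
adagio = freeze_out(1.0, 1e-9, 10 * M_P0)      # 10x smaller sigma, 10x larger M_P
print(std, adagio)                             # identical x_f and Omega h^2
```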
Figure 1 shows how the Adagio cosmology modifies the parameter space that leads to thermal relic dark matter, along with the relevant constraints from CMB measurements from Planck [8], mono-γ searches from BABAR [16], and projections for mono-γ searches at Belle II [17], for the choice of model parameters α_D = 0.5 and m_A′ = 3m_χ. Figure 2 shows the same curves for the choice of model parameters α_D = 0.5 and m_A′ = 2.5m_χ. Figure 3 shows the same for the choice of model parameters α_D = 0.2 and m_A′ = 3m_χ. Based on the allowed temperature range from Eq. (19), m_χ could be as large as x_f T_max, which for typical values of x_f ∼ 20 and T_max ∼ 1 GeV (for M_F = 50 TeV) leads to m_χ ≲ 20 GeV.
IV. POSSIBLE SIGNALS
The main motivation for large extra dimensions is to solve the hierarchy problem by placing the electroweak scale close to the fundamental scale of gravity M_F. In this case, quantum gravity effects can be searched for at colliders. A variety of ATLAS [22,23] and CMS [24,25] searches have constrained the fundamental scale to be well above the TeV scale, with the strongest limits requiring M_F ≳ 9.2 TeV [24]. The ultimate LHC reach for the fundamental scale is M_F = 13 TeV [23]. As center of mass energy is critical for reaching much larger values of M_F, we expect that a future hadron collider with a center of mass energy of 100 TeV [26] would be able to access M_F ≲ 100 TeV.
The large extra dimension framework naturally points to thermal relic DM around the GeV scale, which can be made compatible with the CMB constraints in the Adagio scenario. The dark photon mediator can be searched for experimentally. While there are many collider searches for dark photons, in this scenario the dark photon decays to dark matter instead of to SM final states. Searches for mono-γ at Belle II [17] can, in principle, probe some of the relevant parameter space in our scenario, as seen in Figs. 1, 2, and 3, but that region of parameters is disfavored by the Planck limits.
Most of the mass region in our dark photon mediated realization of the Adagio scenario will require higher energy, with integrated luminosity comparable to that of Belle II, 50 ab^{-1}. A future lepton collider with high luminosity, such as FCC-ee or CEPC, could search for dark photons beyond the mass reach of Belle II [27]. We note that the cross section for e⁺e⁻ → γA′, in the limit m²_{A′} ≪ s, scales as 1/s [28], where s is the center of mass energy. For a lepton collider with s = m_Z², where m_Z = 91.2 GeV is the Z vector boson mass, integrated luminosities of O(100 ab^{-1}) have been envisioned [27]. Since the cross section for A′ production at such a facility would be ∼ 100 times smaller compared to that at Belle II, we then expect a sensitivity to ϵ that is ∼ 10 times worse. Hence, for a future circular lepton collider operating at the Z pole, we expect a sensitivity to ϵ ∼ 5 × 10^{-4}, for m_A′ ≳ 10 GeV, corresponding to m_χ ≳ 3 GeV in Fig. 1.
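The scaling behind this estimate, spelled out as arithmetic (a sketch; the luminosities assumed below are the ones quoted in the text):

```python
import math

# sigma(e+e- -> gamma A') ~ eps^2 / s for m_A'^2 << s, so the signal yield
# scales as eps^2 * L / s and the reach in eps degrades like sqrt(s / L).
s_belle2 = 10.58**2                 # GeV^2, Upsilon(4S) collision energy squared
s_zpole = 91.2**2                   # GeV^2, Z pole
L_belle2, L_zpole = 50.0, 100.0     # ab^-1, as assumed in the text

signal_ratio = (s_belle2 / s_zpole) * (L_zpole / L_belle2)
eps_degradation = 1.0 / math.sqrt(signal_ratio)
print(round(eps_degradation, 1))    # ~6; neglecting the luminosity gain gives ~10
```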
V. POTENTIAL ALTERNATIVE UTILITY
Here, we will examine "freeze-in" [29] as a possible alternative mechanism for the production of DM. In this framework, DM and its associated interactions are never in thermal equilibrium. This points to very feeble interactions between the visible components of the cosmic energy density and the dark sector. It is interesting that such a connection between the two sectors could actually be motivated by astrophysical data that seem to favor non-standard cooling mechanisms for stellar objects [30]. This could be realized through the coupling of electrons, for example, to a light boson.
Let us, for simplicity, assume that a light scalar ϕ, in the keV regime, couples to electrons with strength y_e. One can roughly take y_e ∼ 10^{-15} [31] to be in the regime of interest for a possible explanation of the anomalous stellar cooling hints. A rough estimate of the freeze-in abundance Y_χ ≡ n_χ/s produced via the light mediator ϕ, where n_χ is the DM number density and s ∼ g_* T³ is the entropy density, can be obtained from Eq. (21), with w a numerical factor of O(π^{-6}) [29] and y_χ the coupling of ϕ to DM χ. For m_χ ∼ 0.1 GeV, as an example, DM self-interaction limits require y_χ ≲ 10^{-3} [31]. For g_* ∼ 10, using m_χ ∼ 0.1 GeV, for example, we see that Y_χ is much smaller than the ∼ 10^{-9} required. How about an interaction with muons? Let us assume the coupling y_µ ϕ µ̄µ. One can estimate the rate for µ⁺µ⁻ → γϕ as ∼ α y_µ² T. Requiring this process to be out of equilibrium (for a freeze-in scenario, and to avoid overproducing ϕ, which could act as extra radiation and cause tension with BBN) would yield y_µ ≲ 10^{-9}, and hence we may adopt y_µ ∼ 10^{-10}. Using Eq. (21) with y_e → y_µ, we find Y_χ ∼ 10^{-10}, which is about O(10) too low. However, in an Adagio scenario with M_P → O(10) M_P, one may accommodate a freeze-in mechanism using muon initial states. At the same time, the coupling of ϕ to electrons may provide an explanation of the anomalous stellar cooling mentioned above.
Here we also note that for m_χ ∼ 0.1 GeV we may assume that the reheat temperature is ∼ 0.1 GeV. In that case, since the mass of the tau lepton is m_τ ≈ 1.8 GeV, there would be a suppressed thermal τ population. Thus, one may assume that its coupling to ϕ is larger than that to muons, g_τ ∼ 10^{-8}, without overproducing ϕ. A 2-loop diagram can induce [32] δg_e ∼ g_ℓ (α²/16π²)(m_e/m_ℓ), where ℓ = µ, τ. Here, a roughly m_ℓ² scaling of the lepton couplings to ϕ has been assumed, as a possibility.
More detailed calculations are needed for more reliable estimates and the preceding discussion is only meant to elucidate another potential application of our Adagio cosmological scenario.
VI. CONCLUDING REMARKS
We have shown how extra-dimensional models can realize a changing M_P, slowing the timescales of early universe cosmology. Using this Adagio mechanism to reduce the Hubble expansion in the early universe, GeV-scale thermal relic dark matter, disfavored by CMB constraints, becomes viable again. Naturalness arguments that the electroweak scale should not be too separated from the fundamental scale of gravity also point to the GeV scale for thermal relics in this scenario. In such a case, the LHC or future hadron colliders can directly look for quantum gravity effects. This new avenue for producing thermal relics with the correct abundance provides a new motivation for GeV-scale dark sector searches.
Though we focused on low mass thermal relic DM production, an Adagio scenario can, in principle, be implemented in other contexts as well. This could potentially affect what is often called the unitarity bound on the thermal relic DM mass, which requires it to be below ∼ 100 TeV [33]. If the minimum requisite annihilation cross section can be lowered below the canonical values, one could entertain DM masses above this bound. However, this scenario would require M_F to be much larger than that considered in this work, to allow reheating to much higher temperatures.
Another potential consequence of a smaller Hubble rate, corresponding to larger causally connected volumes, could be in the relation between the temperature at which primordial black holes may form in the early Universe and their typical masses. In the presence of an Adagio interlude, one would generally expect larger masses for such black holes, given the larger collapsing Hubble volume at a given temperature.
Of course, one may also entertain the opposite 'Allegro' scenario, where the Hubble rate is larger than the standard value as a function of temperature (or energy density, in general). This could, for example, allow larger cross sections for thermal relics than the typical expectation, providing other alternatives for viable DM models. However, we will not discuss this possibility further here and leave it for future work.
FIG. 1. Plot of dark matter mass m_χ versus kinetic mixing ϵ in a dark photon mediator model with α_D = 0.5 and dark photon mass m_A′ = 3m_χ. Constraints from Planck [8] (green), BABAR [16] mono-γ searches (blue), and the projected reach of Belle II [17] mono-γ searches (red dotted) are shown, along with curves showing the parameter space that reproduces the observed relic abundance for standard cosmology (black, dotted) and for our Adagio cosmology with 10 times larger M_P (black, solid) in the early universe. The constraint on T_min is shown by the vertical orange solid line; T_max for M_F = 13 TeV (within LHC reach) corresponds to the vertical orange dashed line, and T_max for M_F = 50 TeV (within FCC-hh reach) to the vertical orange dotted line.
"Physics"
] |
STUDY OF THE ZIZIPHUS JOAZEIRO PEEL FOR INDIGO BLUE ADSORPTION.
The Zizyphus joazeiro Mart peel (ZJP) was evaluated to remove indigo blue (IB) from aqueous medium through the adsorption process. The high value of the maximum adsorptive capacity (50 mg.g-1) of ZJP for this textile dye shows that this natural adsorbent is efficient in IB adsorption. Tests on glass columns support this claim. ZJP was more efficient (90.5%) than activated carbon (15.2%) in the removal of IB present in water. For this study the surface of ZJP was examined by scanning electron microscopy, along with tests on the influence of grain size and ZJP mass on the removal of IB in water. Tests were also performed on the influence of stirring time on the adsorptive process. Subsequently, the maximum adsorptive capacity (MAC) of ZJP for IB was determined. The MAC value represents the amount of IB that can be retained in 1.0 g of ZJP. Finally, filtration studies were performed on glass columns and the efficiency of ZJP was compared to that of activated carbon for the removal of IB.
Introduction:-
Dyes, including textile dyes, are an important class of pollutants in the aquatic environment (Gupta & Suhas, 2009). Every year a great deal of textile dye is disposed of in the environment through industrial effluents that do not receive adequate treatment. Waste of the industrial textile dye containing indigo blue (IB) and its derivative anthranilic acid is hazardous to the aquatic environment (Zhu et al., 2016). Therefore, studies on methodologies for the removal of this and other dye pollutants are of extreme importance. Among the most studied techniques is adsorption using simple (Raymundo et al., 2010) or modified (Zhu et al., 2016) natural adsorbents. The use of this material is justified not only by its adsorption efficiency but also by its high availability and low cost (Gupta & Suhas, 2009). Kebaili et al. (2018) studied orange industry residues as an adsorbent for methylene blue. Various other low cost adsorbents have been used for the treatment of water containing dyes (Gupta & Suhas, 2009). However, Zizyphus joazeiro Mart, a Brazilian northeastern plant, used only for medicinal and other applications, had not been studied for dye removal in water. In this study we investigated the use of Zizyphus joazeiro Mart. peel (ZJP) as a possible IB-removing agent in water. For this study the surface of ZJP was examined by scanning electron microscopy, along with tests on the influence of grain size and ZJP mass on the removal of IB in water. Tests were also performed on the influence of stirring time on the adsorptive process. Subsequently, the maximum adsorptive capacity (MAC) of ZJP for IB was determined. The MAC value represents the amount of IB that can be retained in 1.0 g of ZJP. Finally, filtration studies were performed on glass columns and the efficiency of ZJP was compared to activated carbon for the removal of IB.

The activated carbon (AC) was obtained from Casa das Químicas (Flores da Cunha-RS, Brazil). The following equipment was used in the laboratory tests: laboratory oven (Quimis Q-317 B model), particle size sieves (Granutest), analytical balance (Shimadzu AY 220 model), sputter coater (Shimadzu IC-50 Ion Coater model), scanning electron microscope (Shimadzu SSX 550 model), pH meter (PHTEK), magnetic stirrer (Warmnest), UV/Vis spectrophotometer (Even model), vacuum filtration pump (Prismatec) and 13,000 rpm microcentrifuge (Evlab).
Methods:-
Preparation of the ZJP:-
The Zizyphus joazeiro Mart. peel (ZJP) was triturated in an industrial blender to obtain particles between 2.38 and 4.29 mm, between 1.19 and 2.38 mm, and finally particles smaller than 1.19 mm. The particle sizes were selected with particle size sieves (Granutest). Subsequently the material was washed 10 times in distilled water (pH 7.0) and dried in a laboratory oven for 35 hrs at 40 °C. After drying, the material was stored in hermetically sealed plastic bottles.
Granulometry adsorption tests:-
The purpose of this step was to evaluate which particle size would be more efficient in dye adsorption. For this, aqueous solutions (pH 7.0) containing IB (1,000 mg/L) were stirred (1,000 rpm) for 5 minutes at 25 °C in the presence of 4.0 g of ZJP of different particle sizes (2.38-4.29 mm, 1.19-2.38 mm, or less than 1.19 mm). Subsequently the solutions were vacuum-filtered to retain the ZJP containing the IB dye. The supernatants were centrifuged at 13,000 rpm for 3 minutes to remove ZJP residues. After centrifugation the supernatants were analyzed in a spectrophotometer at 573 nm to evaluate the amount (%) of IB retained in the ZJP.
Scanning electron microscopy (SEM) study:-
Analyses of the surfaces of the ZJP particles (<1.19 mm) were performed using a scanning electron microscope. Before analysis the ZJP particles were coated with a thin layer of gold using the sputter coater. Subsequently the surface of the sample was visualized by SEM (electron beam of 20 kV). The use of higher voltage electron beams was attempted; however, at 25 kV the material was destroyed, making it impossible to visualize the ZJP surface.
Influence of ZJP mass in the adsorptive process:-
Aqueous solutions (pH 7.0) containing IB (1,000 mg/L) were stirred (1,000 rpm) for 5 minutes at 25 °C in the presence of different masses of ZJP (<1.19 mm) (0.25, 0.50, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, and 5.0 g). Subsequently the solutions were vacuum-filtered (figure 2) to retain the adsorbent containing the IB dye. The supernatants were centrifuged at 13,000 rpm for 3 minutes to remove ZJP residues. After centrifugation the supernatants were analyzed in a spectrophotometer at 573 nm to evaluate the amount (%) of dye retained in the ZJP. Each mass point was made in triplicate.
Influence of stirring time:-
Aqueous solutions (pH 7.0) containing 1,000 mg/L IB were stirred (1,000 rpm) in the presence of 1.5 g ZJP (<1.19 mm) at 25 °C for different times (0.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, and 10.0 minutes). Subsequently the solutions were vacuum-filtered (figure 2) to retain the adsorbent containing the IB dye. The supernatants were centrifuged at 13,000 rpm for 3 minutes to remove ZJP residues. After centrifugation the supernatants were analyzed in a spectrophotometer at 573 nm to evaluate the amount (%) of IB retained in the ZJP. Each time point was made in triplicate.
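For clarity, the quantities extracted from these batch tests can be computed as follows (a sketch; the calibration slope below is a hypothetical placeholder, since the paper's calibration curve is not reproduced here):

```python
# Removal percentage and adsorbed amount q_e [mg/g] from the absorbance
# measured at 573 nm, via a linear Beer-Lambert calibration A = slope * C.
def dye_removal(abs_supernatant, calib_slope, C0_mg_L, V_L, m_g):
    """Percent IB removed and uptake q_e for one batch test."""
    Ce = abs_supernatant / calib_slope              # equilibrium conc. [mg/L]
    removal_pct = 100.0 * (C0_mg_L - Ce) / C0_mg_L  # % of dye retained in ZJP
    q_e = (C0_mg_L - Ce) * V_L / m_g                # mg of dye per g of ZJP
    return removal_pct, q_e

# Example with assumed values: A = 0.45, slope = 0.002 L/mg (hypothetical),
# C0 = 1,000 mg/L, 0.05 L of solution, 1.5 g of ZJP.
print(dye_removal(0.45, 0.002, 1000.0, 0.05, 1.5))
```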
Influence of dye concentration and determination of MAC value:-
The objective of this step was to calculate the MAC value using the Langmuir mathematical model. This value represents the maximum amount of IB that can be retained in 1 g of ZJP. For this, aqueous solutions (pH 7.0) containing increasing concentrations of IB were stirred at 1,000 rpm for 4 minutes (time previously optimized) at 25 °C in the presence of 1.5 g (mass previously optimized) of ZJP. The supernatants were centrifuged at 13,000 rpm for 3 minutes to remove ZJP residues. After centrifugation the supernatants were analyzed in a spectrophotometer at 573 nm to evaluate the amount (%) of dye retained in the ZJP.
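A minimal sketch of how the MAC follows from the Langmuir model via its usual linearized form (the data below are synthetic, chosen only to illustrate the fit; the paper's measured MAC is 50 mg.g-1):

```python
import numpy as np

# Langmuir isotherm: q_e = q_max * K * Ce / (1 + K * Ce).
# Linearized form:   Ce/q_e = Ce/q_max + 1/(K * q_max),
# so a line fit of Ce/q_e vs Ce gives q_max (the MAC) and K.
Ce = np.array([50., 100., 200., 400., 800.])     # equilibrium conc. [mg/L]
qe = np.array([16.7, 25.0, 33.3, 40.0, 44.4])    # uptake [mg/g] (synthetic)

slope, intercept = np.polyfit(Ce, Ce / qe, 1)    # linear regression
q_max = 1.0 / slope                              # MAC [mg/g]
K = slope / intercept                            # Langmuir constant [L/mg]
print(q_max, K)                                  # ~50 mg/g for this data
```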
Experiments in glass columns:-
Three glass columns (38 x 3 cm) were filled with 8.2 g of activated carbon (AC) to 5.0 cm of height and 20 g of gravel to 9.0 cm of height. Another three glass columns were equally filled, but with the addition of 10.0 g of ZJP to 5.0 cm of height. Subsequently, 50 ml of 1,000 mg/L IB aqueous solution was percolated through the columns. The flow rate used was 2.0 ml/min for all columns (optimized in our laboratory). The filtered solutions were centrifuged at 13,000 rpm and analyzed in a spectrophotometer at 573 nm to evaluate the amount (%) of IB retained in the column.
Granulometry adsorption tests:-
The results of this analysis demonstrated that the smaller particles (<1.19 mm) provided higher adsorption of IB (figure 1). After these tests, the particles of smaller size (<1.19 mm) were selected for analysis by scanning electron microscopy (SEM) and for the other later stages. These granulometry results can be explained by the increase in contact surface provided by smaller particles. However, this is not always true; for example, Raymundo et al. (2010) demonstrated that larger sugarcane bagasse particles were more efficient for removing congo red dye than smaller particles.
Scanning electron microscopy (SEM) study:-
The visualization of the ZJP particles at the larger scale (20.0 micrometers) appears to reveal a heterogeneous surface (figure 2). However, visualization at the smaller scale (2.0 micrometers) showed that the surface is homogeneous, without irregularities, and arranged in the form of smooth plates (figure 2). This is an unfavorable feature for the adsorption process, since irregular surfaces are indicated for efficient adsorption (Ribeiro et al., 2016). The morphologically homogeneous characteristics of the ZJP surface suggest a relative disadvantage in the physical interaction between this adsorbent and IB. However, it has been observed that some adsorbents with homogeneous surfaces are also capable of interacting with pollutants in an aqueous medium. These interactions occur due to the presence of chemical groups on the adsorbents and not only to the physical deposition of the pollutant molecules on a heterogeneous surface (Ribeiro et al., 2011).
Experiments in glass columns:-
The results of the filtration experiments in glass columns revealed that the retention of dye in the columns is greater in the presence than in the absence of ZJP. Columns containing only gravel and activated carbon were able to remove 15.2% of IB, whereas the columns containing gravel, activated carbon and ZJP removed 90.5% of the dye (figure 7).
Conclusion:-
The results of the experiments show that ZJP may be used as an alternative natural adsorbent in the treatment of textile effluents containing IB. The high MAC value and the high adsorption percentage of IB in the columns support this assertion.
"Engineering"
] |
Composite Higgs Models, Technicolor and The Muon Anomalous Magnetic Moment
We revisit the muon magnetic moment (g-2) in the context of Composite Higgs models and Technicolor, and provide general analytical expressions for computing the muon magnetic moment stemming from new fields such as neutral gauge bosons, charged gauge bosons, neutral scalars, charged scalars, and exotic charged leptons. Under general assumptions we assess which particle content could address the $g-2_{\mu}$ excess. Moreover, we take a conservative approach and derive stringent limits on the particle masses in case the anomaly is otherwise resolved, and comment on electroweak and collider bounds. Lastly, for concreteness we apply our results to a particular Technicolor model.
I. INTRODUCTION
The Standard Model (SM) of elementary particles is in excellent agreement with the experimental data and has withstood electroweak precision tests throughout the years. The nature of electroweak symmetry breaking is one of the most important problems in particle physics, and the 125 GeV resonance discovered at the LHC [1] has many of the characteristics expected for the SM Higgs boson. Despite its success, there are observational reasons to believe that the Standard Model is not the whole story, such as dark matter and neutrino masses, as well as more fundamental ones such as the hierarchy problem.
Here we assess some models, capable of addressing the aforementioned matters, in light of the muon anomalous magnetic moment.
The muon magnetic moment is one of the most precisely measured observables in particle physics. There is a long-standing discrepancy of 3.6σ between the theoretical SM prediction for g-2 and its measured value [2]. This deviation gave rise to numerous new physics scenarios speculated to be plausible explanations of the exciting excess. One of the striking features of the muon magnetic moment is its sensitivity to new physics effects coming from low to very high energy scale models. Moreover, it is fair to say that the majority of the extensions of the SM give sizeable corrections to g-2 in a certain region of parameter space. However, due to the theoretical and experimental uncertainties surrounding this quantity, a conservative approach is needed in order to derive robust results.
Currently, the difference, $a_\mu^{\rm exp} - a_\mu^{\rm SM} = (296 \pm 81) \times 10^{-11}$, which corresponds to 3.6σ [3], can be reduced to 2.4σ if one uses τ data in the hadronic corrections [2]. Thus, it is clear that a large improvement in the theoretical SM calculation is demanded before claiming a new physics discovery.
Therefore, we take a conservative approach in this work. We discuss the possibility of addressing g-2 with new fields and derive bounds on the particle masses by enforcing their contributions to lie within the error bars reported by the experiments, having in mind interactions that appear in Composite Higgs models (CHM) and Technicolor (TC) models, for the following reasons: (i) CHM provide a plausible solution to the hierarchy problem since the Higgs sector is replaced by a new, strongly coupled sector. The strong sector contains a global symmetry, which is then spontaneously broken at a scale Λ, and the Higgs is identified as one of the Nambu-Goldstone bosons [4]. CHM can also have explicit global symmetry breaking through linear couplings of SM fields to operators in the strong sector, thus inducing electroweak symmetry breaking and generating the Higgs mass [4]. (ii) Alternatively, inspired by QCD, Technicolor was invented to provide a natural and consistent quantum-field-theoretic description of electroweak (EW) symmetry breaking without elementary scalar fields. TC models are based on the introduction of a new strong interaction, in which the Higgs boson is a composite field of so-called technifermions. The beauty of TC as well as its problems are clearly summarized in Refs. [5,8,9]. In particular, in the model described in Ref. [10], the electroweak symmetry is broken dynamically by a technifermion condensate generated by the $SU(2)_{TC}$ Technicolor (TC) gauge group [11].
One of the most distinct differences between those models is that in CHM the electroweak symmetry is not directly broken by a fermion condensate. Rather, the fermion condensates become strong at a high scale, say Λ = 10 TeV, breaking a global symmetry; this results in a heavy pseudo-scalar that mixes with the Higgs, which needs to be introduced to generate the Yukawa Lagrangians and the fermion masses. In other words, electroweak symmetry breaking in such models is simply the vacuum alignment produced by the Higgs-pseudoscalar mixing. This addresses the hierarchy problem because at a scale larger than 10 TeV there is a condensate, whereas in Technicolor the technifermions, which carry $SU(2) \times U(1)$ quantum numbers, condense to give masses to the gauge bosons. Despite these subtle differences in the pattern of symmetry breaking, Technicolor and CHM share similarities as far as the muon magnetic moment is concerned.
In summary, motivated by interesting aspects of both CHM and TC models and by the g-2 discrepancy, we revisit g-2 in terms of simplified Lagrangians which arise in those models, and lastly apply our findings to a particular Technicolor model. We begin by discussing the muon magnetic moment. As important as the charge, mass, spin and lifetime of a given particle, magnetic moments are fundamental quantities. On the classical level, an orbiting particle with electric charge e and mass m exhibits a magnetic dipole moment given by $\vec{\mu} = (e/2m)\,\vec{L}$, where $\vec{L}$ is the orbital angular momentum. To measure the magnetic moment we need the presence of a magnetic field $\vec{B}$, because the interaction Hamiltonian goes as $-\vec{\mu}\cdot\vec{B}$. However, this classical view is abandoned for elementary particles, and the intrinsic magnetic moment is obtained by replacing the angular momentum with the spin, such that $\vec{\mu} = -g\,[e/(2m)]\,\vec{S}$. The deviation from the Dirac value g/2 = 1, obtained at tree level, is $a_\mu = (g-2)/2$, the so-called muon anomalous magnetic moment. The experimental and theoretical results are reported in terms of it. The SM contributions to $a_\mu$ are divided into three parts: electromagnetic (QED), electroweak (EW) and hadronic ones. The QED part consists of all photonic and leptonic contributions and has been evaluated up to 4 loops, whereas the EW part involves the SM bosons ($W^\pm$, Z, H) and has been computed up to three loops [2]. Lastly, the hadronic contributions involve quarks running in the loops. The hadronic contributions carry the largest uncertainties, in particular the hadronic vacuum polarization, which is calculated from $e^+e^- \to$ hadrons or $\tau \to$ hadrons data [2], and the hadronic light-by-light scattering, which currently cannot be determined from data [12]; these give rise to the most relevant errors. In summary, the SM prediction for the muon anomalous magnetic moment is $a_\mu^{\rm SM} = (116591785 \pm 51) \times 10^{-11}$ [3]. Meanwhile, the E821 experiment at Brookhaven National Laboratory, which studied the precession of muons and anti-muons in a constant external magnetic field as they circulated in a confining storage ring, reported $a_\mu^{\rm E821} = (116592080 \pm 63) \times 10^{-11}$ [13,14]. This results in a 3.6σ excess. Out of the ±81 error, $\pm 51 \times 10^{-11}$ is associated with theoretical uncertainties; in particular, $\pm 39 \times 10^{-11}$ stems from the lowest-order hadronic contribution and $\pm 26 \times 10^{-11}$ arises from hadronic light-by-light contributions [3]. An important effort has been put forth to reduce the theoretical and experimental errors. In light of the g-2 experiment at FERMILAB, both uncertainties are expected to be substantially reduced, bringing the total error down to $\pm 34 \times 10^{-11}$ [3,15]. In our figures, we exhibit a dark (light) gray band that delimits the mass range which accommodates the g-2 anomaly, and two red horizontal lines, where the solid (dashed) one refers to the current (projected) 1σ bound based on the present (expected) ±81 (±34) error bar.
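As a quick numerical check of the quoted numbers, the snippet below reproduces the discrepancy and its significance from the SM and E821 values above; it is a back-of-the-envelope sketch in which the two uncertainties are combined in quadrature, assuming they are independent.

```python
# Reproducing the quoted g-2 discrepancy from the SM prediction and
# the E821 measurement, combining errors in quadrature (assumed independent).
import math

a_sm,  err_sm  = 116591785e-11, 51e-11   # SM prediction [3]
a_exp, err_exp = 116592080e-11, 63e-11   # E821 measurement [13,14]

delta = a_exp - a_sm
err   = math.hypot(err_sm, err_exp)

# Prints 295; the text quotes 296, presumably from rounding in the inputs.
print(f"delta a_mu = {delta / 1e-11:.0f} x 10^-11")
print(f"total error = {err / 1e-11:.0f} x 10^-11")   # ~81
print(f"significance = {delta / err:.1f} sigma")     # ~3.6
```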
The muon magnetic moment is tightly related to the flavor violating µ → eγ decay; thus our limits are also strongly correlated with those arising from flavor violating decays. Current data impose $BR(\mu \to e\gamma) < 5.7 \times 10^{-13}$ [16], with a rate controlled by the new physics scale Λ and the flavor violating coupling constant $\lambda_{\mu e}$ [17]. Notice that for new physics processes which occur at the TeV scale, rather small couplings are required, although one can in principle postulate the presence of new symmetries or simply make use of suppressed non-diagonal couplings [18-23]. Hereafter we focus on the muon magnetic moment only, but the reader should keep in mind that competitive µ → eγ bounds might exist. Having reviewed the status of the muon anomalous magnetic moment, we now discuss general features of Composite Higgs (CH) and Technicolor models.
III. COMPOSITE HIGGS AND TECHNICOLOR MODELS
Composite Higgs models (CHM) are extensions of the SM where the Higgs boson is a bound state of new strong interactions. These models are arguably the leading alternative to supersymmetric models, since they provide an explanation of the hierarchy problem. One of the main features of CHM is the existence of new particles with masses at the TeV scale that are excitations of the composite Higgs. Those particles could potentially be produced and discovered in the foreseeable future at the LHC. Moreover, such particles could produce deviations from the SM predictions in low energy observables such as the muon magnetic moment, which is the focus of this work. There are various ways to generate the Higgs boson, but the models can be broadly divided into two categories: (i) the Higgs is a generic composite bound state of strong dynamics (TC); (ii) the Higgs is a Goldstone boson of spontaneous symmetry breaking (CHM). (See Refs. [24-29] for recent phenomenological works on CHM.) The possibility that the Higgs boson is a composite state instead of an elementary one is akin to the phenomenon of spontaneous symmetry breaking described by the effective Ginzburg-Landau Lagrangian, which can be derived from the microscopic BCS theory of superconductivity describing the electron-pair interaction (or the composite state in our case). This dynamical origin of spontaneous symmetry breaking has been discussed in many models, with technicolor being the most popular one. The early technicolor models suffered from problems such as flavor changing neutral currents (FCNC) and contributions to electroweak observables in disagreement with experimental data, as can be seen in the reviews of Refs. [5,9]. Nevertheless, the TC dynamics may be quite different from the known strong interaction theory, i.e., QCD. This fact led to the walking TC proposal [6], where the incompatibility with the experimental data is resolved by making the new strong interaction almost conformal and changing appreciably its dynamical behavior. In the latter TC theories the technifermion self-energy acquires large current masses, and subsequently so do the pseudo-Goldstone bosons formed from them. An almost conformal TC theory can be obtained when the fermions are in the fundamental representation by introducing a large number of TC fermions ($n_F$), leading to an almost vanishing β function and a flat asymptotic coupling constant. This procedure may, however, induce a large S parameter incompatible with current electroweak measurements. The perturbative expression for the S parameter (in the massless limit) is $S = N_d\, d(r)/(6\pi)$ (illustrated numerically below), where $N_d$ is the number of left-handed electroweak technidoublets and d(r) is the dimension of the technifermion representation. Data require the value of the S parameter to be less than about 0.3; TC models with fermions in representations other than the fundamental one, such as the Minimal [7] (MWT) and Ultraminimal [30] (UMT) TC models, are viable models that accommodate the measured S parameter. As for flavor changing neutral-current (FCNC) processes, the vector bosons that mediate generation-changing transitions must have large masses, ∼ 10³ TeV. Moreover, corrections from heavy fermions (top) and pseudo-scalars, which are set by the Technicolor symmetry breaking scale, require the latter to be in the ballpark of ∼ 1 TeV. Additional limits arise if one wants to incorporate dark matter particles [27]. Studies using higher dimensional operators have been performed and have shown that with a ∼ TeV symmetry breaking scale such a model might
reproduce the correct relic abundance while avoiding direct [31] and indirect [32] detection bounds. See Ref. [33] for a review in light of current LHC data.
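To make the S-parameter tension above concrete, the sketch below evaluates the naive massless-limit estimate $S = N_d\, d(r)/(6\pi)$ against the S ≲ 0.3 requirement; this is a rough consistency check only, since walking dynamics can change the estimate substantially, and the example values of $N_d$ and d(r) are illustrative.

```python
# Naive perturbative S parameter for N_d technidoublets in a
# representation of dimension d_r (massless limit). Walking dynamics
# can reduce this estimate, so treat it as a rough consistency check.
import math

def s_parameter(n_doublets: int, d_r: int) -> float:
    return n_doublets * d_r / (6.0 * math.pi)

S_MAX = 0.3  # approximate experimental requirement quoted in the text

# Example: technifermions in the fundamental of an SU(3) TC group (d_r = 3)
for n_d in (1, 2, 4):
    s = s_parameter(n_d, 3)
    status = "ok" if s < S_MAX else "excluded (naively)"
    print(f"N_d = {n_d}: S = {s:.2f} -> {status}")
```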
Setting aside those subtleties, CH and TC models share similar features as far as the muon magnetic moment is concerned. Some models postulate the existence of (but not limited to) neutral vectors, charged vectors, neutral pseudo-scalars $\phi^0$, charged scalars $\phi^+$, exotic charged leptons (L), and even doubly charged gauge bosons (see Refs. [34-36]). We point out that precision electroweak observables such as the oblique parameters S and T result in a robust bound on the CH scale of symmetry breaking, namely Λ > 0.8 − 5.5 TeV [4]. The precise limit strongly depends on the particular details of the model [4]. In the context of TC models, the precision electroweak parameters and the constraints from FCNC processes restrict TC models to a specific dynamics, walking TC [5] in our case, and the contribution of the TC sector should still lead to a value of the S parameter compatible with the experimental data [33]. That being said, here we derive analytical expressions to compute the muon magnetic moment for several particles that are present in some Technicolor and CH models in a general setting, assuming that the possible contributions of these theories arise at the TeV energy scale.
IV. COMPOSITE HIGGS MODELS AND TECHNICOLOR CONTRIBUTIONS TO MUON ANOMALOUS MAGNETIC MOMENT
In general, after the chiral symmetry breaking of the strongly interacting sector, a large number of Goldstone bosons can be formed, and only a few of these degrees of freedom are absorbed by the weak interaction gauge bosons, as is the case in TC models. The others may acquire small masses, resulting in light pseudo-Goldstone bosons that have not been observed experimentally. However, in the TC models considered in this work these bosons obtain masses that are large enough to have escaped detection at present accelerator energies; the possible pseudo-scalar bosons can be listed according to their different quantum numbers. Some works have devoted attention to the muon magnetic moment in the context of Composite Higgs models, such as Refs. [36-38]; here we extend those by providing a more accurate and general calculation of the g-2 contributions stemming from new fields.
• Pseudo-Scalars: As commented in the previous section, pseudo-scalars give rise to corrections to the muon magnetic moment through an effective Lagrangian. The resulting correction to g-2, expressed in terms of $\lambda = m_\mu/M_{\phi_1}$, is in agreement with Refs. [40-43].
In Fig. 1(a) we exhibit the Feynman diagram for this process. Notice that the additional muon mass suppression is typical of neutral scalar corrections to g-2, which are hence typically neglected. Additionally, we have included the energy scale Λ that reflects the Technicolor or CHM symmetry breaking scale. Those two factors suppress the overall correction. Moreover, note that the contribution arising from a neutral pseudo-scalar is always negative and therefore cannot accommodate the muon magnetic moment excess. We point out that this result is general and applicable to any extension of the SM; such pseudo-scalars are quite common in CHM and Technicolor models. As mentioned, the muon mass and symmetry breaking scale suppressions dwindle the general contributions to g-2 stemming from pseudo-scalars. In Fig. 2 the black dashed line is our numerical result for this pseudo-scalar, which has been multiplied by an overall factor of $10^6$ so that it can be shown in the figure.
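As a numerical illustration of why this pseudo-scalar contribution is negative and small, the sketch below evaluates the standard one-loop expression for a neutral boson coupling as $g_s + g_p\gamma_5$ to muons (the textbook form found in g-2 reviews, consistent with Refs. [40-43]); the coupling value is an assumption, and the additional Λ suppression discussed above is not included.

```python
# One-loop neutral (pseudo)scalar contribution to a_mu for a boson
# coupling as g_s + g_p*gamma_5 to muons (standard textbook form,
# muon running in the loop). Coupling and mass grid are illustrative.
import math
from scipy.integrate import quad

M_MU = 0.10566  # muon mass in GeV

def delta_a_mu(m_phi: float, g_s: float = 0.0, g_p: float = 1.0) -> float:
    r = (M_MU / m_phi) ** 2
    def integrand(x: float) -> float:
        return (g_s**2 * x**2 * (2.0 - x) - g_p**2 * x**3) / (r * x**2 + 1.0 - x)
    # The integrand is sharply peaked near x = 1 for heavy bosons,
    # so hint the quadrature with a breakpoint there.
    val, _ = quad(integrand, 0.0, 1.0, points=[1.0 - r], limit=200)
    return r * val / (8.0 * math.pi**2)

for m in (100.0, 500.0, 1000.0):  # GeV
    print(f"M_phi = {m:6.0f} GeV: delta a_mu = {delta_a_mu(m):+.2e}")
```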
• Pseudo-Scalars + Charged Lepton: Exotic charged leptons have also been evoked in the models in question [10,35,44,45] and contribute to g-2 through Fig. 1(b). A simplified Lagrangian for this field can be written down, giving rise to a correction expressed in terms of $\epsilon = M_E/m_\mu$ and $\lambda = m_\mu/M_h$; the expression simplifies further in the limit $M_{\phi_2} \gg M_L$. Differently from the previous case, now we have a large $m_L$ enhancement. Current limits range from 10 GeV up to 100 GeV and largely depend on the search channel. For instance, the L3 Collaboration has placed a limit of $M_L > 100$ GeV on a fourth generation of charged leptons [2]. It is not clear whether heavy charged leptons are attainable at the LHC: Ref. [46] states that those searches suffer from large backgrounds, making it difficult to pick out a signal, whereas in Fig. 6 of Ref. [47] one easily finds 3σ and 5σ significance for $M_L = 200-800$ GeV. Ref. [48] claims one might possibly exclude masses up to 250 GeV at the next LHC run if mixing between the heavy lepton and the SM tau is present. Regardless, the ILC should definitely reach sensitivity via the pair production of heavy leptons through Drell-Yan processes, as discussed in Refs. [49,50]. Anyhow, the correction to g-2 turns out to be sizeable, as we can see in Fig. 2; notice this is the second most relevant contribution to g-2. Because of its negative sign, we can place current and projected 1σ limits, since the anomaly should be otherwise resolved. Taking $m_L = 100$ GeV and Λ = 1 TeV, we derive $m_{\phi_2} > 2.8$ TeV and $m_{\phi_2} > 4.8$ TeV using the current and projected sensitivities, as shown in Fig. 2 (and sketched numerically below). In Fig. 3 we present the results for Λ = 10 TeV and $m_L = 100$ GeV. The overall contribution is small because of the large suppression imposed by Λ; thus we impose $m_{\phi_2} > 150$ GeV using the projected sensitivity.
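The mass limits quoted above follow from requiring the new contribution to stay within the 1σ error bar. The sketch below illustrates that inversion for a generic negative contribution assumed to scale as $-C/M^2$; both the scaling and the constant C (tuned so the current bound lands near the 2.8 TeV quoted above) are illustrative placeholders, since the full expression is model dependent, which is also why the projected bound printed here differs from the quoted 4.8 TeV.

```python
# Inverting |delta a_mu(M)| = error to obtain a lower mass bound,
# assuming an illustrative 1/M^2 scaling; C is a placeholder constant
# chosen so the current bound reproduces roughly the 2.8 TeV of the text.
from scipy.optimize import brentq

C = 6.4e-3  # GeV^2, hypothetical strength of the (negative) contribution

def delta_a(mass_gev: float) -> float:
    return -C / mass_gev**2

def mass_bound(error: float) -> float:
    """Smallest mass for which |delta a_mu| does not exceed `error`."""
    return brentq(lambda m: abs(delta_a(m)) - error, 1.0, 1.0e6)

for label, err in [("current", 81e-11), ("projected", 34e-11)]:
    print(f"{label}: M > {mass_bound(err) / 1000.0:.1f} TeV")
```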
• Charged Scalar: Charged scalars are evoked in several CHM and Technicolor models through a simplified Lagrangian. The correction to g-2 appears in diagrams such as Fig. 1(c) and is expressed in terms of $\epsilon = m_\nu/m_\mu$ and $\lambda = m_\mu/M_{\phi^+}$. Notice that the overall correction is negative and quite suppressed due to the $m_\mu^4$ factor, as one can see in Figs. 2-3, where we plot the results for Λ = 1 TeV and 10 TeV. We point out that there are various collider limits on the mass of such singly charged scalars, lying in the ∼ 100-200 GeV range [51]. In specific UV models, stronger constraints might apply [42].
• Charged Vector: A sequential W′ gauge boson corrects the muon magnetic moment via the diagram in Fig. 1(d) and the corresponding Lagrangian, with the result expressed in terms of $\epsilon = m_\nu/m_\mu$ and $\lambda = m_\mu/M_{W'}$.
One can clearly see that a singly charged vector boson arises as a natural candidate to explain the $(g-2)_\mu$ anomaly, because its contribution is always positive; for couplings of order one, as expected from gauge couplings, a singly charged vector with a mass of ∼ 400 GeV might account for the anomaly, as exhibited in Figs. 2-3.
However, in the regime where this new charged boson interacts only with right-handed neutrinos, i.e. when $g_{a10} = -g_{v10}$, LEP searches using effective operators give a 95% C.L. bound which reads $g_{v10}/M_{W'} < 4.8 \times 10^{-3}\ \mathrm{GeV}^{-1}$ [42], still not ruling out the region of parameter space in which a W′ accommodates g-2. With LHC data, though, one can exclude $M_{W'} < 2.55$ TeV at 95% C.L., assuming SM couplings with fermions [52,53]. The latter literally rules out sequential singly charged gauge bosons as an alternative to address g-2.
• Doubly Charged Scalar: Doubly charged scalars are typically present in models with scalar triplets, such as 3-3-1 models. Two diagrams give rise to corrections to the muon magnetic moment: one in which the photon is emitted from the doubly charged scalar, and another in which the photon stems from the muon, or from an exotic fermion. The Lagrangian representing this contribution involves the electric charge of the doubly charged scalar running in the loop and the charge $q_f = 1$ of the muon in the loop. In the regime $M_{\phi^{++}} \gg m_\mu, m_L$, the integral expression simplifies. In Figs. 2-3 we plot the results for $m_L = 100$ GeV and Λ = 1 and 10 TeV. It is clear that the correction from a doubly charged scalar is negative and small. See Ref. [54] for collider bounds on doubly charged scalars.
• Doubly Charged Vector: The presence of doubly charged vectors is a distinct feature of the so-called 3-3-1 models, which might also feature dynamical symmetry breaking in the context of Technicolor, as in Refs. [10,45]. Massive gauge bosons in general have both vector and axial couplings; however, the vector component of a charged current involving two identical fields is null. In the model we discuss further below, the doubly charged gauge boson couples to the muon via an exotic heavy lepton. In this case the integral is more complicated, because the charged lepton mass can be comparable to the doubly charged boson mass and the vector current is no longer null. One needs to solve the master integral numerically to find the precise correction to g-2, although when the doubly charged gauge boson is much heavier than the muon and the exotic lepton one finds a simplified expression in terms of $\epsilon = m_L/m_\mu$ and $\lambda = m_\mu/M_{U^{\pm\pm}}$. Solving Eq. 22 for $m_L = 100$ GeV, we find the numerical results in Figs. 2-3. The result is insensitive to the scale of symmetry breaking, differently from the previous cases, since this is a gauge interaction. It is visible from Figs. 2-3 that the doubly charged boson induces the largest corrections to the muon magnetic moment.
In summary, we have presented several simplified Lagrangians applicable to several CHM and TC models. We now apply our results to a Technicolor model that extends the electroweak sector of the Standard Model.
V. A TECHNICOLOR MODEL
The model we briefly discuss below was proposed in Refs. [11,44] and is based on the gauge symmetry $SU(3)_C \otimes SU(3)_L \otimes U(1)_N$, which has been extensively studied in the literature [55-61]. The Technicolor model we investigate in this work is inspired by the well-known minimal 3-3-1 model, and therefore it inherits several features of the latter, including the absence of dark matter candidates. Nevertheless, dark matter can be incorporated through singlet fermions with no prejudice to our reasoning [62], in agreement with recent measurements from WMAP9 and PLANCK [63]. In order to make the model anomaly free, two of the three quark generations transform as $3^*$, while the third quark family and the three lepton generations transform as 3. In the TC sector the triangle anomaly cancels between the two generations of technifermions, where the technifermions are singlets of $SU(3)_C$.
The 331-TC model considered in this section features the formation of two scales, namely the 331 symmetry breaking scale, $F_\Pi \sim$ TeV, and the TC scale, $F_{TC} \sim 250$ GeV; the 331-TC model thus corresponds to an example of a two-scale Technicolor (TC) model. The 331 symmetry breaking is implemented by the $U(1)_X$ condensate $\langle \bar{T}T \rangle$, which defines the mass scale of the exotic bosons ($Z'$, $U^{\pm\pm}$), while the TC sector is responsible for electroweak symmetry breaking. The contribution of the condensate $\langle \bar{T}T \rangle$, mediated by Extended Technicolor (ETC) interactions, to the masses of the exotic pseudo-Goldstone bosons [10] ($\Theta^{\pm\pm}$, $\Theta^0$) can be estimated; as a result, these bosons can in principle acquire masses large enough to have escaped detection at present accelerator energies. The contribution of $\langle \bar{T}T \rangle$ to the pseudo-scalar masses mimics the contribution expected from walking TC dynamics, and the contribution of the TC sector to the S parameter can be estimated as S ∼ 0.1. Similarly to the Farhi-Susskind model [64], the couplings of the neutral PGBs to muons are found in Ref. [65], with $F_\Theta = F_\Pi \sim 1$ TeV the decay constant of the 331-TC PGBs and $m_L$ the mass of the exotic leptons. Combining Eq. 26 with the doubly charged gauge boson correction shown in Eq. 21 and the $Z'$ correction derived in Eq. 32, we obtain the total correction to the muon magnetic moment arising from the 331-TC model. The $Z'$ contribution has been obtained from the neutral current [66]. Neutral gauge bosons contribute to g-2 through a general master integral, where V and A are the respective vector and axial couplings; in the limit $M_{Z'} \gg m_\mu$ the integral simplifies, with V and A given in Eq. 31. In summary, the model corrects the muon anomalous magnetic moment through: (i) $Z'$ (Eq. 31), (ii) $U^{\pm\pm}$ (Eq. 21), (iii) $\Theta^0$ (Eq. 25), and (iv) $\Theta^{\pm\pm}$ (Eq. 25). In Fig. 4 we exhibit the individual contributions to g-2. The doubly charged gauge boson gives rise to the largest contribution, yielding the same constraints discussed in the previous section. Additionally, from Fig. 4 we see that $\Theta^0$ results in a sizeable and negative correction to g-2, whereas the doubly charged scalar induces a less relevant but positive one. Doubly charged or singly charged scalar contributions are in general negligible; in this model they play a more relevant role simply because of the $m_L$ enhancement, which is absent in other 3-3-1 models [40,43,67-72]. Since the doubly charged gauge boson is overwhelmingly more relevant than the others, we can conclude that for $M_{U^{\pm\pm}} \sim 2-3$ TeV the 3-3-1 TC model can accommodate the g-2 anomaly with no prejudice to current bounds, since its contribution does not depend on the scale of symmetry breaking. In other words, we can push the scale of symmetry breaking to sufficiently high energies to obey electroweak and collider limits.
VI. CONCLUSIONS
We have derived new physics contributions to the muon anomalous magnetic moment motivated by Composite Higgs models and Technicolor, and presented general analytical expressions accounting for corrections stemming from neutral gauge bosons, charged gauge bosons, neutral scalars, singly charged scalars, doubly charged scalars and exotic charged leptons. We outlined which particles are able to reproduce the excess and derived 1σ bounds in case the anomaly is otherwise resolved. Moreover, we commented on electroweak and collider bounds. Lastly, for concreteness we applied our results to a particular Technicolor model, which can accommodate the g-2 anomaly with TeV scale gauge boson masses.
FIG. 4. Individual corrections to the muon magnetic moment as a function of the boson masses for Λ = 1 TeV. The green band delimits the current and projected sensitivity of the muon magnetic moment. See text for details.
"Physics"
] |
Targeting ASC in NLRP3 inflammasome by caffeic acid phenethyl ester: a novel strategy to treat acute gout
Gouty arthritis is caused by the deposition of uric acid crystals, which induce the activation of the NOD-like receptor family, pyrin domain containing 3 (NLRP3) inflammasome. The NLRP3 inflammasome, composed of NLRP3, the adaptor protein ASC, and caspase-1, is closely linked to the pathogenesis of various metabolic diseases, including gouty arthritis. We investigated whether an orally administrable inhibitor of the NLRP3 inflammasome was effective in alleviating the pathological symptoms of gouty arthritis and what the underlying mechanism was. In primary mouse macrophages, caffeic acid phenethyl ester (CAPE) blocked caspase-1 activation and IL-1β production induced by MSU crystals, showing that CAPE suppresses NLRP3 inflammasome activation. In mouse gouty arthritis models, oral administration of CAPE suppressed MSU crystal-induced caspase-1 activation and IL-1β production in the air pouch exudates and the foot tissues, correlating with attenuation of inflammatory symptoms. CAPE directly associated with ASC, as shown by SPR analysis and co-precipitation, resulting in blockade of the NLRP3-ASC interaction induced by MSU crystals. Our findings provide a novel regulatory mechanism by which small molecules restrain the activation of the NLRP3 inflammasome, presenting ASC as a new target. Furthermore, the results suggest a preventive or therapeutic strategy for NLRP3-related inflammatory diseases such as gouty arthritis using orally available small molecules.
inflammasome with an adaptor protein, apoptosis-associated speck-like protein containing a CARD (ASC), and pro-caspase-1. Pro-caspase-1 is cleaved to generate caspase-1, its active form, and caspase-1 cleaves the pro-IL-1β precursor to generate active IL-1β, which is secreted into the extracellular environment. A previous study showed that macrophages from mice deficient in NLRP3 inflammasome components were unable to secrete active IL-1β following stimulation with uric acid crystals 7 . Articular inflammation induced by MSU crystals was dependent on the NLRP3 inflammasome; in NLRP3-, ASC- or caspase-1-deficient mice, neutrophil influx was abrogated, and the production of gout-related cytokines was reduced 8 . Because IL-1β is the major effector cytokine produced in gout 3 and because NLRP3 inflammasome activation is strongly implicated in the pathogenesis of gout 7 , repression of the NLRP3 inflammasome could provide an effective therapeutic strategy for gout.
This observation prompted us to search for available small-molecule inhibitors of the NLRP3 inflammasome that could be administered orally. We sought such a compound among phytochemicals. We screened various anti-inflammatory phytochemicals, and caffeic acid phenethyl ester (CAPE) was one of the most effective inhibitors of the NLRP3 inflammasome. CAPE is an active component of honeybee propolis and is well known for its anti-inflammatory properties 9 . Therefore, we investigated whether CAPE could suppress uric acid-induced activation of the NLRP3 inflammasome, using bone marrow-derived primary macrophages (BMDMs) and in vivo animal gout models. Our results may provide a novel preventive or therapeutic strategy using anti-inflammatory phytochemicals targeting the NLRP3 inflammasome for the treatment of metabolic diseases such as acute gout.
Results
CAPE suppresses uric acid crystal-induced NLRP3 inflammasome activation in bone marrow-derived primary macrophages. We first investigated whether CAPE could block the activation of the NLRP3 inflammasome induced by uric acid crystals. BMDMs were first primed with LPS. To exclude the possibility that CAPE might affect LPS-mediated signaling pathways, CAPE was added after washing out the LPS. After pre-treatment with CAPE, cells were further stimulated with MSU crystals. CAPE, alone or in combination with MSU, neither reduced cell viability nor induced cell death in BMDMs (Supplemental Figure 1). CAPE inhibited MSU crystal-induced cleavage of pro-caspase-1 and pro-IL-1β to caspase-1(p10) and IL-1β, respectively, in the cellular supernatants (Fig. 1A). The cleavages of pro-caspase-1 to caspase-1 and of pro-IL-1β to IL-1β are considered hallmarks of inflammasome activation. In addition, CAPE consistently reduced MSU crystal-induced secretion of IL-1β in a dose-dependent manner (Fig. 1B). Furthermore, CAPE suppressed MSU crystal-induced production of IL-18, another cytokine produced upon inflammasome activation (Fig. 1C). CAPE did not affect the mRNA levels of IL-1β and IL-18 in BMDMs stimulated with MSU, showing that the decrease of IL-1β and IL-18 by CAPE was not dependent on transcriptional regulation (Supplemental Figures 2A and 2B). CAPE did not affect the secretion of TNF-α, the release of which is independent of the inflammasome, in BMDMs stimulated by MSU (Fig. 1D).
ASC forms oligomers in response to NLRP3 activators 10 , providing another measure of inflammasome activation. CAPE suppressed MSU crystal-induced formation of ASC oligomers in BMDMs (Fig. 1E). Confocal microscopy analysis consistently showed that CAPE reduced MSU crystal-induced formation of ASC speckles (Fig. 1F and Supplemental Figure 3). CAPE did not affect the mRNA levels of ASC, suggesting that CAPE does not inhibit the synthesis of ASC (Supplemental Figure 4).
We determined whether CAPE impacted the activation of the NLRP3 inflammasome in human cells. CAPE suppressed the cleavage of pro-caspase-1 to caspase-1(p10) and of pro-IL-1β to IL-1β induced by MSU crystals in THP-1 cells, a human monocytic cell line (Supplemental Figure 5A). MSU crystal-induced IL-1β secretion was also decreased by CAPE in THP-1 cells (Supplemental Figure 5B). These results show that CAPE suppressed the activation of the NLRP3 inflammasome in both murine macrophages and human monocytic cells.
Uric acid crystals are phagocytosed and destabilize the phagosome, which activates the NLRP3 inflammasome 7 . However, other NLRP3 activators trigger the NLRP3 inflammasome via different upstream pathways. For example, adenosine triphosphate (ATP) triggers NLRP3 inflammasome activation by binding to purinergic receptors such as P2X7, thereby increasing potassium efflux 11 . Nigericin, a microbial toxin derived from Streptomyces hygroscopicus, decreases intracellular potassium levels by acting as a potassium ionophore, independent of receptor activation 12 . Therefore, we examined whether CAPE could inhibit NLRP3 inflammasome activation mediated by other activators, such as ATP and nigericin. CAPE suppressed ATP-induced cleavage of pro-caspase-1 and pro-IL-1β to caspase-1(p10) and IL-1β in BMDMs (Supplemental Figure 6A) and reduced ATP-induced secretion of IL-1β and IL-18 (Supplemental Figures 6B and 6C). In addition, CAPE suppressed nigericin-induced cleavage of pro-caspase-1 and pro-IL-1β and secretion of IL-1β and IL-18 in BMDMs (Supplemental Figures 6A, 6B, and 6C). These results show that CAPE suppresses NLRP3 inflammasome activation mediated by other activators, such as ATP and nigericin, in macrophages.
Oral administration of CAPE attenuates uric acid crystal-induced inflammasome activation in a mouse air pouch inflammation model. We then sought to confirm the suppressive effects of CAPE on the NLRP3 inflammasome in vivo using a mouse air pouch model. After air pouches were formed on the backs of the mice, the mice were orally administered 30 mg/kg of CAPE. One hour later, MSU crystals were injected into the air pouch to activate the NLRP3 inflammasome, resulting in the production of cleaved caspase-1 and IL-1β in the air pouch exudates (Fig. 2A). Oral administration of CAPE abolished MSU crystal-induced cleavage of pro-caspase-1 and pro-IL-1β in air pouch exudates (Fig. 2A). In addition, caspase-1 enzymatic activity was reduced in air pouch exudates isolated from mice that had been administered CAPE (Fig. 2B). MSU-induced increases in IL-1β and IL-18 levels in air pouch exudates were also reduced by oral administration of CAPE (Fig. 2C and D). Luminescence imaging-based in vivo scan analysis further supported these results. Bone marrow-derived immortalized macrophages transfected with the iGLuc luciferase reporter gene 13 were injected into the air pouches. Injecting the air pouches with MSU crystals increased their luminescence, demonstrating iGLuc reporter gene activation (Fig. 2E). In contrast, oral administration of CAPE greatly reduced the luminescence generated by MSU crystal injection (Fig. 2E). These results provide in vivo evidence for the suppressive effects of CAPE on NLRP3 inflammasome induction by uric acid crystals.
CAPE diminished neutrophil infiltration in the air pouch tissue and exudates, as shown by histological examination and myeloperoxidase activity, respectively (Fig. 2F and G), demonstrating that CAPE treatment attenuates the inflammatory responses induced by uric acid crystals by suppressing NLRP3 inflammasome activation.
Oral administration of CAPE prevents uric acid crystal-induced gout in mice by blocking NLRP3 inflammasome activation. Next, we investigated whether CAPE's suppression of the NLRP3 inflammasome could be applied to treat gout. A gout mouse model was generated by injecting MSU crystals into the hind foot of a mouse; the injection led to increased foot thickness and neutrophil infiltration into the foot tissue (Fig. 3A-D). Oral administration of CAPE reduced the foot thickness to normal levels (Fig. 3A and B). CAPE blocked MSU crystal-induced recruitment of neutrophils to foot tissues, as shown by histological examination of the foot tissue and by analysis of myeloperoxidase activity in foot tissue homogenates (Fig. 3C and D). These results show that oral administration of CAPE attenuates the inflammatory symptoms of gout caused by the injection of uric acid crystals in mice.
Injection of MSU crystals induced the cleavage of pro-caspase-1 to caspase-1(p10) and of pro-IL-1β to IL-1β in foot tissue homogenates of wild-type mice, but not in foot tissue of NLRP3 knockout mice (Fig. 3E). These results indicate that in the mouse foot, the MSU crystal-induced production of caspase-1(p10) and IL-1β depends on NLRP3 inflammasome activation. We then examined whether CAPE could suppress uric acid crystal-induced NLRP3 inflammasome activation in gout. Oral administration of CAPE prevented the cleavage of pro-caspase-1 to caspase-1(p10) and of pro-IL-1β to IL-1β in foot tissue injected with MSU crystals (Fig. 3F). Caspase-1 enzyme activity in foot tissue homogenates consistently increased as a result of MSU crystal injection, but this increase was abolished by CAPE treatment (Fig. 3G). Furthermore, CAPE decreased MSU crystal-induced production of IL-1β and IL-18 in foot tissue homogenates (Fig. 3H and I). In contrast, neither MSU crystal injection nor CAPE treatment altered TNF-α levels (Fig. 3J), indicating that TNF-α may not play an important role in the inflammatory symptoms of uric acid crystal-induced gout.
Figure 1. CAPE suppresses the MSU crystal-induced activation of the NLRP3 inflammasome in primary macrophages. Bone marrow-derived macrophages (BMDMs) were primed with LPS (500 ng/ml) for 4 hr. The cells were treated with CAPE for 1 hr and then stimulated with monosodium uric acid (MSU) crystals (500 μg/ml) for (A) 4.5 hr or (B-E) 6 hr. In (A), the cell culture supernatants and cell lysates were immunoblotted for pro-caspase-1, caspase-1(p10), pro-IL-1β, and IL-1β. In (B, C and D) the cell culture supernatants were analyzed for secreted IL-1β, IL-18, and TNF-α using ELISA. The values represent the means ± SEM (n = 3). # Significantly different from vehicle alone, p < 0.05. *Significantly different from MSU alone, p < 0.05. In (E), the cell lysates and crosslinked pellets were resolved using SDS-PAGE and immunoblotted for ASC. In (F), the cells were fixed, permeabilized and stained for ASC (green), and the nuclei were stained with 4′,6-diamidino-2-phenylindole (DAPI; blue). The arrows indicate ASC speckles. The data are representative of three independent experiments. CAPE, caffeic acid phenethyl ester; MSU, monosodium uric acid crystals; DIC, differential interference contrast.
Consistent with the results from the air pouch inflammation model, oral administration of CAPE effectively alleviated the inflammatory symptoms of the uric acid crystal-induced gout model in mice. In addition, the suppressive effects of CAPE on gout were mediated by its blocking of NLRP3 inflammasome activation in the mouse foot.
CAPE directly binds to ASC. We investigated the mechanism by which CAPE suppresses NLRP3 inflammasome activation. To narrow down the inflammasome components that CAPE might target, we reconstituted the NLRP3 inflammasome complex in 293T cells by overexpressing each component, and the expression of the iGLuc luciferase reporter gene was measured as an indicator of inflammasome activation. When 293T cells were transfected with all three components, CAPE was able to suppress the expression of the iGLuc luciferase reporter gene induced by NLRP3 plus ASC plus caspase-1 (Fig. 4A). When 293T cells were transfected with ASC and caspase-1 without NLRP3, CAPE could still block the expression of iGLuc luciferase (Fig. 4B). However, when 293T cells were transfected only with caspase-1, CAPE did not inhibit the expression of iGLuc luciferase (Fig. 4C), suggesting that CAPE does not target caspase-1. An in vitro caspase-1 enzyme activity assay confirmed that CAPE did not directly inhibit caspase-1 activity (Fig. 4D). Since the inhibitory effects of CAPE were observed only when ASC was present, the targeted component could be narrowed down to ASC. These results suggest that the target of CAPE may be ASC, rather than NLRP3 or caspase-1.
To investigate whether CAPE binds to ASC, we performed pulldown experiments using biotin-tagged caffeic acid (BT-CA) (Fig. 5A). BT-CA exerted inhibitory activity on MSU crystal-induced IL-1β secretion in BMDMs (Fig. 5B). We generated structural analogs of BT-CA to examine the relationship between the inhibitory effect on IL-1β production and the binding activity to ASC. Biotin-tagged dihydrodihydroxycinnamic acid phenethyl ester (BT-DHC) inhibited IL-1β production induced by MSU crystals, while biotin-tagged dimethoxycinnamic acid phenethyl ester (BT-DMC) did not show such activity (Fig. 5A and B). To examine whether BT-CA bound to ASC in the cell, BMDM cell lysates were treated with BT-CA. BT-CA-bound proteins were precipitated with NeutrAvidin beads and subjected to immunoblotting analysis for ASC. ASC was detected in the precipitated proteins, showing that BT-CA bound to ASC (Fig. 5C) and suggesting that CAPE binds to ASC in BMDMs. Addition of CAPE attenuated the precipitation of ASC with BT-CA, showing that CAPE prevented the binding of BT-CA to ASC (Fig. 5C). ASC was detected in NeutrAvidin-precipitated proteins derived from BMDM lysates incubated with BT-DHC, but not BT-DMC (Fig. 5C). These results suggest that the inhibitory effects of the CAPE analogs on IL-1β production are correlated with their binding capacity to ASC.
Figure 2. The mice were orally administered CAPE (30 mg/kg) or vehicle (Veh, 0.02% DMSO in water). After 1 hr, MSU crystals (3 mg/ml in PBS/mouse) or PBS alone were injected into the air pouches. After 6 hr, the pouch exudates were harvested and the supernatants were analyzed by (A) immunoblotting for caspase-1(p10) and IL-1β, (B) caspase-1 enzyme activity assay, and ELISAs for (C) IL-1β and (D) IL-18. (E) Bone marrow-derived immortalized macrophages that had been transfected with the iGLuc luciferase reporter plasmid were injected into the air pouches. After 3 hr, the mice were orally administered CAPE (30 mg/kg) or vehicle. After 1 hr, MSU crystals (3 mg/ml in PBS/mouse) or PBS alone were injected into the air pouches. After 6 hr, luminescence derived from iGLuc-luciferase expression was assessed by in vivo imaging analysis using an Xtreme system (Bruker). (F) The air pouch tissue was fixed for histological examination using H&E staining. The purple dots represent infiltrated neutrophils. (G) Myeloperoxidase (MPO) activity, which reflects neutrophil recruitment, was assessed in the air pouch exudates. The values in the bar graphs represent the means ± SEM (n = 3-6 mice). # Significantly different from vehicle alone, p < 0.05. *Significantly different from MSU alone, p < 0.05. Veh, vehicle.
To further confirm the binding of CAPE to ASC, BT-CA was incubated with lysates of 293T cells exogenously expressing ASC after transfection with an ASC expression plasmid. Immunoblotting analysis of NeutrAvidin-bead-precipitated proteins showed that BT-CA bound to ASC in 293T cell lysates expressing exogenous ASC (Fig. 5D). Consistent with the results in Fig. 5C, addition of CAPE abolished the precipitation of ASC with BT-CA in 293T cell lysates (Fig. 5D). Exogenously expressed ASC was co-precipitated with BT-DHC, but not BT-DMC (Fig. 5D). We also investigated whether CAPE abolished ASC speck formation in 293T cells overexpressing ASC. After 293T cells were transfected with the ASC expression plasmid and treated with CAPE, ASC speck formation was examined by confocal microscopy. CAPE treatment resulted in a decrease of ASC speck formation in 293T cells overexpressing ASC (Supplemental Figure 7).
These results show that BT-CA associates with both endogenously and exogenously expressed ASC, suggesting that CAPE targets and binds to ASC in the cell.
To further confirm the binding of CAPE to ASC, we employed SPR analysis with recombinant ASC protein. CAPE directly bound to ASC in a dose-dependent manner (Fig. 5E and F). The parameters of the interaction kinetics and the affinity constants were calculated based on a simple 1:1 interaction model using Biacore T200 software and are presented in Fig. 5F. We then investigated whether CAPE affected the activation of other inflammasomes that require ASC. AIM2 (absent in melanoma 2), an interferon-inducible HIN-200 family member, senses cytoplasmic double-stranded DNA, forming an inflammasome with ASC via homotypic PYD-PYD interactions to induce the activation of caspase-1 14 . We examined whether CAPE regulated activation of the AIM2 inflammasome induced by transfection of synthetic double-stranded DNA, poly dA:dT, in LPS-primed BMDMs. CAPE decreased the cleavage of pro-caspase-1 to caspase-1(p10) and of pro-IL-1β to IL-1β induced by poly dA:dT in BMDMs, as demonstrated by immunoblotting and ELISA (Supplemental Figure 8). These results demonstrate that CAPE suppressed the activation of other ASC-dependent inflammasomes, such as the AIM2 inflammasome.
CAPE blocks the interaction between NLRP3 and ASC. To investigate the ASC domain to which CAPE binds, we performed SPR analysis using recombinant ASC-PYD, ASC-CARD, and NLRP3-PYD proteins. CAPE bound to ASC-PYD similarly to full-length ASC (Supplemental Figure 9A). However, CAPE did not bind to ASC-CARD or NLRP3-PYD (Supplemental Figures 9B and 9C). These results suggest that CAPE preferentially binds to ASC-PYD. Molecular modeling analysis using the crystal structure of ASC (2KN6), obtained from the Protein Data Bank (PDB) 15 , suggests a docking model between CAPE and the PYD domain of ASC. CAPE formed hydrogen bonds with Glu13 and Lys24 of ASC and interacted with Lys21 and Leu45 through lipophilic interactions (Fig. 6A and B). In particular, Glu13 is an important amino acid that plays a critical role in both the NLRP3-ASC interaction 16 and ASC-ASC oligomerization 17 .
Therefore, we asked whether the binding of CAPE to ASC would result in the disruption of the NLRP3-ASC interaction. A co-immunoprecipitation study showed that CAPE prevented the MSU-induced association of NLRP3 and ASC in BMDMs (Fig. 6C). In addition, the ATP-induced association between NLRP3 and ASC was also blocked by CAPE (Fig. 6D). These results suggest that CAPE suppresses the activation of the NLRP3 inflammasome by directly binding ASC and blocking the association of NLRP3 and ASC.
Together, oral administration of CAPE effectively attenuated the inflammatory symptoms of gouty arthritis by suppressing NLRP3 inflammasome activation. CAPE targeted ASC, a bridge protein for NLRP3 and caspase-1, thereby blocking NLRP3 inflammasome activation.
Discussion
In this study, we showed that CAPE is effective in preventing acute gout by targeting ASC and inhibiting NLRP3 inflammasome activation, as CAPE administration reduced inflammatory symptoms in two animal models of acute gout. Both in vitro studies on primary macrophages and in vivo studies using animal models, including an air pouch model (which mimics the synovium) and a foot gout model, showed that CAPE's inhibitory effects were mediated by the suppression of uric acid crystal-induced NLRP3 inflammasome activation. In addition, CAPE's suppression of NLRP3 inflammasome activation was supported in vivo by luminescence imaging analysis using the iGLuc luciferase reporter. CAPE's applications could be extended to the treatment of other diseases related to uric acid crystal accumulation. It has been reported that uric acid released from injured cells contributes to lung injury-associated inflammation and fibrosis via activation of the NLRP3 inflammasome 18 . In future studies, it would be worth examining the efficacy of CAPE treatment against uric acid-mediated lung diseases.
Recent studies have reported that certain phytochemicals, such as resveratrol 19 , quercetin 20 and epigallocatechin-3-gallate 21 , can inhibit NLRP3 inflammasome activation by blocking mitogen-activated protein kinase (MAPK) and nuclear factor (NF)-κB activation or by decreasing reactive oxygen species (ROS) production. Other small-molecule inhibitors, such as 3,4-methylenedioxy-β-nitrostyrene 22 , MCC950 23 , and β-hydroxybutyrate 24 , have also been reported to suppress NLRP3 inflammasome activation. In this study, we propose ASC as a new regulatory target of the anti-inflammatory phytochemical CAPE in the NLRP3 inflammasome pathway. Our overexpression study indicated that ASC is the component required for CAPE's inhibitory effect on the NLRP3 inflammasome. CAPE did not directly bind to NLRP3-PYD, nor did it inhibit caspase-1 enzyme activity. These findings suggest that CAPE preferentially targets ASC among the NLRP3 inflammasome components. Although several previous studies have investigated inhibitory chemicals, to the best of our knowledge, this report is the first to show the direct association of an anti-inflammatory phytochemical with ASC resulting in the inhibition of the NLRP3 inflammasome.
It is well known that CAPE suppresses the activation of transcription factors, thereby regulating the transcription levels of cytokines. In this study, however, we intended to elucidate the direct effect of CAPE on inflammasome activation rather than its effect on the transcriptional levels of pro-caspase-1 and pro-IL-1β. Therefore, CAPE was added after washing out the LPS in BMDMs, to dissect CAPE's effect from the transcription of pro-IL-1β. The cleavages of pro-caspase-1 and pro-IL-1β to caspase-1(p10) and IL-1β are the hallmarks of inflammasome activation, and these cleavages were blocked by CAPE. This indicates a direct regulatory role of CAPE in inflammasome activation, which was clearly demonstrated by immunoblotting experiments with BMDM cells and by the in vivo luminescence-imaging study using an inflammasome-dependent reporter plasmid. Finally, our results indicate the direct binding of CAPE to ASC-PYD, leading to the disruption of the NLRP3-ASC association. It is still possible that CAPE exerts both transcriptional regulation and a direct impact on inflammasome components in the in vivo situation, where the priming signal and the inflammasome-activating signal are mixed.
Collectively, our results show that CAPE, a natural product that is abundant in propolis, is a small-molecule inhibitor of the NLRP3 inflammasome. Thus, CAPE may have preventive or therapeutic potential against NLRP3 inflammasome-related diseases, particularly gout. CAPE directly associates with ASC, thereby blocking the assembly of NLRP3-ASC. Thus, ASC could be a new therapeutic target for gout. Our results reveal a new regulatory mechanism that modulates the activation of the NLRP3 inflammasome and can be utilized as the basis for the development of new NLRP3 inflammasome inhibitors.
Animals and cell culture. Mice (C57BL/6) were obtained from Orient Bio (Seoul, Korea). Nlrp3 A350VneoR mice (B6.129-Nlrp3 tm1Hhf /J, Stock Number: 017969) were purchased from Jackson Laboratory (Bar Harbor, ME). The mice were housed in a room controlled for temperature (23 ± 3 °C) and relative humidity (40-60%) under specific pathogen-free conditions. Mice were acclimated in the animal facility for at least a week before the experiments. Mice of individual experimental groups in each experiment were of similar age and weight and were randomly allocated to treatment groups. Investigators were blinded to the treatment or genotype of mice, or both, in all experiments. Bone marrow-derived primary macrophages (BMDMs) were prepared from mice as described previously 25 . Bone marrow-derived immortalized macrophages from C57BL/6j mice were kindly provided by S. Kim (The Western University, Canada) 26 . Macrophages and 293T cells (human embryonic kidney cells) were cultured in Dulbecco's modified Eagle medium containing 10% (v/v) fetal bovine serum (Invitrogen, Carlsbad, CA), 10,000 units/ml of penicillin, and 10,000 μg/ml of streptomycin.
Reagents.
Purified LPS from Escherichia coli was obtained from List Biological Laboratory Inc. (Campbell, CA). CAPE was purchased from Sigma-Aldrich. The structural derivatives of CAPE, biotin-tagged CA, DHC, and DMC, were synthesized as described previously 27 . Monosodium urate (MSU) and ATP were purchased from Invivogen (Carlsbad, CA). Antibodies against mouse caspase-1 and ASC were obtained from Santa Cruz Biotechnology (Santa Cruz, CA). The antibody against NLRP3 was purchased from Adipogen (San Diego, CA). The antibody against IL-1β was from R&D Systems (Minneapolis, MN). The caspase-1 activity kit for the animal models was from Abcam (Cambridge, MA).
Plasmids.
A pcDNA3.1nV5-hNLPR3 expression plasmid was from You-Me Kim (Pohang University of Science and Technology, South Korea). Expression plasmids for ASC and caspase-1 were gifts from Giulio Superti-Furga (Austrian Academy of Sciences, Austria). An iGLuc plasmid was provided by Veit Hornung (University of Bonn, Germany). Transient transfection and luciferase assays were performed as previously described 25 .
Analysis of inflammasome activation. This was performed as previously described 28 . Briefly, BMDMs were primed with LPS for 4 hr. To exclude the effect of CAPE on LPS, CAPE was added after washing out the LPS with phosphate-buffered saline (PBS). The cells were treated with CAPE and stimulated with NLRP3 inflammasome activators such as MSU and ATP in serum-free medium. The cells were lysed in RIPA buffer (50 mM Tris-HCl, pH 7.4, 1% NP-40, 0.25% sodium deoxycholate, 150 mM NaCl, 1 mM EGTA, 1 mM PMSF, 1 mM Na3VO4, 10 μg/ml aprotinin, 10 μg/ml leupeptin). The supernatants were precipitated with methanol:chloroform (1:0.25), followed by centrifugation at 20,000 g for 10 min. The upper phase was discarded and one volume of methanol was added. The mixture was centrifuged at 20,000 g for 10 min to obtain a protein pellet, which was dried at room temperature and resuspended in Laemmli buffer (0.25 M Tris-HCl, pH 6.8, 0.4% glycerol, 10% SDS, 0.2% 2-mercaptoethanol, 0.64% bromophenol blue). The samples were resolved with SDS-PAGE and subjected to immunoblotting assay.
Luminescence-based in vivo imaging analysis. Bone marrow-derived immortalized macrophages were transiently transfected with the iGLuc plasmid as described previously 13 . After the cells were injected into air pouches on the backs of mice, the mice were orally administered CAPE. One hour later, the mice were injected with either 1 ml of PBS alone or PBS containing MSU crystals. After 6 hr, mice were injected with 1 ml of luciferase substrate (Renilla-Glo® Luciferase Assay system, Promega, cat. #E2720, Madison, WI) and subjected to luminescence measurements using an Xtreme system (Bruker, Billerica, MA). The iGLuc signal was measured over 5 min of exposure with the acquisition mode set to luminescence and photography overlay.
Myeloperoxidase (MPO) enzyme activity assay.

MPO activity was determined using an MPO colorimetric activity assay kit (BioVision, Milpitas, CA).

A foot gout model in mice.

Mice (7 to 8 weeks old) were orally administered 0.5 ml sterilized water containing CAPE or vehicle. After 1 hr, MSU crystals or PBS were subcutaneously injected under the plantar surface of the right paw. Twenty-four hours after injection of MSU crystals, foot tissue was homogenized in RIPA buffer, and the supernatant was collected for MPO assays, ELISAs and immunoblot assays. For histological analysis, sagittal sections of the footpads were fixed in 10% paraformaldehyde and stained with H&E.
Immunoblots of ASC monomer and oligomer. Insoluble ASC complexes were isolated from cell lysates by centrifugation and subsequently crosslinked with disuccinimidyl suberate (Thermo Scientific, Waltham, MA) as previously described 10. The samples were resolved by SDS-PAGE and processed for immunoblot assay.
Confocal microscopy analysis. Confocal microscopy analysis was performed as previously described 29 .
Briefly, BMDMs were plated on coverslips and incubated with an anti-ASC antibody, then incubated with an anti-rabbit IgG-FITC antibody. The cells were examined with an LSM710 laser scanning confocal microscope (Carl Zeiss, Oberkochen, Germany) using Zen2010 software.
Surface plasmon resonance (SPR) study. Anti-ASC antibodies were covalently immobilized on a CM5 sensor chip (cat.#BR-1005-30, GE Healthcare, Buckinghamshire, UK). Recombinant human ASC protein (Abnova) was captured with the anti-ASC antibodies via antibody-antigen binding. For affinity measurements, the association and dissociation phases were monitored with a Biacore T200 (GE Healthcare). CAPE was dissolved in basic running buffer (PBS containing 0.005% Tween-20 and 5% DMSO) and injected into the flow cell at different concentrations at a flow rate of 5 μl/min at 25 °C. The sensor chip was washed with basic running buffer between concentrations. Control experiments were performed with blank (sensor chip only) and active (sensor chip with antibody only) channels on the same sensor chip. Based on the obtained assay curves, the control signals, which reflected the bulk effect of the buffer, were subtracted using T200 evaluation software ver. 2.0 (GE Healthcare). The kinetic parameters of the interaction and the affinity constants were calculated using a simple 1:1 interaction model with the Biacore T200 evaluation software.
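As context for the fitting step, the simple 1:1 interaction model referred to above is the standard Langmuir kinetic scheme; a minimal statement of it (general background, not output copied from the T200 software) is:

```latex
% 1:1 (Langmuir) interaction model used for SPR kinetic fitting.
% R(t): sensor response, R_max: saturation response, C: injected analyte concentration.
\frac{dR}{dt} = k_a \, C \, (R_{\max} - R) - k_d \, R
% Equilibrium dissociation constant recovered from the fitted rate constants:
K_D = \frac{k_d}{k_a}
```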
Molecular modeling study.
To predict a model of CAPE binding to ASC, we used the crystal structure of ASC (PDB code 2KN6). The protein structure was minimized using the Protein Preparation Wizard in the Maestro graphical user interface (version 9.3, Schrödinger). CAPE was docked into the minimized structure of the ASC PYD using the docking routine in Prime (ver. 3.1, Schrödinger, LLC, New York, NY, 2012). The docking was performed using the default settings with all residues kept fixed. The graphics of the refined docking model for CAPE were generated using PyMOL (http://www.pymol.org).
Precipitation and immunoblotting of proteins associated with biotin-tagged compounds. Cell lysates were treated with biotin-tagged CA, -DHC, or -DMC at room temperature for 4 hr and further incubated with NeutrAvidin beads (Thermo Scientific) at 4 °C for 2 hr. The samples were centrifuged at 15,000 g for 5 min and washed with TENT buffer (50 mM Tris, pH 7.0, 5 mM EDTA, 150 mM NaCl, 0.05% Tween 20) three times. Proteins co-precipitated with the beads were processed for immunoblot assays.
"Medicine",
"Chemistry"
] |
A Generalized Flow for B2B Sales Predictive Modeling: An Azure Machine Learning Approach
Predicting the outcome of sales opportunities is core to successful business management and revenue forecasting. Conventionally, this prediction has relied mostly on subjective human evaluation in the business-to-business (B2B) sales decision-making process. Here, we proposed a practical Machine Learning (ML) workflow to empower B2B sales outcome (win/lose) prediction within a cloud-based computing platform, the Microsoft Azure Machine Learning Service (Azure ML). This workflow consists of two pipelines: 1) an ML pipeline that trains probabilistic predictive models in parallel on closed sales opportunity data, enhanced with an extensive feature engineering procedure, for automated selection and parameterization of an optimal ML model, and 2) a Prediction pipeline that uses the optimal ML model to estimate the likelihood of winning new sales opportunities and to predict their outcome using optimized decision boundaries. The performance of the proposed workflow was evaluated on a real sales dataset of a B2B consulting firm.
I. INTRODUCTION
In Business to Business (B2B) commerce, companies compete to win high-valued sales opportunities to maximize their profitability. In this regard, a key factor for maintaining a successful B2B business is the ability to determine the outcome of sales opportunities. The B2B sales process typically demands significant costs and resources and, hence, requires careful evaluation. As a result, quantifying the likelihood of winning sales opportunities at the early stages is an important basis for appropriate resource allocation, avoiding wasted resources and sustaining the company's financial objectives [1].
Conventionally, outcome prediction is carried out relying on subjective human ratings [2]. Most Customer Relationship Management (CRM) systems allow salespersons to manually rate the probability of winning new opportunities [3]. This probability is then used as a metric to calculate weighted revenue on the opportunity records. Often, each salesperson develops a non-systematic rating intuition with little to no quantitative rationale, neglecting the complexity of the business environment's dynamics [4]. Moreover, as often as not, selling opportunities are intentionally underrated to avoid internal competition with other sellers, or overrated to circumvent pressure from sales management to maintain high performance [5].
Even though the abundance of data and improvements in statistical and Machine Learning (ML) techniques have led to significant enhancements in data-driven decision-making, the literature is scarce on the subject of B2B sales outcome prediction. Yan et al. explored predicting win propensity for sales opportunities using a dynamic clustering technique [5]. Their approach allows for online assessment of opportunity win rates; however, it relies heavily on regular inputs and updates to the CRM profiles. This does not appear to be a robust source of data, considering that each salesperson often handles multiple opportunities in parallel and puts little effort into making frequent updates to each opportunity's records [6].
In a work by Tang et al., winning probability was estimated using a hybrid ML model trained on snapshots of historical data [4]. However, their goal of standardizing this paradigm across multiple companies limited the extent of variable selection and feature engineering in their solution. On top of that, reliance on historical snapshots of data entails expensive computations and requires modifications to companies' data collection strategies. Overall, despite some theoretical work in this context, the literature lacks a practical approach with a concrete business implementation.
Here, we proposed a thorough workflow for predicting the outcome of B2B sales opportunities by converting this problem into a binary classification framework. In our workflow, first, an ML pipeline extracts, cleans, and imputes sales opportunities data and then extensively trains various types of ML classification models on the data. After optimally parametrizing each model, the ML pipeline eventually outputs a voting ensemble classifier composed of these models. In addition, this pipeline enhances the data using a comprehensive feature engineering step built on statistical analyses of historical data from selected categorical attributes of sales opportunities.
A second Prediction pipeline makes use of the ML model to estimate the likelihood of winning a given sales opportunity. Importantly, this pipeline also includes a statistical analysis step that specifies appropriate decision boundaries based on industry and monetary value segmentation of the sales opportunities. This helps to maximize the interpretability of the ML model's predictions and to increase their credibility and reliability.
To demonstrate the usability of our workflow, it was implemented and deployed to a real B2B consulting firm's sales pipeline using the Azure Machine Learning Service (Azure ML) cloud-based platform. Such a cloud-based workflow allows for a more scalable solution that readily integrates into the existing CRM software applications within each enterprise. Finally, the performance of our solution was evaluated not only in terms of standard statistical measurements (prediction accuracy, AUC, etc.) but also with reference to financial measurements.
A. Data
In this work, we used sales opportunity data extracted from the CRM of a global multi-business B2B consulting firm in three main business segments: Healthcare, Energy, and Financial Services. The data consisted of a total of 25,578 closed opportunity records (Fig 1A), of which ~58% were labeled as "Won" in their status record (Fig 1B). The raw CRM dataset contained 20 relevant variables (features) for each opportunity, categorized according to their data type as in Table 1. Once a profile is created in the CRM system for a new opportunity, users are required to enter an estimate of the probability of winning that opportunity. User-entered probabilities in the dataset were the discrete values 0, 0.25, 0.5, 0.75, and 0.99. The "Status" of an opportunity takes one of the following values: "Open" for new opportunities, and "Won" or "Lost" for closed opportunities.
First, in order to clean the data, any record with a missing "Status" was dropped. Missing values in each of the other features were inferred and imputed using an appropriate ML model trained on the rest of the features (an XGBoost Regressor for continuous features, and an XGBoost Classifier for categorical features) [7]. Since most of the selected features were mandatory when creating the CRM profile of an opportunity, less than 1% of the whole dataset contained missing values.
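A minimal sketch of this per-feature, model-based imputation is given below (illustrative only: the helper name, the one-hot encoding step, and the assumption that the remaining features are usable as-is are ours, not the authors' code):

```python
import pandas as pd
from xgboost import XGBClassifier, XGBRegressor

def impute_column(df: pd.DataFrame, target: str, categorical: bool) -> pd.DataFrame:
    """Infer missing values of `target` from the remaining features."""
    known, missing = df[df[target].notna()], df[df[target].isna()]
    if missing.empty:
        return df
    features = df.columns.drop(target)
    X_known = pd.get_dummies(known[features])           # naive one-hot encoding
    X_missing = pd.get_dummies(missing[features]).reindex(
        columns=X_known.columns, fill_value=0)          # align dummy columns
    if categorical:
        codes, uniques = pd.factorize(known[target])    # encode string labels
        model = XGBClassifier().fit(X_known, codes)
        df.loc[missing.index, target] = uniques[model.predict(X_missing)]
    else:
        model = XGBRegressor().fit(X_known, known[target])
        df.loc[missing.index, target] = model.predict(X_missing)
    return df
```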
B. Feature Engineering
To enhance the dataset, additional relevant features were calculated and added to the dataset. These additional engineered features were quantified based on three categorical features in the dataset: Sales Lead, Account and Account Location. Feature Engineering was conducted by a simple statistical analysis of these target features.
A history of the total number of opportunities and the total numbers of won and lost opportunities was calculated for each unique value of the target features. The win rate was then determined as the ratio of won to total opportunities. Next, the Total Contract Value was averaged across won opportunities for each unique feature value to record the mean contract value of won opportunities for individual accounts, sales leads, and locations. To capture the extent of variability in the Total Contract Value of won opportunities, the coefficient of variation was also calculated.
The aforementioned statistics were collected and stored in feature engineering lookup dictionaries for Accounts, Sales Leads, and Account Locations (Fig 2 shows an example of the lookup dictionary for Sales Lead). In the last step, the Mahalanobis distance between the Total Contract Value of an opportunity and the distribution of won opportunities' values was computed for each unique value of the target features, to quantify how far the contract value of an opportunity lies from the distribution of previously won opportunities. Including the engineered features enhanced the dataset and increased the total number of features to 47 per opportunity (20 features originally from the raw CRM data + 9×3 = 27 engineered features based on the target features). The final dataset (25,578 opportunities) was randomly partitioned into a Training set (70%) and a Testing set (30%). The training set was used to train the ML models with a 10-fold cross-validation technique, and the testing set was used to report the testing performance of the trained models. A third Validation set was also collected after the proposed framework was deployed to the sales pipeline over a period of 3 months (846 opportunities) for further evaluation of the workflow's performance.
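The lookup-dictionary construction described above can be sketched with pandas as follows (column names such as `status` and `tcv` and the helper names are illustrative assumptions; note that in one dimension the Mahalanobis distance reduces to an absolute z-score):

```python
import numpy as np
import pandas as pd

def build_lookup(df: pd.DataFrame, key: str) -> pd.DataFrame:
    """Per-unique-value statistics for one target feature, e.g. key='sales_lead'."""
    won = df[df["status"] == "Won"]
    stats = pd.DataFrame({
        "n_total": df.groupby(key).size(),
        "n_won": won.groupby(key).size(),
    }).fillna(0)
    stats["n_lost"] = stats["n_total"] - stats["n_won"]
    stats["win_rate"] = stats["n_won"] / stats["n_total"]
    stats["won_tcv_mean"] = won.groupby(key)["tcv"].mean()
    stats["won_tcv_std"] = won.groupby(key)["tcv"].std()
    # Coefficient of variation of the won opportunities' contract values
    stats["won_tcv_cv"] = stats["won_tcv_std"] / stats["won_tcv_mean"]
    return stats

def mahalanobis_1d(tcv: float, mean: float, std: float) -> float:
    # 1-D Mahalanobis distance of a contract value from the won-value distribution
    return abs(tcv - mean) / std if std and std > 0 else np.nan
```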
III. PROPOSED FLOW AND MODELING
Our approach to predicting the outcome of sales opportunities essentially converts the problem into a supervised binary classification paradigm. Our proposed workflow involves two main pipelines: an ML pipeline and a Prediction pipeline. A pipeline is defined as an executable workflow of data, encapsulated in a series of tasks (steps). All code was custom-written in Python using the Azure Machine Learning Service platform.
A. Machine Learning Pipeline
The main objective of the ML Pipeline was to train predictive models on the data. As illustrated in Fig 3A, the ML pipeline used raw CRM data of all closed opportunities (either won or lost). In the first step, the raw dataset was cleaned, and missing values were imputed using appropriate inference techniques. Next, the dataset was enhanced by adding the engineered features appropriately for each opportunity. For training ML models, under the supervised classification paradigm, features and class labels were extracted from the enhanced CRM dataset for each closed opportunity. All features in the raw CRM dataset were selected except for Probability (user-entered probability) to avoid biasing models with probability estimations from users. Also, Status was used as the binary class labels (won = 1, lost = 0). At this point, the dataset was also partitioned into a training set and a testing set.
Probabilistic classification models, given the feature vector of an observation $X$, output a conditional probability distribution over the class labels, $P(y \in \mathcal{Y} \mid X)$, which for the binary case has $\mathcal{Y} = \{0, 1\}$. This probability simply corresponds to the likelihood that an observation belongs to one of the classes. The predicted class of an observation (here, won/lost) can then be determined using the conditional probability of the model:

$$\hat{y} = \underset{y \in \mathcal{Y}}{\arg\max}\; P(Y = y \mid X)$$
In other words, for a binary classification, the predicted class is the one assigned a probability of more than 50%; we refer to this probability cutoff threshold as the naïve decision boundary. The performance of a classification model can be evaluated using the metrics defined in Table 2 [8]. For binary classifications, Receiver Operating Characteristic (ROC) curves are generated; the area under the ROC curve (AUC) quantifies the robustness of the classification (a higher AUC suggests more robust classification performance) [9].
For a comprehensive insight into the classification results, we also took a step towards measuring the prediction performance in monetary terms. In particular, we aggregated the total contract values of opportunities in the different classification scenarios (TP, FP, TN, and FN) and defined monetary metrics with a formulation similar to the statistical metrics (Table 2). In this regard, monetary precision is the fraction of the contract value of opportunities correctly predicted as won. Also, monetary recall measures the proportion of the contract value of actual won opportunities that is correctly identified as such.
Table 2. Statistical and Monetary Performance Metrics
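To make the monetary definitions concrete, the following sketch computes monetary precision and recall from arrays of labels, predictions, and contract values (a direct reading of the definitions above, not the authors' implementation):

```python
import numpy as np

def monetary_precision_recall(y_true, y_pred, contract_value):
    """Precision/recall where each opportunity is weighted by its contract value."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    v = np.asarray(contract_value, dtype=float)
    tp = v[(y_pred == 1) & (y_true == 1)].sum()  # value of correctly predicted wins
    fp = v[(y_pred == 1) & (y_true == 0)].sum()  # value of opportunities wrongly called won
    fn = v[(y_pred == 0) & (y_true == 1)].sum()  # value of wins the model missed
    return tp / (tp + fp), tp / (tp + fn)

precision, recall = monetary_precision_recall([1, 0, 1, 1], [1, 1, 0, 1], [100, 50, 80, 20])
```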
In order to train classification models on the data, we used the Automated Machine Learning (AutoML) step of Azure ML. In this step, multiple iterations of various types of ML models are trained in parallel and optimally parameterized based on their classification performance [10]. We limited the models to a total of 35 iterations of XGBoost and LightGBM classification models [7]. The training accuracy of the models was calculated with a 10-fold cross-validation technique. All models were grouped in a Voting Ensemble [11]. A voting ensemble (Fig 4) outputs a soft-voting linear combination of the individual models' probability predictions, weighted based on their accuracy, i.e. $P_{ens}(y \mid X) = \sum_i w_i\, P_i(y \mid X)$, where the weight $w_i$ grows with the cross-validated accuracy of model $i$. The Azure ML platform supports deploying ML models as web services on Azure Kubernetes Service (AKS) [12]. AKS enables a request-response service with low latency and high scalability, which makes it suitable for production-level deployments. In the last step of the ML pipeline, the best performing model in terms of accuracy (the voting ensemble classification model) was deployed as a web service to an AKS cluster.
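A compact sketch of an accuracy-weighted soft-voting ensemble of already-fitted models (the exact weighting used inside Azure AutoML's voting ensemble is not documented here, so this is an illustration of the stated idea rather than a reimplementation):

```python
import numpy as np

class SoftVotingEnsemble:
    """Accuracy-weighted average of the member models' predicted probabilities."""

    def __init__(self, models, accuracies):
        w = np.asarray(accuracies, dtype=float)
        self.models, self.weights = models, w / w.sum()  # normalize weights

    def predict_proba(self, X):
        stacked = np.stack([m.predict_proba(X) for m in self.models])  # (k, n, 2)
        return np.tensordot(self.weights, stacked, axes=1)             # (n, 2)

    def predict(self, X, threshold=0.5):
        return (self.predict_proba(X)[:, 1] >= threshold).astype(int)
```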
B. Prediction Pipeline
The Prediction pipeline was designed to predict the winning probability of new opportunities using the classification model found in the ML pipeline. As shown in Fig 3B, the CRM data of all opportunities was uploaded to this pipeline. The opportunities were filtered by their "Status" into open (new opportunities) and closed (either won or lost) categories. Probability predictions were generated for open opportunities, while closed opportunities were used to modify the decision boundaries.
First, the data went through a cleaning process similar to that of the ML pipeline. Afterwards, the feature engineering lookup dictionaries created in the ML pipeline were used to enhance the CRM data. Note that this step also ensured the data was transformed into a format consistent with the data used to train the ML models. The voting ensemble model deployed in the ML pipeline was used to make predictions on the open opportunities. Specifically, the model calculated the probability that an open opportunity belonged to the class of won opportunities. We directly used this probability as the likelihood of winning the new opportunity. The probability predicted by the ML model, although informative, required further analysis to support conclusive decision making.
In order to maximize the interpretability of the predicted probabilities, the Prediction pipeline generated optimal decision boundaries based on the business segments and total contract values of all closed opportunities. For this, we split each business segment's distribution of closed-opportunity total contract values into four quantiles (4 equal-sized groups). For each contract value quantile, we found the cutoff probability decision boundary that maximized the ML prediction's precision for that quantile (defined in Table 2). A total of 12 decision boundaries was calculated based on business segments and contract values (4 quantiles × 3 business segments). These modified decision boundaries were used instead of the naïve boundary (50% probability cutoff) as reference points to predict the final Status.
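One plausible implementation of the precision-maximizing cutoff search per segment-quantile group is sketched below (grid resolution, tie-breaking, and the column names `segment`, `tcv`, `prob`, and `label` are assumptions):

```python
import numpy as np
import pandas as pd

def best_cutoff(probs, labels, grid=np.linspace(0.05, 0.95, 91)):
    """Probability cutoff maximizing precision within one group."""
    best_t, best_p = 0.5, -1.0
    for t in grid:
        pred = probs >= t
        if not pred.any():
            continue                          # no positive predictions at this cutoff
        precision = (labels[pred] == 1).mean()
        if precision > best_p:
            best_t, best_p = t, precision
    return best_t

def decision_boundaries(df: pd.DataFrame) -> dict:
    """One boundary per (business segment, contract-value quantile) pair."""
    df = df.copy()
    df["quartile"] = df.groupby("segment")["tcv"].transform(
        lambda v: pd.qcut(v, 4, labels=False, duplicates="drop"))
    return {
        (seg, q): best_cutoff(g["prob"].to_numpy(), g["label"].to_numpy())
        for (seg, q), g in df.groupby(["segment", "quartile"])
    }
```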
The Azure ML platform is capable of scheduling automatic pipeline runs [13]. The ML pipeline was scheduled for a weekly rerun to retrain the ML models on an updated CRM dataset containing an additional week of newly closed opportunities. This also kept the feature engineering lookup tables updated with the most recent information. The Prediction pipeline was scheduled for a daily rerun to calculate and store predictions for new opportunities and to re-derive the cutoff decision boundaries.
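With the v1 azureml-pipeline-core SDK, such recurring runs can be set up roughly as follows (a sketch: the workspace config, experiment names, and published pipeline IDs are placeholders):

```python
from azureml.core import Workspace
from azureml.pipeline.core import Schedule, ScheduleRecurrence

ws = Workspace.from_config()

# Weekly retraining of the ML pipeline on the updated CRM dataset
weekly = ScheduleRecurrence(frequency="Week", interval=1)
Schedule.create(ws, name="ml-pipeline-weekly",
                pipeline_id="<published-ml-pipeline-id>",
                experiment_name="b2b-sales-ml", recurrence=weekly)

# Daily rerun of the Prediction pipeline for new opportunities
daily = ScheduleRecurrence(frequency="Day", interval=1)
Schedule.create(ws, name="prediction-pipeline-daily",
                pipeline_id="<published-prediction-pipeline-id>",
                experiment_name="b2b-sales-predict", recurrence=daily)
```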
IV. RESULTS
This section gives an overview of the performance of our proposed workflow for predicting the outcome of sales opportunities using statistical metrics such as Accuracy, F1-score, ROC curves, etc. On top of that, performance was measured in terms of opportunity contract values (Table 2). Finally, the ML predictions were compared to the user-entered predictions.
A. Model Training Results
A total number of 35 iterations of XGBoost and LightGBM classification models were trained individually on the data and then combined in a voting ensemble model based on their training accuracy. The voting ensemble model's training accuracy based on a 10-fold cross-validation was equal to 0.82. Further performance metrics are summarized in Fig 5. Note that while training the models, the classification cutoff threshold was set to the naïve decision boundary (50% probability).
B. Modified Decision Boundaries
To predict the final status (win/lose) of an open opportunity, the ML model's predicted probability needs to be compared to a reference decision boundary. We tailored the optimal decision boundary to two features: business segment and contract value. Modified decision boundaries for the healthcare, energy, and finance segments' contract value quantiles are shown in Fig 6. Interestingly, the cutoff probabilities decrease for opportunities with higher contract values, implying more optimistic decision making for more profitable contracts. Detailed distribution plots of the predicted probabilities for all closed opportunities are given in Fig S1.
C. Testing Results
The voting ensemble model's predicted probabilities were used in accordance with the modified decision boundaries on the testing dataset. The ML workflow's performance on the testing set was evaluated using appropriate statistical metrics and then compared to the user-entered predictions. Note that the testing set was not used for model training. The workflow's accuracy varied across various business segments in a range of 0.82-0.87. The total accuracy of the model (0.85) was considerably higher than the user-entered predictions (0.67). All metrics are summarized in Table 3.
On the testing set, the proposed workflow resulted in a higher classification accuracy (0.85) compared to the manual user-entered predictions (0.67). Also, the monetary accuracy of our workflow (0.90) beats manual prediction (0.74). This means the probabilities estimated by the ML workflow not only predict the outcome (win/lose) of opportunities more accurately but also result in more profitable decision makings.
D. Validation Results
A second comparison between the ML workflow and user-entered predictions on the validation dataset collected over three months demonstrated the effectiveness of the proposed workflow. All performance metrics were calculated based on the first snapshot of the workflow's prediction for new opportunities and their final status after being labeled as closed (implying that, for any new opportunity, the model's prediction was stored before the opportunity was closed and used as training data for the model). The ML workflow retained a higher classification accuracy on the validation set (0.83) compared to user-entered predictions (0.63) while having a similar monetary accuracy.
CONCLUSION
In this paper, we proposed a novel machine learning workflow for predicting the winning probability of sales opportunities, implemented in a cloud computing environment. With our approach, sales opportunity data is cleansed, enhanced, and used to train a probabilistic ML classification model. This model is then used to predict the winning probability of new sales opportunities. In addition, optimal decision boundaries are calculated to predict the outcome of new opportunities.
This workflow was evaluated after being deployed to a multi-business B2B consulting firm. The ML workflow resulted in a superior overall performance compared to manual predictions made by salespersons. The proposed workflow combines the cloud-computing platform with ML algorithms for sales outcome prediction which makes it straightforward to integrate into existing sales pipelines.
It is worth mentioning that although data-driven prediction of sales outcomes is more concrete than subjective estimation, it should not overwhelmingly rule out sensible or justifiable sentiments regarding a sales opportunity. A data-driven approach, such as our workflow, can provide a reliable reference point for further assessment of the feasibility of a sales opportunity.
"Business",
"Computer Science"
] |
Protocol design and performance analysis for cognitive cooperative networks with multiple antennas
In this article, we deal with a novel access protocol design for cognitive cooperative networks with multiple antennas. According to the principles of cognitive radio, the secondary user (SU) can exploit the primary user (PU) burstiness to access the licensed spectrum in the proposed access protocol. To get more access opportunities to the licensed spectrum, the SU simultaneously relays the PU's packets and transmits its own packets based on superposition coding. Furthermore, to enable this concurrent transmission of the PU's and the SU's packets such that they are received without interference at the primary receiver and the secondary receiver, respectively, two weight vectors are designed at the multi-antenna SU based on the zero-forcing algorithm. Specifically, from a networking perspective, we analyze the performance of the proposed access protocol in terms of the maximum stable throughput and the average end-to-end delay for both the PU and the SU based on the principles of queueing theory. In addition, to protect the PU's performance and exhibit the advantage of adopting multiple-antenna technology, we jointly optimize the parameter of the superposition coding and the number of antennas, and define the maximum stable throughput cooperative gain relative to the non-cooperative access scheme. More importantly, the impact of imperfect channel state information (CSI) at the SU on the maximum stable throughput and the average end-to-end delay is evaluated in the simulations from a practical point of view. Analysis and simulation results demonstrate that the proposed access protocol achieves significant performance gains for both the PU and the SU, outperforms the existing cooperative access protocol based on dirty-paper coding, and remains robust against imperfect CSI.
Introduction
Cognitive radio has been proposed as a promising technology to resolve the contradiction between the inefficient utilization of spectrum and the limitation of spectrum resources in recent years [1]. In cognitive radio networks, the secondary user (SU) is allowed to share the licensed spectrum as long as the performance of the primary user (PU) is not affected. Currently, the SU can access the licensed spectrum through three schemes: interweaved spectrum sharing, underlay spectrum sharing and overlay spectrum sharing [2]. In the interweaved spectrum sharing scheme, [...] cognitive radio networks. Most of the existing works on the combined topic focus on solving problems in the physical layer [4][5][6][7][8][9][10], such as the enhancement of the sensing ability of the SU and the improvement of the outage and error performances for both the PU and SU. When delay-sensitive applications are considered, other performance metrics, namely the maximum stable throughput and the average end-to-end delay, become critical. Recently, various access protocols for cognitive cooperative networks have been proposed in which the SU cooperatively relays the PU's packets in exchange for more access opportunities [11][12][13][14]. Simeone et al. [11] analyzed the maximum stable throughput for both the primary system and the secondary system with and without relaying capability in a basic four-node, single-antenna configuration. Sadek et al. [12] investigated the stability and the delay of a cognitive multiple access relay channel, where the cognitive user acted as a cooperative node relaying the PU's packets without having its own packets. Krikidis et al. [13] proposed various protocol designs for a single-antenna cognitive cooperative system with a cluster of SUs by using dirty-paper coding (DPC) and opportunistic relay selection. However, the system model considered in [13] was not practical, as it required information to be perfectly exchanged within the cluster and rigorous synchronization among users. In addition, in the access protocol based on the complicated DPC [13] there was interference at the primary receiver (PR) and the secondary receiver (SR), which resulted in poor performance for the PU and SU. Bao et al. [14] studied the performance of a single-antenna cognitive cooperative system with multiple coexisting PUs and one SU in terms of the maximum stable throughput and the average end-to-end delay.
While these prior works have improved our understanding of protocol design and performance analysis for cognitive cooperative systems, their key limitation is that they all assume a single-antenna system. Given that multiple-antenna techniques will be adopted among the key enabling technologies for next-generation wireless communication systems, the importance of understanding the fundamental performance of cognitive multiple-antenna networks becomes increasingly evident [15][16][17][18]. Therefore, in this article we are interested in the scenario where the SU is equipped with multiple antennas, and we make a comprehensive analysis of the proposed protocol design for cognitive cooperative networks. To the best of our knowledge, the application of multiple-antenna techniques to cognitive cooperative networks has not been evaluated in the literature from a networking point of view. Hence, the goal of this work is to evaluate the stable throughput for the PU and SU and the end-to-end delay performance for the PU using the principles of queueing theory. To get more access opportunities, we let the SU act as a relay for the PU and allow the SU to simultaneously transmit the PU's packets and its own packets based on superposition coding. Furthermore, to avoid mutual interference, two weight vectors are designed at the SU for the PR and the SR based on the zero-forcing algorithm, respectively. More importantly, the impact of imperfect channel state information (CSI) on the performance of the proposed access protocol is also considered in the simulations. Results demonstrate that the designed protocol is robust against imperfect CSI and achieves significant performance gains for the PU and the SU, respectively.
The rest of this article is organized as follows. In Section 2, a system model is described. Section 3 designs the cognitive cooperative protocol and analyzes the maximum stable throughput of the proposed access protocol for both the PU and the SU, respectively. In Section 4, we define the cooperative gain of the proposed cooperative protocol and jointly optimize the parameter of the superposition coding and the number of antennas. In Section 5, the average end-to-end delay performance for the PU is analyzed. Finally, conclusions are drawn in Section 6.
System model
As illustrated in Figure 1, we consider a cognitive cooperative system consisting of a PU, a PR, a SU equipped with M antennas, and a SR. a We assume that both the PU and SU have an infinite buffer to store incoming packets, as in [11][12][13][14]. The channel is slotted in time and the transmission time of each packet equals one slot duration. Packet arrivals at the PU and the SU are Bernoulli random processes, independent and stationary from slot to slot, with means λp and λs (packets per slot), respectively. Due to the effect of the fading channel, a packet can be successfully or unsuccessfully received by the intended receiver, which is signaled by feedback acknowledgment (ACK) and negative acknowledgment (NACK) messages, b respectively.
Physical layer model
We assume that all channels experience independent stationary Rayleigh flat fading, with channel coefficients denoted as in Figure 1, which are circularly symmetric complex Gaussian random variables with zero mean and unit variance. The large-scale path loss between each transmitter-receiver pair is modeled as $d_{ij}^{-\alpha}$, where α (2 ≤ α ≤ 5) denotes the path-loss exponent and $d_{ij}$, i ∈ {PU, SU}, j ∈ {PR, SR}, represents the distance between the respective transmitter and receiver. Throughout the article, we assume that the CSI about $\mathbf{g}_0$, $\mathbf{f}_0$ and $\mathbf{f}_1$ is known at the SU. c The transmit powers at the PU and the SU are denoted by $P_p$ and $P_s$, respectively. $n_k \sim \mathcal{CN}(0, \sigma^2)$, ∀k ∈ {PR, SR}, is the complex additive white Gaussian noise (AWGN) at the PR and SR, and $\mathbf{n}_{SU} \sim \mathcal{CN}_{M,1}(\mathbf{0}_M, \sigma^2\mathbf{I}_M)$ is the AWGN vector at the SU. Accordingly, the signals received at the SU and the PR from the PU at time t can be respectively expressed as

$$\mathbf{y}_{SU} = \sqrt{P_p}\,\mathbf{g}_0 x_p + \mathbf{n}_{SU}, \qquad y_{PR} = \sqrt{P_p}\,h_0 x_p + n_{PR}, \tag{1}$$

where the index t is dropped without loss of generality.
In this article, we consider that the success or failure of packet reception on each link i → j is characterized by outage events and outage probabilities. The outage event $O_{ij}$ is defined as the instantaneous achievable rate C falling below a given rate R, with outage probability $\Pr\{O_{ij}\} = \Pr\{C < R\}$. In order to overcome the impact of the fading channel and decrease the outage probability between the PU and the SU, we apply a 1×M weight vector $\mathbf{w}_r$ to the received signal $\mathbf{y}_{SU}$ at the SU; the resultant scalar signal at the SU is given by

$$y_{SU} = \mathbf{w}_r \mathbf{y}_{SU} = \sqrt{P_p}\,\|\mathbf{g}_0\|\, x_p + \mathbf{w}_r \mathbf{n}_{SU},$$

where $(\cdot)^{\dagger}$ denotes the conjugate transpose operation and $\mathbf{w}_r = \mathbf{g}_0^{\dagger}/\|\mathbf{g}_0\|$, as stated in [19,20].
From (2) and (3), the received SNR of the primary signal at the PR and the SU can be respectively derived as $\mathrm{SNR}_{PR} = \frac{P_p}{\sigma^2}|h_0|^2$ and $\mathrm{SNR}_{SU} = \frac{P_p}{\sigma^2}\|\mathbf{g}_0\|^2$. Correspondingly, the outage probability for a given target rate $R_p$ between the PU and the PR is represented as

$$\Pr\{O_{PU,PR}\} = \Pr\{\log_2(1 + \mathrm{SNR}_{PR}) < R_p\} = 1 - \exp\!\left(-\frac{\Theta_0}{\gamma_0}\right),$$

where $\gamma_0 = \frac{P_p}{d^{\alpha}_{PU,PR}\,\sigma^2}$ and $\Theta_0 = 2^{R_p} - 1$.
Similarly, the outage probability between the PU and the SU is given by

$$\Pr\{O_{PU,SU}\} = \frac{\gamma\!\left(M,\, \Theta_0/\gamma_1\right)}{\Gamma(M)},$$

where $\gamma_1 = \frac{P_p}{d^{\alpha}_{PU,SU}\,\sigma^2}$, and γ(·,·) and Γ(·) denote the lower incomplete gamma function and the gamma function [21], respectively.
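A small numerical check of these outage expressions, using SciPy's regularized lower incomplete gamma function (symbols follow the reconstruction above; the parameter values are arbitrary):

```python
import numpy as np
from scipy.special import gammainc  # regularized: gamma(a, x) / Gamma(a)

def outage_pu_pr(P_p, d, alpha, sigma2, R_p):
    """Rayleigh outage of the direct PU -> PR link."""
    gamma0 = P_p / (d**alpha * sigma2)
    return 1.0 - np.exp(-(2**R_p - 1) / gamma0)

def outage_pu_su(P_p, d, alpha, sigma2, R_p, M):
    """Outage of the PU -> SU link with M-antenna MRC combining."""
    gamma1 = P_p / (d**alpha * sigma2)
    return gammainc(M, (2**R_p - 1) / gamma1)

# More antennas at the SU sharply reduce the relay-link outage probability
for M in (1, 2, 4, 8):
    print(M, outage_pu_su(P_p=1.0, d=0.5, alpha=3, sigma2=0.1, R_p=1.0, M=M))
```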
Queueing stability
In a communication network, the stability of the queues is a fundamental performance metric. Stability is defined as the state in which all queues in the network are stable. According to Rao and Ephremides [22], a queue is stable if and only if there exists a positive probability of the queue being empty, i.e.,

$$\lim_{t \to \infty} \Pr\{Q_i(t) = 0\} > 0,$$

where $Q_i(t)$ denotes the size of the ith queue at time t.
For a more rigorous definition of stability, we refer the reader to [22,23]. If the arrival and departure rates of a queueing system are stationary, stability can be checked using Loynes' theorem [24]. The theorem states that the queue is stable if the average arrival rate is strictly less than the average departure rate of the queue, i.e., the service rate; otherwise, the queue is unstable.
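As a quick numerical illustration of Loynes' criterion, a slotted Bernoulli-arrival/Bernoulli-service queue can be simulated to confirm that the backlog stays bounded only when the arrival rate is below the service rate (purely illustrative):

```python
import numpy as np

def mean_backlog(lam, mu, T=200_000, seed=0):
    """Time-averaged queue size of a slotted queue with Bernoulli arrivals/service."""
    rng = np.random.default_rng(seed)
    q = total = 0
    for _ in range(T):
        if q > 0 and rng.random() < mu:   # departure with probability mu
            q -= 1
        if rng.random() < lam:            # arrival with probability lambda
            q += 1
        total += q
    return total / T

print(mean_backlog(lam=0.3, mu=0.5))  # stable: small average backlog
print(mean_backlog(lam=0.6, mu=0.5))  # unstable: backlog grows with T
```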
Cognitive cooperative protocol design with multiple antennas
In this section, we investigate a novel cognitive cooperative access protocol which efficiently combines the principles of cognitive radio with multiple-antenna technology. From a higher, network-layer viewpoint, we focus on analyzing the maximum stable throughput for both the PU and SU using queueing theory. Furthermore, we assume that the SU has perfect spectrum sensing ability and mainly analyze the impact of multiple antennas on the cooperative access protocol for the PU and the SU; the impact of imperfect spectrum sensing is beyond the scope of this article.
Non-cooperative access scheme
In order to evaluate the gains obtained from the cooperative access scheme, the non-cooperative access scheme is analyzed as a baseline scenario, in which the PU transmits its data through the primary link directly without any assistance from the SU. The queue size of the PU or SU at time t, denoted by $Q_i(t)$, $i \in \{p, s\}$, evolves as

$$Q_i(t+1) = \max\{Q_i(t) - Y_i(t),\, 0\} + X_i(t),$$

where $X_i(t)$ represents the number of packet arrivals at time t and is a stationary Bernoulli process with finite mean $\lambda_i$, and $Y_i(t)$ denotes the number of departures in slot t. In the non-cooperative access scheme, the service process of the PU can be modeled as

$$Y_p(t) = \mathbf{1}\{\overline{O^t_{PU,PR}}\},$$

i.e., a packet leaves the PU's queue only when the PU-PR transmission does not suffer an outage. From the definition of the service process, the average service rate of the PU is given by

$$\mu_{PN} = 1 - \Pr\{O_{PU,PR}\}.$$

Since perfect spectrum sensing is assumed at the SU, the PU's queue and the SU's queue are not interacting. Therefore, according to Loynes' theorem, the average service rate also defines the maximum stable throughput of the corresponding queue. Hence, the stability of the PU's queue under the non-cooperative access scheme requires

$$\lambda_p < \mu^{\max}_{PN}.$$

According to the principles of cognitive radio, the SU is allowed to access the licensed spectrum without producing interference to the PR. Therefore, the SU can access the licensed spectrum only when the PU has no packet in its queue. The service process of the SU can thus be modeled as [13,14]

$$Y_s(t) = \mathbf{1}\{Q_p(t) = 0 \cap \overline{O^t_{SU,SR}}\},$$

where $Q_p(t) = 0$ represents the event that the PU's queue is empty at time t, which, according to Little's theorem [24], has probability $\Pr\{Q_p(t) = 0\} = 1 - \lambda_p/\mu^{\max}_{PN}$. Hence, the average service rate of the SU is given by

$$\mu_{SN} = \left(1 - \frac{\lambda_p}{\mu^{\max}_{PN}}\right)\left(1 - \Pr\{O_{SU,SR}\}\right).$$

Correspondingly, the stability of the SU's queue under the non-cooperative access scheme requires $\lambda_s < \mu^{\max}_{SN}$.
Cooperative access scheme
In this section, we investigate a cognitive cooperative access scheme, in which the SU acts as a relay to deliver the PU's packets that were not successfully received by the PR over the direct primary link. Therefore, in contrast to the non-cooperative access scheme, a PU packet is removed from its queue whenever it is correctly received by either the PR or the SU. Moreover, the conventional non-cooperative ACK/NACK mechanism must be revised so that the SU is allowed to send an ACK to notify the PU of successful decoding [11][12][13]. To get more access opportunities, the SU simultaneously relays the PU's packets and transmits its own packets based on superposition coding in the cooperative access scheme. Further, so that the relayed PU packets and the SU's own packets are received without interference at the PR and SR, respectively, we design two M × 1 weight vectors at the SU, $\mathbf{w}_p$ for the PU's signal and $\mathbf{w}_s$ for the SU's signal. The received signals at the PR and SR can be represented as

$$y_{PR} = \sqrt{\beta P_s}\,\mathbf{f}_1^{\dagger}\mathbf{w}_p x_p + \sqrt{(1-\beta)P_s}\,\mathbf{f}_1^{\dagger}\mathbf{w}_s x_s + n_{PR},$$
$$y_{SR} = \sqrt{\beta P_s}\,\mathbf{f}_0^{\dagger}\mathbf{w}_p x_p + \sqrt{(1-\beta)P_s}\,\mathbf{f}_0^{\dagger}\mathbf{w}_s x_s + n_{SR},$$

where β (0 ≤ β ≤ 1) denotes the power allocation factor, i.e., the parameter of the superposition coding. In order to eliminate the interference signal and maximize the desired signal at the PR and SR, we adopt the zero-forcing algorithm to design the weight vectors for simplicity. The two weight vectors $\mathbf{w}_p$ and $\mathbf{w}_s$ can then be obtained from the following optimization problems [15,25]:

$$\mathbf{w}_p = \arg\max_{\|\mathbf{w}\| = 1} |\mathbf{f}_1^{\dagger}\mathbf{w}|^2 \ \text{ s.t. }\ \mathbf{f}_0^{\dagger}\mathbf{w} = 0, \qquad \mathbf{w}_s = \arg\max_{\|\mathbf{w}\| = 1} |\mathbf{f}_0^{\dagger}\mathbf{w}|^2 \ \text{ s.t. }\ \mathbf{f}_1^{\dagger}\mathbf{w} = 0.$$

Although it may seem difficult to solve these two optimization problems directly, projection matrix theory [26] yields the optimum weight vectors as

$$\mathbf{w}_p = \frac{V^{\perp}_{SR}\mathbf{f}_1}{\|V^{\perp}_{SR}\mathbf{f}_1\|}, \qquad \mathbf{w}_s = \frac{V^{\perp}_{PR}\mathbf{f}_0}{\|V^{\perp}_{PR}\mathbf{f}_0\|},$$

where $V^{\perp}_{SR}$ and $V^{\perp}_{PR}$ are the projection matrices onto the orthogonal complements of the SR and PR channels, given respectively by

$$V^{\perp}_{SR} = \mathbf{I}_M - \frac{\mathbf{f}_0\mathbf{f}_0^{\dagger}}{\|\mathbf{f}_0\|^2}, \qquad V^{\perp}_{PR} = \mathbf{I}_M - \frac{\mathbf{f}_1\mathbf{f}_1^{\dagger}}{\|\mathbf{f}_1\|^2}.$$

To this end, the received SNRs at the PR and the SR can be calculated from (14)-(17), after some algebraic manipulations, as

$$\mathrm{SNR}_{PR} = \frac{\beta P_s}{\sigma^2}\,|\mathbf{f}_1^{\dagger}\mathbf{w}_p|^2, \qquad \mathrm{SNR}_{SR} = \frac{(1-\beta) P_s}{\sigma^2}\,|\mathbf{f}_0^{\dagger}\mathbf{w}_s|^2.$$

To derive the maximum stable throughput for the primary and secondary systems under the cognitive cooperative access scheme, we first present the following theorem [15,25].
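A numerical sketch of the projection-based zero-forcing construction reconstructed above (purely illustrative; it simply verifies that each beam nulls the unintended receiver):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4
f0 = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)  # SU -> SR
f1 = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)  # SU -> PR

def proj_orth(v):
    """Projector onto the orthogonal complement of span{v}."""
    return np.eye(len(v)) - np.outer(v, v.conj()) / np.linalg.norm(v) ** 2

w_p = proj_orth(f0) @ f1                 # beam for the relayed PU packets
w_p /= np.linalg.norm(w_p)
w_s = proj_orth(f1) @ f0                 # beam for the SU's own packets
w_s /= np.linalg.norm(w_s)

print(abs(f0.conj() @ w_p))  # ~0: relayed PU signal causes no interference at the SR
print(abs(f1.conj() @ w_s))  # ~0: SU's own signal causes no interference at the PR
```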
The theorem states that, with the zero-forcing weight vectors above, the effective channel power gain $|\mathbf{f}_1^{\dagger}\mathbf{w}_p|^2$ (and likewise $|\mathbf{f}_0^{\dagger}\mathbf{w}_s|^2$) follows a chi-square distribution with 2(M − 1) degrees of freedom, and its probability density function is given as

$$f(x) = \frac{x^{M-2}e^{-x}}{\Gamma(M-1)}, \qquad x \ge 0.$$

In what follows, we analyze the maximum stable throughput of the cognitive cooperative access scheme and compare it with that of the non-cooperative access scheme.
PU stability
In contrast to the non-cooperative access scheme, the SU maintains two queues in the cognitive cooperative access scheme: one queue $Q_s$ storing its own packets and one queue $Q_{sp}$ containing the packets from the PU that were not successfully received by the PR. A packet in the queue $Q_p$ is removed in the cooperative access scheme whenever it is successfully received by the PR or the SU, which means that the maximum stable throughput of the PU depends on the stability of both $Q_p$ and $Q_{sp}$. Hence, the service process of the PU can be modeled as

$$Y_p(t) = \mathbf{1}\{\overline{O^t_{PU,PR}} \cup \overline{O^t_{PU,SU}}\},$$

and the average service rate of the PU in the cognitive cooperative access scheme can be derived as

$$\mu_{PC} = 1 - \Pr\{O_{PU,PR}\}\Pr\{O_{PU,SU}\}.$$

In what follows, we analyze the stability of the queue $Q_{sp}$, whose evolution can also be modeled as

$$Q_{sp}(t+1) = \max\{Q_{sp}(t) - Y_{sp}(t),\, 0\} + X_{sp}(t),$$

where $Y_{sp}(t) = \mathbf{1}\{Q_p(t) = 0 \cap \overline{O^t_{S,P}(\beta)}\}$ denotes the number of packet departures from the queue $Q_{sp}$ and $X_{sp}(t)$ is the number of packet arrivals to the queue $Q_{sp}$ at time t. It is worth noting from (23) that only when the queue $Q_p$ is empty does the SU access the channel to transmit its own packets together with the packets in the queue $Q_{sp}$ based on superposition coding, thereby gaining more access opportunities.

Correspondingly, the average packet arrival and departure rates of the queue $Q_{sp}$ at time t can be computed from the corresponding outage probabilities, where $\gamma_3 = \frac{\beta P_s}{d^{\alpha}_{SU,PR}\,\sigma^2}$. To this end, according to Loynes' theorem [23], the stability condition of the PU's queue in the proposed cooperative access scheme can be derived from (22)-(25): both $Q_p$ and $Q_{sp}$ must have arrival rates strictly below their respective average service rates.
SU stability
According to the cognitive cooperative access scheme, the service process of the SU can be modeled as

$$Y_s(t) = \mathbf{1}\{Q_p(t) = 0 \cap Q_{sp}(t) = 0 \cap \overline{O^t_{SU,SR}}\},$$

where $Q_{sp}(t) = 0$ denotes the event that the queue $Q_{sp}$ is empty at time t; by Little's theorem [24], this event has probability $\Pr\{Q_{sp}(t) = 0\} = 1 - \lambda_{sp}/\mu_{sp}$. When the queue $Q_{sp}$ is empty, the SU only establishes a communication link between itself and the SR. From (27), the average service rate of the SU can be derived accordingly, where $\gamma_4 = \frac{(1-\beta) P_s}{d^{\alpha}_{SU,SR}\,\sigma^2}$. Using Loynes' theorem [23], the stability condition of the SU's queue in the cognitive cooperative access scheme requires $\lambda_s < \mu^{\max}_{SC}$.
Cooperative gains
In this section, we analyze the cooperative gains achieved by the cognitive cooperative access scheme for the PU and the SU, respectively. We define the maximum stable throughput cooperative gains for the PU and the SU as the ratios of the cooperative to the non-cooperative maximum stable throughputs,

$$G_p = \frac{\mu^{\max}_{PC}}{\mu^{\max}_{PN}}, \qquad G_s = \frac{\mu^{\max}_{SC}}{\mu^{\max}_{SN}}.$$

From (26) and (33), the maximum stable throughput of the PU with cooperation can be derived accordingly. Since the maximum stable throughput of the PU without cooperation is independent of M, it remains constant as M grows asymptotically large. To this end, the cooperative gain bound of the PU can be derived as in (32).
By following a proof similar to that for the PU, the cooperative gain bound of the SU can be obtained after some simple mathematical manipulations.
Parameters optimization
The parameter of the superposition coding, i.e., the power allocation factor β, introduces an interesting tradeoff between the maximum stable throughput of the PU and that of the SU. Hence, in this section we optimize the power allocation factor β to maximize the stable throughput of the SU while supporting a pre-selected PU stable throughput $\lambda_{p0}$ ($\lambda_{p0} < \mu^{\max}_{PN}$). The optimization problem can be formulated as

$$(\beta_{opt}, M_{opt}) = \arg\max_{\beta,\, M} \ \mu^{\max}_{SC}(M, \beta) \quad \text{s.t.} \quad \lambda_p(M, \beta) \ge \lambda_{p0}.$$

It is worth pointing out that the solution of the optimization problem should satisfy the inequality $\lambda_p(M, \beta) \ge \lambda_{p0}$. Since it is difficult to obtain a closed-form expression for $\beta_{opt}$, we numerically evaluate $\beta_{opt}$ by Monte Carlo simulations. As shown in Figure 2, the stable throughput of the SU decreases with the power allocation factor β, whereas the stable throughput of the PU flattens once β exceeds a certain value. Hence, the optimal power allocation factor $\beta_{opt}$ can always be found for all practical SNR values under different antenna configurations through simulations. Besides, note that the use of $\beta_{opt}$ and $M_{opt}$ based on (35) will simultaneously (i) achieve a significant improvement in stable throughput at both the PU and the SU, and (ii) guarantee that the PU's stable throughput under the cooperative access scheme equals or outperforms that under the non-cooperative scheme.
Numerical results for stable throughput performance
In this section, we present extensive numerical results to evaluate the performance of the proposed cooperative access scheme. To highlight its advantage, the existing cooperative access scheme based on DPC in [13] is included for comparison in the simulations. Throughout the article, a system topology where the PU, SU, and PR are collinear is considered. Unless otherwise specified, the simulation parameters are set as follows: [...]

Figure 3 compares the maximum stable throughput of the cooperative and non-cooperative access schemes as a function of the PU's arrival rate $\lambda_p$ using a fixed power allocation factor β = 0.5. We label the proposed cooperative access scheme based on the zero-forcing algorithm as ZF-cooperation. It is noted from the figure that the proposed cooperative access scheme outperforms its non-cooperative counterpart. Further, as expected in any conventional multiple-antenna system, since the extra antennas provide additional spatial diversity gain, the maximum stable throughput of the SU is significantly improved in the proposed cooperative access scheme. However, in contrast to the cooperative access scheme, increasing the number of antennas at the SU does not provide any diversity gain in the non-cooperative access scheme due to the restriction of access opportunities for the SU. Hence, increasing the number of antennas only provides additional coding gain for the SU in the non-cooperative access scheme.
In Figure 4, we compare the proposed cooperative access scheme, i.e., ZF-cooperation, with the conventional cooperative access scheme using DPC in [13] (labeled DPC-cooperation). We adopt the selection algorithm for the parameter of superposition coding in [13], in which the power allocation factor β is selected from the PU's point of view. Hence, we select β0 = 0.6 and β1 = 0.7 for the ZF-cooperation scheme and the DPC-cooperation scheme, respectively. For a fair comparison between the two schemes, we assume that in the ZF-cooperation scheme the PU and SU transmit with powers P_p and P_s, respectively, while in the DPC-cooperation scheme the PU and each SU transmit with powers P_p and P_s/2, respectively. This is due to the fact that two SUs are selected to transmit their own packets and relay the PU's packets in the DPC-cooperation scheme. As can be observed in the figure, the proposed ZF-cooperation scheme significantly outperforms the DPC-cooperation scheme in both primary and secondary stable throughput. Furthermore, the non-cooperative access scheme achieves better performance than the DPC-cooperation scheme in the low-SNR regime. This can be explained by the fact that the DPC-cooperation scheme introduces severe interference to the primary transmission in the low-SNR regime, which results in a lower throughput gain for the SU. By contrast, there is no interference at the PR and SR over the whole SNR region in the proposed ZF-cooperation scheme.

Figure 5 illustrates the impact of the number of antennas M on the maximum PU stable throughput when P_s = 5 dB and P_p = 1, 3, 5 dB, respectively. As expected, when the transmit power of the PU increases, the maximum PU stable throughput is significantly improved. It is also observed that the maximum PU stable throughput of the proposed ZF-cooperation scheme improves as the number of antennas M increases, since a higher diversity gain is obtained with more antennas. However, the maximum PU stable throughput remains almost constant for M ≥ 6 under the different PU transmit powers, which implies that an adequate number of antennas can be selected in the system design to reduce the complexity of our scheme.

Figure 6 plots the cooperative gain of the PU versus the transmit power P_s for P_p = 1 dB and 3 dB, respectively. It is noted from this figure that more cooperative gain is obtained for the PU when the quality of the PU's direct link is poor. In addition, we can observe that the cooperative gain gradually improves with increasing SU transmit power P_s and number of antennas M. Specifically, when the number of antennas increases beyond a certain value, the full cooperative gain is achieved in the proposed cooperative access scheme, which verifies our analysis in Section 4.

Figure 7 shows the impact of imperfect CSI on the PU and SU performance of the proposed ZF-cooperation scheme. In the previous analysis, we assumed that perfect CSI about the links is available at the SU. Such an assumption, however, does not hold in practical systems, because channel knowledge is always outdated due to feedback delay and the time-varying nature of the wireless link, which implies that the impact of imperfect CSI must be considered.
Since we are more interested in studying the impact of imperfect CSI on the designed weight vectors at the SU, we only consider imperfect CSI for the channel $\mathbf{f}_0$ between the SU and the SR and the channel $\mathbf{f}_1$ between the SU and the PR in the simulation. The imperfect CSI can be described, using the correlation model [27][28][29], as

$$\hat{\mathbf{f}}_0 = \sqrt{1-\delta^2}\,\mathbf{f}_0 + \delta\,\mathbf{n}_0, \qquad \hat{\mathbf{f}}_1 = \sqrt{1-\delta^2}\,\mathbf{f}_1 + \delta\,\mathbf{n}_1,$$

where $\hat{\mathbf{f}}_0$ denotes the outdated CSI between the SU and the SR, and $\hat{\mathbf{f}}_1$ is the outdated CSI between the SU and the PR. $\mathbf{n}_0$ and $\mathbf{n}_1$ are independent CSCG random vectors with each element having zero mean and unit variance, and $\delta^2$ (0 ≤ δ² ≤ 1) represents the variance of the outdated CSI. As shown in Figure 7, the stable throughput of both the PU and the SU decreases for higher δ². This is expected, since an increase of δ² means that the channel information about $\mathbf{f}_0$ and $\mathbf{f}_1$ at the SU becomes more inaccurate. On the other hand, although the stable throughput of the PU decreases as the variance of the outdated CSI δ² grows, the proposed ZF-cooperation scheme still outperforms the non-cooperative access scheme, which means that the proposed ZF-cooperation scheme remains robust against imperfect CSI.

Figure 8 shows how the distance $d_{PU,SU}$ affects the maximum stable throughput of the PU and SU when M = 3 and P_p = P_s = 1 dB. From the figure, we see that the proposed cooperative access scheme achieves better PU stable throughput than the non-cooperative access scheme over all values of $d_{PU,SU}$. In addition, there exists an optimal $d_{PU,SU}$ for the maximum stable throughput of the PU and the SU, respectively. For the proposed cooperative access scheme, the PU's stable throughput improves with increasing $d_{PU,SU}$ for 0.1 ≤ $d_{PU,SU}$ < 0.7, but degrades for 0.7 ≤ $d_{PU,SU}$ ≤ 0.9. This is because the PU's performance under the proposed cooperative access scheme is dominated by the worst link, as in conventional relay networks. For 0.1 ≤ $d_{PU,SU}$ < 0.7, the SU-PR link is the worse link compared to the PU-SU link, having a lower SNR due to the longer distance, while for 0.7 ≤ $d_{PU,SU}$ ≤ 0.9 the longer distance degrades the link between the PU and the SU.
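The outdated-CSI model above can be exercised numerically to see how the zero-forcing null degrades with δ (an illustrative sketch; δ and M are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
M, delta = 4, 0.3

def cscg(n):
    """Unit-variance circularly symmetric complex Gaussian vector."""
    return (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

f0, f1 = cscg(M), cscg(M)                              # true SU->SR and SU->PR channels
f0_hat = np.sqrt(1 - delta**2) * f0 + delta * cscg(M)  # outdated CSI of f0

# Zero-forcing beam for the PU's packets, designed on the *outdated* SR channel
P = np.eye(M) - np.outer(f0_hat, f0_hat.conj()) / np.linalg.norm(f0_hat) ** 2
w_p = P @ f1
w_p /= np.linalg.norm(w_p)

print(abs(f0.conj() @ w_p) ** 2)  # residual leakage at the SR; > 0 whenever delta > 0
```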
Delay analysis
In this section, we analyze the average end-to-end delay of the PU in the proposed cooperative access scheme. Before delving into the details, we first define the end-to-end delay of a packet as the time from the packet's arrival at the queue $Q_i$ until it is correctly received by the respective receiver.
Non-cooperative access scheme
According to Little's theorem, the average end-to-end delay of a packet in the queue $Q_p$ can be modeled as

$$T = \frac{\mathbb{E}[Q_p]}{\lambda_p},$$

where $\mathbb{E}[Q_p]$ denotes the average queue size. Let A denote the event that a packet is successfully transmitted from the PU to the PR through the primary link directly; the probability of the event A follows from the outage probabilities of the PU-PR and PU-SU links. To this end, the average end-to-end delay of a packet in the queue $Q_p$ in the proposed cooperative scheme is given by the direct-path delay weighted by Pr{A}, plus the relayed-path delay, $T_P + T_{SP}$, weighted by 1 − Pr{A}. The closed-form expressions for the average delays $T_P$ and $T_{SP}$ can be derived by following a process similar to that of the non-cooperative access scheme. This completes the proof of Theorem 4.
Results and discussions
Here we compare the average end-to-end delay of the PU's queue under the proposed ZF-cooperation scheme with that of the non-cooperative scheme. We consider two cases of interest in the simulations: Case 1, the SU has perfect CSI; Case 2, the SU has imperfect CSI. Figures 9 and 10 illustrate the delay performance of the proposed cooperative access scheme versus the PU's arrival rate $\lambda_p$ under the perfect and imperfect CSI cases, respectively. As shown in Figure 9, the average end-to-end delay of the proposed ZF-cooperation scheme outperforms that of the non-cooperative scheme. Moreover, packets in the PU's queue suffer increasing delay as the PU's arrival rate grows. However, increasing the number of antennas significantly improves the average end-to-end delay of the proposed ZF-cooperation scheme thanks to the larger diversity gain. In Figure 10, we investigate the impact of imperfect CSI on the average end-to-end delay of the PU under different antenna configurations. It is observed that the average delay of the PU is almost unaffected by imperfect CSI at low arrival rates and can be improved by increasing the number of antennas. In addition, the proposed cooperative access scheme still outperforms the non-cooperative access scheme under imperfect CSI, which demonstrates that the proposed cooperative access scheme remains robust against imperfect CSI at the SU.
Conclusions
In this article, we have dealt with protocol design for cognitive cooperative networks with multiple antennas. To simultaneously transmit the PU's data and the SU's own data based on superposition coding such that they are received without interference at the PR and SR, respectively, two weight vectors were designed at the SU based on the zero-forcing algorithm. We analyzed the maximum stable throughput and the average end-to-end delay of the proposed cooperative access scheme and compared them with those of the existing cooperative access scheme based on DPC. Simulation results demonstrated that the proposed cooperative access scheme achieves better performance than the existing DPC-based scheme. In addition, the cooperative stable throughput gains for the PU and the SU were defined to study the effect of antennas on the performance of cognitive cooperative networks, and the corresponding upper bounds were derived. Through analysis and simulations, we found that the upper bound of the cooperative gain can be approached by increasing the transmit power of the SU or the number of antennas. Furthermore, the impact of imperfect CSI on the performance of the proposed cooperative access scheme was investigated from a practical viewpoint, and simulation results showed that the proposed scheme remains robust against imperfect CSI at the SU.

Endnotes

a. The system model, corresponding to a practical scenario in which a secondary base station serves one SU, is also considered in [15].

b. The ACK/NACK feedback is assumed to be error-free, since the short ACK/NACK packets can be coded at a very low rate in the feedback channel [11][12][13].

c. In practice, the CSI between the SU and the SR can be obtained through classic channel estimation and feedback mechanisms as in [30], while the CSI about g_0 and f_1 can be obtained at the SU by directly estimating pilot signals from the primary system or by using a band manager that exchanges channel information between the PU and the SU [31].
"Computer Science",
"Business"
] |
Homogeneous and Heterogeneous Photocatalysis for the Treatment of Pharmaceutical Industry Wastewaters: A Review
Pharmaceuticals are biologically active compounds used for therapeutical purposes in humans and animals. Pharmaceuticals enter water bodies in various ways and are detected at concentrations of ng L−1–μg L−1. Their presence in the environment, and especially long-term pollution, can cause toxic effects on the aquatic ecosystems. The pharmaceutical industry is one of the main sources introducing these compounds in aquatic systems through the disposal of untreated or partially treated wastewaters produced during the different procedures in the manufacturing process. Pharmaceutical industry wastewaters contain numerous pharmaceutical compounds and other chemicals and are characterized by high levels of total dissolved solids (TDS), biochemical oxygen demand (BOD) and chemical oxygen demand (COD). The toxic and recalcitrant nature of this type of wastewater hinders conventional biological processes, leading to its ineffective treatment. Consequently, there is an urgent demand for the development and application of more efficient methods for the treatment of pharmaceutical industry wastewaters. In this context, advanced oxidation processes (AOPs) have emerged as promising technologies for the treatment of pharmaceutical industry wastewaters through contaminant removal, toxicity reduction as well as biodegradability improvement. Therefore, a comprehensive literature study was conducted to review the recent published works dealing with the application of heterogeneous and homogeneous photocatalysis for pharmaceutical industry wastewater treatment as well as the advances in the field. The efficiency of the studied AOPs to treat the wastewaters is assessed. Special attention is also devoted to the coupling of these processes with other conventional methods. Simultaneously with their efficiency, the cost estimation of individual and integrated processes is discussed. Finally, the advantages and limitations of the processes, as well as their perspectives, are addressed.
Introduction
The release of untreated or partially treated industrial wastewaters into the environment is a considerable source of pollution and can cause a variety of adverse effects on the aquatic environment and human health. In general, industrial activities consume considerable amounts of water and simultaneously produce huge volumes of wastewaters characterized by high toxicity as well as high levels of biochemical oxygen demand (BOD), chemical oxygen demand (COD), suspended solids (SS) and inorganic and organic pollutants [1,2].
Pharmaceutical industry wastewater is a characteristic kind of industrial wastewater which contains numerous non-biodegradable organic compounds, such as drugs, antibiotics, X-ray contrast agents, cytostatic agents, analgesics, anti-inflammatories, antiepileptics, antiseptics, blood lipid regulators, antidepressants, steroids, hormones, flame retardants, and other broadly used chemicals at different concentrations, contributing to high values of BOD, COD and total solids [3]. Due to the enormous volume and hazardous nature of the wastewater produced during manufacturing activities, the pharmaceutical industry is classified as one of the major polluting industries.
Table 1. Composition of pharmaceutical industry wastewater [3,6,7].
In addition, the complex composition of pharmaceutical industry wastewaters, and especially their recalcitrant compounds, significantly reduces the performance of conventional wastewater treatment processes. Because of the high concentrations and the non-biodegradable nature of many pharmaceuticals, commonly employed biological and chemical treatment methods are often inefficient for their complete removal [8,9]. Consequently, the disposal of treated effluents into receiving aquatic systems can lead to contamination with pharmaceuticals, which have been proven to cause toxic effects on various microorganisms [6].
High concentrations of pharmaceuticals end up in the environment from various pharmaceutical production facilities. Industrial effluents discharged from manufacturing units are characterized as a major source of pharmaceutical compounds entering aquatic systems [4,5]. The main pathway for pharmaceuticals to enter the environment is through the discharge of pharmaceutical industry wastewater to wastewater treatment plants (WWTP) and then via municipal effluents. It is estimated that approximately half of the pharmaceutical wastewaters produced worldwide are discarded without specific treatment [6].
Side-effects that have been reported due to the presence of pharmaceutical compounds in the environment and its microorganisms include the development of antibiotic resistance, retardation of nitrite oxidation and methanogenesis, reduction in the growth rate of microalgae, feminization of fish, and alterations in the behavior and migratory patterns of salmon [6].
Therefore, the efficient treatment of pharmaceutical industry wastewaters is a significant demand before their disposal into water streams, in order to avoid serious environmental problems. As conventional treatment technologies have proven to be inefficient, several alternatives for the treatment of industrial wastewaters have been proposed [10,11]. Over the last years, advanced oxidation processes (AOPs) have received considerable attention due to their high versatility and efficiency in water decontamination. Their efficiency is based on the generation of various reactive species, especially the highly oxidative hydroxyl radical (HO•). HO• radicals have a high redox potential, are non-selective, and are able to oxidize various non-biodegradable organic pollutants, such as dyes, pesticides, pharmaceuticals, toxins, etc. In addition to oxidation by HO•, several other simultaneous reactions can take place, leading to the degradation of the organic pollutants present in wastewater. AOPs offer different possible routes for the in situ formation of the reactive species, including methods such as photo-Fenton, semiconductor photocatalysis, UV/H2O2, sonolysis, wet air oxidation and O3-based processes, among others [10][11][12][13].
Among the AOPs, photocatalysis has proved to be one of the most efficient processes for the treatment of industrial wastewaters. Photocatalysis can take place in homogeneous or heterogeneous systems [1,10,13]. As HO• radicals possess a non-selective nature and a high oxidation potential, they exhibit high reaction rates with numerous organic contaminants. Based on the existing literature data, the transformation of recalcitrant pollutants into more biodegradable molecules, and high percentages of mineralization to CO2, water and mineral acids, can be achieved under specific conditions [1,13,14]. Being aware of the above, numerous research articles have focused on the removal of organic and inorganic contaminants by homogeneous and heterogeneous photocatalysis. In recent years, scientific interest has been extended to the application of homogeneous and heterogeneous photocatalysis for pharmaceutical industry wastewater treatment. Therefore, this study provides an overview of the photocatalytic treatment and purification of real pharmaceutical industry wastewater. The data were collected from the Elsevier Scopus database by searching article titles, abstracts, and keywords as follows: pharmaceutical industry wastewaters, photocatalytic treatment, heterogeneous photocatalysis and photo-Fenton, with the document type "article". The generated literature list was checked manually, and only articles focused on the treatment of real pharmaceutical industry wastewaters using the targeted processes were considered.
Based on this overview, the efficiency of the processes in terms of COD reduction, pharmaceutical removal, toxicity abatement and biodegradability increase is discussed. In addition, significant parameters that affect the processes, as well as their potential applicability at large scale, are evaluated. Special attention is also devoted to their combination with other treatment technologies, which can lead to optimum efficiency at lower cost. Finally, the gaps in the literature that should be investigated are highlighted.
Photocatalysis in Wastewater Treatment: Fundamental Aspects
Photocatalysis is the acceleration of a photochemical transformation by the action of a catalyst such as TiO2 or Fenton's reagent [6]. After the discovery of photocatalytic water splitting by Fujishima and Honda (1972) [15], this process has been extensively studied for environmental applications, including water and wastewater treatment, using various photocatalysts. One of the most used photocatalysts is TiO2 in the forms of anatase and rutile; the band gap of TiO2 is ~3.2 eV for the anatase phase and ~3.0 eV for the rutile phase. Its wide application is justified by its significant advantages, i.e., low cost, simple synthesis, and chemical and photochemical stability [16,17].
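As a quick worked example of what these band gaps imply for the required light source, using the photon-energy relation E = hc/λ with hc ≈ 1240 eV nm:

    λ(anatase) ≈ 1240 eV nm / 3.2 eV ≈ 388 nm;  λ(rutile) ≈ 1240 eV nm / 3.0 eV ≈ 413 nm.

Both absorption edges therefore lie in the UV/near-UV, which is why bare TiO2 can exploit only a small fraction of the solar spectrum; this motivates the doping strategies and visible-light-responsive materials discussed below.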
The mixture of anatase/rutile (≈70/30) crystal phases, known as Degussa P25, is one of the most efficient commercially available photocatalytic materials and has been investigated extensively. However, with the development of nanotechnology, doping and co-doping of TiO2 with metals and non-metals, immobilization of the catalyst on a suitable support substrate, as well as its combination with other materials, have been adopted and studied for the removal of organic pollutants. Other photocatalysts that have also been used extensively to remove pollutants from water include various sulfides, bismuth oxyhalides, graphitic carbon nitride (g-C3N4), perovskites and composite materials [13,14,16]. However, most of them have not been investigated for the treatment of real pharmaceutical industry wastewater.
The overall process of heterogeneous photocatalysis can be described by Equations (1)-(7) included in Table 2 [1,13,16]. TiO2 is used as a representative photocatalyst to describe the mechanism. The photocatalytic reaction initiates with the irradiation of TiO2 by a photon of energy equal to or greater than its band gap width, and the formation of photogenerated electron/hole (e−/h+) pairs. In aqueous suspensions, the produced holes (h+ in the valence band) and electrons (e− in the conduction band) can react with surface HO− groups and O2, respectively, leading to the formation of HO• and O2•− radicals. If oxygen is limited, rapid recombination of the photoproduced holes and electrons can take place. The HO2• radical is formed by the reaction of a proton with the O2•− radical, and further reactions can produce HO• through the formation of H2O2. All these species, and especially HO• radicals, can react with wastewater pollutants, leading to their removal/mineralization. The e− and h+ can also lead to the reduction and oxidation of molecules adsorbed on the surface of the photocatalytic material [1,13]. An increase or decrease of the reaction rate is often associated with suppressed or enhanced e−/h+ recombination, respectively [11].
Table 2. Heterogeneous and homogeneous photocatalysis mechanisms [1,13,16,18]. (Panels: heterogeneous photocatalysis, with TiO2 as a representative photocatalyst, and the photo-Fenton process.)
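For reference, a representative reaction set for TiO2 photocatalysis, consistent with the mechanism described above (a sketch of what Equations (1)-(7) typically contain; the exact equations and numbering in the original Table 2 may differ), is:

    TiO2 + hν → e−(CB) + h+(VB)
    h+(VB) + HO−(surface) → HO•
    e−(CB) + O2 → O2•−
    O2•− + H+ → HO2•
    2 HO2• → H2O2 + O2
    H2O2 + e−(CB) → HO• + HO−
    e−(CB) + h+(VB) → recombination (heat)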
In general, five steps take place during photocatalysis [11]: 1. transfer of molecules to the photocatalyst's surface; 2. adsorption of molecules on the surface; 3. activation of the catalyst and decomposition of the adsorbed molecules; 4. desorption of the products; 5. removal of the reaction products from the photocatalyst's surface.
The factors that can affect the degradation efficiency are the initial organic load of the wastewater, the type of irradiation, the mass of catalyst, pH, temperature, irradiation intensity, the concentration of oxygen, the addition of oxidants and the presence of substances that can scavenge the reactive species. Furthermore, the separation of heterogeneous catalysts from the treated wastewater, mass transfer limitations on immobilized catalysts and the low quantum yield of HO• radical production can limit the application of the process at real scale [13].
In the photo-Fenton process, HO• and O2•− are generated during the irradiation of the H2O2 and Fe2+ mixture (Fenton's reagent) under acidic conditions, according to Equations (8)-(11). The addition of oxalic acid to a solution containing Fe(III) leads to the formation of ferrioxalate complexes (ferrioxalate-assisted photo-Fenton process), which under irradiation can also produce oxidative species such as O2•−, HO2• and HO• radicals (Equations (12)-(19)) [18]. The main advantage of the photo-Fenton and ferrioxalate-assisted photo-Fenton processes is the use of sunlight as the energy source, since iron-organic acid complexes absorb at wavelengths of the visible light spectrum. The addition of oxalic acid to the photo-Fenton system promotes the formation of ferrioxalate complexes, which expand the useful range of the solar spectrum up to 550 nm and can provide cost-effective and environmentally benign treatment [18]. However, certain challenges facing these processes in wastewater treatment are the low pH values (~3) required, the formation of iron sludge, and the high concentrations of dissolved iron ions that can enter the water streams after their application [19,20].
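As a sketch of the chemistry referenced above, the core photo-Fenton and ferrioxalate reactions commonly written for Equations (8)-(19) are (representative forms; the original equation set may differ in detail):

    Fe2+ + H2O2 → Fe3+ + HO• + HO−
    Fe3+ + H2O + hν → Fe2+ + HO• + H+
    H2O2 + hν → 2 HO•
    Fe3+ + 3 C2O4 2− → Fe(C2O4)3 3−
    Fe(C2O4)3 3− + hν → Fe2+ + 2 C2O4 2− + C2O4 •−
    C2O4 •− + O2 → 2 CO2 + O2•−

The last reaction shows why ferrioxalate assistance both regenerates Fe(II) and feeds the O2•−/HO2•/HO• radical chain.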
Homogeneous Photocatalysis for the Treatment of Pharmaceutical Industry Wastewaters
Homogeneous photocatalytic processes have been studied for the treatment of pharmaceutical industry wastewaters at laboratory and pilot scales [2,7,9,18,21-24]. Table 3 summarizes the experimental studies applying the homogeneous photo-Fenton and ferrioxalate-assisted photo-Fenton processes for pharmaceutical industry wastewater treatment. Fenton's reagent (a mixture of H2O2 and Fe2+) and ferrioxalate complexes at acidic pH values, mainly under solar irradiation, were studied. The removal efficiency was found to depend on the applied experimental conditions as well as on the initial organic load of the wastewater. The initial concentrations of Fe(II) and H2O2 affected the removal efficiency significantly [18]. In the case of the ferrioxalate-assisted photo-Fenton process, the addition of oxalic acid also enhanced the efficiency [18,23], as depicted in Figure 1 [18]. The increased degradation efficiency can be attributed to the continuous regeneration of Fe(II) via the photo-reduction of Fe(III) and the formation of oxidative species, mainly HO•, through ferrioxalate photochemistry [18].
The H2O2 conversion efficiency and the degree of mineralization were highest when the initial oxalic acid/Fe(III) molar ratio was close to 3; under these conditions, the Fe(III) ions were complexed with the maximum amount of oxalate in the form of the saturated complex Fe(C2O4)3 3− [18]. In most of the studies, acidic conditions (pH ≤ 3) were applied to avoid the precipitation of Fe3+ (Table 3). The low pH values needed in the photo-Fenton process are one of the main factors that limit its application in wastewater treatment. Iron-based solid catalysts have been investigated in recent decades, allowing easy removal of the catalysts from the treated wastewaters; special attention is also given to magnetically recoverable catalysts and to catalyst immobilization. On the other hand, the possibility of applying solar irradiation in the photo-Fenton process reduces the cost of energy consumption, thus providing an important advantage. The photo-Fenton process can also be applied as a pretreatment method to increase the biodegradability of pharmaceutical industry wastewater, rendering the subsequent biological treatment more efficient. This option is described in Section 3.3.
Heterogeneous Photocatalysis for the Treatment of Pharmaceutical Industry Wastewaters
The published works that have focused on the application of heterogeneous photocatalysis for the treatment of pharmaceutical industry wastewaters are compiled in Table 4. Heterogeneous photocatalysis using commercial TiO2 nanoparticles (mainly Degussa P25), Sn-modified TiO2, as well as nanocomposites of TiO2 with multi-wall carbon nanotubes (MWCNTs), has been studied for the treatment of pharmaceutical industry wastewaters under UV irradiation [25][26][27][28][29][30]. The addition of oxidants, such as H2O2, enhanced the photocatalytic removal, probably due to the formation of oxidative species, mainly HO• radicals. Although in most cases high efficiencies in terms of COD and toxicity reduction were achieved, the photoactivity of TiO2 mainly in the UV region, due to its wide band gap, as well as the separation step needed in the case of suspensions, significantly limit its application. To overcome these limitations, doping of TiO2 and coating of the photocatalyst particles onto readily removable supports were investigated, and promising results for the treatment of pharmaceutical industry wastewaters were reported [31].
As the usage of solar irradiation is essential for large-scale applications, visible-light-responsive photocatalysts have been synthesized and investigated for pharmaceutical industry wastewater treatment. Two-dimensional AgInS2/SnIn4S8 nanosheet heterojunctions with strong visible-light absorption and a narrow band gap of 2.27-2.35 eV were prepared and investigated for the treatment of pharmaceutical industry wastewater [32]. About 50% COD removal was observed in 720 min, whereas the addition of H2O2 enhanced the efficiency (Figure 2A), and the COD of the pharmaceutical industry wastewater decreased to 153 mg L−1, which meets the discharge standards for industrial effluent. The high photocatalytic activity of AgInS2/SnIn4S8 is attributed to efficient charge separation (Figure 2B), and its high catalytic stability after five catalytic cycles is correlated with the strong chemical interaction between AgInS2 and SnIn4S8 [32]. In addition, 1% graphene oxide/AgIn5S8 (rGO/AgIn5S8) nanocomposites were tested for the treatment of real pharmaceutical industry wastewater under visible-light illumination [33]. A 76% COD removal was observed in 90 min, whereas the addition of H2O2 led to 89% COD removal in 90 min. The enhanced photocatalytic activity of the composite material was primarily attributed to a larger specific surface area, enhanced light absorption, and more efficient separation and transfer of photogenerated charge carriers through the rGO sheets, which act as electron acceptors and transfer channels in the nanocomposites [33]. From an economic point of view, heterogeneous photocatalysis can be applicable for the treatment of pharmaceutical industry wastewaters on a large scale through the usage of solar irradiation. Consequently, much more research is needed, and special efforts should be devoted to designing photocatalysts which combine high efficiency, visible-light response, low cost, stability, good reusability, and environmental friendliness.
Hybrid Systems for the Treatment of Pharmaceutical Industry Wastewaters
Hybrid systems involve several combinations of different technologies for the treatment of industrial wastewaters. Hybrid systems can minimize the disadvantages of the individual technologies and simultaneously improve the total efficiency. Photocatalytic processes have been combined with other methods for the treatment of pharmaceutical industry wastewaters, and the studied hybrid systems, along with their efficiency (e.g., 96.5% COD removal for the combination of heterogeneous photocatalysis with biological treatment [28]), are presented in Table 5. The combination of heterogeneous photocatalysis and photo-Fenton using a Fe-TiO2 composite photocatalyst (composite beads) has been applied for the treatment of real pharmaceutical industry wastewaters using various experimental set-ups and irradiation sources [25,34]. Removal of COD higher than 71% was achieved in all cases, and the hybrid systems were more efficient than the individual processes [25,34]. A combination of solar photo-Fenton with ozonation has also shown improvement in COD removal compared with the individual processes (either solar photo-Fenton or ozonation alone) [24]. Moreover, the hybrid system led to a significant decrease in operational costs due to the reduction of catalyst consumption, along with the absence of sludge production [24].
Heterogeneous photocatalysis using commercial TiO2 nanoparticles as well as WO3/CNT under UV and visible irradiation has been combined with sonolysis [26,35] to treat real pharmaceutical wastewaters. In both cases, the combined processes showed higher efficiency than the individual methods in terms of toxicity and COD reduction as well as biodegradability increase [26,35]. Similar results (toxicity and COD reduction and biodegradability increase) were observed when heterogeneous photocatalysis using commercial TiO2 nanoparticles was combined with biological treatment (rotating biological contactor, RBC) to treat real pharmaceutical wastewaters [28]. Boroski et al. (2009) employed electrocoagulation (EC) followed by UV/TiO2/H2O2 and obtained 97% COD removal [36]. Photo-Fenton as a pre-treatment stage to increase the biodegradability of the wastewater, followed by a biological process, has been investigated by various authors [7,21,22,37]. The overall results indicate that pre-treatment of pharmaceutical wastewater by photo-Fenton with subsequent biological degradation led to higher COD and TOC removal efficiency when compared to the individual processes. In addition, complete detoxification of the wastewaters was achieved, indicating that hybrid treatment technology is an effective approach [7,21,22,37]. The authors of [38] investigated solar photo-Fenton as a finishing step for the biological treatment of a pharmaceutical wastewater containing mainly nalidixic acid (NXA). Total degradation of NXA and its transformation products after the hybrid process (Figure 3), as well as a reduction in toxicity, were observed, demonstrating that this hybrid system is a useful and efficient approach [38].
Cost Estimation/Operational Costs
Cost analysis is one of the most significant factors in wastewater treatment [39,40]. For the large-scale application of homogeneous and heterogeneous photocatalysis (individually or in combination with other processes), cost estimation must be considered while simultaneously solving practical engineering problems. However, limited studies on cost estimation are available in the literature, a fact that renders large-scale application difficult. An economic analysis was carried out by Monteagudo et al. (2013) using a ferrioxalate-assisted solar photo-Fenton process in a compound parabolic collector (CPC) pilot plant for the treatment of 35 L of pharmaceutical wastewater containing 125 mg L−1 TOC. The costs considered are related to electrical energy and chemical (reagents and catalysts) consumption. For the maximum mineralization degree, a total cost of EUR 0.0157/g TOC removed, or EUR 1.65/m3 of treated wastewater, was estimated. However, cost reduction could be achieved using more collectors with photovoltaic panels [18].
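The two quoted unit costs can be cross-checked with a few lines of arithmetic; the sketch below assumes both figures refer to the same optimal run (an assumption, since the exact cost split is not stated here):

    # Cross-check of the pilot-plant cost figures quoted above (Python).
    cost_per_g_toc = 0.0157   # EUR per g TOC removed
    cost_per_m3 = 1.65        # EUR per m3 of treated wastewater
    toc_initial = 125.0       # mg/L, i.e. g per m3

    toc_removed = cost_per_m3 / cost_per_g_toc   # ~105 g TOC removed per m3
    mineralization = toc_removed / toc_initial   # ~0.84
    print(f"{toc_removed:.0f} g TOC removed per m3 (~{mineralization:.0%} mineralization)")

The two figures are therefore mutually consistent with roughly 84% TOC removal at the reported optimum.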
In a pilot-scale study using a cascade reactor and solar irradiation, the combination of photocatalysis and photo-Fenton, an in situ dual process using Fe-TiO2 composite beads, was found to be a good option in terms of cost evaluation (fabrication of the reactor, synthesis of the composite beads, oxidant, and electricity consumption). The overall cost of the treatment process was estimated to be USD 0.357 for one run (treatment of 5 L of wastewater, i.e., roughly USD 0.07 per liter) and USD 25.047 for 70 runs. The overall cost of this hybrid system can be further reduced by the reuse of the beads [25], as well as by appropriate engineering modifications and efficient reactor design in scale-up studies [34]. Based on Talwar et al. (2021), a scale-up cost analysis of the dual process demonstrated that the cost can be less than USD 0.1 L−1 of wastewater [34], consistent with the per-liter figure above.
A combination of solar photo-Fenton with ozonation led to a significant decrease in operational costs, i.e., chemicals and energy consumption (total cost: EUR 12.69 m−3, or EUR 1.22 kg−1 COD removed), due to the reduction of catalyst consumption, along with the absence of sludge production, when used for the treatment of pharmaceutical industry wastewaters [24].
Conclusions and Prospects for Future Research
The complex composition of pharmaceutical industry wastewaters renders their effective treatment an urgent demand prior to their disposal into the environment. Based on the literature survey conducted in this review, homogeneous and heterogeneous photocatalysis can be characterized as effective methods for the treatment of pharmaceutical industry wastewaters. The main advantages, limitations and prospects of photocatalytic processes for the treatment of pharmaceutical industry wastewaters are summarized in Table 6. In general, high removal percentages can be achieved after optimization of the processes. However, differences in efficiency highlight the importance of performing optimization studies before application. Moreover, the impact of the wastewater characteristics on the overall process efficiency was found to be significant. Some limitations of the processes, such as the continuous usage of chemicals and energy, iron sludge production and the separation of catalyst particles, strongly hinder full-scale application, and all these aspects need further investigation. During the last decades, efforts have been made to synthesize materials with visible-light response as alternatives to conventional catalysts. Another aspect of interest is the separation of heterogeneous catalysts and their reuse. Magnetic materials in the form of metal oxides, spinels and composites can be considered a good solution due to their easy separation by a magnetic field. Thus, the application of photocatalytic processes using novel materials should also be extended and investigated for the treatment of pharmaceutical industry wastewaters.
The effect of real wastewater quality on photocatalytic processes is important, and the influence of the various matrix components on the efficiency needs to be researched systematically in the future. This can also contribute to the transfer of photocatalytic processes from the laboratory to pilot/large scale, which is still a challenge. For real wastewaters, which are produced at flow rates of thousands of m3 per day, and for large-scale applications, the combination of a conventional process with photocatalysis seems to be the most promising option. Hybrid technologies can enhance efficiency, leading to high removal percentages of the pollutants, in most cases within safe discharge limits, as well as reduce the cost of the treatment process. The most common hybrid treatment scheme that has been studied for pharmaceutical industry wastewaters combines the photo-Fenton process, as a pretreatment stage to remove mainly recalcitrant compounds and enhance the biodegradability of the wastewater, with a biological treatment method.
Table 6. Advantages, limitations, and prospects of photocatalytic processes for the treatment of pharmaceutical industry wastewaters.
The overall assessment of the literature highlights that special emphasis should be placed on large-scale application for non-biodegradable wastewater treatment and possibly reuse. More work needs to be done on the synthesis and recycling of novel catalysts, the matrix impact on degradation kinetics, and reactor modeling of the individual and combined processes. A complete economic analysis should also be conducted, including equipment and implementation expenses, amortization, reagent demand, energy costs and sludge disposal. Better economic models must be developed to estimate how the cost of the individual and/or combined processes varies with specific industrial wastewater characteristics as well as with the targeted degree of decontamination. Moreover, scarce data exist concerning the identification of the transformation products (TPs) of the pharmaceuticals present in the wastewater produced during the applied processes. Similarly, limited studies have evaluated the toxicity evolution during treatment by the studied AOPs. Considering that undesired intermediate TPs can be formed in some cases, the elucidation of the structures of the TPs, as well as the monitoring of the toxicity evolution, are imperative. These aspects are fundamental for the optimization of the processes and give useful insight into their potential integration with other processes under real conditions as well. The overall future trend is to ensure removal efficiency focusing on the final characteristics of the treated wastewater and the removal of the target contaminants to concentrations safe for the environment, while reducing the influence of the other limiting factors.
"Environmental Science",
"Chemistry",
"Medicine"
] |
Surface activation of polyamide fibers by radio-frequency capacitive plasma for application of functional coatings
The results of experimental research on the modification of polyamide fibrous materials for technical purposes by low-pressure radio-frequency capacitive discharge plasma are presented. The effect of plasma modification on the wettability of polyamide fibers and on their adhesive properties was studied.
Introduction
For the development of the textile and light industries in Russia, as well as for increasing import substitution, what is needed is not so much the development of new types of fibers and yarns as the modification of existing ones in order to give them the desired properties. Polyamide fibrous materials for technical purposes need improved adhesive properties, which will ensure their more effective impregnation with modifying solutions and polymeric binders in the preparation of composite materials, as well as the preparation of the fiber surface for subsequent metallization.
Low-pressure radio-frequency plasma discharge is used for the modification of synthetic textile fibers. As opposed to traditional methods of processing, electrophysical methods, including plasma methods, are resource-efficient, environmentally friendly and require only one-time investments [1]. The results of previous research [2][3][4][5] show that the treatment of textile materials by the plasma of a low-pressure radio-frequency capacitive discharge makes it possible to change the surface properties, for example, to improve the adhesion characteristics. The plasma of the radio-frequency discharge allows modification of synthetic materials without their destruction, and the physico-chemical mechanism of action ensures the stability of the effects after plasma modification.
Materials, methods and equipment
The use of low-pressure radio-frequency plasma discharge in the modification of fibrous materials was investigated on samples of polyamide fibers for technical purposes (Technical Specification 2272-103-77319717-2012).
For the modification of the polyamide fibers, an experimental radio-frequency plasma installation was used, equipped with an individually designed cassette for the treatment of fibrous materials; the samples are located in the interelectrode space, which provides uniform modification of all parts of the wound fibrous material. The change in the surface morphology of the polyamide fibers was determined by confocal laser scanning microscopy with the Olympus Lext OLS 4100. The effect of plasma modification on the change in wettability of the fibrous materials was estimated by determining the capillarity index (GOST 29104.11-91). The adhesive properties of the polyamide fibers were evaluated by determining the adhesive strength of the fiber to the cured matrix by the wet pull-out method, developed at KNRTU together with the A.A. Baykov IMET RAS [6].
Results
To establish the regularities of the effect of radio-frequency plasma on samples of polyamide technical fibers, their processing was carried out while varying the input parameters of the installation within the following limits: discharge power Wp = 0.7-1.5 kW; processing time t = 60-600 s; pressure in the working chamber P = 30-50 Pa; plasma gas flow G = 0.01-0.04 g/s; plasma-forming gases: argon and an argon/propane-butane mixture in a ratio of 70/30.
The results of confocal laser scanning microscopy of the surface of the polyamide fibers before and after plasma modification are shown in fig. 1. According to the microscopic analysis, technical and mechanical impurities are observed on the surface of the initial polyamide fibers, whereas in the samples modified by plasma in argon and argon/propane-butane media, purification of the fiber surface takes place. It is evident that, as a result of ion bombardment during plasma treatment, physical sputtering of the mechanical impurities and components of the oiling agent occurs regardless of the plasma-forming medium used, which leads to smoothing of the fiber surface [7].
The effect of radio-frequency plasma modification on the change in the capillarity index of the polyamide fibers is shown in fig. 2. After plasma treatment in argon/propane-butane, a decrease in the capillarity index of 27.6% is observed in comparison with the initial samples. The decrease in the capillarity index as a result of plasma treatment can be due to both physical and chemical processes taking place in the surface layers of the fiber-forming polymers owing to the addition of plasma-forming gas components. Studies of the capillarity index of samples 2 months after plasma modification show that the obtained effects are stable over time.
On the basis of the adsorption theory of adhesion, it can be argued that the change in the physical properties of the polyamide fibers will contribute to a change in their adhesion to polymer matrices. To confirm this effect, the adhesion of the polyamide fibers to a polymer matrix was investigated (fig. 3).
Conclusions
The obtained results showed that the effect of plasma modification depends on the composition of the plasma-forming gas. Radio-frequency plasma modification of polyamide fibrous materials for technical purposes in a plasma-forming medium of argon/propane-butane makes it possible to increase their adhesive properties, which enables their effective impregnation with polymeric binders, as well as the application of functional coatings with high adhesion to the fibrous substrate on their surface.
"Materials Science"
] |
Research on Management Defense and Enterprise Innovation under Informatization
The development of informatization affects enterprise management defense, and in turn affects enterprise innovation. This paper takes the non-financial listed companies in China's A-share market from 2013 to 2017 as a sample to empirically study the impact of management defense on enterprise innovation. It is found that managerial defense inhibits enterprise innovation; compared with non-state-owned enterprises, the managerial defense of state-owned enterprises has a more significant inhibitory effect on enterprise innovation. The results of this paper provide a basis for improving corporate governance structures, weakening management defense and promoting enterprise innovation, and help government departments deepen the reform of state-owned enterprises.
Introduction
Based on the theory of the separation of ownership and control, when there is a conflict of interest between managers and shareholders, managers will take defensive actions in order to maintain their own job security and maximize their own interests [1]. Management defense is a common phenomenon in China's listed companies, and it has an important impact on a company's capital structure, investment efficiency and dividend policy. Some scholars have found that management defense can cause short-sighted investment, leading enterprises to reduce R&D investment [2]. Other studies show that, in order to ensure the safety of their positions and maximize their own utility, managers exhibit risk-averse behavior, leading to a reduction in the enterprise's risk-taking level [3]. The above analysis shows that management defense may indirectly affect enterprise innovation through R&D investment, enterprise risk-taking, innovation enthusiasm and other channels, but few scholars have directly investigated the impact of management defense on enterprise innovation from the perspective of innovation output. This paper takes the number of patent applications of A-share listed companies from 2013 to 2017 as the object of analysis, empirically tests the impact of management defense on enterprise innovation ability, and examines the moderating effect of institutional investor research. Compared with previous studies, the contributions of this paper are as follows: first, from the perspective of innovation output, it demonstrates the inhibitory effect of managerial defense on enterprise innovation, enriching the literature on managerial defense and enterprise innovation and helping enterprises to improve governance mechanisms, inhibit managerial defense behavior, and promote innovation and long-term development; second, it distinguishes enterprises by ownership and finds that the relationship between managerial defense and enterprise innovation differs between state-owned and non-state-owned enterprises, providing a basis for deepening the reform of state-owned enterprises and improving their innovation ability.
Theoretical Analysis and Research Hypothesis
Management defense refers to behavior whereby the management, under the pressure of the enterprise's internal and external control mechanisms, chooses to consolidate its own position security and pursue the maximization of personal utility. Due to the separation of ownership and control and the existence of moral hazard, there is information asymmetry between the management and the shareholders: the management has more internal information about the company's operation than the shareholders do. When there is a conflict of interest between the management and the shareholders, defensive behavior by the management will arise, which damages the interests of the enterprise or its shareholders [4]. In addition, according to the risk aversion hypothesis, managers with management defense motivation also have a stronger risk-aversion tendency, in order to reduce the risks faced by their positions and their own interests [3], which leads to a reduction in the enterprise's risk-taking level. Because innovation itself carries a certain risk, a reduction in the risk-taking level will weaken the enthusiasm for enterprise innovation. However, innovation activities are the basis for enterprises to gain market competitive advantage, and the innovation ability of an enterprise represents its future development potential, which is in line with the long-term interests of the enterprise and its shareholders. To sum up, management defensive behavior is not conducive to enterprise innovation and will damage the interests of enterprises and shareholders. Based on the above analysis, the following hypothesis is put forward:
H1: There is a negative correlation between managerial defense and enterprise innovation; that is, the enhancement of managerial defense will reduce enterprise innovation.
Management defense is a common phenomenon in China's listed companies, but in the context of China's special institutional environment, there are great differences in management defense among enterprises with different property rights [2]. First, compared with non-state-owned enterprises, state-owned enterprises need to undertake more policy tasks, and executives of state-owned enterprises give more consideration to political factors when making business decisions [5]. Second, state-owned enterprise executives are directly appointed by the government and have short terms of office; in order to meet government assessment requirements within a short period and strive for promotion opportunities, they pay more attention to short-term economic benefits and their own political goals in the course of business, while non-state-owned enterprise executives are less affected by political factors and pay more attention to the long-term benefits and profit growth of the enterprise [6]. Third, owing to the influence of the "salary limit order," the compensation of state-owned enterprise executives cannot benefit from the huge gains brought by innovation. Therefore, compared with non-state-owned enterprise executives, state-owned enterprise executives have a stronger management defense motivation. Moreover, because the owners of non-state-owned enterprises control their enterprises directly, the supervision of managers in state-owned enterprises is weaker than in non-state-owned enterprises. Based on the above analysis, the following hypothesis is put forward:
H2: Compared with non-state-owned enterprises, the negative correlation between managerial defense and enterprise innovation is more significant in state-owned enterprises.
Sample Selection and Data Sources
Based on a sample of China's A-share listed companies from 2013 to 2017, this paper conducts the following screening, in line with common research practice: ① excluding ST and *ST companies; ② excluding financial companies; ③ excluding observations with missing values; ④ winsorizing all continuous variables at the 1% and 99% levels to avoid the influence of extreme values; ⑤ dividing the samples into state-owned and non-state-owned enterprises according to the nature of the actual controller of the listed company. After screening, 4998 effective observations were obtained. The survey data on institutional investors are from the Wind database, and the other data are from the CSMAR database. Stata 15.0 statistical analysis software is used for data processing and empirical analysis.
Variable Definition
(1) Management defense (MEI). This paper follows Li Bingxiang et al. (2018) [7] in constructing the management defense index. Without considering the external control mechanism, six variables are selected from three aspects, namely the personal characteristics of managers (age, degree, tenure), the manager incentive mechanism (shareholding), and the manager constraint mechanism (proportion of independent directors and CEO duality); the construction of the management defense variable is shown in Table 1. The average of these six scores is taken as the indicator of the degree of management defense, i.e., MEI = (Age + Degree + Tenure + Share + Independent + Dual)/6. According to Table 1, shareholding (Share) takes 0 if the shareholding ratio is greater than 0 and 2 if the shareholding ratio is 0; the proportion of independent directors (Independent) takes 2 if it is below 30%, 1 if it is between 30% and 40%, and 0 if it is above 40%; and duality (Dual) takes 2 if the general manager concurrently serves as chairman of the board, and 0 otherwise.
(2) Enterprise innovation (INNO). Referring to Zhou Donghua et al. (2019) [8] for the definition of the enterprise innovation (INNO) variable, this paper uses the natural logarithm of the number of patent applications plus one to measure enterprise innovation.
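A minimal Python sketch of the index computation, assuming the scoring rules above (the age, degree and tenure scorings are not reproduced in the text, so they are passed in as precomputed scores; all names are illustrative, not the paper's code):

    def mei(age_score, degree_score, tenure_score,
            share_ratio, indep_ratio, gm_is_chairman):
        # Share: 0 if the manager holds any shares, 2 if the holding is zero.
        share = 0 if share_ratio > 0 else 2
        # Independent directors: 2 below 30%, 1 between 30% and 40%, 0 above 40%.
        if indep_ratio < 0.30:
            independent = 2
        elif indep_ratio <= 0.40:
            independent = 1
        else:
            independent = 0
        # Duality: 2 if the general manager is also chairman of the board.
        dual = 2 if gm_is_chairman else 0
        return (age_score + degree_score + tenure_score
                + share + independent + dual) / 6

    print(mei(1, 0, 1, 0.02, 0.33, False))  # -> 0.5

Higher MEI values indicate a stronger degree of management defense.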
(3) Control variables. Referring to the relevant literature, this paper selects company size (size), asset-liability ratio (Lev), return on equity (ROE), growth opportunity (growth), investment opportunity (tobinq), equity concentration (first), board size (bsize), enterprise age (age), and year and industry factors as the control variables. The details are shown in Table 1.
Model Building
Based on the above analysis, in order to test the impact of management defense on enterprise innovation and the impact of institutional investor research on the relationship between management defense and enterprise innovation, this paper constructs the following model with reference to the research design of Yan Zhenli et al.
(1) Regression of enterprise innovation on management defense. In order to test hypotheses H1 and H2, we use formula (1) to study the relationship between managerial defense and enterprise innovation.
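A standard specification consistent with the variables defined above (a hedged reconstruction; the paper's exact formula (1) may differ) is:

    INNO(i,t) = β0 + β1·MEI(i,t) + Σj γj·Control_j(i,t) + ΣYear + ΣIndustry + ε(i,t)

Under H1, the coefficient of interest β1 is expected to be significantly negative; H2 is tested by estimating the same specification separately for the state-owned and non-state-owned subsamples.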
Descriptive Statistics
Descriptive statistics of the variables are reported in Table 3. The maximum value of management defense (MEI) is 1.500 and the minimum value is 0.167, which indicates that the degree of management defense differs across enterprises. Through an investigation of the average and median values, it is also found that there is a large gap in the degree of management defense among Chinese listed companies. The descriptive statistics of the other variables are within the normal range.
Regression Analysis
(1) Regression analysis of managerial defense and enterprise innovation. Table 4 reports the regression results for management defense (MEI) and enterprise innovation (INNO). From the regression results of the whole sample, the F value of the model is 64.520 and the regression R2 is 0.260, significant at the 1% level, indicating that the model construction is feasible, the fit is good, and the empirical results are reliable. The regression coefficient of management defense (MEI) on enterprise innovation (INNO) is -0.221, significantly negative at the 1% level. That is, the higher the defense level of the management, the easier it is for the management to engage in self-interested behavior and to reduce enterprise innovation, which supports hypothesis H1. From the grouped regressions, the coefficient of management defense (MEI) on enterprise innovation (INNO) is -0.485 in the state-owned sample, significantly negative at the 1% level, and -0.115 in the non-state-owned sample, significantly negative at the 10% level. This indicates that management defense inhibits enterprise innovation in both the state-owned and non-state-owned samples; however, compared with non-state-owned enterprises, the negative impact of state-owned enterprise management defense on enterprise innovation is more significant, which supports hypothesis H2. (Note: *** indicates significance at the 1% level, ** at the 5% level, and * at the 10% level.)
Conclusion and Suggestion
With the development of information technology, Internet technologies such as blockchain should be used to strengthen the supervision of management defense. Based on data for non-financial listed companies in the A-share market from 2013 to 2017, this paper empirically analyzes the relationship between managerial defense and enterprise innovation, further examines how this relationship differs among enterprises with different property rights, and draws the following conclusions: (1) in order to maintain their positions and maximize their own utility, managers will adopt management defense behavior, and the enhancement of the degree of management defense will inhibit enterprise innovation.
(2) Compared with non-state-owned enterprises, the negative correlation between managerial defense and enterprise innovation is more significant in state-owned enterprises. According to the above research conclusions, this paper puts forward the following policy suggestions: (1) regulatory departments should improve their understanding of management defense and guide enterprises to restrain management defense behavior and improve their innovation ability by employing high-level management talent, designing reasonable salary mechanisms and equity incentive systems, and improving the internal governance structure of the company.
(2) Government departments should cultivate and improve the manager market for state-owned enterprises, establish a talent selection system, further deepen the reform of state-owned enterprises, promote China's marketization process, give full play to the governance role of institutional investor research, restrain the defensive behavior of management, and improve the innovation ability of state-owned enterprises.
"Business",
"Economics",
"Computer Science"
] |
Heat transport via a local two-state system near thermal equilibrium
Heat transport in spin-boson systems near thermal equilibrium is systematically investigated. An asymptotically exact expression for the thermal conductance in the low-temperature regime, wherein transport is described via a co-tunneling mechanism, is derived. This formula predicts a power-law temperature dependence of the thermal conductance $\propto T^{2s+1}$ for a thermal environment whose spectral density has exponent $s$. Accurate numerical simulations are performed using the quantum Monte Carlo method, and these predictions are confirmed for arbitrary thermal baths. Our numerical calculations classify the transport mechanisms and show that the noninteracting-blip approximation quantitatively describes the thermal conductance in the incoherent transport regime.
Introduction
Heat transport via small systems has recently attracted considerable attention because many intriguing phenomena can emerge that reflect the properties of the system and the surrounding environment. For instance, quantized thermal conductances have been observed in heat transport by phonons [1,2] and photons [3], in a manner similar to electric transport [4]. Thermal rectification [5,6] and thermal transistors [7] have also been theoretically proposed in analogy to electronic devices. Heat transport via quasi-one-dimensional materials, e.g., carbon nanotubes, is neither diffusive nor ballistic, and is currently categorized as anomalous transport [8]. Heat transport due to magnetic excitations is now a key ingredient in the field of spintronics [9]. Studying the general properties of thermal transport using typical model systems is clearly an important subject, not only for theoretical development but also for future experiments.
The spin-boson system is one of the most common and important systems for describing a local discrete-level system embedded in a bosonic thermal environment [10,11]. This system has numerous applications; e.g., it is used to describe molecular junctions [12], superconducting circuits [13], and photonic waveguides with local two-level systems [14]. Hence, it is regarded as a minimal model for describing a zero-dimensional object with discrete quantum levels surrounded by a bosonic environment. One of the important problems here is to clarify the dissipative dynamics of the system near equilibrium [10]. Depending on the properties of the thermal environment, the behavior of the autocorrelation function of the system changes from coherent oscillation to incoherent decay as a function of time. Intriguingly, at zero temperature, a quantum phase transition occurs when the coupling strength between the system and the environment is changed [15,16]. A sub-ohmic environment induces a second-order phase transition [17,18,19,20,21,22], while the ohmic case shows a Kosterlitz-Thouless-type phase transition [11,23,24]. The super-ohmic case does not have a distinct phase transition but exhibits a crossover. In addition, the ohmic environment induces the Kondo effect [25] at sufficiently low temperatures [10,11,26,27]. Given this background in the equilibrium situation, it is quite natural to ask what happens if one considers heat transport in this system. Herein, we present systematic studies of heat transport via the spin-boson system and derive some exact results for this case.
A number of studies have investigated heat transport via spin-boson systems [28,29,30,31,32,33,34,35,36]. Segal et al. introduced an iterative path-integral technique for numerical calculations to investigate the far-from-equilibrium regime [28]. Ruokola and Ojanen studied low-temperature properties using a perturbation method and discussed co-tunneling mechanisms [29]. However, their methods do not seem to succeed in reproducing low-temperature properties such as the Kondo effect. Two of the present authors (TK and KS) have focused on the transport properties in an ohmic environment and found several Kondo signatures [30], including the $T^3$ temperature dependence of the thermal conductance. Herein, we advance in this direction and cover arbitrary types of environments. We consider a general picture for understanding the transport properties at extremely low temperatures over the whole range of spectral densities, and we quantitatively characterize the transport mechanism over all temperature regimes.
We present our findings in this paper to distinguish them from the existing literature. First, we derived an asymptotically exact expression for the thermal conductance in the extremely low-temperature regime, reproducing the aforementioned $T^3$ temperature dependence of the thermal conductance in the ohmic case. Our formula is asymptotically exact in the co-tunneling transport regime and predicts power-law temperature dependences $\propto T^{2s+1}$ for a thermal environment of spectral density with exponent $s$. Second, we performed accurate numerical calculations to investigate the thermal conductance over the entire temperature regime. We confirmed the temperature dependences predicted by our expressions for the co-tunneling and sequential tunneling transport regimes. Furthermore, we found that the noninteracting-blip approximation (NIBA) [10] accurately describes the thermal conductance in the incoherent tunneling regime. In table 1, the transport mechanisms for each regime are summarized and the relevant analytical descriptions are presented.
Table 1. Summary of the relevant transport processes. Here, $\Delta_{\rm eff}$ is an effective tunneling amplitude [see equations (23) and (24)] and $T^*$ is the crossover temperature [see equation (31)]. The last column shows the temperature dependences of the thermal conductance, where "Schottky" indicates a Schottky-type temperature dependence proportional to $e^{-\hbar\Delta_{\rm eff}/k_B T}/T^2$. The temperature dependence of NIBA is complex in general, and the symbol (*) indicates the high-temperature limit. (Columns: exponent, condition, transport process, temperature dependence; one recoverable row links sequential tunneling to the Schottky-type dependence.)
In the table, sequential tunneling, co-tunneling, and NIBA refer to the analytical descriptions based on the approximate form [equation (29)], the asymptotically exact expression [equation (33)], and the NIBA expression [equation (22)] with equations (41) and (42), respectively. The paper is organized as follows. In section 2, we introduce the model and explain the Meir-Wingreen-Landauer-type formula. In section 3, we classify the transport mechanisms and derive an asymptotically exact expression that is valid in the co-tunneling transport regime. We perform numerical calculations using the quantum Monte Carlo method and compare the results with analytic approximations in section 4. In section 5, we summarize our work.
Model
We consider heat transport via a local quantum system coupled to two reservoirs denoted by L and R. The model Hamiltonian is given by $H = H_S + \sum_{\nu=L,R}(H_\nu + H_{I,\nu})$, with the terms specified below.
Figure 1. Symmetric double-well potential of the local system. The energy spacing of the quantum levels in each well is $\hbar\omega_0$ (indicated by the blue solid lines), and the energy splitting due to quantum tunneling (indicated by the red dashed lines) is $\hbar\Delta = E_e - E_g$, where $E_g$ and $E_e$ are the ground-state energy and the first-excited-state energy, respectively.
Here, $H_S$, $H_\nu$, and $H_{I,\nu}$ describe the local system, the reservoir $\nu$ (= L, R), and the interaction between them, respectively. The operators $p$ and $x$ are the momentum and position of the local system, respectively, and $V(x)$ is the potential energy. The reservoirs comprise multiple phonon (or photon) modes, which are described in general by harmonic oscillators with frequency $\omega_{\nu k}$ and mass $m_{\nu k}$, where the subscript denotes the phonon (photon) wavenumber $k$ in reservoir $\nu$. The momentum and position of an individual oscillator are denoted by $p_{\nu k}$ and $x_{\nu k}$, respectively. For simplicity, the system-reservoir coupling $H_{I,\nu}$ is considered to be a bilinear form in $x$ and $x_{\nu k}$, with interaction strength $C_{\nu k}$. The second term of $H_{I,\nu}$ is a counter term that cancels the potential renormalization due to the reservoirs. In this study, the potential energy $V(x)$ of the local system is taken to be a double-well potential, as shown in figure 1. We assume that the barrier height of the double-well potential is sufficiently large in comparison with $\hbar\omega_0$, where $\omega_0$ is the frequency of a small oscillation at the potential minima $x = \pm x_0/2$. Then, quantum tunneling between the two wells induces a small energy splitting $\hbar\Delta$ ($\ll \hbar\omega_0$) between the ground-state energy $E_g$ and the first excited energy $E_e$.
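The individual terms can be written, in a standard Caldeira-Leggett form consistent with the description above (a sketch; the sign of the bilinear coupling and the normalization of the counter term may differ from the original equations), as

$$H_S = \frac{p^2}{2m} + V(x), \qquad H_\nu = \sum_k \left[\frac{p_{\nu k}^2}{2m_{\nu k}} + \frac{m_{\nu k}\omega_{\nu k}^2}{2}\,x_{\nu k}^2\right], \qquad H_{I,\nu} = -x\sum_k C_{\nu k}\,x_{\nu k} + x^2\sum_k \frac{C_{\nu k}^2}{2m_{\nu k}\omega_{\nu k}^2}.$$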
After truncating the local system to the two lowest energy eigenstates, we obtain the spin-boson model [11,10]. Here, $\sigma_i$ ($i = x, y, z$) are the Pauli matrices, $b_{\nu k}$ is the annihilation operator of mode $k$ in reservoir $\nu$, and $\lambda_{\nu k} = x_0 C_{\nu k}/\sqrt{2\hbar m_{\nu k}\omega_{\nu k}}$. In the present model, we assign the localized states at the left (right) well as $|\downarrow\rangle$ ($|\uparrow\rangle$). Throughout this study, we examine the symmetric double-well potential ($\varepsilon = 0$) and only use the bias term $\varepsilon\sigma_z$ to define the static susceptibility, where $\langle\cdots\rangle$ implies an equilibrium average. For the symmetric case ($\varepsilon = 0$), the system Hamiltonian $H_S$ describes the tunneling splitting $\Delta$ between the ground state ($\sigma_x = +1$) and the first excited state ($\sigma_x = -1$). The properties of the reservoirs are characterized by the spectral function $I_\nu(\omega)$, which is considered to be continuous assuming that the number of phonon (photon) modes is large. For simplicity, we assume a simple power-law form with exponent $s$ for the spectral function [11,10], where $\alpha_\nu$ is the dimensionless coupling strength between the two-state system and reservoir $\nu$. To cut off high-frequency excitations, we introduce the exponential cutoff function $e^{-\omega/\omega_c}$, where $\omega_c$ is the cutoff frequency, which is considerably larger than the other characteristic frequencies, e.g., $\Delta$, $\varepsilon/\hbar$, and $k_B T/\hbar$. The exponent $s$ in equation (13) is crucial for determining the properties of the reservoirs. The case $s = 1$ is called "ohmic," whereas the cases $s > 1$ and $s < 1$ are called "super-ohmic" and "sub-ohmic," respectively.
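For reference, a sketch of the truncated Hamiltonian and of the continuum spectral function in a common convention (the overall prefactor of $I_\nu(\omega)$ is a convention choice assumed here, not taken from the paper's equations):

$$H_{SB} = -\frac{\hbar\Delta}{2}\sigma_x + \frac{\hbar\varepsilon}{2}\sigma_z + \frac{\hbar}{2}\sigma_z\sum_{\nu,k}\lambda_{\nu k}\left(b_{\nu k} + b_{\nu k}^\dagger\right) + \sum_{\nu,k}\hbar\omega_{\nu k}\,b_{\nu k}^\dagger b_{\nu k},$$
$$I_\nu(\omega) = \sum_k \lambda_{\nu k}^2\,\delta(\omega - \omega_{\nu k}) \;\rightarrow\; 2\alpha_\nu\,\omega_c^{1-s}\,\omega^s\,e^{-\omega/\omega_c}.$$

With the minus sign on the tunneling term, the ground state corresponds to $\sigma_x = +1$, consistent with the statement above.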
Thermal conductance
The heat current flowing from reservoir ν into the local two-state system is defined as J_ν ≡ −d⟨H_ν⟩/dt. Using the standard technique of the Keldysh formalism [37,38,39], one can derive the Meir-Wingreen-Landauer-type formula [40] for the nonequilibrium steady-state heat current J_L = −J_R ≡ J [equation (15)] [7,6,30,41], where α = α_L + α_R, γ = 4α_Lα_R/α² is an asymmetry factor, n_ν(ω) is the Bose distribution function in reservoir ν, and χ(ω) is the dynamical susceptibility of the two-state system defined by

χ(ω) = (i/ℏ) ∫₀^∞ dt e^{iωt} ⟨[σ_z(t), σ_z(0)]⟩.

Equation (15) is derived in Appendix A. The linear thermal conductance is defined as

κ = lim_{ΔT→0} J/ΔT,

where ΔT = T_L − T_R is the temperature difference between the reservoirs. Using the exact formula [equation (15)], the linear thermal conductance follows as a frequency integral over Ĩ(ω) and Im χ(ω) weighted by the temperature derivative of the Bose distribution function [equation (18)], where χ(ω) is evaluated in thermal equilibrium and β = 1/(k_B T). Thus, we need to calculate the dynamical susceptibility χ(ω) to evaluate the linear thermal conductance.
For convenience of discussion, we also introduce a symmetrized correlation function and its Fourier transform:

S(t) = (1/2) ⟨{σ_z(t), σ_z(0)}⟩,   S(ω) = ∫ dt e^{iωt} S(t).

From the fluctuation-dissipation theorem [10], the imaginary part of the dynamical susceptibility is related to S(ω) as

Im χ(ω) = (1/ℏ) tanh(βℏω/2) S(ω).

The thermal conductance is then rewritten in terms of the correlation function S(ω) [equation (22)].
Classification of Transport Processes
The dynamics of dissipative two-state systems have long been studied using a number of approximations [11,10]. In this section, we re-examine such analytic approximations from the viewpoint of heat transport. In section 3.1, we first consider the effective tunneling amplitude and discuss a quantum phase transition driven by strong system-reservoir coupling. Next, we consider the three mechanisms, which we call "sequential tunneling" (section 3.2), "co-tunneling" (section 3.3), and "incoherent tunneling" (section 3.4), following the previous literature [11,10,29,42]. We derive analytic expressions for the thermal conductance in each transport process. We also introduce NIBA in section 3.5.
In this section, we show two novel results of our study. The first concerns the co-tunneling process: we derive an asymptotically exact formula for the co-tunneling process by utilizing the generalized Shiba relation. This formula always holds at low temperatures for an arbitrary exponent s as long as the ground state of the system is delocalized. The second result is related to incoherent tunneling. In particular, we find that the Markov approximation is inadequate for describing the thermal conductance in the incoherent tunneling regime. Instead, the thermal conductance in this regime is well described by NIBA, which accounts for the non-Markovian properties of the stochastic dynamics. We show that NIBA quantitatively explains the numerical calculations in section 4.

Figure 2. Schematics of the ground-state wavefunction (a) below the transition (0 ≤ α < α_c) and (b) above the transition (α_c < α). The former state is delocalized, whereas the latter is localized at one of the two wells. For the localized state, quantum tunneling between the two wells is forbidden since the overlap integral between the states in the two wells vanishes.
Effective tunneling amplitude and quantum phase transition
One important effect of the system-reservoir coupling is the renormalization of the tunneling amplitude Δ. In this subsection, we briefly summarize the effective tunneling amplitude obtained via adiabatic renormalization [11,10]. A detailed derivation is given in Appendix B.
For the ohmic case (s = 1), the effective tunneling amplitude is given, up to a dimensionless prefactor of order unity (see Appendix B), by

Δ_eff = Δ (Δ/ω_c)^{α/(1−α)}   (0 ≤ α < 1).   (23)

This result indicates a phase transition at zero temperature, for which the critical value of the system-reservoir coupling is α = 1 [16,15]. For system-reservoir couplings below the transition (0 ≤ α < 1), the ground state is non-degenerate, as shown in figure 2 (a), indicating a coherent superposition of the two localized states |↑⟩ and |↓⟩. We call this ground state "delocalized." For strong system-reservoir couplings above the transition (α > 1), the coherent superposition of the two localized states is completely broken, leading to the doubly-degenerate ground states shown in figure 2 (b). We call this ground state "localized." In this localized regime, quantum tunneling between the wells is forbidden at zero temperature since there is no mixing (Δ_eff = 0) between the two localized states. Thus, the present quantum phase transition can be recognized as a "localization" transition that separates the delocalized and localized regimes at zero temperature.

Figure 3. Schematic of the sequential tunneling process. Heat transport occurs by a combination of (a) phonon (photon) absorption and (b) phonon (photon) emission.
For the sub-ohmic case (s < 1), the adiabatic renormalization always leads to an effective tunneling amplitude of zero (Δ_eff = 0). This is correct in the limit Δ/ω_c → 0, as discussed in a previous study [11]. However, for a finite value of Δ/ω_c, the naive adiabatic renormalization procedure yields incorrect results and should be improved. In subsequent theoretical studies [43,44], it was found that the localization transition actually occurs at a critical system-reservoir coupling (α = α_c), where the critical value α_c depends on both s and Δ/ω_c. The existence of the localization transition was also confirmed via numerical calculations [18,19]. In summary, for the sub-ohmic case, the ground state is delocalized for 0 ≤ α < α_c, as shown in figure 2 (a), and localized for α_c < α, as shown in figure 2 (b). For the super-ohmic case (s > 1), the effective tunneling amplitude is always finite:

Δ_eff = Δ exp[−α Γ(s−1)],   (24)

where Γ(z) is the Gamma function. Therefore, there is no localization transition, and the ground state is always delocalized, as shown in figure 2 (a).
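A small numerical sketch of these renormalization results may help. It assumes the ohmic and super-ohmic forms reconstructed above, which are the standard adiabatic-renormalization results up to dimensionless prefactors of order unity.

```python
import numpy as np
from scipy.special import gamma

def delta_eff_ohmic(Delta, omega_c, alpha):
    # Ohmic case: Delta_eff = Delta*(Delta/omega_c)^(alpha/(1-alpha)) for
    # alpha < 1, and zero in the localized phase alpha >= 1.
    return Delta * (Delta / omega_c)**(alpha / (1.0 - alpha)) if alpha < 1 else 0.0

def delta_eff_superohmic(Delta, alpha, s):
    # Super-ohmic case (s > 1): always finite, Delta_eff = Delta*exp(-alpha*Gamma(s-1)).
    return Delta * np.exp(-alpha * gamma(s - 1.0))

print(delta_eff_ohmic(0.01, 1.0, 0.5))       # strong power-law suppression
print(delta_eff_superohmic(0.01, 0.5, 1.5))  # finite for any coupling when s > 1
```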
Sequential tunneling
For weak system-reservoir couplings (α ≪ 1), the system and the reservoirs are almost decoupled, and the interaction Hamiltonian H_{I,ν} can be regarded as a perturbation. At second order in the perturbation, the system dynamics are described by a stochastic transition between the ground state (σ_x = +1) and the excited state (σ_x = −1), as shown in figure 3. The transition from the ground state to the excited state involves phonon (photon) absorption, and the inverse transition involves phonon (photon) emission. A combination of these two processes induces heat transport. We refer to this type of transport process as "sequential tunneling" by analogy with the electronic transport process through quantum dots. The transition rates Γ_a and Γ_e for phonon (photon) absorption and emission are calculated from Fermi's golden rule [11], in which I(ω) = I_L(ω) + I_R(ω) and n_B(ω) = (e^{βℏω} − 1)^{−1} is the Bose distribution function. Using these transition rates, the stochastic dynamics of the system are described by the Lindblad equation

dρ/dt = −(i/ℏ)[H_S, ρ] + Σ_{j=e,a} Γ_j [ L_j ρ L_j† − (1/2){L_j† L_j, ρ} ],

where ρ(t) is the density matrix of the system, L_e = σ_x^+ ≡ (σ_z − iσ_y)/2, and L_a = σ_x^− ≡ (σ_z + iσ_y)/2. By solving this equation, we obtain the symmetrized correlation function

S(ω) = Γ/[(ω − Δ)² + Γ²] + Γ/[(ω + Δ)² + Γ²],   (27)

where Γ = (Γ_e + Γ_a)/2. The correlation function S(ω) has two peaks at ω = ±Δ, reflecting the coherent system dynamics. Because Γ ≪ Δ always holds in the weak-coupling regime, the correlation function is approximated as

S(ω) ≃ π[δ(ω − Δ) + δ(ω + Δ)],   (28)

where δ(x) is the delta function. The thermal conductance in the weak-coupling regime is obtained by substituting equation (28) into equation (22) [equation (29)]. This result is identical to the formula derived in previous research [5,30] using the master equation approach and is consistent with perturbation theory [29]. For the actual comparison with the numerical simulations in section 4, we improve the approximation by replacing Δ with Δ_eff using adiabatic renormalization (see section 3.1).
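The structure of S(ω) in this regime is easy to verify numerically. The sketch below assumes the two-Lorentzian form of equation (27) with each peak carrying weight π, and checks that the total spectral weight equals ⟨σ_z²⟩ = 1, which justifies the delta-function approximation of equation (28) when Γ ≪ Δ.

```python
import numpy as np

def S_sequential(omega, Delta, Gamma):
    # Pair of Lorentzians centered at omega = ±Delta with half-width Gamma;
    # each peak integrates to pi, so Gamma -> 0 recovers
    # pi*[delta(w - Delta) + delta(w + Delta)].
    return (Gamma / ((omega - Delta)**2 + Gamma**2)
            + Gamma / ((omega + Delta)**2 + Gamma**2))

Delta, Gamma = 1.0, 0.01               # weak coupling: Gamma << Delta
omega = np.linspace(-50.0, 50.0, 2_000_001)
total_weight = np.trapz(S_sequential(omega, Delta, Gamma), omega)
print(total_weight / (2.0 * np.pi))    # ~1.0, i.e. <sigma_z^2> = 1
```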
The formula for sequential tunneling [equation (29)] is valid when the relaxation rate is small compared with the effective tunneling amplitude, Γ ≪ Δ_eff [equation (30)]. For the sub-ohmic case (s < 1), this condition is never satisfied, indicating the absence of a sequential tunneling regime. For the ohmic case (s = 1), the condition is equivalent to α ≪ 1, whereas for the super-ohmic case (s > 1), the condition is always satisfied at moderate temperatures (k_B T ∼ ℏΔ_eff). At high temperatures (k_B T ≫ ℏΔ_eff), the condition is always satisfied for s ≥ 2, whereas for 1 < s < 2 it becomes T ≲ T* [equation (31)], where T* is the crossover temperature.
The formula for sequential tunneling [equation (29)] predicts an exponential decrease in the thermal conductance as the temperature is lowered. At low temperatures, the thermal conductance behaves as κ ∝ e^{−ℏΔ_eff/k_BT}/T²; this is because the transition from the ground state to the excited state is strongly suppressed when the thermal fluctuation is smaller than the effective energy splitting, i.e., when k_B T ≪ ℏΔ_eff. When the sequential tunneling process is strongly suppressed at low temperatures, equation (29) becomes invalid since another process becomes dominant, as discussed in the next subsection.
Figure 4. Schematic of the co-tunneling process. At k_B T ≪ ℏΔ_eff, heat transport via a virtual excitation in the local system is dominant.
Co-tunneling and an asymptotically exact formula
At low temperatures, heat transport via the virtual excitation of the local two-state system becomes dominant (see figure 4); this transport process is known as "co-tunneling" by analogy with the electronic transport process through quantum dots. In a previous study [29], an analytical expression for the thermal conductance was derived using fourth-order perturbation theory with respect to the interaction H_{I,ν}. However, this calculation did not consider the renormalization of the tunneling amplitude at low temperatures.
Here, we derive a new asymptotically exact formula for the thermal conductance without any approximations. For this purpose, we focus on an asymptotically exact relation called the generalized Shiba relation [45,46]:

S(ω) → πα (ℏχ₀/2)² Ĩ(ω)   (ω → 0),

where χ₀ is the static susceptibility defined in equation (10). This exact relation holds at low temperatures (k_B T ≪ ℏΔ_eff) for arbitrary environments and arbitrary system-reservoir couplings. At low temperatures (k_B T ≪ ℏΔ_eff), the dominant contribution to the integral of equation (18) comes from the low-frequency part (0 ≤ ℏω ≲ k_B T ≪ ℏΔ_eff) due to the factor of the Bose distribution function. By substituting the low-frequency asymptotic form S(ω) ≃ πα(ℏχ₀/2)² Ĩ(ω) into equation (18), we obtain the co-tunneling formula [equation (33)]. This expression is similar to the co-tunneling formula in previous studies [29,47,42] but differs significantly in the static susceptibility, χ₀, which accounts for higher-order processes. Equation (33) can be rewritten in a dimensionless scaling form [equation (34)].

Figure 5. Schematic of the incoherent tunneling process. The wavefunction is localized in the two wells, and a stochastic transition occurs between them.
In equation (34), F(s) is a dimensionless function of s. Thus, we find that the thermal conductance κ is proportional to T^{2s+1} at low temperatures. The same temperature dependence has been derived by perturbation theory [29,47,42]. However, perturbation theory cannot treat the renormalization effect of higher-order processes on the static susceptibility, and it fails to predict the correct prefactor, which includes χ₀. In contrast, the present result given in equation (33) is asymptotically exact, incorporating the renormalization effect appropriately. The co-tunneling formula [equation (33)], which is first derived in the present study, holds universally at low temperatures for an arbitrary exponent s, as long as the ground state of the system is delocalized (Δ_eff > 0). In a previous study [30], the thermal conductance in the ohmic case (s = 1) was shown to be proportional to T³, which is consistent with equation (33), and this T³-dependence was discussed in terms of the emergence of the Kondo effect. However, it is worth noting that the power-law temperature dependences are derived here in a unified way even in the non-ohmic cases. These temperature dependences result from nontrivial many-body effects due to strong mixing between the system and the reservoirs.
Incoherent tunneling: the Markov approximation
For a strong reservoir-system coupling, the coherent superposition of the two localized states is completely broken. In such a situation, heat transport is induced by stochastic dynamics between the two localized states |↑⟩ and |↓⟩, as shown in figure 5. We call this transport process "incoherent tunneling." Within the Markov approximation [48,49,50], the stochastic dynamics of the system are described by the master equation

dP_L/dt = −Γ P_L + Γ P_R,   dP_R/dt = −Γ P_R + Γ P_L,   (36)

where P_L(t) and P_R(t) (= 1 − P_L(t)) are the probabilities that the wavefunction of the system is localized at the left-hand well (σ_z = −1) and the right-hand well (σ_z = +1), respectively, at time t. The transition rate Γ is calculated via second-order perturbation with respect to the Hamiltonian H_S [equation (37)] [11]. Note that this expression for the transition rate of incoherent tunneling is valid when ℏΓ ≪ k_B T [50]. By solving the master equation [equation (36)], the symmetrized correlation function is calculated as

S(ω) = 4Γ/(ω² + 4Γ²).   (40)

In contrast to sequential tunneling, S(ω) has only one peak at ω = 0 with a width of 2Γ, indicating the destruction of the superposition of the two localized states. The long-term dynamics are well described by the Markov approximation [11]. Therefore, one may expect that the thermal conductance in the incoherent tunneling regime would be well approximated by substituting equations (37)-(40) into equation (22). However, the results of the Markov approximation show clear deviations from the numerical results, as discussed in section 4. The reason is summarized as follows. Note that incoherent tunneling occurs when ℏΓ ≪ k_B T. Under this condition, the integrand of equation (22) is proportional to ω^{s−2} for Γ ≪ ω ≪ k_B T/ℏ, since S(ω) ∝ ω^{−2} [see equation (40)]. Then, the integral in equation (22) would diverge if the high-frequency cutoff provided by the Bose distribution function were absent. This indicates that the high-frequency part of the integral in equation (22) makes the dominant contribution to the thermal conductance. Although the Markov approximation yields reasonable results for the low-frequency behavior of S(ω), it fails to reproduce the accurate high-frequency behavior of S(ω) in general, leading to incorrect results for the thermal conductance.
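The ω⁻² tail responsible for this failure can be made explicit with a few lines of Python; the Lorentzian below is the exact Fourier transform of e^{−2Γ|t|} and hence follows directly from the master equation (36).

```python
import numpy as np

def S_markov(omega, Gamma):
    # Fourier transform of exp(-2*Gamma*|t|): a single Lorentzian of
    # width 2*Gamma centered at omega = 0 [equation (40)].
    return 4.0 * Gamma / (omega**2 + 4.0 * Gamma**2)

Gamma, s = 1e-3, 1.0
omega = np.logspace(-2, 1, 4)
# The conductance integrand carries an extra factor ~ omega^s from the
# spectral function, so for Gamma << omega it behaves like omega^(s-2);
# for s = 1 this ~ 1/omega tail is cut off only by the Bose function.
print(S_markov(omega, Gamma) * omega**s)
```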
NIBA
To study the short-term (high-frequency) dynamics in the incoherent tunneling regime, we introduce NIBA, which is a natural extension of the Markov approximation of the previous subsection [11,51]. In NIBA, the symmetrized correlation function is calculated in the same manner as in a previous study [10]:

S(ω) = 2 Re[ 1/(−iω + Σ(−iω)) ],   (41)

where Σ(λ = −iω) is the frequency-dependent self-energy defined in equation (42) in terms of Q₁(τ) and Q₂(τ), which are given by equations (38) and (39), respectively. The thermal conductance is then calculated by substituting equations (41) and (42) into equation (22). From the definition, it is easy to check that NIBA reproduces the Markov approximation if we neglect the frequency dependence of the self-energy and replace it with its zero-frequency value Σ(0) = 2Γ. Since NIBA appropriately accounts for the non-Markovian properties, it is suitable for describing the thermal conductance in the incoherent tunneling regime. The conditions under which NIBA holds are well known [11,10]. As expected from the fact that NIBA is an extension of the Markov approximation, it works well in the incoherent tunneling regime. Roughly, the incoherent tunneling mechanism becomes crucial in the regime wherein both the sequential tunneling formula and the co-tunneling formula fail. (a) NIBA holds at moderate-to-high temperatures in the sub-ohmic (s < 1) and ohmic (s = 1) cases. (b) It holds for T > T* in the super-ohmic case of 1 < s < 2, where T* is the crossover temperature discussed in section 3.2. Note that NIBA never holds for s ≥ 2 since the crossover temperature T* diverges.
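The resolvent structure of equation (41) can be illustrated numerically. In the sketch below, the Markov case uses the constant self-energy Σ(0) = 2Γ, while the frequency-dependent self-energy is a hypothetical toy model chosen only to mimic a faster high-frequency decay; it is not the actual NIBA self-energy of equation (42).

```python
import numpy as np

def S_resolvent(omega, Sigma):
    # Correlation function from a frequency-dependent self-energy,
    # S(omega) = 2*Re[1/(-i*omega + Sigma(-i*omega))].
    return 2.0 * np.real(1.0 / (-1j * omega + Sigma(omega)))

Gamma, omega_c = 0.01, 1.0
markov = lambda w: 2.0 * Gamma + 0.0 * w                   # Sigma(0) = 2*Gamma
toy = lambda w: 2.0 * Gamma / (1.0 + (w / omega_c)**2)     # hypothetical decay

w = np.array([0.0, 0.005, 0.05, 0.5, 5.0])
print(S_resolvent(w, markov))  # 4*Gamma/(w^2 + 4*Gamma^2): Markov Lorentzian
print(S_resolvent(w, toy))     # same at low w, faster high-frequency decay
```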
Here, NIBA has been introduced to improve the Markov approximation in the incoherent regime. This may give readers the impression that NIBA is a good approximation only in the incoherent regime. However, NIBA is known to be applicable in a wider parameter region, not restricted to the incoherent regime [10]. NIBA also holds in the weak-coupling regime (α ≪ 1) at arbitrary temperatures for the unbiased case (ε = 0), where the interblip interaction is shown to be much weaker than the intrablip interaction (for a detailed discussion, see Sec. 21.3 in Ref. [10]). For this reason, NIBA yields almost the same result as the sequential tunneling formula or the co-tunneling formula if the system-reservoir coupling is sufficiently weak.
In section 4, we show that NIBA is an excellent approximation for reproducing the numerical results for a wide region of the parameter space at moderate-to-high temperatures. Thus, the short-term (high-frequency) non-Markovian behavior in the system dynamics is important for calculating the thermal conductance in the incoherent tunneling regime.
Numerical Results and Comparison with Analytical Formulas
While the analytical approaches discussed in the previous section are sufficiently powerful for clarifying the mechanism of heat transport in a two-state system, the detailed conditions justifying each approximation are not trivial. To understand all features of heat transport, unbiased numerical simulation without any approximation would be helpful. In this section, we therefore perform numerical simulations based on the quantum Monte Carlo method and compare the simulation results with the analytical formulas introduced in section 3. After briefly describing the numerical method in section 4.1, we separately consider the ohmic (section 4.2), sub-ohmic (section 4.3), and super-ohmic cases (sections 4.4 and 4.5).
The dynamics of the spin-boson model have been studied using various numerical methods [52,53,54,55,56,57,58]. However, no systematic comparison between analytical approximations and numerical simulations has been performed in the context of heat transport near thermal equilibrium. This comparison allows us to critically discuss the validity of the various approximations.
Numerical method
For numerical simulations, we employ the continuous-time quantum Monte Carlo (CTQMC) algorithm proposed in a previous study [19]. According to this algorithm, the partition function is rewritten in path-integral form with respect to an imaginary time path, σ z (τ ), and the weight of this path is defined. Then, we apply the Monte Carlo method to this representation using the cluster update algorithm [59]. The details of the CTQMC method are given in Appendix C.
Using the CTQMC method, we evaluate the imaginary-time spin correlation function and its Fourier transform:

C(τ) = ⟨σ_z(τ) σ_z(0)⟩,   C(iω_n) = ∫₀^{ℏβ} dτ e^{iω_n τ} C(τ),

where σ_z(τ) = e^{τH/ℏ} σ_z e^{−τH/ℏ} and ω_n = 2πn/(ℏβ) is the bosonic Matsubara frequency. The dynamical susceptibility χ(ω) is obtained from C(iω_n) via analytic continuation, iω_n → ω + i0⁺. The analytic continuation is performed by Padé approximation [60,61] or by fitting the Fourier transform of the imaginary-time spin correlation function to a Lorentzian-type function [58]. For details, see Appendix C.
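As an illustration of the Padé step, the following minimal sketch implements Thiele continued-fraction interpolation (the Vidberg-Serene scheme commonly used for this purpose) and continues toy Matsubara-axis data to the real axis. The test function is a simple rational function chosen so that four nodes reproduce it exactly; production continuations use many more points and often higher-precision arithmetic.

```python
import numpy as np

def pade_coefficients(z, f):
    # Vidberg-Serene recursion for Thiele continued-fraction coefficients.
    n = len(z)
    g = np.zeros((n, n), dtype=complex)
    g[0] = f
    for i in range(1, n):
        g[i, i:] = (g[i - 1, i - 1] - g[i - 1, i:]) / ((z[i:] - z[i - 1]) * g[i - 1, i:])
    return np.diag(g)

def pade_eval(z, a, x):
    # Evaluate the continued fraction a0/(1 + a1(x-z0)/(1 + a2(x-z1)/...)).
    A = a[-1] * (x - z[-2])
    for i in range(len(a) - 2, 0, -1):
        A = a[i] * (x - z[i - 1]) / (1.0 + A)
    return a[0] / (1.0 + A)

# Toy data on the imaginary axis for f(z) = 1/(z+1) + 0.5/(z+2); four nodes
# determine this [1/2] rational exactly, so the continuation reproduces f
# on the real axis.
zn = 1j * np.array([0.1, 0.3, 0.5, 0.7])
f = 1.0 / (zn + 1.0) + 0.5 / (zn + 2.0)
a = pade_coefficients(zn, f)
x = np.linspace(0.0, 2.0, 5) + 0.0j
print(pade_eval(zn, a, x))
print(1.0 / (x + 1.0) + 0.5 / (x + 2.0))   # reference values
```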
The ohmic case (s = 1)
In figure 6, we show the thermal conductance for α = 0.05, 0.1, 0.5, and 0.7 as a function of temperature. We plot the graph using the normalized temperature k_B T/ℏΔ_eff and the normalized thermal conductance κ/(k_B γΔ_eff), where Δ_eff is the effective tunneling amplitude defined in equation (23). As shown in figure 6, under this normalization the numerical results fall on a universal scaling curve for each value of α, regardless of the ratio Δ/ω_c (≪ 1). This universal behavior is characteristic of the Kondo-like effect [30]. In figure 6 (c), we also show the exact solution (the Toulouse point) for α = 0.5 (indicated by the brown dot-dashed line) [58,10,30]. The agreement between the numerical results and the exact solution indicates the correctness of the CTQMC simulation.
At low temperatures (k_B T ≪ ℏΔ_eff), the numerical results agree well with those of the approximate formula for the co-tunneling process [equation (34); indicated by blue dashed lines in figure 6]. In this regime, the thermal conductance is always proportional to T³ (= T^{2s+1}), which is consistent with the results of a previous study [30].
At moderate (k_B T ∼ ℏΔ_eff) and high temperatures (k_B T ≫ ℏΔ_eff), the numerical results deviate from the co-tunneling formula and agree well with NIBA (indicated by black solid lines in figure 6). Note that the thermal conductance obtained by NIBA is proportional to T^{3−2α} at low temperatures, as shown in figure 6. NIBA agrees well even with the low-temperature numerical results for weak system-reservoir coupling (α ≪ 1), whereas it deviates from them as the coupling becomes large. It is remarkable that NIBA agrees well with the numerical results at arbitrary temperatures for α ≪ 1, as shown in figure 6 (a). In figures 6 (a) and (b), we also show the approximate formula for sequential tunneling (indicated by green dot-dashed lines). As shown in the figure, the sequential tunneling formula at moderate temperatures (k_B T ∼ ℏΔ_eff) agrees with the numerical results for weak system-reservoir coupling (α ≪ 1). However, note that NIBA agrees with the numerical results over a wider temperature region than the sequential tunneling formula.
The Markov approximation for incoherent tunneling, indicated by orange dotted lines in figure 6, clearly deviates from the numerical results for α = 0.05, 0.1, and 0.7, indicating the importance of the non-Markovian properties of the system. The Toulouse point α = 0.5 is an exception, as shown in figure 6 (c); NIBA coincides with the Markov approximation since at this point the self-energy in NIBA becomes independent of the frequency for the unbiased case [10]. A detailed discussion on the failure of the Markov approximation is given in section 4.3.
As described in section 3.1, a quantum phase transition occurs at α_c = 1 for the ohmic case. For α ≥ 1, the effective tunneling amplitude Δ_eff becomes zero, indicating complete destruction of the superposition of the two localized states. Therefore, heat transport is induced by incoherent tunneling at arbitrary temperatures. In figure 7, we show the thermal conductance for α = 1.0, 1.5, and 2.0 as a function of temperature. As indicated by the black solid lines in the figure, the numerical results agree well with the NIBA formula at arbitrary temperatures. Note that for α ≥ 1, the condition for the co-tunneling regime, k_B T ≪ ℏΔ_eff, is never satisfied. In figure 7, we also show the Markov approximation for incoherent tunneling (indicated by the orange dashed line). For α ≥ 1, the difference between NIBA and the Markov approximation is not considerably large.
The sub-ohmic case (s < 1)
We first discuss the thermal conductance for the sub-ohmic case wherein the system-reservoir coupling is below the critical value for the quantum phase transition. In figure 8 (a), we show the thermal conductance as a function of temperature for s = 0.9, Δ/ω_c = 0.01, and α = 0.1, for which the ground state is delocalized (α < α_c(s, Δ)). At moderate and high temperatures, the numerical results agree well with NIBA, which is shown by the black solid line. We note that the sequential-tunneling formula cannot be applied to the sub-ohmic case. At low temperatures (k_B T ≪ ℏΔ_eff), the numerical results agree well with the co-tunneling formula, showing the T^{2s+1}-dependence.
We also show the results of the Markov approximation for incoherent tunneling by the orange dotted line in figure 8 (a). The Markov approximation clearly deviates from the numerical results. To understand this failure, we show the numerical and analytical results for the symmetrized correlation function S(ω) as a function of ω/ω_c for k_B T = ℏω_c/64 in figure 8 (b). While the Markov approximation for the incoherent tunneling process agrees with the numerical results at low frequencies, a clear deviation is observed at higher frequencies; the numerical result indicates that the high-frequency decay of S(ω) is much faster than that of the Markov approximation, which is proportional to ω^{−2} [see equation (40)]. We note that the numerical result for S(ω) is well reproduced by NIBA at arbitrary frequencies. These observations indicate that the non-Markovian properties of the system dynamics are important for obtaining correct thermal conductance results in the sub-ohmic case.

Next, let us study the effect of the quantum phase transition. Figure 9 shows the phase diagram determined by the CTQMC method. The detailed procedure for the determination of the critical point is given in Appendix D. The obtained critical system-reservoir coupling α_c for the quantum phase transition is a function of both s and Δ and is consistent with previous work based on NRG calculations [17]. The quantum phase transition remarkably affects the temperature dependence of the thermal conductance. In figure 10, we show the thermal conductance as a function of temperature for s = 0.6 and Δ/ω_c = 0.01, for which a quantum phase transition occurs at α = α_c = 0.0615. Figure 10 (a) shows the temperature dependence in the delocalized regime (α = 0.02 < α_c), for which Δ_eff remains finite. The numerical results agree well with the co-tunneling formula at low temperatures and with NIBA at moderate-to-high temperatures. This feature is the same as that shown in figure 8. Figure 10 (b) shows the temperature dependence in the localized regime (α = 0.1 > α_c), for which Δ_eff = 0. Reflecting the quantum phase transition, the numerical results agree with NIBA at arbitrary temperatures, as shown in figure 10 (b). Since the condition for the co-tunneling regime, k_B T ≪ ℏΔ_eff, is never satisfied for Δ_eff = 0, the thermal conductance does not show the universal T^{2s+1}-dependence due to the co-tunneling process at low temperatures.
The super-ohmic case (1 < s < 2)
In figure 11, we show the numerical thermal conductance obtained using CTQMC as a function of temperature for s = 1.5. Here, the horizontal and vertical axes are the normalized temperature k_B T/ℏΔ_eff and the normalized thermal conductance κ/(k_B γΔ_eff(Δ_eff/ω_c)^{2s−2}), respectively, where Δ_eff is the effective tunneling amplitude defined in equation (24). Note that there is no quantum phase transition for the super-ohmic case (s > 1); Δ_eff is finite for arbitrary system-reservoir couplings. At low temperatures (k_B T ≪ ℏΔ_eff), the numerical results agree with the co-tunneling formula (indicated by blue dashed lines) and show the T^{2s+1}-dependence, regardless of the strength of the system-reservoir coupling. As shown in figure 11 (a), the numerical results for α = 0.1 agree with the sequential tunneling formula at moderate temperatures (k_B T ∼ ℏΔ_eff) and with NIBA at high temperatures. However, from figure 11 (b), it is evident that the numerical results for α = 0.5 agree better with NIBA than with the sequential tunneling formula at moderate-to-high temperatures (k_B T ≳ ℏΔ_eff). This change can be explained by the crossover temperature T*, which separates the sequential (T < T*) and incoherent (T > T*) tunneling regimes [see equation (31)]. As the system-reservoir coupling α increases, the temperature region in which the numerical results agree with NIBA widens, since the crossover temperature T* is lowered.
The Markov approximation for incoherent tunneling is indicated by orange dotted lines in figure 11. The incoherent tunneling formula clearly deviates from the numerical results, indicating the importance of the non-Markovian properties of the system dynamics. The origin of this disagreement is the same as that for the sub-ohmic case (see section 4.3).
The super-ohmic case (s ≥ 2)
In figure 12, we show the numerical results for the thermal conductance obtained using the CTQMC method as a function of temperature for s = 2.0. The normalization of the horizontal and vertical axes, as well as the line types of the analytical formulas, are the same as those in figure 11. At low temperatures, the numerical results agree well with the co-tunneling formula and show the T^{2s+1}-dependence, regardless of the strength of the system-reservoir coupling. In contrast to the case of 1 < s < 2, the numerical results agree with the sequential tunneling formula at moderate-to-high temperatures. This is reasonable since the crossover temperature T* becomes of the order of ℏω_c/k_B for s = 2.
Summary
We systematically considered heat transport via a local two-state system for all types of reservoirs, i.e., for the ohmic case (s = 1), super-ohmic case (s > 1), and sub-ohmic case (s < 1). We used the exact expression for the thermal conductance obtained from the Keldysh formalism and studied it using both analytic and numerical methods.
First, we considered the approximations for the three transport processes: sequential tunneling, co-tunneling, and incoherent tunneling. In particular, we newly derived a universal formula for co-tunneling using the generalized Shiba relation, which predicts the T^{2s+1}-dependence of the thermal conductance at low temperatures. We also pointed out that the Markov approximation yields incorrect results for the thermal conductance in the incoherent tunneling regime, since the non-Markovian properties are important; NIBA, in contrast, yields correct results in this regime.
Next, we used a continuous-time quantum Monte Carlo algorithm and systematically compared the numerical results with the analytical approximation formulas. We found that all numerical results were well reproduced by one of three formulas, i.e., the sequential tunneling formula, the co-tunneling formula, or NIBA. The formulas that yield correct results are summarized in Table 1. We also showed that for 0 < s ≤ 1, the quantum phase transition between the delocalized and localized phases strongly affects the temperature dependence of the thermal conductance. In the delocalized phase (α < α_c), the thermal conductance is well described by the co-tunneling formula at low temperatures and by NIBA at moderate-to-high temperatures. In contrast, in the localized phase (α > α_c), NIBA holds at arbitrary temperatures.
Our study is expected to provide a theoretical basis for describing heat transport via nano-scale objects. Herein, we focused on heat transport in a symmetric double-well-shaped potential near thermal equilibrium in the limit of Δ ≪ ω_c. The effects of asymmetry in the system's potential, the cutoff-frequency dependence, and far-from-equilibrium effects constitute important future problems. The temperature dependence of the thermal conductance in the critical regime near the quantum phase transition is also an intriguing subject for research and will be discussed elsewhere.
contour. By projection onto the real-time axis, the lesser component of equation (A.8) is rewritten in terms of real-time Green functions. The heat current is then rewritten as equation (A.10), where Σ^<_ν(t, t′) and Σ^a_ν(t, t′) are the lesser and advanced components, respectively, of the reservoir self-energy. Here, n_ν(ω) = (e^{ℏω/k_B T_ν} − 1)^{−1} is the Bose distribution function of phonons (photons) in reservoir ν. The Fourier transformation of equation (A.10) yields an expression in terms of G^r_{σ_z,σ_z}(ω) and G^<_{σ_z,σ_z}(ω), the Fourier transforms of the retarded and lesser components of the nonequilibrium Green function, respectively. Using the conservation law of energy, J_L = −J_R ≡ J, the heat current is symmetrized between the two reservoirs. Here, we used I_ν(ω) = α_ν Ĩ(ω). Rewriting G^r_{σ_z,σ_z}(ω) in terms of χ(ω), we finally obtain equation (15).
If, for a moment, we ignore the other low-frequency oscillators, the wavefunctions of the two lowest energy eigenstates of the system-plus-reservoir are described by symmetric and antisymmetric combinations of |Ψ_L⟩ and |Ψ_R⟩. Here, the prime symbol indicates that the product is restricted to the range pω_c < ω_νk < ω_c. |Ψ^±_νk⟩ is the ground-state wavefunction of the oscillator k in reservoir ν when the wavefunction of the local system is located at x = ±x₀/2; it is obtained by translation of the ground-state wavefunction |Ψ⁰_νk⟩ of the isolated oscillator. Adiabatic renormalization suggests that the tunneling amplitude is renormalized by the overlap between the ground-state wavefunctions of the oscillators for the different localized states (σ_z = ±1). If the renormalized tunneling amplitude Δ^{(p)} is less than pω_c, the adiabatic renormalization can be continued by reducing the factor p. If Δ^{(p*)} = p*ω_c holds at p = p*, the adiabatic renormalization must be stopped there, and the finite effective tunneling amplitude Δ_eff = Δ^{(p*)} is obtained. On the contrary, if Δ^{(p)} < pω_c holds for an arbitrary value of p, the adiabatic renormalization can be continued even to p = 0, yielding an effective tunneling amplitude of zero (Δ_eff = 0). For the ohmic case (s = 1), the effective tunneling amplitude is obtained as

Δ_eff = Δ (Δ/ω_c)^{α/(1−α)}   (0 ≤ α < 1),
Δ_eff = 0   (1 ≤ α).   (B.8)

In this paper, following Ref. [10], we employ a modified effective tunneling amplitude multiplied by a dimensionless function of α [equation (B.9)]. Using this definition, equation (23) is derived. Based on equation (B.7), it is straightforward to show that the effective tunneling amplitude in the super-ohmic case (s > 1) assumes the finite value given by equation (24) and that it always vanishes in the sub-ohmic case (s < 1).
Appendix C. Continuous-time Quantum Monte Carlo Method
In early numerical studies [62,58,63], the Monte Carlo method has been applied directly to the long-range Ising model, which is mapped from the spin-boson model [10,23,64,65,11]. Subsequently, the continuous-time quantum Monte Carlo (CTQMC) algorithm [66,59] has been applied directly to the spin-boson model without mapping [19]. In this section, we describe the CTQMC algorithm employed in the present numerical simulation.
The partition function of the spin-boson model (5) is written in path-integral form [10,19] as an integral over spin paths σ_z(τ)(= ±1) defined on the imaginary-time axis, where Dσ_z(τ) indicates the integral over all possible paths σ_z(τ) and K(τ) is a kernel determined by the reservoir spectral function. As shown in figure C1 (a), the path σ_z(τ) is specified by an alternating configuration of kinks (jumps from σ_z = −1 to σ_z = +1) and anti-kinks (jumps from σ_z = +1 to σ_z = −1) and is described by the positions τ_i (i = 1, 2, ..., 2n) of the kinks (q_i = +1) and anti-kinks (q_i = −1), where n is the number of kink-anti-kink pairs. Note that kinks and anti-kinks alternate (q_{i+1} = −q_i). By substituting this representation into the path integral, the partition function reduces to a sum over kink configurations, and we apply the CTQMC method to this partition function. The present CTQMC algorithm [66] employs a cluster-flip update similar to that of the Swendsen-Wang cluster algorithm [67]. The cluster-flip update is constructed as follows [19] (see figure C1). We consider the initial path σ_z(τ) shown in figure C1 (a), insert auxiliary vertices, as shown in (b-iii), and construct segment clusters. Here, σ_z(s_i) is the value of σ_z in the segment s_i, and the positions of the vertices (including the inserted ones) at the two edges of the segment s_i are denoted by τ_{i−1} and τ_i, respectively. Finally, we flip each segment cluster with probability 1/2, as shown in (b-iv), and remove the redundant vertices within segments, as shown in (b-v). The final path is then given in figure C1 (c). The Monte Carlo data presented in this paper typically represent averages over 10³-10⁴ updates at low temperatures and 10⁷-10⁸ updates at high temperatures.
To perform this continuation numerically, we usually employ the Padé approximation [60,61]. In the weak-coupling regime, the Padé approximation yields poor results since the imaginary part of the pole nearest to the real frequency axis is small. In this case, we employ an alternative approach based on fitting [58]. We assume that the spin correlation function takes the form

C(iω_n) ≈ a ω₀³ / [ (ω_n + λ)² + ω₀² ] + const,   (C.10)

where a, ω₀, and λ are fitting parameters determined using the least-squares method.
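A least-squares fit of the form (C.10) can be set up directly with scipy; the Matsubara data below are synthetic stand-ins for CTQMC output.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_form(wn, a, w0, lam, c):
    # Fitting form (C.10): a*w0^3 / ((w_n + lam)^2 + w0^2) + const
    return a * w0**3 / ((wn + lam)**2 + w0**2) + c

# Toy Matsubara data (hypothetical values standing in for CTQMC output).
wn = np.linspace(0.0, 10.0, 50)
rng = np.random.default_rng(0)
data = fit_form(wn, 0.8, 1.0, 0.05, 0.01) + 1e-4 * rng.normal(size=wn.size)

popt, pcov = curve_fit(fit_form, wn, data, p0=[1.0, 1.0, 0.1, 0.0])
print(popt)  # least-squares estimates of a, omega_0, lambda, const
```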
It is easy to obtain the dynamical susceptibility Im[χ(ω)] using the fitting function (C.10) with optimized parameters. Note that this fitting method works well for weak couplings since it is compatible with the dynamical susceptibility of the sequential tunneling process. To use the co-tunneling formula (33), we need to calculate the static susceptibility χ₀. Typically, the simple estimate χ₀ ≃ 2/(ℏΔ_eff) yields quantitatively correct results. However, for the sub-ohmic case, χ₀ has a nontrivial temperature dependence even at low temperatures. In this case, we numerically calculate χ₀ using the CTQMC method from the zero-frequency Matsubara component, χ₀ = C(iω_n = 0)/ℏ.
"Physics"
] |
BioModelos: A collaborative online system to map species distributions
Information on species distribution is recognized as a crucial input for biodiversity conservation and management. To that end, considerable resources have been dedicated towards increasing the quantity and availability of species occurrence data, boosting their use in species distribution modeling and online platforms for their dissemination. Currently, those platforms face the challenge of bringing biology into modeling by making informed decisions that result in meaningful models, based on limited occurrence and ecological data. Here we describe BioModelos, a modeling approach supported by an online system and a core team of modelers, whereby a network of experts contributes to the development of species distribution models by assessing the quality of occurrence data, identifying potentially limiting environmental variables, establishing species' accessible areas and validating modeling predictions qualitatively. Models developed through BioModelos become freely and publicly available once validated by experts, furthering their use in conservation applications. Our approach has been implemented in Colombia since 2013. It currently consists of a network of nearly 500 experts that collaboratively contribute to enhancing knowledge of the distribution of a growing number of species, and it has aided the development of several decision support products, such as national risk assessments and biodiversity compensation manuals. BioModelos is an example of the operationalization of an essential biodiversity variable at a national level through the implementation of a research infrastructure that enhances the value of open access species data.
Introduction
Species distributions are an essential biodiversity variable (EBV) [1,2], critical to evaluate species' conservation status and trends [3,4], measure biodiversity change [5][6][7], and guide conservation and management at the species and community levels [8], as well as to assess their ecosystem services [9], potential impacts on human activities [10,11] and health [12,13]. This EBV is also a key input for the calculation of indicators to evaluate countries' progress towards achieving international targets, such as the Convention on Biological Diversity Aichi targets [14] and the United Nations' Sustainable Development Goals (SDGs) [15]. For example, according to the Biodiversity Indicators Partnership (www.bipindicators.net), 11 out of 20 Aichi targets and 6 out of 17 SDGs use indicators that require information on species distribution, either for their calculation or for their disaggregation at national and subnational levels. Thus, platforms that consolidate and facilitate access to the highest quality data on species distributions are necessary to coordinate the delivery of biodiversity observations as EBVs and aid biodiversity conservation and management globally [16]. Considerable international efforts and resources have been designated towards the mobilization of primary biodiversity data (PBD), particularly through the Global Biodiversity Information Facility (GBIF). These data are fundamental for many conservation analyses based on species distributions. However, for most areas in the world, our knowledge of species distributions based on PBD is geographically biased and incomplete [17]. This situation is particularly dire in biodiversity hotspots, which lack sufficient information on species distributions based on PBD even at coarse spatial scales [17][18][19]. Therefore, for most regions on earth, methods that generalize occurrences into areas representing species distributions are necessary to use PBD in conservation applications.
Species distribution modeling has emerged in the last two decades as a set of methods and practices to estimate species distributions [20][21][22]. These models are based on PBD and environmental data and use a variety of statistical methods to infer the probability of occurrence or suitability in unsampled sites. As such, they are a powerful tool to overcome the Wallacean shortfall (i.e. the lack of knowledge on species' geographic distributions [23]), owing to their ability to produce reasonable predictions with few occurrences [24,25], their repeatability and their ease of update. However, their implementation is not straightforward [26][27][28], and fully automated, large-scale modeling procedures face several challenges [29,30], namely the need for expert knowledge to detect and correct certain types of errors in PBD [31], select meaningful environmental covariates [32,33], determine each species' accessible area [34,35] and judge the biological realism of predictions [36].
Current platforms that provide maps of species' distributions are either based largely on expert maps (e.g. Map of Life, www.mol.org) or use fully automated modeling workflows (e.g. BIEN, http://biendata.org/). Expert maps may in some cases be the only way to characterize a species' distribution, for example when very few observations are available, but they are difficult to update as new observations accumulate [37], they are not repeatable, and their precision is often too coarse to inform conservation at regional scales [38]. On the other hand, large-scale, fully automated modeling workflows are unable to detect and fix errors that require domain-specific expertise (e.g. species misidentification or geographic outlier detection [31]), and unspecific modeling choices, for example of accessible areas and environmental variables, are likely to result in biologically unrealistic models.
To address the challenge of mapping large numbers of species without compromising biological realism, we devised BioModelos (biomodelos.humboldt.org.co), an online system that involves a network of experts and a core team of modelers in the development and validation of species distribution models, which are freely available for public visualization and download. Here we describe the operational approach of BioModelos, the functionalities of its web app and its implementation in Colombia. Although BioModelos has thus far been deployed in a single country, our philosophy and software architecture may be applied to other regions and even scaled up to global implementations.
Network structure and governance
The aim of BioModelos is to provide distribution maps for a set of species in a particular area that are validated in terms of their biological realism by experts. To that end, in BioModelos experts are arranged into groups according to their areas of taxonomic and/or geographic expertise. Experts are defined as individuals who are able to curate and improve the taxonomic and/or geographic quality of occurrence data, inform the selection of certain modeling parameters (e.g. the accessible area) or assess the performance of competing species distribution hypotheses, for at least one species in their group.
Each group is coordinated by one or more moderators, who are ideally well-connected members of a community of researchers interested in advancing knowledge on the distribution of a set of species. As such, they are responsible for objectively evaluating the expertise of potential members, setting deadlines for each step in the model development workflow in agreement with group experts, and expediting the completion of the group's modeling agenda.
The core team of BioModelos facilitates some of the steps in the modeling workflow according to each group's needs, namely aggregating species occurrences and running automated data validation routines, modeling species distributions, and processing experts' feedback on models. Additionally, the core team approves the creation of new groups, enables the publication of models generated by third parties, and updates occurrence databases following recommendations provided by the groups.
Model development workflow
Species distribution hypotheses available in the BioModelos web app are generated either by collaborative development of species distribution models and expert maps or by third parties that independently upload models to the web app (Fig 1).
Collaborative development of species distribution models
Planning. In BioModelos, every species is associated with a single group of experts who are tasked with generating, improving or validating a distribution hypothesis for each species in the group. Experts within a given group inform the core team which species they plan to model, propose deadlines for modeling activities and provide information that may be relevant for model development, such as previously curated occurrence data or private data and suggestions of meaningful environmental variables to consider in modeling, among others.

Data aggregation and quality assessment. Unless the group provides previously curated data, the core team aggregates occurrences from multiple data providers (GBIF, eBird, VertNet, speciesLink etc.), either manually or through web services when available. After aggregation and standardization, a series of automated data quality checks are performed (S1 Table). Importantly, a permanent unique identifier for each occurrence is generated, and the original identifiers (e.g. occurrence id, institution, collection code, catalog numbers etc.) are maintained throughout the process so that provenance may be traced and feedback on data quality may be sent back to data providers whenever mechanisms to that end exist. Finally, occurrences and quality assessment information are added to the BioModelos contents database.
Collaborative data cleaning. Records that pass selected filters based on the automated quality checks become visible on the BioModelos geographic viewer. When there are spatial duplicates (i.e. more than one record falls within a 1 km cell), only the most documented record is visible. The filters implemented for each occurrence dataset vary, depending on its characteristics and the number of records available for a particular group (Table 1). Published records are further reviewed in BioModelos by experts to identify and flag likely identification and georeferencing errors not detectable in automated checks. Whenever corrections are possible, experts are encouraged to edit identifications or coordinates. If a correction is not possible, experts flag records according to the manual flags presented in S1 Table. All changes in the contents database are automatically logged so that it is possible to track and revert changes made to any particular record.
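As an illustration of the spatial-duplicate rule, the following Python sketch keeps, within each roughly 1 km grid cell, the record with the most populated fields; the 0.0083° cell size and the field-count criterion for "most documented" are assumptions made for the example.

```python
import pandas as pd

def thin_to_grid(df, res=0.008333):
    # Assign each record to a ~1 km grid cell and keep the record with the
    # most non-null fields per cell (proxy for "most documented").
    df = df.copy()
    df["cell"] = (df["lon"] // res).astype(str) + "_" + (df["lat"] // res).astype(str)
    df["n_fields"] = df.notna().sum(axis=1)
    return (df.sort_values("n_fields", ascending=False)
              .drop_duplicates("cell")
              .drop(columns=["cell", "n_fields"]))

records = pd.DataFrame({
    "lon": [-74.0801, -74.0802, -70.5], "lat": [4.6001, 4.6002, 1.2],
    "locality": ["Bogota", None, "Amazonas"], "collector": ["A", "B", None],
})
print(thin_to_grid(records))  # the two nearby points collapse to one record
```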
To inform the development of distribution models, experts are also asked at this step to delineate rough polygons of species' accessible areas (M sensu [39]) using a polygon tool and to identify land cover types where species are expected to maintain viable populations by filling out a habitat preferences form.
Distribution modeling. After occurrence data have been assessed for quality, the BioModelos core team develops distribution models using the occurrences without quality issues. Our modeling workflow consists broadly of the following steps: (1) occurrence thinning [40], for which a threshold is selected based on exploratory analysis of data availability and spatial autocorrelation; (2) environmental data selection depending on the species' biology and dataset quality; (3) selection of the modeling method based on the number of available occurrences (1-2: 10 km buffer around points; 3-5: convex hull; 5-9: Bioclim; >9: MaxEnt); (4) sampling background data from accessible areas, from target groups or bias sampling surfaces whenever sufficient sampling exists [41], otherwise at random; (5) use of spatial partitioning to evaluate and optimize MaxEnt model parameters through the ENMeval package [28] and to evaluate Bioclim models; (6) development of distribution models using the full set of occurrences and, for MaxEnt, the regularization and feature settings that optimize performance; (7) generation of thresholded models at the minimum, 10th, 20th and 30th percentile training presence; (8) upload of metadata for each model to the contents database and of model predictions as GeoTIFF files to the BioModelos front-end, where they become visible under the status "under development". Although many alternative workflows and modeling choices could be made [27], this workflow is well suited to the availability of data in our country of implementation, and similar workflows have been implemented elsewhere in the SDM literature (e.g. [42]). Nonetheless, we emphasize that the BioModelos web app acts as a layer through which information from experts is gathered; once collected, a variety of modeling workflows may be implemented. This allows the core team to update its modeling workflow based on advances in the field independently from the BioModelos web app.
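Step (3) of this workflow can be expressed as a simple rule. Since the published ranges "3-5" and "5-9" overlap at five occurrences, the sketch below assigns five records to Bioclim as an assumption.

```python
def select_method(n_occurrences: int) -> str:
    # Modeling-method selection rule from step (3) of the workflow.
    if n_occurrences <= 2:
        return "10 km buffer around points"
    if n_occurrences <= 4:
        return "convex hull"
    if n_occurrences <= 9:
        return "Bioclim"
    return "MaxEnt"

for n in (1, 3, 5, 9, 10, 250):
    print(n, "->", select_method(n))
```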
Expert feedback processing. Models published in the previous step are reviewed by experts, who select through a slider an omission threshold (varying from 0% to 30%) to convert continuous models into binary models. Though other thresholding methods could easily be implemented in the web app [43,44], our current range of thresholds has a straightforward interpretation by experts and is sufficient to elicit their opinion on species' prevalence within an area of interest based on presence-only data. In addition, experts may further refine thresholded models using a polygon tool to delineate areas of model over- or under-prediction. Using this feedback, models are edited offline by the BioModelos core team, and the resulting models are published in the "available hypothesis" box of BioModelos under the status "pending validation" (Fig 2).
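The omission threshold experts choose through the slider corresponds to the percentile training presence criterion. A minimal sketch, assuming the threshold is the suitability value at the chosen percentile of the training occurrences, is:

```python
import numpy as np

def threshold_model(suitability, occurrence_values, percentile):
    # Binary model at the k-th percentile training presence: the threshold
    # is the suitability value below which k% of training occurrences fall.
    t = np.percentile(occurrence_values, percentile)
    return (suitability >= t).astype(np.uint8)

rng = np.random.default_rng(1)
surface = rng.random((100, 100))                 # stand-in for a model raster
occ_vals = rng.random(30) * 0.6 + 0.4            # suitability at occurrences
binary = threshold_model(surface, occ_vals, 10)  # 10th percentile rule
print(binary.mean())                             # predicted prevalence
```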
Collaborative development of expert maps
For many species without sufficient occurrence data, it is not possible to develop a distribution model. In those cases, experts may still use BioModelos to inform the general range limits of a species and its habitat preferences using the "create your map" feature (Fig 2). By choosing this option, experts can also provide instructions to further refine these maps based on geographic features such as barriers (rivers, canyons, watersheds) and elevation. These instructions are processed by the core team to generate species distribution maps that are displayed in the BioModelos viewer in the "available hypothesis" box.
Publication by third parties
As species distribution modeling is a rapidly expanding field, a large number of models are being generated by numerous researchers. Many of these are produced by experts on the species being modeled and thus represent a valuable resource to other scientists and users. The publication of these models is facilitated in BioModelos through a publication option in the model generation box (Fig 2B) that allows users to submit their models in raster format, along with occurrence data (optional), model metadata and methods (templates in Spanish available at http://biomodelos.humboldt.org.co/guia_documentacion_y_plantillas.zip). Whenever a submission consists of models used in previously peer-reviewed research, the models are published in BioModelos without further review. Otherwise, the methodology is reviewed by the core team or an external reviewer and a decision is made regarding publication.
Model validation and publication
An essential feature of BioModelos is the validation of species distribution hypotheses by experts. This validation is inherently qualitative, and it is made on the basis of experts' subjective judgement of the biological realism of a model. Since at any point there may be several hypotheses for the distribution of a species (for example, a species may have both a published model and a collaborative model), we ask experts in each group to score the models for species in their group on a qualitative scale from 1 to 5 (1: no credibility, 5: complete credibility). Models with an average score of 3 or higher are approved and flagged as "validated". If no model is approved, experts may either decide to go back and modify their inputs or suggest the development of an entirely new model. The process of expert model evaluation is repeated whenever new distribution hypotheses are generated, taxonomy changes or significant new occurrence data become available. Once a species has a validated model within BioModelos, the core team calculates a number of statistics based on its distribution to aid the assessment of its conservation status and trends. These statistics are displayed in the species fact-sheet box (Fig 2).
All distribution models visible in BioModelos are available for download in GeoTIFF format at their original resolution and their use and distribution is regulated through Creative Commons 3.0 licenses. Everyone involved in data cleaning, generation of model inputs and validation of models is recognized as a model author. Besides including all author information, our metadata standard for models and occurrences, contains all relevant information pertaining to data sources, model development (including links to modeling logs in GitHub) and performance statistics.
Web application architecture and components
BioModelos has been developed as an open source web application composed of four main components over a three-tier layered architecture: two independent databases (contents and website), an API (Application Programming Interface) and the web application front-end (Fig 3). The contents database was developed following a non-relational scheme in MongoDB and includes the collections "species" (the taxonomic backbone and species' ancillary information), "records" (occurrence data and quality assessment information) and "models" (model metadata and species distribution derived statistics). The website database was developed using a relational scheme in PostgreSQL, and it stores all relevant information about the interaction of users with the website, such as users, groups, user-created layers, ratings, tasks, publications and downloads, among others. The website database is directly connected to the BioModelos front-end, while the contents database is accessible through the web services implemented in the API. This architecture allows BioModelos to scale, grow and distribute each database independently, as well as to store user interactions on the website privately while opening the BioModelos contents database to third-party applications through the API.
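A third-party application could consume the contents database through the API along the following lines; the endpoint path and parameter names here are illustrative assumptions, not the documented interface.

```python
import requests

# Hypothetical query against the BioModelos API; adjust the path and
# parameters to the actual published API documentation.
BASE = "http://biomodelos.humboldt.org.co/api/v1"

resp = requests.get(f"{BASE}/species",
                    params={"scientificName": "Tremarctos ornatus"})
resp.raise_for_status()
for sp in resp.json():
    print(sp.get("scientificName"), sp.get("taxID"))
```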
The front-end of BioModelos was developed using the Ruby on Rails framework, with Javascript libraries supporting some of the functionalities. It also stores all the static files (e.g. images, map files, documents) needed to display the distribution models. The front-end consists of three main components: a search engine, a social network and a geographic viewer. The search engine allows users to find either species distributions, by entering a scientific name, or sets of species based on attributes, using the advanced search functions. The social network component comprises experts' public profiles and group profiles and allows interaction among experts using built-in messaging tools. It also facilitates the monitoring of progress in completing particular modeling tasks using the task dashboard, and the approval of new group members by group moderators. Lastly, most user interactions take place in the geographic viewer. It contains tools to clean data, provide feedback on distribution models and visualize distribution hypotheses, model statistics and metadata. A complete description of these functionalities is available in S2 Table.
Implementation of BioModelos in Colombia
The BioModelos web app and network have both been under simultaneous development since 2013. During this time, we have conducted 15 workshops with experts that have been essential to consolidate expert groups, as well as to gather feedback to enhance the user experience in the web app and improve the modeling workflow. Currently, out of 1052 registered users, there are 475 experts associated with 20 expert groups, which in turn are managed by 34 moderators (Table 1; data as of October 2018). Collectively, these experts are tasked with contributing to the development of 980 SDMs. Additionally, 17 expert maps and 216 models have been published since the model publishing feature was implemented in January 2017.
User registration to download models has been mandatory since August 2017. Since then, 430 downloads have been made through BioModelos, 33% of them for academic research, 32% for educational activities, and the remainder for applied research, environmental consulting, bioprospecting, and other activities. Validated models have allowed the Humboldt Institute (BioModelos' host institution in Colombia) to support conservation decision making by informing plans to compensate biodiversity loss [45], supporting species' extinction risk assessments [46], and generating official cartography for Colombia through the biotic component of the Colombian ecosystems map [47].
Discussion
We have presented BioModelos, an approach to facilitate collaboration between experts (e.g., field biologists, ecologists, taxonomists, biogeographers, and modelers) to generate publicly available information on species distributions, mediated by a core team and a web app. By involving experts in the development of models, we aim to fill gaps in primary biodiversity data and assess the biological realism of model predictions by eliciting experts' opinions on species distributions, as well as to avoid the prevalent duplication of efforts in data cleaning and modeling [48]. Both of these features are necessary to ameliorate the Wallacean shortfall faster and to further the use of SDMs to generate EBVs that inform conservation decision-making processes [15].
Species distribution modeling is still a very dynamic field in which novel methods and recommendations arise frequently (e.g. [27]). By keeping the expert-opinion data gathering process independent from the modeling process, we have been able to implement multiple modeling workflows with little impact on the design of the BioModelos user interface or experience, keeping the app maintenance costs low. This feature will allow us to continue to refine our modeling workflow in the future, for example by using methods that formally incorporate expert opinion in model development [49] and integrating established semi-automated modeling workflows, such as Wallace [42].
The difficulty of evaluating the accuracy of species distribution models built on presence-only data has long been recognized and discussed [50,51]. Simply put, traditional performance metrics (e.g., AUC, TSS) of models built on presence-only data may only tell us how well a model prediction discriminates presences from arbitrary pseudo-absences, and their value and statistical significance depend on how those pseudo-absences are drawn [52,53]. Therefore, these metrics are a measure of relative performance [34], not suitable for comparison among species and difficult to interpret for users of model predictions without a modeling background. An important feature of BioModelos is that, besides providing standard measures of model performance for each published model, it also encourages the subjective evaluation of models by experts. This qualitative evaluation, together with authorship information on the experts that validate the model, may help end-users decide whether to use a model in a particular application [54].
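To make the pseudo-absence caveat concrete, the toy sketch below (ours, not from the paper) scores one fixed set of presence predictions against several randomly drawn pseudo-absence sets and shows that the resulting AUC shifts with the draw.

    # Toy illustration: AUC computed against pseudo-absences changes with
    # how the pseudo-absences are drawn. All numbers are simulated.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(42)

    # Fixed model scores at 50 known presence locations (skewed high).
    presence_scores = rng.beta(a=5, b=2, size=50)

    for trial in range(3):
        # Each draw samples 500 background points; shifting their score mix
        # mimics pseudo-absence sets of varying difficulty.
        background = rng.beta(a=2, b=5 - trial, size=500)
        y_true = np.concatenate([np.ones(50), np.zeros(500)])
        y_score = np.concatenate([presence_scores, background])
        print(f"draw {trial}: AUC = {roc_auc_score(y_true, y_score):.3f}")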
An important challenge of the implementation of BioModelos in Colombia has been to motivate the autonomous completion of modeling agendas by expert groups. Thus far, the most successful collaborative modeling experiences have been in groups that require BioModelos outputs (i.e., models and species fact-sheets) for species risk assessments [46] or action plans (Cycads [55]; Magnoliaceae and Primates, in prep). Also, the publication of models developed by third parties has contributed an important proportion of the models available in BioModelos, and we continue to encourage model publication by contacting modelers identified through publications and scientific events. However, for the remaining species we are still devising incentives to increase the participation of experts in BioModelos, such as the development of electronic publications that are formally recognized as research products in academic performance reviews.
The BioModelos web app is open source (https://github.com/LBAB-Humboldt/BioModelos.v2) and its use by any interested party is permitted through an MIT License. However, due to the technical requirements for its installation and maintenance, the collaborative nature of the BioModelos approach, and the need for a core team consisting of at least a modeler and a network manager, it is best suited for national-level implementations hosted by research institutions. Although this geographical scale may seem arbitrary, as species are not limited per se by national boundaries, it is practical considering that many uses of models for conservation decision making take place at national and subnational levels, and that experts usually confine their expertise to countries due to accessibility, funding restrictions, and ease of obtaining research permits. Hence, by implementing BioModelos at a national scale, we contribute both to increasing the fitness for use of occurrence data in distribution modeling, potentially aiding global modeling initiatives once mechanisms to collect error reports through web services are implemented in data providers such as GBIF [48], and to consolidating a globally coordinated monitoring system of species distributions through National Biodiversity Observation Networks [56], to make EBVs genuinely global [57].
Supporting information S1
Acknowledgments
We are grateful for the feedback provided in the past five years by the numerous researchers that have attended our workshops and the institutions they represent: Asociación Colombiana de Ornitología, Calidris, Asociación Bogotana de Ornitología, Asociación Colombiana de Herpetología, ASICTIOS, Sociedad Colombiana de Mastozoología, Universidad Nacional, Pontificia Universidad Javeriana, Universidad de Antioquia, Universidad del Valle, among others. Their input has been critical to enhance the contents of BioModelos as well as to design a user-friendly web app. Particularly, we want to thank Cristina López Gallego, Luis Francisco Sanchez and Nicolás Urbina for their advice on developing a feasible collaborative species modeling workflow and their energy in moving forward the BioModelos agenda in the groups they moderate. This project would not have been possible without the institutional backing of BioModelos provided by Instituto Humboldt, particularly by Brigitte Baptiste, Hernando García, Germán Andrade, Juan Carlos Bello, Jose Ochoa, Johanna Galvis and Cristina Ruiz. Carolina Bello, Valentina Grajales, Lina Estupiñán and Ricardo Bastidas provided technical input for the development of BioModelos at several stages. This manuscript was greatly enhanced by comments provided by Robert Anderson.
TRPC Channels: Dysregulation and Ca2+ Mishandling in Ischemic Heart Disease
Transient receptor potential canonical (TRPC) channels are ubiquitously expressed in excitable and non-excitable cardiac cells, where they sense and respond to a wide variety of physical and chemical stimuli. Like other TRP channels, TRPC channels may form homo- or heterotetrameric ion channels, and they can associate with other membrane receptors and ion channels to regulate intracellular calcium concentration. Dysfunction of TRPC channels is involved in many types of cardiovascular disease. Significant increases in the expression of different TRPC isoforms have been observed in animal models of heart infarct and in in vitro experimental models of ischemia and reperfusion. TRPC channel-mediated increases in intracellular Ca2+ concentration seem to be required for the activation of signaling pathways that play minor roles in the healthy heart but become more relevant in cardiac responses to ischemia, such as the activation of transcription factors and the development of cardiac hypertrophy, fibrosis, and angiogenesis. In this review, we highlight the current knowledge regarding TRPC implication in different cellular processes related to ischemia and reperfusion and to heart infarction.
Introduction
The heart rate of a healthy adult ranges between 60 and 100 beats/min, which is mainly achieved by adequate function of the cardiac contraction/relaxation cycle. Adequate ventricular contraction is strongly dependent on effective excitation-contraction (EC) coupling in cardiac cells. Electrical stimuli travel across conducting cardiac tissues to the cardiomyocytes, inducing cell-membrane depolarization, activating ion channels, and finally engaging the cell contractile machinery (reviewed elsewhere [1,2]). EC coupling and cell contraction are critically dependent on Ca2+ influx and Ca2+ channel trafficking. The initial cell-membrane depolarization stimulates sarcolemmal L-type Ca2+ channels, prompting a small influx of Ca2+ from the extracellular medium. Ca2+ entry triggers a large release of Ca2+ from the sarcoplasmic reticulum via ryanodine receptors (RyR), resulting in an increase in the intracellular Ca2+ concentration ([Ca2+]i). The rise in [Ca2+]i boosts Ca2+ binding to troponin C, which activates the contractile machinery. After contraction, [Ca2+]i must decrease to allow cell relaxation, which is achieved mainly via two mechanisms: Ca2+ re-uptake by the sarco-endoplasmic reticulum Ca2+ ATPase (SERCA) pump and Ca2+ efflux by the sarcolemmal Na+/Ca2+ exchanger (NCX) [2,3]. Dysregulation of any of these Ca2+ handling processes is commonly associated with cardiac dysfunction.
Recently, other players have emerged as key partners in the regulation of cardiac Ca2+ handling. Among these partners are the transient receptor potential (TRP) channels, a superfamily of 28 mammalian TRP proteins divided according to their genetic and functional homology into six families: TRPP (polycystin), TRPV (vanilloid), TRPM (melastatin), TRPA (ankyrin), TRPML (mucolipin), and TRPC (canonical). TRP channels are composed of six transmembrane domains (TM1-TM6), with a conserved sequence called the "TRP domain" adjacent to the C-terminus of TM6 and a cation-permeable pore region formed by a loop between TM5 and TM6 (reviewed in Reference [4]). TRP channels are located in the plasma membrane, and their activation allows the entry of Ca2+ and/or Na+, with higher permeability for Ca2+. Although most TRP channels lack a voltage sensor, they can be activated by physical or biochemical changes, regulating Ca2+ dynamics by directly conducting Ca2+ or prompting Ca2+ entry secondary to membrane depolarization and modulation of voltage-gated Ca2+ channels [5]. The activation of different TRP isoforms is associated with cell-membrane depolarization, for example, in smooth muscle cells [6,7] and in cardiac cells [8][9][10].
There is substantial evidence that TRP channels have important roles in mediating cardiac pathological processes, including cardiac hypertrophy and fibrosis [11][12][13], which all lead to deleterious cardiac remodeling and subsequent heart failure (HF). This review focuses on the role of TRPC channels and provides an overview of the most relevant and recent findings related to these channels and ischemia-related disease in the heart. Nevertheless, the activation mechanism of TRPC channels is not yet completely clarified, even less so in cardiac cells. Previous studies using different cell types suggest that TRPCs can interact physically with different splice variants of the inositol trisphosphate receptor (IP3R). For instance, TRPC1 [14], TRPC3 [15,16], and a splice variant of human TRPC4 [17] interact physically with the IP3R. It actually appears that IP3R and Ca2+/calmodulin compete for a common binding site on TRPC3, since displacement of calmodulin by IP3R from the binding domain activates TRPC3 [18]. Other researchers showed that phosphatidylinositol 4,5-bisphosphate (PIP2) participates in the regulation of TRPC4 and TRPC5 [19,20]. The Gαq protein also activates TRPC1/4 and TRPC1/5 through direct interaction [21]. Meanwhile, independent studies demonstrated that TRPC3, 6, and 7 are activated by diacylglycerol (DAG) [22][23][24][25]. Interestingly, TRPC4 and 5 channels also become sensitive to DAG when their interactions with other regulators, such as protein kinase C (PKC) and the Na+/H+ exchanger regulatory factor (NHERF), are inhibited [26].
TRPC Channels in the Cardiovascular System
TRPC channels are classified into seven members (TRPC1-7) that are grouped, based on biochemical and functional similarities, into TRPC1/4/5, TRPC3/6/7, and TRPC2, which is a pseudogene in humans. The expression of TRPC isoforms in the heart has been examined at different stages of animal development, in different animal models, and in different areas of the heart. They are expressed at very low levels in normal adult cardiac myocytes, but their expression and activity may increase in pathological processes [12,13,27]. However, they likely display different patterns of expression in cardiac cells isolated from the sinoatrial node and in myocytes isolated from the atria or ventricles [22,28]. In human cardiac tissues and/or neonatal rat cardiomyocytes, messenger RNA (mRNA) of TRPC5 [29,30] and TRPC6 [31] was detected. In animal models, the expression of TRPC1/3-7 was confirmed in adult rat and mouse ventricular and atrial cardiac myocytes at either the mRNA or protein level [13,32,33]. Other reports showed that TRPC1/3-6 are expressed in fetal and neonatal rat ventricular myocytes [28,34]. In sinoatrial node cells, TRPC1, 2, 3, 4, 6, and 7 mRNA expression is detected using RT-qPCR, whereas TRPC5 expression is not observed. Furthermore, immunohistochemistry experiments confirmed protein expression of TRPC1, 3, 4, and 6, but not TRPC7, in mouse sinoatrial node and in isolated pacemaker cells [35]. In cardiac fibroblasts, all TRPC isoforms have been described. In particular, the mRNAs of TRPC1, 3, 4, and 6 are detected in mouse cardiac fibroblasts, while isolated rat ventricular fibroblasts show significant mRNA expression of TRPC2, 3, and 5 [36][37][38]. Immunocytochemistry and Western blot experiments also revealed the expression of TRPC1, 3, 4, and 6 proteins in rat and human cardiac fibroblasts [39][40][41].
A functional TRPC channel is composed of four proteins, allowing it to form homo- or heterotetramers [42]. However, the concept of TRPC multimerization has barely been addressed in cardiac myocytes. A previous study from Molkentin's group suggested multimerization of TRPC3 and homotypic TRPC6 in adult mouse cardiac myocytes, since they demonstrated, using an immunoprecipitation approach, that TRPC3 can associate with the TRPC4 protein [5]. More recently, TRPC6 was suggested to form a heteromeric complex with TRPC3 and the nicotinamide adenine dinucleotide phosphate hydrogen (NADPH) oxidase 2 (NOX2) protein in the diabetic mouse heart, although this study used HEK293 cells to confirm the interaction between TRPC3 and TRPC6 by immunoprecipitation [43]. It should be noted that other studies indicated that TRPC channels can form macromolecular complexes with the NCX [44], the Na+/K+ pump [45], and the SERCA pump [46]. Therefore, they might create a microenvironment facilitating the fine-tuning of Ca2+ homeostasis and excitation-contraction coupling (reviewed elsewhere [47][48][49]). In fact, recent evidence confirmed that TRPC3 mediates Ca2+ and Na+ entry in proximity to the NCX, elevating Ca2+ levels and cardiac contractility [44]. Certainly, more precise investigations of TRPC heteromerization would be welcome to reveal whether this concept is similar to that observed in other cells, such as smooth muscle cells [50], platelets [51], the hippocampus [52], or rat brain [53]. Actually, Bröker-Lai et al. [52] combined quantitative high-resolution mass spectrometry with affinity purifications using isoform-specific antibodies on membrane fractions prepared from wild-type (WT) and target-knockout (KO) brains to demonstrate that TRPC1, 4, and 5 form heteromeric complexes in the brain, particularly in the hippocampus.
TRPC Channels Mediate Ca2+ Influx in Cardiac Myocytes
There are considerable indications that, in cardiac myocytes isolated from the atrium or the ventricle, and in neonatal rat ventricular myocytes (NRVM), TRPC channels participate in both store-operated Ca2+ entry (SOCE) and receptor-operated Ca2+ entry (ROCE) pathways, and their activation and/or upregulation is essential for cardiac Ca2+ signaling, particularly under pathological conditions (reviewed elsewhere [54][55][56]). Independent studies showed that DAG, an important mediator of the G-protein coupled receptor (GPCR)-stimulated Ca2+ signaling pathway, activates TRPC3 and 6. For instance, Onohara et al. [10] demonstrated that stimulation of NRVM with angiotensin-II and 1-oleoyl-2-acetyl-sn-glycerol (OAG), a membrane-permeable DAG analogue, activates TRPC3 and 6 channels, causing membrane depolarization. They further demonstrated that small interfering RNA (siRNA) against TRPC3 and 6 significantly reduces responses to angiotensin-II. OAG also activates a cation current in mouse cardiac myocytes that is significantly reduced by cell dialysis with an anti-TRPC3 antibody [57]. Moreover, activation of the A1 adenosine receptor in atrial and ventricular myocytes activates TRPC3 through DAG, since the Ca2+ influx is inhibited by Pyr3, considered a specific inhibitor of TRPC3 [33].
It should be noted that other studies focused on the role of TRPC channels in SOCE activation in cardiac myocytes. For instance, a recent study by Wen et al. [58] demonstrated the presence of SOCE in normal adult mouse ventricular myocytes and the participation of TRPC1, 3, and 6, since antibodies against these TRPC channels reduced store depletion-mediated Ca2+ entry. Previously, Wu et al. [5] characterized the participation of TRPC3, 4, and 6 in the exacerbated SOCE observed in mouse cardiac myocytes from hypertrophic hearts. They demonstrated a significant reduction of SOCE, induced by specific inhibition of SERCA with cyclopiazonic acid, in cardiac-specific transgenic mice expressing dominant-negative (dn) TRPC3 (dn-TRPC3), dn-TRPC6, or dn-TRPC4. The participation of TRPC3 and 4 in SOCE was also characterized in adult rat ventricular myocytes after specific activation of EPAC (exchange protein directly activated by cyclic adenosine monophosphate (cAMP)) with 8-(4-chlorophenylthio) (8-pCPT) [12]. This study revealed significant upregulation of TRPC3 and 4, which correlates with an SOCE increase in 8-pCPT-treated cardiac myocytes. In addition, thapsigargin-induced SOCE is inhibited by Pyr3, a TRPC3 inhibitor [12]. Another study suggested a role of TRPC1, 4, and 5 in SOCE caused by aldosterone stimulation of NRVM. Indeed, thapsigargin-induced SOCE is inhibited in aldosterone-treated NRVM transfected with dn-TRPC1 and dn-TRPC4, and with siRNA against TRPC5, whereas dn-TRPC3 did not alter SOCE [59]. Moreover, TRPC1 and 4 overexpression correlates with a calcium release-activated channel (CRAC)-like current recorded in hypertrophied right ventricular myocytes isolated from monocrotaline-treated animals [60]. Likewise, we proposed that at least TRPC5 may be critical in SOCE, since its downregulation inhibits the thapsigargin-induced potentiation of SOCE in NRVM under ischemia and reperfusion. We further demonstrated that TRPC5 colocalizes with Orai1, the pore-forming subunit of the store-operated Ca2+ channel (SOCC) [13]. More recently, Bartoli et al. [61] proposed that TRPC1 and 5 are involved in aldosterone activation of SOCE in adult rat ventricular cardiomyocytes. This study revealed that cardiac myocytes treated for 24 h with aldosterone enhance SOCE through activation of the mineralocorticoid receptor and increase the store-operated Ca2+ current (ISOC), which correlates with specific overexpression of TRPC1 and 5, as well as stromal interaction molecule 1 (STIM1), but not of TRPC3, 4, or 6, nor of Orai1 and Orai3.
It is important to note that all these reports used agents that selectively deplete sarcoplasmic reticulum Ca2+ stores (e.g., cyclopiazonic acid, thapsigargin) to activate SOCC and avoid the contribution of ROCE pathways. The combined use of different TRPC inhibitors, functional pore-blocking antibodies against TRPC proteins, and RNA silencing suggests that TRPC channels account for the prominent SOCE in cardiac myocytes, especially under pathological conditions. Nevertheless, despite the increasing number of studies investigating SOCE in cardiac myocytes, the role of TRPC channels in SOCE is still controversial and requires further investigation.
Role of TRPC Channels in Cardiac Pathophysiology
There is a general consensus that the overexpression and activation of TRPC channels are associated with deleterious cardiac pathology. As reviewed recently, under physiological conditions the function of TRPC channels in the heart does not seem to be essential [4,62]. Hearts from mice lacking different TRPC channels do not present any significant contractile abnormalities. Echocardiography showed that TRPC3 KO and TRPC6 KO mice have resting left-ventricular mass and fractional shortening similar to their respective littermate controls [63]. However, stress-induced contractility, known as the Anrep effect, is diminished in isolated papillary muscles and cardiomyocytes from TRPC6 KO, but not TRPC3 KO, mice [64]. In addition, TRPC1/4 double-KO mice have normal basal cardiac contractility, as well as normal systolic and diastolic function. In contrast, isoproterenol-induced chronotropic responses are reduced in TRPC1/4 double-KO mice [65].
TRPC channels might also play a role in some physiological processes. They likely regulate cardiac pacemaking, conduction, ventricular activity, and contractility during cardiogenesis, through interaction with the Cav1.2 channel, as shown in isolated hearts obtained from four-day-old chick embryos [22]. TRPC channels also contribute to Ca2+ homeostasis by directly conducting Ca2+ or indirectly via membrane depolarization and modulation of voltage-gated Ca2+ channels. The resulting TRPC-mediated Ca2+ influx is required for the activation of signaling pathways that play minor roles in the healthy heart. For instance, they are involved in the activation of transcription factors promoting cardiac hypertrophy, fibrosis, and/or arrhythmia [5,13,28,55,[66][67][68][69]. Here, we discuss the role of TRPC channels in processes related to cardiac ischemic diseases.
TRPC Channels in Myocardial Infarction
One of the first pieces of evidence for the participation of TRPC in myocardial infarction (MI) came from bioinformatic analysis combined with experimental approaches. Zhou et al. [70] demonstrated an increase in the expression of TRPC6, experimentally validated in a one-month post-MI rat model, suggesting TRPC6 as a potential therapeutic target for MI. Later, other studies highlighted the induction of TRPC proteins after MI and explored the idea that Ca2+ influx through TRPC channels overexpressed after MI contributes to cardiac dysfunction and adverse remodeling. In fact, significant increases in TRPC1, 3, 4, and 6 mRNA levels were observed in mice one, two, and six weeks post MI, as compared with sham [71]. This channel upregulation correlates with increased Ca2+ entry when myocytes isolated from MI adult mice are stimulated with cyclopiazonic acid and OAG. Furthermore, mice expressing dn-TRPC4 have less pathological hypertrophy, better cardiac hemodynamic performance, and increased survival after MI, as compared with WT mice [71]. Therefore, loss of TRPC4 function likely protects against the progression of cardiac dysfunction after MI. Interestingly, Jung et al. [72] suggested that a gain of function of TRPC4 due to a genetic variation (I957V) causes an increase in channel activity, which has a protective effect against MI. The authors identified a single-nucleotide polymorphism (SNP) in TRPC4 that associates with MI risk in a case-control study. They further used multivariate analysis to show a protective effect of the I957V allele against MI risk, but only in diabetic patients. The mutated TRPC4-I957V is therefore thought to mediate higher Ca2+ signals, perhaps facilitating endothelium- and nitric oxide-dependent vasorelaxation, although the authors did not experimentally test this hypothesis. Recently, we observed significant dysregulation in the expression of several TRPC isoforms in a Wistar rat model of MI induced by transient ligation of the left coronary artery. A PCR-based micro-array, qRT-PCR, and Western blotting demonstrated significant upregulation of TRPC1, 3, 4, 5, and 6 in both at-risk and remote zones of infarcted hearts, as compared to sham. Specific downregulation of TRPC5 was observed in MI rats infused with urocortin-2 at the onset of reperfusion, suggesting a role for TRPC5 in cardioprotection [13].
In the case of TRPC3 and 6, a previous study determined that TRPC6 KO mice had significantly higher rates of mortality due to ventricular wall rupture during days 3-7 post MI [37]. In contrast, TRPC3/6/7 triple-KO mice subjected to transient MI (30 min of ischemia followed by 24 h of reperfusion) exhibit reduced infarct size, better cardiac performance, and less cardiac tissue damage post MI, as compared with WT animals. In addition, they show reduced apoptosis through inhibition of the calcineurin-nuclear factor of activated T cells (NFAT) signaling pathway [24]. These results suggest that TRPC3, 6, and 7 contribute significantly to worsening the impact of MI on cardiac function. Further investigations are needed to clarify the discrepancy between these KO studies. It will be interesting to examine whether the cardioprotective effects observed in the triple-KO mice affect the transformation of myofibroblasts required during wound healing and scar formation.
TRPC Channel Role in Ischemia and Reperfusion Injuries and Cardioprotection
Ischemia and reperfusion (I/R) injury is the main cause of the cell apoptosis and necrosis observed after an MI. Several studies provided evidence linking cytosolic Ca2+ increase through TRPC to apoptosis after I/R [24,73]. Studies using TRPC inhibitors examined their role in I/R injury. For instance, Kojima et al. [74] showed, in a Langendorff-perfused mouse heart under I/R, that left-ventricular function is significantly improved by the administration of ion channel blockers (2-aminoethoxydiphenyl borate (2-APB) and La3+) during the initial 5 min of reperfusion, suggesting a role for TRPC channels in contractile dysfunction in reperfused ischemic myocardium. In the cardiac cell line H9C2, the addition of SKF96365, another widely used TRPC inhibitor, ameliorates injuries induced by hypoxia-reoxygenation (H/R) [24]. However, it is well known that 2-APB and La3+, as reviewed previously [75,76], as well as SKF96365 [75,76], are not specific to TRPC channels and may block other cationic channels. Therefore, these results should be supported by experiments using siRNA and/or TRPC-deficient mice. Indeed, other reports used different molecular approaches to identify the TRPC isoforms responsible for Ca2+ entry and its relationship with cardiac myocyte death under I/R. For example, Shan et al. [73] observed that transgenic mice overexpressing TRPC3 in myocardial cells are highly sensitive to injury after I/R, showing enhanced apoptosis through increased TRPC3-mediated Ca2+ influx and calpain cleavage. They also demonstrated significant improvement in the viability of cardiomyocytes after SKF96365 treatment. Moreover, Meng et al. [77] observed that in vitro I/R increases TRPC6 protein expression, [Ca2+]i levels, and the apoptotic rate in a time-dependent manner in the H9C2 cell line. In addition, they suggested TRPC6 as a possible target for cardioprotection in H9C2 cells, since the administration of danshensu, an active component of Salvia miltiorrhiza, protects against I/R injury by reducing TRPC6 expression via the c-Jun N-terminal kinase (JNK) signaling pathway [77]. Hang et al. [78] also demonstrated that brain-derived neurotrophic factor (BDNF) protects against MI and inhibits H/R-mediated cardiomyocyte apoptosis through TRPC3 and TRPC6 regulation.
On the other hand, the role of TRPC1 in I/R is still unclear. A recent study suggested that it is implicated in I/R injury, as the mRNA and protein expression of TRPC1, Orai1, and STIM1 is significantly increased in vivo in mice subjected to myocardial I/R injury and in vitro in H9C2 cells after H/R [79]. Interestingly, suppression of STIM1 by siRNA decreases the expression of TRPC1 and Orai1, leading to decreased intracellular Ca2+ accumulation and apoptosis produced by H/R in H9C2 cells [79]. Therefore, STIM1 likely regulates the expression of TRPC1 and Orai1 in the context of apoptosis and myocardial I/R injury. In contrast, Al-Awar et al. [80] speculated that TRPC1 plays a cardioprotective role against I/R injury. They showed that sitagliptin, an inhibitor of dipeptidyl peptidase-4 (DPP-4), decreases infarct size in a rat model of I/R, which correlates with increased protein levels of TRPC1, TRPV1, and calcitonin gene-related peptide in heart tissue. Nevertheless, a specific experiment targeting TRPC1 was not shown. Our recent study confirms, through Western blot, that TRPC1 and 6 are upregulated in a rat model of I/R, although they are not inhibited by urocortin-2-mediated cardioprotection. In contrast, urocortin-2 administration in NRVM undergoing in vitro I/R inhibits SOCE and prevents the I/R-induced protein overexpression of TRPC5 and Orai1 [13]. Taking these results into consideration, further investigations are necessary to clarify the functional role of the TRPC channel increase after I/R.
TRPC Channels in Post-Ischemia Cardiac Repair
After MI, the heart undergoes extensive adaptive processes and myocardial remodeling, involving angiogenesis, cardiac cell hypertrophy, and accumulation of fibrous tissue in both the infarcted and the non-infarcted myocardium, as reviewed previously [81][82][83]. Nonetheless, the role of TRPC proteins in cardiac repair remains poorly studied.
TRPC Channels in Post-Ischemia Angiogenesis
Angiogenesis is the formation of new blood vessels from pre-existing vessels and the subsequent expansion of the vascular network in the body. Post-ischemic angiogenesis is considered a protective mechanism, driven by the lack of oxygen and blood supply, that is necessary for physiological heart repair after an MI [84,85]. Angiogenesis involves sprouting, proliferation, migration, and tube formation following stimulation of endothelial cells (ECs) by growth factors such as vascular endothelial growth factor (VEGF), considered the most potent EC-specific pro-angiogenic factor (reviewed elsewhere [86,87]). Compelling evidence demonstrates that chronic and transient ischemia significantly increase the expression of VEGF [88][89][90]. Nevertheless, pre-clinical and clinical trials using pro-angiogenic factors alone, such as VEGF, have not proven effective in patients with stable angina or critical lower limb ischemia [91,92]. VEGF stimulates two tyrosine-kinase receptors, VEGFR-1 and VEGFR-2 [84,93], to increase [Ca2+]i in ECs, involving Ca2+ release from intracellular stores and extracellular Ca2+ influx through cation channels, such as TRP channels [87,94]. There is increasing interest in the role of TRPC channels in angiogenesis, especially in studies related to cancer and diabetes [95][96][97]. ECs express different TRPC proteins involved in vascular function (TRPC1, 4, and 6), vascular tone remodeling (TRPC4), and oxidative stress-induced responses (TRPC3 and 4) [98]. It is apparent that TRPC3 and 6 are implicated in the VEGF-mediated [Ca2+]i increase in ECs and in angiogenesis. Indeed, VEGF- and OAG-induced extracellular signal-regulated kinase (ERK) 1/2 activation and tubulogenesis are significantly suppressed by a TRPC3 inhibitor and by siRNA in human umbilical vein ECs (HUVEC) [99]. Meanwhile, overexpression of dn-TRPC6 in human microvascular ECs inhibits the VEGF-mediated [Ca2+]i increase, migration, sprouting, and proliferation, well-known hallmarks of angiogenesis [100]. In addition, TRPC4 siRNA attenuates oxidized low-density lipoprotein (oxLDL)-induced human coronary EC proliferation, migration, and angiogenic tube formation [101].
Unfortunately, little is known regarding the role of TRPC channels during post-ischemic angiogenesis. In contrast, TRPC channels appear to be involved in hypoxia-induced angiogenesis [96,102]. For instance, TRPC4 protein expression is significantly upregulated in the retina under hypoxic conditions, and TRPC4 siRNA inhibits VEGF-induced migration and tube formation of retinal microvascular ECs, suggesting a role for TRPC4 in initiating neovascularization in response to VEGF in the hypoxic retina [96]. Moccia et al. [103] hypothesized and debated whether transfecting TRPC3 into autologous endothelial progenitor cells (EPCs) might enhance revascularization and functional recovery of ischemic hearts; however, functional experiments testing this hypothesis were not performed. Recently, Zhu et al. [104] demonstrated that TRPC5 activation is necessary for EC sprouting, angiogenesis, and blood perfusion in a hind-limb ischemia model. TRPC5 downregulation prevents NFAT activation and EC tube formation under hypoxia. Moreover, TRPC5 KO mice show worse vascular recovery than WT mice after an ischemic injury. Finally, activation of TRPC5 by riluzole stimulates EC sprouting and significantly improves limb recovery from ischemic injury [104]. It will therefore be interesting to confirm a beneficial role of other TRPC channels in post-ischemic heart angiogenesis.
TRPC Channels in Early Adaptive Cardiac Remodeling
Early cardiac hypertrophy and fibrosis are considered compensatory events in response to the loss of cardiac myocytes, necessary for wound healing and scar formation after heart infarcts. However, prolonged hypertrophy can lead to the development of HF, arrhythmias, and even sudden cardiac death [105]. Since activation of TRPC channels mediates Ca2+ influx, which activates intracellular Ca2+ signaling pathways such as calcineurin/NFAT, TRPC channels have been suggested as Ca2+ effectors and transducers of hypertrophic genes in the heart. Little is known regarding TRPC channel implication in I/R-induced cardiac hypertrophy. In contrast, there is general agreement regarding the role of TRPC channels in pathological cardiac hypertrophy as a consequence of aortic constriction or chronic GPCR stimulation with endothelin-1, phenylephrine, or angiotensin-II [31,106,107]. Similarly, Makarewich et al. [71] revealed an upregulation of TRPC1, 3, 4, and 6 channels in mice six weeks post MI as compared to sham animals, along with activation of the so-called fetal gene program, commonly used as a marker of cardiac hypertrophy. They also demonstrated that mice expressing dn-TRPC4 have less pathological hypertrophy, better cardiac hemodynamic performance, and increased survival after MI, as compared with wild-type (WT) mice, all of which suggests a critical role of TRPC4 in post-MI heart damage. Cardiac hypertrophy is also observed in rat heart tissue as early as one week post I/R, which correlates with upregulation of TRPC1, 3, 4, 5, and 6 mRNA [13] and activation of the fetal gene program (unpublished data). Recently, Dragún et al. [108] examined the expression of TRP channels in 43 patients with end-stage HF. They found, among other TRP channels, a significant increase in TRPC1 and 5 gene expression, while TRPC4 expression was decreased in HF patients compared to healthy donors. They also detected a significant correlation between the gene expression of TRPC1 and that of MEF2c (myocyte enhancer factor 2c), considered a key transcription factor for cardiac hypertrophy [109]. Interestingly, this pilot study did not detect any significant differences in TRP expression between male and female HF patients, nor between HF patients with ischemic and non-ischemic backgrounds. Another recent study observed a similar increase in the expression of TRPC1 in hearts of patients with hypertrophic cardiomyopathy (HCM) or HF. This study further used TRPC1 KO human pluripotent stem cell lines, generated using clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated protein 9 (Cas9), to confirm the role of TRPC1 in regulating cardiac myocyte hypertrophy induced by phorbol 12-myristate 13-acetate (PMA), which was associated with abnormal activation of nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) [110]. Altogether, this indicates that TRPC channels might play a similar role in cardiac hypertrophy and HF, independently of patient background. Once TRPC expression is triggered, these channels may activate several Ca2+-dependent transcription factors and cardiac hypertrophy genes, leading to the same outcome, i.e., HF.
In the case of fibrosis, multiple well-known markers of fibrosis, hypertrophy, and Ca2+ handling proteins were recently identified using genome-wide transcriptome analysis of infarcted hearts [111]. TRPC6 is considered a regulator of myofibroblast differentiation, a hallmark of fibrosis, since its silencing in human cardiac fibroblasts attenuates the transforming growth factor beta 1 (TGF-β1)-induced upregulation of alpha smooth muscle actin (α-SMA), a marker of myofibroblast transformation [112]. A recent study confirmed that the serum level of TGF-β1 is increased 28 days after MI in mice, accelerating cardiac fibrosis [113]. On the other hand, Saliba et al. [114] described that a polyphenol extracted from grape pomace decreases angiotensin-II-induced Ca2+ entry through direct regulation of TRPC3 and subsequent activation of NFATc3 in human ventricular cardiac fibroblasts, which abrogates myofibroblast differentiation and fibrosis by decreasing collagen secretion. However, the direct contribution of TRPC channels to ischemia-mediated cardiac fibrosis has barely been addressed. Different TRPC isoforms are upregulated in rats showing fibrosis one week post I/R, although their direct role in promoting fibrosis was not examined [13]. Interestingly, TRPC6, through calcineurin-NFAT signaling, seems to be required for myofibroblast transformation after MI, a critical step during which collagen deposition and scar formation occur to maintain ventricular wall structural integrity in the early days post MI. In fact, TRPC6 KO mice show poor wound healing and fewer myofibroblasts, stained with α-SMA antibody, in the infarcted area [37]. Moreover, independently of studies related to ischemia and heart infarct, several reports proposed the participation of TRPC channels in cardiac interstitial fibrosis caused by pressure overload through thoracic aortic constriction (TAC) in animal models or by vasoactive agonists, such as phenylephrine [36,106,115]. For instance, experiments performed in TRPC1/4 double-KO mice revealed significant amelioration of pressure overload-induced hypertrophy and interstitial fibrosis, explained by reduced TRPC1- and 4-dependent basal Ca2+ entry in adult ventricular myocytes [65]. At the same time, TRPC3 knockdown, using a small hairpin RNA lentivirus delivered through the tail vein of mice, efficiently suppresses the extent of atrial fibrosis induced by TAC [116].
Concluding Remarks
In light of the reviewed studies, TRPC proteins stand out as ion channels critical for cardiac cell responses under ischemic stress. A clearly defined role for specific TRPC isoforms in cellular events related to ischemic heart disease still remains elusive, perhaps reflecting the complexity of these channels, the limitations of pharmacological tools, and the lack of specific inhibitors and antibodies. Nevertheless, TRPC channels have been extensively studied because they sense and respond to a plethora of endogenous and exogenous stimuli through Ca2+ signaling in cardiac cells. Increasing evidence indicates that TRPC channels contribute to the pathophysiological consequences of heart infarction, such as cardiac hypertrophy, fibrosis, and post-ischemic angiogenesis, as summarized in Figure 1. The potential to influence these outcomes by specifically modulating the expression and/or function of TRPC channels requires major efforts and more investigation. Further progress in the mechanistic understanding of TRPC channels will certainly help to identify new therapeutic targets for drug development, to mitigate the impact of ischemia on cardiac function, and to prevent the cardiac transition from adaptive responses to harmful heart failure.
Figure 1. TRPC channels in ischemic heart disease [5,13,71]. Compelling evidence indicates that TRPC channel overexpression contributes to Ca2+ entry, mediating the activation of Ca2+-sensitive signaling pathways, such as calcineurin-NFAT, a critical pathway involved in apoptosis, cardiac hypertrophy, and fibrosis [13,28,55,66,67]. TRPC proteins are likely also involved in cardiac repair-related processes; the protective role played by TRPC6 in wound healing is of note [37]. Other studies suggested a role of TRPC channels, such as TRPC5, in angiogenesis and revascularization triggered post ischemia [104].
Conflicts of Interest:
The authors declare no conflicts of interest.
Examining Teachers’ Behavioral Intention to Use E-learning in Teaching of Mathematics: An Extended TAM Model
The aim of this study was to examine factors that influenced experienced teachers' intention to use E-learning in their teaching of mathematics. Data were collected using a questionnaire from 161 secondary school mathematics teachers who completed a six-month in-service online training provided by the Indonesian Ministry of Education. The Technology Acceptance Model (TAM) was used as the framework, while E-learning experience was included as an additional construct. An extended TAM model was proposed and tested in this study. It consisted of five constructs, namely: intention to use, perceived usefulness, perceived ease of use, attitude toward using, and experience. Data were analyzed using Structural Equation Modelling with SMARTPLS 3.0. The findings showed that attitude toward E-learning use and E-learning experience were the two most significant constructs in predicting behavioral intention to use E-learning. Contrary to previous studies, perceived ease of use and perceived usefulness were non-significant predictors of behavioral intention. Implications for future research and practice are discussed.
INTRODUCTION
In Indonesia, 45.5 million school students and 3.1 million teachers are dependent on online teaching and learning due to school closures during the COVID-19 pandemic (Mailizar, Almanthari, Maulina, & Bruce, 2020). As a result, education in the country has changed dramatically, with a rising need for E-learning adoption. According to Cigdem and Topcu (2015), institutions that expect their teaching staff to use E-learning should consider their behavioral intention to use E-learning systems. The Technology Acceptance Model (TAM) (Davis, 1986) is the most widely used model in studies of users' acceptance of technologies (Cigdem & Topcu, 2015). The main aim of the model is to describe users' behavior toward the adoption of technology (Chang, Hajiyev, & Su, 2017).
TAM has been widely used in studies that investigate e-learning. Although the model has been scrutinized, validated and praised for its contribution to science, it has been argued that it has some limitations.
Theoretically, some researchers have argued that the popularity of TAM is linked to its simplicity, which does not address the complexity associated with e-learning use in institutional contexts (Ajibade, 2018; Chuttur, 2009). However, the core of the theory remains the same in most studies, and the majority of changes are made to the external variables discussed in the theory or by adding elements as part of perceived usefulness and perceived ease of use. For example, Venkatesh and Davis (2000) proposed TAM 2 with additional constructs, some of which decomposed PU into components such as job relevance and result demonstrability. Another limitation of TAM is that studies that follow it depend mainly on self-reported data, without using system-generated data about actual use (Chuttur, 2009). Ajibade (2018) argues that TAM assumes that more use is better, without placing adequate emphasis on the impact of system use on performance. Ajibade (2018) also argues that TAM is more appropriate for personal use than institutional use, due to its lack of focus on the impact of policies, management, expectations, and workplace factors. However, TAM has been adapted differently to suit the specific requirements of specific institutions and contexts.
Previous studies have extended the TAM model, resulting in various external factors of TAM (Abdullah & Ward, 2016;Martin, 2012). Abdullah and Ward (2016) conducted a meta-analysis study and found that subjective norm, experience, perceived enjoyment, computer anxiety and self-efficacy were the most commonly used external factors for TAM. According to Abdullah and Ward (2016), experience is one of the most commonly used external factors in E-learning acceptance studies.
As discussed earlier, TAM has been adapted differently by different researchers based on several factors, including their needs, contexts, research focus, and conceptualization of the TAM. In this study, experience was included due to the need to examine the effect of teachers' prior e-learning experience on their perceived usefulness, perceived ease of use, and behavioral intention. As discussed in this study, there is a dearth of research in this area, and investigating it contributes to our understanding of its importance and impact.
In terms of teachers' experience in E-learning, many teachers in Indonesia have experience using E-learning for their professional development (PD). The Ministry of Education and Culture offers a six-month online PD course for teachers in 42 higher education institutions throughout the country (Mailizar, Samingan, Rusman, Huda, & Yulisman, 2020). This course, consisting of general pedagogy, subject-specific pedagogy, and the content area, is a 12-credit training course required for teachers to be awarded a teaching certificate.
However, to the best of our knowledge, no empirical studies have been conducted on secondary school teachers' behavioral intention to use E-learning in teaching, particularly studies focusing on secondary school mathematics teachers who have experience using E-learning for their PD (hereafter, experienced teachers). Therefore, in this study, teachers' E-learning experience during their PD was used as an external factor of the TAM model. Consequently, using TAM as a theoretical foundation and employing Structural Equation Modelling (SEM), this study aims to examine factors affecting experienced teachers' behavioral intention to use E-learning in their teaching of mathematics.
THEORY AND RELATED LITERATURE
Several models, such as Technology Acceptance Model (TAM) (Davis, 1986), Theory of Planned Behavior (TPB) (Ajzen, 1991), and Unified Theory of Acceptance and Use of Technology (Venkatesh, Morris, Davis, & Davis, 2003) have been developed and proposed to investigate users' intention to use emerging technology. TAM is one of the most widely used models to investigate and predict users' technology acceptance.
Technology Acceptance Model
TAM is based on the Theory of Reasoned Action proposed by Fishbein and Ajzen (1975). According to Fishbein and Ajzen, behavior is determined by attitude and subjective norm. Attitude refers to the positive or negative feelings about the behavior, and the subjective norm refers to a person's circle of important people and their acceptance of performing the behavior.
According to Davis (1996), TAM (Figure 1) is a framework used to investigate how and when users adopt emerging technology. TAM has proven effective in explaining users' behavior toward using computing technology (Teo, 2010). This model shows the relationships among perceived ease of use (PEU), perceived usefulness (PU), attitude toward use (AT), and intention to use technology (BI).
According to Davis (1989), behavioral intention is affected by attitude toward use. It is also directly and indirectly affected by perceived ease of use and perceived usefulness. Furthermore, perceived ease of use has a direct effect on perceived usefulness, yet the reverse is not true.
Intention to use has a close link with actual behavior (Kiraz & Ozdemir, 2006). It is a factor that shows users' willingness to perform a behavior (Ajzen, 1991). According to Teo (2010), using intention to use as a dependent variable has advantages, since asking participants about their actual use of E-learning may discourage them from participating in a study. In addition, compared to actual use, behavioral intention to use is a more progressive dependent variable. Therefore, the present study employed intention to use as the dependent variable.
TAM has been widely used around the globe to examine secondary teachers' acceptance of e-learning. For instance, De Smet et al. (2012) collected data from 505 Flemish secondary school teachers to understand their acceptance of e-learning and to investigate its instructional use. The study indicated that perceived ease of use is the strongest predictor of e-learning acceptance. Another study was conducted by Alzahrani (2019) in Saudi Arabia; it revealed that the TAM model can be used to explain factors influencing secondary school teachers' acceptance of e-learning in the context of Saudi Arabia. Stockless (2018) conducted a study in Canada aimed at identifying the factors that influence teachers' intention to use e-learning; this study showed that perceived usefulness is a strong predictor of teachers' intention to use e-learning.
Experience as an external variable of TAM
Existing studies have revealed that experience significantly influences users' perceived ease of use (PEU) of E-learning systems (De Smet et al., 2012; Lee et al., 2013; Purnomo & Lee, 2013). Users who have more experience tend to have a more favorable perception of technology ease of use (Purnomo & Lee, 2013). Regarding perceived usefulness (PU), prior studies have revealed a significant effect of users' experience on their PU of E-learning (Martin, 2012; Purnomo & Lee, 2013; Rezaei, Mohammadi, Asadi, & Kalantary, 2008). In terms of behavioral intention, previous studies have also shown that users' computer experience affects their intention to use E-learning technology (De Smet et al., 2012; Premchaiswadi, Porouhan, & Premchaiswadi, 2012; Williams & Williams, 2010). In this current study, we used teachers' E-learning experience during their in-service professional development program (hereafter XIT) as an external variable of TAM. Hence, we proposed the following hypotheses:
H1. XIT significantly affects PU of E-learning in teaching of mathematics
H2. XIT significantly affects PEU of E-learning in teaching of mathematics
H3. XIT significantly affects BI to use E-learning in teaching of mathematics
Perceived ease of use (PEU)
In the context of E-learning, PEU is defined as the extent to which a user believes that using E-learning will be free of effort (Lin, Chen, & Fang, 2010). It has an effect on PU (Davis, 1989) as well as on AT (Chang, Yan, & Tseng, 2012; Wu & Zhang, 2014). Furthermore, numerous studies have validated the significance of PEU as one of the main predictors of attitude toward acceptance of technology (Briz-Ponce & García-Peñalvo, 2015; Calisir, Altin Gumussoy, Bayraktaroglu, & Karaali, 2014; Hamid, Razak, Bakar, & Abdullah, 2016). In this study, we looked at teachers' PEU of E-learning in teaching. Hence, the following hypotheses were proposed.
H4. PEU significantly affects PU of E-learning in teaching of mathematics
H5. PEU significantly affects AT toward using E-learning in teaching of mathematics
H6. PEU significantly affects BI to use E-learning in teaching of mathematics
Perceived usefulness (PU)
Lin et al. (2010) define PU of E-learning as the extent to which a user believes that E-learning can help them achieve learning objectives. Previous studies showed that PU is one of the main factors that influence users' attitude toward technology (Chang et al., 2012; Hamid et al., 2016; Hess, McNab, & Basoglu, 2014; Mou, Shin, & Cohen, 2017). Furthermore, PU also has a direct and an indirect effect on behavioral intention (Teo, 2010; Wong, 2015). Hence, in line with previous studies, we proposed two hypotheses:
H7. PU significantly affects BI to use E-learning in teaching of mathematics
H8. PU significantly affects AT toward using E-learning in teaching of mathematics
Attitude toward using (AT)
Kaplan (1972) defined attitude as a tendency to respond to an event in a favorable or an unfavorable way. Many studies on E-learning acceptance have shown that attitude is a significant predictor of BI to use E-learning (e.g., Cheung & Vogel, 2013; Tosuntaş, Karadağ, & Orhan, 2015). The connection between AT and BI implies that users tend to follow certain behaviors based on their positive attitude toward them (Keong, Albadry, & Raad, 2014). Furthermore, attitude toward technology fully mediates the effects of PEU and PU on behavioral intention. Therefore, we proposed hypothesis 9:
H9. AT significantly affects BI to use E-learning in teaching of mathematics
Behavioral intention (BI)
There are two outcome variables in the TAM model, namely behavioral intention (BI) and actual use (AU). BI is defined as the behavioral tendency to keep using a technology in the future; it therefore determines acceptance of the technology (Alharbi & Drew, 2014). Previous studies have confirmed that BI positively affects AU. Furthermore, earlier studies showed that BI is influenced by PU (Tarhini, Elyas, Akour, & Al-Salti, 2016), PEU (Tarhini et al., 2016; Wu & Zhang, 2014), and AT (Hussein, 2017; Letchumanan & Tarmizi, 2011; Sharma & Chandel, 2013; Taat & Francis, 2019). As mentioned previously, in this study, BI is the dependent variable. Therefore, we incorporated intention to use E-learning as the outcome of our research model, as presented in Figure 2.
RESEARCH MODEL
The discussion presented previously suggests that E-learning experience is a significant factor that strongly affects PEU, PU, and BI in the E-learning context. As mentioned earlier, in the present study, teachers' E-learning experience during their in-service training was included as an external factor of the TAM model, as shown in Figure 2.
Design of the Study
We employed a quantitative approach with a cross-sectional questionnaire (Fraenkel, Wallen, & Hyun, 2011). A quantitative method is able to provide reliable, valid, objective, and generalizable findings, and questionnaires can be distributed to many participants. Furthermore, according to Fraenkel et al. (2011), a quantitative method enables generalizations about the whole population. In addition, a quantitative study relies on hypothesis testing, where clear guidelines and objectives can be followed (Shank & Brown, 2013).
In this research, we tested hypotheses to predict secondary school teachers' behavioral intention to use E-learning in their teaching of mathematics.
Instrument Design and Development
According to Lew, Lau, and Leow (2019), questionnaires are a widely used method in studies of technology acceptance. They allow researchers to collect data that reflect the opinions and behaviors of a group of people (Queirós, Faria, & Almeida, 2017). Therefore, a questionnaire of 21 Likert-scale items was developed. The question items were designed based on the five constructs (PU, PEU, AT, BI, and XIT) of the model. We labelled the five scale points as 'Strongly Disagree', 'Disagree', 'Neutral', 'Agree', and 'Strongly Agree', scored from 1 to 5, respectively.
In order to assess the content validity of the constructs, the questionnaire was reviewed by two experts and examined through a content validity index. The experts were asked to examine whether the items covered all related aspects. The results showed that the average item score was above the threshold value of 0.800 (Halek, Holle, & Bartholomeyczik, 2017), and all individual item values were above the threshold value of 0.780. Furthermore, a pilot test was carried out with eight selected teachers. After the teachers completed the questionnaire, they were interviewed to make sure that they understood the questions and that the questionnaire items made sense to them. The questions were then revised according to comments from the interviewees.
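For illustration, the snippet below computes an item-level content validity index (I-CVI), the proportion of experts rating an item as relevant, against the thresholds cited above. The ratings and item labels are invented for the example; the study's actual expert ratings are not reproduced here.

    # Illustrative item-level content validity index (I-CVI): the share of
    # experts rating an item as relevant (e.g., 3 or 4 on a 4-point scale).
    # Ratings and item names are invented; thresholds follow the text.
    ratings = {
        "PU1": [4, 4],    # hypothetical ratings from two experts
        "PEU1": [3, 4],
        "AT1": [4, 2],
    }

    def i_cvi(item_ratings, relevant=(3, 4)):
        hits = sum(1 for r in item_ratings if r in relevant)
        return hits / len(item_ratings)

    for item, r in ratings.items():
        score = i_cvi(r)
        verdict = "keep" if score >= 0.780 else "revise"
        print(f"{item}: I-CVI = {score:.2f} -> {verdict}")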
Research Participants
The respondents of this study were secondary school mathematics teachers in Indonesia who participated in an in-service professional development program offered by the government through an online learning system. It took participants six months to complete the training, which covered mathematics content and subject-specific pedagogy. The training has been offered since 2018 and is available at 42 Higher Education Institutions (HEIs) across the nation.
The study was conducted in one of the assigned HEIs, which provided the training for 1,200 teachers in 2019 in five cohorts. The institution is a public university located at the northern end of Sumatra island, Indonesia, and is ranked among the top 20 universities in the country. We chose this university because the teachers who participated in the professional development program came not only from the province where the university is located but also from other provinces in Indonesia. The participants of the training program were enrolled at the university by the Ministry of Education and Culture.
Random sampling was employed for the selection of respondents. Participants' demographic information is highlighted in Table 1.
A total of 161 secondary school mathematics teachers in Indonesia participated in this study by completing the questionnaire. Respondents consisted of 83 (51.6%) male and 78 (48.4%) female teachers. The majority of respondents had an undergraduate degree (87.6%), while the remainder had a postgraduate degree (12.4%). The majority of participants had more than six years of teaching experience and were certified by the Indonesian government.
Data Collection
Prior to data collection, we acquired ethical approval for this study. Subsequently, we administered the questionnaire online because it could be easily distributed and accessed from various devices (see Fraenkel et al., 2011). The majority of participants were contacted through WhatsApp and email. We administered the online questionnaire in Google Forms by sending a link to participants and keeping the questionnaire active for four weeks.
Data Analysis
Structural Equation Modelling (SEM) was utilized. Partial Least Squares SEM (PLS-SEM) was appropriate for this study, as the aim was to predict teachers' behavioral intention to use E-learning for the teaching of mathematics. Therefore, SMART PLS 3.0 was used to perform Confirmatory Factor Analysis (CFA) and to assess the reliability, validity, and internal consistency of the model. A structural model was then developed, and the hypotheses were tested.
Factor Analysis
Five constructs, namely XIT, PEU, PU, AT, and BI, were subjected to factor analysis. We present the structural model and its path coefficients in Figure 3.
To assess the accuracy of the structural model, we measured R² values. The structural model shows that R² is 0.453 for BI as an endogenous construct, implying that the four exogenous constructs (XIT, PEU, PU and AT) moderately explain 45.3% of the variance in BI (Hair, Hult, Ringle, & Sarstedt, 2017). The inner model suggests that AT is the strongest predictor that significantly affects BI (β = 0.513, t-value = 5.880), followed by XIT (β = 0.164, t-value = 2.267). The results indicate that AT and XIT have a strong positive relationship with BI, with t-values > 1.645 at a significance level of 5% (α = 0.05) (Hair et al., 2017).
Regarding AT as an endogenous construct, the model shows that R² is 0.571 for AT. This indicates that the two constructs (PEU and PU) moderately explain 57.1% of the variance in AT. The model also suggests that PEU is the strongest factor that significantly affects AT (β = 0.472, t-value = 6.217), followed by PU (β = 0.390, t-value = 5.771). These t-values indicate that PEU and PU have a strong positive relationship with AT.
In addition, in terms of PU, the results reveal that R² is 0.405 for PU, implying that the two constructs (PEU and XIT) moderately explain 40.5% of the variance in PU. The model shows that PEU is the strongest predictor that significantly affects PU (β = 0.381, t-value = 5.369), followed by XIT (β = 0.380, t-value = 5.971). These results indicate that XIT and PEU have strong positive relationships with PU.
Furthermore, three assessment criteria, namely convergent validity, internal consistency reliability, and discriminant validity, were employed to assess the theoretical model. Regarding convergent validity, we measured the outer loadings of the indicators and the Average Variance Extracted (AVE) (Hair et al., 2017). Loading values equal to or larger than 0.7 indicate adequate convergent validity (Hair et al., 2017). In terms of Composite Reliability (CR), which measures internal consistency reliability, a CR value above 0.7 is regarded as adequate. Table 3 presents the loading values, CR and AVE of the constructs. Table 2 shows that all indicators have loadings over 0.7, which is considered high convergent validity and acceptable (Hair et al., 2017). This implies that all indicators exceeded the threshold value; therefore, indicator reliability was satisfactory. Furthermore, the AVE values satisfy the threshold level of AVE (≥ 0.5), indicating that convergent validity is confirmed (Hair et al., 2017). We can therefore conclude that the constructs meet the reliability and convergent validity requirements.
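To make the convergent-validity arithmetic concrete, the sketch below computes CR and AVE from a set of standardized outer loadings; the loading values and the construct name are illustrative assumptions, not figures from this study.

```python
# A small sketch of the convergent-validity calculations referenced above.
def composite_reliability(loadings):
    """CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    s = sum(loadings)
    err = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + err)

def ave(loadings):
    """AVE = mean of the squared loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

pu_loadings = [0.82, 0.79, 0.85, 0.77]  # hypothetical PU indicator loadings
print(f"CR  = {composite_reliability(pu_loadings):.3f}")  # > 0.7  -> adequate
print(f"AVE = {ave(pu_loadings):.3f}")                    # >= 0.5 -> adequate
```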
Regarding the cross-loading criterion, Table 4 shows that the loading of every indicator on its own construct is higher than its loadings on the other constructs. This indicates that each indicator is most strongly associated with its own construct, supporting discriminant validity.
As mentioned earlier, we measured the Heterotrait-Monotrait Ratio of Correlations (HTMT) (Henseler, 2010) as an alternative approach to assess discriminant validity. HTMT was employed to confirm that every construct is distinct from the others. As shown in Table 5, no HTMT confidence interval for any path contained the value 1, which indicates that the constructs have sufficient discriminant validity (see Henseler, 2010). We can therefore conclude that the Fornell-Larcker criterion, the cross-loading criterion and HTMT all show that the constructs exhibit sufficient discriminant validity. Table 6 presents the results of the assessments of the structural model. First, we addressed the lateral collinearity issue using the Variance Inflation Factor (VIF). VIF values need to be above 0.2 and below 5.0 (Hair et al., 2017). Table 7 shows that all the inner VIF values for the independent variables fall within this range. Therefore, we conclude that lateral multicollinearity is not a concern in this study.
Second, t-values for all paths were measured using the bootstrapping function of SMART PLS 3 to examine the significance level. Furthermore, we used Cohen's f² to evaluate the effect size of AT and XIT on BI (Cohen, 2013). Overall, Table 6 shows that seven out of nine relationships have t-values > 1.645 and are therefore significant at the 0.05 level. Experience in using E-learning (XIT) (β = 0.400, t-value = 5.765, p < 0.001) positively and significantly affects PEU with a medium effect size (0.190). Hence, hypothesis 1 is supported. Furthermore, PEU (β = 0.381, t-value = 5.369, p < 0.001) and XIT (β = 0.380, t-value = 5.971, p < 0.001) positively affect PU. Therefore, hypothesis 2 and hypothesis 4 are accepted. In terms of effect size, according to Cohen (2013), f² for PEU (0.205) and XIT (0.204) are regarded as medium effect sizes.
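As an illustration of how such bootstrap t-values arise, the following sketch resamples a toy dataset and divides the original estimate by the standard deviation of the bootstrap estimates. It uses a simple correlation as a stand-in for a PLS path coefficient, so all names and values are illustrative rather than SmartPLS output.

```python
# A minimal bootstrap t-value sketch (illustrative, not SmartPLS internals).
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_t(x, y, n_boot=5000):
    """t = original estimate / std of bootstrap estimates."""
    beta = np.corrcoef(x, y)[0, 1]  # standardized slope = correlation here
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x), len(x))  # resample with replacement
        boots.append(np.corrcoef(x[idx], y[idx])[0, 1])
    return beta / np.std(boots)

x = rng.normal(size=161)                 # sample size matching the study
y = 0.5 * x + rng.normal(size=161)
print(f"t = {bootstrap_t(x, y):.2f}")    # > 1.645 -> significant at 5%
```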
Regarding the effect size of PEU and PU on AT, f² for PEU (0.372) and PU (0.254) are considered medium effect sizes.
Regarding behavioral intention to use E-learning (BI), two out of four relationships were found to have t-values > 1.645. The predictors AT (β = 0.513, t-value = 5.880, p < 0.001) and XIT (β = 0.164, t-value = 2.267, p < 0.05) positively relate to BI. Hence, hypothesis 3 and hypothesis 9 are supported. However, PEU (β = 0.117, t-value = 1.096, p > 0.05) and PU (β = -0.026, t-value = 0.368, p > 0.05) do not significantly and positively relate to BI. Therefore, hypothesis 6 and hypothesis 7 are rejected. According to Cohen (2013), f² for AT (0.204) is considered a medium effect size, while f² for XIT (0.034) is considered a small effect size. A summary of the results of hypothesis testing is presented in Table 7.
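The effect sizes reported above follow Cohen's f² formula, sketched below with illustrative R² values; they happen to reproduce an f² near the 0.204 reported for AT, but they are not the study's actual intermediate figures.

```python
# Cohen's f2 compares the model's R2 with and without the predictor of
# interest. Interpretation bands: 0.02 small, 0.15 medium, 0.35 large.
def cohens_f2(r2_included, r2_excluded):
    return (r2_included - r2_excluded) / (1 - r2_included)

# Hypothetical example: dropping AT from the BI model lowers R2 from
# 0.453 to 0.342 (illustrative numbers, not the study's exact values).
f2 = cohens_f2(0.453, 0.342)
print(f"f2 = {f2:.3f}")  # ~0.203, a medium effect by Cohen's bands
```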
DISCUSSION
The main aim of this study is to examine factors that affect secondary school mathematics teachers' behavioral intention (BI) to use E-learning in their teaching. This study is distinct from other studies because it was conducted in the context of the COVID-19 pandemic and investigated teachers who have experience using E-learning for their professional development. Hence, it is necessary to examine teachers' behavioral intention to advance our understanding of the factors that play a significant role in teachers' use of E-learning in their teaching, particularly for teachers who have experience in using E-learning for their professional development. To achieve this aim, the TAM model (Davis, 1986) was adopted with the addition of an external factor, teacher experience in using E-learning during in-service professional development. The hypotheses related to the directional links between the TAM scales and the external factor were examined. The results of this study show three crucial points of discussion.
First, the study suggests that teachers' E-learning experience (XIT) has a significant direct effect on their perceived ease of use (PEU) and perceived usefulness (PU) of E-learning. This finding is consistent with existing studies (Lau & Woods, 2008; Martin, 2012; Pituch & Lee, 2006; Rezaei et al., 2008; Williams & Williams, 2010), confirming the significant effect of experience on users' PEU. Previous studies also revealed that users' experience had a significant effect on their PU of E-learning (Martin, 2012; Rezaei et al., 2008). Throughout their careers, teachers have had many chances to participate in online professional development to develop their content knowledge and pedagogical knowledge. For instance, in the Indonesian context, over the last two years the government has provided in-service teachers with online certification programs. This study shows that such training, to some extent, has affected teachers' PU and PEU of E-learning for their teaching. Therefore, this study indicates that online teacher professional development has a positive impact not only on teacher knowledge development but also on teachers' acceptance of E-learning for instructional purposes. The effectiveness of online teacher professional development has received much attention from researchers around the world; this study enriches the literature by advancing our understanding of teachers' perceived ease of use and perceived usefulness of E-learning for their instruction.
Second, regarding attitude toward using (AT), the results show that PEU (β = 0.472, t-value = 6.217, p < 0.001) and PU (β = 0.390, t-value = 5.771, p < 0.001) are strong predictors of AT. Previous studies have shown the importance of PU and PEU for attitude toward using E-learning (Hamid et al., 2016; Hess et al., 2014; Mou et al., 2017). Therefore, this finding is consistent with our hypothesis that teachers' PEU and PU of E-learning positively and significantly affect their attitude toward using E-learning.
Third, regarding the context of this study, there are two possible reasons why PU and PEU are not crucial for teachers' behavioral intention to use E-learning in teaching. First, the participants were teachers who had long experience in using E-learning for their online professional development. Lin (2011) found a similar result, suggesting that PEU has a more critical impact on the intention of less experienced users than of more experienced users. These results are consistent with the findings of another study (Castañeda, Muñoz-Leiva, & Luque, 2007), in the context of intention to use a website, showing that PEU is a more important factor for less experienced users than for more experienced users. This indicates that, for experienced teachers, perceived ease of use does not play a significant role in their adoption of E-learning for teaching purposes; other factors, such as attitude, play a much more significant role than perceived ease of use does. The reason for this lies in the fact that different individuals view E-learning from different perspectives. For example, in the context of website use, Castañeda et al. (2007) revealed that experienced users are more interested in the outcome of a search, while new users evaluate the novelty of the site. In other words, Castañeda et al. (2007) argued that experienced users are influenced by extrinsic motivation, while new users are influenced by intrinsic motivations such as perceived ease of use. Second, according to Davis (1993), perceived usefulness is the expected overall positive impact of system use on job outcomes, while PEU is the extent to which a user thinks that using a system will be free of effort. In this study, data collection took place during the COVID-19 pandemic, when school closures left students and teachers dependent on E-learning. In such circumstances, when teachers do not have other options for remote teaching, perceived usefulness and perceived ease of use of E-learning might become less important factors in teachers' decisions to use or not to use E-learning.
For that reason, this study leaves room for debate on this issue and future work is necessary to explore it.
The findings of this study indicate that having experience in using E-learning for teacher professional development does not guarantee that teachers will use E-learning in their teaching. For this type of teacher, attitude is the crucial factor that determines their use of E-learning technology in their instruction. However, having teacher training experience positively affects their perceived usefulness and perceived ease of use of E-learning. Therefore, if schools and policymakers would like to enhance the integration of E-learning in secondary schools, particularly during the pandemic, then along with providing training for teachers, much more effort is needed to ensure teachers have strong intentions to adopt technology and, more importantly, to ensure that teachers possess a strong positive attitude toward the E-learning system.
CONCLUSION
This study revealed factors that determine teachers' behavioral intention to use E-learning in their mathematics teaching, particularly teachers who have experience in using E-learning for their professional development. The results showed that two of the four exogenous constructs have a positive effect on teachers' behavioral intention to use E-learning, namely attitude toward using and experience in using E-learning. Teachers' attitude toward using E-learning plays the most significant role in their behavioral intention. In addition, E-learning experience positively and significantly affects teachers' attitude toward E-learning. This study suggested that, for experienced teachers, perceived ease of use and perceived usefulness did not have a significant positive impact on teachers' behavioral intention.
There are several limitations of this study that need to be addressed in future studies. First, the participants of this study were teachers who participated in an online teacher professional development program in one HEI assigned by the Indonesian government. This condition may limit the generalizability of the findings. Therefore, further research is needed to validate this model in other HEIs. Second, the present study revealed that perceived usefulness (PU) and perceived ease of use (PEU) did not have a significant positive effect on teachers' behavioral intention to use E-learning. Thus, future research should further explore this issue in the context of teachers who have experience using E-learning for their professional development. Finally, this study has one external factor, namely E-learning experience. Additional external factors of behavioral intention may also exist, so future work should consider other external variables, such as school facilities and support for the integration of E-learning.
"Education",
"Computer Science"
] |
Automatic Detection of Epilepsy Based on Entropy Feature Fusion and Convolutional Neural Network
Epilepsy is a neurological disorder caused by various genetic and acquired factors, and the electroencephalogram (EEG) is an important means of diagnosing it. Aiming at the low efficiency of clinical manual diagnosis of epileptic signals, this paper proposes an automatic detection algorithm for epilepsy based on multi-feature fusion and a convolutional neural network. First, in order to retain the spatial information between multiple adjacent channels, a two-dimensional feature matrix is constructed from one-dimensional feature vectors according to the electrode distribution diagram. From the feature matrix, sample entropy (SE), permutation entropy (PE), and fuzzy entropy (FE) are used for feature extraction. The combined entropy feature is taken as the input of a three-dimensional convolutional neural network, which performs the automatic detection of epilepsy. Epilepsy detection experiments were performed on the CHB-MIT and TUH datasets, respectively. The experimental results show that the algorithm based on spatial multi-feature fusion and a convolutional neural network achieves excellent performance.
Introduction
Epilepsy is a common brain disease, and the number of people suffering from it has been growing for a long time [1][2][3]. Around 65 million people in the world have epilepsy, and the number will reach almost 1 billion by 2030 [4]. The older population aged more than 65 years has a higher incidence, as one quarter of new-onset cases are diagnosed after this time point [5]. Individuals with dementias such as Alzheimer's disease have a higher risk of developing epilepsy [6][7][8][9]. Oxidative stress is an important intrinsic mechanism involved in the development of epilepsy, causing brain damage; the imbalance between the antioxidant system and increased oxygen radicals in epilepsy accelerates the process of apoptosis [10]. During seizures, the patient suffers great physical and mental pain. Therefore, automatic detection of epilepsy using techniques such as EEG signal analysis is of great importance.
Epileptic seizures are sudden and recurrent. They cause intense mental pain to patients and their families and reduce their quality of life [11]. When the brain activity of epileptic patients is abnormal, abnormal epileptic discharge often appears in the EEG signal [12]. Such signals include spike waves, spike-slow waves, sharp waves, sharp-slow waves, spike-slow complex waves, and sharp-slow complex waves. Spikes have sharp waveforms, most of which occur in generalized or localized seizures. Sharp waves have the same mechanism as spike waves but a longer duration, reflecting the degree of discharge synchronization. The occurrence of sharp-slow and spike-slow complex waves at different locations or times indicates that there may be multiple regions of abnormal electrical activity. At present, the diagnosis of these abnormal signals is still performed by doctors through visual observation, based on long-term work experience. This work not only consumes a lot of doctors' time and energy but also has low accuracy, and it is difficult for different doctors to reach a common judgment standard, making the process highly subjective. Therefore, automatic recognition of epileptic EEG signals can help doctors reduce their workload and assist clinical treatment. It has important practical significance and economic value [13].
In recent years, research on EEG signal recognition has mostly reflected the process of the brain's transition from one state to another by extracting time-domain, frequency-domain [14], time-frequency-domain [15], linear [16], and nonlinear [17] features. The literature [18] shows that seizures arise from synchronization during the interaction of multiple brain regions. When a seizure is imminent, seizure-like discharges begin to spread through various pathways in the patient's brain to surrounding brain areas. They then pass through neural circuits back to the place where the discharge began, forming a closed loop. This repeats in an endless cycle, transforming the brain's normal, random discharges into a steady, rhythmic discharge. Such a seizure mechanism shows that there is a certain correlation between brain regions in the course of the disease, and the features listed above do not fully capture this correlation. Therefore, synchronous analysis of the whole brain can more truly reflect the changes in the interaction between brain areas during clinical seizures.
With the development of machine learning, more and more intelligent algorithms are applied to EEG-based epilepsy detection. These include classification methods such as support vector machines [19], naive Bayes [20], neural networks [21], and fuzzy logic systems [22], as well as principal component analysis (PCA) [23], wavelet packet decomposition (WPD) [24], and higher-order crossings (HOC) [25]. These methods first perform feature extraction on the original signals; then, a classification model is trained using the new features; finally, the trained model is used for prediction, thereby achieving epilepsy detection. Although many feature extraction and classification methods have been applied to EEG-based epilepsy detection, extracting effective features rich in discriminative information for subsequent detection remains an important challenge.
In recent years, deep learning, as a machine learning method, has attracted extensive attention for feature learning [11]. Deep learning learns the weights of each layer from the desired output: each layer of the hierarchy adjusts its input features to obtain increasingly discriminative representations that are more likely to yield the desired output. Deep learning technology has also been effectively applied to EEG signal processing: some studies [26][27][28] have used different feature extraction methods to obtain the characteristics of EEG signals and then used a convolutional neural network to detect epilepsy.
At present, only a few studies use combined features as classifier input for epilepsy detection, and few of them consider the spatial information between electrodes when adopting combined features. Therefore, in order to construct effective features from EEG signals for epilepsy detection, this paper proposes an automatic detection algorithm. The innovations and contributions of this paper are as follows. (1) Single entropies (sample entropy (SE), fuzzy entropy (FE), and permutation entropy (PE)) and their different combinations were input as features to a three-dimensional convolutional neural network for epilepsy detection. (2) The three-dimensional input can not only retain the spatial information between electrodes but also integrate the various feature values extracted from the EEG. The experimental results show that, compared with single entropy features, combined entropy features effectively improve the accuracy of epilepsy detection.
The remainder of this paper is organized as follows. Related work is described in the next section. The proposed method is presented in Section 3. Section 4 focuses on the experiments and analysis. Section 5 concludes the paper.
Related Work
2.1. Epilepsy Detection. Bioinformatics, medical image processing, and biological signal processing are all applications of intelligent technologies in biomedicine. Bioinformatics studies protein and genetic information; medical image processing mainly includes the analysis of CT and NMR images; and biological signal processing studies electrical signals such as EEG and ECG. The EEG signal is an expression of brain neuron activity and contains a great deal of information about human physiological activity, and EEG signals have been widely used in the field of epilepsy detection. Epilepsy detection usually involves the use of automated algorithms to analyze a patient's biological signals to determine whether an epileptic seizure is occurring or has occurred. An important goal of epilepsy detection is to perform this analysis as quickly and efficiently as possible. In recent years, a variety of algorithms for epilepsy detection have been proposed and have achieved promising results [13,14,29,30].
There are three kinds of characteristic data distributions in EEG signals, roughly as follows: (1) EEG signals of healthy subjects under normal conditions, (2) EEG signals of epileptic patients during seizures, and (3) signals of epileptic patients between seizures. These three types of signals each have their own independent data distribution characteristics, and there are certain differences among them [31]. In previous studies, researchers mostly used signal data under states (1) and (2), for which a large amount of labeled category information exists, to construct classifiers. Studies show that the performance of such classifiers declines when they are used to classify and recognize signal data in state (3), whose distribution differs from that of states (1) and (2); at the same time, existing traditional intelligent modeling techniques are no longer applicable. Transfer learning strategies were introduced to cope with these challenges and achieved satisfactory results.
EEG signals can be divided into the following five categories [12,31]: (1) EEG signals measured while healthy volunteers kept their eyes open, (2) EEG signals measured while healthy subjects' eyes were closed, (3) EEG signals from the hippocampal structures of epileptic patients during the interseizure period, (4) EEG signals from the epileptogenic regions of the brain during the interseizure period, and (5) EEG signals measured during seizures. Types (1) and (2) belong to the signals under state (1), signals of types (3) and (4) belong to state (3), and type (5) corresponds to the EEG signals in state (2).
The classifier with transfer learning ability constructed in Reference [15] can classify and recognize signal data in states (1) and (3), which have large distribution differences, based on EEG signals in states (1) and (2). In that case, however, the state (1) signals in the source domain and the target-domain EEG signals come from the same subclass. When the source-domain EEG signals come from types (1) and (5) and the target-domain signals come from types (2) and (5), the classification and recognition performance drops significantly. This is because, although both types (1) and (2) are EEG signals measured from healthy people under normal conditions, they still have different distribution characteristics and belong to different classes.
In practical applications, the obtained data are often incomplete, and the loss of a small class of data frequently occurs. In this case, simply introducing a transfer learning strategy into classification model construction cannot effectively solve the problem, because these methods only consider the distribution difference between the source and target domains when building the classification model. In feature extraction, the dimensionality of source and target EEG signals is reduced separately, just as in traditional EEG intelligent recognition methods, and the difference between source and target distributions is ignored. Features that contribute greatly to building the classification model in the source domain may not contribute much to recognition in the target domain, while source-domain features that could help target-domain classification and recognition are not selected, which reduces the classifier's recognition performance.
The recognition of epileptic EEG signals generally consists of the following steps. First, an appropriate feature extraction method is selected to extract features from the epileptic EEG signals, producing a feature vector set composed of relevant and useful feature information. Second, training samples are used to train a specific classification method to obtain the classifier. Then, the trained classifier is used to classify and recognize other epileptic EEG signals.
2.2. Classification and Identification Technology. Since 1990, many intelligent classification methods have been applied to the recognition of EEG signals. The following is a brief description of some common methods.
(1) Decision tree algorithm: DT uses induction to generate a decision tree and rules, and then classifies test data with the obtained tree and rules. The decision tree classifier proposed in Reference [32], which uses the fast Fourier transform to extract EEG signal features, achieved good classification accuracy.
(2) Naive Bayes algorithm: NB is derived from Bayes' theorem in probability theory and has a solid theoretical foundation and high efficiency. The literature [33] proposed a data mining model based on the NB algorithm to realize automatic detection of epilepsy.
(3) K-nearest neighbor algorithm: KNN determines the class label of a sample according to the majority class among its K nearest neighbors in feature space. A KNN classification algorithm described in the literature, based on nonlinear discrete wavelet transform features of EEG signals, achieves high classification accuracy.
(4) Support vector machine: SVM is considered an effective tool for pattern recognition and function estimation problems [34]. It is particularly effective for the classification of small-sample, high-dimensional datasets and has been widely used in intelligent EEG detection.
(5) Deep learning algorithms: in recent years, convolutional neural networks have been applied to EEG signal processing with good results. In the literature [19], the original EEG signals were processed with a one-dimensional convolutional neural network to predict epileptic seizures. In [35,36], the original signal is transformed into the frequency domain through the Fourier transform, and a convolutional neural network is then used for classification.
The Proposed Method
3.1. Entropy Features
3.1.1. Sample Entropy (SE). Sample entropy represents the rate at which a nonlinear dynamical system generates new patterns; the higher the sample entropy, the more complex the sequence. The SE algorithm is as follows:
(1) The original sequence $i = \{i_1, i_2, \dots, i_T\}$ is reconstructed in phase space to obtain $w$-dimensional vectors:
$$I(x) = \{i_x, i_{x+1}, \dots, i_{x+w-1}\}, \quad x = 1, 2, \dots, T - w + 1. \quad (1)$$
(2) The distance between vectors $I(x)$ and $I(y)$ is defined as the largest absolute difference between their corresponding elements:
$$d_{xy}^{w} = \max_{z = 0, 1, \dots, w-1} \left| i_{x+z} - i_{y+z} \right|, \quad x \neq y.$$
(3) For a tolerance $r$, let $H_x^{w}(r)$ be the fraction of vectors $I(y)$, $y \neq x$, within distance $r$ of $I(x)$; the average over all $x$ is
$$H^{w}(r) = \frac{1}{T - w} \sum_{x} H_x^{w}(r).$$
(4) Increase the dimension by 1, so that it becomes $w + 1$, and repeat steps (1) to (3) to obtain $H_x^{w+1}(r)$ and $H^{w+1}(r)$.
(5) For a finite sequence length $T$, the estimate of the sample entropy is
$$\mathrm{SE}(w, r, T) = -\ln \frac{H^{w+1}(r)}{H^{w}(r)}.$$

3.1.2. Permutation Entropy (PE). Permutation entropy measures the randomness of a one-dimensional time series. The algorithm is simple, fast, and robust to noise. The basic procedure is as follows:
(1) Reconstruct the phase space of the sequence $i = \{i_1, i_2, \dots, i_T\}$:
$$i_s(n) = \{i_n, i_{n+\tau}, \dots, i_{n+(w-1)\tau}\},$$
where $w$ is the embedding dimension and $\tau$ is the delay time.
(2) Arrange the reconstructed components of $i_s(n)$ in ascending order of numerical value:
$$i_{n+(y_1-1)\tau} \le i_{n+(y_2-1)\tau} \le \dots \le i_{n+(y_w-1)\tau},$$
where $y_1, y_2, \dots, y_w$ are the indices of the elements in the reconstructed sequence, so the ordinal pattern $\pi = \{y_1, y_2, \dots, y_w\}$ has $w!$ possible forms.
(3) Let $f(\pi)$ denote the frequency of occurrence of each ordinal pattern; the probability of the corresponding pattern is
$$u_x(\pi) = \frac{f(\pi)}{\sum_{\pi} f(\pi)}, \quad 1 \le x \le w!.$$
According to Shannon's definition, the permutation entropy is
$$B_u(w) = -\sum_{x=1}^{w!} u_x(\pi) \ln u_x(\pi),$$
which reaches its maximum $\ln(w!)$ when $u_x(\pi) = 1/w!$.
(4) Normalizing the entropy value gives
$$0 \le B_u(w) / \ln(w!) \le 1.$$

3.1.3. Fuzzy Entropy (FE). Fuzzy entropy is an improvement of sample entropy that uses an exponential function as the fuzzy membership function for measuring vector similarity; the continuity of the exponential function smooths the entropy estimate. The specific steps of the algorithm are as follows:
(1) Reconstruct the phase space of the original sequence $i = \{i_1, i_2, \dots, i_T\}$ to obtain $w$-dimensional vectors, as in step (1) of the SE algorithm.
(2) Calculate the distance $d_{xy}^{w}$ between vectors $I(x)$ and $I(y)$ as the largest absolute difference between their corresponding elements, as above.
(3) Define the similarity $D_{xy}^{w}$ between $I(x)$ and $I(y)$ through the fuzzy function $\mu(d_{xy}^{w}, t, r)$:
$$D_{xy}^{w} = \mu(d_{xy}^{w}, t, r) = \exp\!\left(-\frac{(d_{xy}^{w})^{t}}{r}\right),$$
where $t$ and $r$ are the boundary gradient and width of the fuzzy function, respectively.
(4) Define the function
$$\varphi^{w} = \frac{1}{T - w} \sum_{x} \left( \frac{1}{T - w - 1} \sum_{y \ne x} D_{xy}^{w} \right).$$
(5) Increase the dimension by 1, so that it becomes $w + 1$, and repeat steps (2) to (4) to obtain $\varphi^{w+1}$.
(6) The fuzzy entropy is defined as
$$\mathrm{FE}(w, t, r, T) = \ln \varphi^{w} - \ln \varphi^{w+1}.$$
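A minimal NumPy sketch of the three entropy features defined above follows. It is a naive reference implementation for short epochs (the pairwise distance matrices are quadratic in the sequence length), and conventions such as the default tolerance r = 0.2·std and the local baseline removal in FE are common practice assumed here rather than values prescribed by this paper.

```python
# Naive reference implementations of SE, PE, and FE (sketch, not the
# authors' code). w = embedding dimension, r = tolerance/width,
# tau = delay, t = boundary gradient of the fuzzy function.
import numpy as np
from math import factorial, log

def _templates(x, m):
    """All m-dimensional phase-space vectors of sequence x."""
    return np.array([x[i:i + m] for i in range(len(x) - m + 1)])

def sample_entropy(x, w=2, r=None):
    """SE = -ln(H^{w+1}(r) / H^w(r)) with Chebyshev template matching."""
    x = np.asarray(x, float)
    r = 0.2 * x.std() if r is None else r
    def match_rate(m):
        t = _templates(x, m)
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        n = len(t)
        return (np.sum(d < r) - n) / (n * (n - 1))  # exclude self-matches
    return -log(match_rate(w + 1) / match_rate(w))

def permutation_entropy(x, w=3, tau=1, normalize=True):
    """PE: Shannon entropy of the ordinal patterns, optionally / ln(w!)."""
    x = np.asarray(x, float)
    counts = {}
    for n in range(len(x) - (w - 1) * tau):
        pattern = tuple(np.argsort(x[n:n + (w - 1) * tau + 1:tau]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), float)
    p /= p.sum()
    h = -np.sum(p * np.log(p))
    return h / log(factorial(w)) if normalize else h

def fuzzy_entropy(x, w=2, r=None, t=2):
    """FE = ln(phi^w) - ln(phi^{w+1}) with exponential similarity."""
    x = np.asarray(x, float)
    r = 0.2 * x.std() if r is None else r
    def phi(m):
        tpl = _templates(x, m)
        tpl = tpl - tpl.mean(axis=1, keepdims=True)  # remove local baseline
        d = np.max(np.abs(tpl[:, None, :] - tpl[None, :, :]), axis=2)
        sim = np.exp(-(d ** t) / r)
        n = len(tpl)
        return (sim.sum() - n) / (n * (n - 1))  # exclude self-similarity
    return log(phi(w)) - log(phi(w + 1))
```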
3.2. Data Preprocessing. The open-source datasets CHB-MIT and TUH were used in this experiment. In order to increase the number of samples, the experimental data were segmented: each EEG segment for the seizure and seizure-free periods is 2 s long, and there are on average 100 instances per class per patient. Sample entropy, permutation entropy, and fuzzy entropy are used for feature extraction from the EEG signals. The main method is to extract the three entropies of each EEG channel, obtaining three one-dimensional feature vectors. In general, EEG datasets are acquired according to the standard international 10-20 system of electrode placement. Figure 1(a) is a plan of the international 10-20 system, where the electrodes used in the actual EEG recordings are marked in yellow. In the electrode diagram, each electrode is adjacent to multiple electrodes, and these electrodes record EEG signals from specific areas of the brain. In order to retain the spatial information between multiple adjacent channels, a two-dimensional feature matrix (H × W) is constructed from each one-dimensional feature vector according to the electrode distribution diagram in the manner shown in Figure 1, where H and W are the maximum numbers of channels in the vertical and horizontal directions, respectively; in this case both H and W equal 7. Empty grid positions are filled with zeros. In this experiment, three different feature values were extracted from each EEG signal, the obtained one-dimensional vectors were converted into two-dimensional matrices according to the method shown in Figure 2, and the three resulting two-dimensional matrices were superimposed into a three-dimensional matrix as the input of the CNN. The specific transformation process is shown in Figure 2.
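The following sketch illustrates the vector-to-matrix conversion described above. The electrode-to-grid coordinates are hypothetical stand-ins for the layout in Figure 1, since the exact mapping is defined by the figure rather than the text.

```python
# A toy sketch of the 1D-to-2D feature conversion (assumed channel layout).
import numpy as np

H = W = 7  # grid size from the text
# Hypothetical (row, col) grid positions for a few 10-20 electrodes.
ELECTRODE_POS = {"Fp1": (0, 2), "Fp2": (0, 4), "F7": (1, 0), "F3": (1, 2),
                 "Fz": (1, 3), "F4": (1, 4), "F8": (1, 6), "C3": (3, 2),
                 "Cz": (3, 3), "C4": (3, 4), "P3": (5, 2), "Pz": (5, 3),
                 "P4": (5, 4), "O1": (6, 2), "O2": (6, 4)}

def to_grid(channel_features: dict) -> np.ndarray:
    """Place one feature value per channel at its electrode grid position;
    empty cells stay zero, as in the text."""
    grid = np.zeros((H, W))
    for name, value in channel_features.items():
        r, c = ELECTRODE_POS[name]
        grid[r, c] = value
    return grid

# Stack the three entropy grids into one (3, H, W) CNN input sample.
se, pe, fe = ({name: float(np.random.rand()) for name in ELECTRODE_POS}
              for _ in range(3))
sample = np.stack([to_grid(se), to_grid(pe), to_grid(fe)])  # shape (3, 7, 7)
```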
3.3. Neural Network Structure. A convolutional neural network (CNN) is a kind of deep feedforward neural network that has been widely used in many fields such as image recognition. CNNs have good fault tolerance and strong self-learning ability, together with automatic feature extraction and weight sharing. After many experiments, the final model was constructed from four convolutional layers, one fully connected layer, and a softmax layer.
The input of the network is the three-dimensional feature matrix composed of the two-dimensional feature matrices obtained by the three feature extraction methods. The main function of a pooling layer is to reduce the data dimension, but this comes at the cost of lost information. Because the network input in this paper is small, no pooling layer is added, in order to retain as much useful information as possible. The specific network structure is shown in Figure 3. The first convolutional layer has 32 feature maps, and each subsequent convolutional layer has twice as many feature maps as the previous one: 64, 128, and 256, respectively. The convolution kernel is 3 × 3 with stride 1. After each convolution, a SELU activation function gives the model nonlinear feature transformation capability. A fully connected layer then maps the 7 × 7 × 256 feature maps to a feature vector F ∈ R¹⁰²⁴. The last part of the network is a softmax classifier, which outputs the epilepsy classification result. A truncated normal distribution is used to initialize the weights, and the Adam optimizer is used to minimize the cross-entropy loss, with the initial learning rate set to 0.0001. Dropout with a rate of 50% is used to avoid overfitting. In addition, L2 regularization is used to avoid overfitting and improve generalization ability, with the regularization weight set to 0.5.
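A PyTorch sketch of the described network follows. It is a reconstruction from the text, not the authors' released code: padding = 1 is assumed so that the 7 × 7 input size is preserved through the convolutions, the softmax is folded into the cross-entropy loss, and Adam's weight_decay stands in for the stated L2 regularization term.

```python
# A minimal PyTorch sketch of the CNN described above (assumptions noted).
import torch
import torch.nn as nn

class EntropyCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        chans = [3, 32, 64, 128, 256]   # input channels, then 4 conv layers
        layers = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            # 3x3 kernel, stride 1; padding=1 (assumed) keeps 7x7 spatial size
            layers += [nn.Conv2d(c_in, c_out, kernel_size=3, stride=1,
                                 padding=1),
                       nn.SELU()]
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(7 * 7 * 256, 1024),  # F in R^1024
            nn.SELU(),
            nn.Dropout(p=0.5),
            nn.Linear(1024, n_classes),    # softmax applied by the loss below
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = EntropyCNN()
# Adam with the stated initial learning rate; weight_decay stands in for
# the paper's L2 regularization term (its stated weight 0.5 is unusually
# large, so treat this as illustrative).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=0.5)
loss_fn = nn.CrossEntropyLoss()  # includes the softmax
```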
Experiment
EEG data from Boston Children's Hospital form the CHB-MIT dataset [30]. It includes EEG recordings of pediatric patients with refractory epileptic seizures and contains 23 cases collected from 22 subjects; case CHB21 was obtained from the same female subject 1.5 years after case CHB01. Each case contains between 9 and 42 consecutive .edf files from a single subject. In most cases, an .edf file contains one hour of digitized EEG signal. All signals were sampled at a rate of 256 samples per second with 16-bit resolution. Most files contain 23 EEG channels (24 or 26 in some cases). The recordings use the international 10-20 EEG electrode placement and naming system; in some recordings, other signals were also recorded.
The Temple University Hospital (TUH) EEG dataset is the largest EEG dataset available [37]. It includes 25,000 EEG recordings from 14,000 cases, collected at Temple University Hospital since 2002. EEG signals in this dataset were recorded using Natus Medical Incorporated's Nicolet™ EEG recording technology. The original signals consist of 20- to 128-channel recordings sampled at a minimum frequency of 250 Hz using a 16-bit A/D converter. Eight types of seizures were recorded, among which focal nonspecific, generalized nonspecific, and complex partial seizures are the most common; in the subsequent experiments in this paper, only these three common seizure types were detected in the TUH dataset.
Detection performance is evaluated by accuracy and recall,
$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad \text{Recall} = \frac{TP}{TP + FN},$$
where TP and TN are correctly classified positive and negative samples and FP and FN are incorrectly classified positive and negative samples. In this paper, the positive samples are the seizure EEG signals and the negative samples are the seizure-free EEG signals.
The selection of features directly determines classifier performance, and classifiers based on different feature combinations perform differently. Seven input features are used in this experiment, comprising the single and combined entropy features (SE, PE, FE, SE + PE, SE + FE, PE + FE, and SE + PE + FE). Several comparative experiments showed that the order of the entropy combination has little influence on recognition accuracy. The three-dimensional feature matrices are constructed following the preprocessing methods and steps above: SE, PE, and FE are 9 × 9 × 1 feature matrices; SE + PE, SE + FE, and PE + FE are 9 × 9 × 2 feature matrices; and SE + PE + FE is a 9 × 9 × 3 feature matrix. The seven feature matrices were each input into the convolutional neural network shown in Figure 3; that is, seven groups of experiments were conducted for each dimension. In addition, a comparison experiment was conducted with the conventional entropy combination method, in which the spatial information of the EEG electrodes is not considered when constructing the input features; that is, the input features are not converted from one-dimensional feature vectors to two-dimensional feature matrices according to the electrode distribution. The seven features without spatial information were input to a one-dimensional convolutional neural network with the same structure as in Figure 3, with experimental settings identical to those of the proposed network. To verify the influence of single entropy features, combined entropy features, and spatial information on epilepsy recognition, experiments were conducted with single entropy features including spatial information, single entropy features without spatial information, and the different combined entropy features. The results are shown in Figure 4, where the yellow bars represent the results of the one-dimensional convolutional neural network without spatial information and the green bars represent the results of the proposed network. As can be seen from Figure 4, among the three single entropy features, the classification accuracy with sample entropy is higher than with fuzzy entropy or permutation entropy.
The accuracy of sample entropy in the one-dimensional convolutional neural network is 76.91%; performance improves when combined entropy is used as the feature input. In addition, the experimental results using spatial information are compared with those using the same type of entropy feature without spatial information. The results show that the detection accuracy of every entropy feature with spatial information is higher than that of the corresponding feature without spatial information. When SE + PE + FE is used as the input feature, the average accuracy and recall rate are the highest. The experimental results therefore show that the spatial information of the EEG electrode distribution can effectively improve the accuracy of epilepsy detection. To further analyze the results of the proposed network, Figures 5(a) and 5(b) show the accuracy and recall rate of epilepsy detection for the different features. In addition, Figure 6 shows the ROC curves and AUC values of the classification models trained on the different feature combinations. The best single entropy AUC is that of SE, which is only 0.8447, while SE + PE + FE has the highest AUC value of 0.8837. The feature combination method of the proposed algorithm thus significantly improves epilepsy detection performance.
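For reference, ROC curves and AUC values such as those in Figure 6 can be produced from model scores with scikit-learn's standard API, as in the illustrative sketch below (the labels and scores are made up, not the study's data).

```python
# Computing an ROC curve and AUC from classifier scores (illustrative).
from sklearn.metrics import roc_curve, roc_auc_score

y_true = [0, 0, 1, 1, 1, 0, 1, 0]                       # ground-truth labels
y_score = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.45]    # softmax scores
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(f"AUC = {roc_auc_score(y_true, y_score):.4f}")
```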
Comparison of Relevant Algorithms.
In order to compare further with other methods, the algorithms of the literature [2], [3], [13], and [24] were selected for experimental comparison, using the TUH dataset. Unlike the earlier CHB-MIT experiments, the TUH dataset contains three common seizure types: focal nonspecific, generalized nonspecific, and complex partial epilepsy. Therefore, detection on the TUH dataset is more difficult, and good performance on this dataset provides stronger evidence for the effectiveness of the proposed method. The accuracy and recall rate of epilepsy detection are shown in Tables 1 and 2. The results in Tables 1 and 2 show that the average accuracy and recall rate of the proposed algorithm exceed those of the other four methods.
Meanwhile, from the data in Table 2, the highest accuracy and recall rate of the proposed algorithm are 92.26% and 93.86%. The results in Table 2 are significantly lower than those in Table 1. This is because the TUH dataset contains more seizure types, making detection a multiclass classification task, whereas the CHB-MIT dataset contains only epileptic and normal data, a binary classification task. The multiclass task is more difficult than the binary task, so the performance on the TUH dataset is lower than on the CHB-MIT dataset.
Conclusion
In this paper, each EEG segment for the seizure and seizure-free periods is 2 s long, with on average 100 instances per class per patient, and the entropy values per epoch were calculated, respectively. Each one-dimensional vector was transformed into a two-dimensional matrix according to the method shown in Figure 1. Sample entropy, permutation entropy, and fuzzy entropy were analyzed, respectively: three different feature values were extracted from each EEG signal, yielding three two-dimensional matrices. The three two-dimensional matrices and their different combinations were input into the convolutional neural network as features for analysis of epilepsy detection in terms of accuracy and recall rate.
The experimental results show that, compared with single entropy features, the combined entropy features proposed in this paper can effectively improve the accuracy and recall rate of epilepsy detection. In addition, the spatial information of the EEG electrode distribution can effectively improve detection accuracy. The three-dimensional-input convolutional neural network combined with the combined entropy features retains the spatial information between electrodes and fully extracts the EEG signal features. Compared with other relevant methods, the accuracy and recall rate of the proposed method are significantly improved.
Data Availability
The labeled dataset used to support the findings of this study is available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no competing interests.
Authors' Contributions
Yongxin Sun, as the primary contributor, completed the analysis, experiments, and paper writing. Xiaojuan Chen helped perform the analysis with constructive discussions.
"Medicine",
"Computer Science"
] |
Sensor-based predictive communication for highly dynamic multi-hop vehicular networks
We introduce a sensor-aided predictive algorithm for multi-hop link quality estimation. The proposed concept uses vehicle sensor data to improve link adaptation and end-to-end path selection for vehicular multi-hop data transmission. The obtained results show that the proposed concept enables better multi-hop link quality estimation and significantly improves end-to-end transmission characteristics in dynamic vehicular environments.
I. INTRODUCTION
The automation of driving tasks and vehicle-to-vehicle (V2V) communications have become key trends shaping the automotive industry of the future. These technological advances, combined with various types of mission-critical and functional-safety applications, impose high requirements on the quality of end-to-end communication between vehicles [1]. However, the high dynamics of vehicles acting as communication partners, combined with unique properties of the vehicular environment, such as the impact of Doppler effects and the presence of scatterers on both sides of the communication link, pose a range of distinct link quality estimation challenges compared to existing cellular-based solutions.
In wireless communications, the optimal transmission scheme is adaptively selected based on the estimated channel state information (CSI). The time required for CSI estimation and the dynamics of the channel eventually result in the selection of a suboptimal data transmission scheme and in overall network performance degradation, a problem also known as CSI aging [2].
The problem of channel estimation and optimal end-to-end path selection in dynamic multi-hop networks is even more challenging. The large number of possible links and hops to be monitored by a decision-making node results in an increased mismatch between the estimated and actual CSI at transmission time.
The importance of minimizing CSI aging is a recognized problem in the literature [3]. It has also been shown [4] that technological achievements of modern vehicles, led by developments in automated driving solutions, can be beneficially used to improve communication between two vehicles through a better understanding of communication link properties. The benefits of sensor-aided prediction for direct-link vehicular communications were highlighted in [5]. To the best of our knowledge, existing works do not address the possibility of applying sensor-based vehicular prediction algorithms to multi-hop link quality forecasting and end-to-end path selection in highly dynamic vehicular networks.
In the current work we extend the concept of sensor-aided predictive communications, originally developed for a direct link [5], to multi-hop V2V scenarios. The key contributions of this paper are:
• We evaluate the benefit of sensor-based predictive communications in vehicular multi-hop link-quality-estimation tasks,
• We present different ways in which the concept of predictive communication can be applied to multi-hop V2V,
• We verify the applicability of the proposed approach via simulation in a selected vehicular scenario with realistic sensor and environment properties.
II. THE PROBLEM OF CSI AGING
In this section we highlight the problem of CSI aging in direct-link and multi-hop vehicular communications.
Let us consider a multi-hop decode-and-forward V2V communication environment. Due to the high dynamics of vehicles surrounded by other objects, the communication link between two partners experiences time-varying large- and small-scale fading effects. We also assume each relay employs single-carrier frequency-division multiplexing (SC-FDM) with the LTE-A uplink reference symbol structure and a zero-forcing (ZF) equalization strategy [6].
A. Single-hop CSI aging
Given the aforementioned assumptions, the received symbol vector $\mathbf{y}_k$ at subcarrier $k$ can be expressed as [7]
$$\mathbf{y}_k = \mathbf{H}_{k,k}\,\mathbf{W}_k\,\mathbf{s}_k + \sum_{k' \neq k} \mathbf{H}_{k,k'}\,\mathbf{W}_{k'}\,\mathbf{s}_{k'} + \mathbf{n}, \quad (1)$$
where $\mathbf{H}_{k,k'} \in \mathbb{C}^{N_r \times N_t}$ is the channel matrix for $N_t$ transmit and $N_r$ receive antennas between the $k$-th and $k'$-th subcarriers, $\mathbf{W}_k \in \mathbb{C}^{N_t \times N_l}$ is an $N_l$-layer precoding matrix at the $k$-th subcarrier, and $\mathbf{s}_k \in \mathbb{C}^{N_l}$ is a vector of data symbols with $\sigma_s^2$ being the average power transmitted on each layer. The component $\sum_{k' \neq k} \mathbf{H}_{k,k'}\mathbf{W}_{k'}\mathbf{s}_{k'}$ represents the impact of the so-called inter-carrier interference (ICI) with power $\sigma_{ICI}^2$, and $\mathbf{n} \in \mathbb{C}^{N_r}$ denotes zero-mean additive white Gaussian noise with variance $\sigma_n^2$ on antenna $n_r$. Since the pilot symbols are separated in time, the time-varying nature of the channel results in an inevitable channel estimation error between the true channel $\mathbf{H}$ and the estimated channel $\hat{\mathbf{H}}$ at the data positions:
$$\hat{\mathbf{H}}_{k,k} = \mathbf{H}_{k,k} + \mathbf{E}_k, \quad (2)$$
with $\sigma_e^2$ being the mean squared error (MSE) of each element of the error matrix $\mathbf{E}_k$. Inserting (1) into (2) and denoting $\mathbf{e}_{s_k} = \mathbf{s}_k - \hat{\mathbf{s}}_k$, where $\hat{\mathbf{s}}_k$ is the ZF estimate of $\mathbf{s}_k$, we compute the layer-dependent symbol estimation MSE covariance matrix [7]:
$$\mathbb{E}\{\mathbf{e}_{s_k}\mathbf{e}_{s_k}^H\} = \left(\sigma_e^2\,\sigma_d^2 + \sigma_{ICI}^2 + \sigma_n^2\right)\hat{\boldsymbol{\Psi}}, \quad (3)$$
where $\sigma_d$ is the data symbol power and $\hat{\boldsymbol{\Psi}} \in \mathbb{C}^{N_l \times N_l}$ is the inverse of the squared estimated effective channel. From (3), the following estimated signal-to-interference-plus-noise ratio (SINR) at the $m$-th layer is defined [6]:
$$\hat{\gamma}_m = \frac{\sigma_d^2}{\left(\sigma_e^2\,\sigma_d^2 + \sigma_{ICI}^2 + \sigma_n^2\right)\hat{\psi}_m}, \quad (4)$$
with $\hat{\psi}_m$ being the $(m, m)$-th element of $\hat{\boldsymbol{\Psi}}$.
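A toy numerical reading of (4), as reconstructed above: with illustrative (assumed) power values and ZF factor, the estimated per-layer SINR follows directly.

```python
# Numeric toy example of the per-layer SINR estimate in (4); all values
# are illustrative assumptions, not values from the paper.
import math

sigma_d2 = 1.0      # data symbol power
sigma_e2 = 0.01     # channel estimation MSE per element
sigma_ici2 = 0.02   # inter-carrier interference power
sigma_n2 = 0.05     # noise power
psi_m = 1.8         # (m, m)-th element of the inverse squared eff. channel

gamma_m = sigma_d2 / ((sigma_e2 * sigma_d2 + sigma_ici2 + sigma_n2) * psi_m)
print(f"estimated SINR at layer m: {10 * math.log10(gamma_m):.1f} dB")
```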
Additionally, we assume that precoded symbols are transmitted in blocks using a set of $M$ modulation and coding schemes (MCS), $C = \{C_1, C_2, \dots, C_M\}$, characterized by SINR thresholds $S$ and spectral efficiencies $R$. A block is assumed to be transmitted efficiently if the estimated SINR $\hat{\gamma}_m$ exceeds the threshold $S_j$ of the selected MCS index $j$.
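The threshold-based MCS selection just described can be sketched as follows; the threshold and rate values are illustrative assumptions, not taken from the LTE-A tables the paper relies on.

```python
# Pick the highest-rate MCS whose SINR threshold the estimate still exceeds.
def select_mcs(sinr_db, thresholds_db, rates):
    """Return (index, rate) of the best MCS supported at sinr_db."""
    best = None
    for j, (s_j, r_j) in enumerate(zip(thresholds_db, rates)):
        if sinr_db >= s_j and (best is None or r_j > rates[best]):
            best = j
    return (best, rates[best]) if best is not None else (None, 0.0)

# Example MCS ladder (illustrative numbers only).
S = [0.0, 6.0, 12.0, 18.0]       # SINR thresholds in dB
R = [0.5, 1.0, 2.0, 4.0]         # spectral efficiencies in bit/s/Hz
print(select_mcs(10.3, S, R))    # -> (1, 1.0)
```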
Due to channel aging, the actual link quality may deviate from the applied MCS index, which may be underestimated ($\hat{j} < j$) or overestimated ($\hat{j} > j$). In the latter case the link will not be able to transmit data successfully and will need additional time for failure detection, MCS reconfiguration, and data retransmission. As a result, in both cases the spectral efficiency of the link becomes underutilized.
B. Multi-hop CSI aging
The problem of delayed channel feedback is even more significant in dynamic multi-hop vehicular environments.
To highlight the problem, let us consider a SISO-based multi-hop communication in which end-to-end link selection and resource allocation are performed at the node initiating the transmission, denoted as A. Each node is capable of signaling pilot symbols for channel quality estimation only within its scheduled resource blocks. In this case the end-to-end multi-hop feedback delay has two contributions. On the one hand, estimation of reliable CSI feedback at each link takes time:
$$d_{\text{est}} = k_{\text{avg}}\, d_{\text{ps}}, \quad (5)$$
where $d_{\text{ps}}$ is the time interval between two consecutive pilot symbols and $k_{\text{avg}}$ is the number of consecutive channel estimations needed to obtain reliable channel feedback.
On the other hand, the maximum network-relevant delay $d_{nr,\text{est}}(N)$ between a specific node $N$ and the decision-making node A is caused by the periodicity $d_{\text{pFB}}$ of feedback reports to the decision-making node and by the time per hop $i_N$ to deliver ($d_{d,i_N}$) and process ($d_{pr,i_N}$) the channel-relevant information over the $I_N$ hops between A and $N$:
$$d_{nr,\text{est}}(N) = d_{\text{est}} + d_{\text{pFB}} + \sum_{i_N=1}^{I_N} \left( d_{d,i_N} + d_{pr,i_N} \right). \quad (6)$$
Finally, the vector of maximum feedback delays for the shortest paths between the decision-making node and each of the $L$ nodes of interest depends on the vectors of scheduled feedback periodicities $\mathbf{d}_{\text{pFB}}$ and multi-hop feedback delivery times:
$$\mathbf{d}_L = \left[ d_{nr,\text{est}}(1), \dots, d_{nr,\text{est}}(L) \right]^T. \quad (7)$$
After the required CSI information has been received by the decision-making node, the path selection and resource allocation process is affected by the feedback delays $\mathbf{d}_L$.
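A small sketch of the delay model in (5) and (6), with illustrative per-hop delays, shows how the CSI age at the decision-making node accumulates over a two-hop path.

```python
# Multi-hop feedback delay model of (5)-(6); names mirror the text,
# numeric values are illustrative.
def d_est(k_avg, d_ps):
    """(5): time to obtain a reliable per-link CSI estimate."""
    return k_avg * d_ps

def d_nr_est(d_est_n, d_pfb, d_d, d_pr):
    """(6): worst-case age of node N's CSI at the decision-making node A,
    accumulated over the I_N hops between A and N."""
    return d_est_n + d_pfb + sum(dd + dp for dd, dp in zip(d_d, d_pr))

# Two-hop example: A <- relay <- N
age = d_nr_est(d_est(k_avg=4, d_ps=0.5e-3), d_pfb=10e-3,
               d_d=[1e-3, 1e-3], d_pr=[0.2e-3, 0.2e-3])
print(f"CSI age at A: {age * 1e3:.1f} ms")
```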
In contrast to path selection and resource allocation, the efficiency of the actual end-to-end data transmission at any given link $B$ between node $N$ and its direct neighbor is also affected by the maximum delay $d_{nr,\text{est}}(N)$ to deliver and process CSI feedback from $N$ to A, plus the additional delay to deliver and process data from A to $N$ over the $I_N$ links:
$$d_{nr,\text{data}}(B) = d_{nr,\text{est}}(N) + \sum_{i_N=1}^{I_N} \left( d_{d,i_N} + d_{pr,i_N} \right). \quad (8)$$
Finally, the dynamics of the SINR aging at a given link $B$ of a fixed path, compared to the actual SINR $\gamma(B)$, is
$$\Delta\gamma(B, t) = \gamma(B, t) - \hat{\gamma}\!\left(B,\, t - d_{nr,\text{data}}(B)\right). \quad (9)$$
Based on (4), (7) and (8), the resulting impact of CSI aging on end-to-end link efficiency in a multi-hop network is characterized by the following parameters:
• the applied channel estimation techniques,
• the relation between the dynamics of the link quality change and the feedback delay,
• the statistical distribution of the feedback delays,
• the applied method of feedback delivery,
• the ability to predict the expected CSI state.
III. SENSOR-BASED PREDICTIVE COMMUNICATIONS
The need for channel prediction to compensate for feedback delay is a known problem [3]. Existing approaches collect multiple past channel estimates to predict future channel conditions [8]. Nevertheless, most of them are application- or scenario-limited. For example, the performance of spline interpolation and averaging highly depends on the dynamics of the environment, while historical averaging fails completely in dynamic environments with direct-link communications [8].
A. Sensor-based prediction in V2V communications
Recently it has been shown that the ability of vehicles to collect information about their surroundings via on-board sensors brings unique context-aware advantages for direct V2V communications [5]. In this paper we extend the sensor-based prediction scheme to multi-hop vehicular communication according to Fig. 1 and evaluate the corresponding applicability regions.
Let us assume a sensor-equipped vehicle capable of fusing available sensor data, processing it, and conducting feature extraction. This information can be transferred to the communication system, which thus learns the vehicle's position and dynamics and the presence of objects with distinct properties, enabling estimation of expected environmental states. Since the exchange of local information about the environment is an inherent property of V2V communications, we assume that each vehicle can obtain the dynamic and static properties of the surrounding environment, including other vehicles, with the precision of available off-the-shelf sensor equipment.
Let us further assume a V2V network with M vehicles involved in multi-hop communication tasks, as shown in Fig. 2. We further assume that the information about the environment is periodically exchanged among the vehicles. Finally, at the time instant t the following processed information will be available at the decision-making node:
• a vector of M estimated positions and directions p_vh, combined with the absolute velocities v_dv of each vehicle,
• a matrix of estimated positions P_Ô and estimated dimensions D_Ô of the Ô detected scattering objects,
• a vector of delayed CSI feedbacks f_CSI(γ) from the L scheduled links between the M nodes,
• vectors of the corresponding feedback delays d_L and data transmission delays d_nr,data(N) defined in (7) and (8),
• a vector of confidence intervals for the received feedback parameters, which depend on the properties of the detected objects.
Based on this information, a range of predictive algorithms can be realized at the decision-making node by solving an optimization problem and applying a proper constraint. To mention a few: minimization of the multi-hop CSI-aging impact, minimization of the required feedback over the network, selection of the most efficient or stable end-to-end path, etc.
In the current work we limit our scope to applying predictive multi-hop communication to minimize the impact of CSI aging on the end-to-end data transmission efficiency between the decision-making node A and the node D.
Let us define the optimal efficiency at time t of the end-to-end transmission as the spectral efficiency R_j,Dw of the weakest link D_w, between nodes D_w−1 and D_w, which has the highest SINR out of all weakest links in all possible paths between A and D. As discussed in Section II, the actually selected spectral efficiency for each link deviates from the optimal one due to multi-hop CSI aging. Then, if the end-to-end path was optimally selected, and following (8), the non-predictive approach according to Section II-B results in the SINR mismatch

Δγ(D_w) = a_γ(D_w) d_nr,data(D_w).

In contrast, for predictive communication the following algorithm is applied at the decision-making node, see Fig. 1:
1) based on the available information about the environment, such as p_vh, v_dv, P_Ô, D_Ô, and the received delayed CSI feedbacks from each link of interest, predict the expected CSI states and confidence intervals for every link d out of L at the time of potential link use,
2) find the optimal end-to-end path from A to D given the predicted CSI states and confidence intervals,
3) start transmission and, if applicable, adjust the feedback periodicity requests d_pFB,pred(D_w) = d_pFB · d_α(Dw) according to the coefficient d_α(Dw), which depends on the prediction deviation parameter α(D_w),
4) adjust the confidence intervals based on the new feedback.
Now the predictive SINR mismatch at the weakest link D_w depends on the prediction deviation per unit of time α(D_w) and the time d_pred(D_w) to reach the link D_w, which is similar to the non-predictive d_nr,data(D) case but has the variable feedback periodicity parameter d_pFB,pred(D_w) instead of the fixed d_pFB(D):

Δγ_prd(D_w) = α(D_w) d_pred(D_w).   (11)

Plugging (8) into (11), the expanded form will be

Δγ_prd(D_w) = α(D_w) (d_pFB,pred(D_w) + Σ_{i_Dw=1}^{I_Dw} (d_d,iDw + d_pr,iDw)).   (12)
As can be seen from (12), the sensor-based predictive approach differs by the prediction deviation parameter α(D_w) and by the periodicity coefficient d_α(Dw).
IV. SIMULATION AND RESULTS
In this section we define the scenario of interest, describe the simulation setup, and present and discuss the obtained results.
A. Scenario definition
In order to conduct simulations, we first consider an oversimplified configuration with only four vehicles involved in the communication, denoted as A, B, C, D, as shown in Fig. 2. We assume vehicle A to be the source, which collects the CSI feedback, and vehicle D the sink of the multi-hop transmission, while B and C are possible relays. Further, we assume only vehicle D to have a non-zero velocity of v_D = 55 m/s, and we limit our setup to a maximum allowed number of two hops. We also assume that over the simulation time period the moving vehicle passes next to the obstacle, which results in a sharp NLOS-to-LOS transition for the B-D link.
B. Simulation setup
The simulation was conducted in two phases. First, the link-level channel mismatch power levels σ²_e discussed in Section II-A were estimated for a typical range of absolute and relative velocities and typical link-level channel conditions for the selected scenario, namely 'Highway: free flow' [9]. The initial simulations were performed in a Matlab-based V2V link-level simulation environment, which is developed from the Vienna LTE-A uplink simulator [10]. For this purpose a set of changes was introduced to reflect the key properties of the direct-link V2V environment; for the link-level implementation details and simulation setup, see [5]. For all simulation scenarios we consider a 5.9 GHz carrier frequency, which results in the most challenging Doppler effects among the widely used frequencies for V2V communications. Besides these modifications, we estimated the impact of Doppler effects on the SINR based on the available sensor information and sensor accuracy [5].
After the link-level properties for the selected scenario were obtained, the results in terms of estimated mean squared error (MSE) powers σ²_e per Doppler shift, Rician K-factor and sensor accuracy were plugged into a specifically developed large-scale multi-hop simulation environment. At this point the evolution of the large-scale parameters for each object was found. The distance-dependent pathloss function for the V2V Highway scenario defined in [11] was coupled with the impact of the three largest Fresnel zones. Then the correlation properties of the slow fading process were coupled with the selected scenario properties and with the evolution of each link over space according to [12]. The prediction-relevant properties, such as velocity, positioning and object recognition error parameters, are derived from existing sensor-fusion solutions, available in [13]. Modifications of the simulator and key parameters of both simulation setups are summarized in Table I.
C. Simulation results
To highlight the benefit of predictive algorithms and to understand the regions of their applicability, in the first simulation setup we evaluate the network efficiency loss in terms of the SINR mismatch between the actual value and the value available at node A via delayed CSI feedback of the weakest link, over different levels of channel variations a_γ. For this purpose we fix the multi-hop path to be A-B-D and vary the steepness of the signal attenuation change by changing the maximum NLOS attenuation level of the scatterer.
Two different configurations of CSI-feedback updates were considered in this setup: (a) non-predictive periodic CSI feedback with maximum feedback delays set to d_pFB = [7, 27] ms; (b) predictive periodic CSI feedback with positioning inaccuracy modeled by a Gaussian process with deviation set to σ = [0.25, 0.5] m. In the latter case, the predicted steepness of the SINR change for the NLOS-LOS transition of the weakest link is found based on the difference between the actual SINR level in the NLOS region and the expected SINR at the time of LOS. The predicted SINR for the given link, as well as the start and end of the NLOS-LOS transition, is estimated based on the sensor-aided information about the environment available at the vehicles, as discussed in Section III-A. The results of this simulation setup are shown in Fig. 3. The predictive algorithm shows superior performance compared to the non-predictive algorithm. Additionally, it can be seen that the position uncertainty of the detected objects (vehicles and obstructing objects) influences the efficiency of the predictive approach, while an increase in channel variations deteriorates the performance of both algorithms.
In the second simulation setup we demonstrate how the predicted information about the future CSI at each link reduces the number of feedback messages required to limit the SINR mismatch. For this purpose we consider a non-predictive case with periodic CSI feedback, as in the first simulation setup, and a predictive approach with adaptive periodicity of the CSI-feedback generation d_pFB,pred, as described in Section III-A. First, the SINR mismatch at different levels of SINR variation rates is calculated for various possible feedback intervals, as described in the previous simulation setup. Then a tolerable threshold is arbitrarily selected based on the allowed mean SINR mismatch. Finally, the predicted feedback periodicity is obtained from the intersection of this threshold with one of the feedback periodicity curves at the position of the expected SINR variation rate. Fig. 3 illustrates the predictive selection of d_pFB,pred = {7, 27} ms for an arbitrary threshold of 2 dB (which approximately corresponds to one MCS step). The simulation results presented in Fig. 4 show that predictive algorithms are capable of significantly reducing the number of CSI-feedback messages over a wide range of transition regions compared to non-predictive periodic CSI feedback.
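The periodicity selection can be sketched in a few lines of Python (the mismatch table and threshold below are made-up values, not the simulated data): among the candidate feedback intervals, pick the largest one whose mean SINR mismatch at the expected variation rate stays below the tolerable threshold.

```python
# Illustrative values only; a real implementation would read these
# curves from the link-level simulation results.
def select_feedback_period(mismatch_db_by_period_ms, threshold_db):
    """Return the largest feedback period whose mean SINR mismatch at the
    expected SINR variation rate stays within the tolerable threshold."""
    feasible = [p for p, m in mismatch_db_by_period_ms.items() if m <= threshold_db]
    return max(feasible) if feasible else min(mismatch_db_by_period_ms)

table = {7: 0.8, 13: 1.6, 27: 2.9}          # {period in ms: mismatch in dB}
print(select_feedback_period(table, 2.0))   # -> 13 for a 2 dB threshold
```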
V. CONCLUSIONS AND FUTURE WORK
We presented the concept of vehicle sensor-aided predictive link quality estimation for multi-hop V2V communications in dynamic environments. The obtained results show the applicability of the proposed concept in multi-hop V2V networks, where nodes are capable of obtaining and exchanging information about the surrounding environment. In future work, other road scenarios will be investigated. Besides this, further analysis of the impact of scattering reflections and of the number of nodes on sensor-aided prediction in multi-hop transmission will be done.
Fig. 3. Impact of the link variation rate on the SINR mismatch when the multi-hop feedback is delayed. The link variation rate is defined as the SINR variation rate a_γ of the weakest link in the multi-hop setup.
Fig. 4. Average number of feedback messages during the NLOS-LOS transition. The transition region is defined by the range of attenuation change, which is the difference between the maximum and minimum attenuation over the transition process.
Enhancement of Photodetector Responsivity in Standard SOI CMOS Processes by introducing Resonant Grating Structures
A new photodetector concept is described, which is fully compatible with the standard SOI CMOS process and does not require any postprocessing steps. Our simulations are based on two-dimensional RCWA (Rigorous Coupled Wave Analysis) and local absorption theory (K.-H. Brenner, "Aspects for calculating local absorption with the rigorous coupled-wave method" Optics Express 2010, Vol. 18, Iss. 10, pp. 10369-10376, (2010)). The simulations show that optimized lateral grating structures are able to enhance the absorption efficiency of thin semi-conductor detectors by a factor of 32 compared to non-enhanced approaches. [DOI: 10.2971/jeos.2011.11014s]
INTRODUCTION
In order to optimise the design of photodetectors, photovoltaic elements or photolithographic setups, it is a common procedure to simulate intensity distributions. For this purpose, tools based on various theories are used, such as scalar diffraction theories and rigorous methods in the time or modal domain. While all these theories provide sufficiently exact results for the intensity, they do not express the relevant quantity for describing detector performance. For detector responsivity, the important quantity is the absorbed power in a given volume element. A common way to describe absorption is based on the Lambert-Beer law, which only applies to homogeneous volumes.
Based on RCWA (Rigorous Coupled Wave Analysis), which provides the electromagnetic field distribution for structured volumes, a method to calculate local absorption was presented in [1]. In the following, this method is called the local absorption theory. Local absorption has also been calculated before [2] by taking the divergence of the Poynting vector.
As an application example, the present paper demonstrates a new concept of a photodetector design [3], which is based on simulation results with the local absorption theory. This photodetector is designed for a wavelength of 850 nm and is fully compatible with the standard SOI CMOS process, eliminating the need for any post-processing steps. Using optimised lateral grating structures, it was accomplished to exceed global absorption values of 68% in Silicon layers of only 100 nm thickness, while the non-enhanced absorption only reaches 2%. Unlike resonant-cavity-enhanced photonic devices [4]-[6], where light is reflected as in a Fabry-Perot type resonator, our lateral grating structures have been optimised for a certain wavelength to minimise Fresnel reflections and form resonant structures. Thereby the absorption can be enhanced due to diffraction effects that can concentrate light in a desired region. This mechanism alone results in a significant increase of responsivity for the detector. Being compatible with SOI CMOS implies that this detector design can use the same materials and manufacturing steps as the CMOS technology itself. Thus no further process development is needed. Moreover, the fabrication process ends up with a very compact design.
LOCAL ABSORPTION THEORY
We assume that the electromagnetic field distribution of a considered volume is known from RCWA calculations or similar methods. Traditionally, the RCWA provides diffraction order efficiencies for reflection and transmission. A value for the global absorption is only obtained by applying the principle of energy conservation: it is calculated by subtracting the sum of all reflection and transmission efficiencies from unity,

A_global = 1 − Σ_m (R_m + T_m).   (3)

It was shown in [1] that the relative absorbed power can be expressed as

P_abs/P_inc = k_0/(A k_i,z) ∫_V Im(ε(r)) |E(r)|² dV.   (4)

The formula describes the ratio of absorbed power to incident power for illumination with a plane wave of wave number k_0. A is the illuminated area and the incident direction is expressed by the z-component of the incident k-vector, k_i,z.
The absorbed power in a volume element dV is mainly determined by the imaginary part of epsilon and the relative field intensity. This quantity is the response to an incident illumination amplitude of 1 and is readily available from almost all rigorous simulation tools.
After discretisation, the volume integral can be approximated by Riemann sums:

P_abs/P_inc ≈ k_0/(A k_i,z) Σ_{j,k,l} Im(ε_{j,k,l}) |E_{j,k,l}|² δx_j δy_k δz_l.   (5)

In this form all terms of the equation are either input parameters or results from the RCWA, and every single term under the Riemann sum represents the local power absorbed in a volume element (j, k, l) of size δx_j δy_k δz_l. The summation for j extends from 0 to NX − 1, for k from 0 to NY − 1 and for l from 0 to NZ − 1. Further in this article we will refer to this summation result as the integrated absorption. Since the theory starts with the law of energy conservation and is exact, the integrated absorption theoretically should be equal to the global absorption (cf. Eq. (3)) of the RCWA. Expression (5) can easily be implemented in RCWA algorithms and underlies all simulations in this paper.
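As an illustration, the discretised sum in Eq. (5) is a one-liner once the field and permittivity grids are available; the Python sketch below assumes they come from an external RCWA solver and, for simplicity, uses a uniform grid:

```python
import numpy as np

def integrated_absorption(E, eps, dx, dy, dz, k0, area, kiz):
    """Riemann-sum approximation of Eq. (5).

    E: complex field on an (NX, NY, NZ) grid, normalised to unit incident
    amplitude; eps: complex relative permittivity on the same grid;
    dx, dy, dz: cell sizes; area: illuminated area A; kiz: z-component
    of the incident k-vector."""
    local = np.imag(eps) * np.abs(E) ** 2 * dx * dy * dz  # per-cell absorption
    return (k0 / (area * kiz)) * np.sum(local)
```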
PHOTODETECTOR DESIGN
Photodetectors based on semi-conductor technology are photodiodes or phototransistors. In any case, the measurable photocurrent depends on the amount of light absorbed in the depletion region. A simple analysis of Eq. (4) reveals that the requirements for high absorption values are a high imaginary part of epsilon, a high intensity and a large volume. Unfortunately, this is not what we normally get if we try to use diodes fabricated by standard SOI CMOS technology as photodiodes. In CMOS technology, the ultra-thin layer designs limit the height of the active zone and high doping levels cause narrow depletion regions (cf. Figure 1). Furthermore, the assortment of available materials is limited and for wavelengths above 850 nm, Silicon is almost transparent.
In order to enhance absorption under these circumstances, our approach utilises one of the CMOS layers as a resonance grating, which reduces Fresnel reflections and concentrates the incident light inside the depletion regions. To stay conform with the CMOS process, the grating was realised by structuring the poly-silicon layer, which usually forms the transistor gates in the layer above the active zone.
Grating period (p) and gate width (w) form a two-dimensional parameter space (cf. Figure 2), which can be scanned for absorption maxima. Figure 3 shows a parameter scan of the global absorption values (DOA) at normal incidence of TE-polarised light. The underlying algorithm is based on the RCWA according to Eq. (3). The parameter scan shows two strong absorption maxima (G1, G2) with a peak absorption of 86.15% at G1 (p = 0.275 µm and w = 0.82p).
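Such a two-dimensional scan is straightforward to script. In the sketch below, a smooth toy function (peaked near the reported G1 values) stands in for the RCWA call so that the example runs end-to-end; a real scan would replace `toy_global_absorption` with the rigorous computation:

```python
import numpy as np

def toy_global_absorption(p, w):
    # Stand-in for the RCWA-based DOA computation (illustration only):
    # a smooth peak placed near the reported G1 configuration.
    return 0.86 * np.exp(-((p - 0.275) / 0.02) ** 2 - ((w / p - 0.82) / 0.1) ** 2)

periods = np.linspace(0.20, 0.40, 81)   # grating period p in micrometres
fills = np.linspace(0.10, 1.00, 91)     # gate width as a fraction w/p

doa = np.array([[toy_global_absorption(p, f * p) for f in fills] for p in periods])
i, j = np.unravel_index(np.argmax(doa), doa.shape)
print(f"peak DOA {doa[i, j]:.2%} at p = {periods[i]:.3f} um, w = {fills[j]:.2f} p")
```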
However, from this picture it cannot be determined how much light is absorbed in each layer. Thus it cannot be ruled out that the maximum of the global absorption and the absorption maximum of the active layer may not match. Figures 4 and 5 show exactly this phenomenon, which will be discussed in the following.
The same parameter scan is made for local absorption in the grating and in the active layer separately. It reveals that in the active layer, at parameter configuration G1 (= L1), the local absorption is only 59.6%, whereas a different parameter configuration (L2) reaches a local absorption maximum of 68.28% (cf. Figure 5). The selection of the peak absorption from the parameter space has optimised the intensity as well as the local absorption to be maximal in the active zone. Moreover, we were fortunate that the absorption maxima appear right under the gates in the depletion region. This should be beneficial for the generation of the photocurrent. Compared to the non-enhanced absorption, reaching only a value of 2.14% (which can be taken e.g. from Figure 5 at w = 1), the enhanced absorption of 68.28% means an increase of efficiency by a factor of 32! In Figure 6 the detailed intensity distribution and the local absorption in a two-dimensional (x,z) cross-sectional map of the grating are shown for L2. It visualises the effect of the grating optimisation, as it concentrates intensity in the active zone and thereby increases the local absorption in the depletion region.
For fabrication, tolerances are important characteristics. As shown in Figure 7 the manufacturing requirements for the grating period are much higher than for the gate widths. This actually works in our favour, because the accuracy of the grating period is only dependent on the lithographic precision, whereas the precision of the gate width depends on development chemistry, etch rates etc. and is thus less accurate.
Similar considerations apply to TM-polarisation, although in this case the absorption maxima G1 and G2 are shifted towards larger periods. At parameter configuration G1 the global absorption is just slightly lower than in the TE case (cf. G1: 83.15% in Figure 8 and G1: 86.15% in Figure 3), but distributed equally among the two layers at L1. At parameter configuration G2 the resonantly enhanced absorption almost disappears in the grating (cf. L2: 13.03% in Figure 9) and only concentrates in the active channel (cf. L2: 43.6% in Figure 10). However, in the TM case both L1 and L2 fall short of the TE case (cf. L1: 68.28% in Figure 4 and L1: 41.6%, L2: 13.03% in Figure 9). Figure 11 illustrates that the intensity maximum for L1 in TM-polarisation is located in the Si3N4 region and outside the structure. Moreover, the local absorption does not show the nice symmetric shape and centered position under the gate as in the TE case.
CONCLUSION
In this paper we have demonstrated an efficient photodetector concept with enhanced responsivity based on laterally resonant grating structures. The fabrication is fully compatible with the SOI CMOS process. With this approach, we can utilise the existing layers of the standard CMOS fabrication process. Due to the resonant enhancement, sufficient photocurrent generation can also be achieved in thin layers, resulting in faster photodiodes. By applying the recently developed theory of local absorption [1], we were able to optimise the absorption inside the depletion region of the diodes. The optimisations were done for 850 nm wavelength and a layer thickness of less than 100 nm. We investigated grating configurations for TE-polarisation as well as for TM-polarisation. Thereby we discovered that a TM-optimised grating does not qualify for an improved photodetector. In the case of a TE-optimised grating though, we have achieved an enhancement of efficiency by a factor of 32. Beyond that, the resulting grating offers a good local concentration of intensity and absorption, which is ideal for a photodetector.
The role of clinical medical physicists in the future: Quality, safety, technology implementation, and enhanced direct patient care
A few months ago, this journal published an editorial by Pawlicki and Mundt titled “Continued emphasis on quality and safety jeopardizes clinical medical physics careers in radiation oncology: What can be done about it?”. We were surprised by the position of the authors (“...the continued emphasis on quality and safety is likely a threat to the long‐term viability of clinical medical physicists in radiation oncology.”) Both authors have been staunch and effective advocates for quality and safety in radiation oncology. They are both among the authors of the book “Quality and Safety in Radiotherapy” and have published extensively on this topic. What could cause them to think clinical medical physicists should not focus on quality and safety? The answer is found deep in the editorial, following a summary of the authors' impression of the routine tasks performed by oncology medical physicists as well as the associated value. We begin by offering an alternative understanding of the clinical contributions of the oncology medical physicist and then address the proposals of Pawlicki and Mundt for a revised path for the future of our profession. We find much to agree with in their editorial, but believe the substance of the arguments to be fundamentally flawed, without the likelihood of either adoption, or success if adopted. Let us look at some of the activities of clinical medical physicists in radiation oncology, following the outline of Pawlicki and Mundt. Responsibilities such as equipment calibration and quality assurance (QA), patient‐specific QA, treatment plan checks, weekly chart checks, and patient‐specific measurements are characterized as “routine” and “checking” activities not generally requiring the expertise of the medical physicist. We disagree. Tasks performed over and over, day after day are correctly described as “routine,” a description that refers to their temporal repetition but should not be taken to denigrate the intellectual effort and expertise that is used for each unique instance. Medical physicists should not be at risk for replacement by individuals with more limited training if we, at each of these “routine” endeavors, bring to bear the full spectrum of our training and expertise, unrivaled by anyone else on the radiation oncology team. After the fact, one could identify particular tasks or steps for a particular patient that could have been undertaken by another individual on the team, but it is not possible to identify, a priori, all the possible failure modes that a properly educated and trained medical physicist might identify in a sequence of technical or clinical reviews. It is the ability of the medical physicist to integrate the physical, anatomic, and clinical aspects of each individual patient treatment sequence that brings value to the process. We agree with Pawlicki and Mundt that automation of QA, increased reliability of equipment, etc., will certainly make much of the work we currently perform less time‐consuming or even unnecessary. This is not a new phenomenon in our profession. One of us can remember digitizing patient contours (taken by solder wire) by typing in hundreds of (x,y) coordinates into a teletype console to be used by a remote radiation therapy planning system. Current technologies allow us to enter spatial (and density) information with millions more data points in a few seconds. Do we now have hours left over with nothing to do? 
No, we spend even more time with tasks such as preparing stereotactic isodose plans, consulting with radiation oncologists, neurosurgeons, pulmonologists, and others on the details of the proposed treatments and then implementing them on the treatment machines. Earlier efforts at performing the dosimetric plan verification associated with intensity modulated radiation therapy (IMRT) involved a very lengthy process using film for each of 6-9 fields, scanning densitometers, H&D corrections, rigid processor quality control, and image analysis. Fast-forward to today, we have diode- and EPID-based techniques, but our profession continues to advance in value. An increase in complexity and automation often leads to an increase in the need for professional expertise for the big picture, not a decrease. We will need to continue to adapt "routine" QA to meet changing technology. New techniques for new imaging, planning, and delivery systems will be necessary and will require the efforts of medical physicists to design, test, and implement these procedures. While others will do some tests, we will perform some ourselves and personally review and analyze the entire spectrum of testing, assisted by automated techniques. QA tasks and benchmarks for existing technology will be adjusted in light of current design and clinical implementation. Some additional, specific examples: One of the routine duties for clinical medical physicists is to check patient-specific charts. This includes both plan review prior to the first treatment and ongoing weekly checks. We believe these activities are very important for the quality and safety of patient care.
On many occasions, physicists find issues with the treatment plan, such as a missing couch or an omitted density overwrite for a contrast agent. Such catches can certainly improve the quality of the treatment plan through more accurate dose calculations. In less frequent cases, physicists may detect even bigger errors, like a wrong image set used for planning, which otherwise could jeopardize the patient's safety. These independent plan reviews are critical to the quality and safety of patient care. The literature has shown that many potential errors are caught during the initial and ongoing chart review by clinical medical physicists. 5 Some may argue that these plan review activities can be replaced by automation and artificial intelligence (AI), and that the role of clinical medical physicists therefore becomes less important for the quality and safety of patient care. Indeed, automation and AI are increasingly adopted in radiation oncology for tasks such as contouring, autoplanning, and plan checks. We do not believe, however, that automation would replace clinical medical physicists, even for some of the simple tasks that computers can perform. Oftentimes, the task needs to be customized and modified by clinical medical physicists. After all, every patient is unique and a generic algorithm may not provide a one-size-fits-all solution. More importantly, when potential issues are identified for a specific plan, it is clinical medical physicists rather than computers who communicate with other team members such as physicians, therapists and dosimetrists to reach a clinically acceptable solution.
Clinical medical physicists also play a very important role when a treatment machine is down or has issues. Physicists need to work with service engineers to diagnose the problem and develop a repair plan. As another example, the physicist's involvement is essential for gated SBRT treatments of liver cancer using the Calypso technology. 6 Many steps are required to interact among the various control systems, and the clinical medical physicist is essentially the "orchestra director" for the entire treatment delivery team.
As to other, clinically based changes to the practice of medical physics, we agree wholeheartedly with Pawlicki and Mundt that "Physicists think differently than medically trained healthcare professionals such as radiation oncologists or nurses." (It is this proclivity for a different mode of thinking and possession of a different set of technical and scientific skills that brings value to the medical physics endeavor in the clinic.) They follow this thought with the unsupported assertion that "Unfortunately, the physicist's unique perspective is not being utilized because most of their time is spent checking the work of others." In our experience, that is not the case.
Even those activities that have a "checking" component, such as isodose plan review or weekly chart checks, also involve an analysis of the application of technology to the care process for an individual patient. We also agree that patients will benefit from increased patient contact and interaction; the future success of the medical physics profession will depend on this. While the logistics and practicality of assigning each patient a single medical physicist for the duration of their treatment seem both daunting and of dubious value, having the medical physicist interact directly with patients on a daily basis is common in our practice experience, particularly in brachytherapy, stereotactic procedures, isotope therapy, and the very common occurrence of solving set-up problems in the treatment room. One of us has, on more than one occasion in a social setting, encountered a former patient and been introduced to others as "my medical physicist." We strongly support the call for increased training in clinical areas for medical physicists. Pawlicki and Mundt raise the possibility of medical physicists performing target volume delineation, a task transfer that is unlikely to benefit patients absent a multiyear addition to the medical physics training process to include serious preparation in anatomy as well as in physiology, pathology, general oncology, molecular, magnetic and radiographic imaging, medical and surgical oncology, chemical and genetic markers… the list goes on. They make the further suggestion that medical physics could engage in treatment plan approval; but we note that in our clinics the medical physicist must approve every treatment plan prior to administration. We see no patient benefit to removing the physician from the approval process.
Radiation oncology is a technology-driven specialty and the clinical medical physicist is crucial to implementing new technologies for patient care. Over the past three decades, several new technologies (e.g., IMRT, IGRT, SBRT, proton therapy) have emerged and have become widely adopted in radiation oncology. It is worth noting that the need for clinical medical physicists has increased tremendously; for example, the number of American Association of Physicists in Medicine (AAPM) members has tripled during the same period of time. 7 We look forward to the implementation of a different, more clinically oriented training program for medical physicists that will lead to an increase in direct contact between the patient and the medical physicist. Many of us are experiencing this in our clinics today and are hopeful that this practice pattern will continue to proliferate. We see no reason to denigrate the importance of the quality and safety work of the medical physicist in achieving this goal.
ACKNOWLEDGMENT
We thank Editor-in-Chief Michael Mills and Associate Editors-in-Chief Timothy Solberg and Per Halvorsen for their invitation, comments, and proofreading.
The Vector-like Twin Higgs
We present a version of the twin Higgs mechanism with vector-like top partners. In this setup all gauge anomalies automatically cancel, even without twin leptons. The matter content of the most minimal twin sector is therefore just two twin tops and one twin bottom. The LHC phenomenology, illustrated with two example models, is dominated by twin glueball decays, possibly in association with Higgs bosons. We further construct an explicit four-dimensional UV completion and discuss a variety of UV completions relevant for both vector-like and fraternal twin Higgs models.
Introduction
The non-observation of new physics at Run 1 of the LHC poses a sharp challenge to conventional approaches to the hierarchy problem. The challenge is particularly acute due to stringent limits on fermionic and scalar top partners, which are expected to be light in symmetry-based solutions to the hierarchy problem such as supersymmetry or compositeness. Bounds on these top partners rely not on their intrinsic couplings to the Higgs, but rather on their QCD production modes, which arise when the protective symmetries commute with Standard Model gauge interactions. However, the situation can be radically altered when approximate or exact discrete symmetries play a role in protecting the weak scale [1][2][3][4]. In this case the lightest states protecting the Higgs can be partially or entirely neutral under the Standard Model, circumventing existing searches while giving rise to entirely new signs of naturalness.
The twin Higgs [1,2] is the archetypal example of a theory where discrete symmetries give rise to partner particles neutral under the Standard Model. Here the weak scale is protected by a Z_2 symmetry relating the Standard Model to a mirror copy; the discrete symmetry may be exact or a residual of more complicated dynamics [3][4][5][6][7]. In the twin Higgs and its relatives, both the Standard Model and the twin sector are chiral, with fermions obtaining mass only after spontaneous symmetry breaking. If the Z_2 symmetry is exact, this fixes the mass spectrum of the twin sector uniquely in terms of the symmetry breaking scale f. Even if the Z_2 is not exact, naturalness considerations fix the mass of the twin top quark in terms of f, while the masses of other twin fermions should be significantly lighter [8].
In this respect the twin Higgs is qualitatively different from conventional theories involving supersymmetry or continuous global symmetries, in which the masses of nearly all partner particles may be lifted by additional terms without spoiling the cancellation mechanism. This allows states irrelevant for naturalness to be kinematically decoupled, as in the paradigm of natural SUSY [9,10]. As we will show, the cancellation mechanism of the twin Higgs is not spoiled by the presence of vector-like masses for fermions in the twin sector, as these mass terms represent only a soft breaking of the twin symmetry. This raises the prospect that partner fermions in the twin sector may acquire vector-like masses, significantly altering the phenomenology of (and constraints on) twin theories. Moreover due to the vector-like nature of the twin fermions, twin leptons are no longer needed to cancel the gauge anomalies in the twin sector [3]. Any tension with cosmology is therefore trivially removed.
The collider phenomenology of this class of models has a few important new features. While it resembles the 'fraternal twin Higgs' [8] (in that the 125 GeV Higgs may decay to twin hadrons with measurable branching fractions, and the decays of the twin hadrons to Standard Model particles may occur promptly or with displaced vertices), the role of the radial mode of the Higgs potential can be more dramatic than in the fraternal case. Not only are twin hadrons more often produced in radial mode decays, because of the absence of light twin leptons, but flavor-changing currents in the twin sector can also lead to a new effect: emission of on- or off-shell Higgs bosons. Searches for very rare events with one or more Higgs bosons or low-mass non-resonant bb̄ or τ+τ− pairs, generally accompanied by twin hadron decays and/or missing energy, are thus motivated by these models. Other interesting details in the twin hadron phenomenology can arise, though the search strategies just mentioned, and those appropriate for the fraternal twin Higgs, seem sufficient to cover them.
Although a vector-like spectrum of twin fermions appears compatible with the cancellation mechanism of the twin Higgs, it raises a puzzling question: what is the fundamental symmetry? A vector-like twin sector entails additional matter representations not related to the Standard Model by an obvious Z_2 exchange symmetry. In this case it is no longer obvious that the Standard Model and twin sectors share the same cutoff Λ. The vector-like spectrum also necessarily entails unequal contributions to the running of twin sector gauge couplings, so that the cancellation mechanism will be spoiled at two loops. This requires that the vector-like twin Higgs resolve into (at least) a Z_2-symmetric UV completion in the range of 5-10 TeV. The emergence of approximate IR Z_2 symmetries from more symmetric UV physics is a natural ingredient of orbifold Higgs models [3,4]. As we will see, orbifold Higgs models inspire suitable UV completions of the vector-like twin Higgs in four or more dimensions. As a by-product, we provide a straightforward way to UV complete the spectrum of the fraternal twin Higgs in [8]. Note also that a vector-like mass spectrum has a natural realization in the Holographic Twin Higgs [5], where spontaneous breaking of a bulk symmetry leads to modest masses for twin sector fermions. This paper is organized as follows: In Section 2 we introduce a toy vector-like extension of the twin Higgs and show that it protects the weak scale in much the same way as the chiral twin Higgs. In Section 3 we present a minimal example of a complete vector-like twin model, as well as a second, non-minimal model. The former is the vector-like analogue of the fraternal twin Higgs, and provides an equally minimal realization of the twin mechanism. The phenomenological implications of both models are discussed in Section 4. We address the question of fundamental symmetries in Section 5, providing both explicit 4D models inspired by dimensional deconstruction and their corresponding orbifold constructions. We conclude in Section 6. In Appendix A we include a new way to deal with hypercharge in orbifold Higgs models.
The Vector-like Twin Higgs
In this section we review the twin Higgs and introduce our generalization of it, treating the top quark and Higgs sector as a module or toy model. We will explore more complete models in section 3. In the original twin Higgs, the Standard Model is extended to include a complete mirror copy whose couplings are related to their Standard Model counterparts by a Z_2 exchange symmetry. In a linear sigma model realization of the twin Higgs, the interactions of the Higgs and the top sector take the form

−L ⊃ −m² (|H|² + |H′|²) + λ (|H|² + |H′|²)² + δ (|H|⁴ + |H′|⁴) + y_t H q u + y_t H′ q′ u′ + h.c.   (2.1)
with λ, δ > 0 and where H and q, u are the Higgs doublet and the third generation up-type quarks charged under the Standard Model gauge interactions. Similarly, the primed fields denote the twin sector analogues of these fields, charged under the twin sector gauge group. The first two terms in (2.1) respect an SU(4) global symmetry, while the remaining dimensionless terms exhibit the Z_2 symmetry exchanging the primed and unprimed fields. This Z_2 leads to radiative corrections to the quadratic action that respect the SU(4) symmetry. Indeed, a simple one-loop computation with a Z_2-symmetric cutoff Λ gives a correction to the Higgs potential of the form

ΔV = (3 y_t² Λ²/8π²) (|H|² + |H′|²).   (2.2)

The effective potential possesses the customary SU(4) symmetric form, so that a goldstone of spontaneous SU(4) breaking may remain protected against one-loop sensitivity to the cutoff. When H and H′ acquire vacuum expectation values, they spontaneously break the accidental SU(4) symmetry, giving rise to a pseudo-goldstone scalar h identified with the Standard Model-like Higgs. This pNGB is parametrically lighter than the radial mode associated with the breaking of the accidental SU(4), provided that δ ≪ λ.
Note that the potential (2.5) leads to vacuum expectation values v = v′ = f/√2. Unequal vevs, and a pNGB Higgs aligned mostly with the SM vev, can be obtained by introducing a soft Z_2-breaking mass parameter δm, such that v ≪ v′ ∼ f occurs with an O(v²/2f²) tuning of parameters. The current status of precision Higgs coupling measurements requires v/f ≲ 1/3, see for instance [11]. The sense in which twin top quarks serve as top partners is clear if we integrate out the heavy radial mode of accidental SU(4) breaking. This can be most easily done by using the identity |H|² + |H′|² = f²/2 to solve for H′. In the unitary gauge, this then gives rise to couplings between the pNGB Higgs and fermions of the form

(y_t/√2)(v + h) q u + (y_t/√2) √(f² − (v + h)²) q′ u′ + h.c.
≈ (y_t/√2)(v + h) q u + (y_t/√2) (f − v²/2f − (v/f) h + …) q′ u′ + h.c.,   (2.4)

where h is the physical Higgs boson and the trailing dots indicate v³/f³ suppressed corrections. These are precisely the couplings required to cancel the quadratic sensitivity of the pNGB Higgs to higher scales, provided the cutoff is Z_2-symmetric. The vector-like twin Higgs entails the extension of this twin sector to include fermions transforming in vector-like representations of the twin gauge group. The vector-like extension of (2.1) is then

−L ⊃ −m² (|H|² + |H′|²) + λ (|H|² + |H′|²)² + δ (|H|⁴ + |H′|⁴) + y_t H q u + y_t H′ q′ u′ + M_Q q′ q̄′ + M_U u′ ū′ + h.c.,

where we have introduced the additional fields q̄′ and ū′ that are vector-like partners of the twin tops. The generalization to multiple generations, as well as to the down-type quark and lepton sectors, is again straightforward, and is discussed in detail in the next section. Although the additional fermions and vector-like mass terms M_Q,U break the Z_2 symmetry, they do so softly and thus do not reintroduce a quadratic sensitivity to the cut-off. Quadratically divergent contributions to the Higgs potential are still proportional to an SU(4) invariant as in (2.2), assuming equal cutoffs for the two sectors.
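A quick consistency check of this cancellation can be scripted (a sketch in sympy, using the coupling structure written above): the sum of the squared top and twin-top masses, which controls the quadratically divergent one-loop correction, is independent of h. The vector-like masses only add h-independent constants to this sum, so the conclusion is unchanged.

```python
import sympy as sp

h, v, f, yt = sp.symbols('h v f y_t', positive=True)

# Fermion masses implied by the couplings above:
m_top_sq = (yt * (v + h) / sp.sqrt(2)) ** 2         # SM top
m_twin_top_sq = yt**2 * (f**2 - (v + h) ** 2) / 2   # twin top

# The Lambda^2 divergence is proportional to the sum of squared masses;
# all h-dependence cancels between the two sectors:
print(sp.simplify(m_top_sq + m_twin_top_sq))        # -> f**2*y_t**2/2
```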
There are several points worth emphasizing about this cancellation. First, note that the apparent symmetries of the vector-like twin Higgs also allow additional operators which we have not yet discussed. There are possible Yukawa couplings of the form

ỹ_t H′ q̄′ ū′ + h.c.   (2.6)

These couplings, if large, provide additional radiative corrections to the potential for H′ that would spoil the twin cancellation mechanism. While it is technically natural to have ỹ_t ≪ 1, there are also several ways of explicitly suppressing this coupling: For instance, in a supersymmetric UV completion, (2.6) is forbidden by holomorphy. Alternatively, in a (deconstructed) extra dimension there could be some geographical separation between H′ and q̄′, ū′, which would also suppress this Yukawa coupling. Finally, (2.6) can be forbidden by a PQ symmetry, which is softly broken by M_Q and M_U. In section 5 we will present an explicit UV completion which implements the first two ideas. Another set of operators, coupling the Higgs fields to the vector-like partners, can lead to a one-loop contribution to the Higgs mass of the form

δm_h² ∼ (c/16π²) M_Q².   (2.7)

In perturbative UV completions one generally expects c ∼ 1 or c ≪ 1, which renders (2.7) subleading with respect to a set of logarithmic corrections which we will discuss shortly. (In the supersymmetric UV completions we provide in section 5, c ≪ 1.) In strongly coupled UV completions it could happen that c ∼ 16π², which would require M_Q ≲ m_h. But c can be suppressed below the NDA estimate by a selection rule, or by the strong dynamics itself, as for instance through a geographical separation between H′ and q̄′ in a warped extra dimension.
Second, the additional vector-like fermions change the running of the twin sector gauge couplings, which in turn causes the twin-sector Yukawa couplings to deviate from their Standard Model counterparts. The most important effect is in the running of the QCD and twin QCD gauge couplings which, in the presence of three full generations of vector-like twin quarks (twelve Dirac flavors in the twin sector versus six in the Standard Model), take the form

β(g_3) = −(7/16π²) g_3³,   β(g_3′) = −(3/16π²) g_3′³.
The mismatch in the QCD beta functions also induces a tiny two-loop splitting between the SM and twin top Yukawa couplings at the weak scale. But the cancellation of quadratically divergent contributions to the Higgs mass is computed at the scale Λ, so that the different running of the strong gauge and Yukawa couplings causes no problem as long as the physics of the UV completion at Λ is Z_2 symmetric. This implies, at the very least, that the model must be UV completed into a manifestly Z_2 symmetric setup at a relatively low scale. Although cutoff sensitivity is still eliminated at one loop, the vector-like masses will result in log-divergent threshold corrections to the Higgs mass that must be accounted for in the tuning measure. To see these features explicitly, it is useful to again work in the low-energy effective theory obtained by integrating out the radial mode of SU(4) breaking in the twin Higgs potential. This now gives

L ⊃ (y_t/√2)(v + h) q u + (y_t/√2) √(f² − (v + h)²) q′ u′ + M_Q q′ q̄′ + M_U u′ ū′ + h.c.   (2.10)

The only difference with the conventional twin Higgs is the presence of the vector-like mass terms.

Figure 1: Diagrams correcting the pseudo-goldstone mode.

From a diagrammatic point of view, it is now easy to see that the leading
quadratic divergence exactly cancels as it does in the regular twin Higgs. Moreover, any diagrams with additional M_Q and M_U mass terms must involve at least two such insertions, which is sufficient to soften the diagram enough to make it logarithmically divergent (see Fig. 1). Concretely, this implies log-divergent contributions to the Higgs mass parameter m_h² of the form

δm_h² ∼ (y_t²/16π²) (M_Q² + M_U²) log(Λ²/M²).

Unsurprisingly, this constrains the vector masses by the requirement that the threshold corrections to m_h not be too large, meaning M_Q, M_U ≲ 450 GeV. (One may wonder if this source of Z_2 breaking could naturally generate the v ≪ f hierarchy. This is not the case, as it comes with the wrong sign; an additional source of soft Z_2 breaking therefore remains necessary.) Although the impact of a vector-like twin sector on the twin cancellation mechanism is relatively minor, the effects on phenomenology are much more radical. First and foremost, the vector-like twin top sector, as presented in this section, is anomaly free by itself and therefore constitutes the simplest possible self-consistent vector-like twin sector. In this sense it is the vector-like analogue of the fraternal twin Higgs [8], but without the need for a twin tau and twin tau neutrino. In terms of minimality, this places lepton-free vector-like twin Higgs models on comparable footing with the fraternal twin Higgs. Secondly, in the presence of multiple generations of twin quarks, the M_Q,U are promoted to matrices in flavor space. The twin flavor textures of these vector-like mass terms are not necessarily aligned with that of the Yukawas, such that one generically expects large flavor changing interactions in the twin sector, which may lead to interesting collider signatures.
Example Models
As argued in [8], naturalness of the Higgs potential allows for a substantial amount of freedom in the choice of the field content and couplings of the twin sector. In the
vector-like twin Higgs this freedom is even greater, and results in a large class of models featuring rich and diverse phenomenology. Aside from the Higgs sector introduced in the previous section, all models contain a twin sector with the following components:
• Gauge sector: A twin SU(2) × SU(3) gauge symmetry is necessary for naturalness, although the difference between the twin gauge couplings and their Standard Model counterparts can be of the order of δg_2,3/g_2,3 ∼ 10%, evaluated at the scale Λ [8]. In particular this implies that the confinement scale of the twin QCD sector may vary within roughly an order of magnitude. Twin hypercharge does not significantly impact the fine tuning and may be omitted from the model. We will leave the twin U(1) ungauged in what follows, with the consequence of degenerate twin electroweak gauge bosons, which we denote by W′ and Z′. We do however assume that twin hypercharge is present as a global symmetry, and as such it imposes selection rules on the decays of the quarks.
• Top sector: In the top sector naturalness demands that we include the twin partner of the Standard Model top and that the top and twin-top Yukawa couplings differ by no more than about 1%. We must also introduce the left-handed twin bottom, as it forms a doublet with the left-handed twin top. The key difference with the conventional twin Higgs is that these twin partners are now Dirac rather than Weyl. As argued in the previous section, to preserve naturalness the corresponding Dirac mass terms should also not exceed ∼ 500 GeV.
• Quark sector: The remaining quarks are all optional, as they are required neither for naturalness nor for anomaly cancellation. If they are present, they can have vector-like masses as heavy as ∼ 5 TeV, which corresponds to the cut-off of the effective theory. In this case the UV completion must provide some form of flavor alignment between the Yukawas and the vector-like mass terms, but as we will see, this is generally not difficult to achieve.
• Lepton sector: Unlike in chiral versions of the twin Higgs, twin leptons are not required for anomaly cancellation and are therefore optional as well. If present, they too can be taken heavy, and can therefore easily bypass any cosmological constraints on the number of relativistic degrees of freedom.
The parameter space is too large for us to study in full generality, so instead we study two well-motivated cases: • Minimal vector-like model: We consider the most minimal twin sector required by naturalness, consisting of a single vector-like generation of twin (top) quarks. This model is therefore the vector-like analogue of the fraternal twin Higgs [8], with the crucial difference that twin leptons are absent entirely. We will show that it shares many phenomenological features with the fraternal twin Higgs.
• Three-generation model: In this model we include the partners of all SM fermions, but we effectively decouple the twin partners of the 5̄ multiplet (d, ℓ) by setting their vector-like masses well above the top partner mass y_t f. The twin partners of the 10 (q, u, e) remain near the weak scale, a spectrum which arises naturally in the simplest UV completions (see section 5.1). While we do allow for flavor-generic Dirac masses for the remaining quarks, we take all entries of the mass matrices ≲ f/√2 to preserve naturalness. The right-handed twin leptons may also be in the few-hundred GeV range, but in the absence of twin hypercharge they decouple completely from the phenomenology, and we will not discuss them further.
In the remainder of this section we will study the spectrum of these two cases, with a focus on the constraints imposed by naturalness. We reserve a detailed study of their collider signatures for section 4. For UV completions of both scenarios we refer to section 5.
Minimal vector-like model
In terms of Weyl spinors (we will use Weyl notation for spinors throughout), the fermion content of the twin sector is just given by the twin doublet q′, the twin singlet u′, and their vector-like partners q̄′ and ū′. The Lagrangian is the one in (2.10). As argued in section 2, the vector-like mass terms are constrained by naturalness to reside in the range 0 < M_Q, M_U ≲ y_t f/√2 ∼ (f/v) × 170 GeV. The spectrum then contains two top-like states and one bottom-like state, which we will denote by t_1,2 and b_1 respectively. The mass of the b_1 state is just M_Q. From (2.10), the mass matrix of the top sector is given by

( y_t f/√2   M_Q )
( M_U        0   )

in the basis (q′_u, ū′) × (u′, q̄′_u), where q′_u (q̄′_u) indicates the up component of the doublet q′ (q̄′). We neglected the v²/f² suppressed contribution to the lower left entry. Since y_t f/√2 ≫ M_Q, M_U, this system contains a (mini) seesaw. This implies the ordering m_t2 > m_b1 > m_t1. The tops are moreover strongly mixed, with masses

m_t1 ≈ √2 M_Q M_U/(y_t f),   m_t2 ≈ y_t f/√2 + (M_Q² + M_U²)/(√2 y_t f),

where the expansion is for small M_Q,U/(y_t f). For the allowed range of parameters, this implies that the heavier twin top has a mass between 500 and 600 GeV, while the lighter has a mass which can range between 10 and 200 GeV, as shown in the left-hand panel of Figure 2. From (2.4), the mass eigenstates couple to the SM Higgs as in (3.8), where the approximate equalities again indicate an expansion in M_Q/f and M_U/f. From (3.8) we see that (when its mass is small compared to M_Q, M_U) the t_1 couples to the light Higgs with a coupling proportional to minus its mass, as follows from the seesaw. This behavior is shown quantitatively in the right-hand panel of Figure 2. At this point we can compute the correction to the SM Higgs mass in the minimal vector-like model, accounting for the mixing between the twin tops. The order-Λ² piece (3.14) cancels against the contribution from the Standard Model top, as expected. The logarithmically divergent correction (3.15) is again of the form discussed in section 2, up to v²/f² suppressed contributions. The first term in (3.15) is just the contribution from the Standard Model top, whose mass is denoted by m_t. In the limit where we turn off the vector-like masses, M_Q, M_U → 0, we have m_t1 → 0 and m_t2 → y_t f/√2. The lightest twin top then ceases to contribute to (3.14), while the contribution of the heavier twin top matches that of the conventional twin Higgs.
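The seesaw structure of the top sector above is easy to check numerically (a sketch with purely illustrative parameter values; the Dirac masses are the singular values of the mass matrix):

```python
import numpy as np

yt, f = 1.0, 1000.0        # illustrative: y_t ~ 1, f = 1 TeV
MQ, MU = 200.0, 200.0      # vector-like masses in GeV

M = np.array([[yt * f / np.sqrt(2), MQ],
              [MU, 0.0]])
m_t2, m_t1 = np.linalg.svd(M, compute_uv=False)  # singular values, descending
print(f"m_t1 = {m_t1:.0f} GeV, m_t2 = {m_t2:.0f} GeV, m_b1 = MQ = {MQ:.0f} GeV")

# Leading seesaw estimate for the light eigenvalue:
print(np.sqrt(2) * MQ * MU / (yt * f))           # ~ m_t1
```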
We estimate the tuning induced by this threshold correction as the ratio of the logarithmic correction (3.15) to the physical Higgs mass parameter, as indicated by the dashed blue lines in Fig. 2. In the limit where M_Q = M_U = 0, the tuning reduces to that of the conventional twin Higgs. Here we have used the fact that the SM quartic arises predominantly from the Z_2-preserving, SU(4)-breaking radiative correction δ.
Three-generation model
In the three-generation model, the twin sector has the same matter content as in the Standard Model, but with vector-like fermions. The Lagrangian is the three-generation generalization of (2.10), where all fermions carry the same quantum numbers as their Standard Model counterparts, but under the twin SU(3) × SU(2) rather than the SM gauge group (with the exception that twin hypercharge is absent). The relative magnitudes of all Yukawas, except the top Yukawa, are in principle arbitrary, provided they are all much smaller than one. For simplicity, in this section we will set all three twin Yukawa matrices equal to those in the Standard Model. As a final simplifying assumption, we also largely decouple the members of the 5-5̄ multiplets (d′, ℓ′) by taking their vector-like masses large. The twin leptons are therefore either decoupled or sterile and we do not discuss them further here. However, as we will see, the d′ still have a role to play, as they induce flavor-changing higher-dimensional operators.
In the absence of the Yukawas and mass terms, the residual twin sector quarks enjoy a large flavor symmetry which is maximally broken by the flavor spurions Y_U, Y_D, M_Q, M_U and M_D. To preserve naturalness, we require M_{Q,U} ≲ 500 GeV. As in the minimal vector-like model, the mass eigenstates are mixtures of the SU(2) doublet and singlet quarks. Consequently the Z′ generically has flavor off-diagonal couplings, which are large in the up sector. We will refer to this type of interaction as 'twin flavor changing neutral currents' (twin FCNCs). Moreover, it is generally impossible to diagonalize the mass and Yukawa matrices simultaneously, so we also expect large twin FCNCs in the Higgs sector. Even if we neglect the twin charm and up quark Yukawas, so that the eigenvalues of the up-type Yukawa matrix can be approximated by {y_t, 0, 0}, diagonalizing the M_Q and M_U matrices still leaves the up-type Yukawa matrix completely mixed; the presence of non-zero charm and up Yukawa couplings then has little additional effect (see the numerical sketch below). Therefore, each of the six mass eigenstates u_i contains a certain admixture of the top partner (i.e., the one up-type state that couples strongly to the twin Higgs doublet). If we take M_Q and M_U to have eigenvalues of order M ≲ y_t f, as required for the vector-like twin Higgs mechanism to work, then there will be one heavy mass eigenstate u_6 with mass near y_t f/√2; see (3.20), with M_{Q,i} the eigenvalues of M_Q. This induces a twin flavor changing interaction with the Standard Model Higgs, which can potentially be of phenomenological importance in some corners of the parameter space. (A similar higher-dimensional operator may exist in the minimal vector-like model; however, in that case it does not have any particular phenomenological significance.)
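As a numerical illustration of the mixing just described, the sketch below builds a 6×6 up-type mass matrix in the basis where the vector-like masses are diagonal, so that the single large Yukawa appears rotated by generic unitaries. All benchmark numbers (f, y_t, the M_Q and M_U eigenvalues) are illustrative assumptions, and the block structure mirrors the 2×2 minimal model rather than the paper's exact conventions.

```python
import numpy as np

rng = np.random.default_rng(1)
f, yt = 750.0, 0.94                       # illustrative benchmark (GeV)

def haar_unitary(n):
    """Random unitary, modeling generic flavor misalignment."""
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

# Vector-like masses taken diagonal; the Yukawa {y_t, 0, 0} then appears
# rotated by generic unitaries.
DQ = np.diag([150.0, 250.0, 350.0])       # eigenvalues of M_Q (GeV), assumed
DU = np.diag([120.0, 220.0, 320.0])       # eigenvalues of M_U (GeV), assumed
Yu = haar_unitary(3) @ np.diag([yt, 0.0, 0.0]) @ haar_unitary(3).conj().T

M = np.block([[DQ, (f / np.sqrt(2)) * Yu],
              [np.zeros((3, 3)), DU]])

# Singular values = the six up-type masses; one heavy state near y_t f/sqrt(2).
u, s, vh = np.linalg.svd(M)
print("masses (GeV):", np.sort(s).round(1),
      " vs y_t f/sqrt(2) =", round(yt * f / np.sqrt(2), 1))

# Yukawa in the mass eigenbasis: large off-diagonal entries = twin FCNCs
# in the Higgs sector, spread over all six eigenstates.
Y_full = np.block([[np.zeros((3, 3)), Yu],
                   [np.zeros((3, 3)), np.zeros((3, 3))]])
print(np.abs(u.conj().T @ Y_full @ vh.conj().T).round(2))
```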
Collider Phenomenology
We now investigate the collider phenomenology of the two limits of the vector-like twin Higgs that we discussed in the previous section. We will first discuss the hadrons of the twin sector, and then turn to how these hadrons may be produced through the Higgs portal, either in the decays of the 125 GeV Higgs h or of the radial mode (heavy Higgs) ĥ.
Twin Hadrons
We begin by reviewing the twin hadrons that arise in the fraternal twin Higgs of [8], to which the reader is referred for further details. In this model there are two twin quarks: a heavy twin top partner t̂ and a lighter twin bottom b̂. There are also twin leptons τ̂, ν̂. The τ̂ must be light compared to f, and in the minimal version of the model ν̂ is assumed to be very light. There are three different regimes.
• If the twin confinement scale Λ_c ≪ m_b̂, the light hadrons of the theory are glueballs. The lightest glueball is a 0⁺⁺ state G_0 of mass m_0 ∼ 6.8Λ_c. G_0 can mix with h and decay to a pair of SM particles. Its lifetime, a strong function of m_0, can allow its decays to occur (on average) promptly, displaced, or outside the detector [13,14]. (See [15-19] for detailed collider studies.) Most other glueballs are too long-lived to be observed, except for a second 0⁺⁺ state, with mass (1.8-1.9)m_0, that can also potentially decay via the Higgs portal. In addition there are twin quarkonium states made from a pair of twin b̂ quarks. In this regime they always annihilate to glueballs.
• Alternatively, if m_0 > 4m_b̂, then the glueballs all decay to quarkonium states.
Among these is a set of 0⁺⁺ states χ̂. (The lightest quarkonium states are 0⁻⁺ and 1⁻⁻, so the χ̂ states may not be produced very often.) The χ̂ states can potentially decay via the Higgs portal and could decay promptly, displaced, or outside the detector. However, twin weak decays to very light twin leptons, if present, can often short-circuit the Higgs portal decays, making the χ̂ states invisible.
• In between, both G_0 and χ̂ can be stable against twin QCD decays, in which case they can mix. The state with the longer lifetime in the absence of mixing tends, when mixing is present, to inherit the decay modes of (and a larger width from) the shorter-lived state.
The minimal model of the vector-like twin Higgs is remarkably similar to the fraternal twin Higgs, despite the fact that it has three twin quarks t_1, b_1, t_2. The surprise is that, as we saw in (3.12), the t_1's couplings to the Higgs are the same as for the twin b̂ in the fraternal case, up to a minus sign and small corrections. The b_1 itself plays a limited role for the light twin hadrons because its coupling to the Higgs is absent or at worst suppressed, as in (3.20). Consequently the glueball phenomenology, and that of the t_1 t̄_1 quarkonium states, is very similar to that of the fraternal twin Higgs. One minor effect (see figure 3), relevant only for low values of M_Q, is that the b_1 makes the twin QCD coupling run slightly slower, so that Λ_c and m_0 are reduced by up to 20% (a one-loop illustration of this is sketched below). The relation between m_0 and the G_0 lifetime is the same as in the fraternal twin Higgs, so the lifetime correspondingly increases by up to an order of magnitude. This makes displaced glueball decays slightly more likely, as shown in the right-hand panel of figure 3. Here we took |δg_3/g_3| < 0.15, which roughly corresponds to a fine-tuning no worse than 30%.
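To see why one extra quark below the cutoff slows the running and lowers Λ_c, here is a minimal one-loop sketch. The cutoff coupling, quark masses, and the one-loop-only treatment with sharp thresholds are all simplifying assumptions; the paper itself uses two-loop RGEs via SARAH.

```python
import numpy as np

# One-loop running of the twin QCD coupling: d(1/alpha)/d(ln mu) = b0/(2*pi),
# with b0 = 11 - 2*nf/3. Run down from the cutoff, decoupling quarks at their
# masses, and define Lambda_c by 1/alpha -> 0 in the pure-glue regime.
def lambda_c(alpha_cut, quark_masses, mu_cut=5000.0):
    """quark_masses: twin quark masses below the cutoff, descending (GeV)."""
    mu, inv_a, nf = mu_cut, 1.0 / alpha_cut, len(quark_masses)
    for m in quark_masses:
        b0 = 11.0 - 2.0 * nf / 3.0
        inv_a += b0 / (2.0 * np.pi) * np.log(m / mu)   # run down to threshold
        mu, nf = m, nf - 1
    b0 = 11.0 - 2.0 * nf / 3.0                         # pure-glue regime
    return mu * np.exp(-2.0 * np.pi * inv_a / b0)

alpha_cut = 0.075                                      # assumed alpha'_3(5 TeV)
fraternal = lambda_c(alpha_cut, [510.0, 80.0])         # t-hat, b-hat (assumed)
minimal = lambda_c(alpha_cut, [550.0, 250.0, 80.0])    # t2, b1, t1 (assumed)
print(f"Lambda_c: fraternal ~ {fraternal:.2f} GeV, minimal ~ {minimal:.2f} GeV")
print(f"m_0 ~ 6.8*Lambda_c: {6.8*fraternal:.1f} vs {6.8*minimal:.1f} GeV")
```

With these placeholder inputs the extra quark lowers Λ_c by roughly 15-20%, in line with the effect quoted above.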
The significant new features in the minimal vector-like model are consequences of the absence of light twin leptons, the role of t_2-t_1 mixing, and the presence of the b_1 in some decay chains.

Figure 3: Plots of the confinement scale Λ_c and G_0 glueball lifetime cτ as a function of the relative deviation δg_3/g_3 of the twin QCD coupling from the SM QCD coupling at the cut-off scale Λ = 5 TeV. Shown are the fraternal case (solid green) and the minimal vector-like twin Higgs (dashed red). The RGEs were obtained with the SARAH package [20]. The confinement scale is defined as in [8]. The dip in cτ occurs when m_0 ∼ m_h.
• Without light twin leptons, the W′ will be stable (and a possible dark matter candidate [21]) if W′ → b_1 t̄_1 is closed.
• Typically the t_2 would decay to b_1 W′ and from there to b_1 b̄_1 t_1. However, this decay may be kinematically closed, and there is no twin semileptonic decay to take its place. It may therefore decay instead via t_2 → t_1 Z′ → t_1 t_1 t̄_1 or via t_2 → t_1 h, through the couplings in equations (3.10)-(3.11).
• Because of twin hypercharge conservation, the b_1 is stable if the decay b_1 → t_1 W′ is kinematically closed, so there are also b_1 t̄_1 bound states. Once produced, these "flavor-off-diagonal quarkonia" cannot annihilate and are stable. Flavor-diagonal bottomonium states annihilate to glueballs and/or, if kinematically allowed, to toponium states.
Before moving on, let us make a few remarks about the behavior of quarkonium states, specifically in the limit where the glueballs are light. When a twin quark-antiquark pair is produced, the quarks are bound by a twin flux tube that cannot break (or, even when it can, is unlikely to do so), because there are no twin quarks with mass below the twin confinement scale. The system then produces glueballs in three stages: (1) at production, as the quarkonium first forms; (2) as the quarkonium relaxes toward its ground state (it may stop at a mildly excited state); and (3) when and if the quarkonium annihilates to glueballs and/or lighter quarkonia. During this process unstable twin quarks may decay via twin weak bosons, generating additional excited quarkonium states. Obviously the details are very dependent on the mass spectrum and are not easy to estimate. The general point is that the creation of a twin quark-antiquark pair leads to the production of multiple glueballs, with potentially higher multiplicity if the quarkonium is flavor-diagonal and can annihilate.

Let us turn now to the three-generation model, with its up-type quarks u_1, . . . , u_6 and down-type quarks d_1, . . . , d_3 (plus three SU(2) singlet down-type quarks with masses of order f or above). The most important difference from the fraternal twin Higgs is a twin QCD beta function that is less negative, which implies a lower confinement scale Λ_c. The twin glueball masses are therefore low and the lifetimes long, as shown in figure 4. For δg_3 < 0, the typical G_0 decays outside the detector. Thus, although the lower mass implies glueballs may be made in greater multiplicity, it may happen that few if any of the G_0 glueball decays are observable. We also expect generally to be in the regime where the glueballs are the lightest states and flavor-diagonal quarkonia can annihilate into glueballs, so we expect no χ̂ decays to the SM. As in the minimal vector-like model there are two stable twin quarks (here called u_1, d_1) and there can be flavor-off-diagonal d_1 ū_1 quarkonia, which cannot annihilate. However, heavier d_j quarks can in some cases be very long-lived, with potentially interesting consequences.
Heavy twin u_i quarks can decay via W′^(*), Z′^(*) or h^(*), and will cascade down to u_1 or d_1. (The (*) superscript indicates that the corresponding state may be on-shell or off-shell.) Heavy d_i quarks can decay via a W′^(*) if kinematic constraints permit. Heavy d_i decays through Z′^(*) or h^(*) are in principle possible as well, but are heavily suppressed. Since twin FCNCs are large, there can be competition between the various channels, depending on the details of the spectrum. Note that every W′^(*) or Z′^(*) in a cascade produces a new twin quark-antiquark pair, and thus increases the number of quarkonia by one.
Production of twin hadrons via h decays
In the fraternal twin Higgs, as detailed in [8], the rates of twin hadron production, and the decay patterns of the twin hadrons, depend on the confinement scale and the twin bottom mass. Twin hadrons are produced in h decays to twin gluons and/or twin b̂ quarks. The former is almost guaranteed but has a branching fraction of order 10⁻³. Of course the latter is forbidden if m_b̂ > m_h/2, but if allowed it has a rate that grows with m_b̂ ∝ ŷ_b and easily dominates over decays to twin gluons. In fact the rate is so large that corrections to h decays exclude the model if m_b̂ is too large.

The minimal vector-like model is quite similar to the fraternal twin as far as h decays are concerned. As in the fraternal model, there is a region excluded by an overabundance of h → t_1 t̄_1 decays, shown in the grey shaded region of figure 2, though this is a perturbative estimate with very large non-perturbative uncertainties at the upper edge. The most important difference, as mentioned above, is that without light twin leptons, the χ̂ quarkonium states are more likely to decay visibly, making an experimentally accessible signal more likely.
In the three-generation model, the u_1 coupling to the Higgs may vary by a factor of two or more compared to the minimal vector-like case, as a result of mixing with the other u_i states. This changes Br(h → u_1 ū_1) for a fixed u_1 mass, and therefore also changes the range of u_1 masses excluded by Higgs coupling measurements (the grey band of figure 2).
Since the less negative beta function of the three-generation model pushes down the glueball masses (see figure 4), in most of parameter space u_1 ū_1 quarkonia will annihilate to glueballs. In some regimes G_0 is very light and long-lived; if m_0 < 10 GeV, G_0 decays are to cc̄, τ⁺τ⁻, and the G_0 lifetime approaches the kilometer scale. All Higgs decays might thus be invisible. But more optimistically, small m_0 implies the glueball multiplicity can be large. With enough events and enough glueballs per event, we may hope to observe Higgs decays to missing energy plus a single G_0 displaced decay, giving a low-mass vertex with a small number of tracks. (Note that the vertices are distributed evenly in radius in this long-lifetime regime; see the sketch below.) This offers a challenging signal which pushes somewhat beyond what the LHC experiments have attempted up to now.
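A quick numerical check of the uniform-in-radius claim: when the boosted decay length d = βγcτ far exceeds the detector size L, the exponential decay distribution is nearly flat over the detector volume. The numbers below (d, L) are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
d, L = 1000.0, 10.0        # mean decay length and detector radius (m), assumed
r = rng.exponential(d, size=1_000_000)
inside = r[r < L]

# The density exp(-r/d)/d is ~constant for r << d, so the vertices that do
# land inside the detector are spread evenly in radius.
counts, _ = np.histogram(inside, bins=10, range=(0.0, L))
print("bin occupancies (normalized):", np.round(counts / counts.mean(), 3))
print("fraction decaying inside:", inside.size / r.size)   # ~ L/d ~ 1%
```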
There is also a small possibility of observing off-shell Higgs bosons in h decay. There is a region of parameter space where h → u_2 ū_1 is possible, followed by a prompt u_2 → u_1 Z′* → u_1 u_1 ū_1 or u_2 → h* u_1 decay. If m_{u_2} > 3m_{u_1}, so that the off-shell Z′ can materialize a u_1 ū_1 pair, the Z′* channel tends to dominate the decay; if instead m_{u_2} < 3m_{u_1}, then u_2 → h* u_1 proceeds with 100% branching fraction.
Production of twin hadrons via the radial mode ĥ
The radial mode may be a relatively narrow resonance, if a linear sigma model describes the twin Higgs, or it may be wide and heavy if strongly coupled composite dynamics is involved. If it is sufficiently light and/or wide, gg collisions at the LHC will be able to excite it. For simplicity we will assume the mode is narrow and will refer to it as ĥ, with a mass that is not well constrained but is likely in the 500-2000 GeV range. The ĥ decays mainly to its Goldstone modes, namely the SM bosons WW, ZZ, hh as well as the twin bosons W′W′, Z′Z′, which may in turn decay to twin quarks. Direct decays of ĥ to the twin quarks are possible though relatively suppressed, just as a heavy SM Higgs would decay only rarely to fermions.
In the fraternal twin Higgs, ĥ decays to twin hadrons are most likely to occur through ĥ → Z′Z′, because the Z′ can decay to twin quarks with a branching fraction of order 60%. The W′ decays only to τ̂ν̂ pairs. Meanwhile ĥ decay to t̂ pairs is highly suppressed by couplings and kinematics, but if it is present, the weak decay t̂ → b̂Ŵ leads to a single highly excited twin bottomonium. The bottomonium then de-excites as described in section 4.1, typically producing multiple glueballs.
Without twin leptons and with both t_1 and b_1 quarks, the minimal vector-like twin Higgs differs from the fraternal twin in several ways. Decays of ĥ to twin bosons may lead to many more twin hadron events, and higher multiplicity on average, because the Z′ always decays to t_1 or b_1 quark-antiquark pairs, and the W′ may be able to decay to t_1 b̄_1. Each of these decays produces an excited flavor-diagonal or flavor-off-diagonal quarkonium. Furthermore, the decay ĥ → t_2 t̄_1, though suppressed by a mixing angle, may be kinematically allowed even if t_2 t̄_2 is not.
Finally, the ĥ decays in the three-generation model have the same rate as in the minimal model, but are potentially more diverse, possibly giving a new visible signature.
The more elaborate spectrum and large twin FCNCs allow Z′ → u_i ū_j and d_k d̄_k, and W′ → u_i d̄_k, for i, j = 1, . . . , 5 and k = 1, 2, 3, depending on the spectrum of masses.
Also ĥ → u_6 ū_i may be possible, though rare. When a u_i or d_k with i, k > 1 is produced, a decay will ensue, possibly via a cascade, to u_1 or d_1. These decays may produce an on- or off-shell h, as we now discuss.
Decays of the heavier u_i will most often go via d_k W′ or u_j Z′ if kinematically allowed; however, decays to h u_j are also possible. This is especially so if the initial state is u_6, which has sizable off-diagonal Yukawa couplings. For lighter u_i the on-shell decays to W′ and Z′ are closed, so they are likely to decay via u_j h if kinematically allowed. For u_i with mass less than m_{u_1} + m_h, the three off-shell decays via W′*, Z′*, h* all compete.
If a decay mode to three twin quarks is open, decays through W′* and Z′* will typically dominate; otherwise the decay of the u_i must occur through an h*.
Meanwhile, as discussed in section 3.2 (see (3.20)), the d_k have much smaller twin FCNCs. The decay d_k → W′ u_j always dominates if kinematically allowed. Otherwise the decay d_k → u_1 d_1 ū_1, via an off-shell W′, will typically dominate. But for a d_k too light even for this decay, only d_k → h^(*) d_l may be available. The small FCNCs make this decay very slow, and in principle would even permit observable displacement of the decay. However, we must recall that each quark is bound to an antiquark and the quarkonium system relaxes to near its ground state. It seems likely, in this limit, that quarkonium relaxation and annihilation occur before the individual quarks decay.
For flavor-diagonal d_k d̄_k quarkonia, k > 1, annihilation occurs via twin QCD, and this is rapid. Flavor-off-diagonal quarkonia, including both d_k d̄_l and d_k ū_1, can only decay via twin electroweak processes, namely through flavor-changing exchange, in either the s- or t-channel, of a W′. Still, this rate seems to exceed that of d_k decay. With m_q and m_q̄ the masses of the initial-state quarks, an estimate of the annihilation width for a ground-state S-wave state decaying via a W′ is of the parametric form sketched below, times the squares of flavor mixing angles. The rate is smaller for excited states, but the low glueball mass means that the quarkonium system is unlikely to get stuck in a highly excited state, so the suppression is not substantial. Meanwhile this annihilation rate is to be compared with a decay such as d_k → d_l h, which is two-body but suppressed by the coefficient |c_kl|² ∼ y_b⁴/M_D² appearing in the operator (3.20), or a three-body decay via an off-shell h which is suppressed by y_b⁶/M_D². The annihilation will have a much higher rate than the decay unless the relevant flavor mixing angles are anomalously small, the d_k and d_l are split by at least m_h, and M_D ≳ 5 TeV, in which case the decay via an on-shell h might be observable. We conclude that for d_k that cannot decay via W′^(*), flavor-off-diagonal d_k ū_1 and d_k d̄_1 quarkonia annihilate to lighter d_l ū_1, u_j ū_1 quarkonium states (plus at least one glueball). The u_1 d̄_1 quarkonium is stable. Again, flavor-diagonal quarkonia annihilate to glueballs.
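The explicit width estimate was lost in extraction. As a hedged placeholder, an S-wave annihilation through an off-shell W′ should parametrically resemble the leptonic annihilation of a charged SM meson, so one might expect something like the following (the prefactor and exact mass dependence are our assumptions, not the paper's):

```latex
% Hedged parametric sketch, not the source's equation.
\Gamma_{\rm ann} \;\sim\; \frac{G_{F'}^{\,2}}{8\pi}\, f_M^2\, M^3
\;\times\; \theta_{\rm flavor}^2\,,
\qquad
G_{F'} \sim \frac{g'^2}{m_{W'}^2}\,, \quad M \simeq m_q + m_{\bar q}\,,
```

with f_M the quarkonium decay constant and M its mass.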
In sum, the three-generation model offers cascade decays of heavier twin quarks which can generate additional quarkonium states, along possibly with prompt on- or off-shell h bosons from u_i decay. Consequently the final states from ĥ decay may have
• twin hadrons (glueballs and flavor-off-diagonal quarkonia) that decay displaced or outside the detector;
• prompt on-shell h decays;
• prompt decays of an off-shell h to bb̄, τ⁺τ⁻, or other jet pairs, similar to twin glueball final states but at a higher and variable mass.
Clearly, even with a very small rate for exciting the radial mode ĥ, we should not overlook the possibility of a handful of striking events with substantial missing energy, at least one Higgs boson, and at least one displaced vertex with low mass.
On the Origin of Symmetries
In the vector-like twin Higgs the Z 2 symmetry is broken explicitly just by the presence of vector-like partners for the twin fermions. It is therefore essential to specify a UV completion from which the Z 2 nevertheless emerges as an approximate symmetry in the IR. Such approximate IR symmetries often arise as a natural ingredient of orbifold constructions, making them ideal candidates for a UV completion of the vector-like twin Higgs. In the interest of clarity, we will first present a very simple and explicit 4D model based on the deconstruction of higher-dimensional theories [22] with orbifold fixed points. These models possess the appropriate set of zero modes and the accidental Z 2 symmetry. We will then discuss the relationship between these simple models and orbifold constructions.
The model
We begin with a simple UV completion for the vector-like twin Higgs that features the correct set of zero modes and an accidental Z_2 symmetry. For concreteness, we focus on the minimal vector-like example, but the generalization to three generations in the twin sector is straightforward. Our example UV completion is heavily inspired by the dimensional deconstruction of an orbifold setup [23-32] and shares many of its features. As indicated in Fig. 5, the model can be divided into the SM and the twin sector, which each consist of a two-node quiver whose nodes are connected by a set of vector-like link fields, denoted (φ, φ̄) and (φ′, φ̄′) respectively. On the SM side each node contains a copy of the usual SU(3) × SU(2) × U(1) gauge group, while on the twin side one node has the full SU(3) × SU(2) × U(1) and the other only SU(3) × SU(2). On the latter node the U(1) is present as a global symmetry, but it remains ungauged. The link fields organize themselves in complete 5-5̄ multiplets of these gauge groups. We label the nodes in each sector as "symmetric" (S) and "non-symmetric" (N). The S node on the SM side contains an SM Higgs field and a single, full generation of the SM fermions. Similarly, the S node on the twin side contains a twin Higgs field and a single generation of twin fermions. The N node in the SM sector contains all the SM fermions from the first and second generations, while the N node in the twin sector harbors a single twin anti-generation. The SM and twin sectors only communicate with each other by means of the Higgs potential for H, H′ given in (2.5).
We further assume a Z 2 permutation symmetry between the symmetric S nodes of the two sectors, which ensures the presence of an approximate SU (4) global symmetry in the Higgs potential. The Z 2 is only broken by the presence of the N nodes on both sides. We assume all couplings of the link fields are moderate in size, such that their effects do not significantly violate the Z 2 symmetry between the S nodes. In a more complete model, the Z 2 symmetry of the S nodes may arise from the unification of the SM and twin gauge groups into a single SU (6) × SU (4) node. While a detailed study is beyond the scope of the present work, as an intermediate step we provide a simple prescription for hypercharge in orbifold Higgs models in Appendix A. Constructions based on Pati-Salam unification or trinification are also possible [4].
The SM Yukawa couplings to top, bottom, and tau, and the analogous couplings for their twin partners, are also present on the S nodes, and the (approximate) Z_2 symmetry assures they are (approximately) equal. The model is further equipped with the SU(4)-preserving and SU(4)-breaking quartics λ and δ, as in (2.1). The quartic λ forms the only direct connection between the SM and twin sides of the quiver.
To address the "big" hierarchy problem (namely, the UV completion of the twin Higgs linear sigma model above the scale Λ), we take the theory to be supersymmetric down to a scale of order Λ ∼ 5 − 10 TeV, much as in the supersymmetric twin Higgs [33][34][35]. As a consequence, it is natural to take the mass parameter m 2 in the Higgs potential to satisfy m ∼ Λ/4π, such that the quartic λ can be taken to be perturbative. The subtleties regarding the coset structure of strongly coupled models may therefore be bypassed [2,6]. In addition we assume that the mechanism of supersymmetry breaking triggers vacuum expectation values for the link fields, such that both visible and twin sectors will see their S and N nodes Higgsed down to the diagonal SU (3)×SU (2)×U (1) and SU (3)×SU (2) respectively. (Twin hypercharge is fully broken.) The matter content in the visible sector is that of the Standard Model, while the twin sector contains a Higgs and a single vector-like generation.
There are various options for generating a suitable link field potential that higgses each pair of S and N nodes down to the diagonal subgroup. The potential may be generated non-supersymmetrically, as in [36]. We here assume a set of soft masses such that ⟨φ⟩ ∼ ⟨φ̄⟩ ∼ Λ·1, and similarly for φ′ and φ̄′. The D-term potentials for the link fields generate suitable quartics to stabilize the link fields at nonzero vev, provided that the soft masses satisfy some consistency conditions. (This is similar to what happens in the MSSM Higgs potential.) Alternately, the link field potential may be generated supersymmetrically by including an additional singlet + adjoint chiral superfield on either the S or N nodes [26].
The necessary Higgs potential is generated with a singlet coupling to the Higgses on each S node as in [33], and the potential (2.1) is reproduced in the decoupling limit where the additional states of the SUSY 2HDM are heavy. Note that SUSY provides a natural explanation for λ ≫ δ, since λ can be generated by a large F-term quartic while δ is generated by electroweak D-terms. For simplicity we will not commit to a specific model for supersymmetry breaking and mediation, save for enforcing the requirement that it respect the Z_2 symmetry between the two S nodes.
Finally, note that it is straightforward to modify this setup to accommodate a different set of zero modes. For example, we can obtain the three-generation model in section 3.2 by simply putting three generations of matter fields on the S nodes, as well as three anti-generations on the twin N node. Another important example is that of the fraternal twin Higgs, which can be obtained by simply removing the twin anti-generation ψ̄′_3 from the quiver in figure 5.
Mass scales
The symmetry structure of the theory to some extent controls the form of the Yukawa couplings. In particular, third-generation Yukawas are allowed at tree level since both the Higgses and third-generation fields are located on the symmetric node. However, the Yukawa couplings involving the first two generations in the visible sector are forbidden by gauge invariance and instead must arise from irrelevant operators generated at a higher scale Λ′. In a supersymmetric theory these take the form of superpotential Yukawas dressed by powers of the link fields, with f, g = 1, 2 labeling the light generations. These operators may be induced by integrating out massive matter at the scale Λ′ as in [37]. The bi-fundamentals φ_D and φ_T are respectively the doublet and triplet components of the link field φ ≡ (φ_T, φ_D). When the link fields acquire vevs, this leads to Yukawa couplings with an intrinsic ε ≡ ⟨φ⟩/Λ′ ∼ 0.1 suppression. The resulting Yukawa textures (sketched below) can yield viable masses and mixings, though additional physics is required to explain the hierarchy between the first- and second-generation fermion masses. Since these irrelevant operators are suppressed by the scale Λ′ and may also have small coefficients (indeed they cannot be too large or the Z_2 will be badly broken), small Yukawa couplings for the first two generations result. Flavor-changing effects that are not directly minimally-flavor-violating are present, since physics at the scale Λ′ generates flavor-violating four-fermion operators as well as effective Yukawa couplings. These flavor-violating operators are suppressed by both Λ′ and numerically small coefficients on the order of the CKM angles between the first two generations and the third generation, making it possible to accommodate flavor limits without further special alignment; see [37] for related discussion. Note that detailed flavor constraints may be relevant and perhaps even provide promising discovery channels; see [12] for a recent discussion of flavor signatures in UV complete twin Higgs models.

Meanwhile, in the twin sector there are various possible marginal and irrelevant operators of interest, with dimensionless coefficients w_i. Once the link fields obtain O(Λ) vevs, the resulting mass spectrum follows, where for the latter estimate we take Λ′ ∼ 100 TeV. The twin neutrino, the left-handed twin tau, and the right-handed twin bottom are therefore lifted, while the remaining states stay relatively light. The Yukawa-induced mixing between the left- and right-handed states is generally negligible for both the bottom and the tau. Since twin hypercharge is Higgsed at the scale Λ, the right-handed twin tau plays no role in the low-energy collider phenomenology of the twin sector. The twin top vector-like masses come out at or below y_t f/√2 automatically, as required by naturalness (see Section 2). The twin tops are then heavily mixed, as discussed in Section 3.1. All mass scales are summarized in Fig. 6 for a benchmark point.
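The explicit operators and texture matrix did not survive extraction. One plausible shape, assuming each first- or second-generation field on the N node costs one power of ε ≡ ⟨φ⟩/Λ′ ∼ 0.1 (this counting is our assumption, not the paper's verbatim texture):

```latex
% Hedged sketch: one power of epsilon per N-node field.
W \supset \frac{\phi}{\Lambda'}\, q_f H u_3
\;+\; \frac{\phi^2}{\Lambda'^2}\, q_f H u_g
\;\;\Longrightarrow\;\;
Y_u \sim
\begin{pmatrix}
\epsilon^2 & \epsilon^2 & \epsilon \\
\epsilon^2 & \epsilon^2 & \epsilon \\
\epsilon   & \epsilon   & 1
\end{pmatrix} y_t\,,
\qquad \epsilon \equiv \frac{\langle\phi\rangle}{\Lambda'} \sim 0.1\,.
```

With ε ∼ 0.1 the second-generation entries land near the charm Yukawa, while the first generation indeed needs additional suppression, consistent with the remark above.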
In order for the twin mechanism to be effective, the top Yukawa couplings of the twin and SM sectors should be equal to within about 1%, while the twin and SM diagonal gauge couplings g_{2,3} and g′_{2,3} of the SU(3) and SU(2) groups should be equal to within about 10% at the scale Λ. Breaking of the S and N nodes to their diagonal subgroups will violate the latter condition unless the N nodes of both the SM and twin sectors have couplings that either are nearly equal or are somewhat larger than the gauge couplings on the S nodes. Expressed in terms of the coupling strengths α ≡ g²/4π, the S nodes in each sector have a common SU(3) coupling α_{3,S}, while the N nodes have relatively large (but generally unequal) SU(3) couplings α_{3,N} and α′_{3,N}. Since 1/α_3 = 1/α_{3,S} + 1/α_{3,N} on each side, the couplings α_3, α′_3 of the unbroken SU(3) gauge groups will then be equal up to corrections of order δα_3/α_3 ≈ α_3 (1/α′_{3,N} − 1/α_{3,N}). In addition there can be moderate one-loop threshold corrections proportional to log(⟨φ⟩/⟨φ′⟩). An analogous formula applies for SU(2). For instance, if α_{3,N} = 2α′_{3,N}, the required accuracy can be achieved if α′_{3,N} ≳ 0.38 (g′_{3,N} ≳ 2.19). With α_{2,N} = 2α′_{2,N}, we need α′_{2,N} ≳ 0.16 (g′_{2,N} ≳ 1.4). This implies that the g′_{2,N} coupling will reach a Landau pole before 10⁶ TeV, at which scale the model must be UV completed further. Thus we require the N node gauge couplings to be moderately large at the scale Λ. We cannot allow them to approach 4π, however, as would be the case at Seiberg fixed points; this would give φ, φ̄ large anomalous dimensions, causing unacceptable Z_2-violating two-loop corrections to the couplings α_S. Having ensured an adequate degeneracy of the SU(3) and SU(2) couplings, we must also ensure that there are no additional large sources of radiative Z_2-breaking which feed into the top Yukawa. All third-generation Yukawas are located on the S nodes, and so do not pose a threat. The link fields cannot couple renormalizably to the top quarks because of their gauge charges. The link fields may possess moderate Z_2-breaking Yukawas to other fields, but these only feed into the running of the top Yukawa at three loops, sub-dominant to the leading effect of the SU(3) running.

Footnote 5: Since the right-handed twin lepton interacts so weakly, it could potentially overclose the universe. If this problem arises, it could be avoided if the reheating temperature is lower than Λ, or if the e′ can decay, either to h if the spectrum permits, or through mixing with the SM neutrino sector, or through a dimension-six operator coupling e′ to twin quarks or SM fermions.

Footnote 6: Alternatively, we could have used 3-3̄ ⊕ 2-2̄ link fields, which removes the Landau pole issue at the price of gauge coupling unification in the symmetric nodes.
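A quick numeric check of the quoted bounds, using the matching relation 1/α_diag = 1/α_S + 1/α_N on each side. The values of α_3 and α_2 at 5 TeV are rough SM inputs we assume for illustration.

```python
import numpy as np

# With alpha_N = 2*alpha_N' as in the text, the twin/SM mismatch of the
# diagonal coupling is |d alpha / alpha| ~ alpha / (2 * alpha_N'); demanding
# this be below 10% gives a lower bound on alpha_N'.
alpha3, alpha2 = 0.075, 0.031          # assumed SM values near 5 TeV

for name, alpha in (("SU(3)", alpha3), ("SU(2)", alpha2)):
    alpha_N_min = alpha / (2 * 0.10)   # 10% matching requirement
    g_min = np.sqrt(4 * np.pi * alpha_N_min)
    print(f"{name}: alpha_N' >~ {alpha_N_min:.2f}  (g_N' >~ {g_min:.2f})")
```

This reproduces the ≳0.38 (g ≳ 2.19) and ≳0.16 (g ≳ 1.4) bounds quoted above.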
Connection with orbifolds
Thus far we have presented a simple toy UV completion for the vector-like twin Higgs, but it is natural to wonder if a more general organizing principle might be at play. The key challenge in UV completing models like the fraternal or the vector-like twin Higgs is the fact that the twin sector looks radically different from the Standard Model sector, and the Z 2 at best only persists as an approximate symmetry in a subsector of the theory. In previous work [3,4], we have shown that such approximate symmetries may be highly non-trivial and are a natural output of orbifold constructions. Concretely, one starts with a fully symmetric mother theory in the UV, which in our case would be a vector-like version of the Standard Model and a complete, vector-like twin copy. A suitable orbifold projection may then remove the unwanted degrees of freedom, while leaving behind a daughter theory with the desired accidental symmetry. Operationally the orbifold is carried out by identifying a suitable discrete symmetry of the theory and subsequently removing all degrees of freedom which are not invariant under the chosen discrete symmetry. In an actual model this projection can be implemented by selecting the zero modes of a higher dimensional theory, or by dimensional deconstruction. We first review the former, following [3], and then provide a deeper motivation for the 4D model presented above.
UV completion in 5D
We consider two copies of the MSSM gauge sector on R⁴ × S¹, with a global Z_2 symmetry that sets the gauge couplings to be identical between the two. The theory further contains a whole vector-like third generation of MSSM matter multiplets. Because we start from a five-dimensional theory, the degrees of freedom within each multiplet resemble those of 4D N = 2 theories from an effective four-dimensional viewpoint. Matter superfields in five dimensions descend to hypermultiplets in four; the latter can conveniently be thought of as a pair of chiral and anti-chiral N = 1 superfields in the 4D effective theory. The matter fields are thus organized in terms of the hypermultiplets Ψ_3 = (ψ_3, ψ_3^c) and Ψ̄_3 = (ψ̄_3, ψ̄_3^c), where the ψ_3 and ψ̄_3 were defined in the caption of figure 5. The ψ_3^c and ψ̄_3^c are an additional set of fermion representations conjugate to ψ_3 and ψ̄_3. The matter content of the twin sector is identical, as required by the Z_2 symmetry. We denote it by the pair of hypermultiplets Ψ′_3 and Ψ̄′_3.
We take the S¹/(Z_2 × Z̄_2) orbifold of this mother theory: denoting spacetime coordinates (x⃗, y), the action of the orbifold group on spacetime is the familiar (see for example [38])

P : y → −y,    P̄ : y → πR − y.
The fundamental domain is thus (0, πR/2), with y = 0 being a P fixed point and y = πR/2 a P̄ fixed point. We refer to these fixed points as the 'symmetric' and 'non-symmetric' brane respectively, for reasons that will become clear momentarily. P and P̄ also act on fields; those fields which transform non-trivially under P and/or P̄ must vanish at the corresponding orbifold fixed point(s), and their zero modes will be absent from the effective 4D theory. The spacetime actions of both P, P̄ on superfields are identical: on the vector multiplets they act by (V, Σ) → (V, −Σ), where V and Σ are the N = 1 vector and chiral multiplets respectively. On matter hypermultiplets, the spacetime action of P, P̄ takes (φ, φ^c) → (φ, −φ^c), flipping the sign of the anti-chiral component.
In addition to this, the Z_2 × Z̄_2 acts on the space of fields, with the following assignments: we take P to act trivially on the target space, while P̄ takes φ → η̄_φ φ with η̄_φ = ±1. The combined action on the vector multiplets (V, Σ) and the matter hypermultiplets (φ, φ^c) then follows, where η̄ = ±1 can be chosen for each individual field. The hypermultiplet (φ, φ^c) can represent any of the matter hypermultiplets we introduced before. In the language of 4D N = 1 superfields, only those which transform with a (+, +) sign under (P, P̄) can contribute a zero mode to the effective 4D theory, since a negative sign under either operator requires the field to vanish at the corresponding brane. In fact, the P action manifestly breaks N = 2 supersymmetry down to N = 1: it requires both the Σ-component of all 5D vector multiplets and the φ^c component of all 5D matter multiplets to vanish on the symmetric brane, thus killing the corresponding zero modes.
On top of the supersymmetry breaking, P̄ further acts in the way specified by the η̄ assignments, implying a vanishing (Dirichlet) condition on the non-symmetric brane for certain N = 1 components. In the gauge fields the boundary condition applies to the Σ-component if η̄ = +1, or to the V-component if η̄ = −1. Overall, all 5D vector multiplets with η̄ = +1 will descend to 4D N = 1 vector multiplets, while A^(1) is entirely removed from the spectrum. By analogous reasoning, all the 5D matter fields with η̄ = +1 descend to 4D N = 1 chiral multiplets, while ψ̄_3 does not contribute zero modes to the 4D effective theory, since its components must vanish on either brane. Finally, in each sector we introduce a pair of 4D N = 1 Higgs multiplets (H_u, H_d) and (H′_u, H′_d), localized on the symmetric brane, along with a singlet chiral multiplet S.
A superpotential coupling of the singlet S to the Higgs bilinears on the symmetric brane gives rise to the SU(4)-symmetric quartic λ, while Z_2-symmetric Yukawa couplings connect these Higgses to the bulk fields. The resulting 4D zero-mode spectrum includes a chiral copy of the MSSM and a vector-like copy of the twin sector, realizing a 5D supersymmetric UV completion of the vector-like twin Higgs. Our choice of boundary conditions leaves a zero-mode spectrum with unbroken N = 1 supersymmetry (in contrast with, e.g., folded SUSY [39], where the boundary conditions break all supersymmetries). Further soft supersymmetry breaking may be introduced through local operators on the symmetric y = 0 brane, so that soft masses remain Z_2-symmetric.
It should be noted that bulk mass terms of the form M(ψ_3 ψ̄_3 + ψ′_3 ψ̄′_3) softly break the Z̄_2 which we used for the orbifold. On the level of the zero modes, this is precisely the origin of the soft Z_2 breaking by the vector-like mass terms, as discussed in section 2. This procedure is easily generalized to a three-generation Standard Model, with all fermions in the bulk. Alternatively, one may localize only a copy of the lightest Standard Model generations on the P̄ brane.
While this model exemplifies the key features of a 5D realization of the vector-like twin Higgs, we note that it suffers from a modest shortcoming related to the choice of a flat fifth dimension. In general, large brane-localized kinetic terms on the non-symmetric brane at y = πR/2 will shift the effective 4D couplings of zero-mode states. The effect on SM and twin gauge couplings is benign, but the shift in the SM and twin top Yukawa couplings is typically larger than the percent-level splitting allowed by the twin mechanism. Such non-symmetric brane-localized terms can be rendered safe in a flat fifth dimension using bulk masses for third-generation fields of order M ∼ 1/R (thereby sharply peaking the corresponding zero-mode profiles away from the non-symmetric brane), but at the cost of unreasonably large vector-like masses for the twin sector zero modes. Alternately, the theory may be embedded in a warped extra dimension, where the bulk warp factor strongly suppresses the impact of non-symmetric brane-localized kinetic terms. The general features discussed in this section carry over directly to the warped case, although detailed model-building in a warped background is beyond the scope of the present work.
UV completion in 4D
Finally, we come full circle by presenting a 4D theory which yields the same spectrum as the 5D setup in the previous section, and illustrate the relation to our initial 4D model. The basic template for such a setup is a chain of 'nodes' with the gauge group in the bulk of the 5D theory, connected by bi-fundamental link fields. To automatically cancel any gauge anomalies at the boundaries, we take the link fields to be vector-like. The last node on one end of the chain contains the reduced gauge group of the daughter theory, which in our case is the same as the full bulk gauge theory, minus twin hypercharge. We call this node the 'non-symmetric node', in analogy with the 'non-symmetric brane' in the previous section. The node on the opposing end of the quiver has the full gauge symmetry plus the global Z_2, and we will refer to it as the 'symmetric node', again in analogy with the terminology in the previous section. When the link fields are Higgsed, this construction yields a spectrum identical to the KK-modes of the 5D gauge theory.
The remaining matter content is specified according to the following rules:
• All fields which propagate in the 5D bulk appear on the bulk nodes. These correspond to the matter hypermultiplets introduced in the previous section.
• Fields which have a zero mode in the 5D theory appear on one of the boundary nodes. Which boundary node they are attached to is a priori arbitrary, and all multiplets on the boundary nodes are N = 1 and chiral. Fields which do not have zero modes appear on neither boundary node.

Footnote 8: In contrast to holographic twin Higgs models [5-7], in this case the scale of the IR brane can be somewhat above the scale f, with supersymmetry protecting the linear sigma model. Thus it is sufficient for the accidental symmetry of the Higgs sector to be SU(4) rather than O(8), since higher-dimensional operators are parametrically suppressed [2].

Footnote 9: Note that a literal deconstruction of the 5D theory would entail oriented, rather than vector-like, link fields with additional matter on the end nodes to cancel anomalies.

Figure 7: A schematic representation of the deconstruction of the orbifold model. For simplicity, only one bulk node is shown. The notation is as in Section 5.2.1.
In our example, we choose to attach ψ_3 and ψ′_3 to the symmetric boundary node, and to move ψ̄′_3 to the non-symmetric node. This has the advantage that the Z_2 symmetry of the symmetric node is manifestly preserved. In analogy with the previous section, we also add the H_{u,d} and H′_{u,d} multiplets on the symmetric boundary node. Neither ψ̄_3 nor any of the anti-chiral components of the bulk hypermultiplets have a zero mode, and they therefore do not appear on the boundaries. This construction is shown schematically in figure 7.
The resulting quiver has a strong resemblance to the model of section 5.1. In particular, we can obtain the quiver in figure 5 by simply dropping all bulk nodes from the model. This removes all KK-modes from the model, and strictly speaking its interpretation in terms of the deconstruction of an extra dimension is lost. However since the KK-modes are likely to be out of reach at the LHC, the two options are likely indistinguishable in the near future.
Conclusions
The tension between LHC null results and anticipated signals of conventional top partners motivates alternative theories of the weak scale with novel signatures. Many such alternative theories, including the twin Higgs and folded supersymmetry, exhibit hidden valley-type phenomenology intimately connected to the stabilization of the weak scale. In their simplest incarnations, these theories and their signatures are made rigid by the requirement of exact discrete symmetries. Far greater freedom is possible for both models and their signatures if the discrete symmetries are approximate, rather than exact. The precise signatures of these models depend, however, on both the detailed physics of the dark sector and the UV completion, which is required to justify the presence of approximate stabilization symmetries.
In this paper we present an intriguing deformation of the twin Higgs model in which the twin sector may be vector-like without spoiling naturalness. From a bottom-up point of view, this deformation is innocuous in that the presence of these extra mass terms is merely a soft breaking of the twin Z 2 and should therefore not reintroduce the quadratic sensitivity to the cut-off of the theory. However, while the vector-like mass terms represent a soft Z 2 breaking, the presence of vector-like states constitutes a hard breaking (through, e.g., their impact on the running of couplings in the twin sector) that requires a UV completion. We show that this setup can be UV completed in the context of the orbifold Higgs and we provide an explicit model based on dimensional deconstruction. (A similar mechanism is at work in the Holographic Twin Higgs [5] where spontaneous breaking of a bulk symmetry leads to modest bulk masses for twin sector fermions.) The same mechanism can moreover be used as a UV completion of the fraternal twin Higgs.
The phenomenology of the vector-like twin Higgs is very rich, and depends strongly on the number of twin generations, the flavor texture of the vector-like mass terms, and their overall size. In this paper we have analysed two example models where the twin quarks are all relatively heavy compared to the twin confinement scale. In this case, the collider phenomenology is similar to that of the fraternal twin Higgs, but with a few important differences. Due to the extra matter charged under twin QCD, the twin confinement scale tends to be somewhat lower, which increases the likelihood for glueballs to decay displaced. Due to the absence of light twin leptons, either the lightest state in the down-sector or the W′ is stable. However, perhaps the most striking feature is the presence of order-one flavor changing neutral currents in the twin sector. As a result, cascade decays of heavier twin fermions may produce spectacular events with glueball decays in association with one or more on- or off-shell Higgses.
There are a number of interesting future directions worth pursuing: • In this paper we have assumed a [SU (3) × SU (2)] 2 gauge group and imposed the Z 2 symmetry by hand on the symmetric nodes in figures 5 and 7. In [3,4] we showed how the Z 2 symmetry can be an automatic ingredient if the Standard Model and twin gauge interactions are unified at some scale near 10 TeV. It may be worthwhile to investigate under what conditions it is possible to generalize this idea to the vector-like twin Higgs, and in particular to construct four-dimensional UV completions.
• We also restricted ourselves to a broad-brush, qualitative description of the collider phenomenology. It would be interesting to study some well motivated benchmark scenarios in enough detail to get a quantitative idea about the reach of the LHC for these models. Of particular interest here would be the signatures resulting from the production of the radial mode or the lowest KK-states (if they are present), along the lines of [40].
• A final direction for further progress is related to cosmology. While the traditional mirror twin Higgs requires a very non-standard cosmology to avoid CMB constraints on a relativistic twin photon and twin neutrinos, this tension can be relaxed significantly in the fraternal twin Higgs [41,21]. In the vector-like twin Higgs, this tension is removed entirely since the neutrinos are vector-like and can therefore be heavy. The lightest twin lepton may still be a twin WIMP dark matter candidate, and its annihilation cross section and relic density now depend on the spectrum of the twin quarks. Alternatively, the W′ may be stable and could make up (part of) the dark matter [21]. Another intriguing possibility opens up when the twin quarks are light, as then the twin pions could be the dark matter and freeze out from the twin strong interactions through the SIMP mechanism [42,43]. Even if the CMB constraints can be avoided, this idea is still difficult to realize in the traditional mirror twin Higgs due to the number of light flavors required for the SIMP mechanism to operate. Both this issue and the CMB constraints can be naturally addressed in the vector-like twin if the vector-like masses are below the confinement scale.
A Hypercharge in Orbifold Higgs Models
In [3] we presented a class of models where the twin Higgs or a generalization arises from an orbifold of a theory in which the SM and twin gauge groups are unified. The explicit unification of the gauge groups of both sectors then provides a natural explanation for the presence of the (approximate) Z_2. However, in order to ensure that the twin sector is dark under SM hypercharge, these models tend to require (partial) low-scale gauge coupling unification of the SM gauge groups. This can be accomplished, for example, with an enlarged version of Pati-Salam unification or trinification.
Here we provide an alternative setup with a Z_2 × Z̄_2 orbifold where such low-scale unification is not required. To illustrate the principle, we present a simple toy model which only includes the top and Higgs sectors. The generalization to a full model is straightforward. We consider an SU(6) × SU(4) × U(1)_A × U(1)_B gauge group and two sets of fields (H_A, Q_A, U_A) and (H_B, Q_B, U_B) with representations as in table 1. We can identify U(1)_A and U(1)_B with SM and twin hypercharge respectively. The action is given in (A.1), where we assume a Z_2 symmetry which exchanges A ↔ B.
As will be specified below, the action of the first orbifold reduces the non-abelian gauge symmetries SU(6) × SU(4) to those of the SM and twin sectors, at which stage some residual, unwanted fields remain. These are then removed with the second, Z̄_2 orbifold, very analogous to what happens in Scherk-Schwarz supersymmetry breaking. Concretely, following the procedure described in [3], we embed the Z_2 × Z̄_2 in the symmetries of the theory, with η = +1 for the A fields and η = −1 for the B fields. After the Z_2 projection, the gauge groups are broken and only the matter fields in table 2 remain. Fields with twin quantum numbers are denoted with a prime as usual. In addition to the usual SM + twin field content, there are two remaining fields in the theory, the q_A and q_B below the double line in table 2. These phenomenologically troublesome fields are then removed by the Z̄_2 orbifold. One can easily verify that the Z̄_2 orbifold does not remove any other fields that were not already projected out by the Z_2 orbifold. We therefore end up with the standard twin Higgs, but with no SM hypercharge for the twin fields. It is worth noting that although the g_2 and g_3 gauge couplings are automatically equal in both sectors due to the unified nature of their respective groups, this is not the case for y_t and g_1. To enforce this we had to impose a Z_2 exchange symmetry by hand in equation (A.1). This is a modest price we must pay with respect to the models in [3], in order to gain more flexibility in the hypercharge sector.
"Physics"
] |
Structural and Mechanistic Insights into the Interaction of Cytochrome P4503A4 with Bromoergocryptine, a Type I Ligand*
Background: Human CYP3A4 metabolizes the majority of administered drugs including bromoergocryptine (BEC), a dopamine receptor agonist. Results: Crystallographic and experimental data suggest the importance of Arg212 and Thr224 in BEC binding. Conclusion: H-bonding interactions with Thr224 and conformational adjustments modulated by Arg212 are critical for the productive orientation of BEC. Significance: Mechanistic insights on the CYP3A4-BEC interaction may help develop new and safer pharmaceuticals. Cytochrome P4503A4 (CYP3A4), a major human drug-metabolizing enzyme, is responsible for the oxidation and clearance of the majority of administered drugs. One of the CYP3A4 substrates is bromoergocryptine (BEC), a dopamine receptor agonist prescribed for the inhibition of prolactin secretion and treatment of Parkinson disease, type 2 diabetes, and several other pathological conditions. Here we present a 2.15 Å crystal structure of the CYP3A4-BEC complex in which the drug, a type I heme ligand, is bound in a productive mode. The manner of BEC binding is consistent with the in vivo metabolite analysis and identifies the 8′ and 9′ carbons of the proline ring as the primary sites of oxidation. The crystal structure predicts the importance of Arg212 and Thr224 for binding of the tripeptide and lysergic moieties of BEC, respectively, which we confirmed experimentally. Our data support a three-step BEC binding model according to which the drug binds first at a peripheral site without perturbing the heme spectrum and then translocates into the active site cavity, where formation of a hydrogen bond between Thr224 and the N1 atom of the lysergic moiety is followed by a slower conformational readjustment of the tripeptide group modulated by Arg212.
Cytochrome P450 enzymes are heme-thiolate proteins that catalyze a wide variety of monooxygenation reactions including hydroxylation, epoxidation, and heteroatom dealkylations (1). Among 57 human P450s, the 3A4 isoform (CYP3A4) is one of the most abundant and important because, in addition to oxidation of various endogenous molecules and xenobiotics, it contributes to the clearance of over 50% of administered pharmaceuticals (2,3). The high capacity of CYP3A4 to oxidize molecules diverse in size and chemical structure is due to its large and malleable active site (4,5). Another intriguing feature of CYP3A4 is the ability to accommodate more than one molecule in the substrate-binding pocket, where one molecule serves as a substrate while another acts as a modulator of substrate metabolism. Among the substrates that exhibit binding cooperativity with CYP3A4 are testosterone, progesterone, diazepam, α-naphthoflavone, and several others (6-8).
A CYP3A4 substrate that does not exhibit binding cooperativity is bromoergocryptine (BEC; also known as bromocryptine) (9-11). BEC is an ergot alkaloid that acts as a dopamine receptor agonist. Currently, BEC is prescribed for inhibition of prolactin secretion, the treatment of Parkinson disease, type 2 diabetes, migraines, and pituitary tumors, and correction of abnormal secretion of growth hormone. BEC is one of the largest CYP3A4 substrates (molecular mass of 655) and consists of lysergic acid and a proline-containing cyclic tripeptide linked by an amide bond (Fig. 1). The tripeptide group is critical for the interaction with P450s of the 3A family (12). BEC has a high affinity for microsomal and recombinant CYP3A4 (K_d of 0.3-1 μM) and upon binding induces type I spectral changes (a low-to-high spin shift) accompanied by an 80-mV increase in the heme redox potential (11-17). In vivo metabolite analysis has revealed that BEC is oxidized primarily by CYP3A4 at the pyrrolidine moiety, with the 8′-mono- and 8′,9′-dihydroxy derivatives being the major products (Fig. 1) (18,19).
Although determination of the 2.7 Å crystal structure of BEC-bound CYP3A4 has been reported at a scientific meeting (20), the atomic coordinates are still unavailable. Moreover, to date there is no structural information on any other type I ligand bound to CYP3A4 in a productive mode. To obtain insights into the mechanism of substrate binding to CYP3A4, we determined and report here the 2.15 Å crystal structure of the CYP3A4-BEC complex. The x-ray data suggest the importance of Arg212 and Thr224 for optimal BEC binding, which we confirmed experimentally.
Protein Expression, Purification, and Mutagenesis-R212A and T224A mutations were introduced into the CYP3A4Δ3-22 expression plasmid using a QuikChange mutagenesis kit (Stratagene). The wild type (WT) and mutants of CYP3A4 were expressed and purified as described previously (21).
Crystallization and Structure Determination-The CYP3A4-BEC complex was crystallized by a microbatch method under oil. Half a microliter of the BEC-bound protein (20-25 mg/ml) in 20 mM phosphate, pH 7.7, 20% glycerol, and 100 mM NaCl was mixed with 0.5 μl of 4% tacsimate, pH 5.0, and 12% polyethylene glycol 3350 (solution No. 11 from the Hampton Research PEG/Ion 2 kit) and then covered with 10 μl of paraffin oil. On the next day, crystals were harvested and frozen in Paratone oil, used as a cryoprotectant. X-ray diffraction data were collected at the Stanford Synchrotron Radiation Laboratory beamline 7-1. The structure was solved by molecular replacement with PHASER (22) using ligand-free CYP3A4 (Protein Data Bank ID code 1TQN) as a search model. The initial model was rebuilt and refined with COOT (23) and REFMAC (22). The N and C termini as well as residues 266-268 and 280-286 are not present in the final structure because of conformational disorder. Data collection and refinement statistics are given in Table 1. The atomic coordinates have been deposited in the Protein Data Bank with the ID code 3UA1.
Spectral Binding Titrations-Binding of BEC (Sigma) to the WT and mutants of CYP3A4 was monitored in 50 mM phosphate buffer, pH 7.4, containing 20% glycerol and 1 mM dithiothreitol. Spectra were recorded after the addition of small amounts of BEC dissolved in dimethyl sulfoxide; the total volume of the solvent added was <2% (v/v). The difference in absorbance between the wavelength maximum and minimum was plotted against the concentration of BEC, and the spectral dissociation constant (K_s) was calculated using quadratic nonlinear regression analysis as described elsewhere (15).
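For readers who want to reproduce this kind of analysis, the quadratic (tight-binding) model is the standard choice when K_s is comparable to the enzyme concentration, since ligand depletion cannot be neglected. The sketch below fits that model with SciPy; the data arrays and the 1 μM enzyme concentration are placeholders, not values from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Quadratic tight-binding model: dA = dA_max * [ES] / E_total, with [ES]
# the physical root of the binding quadratic.
def quadratic_binding(S, dA_max, Ks, E=1.0):      # E: total P450, uM (assumed)
    b = E + S + Ks
    return dA_max * (b - np.sqrt(b**2 - 4.0 * E * S)) / (2.0 * E)

S = np.array([0.1, 0.25, 0.5, 1.0, 1.5, 2.0, 3.0])               # [BEC], uM
dA = np.array([0.010, 0.022, 0.037, 0.053, 0.060, 0.064, 0.067])  # A_max - A_min

popt, pcov = curve_fit(quadratic_binding, S, dA, p0=(0.07, 0.4))
print(f"Ks = {popt[1]:.2f} uM, dA_max = {popt[0]:.3f}")
```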
Kinetics of Ligand Binding-The kinetics of BEC binding to CYP3A4 was monitored at ambient temperature in an SX.18MV stopped-flow apparatus (Applied Photophysics) in the absence and presence of Emulgen 913 (Kao Chemicals, Japan), IGEPAL CA-630 (Sigma), and CHAPS (Sigma). CYP3A4 solutions (6 μM) in 50 mM phosphate buffer, pH 7.4, were mixed with 0.125-36 μM BEC to follow absorbance changes at 417 nm. Owing to the low solubility of BEC, 36 μM was the maximal concentration that could be reached under our experimental conditions. In a separate experiment, 2 μM CYP3A4 was mixed with 2-36 μM BEC to confirm that the rate constant for BEC ligation starts to level off when the BEC:CYP3A4 ratio exceeds 2. Kinetic data were analyzed using the IgorPro program (WaveMetrics, Inc.).
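A minimal sketch of how such stopped-flow traces are typically reduced: fit each absorbance-versus-time trace to an exponential to extract an observed rate constant, then inspect k_obs as a function of [BEC]. The trace below is synthetic, and the amplitude and rate values are placeholders rather than data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Single-exponential model for the decrease of A417 upon BEC binding.
def single_exp(t, amp, k_obs, A_inf):
    return A_inf + amp * np.exp(-k_obs * t)

t = np.linspace(0.0, 2.0, 400)                       # seconds
rng = np.random.default_rng(0)
trace = single_exp(t, 0.050, 4.2, 0.300) + rng.normal(0, 5e-4, t.size)

(amp, k_obs, A_inf), _ = curve_fit(single_exp, t, trace, p0=(0.04, 3.0, 0.3))
print(f"k_obs = {k_obs:.2f} s^-1")   # repeat per [BEC] and plot k_obs vs [BEC]
```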
RESULTS
Soluble CYP3A4 Has High Affinity for BEC-As the interaction of truncated human CYP3A4Δ3-22 with BEC had not been investigated previously, we performed equilibrium titrations to determine a spectral dissociation constant (Ks) for the drug. As seen in Fig. 2, the addition of a 2-fold excess of BEC to soluble CYP3A4 leads to a nearly complete 417- to 387-nm shift in the absorbance maximum (type I spectral changes), indicative of displacement of the coordinated water molecule and conversion of the heme iron to a high spin form. The estimated Ks value (0.37 ± 0.02 μM) suggests that BEC binds to the truncated form of CYP3A4 as tightly as to the full-length hemoprotein, whose dissociation constant for BEC ranges from 0.3 to 1 μM depending on the determination method and protein source (11-13, 15, 16).
Crystal Structure of the CYP3A4-BEC Complex-Owing to the high affinity of the CYP3A4-BEC complex, it was possible to maintain CYP3A4 in the BEC-bound form during crystallization. The deep brown color of the crystals was the first indication that crystalline CYP3A4 forms a high spin complex with BEC (Fig. 3A). The well-defined electron density of the BEC molecule shows that the drug is bound in the active site in an extended conformation, with an angle between the tripeptide and lysergic groups of 130° (Fig. 3, B and C). As metabolic analyses have predicted (12, 18, 19), BEC approaches the heme via the tripeptide moiety (Fig. 3, C and D), with the primary sites of oxidation, the 8′ and 9′ carbons of the proline ring (Fig. 1), being 4.1 and 3.7 Å, respectively, away from the iron. Thus, in the crystal structure BEC is bound in a productive mode.
The tripeptide group does not establish any specific polar or electrostatic interactions but makes extensive van der Waals contacts with Ile 301, Phe 304, and Ala 305 from the I-helix and with the side and main chains of Arg 105, Arg 212, Ala 370, and Arg 372. To allow the tripeptide moiety to approach the heme iron, Arg 212 adopts a new rotamer relative to the ligand-free structure (29). This conformation is stabilized by two hydrogen bonds formed between the Arg 212 guanidinium group, the carboxyl of Glu 308, and the carbonyl oxygen of Ile 369 (Fig. 3E). The lysergic moiety of BEC is sandwiched between the parallel Arg 106 and Phe 215 side chains and H-bonded to Thr 224 from the G′-helix via the N1 atom (Fig. 3, C and D). To accommodate this bulky group, the Pro 107-Gly 109 peptide shifts aside by 2.4 Å. The fact that the drug establishes only one hydrogen bond and there are no well-defined waters or water-mediated contacts in the active site allows us to conclude that the CYP3A4-BEC interactions are predominantly nonpolar.
Rationale for Mutagenesis-Despite the large size and complex chemical structure of BEC, only small changes in the CYP3A4 conformation are needed to position the substrate optimally for oxidation. As mentioned, one notable rearrangement is in the Arg 212 side chain but not the associated peptide backbone. This is in contrast to other large ligands such as ritonavir, ketoconazole, and erythromycin, where a major conformational change in the Arg 212-containing FF′-loop occurs upon binding (Fig. 3E) (4, 21). Because the tripeptide moiety of BEC binds next to Arg 212, we anticipated that elimination of the Arg 212 side chain would increase the volume of the active site cavity near the heme iron and, hence, should affect the affinity and/or binding kinetics of the drug. Thr 224, on the other hand, assists BEC binding by H-bonding with the lysergic group (Fig. 3, C and D). Elimination of this hydrogen bond could increase motional freedom and prevent proper positioning of the drug, which could be manifested through changes in the extent and kinetics of a spin shift in CYP3A4. To test these predictions, we replaced Arg 212 and Thr 224 with alanine to determine how single and double mutations affect the CYP3A4-BEC interaction.
Arg 212 and Thr 224 Define Affinity and Facilitate BEC Binding to CYP3A4-Equilibrium titrations show (Fig. 4) that, at saturating BEC levels, there is only a partial conversion of the mutants to a high spin form: 56, 60, and 52% for CYP3A4 R212A, T224A, and R212A/T224A, respectively, as opposed to ~90% for the WT. The Ks value calculated for the T224A variant was close to that of the WT, whereas the corresponding constants for the R212A and double mutants were 6-fold higher (Table 2). This implies that during equilibrium binding, Arg 212 is more critical for BEC binding than Thr 224.
To determine the mutational effects on the kinetics of BEC association, we monitored the disappearance of the low spin form as an absorbance decrease at 417 nm after mixing CYP3A4 and BEC in a stopped-flow spectrophotometer. Consistent with the incomplete shift from low to high spin when BEC was added to the mutants, the change in amplitude at 417 nm was less for both mutants: 33% for CYP3A4 R212A, and 63% for the T224A and double mutants compared with the WT (Fig. 5, A-D). In addition, the kinetics of BEC binding to the WT and variants was biphasic under all studied conditions. The rate constants for the fast phase of BEC binding (k_fast) were independent of the BEC concentration at subequimolar BEC:CYP3A4 ratios (<0.5) and then gradually increased and remained unchanged after the BEC:CYP3A4 ratio exceeded 2 (Fig. 5E). A comparison of the k_fast values calculated at saturating levels of BEC (Table 2) indicates that BEC binds faster to the R212A mutant (~32% increase in k_fast), whereas the T224A mutation slows down the spin conversion by 30%.
For CYP3A4 T224A, the rate constant for the slow phase (k_slow) was independent of BEC concentration (Fig. 5F). For the other proteins, k_slow decreased to a different extent until the BEC:CYP3A4 ratio reached 2 and remained unchanged at higher drug concentrations. The most notable changes in the slow phase were observed for the R212A and double mutants. Stopped-flow measurements also revealed that, regardless of whether or not mutations were present, the percentage of the slow phase changed sharply from ~30–35% to 50–55% when the BEC:protein ratio exceeded unity (Fig. 5G). Because the latter effect could be due to conformational heterogeneity of CYP3A4 (14), we checked whether the biphasicity of the BEC binding reaction could be eliminated in the presence of detergents.
BEC Binding Kinetics Remains Biphasic in the Presence of Detergents-The kinetics of BEC binding to WT CYP3A4 was measured in the presence of two non-ionic detergents, Emulgen 913 and IGEPAL CA-630, as well as zwitterionic CHAPS. All detergents were used at a final concentration of 0.1–0.12%, which is lower than the critical micelle concentration and sufficient to dissociate aggregates of full-length CYP1A2 and 2B4 into catalytically active dimers (24). Spectral measurements showed that the addition of CHAPS, Emulgen 913, and IGEPAL CA-630 to CYP3A4 caused partial low-to-high spin shifts (53, 14, and 27%, respectively (Fig. 6A)). This means that each compound binds near the heme and displaces the distal water ligand or partially stabilizes a conformer that favors water ligand displacement. When CHAPS-bound CYP3A4 was mixed with BEC, a large decrease in 417-nm absorbance was detected (Fig. 6B). Further, as observed in a detergent-free buffer, the kinetics of CYP3A4-BEC complex formation in the presence of CHAPS was biphasic, with k_fast and k_slow values of 0.66 and 0.06 s⁻¹, respectively, and the percentage of the slow phase was ~50%. According to an absorbance spectrum recorded at the end of the stopped-flow experiment (Fig. 6C), the conversion of CYP3A4 to a high spin form was near completion (~100%) at saturating BEC levels.
When Emulgen 913 or IGEPAL CA-630 was present in the medium, the BEC-induced absorbance changes were very small (Fig. 6B). Nevertheless, the reaction remained biphasic at BEC:CYP3A4 ratios of >1 (k_fast and k_slow of 5–8 and 0.02–0.04 s⁻¹, respectively), with the fast phase accounting for 90% of the total absorbance change. At subequimolar ratios, the ligation of BEC was slow (0.01–0.05 s⁻¹) and monophasic. The BEC-dependent increase in the high spin content reached only 21 and 7% for Emulgen- and IGEPAL-bound CYP3A4, respectively (Fig. 6C). Thus, non-ionic detergents significantly interfere with BEC binding and, similar to CHAPS, do not eliminate the biphasicity of the reaction.
DISCUSSION
Co-crystallization of CYP3A4 with substrates/type I ligands is very challenging because most of these have low affinity (Kd of 5–150 μM) and low solubility in aqueous solutions. In the two currently available substrate-bound structures of CYP3A4, erythromycin is bound in a nonproductive mode (4), whereas progesterone is docked outside of the active site pocket (25). The present CYP3A4-BEC complex is the first in which the substrate is bound in a mode suitable for oxidation.
Surprisingly, no major conformational changes in CYP3A4 were required to accommodate the large BEC molecule (the root mean square deviation between the ligand-free and BEC-bound structures is only 0.32 Å). The crystal structure shows that two residues, Arg 212 and Thr 224, may be important for association and optimal orientation of the drug. Arg 212, part of the FF′-loop (residues 210–214), is thought to be actively involved in substrate binding and the mediation of cooperativity in steroid hydroxylation reactions (26, 27). Although Arg 212 plays no significant role in the hydroxylation of relatively small steroids (26, 27), it is predicted to influence the action of the effector molecule and control the substrate orientation via interactions with Phe 304, supposedly located at the interface between the active and the effector binding sites (28).
In our structure, Arg 212 is positioned strategically near the tripeptide group of BEC and establishes interactions with Glu 308 and Ile 369. This led us to hypothesize that Arg 212, as well as the H-bond-forming Thr 224, could be important for the affinity and kinetics of BEC binding. Indeed, our experimental data show that substitution of Arg 212 and/or Thr 224 with alanine alters both the extent of the low-to-high spin shift and the rate of BEC binding. The reaction of BEC ligation to the WT or mutants of CYP3A4 was biphasic, with a non-hyperbolic dependence of k_fast on BEC concentration. As the drug is known to bind to CYP3A4 stoichiometrically and non-cooperatively (11, 15), such kinetics cannot be explained by the allosteric properties of the enzyme.
One possible reason follows from a previous study in which three kinetic phases were distinguished (20, ~1, and 0.009 s⁻¹) based on fluorescence and absorbance changes observed during association of BEC with the full-length CYP3A4 (15). (Fig. 5 legend: BEC was mixed with 6 μM CYP3A4, and the low-to-high spin conversion was monitored at 417 nm; the kinetics were biphasic over the whole range of BEC concentrations studied. E and F, observed rate constants for the fast (k_fast) and slow (k_slow) phases, respectively, plotted versus BEC concentration; the rate constants calculated at saturating BEC are given in Table 2. G, effect of the BEC concentration on the relative percentage of the slow phase; the arrows indicate where the BEC and CYP3A4 concentrations are equal.) The
fastest step, detected only by fluorescence spectroscopy, was proposed to correspond to the peripheral binding of BEC, proceeding without perturbations in the heme spectrum. The two subsequent steps, in turn, are thought to reflect the interaction of the second BEC molecule with the active site and the heme. Our data are consistent with this model and provide further insights into the mechanism of BEC association.
Because BEC has low solubility in aqueous solutions, it was not possible to study the binding kinetics under pseudo-first order conditions over a wide range of BEC concentrations (Fig. 5, E and F). Nonetheless, the range studied was sufficient to estimate the limiting k_fast and k_slow values, because BEC is a high affinity ligand whose binding rate becomes concentration-independent when the BEC:CYP3A4 ratio exceeds 2. Small changes in binding rates at low BEC concentrations mean that an event taking place remotely from the heme (e.g. binding of BEC to a peripheral site or structural changes within the protein) precedes the spin state change and is rate-limiting. Once this remote site is saturated, the reaction is first order and presumably is limited by movement of BEC from the remote site to the active site, which leads to the low-to-high spin shift. Another factor that can affect binding is conformational reorganization in the BEC molecule itself. The BEC conformation currently deposited in the DrugBank database (Fig. 1), which we used as a starting model, differs from that in the x-ray model. To fit into the CYP3A4 active site, the tripeptide group of BEC rotates by ~180° around the amide bond (Fig. 7A). The existence and interconversion of different BEC conformers may complicate the binding process and limit the overall reaction rate.
Our kinetic data did not allow us to differentiate whether binding of BEC to CYP3A4 occurs before or after a conformational change in CYP3A4 and/or BEC takes place ("induced fit" versus "conformational selection" mechanism). However, the structure-based mutagenesis identified two possible events through which the entering BEC molecules may cause a spin shift. The T224A substitution slows down and the R212A speeds up the fast phase of the BEC binding kinetics. The T224A mutation has little effect on the slow phase, whereas the R212A replacement increases the slow phase. Therefore, these residues play a role in the translocation of BEC from the remote site to the heme active site, with Thr 224 primarily affecting the early stage and Arg 212 the later stage of binding near the heme. Based on this scenario and data reported previously (15), we propose that upon translocation from the peripheral binding site into the active site cavity, BEC first establishes a hydrogen bond with Thr 224 via the lysergic head group (Fig. 8). This interaction directs the tripeptide moiety toward the heme and, if the orientation is favorable, leads to partial displacement of the water ligand and low-to-high spin shift (fast kinetic phase). When a different BEC conformer enters the active site, the tripeptide group must rotate prior to heme ligation. The final step could be positional adjustment of the tripeptide moiety assisted by Arg 212 . A conformational switch in the Arg 212 side chain, stabilized through H-bonding interactions with Glu 308 and Ile 369 , would allow the tripeptide group to come closer to the heme iron, thereby leading to further changes in spin equilibrium.
In addition to the aforementioned factors, the BEC binding reaction may be affected by intrinsic properties of CYP3A4 such as conformational heterogeneity and aggregation. High pressure spectroscopy studies, for instance, have suggested that there are two conformers of full-length CYP3A4 that have distinct spin equilibria, barotropic properties, and reactivity toward BEC (14). The relative content of these conformers, 70 and 30%, was unaffected by BEC concentration but modulated by Emulgen 913. This prompted us to check whether conformational heterogeneity and/or hydrophobic interactions between the CYP3A4 molecules contribute to the multiphasicity of the BEC binding reaction. To do so, we monitored the BEC-dependent spin shift in CYP3A4 in the presence of Emulgen 913 and two other detergents frequently used for CYP3A4 purification, zwitterionic CHAPS and non-ionic IGEPAL CA-630 (25, 29).
One complication in these experiments was that all three detergents were able to enter the CYP3A4 active site and cause type I spectral changes, which supports the notion that detergents can serve as substrates for CYP3A4 (30). Interestingly, despite its ability to cause the largest spin perturbations (Fig. 6A), CHAPS had virtually no effect on the BEC binding kinetics. Moreover, the low-to-high spin shift was complete when both CHAPS and BEC were present (Fig. 6C), as opposed to the ~90% conversion induced by BEC in a detergent-free buffer. We attribute this phenomenon to a cumulative effect of the two compounds, although it is unclear at the moment whether CHAPS is displaced by BEC or the detergent stabilizes a conformer disfavoring coordination of the water ligand.
Unlike CHAPS, Emulgen 913 and IGEPAL CA-630 strongly inhibited the CYP3A4-BEC interaction. This agrees with a previous investigation showing that non-ionic detergents inhibit CYP3A4 catalysis by interfering with substrate binding (30). The fact that none of the studied detergents could completely eliminate the biphasicity of BEC association undermines the possibility that protein heterogeneity and/or hydrophobic interactions between the truncated CYP3A4 molecules are major factors complicating the BEC binding reaction. On the other hand, the 30:70% distribution between the slow and fast kinetic phases observed at subequimolar concentrations of BEC, and its sharp change to ~50:50% when the BEC:CYP3A4 ratio exceeded unity (Fig. 5G), favor the hypothesis of CYP3A4 conformers with different reactivity toward BEC (14). Our data suggest that the relative content of such conformers might be affected by both substrate and detergent binding. One cause of protein heterogeneity could be variations in the Arg 212 conformation. In the two crystal structures of substrate-free CYP3A4 available to date, the Arg 212 side chain faces either the solvent or the active site (25, 29). As follows from our study, even this minor structural deviation could significantly affect the rate of BEC binding.
Finally, resonance Raman spectroscopy studies on nanodisc-incorporated CYP3A4 detected small changes in the modes associated with the disposition of the heme peripheral groups and the out-of-plane macrocycle distortion caused by BEC binding (31). A comparison of the ligand-free and BEC-bound structures showed that, indeed, the BEC-dependent change in the heme coordination state distorts the heme plane and vinyl group conformation (Fig. 7B). If the energy barrier between the 6- and 5-coordinated states is high, it could modulate the dynamics of BEC binding and, hence, contribute to the complexity of the reaction.
In conclusion, the crystallographic complex between CYP3A4 and BEC provides the first insights into the productive binding mode of a type I ligand. It also suggests the mechanism of BEC association, helps to better understand previously accumulated data, and, most importantly, may be useful for developing new and safer drugs.
FIGURE 8. Proposed mechanism for the binding reaction between CYP3A4 and BEC. BEC exists in different conformations and, as previously suggested (15), may first associate with a peripheral site remote from the heme (step 1). Upon moving into the active site cavity, BEC establishes a hydrogen bond with Thr 224 via the lysergic moiety (step 2), which directs the tripeptide group toward the heme cofactor. If the BEC conformation is favorable (A), partial displacement of the distal water ligand takes place. If another conformer binds to P450 (B), the tripeptide moiety must rotate prior to heme ligation (step 3). The position of BEC may be further optimized upon rearrangement of the Arg 212 side chain, which would lead to additional changes in the spin equilibrium (step 4).
"Chemistry",
"Medicine"
] |
A Carrierless Amplitude Phase (CAP) Modulation Format: Perspective and Prospect in Optical Transmission System
Received Dec 14, 2016. Revised Apr 15, 2017. Accepted Apr 30, 2017.
The explosive demand for broadband services nowadays requires data communication systems to have intensive capacity, which subsequently increases the need for higher data rates as well. Although multiple wavelength channels can be used for such a system (e.g. 4 × 25.8 Gb/s for a 100 Gb/s connection), this usually leads to a cost increase caused by the employment of multiple optical components. Therefore, implementation of an advanced modulation format using a single wavelength channel has become a preference to increase spectral efficiency by increasing the data rate for a given transmission system bandwidth. Conventional advanced modulation formats, however, involve a degree of complexity and a costly transmission system. Hence, the carrierless amplitude phase (CAP) modulation format has emerged as a promising advanced modulation format candidate due to its ability to improve spectral efficiency while reducing optical transceiver complexity and cost. The intriguing properties of the CAP modulation format are reviewed as an attractive prospect in optical transmission system applications.
INTRODUCTION
Today's information transmission relies on data communication systems that can provide larger bandwidth at high-speed access, due to the rapid expansion of communication services such as fast internet communication, video-based multimedia, on-line gaming, etc. Multiple wavelength channels in an optically routed network can be used for implementing transceiver modules in order to enhance system capacity and data rate [1]. Mixed modulation formats have been implemented for wavelength division multiplexing (WDM) systems, but there are critical issues to conquer, such as nonlinear crosstalk from adjacent channels and the cost increment of mixed WDM signal control [2].
The demand for a cost-effective system through reduction of the optical components used requires a minimal number of wavelengths. Hence, implementing an advanced modulation format can be a potential option to avoid the complexity and significant costs of optoelectronic devices while further increasing spectral efficiency.
Advanced modulation schemes such as M-ary phase shift keying (M-PSK), M-ary quadrature amplitude modulation (M-QAM), discrete multitone (DMT), and orthogonal frequency division multiplexing (OFDM) [3] have been explored extensively. New advanced multilevel modulation formats have been reported, showing that there are many technical solutions available for next generation optical communication [4]. Apparently, however, all these advanced modulation formats involve a complicated and costly transmission system. The optical transmission system will be more feasible and efficient if the employment of advanced modulation formats can reduce the complex portion of the system while simultaneously achieving higher bit rates and spectral efficiency using a relatively reduced optoelectronic component count.
Therefore, the carrierless amplitude phase (CAP) modulation format can be highlighted as a potentially good choice to build a flexible, low-complexity, and cost-effective transmission system for optical access and in-home networks. CAP modulation is a multilevel and multidimensional modulation scheme that resembles quadrature amplitude modulation (QAM) in that it transmits two input data streams concurrently. The significant difference between them is that, instead of using a carrier, CAP uses transversal filters with orthogonal impulse responses (in-phase and quadrature filters) to separate the respective data streams.
This means CAP does not rely on a local oscillator for carrier generation, complex mixers, or an optical IQ modulator. The absence of a carrier leads to a less expensive digital transceiver implementation, as the computation-intensive multiplication operations needed for carrier modulation and demodulation become unnecessary. Consequently, CAP is simpler than single carrier modulation formats such as QAM while achieving similar spectral characteristics and performance. Multicarrier modulation formats like DMT and OFDM are in practice much more complex because an inverse fast Fourier transform (IFFT) and a fast Fourier transform (FFT) are required at the transmitter and receiver for modulation and demodulation.
This paper reviews the distinctive properties of CAP that have opened up a growing interest among researchers in choosing and implementing the CAP modulation format in optical communication link applications. Comparisons between CAP and other modulation formats have been carried out substantially in previous work to prove the competitive performance of CAP. Its convincing potential performance makes it a good prospect for next generation access and in-home network environments.
PRINCIPLE OF CAP MODULATION FORMAT
2.1. 2D-CAP
The block diagram of a CAP transmission system is shown in Figure 1. At the transmitter, an original bit sequence based on a pseudo random binary sequence (PRBS) is transmitted. This bit sequence is encoded and mapped according to the given constellation by converting it into multi-level M-QAM symbols with M = 2^k, where k is the number of bits/symbol. For example, k is equal to 2 for 2D-CAP-4.
These mapped symbols are split and upsampled to 4 samples per symbol to match the sample rate of the shaping filters. The in-phase component I_k and quadrature component Q_k, which are extracted from the upsampled sequence, are shaped by digital shaping filters with square-root raised cosine (SRRC) waveforms. The impulse responses of the filters are obtained by multiplying the SRRC pulse g(t) with cosine and sine waveforms, which moves the filters from baseband to passband and makes them mutually orthogonal: f_1(t) = g(t)cos(2πf_c t) (1) and f_2(t) = g(t)sin(2πf_c t) (2), where f_c is the passband center frequency. The two orthogonal branches are combined (the quadrature branch conventionally with a minus sign) and converted to an analog signal using a digital-to-analog converter. The transmitted CAP signal can then be represented as S(t) = Σ_n [I_n f_1(t − nT) − Q_n f_2(t − nT)] = I(t) ∗ f_1(t) − Q(t) ∗ f_2(t) (3), where T indicates the symbol duration and ∗ denotes convolution. After direct detection at the receiver (Figure 2), the convolution between the CAP signal S(t) and the channel response H(t) (without considering noise) produces the received signal R(t) = S(t) ∗ H(t). The received signal R(t) is then converted from an analog to a digital signal and fed into two matched filters, f'_1 and f'_2, that have the time-inverted impulse responses of the orthogonal filters at the transmitter side, i.e. f'_i(t) = f_i(−t). The matched filter outputs, r_1(t) = R(t) ∗ f'_1(t) and r_2(t) = R(t) ∗ f'_2(t), recover each dimension and the original sequence of symbols. These symbols are downsampled to form the I'_k and Q'_k components. A linear equalizer is employed afterwards to ensure synchronization in the CAP demodulation part, owing to the serious intersymbol interference (ISI) caused by matched FIR filters with timing error. Then, a decoder is utilized to demap and retrieve the original bit sequence.
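The filter construction can be illustrated with a short Python sketch. The parameter values (4 samples per symbol, roll-off 0.2, filter span) are illustrative choices, the SRRC closed form is the textbook expression, and the passband center frequency below is one common choice rather than a value from any specific paper.

```python
import numpy as np

def srrc(t, T, beta):
    """Square-root raised cosine pulse g(t) with symbol period T and
    roll-off beta, including the two removable singularities."""
    h = np.empty_like(t)
    for i, ti in enumerate(t):
        if abs(ti) < 1e-9:
            h[i] = 1.0 - beta + 4.0 * beta / np.pi
        elif abs(abs(ti) - T / (4.0 * beta)) < 1e-9:
            h[i] = (beta / np.sqrt(2.0)) * (
                (1 + 2.0 / np.pi) * np.sin(np.pi / (4.0 * beta))
                + (1 - 2.0 / np.pi) * np.cos(np.pi / (4.0 * beta)))
        else:
            num = (np.sin(np.pi * ti / T * (1 - beta))
                   + 4 * beta * ti / T * np.cos(np.pi * ti / T * (1 + beta)))
            den = np.pi * ti / T * (1 - (4 * beta * ti / T) ** 2)
            h[i] = num / den
    return h

sps, T, beta, span = 4, 1.0, 0.2, 10              # 4 samples per symbol
t = np.arange(-span * sps, span * sps + 1) * (T / sps)
g = srrc(t, T, beta)
fc = (1 + beta) / (2 * T)                         # passband centre frequency
f1 = g * np.cos(2 * np.pi * fc * t)               # in-phase shaping filter
f2 = g * np.sin(2 * np.pi * fc * t)               # quadrature shaping filter

# Hilbert pair: the inner product is (numerically) zero
print("orthogonality <f1, f2> =", np.dot(f1, f2))
# matched receiver filters are the time-reversed versions
f1_rx, f2_rx = f1[::-1], f2[::-1]
```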
High Dimensionality CAP
High dimensionality CAP employs more than two filter responses, where the number of orthogonal filters used indicates the number of dimensions. The dimensions can correspond to different users and services. According to the dimensionality theorem [5], with symbol interval T, minimum bandwidth W, and modulation dimensionality N:

W = N/(2T) (9)

If N = 1, the bandwidth W = 1/(2T), as in the Nyquist theorem. However, for the 2-D and 3-D cases, the minimum bandwidths required are W = 1/T and W = 3/(2T), respectively. This shows that increasing the dimensionality increases the occupied signal bandwidth. On the other hand, the required samples/symbol ratio is linearly proportional to the number of dimensions [6]. This means the upsampling factor must be increased to support an increased number of dimensions.
Therefore, it is impossible to improve spectral efficiency (SE) by simply increasing the dimensionality, because the symbol rate must be decreased if one wishes to maintain the signal bandwidth, or else the upsampling factor must be increased. The unavoidable increase of signal bandwidth accompanying the increase in the number of dimensions offsets the higher number of symbols in the alphabet and leads to the same SE for CAP modulations irrespective of the dimensionality [7]. However, the advantage of additional dimensions lies in the possibility to flexibly allocate different services to different users [6].
As mentioned in the previous subsection, 2D-CAP uses Hilbert-pair modulated waveforms in order to achieve two orthogonal filters at the transmitter and their time-inverted, matched counterparts at the receiver. However, this Hilbert-pair construction cannot be applied to higher dimensionality CAP. Hence, another method is required to develop a new set of filters for a higher dimensionality CAP system.
The method, formerly known as the minimax optimization procedure, was first introduced for high dimensionality CAP [6]. It was later improved and modified to overcome design flaws in the previous work [8]. This new method also offers a straightforward way to extend the approach to design 4D or higher dimensional CAP systems.
The shaping filters at the transmitter and their matched filters at the receiver must satisfy the perfect reconstruction (PR) condition to avoid inter-dimensional crosstalk. To assure the PR of the filters, a new optimization algorithm (OA) has been applied to extend the conventional 2D-CAP scheme to higher dimensionality [8]. The OA minimizes the worst-case out-of-band energy of the transmitter filters,

min max_i ||F_i,HP|| (10)

where F_i is the discrete Fourier transform (DFT) of vector f_i, and the high pass (HP) subscript denotes the out-of-band portion of the transmitter response above the boundary frequency f_B; this ensures that the transmitter and receiver frequency magnitude responses are matched to each other. The minimization (10) is subject to the perfect reconstruction (PR) conditions (11)-(13): the symbol-spaced samples of the cascade of transmitter filter f_i and receiver filter f'_j must equal a vector with one unity element and zeros elsewhere for i = j, and the all-zero vector 0 for i ≠ j. The variables f_i and f'_j are the vector forms of the CAP transmitter and receiver finite impulse response (FIR) filters, respectively, and P(f_i) is a shift matrix operating on vector f_i, where the shift is by the upsampling factor. The constraint in (13) requires the receiver to be the matched filter of the transmitter. There is some minimum bandwidth value f_B,min that allows the PR condition; any value smaller than f_B,min will not admit a PR solution. For a 3D-CAP system, the minimum bandwidth required is 3/(2T), which follows from (9). Therefore, the boundary frequency f_B must be at least 3/(2T). Figure 3 shows the 3D-CAP or 4D-CAP system. Data in the transmitter are mapped according to the given constellation by converting raw data bits into multi-level symbols. These symbols are upsampled and shaped by the digital finite impulse response (FIR) shaping filters to produce the transmitted signal, where x_i denotes the signal after the upsampling process and the subscript i denotes the dimension. After conversion to a digital signal at the receiver, each dimension of the signal is recovered by filters matched to the shaping filters at the transmitter. The filter outputs are downsampled to form x'_i, which are then equalized and decoded to retrieve the original data stream.
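The PR condition can be checked numerically. The sketch below assumes a list of FIR filter taps (such as the f1, f2 built earlier) normalized to comparable energy, and an upsampling factor up; the normalization of the cross terms is simplified for brevity, so this is a diagnostic sketch rather than the OA itself.

```python
import numpy as np

def pr_check(filters, up):
    """Measure deviation from perfect reconstruction: the cascade of a
    transmit filter f_i with a matched receiver filter (time-reversed f_j),
    sampled at symbol-spaced instants, should be a unit impulse when
    i == j (no ISI) and all zeros when i != j (no crosstalk)."""
    worst_isi, worst_xtalk = 0.0, 0.0
    for i, fi in enumerate(filters):
        for j, fj in enumerate(filters):
            cascade = np.convolve(fi, fj[::-1])      # f_i cascaded with f'_j
            peak = np.argmax(np.abs(cascade))        # main tap location
            taps = np.abs(cascade[peak % up::up])    # symbol-spaced samples
            if i == j:
                taps = taps / taps.max()             # normalize main tap to 1
                worst_isi = max(worst_isi, np.sort(taps)[-2])
            else:
                worst_xtalk = max(worst_xtalk, taps.max())
    return worst_isi, worst_xtalk

# e.g. for the 2D Hilbert pair built earlier: pr_check([f1, f2], up=4)
```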
Multiband CAP (MultiCAP)
The basic concept of Multiband CAP (MultiCAP) rests on dividing the signal into independent smaller sub-bands occupying different frequency bands. This means that the signal power and modulation order can be assigned freely according to the SNR in each sub-band, increasing the flexibility of the total throughput in comparison to conventional CAP.
In MultiCAP systems, each sub-band is assigned to a user using a distinct shaping filter pair at the transmitter, and the users can retrieve their original data using the matched filter pair at the receiver. In this way, N sub-bands can be assigned to N users without any interference. As illustrated in Figure 4, MultiCAP employs more than one filter pair, one for each data stream. Each individual data stream in MultiCAP complies with the 2D-CAP modulation principle stated in the previous subsection. Each data stream is separately mapped into multilevel M-QAM symbols. The signal is upsampled before being split into the real and imaginary parts, I_k and Q_k, before filtering. Each pair of modulated waveforms, as in (1) and (2), forms a Hilbert pair that constitutes the in-phase and quadrature filters. The impulse responses of the filters are given as the product of a square-root raised cosine (SRRC) filter and a cosine (real) or sine (imaginary) wave with a frequency at least twice that of the pulse width of the SRRC filter. The output of each filter is real-valued, and the outputs are summed to form the time-domain signal for transmission, S(t).
Here, T indicates the symbol duration and ∗ denotes convolution; in the standard MultiCAP form, S(t) is the sum over sub-bands of the in-phase streams convolved with their in-phase filters minus the quadrature streams convolved with their quadrature filters. The received signal passes through the time-inverted in-phase and quadrature receiver filters, which are matched to the corresponding transmitter filters. The signal is then downsampled, equalized, and demapped to obtain the transmitted data.
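A sketch of the sub-band filter construction follows. The sub-band center frequencies and the non-overlapping layout are one common MultiCAP arrangement (an assumption, not the only possible design); g and t are the SRRC prototype and time grid from the earlier 2D-CAP sketch.

```python
import numpy as np

def multicap_filters(g, t, T, beta, m):
    """Build m sub-band CAP filter pairs from one SRRC prototype g(t).
    Sub-band n is centred at f_n = (2n - 1)(1 + beta) / (2T), stacking
    adjacent passbands side by side; the sampling rate (upsampling
    factor) must be high enough to cover the highest sub-band."""
    pairs = []
    for n in range(1, m + 1):
        fn = (2 * n - 1) * (1 + beta) / (2 * T)
        pairs.append((g * np.cos(2 * np.pi * fn * t),   # in-phase filter
                      g * np.sin(2 * np.pi * fn * t)))  # quadrature filter
    return pairs

# e.g. two sub-bands for two users: multicap_filters(g, t, T=1.0, beta=0.2, m=2)
```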
CAP MOTIVATION
The carrierless amplitude phase (CAP) modulation format, formerly known as carrierless AM/PM, was proposed in 1975 by Bell Labs as a viable modulation technique for high-speed communication links over copper wires [9]. CAP only began attracting attention and became popular during the early and mid-1990s with the dawn of digital subscriber loop (DSL) techniques aimed at private consumers' asymmetric digital subscriber lines (ADSL) [10], [11].
The qualities shown by CAP attracted interest for implementation in the asynchronous transfer mode (ATM) local area network (LAN) standard [12]. Consequently, CAP was adopted early and widely used for ADSL and ATM LANs because of its high bandwidth efficiency and low implementation costs. However, CAP proved to be very sensitive to non-flat spectral channels and required very complex equalizers [13], sacrificing its simplicity. Since then, CAP was pushed aside in favor of DMT modulation by the International Telecommunication Union (ITU) [14].
However, for high speed transmission, CAP modulation has been demonstrated to be simpler than and to perform better than discrete multitone (DMT) [15]. A multicarrier modulation format like DMT is much more complex than CAP: an inverse fast Fourier transform (IFFT) and a fast Fourier transform (FFT) are needed in the modulation and demodulation process, whereas CAP uses a filter, i.e. a digital convolution, with less computational complexity.
An interesting feature of CAP is the possibility to extend its signal basis to higher dimensions for DSL applications [6], [16]. CAP supports modulation in more than two dimensions, where orthogonal pulse shapes can be identified to provide multiple-service applications. For high dimensionality CAP, the optimization algorithm (OA) has been used to extend the conventional 2D-CAP scheme to higher dimensionality and to assure perfect filter reconstruction [8].
High dimensionality 3D-CAP and 4D-CAP with directly modulated vertical cavity surface emitting lasers (DM-VCSELs) over 20 km of SSMF has been experimentally demonstrated for the first time to provide more flexibility in optical fiber systems [17]. However, it requires excessive bandwidth due to the higher upsampling factor, resulting in decreased spectral efficiency. This tradeoff between flexibility and spectral efficiency needs to be considered in the system design. Nevertheless, CAP modulation with higher dimensions is useful and highly beneficial for service and user allocation in WDM optical access [18]. The results indicate the prospects of combining orthogonal division multiple access (ODMA) with WDM networks for service and user allocation.
In order to extend the bandwidth of each channel for high speed data transmission, the multi-level multi-band CAP (MultiCAP) or m-CAP modulation format (where m is the number of subcarriers or sub-bands) has been introduced by dividing the CAP signal into smaller sub-bands [19], [20]. The use of CAP in a novel multiband approach (MultiCAP) for high capacity optical data links managed to extend the system capacity to 102.4 Gb/s over 15 km of SSMF. The result achieves record spectral efficiency, increases tolerance towards dispersion and bandwidth limitations, and reduces the complexity of the transceiver. The transmission distance for high speed CAP is limited by the spectrum fading effect caused by chromatic dispersion. Optical single-sideband (OSSB) technology is a good solution against the spectrum fading effect. Therefore, the first demonstration of MultiCAP in a WDM-CAP-PON based on OSSB for a multi-user access network has been reported [21]. The experiment successfully transmitted 11 channels and 55 sub-bands for 55 users at a 10 Gbps downstream data rate per user over 40 km of SMF.
CAP PROSPECT IN OPTICAL COMMUNICATION SYSTEM
In recent years, there has been growing interest in implementing the CAP modulation format in various optical communication system applications. Numerous comparisons between CAP and other modulation schemes have also been reported, showing how the CAP modulation format manages to outperform others.
For the first time, high speed 40 Gb/s 2D-CAP-16 modulation over 10 km of standard single mode fiber (SSMF) has been shown to provide significant dispersion advantages compared to non-return-to-zero (NRZ) modulation [22]. It proves the feasibility of generating and decoding 40 Gb/s CAP channels using current low-cost transversal filters. A comparison of CAP and DMT modulation using DM-VCSELs has been carried out experimentally over MMF, as shown in Figure 5 [24], [25]. The results show that for lengths up to 30 km of SMF, the 2D-CAP-16 scheme without forward error correction (FEC) offers good performance and has half the power consumption of optical orthogonal frequency division multiplexing (OOFDM) schemes at a 28 Gb/s data rate.
The comparison of link power budget and power dissipation has been extended to 100 Gb/s CAP-16/64 and 16/64-QAM-OFDM systems over 2 km FEC-enhanced SMF links using directly modulated lasers (DMLs) [26], [27]. Simulation results showed that, incorporating FEC and decision feedback equalisation (DFE), single channel CAP-16 and 16-QAM-OFDM can support transmission over >5 km of SMF, with a transceiver power dissipation of about 2 times that of the 4×25 Gb/s NRZ version of 100 Gigabit Ethernet. CAP schemes that do not require DACs/ADCs, however, have great potential for cost-effectiveness and power efficiency.
Step-index polymer optical fibers (SI-POF) are a particularly attractive transmission medium for high speed in-home optical network applications due to their easy installation and low cost properties [28]. POF links use a light emitting diode (LED) as the optical source to sustain system cost-effectiveness and ease of installation and maintenance. The main challenge arising in LED-based SI-POF links is to further increase the system capacity, given the limited bandwidth of the LED and the SI-POF itself.
A straightforward way to increase system capacity is to improve spectral efficiency by allowing the same bit rate to be transmitted using a reduced bandwidth. In this case, employing a simple implementation of the CAP modulation format in an SI-POF link can be an appealing approach to increase bandwidth efficiency while maintaining overall low complexity and relatively low energy cost.
For this reason, a record high error-free transmission without forward error correction (FEC) over a 50 m SI-POF link at 1.5 Gbps has been experimentally demonstrated, where the 2D-CAP-64 scheme is shown to generally give a higher system margin than NRZ modulation using a low-cost, commercially available resonant cavity light emitting diode (RC-LED) [29].
Comparisons between gigabit NRZ, CAP, and optical OFDM systems over FEC-enhanced 50 m POF links at 2.1 Gbps using LEDs have been performed. Figure 6 [30], [31] shows a plot of the maximum bit rate versus POF length for NRZ, CAP, and optical OFDM systems. For all the modulation schemes, the bit rate decreases with increasing POF length, simply due to increased fiber dispersion. However, the results show that CAP-64 outperforms both the NRZ and 64-QAM-OFDM schemes for all POF lengths and supports record high 3.5 Gb/s bidirectional and 2.1 Gb/s unidirectional transmission over a 50 m POF link. The unidirectional 2.1 Gb/s transmission over 50 m of POF achieved by CAP-64 represents an improvement of 70% in capacity compared with the published best performance of a 1.25 Gb/s NRZ signal over 50 m of POF in unidirectional transmission [32].
CAP-64 modulation also consumes transceiver power similar to NRZ modulation, whereas 64-QAM-OFDM consumes double that. This indicates that CAP-64 modulation offers great potential in terms of signal capacity and power efficiency for LED-based POF links. In addition, a comparison between CAP and PAM modulation schemes for data transmission over SI-POF has been carried out [33]. Using 2D-CAP-16 over a 50 m SI-POF link at 5 Gbps, the measurement results show that M-CAP offers potentially the same spectral efficiency as a corresponding M-PAM modulation format and provides slightly better performance when the signal-to-noise ratio (SNR) is high enough.
LED-based visible light communication (VLC) has attracted considerable interest in recent years as an alternative wireless communication technique for next-generation indoor wireless LANs, given the LED's capability to simultaneously provide illumination and communication. VLC using white LEDs offers many advantages for home area networking and next-generation short-range wireless access, such as worldwide availability, high security, immunity to radio frequency interference, and spatial reuse of the modulation bandwidth in adjacent communication cells [34]. A highly spectrally efficient modulation format like CAP can be a direct method to increase the capacity in the face of the severe bandwidth limitation imposed by the LED in a VLC system.
A performance comparison between CAP and OFDM signals over a high capacity RGB-LED-based WDM visible light communication (VLC) link with 25 cm air transmission has been made [35]. The maximum aggregate data rates of CAP and OFDM are 3.22 and 2.93 Gb/s, respectively, showing that the CAP scheme gives competitive performance compared to OFDM and provides an alternative spectrally efficient modulation for next generation optical wireless networks.
A comparison of PAM, CAP, and DMT modulation in a VLC link using a white phosphorescent light-emitting diode (LED) has been reported [36]. At data rates of 450 Mbps over 30 cm air transmission, 2-level PAM and CAP modulations exhibit better immunity to nonlinear distortions and allow for a lower BER than their 4-level counterparts. DMT performance, however, was substantially worse than the performance of the 2-level modulations.
CONCLUSION
This paper has reviewed the employment of the CAP modulation format in various optical communications applications. Recently, the CAP modulation format has gained serious interest in the research community due to its intriguing properties, such as high spectral efficiency, scalability to higher order modulation, potentially low cost implementation, and simplicity compared to modulation formats with a carrier. The flexibility to extend the number of CAP orthogonal filters for higher dimensional CAP can potentially be utilized to allocate different services to different users.
Numerous papers have investigated the employment of the CAP modulation format in new Ethernet standards, visible light communications (VLC), single mode fiber (SMF), multimode fiber (MMF), and polymer optical fiber (POF) links. Comparisons of CAP against other modulation formats have also been demonstrated, and the results show that CAP can be an alternative modulation format candidate with competitive performance in optical transmission system applications.
However, a significant drawback of a CAP system is that it requires a flat channel frequency response; a non-flat frequency response of the transmission system will significantly degrade performance. Therefore, further research needs to be carried out to implement more advanced equalization techniques in order to mitigate such impairments.
Another drawback of CAP is the difficulty of further improving the system spectral efficiency: the utilization of high dimensional CAP requires a high upsampling factor, which limits the overall achievable spectral efficiency. Continuous effort must also be devoted to improving the CAP receiver, which is highly sensitive to timing errors and highly susceptible to symbol timing offset and jitter. With further improvement of the CAP system, it is worth mentioning that CAP deserves further attention for next generation optical access, in-home, and wireless networks.
"Engineering",
"Physics"
] |
An LSTM and GRU based trading strategy adapted to the Moroccan market
Forecasting stock prices is an extremely challenging job considering the high volatility and the number of variables that influence them (political, economic, social, etc.). Predicting the closing price provides useful information and helps the investor make the right decision. The use of deep learning, and more precisely of recurrent neural networks (RNNs), in stock market forecasting is an increasingly common practice in the literature. Long Short Term Memory (LSTM) and Gated Recurrent Unit (GRU) architectures are among the most widely used types of RNNs, given their suitability for sequential data. In this paper, we propose a trading strategy designed for the Moroccan stock market, based on two deep learning models: LSTM and GRU to predict the closing price in the short and medium term, respectively. Decision rules for buying and selling stocks are implemented based on the forecasts given by the two models; then, over four 3-year periods, we simulate transactions using these decision rules with different settings for each stock. The returns obtained are used to estimate an expected return. We only hold stocks that outperform a benchmark index (expected return > threshold). Random search is then used to choose one of the available parameters, and the performance of the portfolio built from the selected stocks is tested over a further period. Repeating this process while varying the portfolio size makes it possible to select the best possible combination of stocks, each with an optimized parameter for the decision rules. The proposed strategy produces very promising results and outperforms the indices used as benchmarks in the local market. Indeed, the annualized return of the proposed strategy during the test period is 27.13%, while it is 0.43% for the Moroccan All Shares Index (MASI), 15.24% for the distributor sector index, and 19.94% for the pharmaceutical industry index. Note that brokerage fees are estimated and subtracted for each transaction, which makes the reported performance even more realistic.
trading strategy to better trade stocks and outperform the market. Accurate forecasts provide valuable information to investors and allow them to adjust their position (buy, hold, or sell) according to the price trend; they also allow building a winning trading strategy. This explains the large number of works published in this field in recent years. Among all of these articles, we find a lack of works devoted to the Moroccan market. In this paper, we propose a new trading strategy tailored to the Moroccan market comprising two parts: the first part for forecasting, and the second dedicated to decision rules for buying and selling stocks. The proposed strategy will be the first strategy designed for the Moroccan market. Unlike the American or European markets, where almost every day there is an interesting trade with a high return expectation, the Moroccan market is characterized by its occasional opportunities. The proposed strategy addresses this problem and guarantees a high return even in a crisis period (the impact of COVID-19 on the Moroccan stock market). The results of the proposed approach outperform the set of indices used in the local market as benchmarks for evaluating the performance of financial products such as Undertakings for Collective Investment in Transferable Securities (UCITS). The rest of this article is structured as follows: in the related work section, we go through a non-exhaustive list of previous works using LSTM and GRU in several areas, and then focus on articles using both architectures for stock price forecasting. In the following section, we provide a more detailed overview of the proposed trading strategy components. The experimental results section is dedicated to the experiments as well as the results obtained, accompanied by the discussion section in which we comment on the results. Finally, this paper ends with a conclusion and future work.
Related work
Machine learning techniques [3] are used in a variety of fields. Deep learning is one of the most popular, particularly for time series problems using recurrent units (mainly LSTM and GRU) perfectly suited to the sequential nature of the data. In fact, LSTM and GRU architectures have shown high performance in forecasting tasks in several fields, such as healthcare, transportation, and finance. Shahid et al. [4] evaluated different machine learning and deep learning models to predict confirmed cases, recovered cases, and death cases for the COVID-19 pandemic. They found that LSTM, GRU, and Bidirectional LSTM (Bi-LSTM) provided accurate predictions for all three time series. ArunKumar et al. [5] propose RNN-LSTM and RNN-GRU models for a 60-day forecast of the COVID-19 pandemic. For the prediction of confirmed cases, the authors conclude that the LSTM model performs quite well for countries like the United States, Brazil, South Africa, and Iran, while the GRU model performs better for countries like India, Russia, Mexico, and the United Kingdom. Wang et al. [6] proposed a new approach for truck traffic flow prediction; the authors noticed that LSTM and GRU have better performance compared to existing approaches based on support vector machines (SVM) or the autoregressive integrated moving average (ARIMA). Regarding financial time series modeling, the number of papers including LSTM and GRU is very large. Livieris et al. [7] present a CNN-LSTM model for forecasting gold price time series. The proposed model was compared to advanced deep learning and classical machine learning approaches, and it turns out that it provides the best performance: the mean absolute error (MAE) and the root mean squared error (RMSE) are 0.008 and 0.09, respectively (the gold price variable ranging between $100 and $134). Liu et al. [8] propose a new hybrid system for one-day-ahead forecasting: closing price data are decomposed into several components using the Empirical Wavelet Transform (EWT) algorithm [9]; LSTM predictors with dropout are built to predict the decomposed closing price time series; and the LSTM model hyperparameters are selected using the particle swarm optimization (PSO) algorithm [10]. Error correction is carried out using the outlier robust extreme learning machine (ORELM) method. Finally, the LSTM forecasts and the ORELM error forecasts are added to produce the final closing price prediction. The proposed framework produces excellent results (mean absolute percentage error (MAPE) ≈ 0.15%). In [11] Althelaya et al. used GRU and LSTM units to build stacked and bidirectional architectures for short term (1 day ahead) and long term (30 days ahead) forecasting of the S&P 500 index. The results of the GRU and LSTM models are very close and outperform the multilayer perceptron (MLP), which was also used in that study. In [12] Patel et al. propose a model incorporating LSTM and GRU for one-, three-, and seven-day-ahead prediction of the Litecoin and Monero cryptocurrencies. In order to build a threshold-based portfolio, Lee and Yoo [13] develop three types of RNN models (classical RNN, LSTM, and GRU) to forecast, one month ahead, the top ten stocks in the Standard and Poor's 500 index using monthly data (OHLCV: open, high, low, close, and volume). They conclude that the LSTM outperformed the two other architectures. In [14] Cao et al.
propose a new hybrid financial time series forecasting method: decomposition is carried out using empirical mode decomposition (EMD) [15] and complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) [16]; an LSTM model is then trained on the intrinsic mode functions (IMFs), including the residual component; and the final forecast is obtained by adding the set of predictions obtained for each component. Despite the wide availability of financial forecasting papers, there is no model adapted to the Moroccan market in the literature; however, many works have been proposed for other markets. Li et al. [17] propose a framework for predicting the Hong Kong stock market. The proposed framework incorporates stock market data and news sentiment: technical analysis is used to represent stock prices, sentiment analysis is used to represent news information, and the sequential representation is then fed to an LSTM model for the prediction task. The proposed model outperforms SVM and Multiple Kernel Learning (MKL) used as benchmarks. In [18] Budiharto proposes an LSTM-based approach for stock price forecasting in Indonesia. Yadav et al. [19] propose an optimized LSTM for Indian stock market forecasts. The Sri Lankan market was the subject of an RNN model proposed by Samarawickrama et al. in [20]. Nti et al. [21] use artificial neural networks (ANN) to predict future movements of stock prices in Ghana for 1, 7, 30, 60, and 90 days ahead based on public opinion. The papers cited above demonstrate that both LSTM and GRU models perform brilliantly in financial time series forecasting. We will also use them in our proposed approach.
Proposed model
In this paper, we propose a new trading strategy tailored to the Moroccan market, based on two deep learning models: an LSTM model provides the short term prediction, while a GRU model predicts the medium term. Once the predictor component provides the price forecasts, suitable decision rules for each stock are established to complete the proposed strategy. Before going into the depth of the proposed approach, we give a brief overview of the LSTM and GRU architectures.
Long short term memory
Recurrent neural networks (RNNs) are a type of neural network widely used in the field of deep learning [22][23][24]. It turns out that classical RNNs are extremely difficult to train to handle long term dependencies [25] because of the vanishing gradient problem (gradient exploding can also occur, but very rarely). To overcome the vanishing gradient problem, LSTM was proposed initially by Hochreiter et al. [26], then improved by Gers and Schmidhuber [27]. The LSTM unit is the most basic component of an LSTM architecture; it is a series of gates and cells that work together to produce a final result. The forward pass of an LSTM unit is modeled by Eqs. (1)-(6), where σ is the sigmoid function, f_t the forget gate activation vector, i_t the input gate activation vector, o_t the output gate activation vector, c̃_t the cell input (candidate) activation vector, c_t the cell state, and h_t the output vector of the LSTM unit; all W and U are weights, b denotes bias vectors, and the symbol ∗ stands for the Hadamard (element-wise) product. The weights W, U and biases b are to be learned during the training process. In order to better decipher the equations cited above, let us start with the cell state c_t. It contains two kinds of information: old information to keep from the past state c_{t−1}, specified using the forget gate, which decides the percentage of information to keep by computing a value between 0 (discard completely) and 1 (keep completely); and new information to include in the cell state, calculated using the input gate i_t and the cell activation c̃_t, which are computed using (2) and (3), respectively. Finally, the calculation of the final output value is performed in two steps. A potential value is first calculated using (5). This value is then regulated using the information present in the cell state, as indicated in (6). The use of the cell state in the final calculation makes the LSTM powerful in tasks where information must be stored and used later (long term). Language modeling is a simple example of this situation: the conjugation of a verb in the middle or even at the end of a sentence depends on the subject at the beginning of the sentence.
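For reference, the standard LSTM forward pass consistent with the description above, with numbering matching the references to (1)-(6) in the text, can be written as:

```latex
\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) && (1)\\
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) && (2)\\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) && (3)\\
c_t &= f_t * c_{t-1} + i_t * \tilde{c}_t && (4)\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) && (5)\\
h_t &= o_t * \tanh(c_t) && (6)
\end{aligned}
```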
Gated recurrent unit
The gated recurrent unit (GRU) was introduced in 2014 by Cho et al. [28] to solve the vanishing gradient problem experienced by classical recurrent networks. As in the LSTM, the input value interacts with the information from the previous state to calculate the values of intermediate gates, which are subsequently used to decide on the output value. The forward pass of a GRU unit is modeled by Eqs. (7)-(10), where σ is the sigmoid function, z_t the update gate activation vector, r_t the reset gate activation vector, h̃_t the candidate vector, and h_t the output vector of the GRU unit. W and U are weights, b is a bias vector, and the symbol ∗ stands for the Hadamard product. As with the LSTM, weights and biases are learned during the training process. Let us go through the equations cited above to better understand how a GRU unit works. First, the update gate is computed using the input vector x_t, the output of the previous unit h_{t−1}, and the sigmoid activation function. The reset gate is calculated in the same way as the update gate, using its own weights and bias. The reset gate is then involved in the candidate value calculation: it determines how much information from the previous state should be preserved. Indeed, from Eq. (9) we can notice that with an r_t value very close or equal to zero, only the input value is considered in the candidate value calculation. Finally, the output value is calculated by calibrating between the previous output and the new candidate output. The calibration is carried out by the update gate z_t (z_t = 0 copies the previous output; z_t = 1 generates a new output regardless of the old one). We can observe a kind of similarity between the LSTM and the GRU; indeed, both implement an intermediate gating mechanism used later in the output value calculation. Regarding the performance of the two models, Chung et al. [29] demonstrated that the GRU outperforms the LSTM in a number of tasks. This observation is also shared by Jozefowicz et al. [30], who found that the GRU outperforms the LSTM in a number of tasks except language modeling. On the other hand, Shewalkar et al. [31] find that the LSTM outperforms the GRU in the speech recognition task; however, they confirm that GRU optimization is faster. In general, both LSTM and GRU are powerful and well suited to sequential data. The GRU has the benefit of faster optimization compared with an LSTM because it has fewer parameters.
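Similarly, the standard GRU forward pass consistent with the description, matching the references to (7)-(10), is:

```latex
\begin{aligned}
z_t &= \sigma(W_z x_t + U_z h_{t-1} + b_z) && (7)\\
r_t &= \sigma(W_r x_t + U_r h_{t-1} + b_r) && (8)\\
\tilde{h}_t &= \tanh(W_h x_t + U_h (r_t * h_{t-1}) + b_h) && (9)\\
h_t &= (1 - z_t) * h_{t-1} + z_t * \tilde{h}_t && (10)
\end{aligned}
```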
Prediction component
This component aims to forecast the short and medium term price pattern; high forecasting accuracy will lead to a successful trading strategy. To better capture the trend, we are interested in the moving average, described by formula (11):

y_t^h = (1/h) \sum_{i=1}^{h} y_{t+i}, (11)

where y_t is the closing price at time t. We use the notation y_t^h to denote the moving average of the next h days at time t. This is the target variable we attempt to forecast: it reduces daily price noise and summarizes the trend information well. Fig. 1 depicts how the average price captures the trend data. Once the target variable is identified, we need to deal with data issues. Indeed, the Moroccan market presents peculiarities that distinguish it from well-developed markets such as the European or American markets. The major problem lies in lower liquidity: we observe a discontinuity of exchanges for a large number of shares, as shown in Fig. 2. It often happens that a stock is neither sold nor bought for several days, or even several weeks or months, which automatically results in missing data for those trading days. In addition, the Moroccan market only offers 76 stocks for trading [32]. This small number, combined with the missing data problem, considerably reduces the amount of data obtainable over a given number of years. It is well known that acquiring a large dataset helps enhance model performance and keeps it away from overfitting [33]. To address the data volume problem, we use external data from other markets (the American and French markets); these collected data make it possible to obtain a large dataset. With regard to the trading discontinuity problem, we rule out the stocks that suffer from liquidity issues and retain the relatively demanded shares. At this level, we have all the elements required to train our two deep learning models. Data from the US and French markets serve as training data, while Moroccan data act as validation and testing data. Using only historical prices, two recurrent networks are trained to forecast future average closing prices for the short and medium term. The two models are shown in Fig. 3. Although the training data were gathered from external markets, the use of local data for validation allows the parameters to be adjusted to the Moroccan market's characteristics.
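As a sketch of the target in Eq. (11) and of the liquidity filter described above (the column layout, threshold value, and function names are our illustrative assumptions):

```python
import pandas as pd

def forward_moving_average(close: pd.Series, h: int) -> pd.Series:
    """Eq. (11): mean closing price over the next h trading days."""
    # The rolling mean ending at t+h covers days t+1 .. t+h; the shift brings it to t.
    return close.rolling(window=h).mean().shift(-h)

def liquid_stocks(volume: pd.DataFrame, min_avg_volume: float) -> list:
    """Rule out illiquid stocks: keep tickers with high average traded volume."""
    avg = volume.mean()
    return avg[avg >= min_avg_volume].index.tolist()

# Example: short term (5-day) target on a toy price series.
prices = pd.Series([10.0, 10.2, 10.1, 10.4, 10.6, 10.5, 10.8, 11.0, 10.9, 11.2, 11.1])
y_short = forward_moving_average(prices, h=5)
```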
Parameters tuning
To develop a successful trading strategy, forecasts should be followed by good decision rules. Let ŷ_t^m and ŷ_t^s be the predictions given by our predictor component for the medium and short term horizons, respectively. We define the following ratio:

θ = ŷ_t^m / y_t, (12)

where y_t is the closing price at time t. This ratio indicates how the medium term prediction compares to the actual price; a value greater than one means the price is expected to rise during the upcoming days. We will try to identify an optimal threshold θ*: when it is exceeded, trading is profitable. The methodology used to evaluate θ* is detailed below, once all the decision rules of the strategy have been described. In the trading field, buying a financial instrument is known as opening a position, and selling it is called closing a position. In our proposed strategy, we only open a position when three conditions are verified:
• The ratio θ exceeds the tuned threshold θ*.
• The short term forecast does not announce a near-term price drop (ŷ_t^s ≥ y_t), where ŷ_t^s is the short term forecast and y_t is the actual closing price.
• There is no open position with the stock to buy.
The first condition seems quite clear: the medium term forecast should exceed a given threshold (tuned beforehand for each stock). The second condition completes the strategy and regularizes the open position timing: even if we expect prices to increase over the medium term horizon, the price can decrease over the short term horizon before increasing. In this case, we delay the purchase in order to buy at a lower price. Concerning the close position conditions, they are more intuitive: we close the position when the prediction is below the current price for both the short and medium term horizons. To secure a trading operation, a trader typically sets a stop limit, a minimal limit which, when reached, immediately closes the operation in order to reduce losses in case of a strong price drop. Notice that no stop limit is defined in our proposed strategy. This is risky, but the ground truth proves us right: the results obtained support this decision. Now that the strategy is well detailed, we can describe the tuning methodology for the θ parameter. We simulate the strategy for each stock over periods of equal duration (3 years each, for example). The performance (annualized return) of each stock is tracked for various θ values. Across all simulation periods, we record the return achieved by every stock for each value of the θ parameter. The returns obtained are used to estimate the expected return for each configuration (each value of θ). A return threshold is then set, and we keep the configurations (stock, θ) whose estimated return exceeds the threshold. The selected stocks are then used to build portfolios of varying sizes, and the performance of the portfolios is evaluated by a simulation over a new period. For stocks with multiple θ values that meet the selection criteria, θ is chosen at random (several repetitions are performed to check various θ values). This whole process is carried out for a variety of portfolio sizes; the suggested method is a hybrid between grid search and random search. Finally, the shares in the portfolio with the maximum observed return are retained by our strategy. The entire portfolio selection process is depicted in Fig. 4. It is important to highlight that in all simulations, brokerage fees are estimated and added to buying prices or subtracted from selling prices. Regarding the time complexity of our approach: let s be the number of stocks, p the number of θ values used during the simulation, and d the number of trading days available during the simulation period; the simulation then requires on the order of s × p × d strategy evaluations.
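The decision rules above can be summarized in a short sketch; the exact form of the short term condition (ŷ_t^s ≥ y_t) and all names are our assumptions based on the description, not the authors' code.

```python
def trading_signal(y_t, yhat_medium, yhat_short, theta_star, position_open):
    """Return 'open', 'close', or 'hold' following the rules above.

    y_t           -- actual closing price at time t
    yhat_medium   -- medium term forecast (10-day moving average)
    yhat_short    -- short term forecast (5-day moving average)
    theta_star    -- stock-specific tuned threshold
    position_open -- whether a position is currently open for this stock
    """
    theta = yhat_medium / y_t  # Eq. (12)
    if not position_open:
        # Open only when the medium term looks profitable and the
        # short term does not announce a further price drop.
        if theta > theta_star and yhat_short >= y_t:
            return "open"
    else:
        # Close when both horizons forecast prices below the current one.
        if yhat_medium < y_t and yhat_short < y_t:
            return "close"
    return "hold"
```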
Dataset description
In order to train the predictor component models, we use two sources of data. The first is data for all stocks in the SP500 index (except BRK.B and BF.B, discarded due to a data loading error), extracted from Yahoo Finance using the pandas-datareader library [34]. The second is data for all stocks in the CAC40 index, collected from investing.com using the investpy library [35]. The extracted data run from January 2010 to January 2019. Concerning the Moroccan data, to escape the discontinuity issue described in the prediction component section, we work with stocks that had a relatively high average traded volume in 2019 and 2020. As a result, out of the 76 stocks available, we choose 32. Data from January 2010 to January 2019 are used as validation data to tune the LSTM and GRU predictor parameters, while data from March 2019 to March 2021 are used to test (hold-out dataset) our proposed strategy's performance. Note that the Moroccan data are extracted in the same way as the CAC40 data.
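For reference, data loading along the lines described might look as follows; the ticker symbols below are illustrative, and availability depends on the upstream data providers.

```python
import pandas_datareader.data as web
import investpy

# One S&P 500 constituent from Yahoo Finance (training data).
aapl = web.DataReader("AAPL", "yahoo", start="2010-01-01", end="2019-01-31")

# One CAC40 constituent from investing.com via investpy.
air = investpy.get_stock_historical_data(stock="AIR", country="france",
                                         from_date="01/01/2010",
                                         to_date="31/01/2019")
```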
Metrics
We have two parts to evaluate. First, we have to evaluate the quality of the predictions provided by both models, LSTM and GRU. We use the popular metrics mean absolute percentage error (MAPE), mean squared error (MSE), and root mean squared error (RMSE), given respectively by Eqs. (13)-(15):

MAPE = (100/n) \sum_{i=1}^{n} |y_i - ŷ_i| / |y_i|, (13)
MSE = (1/n) \sum_{i=1}^{n} (y_i - ŷ_i)^2, (14)
RMSE = \sqrt{MSE}, (15)

where y is the ground truth variable and ŷ is its prediction. The proposed trading strategy is also evaluated using return-oriented metrics. We use the return given by the following formula:

r = (y_f - y_i) / y_i, (16)
Fig. 4 Stock selection process
where y_f and y_i are, respectively, the final selling price and the initial buying price. We also use the annualized return to capture the effects of compounding (earnings reinvestment); the annualized return is calculated using the following formula:

r_a = (1 + r)^{1/years} - 1, (17)

where r is the return calculated in Eq. (16) and years is the total investment period in years.
The win rate is also used; it is an indicator that gives the winning trade percentage, given by the following formula:

winrate = 100 × (number of winning trades) / (total number of trades).
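A compact sketch of the evaluation metrics in Eqs. (13)-(17) plus the win rate (the function names are ours):

```python
import numpy as np

def mape(y, y_hat):
    """Eq. (13): mean absolute percentage error, in percent."""
    return 100.0 * np.mean(np.abs((y - y_hat) / y))

def mse(y, y_hat):
    """Eq. (14): mean squared error."""
    return np.mean((y - y_hat) ** 2)

def rmse(y, y_hat):
    """Eq. (15): root mean squared error."""
    return np.sqrt(mse(y, y_hat))

def trade_return(buy_price, sell_price):
    """Eq. (16): simple return of one trade."""
    return (sell_price - buy_price) / buy_price

def annualized_return(r, years):
    """Eq. (17): annualized (compounded) return."""
    return (1.0 + r) ** (1.0 / years) - 1.0

def win_rate(trade_returns):
    """Winning trade percentage over a list of per-trade returns."""
    trade_returns = np.asarray(trade_returns)
    return 100.0 * np.mean(trade_returns > 0)
```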
Strategy components
We try two types of RNN architectures for the predictor component: for short term prediction we train an LSTM model, and for medium term forecasting we train a GRU model. All models are designed with the Keras API [36], and the number of layers and units is determined by a random search. Weights are initialized using the He initializer [37] and optimized using the adaptive moment estimation (Adam) algorithm [38]. Finally, Dropout [39] is used as a regularization technique, and Gaussian noise is added for better generalization [40]. Once the prediction component is completed, a simulation is run over the periods 2010 to 2012, 2011 to 2013, 2012 to 2014, and 2013 to 2015, using a set of values of the θ parameter and for all stocks. We calculate the stock return obtained over each period using formula (16); the return is then annualized using (17) (the investment period is 3 years for each period). The top three results for each simulation period, as well as the parameters used, are shown in Table 1. Then a benchmark return is created, and we pick the stocks that outperform the benchmark (using the estimated return). Using random search, we try several stock combinations and choose the combination that yields the highest return. Note that the amount invested is the same for each stock, and the profit generated from trading is fully reinvested (compounding). Finally, the return is calculated at the end of the period from the initial amount and the final amount generated.
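The paper specifies Keras, He initialization, Adam, Dropout, and Gaussian noise, but not the exact topology, so the layer sizes, input window, and rates in the sketch below are illustrative assumptions:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_predictor(window=30, units=64, cell="LSTM", noise=0.01, dropout=0.2):
    """Recurrent predictor for the h-day moving-average price."""
    rnn = layers.LSTM if cell == "LSTM" else layers.GRU
    model = keras.Sequential([
        keras.Input(shape=(window, 1)),             # window of past closing prices
        layers.GaussianNoise(noise),                # noise for generalization [40]
        rnn(units, kernel_initializer="he_normal"), # He initialization [37]
        layers.Dropout(dropout),                    # regularization [39]
        layers.Dense(1),                            # forecast of y_t^h
    ])
    model.compile(optimizer="adam", loss="mse")     # Adam [38], MSE objective
    return model

short_term_model = build_predictor(cell="LSTM")   # 5-day target
medium_term_model = build_predictor(cell="GRU")   # 10-day target
```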
Results

COL (Colorado) and MDP (Med Paper) are the stocks to keep on our watch list, with θ parameters of 1.01 and 1.03, respectively. In order to evaluate our proposed strategy's performance, we use data between March 2019 and March 2021. Table 2 shows the LSTM and GRU model prediction evaluation. Notice that no benchmark is used to evaluate the predictions provided by the models, because prediction quality is not the main scope of the paper; we are more focused on the proposed strategy's performance. Table 3 illustrates the individual performance of our final stocks, Figs. 5 and 6 represent the buying and selling times for the same stocks, respectively, and Fig. 7 highlights the return generated per trade. In addition to the individual stock results, the performance of our proposed strategy is compared to the MASI index as well as to the following sector indices: Distributors, Pharmaceuticals Industry, and Software & IT Services. The sector indices were chosen based on their results (annualized return > 15%). Table 4 highlights this comparison. Notice that the decision to compare our results with index results is justified by the lack of publications dedicated to the Moroccan market; in addition, index performance has always acted as a benchmark for evaluating the performance of financial products such as UCITS.
Discussion
Referring to Table 2, if we inspect the MAPE metric, the LSTM model (short term prediction) achieves 1.9% and 3.1% for forecasting the COL and MDP stock prices, respectively, while the GRU model (medium term prediction) achieves 2.1% and 4.3% for the two stocks. Keep in mind that this is not a closing price forecast but a prediction of the moving average price over the next five days (resp. 10 days for the GRU); it is an excellent result, because if we considered the closing price the difference would be even smaller. This finding is supported by the win rates in Table 3 for our stocks (at least 91% for both), which proves that the potential increases detected by the GRU and the decreases predicted by the LSTM are relevant. For the same reason, the reported MSEs and RMSEs are also worst-case indicators; compared to the closing price, their values would be even smaller. Referring to Table 3, the individual performances of the selected stocks are perfectly aligned: the amount invested at the start of the test period increased by 57% for COL and by more than 64% for MDP. Another very strong observation for both stocks is the win rate ratio: 91% and 93% are quite high, indicating that our trades are winning most of the time. Fig. 7 gives a visual view of the win rate; we can see that most of the time the trade is winning and, in addition, 75% of the time the transaction return is greater than 2.5% for COL and greater than 5% for MDP. We also note the presence of some failures, with a losing trade of 14% and 20% for COL and MDP, respectively; the implementation of a security mechanism to limit losses could be an improvement. Despite this, the performance of the two stocks largely covers these losses and allows us to achieve a very satisfactory annualized return for our proposed strategy. From Table 4, the returns of our proposed strategy exceed the returns of all indices except the Software & IT Services index. This index's performance rose dramatically because of the widespread use of remote applications during the COVID19 pandemic. Fig. 8 confirms this finding: during the pandemic phase, the index reached a maximum never observed before. To better compare our strategy with the Software & IT Services index and all other indices, Fig. 9 illustrates the return evolution for our proposed strategy as well as for the other benchmark indices over the whole testing period. We clearly observe the supremacy of our proposed strategy. In fact, until April, our proposed method's return was near 100%, while the Software & IT Services index return did not exceed 40%; the Distribution index return was also around 40%, while the rest of the indices were lower. The COVID19 crisis halted the evolution of our proposed approach's performance: indeed, Fig. 10 shows a decrease in total return in March 2020. However, its return did not collapse during this crisis, and the overall result even during the crisis period remains very positive, which is a good point for our approach in terms of crisis management. Notice that pharmaceuticals, distribution, and information and communication technology (ICT) are among the sectors not impacted by the COVID19 crisis; on the contrary, they are among the sectors that gained from it. Despite this, only the ICT performance outperformed the result of our proposed strategy.
Conclusion
This paper proposes a new trading strategy tailored to the Moroccan market, driven by the forecasts of two models (LSTM and GRU). Customized decision rules for each stock complete the proposed trading strategy. Both deep learning models provide accurate short term and medium term forecasts. The proposed decision rules, developed through varied simulations, help detect the potential price uptrend (resp. downtrend) for the selected stocks (COL and MDP). The proposed approach allows selecting profitable stocks and building a portfolio that outperforms all indices used as benchmarks, except the Software & IT Services index, which witnessed an abnormal boom during the COVID19 pandemic. The proposed strategy is very promising and its performance is overall very satisfactory: our proposed model provides an annualized return of 27.13% and a monthly return of 2.02%, whereas the best performance of a competing index (excluding the Software & IT Services index) is 19.94% annualized and 1.53% monthly. In future work, we will continue to work on deep learning portfolio-building techniques, focusing much more on prediction over medium and long term horizons. We will also extend the preprocessing part by incorporating natural language processing (NLP) techniques to capture the effect of social media, news, and rumors on stock market prices.
| 6,888 | 2021-06-08T00:00:00.000 | [ "Computer Science" ] |
HERITAGE LANGUAGES ACQUIRED, LEARNED, AND USED AMONG KALIMANTAN UNIVERSITY STUDENTS, INDONESIA: A PERCEPTION
A heritage language is a language taught and learned from parents, passed down from generation to generation. In Kalimantan, heritage languages are highly varied and have developed under many internal and external influences. This research aims (1) to describe how students perceive the language(s) they acquired, learned, and used, and (2) to describe their use of and perceptions on heritage language maintenance. The research employed descriptive statistics supported by qualitative analysis. In collecting data, the researcher employed interviews, a questionnaire, and observation. In analyzing data, the researcher relied on the Miles and Huberman model for the qualitative data and on statistical formulas for the quantitative data. Based on the findings, (1) students perceive the languages they acquired, learned, and used in specific ways across the three languages mastered, such as engaging in a language community, communicating with family members and friends, and learning from TV or film; and (2) students feel happy to use their HL to communicate with people from different backgrounds and to show their identities. Therefore, it can be concluded that students acquire, learn, and use their languages in various ways, and in maintaining those languages they tend to engage with a language community and family members. Indeed, students from various ethnic backgrounds can make the learning process meaningful by bringing many local languages and cultures into a classroom.
INTRODUCTION
In this globalized era, heritage language is an important language to preserve because its speakers are decreasing. According to Little (2017), heritage languages face problems in this globalization era: they seem to disappear slowly because the younger generation of speakers is shrinking. Heritage language loss is a specific anxiety among the younger generation; it is therefore a big problem for society when a heritage language is no longer attractive to the young. A heritage language is taught from birth and spoken at home with parents and other family members (King and Ennser-Kananen in Ansó Ros et al., 2021; Indriani, 2019). Moreover, humans are not born producing utterances in a specific language; rather, they acquire their first language within a culture (Yule, 2020, p. 11).

From this situation, the researcher considers it necessary to conduct current research on heritage languages that focuses on language use and maintenance. By definition, language use in a multilingual context concerns language choice in a particular situation and domain, while language maintenance refers to a situation in which a minority language continues to be used despite the pressure of a majority language (Nursanti et al., 2020, p. 232; Plešković et al., 2021, p. 70).
Therefore, the researcher would like to see how heritage language is used and maintained.
One state university, located in Samarinda, East Kalimantan, Indonesia, was chosen as the object of this research. It is the biggest state university in East Kalimantan, with 13 faculties and 96 departments. The researcher chose the university because (1) its students are varied, coming from cities around and outside East Kalimantan, and (2) it is one of the main options for higher education around the future new capital city of Indonesia. To be more specific, the researcher chose the English Literature department to investigate, for three reasons: (1) the department focuses on language and literature studies, (2) its students are multilingual, mastering not only Indonesian and English but also local languages, and (3) its students come from various backgrounds around East Kalimantan.

Several previous studies were used to support this research. First, a study about heritage language learning (HLL) and ethnic identity maintenance among Chinese-Canadian adolescents found that heritage language is important to learn and builds a strong communicative orientation in acquisition (Chow, 2018). Second, Indriani (2019), in a study about preserving Indonesia's heritage languages in a globalization era, reported that parents' attitudes and institutional policy are the two factors that significantly determine heritage language maintenance. Third, maternal input quality may affect the outcome of children's heritage language acquisition, as concluded in research about the effects of parental input quality in child heritage language acquisition (Daskalaki et al., 2020). Last, Carreira & Kagan (2011), in a survey of HL teaching, curriculum design, and professional development, found that learners have limited exposure to the HL outside the home and have positive HL attitudes and experiences.

Therefore, this research aims to develop a picture of heritage language use and maintenance among English Literature students by addressing two research questions: (1) how do the students perceive the language(s) they acquired, learned, and used? and (2) what are their use of and perceptions on heritage language maintenance?
REVIEW OF THE LITERATURE

Language Use in the Education Setting
Language use always relates to how people use a language in society: sometimes a different language is used at home, at school, or in friendship circles. Several studies have analyzed language use. First, Nursanti et al. (2020), analyzing patterns of language use among multilingual university students majoring in English, revealed that students tend to use Javanese at home because of intimacy and habit; English is used for academic purposes, while Indonesian is used to communicate with lecturers. Second, Kang (2008), investigating classroom language use by a Korean school EFL teacher, found that a non-native EFL teacher did not employ TETE (Teaching English Through English) entirely because of students' interest and the teacher's motives. Third, Vizconde (2011) focused on language use at one university in the Philippines and revealed that students and teachers use their first language in comfort zones (e.g., home, recreational centers), while English is used in academic situations.

In other work, Fatima & Al Qenai (2021) analyzed the use of Arabic alongside English; most students argue that English is better taught in the classroom as a medium of instruction at high school and college levels. Almusharraf (2021), studying first language use in EFL classes among Saudi Arabian faculty and learners, found that L1 use in L2 classes is an issue subjective to instructors' experience, learners' proficiency level, and the complexity of the skill being taught.

From these explanations, it can be said that language use in education varies considerably. Most of the students in the studies mentioned tend to employ English as a medium of instruction during the learning process, whereas L1 or the heritage language tends to be used at home or in the family circle.
Affecting Factors in Language Maintenance
Language maintenance is understood as someone, or a minority community, preserving their language among a majority. Holmes & Wilson (2017, p. 67) name two factors in language maintenance: (1) if families from a minority group live near each other and see each other frequently, this helps them maintain their language, and (2) for those who emigrate, the degree and frequency of contact with the homeland matters. In addition, Pauwels (in Plešković et al., 2021) writes that there are three factor groups in language maintenance or shift: individual characteristics (e.g., age, gender, educational background), minority group characteristics (e.g., the number of language speakers, cultural similarity), and majority group characteristics (e.g., attitudes towards the minority language/culture).

These factors do not work independently, and no single one determines LM or LS.

Two previous studies reported on language maintenance. First, P Veettil et al. (2021), studying language maintenance and language shift, found that family domains valorize their heritage language even while speakers assimilate dissimilar cultures and speak other languages; mass media hold a crucial role in maintaining language and cultural identity among Keralites in Oman. Second, Bissoonauth & Parish (2017), focusing on the perception of ancestral languages and cultures in New Caledonia, identified distinct language practices among older and younger generations of New Caledonians of Melanesian descent, where French is used as the lingua franca for all, and English is more prevalent than ancestral languages among younger generations who are studying.

From these explanations, it can be said that the factors affecting language maintenance include age, gender, education, the number of speakers, and even culture. Besides that, language practice and attitude also influence the maintenance of a minor language within a major language.
Heritage Language Studies
A heritage language is taught by parents and is usually used on a certain scale. Heritage-language studies have come to grapple with the persistent similarities observed across quite different heritage languages, which point to the universality of fundamental processes and to the effects of situational factors on the language itself (Polinsky & Scontras, 2020). HL and L2 grammars are not affected by transfer from the dominant/native language (Romano, 2021). Linguistic and cultural competence and literacy skills in the prevailing societal language are generally attained early on by all heritage speakers (y Cabo et al., 2017). Blackledge et al. (2008), analyzing the perspectives of Bengali teachers and students in one city in the UK, found that (1) a language should be preserved and kept free from the contamination of other sets of linguistic resources and (2) teaching 'language' and 'heritage' was a means of reproducing 'Bengali/Bangladeshi' identity in the next generation. Palm et al. (2019), studying the Somali language in Sweden, found that younger children tend to speak Somali at home with their parents, and adolescents think about the importance of learning the language for daily life and developing it. Beaudrie (2011), focusing on a Spanish heritage language program, found that offering courses is a necessary but small step toward ensuring quality HL education among students. Work on the convergence and divergence between HL and L2 learners identified three items for HL at the classroom level, namely (1) diagnostic tests for understanding heritage speakers' interests, proficiencies, and goals, (2) collaborative activities in which L2 and heritage speakers benefit from each other's strengths, and (3) opportunities for differentiated instruction and resources appropriate for students with different skills, needs, and goals (Albirini, 2014, p. 460).

From these explanations, it can be said that a heritage language is a language taught by parents, and every family member must preserve it. Although the frequency and domains of heritage language use are restricted, speakers must maintain the language and make sure that it stays free from other contaminating linguistic resources.
Research design and respondents
This research employed descriptive statistics. Fraenkel et al. (2012, p. 187) state that descriptive statistics permit researchers to describe the information contained in many, many scores with just a few indices. The data gained in this research are shown as numbers and percentages and are then presented descriptively; this is why the researcher chose descriptive statistics for this study.

Twenty-five (25) students of the English Literature department at one state university located in Samarinda, East Kalimantan, Indonesia, were chosen as the respondents. The students' names are not shown, to keep their confidentiality; the researcher recorded only their age, sex, semester, and origin.
Students' Tribe
Chart 2 Students' tribe
Data collection
To collect data, the students were asked to answer a questionnaire composed in and shared via Google Form. The questionnaire consisted of 40 statements about language opinions, feelings, practices, and domains, employing two 4-point score scales, namely (1) 1 = never, 2 = sometimes, 3 = often, 4 = always, and (2) 1 = never, 2 = seldom, 3 = sometimes, 4 = always. Five of the students were interviewed: students MKCD, HF, AF, BG, and IGDA; their names are pseudonyms to keep their confidentiality. The reasons for choosing those five students were (1) they are active students and (2) they volunteered. There were nine questions for the researcher to ask the students in a semi-structured interview. In the findings, the researcher presents only the selected interview data that are relevant to the research questions.
Data analysis
In analyzing data, the researcher employed two approaches: qualitative and quantitative analysis. Qualitative data collected from the interviews were analyzed with the Miles and Huberman model, following three steps: (1) data display, in which the researcher displayed all data collected from the interviews; (2) data reduction, in which the researcher selected the data needed to support the research findings; and (3) conclusion drawing, in which the researcher drew conclusions from the data chosen in the previous step.

For the quantitative part, data from the questionnaire were analyzed statistically using three formulas: the percentage P = (f/N) × 100%, where f is the frequency of an answer and N the number of respondents; the mean x̄ = (Σx)/N; and the standard deviation SD = √(Σ(x − x̄)²/(N − 1)).
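A minimal sketch of the three descriptive statistics, assuming the standard sample standard deviation; the function names and example scores are ours.

```python
import math

def percentage(frequency, total):
    """Share of respondents choosing an option, in percent."""
    return 100.0 * frequency / total

def mean(scores):
    """Arithmetic mean of the scores."""
    return sum(scores) / len(scores)

def std_dev(scores):
    """Sample standard deviation of the scores."""
    m = mean(scores)
    return math.sqrt(sum((x - m) ** 2 for x in scores) / (len(scores) - 1))

# Example: 4-point scale answers from 5 respondents to one statement.
answers = [4, 3, 3, 4, 2]
print(percentage(answers.count(4), len(answers)), mean(answers), std_dev(answers))
```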
FINDINGS AND DISCUSSION
These findings present the results of the questionnaire shared with the participants, namely: (1) students' language mastery, (2) students' opinions on language, (3) students' feelings on language, (4) students' practices on language, and (5) students' domains on language. In addition, the findings include several selected interview excerpts to complete the analysis, as follows:
Students' Language Mastery
This part shows the students' L1, L2, and L3 mastery, which covers various languages, as follows: Chart 3 Students' L1 Mastery. A majority of the students mastered the Javanese language or the Kutai language as their first language (L1), closely followed by the Banjar language and other local languages (e.g., Sasak, Dayak, Sundanese, Minang) (see Chart 3). The second language (L2) mastered is Indonesian, while the third languages (L3) mastered are English and Mandarin (see Chart 4).
Students' Opinions on Language
This part shows the students' opinions about language use. There are ten statements to answer on a 4-point score scale: three statements for L1, three for L2, and four for L3, as shown in Table 2, as follows:
Students' L1 Opinions
Most of the students strongly agree that L1 must be preserved. Regarding the benefits of L1 use, most of the students agree that they gain many benefits from it; as for the need for L1 in the globalized era, they also agree that L1 is still needed. The mean score ranged from 3.160 to 3.560, while the standard deviation ranged from 0.583 to 0.723. This means that the variance of the students' choices in those statements is relatively small, or there is no large gap, because the standard deviation is smaller than the mean (see Table 2-L1-number 1-3).
Students' L2 Opinions
Most of the students strongly agree that L2 has vocabulary and grammar that are easy to understand. In communicating, most of the students strongly agree that L2 helps them communicate with friends/colleagues from different language backgrounds and supports everyday L2 use. The mean score ranged from 3.600 to 3.840, while the standard deviation ranged from 0.436 to 0.577. This means that the variance of the students' choices in those statements is relatively small, or there is no large gap, because the standard deviation is smaller than the mean (see Table 2-L2-number 4-6).

Students' L3 Opinions

Most of the students strongly agree that L3 is used to learn knowledge and technology related to the students' chosen department. Most of the students also strongly agree that L3 supports a career in the future. In contrast, most of the students disagree that L3 has complicated vocabulary and grammar, and they strongly agree that L3 makes them feel confident communicating with many people. The mean score ranged from 2.120 to 3.840, while the standard deviation ranged from 0.374 to 0.707. This means that the variance of the students' choices in those statements is relatively small, or there is no large gap, because the standard deviation is smaller than the mean (see Table 2-L3-number 7-10). Similarly, one respondent, student AF from a Javanese background, states that English is important to master and is always used in teaching and learning during the course (see excerpt 3).

Excerpt 3 (translated): 'In my opinion, English is also very important because English is an international language that we must master. Then, it is often used because I am an English literature student; I often use English in teaching and learning activities.'
Discussing the data gained about students' opinions on language, it can be said that the students' opinions vary: they mostly perceive the L1s as heritage languages that should be preserved and that are important for a family, while L2 seems easy to master as the national language, and L3 is a medium for accessing technology and supporting a future career. This is in line with Chow's (2018) study that HL is important to learn. In contrast, HL provision mainly serves the Indonesian language, not local languages, as cited in Indriani (2019) and Carreira & Kagan (2011), who discuss HL's limitations.

However, the students should be wise in involving the heritage languages (e.g., the Banjar language or the Javanese language) in communication and be aware of preserving those languages and of the purposes of using them, besides the importance of employing L2 as the lingua franca among different ethnicities in Indonesia and L3 as the international language.
Students' Feelings on Language
This part shows the students' feelings about the three languages they have mastered. There are nine statements to answer on a 4-point score scale: three statements for L1, three for L2, and three for L3, as shown in Table 3, as follows:
Students' L1 Feelings
Most of the students agree that L1 is important to learn, but most disagree that there is a shy feeling when speaking L1. In addition, they strongly agree that they feel happy when non-native speakers communicate in L1. The mean score ranged from 1.840 to 3.400, while the standard deviation ranged from 0.645 to 0.800. This means that the variance of the students' choices in those statements is relatively small, or there is no large gap, because the standard deviation is smaller than the mean (see Table 3-L1-number 11-13). Similarly, one respondent, student BG from a Javanese background, argues that she is happy to use L1 because of her environment and because she masters many languages (see excerpt 4).

Excerpt 4 'Eh… untuk perasaan tentu saja saya merasa sangat senang ya karena di lingkungan, di Samarinda sendiri agak jarang menggunakannya, dan ketika saya menggunakan itu saya merasa senang karena saya merasa bahwa saya menguasai banyak bahasa gitu...' (INTW/BG/11) (translated) Eh… for feeling, of course, I feel delighted because in my environment, in Samarinda itself, it is pretty rare to use it, and when I use it, I feel happy because I feel that I know many languages like that.
Students' L2 Feelings
Most of the students strongly agree that they feel comfortable using L2 to discuss with friends, and they agree that they feel happier learning L2 than L1. They also feel that L2 is easy to follow during a course. The mean score ranged from 2.800 to 3.920, while the standard deviation ranged from 0.277 to 0.913. This means that the variance of the students' choices in those statements is relatively small, or there is no large gap, because the standard deviation is smaller than the mean (see Table 3-L2-number 14-16).

Similarly, one respondent, student HF, who is Javanese, states that there is no doubt about using L2 in communication because it is believed that almost 90% of people understand the Indonesian language, unlike local languages (see excerpt 5).
Excerpt 5 'Eh tidak ada keraguan Pak, dalam artian karena kan eh kita ada di satu negara yang sama tapi eh pasti ada 90% mungkin semua orang paham tentang bahasa Indonesia.Jadi tidak ada keraguan untuk lawan bicara untuk tidak paham, kecuali bahasa daerah yah mungkin agak kesulitan ya…'(INTW/F/26) (translated) Eh, there's no doubt, Sir, in the sense that we're in the same country, but indeed there's 90%, maybe everyone understands Indonesian, so there is no doubt for the interlocutor not to understand, except for the regional language, perhaps it is a bit difficult.
Students' L3 Feelings
Most of the students strongly agree that there is a shy feeling when they cannot communicate using other languages. However, the students agree that their vocabularies are insufficient. They also strongly agree that L3 is important to master. The mean score ranged from 3.120 to 3.960, while the standard deviation ranged from 0.200 to 0.927. This means that the variance of the students' choices in those statements is relatively small, or there is no large gap, because the standard deviation is smaller than the mean (see Table 3-L3-number 17-19).

Discussing students' feelings on language, it can be said that there is no shy feeling when speaking L1, because L1 is important to learn. At the same time, L2 is comfortable to use in casual conversation, while L3 is a problem when the students cannot master it well. This is in line with Chow's (2018) study that heritage language is important to learn, while for L3, as for other languages, it is in line with Bissoonauth & Parish's (2017) observation that speakers of English as a foreign language may be a little nervous because the language is recently mastered and still not at a perfect stage, roughly speaking, and another language is used to study abroad. From the researcher's point of view, students' pride when using L1, L2, and L3 should be maintained, because those languages are important for facing many situations at home, on campus, or in friendships.
Students' Practices on Language
This part shows how the students practice the three languages they have mastered.

There are ten statements to answer on a 4-point score scale: three statements for L1, three for L2, and four for L3, as shown in Table 4, as follows:
L1 Practices
Most of the students sometimes employed L1 with family members at home and seldom used L1 with friends around the neighborhood. In addition, they never followed an L1-related activity. The mean score ranged from 1.640 to 2.640, while the standard deviation ranged from 0.860 to 0.995. This means that the variance of the students' choices in those statements is relatively small, or there is no large gap, because the standard deviation is smaller than the mean (see Table 4-L1-number 20-22).

In contrast, one respondent, student AF, who is Javanese, argues that he does not master L1, so he does not use it with his parents at home (see excerpt 7).

Excerpt 7 (translated): 'Because I am less proficient in the good, polite register, I do not use the Javanese language with my parents.'
L2 Practices
Most of the students gave similar statements: they always employed L2 with their friends and always used L2 to communicate with lecturers or staff on campus.

The students also always employed L2 with family members at home. The mean score ranged from 3.240 to 3.800, while the standard deviation ranged from 0.408 to 0.779. This means that the variance of the students' choices in those statements is relatively small, or there is no large gap, because the standard deviation is smaller than the mean (see Table 4-L2-number 23-25).
In contrast, one respondent, student BG, who is Javanese, argues that L2 is used for professional purposes because she dreams of becoming a translator in the future.

Her L2 practice is used to translate between English and Indonesian (see excerpt 8).

Excerpt 8 (translated): 'Oh, for that language, I will use it in a professional area. Especially since, later, I want to work, God willing, to become a translator; for my needs, of course, I need to use this language to translate between Indonesian and English.'
L3 Practices
Most of the students sometimes employed L3 according to its grammar, and while following a course, most of the students sometimes had to use L3. To improve their L3 mastery, they always listened to music and employed L3 to translate isolated terms. The mean score ranged from 3.040 to 3.680, while the standard deviation ranged from 0.557 to 0.790. This means that the variance of the students' choices in those statements is relatively small, or there is no large gap, because the standard deviation is smaller than the mean (see Table 4-L3-number 26-29).

As another practice, one respondent, student HF, who is Javanese, argues that to improve his English he consumes everything related to English, because he believes that knowledge is forgotten if it is not maintained. These practices are done through watching podcasts, watching films, or reading novels.

Excerpt 9 'Kalau dari saya pribadi, eh ini lebih ke personal ya. Eh kalau biasa yang saya lakukan adalah eh mengonsumsi hal-hal yang berbau tentang bahasa Inggris, karena sehebat-hebatnya kita pada bidang ilmu pasti kalau tidak dipelihara pasti akan lupa pak ya, jadi mengonsumsi eh bahasa Inggris, misal menonton podcast eh mendengarkan podcast, menonton film yang menggunakan bahasa Inggris atau bahkan membaca buku atau novel yang menggunakan bahasa Inggris. Jadi kita bisa memperbanyak, mengembangkan, memperbanyak bahasanya apa ya, eh vocabulary yang ada disana Pak seperti itu...' (INTW/HF/34) (Translated) As for me personally, this is more of a personal thing. What I usually do is consume things related to English, because however great we are in a field of knowledge, if it is not maintained it will surely be forgotten, Sir. So I consume English, for example watching podcasts, listening to podcasts, watching films in English, or even reading books or novels in English. That way we can multiply and develop the language, the vocabulary there, Sir.
Discussing the students' practices on language, it can be said that the frequency of L1 use and maintenance in language practice should be evaluated, because L1 is used more seldom than L2 and L3. Besides that, L3 mastery is limited, as one piece of data gained during the interviews shows. This is in line with Carreira & Kagan (2011), who conducted a study to evaluate the design of HL teaching, concluding that it is important to develop HL in the future. For L3, this is in line with Fatima & Al Qenai (2021), where EFL is used as a medium of instruction at school.

However, from the researcher's point of view, the frequency of L1 use outside the family circle is smaller than elsewhere because the culture in Kalimantan differs from that of other regions. In contrast, in practice the frequency of Indonesian language use in communication is larger, although there is code-mixing with heritage languages, such as dialect.
Students' Domains on Language
This part shows the students' domains in the three languages they have mastered. There are eleven statements to answer on a 4-point score scale: four statements for L1, three for L2, and four for L3, as shown in Table 5, as follows:
L1 Domains
Most of the students seldom asked their parents to speak L1 at home. Moreover, the students also seldom wrote SMS/WA/VC messages using L1. In the outer circle, the students never joined a cultural community, but the rest of the students always or seldom used L1 with friends in tribe communities. The mean score ranged from 1.360 to 2.600, while the standard deviation ranged from 0.700 to 1.118. This means that the variance of the students' choices in those statements is relatively small, or there is no large gap, because the standard deviation is smaller than the mean (see Table 5-L1-number 30-31).

L2 Domains

In watching TV/film, the students tend to give statements similar to the previous ones, where they sometimes used L2. The mean score ranged from 2.920 to 3.360, while the standard deviation ranged from 0.678 to 0.852. This means that the variance of the students' choices in those statements is relatively small, or there is no large gap, because the standard deviation is smaller than the mean (see Table 5-L2-number 34-36). One respondent's translated excerpt illustrates learning through media: '…YouTube or podcasts, then I learn from it. Then the motivation is to dance in the international world like that.'

Discussing the students' domains on language, it can be said that they are varied enough across several activities. However, the frequency of the domains has not been maximal: for example, L1 is seldom used in writing messages, L2 is used in watching TV/film, and L3 is used only for the application features of mobile phones. This is in line with Daskalaki et al. (2020), who conclude that parental input quality affects HL acquisition in children. In contrast, Carreira & Kagan (2011) note limited HL exposure outside the home. From the researcher's point of view, the dominance of L2 use in all domains has had various effects, where L1 use is no longer attractive because all the information needed always uses the Indonesian language.
CONCLUSION
Based on the explanations in the findings and discussion, the researcher can answer the two research questions proposed in the introduction about heritage languages acquired, learned, and used, including their maintenance, among English Literature students. First, the students perceive the languages they acquired, learned, and used in specific ways across the three languages mastered. For L1, they tend to acquire and learn the languages from parents rather than from a language community; besides that, they use L1 to communicate with family members and seldom use it with friends or a tribe community. For L2, they acquire and learn the language from friends or daily communication on campus, and the language is used to communicate with others. For L3, they acquire the language from reading magazines/newspapers and writing letters/tasks, and they also learn L3 from the courses they follow. Besides that, they use L3 to support a future career, to translate isolated English terms, to access the application features of mobile phones or electronic devices, and to watch English-speaking TV/films.

Second, regarding the students' use and perception of heritage language maintenance, the students feel happy to use their HL to communicate with people from different backgrounds and to show their identities (e.g., Banjarese, Buginese, Beraunese, and so on).

Moreover, the students still believe that HL is needed in this globalized era, with many benefits, besides learning other languages such as English or Mandarin. Concerning language maintenance, extra maintenance should be done to preserve the languages in the future, because the frequency of HL use is still limited among the English Literature students, although they always speak the languages with parents or other family members at home, or with friends or colleagues from a similar tribe (e.g., the Javanese language, the Banjar language, and so on). Involvement of HL in society is also limited because the students statistically never join a community related to their local languages or cultures. Wang (2018) argues that HL maintenance is a complex process: the hegemony of a foreign language (e.g., English) and the dominance of the mainstream culture mean that participants confront unbalanced bilingualism, and the assimilation of language and culture discourages children from maintaining their heritage languages. Relating to the implications for teaching and learning, this research gives new insight for language teachers in designing instruction for teaching English or other languages by involving local languages. Although students come from various ethnic backgrounds, this can make the learning process meaningful through the many local languages and cultures, which can be assimilated into language learning. Besides that, this research has drawn how university students in Kalimantan use and maintain their languages. Students must be aware of the importance of heritage languages, although they have mastered other languages (e.g., English, Mandarin, etc.).
Expanding the research sample in different regions is suggested to make the study more extensive and comprehensive.Besides that, this research can be one of the references in studying language use and maintenance.
Table 2
Students' Opinions on Language
Table 3
Students' Feelings on Language
Table 4
Students' Practices on Language
Table 5
Students' Domains on Language
(Translated) Okay, so the most frequently used language is Indonesian, while the language used quite rarely is the local language, Sir. With Indonesian, sometimes it is just with your own family: if they don't understand Balinese, we change to Indonesian. If it's a regional language, it's pretty rare.
| 7,138.6 | 2023-02-15T00:00:00.000 | [ "Sociology", "Linguistics", "Education" ] |
Synthesis and Elastic Characterization of Zinc Oxide Nanowires
Zinc oxide nanowires, nanobelts, and nanoneedles were synthesized using the vapor-liquid-solid technique. Young’s modulus of the nanowires was measured by performing cantilever bending experiments on individual nanowires in situ inside a scanning electron microscope. The nanowires tested had diameters in the range of 200–750 nm. The average Young’s modulus, measured to be 40 GPa, is about 30% of that reported at the bulk scale. The experimental results are discussed in light of the pronounced electromechanical coupling due to the piezoelectric nature of the material.
INTRODUCTION
Zinc oxide (ZnO) exhibits several unique properties, such as being both a semiconductor and a piezoelectric material [1], and consequently is used in a wide variety of sensors and actuators. ZnO nanostructures are being explored for a wide range of applications in nanoscale devices, such as nanogenerators [2], gas sensors [3], field emission transistors [4], and nanocantilevers [5], and in biomedical systems such as ultrasensitive DNA sequence detectors [6]. Apart from the technological significance of ZnO nanostructures, their quasi one-dimensional structure, with diameters in the range of tens to hundreds of nanometers, makes them interesting from a scientific point of view. In this size range, they are expected to possess interesting physical properties and pronounced coupling quite different from their bulk counterpart [7].

Although ZnO nanowires are touted as next generation materials for use in nanoscale systems [8], very few experimental investigations of their mechanical properties are reported in the literature. The lack of experimental studies is mainly due to the challenges of material characterization at the nanoscale, such as (i) specimen manipulation, alignment, and gripping to achieve the desired boundary conditions, and (ii) application and measurement of force and displacement with very high resolution [1]. Additionally, the ability to perform in situ experiments is important for nanoscale materials characterization. In situ experiments, usually conducted in analytical chambers such as the scanning or transmission electron microscope (SEM or TEM), enable direct visualization of events as they occur, thus providing qualitative information along with quantitative data. In situ experiments also ensure the accuracy of the experimental procedures, which is challenging to supervise at the nanoscale.

In this paper, we study the effect of temperature and gas flow rate on the growth of different zinc oxide nanostructures synthesized using the vapor-liquid-solid (VLS) technique and present experimental results on Young's modulus of single ZnO nanowires. The modulus was measured by bending the nanowires in a cantilever configuration inside a SEM enabled for in situ observations. In Section 2, we review the main techniques in the literature for mechanical characterization at the nanoscale, along with experimental results on Young's modulus of zinc oxide nanostructures.
REVIEW OF NANOMECHANICAL EXPERIMENTAL TECHNIQUES AND YOUNG'S MODULUS VALUES FOR ZINC OXIDE NANOSTRUCTURES
One of the most direct techniques for measuring Young's modulus of materials is uniaxial tensile testing. However, it is difficult to adapt this technique for nanoscale material characterization, for the reasons mentioned in Section 1. Microelectromechanical systems (MEMS) are used as test beds for characterizing the mechanical properties of nanostructures to circumvent some of these problems, at the cost of complexity in device design and fabrication [19][20][21]. Desai and Haque [1] used the uniaxial tensile testing technique to measure Young's modulus of ZnO nanowires. It is interesting to note that the authors reported fracture strains as high as 15% for the nanowires, which is unusual considering that ZnO is brittle at the bulk scale [22]. Young's modulus of nanowires can also be extracted from the resonant frequency of a single nanowire, induced by an alternating electric field. As examples, researchers have used this dynamic characterization technique to measure the modulus of zinc oxide nanowires [13], carbon nanotubes [23], and gallium nitride nanowires [24].

The quasi-static counterpart of the dynamic experiments essentially involves bending the nanowire specimen with a very soft spring (e.g., a cantilever beam). The experiment is generally performed using an Atomic Force Microscope (AFM). Here, the deformation is primarily strain gradient dominant at the rigid support. Song et al. [11] and Hoffmann et al. [17] used this technique to measure Young's modulus of ZnO nanowires. It is important to note that the varying levels of strain gradient in these experiments (caused by bending) may lead to significant deviation in the measured mechanical properties, because of the piezoelectric nature of the material. AFM-based experiments are popular techniques for mechanical characterization because the stiffness of the tip is very small and, hence, the force measurement resolution is very high (on the order of nano-Newtons). However, understanding the tip-nanowire interaction is crucial for accurate and reliable experimental studies. For instance, friction (due to slipping) and van der Waals forces between the nanowire and the tip introduce errors in the measurement of mechanical properties [25]. The influence of these surface forces is more significant for smaller diameter (less than 30-40 nm) or high aspect ratio (greater than 100) nanowires, where the magnitude of the forces is very small (on the order of pico-Newtons to a few nano-Newtons). It is also important to note that specimen geometry, crystallographic orientation, synthesis process, and the nature of the experimental technique (uniform strain versus strain gradient-dominant, and static versus dynamic deformation) all significantly affect the experimental results. Consequently, a huge spread is observed in the Young's modulus values reported in the literature for zinc oxide nanowires, as summarized in Table 1.

In this paper, we present results from in situ cantilever bending experiments on ZnO nanowires inside a Focused Ion Beam-Scanning Electron Microscope (FIB-SEM). These experiments provide information on the elasticity of ZnO nanowires determined using a quasi-static, strain gradient-dominated technique. In situ experiments enabled us to observe the nanowire-tip interaction during the experiment. In Section 3, we discuss the synthesis process of zinc oxide nanowires.
NANOWIRE SYNTHESIS PROCESS
We synthesized the zinc oxide nanowires by the vapor-liquid-solid (VLS) mechanism [26] using gold as a catalyst. A Lindberg single-tube anneal furnace (Blue M) was used for the nanowire growth process; a schematic of the furnace is shown in Figure 1. We started with ZnO powder (Alfa Aesar, 99.99%) and graphite powder (Alfa Aesar, 99.99%) in a 1:1 ratio by weight in an alumina crucible inside the furnace. Argon gas was allowed to flow in the tube (from right to left in Figure 1) at 10 sccm. Silicon (Si) substrates with 20 nm gold (Au) films (on the [100] silicon surface) were placed downstream from the crucible and served as the platform for nanowire growth.

As the temperature of the crucible increases to approximately 1000 °C, the ZnO powder is reduced by graphite to form zinc (Zn), carbon monoxide (CO), and carbon dioxide (CO2) vapors. The argon gas carries these vapor-phase products to the silicon samples placed at different temperatures. Meanwhile, gold and silicon droplets form a eutectic alloy at each catalyst site. The gaseous products produced by the reduction reaction adsorb and condense on the alloy droplets. Subsequently, the ZnO nanowire synthesis reaction is catalyzed by the Au-Si alloy at the solid-liquid interface to form zinc oxide nanowires [27]. The ZnO vapor saturates the alloy droplet, followed by the nucleation and growth of a solid ZnO nanowire due to supersaturation of the liquid droplet. Incremental growth of the nanowire at the droplet interface constantly pushes the catalyst upwards, until no more zinc vapor is available or all the gold is used up. We observed nanowire growth from around 500 °C to 900 °C; the diameters of the nanowires ranged from 30 nm to 750 nm, and the lengths were up to 100 μm. Some of the nanowires had a gold tip at the end (Figure 2(c)), indicating a VLS growth mechanism. We also observed nanobelt (Figure 2(a)) and nanoneedle (Figure 2(b)) formation in the lower-temperature regions. In Section 4, we discuss the sample preparation techniques for the mechanical characterization (cantilever bending) experiments on nanowires.
SPECIMEN PREPARATION
The ZnO nanowires grown by the VLS mechanism generally occurred in clusters, but individual nanowires are required for the experiments. Individual ZnO nanowires were picked using a micromanipulator (Creative Devices Inc., NJ, USA) fitted with an electrochemically sharpened tungsten probe tip. The nanowires adhere to the probe tip due to both short- and long-range attractive forces, which we generically term van der Waals forces. The nanowire was then placed on the edge of a chip of silicon wafer (coated with a 100 nm thick gold film to improve imaging in the SEM), as shown in Figure 3(a). The nanowire was oriented perpendicular to the edge of the silicon wafer using the probe tip. We "glued" the end of the nanowire near the edge of the silicon wafer by platinum deposition using a focused ion beam (FIB) (FEI Quanta 3D 200 FIB/SEM), as shown in Figure 3(b). The inset in Figure 3(b) shows the platinum deposition or "glue" on the nanowire near the edge of the silicon wafer. This microscale version of the pick-and-place technique is time-intensive, but enables us to consistently prepare long nanowire specimens for the experiments. In Section 5, we discuss the experimental technique and results on Young's modulus of ZnO nanowires.
EXPERIMENTAL SETUP AND RESULTS
We performed cantilever-bending experiments to estimate Young's modulus of the nanowires. Bending loads were applied to the nanowires using an AFM cantilever with a known spring constant. The AFM cantilever (MikroMasch, CSC12) was mounted on the tungsten probe tip of the omniprobe (a three-axis piezoelectric actuator in the FIB-SEM) along the X-axis (Figure 4(a); the probe is not shown in the image).
The SEM image plane is the X-Y plane.
We then tilted the probe such that the tip face was aligned perpendicular to the viewing screen (parallel to the Z-axis) (Figure 4(b)). This ensures that the loading direction is in the desired plane (the X-Y plane).
We then mounted the nanowire specimen inside the SEM chamber and rotated the SEM stage (about the Z-axis) to align the longitudinal axis (length) of the nanowire parallel to the length of the AFM cantilever (Figure 5(a)). This ensures that the central axes of the nanowire and the AFM tip are parallel before loading. We then tilted the stage (about the X-axis) to verify that the nanowire lies completely in the X-Y plane. After ensuring that the nanowire and the AFM tip were aligned, we performed the cantilever-bending experiment inside the SEM. The schematic of the bending experiment is shown in Figure 5(a). Note that Figure 5(a) is not to scale; in reality, the nanowire is thinner than the AFM cantilever. Figure 5(b) shows the in situ bending experiment inside the SEM.
The AFM cantilever moves vertically downwards (negative Y direction) to load the nanowire; at the contact point, the deflections of the AFM cantilever and the nanowire tip are the same. We can estimate Young's modulus of the nanowire from the deflections of the AFM cantilever base and the nanowire tip.
We assume the following conditions for the cantilever bending experiments: (1) a clamped fixed end, and (2) small nanowire tip deflections (valid for y_nw / l ≤ 0.1, where y_nw and l are the tip deflection and length of the nanowire, respectively).
Based on these assumptions (Euler-Bernoulli beam theory with an end load), the normal (tensile) bending stress $\sigma$ and the normal (tensile) bending strain $\varepsilon$ on the nanowire during cantilever bending are given by

$$\sigma = \frac{32\,k_{tip}\,(y_{base} - y_{nw})\,l}{\pi d^{3}}, \qquad \varepsilon = \frac{3\,d\,y_{nw}}{2\,l^{2}}, \quad (1)$$

where $k_{tip}$ is the stiffness of the AFM cantilever tip, $y_{base}$ is the displacement of the AFM cantilever base, $y_{nw}$ is the displacement of the nanowire, $d$ is the diameter of the nanowire, and the stresses and strains are the maximum values, which occur on the outermost diameter of the nanowire at the clamped end. The deflections of the nanowire and the AFM cantilever base are estimated by processing the SEM images recorded during the loading experiments. Thus, using the deflection values of the AFM cantilever base and nanowire tip together with (1), we can estimate the normal stress and strain on the nanowire and plot a stress-strain diagram. The slope of the stress-strain curve (linear fit) is Young's modulus of the nanowire. We performed cantilever bending experiments inside the SEM on ZnO nanowire specimens with diameters ranging from 350 to 750 nm and did not observe any dependence of Young's modulus on the diameter of the nanowire. Figure 6 shows a representative stress-strain diagram. Young's modulus values of the nanowires (five specimens) ranged from 35 GPa to 44 GPa, a spread that is within the expected error in the experimental data. In some of the bending experiments, the deflections of the nanowires were large and the expressions for stress and strain (1) are not accurate. In those cases, the nonlinear moment-curvature differential equation was numerically solved to match the bending profile of the nanowire and obtain more accurate values of the stresses and strains (details in [25]).
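As an illustration of this data-reduction step, the minimal sketch below computes stress and strain from hypothetical paired deflection readings using the small-deflection expressions in (1) and fits Young's modulus as the slope of the stress-strain line. This is not the authors' code: the cantilever stiffness, nanowire geometry, and deflection values are all invented, and the strain expression is the standard end-loaded cantilever result.

```python
import numpy as np

# Hypothetical inputs (NOT the paper's data): AFM cantilever stiffness,
# nanowire geometry, and paired deflections read off SEM images.
k_tip = 0.35        # cantilever stiffness, N/m (invented)
d = 500e-9          # nanowire diameter, m
l = 10e-6           # nanowire length, m

y_base = np.array([0.2, 0.4, 0.6, 0.8, 1.0]) * 1e-6  # cantilever base deflection, m
y_nw   = np.array([0.1, 0.2, 0.3, 0.4, 0.5]) * 1e-6  # nanowire tip deflection, m

# Maximum bending stress at the clamped end: the load on the wire is
# k_tip * (y_base - y_nw), applied at the free end of the nanowire.
sigma = 32.0 * k_tip * (y_base - y_nw) * l / (np.pi * d**3)

# Corresponding maximum bending strain for an end-loaded cantilever
# (valid while y_nw / l <= 0.1, as assumed in the text).
eps = 3.0 * d * y_nw / (2.0 * l**2)

# Young's modulus = slope of the linear fit through the stress-strain data.
E = np.polyfit(eps, sigma, 1)[0]
print(f"Estimated Young's modulus: {E / 1e9:.1f} GPa")
```

With these invented numbers the fitted modulus lands in the tens-of-GPa range, comparable in magnitude to the values reported above.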
For mechanical measurements, the boundary condition of the cantilever support is critical for an accurate estimation of the properties. Typically, the nanowire specimen is clamped with electron-beam-induced deposition [28, 29] or focused ion beam-based platinum deposition (FIB-Pt) [19], which might introduce ion-beam-induced stresses [25]. However, our in situ SEM observations show that for nanoscale bending experiments on specimens without deposition-based clamping, there is no observable rotation of the nanowire at the fixed end. This suggests that specimen-substrate adhesion could be strong enough to work as a clamping mechanism.
In order to study the effect of boundary conditions, we repeated the experiment on adhesion-clamped specimens prepared using the technique employed by Ding et al. [29].
From the experiments on adhesion-clamped specimens, we measured Young's modulus of the nanowires to vary from 18 GPa to 27 GPa (four specimens), with diameters ranging from 200 nm to 330 nm. This discrepancy can be attributed largely to the difference in boundary conditions between the two specimen clamping techniques. If the applied bending force is comparable to the adhesion and friction forces, the rigid-support boundary condition is no longer valid, as the nanowire has free boundary conditions over a significant part of its outer surface. In Section 6, the synthesis process and the experimental results are discussed.
DISCUSSION
We studied the effects of temperature on the synthesis of ZnO nanostructures using the VLS technique and subsequently characterized their elastic properties. We observed no nanowire growth at gas flow rates higher than 10 sccm, implying that the zinc, carbon monoxide, and carbon dioxide vapors are carried away too rapidly from the substrates and do not have enough time to react at the silicon-gold interface. In most cases, the nanowire growth temperatures were between 500 and 800 °C, which is consistent with the binary phase diagram of gold and zinc [30]. Ideally, the nanowire growth temperatures should be set between the eutectic temperature for gold and zinc (683 °C) and the melting point of zinc (420 °C), reaching a maximum of 750 °C [31]. However, it should be noted that equilibrium phase diagrams at the nanoscale might differ from the bulk and could result in different preferential growth temperatures for the nanowires. In some cases, we observed the growth of different nanostructures of zinc oxide, such as nanobelts and nanoneedles (Figures 2(a) and 2(b)), similar to observations by other researchers [32, 33].
From transmission electron microscope (TEM) images, the growth direction of the nanowires was determined to be [0001], and the nanowires had a wurtzite crystal structure (single crystal) with lattice constants close to those of bulk crystals. At the bulk scale, Young's modulus of zinc oxide in the [0001] direction is 140 GPa [34], which is significantly higher than the modulus values reported in this paper. In the literature, such reductions are commonly attributed to surface stress effects. Due to the lower coordination number of surface atoms compared to bulk atoms, intrinsic surface stresses exist in materials [35, 36], and the mechanical properties of surfaces differ from the bulk. The effects of surface stresses are significant when the size of the material is on the order of h_0 = S/E, where S is the surface elastic constant and E is the modulus of the bulk material [37]. For zinc oxide, h_0 is on the order of a few angstroms, which implies that surface effects alone cannot explain the size effects observed in Table 1.
One reason for the observed scatter in the modulus values (Table 1) is the difference in the experimental techniques used to estimate Young's modulus. The dynamic experiments performed by Huang et al. [13], Chen et al. [12], and Zhou et al. [16] are expected to yield a slightly higher, unrelaxed modulus (E_U), whereas in this paper we report modulus values estimated by quasi-static experiments (E_R). In dynamic experiments, the time period of motion of the nanowire is much shorter than the relaxation time, and hence the modulus values estimated in dynamic experiments tend to be higher than those estimated in static or quasi-static experiments (E_U > E_R) [38]. Also, the oscillating electric field applied to the specimen induces charges on the nanowire surface, which can lead to a significant overestimation of the elastic properties [39]. For copper nanowires with aspect ratios of around 100, the measured modulus could be 1.5 times the actual modulus.
The modulus value reported in this paper is less than the modulus estimated by Feng et al. [14] (90-100 GPa) using nanoindentation. They estimated the modulus of the nanowires from the hardness values measured during nanoindentation. In order to estimate the hardness of the nanowire, they had to make assumptions about the elastic properties of the nanoindenter tip and the nanowire material. In their experiments, they assumed that the elastic properties of bulk ZnO are applicable to ZnO nanowires, and this may have influenced the final estimated modulus value of the ZnO nanowire. Ni and Li [15] estimated the bending Young's modulus of ZnO nanobelts as 38.2 GPa and the nanoindentation modulus as 31.1 GPa, which compare favorably with the modulus values reported in this paper.
A possible mechanism to explain the reduction in modulus of ZnO nanowires compared to the bulk is the strong electromechanical coupling in zinc oxide. Due to its non-centrosymmetric wurtzite structure and the ionic nature of the interatomic bond, internal electric fields are induced in ZnO when the material is strained [40, 41]. The positive sign of the electromechanical coupling coefficient, e_33, along the [0001] direction implies that the induced electric field tends to reduce the measured modulus of the nanowire. Additional electrical polarization is introduced in the nanowire during flexural deformation due to the flexoelectric effect, which arises because of the high strain gradient at the nanoscale [42]. For piezoelectric materials with a low dielectric constant, such as zinc oxide, quasi-static tests are not recommended for measurements of elastic constants (Young's modulus) because of the uncertainties in electrical boundary conditions [43]. As a result, the modulus values measured in quasi-static nanomechanical characterization (e.g., the technique reported in this paper) are influenced by the electromechanical coupling, resulting in Young's modulus of ZnO nanowires differing from the bulk. Another explanation for the reduced modulus is that the elastic properties of a material can be described at the atomistic level, where the bond length, bond energy, and arrangement of atoms influence the overall elastic behavior of the material [44, 45]. In the case of ZnO, the effective charge (e*) on the zinc-oxygen bond changes due to charge redistribution when the material is strained [46]. Since Young's modulus of the material depends on e*, the modulus should change at higher strains. The techniques used for measuring elastic properties at the bulk scale involve negligible strains compared to nanoscale bending experiments. As a result, the measured modulus values of nanoscale zinc oxide differ from the bulk due to the strain-dependent modulus.
CONCLUSION
Zinc oxide nanostructures (nanobelts, nanoneedles, and nanowires) were synthesized using the vapor-liquid-solid technique. Young's modulus of the nanowires was estimated by bending experiments performed in situ in a scanning electron microscope on individual nanowires. Young's modulus was measured to be about 40 GPa, which is about 30% of the modulus value at the bulk scale (140 GPa). It was observed that the specimen preparation technique influences the boundary conditions, which affects the measured modulus value. The observed size effect was discussed on the basis of the pronounced electromechanical coupling and strain gradient at the nanoscale.
Figure 4: (a) Tipless AFM cantilever; (b) the cantilever mounted on the tungsten tip in the omniprobe.
Figure 5: (a) Schematic of the nanowire-bending experiment, (b) superimposed images from the in situ bending experiment inside the SEM showing the specimen and only the tip of the loading structure, and (c) "spring" equivalent of the experimental setup.
Table 1: Young's modulus values of ZnO nanostructures reported in literature.

| 4,716.6 | 2008-01-01T00:00:00.000 | ["Materials Science", "Physics"] |
Salvianolic acid B ameliorates hepatocyte lipid droplet accumulation via stimulation of autophagy
Salvianolic acid B (Sal B) is the most abundant bioactive member of Salvia miltiorrhiza and has been reported to provide many benefits in the treatment of cardio-cerebral vascular diseases and metabolic diseases. Lipid droplets are dynamic organelles, and excessive lipid droplet accumulation in the liver causes fatty liver disease. In this study, we examined the effect of Sal B on hepatic lipid accumulation and its possible mechanism. We found that Sal B treatment significantly decreased lipid accumulation and TG levels in primary hepatocytes; to our knowledge, this is the first demonstration of an effect of Sal B on lipid droplets. Meanwhile, we discovered that Sal B stimulates autophagy in hepatocytes. Using the autophagy inhibitor 3-MA, we further revealed that after administration of 3-MA, Sal B showed little effect on improving hepatic lipid accumulation, meaning that the effect of Sal B on lipid droplets depends on autophagy. This study demonstrates that Sal B ameliorates hepatic lipid accumulation through activation of autophagy. These findings support its potential benefit in liver diseases related to hepatic lipid accumulation and hepatic autophagy.
Background
Lipid droplets are dynamic organelles that store neutral lipids during times of energy excess and serve as an energy reservoir during deprivation. Fatty liver disease, caused by excessive lipid droplet accumulation in the liver, is a hepatic manifestation of metabolic syndrome. The amount of lipid that can be exported from the liver depends on synthesis as well as the availability of triglycerides (TGs) that are stored within the hepatocyte in lipid droplets (LDs) [1]. Hepatocytes have the greatest capacity to store TGs in the form of small LDs [2]. Excessive LD accumulation in hepatocytes can result in lipotoxicity, with consequences of inflammation and subsequent cell death, and is likely due to disruption of LD packaging and/or secretion [3][4]. A recent study reported that autophagy plays an important role in regulating hepatic lipid homeostasis [5].
Autophagy is an intracellular catabolic process with an essential function in the maintenance of cellular and energy homeostasis [6]. Autophagosome formation requires the localization of phosphatidylethanolamine-conjugated microtubule-associated protein 1 light chain 3 (LC3) to the autophagosomal membrane, which indicates autophagosome occurrence [7]. Autophagy is an evolutionarily conserved lysosomal pathway tasked with degrading long-lived, unnecessary, or damaged proteins and organelles to maintain intracellular homeostasis [8]. This catabolic process also responds to various stresses, including nutrient depletion/starvation, oxidative stress, and infection by viral and bacterial pathogens, to confer cytoprotection [9]. Impaired autophagy in hepatocytes was shown to promote triglyceride accumulation in the liver and to interfere with subsequent mitochondrial β-oxidation, which provided the first concrete evidence for a connection between autophagy and hepatic lipid metabolism [10]. Since then, many studies have supported the view that lipids can also undergo degradation by macroautophagy, through sequestration into autophagosomes that then fuse with lysosomes; thus, autophagy has been considered a new cellular target for abnormalities in lipid metabolism and accumulation [11,12].
Salvianolic acid B (Sal B) is the most abundant and bioactive member of the polyphenolic compounds in Salvia miltiorrhiza. It is widely used in the treatment of cardio-cerebral vascular diseases [13,14]. Recent studies also reported a potential implication of Sal B in the treatment of insulin resistance, obesity, and type 2 diabetes [15][16][17]. In vivo studies demonstrated that Sal B improved glycemic control, dyslipidemia, and insulin sensitivity in high-fat-diet and streptozocin-induced type 2 diabetic rats as well as in db/db mice [17,18], indicating a contribution of Sal B to improving glucose and lipid disorders. In hepatocytes, Sal B has been reported to strongly protect cells from injury induced by oxidative stress and inflammation and to enhance hepatic differentiation [19][20][21].
In this study, we found that Sal B stimulated hepatic autophagy while reducing lipid accumulation in hepatocytes, which suggests a possible mechanism by which Sal B improves lipid disorders and may contribute to treating metabolic syndrome.
Primary culture of hepatocytes and treatment
Isolation of primary mouse hepatocytes was performed by nonrecirculating perfusion with collagenase. Briefly, a male C57BL/6 mouse (6 weeks old) was anaesthetized and sterilized, and its abdominal cavity was fully exposed.
Western blotting analysis
Western blotting was performed as previously described. Protein lysates were subjected to SDS-PAGE, transferred to Hybond-PVDF membranes, and incubated with specific primary antibodies against LC3 (Sigma, USA), p62 (Abcam, USA), and Beclin-1 (ImmunoWay Biotechnology Company, USA). Equal loading was checked by incubating the membrane with a monoclonal antibody against β-actin (ZSGB, China) or GAPDH (ImmunoWay Biotechnology Company, USA). After washing, the membrane was incubated with anti-mouse or anti-rabbit secondary antibodies (ZSGB, China) for 2 h at room temperature. ECL reagent was purchased from Millipore.
Bodipy staining
For neutral lipid droplet staining [22], primary hepatocytes grown on coverslips were treated with Sal B, or 3-methyladenine (3-MA) combined with Sal B, for 24 h. Cells were then incubated with BODIPY 493/503 (Life Technologies, 20 μg/ml) for 20 min at 37 °C to label lipid droplets and with Hoechst 33342 (Sigma, 5 mg/ml) for 5 min to stain nuclei, washed three times with PBS, fixed, and mounted on glass slides. Images were acquired using an Olympus microscope (Olympus BX53, Japan).
Oil red O staining
The
Statistical analysis
Statistical analyses were performed by ANOVA, and means were compared by Fisher's protected least-significant difference test using StatView software from SAS Institute Inc. (Cary, NC). Data are summarized as mean ± S.D. A p-value < 0.05 was considered statistically significant.
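As a rough illustration of this analysis (not the authors' actual StatView script), the sketch below runs a one-way ANOVA on invented TG measurements for three treatment groups with scipy; Fisher's protected LSD is approximated here by pairwise t-tests performed only after a significant ANOVA, which is a simplification of the actual procedure.

```python
import numpy as np
from scipy import stats

# Hypothetical TG content per treatment group; values are illustrative only.
control  = np.array([41.2, 39.8, 43.5, 40.1, 42.0])
sal_b    = np.array([28.4, 30.1, 27.5, 29.9, 28.8])
three_ma = np.array([58.3, 61.0, 57.2, 60.4, 59.1])

# One-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(control, sal_b, three_ma)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

# "Protected" follow-up comparisons: only run if the ANOVA is significant.
# (True Fisher's LSD would use the pooled ANOVA error term.)
if p_value < 0.05:
    for name, group in [("Sal B", sal_b), ("3-MA", three_ma)]:
        t, p = stats.ttest_ind(control, group)
        print(f"control vs {name}: t = {t:.2f}, p = {p:.4g}")
```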
Sal B inhibited lipid accumulation in primary cultured hepatocytes
As the main component of the neutral lipids of the cell, triglyceride was extracted and quantified. Our results showed that Sal B decreased triglyceride (TG) content in primary hepatocytes at concentrations of 100 and 250 μM (Fig. 1a). Neutral lipid droplet staining with Bodipy 493/503 revealed that control cells contained many dispersed spherical lipid droplet (LD) structures with strong fluorescence throughout the cytoplasm, whereas 100 μM Sal B weakened LD staining (Fig. 1b). Further analysis of total Bodipy fluorescence showed that Sal B decreased LD accumulation in hepatocytes to nearly one-third of that in control cells (Fig. 1c).
Sal B stimulated autophagy in primary cultured hepatocytes
In our study, we found that Sal B at concentrations from 50 to 200 μM significantly stimulated LC3B expression but decreased p62 levels in hepatocytes (Fig. 2a). We then selected 100 μM Sal B as the experimental concentration and observed significantly higher expression of LC3B and Beclin1 in Sal B-treated hepatocytes (Fig. 2b), whereas p62 expression decreased (Fig. 2b) compared to the control group. This suggests that Sal B activated autophagy and stimulated autophagosome formation.
Inhibition of autophagy induced lipid accumulation in hepatocytes
Many studies have reported a central role of hepatic autophagy in regulating lipid metabolism. Treatment with the autophagy inhibitor 3-MA decreased LC3B expression whereas it increased p62 expression (Fig. 3a and b). Meanwhile, Oil Red O staining showed that 10 mM 3-MA markedly stimulated lipid accumulation in hepatocytes (Fig. 3c). The absorbance of the isopropanol-dissolved Oil Red O stain was 50% higher for 3-MA-treated cells than for untreated cells (Fig. 3d). These data confirm the view that impaired autophagy can lead to hepatic lipid accumulation.
Sal B inhibited lipid accumulation through activated autophagy in hepatocytes
In our study, we found that Sal B stimulated autophagy and inhibited lipid accumulation in hepatocytes. We then investigated whether the inhibition of lipid accumulation by Sal B was mediated via autophagy. Our results showed that the inhibition of lipid accumulation by Sal B was significantly abolished by 3-MA treatment (Fig. 4a and b). However, the stimulation of lipid accumulation by 3-MA was not affected by additional Sal B (Fig. 4a and b). Meanwhile, the inhibition of TG levels by Sal B was also prevented after administration of 3-MA, but the stimulation of TG content in hepatocytes by 3-MA was not affected by Sal B. This suggests a central role of autophagy in Sal B-mediated inhibition of hepatic lipid accumulation.
Discussion
In this study, we found that Sal B significantly inhibited lipid accumulation in hepatocytes. Current management of hepatic lipid accumulation relies on lifestyle modification and, if necessary, pharmacological treatment [29]. Based on this, the stimulation of autophagy might underlie the reduction of hepatic lipid droplet accumulation by Sal B, which would benefit people with hepatic lipid accumulation.
Previous studies reported that pharmaceutical inhibition of autophagy significantly increased LD accumulation in hepatocytes [30]. Consistent with this, we found that inhibition of autophagy by 3-MA induced intracellular lipid droplet accumulation and increased TG levels in hepatocytes. Meanwhile, the reduction of LD accumulation and TG levels by Sal B was abrogated by 3-MA, which indicates that Sal B inhibits hepatic lipid accumulation through stimulation of autophagy. Although Sal B has been reported to have a variety of beneficial metabolic effects, including ameliorating the histopathological alterations of the pancreas, increasing muscle glycogen content, increasing p-AMPK protein expression in skeletal muscle and liver, and increasing protein expression of PPARα and p-ACC in the liver [17,18], our study reveals an autophagy-dependent mechanism for its effect on hepatic lipids.
In conclusion, our study demonstrates another function of Sal B, decreasing lipid droplet accumulation in hepatocytes, and provides a possible mechanism by which Sal B regulates hepatic lipid metabolism through activation of autophagy, which may contribute to its benefit in liver diseases related to hepatic lipid accumulation and hepatic autophagy.

Figure 4: The effect of Sal B on neutral lipid droplets and TG content when autophagy was inhibited with 3-MA in primary mouse hepatocytes. Bodipy staining (a), total fluorescence of Bodipy (b) and TG content (c) were measured in primary hepatocytes treated with Sal B, 3-MA, or Sal B combined with 3-MA. *p < 0.05 compared to control cells; &p < 0.05 compared to Sal B-treated cells.

| 2,290.4 | 2020-04-12T00:00:00.000 | ["Medicine", "Biology"] |
Tree potential growth varies more than competition among spontaneously established forest stands of pedunculate oak (Quercus robur)
Analyses of dendrochronological data from 15 recently established stands of pedunculate oak (Quercus robur L.) revealed that functions describing potential tree growth in the absence of neighbours varied more between stands than functions describing competitive effects of conspecific neighbours. This suggests that competition functions can more easily be transferred among stands than potential growth functions. The variability inherent in the natural establishment of tree stands raises the question of whether one can find general models for potential growth and competition that hold across stands. We investigated variation in potential growth and competition among recently established stands of Q. robur and tested whether this variation depends on stand structure. We also tested whether competition is symmetric or asymmetric and whether it is density-dependent or size-dependent. Lastly, we examined whether between-year growth variation is synchronous among stands. Potential growth, competition and between-year growth variation were quantified with statistical neighbourhood models. Model parameters were estimated separately for each stand using exhaustive mapping and dendrochronology data. Competition was best described with an asymmetric size-dependent model. Functions describing potential growth varied more among forest stands than competition functions. Parameters determining these functions could not be explained by stand structure. Moreover, annual growth rates showed only moderate synchrony across stands. The substantial between-stand variability in potential growth needs to be considered when assessing the functioning, ecosystem services and management of recently established Q. robur stands. In contrast, the relative constancy of competition functions should facilitate their extrapolation across stands.
Introduction
In contrast to many regions of the world, Europe has experienced an increase in forest cover over the last decades (Díaz et al. 2019). Due to agricultural land abandonment (Fuchs et al. 2013; Potapov et al. 2015; Song et al. 2018) and the reduction in farmland in many European regions, forest area has increased by 0.8 million hectares each year since 1990 (UNECE 2015). This trend should continue in coming years (Schröter et al. 2005). Newly establishing forest areas may affect community composition, net primary production, rates of decomposition and nutrient cycles (Whitham et al. 2006; Allan et al. 2012). They may deliver ecosystem services, such as carbon storage, increased potential habitat for associated species, biodiversity "refuges" or corridors for species migration, but they may also cause threats such as increased fire risk, invasive species spread or loss of open landscapes in cases of uncontrolled spread (Rey Benayas and Bullock 2012). To adequately manage the passive restoration of agricultural lands, it is necessary to deepen our understanding of the underlying mechanisms of forest establishment. Several recent studies have examined various processes occurring in establishing forests, including growth patterns and sensitivity to climate (Alfaro-Sánchez et al. 2019), effects of shrub cover or herbivory on recruitment (Ramirez and Diaz 2008; Cruz-Alonso et al. 2019; Rey Benayas et al. 2015) and carbon storage (Vilà-Cabrera et al. 2017). Nevertheless, the dynamics of natural forest regeneration and the underlying demographic processes, such as growth and competition, remain to be further explored. In general, the dynamics of tree populations are difficult to describe and model because the fundamental demographic processes driving forest dynamics, such as individual growth, fecundity, dispersal, recruitment and mortality, strongly depend on the spatial arrangement and size structure of neighbouring trees. If forest establishment occurs naturally (rather than as a consequence of reforestation), established tree populations can show particularly strong variation in spatial and size structure. Moreover, founder effects can cause substantial variation in the genetic structure of these populations that can have profound consequences for ecological functioning (Whitham et al. 2006). This raises the question of whether one can find general models for tree growth and competition that hold across multiple recently established tree populations. One must also consider that an individual's growth varies from year to year, according to temporal variation in environmental conditions, in particular temperatures and water availability during early summer in temperate forests (Rozas 2005; Scharnweber et al. 2011; Canham et al. 2018).
Competition between individuals is a main driver of tree population dynamics (Pacala et al. 1996;Bugmann 2001). Spatially restricted competition for light, nutrients and/or water gives rise to negative effects of neighbours on the growth of a target individual. These competitive effects are generally expected to increase with the size of neighbours. They can be described with size-dependent neighbourhood models. Alternatively, competition can be described by a density-dependent neighbourhood model in which the competitive effect of neighbours is independent of their size. Additionally, competition can be either symmetric (if a target tree experiences competition from all neighbours) or asymmetric (if a target tree only experiences competition from larger neighbours). Asymmetric competition arises when resources are not homogeneously distributed in space or when resource supply is directional (Schwinning and Weiner 1998). Competition for light is often assumed to be more asymmetric than competition for soil resources (Schenk 2006) and can be considered as the major competition process in tree populations (Bourdier et al. 2016).
Various neighbourhood models in which individual growth is modelled as a function of size and distance to neighbours have been developed and used (Bella 1971; Hegyi 1974; Lorimer 1983; Wimberly and Bare 1996; Berger and Hildenbrandt 2000; Canham et al. 2004; Canham et al. 2006; Uriarte et al. 2004a; Uriarte et al. 2004b; Stadt et al. 2007; Coates et al. 2009; Gómez-Aparicio et al. 2011; Das 2012; Buechling et al. 2017; Latreille et al. 2017). Many of the aforementioned studies investigated tree growth and neighbourhood competition in mixed-species and/or single-species populations, but not how tree growth and competition vary between populations of the same species, except Latreille et al. (2017), who focused on climate effects on silver fir growth. In other studies (Canham et al. 2006; Stadt et al. 2007; Buechling et al. 2017), different locations were compared, but they comprised different mixtures of species and concerned established populations. Several studies have underlined how growth and/or competition can vary between stands of different ages (Alfaro-Sánchez et al. 2019), over spatial extents ranging from a few hundred meters to a couple of kilometres (Linares et al. 2010; Fraver et al. 2014), or across Europe (Ruiz-Benito et al. 2014). However, they generally describe competition through global competition indices, whereas neighbourhood models have the advantage that they include a free parameter that describes the spatial scale of competitive interactions.
This study presents neighbourhood model analyses of 15 recently established stands of pedunculate oak (Quercus robur L.) in south-western France. For each stand, we use comprehensive data on the spatial location and annual growth increments of individuals to quantify potential tree growth in the absence of neighbours as well as intraspecific competition with neighbours, while accounting for between-year variation in growth. These analyses serve to address four objectives: (1) to determine whether competition is symmetric or asymmetric and whether it is size-or density-dependent; (2) to quantify the extent to which potential growth and competition functions vary among stands; (3) to test whether this between-stand variation can be explained by the age, size or spatial structure of stands; and (4) to test whether between-year variation in tree growth is synchronous across stands.
Study area
The study area was located between 15 and 45 km southwest of Bordeaux, France (44°41′ N, 00°51′ W) (Appendix Fig. 6). This region is covered by 1 million ha of Pinus pinaster L. forest. Widespread deciduous tree species include holm oak (Q. ilex L.), Pyrenean oak (Q. pyrenaica Willd.), silver birch (Betula pendula L.) and different willows (Salix spp.). The region's climate is oceanic (mean annual temperature of 12.8 °C and annual precipitation of 873 mm over the last 20 years), and the soil is sandy (spodosols), very dry during summer and wet during winter (see Valdés-Correcher et al. (2019) for more details on soil and climate characteristics).
Sampling and dendrochronology analysis
We randomly selected 15 forest stands from the 18 isolated, newly established oak forest stands selected by Valdés-Correcher and colleagues (see Appendix Table 3 in Section 6 for further information on forest stand characteristics). In the studied forest stands, each Q. robur individual above 3 cm diameter at breast height (dbh) was mapped (using GPS) and its dbh was measured in summer 2018. All 661 living individuals (above 3 cm dbh) were cored once (in summer 2018) with a 5-mm Pressler increment borer. For the majority of these (98%), the drilling height was 30 cm. After sampling, cores were air dried. Tree rings were measured optically after scanning cores at 1200 dpi. The chronologies were visually cross-dated at the time of measurement with Windendro 2017a, using the Gleichläufigkeit index (Schweingruber 1988) and pointer years with unusually low or high growth (see Alfaro-Sánchez et al. 2020 for a detailed description of the field sampling, laboratory procedures and data collection).
In the case of multistemmed trees, we calculated the basal area for each stem based on the measured dbh and summed those basal areas to obtain the total basal area of the tree. Then, the ratio between the basal area of the cored stem and the total basal area of the tree was used as a correction factor to estimate the annual growth increment of the entire tree from the growth increment of the cored stem.
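A minimal sketch of one plausible reading of that correction follows; the tree in the example is invented. The cored stem's share of the tree's total basal area rescales the measured ring increment.

```python
import math

def whole_tree_increment(dbh_stems_cm, cored_index, ring_width_mm):
    """Scale the cored stem's ring width by its share of total basal area.

    dbh_stems_cm : list of dbh values (cm), one per stem of the tree
    cored_index  : index of the stem that was cored
    ring_width_mm: annual ring width measured on the cored stem (mm)
    """
    basal_areas = [math.pi * (d / 2.0) ** 2 for d in dbh_stems_cm]
    share = basal_areas[cored_index] / sum(basal_areas)
    return ring_width_mm * share

# Hypothetical three-stemmed tree; the 12 cm stem was cored.
print(whole_tree_increment([12.0, 8.0, 5.0], cored_index=0, ring_width_mm=1.4))
```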
Tree age at the end of 2017 was calculated as the sum of the number of measured rings, the number of missing rings at the pith, the number of missing rings at drilling height and the number of missing rings under the bark. When the core passed the tree pith, the number of missing rings at the pith was set to 0. Otherwise, it was estimated based on the estimated distance to the pith and the growth of the five closest rings to the pith. The number of missing rings at drilling height was set to 3 when the drilling height was 30 cm (for 98% of the individuals). For the remaining 2% of individuals, which were drilled higher, we added more years assuming an average height growth of 10 cm per year (Gerzabek et al. 2017).
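The age bookkeeping can be made explicit with a small helper following the rules stated in the text (3 missing rings at the standard 30 cm drilling height, plus roughly one extra year per 10 cm of additional drilling height, assuming 10 cm of height growth per year); the inputs in the example are invented.

```python
def tree_age(n_measured_rings, missing_at_pith, missing_under_bark,
             drilling_height_cm=30.0):
    """Estimate tree age at coring following the bookkeeping in the text."""
    # 3 missing rings at 30 cm; one extra year per additional 10 cm of height.
    missing_at_height = 3 + max(0.0, drilling_height_cm - 30.0) / 10.0
    return (n_measured_rings + missing_at_pith
            + missing_at_height + missing_under_bark)

print(tree_age(22, 2, 0))                          # cored at 30 cm -> 27 years
print(tree_age(22, 2, 0, drilling_height_cm=50))   # cored at 50 cm -> 29 years
```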
Model description
The theoretical approach developed by Canham and colleagues states that the absolute growth rate, i.e. the observed diameter growth rate (ring width per year), is the product of the hypothetical diameter growth rate of a "free-growing" tree and factors that may reduce this growth, such as neighbour effects (crowding and shading; Canham et al. 2004; Uriarte et al. 2004a, b; Stadt et al. 2007; Das 2012) and environmental factors such as site effects, climate and pests (Canham et al. 2006; Coates et al. 2009; Gómez-Aparicio et al. 2011). There is no consensus on how to link diameter growth and size (Coates et al. 2009). Nevertheless, the aforementioned studies have modelled tree growth as a function of size using a lognormal function because of its flexibility and empirical support.
For each individual $i$ and year $t$, we described the observed ring width $Y_{i,t}$ as a function of the focal individual's size (dbh) $X_{i,t-1}$ at time $t-1$ and the neighbourhood effect $NE_{i,t}$.

For individual $i$, we modelled the effect of the dbh $X_{i,t-1}$ at year $t-1$ on the potential growth rate (i.e. the growth rate without competitors) at year $t$ (in mm/year), $g_{i,t}$, as:

$$g_{i,t} = g_m \, S(X_{i,t-1}) \quad (1)$$

with $g_m$ the maximum growth rate (mm/year) and $S(X_{i,t-1})$ the lognormal growth-size relationship:

$$S(X_{i,t-1}) = \exp\!\left[-\frac{1}{2}\left(\frac{\log\left(X_{i,t-1}/x_0\right)}{x_b}\right)^{2}\right] \quad (2)$$

where $x_0$ is the dbh at maximum growth rate (cm) and $x_b$ is the shape parameter (larger values lead to a shallower relationship between dbh and growth rate). The neighbourhood effect $NE_{i,t}$ on the target individual $i$ at year $t$ is modelled as an inverse function of the distance $D_{i,j}$ (m) between the individual and its neighbours $j$ (within a radius of 50 m) and an effect of the neighbour dbh, $f(X_{j,t})$:

$$NE_{i,t} = \sum_{j} \frac{f(X_{j,t})}{D_{i,j}^{\,\beta}} \quad (3)$$

where $\beta$ describes how sharply the neighbour effect decreases with distance. We also tested a Gaussian kernel to describe the decline of the competitive effect with distance, as used in Nottebrock et al. (2017), but this model failed to converge. We tested four different models describing the alternative competition hypotheses and one model without competition. In the size-dependent models, the competitive effect of neighbours depends on their size, whereas in the density-dependent models all neighbours have in principle the same competitive effect. In the asymmetric models, trees only respond to the competitive effects of larger neighbours, whereas in the symmetric models they respond to competition by all neighbours. For the no-competition (NC) model, we set $NE_{i,t} = 0$. For symmetric density-dependent competition (SDC), neighbour effects are independent of neighbour size, so that $f(X_{j,t}) = 1$ for all neighbours $j$. For symmetric size-dependent competition (SSC), neighbour effects increase with neighbour size and $f(X_{j,t}) = \log(X_{j,t})$ for all neighbours $j$. For asymmetric density-dependent competition (ADC), only neighbours that are larger than the focal plant have a (size-independent) neighbour effect, so that $f(X_{j,t}) = 1$ for neighbours $j$ with $X_{j,t} > X_{i,t}$ and 0 otherwise. Finally, for asymmetric size-dependent competition (ASC), $f(X_{j,t}) = \log(X_{j,t})$ for neighbours $j$ with $X_{j,t} > X_{i,t}$ and 0 otherwise. We used the logarithm of the dbh rather than the dbh itself because it was a better explanatory variable for our dataset.

The competition effect $C_{i,t}$, i.e. the reduction in the potential growth rate of individual $i$ at year $t$, is then:

$$C_{i,t} = \exp\left(-a \, NE_{i,t}\right) \quad (4)$$

with $a$ the sensitivity to competition. We included an annual random effect $\epsilon_t \sim N(0, \phi^2)$ on the logarithm of the growth rate at year $t$. The logarithm of the realised growth rate of individual $i$ at year $t$, $y_{i,t}$, including the competition effect and the annual random effect, is thus:

$$y_{i,t} = \log\left(g_{i,t}\, C_{i,t}\right) + \epsilon_t \quad (5)$$

The logarithm of the observed ring width $Y_{i,t}$ for individual $i$ at year $t$ follows a normal distribution:

$$\log(Y_{i,t}) \sim N\left(y_{i,t}, \sigma^2\right) \quad (6)$$
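To make the model concrete, here is a small Python sketch of the forward prediction under the ASC variant, assuming the equations as reconstructed above. The function name, parameter values and the toy stand are all invented for illustration and are not from the paper.

```python
import numpy as np

def predict_log_growth(dbh, coords, g_m, x0, xb, a, beta,
                       eps_t=0.0, radius=50.0):
    """Median log ring width per tree under the ASC model: lognormal
    potential growth reduced by the distance-weighted log-dbh of larger
    neighbours within the given radius."""
    n = len(dbh)
    log_y = np.empty(n)
    for i in range(n):
        # Lognormal growth-size relationship S(X).
        S = np.exp(-0.5 * (np.log(dbh[i] / x0) / xb) ** 2)
        # Neighbourhood effect: only neighbours larger than the focal tree.
        ne = 0.0
        for j in range(n):
            if j == i or dbh[j] <= dbh[i]:
                continue
            dist = np.hypot(*(coords[j] - coords[i]))
            if 0.0 < dist <= radius:
                ne += np.log(dbh[j]) / dist ** beta
        # log realised growth = log(potential) - a * NE + annual effect.
        log_y[i] = np.log(g_m * S) - a * ne + eps_t
    return log_y

# Toy stand: four trees (dbh in cm, coordinates in m); parameters invented.
dbh = np.array([8.0, 15.0, 30.0, 45.0])
coords = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0], [6.0, 6.0]])
print(np.exp(predict_log_growth(dbh, coords, g_m=4.0, x0=25.0, xb=1.0,
                                a=0.5, beta=1.5)))  # predicted mm/year
```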
Statistical analysis
Parameter estimation

For each of the five model types (NC, SDC, SSC, ADC and ASC), parameters were estimated independently for each of the 15 forest stands. The models were fitted in a Bayesian framework using Markov chain Monte Carlo (MCMC). We defined vague prior distributions for each parameter (Table 1), and we used the same prior distributions for each inference process. MCMC computations were performed using the rjags R package (Plummer 2009; R Core Team 2018) (JAGS version 4.3.0, R version 3.4.4, rjags version 4-6). For each model and forest stand, 30,000 iterations were performed for each of three chains and the first 25,000 iterations were discarded as burn-in, leading to 15,000 values for the posterior distributions.
To check convergence, we used the Gelman and Rubin (1992) convergence diagnostic. All models converged (see Appendix Table 4 for convergence diagnostics of the ASC model). For plotting and prediction of model functions, the obtained posterior distributions were subsampled to 1000 parameter combinations. For each forest stand, we computed Bayesian R 2 according to Gelman et al. (2018).
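For readers unfamiliar with the Gelman-Rubin diagnostic, a bare-bones numpy version for a single parameter is sketched below (the rjags implementation and split-chain refinements are omitted); the chains in the example are synthetic.

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for one parameter.

    chains: array of shape (n_chains, n_iterations) of post-burn-in draws.
    Values close to 1 indicate convergence.
    """
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)           # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled variance estimate
    return np.sqrt(var_hat / W)

# Three synthetic chains of 5,000 draws from the same distribution.
rng = np.random.default_rng(0)
chains = rng.normal(loc=1.2, scale=0.3, size=(3, 5000))
print(gelman_rubin(chains))  # ~1.0 for well-mixed chains
```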
Model comparison
In order to find the most appropriate competition model for each forest stand, we computed the Deviance Information Criterion (DIC) (Spiegelhalter et al. 2002) for the five different models fitted to the same data, and identified for each forest stand the DIC-minimal model.
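A hedged sketch of how DIC can be computed from MCMC output is given below; `deviance_fn` and the toy Gaussian example are placeholders, not the actual likelihood used in this study.

```python
import numpy as np

def dic(deviance_fn, posterior_draws):
    """Deviance Information Criterion from posterior samples.

    deviance_fn     : maps one parameter vector to -2 * log-likelihood
    posterior_draws : array (n_samples, n_params) of MCMC draws
    """
    deviances = np.array([deviance_fn(theta) for theta in posterior_draws])
    d_bar = deviances.mean()                           # posterior mean deviance
    d_hat = deviance_fn(posterior_draws.mean(axis=0))  # deviance at posterior mean
    p_d = d_bar - d_hat                                # effective n. of parameters
    return d_bar + p_d                                 # DIC = D_bar + p_D

# Toy example: squared-error "deviance" for a Gaussian mean (up to a constant).
data = np.array([0.9, 1.1, 1.3, 0.8])
dev = lambda theta: np.sum((data - theta[0]) ** 2)
draws = np.random.default_rng(1).normal(1.0, 0.1, size=(2000, 1))
print(dic(dev, draws))  # lower DIC = better fit/complexity trade-off
```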
Among-forest stand variation of potential growth and competition
To quantify the variation of potential growth and competition among forest stands, we computed the respective functions for each forest stand using posterior medians of parameters. Specifically, we calculated potential growth rate g(X) (1), growth-size relationship S(X) (2) and the competitive effect on growth C(X) ((4), for a focal tree that has a single larger neighbour of 10 cm dbh). For each of the 15 forest stands, potential growth rate and the growth-size relationship were predicted for 12 dbh values that were equally spaced between 5 and 60 cm. This covers the dbh range in which most individuals lie. For each of the 12 dbh values, we computed the coefficient of variation of the 15 predicted potential growth rates and growth-size relationships (one prediction per forest stand). We thus obtained 12 coefficients of variation (one per dbh value) that quantify to what extent the potential growth rate and the growth-size relationship vary among forest stands. Similarly, for each forest stand, the competitive effect of a neighbour was predicted for 12 distance values that were equally spaced from 1 to 12 m (above 12 m the competitive effect is very close to 1 for all stands). We thus obtained 12 coefficients of variation representing the among-stand variability of competition functions.
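The between-stand variability computation reduces to one coefficient of variation per evaluation point; a minimal sketch with invented per-stand predictions follows.

```python
import numpy as np

# Hypothetical predictions: rows = 15 stands, columns = 12 dbh values
# (equally spaced between 5 and 60 cm) at which a function such as S(X)
# was evaluated with each stand's posterior-median parameters.
rng = np.random.default_rng(2)
predictions = rng.uniform(0.4, 1.0, size=(15, 12))

# One coefficient of variation per dbh value, taken across stands.
cv = predictions.std(axis=0, ddof=1) / predictions.mean(axis=0)
print(np.round(cv, 3))  # 12 values summarising among-stand variability
```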
Effects of stand structure on growth and competition functions

In order to investigate the effect of stand structure in space, size and age on growth and competition functions, we calculated seven variables. For spatial structure, we selected the number of trees, the density and the average nearest-neighbour distance (i.e., for each forest stand, we recorded the distance to the nearest neighbour of each individual and computed the mean of those distances). For size structure, we selected the mean and standard deviation of the size (dbh) distribution. For age structure, we selected the maximum and the mean age. For each combination of a model parameter and a forest stand structure variable, we used the parameter posterior distributions (15,000 values) for the different forest stands and performed 15,000 linear regressions of the sampled parameter values against the forest stand structure variable, using the lme4 R package (version 1.1-17) (Bates et al. 2015).
Model comparison
Competition among Quercus robur trees was found to be asymmetric rather than symmetric: the asymmetric models (ADC and ASC) had a lower DIC than the symmetric models (SDC and SSC) for all forest stands except three (Table 2). Between the two models describing asymmetric competition, the size-dependent version had a lower DIC than the density-dependent version for all but one of the 15 stands (Table 2), even though competition asymmetry was more important than size dependence (the DIC of the ADC model was overall lower than the DIC of the SSC model). Across forest stands, the "predominantly best" model was thus the asymmetric size-dependent competition model. For better comparability, all following results on inter-stand variation are therefore shown only for the ASC model.
Model fit and parameter estimation
The data were overall satisfactorily described by the model: between 94.0 and 96.9% of the observed data lie within the 95% credibility interval of the predicted data. The fit of median predictions to the observed data (Fig. 1) varied between forest stands, ranging from satisfactory for some stands (A, F, K, L) to poor for others (B, O, P). The proportion of variance explained by the model ranged from 0.21 to 0.73 (mean of the R² distribution, Fig. 1). We obtained overall narrow posterior distributions for almost all parameters compared with the prior distributions, suggesting that sufficient information was available from our data to accurately estimate model parameters (Fig. 2). The narrow posterior distributions of parameters led to small uncertainty around the potential growth rate and competition functions (Appendix Fig. 7).
Variability across forest stands and effect of forest stand structure
Parameter estimates varied among forest stands (Fig. 2), which led to different potential growth rate and competition effect functions for each forest stand (Fig. 3; predictions with the median values of parameter posterior distributions). However, potential growth functions were overall more variable across forest stands than competition effect functions (Fig. 3). Different shapes of the potential growth function arise from variation in the value of the dbh at maximum growth rate, $x_0$ (Fig. 2): the potential growth rate can be monotonically increasing with size for high $x_0$ or show an intermediate size optimum for lower $x_0$. The between-stand coefficients of variation of the potential growth rate g(X) and the growth-size relationship S(X) were higher than the coefficients of variation for the competitive effect C(X) (Fig. 4). Hence, competition functions varied less between stands than potential growth functions. The bootstrap linear regressions of parameter values against forest stand structure variables showed no significant effect of forest stand structure on parameter estimates (see Appendix Fig. 8).
Annual variation in growth
There is no obvious consistency in the annual effects on growth across forest stands: for a given year, the median values of $\epsilon_t$ are overall variable among stands (Fig. 5). Some forest stands showed a lot of inter-annual variability in growth rates, while others showed very little variation across years (Appendix Fig. 9). Nevertheless, some years could be identified as overall "positive years for growth", when the median value of $\epsilon_t$ is positive for more than 75% of the forest stands. These years are 1993, 1994, 2000, 2003, 2004 and 2007. Similarly, some years could be identified as overall "negative years for growth".
Discussion
Mode of competition

The model describing size-dependent asymmetric competition generally described the growth data best (Table 2). This implies that smaller trees are more sensitive to competition, as stated in previous studies (Hegyi 1974; Schenk 2006; Bourdier et al. 2016). It might indicate that shading is the dominant source of competitive pressure (Canham et al. 2004). Although size-asymmetric root competition may also occur in heterogeneous soils, root competition is rarely as asymmetric as shoot competition (Schenk 2006; Rasmussen et al. 2019). Moreover, in dry conditions, size-asymmetric competition may become increasingly symmetric at later stages of stand development, possibly as a result of decreasing soil water availability (Masaki et al. 2006). Competition for light will be particularly important in later stages of forest stand establishment, when the canopy increasingly closes as the first founder trees grow large and exert strong competition on later and smaller recruits.

Figure 1: Observed and predicted median absolute growth rates for each forest stand, with the asymmetric size-dependent competition model. Dots are full when the corresponding observed data point was within the 95% credibility interval of predicted data and empty when it was outside that interval. The dashed red line is the identity line.
Potential growth and competition functions
Overall, the value ranges of the estimated growth parameters are in accordance with previous studies using this growth model (Canham et al. 2004; Uriarte et al. 2004a, b; Stadt et al. 2007; Das 2012). In several studies, the neighbour size is scaled in the competition equation, with exponents ranging from 0.7 to 3.5 depending on the species (Canham et al. 2004; Uriarte et al. 2004a, b; Stadt et al. 2007; Das 2012), while we used the logarithm of the dbh, suggesting an overall weaker effect of neighbour size in our forest stands. Concerning the competition kernel shape, β has often been estimated below 1 (Canham et al. 2006; Stadt et al. 2007; Coates et al. 2009; Das 2012), while it was above 1 for a majority of our forest stands, leading to a steeper decline of competitive effects with distance. Growth variability at the population level has previously been observed between two sets of five silver fir stands along an elevation gradient (Latreille et al. 2017). Our finding that competition functions are more consistent across populations than functions describing potential growth is in accordance with the results of a global (and much coarser-scale) study.
Goodness of fit and uncertainty sources
Although the residual variance in predicted growth rates remains large for some forest stands (Fig. 1), we obtained satisfactory fits of the model to the data when compared with the R² values obtained in similar studies, which generally ranged from 0.10 to 0.60 (Canham et al. 2004; Uriarte et al. 2004a, b; Stadt et al. 2007; Das 2012) but reached 0.40 to 0.90 in Coates et al. (2009). Uncertainty around the potential growth and competition functions (deriving from parameter estimation uncertainty) was low, and in general inter-annual effects did not explain inter-individual variation. The exhaustive sampling of the forest stands is likely responsible for this rather low uncertainty. A major part of the individual variability in observed growth is explained neither by the competition process itself, nor by the annual variation in growth, nor by uncertainty in parameter estimation; it is probably mainly due to intrinsic biological variation between individuals or to individual-year interactions (Clark 2010). In addition, damage to the overall health of the individual, such as physical damage, herbivory, pathogens or pests, can affect growth (Hansen and Goheen 2000; Dobbertin et al. 2001; Wood et al. 2003).

Among-stand variability in potential growth functions

The potential sources of variation in potential growth functions are numerous. Forest stand history may have influenced growth patterns, through past management of the surroundings and of the forest stand itself and through non-human disturbances such as storms or fire (Rademacher et al. 2004; Davis et al. 2005; Rigg 2005). Mortality and thinning events might have occurred in the past but would not be directly observed in the available data. Such events have been shown to affect growth, and their effects can also be delayed in time (if a large neighbour of a target tree dies, the target tree would not immediately develop roots and crown structures to exploit the newly available resources) (Wright et al. 2000). Additionally, variation in potential growth functions could be due to herbivory, which varies substantially between the studied oak stands (Valdés-Correcher et al. 2019). Moreover, local environmental conditions have probably had a large impact on growth, as micro-topographic conditions differ among forest stands. Since developing forests have been found to be particularly sensitive to low water availability and high temperature (Coll et al. 2013; Madrigal-González and Zavala 2014; Ruiz-Benito et al. 2014), differences in the frequency and intensity of drought across stands could lead to variability in growth across forest stands.
The observed differences in potential growth functions were due to variation in both the maximum growth rate $g_m$ and the growth-size relationship $S(X)$, and may result from genetic and environmental influences (as assumed by Canham et al. 2004): genetic variability might play a part in determining potential growth and especially sensitivity to environmental factors influencing growth (genotype × environment interaction) (Atwood et al. 2002). For example, trees are likely to respond to temperature on a genetic basis at the provenance level (Saxe et al. 2001). A recent study on the tree individuals investigated here revealed considerable genetic effects on leaf herbivory by insects (Valdés-Correcher et al. 2020), a process that should to some extent trigger trees' resource acquisition and ultimately tree radial growth.

Figure 3: Predicted potential growth rate (1) and competition effect (4) functions, with each parameter equal to the median value of its posterior distribution. Each curve for potential growth rate was predicted for the range of observed dbh values in the corresponding forest stand.
No effect of stand structure on potential growth and competition functions

The variation in growth patterns could not be explained by forest stand structure in terms of size, age or spatial arrangement. Concerning the effect of size structure, it was surprising not to find any effect on growth parameters, knowing that different shapes of growth functions can be statistically selected depending on the range of tree sizes covered by the data (Das 2012). Although our forest stands show different size structures, they are not skewed towards very small or very large trees, which could strongly influence the shape of the growth function (Das 2012). Forest stand spatial structure showed no effect on competition parameters, suggesting that both the sensitivity to competition and the competition kernels do not depend on the average spatial structure of stands. Former studies have preferentially focused on whether competition coefficients are influenced by spatial segregation of species, rather than by the spatial arrangement of individuals per se (Freckleton and Watkinson 2001; Turnbull et al. 2004; Canham et al. 2006). However, the absence of spatial structure effects suggests that the present model included enough spatial information to describe competition. Finally, the absence of an effect of age structure on competition contrasts with the study of Masaki et al. (2006), who found that competition patterns change with stand age.

Figure 4: Coefficients of variation across forest stands for predicted potential growth rate (1), growth-size relationship (2) and competition effect on growth (4), with the asymmetric size-dependent competition model. For each of the three processes, the boxplot displays 12 coefficients of variation.
Between-year variation in tree growth

In old oak forests, 29% of the variance of oak ring width was explained by climate between 1925 and 1980 (Rozas 2011). Moreover, Alfaro-Sánchez et al. (2020) found a positive correlation between growth and soil water moisture in June-July of the current year and September-October of the previous year, as well as a negative correlation between growth and temperature in August-September of the previous year. In our study, however, between-year variation in growth rates was overall not synchronous across stands. This could be because the effects of climate variables on tree growth are age-dependent in oaks (probably because of physiological changes due to ageing) (Rozas 2005). Also, population-level variation in tree responses to several climate factors (precipitation, temperature, relative humidity) has been observed in silver fir (Latreille et al. 2017). Local soil, biotic and topographic conditions interact with general climatic effects, for instance leading to substantial between-site variation in water availability between populations, which is a critical factor in pedunculate oaks, especially during early summer (Rozas 2011; Scharnweber et al. 2011). Furthermore, competition for resources was found to influence tree responses to climate (Clark et al. 2014). These complex interactions between large-scale climate, stand structure and the local biotic and abiotic environment could explain why we did not find strong between-stand synchrony in annual growth rates.
Conclusion and perspectives
In this study, we modelled growth and intraspecific neighbourhood competition, including between-year variation, of 15 young stands of Quercus robur. We found that competition was overall asymmetric and size-dependent, and that competition functions were relatively similar between stands whereas potential growth functions were highly variable. Between-stand variation in model parameters could not be explained by the size, age or spatial structure of stands. Additionally, we only found moderate synchrony of annual growth rates across the forest stands.
Our study suggests that measuring and taking into account the variability in potential growth is essential for predicting the dynamics of young tree populations. Indeed, growth rate is strongly linked to tree health and individual mortality (Hülsmann et al. 2018), which drive the survival of the population and thus may contribute to shaping spontaneous forest establishment. Similarly, growth variability between populations may have an impact on other demographic processes, such as maturity or fecundity, which are driven by individual size. Ecosystem services linked to growth, such as carbon storage, are also likely to vary between populations. In contrast, competition functions (both the spatial extent and the effect of competition) were found to be consistent, so this process may be transferable among stands. Since stands with high potential growth rates do not show qualitatively different competition functions than stands with low potential growth, growth at low density should be a good predictor of growth in the same stand at higher density (and vice versa). Our study thus suggests that, within stands, growth measurements at a given density can be used to predict growth at different densities. This should help to predict the ecosystem functions that newly established stands of Q. robur will provide.
Given that population structure was not able to explain variability in growth and competition functions, investigating genetic variation in growth across populations would be of major interest. Similarly, local environmental conditions might be a source of variability to explore. In order to better understand why intraspecific competition functions were consistent across populations, it would be valuable to explore the mechanisms of competition. One could also test whether the consistency of links between traits and competition reported in earlier studies holds for establishing forests and can help to predict their dynamics.
Acknowledgments We sincerely thank the reviewers and the editors for their important, detailed and insightful input into this manuscript.
Author contributions DL, JP and FMS conceived the ideas and designed methodology; EVC and DB collected the data; DL developed the model, analysed the data and wrote the manuscript; AH and FMS supervised the study and acquired the funding. All authors contributed to the final draft and gave final approval for publication.
Funding information Open Access funding provided by Projekt DEAL. DL, JP and FMS were funded by the German Research Foundation (DFG, SCHU 2259/7-1) in the framework of the ERA-Net BiodivERsA project Sponforest.
Compliance with ethical standards
Conflict of interest The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Appendix
Prior distributions for the parameters of the asymmetric size-dependent competition model:

x_b (shape of the growth curve): log10(x_b) ~ U(-2, 2)
a (sensitivity to competition): log10(a) ~ U(-4, 1)
β (scale of the neighbour effect with distance): log10(β) ~ U(-1, 2)
φ (standard deviation of the inter-annual random effect on growth rate): φ ~ U(0, 5)
σ (standard deviation of log growth rate): log10(σ) ~ U(-2, 1)

Figure caption (fragment): "... (see Appendix Table 3) and forest stand-level medians of the parameters (see Table 1 for parameter definition) of the asymmetric size-dependent competition model. Lines show average regressions (using the mean of the intercepts and slopes estimated with the bootstrap linear regressions). NND denotes the nearest neighbour distance."

Fig. 9 Annual random effects on growth rate for each forest stand estimated by the asymmetric size-dependent competition model. Solid lines are the median values and dashed lines delimit the 95% credibility intervals
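As a quick sanity check of these prior ranges, the short sketch below draws samples from the uniform priors listed above. This is a minimal illustration only; the growth model itself is defined in the Methods and is not reproduced here, and the Python parameter names are our own labels.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_priors(n):
    """Draw n samples from the uniform priors of the asymmetric
    size-dependent competition model (Appendix table above)."""
    return {
        # shape of the growth curve: log10(x_b) ~ U(-2, 2)
        "x_b":   10 ** rng.uniform(-2, 2, n),
        # sensitivity to competition: log10(a) ~ U(-4, 1)
        "a":     10 ** rng.uniform(-4, 1, n),
        # scale of neighbour effect with distance: log10(beta) ~ U(-1, 2)
        "beta":  10 ** rng.uniform(-1, 2, n),
        # SD of the inter-annual random effect on growth rate: phi ~ U(0, 5)
        "phi":   rng.uniform(0, 5, n),
        # SD of log growth rate: log10(sigma) ~ U(-2, 1)
        "sigma": 10 ** rng.uniform(-2, 1, n),
    }

draws = sample_priors(1000)
print({k: (v.min().round(4), v.max().round(1)) for k, v in draws.items()})
```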
"Environmental Science",
"Biology"
] |
Performance of DGPS Smartphone Positioning with the Use of P(L1) vs. P(L5) Pseudorange Measurements
This paper presents numerical analyses of code differential GPS positioning with the use of two Huawei P30 Pro mobile phones. Code observations on the L1 and L5 frequencies were chosen for the DGPS positioning analysis. For project purposes, we additionally used one high-class geodetic GNSS receiver (Javad Alpha) acting as a reference station. The smartphones were placed at the same distance of 0.5 m from the reference receiver. Such a close distance was deliberately chosen by the authors in order to achieve identical observation conditions. Thus, it was possible to compare the DGPS positioning accuracy using the same satellites and the P(L1) and P(L5) code only, for single observation epochs and for sequential DGPS adjustment. Additionally, the precision of the double differences of the P(L1) and P(L5) observations was analyzed. In general, the use of the P(L5) code to derive DGPS positions made it possible to significantly increase the accuracy with respect to the positions derived using the P(L1) code. Average errors of horizontal and vertical coordinates were about 60-80% lower for the DGPS solution using the P(L5) code than for the one using the P(L1) code. Based on the simulated statistical analyses, an accuracy of about 0.4 m (3D) with 16 satellites may be obtained using a smartphone with the P(L5) code. An accuracy of about 0.3 m (3D) can be achieved with 26 satellites.
Introduction
DGPS (Differential Global Positioning System) measurements have been used since the beginning of satellite navigation. This positioning technique supports, above all, code-only receivers, essentially using the civil C/A (coarse/acquisition) code, which is accessible in all GPS receivers. Code-based DGPS positioning has been widely used not only in navigation but also in surveying and other applications. Pseudorange measurements observed at accurately placed reference stations are compared with analogous ranges computed from the known coordinates. The errors determined are transmitted as differential corrections for DGPS users within range. Unfortunately, the drawback of such a solution is the distance limitation over which the differential corrections are valid, caused by rapid decorrelation of the error sources. There are some research and development studies on generating corrections over larger areas. In the early 1990s, scientists proposed a wide-area DGPS (WADGPS) system design based on a limited number of reference stations, in which the determined corrections led to accuracies very close to those accomplished by traditional DGPS, independent of the distance between the reference stations and users [1]. This problem was also handled by other researchers using a reference station network by applying the network DGPS concept (called NDGPS). The NDGPS corrections were computed from data received via the Internet using NTRIP (Networked Transport of RTCM via Internet Protocol) [2,3]. The simplicity of its algorithms and its acceptable precision are an advantage. Even though the accuracy of code measurements is markedly worse than that of carrier-phase measurements, code measurements are nowadays primary in every GNSS receiver. Even when centimeter or millimeter accuracy is sought, the code measurements still support determining the exact coordinates, because carrier-phase positioning performance relies on the code measurements [4]. For this reason, there is not much research on the efficiency of DGNSS positioning compared with research on RTK phase measurements, even though the DGNSS technique may play a significant role over time. In the reverse direction, Weng et al. modified local-area DGNSS through the use of network RTK corrections. They reduced distance-dependent errors, and the accuracy for a longer baseline length was significantly improved, by more than 50% for a 17.9 km baseline [5]. Other researchers also presented a network real-time kinematic (RTK) solution which was used to reduce the decorrelation error in the DGPS system [6]. They used the Flächen Korrektur Parameter (FKP) to complement the current DGPS, and the results show that the positioning accuracy of the DGPS was improved by a maximum of 40%.
There is also more recent research on DGNSS positioning algorithms. Some of it focuses on combining the use of one reference satellite per system in relative positioning with one receiver clock parameter per system in absolute positioning. When the fusion of data from multiple global navigation satellite systems is considered, the inter-system bias (ISB) is frequently studied. In consideration of ISB, the typical DGNSS model should use separate clock parameters for each system to preserve the precision of the positioning results and increase the performance of the DGNSS technique. Such a GPS/GLONASS/BEIDOU/Galileo real-time model with ISB applied in differential positioning was proposed in [7].
Various research works focus on the sources of code biases, their effects on GNSS positioning, and their estimation. Pseudorange observations are known to be influenced by differential code biases (DCBs), which are signal- and frequency-dependent, and there is a need to consider them in the observation model [8,9]. When using a GNSS ground network, the differential code bias of a GNSS receiver can usually be estimated and corrected precisely. This depends on the characteristics of global ionosphere maps (GIMs) representing the ionospheric total electron content (TEC), which are applicable in many scientific and engineering applications. Other research confirms that the DCB can also be estimated using a recursive method together with the selection of an individual reference station [10]. Among the presented techniques, the DGNSS technique based on pseudorange corrections (PRC) has also proven itself in a number of applications, improving real-time positioning accuracy in low-cost satellite receivers. Researchers using predicted PRC demonstrated that DGPS/DBeiDou horizontal positioning errors were at the one-meter level of accuracy. Such a solution would unquestionably be very helpful for maintaining DGNSS positioning during outages of correction data [11]. The estimation and analysis of code biases is also very important in ambiguity resolution, depending on the pseudorange method; this holds for the GNSS double-differencing method as well as for undifferenced processing [12]. For this particular purpose, observable-specific signal biases (OSB) can be used for the analysis of code biases [13].
Regarding the advances of GNSS technology, it should be pointed out that there has recently been a massive boost in interest in positioning using smartphones, handheld, and low-cost GNSS receivers. Since 2016, scientists have been focusing on the usefulness of GNSS observations derived from mobile phones. The latest modern smartphones and mass-market portable mobile receivers with built-in GNSS chipsets can reach very impressive positioning quality. They use an application programming interface (API) based on predefined functions for developing custom applications that interface with the GNSS chipset. It is useful for obtaining not only pseudorange information but also carrier-phase observations. The carrier-phase observations are the key to precise positioning accuracies, based on the ability to fix their ambiguities to the correct integer values [14].
The Android N operating system was the first to allow access to raw GNSS measurements from smartphones or tablets through various APIs [15]. Scientists then started the first analyses of the main error sources of the GNSS chipsets installed in smartphones. The essential positioning error source in mobile phones is associated not with the GNSS chipset but with the internal antenna [16]; another important factor is duty cycling, enabled to lower the power consumption of smartphones [17]. Initial experiment results demonstrated that with the Nexus 9 smart device, using raw GNSS phase observations, it is possible to obtain decimeter-level accuracy in static measurements [18-21]. In 2018, the world's first dual-frequency GNSS smartphone (Xiaomi Mi 8), equipped with a Broadcom BCM47755 chip, was introduced to the market. Regarding the first positioning results with this smartphone, using the multi-constellation technique, researchers observed an average improvement of 17% compared to the single GPS approach. For absolute positioning, the best results were achieved using Galileo E5a measurements collected by the Xiaomi Mi 8 mobile phone [22]. In the case of carrier phase-based relative positioning conducted in static mode, the accuracy was also at the decimeter level (L1 float solution).
Mobile phone GNSS measurements have lately also become a topic of extensive studies on their application to more precise positioning techniques, such as Precise Point Positioning (PPP) and Real-Time Kinematics (RTK) [23,24]. The positioning accuracy of the Xiaomi Mi 8 mobile phone with a dual-frequency ionosphere-free combination PPP algorithm was analyzed, and the results showed that decimeter-level accuracy in static mode may be achieved, comparable to a geodetic receiver in single-frequency mode [25,26]. Other researchers explored the relationship between the data quality of GNSS observations and single-frequency RTK positioning accuracy based on the same Xiaomi Mi 8 smartphone [27,28]. They demonstrated that it is not feasible to get the phase ambiguities fixed. Despite this, the precision is still good and the obtained accuracies of the positioning solutions are mostly at the decimeter level. For most mobile devices, scientists observed that the phase observations do not have an integer characteristic but appear to carry random biases; most of the solutions were still of the "float" type. Accordingly, a strong multipath influence is also observed in the measurements [29-31]. Other researchers developed a dedicated tool that allows performing Network RTK (NRTK) positioning while considering a threshold for the ambiguity fixing method. They tested smartphones in a CORS network, considering both VRS and the nearest stations. Unfortunately, the results were satisfying in terms of precision, but not in terms of accuracy [32].
With the recent public access to raw GNSS observations on smartphone devices, there are many approaches for achieving accuracy even at the centimeter level, addressed to applications requiring high-accuracy measurements. Some researchers succeeded in replacing a smartphone's internal GNSS antenna with an external one and performed precise point positioning with ambiguity resolution, which led to centimeter-level accuracy [33,34]. It was noted recently that the carrier-phase observations collected by the latest smartphones do not have the integer property, but for the Huawei P30 or Xiaomi Mi 8 such an integer property can be successfully recovered by means of detrending, and obtaining centimeter-level accuracy is then possible [35-37]. Carrier-phase centimeter-level smartphone positioning requires precise information on the average phase center position and possible phase center variations [37,38].
The smartphone positioning trend has recently been expanded by an increased interest in carrier-phase ambiguity fixing and positioning with smartphones that contain a dual-frequency (L1/L5) GNSS module. Guo et al. showed that the pseudorange noise of L1/E1 smartphone observations ranges from 3 to 9 m in complex dynamic environments, while that of L5/E5 observations is about 1.5 m [39]. Recent research also focuses on evaluations in terms of signal strength, satellite tracking capabilities, and observational noise. A performance comparison of a geodetic receiver with smartphones proved that the satellite-elevation dependence of the signal strength, which holds for geodetic receivers, is not always valid for smartphones. It was also observed that mobile phone pseudoranges are much noisier than those of professional geodetic receivers, while it was confirmed that, with the observations of some selected smartphones, it is possible to fix the ambiguities to their integer values [40-42].
A wide spectrum of applications, such as pedestrian and vehicle navigation, social networking, safety management, and many others, has already been appreciated. For these applications, the absolute positioning mode using single-frequency code observations, which provides an accuracy ranging from a few meters to tens of meters (under difficult conditions), is mostly sufficient in smartphones [43,44]. This means that the DGNSS technique considered by the authors could also be implemented in smartphones, significantly increasing the positioning accuracy. Today, connectivity to the Internet, the capability of running various applications, and modern GNSS modules may improve positioning performance in smartphones by implementing DGNSS technology. Yoon et al. proposed and implemented a DGNSS-correction projection method for available commercial smartphones. The results of static and kinematic experiments showed that absolute GPS positioning accuracy could be improved by 30-60% using the proposed approach [45]. Others proposed a DGNSS solution that corrects the single GNSS position of smartphones, using corrections from a reference station. A client/server architecture was developed to serve a larger number of smartphone users. Field tests in an open environment showed that the horizontal positioning accuracy could be better than 2 m [46]. Other experiments showed that absolute positioning can be comparable to the DGNSS technique and can generally achieve an accuracy of one meter in horizontal positioning [47]. As far as vertical positioning is concerned, they demonstrated that DGNSS is largely preferable to single point positioning.
Since all of the research performed so far on DGNSS smartphone positioning is based on P(L1) code observations, the authors of this paper present how the use of P(L5) code observations significantly increases the positioning accuracy. By implementing the proposed DGNSS technique, users may obtain positioning accuracy even at the decimeter level. This would be highly satisfactory for many applications, such as intelligent transportation systems [48,49] or as a method for the recovery of the precise position of aircraft in air transport [50,51], where the DGNSS system along with extra equipment is essential for control and safety systems.
DGPS Positioning Based on the Least Squares Method
Usually, in the DGPS technique, only L1 code observations are used. Thus, we can write the following observation equation for each satellite [52]:

P(t) = ρ(t) + c[dt_r(t) − dt^s(t)] + I(t) + T(t) + O(t) + ε(t),

where P(t) is the measured pseudorange, ρ(t) is the true receiver-to-satellite geometric range, c is the speed of light, dt^s(t) is the satellite clock error, dt_r(t) is the receiver clock error, I(t) is the ionospheric delay error, T(t) is the tropospheric delay error, O(t) is the satellite ephemeris error, and ε(t) represents other pseudorange errors, such as multipath, interchannel receiver biases, thermal noise, receiver and satellite hardware delays, as well as pseudorange measurement noise.
The pseudorange correction (PRC) for satellite i at the epoch t is calculated by the equation [53]:

PRC^i(t) = ρ_S^i(t) − P_S^i(t),

where ρ_S^i(t) is the geometric range computed from the known coordinates of the reference station S and P_S^i(t) is the pseudorange measured at S. The PRC^i(t) values determined for the observed satellites are involved in the mathematical model of absolute positioning in a GNSS receiver, whose coordinates are then determined in the reference station's frame. Hence, for satellites i, j, k, l and the assigned station S, we can write the system of corrected pseudorange equations in matrix form [53]:

V = AX − L,

where A is the design matrix containing the partial derivatives of the corrected pseudoranges with respect to the receiver coordinates and the receiver clock term, X is the vector of unknowns, L is the vector of observed-minus-computed values, and V is the vector of residuals. The solution with the least squares method is:

X̂ = (AᵀWA)⁻¹ AᵀWL,

where W is a diagonal weight matrix.
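To make these two steps concrete, below is a minimal Python sketch (not the authors' software) that computes pseudorange corrections at a reference station and applies them in an iterative weighted least-squares single-epoch solution. All satellite geometry, measurements, and weights are synthetic placeholders.

```python
import numpy as np

def pseudorange_corrections(sat_pos, prange_ref, ref_xyz):
    """PRC^i = geometric range from the known reference coordinates
    minus the pseudorange measured at the reference station."""
    rho = np.linalg.norm(sat_pos - ref_xyz, axis=1)
    return rho - prange_ref

def dgps_lsq(sat_pos, prange_rov, prc, x0, weights=None, iters=5):
    """Iterative weighted least squares for the rover position and
    clock term, using PRC-corrected pseudoranges."""
    n = len(prange_rov)
    W = np.diag(weights if weights is not None else np.ones(n))
    x = np.append(x0, 0.0)                 # unknowns: [X, Y, Z, c*dt_r]
    for _ in range(iters):
        rho = np.linalg.norm(sat_pos - x[:3], axis=1)
        # design matrix A: unit line-of-sight vectors plus clock column
        A = np.hstack([(x[:3] - sat_pos) / rho[:, None], np.ones((n, 1))])
        L = (prange_rov + prc) - (rho + x[3])   # observed minus computed
        dx = np.linalg.solve(A.T @ W @ A, A.T @ W @ L)
        x += dx
    return x

# synthetic example: 5 satellites at GPS orbital radius, error-free ranges
rng = np.random.default_rng(1)
sats = rng.normal(0, 1, (5, 3))
sats = 26_560e3 * sats / np.linalg.norm(sats, axis=1, keepdims=True)
truth = np.array([3.9e6, 1.4e6, 4.9e6])          # rover position [m]
ranges = np.linalg.norm(sats - truth, axis=1)
est = dgps_lsq(sats, ranges, np.zeros(5), x0=truth + 1e4)
print(est[:3] - truth)                            # should be near zero
```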
Field Experiments
Test measurements were made on 28 January 2021 and 29 January 2021. The authors chose a period of 1500 epochs for this research. Raw GNSS observation data were collected using the Rinex ON mobile application (Nottingham Scientific Ltd., Nottingham, United Kingdom, 2020) on two Huawei P30 Pro smartphones. These mobile phones record data from the GPS, GLONASS, BEIDOU, and GALILEO positioning systems. We used phase center information which we had determined very precisely (at the millimeter level of accuracy) in previous work [37]. The smartphones were mounted vertically (at a distance of 1 m from each other) on a base made of an aluminum beam with a centrally positioned mandrel that allows mounting it on a levelling head, which may be centered over the reference point (Figure 1). The aluminum beam pointed exactly north-south. The first smartphone, called S1, was located on the north edge and S2 on the south edge. In both cases, the smartphone displays were facing south. At the central point of the aluminum beam, a geodetic GNSS Javad Alpha receiver (Javad GNSS, San Jose, CA, USA) was positioned and acted as the reference station. The aim of the experiment and analysis was to carry out measurements in the period with the largest number of GPS satellites transmitting the L5 frequency. The satellite configuration during the test measurements is shown in Figure 2. During the measurement, there were 5 satellites transmitting signals on the L5 frequency: G01, G08, G10, G27, and G32. Therefore, only these satellites were used in the research, and DGPS calculations were performed for solutions using both the P(L1) and P(L5) codes. As a result, the true errors for DGPS P(L1) and DGPS P(L5) were compared. The true errors for the horizontal and vertical coordinates are shown in Figures 3-6: in Figures 3 and 4 for 28 January 2021, and in Figures 5 and 6 for the observations of 29 January 2021. It should also be clarified that P here stands for pseudorange measurements, not for the P (precise) code.
Numerical DGPS Analysis of Single Epoch Solutions
In this chapter, DGPS calculations were performed using single measurement epochs, for both the P(L1) and the P(L5) codes. For DGPS positioning using the P(L1) code (Figure 3), the true errors of the horizontal coordinates for the S1 phone (north edge, display facing south) ranged from −5.05 m to 7.93 m and from −16.98 m to 19.46 m for the N and E coordinates, and from −39.67 m to 28.02 m for the vertical coordinate h. For DGPS positioning using the P(L5) code, the true errors of the horizontal coordinates were in the range from −10.20 m to 2.47 m and from −18.58 m to 3.37 m for the N and E coordinates, and from −7.61 m to 8.49 m for the vertical coordinate h. Thus, we can see that for the N coordinate the minimum values had larger errors for the P(L5) code than for the P(L1) code. However, this was due to a change in the number of satellites observed on the L5 frequency (which dropped from 5 to 4 satellites). It was during the period when the S1 phone was receiving only four P(L5) satellites that there was a short-term deterioration in the accuracy of DGPS P(L5) positioning relative to the P(L1) code, whereas during the P(L1) measurements, signals from five GPS satellites were available all the time. For the vertical coordinate, however, the DGPS positioning accuracy using the P(L5) code is clearly several times better than that using the P(L1) code, despite the reception of only four P(L5) code satellites. Corresponding results for the S2 phone on the same day (Figure 4) show analogous behaviour, including for the vertical coordinate h. In this case, for DGPS positioning using either the P(L1) code or the P(L5) code, the S2 phone was constantly receiving code observations from the same five satellites; this measurement result is therefore more reliable for the P(L1) vs. P(L5) positioning comparison. With the P(L5) code, the errors of the horizontal and vertical coordinates are much smaller than with DGPS positioning using the P(L1) code. It can even be seen that the maximum error values for the S2 phone using the P(L5) code are at the level of errors that we can expect when using geodetic receivers. On the second day (Figure 5), the S1 smartphone at the end of the measurement received P(L1) observations from only four satellites, which caused a particularly large degradation of the vertical position for this mobile phone. In contrast, for the P(L5) code there were no gaps, and both horizontal and vertical position errors were more stable throughout the analyzed period. As is well known, even a short-term change of configuration causes a significant deterioration of both autonomous and DGPS positioning accuracy. For the same day of measurement, in the case of the S2 smartphone for DGPS positioning (Figure 6) using the P(L1) code, the true errors of the horizontal coordinates ranged from −5.85 m to 7.91 m and from −12.66 m to 9.04 m for the N and E coordinates, and from −17.10 m to 56.34 m for the vertical coordinate h. For DGPS positioning using the P(L5) code, the true errors of the horizontal coordinates were in the range from −2.88 m to 1.86 m and from −2.66 m to 4.56 m for the N and E coordinates, and from −6.82 m to 6.17 m for the vertical coordinate h. In this case, the smartphone S2 maintained uninterrupted contact with 5 satellites during the entire measurement, both for the P(L1) and P(L5) codes. Therefore, the maximum errors are more reliable than in a case where there is a sudden change in the number of satellites observed.
For both the horizontal and vertical position, we can observe smaller errors when using the P(L5) code than when using the P(L1) code.
Additionally, RMS errors were calculated for each DGPS solution, as presented in Table 1. We can see that the average RMS errors for DGPS positioning using the P(L5) code are much smaller than for DGPS positioning using the P(L1) code. For DGPS positioning with the P(L1) code, the average RMS errors for the horizontal coordinates were below 6 m, while for the vertical coordinate they were below 11 m. In the case of DGPS positioning with the P(L5) code, the average RMS errors for the horizontal coordinates were below 1.8 m, while for the vertical coordinate they were below 2.7 m. Based on the calculated RMS errors presented in Table 1, the percentage increase in accuracy was calculated for the values of N (dN), E (dE), and h (dh). The results are presented in Table 2. In general, DGPS positioning using the P(L5) code resulted in a significant increase in DGPS positioning accuracy: on average by 59% for the E component, 71% for the N component, and 75% for the vertical h component.
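The percentage figures in Table 2 follow directly from the RMS values. The following trivial sketch shows the computation with illustrative numbers close to the averages quoted above (not the exact table entries):

```python
def accuracy_gain(rms_l1, rms_l5):
    """Percentage decrease of the RMS error when moving from P(L1) to P(L5)."""
    return 100.0 * (rms_l1 - rms_l5) / rms_l1

# illustrative average values consistent with the text, not Table 1 entries
print(round(accuracy_gain(6.0, 1.8), 1))   # horizontal: 70.0 %
print(round(accuracy_gain(11.0, 2.7), 1))  # vertical:   75.5 %
```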
Static DGPS Sequential Positioning
In the computational tests presented in Figures 3-6, individual measurement epochs were used, which, in fact, may refer to the kinematic model. If we are instead interested in static positioning, we can use sequential DGPS positioning, which means that for a given measurement epoch it is possible to perform calculations using all previous positions. Therefore, for any time t and any coordinate component x we have:

x̂(t) = (1/t) Σ_{i=1..t} x(i).

We can write the above formula in the following form:

x̂(t) = [(t − 1) x̂(t − 1) + x(t)] / t.

Then, denoting k(t) = 1/t, for any measurement epoch we can write the estimator:

x̂(t) = x̂(t − 1) + k(t) [x(t) − x̂(t − 1)].

The above formula is convenient to use for DGPS sequential static positioning, for the horizontal and vertical coordinates. The results of such calculations are presented in Figures 7 and 8.
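A minimal sketch of this recursive estimator (assuming the simple equal-weight averaging reconstructed above, applied per coordinate component):

```python
def sequential_mean(stream):
    """Recursive running mean x_hat(t) = x_hat(t-1) + (x(t) - x_hat(t-1)) / t;
    yields the sequential estimate after each epoch."""
    x_hat = None
    for t, x in enumerate(stream, start=1):
        x_hat = x if x_hat is None else x_hat + (x - x_hat) / t
        yield x_hat

epochs = [2.1, 1.7, 2.4, 1.9]          # e.g. per-epoch N-coordinate errors [m]
print(list(sequential_mean(epochs)))   # converges toward the static mean
```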
When analyzing the charts in Figures 7 and 8, which concern DGPS sequential adjustment, one can see more similar final results for the DGPS P(L1) and DGPS P(L5) positioning. The final position errors after using 1500 measurement epochs are shown in Table 3. On the first day of test measurements, the true errors for DGPS P(L5) are smaller than for DGPS P(L1), but only for the S1 smartphone; for the S2 smartphone, the errors of the horizontal coordinates of the DGPS P(L1) solution were smaller than for the DGPS P(L5) solution. On the second day of test measurements, the true errors of the horizontal positions were smaller for the S1 mobile phone with the DGPS P(L1) solution, while for the vertical coordinate, better results were obtained with the DGPS P(L5) solution. On the second day, for the S2 smartphone, horizontal errors were at a similar level for both the DGPS P(L1) and DGPS P(L5) solutions. However, it should be noted that on the second day of measurements a very large height error of 5.37 m was obtained with the DGPS P(L1) solution, even with the use of 1500 observation epochs, whereas for the P(L5) code measurements of the S2 smartphone an error of 0.26 m was obtained.
Discussion
In the calculations presented in the previous chapter, the results of the least squares adjustment for DGPS positioning were analyzed. The output coordinates are, therefore, dependent on the code observations but also on the ephemeris data. For this reason, the double differences (DD) of the code observations for the Javad-S1 vector were analyzed, adopting satellite number 08 as the reference satellite (Figure 9). In this way, four independent observation series were created, which are shown in Figure 9. The basic statistical data of the DD observations for the Javad-S1 and S1-S2 baselines are presented in Table 4. Figure 9 clearly shows a much smaller deviation amplitude for the double differences of the P(L5) observations than of the P(L1) observations. For the Javad-S1 baseline, the standard deviation of the DD observations was four times higher for P(L1) than for P(L5). It should be noted that, for the baseline S1-S2 and satellites SV 08-27, the quotient of the standard deviations was as high as 11.39, because the standard deviation for P(L1) was 13.65 m, while for P(L5) it was only 1.20 m. Table 4. Standard deviations of double-differenced P(L1) and P(L5) code measurements for the baselines Javad-S1 and S1-S2.
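For illustration, the sketch below shows how such between-receiver, between-satellite double differences are formed from raw code observations; the input arrays are synthetic stand-ins, not the experiment's data.

```python
import numpy as np

def double_differences(p_a, p_b, ref_sv=0):
    """Between-receiver, between-satellite double differences of code
    observations: DD^ij = (P_A^i - P_B^i) - (P_A^j - P_B^j),
    with satellite index ref_sv as the reference (here SV 08)."""
    sd = p_a - p_b                                  # single differences
    return np.delete(sd - sd[ref_sv], ref_sv, axis=0)

# synthetic series: rows = 5 satellites, columns = 1500 epochs
rng = np.random.default_rng(7)
p_javad = rng.normal(2.2e7, 1e3, (5, 1500))         # reference receiver [m]
p_s1 = p_javad + rng.normal(0, 1.2, (5, 1500))      # smartphone code noise [m]
dd = double_differences(p_javad, p_s1)
print(dd.std(axis=1))   # per-pair standard deviations, cf. Table 4
```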
For both baselines, Javad-S1 and S1-S2, the average standard deviation of the P(L5) observations was quite similar: 1.07 m and 1.36 m, respectively. Therefore, assuming the accuracy of a DD observation with the use of the P(L5) code at the level of about 1.4 m, the accuracy of the position determined from such measurements can be simulated using a simple dependence from statistics [54]. The chart of possible average error values depending on the standard deviation and the number n of satellites is presented in Figure 10. Based on the chart in Figure 10, it can be assumed that with 9 satellites, the accuracy of GNSS Huawei P30 Pro smartphone positioning with the P(L5) code using permanent reference stations can be obtained for individual measurement epochs at the level of 0.6 m. When at least two GNSS systems (e.g., GPS + GALILEO) are used, an accuracy of 0.4 m with 16 satellites may be obtained, and an accuracy of 0.3 m can be achieved with 26 satellites. Therefore, the use of three navigation systems simultaneously may allow navigation at the 0.3 m level using smartphones with P(L5) code observations.
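The exact statistical dependence was not preserved in this copy of the text; one plausible reading, consistent with the quoted numbers, is the standard-error relation m ≈ σ/√n. A short sketch under that assumption:

```python
import numpy as np

sigma_dd = 1.4   # assumed accuracy of a P(L5) DD code observation [m]
for n in (9, 16, 26):
    print(n, "satellites ->", round(sigma_dd / np.sqrt(n), 2), "m")
# 9 -> 0.47 m, 16 -> 0.35 m, 26 -> 0.27 m: close to (though slightly below)
# the 0.6 / 0.4 / 0.3 m levels quoted from Figure 10
```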
Summary and Conclusions
The paper presented an analysis of DGPS positioning accuracy using the P(L1) code and the P(L5) code, with the use of two Huawei P30 Pro smartphones and one reference station represented by the geodetic GNSS Javad Alpha receiver. We are aware that the use of more distant reference stations would be more realistic in practice, with predictable consequences for the positioning accuracy. The aim of our experiment was to compare the coordinate accuracy of relative P(L1) vs. P(L5) smartphone DGPS positioning against a very closely located reference station (the Javad receiver) in order to achieve identical observation conditions.
The analyses were carried out with the use of the same satellites, both in the DGPS P(L1) and DGPS P(L5) solutions, by analyzing the true errors and the average RMS errors. Additionally, an analysis of the standard deviations of the double differences of the P(L1) and P(L5) code observations was presented. The research showed that the true errors of DGPS positioning for P(L5) were much smaller than for DGPS positioning using the P(L1) code. The average RMS errors for the DGPS P(L5) solution were more than 50% lower than for the DGPS P(L1) solution, both for the horizontal and vertical coordinates. Similarly, the standard deviations of the double differences for the P(L5) observations were several times lower than for the P(L1) observations. Finally, DGPS positioning accuracy was simulated using the P(L5) code from mobile phones, with the average standard deviation of the DGPS P(L5) solution used for the simulation. The simulations showed that, in the case of the P(L5) code and using the DGPS method, it is possible to obtain an accuracy of 0.4 m for about 16 available satellites.
"Computer Science"
] |
Hyphaene thebaica (Arecaceae) as a Promising Functional Food: Extraction, Analytical Techniques, Bioactivity, Food, and Industrial Applications
Hyphaene thebaica, also known as doum, is a wild plant growing in Egypt, Sudan, and other African countries. It is usually used to prepare nutritive diets, tasty beverages, and other food products. This review aimed to highlight the phytochemical composition of the doum plant using NMR, GC–MS, HPLC, and UPLC/Qtof/MS. The reported active constituents are also described, with flavonoids, phenolic acids, and saponins being the most dominant components. Extraction methods, both conventional and non-conventional, and their existing parameters were summarized. The in vitro and in vivo studies on the extracts and active constituents were also reported. We focused on different applications of doum in functional food products, animal feeding systems, and pharmaceutical applications. Doum is considered a promising dietary and therapeutic candidate to be applied on a wider scale. Proteomic analysis of doum and clinical assessment are still lacking and warrant further investigations in the future.
Introduction
Foods that possess additional physiological effects beyond their basic nutritional function of providing nutrients are called functional foods. The concept of functional food has received scientific attention over the last four decades. Functional food can provide nutraceuticals with therapeutic value capable of protecting against infectious and chronic disease (Rivera et al. 2010; Kumar et al. 2021a). The use of plant-based dietary components or functional foods has been promoted globally because of their therapeutic benefits and nutritive properties (Kumar et al. 2021b; Prakash et al. 2021). In the current study, we focused on an interesting nutritive plant, Hyphaene thebaica (L.), native to Egypt and North Sudan, as a potential functional food. This plant is considered sacred by the Egyptian and other African civilizations for its nutritive and therapeutic potential. H. thebaica is commonly known as doum and belongs to the Arecaceae family. It is native to the northern half of Africa and grows in Senegal, Mauritania, Tanzania, Sudan, and the Arabian Peninsula (Sinai, Yemen, Saudi Arabia, and Palestine). The fruit of H. thebaica is edible and oval-shaped. It contains proteins, sugars, fats, calcium, phosphorus, and a high level of iron. The fruits contain a wide array of phytochemical compounds including hydroxycinnamates, flavonoids, essential oils, and saponins. The plant is also rich in niacin, amino acids, thiamin, and riboflavin (El-Beltagi et al. 2018). It has many industrial applications, for example as a stabilizer, a filling powder, a basis for fibers and nectars, and a flavoring agent. Bioactive compounds of H. thebaica have demonstrated many biological activities including antimicrobial, anticancer, antihyperlipidemic, antioxidant, anti-inflammatory, and antidiabetic effects (Hsu et al. 2006; Abdallah 2021). Recent studies of H. thebaica have focused on the fruit (doum fruit) because of its medicinal properties and high nutritional value (El-Beltagi et al. 2018; Gibril et al. 2020; Aamer 2016; Salib et al. 2013; Aboshora et al. 2014a). Although in recent years numerous bioactive compounds and plant products have been examined for their role as functional foods with potential therapeutic uses and nutritional health benefits, few of them have been subjected to thorough clinical investigation. Only a small number of bioactive compounds and plant foods have met the standard required by the FDA's significant scientific agreement for the authorization of a health claim (Hasler 2002). Despite the wide use of doum in African countries, there are no detailed reviews on its constituents, biological activities, and potential applications. This study focused on the extraction techniques and the food and industrial applications of H. thebaica. We discuss the reported studies on the functionality of H. thebaica as a potential functional food, research on the bioactivity of the plant extracts, and applications in the food industry. We also provide recommendations for the future production and consumption of doum and its products.
Agricultural and Harvesting Aspects (Drying and Storage Effect)
An analysis of the bibliographic data related to the collection, drying, and storage of H. thebaica revealed interesting results. We found only one study, carried out by Ewansiha et al. (2021), that presented all aspects from harvesting to the final preparation of the sample, including the washing and drying stages. H. thebaica grows naturally in the southwestern part of Egypt (Taha et al. 2020) and in the Wina region in Cameroon, where it was harvested in the period between November 2019 and February 2020 (Kolla et al. 2021). Crops of this species were also reported on the agricultural lands of Jambutu Yola (Ewansiha et al. 2021) and Kayauki, in the respective Nigerian states of Adamawa and Katsina (Salisu and Saleh 2019). It also grows in the Tayba garden in the city of Elgazira Aba in Sudan (Aboshora et al. 2017), and in the city of Wudil, Kano State, Nigeria (Salihu et al. 2019).
To the best of our knowledge, no bibliographic data mention the agricultural practices for this species in the regions listed above. The majority of studies on H. thebaica focused on the fruit, except for the work of Taha et al. (2020), who studied the phytochemical content of the leaves and fruits, and of Salisu and Saleh (2019), who studied the phytochemical content of the stem.
Regarding the collection method, no bibliographic data describe this procedure. The plant washing step was optional in most of the studies; only one study cited washing the fruit with 70% ethanol, while another mentioned washing the fresh stems with distilled water.
Drying of the plant material is one of the most important steps in the preparation of the extracts. In this step, the plant matrix is dehydrated to eliminate any risk of microbial development, thus extending the storage period of the plant material.
Three drying methods were used in the literature: sun drying (Aboshora et al. 2014b), drying in the dark at room temperature (Aboshora et al. 2017; Bello et al. 2017), and oven drying at 45 °C (Taha et al. 2020) or 50 °C (Gibril et al. 2020). The plant material was either crushed and directly extracted (Ewansiha et al. 2021; Shehu et al. 2017), or crushed and stored for future use (Bello et al. 2017; Taha et al. 2020; Gibril et al. 2020; Kolla et al. 2021; Aboshora et al. 2014a). The best methods of preserving plant material were storage in a glass jar or in a plastic bag. These methods allow the samples to be kept hermetically sealed, reducing structural changes caused by oxidation phenomena (Méndez and Falqué 2007). Another study, published by Kolla et al. (2021), mentioned the use of cardboard boxes for the preservation of H. thebaica fruits. A collective list of the different culturing and drying methods is shown in Table 1.

Extraction Methods

Conventional extraction methods require a long extraction time and a large volume of organic solvents (Rasul 2018). Modern extraction methods, also known as green or non-conventional methods, were developed to increase efficiency and achieve higher yields. These methods include ultrasonic extraction, supercritical fluid extraction, enzyme-assisted extraction, and microwave-assisted extraction. They require little organic solvent and short extraction times, and they offer high selectivity (Kumar et al. 2020; Singh et al. 2020). For the extraction of bioactive compounds from H. thebaica, both conventional and non-conventional methods were adopted.
Hot Water Extraction
The hot water extraction of 10 g of dry fruit powder of H. thebaica was carried out under optimized conditions. The extraction time was 30 min with constant stirring and infusion with deionized boiling water (600 mL). The yield of H. thebaica extract obtained under these optimized conditions was 29.3 g (23.9% w/w) (Hsu et al. 2006).
In another study, aqueous extraction of H. thebaica fruit was carried out using two different methods. In the first method, crushed fruit was soaked in water with a sample-to-solvent ratio of 1:5 w/v, extraction times of 4, 8, and 12 h, and a temperature of 22 ± 2 °C. In the second method, crushed fruit powder was mixed with boiling water at a sample-to-solvent ratio of 1:5 w/v, extraction times of 5, 10, and 15 min, and a temperature of 100 °C. In the first method (12 h), the yield of total phenolic compounds was 35.98 ± 0.23 mg/100 g and that of total flavonoid compounds was 3.60 ± 0.06 mg/100 g, which was higher than the extraction yield of the second method (5 min), with a yield of total phenolic compounds of 23.47 ± 0.25 mg/100 g and flavonoids of 3.18 ± 0.03 mg/100 g (Aamer 2016).
Solvent Extraction
In another study, the extraction of H. thebaica dry fruits was carried out under optimized conditions with two different solvents, ethanol (70% v/v) and methanol (80% v/v), in a shaking water bath (70 rpm), with an extraction time of 30 min, an extraction temperature of 70 °C for the ethanol extract and 60 °C for the methanol extract, and a sample-to-solvent ratio of 1:10. The yields of the phenolic compounds in the ethanol and methanol extracts were 116.26 ± 0.43 mg/g and 132.51 ± 0.51 mg/g, respectively. The yields of the flavonoid compounds in the ethanol and methanol extracts were 24.04 ± 0.17 mg/g and 41.55 ± 0.17 mg/g under the optimized conditions (Aboshora et al. 2014b).
Maceration Extraction
In a recent study, the maceration of H. thebaica fruit powder (100 g) was carried out under optimized conditions (Gibril et al. 2020).
In another study, the maceration of H. thebaica epicarp (500 g) was carried out. The extraction of the epicarp powder was performed under optimized conditions at room temperature in a Soxhlet extractor, with acetone as the solvent. The yield of H. thebaica epicarp extract was 150 g (Salib et al. 2013).
Ultrasonic Extraction
The extraction of H. thebaica dry fruits was also performed under optimized conditions with two different solvents, ethanol (70% v/v) and methanol (80% v/v), in an ultrasonic bath (220 V and 50 Hz); the extraction time was 30 min, and the extraction temperature was 70 °C for the ethanol extract and 60 °C for the methanol extract. The sample-to-solvent ratio was 1:10. The yields of the phenolic compounds in the ethanol and methanol extracts were 123.36 ± 1.48 mg/g and 139.48 ± 1.18 mg/g, respectively. The yields of the flavonoid compounds in the ethanol and methanol extracts were 28.62 ± 0.12 mg/g and 47.17 ± 0.17 mg/g under the optimized conditions (Aboshora et al. 2014b).
In ultrasonic extraction, high yields of phenolic components (139.48 ± 1.18 mg/g) and flavonoid components (47.17 ± 0.17 mg/g) were reported. Conventional methods were more time-consuming compared to non-conventional methods. A compiled list of the extraction methods used for H. thebaica is presented in Table 2.
Yield Optimization of Bioactive or Nutritive Compounds
The fruit of H. thebaica is consumed widely in some African countries, and even its waste has applications. In a recent study, the seed powder of H. thebaica was analyzed, showing a moisture content of 10.0 wt% (weight %), volatile matter of 85.31 wt%, and an ash content of 1.3 wt%. High-performance liquid chromatography (HPLC) results indicated that 70 wt% of the seed sugar content was mannose. Mannan, a storage polysaccharide with β(1-4) linkages present in the primary cell wall of plants, was extracted from 5 g of H. thebaica seed powder under the following optimized conditions: alkali solution (0.25 N), sample-to-solvent ratio 1:30, extraction time 90 min, and temperature 90 °C. The yield of mannan under the optimized conditions was 13.07 wt%, suggesting that the seed of H. thebaica is a rich and low-cost source for mannan extraction (Gibril et al. 2020).
In another study, microwave-assisted mercerization of fibers obtained from the stalks of H. thebaica was carried out. The obtained fibers were used as low-cost biosorbents to treat wastewater containing Pb2+ and Cu2+ ions. Microwave-assisted mercerization of the H. thebaica fiber was carried out under the following optimized conditions: extraction time 20 min, microwave power 700 W, frequency 2450 MHz, and 5% w/v sodium hydroxide solution as the solvent. This pretreatment increased the hydrophilicity of the fibers and removed wax, lignin, and oil (Salisu and Saleh 2019).
GC/MS
The essential oil of doum fruit was obtained by hydrodistillation using a modified Karlsruher apparatus. The hydrodistilled oil was then analyzed by GC/MS adopting linear temperature programming (80 °C to 270 °C at 10 °C/min). This resulted in the identification of 57 constituents. Diterpenes represented the predominant class, amounting to 40.49% of the components, with incensole (17.52%) and incensole acetate (19.81%) as the main components, in addition to cembrene A and cembrene C. Monoterpenoids were detected constituting 15.97%, mainly limonene, β-pinene, terpinene-4-ol, and sabinene (2.42, 1.98, 1.77, and 0.82%, respectively). Interestingly, the oxygenated compounds reached 66.78%, reflecting an economically important value of doum oil. The authors attributed the scent of doum fruits to their richness in volatile diterpenes (Ayoub et al. 2011). In another study, GC-MS analysis was performed to detect the primary metabolites of doum fruits after sample derivatization. This allowed the identification of 26 compounds including mono- and disaccharides and organic and amino acids. Sucrose showed a high abundance, accounting for 45-58% of the total ion chromatogram (Farag and Paré 2013).
HPLC and UPLC
Phenolic compounds from the fruit bulbs of H. thebaica collected from Sudan were analyzed and quantified for flavonoids and phenolic acids (Salih and Yahia 2015). Sixteen compounds were detected, with methoxycinnamic and sinapic acids, catechin, and chlorogenic acid as the major compounds; they amounted to 2219.4, 1367.6, 584.6, and 572 mg/kg DW, respectively. Farag and Paré (2013) analyzed the aqueous and organic extracts of doum via UPLC-PDA-TOF to profile their phenolic and lipid compounds.
The results revealed the presence of 17 compounds, including cinnamates, flavonoids (mainly O-glycosides), fatty acids, sphingolipids, an unknown stilbene, and isolariciresinol glycoside. Chlorogenic acid and O-caffeoyl shikimic acid constituted the major phenolics, represented by 551.4 and 421.1 µg/g fruit dry weight, respectively. Aamer (2016) identified 19 phenolic compounds in doum fruit aqueous extracts by RP-HPLC and also studied the effect of the extraction parameters (time and temperature) on the concentrations of these compounds. They found 3-hydroxytyrosol, vanillic acid, catechin, and chlorogenic acid to be the most enriched metabolites. Their concentrations were related to the time of soaking at ambient temperature, while increasing the boiling time resulted in lower concentrations of the phenolics.
Recently, Taha et al. (2020) investigated the metabolite profiles of different organs of doum, where 14 compounds were detected against co-injected standards. Interestingly, the fruits exhibited the highest content of chlorogenic acid (0.152 mg/g), in line with previous reports of it as a major doum metabolite. Apigenin-7-glucoside was also detected in doum fruit at up to 0.169 mg/g. Apigenin-7-glucoside and rutin were the predominant phenolics detected in doum leaves (5.428 and 2.695 mg/g, respectively), while the male parts were mostly rich in vanillic, rosmarinic, and protocatechuic acids (1.081, 0.544, and 0.463 mg/g, respectively).
NMR Analysis
1H-NMR was utilized to identify and further quantify the metabolites of doum fruit. Three classes of compounds were observed in the amino acid and organic acid regions in addition to the sugar region. Sucrose was detected as the most abundant metabolite, constituting 219 mg/g (Farag and Paré 2013). The identification of the previously isolated compounds from doum was confirmed mainly using NMR, and these compounds will be discussed in "Isolated Phytochemical Compounds from H. thebaica (doum Palm)". Three aglycones, luteolin (8), kaempferol (12), and chrysoeriol (18), were also obtained. The structures of the phytochemicals identified from doum are represented in Fig. 1.
Recent In Vitro and In Vivo Studies on H. thebaica (Doum Palm)
Different extracts of H. thebaica were evaluated in a panel of in vitro and in vivo assays as illustrated in Fig. 2 and Tables 3 and 4.
Antioxidant Activity
The hot water extract of H. thebaica fruits acted as a potent source of antioxidants. It showed a hydrogen-donating activity of 2.85 mmol ascorbic acid equivalent, an Fe2+-chelating activity of 1.78 mmol ethylenediaminetetraacetic acid equivalent, a hydroxyl radical-scavenging activity of 192 mmol gallic acid equivalent, inhibition of substrate site-specific hydroxyl radical formation of 3.36 mmol gallic acid equivalent, a superoxide radical-scavenging activity of 1.78 mmol gallic acid equivalent, and a reducing power of 3.93 mmol ascorbic acid equivalent (Hsu et al. 2006). Faten (2009) reported the antioxidant activity of the H. thebaica fruit extract using the DPPH assay. It showed an IC50 value of 1000 µg/mL and 80% inhibition at a concentration of 1500 µg/mL, as compared to the quercetin standard, which showed 69% inhibition at a concentration of 1000 µg/mL.
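IC50 values such as those reported throughout this section are typically read off a dose-response curve. A minimal sketch of that interpolation, with hypothetical data points that are not measurements from any of the cited studies:

```python
import numpy as np

def ic50(conc, inhibition):
    """Linearly interpolate the concentration giving 50 % inhibition
    from a measured dose-response series (conc in ug/mL)."""
    return float(np.interp(50.0, inhibition, conc))

# hypothetical dose-response data for a DPPH assay
conc = [125, 250, 500, 1000, 1500]   # extract concentration [ug/mL]
inh = [12, 24, 38, 52, 80]           # % DPPH inhibition (must be increasing)
print(round(ic50(conc, inh)))        # ~929 ug/mL for these made-up values
```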
Regarding the 70% ethanol extract of the leaves, it showed inhibition of the reactive oxygen species (ROS) attack on salicylic acid with an IC50 value of 1602 µg/mL in a xanthine/hypoxanthine oxidase assay. The antioxidant potential could be attributed to the major phenolic compounds, namely gallic acid, quercetin glucoside, kaempferol rhamnoglucoside, and dimethoxyquercetin rhamnoglucoside, identified by HPLC-ESI analysis (Eldahshan et al. 2009).
The methanol extract of the bark showed 90.7% inhibition in the DPPH free radical-scavenging assay at a concentration of 100 µg/mL, and the recorded IC50 value was 44 ± 1.5 µg/mL (Fayad et al. 2015).
Atito et al. (2019) reported a significant difference in the DPPH-scavenging activity of the 80% methanol extracts of the endocarp, mesocarp, and coat samples of H. thebaica fruits, with IC50 values of 53.76, > 500, and 137.89 µg/mL, respectively. The results revealed that the endocarp showed the most potent radical-scavenging activity. Another study tested the DPPH-scavenging activity of the aqueous H. thebaica extract, and the IC50 was 771 µg/mL (Abd-ELmageed et al. 2019).
A recent study revealed that the 80% methanol extracts of the leaves and male parts of the doum palm showed more potent DPPH-scavenging activity, with IC50 values of 62.6 and 66.2 µg/mL, respectively, compared with the fruits, with an IC50 value of 76.8 µg/mL. The antioxidant activity could be correlated with the total antioxidant capacity (TAC): the male parts showed the highest TAC (308.8 mg ascorbic acid equivalent/g extract), while the leaves and fruits showed lower values (123.8 and 127.9 mg ascorbic acid equivalent/g extract, respectively) (Taha et al. 2020).
Anti-inflammatory Activity
The anti-inflammatory activity of the chloroform extract of H. thebaica seeds was evaluated using an atropinized rat fundus strip with inflammation induced by kidney homogenate. The results revealed that the chloroform extract at a dose of 5 ng/mL showed significant inhibition of kidney muscle stimulation, by 60%, as compared with the reference drug indomethacin (5 µg/mL), which showed complete inhibition of kidney muscle stimulation (Eltayeb et al. 2009).
Also, the 80% methanol extract of H. thebaica fruits showed anti-inflammatory potential through cyclooxygenase-1 enzyme inhibition. The extract demonstrated inhibition of COX-1 with an IC50 value of 20.9 ± 3.2 mg/mL, in comparison with the reference COX-1 inhibitor (SC-560), which showed an IC50 value of 50 ± 6.7 µM. Its anti-inflammatory activity is likely mediated by the flavonoid conjugates, oxygenated fatty acids, and sphingolipids found in the fruits (Farag and Paré 2013).
Antimicrobial Activity
The methanolic and aqueous extracts of H. thebaica showed potent inhibitory effects against Gram-positive (S. aureus and B. subtilis) and Gram-negative bacteria (P. aeruginosa and S. typhi). However, only slight inhibition was observed against L. monocytogenes, and no inhibitory effect was observed against E. coli. The methanolic extract of H. thebaica showed more potent antifungal (A. niger) and anti-yeast (C. albicans) activities than the aqueous extract (Mohamed et al. 2010).
The aqueous extract obtained from the pericarp showed MIC values of 25 mg/mL against Staphylococcus aureus, Streptococcus pyogenes, and Salmonella typhi. The MIC value for E. coli and Shigella dysenteriae was 50 mg/mL. It also showed minimum bactericidal concentration (MBC) values of 50 mg/mL (Auwal et al. 2013).
The n-hexane extract of the fruits showed antibacterial activity with diameter zones of inhibition (DZI) ranging from 15.10 ± 0.51 to 2.0 ± 0.55 mm towards K. pneumoniae, 10.20 ± 0.57 to 2.00 ± 0.35 mm towards P. aeruginosa, and 8.00 ± 0.35 to 1.00 ± 0.55 mm towards S. typhi, while the DZI for the aqueous extract ranged from 7.10 ± 0.23 to 2.0 ± 0.35 mm towards K. pneumoniae, 6.20 ± 0.31 to 2.00 ± 0.35 mm towards S. typhi, and 5.42 ± 0.55 to 2.05 ± 0.75 mm towards P. aeruginosa, using the agar well diffusion method. The MIC and MBC values of the extracts were 100 mg/mL and 200 mg/mL, respectively (Ewansiha et al. 2021). The aqueous extract of the fruits showed high antimicrobial activity towards S. aureus, with the highest DZI of 20.33 mm, followed by E. coli with a DZI of 16.00 mm (Abd-ELmageed et al. 2019). Atito et al. (2019) reported that the 80% methanol extract of H. thebaica endocarp showed the most potent activity against two strains of bacteria, S. pneumoniae and B. subtilis. Taha et al. (2020) reported the antimicrobial activity of the 80% methanol extracts (500 µg) of different parts of the doum palm using the disc diffusion assay. The results revealed that the leaf extract exhibited the highest impact, followed by the fruit and male part extracts, while only the fruit extract showed antifungal activity towards Candida albicans, with an inhibition zone of 9.0 ± 0.0 mm. The presence of secondary metabolites such as anthocyanins, saponins, phenolics, flavonoids, and tannins, which are well-known antibacterial agents against a wide range of Gram-positive and Gram-negative bacteria, could explain the antimicrobial potential of the different parts of the doum palm (Taha et al. 2020). The inhibition of proteases and/or inactivation of microbial adhesins may contribute to the mechanism of polyphenol toxicity towards microbes (Cowan 1999).
The antimicrobial activity of the 80% methanol extract of the fruit pulp at 500 mg/mL was evaluated using the agar-well diffusion technique. The results revealed good antimicrobial activity against Staphylococcus aureus and Pseudomonas aeruginosa, with inhibition zones of 16.0 ± 1.0 mm and 18.5 ± 0.5 mm, respectively. On the other hand, weak antibacterial activity was recorded against Bacillus cereus and Escherichia coli, with inhibition zones of 9.0 ± 0.0 mm and 7.5 ± 0.5 mm, respectively. The minimum inhibitory concentrations (MIC) of the methanolic extract of the fruit were 62.5 mg/mL and 125 mg/mL for S. aureus and P. aeruginosa, respectively (Abdallah 2021).
Cytotoxic Activity
Faten (2009) reported the cytotoxic activities of the H. thebaica fruit extract. The extract showed antiproliferative activity against acute myeloid leukemia (AML) cells with an IC50 value of 3 µg/mL. Fayad et al. (2015) reported that the methanol extract of the bark showed cytotoxicity against A549 (lung carcinoma) and MCF-7 (breast cancer) cell lines with IC50 values of 32 ± 0.9 and 38 ± 1.2 µg/mL, respectively, and at a concentration of 100 µg/mL it showed inhibition of 87% and 89%, respectively (Fayad et al. 2015).
The antiproliferative activity of the 80% methanol extract of the leaves, male parts, and fruits was evaluated using the sulforhodamine B (SRB) colorimetric assay towards human hepatocellular carcinoma (HepG-2) and lung carcinoma (A549) cell lines. The leaves, male parts, and fruits showed cytotoxic activity against HepG-2 cells with IC50 values of 3.08, 1.14, and 3.07 µg/mL, respectively, while their cytotoxic activity against A549 cells was more potent, with IC50 values of 2.07, 1.15, and 2.76 µg/mL, respectively. The results indicated that the male part extract exhibited the most potent antiproliferative activity against both HepG-2 and A549 cell lines (Taha et al. 2020).
Antidiabetic Activity
In the digestive system, α-amylase is a key enzyme that catalyzes the hydrolysis of starch into simple monosaccharides, which are further degraded by α-glucosidases to produce glucose for intestinal absorption, which in turn increases blood glucose levels (Van de Laar et al. 2005; Zhu et al. 2020). Inhibiting the function of these enzymes can decrease post-prandial blood glucose levels, since only monosaccharides can be absorbed through the intestinal mucosa, thus decreasing the demand for insulin and consequently reducing hyperglycemia in patients with type-2 diabetes (Teng and Chen 2017). Shady et al. (2021) reported that the aqueous extracts of H. thebaica fruits are rich in flavonoids including myricetin (19), luteolin (8), and apigenin (5). The in vitro insulin secretion assay revealed that these major bioactive flavonoids (19, 8, 5) were able to promote insulin release by human pancreatic cells by 20.9 ± 1.3, 13.74 ± 1.8, and 11.33 ± 1.1 ng/mL, respectively.
The 80% methanol extracts of the endocarp, mesocarp, and coat of H. thebaica fruits showed different α-amylase inhibitory activities, with IC50 values of 87.06, 81.20, and 81.83 µg/mL, respectively, where the mesocarp showed the most potent activity followed by the coat and endocarp (Atito et al. 2019). Also, a recent report by Khallaf et al. (2022) investigated various fractions obtained with different solvents from the total ethanolic extract of H. thebaica for inhibitory activity against α-glucosidase, a key enzyme in carbohydrate metabolism. They found that the dichloromethane fraction exerted potent inhibition with an IC50 of 52.40 µg/mL. The subsequent fractions from the dichloromethane fraction showed powerful inhibition as compared with acarbose (IC50 3.79-5.13 µg/mL versus 2.33 µg/mL). However, in another study, El-Manawaty and Gohar (2018) tested the inhibitory effect of the methanol extract of H. thebaica flowers on the enzymatic activity of α-glucosidase and showed that the extract exhibited very low inhibitory activity, with 2% inhibition of α-glucosidase at the tested concentration of 25 ppm (El-Manawaty and Gohar 2018). Thus, further investigations were carried out to demonstrate the potential antidiabetic effect of H. thebaica using in vivo models (Table 3).
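The IC50 values quoted above are typically obtained by fitting a dose-response curve to percent-inhibition measurements. As an illustration only, the following is a minimal Python sketch of such a fit using a four-parameter logistic (Hill) model; the data points and parameter guesses are purely hypothetical and are not taken from the studies cited above.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, slope):
    """Four-parameter logistic (Hill) dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ic50 / conc) ** slope)

# Hypothetical percent-inhibition data for an extract (concentration in µg/mL)
conc = np.array([10.0, 25.0, 50.0, 75.0, 100.0, 150.0, 200.0])
inhibition = np.array([8.0, 21.0, 37.0, 48.0, 58.0, 72.0, 81.0])

# Initial guesses: 0-100% response range, IC50 near the mid-range concentration
p0 = [0.0, 100.0, 80.0, 1.0]
params, _ = curve_fit(hill, conc, inhibition, p0=p0, maxfev=10000)
bottom, top, ic50, slope = params
print(f"Fitted IC50 ≈ {ic50:.1f} µg/mL (Hill slope {slope:.2f})")
```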
Antidiabetic Activity
The phytochemical investigation of the water-soluble fraction of H. thebaica fruits showed its richness in flavonoids. One of the isolated flavonoids, identified as a new natural flavonoid, was chrysoeriol 7-O-β-D-galactopyranosyl-(1→2)-α-L-arabinofuranoside (15). Both the water-soluble fraction of H. thebaica and compound (15), at a dose of 20 mg/kg b.wt., showed antidiabetic activity in alloxan-induced diabetic rats. The results revealed an improvement in glucose, insulin tolerance, and kidney function as well as a significant reduction in blood glycosylated hemoglobin levels. Also, compound (15) significantly reduced AST and ALT levels in the liver (Salib et al. 2013).
An in vivo study in the STZ-induced diabetic model showed that the aqueous extract of the fruits at a dose of 1 g/kg b.wt. increased the blood glucose level and improved the levels of different biomarkers, including serum total lipids, cholesterol, triglycerides, LDL, and HDL. The activities of the serum enzymes ASAT, ALAT, ALP, GGT, and LDH were also ameliorated (Tohamy et al. 2013). Abd El-Moniem et al. (2015) reported the recovery ability of the fruit extract on streptozotocin (STZ)-induced diabetic nephropathy at a dose of 150 mg/kg b.wt. The biochemical results revealed that the levels of blood glucose, urea, and creatinine were significantly decreased, while insulin and C-peptide levels significantly increased; cystatin C and neutrophil gelatinase-associated lipocalin also decreased. Regarding the histopathological observations, collagen fiber deposition was elevated, associated with apparent thickening of the parietal layer of Bowman's capsules and the basal lamina of the convoluted tubules. Different extracts of doum palm fruits were also investigated for their antidiabetic activity in the STZ-induced diabetic model at a dose of 60 mg/kg. The aqueous extracts considerably lowered high blood glucose and enhanced the relative expression of insulin. In addition, a significant reduction of the inflammatory mediators (TNF-α and TGF-β) was observed. In the histopathological examination, the aqueous extract significantly reverted the β-cell necrosis generated by STZ. The observed antidiabetic activity of the aqueous extracts was attributed to the presence of the flavonoids apigenin (5), luteolin (8), chrysoeriol (18), and myricetin (19) as major bioactive compounds. Molecular docking results revealed that they targeted the SUR1 binding site, attaining binding energies ranging from −9.9 to −10.3 kcal/mol (Shady et al. 2021).
Anti-inflammatory Activity
An in vivo anti-inflammatory study on different fractions from different parts of H. thebaica at a dose of 200 mg/kg b.wt. revealed that the chloroform and ethanol extracts of the seeds showed the most potent anti-inflammatory activity, of 27% and 22%, respectively, as compared with the standard drug aspirin at a dose of 100 mg/kg b.wt. (Eltayeb et al. 2009).
Miscellaneous Activities
The protective effect of H. thebaica fruit extract (500 and 1000 mg/kg b.wt.) on liver/kidney functions was evaluated in mercuric chloride-induced hepatotoxicity in rats. Pre-treatment with H. thebaica extract elevated the hepatic antioxidant system (GSH-Px, GST, and CAT). A reduction in ALT and AST levels and a decrease in proinflammatory cytokines (TNF-α and IL-1β) and hepatic MDA levels were detected as compared with the healthy control group (Shehata and Abd El-Ghffar 2017).
The water extract of H. thebaica fruits at doses of 20 and 40 mg/kg b.wt. exhibited a significant reduction in total cholesterol, triglyceride, and LDL concentrations as compared with atorvastatin as the positive control (40 mg/kg b.wt.), while the level of HDL was significantly increased. The histopathological examination of the liver revealed an improvement in the tissue alterations caused by the high cholesterol level when compared with the positive control group. A significant reduction in liver weight was also observed at the two doses (20 and 40 mg/kg), by 14.79% and 7.30%, respectively (Alharbi and Sindi 2020). Another report revealed that different fractions of H. thebaica fruits (aqueous, 80% methanol, ethyl acetate, chloroform) exhibited hypocholesterolemic effects in vivo at different doses (1.8, 3.5, 4, 2.5, 0.5, 7 and 3 g/kg b.wt.). All fractions significantly reduced total cholesterol and non-HDL cholesterol levels and improved the lipid profile after 2 weeks of treatment (Hetta and Yassin 2006). Salihu et al. (2019) reported the in vivo effect of the alkaloid-rich fraction of H. thebaica fruits at doses of 100 and 250 mg/kg b.wt. in high-fat-fed obese Wistar rats as compared with atorvastatin. The alkaloid fraction significantly increased catalase activity and glutathione peroxidase and superoxide dismutase levels, while MDA (malondialdehyde) and AChE (acetylcholinesterase) levels were significantly decreased. Also, the levels of total cholesterol, triglyceride, and LDL cholesterol were decreased and the level of HDL was increased.
The 70% ethanol extract of the fruits (1.0 g/kg b.wt.) was evaluated in vivo for its neuroprotective effect against induced Alzheimer's disease (AD). The biochemical analyses revealed that the level of reduced glutathione (GSH) was increased by 39.49%, while lipid peroxidation (MDA) and AChE levels were significantly decreased as compared with the AD-induced group, showing improvements of 66.98% and 58%, respectively. Moreover, the improvement in the total cholesterol level reached 96.59%. On the genetic level, the increase in the 8-hydroxy-2-deoxyguanosine/2-deoxyguanosine (8-OHdG/2-dG) ratio in the AD-induced group was partially improved by the H. thebaica extract. AD induction in rats significantly elevated the expression of the amyloid precursor protein (APP) gene in brain tissue, which was improved by using the H. thebaica extract (Farrag et al. 2020).
The aqueous and ethanolic extracts of H. thebaica (200 mg/kg) were investigated for hypoglycemic, hypolipidemic, and antioxidant activities in albino rats. The aqueous and ethanolic extracts of H. thebaica decreased blood glucose levels (from 382.4 to 145.2-157.4 mg/dL) and HbA1c (from 14.08 to 6.5-6.98 g/100 g). The liver (AST, ALT, and ALP) and kidney function markers were also improved, which was attributed to the antioxidant activity. The lipid profile was enhanced, as represented by a decrease in LDL and TGs and an increase in HDL (El-Hadary 2022). Hussein et al. (2022) studied the effect of feeding diabetic rats H. thebaica fruit powder on their lipid profile and vital organ functions. Feeding rats doum powder for 8 weeks resulted in a significant reduction in blood glucose level (from 649 to 118 mg/dL) and a notable enhancement in the lipid profile, represented by a decrease in LDL, VLDL, and TG and an increase in HDL compared with control groups. A significant improvement in liver and kidney function markers was also observed, including ALT, AST, urea, creatinine, and uric acid (Table 4).
Doum Pharmaceutical Formulations
Recent studies have focused on the use of doum extracts in different formulations. El-Said et al. (2018) prepared encapsulated doum liposomes intended to fortify yogurt products. High antioxidant activity and high bioavailability were observed in these products, and interestingly, these biological properties were not affected by milk proteins. Mohamed et al. (2019a) utilized green synthesis to prepare silver nanoparticles from an aqueous extract of the doum fruits. Promising biological potentials were revealed, including antioxidant, antibacterial, antifungal, and cytotoxic activities and the inhibition of protein kinases.
Other nano-formulae were prepared from doum fruit extracts, including copper oxide, chromium oxide, and cerium oxide nanoparticles and bismuth vanadate nanorods (Mohamed et al. 2019b, 2020, 2021; Ahmed Mohamed et al. 2020). To physically characterize these formulae, different techniques were applied, such as X-ray diffraction, Fourier-transform infrared spectroscopy, and energy dispersive spectroscopy. The prepared doum fruit formulae demonstrated significant biological properties as antioxidant, antimicrobial, and cytotoxic agents.
Chemical/Industrial Applications
Mannan polysaccharides are regarded as safe constituents with myriad applications in pharmaceuticals, food industries, cosmetics, and textiles (Singh et al. 2018). Powdered doum seeds were found to be an excellent source of mannan as compared to commercial mannan; the mannan was extracted using an alkaline NaOH solution (Gibril et al. 2020). Also, the physical characteristics of doum seed powder were determined using gravimetric, elemental, microscopical, and X-ray diffraction analyses. Bsheer (2020) was able to utilize doum leaves to extract cellulose, which was then chemically converted to carboxymethyl cellulose. This application offers good stability and an appropriate production yield when compared to other conventional methods. Additionally, Elnasri et al. (2013) reported the potential of doum fruit to be converted into activated carbon, showing high adsorption properties which could be used as a purification bed for ferrous ions in water treatment systems.
Food Industry Applications
Fortified foods have gained increasing attention as a way to enhance health benefits, especially for children and women, protecting them against malnutrition as well as vitamin and mineral deficiencies (Olson et al. 2021). Doum fruit powder has been used in bread baking, such as toast bread and gluten-free pan bread. These fortified products were superior to plain white flour bread in terms of nutritive value (protein, carbohydrate, mineral, and vitamin contents) in addition to their healthy content of antioxidant and antimicrobial compounds (Aboshora et al. 2016; Shahin and Helal 2021; El-Hadidy and El-Dreny 2020). In another study, doum fruit powder was incorporated to fortify cake and tahina, thus enhancing their sensory, nutritive, and health-related properties (Siddeeg et al. 2019). Other doum food products include biscuits, crackers, syrups, jelly, and ice creams. Notably, Ismail et al. (2020) reported lower acceptability (65.00%) in the sensory evaluation of ice creams when increasing the percentage of doum fruit syrup and pomegranate peels (5% and 0.5%, respectively). A detailed list of doum-based functional foods is shown in Table 5.
Animal Nutrition
Doum fruits have also been used in animal and poultry feeding. The dried mesocarp of doum fruits was utilized as a constituent of livestock feed formulations, where it provided a good nutritive feed alternative in terms of caloric content but was better fortified with a protein source (Nwosu et al. 2008). In the same context, Makinde et al. (2018) reported the nutritive use of the dried doum mesocarp as a good substitute in a broiler chicken diet, which was associated with an increased WBC count; nevertheless, the authors recommended first minimizing the anti-nutritional factors mostly present in doum before using it in bird feed. Supplementation of the highly nutritive and antioxidant-rich doum powder in the rabbits' diet resulted in a significant increase in their body weight, with improved semen characteristics, sperm concentration, and motility, and accordingly higher fertility, in addition to decreased abortion and mortality compared to the control groups (Hassanien et al. 2020).
Conclusions, Challenges, and Future Perspectives
The current review represents the first comprehensive review of H. thebaica, integrating its chemical and biological data in addition to its food and pharmaceutical applications for further exploitation of this interesting plant. Here, we report on extraction technologies and analytical methods, in addition to the food and industrial applications of H. thebaica. Several technological extraction methods have been adopted to prepare different extracts from this species. The phytochemical composition of H. thebaica has been investigated by several analytical methods such as GC-MS, HPLC, UPLC, and NMR analyses. H. thebaica extracts showed important in vitro and in vivo biological properties ranging from antidiabetic and anti-inflammatory to antimicrobial and antioxidant effects. Including doum in various food products was mainly correlated with improved rheological and physical properties in addition to fortification with antioxidants. These studies should be up-scaled to explore the possibility of further industrial applications.
The reported biological effects should be further investigated in future studies to elucidate the pharmacodynamic and pharmacokinetic parameters involved. Also, molecular docking of H. thebaica bioactive compounds should be performed to identify the target proteins related to their biological potential. Moreover, to validate the use of H. thebaica formulations, more in-depth biological studies with mechanistic approaches, as well as pre-clinical and toxicological studies, must be carried out. The proteomic profile of doum should also be investigated so that doum fruits can be incorporated into more products based on solid scientific data. Another proposed challenge is to apply response surface methodology (RSM) to improve the phenolic yield of the plant.
Extraction Techniques That Are Used in the Extraction of H. thebaica
Extraction techniques play an important role in the preparation of high-quality plant formulations. Modern, non-conventional extraction methods are more effective and show significant advantages over conventional ones. Conventional methods include hot water extraction, reflux extraction, alkali extraction, and maceration. In conventional extraction methods, the extraction yield depends on various parameters including extraction time, pH, number of extraction cycles, and solvent-to-liquid ratio.
Fig. 2 The schematic diagram for the recent in vitro and in vivo studies on Hyphaene thebaica
Table 1
Culture, dry, and storage conditions of Hyphaene thebaica
Table 2
Conventional and non-conventional methods of Hyphaene thebaica
Table 3
In vitro studies of Hyphaene thebaica
Table 4
In vivo studies of Hyphaene thebaica | 8,526 | 2022-10-21T00:00:00.000 | [
"Agricultural and Food Sciences",
"Chemistry"
] |
Photon counting of extreme ultraviolet high harmonics using a superconducting nanowire single-photon detector
Laser-driven light sources in the extreme ultraviolet range (EUV) enable nanoscopic imaging with unique label-free elemental contrast. However, to fully exploit the unique properties of these new sources, novel detection schemes need to be developed. Here, we show in a proof-of-concept experiment that superconducting nanowire single-photon detectors (SNSPD) can be utilized to enable photon counting of a laser-driven EUV source based on high harmonic generation (HHG). These detectors are dark-count free and accommodate very high count rates—a perfect match for high repetition rate HHG sources. In addition to the advantages of SNSPDs for classical imaging applications with laser-driven EUV sources, the ability to count single photons paves the way for very promising applications in quantum optics and quantum imaging with high energetic radiation like, e.g., quantum ghost imaging with nanoscale resolution.
Introduction
The ability to visualize small features down to the nanoscale has always been an important key to scientific and technological advances. The resolution of a conventional microscope is limited by Abbe's law to approximately half the wavelength of the light source. Thus, a straightforward way to improve the resolution is to decrease the wavelength of the light. Using light in the extreme ultraviolet (10-124 nm wavelength) and soft X-ray range (1-10 nm) for microscopy enables nanoscale resolution and exhibits a uniquely high elemental contrast combined with the ability to penetrate a few micrometers into solid samples [1,2]. However, the technical realization of such microscopes is extremely demanding in every aspect, starting from the light source via the available optics to the detection of the radiation.
For a long time, synchrotrons were the only source with adequate photon flux for extreme ultraviolet (EUV) and soft X-ray (SXR) imaging applications. However, recent advances in the development of high-power ultrashort lasers drastically improved the photon flux of laser-driven EUV sources using the process of high harmonic generation (HHG) in gases [3][4][5]. When ultrashort infrared laser pulses are focused into diluted gases, a small fraction of the infrared photons is coherently converted [6,7] into higher harmonics of the fundamental frequency, reaching wavelengths down to a few nanometers. The emitted EUV or SXR radiation has laser-like properties such as high spatial coherence and low beam divergence. Because of the large bandwidth it can even support pulse lengths on the attosecond scale [8]. To date, HHG, at least in the low-energy EUV range (∼30 eV), reaches photon fluxes that are comparable to large-scale facility synchrotrons [9]. As a consequence, laboratory-based EUV imaging [10,11], as well as spectroscopic applications [12,13], became feasible, triggering widespread research activities in these fields.
Besides the light source, the technical implementation of the optical setup is equally demanding. Due to strong absorption, refractive optics like lenses cannot be used. Instead, grazing incidence mirrors or Fresnel zone plates must be employed. This, however, constrains the achievable numerical apertures (NA) and in consequence the resolution. These limitations can be mitigated by lensless imaging techniques [14,15]. The sample is illuminated by the source and the diffracted light is directly sent to the detector without any optics in between the sample and detector. Then, the actual image needs to be computed from the recorded diffraction pattern by numerically reconstructing the missing phase with sophisticated phase retrieval algorithms [16]. Since these regularly require a high degree of coherence, HHG sources are very well adapted to lensless imaging approaches [11].
Yet, one challenge remains: the detection process of the diffracted EUV radiation itself. The sensitivity of an EUV detector for lensless imaging methods should be as high as possible for several reasons. On the one hand, the resolution directly scales with the number of detected photons [17], i.e., the signal-to-noise ratio (SNR). Typically in EUV imaging, high SNR is realized by high photon flux, high quantum efficiency, and long exposure times over which the photon flux is integrated. On the other hand, it is beneficial to actually limit the incident flux on the sample. After all, EUV and SXR light is ionizing radiation and induces damage [18].
In common EUV/SXR imaging applications, typically back-illuminated, thinned silicon-based CCDs are used as detectors. The quantum efficiency of these detectors can reach values higher than 90% [19] in the EUV range. However, the SNR is limited by read-out noise and dark counts and is thus not ideal for EUV imaging with minimized photon throughput. Furthermore, due to their integrating measurement principle in conjunction with the vastly different illumination of different regions on the detector, as is typical for lensless imaging, these detectors cannot exploit the very high repetition rates of several 100 kHz of high-flux HHG sources, since the readout times are typically on the order of milliseconds to even seconds. Therefore, an optimal detector for imaging applications with HHG radiation should offer high readout speed, ideally event-based after every laser pulse, and a high SNR, limited only by the photon shot noise. Fast single-photon counting detectors can fulfill these requirements [20]. In addition, such detectors would open up novel quantum optical imaging possibilities in the EUV, which can reduce the required flux on the sample even below the classical limits by methods like quantum ghost imaging [21].
Photon counting in the EUV and SXR range has usually been implemented in the past using electron-multiplier microchannel plates (MCPs). However, these devices suffer from low efficiencies limited by the open-area ratio of the device, high dark count rates, and limited count rates. For harder X-rays, CCD cameras can detect single photons and thus can be used as counting detectors, but the above-mentioned readout-time limits persist. For lower energy photons, counting with CCDs is not possible due to read-out noise.
A variety of photon-counting semiconductor detectors, like avalanche photodiodes [22] or electron-multiplying CCD detectors [23], are available, but mostly for the infrared and optical wavelength range. Therefore, these devices typically exhibit no or very low detection efficiency for EUV and SXR photons, and their count rates are typically much smaller than the repetition rate of high-flux HHG sources.
Here, we demonstrate that superconducting nanowire single-photon detectors (SNSPDs) [26], which originally were also designed for infrared radiation [27], can directly be utilized for photon counting of EUV radiation from laser-driven high harmonic sources. These detectors, invented almost two decades ago [24], are based upon the breaking of superconductivity in a very thin and narrow cryogenically cooled nanowire when a photon is absorbed. Their working principle can be explained in a simplified way by the so-called hot spot model [25]: the superconducting nanowire (a few nanometers thick and tens of nanometers wide) is biased by a direct current (DC) I_B below its maximum supercurrent (the critical current I_c, see Fig. 1b). If a photon with sufficiently high energy hν hits the superconducting nanowire and is absorbed, a small region of the wire, the so-called hot spot, becomes normal conducting. The critical current density of the superconductor is exceeded around this hot spot. Therefore, the normal conducting region expands and covers the whole cross section of the nanowire (see red area in Fig. 1c). This resistive part of the nanowire together with I_B generates a voltage peak across the SNSPD. After a few nanoseconds, the thermal energy is distributed to the substrate of the SNSPD and finally to the cryogenic bath. The entire nanowire becomes superconducting again and the SNSPD is ready for a new photon event (Fig. 1d). The actual mechanism behind this photon detection and the role of magnetic vortices are still under investigation [28,29].
Fig. 1 SNSPD working principle in a simplified hot spot model [24,25]: a Artwork of photon absorption by the SNSPD. b Below the critical current density, critical temperature, and critical magnetic field, the complete meandered nanowire is superconducting and a constant DC bias current I_B is applied. A photon with energy hν hits the meander and is absorbed. c The energy of the absorbed photon is high enough to generate a so-called hot spot in the nanowire (red area). Thereby, the current density exceeds the critical current density of the superconductor and a small normal-conducting region covers the whole cross section of the nanowire. This resistive part together with I_B generates a voltage peak across the SNSPD. d After a few nanoseconds (see also Fig. 3) the heat is distributed and dissipated into the substrate of the SNSPD (and finally to the cryogenic bath); the entire nanowire becomes superconducting again and is ready for a new event.
SNSPDs can be fabricated in pixelated arrays [30], which are in principle suited for imaging applications, and are predominantly used for photon counting in the far and near infrared and the visible spectral range. Nowadays, they are commercially available and are becoming more and more popular in the rapidly growing fields of quantum optics, quantum communication, and quantum imaging [31][32][33][34]. When applied to EUV radiation, SNSPDs have two major advantages in comparison to other single-photon detectors like the above-mentioned avalanche photodiodes or multiplier tubes. The recovery time lies in the range of a few nanoseconds or even picoseconds, and thus SNSPDs can achieve count rates up to several GHz. This makes SNSPDs a perfect match to the repetition rate of state-of-the-art high-flux HHG sources driven at several 100 kHz [9]. Furthermore, SNSPDs exhibit an outstandingly low dark count rate and thus allow background-free detection.
Therefore, SNSPDs in the EUV range are promising detectors not only for imaging applications, but also for experiments to measure the quantum statistics of HHG sources as Gorlach et al. proposed [35].
We want to point out that SNSPDs have actually been applied to harder X-ray radiation (> 6 keV) by Inderbitzin et al. [36][37][38][39] by increasing the thickness of the wire for higher absorption of X-ray photons (X-SNSPD). However, to the best of our knowledge, they have never been utilized either for EUV radiation in general or with laser-driven high harmonic sources in particular. As a proof of concept, we show in the following that a 10 nm thin meandered NbN SNSPD, originally developed for the visible range, is capable of photon counting in the EUV.
Setup
For the proof-of-concept study a SNSPD was illuminated with EUV radiation from an HHG source [40]. Figure 2a shows the experimental setup.
The HHG source is driven by laser pulses with a central wavelength of 1300 nm, a pulse energy of 2 mJ, a pulse duration of ∼50 fs, and a repetition rate of 1 kHz. These pulses are generated by an optical parametric amplifier (OPA), which is pumped by a Ti:Sa laser (35 fs pulse duration, 9 mJ pulse energy, 790 nm central wavelength). By focusing the linearly polarized laser pulses from the OPA into an argon gas jet, the HHG process is triggered, resulting in a mixture of EUV radiation with the typical harmonic comb structure and remaining infrared laser light. EUV photons up to an energy of ∼100 eV are produced [41]. They are separated from the laser radiation by thin metal foils. Two filter materials were used: aluminum transmits EUV radiation in the range of 15 to 72 eV, whereas zirconium transmits radiation above ∼60 eV. In addition to the spectral filtering, the foils are also used for differential pumping of the residual gas load from the gas jet in the HHG chamber. Ultrahigh vacuum conditions at the position of the cryogenic SNSPD are crucial to avoid freezing of the nanowire chip.
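As a rough consistency check (not part of the original setup description), the photon energy of the 1300 nm driver and the harmonic orders needed to reach the quoted EUV energies can be estimated with the minimal sketch below.

```python
# Rough estimate of harmonic orders for a 1300 nm driver (illustrative sketch).
H_PLANCK_EV = 4.135667696e-15  # Planck constant in eV*s
C = 299_792_458.0              # speed of light in m/s

fundamental_eV = H_PLANCK_EV * C / 1300e-9   # ≈ 0.95 eV per driver photon
for target_eV in (35.0, 72.0, 100.0):        # filter edges and quoted cutoff
    order = target_eV / fundamental_eV       # note: only odd orders are emitted
    print(f"{target_eV:5.1f} eV corresponds to roughly the {order:.0f}th harmonic")
```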
The divergent EUV radiation is focused into another vacuum chamber by a toroidal mirror. At the focus of the EUV beam, a flat EUV mirror can be moved into the beam to steer it into a high-resolution EUV spectrometer [42], which served as an online diagnostic for the produced HHG radiation. Without this mirror, the divergent beam propagates through another pumping stage into the detection chamber consisting of the cryogenic cooling unit and the actual SNSPD, which was exposed to the EUV radiation without any further filter foils. Due to the divergence of the EUV beam behind the focus and the finite size of the SNSPD, the photon flux on the detector could be controlled by its distance to the focus. It was set to ∼1 m to limit the photon flux on the SNSPD area to less than one photon per laser shot and thus ensure single-photon events. The toroidal mirror was used to steer the beam to the detector and to ensure overlap.
The SNSPD consists of a 10 nm thick and 100 nm wide meandered niobium nitride nanowire with a gap size of 100 nm, resulting in an active area of about 4.8 × 4.8 µm² with a filling factor of 0.5 [27]. The nanowire is embedded in a coplanar waveguide connected to a bias tee, which separates the bias and readout lines. A battery-powered current source was used to bias the detector. The pulses upon photon detection were amplified by means of two amplifiers (Mini-Circuits ZX60-33LN-S+) at room temperature with an overall gain of about 35 dB. The amplified signal was then observed and recorded with an oscilloscope. In addition, a standard photodiode was used to register the incoming IR pulses on the same oscilloscope to enable coincidence measurements.
Here, we irradiate the 4.8 × 4.8 µm² active area of the SNSPD with broadband EUV radiation from an HHG source. The incident EUV spectrum is shown in Fig. 2b. It ranges from ∼35 to 72 eV, limited by the aluminum filter. Hence, the detector is hit by photons with a variety of different discrete photon energies. The simulated absorption depth of the EUV radiation for three different energies (40 eV, 55 eV, 70 eV) is depicted in Fig. 2c. Depending on the photon energy, the absorption in the 10 nm-thin NbN wire is up to 35% including the filling factor of 0.5. The remaining radiation is absorbed in the bulk material (Al₂O₃). The absorption in the wire, and thus the nominal detection efficiency, could be drastically enhanced by a larger filling factor and increased wire thickness. Nevertheless, even in its current form, we demonstrate that the SNSPD is capable of counting single photons of an HHG source.
Results
When the SNSPD was irradiated with the broadband EUV radiation, the detector registered single-photon events.
Fig. 2 a Experimental setup: the laser radiation of an OPA is focused into an argon gas jet. EUV radiation is produced and filtered from the laser light by thin metal foils. After that, the EUV beam is focused by a toroidal mirror to an intermediate focus and then propagates to the SNSPD. Additionally, a mirror can be moved into the focus area to steer the beam into an XUV spectrometer. The signal of the SNSPD is amplified with a room-temperature amplifier (Mini-Circuits ZX60-33LN-S+) and measured with an oscilloscope. Furthermore, the signal of an IR diode is used to register the incoming IR pulses in coincidence. b Spectrum of the HHG radiation with an aluminum filter foil used to separate the EUV radiation from the remaining infrared laser light. The filter has a transmission window of 15-72 eV, which determines the bandwidth of the EUV photons incident on the SNSPD. c Calculated penetration depth of EUV radiation into the SNSPD, which consists of a meandered 10 nm thick NbN layer on a sapphire substrate. Up to 35% of the radiation is directly absorbed in the NbN layer. The remaining part is absorbed in the substrate. The area filling factor of the meandered wire of 0.5 has already been included in this calculation.
The pulse height of around 100 mV after amplification lies in the expected range for the readout setup and corresponds to the pulse height in the case of photon detection in the visible range. For a 10 nm thick detector, we expect the resistance of the normal domain after photon absorption to be much higher than the 50 Ω impedance of the readout. Therefore, the signal amplitude does not vary with the normal-domain resistances induced by the absorption of photons of different energies. Thus, the pulse height itself does not allow for energy resolution of the incoming photons. The decay time of around 4 ns is determined by the kinetic inductance of the nanowire, which roughly amounts to L_k = μ₀ λ²_NbN l/(wd) ≈ 42 nH, with the vacuum permeability μ₀, the magnetic penetration depth λ_NbN ≈ 550 nm [27,31], and the nanowire's total length l = 110 µm, width w = 100 nm, and thickness d = 10 nm. The overshoot and long tail of the signal amplitude after 4 ns are related to the reactances in the bias and readout circuit and reflections in the readout line. The long tail is mostly caused by an impedance mismatch between the SNSPD chip and the output line. It takes about 50-100 ns for the output signal of the measurement system to fully converge. Note that this is neither the recovery time of the SNSPD nor does it influence the detection efficiency or the dark count rate of the system.
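For readers who want to verify the quoted kinetic-inductance estimate, the following is a small numerical check using only the geometry and penetration depth given above; it is an illustrative sketch, not code from the original work.

```python
import math

# Kinetic inductance estimate: L_k = mu_0 * lambda^2 * l / (w * d)
mu_0 = 4 * math.pi * 1e-7      # vacuum permeability, H/m
lam = 550e-9                   # magnetic penetration depth of NbN, m
length = 110e-6                # total nanowire length, m
width = 100e-9                 # nanowire width, m
thickness = 10e-9              # nanowire thickness, m

L_k = mu_0 * lam**2 * length / (width * thickness)
print(f"L_k ≈ {L_k * 1e9:.0f} nH")   # ≈ 42 nH, consistent with the text
```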
At a bias current of 62 µA, the count rate was 940 events per minute and thus far below the 1 kHz repetition rate of the source, which ensures that the detection events are related to single incident photons. To prove that the detected events are really caused by EUV photons, the measurement was repeated with the infrared driving laser switched off to show that the events are neither dark counts nor caused by any residual light inside the chamber. Indeed, no event occurred during five minutes. As a second test, the gas supply was switched off while maintaining the laser radiation to prove that the events are not triggered by infrared photons, which might have passed through the aluminum filter. Again, no events were detected within five minutes. Both findings prove that the events are caused by EUV photons. As an additional cross-check, we investigated the time delay between the incoming laser pulse and a detected event. Figure 4 shows a typical signal trace as well as the signal of the photodiode, which detects the incoming IR pulses (see Fig. 2a).
All detected events are clearly correlated with the laser pulses. The jitter of the time delay between the SNSPD (l = 110 µm, w = 100 nm) signal and the diode signal over 27 different events was 50 ps (FWHM) at a mean value of 28.115 ns, which corresponds to the length of the BNC cable (∼12 m) used for the diode signal. The time delays of several events are depicted in Fig. 4b. The timing jitter is attributed to the hot spot emergence and subsequent vortex crossing upon photon absorption [43]. At higher photon energies, the exponential distribution diminishes. The recorded jitter lies in the expected range, which is known from IR measurements with geometrically comparable SNSPDs (l = 30 µm, w = 110 nm) [44].
After the above-mentioned proof of EUV photon detection by the SNSPD, we want to discuss the efficiency of the detection process in the following. The experiment was arranged to have a high probability of registering single-photon events only. This was done in such a way that the recorded count rate was adjusted to be significantly lower than the repetition rate of the laser. By placing the detector in the divergent beam instead of the focus region, the photon flux hitting the detector area could be changed by adjusting the distance of the detector to the focus point. The time structure of the EUV photons within a single laser pulse is on the order of femtoseconds or even attoseconds. Thus, they are indistinguishable with respect to the time resolution of the detector, which is on the order of picoseconds. The overall flux of the source in this spectral range can be estimated to be 3 × 10⁹ photons/s over the full bandwidth (see also Fig. 2b). Considering the active area of the detector of ∼25 µm² and the beam size (1/e²) at the detector position of roughly 0.8 cm², a maximum of 880 photons/s will hit the detector, assuming an ideal Gaussian beam shape and perfect alignment of the small detector area directly at the center of the beam. However, as mentioned above, the measured count rate was 940 counts/min or ∼16 counts/s. This can be explained by two effects. On the one hand, the real beam shape is distorted and no longer Gaussian, since a toroidal mirror (see Fig. 2a) was used for an intermediate focusing of the beam as well as for steering the beam to the fixed detector position. By aiming the toroidal mirror at the detector, the angles are adjusted and the former Gaussian beam profile becomes strongly elongated at the detector due to astigmatism. We roughly estimate this effect to lower the incident photon number by at least one order of magnitude. On the other hand, only a part of the incident photons is actually detected by the SNSPD due to the restricted efficiency. It is limited by the absorption in the nanowire and lies around 10-35% for the given spectral window (see Fig. 2c). However, as the literature predicts, it is possible for a high-energy photon to be absorbed in the substrate and afterwards dissipate its energy into the nanowire and still break superconductivity [36]. Consequently, this effect can increase the efficiency of the used detector in a trade-off with the timing jitter, since the duration of the dissipation process is randomly distributed. Due to the small number of measurements within this first proof-of-concept experiment, a robust analysis of the delay times with regard to substrate absorption is not possible. It will be investigated in continuative experiments.
Fig. 4 a Time delay of the EUV-related SNSPD signal with respect to the driving infrared laser pulse: the rising edge of the SNSPD signal (red) is used for triggering. Additionally, a fast photodiode was used to detect the HHG-driving infrared laser pulse. The diode signal (blue) has propagated through a ≈12 m long BNC cable, resulting in a delay of approx. 28 ns. The bias current was set to I_B = 62 µA and the operating temperature was T = 3.0 K. b Time delay between the SNSPD signal and the driving infrared laser pulse for several events: the gray dots depict the time delay for a single measurement. The red dashed line shows the mean value.
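As a plausibility check of the quoted upper bound on the detector flux, the simple area-ratio estimate below reproduces the right order of magnitude; it is an illustrative sketch only and ignores the Gaussian weighting and alignment details behind the ~880 photons/s figure in the text.

```python
# Order-of-magnitude estimate of the EUV photon rate hitting the SNSPD (sketch).
total_flux = 3e9          # photons/s emitted in the full bandwidth
detector_area = 25e-12    # active area, m^2 (about 25 µm^2)
beam_area = 0.8e-4        # 1/e^2 beam area at the detector, m^2 (0.8 cm^2)

rate_on_detector = total_flux * detector_area / beam_area
print(f"~{rate_on_detector:.0f} photons/s on the detector (upper bound)")
# A few hundred to ~1000 photons/s, the same order as the quoted maximum of
# ~880 photons/s, and well above the measured ~16 counts/s once beam
# distortion and limited absorption in the nanowire are taken into account.
```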
Nanowires with thicknesses in the range of the penetration depth of EUV photons into NbN would ensure that the photons are absorbed by the SNSPD and not by the underlying substrate. This will significantly increase the overall detection efficiency of the detector from which the targeted EUV imaging applications would heavily benefit. Therefore, SNSPDs can be optimized for the EUV range as they have been developed for the x-ray range [37].
To quantify the effect of the bias current on the detection rate of the SNSPD, the count rates for different currents were measured by keeping all other parameters constant. The result is depicted in Fig. 5.
With increased bias current, the count rate also increases. This effect can be explained by the spectral bandwidth of the HHG radiation. The detector is illuminated with broadband harmonic radiation between ∼35 and 72 eV. For a low bias current, only photons with very high energies are able to break superconductivity and thus are registered. In other words, the detector is blind to all photons below a certain energy depending on the bias current. For higher bias currents, this detection limit shifts to lower photon energies. This gives rise to a certain integrated energy resolution of the detection scheme by altering the bias current. Spectrally resolved measurements will be conducted in the future to investigate this effect in more detail.
The HHG source is capable of producing photons up to an energy of ∼100 eV. To demonstrate the capabilities of the SNSPD in this higher energetic region, we interchanged the aluminum filter of the HHG source with a zirconium foil. Thus, radiation between ∼60 and 100 eV hits the detector and could be registered. However, the IR transmission of the used zirconium foils at the driving laser wavelength was higher in comparison to the aluminum filter. Therefore, infrared photons reached the detector and caused high count rates on the order of 20 counts per second even without actual harmonic radiation. By exploiting the above-mentioned dependence of the detection process on the bias current, it was however possible to select a current (I_B = 30 µA) which switched the sensitivity of the SNSPD for infrared photons almost completely off while simultaneously maintaining the EUV sensitivity. Therefore, the detector can be used to count EUV photons even if remaining infrared laser light is present. The ability to blind the detector to IR light is a significant advantage over other detector technologies. To protect the detector from too high an infrared photon flux, we had to slightly change the beam direction for the measurements at higher photon energies to hit the detector only at the edge of the Gaussian light distribution. For this reason, we measured an EUV count rate of 5 counts/minute for EUV photons above 60 eV. The background count rate with the infrared laser only (gas supply off) was 0.6 counts/minute.
Conclusion
Our findings prove that a single-photon detector based on superconducting nanowires can be utilized for photon counting of EUV photons, in particular from a laser-driven HHG source, with very low dark count rates. The laser repetition rate in the presented experiment was 1 kHz. Novel fiber-laser approaches for high-power laboratory-based HHG sources, however, drastically increase the repetition rate up to the MHz regime [45], so that the high possible count rate of the SNSPD can be exploited in the future. The detector can be blinded to infrared photons by adjusting the bias current, which is a major advantage for laser-based EUV sources. Apart from the possibility to disregard low-energy photons, SNSPDs inherently offer only limited ways to differentiate between photons of different energies [46].
For imaging purposes, spatial resolution needs to be added to the detection scheme. This could be implemented with pixelated setups of several SNSPDs or by measuring the position of the event on the meandered wire by evaluating the delay between the signal at the two ends [47]. The presented proof-of-concept paves the way for future applications of SNSPDs and other cryogenic detector schemes [48,49] for laser-based high harmonic light sources in coherent imaging [50], EUV quantum optics [35], and quantum imaging [51,52].
| 6,402.2 | 2022-01-22T00:00:00.000 | [
"Physics"
] |
Fluency detection on communication networks
When considering a social media corpus, we often have access to structural information about how messages are flowing between people or organizations. This information is particularly useful when the linguistic evidence is sparse, incomplete, or of dubious quality. In this paper we construct a simple model to leverage the structure of Twitter data to help determine the set of languages each user is fluent in. Our results demonstrate that imposing several intuitive constraints leads to improvements in performance and stability. We release the first annotated data set for exploring this task, and discuss how our approach may be extended to other applications.
Introduction
Language identification (LID) is an important first step in many NLP pipelines since most downstream tasks need to employ language-specific resources. In many situations, LID is a trivial task that can be addressed e.g. by a simple Naive Bayes classifier trained on word and character n-gram data (Lui and Baldwin, 2012): a document of significant length will be quickly disambiguated based on its vocabulary (King et al., 2014). However, social media platforms like Twitter produce data sets in which individual documents are extremely short, and language use is idiosyncratic: LID performance on such data is dramatically lower than on traditional corpora (Bergsma et al., 2012; Carter et al., 2013). The widespread adoption of social media throughout the world amplifies the problem as less-studied languages lack the annotated resources needed to train the most effective NLP models (e.g. treebanks for statistical parsing, tagged corpora for part-of-speech tagging, etc.). All of this motivates the research community's continued interest in LID (Zampieri et al., 2014).
Tweet #1: Коз эверисинг ю ду ис мэджик
Tweet #2: omg favourite day of the week!
In this paper, we consider the closely-related task of determining an actor's fluencies, the set of languages they are capable of speaking and understanding. The observed language data will be the same as for LID, but is now considered to indicate a latent property of the actor. This information has a number of downstream uses, such as providing a strong prior on the language of the actor's future communications, constructing monolingual data sets, and recommending appropriate content for display or further processing.
This paper also focuses on the situation where a very small amount of content has been observed from the particular user. While this may seem strange considering the volume of data generated by social media, this is dominated by particularly active users: for example, 30% of Twitter users post only once per month (Leetaru et al., 2013). This content-starved situation is exacerbated by certain use-cases, such as responding to emergency events where sudden focus is directed at a particular location, or focusing on new users with shallow histories.
Previous Work
Twitter and other social media platforms are a major area of ongoing NLP research, including dedicated workshops (NAA, 2015; ACL, 2014). Previous work has considered macroscopic properties of the entire Twitter network (Gabielkov et al., 2014), and pondered whether it is an "information" or "social" network (Myers et al., 2014). Studies have focused on determining user attributes such as gender (Li et al., 2015), political allegiance (Volkova et al., 2014), brand affinity (Pennacchiotti and Popescu, 2011a), sentiment analysis (West et al., 2014), and more abstract roles (Beller et al., 2014). Such demographic information is known to help downstream tasks (Hovy, 2015). Research involving social media communication networks has typically focused on homophily, the tendency of users to connect to others with similar properties (Barberá, 2014). A number of papers have employed features drawn from both the content and structure of network entities in pursuit of latent user attributes (Pennacchiotti and Popescu, 2011b; Campbell et al., 2014; Suwan et al., 2015).
Definitions
We refer to the entities that produce and consume communications as Actors, and the communications (packets of language data) as Messages. Each message occurs in a particular Language, and each actor has a set of Fluencies, representing the ability to produce and consume a message in a given language. We refer to a connected graph of such entities as a Communication Network. For Twitter data, messages are simply associated with a single actor, who is in turn associated with other actors via the "following" relationship, the actor's "friends" in Twitter's terminology. 1 We assume each message (tweet) is written in a single language, and actors are either fluent or not in each possible language.
Twitter Data Set
To build a suitable data set 2 for fluency detection, we first identified 1000 Twitter users who, according to the Twitter LID system, have tweeted in Russian and at least one additional language. For each of these "seed" users, we gather a local context (a "snowflake") as follows: we choose 20 of their friends at random. For each of these friends, we choose 15 of their friends (again, at random). Finally, we randomly pull 200 tweets for each identified user. The data set consists of 989 seed users, 165,042 friends, and 55,019,811 tweets. We preserve all Twitter meta-data for the users and tweets, such as location, follower count, hashtags, etc, though for the purposes of this paper we are only interested in the friendship structure and message text. We then had an annotator determine the set of languages each of the 1000 seed users is fluent in. For each seed user, the annotator was presented with their 200 tweets, grouped by Twitter language ID, and was asked to 1) flag users that appear to be bots and 2) list the languages they believe the user is fluent in. These steps are reflected in Figure 4. Over 50% (507) of the users were flagged as possible bots and not used in this study. The remaining 482 were observed employing 7 different languages: Russian, Ukrainian, German, Polish, Bulgarian, Latvian, and English. At most, a single user was found to be fluent in three languages.
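The sampling procedure described above can be summarized in a few lines of code. The sketch below assumes hypothetical helper functions `get_friends(user)` and `get_tweets(user, n)` standing in for whatever Twitter API client is used; none of these names come from the paper.

```python
import random

def build_snowflake(seed_user, get_friends, get_tweets):
    """Collect the local context ("snowflake") around one seed user."""
    # 20 random friends, then 15 random friends of each of those friends
    # (a real implementation would guard against users with too few friends).
    friends = random.sample(get_friends(seed_user), 20)
    friends_of_friends = [f2 for f in friends
                          for f2 in random.sample(get_friends(f), 15)]
    users = {seed_user, *friends, *friends_of_friends}
    tweets = {u: get_tweets(u, 200) for u in users}   # 200 tweets per user
    return users, tweets
```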
Structure-Aware Fluency Model
Our goal was to explicitly model each actor's fluency in different languages, using a model with simple, interpretable parameters that can be used to encode well-motivated assumptions about the data. In particular, we want to bias the model towards the belief that actors typically speak a small number of languages, and encode the belief that all actors participating in a message are highly likely to be fluent in its language. Our basic hypothesis is that, in addition to scores from traditional LID modules, such a model will benefit from considering the behavior of an actor's interlocutors. To test this, we designed a model that employs scores from an existing LID system, and compare performance with and without awareness of the communication network structure. To demonstrate the effectiveness of the model in situations with sparse or unreliable linguistic content, we perform experiments where the number of messages associated with each actor has been randomly down-sampled.
Linear Programming Linear Programming (LP) is a method for specifying constraints and cost functions in terms of linear relationships between variables, and then finding the optimal solution that respects the constraints. The restriction to linear equations ensures that the objective function is itself linear, and can be efficiently solved. If some or all variables are restricted to take discrete values, referred to as (Mixed) Integer Linear Programming (ILP), finding a solution becomes NP-hard, though common special cases remain efficiently solvable. We specify our model as an ILP with the hope that it provides sufficient expressiveness for the task, while remaining intuitive and tractable. Inference is performed using the Gurobi modeling toolkit (Gurobi Optimization, 2015).
Model definition Given a communication network with no LID information, ideally we would like to determine the language of each message, and the set of languages each actor is fluent in. Initially, we assume access to a probabilistic LID system that maps unicode text to a distribution over possible languages. We use the following notation: A_1:T and M_1:U are the actors and messages, respectively. F(a_i) is a binary vector indicating which languages we believe actor a_i is fluent in. L(m_i) is a one-hot binary vector indicating which language we believe message m_i is written in. P(m_i) is the set of actors participating in message m_i: for Twitter data, where messages are (usually) not directed at specific users, we treat a user and the user's friends as participants. LID(m_i) is a real vector representing the probability of message m_i being in each language, according to the LID system.
To build our ILP model, we iterate over actors and messages, defining constraints and the objective function as we go. There are two types of structural constraints: first, we restrict each message to have a single language assignment (Eq. 1). Second, we ensure that all actors participating in a given message are fluent in its language. The objective function also has two components: first, the language fit encourages the model to assign each message a language that has high probability according to the LID system. Second, the structure fit minimizes the cardinality of the actors' fluency sets (subject to the structural constraints), and thus avoids the trivial solution where each actor is fluent in all languages. Finally, the two components of the objective function are combined with an empirically determined language weight to get the complete objective function. Note that these are not all linear relationships: in particular, the multiplication operator cannot be used in ILP when the operands are both variables, as in equation 2. There are however techniques that can represent these situations in a linear program by introducing helper variables and constraints (Bisschop, 2015).
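To make the model concrete, here is a minimal sketch of the ILP using the open-source PuLP library rather than the Gurobi toolkit used by the authors; the variable names, the toy data, and the exact form of the combined objective (a weighted difference of language fit and fluency-set size) are illustrative assumptions based on the description above, not the paper's exact equations.

```python
import pulp

LANGS = ["ru", "uk", "en"]                        # toy language inventory
messages = {"m1": {"participants": ["a1", "a2"],  # toy data: LID scores per message
                   "lid": {"ru": 0.7, "uk": 0.2, "en": 0.1}},
            "m2": {"participants": ["a2"],
                   "lid": {"ru": 0.1, "uk": 0.1, "en": 0.8}}}
actors = ["a1", "a2"]
w = 0.9                                           # language weight (tuned empirically)

prob = pulp.LpProblem("fluency", pulp.LpMaximize)
L = pulp.LpVariable.dicts("L", (messages, LANGS), cat="Binary")   # message language
F = pulp.LpVariable.dicts("F", (actors, LANGS), cat="Binary")     # actor fluency

for m, info in messages.items():
    prob += pulp.lpSum(L[m][g] for g in LANGS) == 1               # one language per message
    for a in info["participants"]:
        for g in LANGS:
            prob += F[a][g] >= L[m][g]                            # participants are fluent

language_fit = pulp.lpSum(info["lid"][g] * L[m][g]
                          for m, info in messages.items() for g in LANGS)
structure_fit = pulp.lpSum(F[a][g] for a in actors for g in LANGS)
prob += w * language_fit - (1 - w) * structure_fit                # combined objective

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for a in actors:
    print(a, [g for g in LANGS if F[a][g].value() == 1])
```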
Language Identification Scores and Fluency
Baseline To get LID scores, we ran the VaLID system (Bergsma et al., 2012) on each message, and normalize the output into distributions over 261 possible languages. VaLID is trained on Wikipedia data (i.e. out-of-domain relative to Twitter), although it does employ hand-specified rules for sanitizing tweet text, such as normalizing whitespace and removing URLs and user tags. VaLID uses a data-compression approach that is competitive with Twitter's in-house LID, despite no consideration of geographic or user priors. These language scores are used in the structure-aware model to compute the language fit.
Because VaLID makes no use of the communication network structure, we also use its scores to create a baseline structure-unaware fluency model. To get structure-unaware baseline scores for the fluency identification task, we average the LID distributions for each actor's messages and consider them fluent in a language if its probability is above an empirically-determined threshold.
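The structure-unaware baseline is straightforward to implement; the snippet below is an illustrative sketch (the array shapes and toy scores are hypothetical, though 0.06 is the threshold the paper reports as optimal).

```python
import numpy as np

def baseline_fluencies(message_lid_scores, threshold=0.06):
    """message_lid_scores: (num_messages, num_languages) LID distributions
    for one actor. Returns a boolean fluency vector over languages."""
    mean_dist = np.asarray(message_lid_scores).mean(axis=0)
    return mean_dist > threshold

# Example with two messages over three languages (ru, uk, en)
scores = [[0.80, 0.15, 0.05],
          [0.70, 0.25, 0.05]]
print(baseline_fluencies(scores))   # e.g. [ True  True False]
```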
Tuning parameters We empirically determine the thresholds for the baseline model and the language weights for the structure-aware model via a simple grid search, repeated 100 times. We randomly split the data into 20%/80% tune/test sets, and evaluate filter thresholds and language weights from 0 to 1 in .01 increments, with messages per actor ranging between 1 and 10. We expected the baseline model to have a consistent optimal threshold (though with higher performance variance with fewer messages), and this was borne out with optimal performance at a threshold of 0.06, independent of the number of messages per actor. For the structure-aware model, the optimal language weight was 0.9, although the entire range from 0.1-0.9 showed similar performance. This result was surprising, as we expect the structure-aware model to rely heavily on the structural fit when the number of messages is small, and on the language fit when the number is large. This trend doesn't emerge because the structural fit actually relies on the language fit to make assignments for the seed actor's friends and their messages.
Results and discussion
Figure 2 compares the performance of the structure-aware ILP model with the baseline model as a function of the number of messages per actor, using the empirically-determined threshold and language weight. At the left extreme, the models only have a single, randomly-selected message from each actor. As this number increases, the baseline model improves as it becomes more likely to have seen enough messages to reflect the actor's full spectrum of language use. The structure-aware model is able to make immediate use of the actor's friends, reaching high performance even when the language data is very sparse. Its most frequent type of error is over-hypothesizing fluency in both Ukrainian and Russian when the user is in fact monolingual, followed by incorrectly hypothesizing fluency in English. This is understandable given the similarity of the languages in the former case, and the popularity of English expressions, titles, and the like in the latter.
Conclusion
We have presented promising results from leveraging structural information from a communication network to improve performance on fluency detection in situations where direct linguistic data is sparse. In addition to defining the task itself, we release an annotated data set for training and evaluating future models. Planned future work includes a more flexible decoupling of the language and structure fits (in light of Section 5), and moving from pre-existing LID systems to joint models where LID scores are directly informed by structural information.
| 3,031 | 2016-11-01T00:00:00.000 | ["Computer Science", "Linguistics"] |
IGM Constraints from the SDSS-III/BOSS DR9 Ly-alpha Forest Flux Probability Distribution Function
The Ly$\alpha$ forest transmission probability distribution function (PDF) is an established probe of intergalactic medium (IGM) astrophysics, especially the temperature-density relationship of the IGM. We measure the transmission PDF from 3393 Baryon Oscillation Spectroscopic Survey (BOSS) quasars from SDSS Data Release 9, and compare with mock spectra that include careful modeling of the noise, continuum, and astrophysical uncertainties. The BOSS transmission PDFs, measured at $\langle z \rangle = [2.3,2.6,3.0]$, are compared with PDFs created from mock spectra drawn from a suite of hydrodynamical simulations that sample the IGM temperature-density relationship, $\gamma$, and temperature at mean-density, $T_0$, where $T(\Delta) = T_0 \Delta^{\gamma-1}$. We find that a significant population of partial Lyman-limit systems with a column-density distribution slope of $\beta_\mathrm{pLLS} \sim -2$ is required to explain the data at the low-transmission end of the transmission PDF, while uncertainties in the mean Ly$\alpha$ forest transmission affect the high-transmission end. After modelling the LLSs and marginalizing over mean-transmission uncertainties, we find that $\gamma=1.6$ best describes the data over our entire redshift range, although constraints on $T_0$ are affected by systematic uncertainties. Within our model framework, isothermal or inverted temperature-density relationships ($\gamma \leq 1$) are disfavored at a significance of over 4$\sigma$, although this could be somewhat weakened by cosmological and astrophysical uncertainties that we did not model.
Soon after the discovery of the first high-redshift quasars, Gunn & Peterson (1965) realized that the amount of resonant Lyman-α (Lyα) scattering off neutral hydrogen structures observed in the spectra of these quasars could be used to constrain the state of the intergalactic medium (IGM) at high redshifts: they deduced that the hydrogen in the intergalactic medium had to be highly photo-ionized (neutral fractions of n_HI/n_H < 10^-4) and hot (temperatures T > 10^4 K). Lynds (1971) then discovered that this Lyα absorption could be separated into discrete absorption lines, i.e. the Lyα "forest".
Beginning in the 1990s, detailed hydrodynamical simulations of the intergalactic medium led to the current physical picture of the Lyα forest arising from baryons in the IGM which trace fluctuations in the dark matter field induced by gravitational collapse, in ionization balance with a uniform ultraviolet ionizing background (see, e.g., Cen et al. 1994; Miralda-Escudé et al. 1996; Croft et al. 1998; Davé et al. 1999; Theuns et al. 1998). A physically-motivated analytic description of this picture is the fluctuating Gunn-Peterson approximation (FGPA; Croft et al. 1998), in which the Lyα optical depth, τ, scales with the underlying matter density, ρ, through a polynomial relationship, τ ∝ Δ^2 T^-0.7 / Γ ∝ Δ^(2−0.7(γ−1)) / Γ, where Γ is the background photoionization rate, and Δ ≡ ρ/⟨ρ⟩ is the matter density relative to the mean density of the universe at the given epoch. In the second proportionality above, we have made the assumption that the local temperature of the gas has a polynomial relationship with the local density, T = T_0 Δ^(γ−1), where T_0 is the gas temperature at mean density and γ parametrizes the temperature-density relation, which encodes the thermal history of the IGM (e.g., Schaye et al. 1999; Ricotti et al. 2000; McDonald et al. 2001; Hui & Haiman 2003; see Meiksin 2009 for a detailed overview of the relevant physics). Over the past decade and a half, the 2000-2008 Sloan Digital Sky Survey (SDSS-I and -II; York et al. 2000; Stoughton et al. 2002, http://www.sdss.org) spectroscopic data have represented a dramatic improvement in the statistical power available to Lyα forest studies: McDonald et al. (2006) measured the 1-dimensional Lyα forest transmission power spectrum from ≈ 3000 SDSS quasar sightlines. This measurement was used to place significant constraints on cosmological parameters and large-scale structure (see, e.g., McDonald et al. 2005b; Seljak et al. 2005; Viel & Haehnelt 2006).
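The FGPA scaling above is simple enough to evaluate directly. The snippet below is a minimal sketch, with an assumed overall normalization tau_norm absorbing the redshift, T_0, and photoionization-rate dependence (in practice this normalization is fixed by matching an observed mean transmission).

```python
import numpy as np

def fgpa_tau(delta, gamma, tau_norm=1.0):
    """Fluctuating Gunn-Peterson approximation: optical depth as a power law
    in the matter overdensity Delta, tau ∝ Delta^(2 - 0.7*(gamma - 1))."""
    return tau_norm * delta ** (2.0 - 0.7 * (gamma - 1.0))

# Example: transmission F = exp(-tau) for a mildly overdense cell
F = np.exp(-fgpa_tau(delta=1.5, gamma=1.6, tau_norm=0.8))
```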
The McDonald et al. (2006) quasar sample, which in its time represented a ∼ 100× increase in sample size over previous data sets, is superseded by the Baryon Oscillation Spectroscopic Survey (BOSS, part of SDSS-III; Eisenstein et al. 2011; Dawson et al. 2013) quasar survey. This spectroscopic survey, which operated between fall 2009 and spring 2014, aimed to take spectra of ∼ 150,000 z_qso ≳ 2.2 quasars (Dawson et al. 2013) with the goal of constraining dark energy at z > 2 using transverse correlations of Lyα forest absorption (see, e.g., Slosar et al. 2011) to measure the baryon acoustic oscillation (BAO) scale. At the time of writing, the full BOSS survey is complete, with ∼ 170,000 high-redshift quasars observed, although this paper is based on the earlier sample of ∼ 50,000 BOSS quasars from SDSS Data Release 9 (DR9; Ahn et al. 2012; Pâris et al. 2012; Lee et al. 2013).
The quality of the individual BOSS Lyα forest spectra might appear at first glance inadequate for studying the astrophysics of the IGM, studies of which have to date been carried out largely with high-resolution, high-S/N spectra: the typical BOSS spectrum has S/N ∼ 2 per pixel, since the BAO analysis is optimized with large numbers of low signal-to-noise-ratio sightlines, densely sampled on the sky (McDonald & Eisenstein 2007; McQuinn & White 2011). It is therefore interesting to ask whether it is possible to model the various instrumental and astrophysical effects seen in the BOSS Lyα forest spectra to a sufficient level of accuracy to exploit this unprecedented statistical power.
In this paper, we will measure the probability distribution function (PDF) of the Lyα forest transmission, F ≡ exp(−τ), from BOSS. This one-point statistic, which was first studied by Jenkins & Ostriker (1991), is sensitive to astrophysical parameters such as the amplitude of matter fluctuations and the thermal history of the IGM. However, the transmission PDF is also highly sensitive to effects such as the pixel noise level, the resolution of the spectra, and systematic uncertainties in the placement of the quasar continuum level, especially in moderate-resolution spectra such as SDSS or BOSS. Desjacques et al. (2007) studied the transmission PDF from a sample of ∼ 3500 Lyα forest spectra from SDSS Data Release 3 (Abazajian et al. 2005). Using mock spectra generated from a log-normal model of the Lyα forest with parameters tuned to reproduce high-resolution, high-S/N spectra, they fitted for the estimated pipeline noise level and continuum-fitting errors in the SDSS spectra. They concluded that the noise levels reported by the SDSS pipeline were underestimated by ∼ 10%, consistent with the findings of McDonald et al. (2006). They also found that the quasar continuum level was systematically lower by ∼ 10% in comparison with a power-law extrapolated from redwards of the quasar Lyα line, with an RMS variance of ∼ 20%, although certain aspects of their study, e.g., the noise modelling and quasar continuum model, were rather crude.
We intend to take an approach distinct from that of Desjacques et al. (2007): instead of treating the noise and continuum as free parameters, we will attempt to measure the BOSS Lyα forest transmission PDF using a rigorous treatment of the noise and continuum-fitting, and then adopt a "forward-modeling" approach of trying to model the various instrumental effects as accurately as possible in mock spectra generated from detailed hydrodynamical simulations. Using the raw individual exposures and calibration data from BOSS, we will first implement a novel probabilistic method for co-adding the exposures, which will yield more accurate noise estimates as well as enable self-consistent noise modelling in mock spectra. Similarly, we will use a new method for continuum estimation called mean-flux regulated/principal component analysis (MF-PCA; Lee et al. 2012). This technique provides unprecedented continuum accuracy for noisy Lyα forest spectra: < 10% RMS errors for S/N ∼ 2 and < 5% RMS errors for S/N ≳ 5 spectra.
On the modeling side, we will use the detailed hydrodynamical IGM simulations of Viel et al. (2013a) as a basis. The mock spectra are then smoothed to BOSS resolution, have Lyman-limit systems (LLS) and metal contamination added, followed by the introduction of pixel noise based on our improved noise estimates. We will then self-consistently introduce continuum errors by applying our continuum-estimation procedure on the mock spectra.
With the increase in statistical power from the sheer number of BOSS spectra, and our improved modeling of the noise and continuum, we expect to significantly reduce the errors on the measured transmission PDF in comparison with Desjacques et al. (2007). This should enable us to place independent constraints on the shape of the underlying transmission PDF, and on the thermal history of the IGM as parametrized by the power-law temperature-density relation through γ and T_0.
The IGM temperature-density relationship is a topic of recent interest, as Bolton et al. (2008) and Viel et al. (2009) have found evidence of an inverted temperature-density relation, γ < 1, implying that voids are hotter than overdensities, in the IGM at z ∼ 2-3 from the transmission PDF of high-resolution, high-S/N Lyα forest spectra (Kim et al. 2007).
This result is in contrast with theoretical expectations of γ ≈ 1.6 (Miralda-Escudé & Rees 1994; Theuns et al. 1998; Hui & Haiman 2003), which arises from the balance between adiabatic cooling in the lower-density IGM and photoheating in the higher-density regions. Even inhomogeneous He II reionization, which is expected to flatten the IGM temperature-density relation (see, e.g., Furlanetto & Oh 2008; Bolton et al. 2009; McQuinn et al. 2009), is insufficient to account for the extremely low values of γ ∼ 0.5 estimated by the aforementioned authors (although inversions could occur at higher densities; see, e.g., Meiksin & Tittley 2012).
Indeed, earlier papers studying the temperature-density relationship using either the transmission PDF (McDonald et al. 2001) or measurements of the Doppler parameters and hydrogen column densities of individual forest absorbers (the so-called b-N_HI relation; e.g., Schaye et al. 1999; Ricotti et al. 2000; Rudie et al. 2012) have found no evidence of an inverted γ. In recent years, the decay of blazar gamma rays via plasma instabilities (although see Sironi & Giannios 2014) has been invoked as a possible mechanism to supply the heat necessary to flatten γ to the observed levels (Puchwein et al. 2012).
It would be desirable to perform an independent re-analysis of high-resolution data taking into account continuum-fitting bias (Lee 2012), to place these claims on a firmer footing. However, Lee & Spergel (2011) have argued that the complete SDSS DR7 (Abazajian et al. 2009) Lyα forest data set could have sufficient statistical power to place interesting constraints on γ, even assuming continuum-fitting errors at the ∼ 10% RMS level. Therefore, with the current BOSS data, we hope to model noise and resolution, as well as astrophysical systematics, with sufficient precision to place interesting constraints on the IGM thermal history. This paper is organized as follows: we first give a broad overview of the BOSS Lyα forest data set, followed by our measurement of the BOSS transmission PDF with detailed descriptions of our method of combining multiple raw exposures and of continuum estimation. We then discuss how we include various instrumental and astrophysical effects into our modeling of the transmission PDF, starting with hydrodynamical simulations. The model transmission PDF is then compared with the observed PDF to obtain constraints on the thermal parameters governing the IGM.
2. DATA

2.1. Summary of BOSS

BOSS (Dawson et al. 2013) is part of SDSS-III (Eisenstein et al. 2011; the other surveys are SEGUE-2, MARVELS, and APOGEE). The primary goal of the survey is to carry out precision baryon acoustic oscillation measurements at z ∼ 0.5 and z ∼ 2.5, from the luminous red galaxy distribution and the Lyα forest absorption field, respectively (see, e.g., Anderson et al. 2014; Busca et al. 2013; Slosar et al. 2013). Its eventual goal is to obtain spectra of ∼ 1.5 million luminous red galaxies and ∼ 170,000 z > 2.15 quasars over 4.5 years of operation.
BOSS is conducted on upgraded versions of the twin SDSS spectrographs (Smee et al. 2013) mounted on the 2.5 m Sloan telescope (Gunn et al. 2006) at Apache Point Observatory, New Mexico. One thousand optical fibers mounted on a plug-plate at the focal plane (spanning a 3° field of view) feed the incoming flux to the two identical spectrographs, of which 160-200 fibers per plate are allocated to quasar targets (see Ross et al. 2012; Bovy et al. 2011, for a detailed description of the quasar target selection). Both spectrographs split the light into a blue and a red camera that together cover 3610-10140 Å, with the dichroic overlap region occurring at around 6000 Å. The resolving power R ≡ λ/Δλ ranges from 1300 at the blue end to 2600 at the red end.
Each plate is observed for sufficiently long to achieve the S/N requirements set by the survey goals; typically, 5 individual exposures of 15 minutes are taken. The data are processed, calibrated, and combined into co-added spectra by the "idlspec2d" pipeline, followed by a pipeline which operates on the 1D spectra to classify objects and assign redshifts. However, as described later in this paper, we will generate our own co-added spectra from the individual exposures and other intermediate data products.
Data Cuts
In this paper we use data from the publicly-available SDSS Data Release 9 (DR9; Ahn et al. 2012). This includes 87,822 quasars at all redshifts that have been confirmed by visual inspection as described in Pâris et al. (2012). In Lee et al. (2013), we defined a further subset of 54,468 quasars with z_qso ≥ 2.15 that are suitable for Lyα forest analysis, and provided, in individual FITS files for each quasar, various products such as sky masks, masks for damped Lyα absorbers (DLAs), noise corrections, and continua; these are designed to ameliorate systematics in the BOSS spectra and aid in Lyα forest analysis (see Table 1 in Lee et al. 2013 for a full listing). While we use this Lee et al. (2013) catalog as a starting point, in this paper we will generate our own custom co-added spectra and noise estimates.
The typical signal-to-noise ratio of the BOSS Lyα forest quasars is low: S/N ≈ 2 per pixel within the Lyα forest; this is driven by a strategy to ensure a large number of sightlines over a large area in order to optimize the 3D Lyα forest BAO analysis (McDonald & Eisenstein 2007; McQuinn & White 2011), rather than increasing the S/N in individual spectra. However, for our analysis we wish to select a subset of BOSS Lyα forest sightlines with reasonably high S/N in order to reduce the sensitivity of our PDF measurement to inaccuracies in our modeling of the noise and continuum of the BOSS spectra. We therefore make a cut on S/N, including only sightlines that have a median S/N ≥ 6 per pixel within the Lyα forest (defined as the 1041-1185 Å region in the quasar restframe), where the S/N is defined with respect to the pipeline noise estimate (see Lee et al. 2013); this selects only the ∼ 10% of the spectra with the highest S/N. The 1041-1185 Å Lyα forest region of each quasar must also include at least 30 pixels (Δv = 2071 km/s) within one of our absorption redshift bins of z = 2.3, z = 2.6, and z = 3.0, with bin widths of Δz = 0.3 (see § 3.3).
We discard spectra with identified DLAs in the sightline, as listed in the 'DLA Concordance Catalog' used in the Lee et al. (2013) sample. This DLA catalog (W. Carithers 2014, in prep.) includes objects with column densities N_HI > 10^20 cm^-2; however, the completeness of this catalog is uncertain below N_HI = 10^20.3 cm^-2. We therefore discard only sightlines containing DLAs with N_HI ≥ 10^20.3 cm^-2, and take into account lower column-density absorbers in our subsequent modelling of mock spectra. At the relatively high S/N that we will work with (see below), the detection efficiency of DLAs is essentially 100% (see, e.g., Prochaska et al. 2005; Noterdaeme et al. 2012) and thus we expect our rejection of N_HI ≥ 10^20.3 cm^-2 DLAs to be quite thorough.
Measurements of the Lyα forest transmission PDF are known to be sensitive to the continuum estimate (Lee 2012), but in this paper we use an automated continuum fitter, MF-PCA (Lee 2012), that is less susceptible to biases introduced by manual continuum estimation. Moreover, unlike the laborious process of manually fitting continua on high-resolution spectra, the automated continuum estimation can be used to explore various biases in continuum estimation. For this purpose, we will use the same MF-PCA continuum estimation used in Lee et al. (2013), albeit with minor modifications as described in § 3.2. We select only quasars that appear to be well-described by the continuum basis templates, based on the goodness-of-fit to the quasar spectrum redwards of Lyα. This is flagged by the variable CONT_FLAG = 1 as listed in the Lee et al. (2013) catalog (see Table 3 in that paper). Broad Absorption Line (BAL) quasars, whose continua are difficult to estimate due to broad intrinsic absorption troughs, have already been discarded from the Lee et al. (2013) sample.

Another consideration is that the shape of the transmission PDF is affected by the resolution of the spectrum, especially since the BOSS spectrographs do not resolve the Lyα forest. The exact spectral resolution of a BOSS spectrum at a given wavelength varies as a function of both observing conditions and row position on the BOSS CCDs. The BOSS pipeline reports the wavelength dispersion at each pixel, σ_disp, in units of the co-added wavelength pixel size (binned such that Δ log10 λ = 10^-4). This is related to the resolving power by R ≈ (2.35 × 10^-4 ln 10 × σ_disp)^-1. Palanque-Delabrouille et al. (2013) have recently found, using their own analysis of the widths of the arc-lamp lines and bright sky emission lines, that the spectral dispersion reported by the pipeline has a bias that depends on the CCD row and increases with wavelength, up to 10% at λ ≈ 6000 Å. We will correct for this bias when creating mock spectra to compare with the data, as described in § 4.

Figure 1 (caption): Wavelength dispersions, σ_disp, for 236 BOSS quasar spectra randomly selected from the z = 2.3, 6 < S/N < 8 PDF bin; the right-hand ordinate shows the equivalent spectral resolution, R ≡ λ/Δλ, and the dashed red lines mark objects discarded from the analysis as outliers in spectral dispersion.

Figure 1 shows the (uncorrected) pixel dispersions from 236 BOSS quasars from the z = 2.3, S/N = 6-8 bin, as a function of wavelength at the blue end (λ = 3700-4200 Å) of the spectrograph. At fixed wavelength, there are outliers that contribute to the large spread in σ_disp, e.g., ranging from σ_disp ≈ 0.9-1.8 at 3700 Å. We therefore discard spectra with outlying values of σ_disp based on the following criterion: we first rank-order the spectra by their σ_disp value evaluated at the central wavelength of each PDF bin (i.e. λ = [4012, 4377, 4863] Å at z = [2.3, 2.6, 3.0]), and then discard spectra below the 5th percentile and above the 90th percentile. This is illustrated by the red dashed lines in Figure 1.
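This percentile-based rejection is straightforward to express in code; the sketch below is an illustration of the cut rather than the actual selection pipeline, with the array layout assumed.

```python
import numpy as np

def dispersion_outlier_mask(sigma_disp_at_center, lo_pct=5.0, hi_pct=90.0):
    """Keep spectra whose wavelength dispersion, evaluated at the central
    wavelength of the PDF bin, lies between the 5th and 90th percentiles.
    Returns a boolean mask over the input spectra."""
    sigma = np.asarray(sigma_disp_at_center)
    lo, hi = np.percentile(sigma, [lo_pct, hi_pct])
    return (sigma >= lo) & (sigma <= hi)
```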
Finally, since our noise estimation procedure uses the individual BOSS exposures, we discard objects that have fewer than three individual exposures available.
Our final data set comprises 3373 unique quasars with redshifts ranging from z_qso = 2.255 to z_qso = 3.811, and a median S/N of 8.08 per pixel. This data set represents only a small subsample of the BOSS DR9 quasar spectra, but is over two orders of magnitude larger than the high-resolution quasar samples previously used for transmission PDF analysis. Table 1 summarizes our data sample and the statistics of the redshift and S/N bins for which we measure the transmission PDF. Figure 2 shows histograms of the pixels used in our analysis, as a function of absorption redshift.

In this section, we will measure the Lyα forest transmission PDF from BOSS. In principle, the transmission PDF is simply the histogram of the transmitted flux in the Lyα forest after dividing by the quasar continuum. However, with the comparatively noisy BOSS data we need to ensure an accurate estimate of the pixel noise. We will therefore first describe a new probabilistic method for co-adding the individual BOSS exposures that will enable us to have an accurate noise estimate. We will also describe the continuum-estimation method with which we normalize the forest transmission.
Co-addition of Multiple Exposures and Noise Estimation

Since we intend to model BOSS spectra with modest S/N, we need an accurate estimate of the pixel noise that also allows us to separate out the contributions from Poisson noise due to the background and sky as well as read noise from the detector. In this subsection, we will construct an accurate probabilistic model of the flux and noise of the BOSS spectrograph, based on the individual exposure data that BOSS delivers.
The basic BOSS spectral data consist of a spectrum of each raw exposure, f_λi (inclusive of noise), an estimate of the sky, s_λi, and a calibration vector, S_λi, where i indicates the exposure out of the n_exp exposures taken (typically n_exp = 5 exposures of 15 minutes each, although this can vary with the requirement to achieve a given (S/N)^2 over each individual plug-plate, as determined by the overall BOSS survey strategy; see Dawson et al. 2013). The quantity s_λi is the actual sky model that was subtracted from the fiber spectra in the extraction. The calibration vector is defined as S_λi ≡ f_λi / f^N_i, with f^N_i being the flux of exposure i in units of photoelectrons. The idlspec2d pipeline then estimates the co-added spectrum of the true object flux, F_λ, from the raw individual exposures, sky estimates, and calibration vectors.
The BOSS data reduction pipeline also delivers noise estimates in the form of variance vectors, which are however known to be inaccurate (McDonald et al. 2006;Desjacques et al. 2007;Lee et al. 2013;Palanque-Delabrouille et al. 2013).
To quantify the fidelity of the BOSS noise estimate, we used the so-called 'side-band' method described in Lee et al. (2014a) and Palanque-Delabrouille et al. (2013), which uses the variance in flat, absorption-free regions of the quasar spectra to quantify the fidelity of the noise estimate. First, we randomly selected 10,000 BOSS quasars (omitting BAL quasars) from the Pâris et al. (2012) catalog in the redshift range 1.4 ≤ z_qso < 3.4, evenly distributed into 20 redshift bins of width Δz_qso = 0.1 (i.e., 500 objects per bin). We then consider the flat 1460 Å < λ_rest < 1510 Å spectral region in the quasar restframe, which is dominated by the smooth power-law continuum and relatively unaffected by broad emission lines (e.g., Vanden Berk et al. 2001; Suzuki 2006) or absorption lines. The pixel variance in this flat portion of the spectrum should therefore be dominated by spectral noise, allowing us to examine whether the noise estimate provided by the pipeline is accurate. We then evaluate the ratio σ_side/⟨σ_λ⟩, where σ_side is the pixel flux RMS in the restframe 1460 Å < λ_rest < 1510 Å region and ⟨σ_λ⟩ is the average pipeline noise estimate, both evaluated over that restframe interval.

Figure 3 (caption): A quantitative test of the noise estimation fidelity in the spectra. Each point shows the ratio of the pixel variance to the estimated noise variance, averaged over the restframe 1460 Å < λ_rest < 1510 Å flat spectral region of 500 BOSS quasars within redshift bins of Δz_qso = 0.1, plotted as a function of the corresponding observed wavelength of the flat spectral region. If there is no bias in the noise estimation, this ratio should be unity. The black asterisks show this quantity estimated using the BOSS pipeline co-added spectra and noise estimates, while the red triangles show the results from the MCMC co-addition and noise estimation procedure described in § 3.1; the MCMC method clearly provides a better noise estimation than the BOSS pipeline.

In Figure 3, this quantity is averaged over the 500 individual quasars per redshift bin and plotted as a function of the observed wavelength corresponding to λ = (1 + z_qso) × 1485 Å. With a perfect noise estimate, σ_side/⟨σ_λ⟩ should be unity at all wavelengths, but we see that the BOSS pipeline underestimates the true noise in the spectra blueward of ∼ 4500-5000 Å, by up to ∼ 15% at the blue end, with an overall tilt that changes over to an overestimate at longer wavelengths. Lee et al. (2013) and Palanque-Delabrouille et al. (2013) provide a set of correction vectors that can be applied to the pipeline noise estimates to bring the latter to within several percent of the true noise level across the wavelength coverage of the blue spectrograph.
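A schematic version of this side-band consistency check, for a single quasar, might look like the following; the input arrays and restframe masking are assumptions made for illustration.

```python
import numpy as np

def sideband_noise_ratio(flux, sigma_pipe, wave_rest):
    """Ratio of the empirical pixel RMS to the mean pipeline noise estimate,
    evaluated in the flat 1460-1510 A restframe window of one quasar.
    A ratio of unity indicates an unbiased noise estimate."""
    sel = (wave_rest > 1460.0) & (wave_rest < 1510.0)
    f = flux[sel]
    rms = np.sqrt(np.mean((f - f.mean()) ** 2))
    return rms / np.mean(sigma_pipe[sel])
```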
Unfortunately, these noise corrections are inadequate for our purposes, since we want to generate realistic mock spectra that have different realizations of the Lyα forest transmission field from the actual spectra, i.e., a different F_λ. We therefore require a method that not only accurately estimates the noise in a given BOSS spectrum, but also separates out the photon-counting and CCD read-noise (σ_RN) terms in the variance that results from applying the Horne (1986) optimal spectral extraction algorithm (Equation 4).
To resolve this issue, we apply our own novel statistical method to the individual BOSS exposures to generate co-added spectra while simultaneously estimating the corresponding noise parameters for each individual spectrum. This procedure, which uses a Gibbs-sampled Markov-Chain Monte Carlo (MCMC) algorithm, is described in detail in the Appendix. Initially, we attempted to model the noise with just a single constant noise parameter which rescales the read-noise term of Equation 4, but this was found to be inadequate. This is likely because an optimal extraction algorithm weights by the product of the S/N and object profile, causing the corresponding variance to have a non-linear dependence on the flux and sky level. Furthermore, systematic errors in the reduction, sky-subtraction and calibration will result in additional noise contributions which could depend on sky level, object flux, or wavelength, hence deviating from this simple model.
After considerable trial and error to find a model that best minimizes the bias illustrated in Figure 3, we settled on the form given in Equation 5, where the A_j are free parameters in our noise model, while the σ_disp(λ) factor in the second term (the pixel dispersion) provides a rough approximation for the wavelength dependence of the spot size (i.e. the size of the raw CCD image in the spatial direction). Meanwhile, σ_RN = 12 is the average CCD read-noise per wavelength bin in the BOSS spectra (D.J. Schlegel et al., in preparation). The quantities s_λ,i, S_λ,i, and σ_disp(λ) (sky flux, calibration vector, and dispersion, respectively) are taken directly from the BOSS pipeline.
In addition, we assume that the pixel noise can be modeled as a Gaussian distribution with a variance given by Equation 5. The first, photon-counting, term in the equation should formally be modeled as a Poisson distribution, but since the BOSS spectrograph always receives 30-40 counts even at the blue end, where the counts are lowest, it is reasonable to use the Gaussian approximation: even in the limit of low S/N (i.e. when the spectrum is dominated by the sky flux), the moderate resolution ensures that there are at least several dozen sky photons per pixel in each exposure.
For each BOSS spectrum, we use the MCMC procedure described in the Appendix to combine the multiple exposures while simultaneously estimating the noise parameters A j and true observed spectrum, F λ . With the optimal estimates of A j and F λ for a given spectrum, the estimated noise variance is then simply Equation 5.
An important advantage of the form in Equation 5 is that the object photon noise, ∝ F_λ, is explicitly separated out. This facilitates the construction of a mock spectrum with the same noise characteristics as a true spectrum, but with a different spectral flux. For example, a mock spectrum of the Lyα forest will have a very different transmission field than the original data, and so the variance due to object photon-counting noise can be added appropriately, in addition to contributions from the known sky and the read-noise term (Equation 5). Our empirical determination of the parameters governing this noise model for each individual spectrum forms a crucial ingredient in our forward model, which we will describe in § 4.
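As an illustration of how such a separable noise model can be reused to add noise to mocks with a different transmission field, the sketch below assumes a simple two-parameter form (an object+sky photon-counting term and a read-noise term with the σ_disp spot-size factor); the exact functional form of Equation 5 is not reproduced, so this is an assumed stand-in with the same structure.

```python
import numpy as np

def mock_noise_sigma(F_mock, sky, calib, sigma_disp, A, sigma_rn=12.0):
    """Per-pixel noise standard deviation for a mock spectrum: A[0] scales
    the (object + sky) photon-counting term, A[1] scales the read-noise
    term with the sigma_disp(lambda) factor. Assumed two-parameter form,
    not the paper's Equation 5."""
    var = (A[0] * calib * (F_mock + sky)
           + A[1] * calib**2 * sigma_disp * sigma_rn**2)
    return np.sqrt(np.clip(var, 0.0, None))

def add_mock_noise(F_mock, sky, calib, sigma_disp, A, rng=None):
    """Draw Gaussian noise consistent with the estimated noise parameters
    and add it to the noiseless mock flux."""
    rng = np.random.default_rng(rng)
    sigma = mock_noise_sigma(F_mock, sky, calib, sigma_disp, A)
    return F_mock + rng.normal(0.0, 1.0, size=F_mock.shape) * sigma
```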
Our MCMC procedure works for spectra from a single camera, either red or blue; we have not yet generalized it to combine blue and red spectra of each object. However, the spectral range of the blue camera alone (≈ 3600 − 6400Å) covers the Lyα forest up to z ∼ 5, i.e., most practical redshifts for Lyα forest analysis. For the purposes of this paper, we restrict ourselves to spectra from the blue camera alone.
In Figures 4 and 5, we show examples of co-added BOSS quasar spectra, using both the MCMC procedure and the standard BOSS pipeline. In the upper panels, the MCMC co-adds are not noticeably different from the BOSS pipeline, although the numerical values are different. In the lower panels, we show the estimated noise from both methods -the differences are larger than in the fluxes but still difficult to distinguish by eye.
We therefore return to the statistical analysis by calculating σ_side/⟨σ_λ⟩, the ratio of the pixel RMS to the average estimated noise in the flat 1460 Å < λ_rest < 1510 Å region of BOSS quasars; this ratio, computed for our MCMC co-adds, is plotted in Figure 3. With these new co-adds, we see that this ratio is within roughly ±3% of unity across the entire λ ∼ 3800-5000 Å wavelength range relevant to our subsequent analysis, with an overall bias of 1% (i.e. the noise is still underestimated by this level). Crucially, we have removed the strong wavelength dependence of σ_side/⟨σ_λ⟩ that was present in the standard pipeline, and we suspect most of the scatter about unity is caused by the limited number of quasars (500 per bin) available for this estimate, which will be mitigated by the larger number of quasar spectra available in the subsequent BOSS data releases. In principle, we could correct the remaining 1% noise bias, but since our selected spectra have S/N > 6, this remaining bias would smooth the forest transmission PDF by an amount roughly 1/25 of the average PDF bin width (ΔF = 0.05). As we shall see, there are other systematic uncertainties in our modeling that have much larger effects than this; we therefore regard our noise estimates as adequate for the subsequent transmission PDF analysis, without requiring any further correction.
Mean-Flux Regulated Continuum Estimation
In order to obtain the transmitted flux, F, of the Lyα forest, we first need to divide the observed flux, F_λ, by an estimate of the quasar continuum, c. We use the version of mean-flux regulated/principal component analysis (MF-PCA) continuum fitting described in Lee et al. (2013). Initially, PCA fitting with 8 eigenvectors is performed on each quasar spectrum redwards of the Lyα line (λ_rest = 1216-1600 Å) in order to obtain a prediction for the continuum shape in the λ_rest < 1216 Å Lyα forest region (e.g., Suzuki et al. 2005). The slope and amplitude of this initial continuum estimate are then corrected to agree with the Lyα forest mean transmission, F_cont(z), at the corresponding absorber redshifts, using a linear correction function.
The only difference in our continuum fitting from that in Lee et al. (2013) is that here we use the latest mean-flux measurements of Becker et al. (2013) to constrain our continua. Their final result yielded the power-law redshift evolution of the effective optical depth in the unshielded Lyα forest, defined in their paper as absorbers with N_HI ≤ 10^17.2 cm^-2 (although they only removed contributions from N_HI ≥ 10^19 cm^-2 absorbers). This is given by τ_Lyα,B13(z) = τ_0 [(1+z)/(1+z_0)]^β + C, with best-fit values of [τ_0, β, C] = [0.751, 2.90, −0.132] at z_0 = 3.5. However, the actual raw measurement made by Becker et al. (2013) is the effective total absorption within the Lyα forest region of their quasars, which also contains contributions from metals and optically-thick systems, τ_eff(z) = τ_Lyα,B13(z) + τ_metals + τ_LLS(z), where τ_metals and τ_LLS(z) denote the IGM optical depth contributions from metals and Lyman-limit systems, respectively. For the purposes of our continuum fitting, the quantity we require is τ_eff(z), since the τ_metals and τ_LLS(z) contributions are also present in our BOSS spectra. Becker et al. (2013) did not publish their raw τ_eff(z), therefore we must now 'uncorrect' the metal and LLS contributions from the published τ_Lyα,B13(z). The discussion below therefore attempts to retrace their footsteps and does not necessarily reflect our own beliefs regarding the actual level of these contributions. We find τ_metals = 0.02525 by simply averaging over the Schaye et al. (2003) metal correction tabulated by Faucher-Giguère et al. (2008) (i.e., the 2.2 ≤ z ≤ 2.5 values in Δz = 0.1 bins from their Table 4), which were used by Becker et al. (2013) to normalize their relative mean-flux measurements. Note that there is no redshift dependence in τ_metals in this context, because Becker et al. (2013) argued that the metal contribution does not vary significantly over their redshift range. Whether or not this is really true is unimportant to us at the moment, since we are merely 'uncorrecting' their measurement.
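A minimal sketch of this 'uncorrection' step is given below. The Becker et al. (2013) power-law parameters are taken from the text; the LLS contribution is left as a user-supplied callable (our Equation 8-style integral), defaulting to zero as a placeholder.

```python
import numpy as np

def tau_lya_b13(z, tau0=0.751, beta=2.90, C=-0.132, z0=3.5):
    """Becker et al. (2013) power-law fit to the Lya-forest-only
    effective optical depth."""
    return tau0 * ((1.0 + z) / (1.0 + z0)) ** beta + C

def f_cont(z, tau_metals=0.02525, tau_lls=None):
    """'Uncorrected' mean transmission used to regulate the continua:
    add back the metal and LLS optical-depth contributions that
    Becker et al. (2013) removed. tau_lls(z) should return the LLS
    contribution at redshift z; here it defaults to zero."""
    tau_eff = tau_lya_b13(z) + tau_metals
    if tau_lls is not None:
        tau_eff += tau_lls(z)
    return np.exp(-tau_eff)
```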
The LLS contribution to the optical depth is reintroduced by integrating the rest-frame equivalent width, W_0(N_HI, b), over f(N_HI, b, z), the column-density distribution of neutral hydrogen absorbers, where b is the Doppler parameter (for W_0 we use the analytic approximation given by Draine 2011, valid in the saturated regime). Following Becker et al. (2013), we adopted a fixed value of b = 20 km/s and assumed that f(N_HI, z) = f(N_HI) dn/dz, where f(N_HI) is given by the z = 3.7 broken power-law column density distribution of Prochaska et al. (2010) and dn/dz ∝ (1 + z)^2. Becker et al. (2013) had corrected for super-LLSs and DLAs in the column-density range [N_min, N_max].

Figure 4 (caption): Examples of co-added BOSS spectra from the MCMC procedure described in § 3.1 (red) and from the BOSS pipeline (black) are shown in the upper panels, in the restframe interval 1035-1260 Å; the corresponding pixel noise estimates are shown in the lower panels. The blue line shows the MF-PCA continuum used to extract the Lyα forest transmitted flux, while the vertical dotted lines delineate the 1041-1185 Å restframe interval which we define as the Lyα forest. The continuum discontinuity at λ_rest = 1185 Å is where we have applied the 'mean-flux regulation' correction to the Lyα forest. In the top figure, masked pixels have had their flux and noise set to zero. The signal-to-noise ratios for the two spectra are S/N ≈ 11 (top) and S/N ≈ 6 (bottom) within the Lyα forest.
This estimate of the raw absorption, F_eff(z) = exp[−τ_eff(z)], is now the constraint used to fit the continua of the BOSS quasars, i.e. we set F_cont = F_eff(z). Note that in our subsequent modelling of the data, we will use the same F_cont(z) to fit the mock spectra to ensure an equal treatment between data and mocks. Since F_cont(z) includes a contribution from N_HI < 10^20.3 cm^-2 optically-thick systems, our mock spectra will need to account for these systems, as we shall describe in § 4.2.
The MF-PCA technique requires spectral coverage in the quasar restframe interval 1000-1600 Å. However, as noted in the previous section, we work with co-added BOSS spectra from only the blue cameras, covering λ ≲ 6400 Å; this covers the full 1000-1600 Å interval required for the PCA fitting only for z ≲ 3 quasars. However, the differences in the fluxes between our MCMC co-adds and the BOSS pipeline co-adds are relatively small, and we do not expect the relative shape of the quasar spectrum to vary significantly. We can thus carry out PCA fitting on the BOSS pipeline co-adds, which cover the full observed range (3700-10000 Å), to predict the overall quasar continuum shape. This initial prediction is then used to perform mean-flux regulation using the MCMC co-adds and noise estimates, to fine-tune the amplitude of the continuum fits.
Figure 5 (caption): Same as Figure 4, but with the 1050 Å < λ_rest < 1090 Å restframe region expanded to better illustrate the differences between the MCMC and pipeline co-added spectra.

The observed flux, f_λ, is divided by the continuum estimate, c, to derive the Lyα forest transmission, F = f_λ/c. For each quasar, we define the Lyα forest as the rest wavelength interval 1041-1185 Å. This wavelength range conservatively avoids the quasar's Lyβ/O VI emission-line blend by Δv ∼ 3000 km/s on the blue end, as well as the proximity zone close to the quasar redshift by staying Δv ∼ 10,000 km/s from the nominal quasar systemic redshift. We are now in a position to measure the transmission PDF, which is simply the histogram of pixel transmissions F ≡ exp(−τ).
3.3. Observed transmission PDF from BOSS

Since the Lyα forest evolves as a function of redshift, we measure the BOSS Lyα forest transmission PDF in three bins with mean redshifts of z = 2.3, z = 2.6, and z = 3.0, and bin sizes of Δz = 0.3. These redshift bins were chosen to match the simulation outputs (§ 4.1) that we will later use to make mock spectra to compare with the observed PDF; this choice of binning leads to the gap at 2.75 < z < 2.85 seen in Figure 2. In this paper, we restrict ourselves to z ≲ 3, since the primary purpose is to develop the machinery to model the BOSS spectra. In subsequent papers, we will apply these techniques to analyze the transmission PDF in the full 2 ≲ z ≲ 4 range using the larger samples of subsequent BOSS data releases (DR10; Ahn et al. 2014).
Another consideration is that the transmission PDF is strongly affected by the noise in the data. While we will model this effect in detail ( § 4), there is a large distribution of S/N within our subsample ranging from S/N = 6 per pixel to S/N ∼ 20 per pixel. We therefore further divide the sample into three bins depending on the median S/N per pixel within the Lyα forest: 6 < S/N < 8, 8 < S/N < 10, S/N > 10. The consistency of our results across the S/N bins will act as an important check for the robustness of our noise model ( § 3.1).
We now have nine redshift and S/N bins in which we evaluate the transmission PDF from BOSS; the sample sizes are summarized in Table 1. For each bin, we have selected quasars that have at least 30 Lyα forest pixels within the required redshift range and which occupy the quasar restframe interval 1041-1185 Å. Table 1 summarizes the number of spectra and pixels which contribute to each bin. The co-added spectrum is divided by its MF-PCA continuum estimate (described in the previous section) to obtain the transmitted flux, F, in the desired pixels. We then compute the transmission PDF from these pixels.
Physically, the possible values of the Lyα forest transmission range from F = 0 (full absorption) to F = 1 (no absorption). However, the noise in the BOSS Lyα forest pixels, as well as continuum-fitting errors, leads to pixels with F < 0 and F > 1. We therefore measure the transmission PDF in the range −0.2 < F < 1.5, in 35 bins of width ΔF = 0.05, normalized such that the area under the curve is unity. The statistical errors on the transmission PDF are estimated by the following method: we concatenate all the individual Lyα forest segments that contribute to each PDF, and then carry out bootstrap resampling over Δv = 2 × 10^4 km/s segments with 200 iterations. This choice of Δv corresponds to ∼ 250-300 Å in the observed frame at z ∼ 2-3; according to Rollinde et al. (2013), this choice of Δv and number of iterations should be sufficient for the errors to converge (see also Appendix B in McDonald et al. 2000).
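A compact sketch of this measurement, assuming the pixel transmissions have already been grouped into segments of the stated velocity length, is given below; the bin edges and data layout are illustrative.

```python
import numpy as np

def transmission_pdf(F, bins=np.arange(-0.2, 1.525, 0.05)):
    """Normalized histogram of pixel transmissions over -0.2 < F < 1.5."""
    pdf, edges = np.histogram(F, bins=bins, density=True)
    return pdf, edges

def bootstrap_pdf_errors(segments, n_boot=200,
                         bins=np.arange(-0.2, 1.525, 0.05), rng=None):
    """Bootstrap over forest segments: `segments` is a list of 1D
    transmission arrays, one per ~2e4 km/s segment. Returns the bin-wise
    standard deviation; the full covariance matrix can be formed from the
    stacked realizations in the same way."""
    rng = np.random.default_rng(rng)
    realizations = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(segments), size=len(segments))
        resampled = np.concatenate([segments[i] for i in idx])
        realizations.append(np.histogram(resampled, bins=bins, density=True)[0])
    return np.std(realizations, axis=0)
```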
In Figure 6, we show the Lyα forest transmission PDF measured from the various redshift and S/N subsamples in our BOSS sample. At fixed redshift, the PDFs from the lower-S/N data have a broader shape, as expected from the increased noise variance. With increasing redshift, there are more absorbed pixels, causing the transmission PDFs to shift towards lower F values. As discussed previously, there is a significant portion of F > 1 pixels due to a combination of pixel noise and continuum errors, with a greater proportion of F > 1 pixels in the lower-S/N subsamples, as expected. Unlike the high-resolution transmission PDF, at z ≲ 3 there are few pixels that reach F = 0. This is due to the resolution of the BOSS spectrograph, which smooths over the observed Lyα forest such that even saturated Lyα forest absorbers with N_HI ∼ 10^14-10^16 cm^-2 rarely reach transmission values of F ≲ 0.3. The pixels with F ≲ 0.3 are usually contributed either by blends of absorbers or by optically-thick LLSs (see also Pieri et al. 2014).
An advantage of our large sample size is that we are also able to directly estimate the error covariances, C_boot, via bootstrap resampling; an example is shown in Figure 7. In contrast to the Lyα forest transmission PDF from high-resolution data, which has significant off-diagonal covariances (Bolton et al. 2008), the error covariance of the BOSS transmission PDF is nearly diagonal, with just some small correlations between neighboring bins, although we also see some anti-correlation between transmission bins at F ∼ 0.8 and F ∼ 1.
It is interesting to compare the transmission PDF from our data with that measured by Desjacques et al. (2007) from SDSS DR3. This comparison is shown in Figure 8, in which the transmission PDFs calculated from SDSS DR3 Lyα forest spectra with S/N > 4 (kindly provided by Dr. V. Desjacques) are shown for two redshift bins, juxtaposed with the BOSS transmission PDFs calculated from spectra with the same redshift and S/N cuts.
While there is some resemblance between the two PDFs, the most immediate difference is that the Desjacques et al. (2007) PDFs are shifted to lower transmission values, i.e., the mean transmission, ⟨F⟩, is considerably smaller than that from our BOSS data: ⟨F⟩(z = 2.4) = 0.73 and ⟨F⟩(z = 3.0) = 0.64 from their measurement, whereas the BOSS PDFs have ⟨F⟩(z = 2.4) = 0.80 and ⟨F⟩(z = 3.0) = 0.70. This difference arises because Desjacques et al. (2007) used a power-law continuum (albeit with corrections for the weak emission lines in the quasar continuum) extrapolated from λ_rest > 1216 Å in the quasar restframe; this does not take into account the power-law break that appears to occur in low-redshift quasar spectra at λ_rest ≈ 1200 Å (Telfer et al. 2002; Suzuki 2006). Later in their paper, Desjacques et al. (2007) indeed conclude that this must be the case in order to be consistent with other ⟨F⟩(z) measurements. Our continua, in contrast, have been constrained to match existing measurements of ⟨F⟩(z), for which there is good agreement between different authors at z ≲ 3 (e.g., Faucher-Giguère et al. 2008; Becker et al. 2013).
Another point of interest in Figure 8 is that the error bars of the BOSS sample are considerably smaller than those of the earlier measurement. This difference is largely due to the significantly larger sample size of BOSS. The proportion of pixels with F ≈ 0 appears to be smaller in the BOSS PDFs compared with the older data set, but this is because Desjacques et al. (2007) did not remove DLAs from their data.

Figure 7 (caption): 2D density plot of the error covariance matrix for the Lyα forest transmission PDF from the z = 2.6, S/N = 8-10 BOSS subsample as a function of transmission bins, along with (bottom) the corresponding correlation function. The covariance matrix was estimated through bootstrap resampling, and the values have been multiplied by 10^4 for clarity. The covariances are largely diagonal, except for some cross-correlations between neighboring bins.

Figure 8 (caption): Comparison of the BOSS transmission PDFs with those measured from SDSS DR3 by Desjacques et al. (2007); only sightlines with S/N > 4 were used in evaluating these PDFs. The lower average transmission of the DR3 PDFs arises because Desjacques et al. (2007) directly extrapolated a power-law from λ_rest > 1216 Å for their continuum estimates, which does not account for the flattening of the quasar continuum at λ_rest ∼ 1200 Å; our BOSS spectra, in contrast, have been normalized to mean-transmission values in agreement with the latest measurements.
We next describe the creation of mock Lyα absorption spectra designed to match the properties of the BOSS data.
MODELING OF THE BOSS TRANSMISSION PDF
In this section, we will describe simulated Lyα forest mock spectra designed, through a 'forward-modelling' process, to have the same characteristics as the BOSS spectra, for comparison with the observed transmission PDFs described in the previous section. For each BOSS spectrum which contributed to our transmission PDFs in the previous section, we will take the Lyα absorption from randomly selected simulation sightlines, and then introduce the characteristics of the observed spectrum using auxiliary information returned by our pipeline.
Starting with simulated spectra from a set of detailed hydrodynamical IGM simulations, we carry out the following steps, which we will describe in turn in the subsequent subsections. The effect of each step on the observed transmission PDF is illustrated in Figure 9.

Hydrodynamical Simulations

As the basis for our mock spectra, we use hydrodynamic simulations run with a modification of the publicly available GADGET-2 code. This code implements a simplified star-formation criterion (Springel et al. 2005) that converts all gas particles that have an overdensity above 1000 and a temperature below 10^5 K into star particles (see Viel et al. 2004). The simulations used are described in detail in Becker et al. (2011) and in Viel et al. (2013a).
The reference model that we use is a box of length 20 h^-1 comoving Mpc with 2 × 512^3 gas and cold dark matter particles (with a gravitational softening length of 1.3 h^-1 kpc) in a flat ΛCDM universe with cosmological parameters Ω_m = 0.274, Ω_b = 0.0457, n_s = 0.968, H_0 = 70.2 km/s/Mpc and σ_8 = 0.816, in agreement with both WMAP-9yr (Komatsu et al. 2011) and Planck data (Planck Collaboration et al. 2013). The initial condition power spectra are generated with CAMB (Lewis et al. 2000). For the boxes considered in this work, we have verified that the transmission PDF has converged in terms of box size and resolution.
We explore the impact of different thermal histories on the Lyα forest by modifying the ultraviolet (UV) background photo-heating rates in the simulations, as done in, e.g., Bolton et al. (2008). A power-law temperature-density relation, T = T_0 Δ^(γ−1), arises in the low-density IGM (Δ < 10) as a natural consequence of the interplay between photo-heating and adiabatic cooling (Gnedin & Hui 1998). The value of γ within a simulation can be modified by varying a density-dependent heating term (see, e.g., Bolton et al. 2008). We consider a range of values for the temperature at mean density, T_0, and the power-law index of the temperature-density relation, γ, based on the observational measurements presented recently by Becker et al. (2011). These consist of a set of three different indices for the temperature-density relation, γ(z = 2.5) ∼ 1.0, 1.3, 1.6, that are kept roughly constant over the redshift range z = 2-6, and three different temperatures at mean density, T_0(z = 2.5) ∼ [11000, 16000, 21500] K, which evolve with redshift, yielding a total of nine different thermal histories. Between z = 2 and z = 3 there is some temperature evolution and the IGM becomes hotter at low redshift; at z = 2.3, the models have T_0 ∼ [13000, 18000, 23000] K. We refer to the intermediate-temperature model as our 'reference' model, or T_REF, while the hot and cold models are referred to as T_HOT and T_COLD, respectively. The values of T_0 of our simulations at the various redshifts are summarized in Table 2.
Approximately 4000 core hours were required for each simulation run to reach z = 2. The physical properties of the Lyα forest obtained from the TreePM/SPH code GADGET-2 are in agreement at the percent level with those inferred from the moving-mesh code AREPO (Bird et al. 2013) and with the Eulerian code ENZO (O'Shea et al. 2004).
For this study, the simulation outputs were saved at z = [2.3, 2.6, 3.0], from which we extract 5000 optical-depth sightlines binned to 2048 pixels each. To convert these to transmission spectra, the optical depths were rescaled such that the skewers collectively yielded a desired mean transmission, ⟨F_Lyα⟩ ≡ exp(−τ_Lyα). For our fiducial models, we would like to use the mean-transmission values estimated by Becker et al. (2013), which we denote as ⟨F_Lyα,B13⟩ ≡ exp(−τ_Lyα,B13). However, their estimates assume certain corrections for optically-thick systems and metal absorption. We therefore add back in the corrections they made (see the discussion in § 3.2) to obtain their 'raw' measurement of ⟨F⟩ that includes all optically-thick systems and metals, and then remove these contributions assuming our own LLS and metal absorption models (see below).
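The rescaling step amounts to finding a single multiplicative factor for the optical depths that reproduces the target mean transmission. A minimal sketch using a standard root-finder is shown below; the bracketing interval is an assumption.

```python
import numpy as np
from scipy.optimize import brentq

def rescale_tau(tau_skewers, target_mean_flux):
    """Find the constant A such that the simulated skewers collectively
    reproduce the desired mean transmission <exp(-A*tau)> = target, then
    return the rescaled optical depths."""
    def mean_flux_diff(A):
        return np.mean(np.exp(-A * tau_skewers)) - target_mean_flux
    A = brentq(mean_flux_diff, 1e-3, 1e3)  # assumed bracketing interval
    return A * tau_skewers
```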
Later in the paper, we will argue that our PDF analysis in fact places independent constraints on F Lyα .
Lyman-limit systems
In principle, all optically-thick Lyα absorbers such as Lyman-limit systems (LLSs) and damped Lyα absorbers (DLAs) should be discarded from Lyα forest analyses, since they do not trace the underlying matter density field in the same way as the optically-thin forest (Equation 1), and require radiative transfer simulations to accurately capture their properties (e.g., Rahmati et al. 2013).
While DLAs are straightforward to identify through their saturated absorption and broad damping wings even in noisy BOSS data (see, e.g., Noterdaeme et al. 2012), the detection completeness of optically-thick systems through their Lyα absorption drops rapidly at N_HI ≲ 10^20 cm^-2. Even in high-S/N, high-resolution spectra, optically-thick systems can only be reliably detected through their Lyα absorption at N_HI ≳ 10^19 cm^-2 ("super-LLSs"). Below these column densities, optically-thick systems can be identified either through their restframe 912 Å Lyman limit (albeit only one per spectrum) or using higher-order Lyman-series lines (e.g., Rudie et al. 2013). Neither of these approaches has been applied in previous Lyα forest transmission PDF analyses (McDonald et al. 2000; Kim et al. 2007; Calura et al. 2012; Rollinde et al. 2013), so arguably all of these analyses are contaminated by LLSs.
Instead of attempting to remove LLSs from our observed spectra, we incorporate them into our mock spectra through the following procedure. For each PDF bin, we evaluate the total redshift pathlength of the contributing BOSS spectra (and corresponding mocks); this quantity is summarized in Table 1. This is multiplied by l_LLS(z), the number of LLSs per unit redshift, to give the total number of LLSs expected within our sample. We used the published estimates of this quantity by Ribaudo et al. (2011), which are valid over 0.24 < z < 4.9 (Equation 10). Note that the value l_z0 = 0.30 given in Table 6 of Ribaudo et al. (2011) is actually erroneous; the correct normalization, which is used in Equation 10 and is consistent with the data in their paper, is l_z0 = 0.1157. Dr. J. Ribaudo, in private communication, has concurred with this conclusion.
After estimating the total number of LLSs in our mock spectra, l_LLS(z) Δz, we add them at random points within our set of simulated optical-depth skewers. We also experimented with adding LLSs such that they are correlated with regions that already have high column density (e.g., Font-Ribera & Miralda-Escudé 2012), but we found little significant change to the transmission PDF and therefore stick to the less computationally-intensive random LLSs.
For each model LLS, we then draw a column density using the published LLS column-density distribution, f(N_HI), from Prochaska et al. (2010). This distribution is measured at z ≈ 3.7, so we make the assumption that f(N_HI) does not evolve with redshift between 2 ≲ z ≲ 3.7. For our column densities of interest, this distribution is represented by broken power-laws (Equation 11). For the normalizations k_1 and k_2, we demand that the distribution integrates to the expected LLS incidence (Equation 12) and require both power-laws to be continuous at N_HI = 10^19.0 cm^-2. These constraints produce k_1 = 10^-4.505 and k_2 = 10^3.095. After drawing a random value for the column density of each LLS, we add the corresponding Voigt profile to the optical depth in the simulated skewer.
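Drawing column densities from a power-law segment of such a distribution can be done by inverse-transform sampling, as sketched below; the slope and column-density limits passed in the example are illustrative values for the low-N_HI branch, not a restatement of the full broken power-law fit.

```python
import numpy as np

def sample_power_law_nhi(n, slope, n_min, n_max, rng=None):
    """Draw n column densities from f(N_HI) ∝ N_HI^slope on [n_min, n_max]
    by inverse-transform sampling (slope != -1 assumed)."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=n)
    a = slope + 1.0
    return (u * (n_max**a - n_min**a) + n_min**a) ** (1.0 / a)

# Example: 100 LLS column densities between 10^17.5 and 10^19 cm^-2
nhi = sample_power_law_nhi(100, slope=-0.8, n_min=10**17.5, n_max=10**19.0)
```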
In addition to the LLSs with column densities of 10^17.5 cm^-2 < N_HI < 10^20.3 cm^-2, which are defined to have τ_HI ≥ 2, there is also a population of partial Lyman-limit systems (pLLSs) that are not well captured in our hydrodynamical simulations, since they have column densities (10^16.5 cm^-2 ≲ N_HI < 10^17.5 cm^-2) at which radiative transfer effects become significant (τ_HI ≳ 0.1). However, the incidence rates and column-density distribution of pLLSs are ill-constrained, since they are difficult to detect in normal LLS searches. We therefore account for the pLLSs by extrapolating the low end of the power-law distribution in Equation 11 down to N_HI = 10^16.5 cm^-2, i.e.
f(10^16.5 cm^-2 < N_HI < 10^17.5 cm^-2) = k_1 N_HI^-0.8.  (13)

This simple extrapolation does not take into account constraints from the mean free path of ionizing photons (e.g., Prochaska et al. 2010), which predict a steeper slope for the pLLS distribution, but we will explore this later in § 5.2.
Comparing the integral of this extrapolated pLLS distribution with Equation 12 yields the expected pLLS incidence rate relative to that of the LLSs, and we proceed to randomly add pLLSs to our mock spectra in the same way as the LLSs.
The other free parameter in our LLS model is their effective b-parameter distribution. However, due to the observational difficulty of identifying N_HI ≲ 10^18.5 cm^-2 LLSs, the b-parameter distribution of this population has, to our knowledge, never been quantified. Due to this lack of knowledge, it is common to simply adopt a single b-value when attempting to model LLSs (e.g., Font-Ribera & Miralda-Escudé 2012; Becker et al. 2013). We therefore assume that all our pLLSs and LLSs have a b-parameter of b = 70 km/s, similar to DLAs (Prochaska & Wolfe 1997), an 'effective' value meant to capture the blending of multiple Lyα components. However, the b-parameter for this population of absorbers is a highly uncertain quantity and, as we shall see, it will need to be modified to provide a satisfactory fit to the data, although this will turn out not to strongly affect our conclusions regarding the IGM temperature-density relationship.
Spectral Resolution
The spectral resolution of SDSS/BOSS spectra is R ≡ λ/∆λ ≈ 1500 − 2500 (Smee et al. 2013). The exact value varies significantly both as a function of wavelength, and across different fibers and plates depending on observing conditions ( Figure 1).
For each spectrum, the BOSS pipeline provides an estimate of the 1σ wavelength dispersion at each pixel, σ_disp, in units of the co-added wavelength grid size (Δ log10 λ = 10^-4). The spectral resolution at that pixel can then be obtained from the dispersion through the conversion R ≈ (2.35 × 10^-4 ln 10 × σ_disp)^-1. Figure 1 shows the pixel dispersions from 236 randomly-selected BOSS quasars as a function of wavelength at the blue end of the spectrograph. Even at fixed wavelength, there is a considerable spread in the dispersion, e.g., ranging from σ_disp ≈ 0.9-1.8 at 3700 Å. The value of σ_disp typically decreases with wavelength (i.e., the resolution increases).
In their analysis of the Lyα forest 1D transmission power spectrum, Palanque-Delabrouille et al. (2013) made their own study of the BOSS spectral resolution by directly analysing the line profiles of the mercury and cadmium arc lamps used in the wavelength calibration. They found that the pipeline underestimates the spectral resolution as a function of fiber position (i.e. CCD row) and wavelength: the discrepancy is < 1% at blue wavelengths and near the CCD edges, but increases to as much as 10% at λ ∼ 6000Å near the center of the blue CCD (c.f. Figure 4 in Palanque-Delabrouille et al. 2013). Our analysis is limited to λ ≤ 5045Å, i.e. z ≤ 3.15, where the discrepancy is under 4%. Nevertheless, we implement these corrections to the BOSS resolution estimate to ensure that we model the spectral resolution to an accuracy of < 1%.
For each BOSS Lyα forest segment that contributes to the observed transmission PDFs discussed in § 3.3, we concatenate randomly-selected transmission skewers from the simulations described in the previous section. This is because the simulation box size of L = 20 h^-1 Mpc (∆v ∼ 2,000 km s^-1) is significantly shorter than the path length of our redshift bins (∆z = 0.3, or ∆v ≈ 27,000 km s^-1). This ensures that each BOSS spectrum in our sample has a mock spectrum that is exactly matched in path length.
We then directly convolve the simulated skewers with a Gaussian kernel whose standard deviation varies with wavelength, using the resolution estimated from the corresponding real spectrum multiplied by the Palanque-Delabrouille et al. (2013) correction described above.
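A minimal sketch of such a variable-width convolution, assuming the kernel width has already been expressed in co-added pixel units at each pixel (e.g., via the conversion above, including the resolution correction); a brute-force loop is adequate for this purpose:

import numpy as np

def smooth_variable_gaussian(flux, sigma_pix):
    # Convolve 'flux' with a Gaussian whose 1-sigma width (in pixels)
    # varies from pixel to pixel, given by 'sigma_pix'.
    n = len(flux)
    out = np.empty(n)
    x = np.arange(n)
    for i in range(n):
        s = max(sigma_pix[i], 1e-3)
        lo = max(0, i - int(5 * s) - 1)
        hi = min(n, i + int(5 * s) + 2)
        w = np.exp(-0.5 * ((x[lo:hi] - i) / s) ** 2)
        out[i] = np.sum(w * flux[lo:hi]) / np.sum(w)
    return out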
Metal Contamination
Metal absorption along our observed Lyα forest sightlines acts as a contaminant, since its presence alters the observed statistics of the Lyα forest. In high-resolution data, this contamination is usually treated by directly identifying and masking the metal absorbers, although in the presence of line blending it is unclear how thorough this approach can be.
With the lower S/N and moderate resolution of the BOSS data, direct metal identification and masking is not a viable approach. Furthermore, most of the weak metal absorbers seen in high-resolution spectra are not resolved in the BOSS data.
Rather than removing metals from the BOSS Lyα forest spectra, we instead add metals as observed in lower-redshift quasar spectra. In other words, we add absorbers observed in the rest-frame λ_rest ≈ 1260-1390 Å region of lower-redshift quasars with 1 + z_qso ≈ (1216 Å/1300 Å)(1 + z), such that the observed wavelengths are matched to the Lyα forest segment with average redshift z. Figure 11 is a cartoon that illustrates this concept. This method makes no assumption about the nature of the metal absorption in the Lyα forest, and includes all resolved metal absorption spanning the whole range of redshifts down to z ∼ 0. The disadvantage of this method is that it does not include metals with intrinsic wavelengths λ ≲ 1300 Å, but the relative contribution of such metal species to the transmission PDF should be small^22, since most of the metal contamination comes from low-redshift (z ≲ 2) C IV and Mg II.
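For concreteness, the redshift-matching of the 'sideband' quasars can be written as a one-line helper (illustrative code; with the 1300 Å matching point, a z = 2.3 forest segment calls for z_qso ≈ 2.1 sideband quasars):

def sideband_quasar_redshift(z_forest, lam_lya=1216.0, lam_sideband=1300.0):
    # Redshift of a lower-z quasar whose ~1300 A sideband covers the same
    # observed wavelengths as the Lya forest at redshift z_forest
    return (lam_lya / lam_sideband) * (1.0 + z_forest) - 1.0

print(sideband_quasar_redshift(2.3))   # ~2.09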
We use a metal catalogue generated by B. Lundgren et al. (in prep; see also Lundgren et al. 2009), which lists absorbers in SDSS (Schneider et al. 2010) and BOSS quasar spectra (Pâris et al. 2012); the SDSS spectra were included in order to increase the number of z_qso ≈ 1.9-2.0 quasars needed to introduce metals into the z = 2.3 Lyα forest mock spectra, which are not well sampled by the BOSS target selection (Ross et al. 2012). We emphasize that we work with the 'raw' absorber catalog, i.e. the individual absorption lines have not been identified in terms of metal species or redshift. For each quasar, the catalog provides a line list with the observed wavelength, equivalent width (EW, W_r), full-width at half-maximum (FWHM), and detection S/N, W_r/σ_Wr. ^22 Si III is an obvious exception, but we will later account for this omission in our error bars (§5.3).
To ensure a clean catalog, we use only W_r/σ_Wr ≥ 3.5 absorbers that were identified from quasar spectra with S/N > 15 per angstrom redwards of Lyα. The latter criterion ensures that even relatively weak lines (with EW ≲ 0.5 Å) are accounted for in our catalog. Figure 12 shows an example of the lower-redshift quasar spectra that we use for the metal modelling.
However, we want to add a smooth model of the metal-line absorption to our mock spectra, rather than adding in a noisy spectrum. We therefore use the following simple model: for each Lyα forest segment we wish to model at redshift z, we select an absorber line-list from a random quasar with 1 + z_qso ≈ (1216 Å/1300 Å)(1 + z). We next assume that all resolved metals in the SDSS/BOSS spectra are saturated and thus in the flat regime of the curve-of-growth, in which the equivalent width is approximately W_r ≈ (2 b λ / c) √(ln τ_0), where τ_0 is the optical depth at line center, b is the velocity width, and c is the speed of light. In the saturated regime, W_r is mostly sensitive to changes in b while being highly insensitive to changes in τ_0. We can thus adopt τ_0 as a global constant and solve for b, given the W_r of each listed absorber in the selected 'sideband' quasar.
We have found that τ_0 = 3 provides a good fit for most of the absorbers. We then add the corresponding Gaussian optical-depth profile into our simulated skewers, centered at the same observed wavelength, λ, as the real absorber. The red curve in Figure 12 shows our model for the observed absorbers, using just the observed wavelength, λ, and equivalent width, W_r, from the absorber catalog.
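A minimal sketch of this step, assuming the flat-part curve-of-growth approximation quoted above (function names and the observed-frame treatment of W_r are illustrative, not the exact fitting function used in the analysis):

import numpy as np

C_KMS = 2.998e5
TAU0 = 3.0   # global line-centre optical depth assumed for all metals

def b_from_ew(w_obs, lam_obs, tau0=TAU0):
    # Velocity width b [km/s] of a saturated line with equivalent width
    # w_obs [A] at observed wavelength lam_obs [A]:
    # W ~ (2 b lam / c) sqrt(ln tau0)  =>  b = W c / (2 lam sqrt(ln tau0))
    return w_obs * C_KMS / (2.0 * lam_obs * np.sqrt(np.log(tau0)))

def add_metal_to_tau(wave, tau, lam_obs, w_obs, tau0=TAU0):
    # Add a Gaussian optical-depth profile for one metal absorber to the
    # skewer 'tau' defined on the observed wavelength grid 'wave' [A].
    b = b_from_ew(w_obs, lam_obs, tau0)                # km/s
    sigma_lam = (b / np.sqrt(2.0)) * lam_obs / C_KMS   # b = sqrt(2) * sigma_v
    tau += tau0 * np.exp(-0.5 * ((wave - lam_obs) / sigma_lam) ** 2)
    return tau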
Our method for incorporating metals is somewhat crude, since one should in principle first deconvolve the spectrograph resolution from the input absorbers, and then add the metal absorbers into our mock spectra prior to convolving with the BOSS spectral resolution. In contrast, we fit b-parameters to the absorber catalog without spectral deconvolution; these b-parameters can therefore be thought of as combinations of the true absorber width, b_abs, and the spectral dispersion, σ_disp, i.e. b^2 ∼ b_abs^2 + σ_disp^2. While technically incorrect, this seems reasonable since the template quasar spectra and the forest spectra that we are attempting to model have approximately the same resolution, and in practical terms this ad hoc approach does seem able to reproduce the observed metals in the lower-redshift quasar spectra (Figure 12). The other possible criticism of our approach is that it does not incorporate weak metal absorbers, although we attempted to mitigate this by setting a very high S/N threshold on the template quasars for the metals. However, we have checked that such weak metals do not significantly change the forest PDF (and indeed metals in general do not seriously affect the PDF, c.f. Figure 9c).
Fig. 12.— A continuum-normalized spectrum of a BOSS quasar showing the metal absorbers in the 1300 Å < λ_rest < 1390 Å 'sideband' region, which would be used to add metals to z = 2.6 mock Lyα forest spectra. The red curve shows our metal model for this spectrum, generated from the observed wavelengths and equivalent widths in the absorber catalog generated by the automatic algorithm of Lundgren et al. (2009). We also assume that the absorbers all lie on the saturated portion of the curve-of-growth and have τ_0 = 3, with the equivalent width (labeled above each absorption line) proportional to the b-parameter. The model absorption profiles represented by the red curve would be added to our mock Lyα forest spectra. We have chosen to plot this particular 'sideband' because it has more absorbers than average; the typical spectrum has less metal absorption than this.

We also tried adding metals with similar redshifts to, and correlated with, forest absorbers (e.g., absorption by Si II and Si III) measured in Pieri et al. (2010) and Pieri et al. (2014), using a method described in the appendix of Slosar et al. (2011). We found a negligible impact on the transmission PDF, owing mainly to the fact that these correlated metals contribute only ∼ 0.3% to the overall flux decrement, so we neglect this contribution in our subsequent analysis.
Pixel Noise
It is non-trivial to introduce the correct noise to a simulated Lyα forest spectrum: given a noise estimate from the observed spectrum, one needs to first ensure that the mock spectrum has approximately the same flux normalization as the data. This is challenging, as the Lyα forest transmission at any given pixel, which ranges from 0 to 1, will vary considerably between the simulated spectrum and the real data.
The simplest method of adding noise to a mock spectrum is to introduce Gaussian deviates using the pipeline noise estimate for each spectrum; this was essentially the method used by Desjacques et al. (2007) and in previously published BOSS mocks. However, with the MCMC co-addition procedure described in § 3.1, we are in a position to model the noise in a more robust and self-consistent fashion.
Recall that the MCMC procedure returns posterior probabilities for two quantities: the true underlying spectral flux density, F λ , and the four free parameters A j , which parametrize the noise in each spectrum. This estimate of the A j from each quasar spectrum allows us to accurately model the pixel noise using Equation 5.
The MF-PCA method (§ 3.2) produces an estimate of the quasar continuum, c, providing approximately the correct flux level at each point in the spectrum. We can now multiply c by the simulated Lyα forest transmission spectra, F, which have already been smoothed to the same dispersion as their real counterparts (the estimated quasar continuum is already at approximately the correct smoothing, since it was fitted to the observed spectrum).
This procedure produces a noiseless mock spectrum with the correct flux normalization and smoothing. We can now generate noisy spectra corresponding to a given BOSS quasar, using the MCMC noise estimation described in Section 3.1. First, we substitute our mock spectrum as F_λ into Equation 5, and then combine the A_j noise parameters (estimated through our MCMC procedure) with the calibration vectors S_λ,i and sky estimates s_λ,i. This lets us generate self-consistent noise vectors, σ_λi, corresponding to each individual exposure that makes up the mock quasar spectrum. The noise vectors are then used to draw random Gaussian deviates that are added to the mock spectrum, on a per-pixel basis, to create the mock spectral flux density of each exposure, f_λi. Finally, we combine these individual mock exposures into the optimal spectral flux density for the mock spectrum through an inverse-variance weighted co-addition (see Appendix). Figure 9c illustrates the effect of adding pixel noise to the smoothed Lyα forest transmission PDF. As expected, this scatters a significant fraction of pixels to F > 1, and also, to a smaller extent, to F < 0.

Fig. 13.— Simulating the noise properties and continuum errors of a BOSS quasar. The top panel shows the observed spectrum of a BOSS quasar, and its associated continuum fit, c, in blue. The middle panel shows the simulated transmission spectra (after adding LLSs, smoothing, and adding metals) multiplied by the quasar continuum fitted to the true spectrum. In the lower panel, we have added noise to the mock spectrum using the noise parameters estimated from the true spectrum (see § 3.1). A new continuum, c′ (red), is re-fitted to the noisy mock spectrum. The difference between the new continuum, c′, and the 'true' continuum, c, of the mock (blue) introduces continuum errors into our model. The vertical dotted lines indicate the range of pixels that contribute to the z = 3.0 subsample in our transmission PDF; a small segment between (1 + z_qso)1040 Å = 4461 Å and (1 + 2.75)1216 Å = 4560 Å also contributes to the z = 2.6 bin.
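The noise-synthesis step just described can be sketched as follows. The per-exposure noise model itself (Equation 5, with the A_j, calibration, and sky terms) is not reproduced here; this illustrative snippet assumes the per-exposure 1σ noise vectors have already been evaluated for the mock, and shows only the Gaussian-deviate draw and the inverse-variance co-addition:

import numpy as np

rng = np.random.default_rng(42)

def make_noisy_coadd(flux_true, sigma_exp):
    # flux_true : noiseless mock spectrum (continuum x transmission), shape [n_pix]
    # sigma_exp : per-exposure 1-sigma noise vectors, shape [n_exp, n_pix]
    n_exp, n_pix = sigma_exp.shape
    f_exp = flux_true[None, :] + rng.normal(size=(n_exp, n_pix)) * sigma_exp
    ivar = 1.0 / sigma_exp ** 2
    f_coadd = np.sum(f_exp * ivar, axis=0) / np.sum(ivar, axis=0)
    sigma_coadd = 1.0 / np.sqrt(np.sum(ivar, axis=0))
    return f_coadd, sigma_coadd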
Continuum Errors
With the noisy mock spectrum in hand (see, e.g., the bottom panel of Figure 13), we can self-consistently include the effect of continuum errors in our model transmission PDFs by simply carrying out our MF-PCA continuum-fitting procedure on the individual noisy mock spectra. Dividing the mock spectra by the new continuum fits then incorporates an estimate of the continuum errors (estimated by Lee et al. 2012 to be at the ∼ 4-5% RMS level) into the evaluated model transmission PDF. This estimated error includes uncertainties in the quasar continuum shape due to pixel noise, as well as the random variance in the mean Lyα forest absorption in individual lines-of-sight.
Note that regardless of the overall mean absorption in the mock spectra (i.e. inclusive of our models for metals, LLSs, and mean forest absorption; see § 5.4), we always use the same input mean-transmission, F_cont(z), derived from Becker et al. (2013) (described in § 3.2) to fit the continua in both the data and the mock spectra. While the overall absorption in our fiducial model is consistent with that from Becker et al. (2013), as we shall see later, the shape of the transmission PDF retains information on the true underlying mean-transmission even if fitted with a mean-flux regulated continuum with an incorrect input F(z).
The effect of continuum errors on the transmission PDF is shown in Figure 9e: like pixel noise, it degrades the peak of the PDF, but only near F ∼ 1.
MODEL REFINEMENT
In an ideal world, one would like to do a blind analysis by generating the transmission PDF model ( §4) in isolation from the data, before 'unblinding' to compare with data -this would then in principle yield results free from psychological bias in the model building. However, as we shall see in §5.1, this does not give acceptable fits to the data so we have to instead modify our model to yield a better agreement, in particular our LLS model ( §5.2) and assumed mean-transmission ( §5.4).
Initial Comparison with T REF Models
For each of our 9 hydrodynamical simulations (sampling 3 points each in T_0 and γ), we determine the transmission PDF from the Lyα forest mock spectra that include the effects described in the previous section, for the various redshift and S/N subsamples in which we had measured the PDF in BOSS (§3.3); these model PDFs are compared with the data in Figure 14. At first glance, the model transmission PDFs seem to be a reasonable match to the data, especially considering that we have carried out purely forward modelling without fitting for any parameters. However, when comparing the 'pull', (p_data,i − p_model,i)/σ_p,i, between the data and model (bottom panels of Figure 14), we see significant discrepancies, in part due to the extremely small bootstrap error bars. Nevertheless, it is gratifying to see that the shape of the residuals is relatively consistent across the different S/N subsamples at fixed redshift and γ, since this indicates that our spectral noise model is robust.
We proceed to quantify the differences between the simulated transmission PDFs, p_model, and the observed transmission PDFs, p_data, with the χ² statistic χ² = Σ_ij (p_data,i − p_model,i) (C_boot^-1)_ij (p_data,j − p_model,j), where C_boot is the bootstrap error covariance matrix. Note that we also include a bootstrap error term that accounts for the sample variance in the model transmission PDFs, since our pipeline for generating mock spectra is too computationally expensive to include sufficiently large numbers of skewers to fully beat down the sample variance in the models. We limit our model comparison to the range −0.1 ≤ F ≤ 1.2, i.e. 27 transmission bins with bin width ∆F = 0.05. This range covers pixels that have been scattered to 'unphysical' values of F < 0 or F > 1 due to pixel noise, as is expected from the low S/N of our BOSS data, and also captures > 99.8% of the pixels within each of our data subsets. In particular, it is important to retain the bins with F > 1, because the F ∼ 1 transmission bins are highly sensitive to γ (Lee 2012) and we therefore want to fully sample that region of the PDF, even though doing so requires careful modeling of pixel noise and continuum errors.
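A minimal sketch of this comparison; p_data, p_model, and cov are illustrative placeholders for the binned PDFs (restricted to the −0.1 ≤ F ≤ 1.2 bins) and the bootstrap covariance (later augmented with the systematics term of §5.3):

import numpy as np

def chi2_pdf(p_data, p_model, cov):
    # Chi-squared between observed and model transmission PDFs,
    # using the full error covariance matrix 'cov'
    d = p_data - p_model
    return float(d @ np.linalg.solve(cov, d))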
There are two constraints on all our transmission PDFs: the normalization convention, and the imposition of the same mean transmission through the mean-flux regulated continuum-fitting, such that all the mock spectra have the same absorption, F_cont(z). This is because the mock spectra have been continuum-fitted (§4.6) in exactly the same way as the BOSS spectra, which assumes the same mean Lyα transmission inferred from the Becker et al. (2013) measurements (§3.2). The 'true' optically-thin mean-transmission, F_Lyα, imposed on the simulation skewers is in principle a different quantity from F_cont, since the latter includes contributions from metal contamination and optically-thick LLSs. This leaves us with ν = 27 − 1 − 2 = 24 degrees of freedom (d.o.f.) in our χ² comparison. The χ² values for all the models shown in Figure 14 are given in the corresponding figure legends.
In this initial comparison, the χ² values for the models in Figure 14 are clearly unacceptable: we find χ² ≳ 200 for 24 d.o.f. in all cases. However, it is interesting to note that the γ = 1.6 or γ = 1.3 models are preferred at all redshifts and S/N cuts. Note that the S/N = 8-10 subsamples (middle column in Figure 14) tend to have slightly better agreement between model and data than the other S/N cuts at the same redshift: this simply reflects the smaller quantity of data in these subsamples (c.f. Table 1) and hence larger bootstrap errors.
A closer inspection of the residuals in Figure 14 indicates that there are two major sources of discrepancy between the models and data. Firstly, at the low-transmission end, we underproduce pixels at 0.1 ≲ F ≲ 0.4 while simultaneously over-producing F ≲ 0.1 pixels, especially at z = 2.3 and z = 2.6. This seems to affect all γ models equally. Pieri et al. (2014) found that at BOSS resolution, pixels with F ≲ 0.3 come predominantly from saturated Lyα absorption by LLSs. We therefore investigate possible modifications to our LLS model in §5.2.
The other discrepancy in the model transmission PDFs manifests at the higher-transmission end in the z = 2.6 and z = 3 subsamples, where we see a sinusoidal shape in the residuals at F > 0.6 that appears consistent across different S/N. This portion of the transmission PDF depends on both γ and, as we shall see, on the assumed mean-transmission F (z), which we shall discuss in more detail in §5.4.
Finally, our transmission PDF model includes various uncertainties in the modelling of metals, LLSs, and continuum-fitting which have not yet been taken into account. In §5.3, we will estimate the contribution of these uncertainties, by means of a Monte-Carlo method, in our error covariances.

Fig. 15.— [...] and steeper modification (red; §5.2). The distributions are normalized assuming the overall LLS incidence rate at z = 2.25 (c.f. Eq. 10). The vertical dashed lines denote the N_HI = 10^17.5 cm^-2 boundary between pLLSs and LLSs, and the N_HI = 10^19 cm^-2 boundary between LLSs and super-LLSs. The shaded regions show the range of possible distributions as determined by Prochaska et al. (2010), but there are few robust constraints in the 10^16.5 cm^-2 ≤ N_HI ≤ 10^17.5 cm^-2 pLLS regime. The 'initial' distribution was used in the preliminary data comparisons in §5.1, but all subsequent analysis (after §5.2) assumes the 'steep' distribution.
Modifying the LLS Column Density Distribution
With the moderate spectral resolution of BOSS, there are few individual pixels in the optically-thin Lyα forest that reach transmission values of F ≲ 0.4. Such low-transmission pixels are typically due to either the blending of multiple absorbers (see, e.g., Figure 2 in Pieri et al. 2014) or optically-thick systems (see Figure 10 in this paper).
As we have seen in Figure 14, at low transmission values the discrepancy between data and model has a distinct shape, which is particularly clear at z = 2.3: the models underproduce pixels at 0.1 ≲ F ≲ 0.4 while at the same time overproducing saturated pixels with F ≈ 0.
To resolve this particular discrepancy would therefore require either drastically increasing the amount of clustering in the Lyα forest, or modifying our assumptions about the LLSs in our mock spectra. The first possibility seems rather unlikely, since the Lyα forest power on the relevant scales is well-constrained (Palanque-Delabrouille et al. 2013), and addressing it would in any case require new simulation suites, which is beyond the scope of this paper.
On the other hand, it is not altogether surprising that our fiducial column density distribution (§ 4.2), which was measured at z ≈ 3.7 (Prochaska et al. 2010), does not reproduce the BOSS data at z = 2.3-2.6. We therefore search for an LLS model that better describes the low-transmission end of the BOSS Lyα forest. Looking at the z = 2.3 PDFs in Figure 14, we see that our fiducial model over-produces pixels at F = 0, yet is deficient at slightly higher F. This suggests that our model is over-producing super-LLSs (N_HI > 10^19 cm^-2) that contribute large absorption troughs with F = 0, while not providing sufficient lower column-density absorbers that can individually reach minima of 0.1 ≲ F ≲ 0.4 when smoothed to BOSS resolution. In other words, our fiducial model appears to have an excessively 'top-heavy' LLS column density distribution.
As an alternative, we try an LLS column density distribution with a larger contribution from the low column-density end, using the steepest power laws within the 1σ limits estimated by Prochaska et al. (2010): f(N_HI) = k_1 N_HI^-… for 10^17.5 cm^-2 < N_HI < 10^19.0 cm^-2, and f(N_HI) = k_2 N_HI^-1.4 for 10^19.0 cm^-2 < N_HI < 10^20.3 cm^-2. (22) We use the same l_LLS(z) as before, and obey the integral constraints from Prochaska et al. (2010) that demand that the ratio of ∫ f(N_HI) dN_HI between the two column-density regimes be fixed. This gives us k_1 = 10^2.819 and k_2 = 10^7.039, although the new distribution is no longer continuous at N_HI = 10^19 cm^-2. This new distribution is illustrated by the red power laws in Figure 15.
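As an illustration of how such absorbers can be injected into the skewers, a minimal inverse-CDF sampler for a single power-law segment of f(N_HI); the broken power law above can be sampled by first choosing a segment with probability proportional to its integral (function and variable names are illustrative):

import numpy as np

rng = np.random.default_rng(0)

def sample_powerlaw_nhi(beta, n_min, n_max, size=1):
    # Draw column densities from f(N) propto N**(-beta) on [n_min, n_max]
    # via inverse-CDF sampling (assumes beta != 1)
    u = rng.uniform(size=size)
    a = 1.0 - beta
    return (n_min**a + u * (n_max**a - n_min**a)) ** (1.0 / a)

# e.g. the super-LLS segment of Equation 22:
nhi = sample_powerlaw_nhi(1.4, 10**19.0, 10**20.3, size=5)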
Another change we have made is to the partial-LLS model, which was possibly too conservative in the fiducial model. Instead of extrapolating from the LLS distribution, we now adopt the pLLS power-law slope of β_pLLS = −2.0 inferred from the total mean free path to ionizing photons by Prochaska et al. (2010). This dramatically increases the incidence of pLLSs in our spectra relative to LLSs: we now have l_pLLS = 1.8 l_LLS, where l_LLS is the same value we used previously (Equation 10). This increase, while large, is not unreasonable in light of the large uncertainties in direct measurements of the H I column-density distribution from Lyα line-profile fitting (e.g., Janknecht et al. 2006; Rudie et al. 2013). Note also that even this increased pLLS incidence amounts to, on average, less than one pLLS per quasar (∆z ∼ 0.3-0.4 per quasar at our redshifts).
We found that while increasing the number of pLLSs relieves the tension between data and model at 0.1 ≲ F ≲ 0.4, it does not resolve the excess of fully absorbed F ≈ 0 pixels in the models. However, changing the b-parameter of the LLSs and pLLSs from our original fiducial value of b = 70 km s^-1 modifies the PDF in a way that improves the agreement. This is a reasonable step, since the effective b-parameter is otherwise observationally ill-constrained for the LLS and pLLS populations: LLSs are typically complexes of multiple systems separated in velocity space, and while there have been analyses of the b-parameters of these individual components, the 'effective' b-parameter of complete LLS systems has, to our knowledge, never been quantified.
We therefore search for the best-fit b-parameter with respect to the T_REF, γ = 1.3 model at z = 2.3, focusing primarily on the agreement in the 0 ≤ F ≤ 0.4 bins (Figure 16). Our choice of model for this purpose should not significantly affect our subsequent conclusions regarding the IGM temperature-density slope, since there is little sensitivity to the latter in the relevant low-transmission bins (c.f. Figure 14). However, there will be some degeneracy between the LLS b-parameter and T_0 (Figure 17), since changing the latter does somewhat change the low-transmission portion of the PDF; we will come back to this point in §7.
As shown in Figure 16, a value b = 45 km s −1 gives the best agreement with the data at 0 ≤ F ≤ 0.4. This yields χ 2 = 116 for 24 d.o.f., which is dramatically improved over those quoted in Figure 14, but still not quite a good fit. In the subsequent results, we will adopt this steeper pLLS/LLS model and b-parameter as the fiducial model in our analysis, and will correspondingly decrease the degrees of freedom in our χ 2 analysis to account for the fitting of b.
Note that while significantly improving the PDF fit, this new b-parameter still does not give a perfect fit to the low-transmission (F < 0.4) end. This is probably due to the simplified nature of our LLS model, which neglects the finite distribution of b-parameters and internal velocity dispersion of individual components. These properties are currently not well-known, and it seems likely that an improved model would allow a better fit to the low-transmission end of the PDF.
Estimation of Systematic Uncertainties
While we have estimated the sample variance of our BOSS transmission PDFs by bootstrap resampling on the spectra, there are significant uncertainties associated with each component of our transmission PDF model as described above, e.g., the LLS incidence rate and the level of continuum error. These uncertainties can be incorporated into a systematics covariance matrix, C_sys, that can then be added to the bootstrap covariance, C_boot, when computing the model likelihoods. This requires assuming that C_sys and C_boot are uncorrelated, and that the errors are Gaussian distributed.

Fig. 19.— 2D density plot of the error covariance matrix representing our systematic uncertainties in the LLS incidence rate, pLLS column-density distribution, LLS b-parameter, metal absorption, and continuum scatter, as estimated through the Monte Carlo method described in § 5.3. The bottom plot shows the corresponding correlation function. This particular covariance matrix was estimated for the z = 2.6, S/N = 8-10 subsample, and the values in the covariance have been multiplied by 10^4 for clarity.
We adopt a Monte Carlo approach to estimate C_sys, by generating 200 model transmission PDFs that randomly vary the systematics. We then evaluate the covariance of the transmission PDFs, p_i, relative to the fiducial model, p_ref,i, at each transmission bin i, i.e. we construct a covariance matrix with elements C_sys,ij = ⟨(p_i − p_ref,i)(p_j − p_ref,j)⟩, averaged over the Monte Carlo realizations, which encompasses the errors from the uncertainties in the LLS model, metal absorption, and continuum scatter (a short sketch of this estimator follows the list of components below). Note that estimation of systematic uncertainties is typically a subjective process, and for most of these contributions we can only make educated guesses as to their uncertainty.
Our Monte Carlo iterations sample the various components of our model as follows: LLS Incidence: We sample the uncertainty in the power-law exponent γ_LLS of the redshift evolution of the LLS incidence rate (Equation 10), which is σ_γLLS = 0.21 as reported by Ribaudo et al. (2011). We assume this uncertainty is Gaussian and draw l_LLS(z) accordingly. This primarily affects the low-flux regions −0.1 ≲ F ≲ 0.3 of the PDF.
Partial-LLS Slope: Our choice of slope for the distribution of partial LLSs (N_HI < 10^17.5 cm^-2 absorbers) comes from an indirect constraint with significant uncertainty (Prochaska et al. 2010). We therefore vary the pLLS slope around the fiducial β_pLLS = −2.0 by ±0.5, assuming a flat prior in this range, which primarily alters the 0 ≲ F ≲ 0.4 portion of the PDF since pLLSs typically do not saturate at BOSS resolution.
LLS b-parameters: In the previous section, we found that a global b-parameter of b = 45 km s^-1 gives the best agreement with the data, but this is an ad hoc approach with significant uncertainties. In our Monte Carlo sampling we therefore adopt a conservative b = 45 ± 20 km s^-1 with a uniform prior. This primarily affects the PDF at −0.1 ≤ F ≤ 0.4, as can be seen in Figure 16.
Intervening Metals: Although we used an empirical method to model intervening metals (§ 4.4), we may have missed metals with rest wavelengths λ ≲ 1300 Å. Furthermore, we have a relatively small set (∼ 300-400) of 'template' quasars from which our metal model is derived, which may contribute some sampling variance. We therefore assume a Gaussian error of ±30% on the metal incidence rate. This modulates the extent to which metals pull the overall PDF towards lower F values (c.f. Figure 9c).
Continuum Errors: The overall r.m.s. scatter in our continuum estimation also affects the flux PDF (Figure 9e). This can be varied in our model by rescaling the quantity c′(λ)/c(λ) − 1, where c is the 'true' continuum used to generate the mock spectrum, while c′ is the model continuum which we subsequently fit (Figure 13). For each iteration in our Monte Carlo systematics estimation, we dilate or shrink c′(λ)/c(λ) − 1 by a Gaussian deviate assuming ±20% scatter. This primarily affects the high-transmission (F > 0.8) end of the PDF.
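As referenced above, a minimal sketch of the covariance estimator, assuming the Monte Carlo PDF realizations have been stacked into an array (names are illustrative):

import numpy as np

def systematics_covariance(pdf_realizations, pdf_fiducial):
    # Estimate C_sys from Monte Carlo realizations of the model transmission
    # PDF, each with randomly drawn systematics (LLS incidence, pLLS slope,
    # b-parameter, metal incidence, continuum-error scatter).
    # pdf_realizations: shape [n_mc, n_bins]; deviations are taken about the
    # fiducial model, following the definition in the text.
    d = pdf_realizations - pdf_fiducial[None, :]
    return d.T @ d / d.shape[0]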
For these Monte Carlo iterations, we used the identical thermal model (γ = 1.6, T REF) as well as fixed the same random number seeds used for the selection of simulation skewers and generation of noise vectors in our spectra, in order to ensure that the only variation between the different iterations are from the randomly-sampled systematics. Figure 18 shows 50 of these Monte Carlo iterations on the transmission PDF for the z = 2.3, S/N=8 − 10 subsample. Figure 19 shows an example of the systematic contribution to the covariance matrix. The overall amplitude of the systematic contribution is considerably higher than that estimated from the bootstrap resampling (c.f. Figure 7), indicating that we are in the systematics-limited regime. We also see significant anti-correlations at almost the same level as the positive correlations, which are due mostly to correlations between transmission bins on either side of 'pivot points' as the transmission PDF varies from the systematics -these anti-correlations will somewhat counteract the increased size of the diagonal components. In the subsequent analysis, we will use an error covariance matrix, C = C boot + C sys , in which the systematics covariance matrix estimated in this subsection is added to the bootstrap covariance matrix (described in § 3.3) estimated from the BOSS transmission PDFs.
At this point we have yet to address one more parameter that can significantly change the shape of our model transmission PDFs, namely the Lyα forest mean-transmission assumed in the mock spectra, F_Lyα. However, this is an important astrophysical parameter which we did not want to treat as a 'systematic', so the next subsection describes our treatment of F_Lyα.
Modifying the Mean-transmission
In the initial comparison of the model transmission PDFs shown in Figure 14, the models show a discrepancy with the data at the higher transmission bins, F ≳ 0.6. Such differences can be alleviated by varying the mean-transmission of the pure Lyα forest, F_Lyα ≡ exp(−τ_Lyα), i.e. ignoring the contributions from metals and LLSs. This quantity can be varied directly in the simulation skewers (Section 4.1). When we vary F_Lyα in the simulations, the quantity F_cont, which is used to normalize the continuum level of the mock quasar spectrum, is always kept fixed to F_eff(z) = exp[−(τ_Lyα + τ_metals + τ_LLS)] as derived from Becker et al. (2013) (see Section 3.2). However, since we are applying the same F_cont to both the real and mock spectra, F_cont is best thought of as a normalization that does not actually need to match F_eff. Once both the real and mock spectra have been normalized by F_cont, the transmission PDF retains information on the respective contributions from the Lyα forest, metals, and LLSs regardless of the assumed F_cont, because these contributions affect the shape of the PDF in different ways. In principle, it is possible to vary all of these components to infer their relative contributions, but due to the crudeness of our metal and LLS models, we choose to have only F_Lyα as a free parameter while keeping F_metals = exp(−τ_metals) and F_LLS = exp(−τ_LLS) fixed. The possible variation of these latter two components is instead incorporated into the systematic uncertainties determined in Section 5.3. The effect of varying F_Lyα is illustrated in Figure 20, where we plot the same IGM model with different underlying values of F_Lyα in the simulation skewers whilst keeping fixed the contributions from metals, LLSs, etc.
We therefore explore a range of F_Lyα around the value estimated by Becker et al. (2013), F_Lyα,B13, and at each value of F_Lyα evaluate the χ² summed over all the S/N subsamples for each z and γ combination. In addition, we now adopt the updated LLS/pLLS model described in §5.2, and the χ² evaluation now uses the full covariance matrix, including both the bootstrap and systematics (§5.3) uncertainties, to compare with the transmission PDFs measured from the BOSS data.

[Figure caption:] In the bottom panel, the dashed horizontal lines indicate ±1σ discrepancies between models and data, although we caution against 'chi-by-eye' due to the significantly non-diagonal covariances in the errors. The central F_Lyα value shown here corresponds to that estimated by Becker et al. (2013), while the other two are evaluated at ±1σ of their reported errors. The mean-transmission value, F_cont, assumed in the mean-flux regulated continuum fitting is constant in all cases. Note that the χ² values, which are for 23 d.o.f., are much improved over the previous data comparisons, since they now include the improved LLS/pLLS model as well as the full covariance matrix including systematic uncertainties.
The models are compared with the BOSS data as we vary F_Lyα, and for each F_Lyα we compute the total chi-squared summed over all three S/N subsamples, where each subsample contributes 27 − 1 − 2 = 24 d.o.f. (c.f. Equations 20 and 21), along with a further reduction of one d.o.f. since we have effectively fitted for the LLS b-parameter in § 5.2, for a total of ν = 71 d.o.f. The result of this exercise is shown in Figure 21, which gives the χ² values for the T_REF models with different γ; we only vary γ and not T_0, because the F ≳ 0.6 portions of the transmission PDF that change the most with F_Lyα do not vary as much with respect to changes in T_0 (c.f. Figure 17). Examples of the corresponding best-fit model PDFs in one S/N subsample are shown in Figure 22, where we see that varying F_Lyα can indeed change the shape of the F ≳ 0.6 portion of the transmission PDF sufficiently, improving the fits in those transmission ranges compared to the fiducial models (Figure 14).
The γ = 1.0 models do not yield acceptable fits at z = [2.3, 2.6], whereas at z = 3 the error bars on the PDF are sufficiently large that acceptable fits are obtainable using γ = 1.0, with χ² = 68 for 70 d.o.f. (P = 54%). However, this requires a +5σ discrepancy in F_Lyα with respect to Becker et al. (2013). In Figure 22, one sees that fitting for F_Lyα allows the γ = 1.0 models to be in good agreement with the data in the F > 0.7 portion of the PDF, but gives rise to discrepancies in the 0.4 ≲ F ≲ 0.7 range, which limits the goodness-of-fit and cannot easily be compensated by modifying the metal or LLS models.
From Figure 21, it is clear that as we move to higher redshifts, we require increasingly higher F Lyα relative to the fiducial Becker et al. (2013) values in order to agree with the data: at z = 2.3, our best-fit mean-transmission for the γ = 1.6 model agrees with Becker et al. (2013), but at z = 3 there is a significant deviation of +2σ with respect to the Becker et al. (2013) measurement. The same trend is true for the best-fit γ = 1.3 and γ = 1.0 models, but these require even greater discrepancies with respect to the fiducial F Lyα .
One possible explanation for this discrepancy is the effect on the Becker et al. (2013) measurement of u-band selection bias in the SDSS quasars. This bias was first noted by , who found that the color-color criteria used to select SDSS quasars preferentially selected quasars, specifically in the redshift range 3 ≲ z_qso ≲ 3.5, that have intervening Lyman breaks at λ_rest < 912 Å. The 3 ≲ z_qso ≲ 3.5 SDSS quasars are thus more likely to have intervening LLSs in their sightlines, yielding an additional contribution to the Lyα absorption and hence causing Becker et al. (2013) to possibly underestimate F_Lyα when stacking the affected quasars. Becker et al. (2013) mentioned this effect in their paper, but argued that it was much smaller than their estimated errors by referencing theoretical IGM transmission curves estimated by (Figure 17 in the latter paper).

Fig. 23.— The red and black curves show the excess Lyα absorption expected from sightlines of z_qso = 3.2 and z_qso = 3.4 quasars, respectively, relative to the mean IGM transmission. This is caused by the SDSS selection bias described in , which yields above-average numbers of intervening LLSs. These are derived from the same curves shown in Figure 17 of , but replotted as ratios smoothed by a boxcar function over 12 pixels for clarity. The top axis labels the Lyα absorption redshift corresponding to each wavelength, while the shaded region indicates the wavelength range of our z = 3.0 bin. The dashed line shows, for comparison, the relative errors on the Lyα forest mean transmission estimated by Becker et al. (2013). The discrepancy due to the SDSS bias is significant compared to the Becker et al. (2013) errors.
Dr. G. Worseck has kindly provided us with these transmission curves, T IGM (λ), which were generated for both the average IGM absorption and that extracted from SDSS quasars affected by the color-color selection bias. In Figure 23 we plot the relative difference between the biased Lyα transmission deduced from z qso = 3.2 and z qso = 3.4 quasars and the true mean IGM transmission, using the transmission curves. It is clear that at Lyα absorption redshifts of z abs ≈ 3, the excess LLS picked up from such quasars contribute an additional ∼ 1% compared to the mean IGM decrement, a discrepancy that is of the same magnitude as the error bars in the Becker et al. (2013) measurement, indicated by the dashed line.
This could partially explain the higher F_Lyα required to make our z = 3 models fit the data in Figure 21. Note that we expect this UV color selection bias to be much less significant in our BOSS data, since we have selected bright quasars in the top 5th percentile of the S/N distribution. Given that such quasars have high signal-to-noise photometry, their colors separate much more cleanly from stellar contaminants. Furthermore, such bright quasars are much more likely to have been selected with multi-wavelength data (e.g., including near-IR and radio in addition to optical photometry; see Ross et al. 2012). For both of these reasons, we expect our quasars to be much less susceptible to biases in color selection related to the presence of an LLS. A careful accounting of this bias is beyond the scope of this paper, but from now on we inflate the corresponding errors on F_Lyα at z = 3 by a factor of two to account for this possible bias in the mean-transmission measurements (dashed vertical lines in the bottom panel of Figure 21).
Another possibility that could explain a bias in the F_Lyα measured by Becker et al. (2013) is their assumption that the metal contamination of the Lyα forest does not evolve with redshift. While there are few clear constraints on the aggregate metal contamination within the forest, if the metals actually decrease with increasing redshift (e.g., in the case of C IV, Cooksey et al. 2013), then the assumption of an unevolving metal contribution calibrated at z ≈ 2.3 would lead to an underestimate of F_Lyα at higher redshifts, which could explain the trend we seem to be seeing. It is clear from the previous discussion that there is some degeneracy between γ and F_Lyα in our transmission PDFs. However, we are primarily interested in γ, while F_Lyα has been extensively measured over the years, allowing strong priors to be placed on it. In the next section, we will therefore marginalize over F_Lyα in order to obtain our final results.
RESULTS
Due to the uncertainties in F_Lyα described in the previous subsection, for a better comparison between transmission PDFs, p, from models with different [γ, T_0] we marginalize the model likelihoods, L = exp(−χ²/2), over the Lyα forest mean-transmission, F: L_marg = ∫ L(p | F) A(F) dF, (24) where A(F) is the prior on F (for clarity in these equations, F is used as a shorthand for F_Lyα). We assume a Gaussian prior, A(F) ∝ exp[−(F − F_B13)² / (2σ_F²)], where F_B13 and σ_F are the optically-thin Lyα forest mean-transmission and associated errors, respectively, estimated from Becker et al. (2013). Note that for z = 3, we have decided to dilate the error bars by a factor of 2 to account for the suspected quasar selection bias discussed in the previous section.
For each model, we generate transmission PDFs with different F_Lyα (similar to Figure 21) and evaluate the combined χ² summed over the different S/N subsamples. We interpolate the χ² over F_Lyα to obtain a finer grid, which then allows us to numerically integrate Equation 24 using five-point Newton-Cotes quadrature.
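A sketch of this marginalization, using cubic interpolation and the trapezoidal rule in place of the five-point Newton-Cotes quadrature used in the text; array names are illustrative:

import numpy as np
from scipy.interpolate import interp1d

def marginalized_likelihood(fbar_grid, chi2_grid, fbar_prior_mean, fbar_prior_sigma):
    # Marginalize L = exp(-chi2/2) over the mean transmission with a
    # Gaussian prior, interpolating chi2(F) onto a fine grid first.
    fine = np.linspace(fbar_grid.min(), fbar_grid.max(), 501)
    chi2 = interp1d(fbar_grid, chi2_grid, kind='cubic')(fine)
    prior = np.exp(-0.5 * ((fine - fbar_prior_mean) / fbar_prior_sigma) ** 2)
    prior /= np.trapz(prior, fine)
    return np.trapz(np.exp(-0.5 * chi2) * prior, fine)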
At this stage, we also analyze models with different IGM temperatures at mean density, T_0. Hitherto, we have been working only with the central T_REF model (T_0(z = 2.5) ∼ 16000 K), but we now also compare models from the T_HOT and T_COLD simulations, which have T_0(z = 2.5) ∼ 21500 K and ∼ 11000 K, respectively. Each of these temperature models also samples temperature-density relationships of γ = [1.0, 1.3, 1.6], for a model grid of 3 × 3 parameters at each redshift.
The marginalized χ² values for all the models are tabulated in Table 3, and plotted as a function of γ in Figure 24. In general, the T_REF models with γ = 1.6 provide the best agreement with the data at all redshifts, with χ² ≈ 60-70 for 69 d.o.f. The T_HOT models (with higher IGM temperatures at mean density) provide fits of comparable quality, and indeed at z = 2.3 the T_HOT model with γ = 1.3 gives essentially the same goodness-of-fit as the γ = 1.6 T_REF model. The cooler T_COLD models are less favored by the data, and at z = 2.6 give a poor fit with χ² = 89 for 69 d.o.f. (P = 5%), but at the other redshifts they are acceptable fits to the data. In other words, the transmission PDF does not show a strong sensitivity to T_0, which we shall show later is due to degeneracy with our LLS model at the low-transmission end of the transmission PDF.

The more important question to address is the possibility of isothermal or inverted temperature-density relationships (γ ≤ 1), as suggested by some studies of the transmission PDF of high-resolution, high-S/N echelle quasar spectra (e.g., Bolton et al. 2008; Viel et al. 2009; Calura et al. 2012). It is clear from Table 3 and Figure 24 that for all T_0 models the isothermal, γ = 1.0, models disagree strongly with the BOSS data. The closest match for an isothermal IGM is the T_REF model at z = 3.0, which yields χ² = 78 for 69 d.o.f., or a probability of 21% of obtaining the data from this model. However, relative to the γ = 1.6 model at z = 3.0, which gives the minimum χ² at that redshift, we find ∆χ² ≈ 16 for the isothermal model, i.e. a ≈ 4σ discrepancy from the best-fit model. The isothermal model is also strongly disfavored at the other redshifts, where we find ∆χ² ≈ [15, 40] at z = [2.3, 2.6], or ≈ [3.9σ, 6.3σ]. Since the shape of the transmission PDF varies continuously as a function of γ (see, e.g., Bolton et al. 2008; Lee 2012), these results imply that inverted (γ < 1) IGM temperature-density slopes are even more strongly ruled out.
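For reference, the quoted significances follow from interpreting ∆χ² in the usual one-parameter Gaussian sense, n_σ = √(∆χ²):

import numpy as np
print(np.sqrt([16.0, 15.0, 40.0]))   # -> [4.0, 3.87, 6.32], i.e. ~4, 3.9, and 6.3 sigma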
DISCUSSION
In this paper, we have studied the z = 2.3-3 Lyα forest transmission probability distribution function (PDF) from 3373 BOSS DR9 quasar spectra. Although this is a relatively small subsample, selected to lie above the 95th percentile of the S/N distribution, it provides two orders of magnitude more Lyα forest path length than the high-resolution, high-S/N data sets previously used for this purpose, giving unprecedented statistical power for transmission PDF analysis.
Fig. 24.— [...]d.o.f.) from models with different γ and T_0 at different redshifts, after marginalizing over uncertainties in the mean-transmission F of the Lyα forest. Models with γ = 1.6 are generally favored, although γ = 1.3 with the T_HOT model is also acceptable at z = 2.3. The same quantities are also tabulated in Table 3.

In order to ensure accurate characterization and allow subsequent modelling of the spectral noise, we have introduced a novel, probabilistic method of combining the multiple exposures that comprise each BOSS observation, using the raw sky and calibration data. This method significantly improves the accuracy of the noise estimation, and additionally allows us to generate mock spectra with noise properties tailored to each individual BOSS spectrum, but self-consistently for different Lyα forest realizations. We believe that our noise modeling, which yields noise estimates accurate to ∼ 3% across the relevant wavelength range, is the most careful treatment of spectral noise in multi-object fiber spectra to date, and we invite readers with similarly stringent requirements in understanding the BOSS spectral noise to contact the authors. In the future, the spectral extraction algorithm described by Bolton & Schlegel (2010) may solve some of the issues which affected us, but this has yet to be implemented.
For the continuum estimation, we used the mean-flux regulated/principal component analysis (MF-PCA) method introduced in Lee (2012). This method, which reduces the uncertainty in the continuum estimation to σ_cont ≲ 5%, fits for a continuum such that the resulting Lyα forest has a mean-transmission F matched to external constraints, for which we use the precise measurements by Becker et al. (2013). While MF-PCA does require external constraints on F, we argue that so long as both the real quasars and the mock spectra are continuum-fitted in exactly the same way, the shape of the transmission PDF retains independent information on the Lyα forest mean-transmission.
To compare with the data, we used the detailed hydrodynamical simulations of Viel et al. (2013a), which explore a range of IGM temperature-density slopes (γ ≈ 1.0-1.6) and temperatures at mean density (T_0(z = 2.5) ≈ [11000, 16000, 21500] K). We processed the simulated spectra to take into account the characteristics of the individual BOSS spectra in our sample, such as spectral resolution, pixel noise, and continuum-fitting errors. We also incorporated the effects of astrophysical 'nuisance' parameters such as Lyman-limit systems (LLSs) and metal contamination. The LLSs are modeled by adding 10^16.5 cm^-2 ≲ N_HI ≲ 10^20.3 cm^-2 absorbers into our mock spectra, based on published measurements of the observed incidence l_LLS(z) (Ribaudo et al. 2011) and the H I column density distribution f(N_HI) (Prochaska et al. 2010). Meanwhile, contamination from lower-redshift metals is modeled in an empirical fashion, by inserting λ_rest > 1216 Å absorbers observed in lower-redshift SDSS/BOSS quasars at the same observed wavelengths in our mock spectra.
Our initial models did not provide satisfactory agreement with the transmission PDF measured from the BOSS spectra, with discrepancies in both the high-transmission and low-transmission bins. However, the differences between data and models were consistent across the different S/N subsamples, indicating that our noise modelling is robust. To resolve the discrepancies at the low-transmission end of the PDF, we explored various modifications to our LLS model. Firstly, we steepened the column-density distribution slope of the partial LLSs (16.5 < log10(N_HI) < 17.5 systems) to β_pLLS = −2, a value suggested by the mean free path of ionizing photons (Prochaska et al. 2010). This change relieved the tension between model and data in the F ≈ 0.1-0.4 bins, although it implies increasing the number of pLLSs by nearly an order of magnitude; this is not unreasonable given the current uncertainties on this population (Janknecht et al. 2006; Prochaska et al. 2010). We believe that the necessity of a pLLS distribution with β_pLLS ≈ −2 to fit the BOSS Lyα transmission PDF supports the claims of Prochaska et al. (2010) regarding the column-density distribution of this population.
However, after adding the pLLSs a major discrepancy remained in the saturated F ≈ 0 bins, which we addressed by adjusting the effective b-parameter assumed for all the optically-thick systems in our model. We found that an effective value of b = 45 km s^-1 gave the best fit.
At the high-transmission (F ≳ 0.6) end of the model transmission PDFs, we found that modifying the Lyα forest mean-transmission in the simulations, F_Lyα, allowed much better agreement with the BOSS data. At z = [2.3, 2.6], the F_Lyα values that gave the best-fitting model PDFs were within 1σ of the Becker et al. (2013) measurements, but at z = 3 we required a value that was ∼ 2σ larger. We argue that this discrepancy could be due to a color-color selection bias in the 3 ≲ z_qso ≲ 3.5 SDSS quasars used by Becker et al. (2013), which preferentially selected sightlines with intervening LLSs, giving rise to additional Lyα absorption (and thus lower F_Lyα) at a level comparable to the errors estimated by Becker et al. (2013). Our BOSS spectra, on the other hand, should be comparatively unaffected on account of being among the brightest quasars in the survey: they separate more cleanly from the stellar locus in color space, and were more likely to have been selected with additional criteria (radio, near-IR, variability, etc.) beyond color-color information (Ross et al. 2012).
To deal with these uncertainties, we decided to marginalize over the mean-transmission in our χ² analysis. At z = 2.3, the preferred model is a hot IGM (T_0 = 23000 K) with γ = 1.3 (P ≈ 45%), although the intermediate-temperature model (T_0 = 18000 K) with γ = 1.6 is a comparably good fit with P ≈ 82%. The preferred models at z = [2.6, 3.0] have γ = 1.6 with temperatures at mean density of T_0 = [21500, 9000] K (P = [46%, 78%]), respectively. We find that the isothermal (γ = 1) temperature-density relationship is strongly disfavored at all redshifts regardless of T_0, with 4-6σ discrepancies compared to the best-fit models.
One might be skeptical of the results given the various assumptions we had to make in modelling astrophysical nuisance parameters. To test the robustness of our results to systematics, we generated 20 iterations of model transmission PDFs sampling all nine of our [T 0 , γ] models (i.e. 180 PDFs in total) in the z = 2.6, S/N=8-10 bin, where each iteration has a random realization of the systematics (LLS, metals, continuum errors etc) drawn in the same way as our Monte-Carlo estimate of systematic uncertainty ( §5.3). We then asked how many times each T 0 or γ model gave the lowest χ 2 when compared with the data. For this test we only evaluated the χ 2 at the fiducial F Lyα without marginalization.
The results of this test are shown in Figure 25. In the top panel, the T_REF and T_HOT models are each favored ∼ 40% of the time, but the T_COLD model also has a ∼ 15% chance of being favored, depending on the (random) choice of systematics. In other words, there is significant degeneracy between our systematics model and T_0. We suspect this is driven largely by the choice of the LLS b-parameter, which changes the shape of the transmission PDF in a similar way to T_0 (compare Figure 16 with Figure 17). In contrast, the bottom panel of Figure 25 shows that whatever systematics we choose, γ = 1.6 is always favored, indicating a robust constraint.
There is, however, some degeneracy between γ and the Lyα forest mean transmission, F_Lyα. While we marginalize over the latter quantity, the choice of prior can, in principle, affect the results. However, at z = [2.3, 2.6], the chi-squared minimum of the γ = 1.0 PDF model as a function of F_Lyα is χ² ≈ 100 for 71 d.o.f. (Figure 21), which has a probability of P ≈ 1%. In other words, even if we fine-tuned F_Lyα in an attempt to force the isothermal model to be the best-fit model at these redshifts, it would still be an unacceptable fit, and the γ = 1.3 model would still be preferred over it. This is less clear-cut at z = 3, where the error bars are large enough to permit a reasonable minimum chi-squared of χ² ≈ 70 for 71 d.o.f. using the γ = 1 model, but this requires a value of F_Lyα = 0.71, which is 5σ discrepant from the value reported by Becker et al. (2013). While this F_Lyα measurement is dependent on corrections for metal and LLS absorption (and indeed we argue that they have neglected a subtle bias related to SDSS quasar selection), they have attempted to incorporate these uncertainties into their errors, and we have no particular reason to believe that they have underestimated them by a factor of > 5. A quick survey of the available measurements of the forest mean-transmission from the past decade yields F_Lyα(z = 3) ≈ 0.65-0.69 (Kim et al. 2007; Faucher-Giguère et al. 2008; Dall'Aglio et al. 2008), albeit with larger errors. The use of any of these measurements as priors for our analysis would therefore disfavor an IGM with γ ≤ 1 (which requires F_Lyα(z = 3) ≥ 0.71), unless all the available literature in the field has significantly underestimated the mean-transmission.
There are several cosmological and astrophysical effects that we did not model that could in principle affect our conclusions on γ. Since the Lyα forest transmission PDF essentially measures the contrast between high-absorption and low-absorption regions of the IGM, it can be degenerate with the underlying amplitude of matter fluctuations, which is specified by a combination of σ_8 and n_s, the matter fluctuation amplitude on 8 h^-1 Mpc scales and the slope of the primordial power spectrum, respectively. While these parameters are increasingly well-constrained (e.g., Planck Collaboration et al. 2013), there is still some uncertainty regarding the level of the fluctuations on the sub-Mpc scales relevant to the Lyα forest, which could be degenerate with our γ measurement. Bolton et al. (2008) explored this degeneracy between σ_8 and γ in the context of transmission PDF measurements from high-resolution spectra, and found that the PDF is less sensitive to plausible changes in σ_8 than to changes in γ: e.g., modifying σ_8 by ∆σ_8 = ±0.1 affected the shape of the PDF less than a modification of ∆γ = ±0.25 (Figure 2 in their paper). This degeneracy is in fact further weakened when an MCMC analysis of the full parameter space is considered, as shown by the likelihood contours in Viel et al. (2009).
The astrophysical effects that could be degenerate with γ include galactic winds and inhomogeneities in the ionizing UV background. The injection of gas into the IGM by strong galaxy outflows could in principle modify Lyα forest statistics at fixed γ; this was studied using hydrodynamical simulations by Viel et al. (2013b), who concluded that the effect on the PDF is small compared to the uncertainties in high-resolution PDF measurements. Our BOSS measurement has roughly the same errors as those from high-resolution spectra once systematic uncertainties are taken into account, so it seems unlikely that galactic winds could significantly bias our conclusions on γ. Meanwhile, fluctuations in the UV ionizing background, Γ, that are correlated with the overall density field could also be degenerate with the temperature-density relationship (c.f. Equation 1). This effect was studied by McDonald et al. (2005a) in simulations, using an extreme model that considered only UV background contributions from highly-biased AGN, which maximizes the inhomogeneities. They concluded that while these UV fluctuations affect forest transmission statistics at z ∼ 4, the effect is small at z ≲ 3, the redshift range of our measurements.
Various observational and systematic effects could also, in principle, affect our constraints on γ. For example, our modeling of the BOSS spectral resolution assumes a Gaussian smoothing kernel, which might affect our constraints if this were untrue. However, in their analysis of the 1D forest transmission power spectrum, Palanque-Delabrouille et al. (2013) examined the BOSS smoothing kernel and did not find significant deviations from Gaussianity. There are also possible systematics caused by our simplified modeling of LLS and metal contamination in the data, for example our assumption of a single b-parameter for all LLSs and our neglect of very weak metal absorbers. However, we believe that the test performed in Figure 25 samples larger differences in the transmission PDF than those caused by our model simplifications; e.g., it seems unlikely that going from a single LLS b-parameter to a finite b-distribution could cause greater differences in the flux PDF than varying the single b-parameter by ±50%, as was done in Figure 25. As for continuum estimation, we carry out exactly the same continuum-fitting procedure on the mock spectra as on the real quasar spectra, which leads to no overall bias, since in both cases the resulting forest transmission field is forced to have the same overall transmission, F_cont. The only remaining uncertainty relates to the distribution of c′/c − 1, i.e. the per-pixel error of the estimated continuum, c′, relative to the true continuum, c. In reality the shape of this distribution could differ between the data and the mocks, whereas within our mock framework we could only explore overall rescalings of the distribution width. Again, we find it unlikely that differences in the transmission PDF caused by the true shape of the c′/c − 1 distribution could be as large as the effect of varying the width of the continuum error distribution that we have examined.
While we do not think that the effects described in the previous few paragraphs qualitatively affect our conclusion that the BOSS data are inconsistent with isothermal or inverted IGM temperature-density relationships (γ ≤ 1), when taken in aggregate these systematic uncertainties do weaken our formal 4 − 6 σ limits against γ ≤ 1 and need to be explicitly considered in future analyses.
Astrophysical Implications
How does this compare with other results on the thermal state of the IGM? McDonald et al. (2001) analyzed the transmission PDF from 8 high-resolution, high-S/N spectra and compared it with now-obsolete hydrodynamical simulations. They found the data to be consistent with a temperature-density relationship (TDR) with the expected value of γ ≈ 1.5. More recently, Bolton et al. (2008) and Viel et al. (2009) carried out analyses of the transmission PDF measured from a larger sample (18 spectra) of Lyα forest sightlines measured by Kim et al. (2007) and found evidence for an inverted TDR (γ < 1). Viel et al. (2009) found that at z ≈ 3.0 the temperature-density relation was highly inverted (γ ≈ 0.5), and remained so down to z ≈ 2.0, although at the lower redshifts the data were marginally consistent with an isothermal IGM. They suggested the difference between their results and those of McDonald et al. (2001) was due to the now-obsolete cosmological parameters and less-detailed treatment of intervening metals in the earlier study. However, Lee (2012) then pointed out that the values of γ measured from the transmission PDF are sensitive to continuum-fitting. Since continuum-fitting of high-resolution data generally involves manually placing the continuum at Lyα forest transmission peaks, which do not necessarily reach the true continuum, it is conceivable that continuum biases combined with underestimated jackknife error bars (e.g., Rollinde et al. 2013) could have led Bolton et al. (2008) and Viel et al. (2009) to erroneously deduce an inverted temperature-density relation (see Bolton et al. 2014 for a detailed discussion of this point). In our analysis we have fitted our continua using an automated process that is free from this continuum-fitting bias, although it does require an assumption about the underlying Lyα forest transmission, which we have marginalized over in our analysis.
Most recent measurements of the transmission PDF from high-resolution data have continued to favor an isothermal or inverted γ: Calura et al. (2012) analyzed the transmission PDF from a sample of z ≈ 3.3 − 3.8 quasars and also found an isothermal TDR at z = 3, although combining with the Kim et al. (2007) data drove the estimated γ to inverted values at z < 3. However, Rollinde et al. (2013) carried out a re-analysis of the transmission PDF from various high-resolution echelle data sets, which included significant overlap with the Kim et al. (2007) data. They argue that previous analyses have underestimated the error on the transmission PDF, and found the observed transmission PDF to be consistent with simulations that have γ ≈ 1.4 over 2 < z < 3; this discrepancy is probably also driven by a continuum estimation that differs from that of the Kim et al. (2007) measurement.
The use of other statistics on high-resolution spectra has, however, tended to disfavor an isothermal or inverted TDR. Rudie et al. (2012) analyzed the lower end of the b-N HI cutoff from individual Lyα forest absorbers measured in a set of 15 very high-S/N quasar echelle spectra, and estimated γ ≈ 1.5 at z = 2.4. Bolton et al. (2014) compared the Rudie et al. (2012) measurements to hydrodynamical simulations and corroborated their determination of the TDR slope. Garzilli et al. (2012) analyzed the Kim et al. (2007) sample and found that while the transmission PDF supports an isothermal or inverted TDR, a wavelet analysis favors γ > 1. Note, however, that the b-N HI cutoff and the transmission PDF are sensitive to different density ranges, with the PDF probing gas densities predominantly below the mean (e.g., Bolton et al. 2014).
Our results of γ ≈ 1.6 at z = [2.3, 2.6, 3.0] are thus in rough agreement with measurements that do not involve the transmission PDF from high-resolution Lyα forest spectra (with the exception of Rollinde et al. 2013). Our value of γ at z = 3 is somewhat unexpected because one expects a flattening of the TDR close to the He II reionization epoch at z ∼ 3 (Furlanetto & Oh 2008; McQuinn et al. 2009; but see Gleser et al. 2005; Meiksin & Tittley 2012), although γ = 1.3 is not strongly disfavored (√(∆χ 2 ) ∼ 2.6). Taken at face value, the TDR during He II reionization can be made steeper by a density-independent reionization and/or a lower heating rate in the IGM (Furlanetto & Oh 2008), which could be reconciled with an extended He II reionization event (Shull et al. 2010).
Our constraints on γ appear to be in conflict with the predictions of Broderick et al. (2012) and Chang et al. (2012), who elucidated a relativistic pair-beam channel for plasma-instability heating of the IGM from TeV gamma-rays produced by a population of luminous blazars. This mechanism provides a uniform volumetric heating rate, which would cause an inverted TDR in the IGM (Puchwein et al. 2012), since voids would experience a higher specific heating rate compared with heating by He II reionization alone. This picture has been challenged by the recent study of Sironi & Giannios (2014), who dispute the amount of heating this mechanism could provide, since they found that the momentum dispersion of such relativistic pair beams allows only ≪ 10% of the beam energies to be deposited into the IGM.
However, in this paper we have assumed relatively simple TDRs in which the bulk of the IGM in the density range 0.1 ≲ ∆ ≲ 5 follows a relatively tight power-law. We have therefore not studied more complicated T − ∆ relationships, e.g., with a spread of temperatures at fixed density (e.g., Meiksin & Tittley 2012; Compostella et al. 2013) that might be caused by He II reionization or other phenomena. It is therefore possible that such complicated TDRs could result in Lyα forest transmission PDFs that mimic the γ ≈ 1.6 power-law; this is something that needs to be examined in more detail in future work.
Future Prospects
Looking forward, the subsequent BOSS data releases will significantly enlarge our sample size, e.g., DR10 (Ahn et al. 2014) is nearly double the size of the DR9 sample used in this paper, while the final BOSS sample (DR12) should be three times as large as DR9. In particular, the newer data sets should be sufficiently large for us to analyze the transmission PDF and constrain γ during the epoch of He II reionization at z > 3. This would be a valuable measurement, since high-resolution spectra are particularly affected by continuum-fitting biases at these redshifts (Faucher-Giguère et al. 2008;Lee 2012).
The analysis of the optically-thin Lyα forest transmission PDF from these expanded data sets will have vanishingly small sample errors, and the error budget will be dominated by systematic and astrophysical uncertainties. At the high-transmission end, our uncertainties are dominated by the scatter of the continuum-fitting, which in turn is dominated by the question of whether our quasar PCA templates, derived from low-luminosity low-redshift quasars (Suzuki et al. 2005) and from high-luminosity SDSS quasars (Pâris et al. 2011), are an accurate representation of the BOSS quasars. This uncertainty should be eliminated in the near future by PCA templates derived self-consistently from the BOSS data (Nao Suzuki et al. 2014, in prep). The modelling of metal contamination could also be improved in the near future by advances in our understanding of how metals are distributed in the IGM (e.g., Zhu et al. 2014), although metals are a comparatively minor contribution to the uncertainty in our transmission PDF.
We also aim to improve on the rather ad hoc data analysis in this paper, in which we accounted for some uncertainties in our modelling by incorporating them into our error covariances (e.g., LLSs, metals, continuum errors), while F Lyα was marginalized over a fixed grid. In future analyses, it would make sense to carry out a full Markov Chain Monte Carlo treatment of all these parameters, which would rigorously account for all the uncertainties and allow straightforward marginalization over nuisance parameters.
Since this paper was initially focused on modelling the BOSS spectra, for the model comparison we used only simulations sampling a very coarse 3 × 3 grid in T 0 and γ parameter space, and were unable to take into account uncertainties in other cosmological (σ 8 , n s , etc.) and astrophysical (e.g., Jeans' scale, Rorai et al. 2013; or galactic winds, Viel et al. 2013b) parameters in our analysis. However, methods already exist to interpolate Lyα forest statistics from hydrodynamical simulations given a set of IGM and cosmological parameters (e.g., Viel & Haehnelt 2006; Borde et al. 2014; Rorai et al. 2013). In the near future we expect to do joint analyses using other Lyα forest statistics in conjunction with the transmission PDF, such as new measurements of the small-scale (k ≳ 0.2 s km −1 ) 1D transmission power spectrum (Walther et al. 2014, in prep.), the moderate-scale (0.002 s km −1 ≲ k ≲ 0.2 s km −1 ) transmission power spectrum in both 1D (e.g., Palanque-Delabrouille et al. 2013) and 3D (from ultra-dense Lyα forest surveys using high-redshift star-forming galaxies, Lee et al. 2014a,b), the phase angle probability distribution function determined from close quasar pair sightlines (Rorai et al. 2013), and others. Such efforts would require a fine grid sampling the full set of cosmological and IGM thermal parameters in order to ensure that the interpolation errors are small compared to the uncertainties in the data (see e.g., Rorai et al. 2013). Efforts are underway to utilize massively-parallel adaptive-mesh refinement codes (Almgren et al. 2013) to generate such parameter grids to study the IGM (Lukić et al. 2014). However, one of the findings of this paper is the importance of correct modelling of LLS, in particular partial LLS (10 16.5 cm −2 ≲ N HI ≲ 10 17.5 cm −2 ), in accounting for the shape of the observed Lyα transmission PDF. Since our hydrodynamical simulations did not include radiative transfer and cannot accurately capture optically thick systems, we had to add these in an ad hoc manner based on observational constraints that are currently rather imprecise. In the near future, we would want to use hydrodynamical simulations with radiative transfer (even if only in post-processing, e.g., Altay et al. 2011; Altay et al. 2013; Rahmati et al. 2013) to self-consistently model the optically-thick absorbers in the IGM. With the unprecedented statistical power of the full BOSS Lyα forest sample, this could provide the opportunity to place unique constraints on the column-density distribution function of partial LLS.
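To make the grid-interpolation idea concrete, here is a minimal Python sketch that bilinearly interpolates a pre-computed transmission-PDF grid to an arbitrary (T 0 , γ) point. The function name, the grid shapes and the choice of bilinear interpolation are illustrative assumptions, not the emulation schemes used in the works cited above.

import numpy as np

def interp_pdf(T0_grid, gamma_grid, pdf_grid, T0, gamma):
    # Bilinear interpolation of a pre-computed transmission-PDF grid to (T0, gamma).
    # pdf_grid has shape (len(T0_grid), len(gamma_grid), n_flux_bins).
    i = np.clip(np.searchsorted(T0_grid, T0) - 1, 0, len(T0_grid) - 2)
    j = np.clip(np.searchsorted(gamma_grid, gamma) - 1, 0, len(gamma_grid) - 2)
    u = (T0 - T0_grid[i]) / (T0_grid[i + 1] - T0_grid[i])
    v = (gamma - gamma_grid[j]) / (gamma_grid[j + 1] - gamma_grid[j])
    return ((1 - u) * (1 - v) * pdf_grid[i, j] + u * (1 - v) * pdf_grid[i + 1, j]
            + (1 - u) * v * pdf_grid[i, j + 1] + u * v * pdf_grid[i + 1, j + 1])

A finer simulation grid simply reduces the distance over which this interpolation has to bridge, which is why the cited works emphasise dense sampling of the parameter space.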
SUMMARY/CONCLUSIONS
In this paper, we analyzed the probability distribution function (PDF) of the Lyα forest transmitted flux using 3393 BOSS quasar spectra (with S/N ≥ 6) from Data Release 9 of the SDSS-III survey.
To rectify the inaccurate noise estimates in the standard pipeline, we first carried out a custom co-addition of the individual exposures of each spectrum, using a probabilistic procedure that also separates out the signal and CCD contributions, allowing us to later create mock spectra with realistic noise properties. We then estimated the intrinsic quasar continuum using a mean-flux regulated technique that reduces the scatter in the estimated continua by forcing the resultant Lyα forest mean transmission to match the precise estimates of Becker et al. (2013), although we had to make minor corrections to the latter to account for our different assumptions about optically-thick systems in the data. This allows us to measure the transmission PDF in the data, which we do at z = [2.3, 2.6, 3.0] (with bin widths of ∆z = 0.3), split into S/N subsamples of S/N = [6-8, 8-10, 10-25] at each redshift bin.
The second part of the paper describes the search for a transmission PDF model that describes the data, based on detailed hydrodynamical simulations of the optically-thin Lyα forest that sample different IGM temperature-density relationship slopes, γ, and temperatures at mean density, T 0 (where T (∆) = T 0 ∆ γ−1 ). Using these simulations we generate mock spectra based on the real spectra. These take into account the following instrumental and astrophysical effects: Lyman-Limit Systems: These are randomly added into our mock spectra based on published incidence rates (Ribaudo et al. 2011) and column-density distributions (Prochaska et al. 2010), including a large population of partial LLS (10 16.5 cm −2 ≤ N HI ≤ 10 17.5 cm −2 ) with a power-law distribution of roughly f (N HI ) ∝ N HI −2 (a sketch of this sampling follows the list of model components below). We assumed an effective b = 45 km s −1 for the velocity width of these absorbers.
Metal Contamination: We measure metal absorption from the 1260 Å < λ < 1390 Å restframe region of lower-redshift quasars at the same observed wavelength, and then add these directly into our mock spectra.
Spectral Resolution and Noise: Each mock spectrum is smoothed by the dispersion vector of the corresponding real spectrum (determined by the BOSS pipeline), and we apply corrections that bring the spectral resolution modeling to within ∼ 1% accuracy. We then introduce pixel noise based on the noise parameters estimated by our probabilistic co-addition procedure on the real data, which also achieves percent-level accuracy in modeling the noise.
Continuum Errors: Since we generate a full mock Lyα forest spectrum including the simulated quasar continuum (based on the continua fitted to the actual data), we can apply our continuum-estimation procedure on each mock to fit a new continuum. The difference between the new continuum and the underlying simulated quasar continuum yields an estimate of the continuum error.
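As a rough illustration of two of the mock-construction steps above (the partial-LLS column-density sampling and the resolution/noise modelling), the following Python sketch draws N HI values from an f (N HI ) ∝ N HI −2 power law between the quoted limits and smooths a toy spectrum with a per-pixel Gaussian kernel before adding noise. All array sizes, the dispersion values and the noise level are placeholders, not the values used in the paper.

import numpy as np

rng = np.random.default_rng(0)

# Draw partial-LLS column densities from f(N_HI) proportional to N_HI^-2 between
# the quoted limits, using inverse-CDF sampling of the pure power law with slope -2.
N_lo, N_hi = 10**16.5, 10**17.5
u = rng.uniform(size=100)
N_HI = 1.0 / (1.0 / N_lo - u * (1.0 / N_lo - 1.0 / N_hi))

def smooth_variable_kernel(flux, disp_pix):
    # Brute-force convolution with a Gaussian whose per-pixel sigma (in pixels)
    # is given by the dispersion vector, standing in for the pipeline resolution.
    n = flux.size
    x = np.arange(n)
    out = np.empty(n)
    for i in range(n):
        k = np.exp(-0.5 * ((x - i) / disp_pix[i]) ** 2)
        out[i] = np.sum(k * flux) / np.sum(k)
    return out

flux = np.ones(200)                          # placeholder mock transmission
disp = np.full(200, 1.5)                     # placeholder dispersion vector (pixels)
noisy = smooth_variable_kernel(flux, disp) + rng.normal(0.0, 0.05, 200)  # placeholder noise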
We then compare the model transmission PDFs with the data, using an error covariance that includes both bootstrap errors and systematic uncertainties in the model components described above. At z = 3.0 we find a discrepancy in the assumed Lyα forest mean transmission, F Lyα , between our data and that derived from Becker et al. (2013), which we argue is likely caused by a selection bias in the SDSS quasars used by the latter. We therefore marginalize out these uncertainties in F Lyα to obtain our final results. The models with an IGM temperature-density slope of γ = 1.6 give the best fit to the data at all our redshift bins (z = [2.3, 2.6, 3.0]). Models with an isothermal or inverted temperature-density relationship (γ ≤ 1) are disfavored at the √(∆χ 2 ) = [3.9, 6.3, 4.0] σ level at z = [2.3, 2.6, 3.0], respectively. Due to a degeneracy with our LLS model, we are unable to put robust constraints on T 0 , but we have checked that our conclusions on γ are robust to such systematics as can be considered within our model framework. There are other possible systematics we did not consider that could in principle affect our measurement, such as cosmological parameters (σ 8 , n s ) and astrophysical effects (galactic winds, inhomogeneous UV ionizing background), but we argue that these are unlikely to qualitatively affect our conclusions.
We co-add the individual exposures of each spectrum while simultaneously estimating the noise variance in terms of a parametrized model. We assume the noise in each pixel can be described by Equation (1), where Ŝ λi = S λi (1 − exp(−A 3 λ + A 4 )) (Equation 2). The true object flux F λ and A j=1−4 are noise parameters which we will determine given the individual exposure spectra f λ,i , sky flux estimates s λ,i , and calibration vectors S λ,i (which convert between detector counts and photons). σ RN,eff is the effective read noise, which we fixed to σ RN,eff = 12; this can be thought of as an effective number of pixels times the square of the true read noise of the CCD, and it is multiplied by the spectrograph dispersion σ disp (λ) to approximately account for the change in spot-size as a function of wavelength. Equation 2 parametrizes wavelength-dependent biases in the calibration vector.
We search for the model that best describes the multiple exposure spectra f λi , where our model parameters are A j from Eq. (1) and F λ is the true flux of the object. In what follows, we will outline a method for determining the posterior distribution P (A j , F λ |f λi ) using a Markov Chain Monte Carlo (MCMC) method. From this distribution, we can obtain both an accurate model for the noise via Eq. (1), and our final combined spectrum. The estimates for A j can also be used to self-consistently generate pixel noise in mock Lyα forest spectra.
The probability of the data given the model, or the likelihood, can be written as Equation (3), a product over exposures and pixels of Gaussian terms in (f λi − F λ ) with variances σ 2 λi . Note that the individual exposure data f λi are on the native wavelength grid of each CCD exposure, whereas the BOSS pipeline interpolates and then combines these individual spectra into a final co-added spectrum, defined on a wavelength grid with uniform spacing. Furthermore, flexure and other variations in the spectrograph wavelength solution will result in small (typically sub-pixel) shifts between the individual exposure wavelength grids. In Eq. (3) our model F λ must be computable at every wavelength of the individual exposures. We are free to choose the wavelengths at which F λ is represented, but this choice is a subtle issue for several reasons. First, note that we want to avoid interpolating the data, f λi , onto the model wavelength grid, as this would correlate the data pixels and require that we track covariances in the likelihood in Eq. (3), making it significantly more complicated and challenging to evaluate. Similarly, it is undesirable to interpolate our model F λ , as this would introduce correlations in the model parameters, making it much more difficult to sample them with our MCMC. Finally, note that F λ also represents our final co-added spectrum, so we might consider opting for a uniform wavelength grid, similar to what is done by the BOSS pipeline. Our approach is instead to determine the model flux F λ at each wavelength of the individual exposures f λi . Shifts among the individual exposure wavelength grids then result in a more finely sampled model grid. For the reasons explained above, we use nearest grid point (NGP) interpolation, so that the f λi are evaluated on the F λ grid (and vice versa) by assigning the value from the single nearest pixel.
In our MCMC iterations, we use the standard Metropolis-Hastings criterion to sample the parameters A j , with trials drawn from a uniform prior. For the F λ , we exploit an analogy with Gibbs sampling, which dramatically simplifies MCMC for likelihood functions with a multivariate Gaussian form. Gibbs sampling exploits the fact that, given a multivariate distribution, it is much simpler to sample from conditional distributions than to integrate over a joint distribution. To be more specific, the likelihood in Eq. (3) is proportional to the joint probability distribution of the noise parameters A j and F λ , but it is also proportional to the conditional probability distribution of the F λ at fixed A j . With A j fixed, the probability of F λ is then given by Equation (4), which is very nearly a multivariate Gaussian distribution for F λ with a diagonal covariance matrix. It deviates slightly from a Gaussian because the σ λi depend on F λ via Eq. (1). In what follows, we ignore this small deviation and assume that the conditional PDF of the F λ (at fixed A j ) is Gaussian. Given that Eq. (4) is a multivariate Gaussian with diagonal covariance, the Gibbs sampling of the F λ becomes trivial. Since Eq. (4) can be factored into a product of individual Gaussians, we need not follow the standard Gibbs sampling algorithm, whereby each parameter is updated sequentially holding the others fixed. Instead we need only hold A j fixed (since the likelihood is not Gaussian in these parameters), and we can sample all of the F λ simultaneously. This simplification, which dramatically speeds up the algorithm, is possible because the conditional distribution for F λ can be factored into a product of Gaussians, one for each pixel F λ , and thus the conditional distribution at any wavelength is completely independent of all the others.
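A minimal sketch of the "sample all F λ at once" step described above, assuming a Gaussian likelihood with a flat prior on F λ : at fixed noise parameters A j (and hence fixed per-pixel variances), the conditional posterior of each model-grid pixel is an independent Gaussian whose mean is the inverse-variance-weighted average of the exposure pixels assigned to it by NGP. The function name and array layout are illustrative; the A j update itself would be a standard Metropolis-Hastings step wrapped around this draw.

import numpy as np

def gibbs_draw_flux(f, sigma2, pix_index, n_model, rng=np.random.default_rng()):
    # One Gibbs draw of all model-grid fluxes F_lambda at fixed noise parameters A_j.
    # f         : exposure-pixel fluxes (all exposures concatenated)
    # sigma2    : their noise variances, evaluated at the current A_j
    # pix_index : nearest model-grid pixel (NGP) assigned to each exposure pixel
    # Assumes every model pixel receives at least one exposure pixel.
    w = 1.0 / sigma2
    sum_w = np.bincount(pix_index, weights=w, minlength=n_model)
    sum_wf = np.bincount(pix_index, weights=w * f, minlength=n_model)
    mean = sum_wf / sum_w          # inverse-variance-weighted conditional mean
    var = 1.0 / sum_w              # conditional variance (diagonal covariance)
    return rng.normal(mean, np.sqrt(var))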
"Physics"
] |
The evolution of a non-autonomous chaotic system under non-periodic forcing: a climate change example
Complex Earth System Models are widely utilised to make conditional statements about the future climate under some assumptions about changes in future atmospheric greenhouse gas concentrations; these statements are often referred to as climate projections. The models themselves are high-dimensional nonlinear systems, and it is common to discuss their behaviour in terms of attractors and low-dimensional nonlinear systems such as the canonical Lorenz '63 system. In a non-autonomous situation, for instance due to anthropogenic climate change, the relevant object is sometimes considered to be the pullback or snapshot attractor. The pullback attractor, however, is a collection of all plausible states of the system at a given time; it therefore does not take into consideration our knowledge of the current state of the Earth System when making climate projections, and is not very informative regarding annual to multi-decadal climate projections. In this article, we approach the problem of measuring and interpreting the mid-term climate of a model by using a low-dimensional, climate-like, nonlinear system with three timescales of variability and non-periodic forcing. We introduce the concept of an evolution set, which is dependent on the starting state of the system, and explore its links to different types of initial condition uncertainty and the rate of external forcing. We define the convergence time as the time that it takes for the distribution of one of the dependent variables to lose memory of its initial conditions. We suspect a connection between convergence times and the classical concept of mixing times, but the precise nature of this connection remains to be explored. These results have implications for the design of influential climate and Earth System Model ensembles, and raise a number of issues of mathematical interest.
Introduction
The theory of non-autonomous nonlinear dynamical systems has enjoyed great popularity over the past few decades, particularly within the climate modelling community [1]. This is because complex global climate models, or rather Earth System Models (ESMs), which are widely used to make projections of the 21st century and to support the IPCC's climate assessment reports, are subject to non-periodic, climate-change-like forcing, which inevitably breaks their autonomy. These models are also high-dimensional, multi-component, multi-scale, chaotic nonlinear systems and, as a consequence, any forward computation (that is to say, projection of the future within the model) is highly sensitive to the finest details of the initial state, making climate prediction a non-trivial task.
Uncertainty in the state from which to initialise ESMs is known as initial condition uncertainty (ICU). The sensitivity of such climate system models to ICU has been well known since the early 1960s [2] and has led to the development of ensemble weather forecasting [3]. Its relevance for climate forecasting is also increasingly being recognised [4,5,6,7], as is the necessity of using large climate initial condition ensembles (ICEs) to characterise ICU [8]. Nevertheless, it is often assumed that the uncertainty arising from ICU can be addressed by taking statistics from a single, long trajectory, which, it is assumed, would over time explore all possible states in phase space. In a stationary system this is essentially an ergodic [9], or "kairodic" [8], assumption: that averages and distributions of states over long periods (e.g. 30 years for the IPCC) are representative of any particular instant, with the caveat that it would require infinite time for convergence. However, under non-periodic, climate-change-like forcing, such as increasing atmospheric greenhouse gas concentrations, the system is not ergodic, and hence cannot be studied in this way [9].
The non-autonomous nature of ESMs under anthropogenic, non-periodic climate change forcing means that, in general, such a system does not possess an attractor in the classical sense, because we cannot take the asymptotic limit as time tends to infinity. Recent years have seen the emergence of a number of approaches from the mathematical community to address this issue [1,10]. Central to these approaches is the idea that a model's climate can be formally seen as an evolving probability distribution constructed from an ensemble of simulations initialised from different ICs in the very remote past. This can be thought of as multiple "evolutions" of the same Earth System (that is to say, they all obey the same physical laws), with each one starting from a different initial point [10].
For a wide class of non-autonomous systems, it has been shown that, in this "parallel climate realisations" approach, the correct concept to describe a time-dependent set in the phase space as the "limit" of a set of ICs is the pullback attractor [11,12,13,14,15]. Many climate models (including the one discussed here) satisfy some form of energy balance, which typically implies the core structural hypotheses required to establish the existence of pullback attractors. At any instant in time, the system's 'climate' can therefore be taken as an instantaneous slice of the pullback attractor; this slice is the so-called snapshot attractor. Furthermore, in the same way that the (pullback) attractors are some form of "limit" for a set of ICs, the initial distribution of ICs might converge to a time-dependent "pullback" probability measure supported on the pullback attractor. Invariant and pullback measures are typically not unique, but here we are specifically interested in so-called natural or physical pullback measures, which emerge as the limit of smooth IC distributions [16]. However, while mathematically appealing, these concepts are of limited use in supporting the construction of climate change ensembles of ESMs, and therefore in making climate projections and ultimately supporting society. By definition, the pullback attractor depends on initialisation infinitely far in the past. Generally, this problem can be overcome by noting that in most cases we can assume that mixing happens on finite time scales, which, however long, can be taken as providing a convergence time: the time taken for the ensemble dynamics to forget its initial state. We do not therefore require infinitely long simulations, only sufficiently long ones, where "sufficient" is defined by this convergence time. Nevertheless, this means that the pullback attractor is only applicable for long-term climate analyses, longer than the convergence time. This convergence time can be small (around 5 years) for a simple conceptual low-dimensional atmospheric model system [17] but rather long (over 150 years) even for fast-mixing atmosphere variables in an intermediate-complexity ESM [18]. In other words, the pullback attractor approach might give us a good description of our idealised model system's climate by the end of the next century (i.e., in about 150 years' time), but it cannot tell us how we will get there.
This means that, while the pullback attractor represents the internal variability of the mathematical system on timescales beyond the convergence time, it is not the relevant object to represent climate on shorter timescales because it does not reflect knowledge regarding the current state of the climate system.On shorter timescales, the representative distribution is more constrained.The set of trajectories that make up this constrained distribution is a subset of those making up the pullback attractor, but it is not clear how the two distributions relate to each other.
Here we consider how to quantify this initial response and how such forward distributions can depend on both our knowledge of the current state and the characteristics of the non-autonomous forcing. These issues are critical to understanding what is required to make climate projections, even in the perfect model scenario [19], and to characterising the behaviour of non-autonomous, non-periodic, nonlinear systems more broadly. To do so, we use a low-dimensional system with characteristics of an ESM [20]. The concept of an evolution set is introduced to describe the set on which a more constrained distribution would be supported. We also introduce the concept of an evolution distribution to describe the more constrained distribution, and we consider the convergence time for this evolution distribution to become indistinguishable from the pullback invariant distribution.
The paper is divided as follows. In Section 2, we describe the model used in this study, as well as the experiments performed. In Section 3, we elaborate on the concept of the pullback attractor, demonstrate it with examples from our model, and define and illustrate the convergence time for different variables in a stationary situation. In Section 4, we approach the transient climate change problem in combination with some hypothetical, highly-constrained knowledge of the initial state, so-called micro ICU [4,7]. In Section 5 we consider situations where the initial state is not well constrained, so-called macro ICU [4,7], while revisiting the concept of convergence time in the non-autonomous situation. In Section 6, we explore the influence of the forcing on the evolving distributions. We then conclude the paper with Section 7, where we discuss further questions and future directions for this study.
the importance of this concept is demonstrated, showing that such a proof would be worthwhile.
Model
We use a low-dimensional coupled ocean-atmosphere model, which is taken as a conceptual representation of a climate model.In this model, the ocean domain is presented as two connected but distinct basins, say, one representing the ocean at high latitudes and another representing it at low latitudes in the same hemisphere, with its dynamics given by the Stommel '61 (S61) model [21].The S61 model is based on the free convection controlled by density differences maintained by heat and salt exchange between the reservoirs.The atmosphere is represented by a simplified description of its large scale circulation in one hemisphere, given by the Lorenz '84 (L84) model [22,23].The L84 model is based on the interaction of the westerly, mid-latitude wind current and large scale, pole-ward eddies.The L84 model and the S61 model form the coupled ocean-atmosphere model used in this study, which we shall refer to as Lorenz 84-Stommel 61 (L84-S61) model.
Mathematically, the L84-S61 model consists of five coupled ODEs, Equations (2.1) to (2.5), with auxiliary definitions in Equations (2.6) to (2.8). In these equations, the variables X, Y, Z represent the high-frequency, atmospheric variables from the L84 model: X represents the intensity of the symmetric westerly wind, while Y and Z are the Fourier amplitudes characterising a chain of large-scale eddies, which transport heat towards the pole at a rate proportional to their amplitude. The variables T, S are the slow ocean variables as in the S61 model: T and S denote the pole-equator temperature and salinity differences, respectively. The function f (T, S) represents the strength of the thermohaline circulation (THC), while F 0 (t) is the forcing due to seasonal variation in the heating contrast between the pole and equator. The latter corresponds to an average forcing equal to F m , which varies seasonally according to a cosine function with amplitude M , and can be forced towards another value at a rate H. All the variables in the model are non-dimensional. The model parameters and their reference values are described in Table S.1, except for the forcing function F 0 (t), which is presented separately in Table 1.
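For orientation, since the coupled equations themselves are not reproduced in this extraction, the canonical (uncoupled) Lorenz '84 atmospheric equations in standard notation read dX/dt = −Y² − Z² − aX + aF, dY/dt = XY − bXZ − Y + G, dZ/dt = bXY + XZ − Z, where a, b, F and G are the usual L84 parameters. The coupled L84-S61 system of Equations (2.1) to (2.8) additionally contains the Stommel '61 ocean variables T and S and coupling terms between the two components, which should be taken from the original equations and Table S.1 rather than from this sketch.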
While t denotes the non-dimensional time, we note that the characteristic time for this model is 5 days, and hence one time unit in this model corresponds to 5 days, as originally assumed by Lorenz (1984) [22]. We refer to this as 1 LTU [8].
The L84-S61 model is a nonlinear, non-autonomous system of ODEs [24]. Using vector notation, this system can be written as dX/dt = F(X, t) (Equation 2.9), where X = (X, Y, Z, T, S), and F(X, t) is a time-dependent, nonlinear vector function of X given by the right-hand side of Equations (2.1) to (2.5). Its solutions are bounded, i.e. ||X|| < C, with C a positive constant. The system is conditionally dissipative, i.e. ∇ • F(X, t) < 0 under certain conditions, meaning that finite-volume attractors might exist. Despite being a simplified representation of the ocean-atmosphere system, the L84-S61 model retains some of the main characteristics of a state-of-the-art ESM: it is nonlinear, multiscale, multicomponent, complex and chaotic. Hence, conceptual results obtained from this model can be insightful, if not informative, of general properties of ESMs. However, contrary to complex ESMs, which are high-dimensional (normally with billions or even trillions of degrees of freedom), the L84-S61 model consists of only 5 ODEs, making it an affordable model to study (extensively) computationally, in particular allowing for very large ensembles to be run.
Numerical solver, parameter values and ensemble design
The L84-S61 model is solved using the 4th-order Runge-Kutta method, with time step 0.01 LTU (1.2 hours). The output frequency is 0.2 LTU (1 day). All results, whether single trajectories or ensembles, are presented as 1-year averages (see the note at the end of this subsection).
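A bare-bones sketch of this integration set-up in Python: classical RK4 with a step of 0.01 LTU and output every 0.2 LTU (20 steps). The right-hand side rhs is a placeholder for the L84-S61 vector field F(X, t) of Equations (2.1) to (2.5), which is not reproduced here; the annual averaging would be applied to the stored output afterwards.

import numpy as np

def rk4_step(rhs, x, t, dt):
    # One classical 4th-order Runge-Kutta step for dX/dt = rhs(X, t).
    k1 = rhs(x, t)
    k2 = rhs(x + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = rhs(x + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = rhs(x + dt * k3, t + dt)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def integrate(rhs, x0, t0, n_steps, dt=0.01, out_every=20):
    # dt = 0.01 LTU, output stored every 0.2 LTU, as quoted in the text.
    x, t, out = np.asarray(x0, dtype=float), t0, []
    for i in range(n_steps):
        x = rk4_step(rhs, x, t, dt)
        t += dt
        if (i + 1) % out_every == 0:
            out.append(x.copy())
    return np.array(out)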
All simulations use the parameter values shown in Tables 1 and S.1, except for some simulations in Sections 3 and 6, in which H = 0 and 0.0025, respectively. Regarding the forcing, the values presented in Table 1 mean that the forcing oscillates seasonally around an average value F m = 7 with seasonal amplitude M = 1, while being driven towards another value at a rate of H = 0.01 units per year, or 1 unit per 100 years.
(Note on the 1-year averages: strictly speaking, annual averages are not a solution to the L84-S61 system's IVP. However, for the concepts and computational results presented in this paper, this difference is of little importance. In fact, for temperature and salinity, the difference between annual averages and actual values is small, and hence the latter can be used instead as proof of concept. For the atmosphere, it only matters if we were to look at observables where the annual average of the observable is very different from the observable of the annual average; such a function would have to be nonlinear to begin with. The only point where the difference potentially matters is in the convergence time for the atmosphere.)
The ensembles run in this work are designed as follows. Given an initial condition X 0 = (X 0,1 , X 0,2 , X 0,3 , X 0,4 , X 0,5 ) in the phase space, we randomly sample another 1,000 initial conditions such that, for each dependent variable, the sample is normally distributed around X 0,j with variance given by σ X0,j , where σ X0,j is two orders of magnitude lower than X 0,j . Hence, each ensemble has 1,001 members. The details of each individual experiment, including duration and parameter values, can be found in the Supplementary Materials.
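A sketch of this ensemble construction, assuming "two orders of magnitude lower" can be read as a relative spread of 0.01 |X 0,j | per variable; the exact per-variable values used in the paper are given in its Supplementary Materials, so the relative factor below is an assumption.

import numpy as np

def make_micro_ice(x0, n_members=1000, rel_sigma=0.01, seed=0):
    # Central IC plus n_members draws, each variable normally distributed
    # around x0[j] with a spread two orders of magnitude below |x0[j]|.
    rng = np.random.default_rng(seed)
    x0 = np.asarray(x0, dtype=float)
    members = rng.normal(loc=x0, scale=rel_sigma * np.abs(x0),
                         size=(n_members, x0.size))
    return np.vstack([x0, members])   # 1,001 members in total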
The pullback attractor and convergence time
The pullback attractor [11,12] is a mathematical object that generalises the concept of an attractor to non-autonomous dynamical systems. The approach rests on the idea that, for most non-autonomous systems, there exists a time-dependent object in the phase space to which trajectories that started in the infinite past will converge. Such an object therefore provides a natural distribution for the internal variability of the system.
A formal definition can be presented as follows. Let us denote the solution to the initial value problem (IVP) given by Equation (2.9) and X(t 0 ) = X 0 by X(t; X 0 , t 0 ), and the corresponding phase space by X. A set A = A(t) in the phase space is said to "pullback" attract a set, or ensemble, of initial points D if dist X (X(t; D, t 0 ), A(t)) → 0 as t 0 → −∞ for all t, where dist X (•, •) denotes the Hausdorff semi-distance between sets in the phase space. The time-dependent set A(t), if also invariant with respect to the dynamics, is called the pullback attractor. When pullback attractors exist, there might also exist an invariant probability distribution supported on this set, the so-called pullback invariant measure (or distribution), which we will generically denote by µ A [14]. An explicit, rigorous computation of both A(t) and µ A is only viable for very simple dynamical systems, and is usually not possible for most nonlinear ones, including L84-S61. However, for non-conservative systems, a more practical approach is possible. This relies on the fact that, in general, a solution (or ensemble) starting near or on the attractor takes only a finite time to lose most of its dependency on the initial condition and run through (span) most of the attractor. The time for this convergence is dependent on the system and its relevant time scales, and can also be estimated numerically, as we shall see below.
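For finite ensembles (point clouds), the Hausdorff semi-distance in the definition above can be estimated as the largest distance from a point of one set to the other set. A small Python sketch using a k-d tree (the function name is ours):

import numpy as np
from scipy.spatial import cKDTree

def hausdorff_semidistance(A, B):
    # dist_X(A, B): largest distance from a point of A to the set B,
    # with A and B given as arrays whose rows are phase-space points.
    d, _ = cKDTree(B).query(A)   # distance from each point of A to its nearest neighbour in B
    return d.max()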
Figures 1(a-c) illustrate this convergence to the pullback attractor for some of the variables of L84-S61.There, the pullback attractor and its natural distribution are computed from a micro ICE normally distributed around a central IC point X 0 in the attractor, with variance σ X0 being O(10 −2 ) for atmosphere variables, O(10 −3 ) for the ocean temperature and O(10 −4 ) for ocean salinity (as per Daron and Stainforth, 2013 [8]; see also Supplementary Materials).Note that, soon after the simulation starts, the initial micro cluster of trajectories disperses quickly and cover most of the attractor within a few years.The exact number of years depends on the variable of consideration though.For example, the time taken is visibly long for the ocean temperature (Figure 1(a)), and even longer for the salinity (Figure 1(b)), but very short for the fast, atmospheric variable X (Figure 1(c)).The latter is in line with what has been reported by Drotós et al. (2015) [17] and Tél et al. (2020) [10] for the L84 atmospheric model.
Convergence time
The convergence time, which we shall denote as t conv , can be loosely defined as the time taken by a localised ensemble to become indistinguishable from the pullback attractor. A statistically formal way to compute t conv is by comparing, at each instant of time, the distribution of interest with a snapshot of the numerically estimated pullback invariant distribution, via a hypothesis test using some suitable statistic, where the null hypothesis H 0 is that both distributions come from the same population. If we define a function of time h such that h(t) = 1 if the null hypothesis is rejected at time t and h(t) = 0 if not rejected, then we can define t conv from h(t) via Equation (3.2); we normalise t conv by a constant K so that it is expressed in years rather than LTU. In this definition, there might exist later times t for which h(t) = 1, which might put in question whether convergence has been achieved. To avoid that, a statistically robust way to define t conv would be to take the distribution of h(t) in the time interval of consideration, repeat the experiment several times, and build the distribution of h(t) values for all those experiments, which can then be translated into a distribution of t values, with associated uncertainties. This resulting distribution should cluster around a value of t that would be taken as t conv .
Both ways of estimating t conv are clearly dependent on the system of interest, as well as on the initial condition and the dynamical variable in question. Crucially, in practice, when dealing with computationally-generated distributions, the computation is also dependent on the size of the ensemble. There is no unique way of doing it, and hence t conv also depends on the test used, as well as on the significance level chosen. There are several ways to test this hypothesis [28]. In this work, we use a two-sample Kolmogorov-Smirnov (KS) test [29]. For two empirical distributions P 1,n1 (x) and P 2,n2 (x) of sizes n 1 and n 2 respectively, the KS statistic is D(P 1,n1 , P 2,n2 ) = sup x |P 1,n1 (x) − P 2,n2 (x)|, and the null hypothesis H 0 should be rejected at significance level α if D(P 1,n1 , P 2,n2 ) > C n1,n2,1−α , where C n1,n2,1−α can be found in [30]. (For convenience, in this work we use MATLAB's built-in function kstest2, which rejects the null hypothesis based on the p-value rather than by comparing the test statistic with a reference value.)
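A hedged sketch of this convergence-time estimate, using SciPy's ks_2samp in place of MATLAB's kstest2. Defining t conv as the first time after the last rejection of H 0 is one simple reading of the h(t) construction; the exact normalisation of Equation (3.2) is not reproduced in this extraction, so the function below is illustrative only.

import numpy as np
from scipy.stats import ks_2samp

def convergence_time(ensemble_series, reference, alpha=0.05, snapshots_per_year=1):
    # ensemble_series: array of shape (n_times, n_members) for one variable
    # reference: 1D sample drawn from the reference (pullback) distribution
    # h(t) = 1 when the two-sample KS test rejects H0 at level alpha
    h = np.array([ks_2samp(snapshot, reference).pvalue < alpha
                  for snapshot in ensemble_series])
    rejected = np.flatnonzero(h)
    last_rejection = -1 if rejected.size == 0 else rejected[-1]
    return (last_rejection + 1) / snapshots_per_year   # first time after the last rejection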
We illustrate this approach by computing t conv for the spinup distributions shown in Figures 1(a-c). To do so, we test H 0 with significance level α = 0.05, where the reference distribution is given by a 100,000-year single-trajectory solution starting from the same central IC (Supplementary Materials). This is presented in Figures 1(d-f), which show that t conv is 90 years for salinity, 50 years for temperature, but only 19 years for the atmosphere. The latter is substantially higher than the value reported by Drotós et al. (2015) [17], who found a t conv of only 5 years for the L84 system, suggesting that the coupling with slow-mixing variables increases the relaxation period for the atmosphere variables in this context.
An alternative way to define a convergence time would be to assume that the statistic, in this case the KS statistic D, initially decays exponentially, such that D(t) ≈ D(t 0 ) exp[−τ (t − t 0 )]. In this case, such a convergence time could be taken as 1/τ . The characteristic decay exponent can be estimated by looking at the logarithm of D, which is presented in Figure S.1, and computing the slope of the straight line it approaches in the first few years of decay. This gives τ equal to 0.0378, 0.0264 and 0.1221 for T , S and X respectively. These correspond to estimated times of approximately 26 years, 38 years and 8 years respectively, which is roughly half the values of t conv estimated via Equation (3.2). Hence, although quantitatively different, both approaches provide very similar information.
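And a sketch of this alternative estimate: fit the initial exponential decay of the KS statistic and take 1/τ as the convergence time. The fitting window (first few years) is an assumption, and the function name is ours.

import numpy as np

def decay_rate(D, t, fit_years=10.0):
    # Fit log D(t) over the first fit_years of the record and return tau,
    # so that 1/tau gives the alternative convergence-time estimate.
    mask = t <= t[0] + fit_years
    slope, _ = np.polyfit(t[mask], np.log(D[mask]), 1)
    return -slope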
Caveats with the pullback attractor approach
The pullback approach has been proposed as an alternative way of defining climate: it gives a mathematically sound measure of the system's internal variability, and being time dependent, provides both a natural set of plausible states at each instant of time -the snapshot attractorand a natural probability distribution of events at each instant of time -the pullback invariant distribution.This has been discussed and illustrated by several authors [1,31], and has proven to be a more rigorous and useful definition of climate for long-term (e.g.IPCC-like) future scenarios.
This approach comes with some caveats though.By definition, the computation of such object requires an ensemble to be initialised in the infinite past, which is impractical from the computational point of view.In general, it is possible to approximately compute the attractor provided that the system is run for longer than t conv .But again, this is problematic, particularly in climate modelling: on one hand, some components of the Earth System evolve on long timescales of hundreds to thousands of years; on the other hand, anthropogenic, non-periodic forcing started only a couple of centuries ago.
Another caveat is that, while the pullback attractor represents all the internal variability of the mathematical model, it is known that only a few of these states can be representative of today's climate.Therefore, using the pullback attractor to measure "tomorrow's" climate might include a large number of unrealistic states -they are part of the internal dynamics of the model but not attainable within that time frame for a given initial condition.This will be discussed in the next section.
Micro initial condition ensembles and the evolution set
Although the pullback attractor provides a useful, mathematically sound definition of long-term climate (beyond the convergence time), it is less useful in quantifying the variability in the short to mid term (months to years, or even decades), when the intermittency of the dynamics is still dependent on the initial state of the system. This is because it overestimates the forecast uncertainty by allowing all possible states within the attractor, including those that do not reflect our knowledge of the present state of the system.
For example, the snapshot attractor for a given day (say "today") corresponds to a large range of possible values. But given sufficient information, it might be that only one of those states is possible (up to a certain level of residual uncertainty), so many of the states on the snapshot attractor are unrealistic given our knowledge of "today's" system. We also know that the climate today constrains the climate of tomorrow over the range of times for which the system still carries memory of the initial state, which therefore excludes a large portion of the pullback attractor. This means that any snapshot of the pullback attractor over-quantifies the variability and distorts the probability of events in the short and mid term. This is illustrated in Figure 2, where we present the evolution of a micro ICE under climate change next to the evolution of the pullback ICE of Figure 1. This side-by-side comparison (see also Figure S.2 in the Supplementary Materials) shows that, in the first few decades, the pullback natural distribution, which is intrinsic to the mathematical system, over-represents climate uncertainty. Note that the evolution of the micro ICE is initially constrained to a smaller set, which evolves over time and seems to converge to the pullback attractor A(t) only after a few decades. For this reason, we name this the evolution set E(t). For a given non-autonomous chaotic system, this set depends solely §§ on the initial state X 0 , on the initial micro-uncertainty given by the variance σ X0 , and on the initial time t 0 . Therefore, we shall denote the evolution set as E(t; X 0 , t 0 , σ X0 ). Some basic properties of this set are straightforward. First, its existence is guaranteed by existence and uniqueness of solutions to the IVP for (2.9). Second, by definition, we have that E(t 0 ) = D X0 , the ICE set. Also by definition, we have that E(t; X 0 , t 0 , σ X0 ) −→ A(t) as t 0 −→ −∞. It also follows that, for an initial ensemble set within the pullback attractor, i.e. D X0 ⊆ A(t 0 ), we have that E(t) ⊆ A(t) for all t ≥ t 0 . In practice, when estimating both E(t) and A(t) numerically, these properties do not hold exactly, and the design of the ensemble becomes quite important. We also note that, associated with E(t), this numerical example suggests the existence of a distribution µ E supported on this set, which we will assume to be true. Its relationship to the pullback invariant distribution µ A is less clear.
Climate modellers are familiar with the idea of exploring ICU using micro ICEs. Nevertheless, these are in general taken simply as an exploration of uncertainty, rather than as the object we are trying to characterise. Here, we bring together the ideas of the pullback attractor with the methods applied in climate modelling and produce an attractor-like object that essentially represents the future climate under climate change, which we call the evolution set.
The formalism above allows us to revisit the content of the previous section and reframe it in terms of "forward" convergence ¶¶ . There, the existence of a convergence time t conv might suggest that E(t) ≈ A(t) almost everywhere for t > t conv . Hence, the question is: does that really happen? Which conditions are necessary to prove that, for t ≫ t 0 : (1) E(t) and A(t) are sufficiently close *** ; (2) µ A approximates µ E as t −→ t conv ? If such statements are true, the pair (A, µ A ) would hold key mathematical information regarding future climate.
In the next section, we will explore some features of the evolution set by looking at its dependence on the initial conditions.
Macro initial condition uncertainty
Another issue related to short-to-mid-term climate prediction is the level of uncertainty in the actual state of the system in some variables. While small uncertainty can be covered by a micro IC ensemble, the uncertainty in the initial state of some variables might be of the same order of magnitude as the typical values of the variable itself, for instance if the initial state is based on a model spinup, or derived from the interpolation of sparse datasets, or even because of a lack of data.
From a climate prediction point of view, these are relevant: macroscale variations in ocean quantities such as temperature and salinity, and in atmospheric ICU, have already been linked to decadal variations in regional climate in the Northern Hemisphere [7]. The question is therefore how such macro uncertainty impacts the evolution of the system, via its evolution set E(t).
§§ In practice, any numerical estimate of E will also depend on the size and shape of the initial ensemble.
¶¶ Here, we note again that pullback attractors are not forward attractors in general, although, under certain conditions, a pullback attractor can satisfy a weak form of forward convergence. The interested reader can find a detailed exposition of this in Section 9.5 of Kloeden and Yang (2020) [12].
* * * Note that two sets might be infinitely close but disjoint.For instance, the sets of rational and irrational points within the interval [0, 1] are disjoint but their closure (with respect to the standard topology) equals the full interval.
Macro ICU from a control simulation (single trajectory)
One of the sources of macro ICU is the potential to initiate climate ensembles from different states, including ocean states, from a long control run with an ESM. To illustrate this, we chose four different points on the attractor, all corresponding to a point on an existing trajectory after an initial 5,000-year spinup. For simplicity, we name these ICs IC 1, IC 2, IC 3 and IC 4, with the corresponding micro ICEs referred to as ICE 1, ICE 2 and so on. Note that those ICs differ in all five dependent variables, and are illustrated in Figure 3 for the ocean variables. All ensemble distributions have the same variance, as noted in Section 2.2.
Figure 4 shows that, for the ocean variables, the dependence on the initial condition is significant. A first remark is that all four micro-initialised distributions differ substantially from the pullback invariant distributions shown in Figures 2(a,b). Further to that, they also differ among themselves. For instance, in Figure 4(b), the micro ICE centred at IC 2, starting from a low temperature, tends to decrease for a few years before increasing again, despite the monotonic increase in forcing. This is not followed by the micro ICE centred at the nearby IC 1, shown in Figure 4(a), which spreads out very quickly after initialisation and increases visually monotonically from the beginning. In the case of IC 2, the decrease in temperature is accompanied by an initial increase in salinity, as shown in Figure 4(f), which is then followed by a steady decrease. Nevertheless, in all four cases, the distributions seem to coincide after a few decades, becoming visually indistinguishable from each other and from the pullback invariant distribution (see also Figure S.3).
This macro ICU dependence has important consequences for climate prediction in seasonal to decadal time scales.A common practice in climate modelling is to start a simulation from initial conditions obtained from a spinup "control" run.This control run allows one to find the system's attractor, but does not resolve the uncertainty about where in the attractor one should start from.As we have seen, different micro ICEs could lead to different transient distributions, representing a different climate in the short-to-mid term -even if the initial condition is obtained from the same solution after spinup.
Macro ICU that reflect uncertainty in one variable
Another source of macro ICU arises when initialising the model from observations, in which case the uncertainty in some variables could be orders of magnitude higher than in others. As an example, if in-situ data are being used to initialise the model, one might have measurements for one variable but not for others, for instance in the case of defective equipment (e.g. via biofouling). In this case, the initial state of that variable is subject to macro uncertainty.
This scenario is illustrated in Figure 5, where we highlight four possible initial conditions, named IC 5, IC 6, IC 7 and IC 8, which are identical in the atmosphere variables X, Y, Z, but may differ in temperature and salinity (Supplementary Materials). For instance, IC 5 and IC 6 have identical temperature but differ in salinity; the converse is true for IC 6 and IC 7, and so on.
The sensitivity to macro ICU with respect to a single variable is illustrated in Figure 6, which shows the results for micro ICEs starting from the ICs indicated in Figure 5. Note that macro uncertainty in salinity does not seem to alter the evolution set and its distribution, as indicated in Figure 6(a,b). On the other hand, macro uncertainty in temperature has a significant effect on salinity, as shown in Figures 6(e,g): the evolution set and its distribution for salinity are significantly different, despite the ensembles being centred around the same initial salinity state (see also Figure S.4).
This sensitivity of both E and µ E to macro ICU in a single, slow variable is remarkable, and suggests that a proper quantification of the uncertainty in future climate projections requires an assessment of macro ICU as well.
Convergence time and macro ICU
As macro ICU impacts the evolution set and its distribution, one might ask whether the convergence time t conv is also affected by it. Here we revisit the concept of convergence time and show how it can vary in a macro ICU scenario. We illustrate this by computing t conv , using Equation (3.2), for the eight micro ICEs shown in Figure 3 and Figure 5. The resulting t conv and the corresponding evolution of the KS statistics are shown in Figure 7 for the ocean variables.
When starting from a control run trajectory (as per Figure 3), the resulting t conv can vary dramatically. This is shown in Figures 7(a,c). First, IC 1 gives a short t conv for both temperature and salinity, of 14 years and 46 years, respectively. This t conv increases substantially from IC 1 to IC 2, to 30 years for temperature and 70 years for salinity. For IC 3 and IC 4, while the t conv for temperature remains of the same order (34 and 32 years, respectively), it still varies substantially for salinity, resulting in a t conv of 101 years for IC 3 and 92 years for IC 4. We also note that the ordering of t conv is the same for both variables in this case: IC 1 has the shortest t conv for both temperature and salinity, IC 2 is second, and so on.
When starting from chosen values within the attractor (as per Figure 5), the results are rather different. This is shown in Figures 7(b,d). In particular, both the variability and the ordering of t conv differ from those shown in Figures 7(a,c). For instance, the variability in t conv is 27 to 34 years for temperature (instead of 14 to 34 years) and 69 to 112 years (instead of 46 to 101 years) for salinity. Also, the shortest t conv for temperature (27 years) is given by IC 5, while the shortest for salinity is given by IC 6 (69 years).
The starkest contrast is observed when comparing IC 6 and IC 8.Note that, while both ICs have the same value of salinity, their respective micro ICEs have a t conv that differs by 43 years, highlighting the impact that macro ICU in a single variable (in this case ocean temperature) can have in other variables.
Herein et al. (2016) [18], who looked at uncertainty using an intermediate-complexity model, noted that t conv did not change for micro ICEs starting at different instants of time. However, their micro ICE was generated by perturbing only one variable (the surface pressure field), keeping the others equal for all ensemble members, while the results correspond to another variable, the annual mean surface temperature at a single grid point of the model (what they called a small scale) located within continental Europe. While treated there as a simple approximation, the results presented here for a much simpler model suggest that uncertainty in other variables can have a significant impact on the distribution and its convergence time, as the response to the initial uncertainty in slow variables can take a long time to manifest.
How the rate of change in forcing affects the uncertainty of climate predictions
In the context of climate change, the system is under an external forcing (e.g. a change in temperature due to anthropogenic carbon dioxide emissions) that is both dynamic and uncertain. Those uncertainties are usually investigated via scenarios, which in the context of the IPCC have been shown to dramatically affect the climatology predicted by CMIP models. In the context of this work, this external forcing uncertainty may also affect the evolution of an ICE as a distribution. We illustrate this by looking at the evolution of the micro ICE centred at IC 2 (shown in Figure 3) but under a slower rate of "climate" change. Here, we reduce the rate of change in forcing H to a quarter of its value, from H = 0.01 to H = 0.0025, meaning that it now takes 400 years for the baseline forcing F_m to increase by one unit. The resulting time series are shown in Figure 8 for ocean temperature and salinity, where we also include the H = 0.01 time series for reference. Changing, or in this case reducing, the speed of climate change has important effects on the resulting distributions. While the ICE distribution in Figure 8(c) shows a mildly monotonic decrease, Figure 8(a) shows that this behaviour, while preserved, is much more pronounced under a weaker forcing. As this slowly changing distribution evolves, it again shows a distinct behaviour at around year 120: the distribution suddenly gets broader, with the temperature of several ensemble members decreasing sharply. About 40 to 50 years later, the ensemble narrows again and regains a shape akin to that of Figure 8(c). This behaviour is mirrored by the salinity distribution, as shown in Figures 8(b,d).
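A minimal sketch of the two forcing protocols follows, assuming a ramp-then-hold form for the baseline forcing F_m that is consistent with the description above (the exact functional form used in the experiments is an assumption here):

```python
import numpy as np

def baseline_forcing(t_years, H, f_start=7.0, f_end=8.0):
    """Assumed ramp-then-hold protocol: F_m grows linearly at rate H
    (units of forcing per year) from f_start until it reaches f_end,
    then is held constant (climate change followed by a stabilised climate)."""
    ramp_years = (f_end - f_start) / H  # 100 y for H=0.01, 400 y for H=0.0025
    return np.where(t_years < ramp_years, f_start + H * t_years, f_end)

t = np.arange(0, 501)
f_fast = baseline_forcing(t, H=0.01)    # reaches F_m = 8 at year 100
f_slow = baseline_forcing(t, H=0.0025)  # reaches F_m = 8 at year 400
```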
This curious behaviour, which is solely a consequence of altering the rate of change in forcing, can be better seen by looking at the projection of the phase space onto the ocean-variables subspace, as shown in Figure 9. At a faster climate change rate, shown in Figure 9(b), the distribution seems to have less freedom to explore the phase space and has its way forced into the F_m = 8 attractor. At a slower climate change rate, presented in Figure 9(a), the ensemble members have more freedom, and time, to explore the phase space and any intermediate attractors between those of F_m = 7 and F_m = 8. As suggested by Figure 10, one of those intermediate attractors is somehow broader (in the ocean variables) than the neighbouring ones, and trajectories entering there might eventually reach (time allowing) lower values of temperature and higher values of salinity.
Conclusions
This article discussed several aspects related to climate predictability on short and mid time scales, from annual to multi-decadal. To do so, we introduced the idea of an evolution set, combining the concepts of pullback attractor and micro ICU to produce an object lying within the system's pullback attractor whose shape is constrained by a more refined knowledge of the initial state of the system, via a micro ICE. While the evolution set is usually contained in the pullback attractor set, the latter is much larger, and their respective distributions, or climate projections, are different.
In addition, we attempted to define a convergence time as the time taken for an ICE distribution to become indistinguishable from the pullback invariant distribution. We also explored micro and macro ICU, revisited the concept of pullback attractor, and discussed their influence on the evolution set and the convergence time. We also discussed the effect of different rates of change in forcing on the evolution set. Given the significant differences produced, these results suggest that all these aspects should be considered when designing ensembles for chaotic, non-autonomous systems, in particular for ESMs in a climate-change scenario, i.e. under non-periodic external forcing.
Although the results obtained depend on the particular low-dimensional model used, the ideas are model-independent and should be applicable to any chaotic non-autonomous system. This includes the concepts of evolution set, micro and macro ICU, and convergence time.
From a theoretical point of view, this work leaves many questions to be answered, which we believe to be of both mathematical and climate science relevance. The first set of questions relates to the evolution set E = E(t; t_0, X_0, σ_{X_0}):
• Is it possible to prove rigorous results regarding the sensitivity and dependence of E on the central IC X_0, the initial time t_0 and the variance σ_{X_0}?
• What is the relationship of E to the pullback attractor A? Is there any other relationship beyond E ⊆ A when D_{X_0} ⊆ A(t_0)?
• How many ensemble members are needed to characterise E for a given X_0, t_0 and uncertainty as measured by σ_{X_0}?
• How dependent is E on the shape of the ICE? For instance, would a non-Gaussian distribution lead to a very different E?
• How does the distribution µ_E relate to the pullback invariant distribution µ_A of the pullback attractor?
Another important question is how uncertainty in one variable propagates to, or rather influences, others. For example, we saw that macro ICU in temperature seems to greatly affect salinity, but the converse is not true.
A final but more ambitious question relates to the "size" of attractors, as illustrated in Figure 10. How large are attractors in ESMs? In other words:
• Is it possible to estimate their shape without resorting to brute force, given the computational limitations of running such models?
A final note on the evolution set E is that there can be many, depending on which observations one uses to constrain the possible climate scenarios. The same applies to the evolution distribution µ_E. In this practical sense, the central IC and variance used in the definition of E are just fudges to simulate the residual uncertainty after the information from the observations has been brought in. So the questions above, although generically formulated, might be asked in relation to an E constructed from assimilating some observation into a more realistic climate model, for example. Nevertheless, answers to those questions would be a valuable resource in the design of relevant and influential climate model ensembles.
Experiments
Here we describe in detail the experimental design of each simulation. The following holds for all experiments, unless otherwise noted:
• Length of simulation: 200 years, except experiments 1, 2, 3, 13 and 14.
The ICs vary across experiments, and are presented in detail below.
Figure 1: Left column: pullback attractor and its natural distribution for L84-S61, computed from a 500-year micro ICE simulation, where the green solid line shows the numerical solution starting from the central IC. Right column: corresponding convergence time computed using Equation (3.2), and the KS statistic based on a single-trajectory simulation of length 100,000. (a,d) ocean temperature; (b,e) ocean salinity; (c,f) atmosphere variable X (intensity of the westerly wind).
Figure 2: Comparing the pullback invariant distribution with the distribution generated by a micro ICE, with H = 0.01 in the first 100 years and H = 0 in the remaining 100 years. The left column shows the evolution of an ensemble which initially covers the entire pullback attractor. The right column shows the evolution of a micro ICE. Panels (a-f) show: (a,d) ocean temperature; (b,e) ocean salinity; (c,f) atmosphere variable X (intensity of the westerly wind).
Figure 3: Attractor for the system L84-S61 with H = 0 when F_m = 7 (blue) and F_m = 8 (red), projected onto the ocean temperature-salinity (T, S) subspace. The black dots on the F_m = 7 attractor indicate the locations of ICs 1 to 4.
Figure 4: Macro ICU from a control run simulation: comparing the evolution set and distribution of the slow-mixing ocean variables for different micro ICEs in a macro ICU scenario, with H = 0.01 in the first 100 years and H = 0 in the remaining 100 years. The left column shows ocean temperature; the right column shows ocean salinity. Panels (a-h) show: (a,e) IC 1; (b,f) IC 2; (c,g) IC 3; and (d,h) IC 4.
Figure 5: Attractor for the system L84-S61 with H = 0 when F_m = 7 (blue) and F_m = 8 (red), projected onto the ocean temperature-salinity (T, S) subspace. The black dots on the F_m = 7 attractor indicate the locations of ICs 5 to 8.
Figure 6: Macro ICU in a single variable: comparing the evolution set and distribution of the slow-mixing ocean variables for different micro ICEs in a macro ICU scenario, with H = 0.01 in the first 100 years and H = 0 in the remaining 100 years. The left column shows ocean temperature; the right column shows ocean salinity. Panels (a-h) show: (a,e) IC 5; (b,f) IC 6; (c,g) IC 7; and (d,h) IC 8.
Figure 7: Distance from the micro ICE distributions to the pullback invariant distribution, measured through the KS statistic (solid lines), and convergence time (dash-dotted lines) computed using Equation (3.2): (a,b) ocean temperature; (c,d) ocean salinity, for the micro ICEs centred at ICs 1 to 4 (left column), as per Figure 4, and at ICs 5 to 8 (right column), as per Figure 6.
Figure 8: ICE distributions starting from IC 2 in Figure 3, for H = 0.01 (100 years of climate change, followed by 100 years of non-forced climate with F_m = 8), shown in the upper panels, and H = 0.0025 (400 years of climate change, followed by 100 years of non-forced climate with F_m = 8), shown in the bottom panels. Left column panels show temperature for (a) H = 0.01 and (b) H = 0.0025. Right column panels show salinity for (c) H = 0.01 and (d) H = 0.0025.
Figure 9: Projection of the phase space onto the (T, S) subspace, with a heatmap indicating the number of ensemble members that pass through each point at least once (no repetitions are counted): (a) H = 0.01; (b) H = 0.0025. These correspond to the joint distributions shown in Figure 8(a,c) and Figure 8(b,d), respectively.
Figure 10: Attractor for the non-forced L84-S61, projected onto the ocean temperature-salinity (T, S) subspace, for several values of F_m between 7 and 7.5. All attractors shown correspond to a single trajectory starting from the same IC (black dots).
Figure S.2: Comparing the pullback invariant distribution with the evolution distribution generated by a micro ICE, with H = 0.01 in the first 100 years and H = 0 in the remaining 100 years. The solid line shows the ensemble mean, and the shading shows 1 standard deviation from the mean. In blue is an ensemble that initially covers the entire pullback attractor; in red is the evolution of a micro ICE. Panels correspond to individual variables: (a) atmosphere X; (b) atmosphere Y; (c) atmosphere Z; (d) temperature; (e) salinity.
Figure S.3: Macro ICU from a control run simulation: comparing the evolution set and distribution of the slow-mixing ocean variables for different micro ICEs in a macro ICU scenario, with H = 0.01 in the first 100 years and H = 0 in the remaining 100 years. The solid line shows the ensemble mean, and the shading shows 1 standard deviation from the mean. Results for IC 1, IC 2, IC 3 and IC 4 are presented in blue, red, yellow and magenta, respectively. Panels correspond to individual variables: (a) atmosphere X; (b) atmosphere Y; (c) atmosphere Z; (d) temperature; (e) salinity.
Figure S.4: Macro ICU in a single variable: comparing the evolution set and distribution of the slow-mixing ocean variables for different micro ICEs in a macro ICU scenario, with H = 0.01 in the first 100 years and H = 0 in the remaining 100 years. The solid line shows the ensemble mean, and the shading shows 1 standard deviation from the mean. Results for IC 5, IC 6, IC 7 and IC 8 are presented in blue, red, yellow and magenta, respectively. Panels correspond to individual variables: (a) atmosphere X; (b) atmosphere Y; (c) atmosphere Z; (d) temperature; (e) salinity.
Table 1: Description of the parameters and their reference values in the forcing function F_0(t), as per Daron and Stainforth (2013).

Table S.1: Description of the parameters and their reference values used in the L84-S61 model, as per Daron and Stainforth (2013) [1]. The table below contains all model parameters and their values; the values shown were the same in all simulations.

Coefficient of internal diffusion in the ocean: k_a = 1.8 · 10^-4
Coefficient of heat exchange between the ocean and atmosphere: ω = 1.3 · 10^-4
Coefficient derived from the linearised equation of state: ϵ = 1.1 · 10^-3
Coefficient derived from the linearised equation of state: γ_0 = 7.8 · 10^-7
Coefficient for the atmospheric water transport: γ_1 = 9.6 · 10^-8
Coupling parameter for the wind-dependent atmospheric water transport
| 11,270 | 2023-09-21T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
Simultaneous Resource Accumulation and Payoff Allocation: A Cooperative Fuzzy Game Approach
We develop a simultaneous resource accumulation and payoff allocation algorithm under the framework of a cooperative fuzzy game that builds on our earlier work on the role of satisfaction in resource accumulation and payoff allocation. The difference between the two models lies in the fact that, while the focus was more on obtaining an exact solution in our previous model, the negotiation process in the current model accounts more for the role of the intermediate stages. Moreover, we characterize our solution using two properties: asymptotic fairness and efficiency. Our model includes a suitable penalty function to restrain players from unreasonable demands. We focus on real-life situations where possibly one or more players compromise on their shares to ensure a binding agreement with the others.
Introduction
In the literature, cooperation among self-interested players under binding agreements has been well explained by the theory of crisp cooperative games. In [1-3] it is shown that players' satisfaction can set up a solution concept in a cooperative game that is easily computable through an algorithm. It is therefore important to take account of how coalitions are formed and how the worth is allocated among participating players so that they are individually satisfied up to a desired level.
In this paper, we propose a dynamic process of accumulating resources from players and of finding a payoff allocation simultaneously in a cooperative fuzzy game theoretic environment. The preliminary work was done in [3]. Here we enhance the same model by incorporating (a) the role of players in the intermediate steps of the negotiation process and (b) the notion of a penalty function that restrains the players from making unreasonable demands.
Consider, for example, the role of the Uber and Ola cab services in different Indian cities. Both these companies acquire vehicles on lease from the local vehicle providers of the cities and offer them a portion of the profits accrued from the customers on a per-day basis. It follows that the cab services accumulate resources (in terms of the number of vehicles from different service providers), generate some profit, and finally allocate this profit among the stakeholders. It may be the case that the local vehicle providers have shares in both companies, which results in different levels of satisfaction in terms of their trade relations with these companies. Retention becomes a big concern for both companies. Therefore they need to satisfy the vehicle providers by providing sufficient opportunity both to generate and to share the profit. More precisely, let us consider a hypothetical situation where three agents 1, 2, and 3 in their respective territorial areas collect monetary resources from the customers and provide this accumulated resource to an investment firm. The firm in turn invests the whole amount in shares of various (finitely many) companies in the market and rewards the agents according to their performances. The agents take note of how their resources are being invested in different companies. This determines how much payoff they would get after the firm profits from such investments.
Here the manager of the firm acts as the mediator who takes care of all the resources of the agents. The payoffs to the agents are usually measured through their resource collection capacities; in addition to weighing such straight performances, many organizations provide extra incentives by adopting background corrections. These may include the geographical disadvantages the agents have to comply with in their territorial areas, their prospects towards future expansion of business, team working capabilities, and so on. Such incentives essentially gear up the satisfaction levels of the agents. However, it is not found in the literature whether any standard procedure is followed in the provision of such incentives. Moreover, most business organizations follow the principle "if you perform well you get better off." Note that performance and satisfaction can never be universally standard. This motivates us to pursue the present study. Similar works can be traced back to [4-6] and so on.
In [7], a dynamic payoff allocation method that converges to a specific allocation under a given set of external restrictions is designed. Similar models of frequency allocation in wireless networks through a game in satisfaction form are found in [8], where the payoff to a player accounts only for her satisfaction. However, none of these studies uses the notion of fuzzy cooperative games as a tool to address their problems. For similar studies we refer to [4, 9, 10] and so on.
In our previous model [3], the problem of simultaneous resource and payoff allocation among participating players by a mediator is discussed under a dynamic setup. It shows that an exact resource-versus-payoff allocation matrix evolves as a solution to a minimization problem at each stage of the negotiation process. However, we observe that the emphasis there is on how the exact solution (as part of the limiting process) can be obtained, without accounting for the intermediate solutions. In real-life situations one needs to consider time and monetary constraints that do not allow the negotiation process to last long. Therefore, in our present model we focus on the players obtaining an optimal solution with the least possible time and money. We provide an axiomatic characterization of the exact solution based on two very natural axioms: efficiency and asymptotic fairness. We provide an example at the end of the paper to highlight this issue.
Let N = {1, 2, ..., n} be the set of players (agents), which is known as the grand coalition. Any subset of N is called a crisp coalition. In a crisp coalition, a player can either participate fully (invest, in our case) with her complete resource or not participate at all (no resource). The collection of all crisp coalitions of N is denoted by 2^N. A crisp cooperative game is a pair (N, v), where v : 2^N → R_+ ∪ {0} is a real-valued function, known as the characteristic function, such that v(∅) = 0. If the player set is fixed, the cooperative game is denoted by the function v. For each crisp coalition S ∈ 2^N, the real number v(S) is known as the worth of the coalition S (or the profit incurred from S). On the other hand, if a player needs to participate in more than one coalition simultaneously with the resources (or power) in hand, she can provide only fractions of her full resource (power) to those coalitions. Such coalitions are called fuzzy coalitions. A fuzzy coalition is nothing but a fuzzy set of N, represented by an n-tuple whose i-th component is the membership degree (or fraction of the power) of player i, ranging between 0 and 1. A crisp coalition can be realized as a special type of fuzzy coalition where the degree of participation of any player is either 1 or 0. In the literature, Aubin [11], Butnariu [12], and Branzei et al. [13] have well developed the theory of cooperative fuzzy games and justified the fuzzification in terms of the players' membership degrees in a coalition. In both crisp and fuzzy environments, it is quite necessary to determine how a fuzzy coalition structure is formed as well as how a suitable payoff distribution is proposed to the players accordingly. Solution concepts for cooperative games are found in [11-17]. In [1], a new solution concept that evolves as a result of a dynamic negotiation process is defined; it is termed the payoff allocation. In [2], a process of resource allocation of players to different cooperative actions, with the formation of a fuzzy coalition structure, is obtained. The key objective in [1-3] is to investigate the influence of individual satisfactions upon payoff and resource allocations (i.e., how they can be used to arrive at a suitable payoff and resource distribution over time). The process of resource allocation (equivalently, resource accumulation) is made here synonymous with the formation of a fuzzy coalition. It follows that their model is equivalent to solving an n-person cooperative game with distinct fuzzy coalitions.
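For concreteness, a crisp cooperative game and a fuzzy coalition can be encoded as in the following sketch; the worths are illustrative numbers, not taken from the paper.

```python
players = (1, 2, 3)

# Illustrative worths for every crisp coalition, with v(empty set) = 0.
worth = {(): 0.0, (1,): 1.0, (2,): 1.0, (3,): 1.0,
         (1, 2): 3.0, (1, 3): 3.0, (2, 3): 3.0, (1, 2, 3): 6.0}

def v(coalition):
    """Characteristic function v : 2^N -> R+ of a crisp cooperative game."""
    return worth[tuple(sorted(coalition))]

# A fuzzy coalition instead records a membership degree in [0, 1] per player:
fuzzy_coalition = {1: 0.5, 2: 1.0, 3: 0.0}

assert v(()) == 0.0 and v((2, 1)) == 3.0
```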
The goal of our present study is to provide a more developed and systematic treatment of satisfaction levels as a basis for negotiation among rational agents who are capable of participating simultaneously in different fuzzy coalitions, possibly with varied rates of membership. We introduce the notion of a penalty to restrict the irrational demands of the players. Our present model is more efficient than the one discussed in [3] in situations where the allocation process does not continue for long but rather stops at some intermediate stage as part of a trade-off. We modify the variance function of [3] and show that at an intermediate stage our solution is more efficient than that of the earlier model.
The allocation process (Figure 1) goes exactly the same way as in [3]. Initially, the mediator accumulates budgets/resources from the rational players. The mediator and the players jointly determine the number of possible coalitions. Then the players announce their expected total budgets/resources for each coalition. The mediator finds the optimal coalition structure and the optimal total budget allocation in such a way that the sum of all budgets allocated to the coalitions of the optimal coalition structure is equal to the total budget in her hand.
In the next stage of the allocation process, the mediator proposes the resource and payoff allocations simultaneously for each coalition of the optimal coalition structure, which we call a solution here. The rational players provide their satisfaction degrees in each coalition according to their own investment and payoff satisfaction functions. Based on this information, the mediator updates her belief and proposes the next solution. The process continues until a stopping condition is reached. Thus the mediator proposes successive offers (solutions) to the players, judging by their reactions to the previous proposal. Similarly to what was proposed in [1-3], we construct a stopping rule and propose the process of updating the belief of the mediator by use of a pair of suitable functions modelling the possible reactions of the players to different offers of resource and payoff allocations. We call them, respectively, the approximate investment satisfaction and approximate payoff satisfaction functions. A variance function is constructed to evaluate the closeness among the degrees of investment satisfaction and payoff satisfaction of the individual players in a coalition over a solution. If the variance of a solution is below a certain threshold, to be determined by all the players collectively, then it is considered a possible trade-off solution to the problem. A situation may arise where the variance does not converge further after a particular threshold; this is still an open problem and we keep it for our future work.
An exact solution is a solution whose variance is zero; see [3]. Therefore, for an exact solution all the investment and payoff satisfactions are equal in each coalition. The process of proposing new allocations at successive stages leads to an exact solution. We assume that every player keeps her satisfaction functions unknown to the mediator and the other players. The negotiation strategy is designed so that the mediator proposes only offers (possible solutions) for which the variance is minimal at each stage of the negotiation process. In the negotiation process, each rational player has a single motive: to maximize her individual payoff, which is well represented by monotonically increasing functions characterizing the fuzzy sets of their satisfactions. Since negotiation requires a player to be considerate of the desires and views of all the other players, an appropriate negotiation process can restrain the players from making irrational demands while rewarding those who are willing to work together by forming coalitions.
The remaining part of the paper is organised as follows. In Section 2, we build the theoretical framework of our proposed model and prove the existence of a better offer as a solution to our model. Section 3 deals with the notion of a penalty function to protect the negotiation process from irrational demands. Some hypothetical examples are presented to discuss our model in Section 4. In Section 5, we present the concluding remarks.

This section builds mainly on the notions and results discussed in [3]. We assume that for every player i (i = 1, 2, 3, ..., n), the amount of resources available to agent i is w_i ≥ 0 (this can be time, money, etc.). In order to make the model simple, we take the resource inputs and the payoff outputs of each player as real numbers. Each player can choose to invest any portion r_i ≤ w_i of her total resource in a joint project. We call this joint project a fuzzy coalition. The total resource allocation is represented by a nonnegative vector R = (w_1, ..., w_n) ∈ R^n, and possible coalitions are identified with the vectors that are (coordinatewise) smaller than R. By what is called an abuse of notation, we shall represent the sum ∑_{i=1}^n w_i by R here.
Thus, formalizing the notion, we have the following: for every nonnegative vector R ∈ R^n, let B(R) be the box given by B(R) = {r ∈ R^n : 0 ≤ r_i ≤ w_i for all i}. The point R is interpreted as the "grand coalition" in the fuzzy sense, and every r ∈ B(R) is a possible fuzzy coalition, while 0 ∈ B(R) is the zero vector where all the players put zero resource. For every R ≥ 0, R ∈ R^n, a cooperative fuzzy game is a pair (R, v) with v : B(R) → R_+ ∪ {0} and v(0) = 0. Since infinitely many fuzzy coalitions with limited resources of the players are not practically useful, let there be only m possible fuzzy coalitions (m < ∞). A fuzzy coalition structure S(R, m) = (S_1(R), S_2(R), ..., S_m(R)) with respect to the grand coalition R is an m-vector whose components S_j(R), 1 ≤ j ≤ m, are called fuzzy coalition variables. Thus a vector r ∈ B(R) is a fuzzy coalition in the fuzzy coalition structure S(R, m) if S_j(R) = r for some j, 1 ≤ j ≤ m.
Note that if v : B(R) → R_+ ∪ {0} is continuous and all the resources are of the same kind, then v depends only on the total amount of resources of a coalition r ∈ B(R) rather than on their particular distribution: that is, v is constant on every set {(r_1, r_2, ..., r_n) | ∑_{i=1}^n r_i = c} for each c ∈ R. For example, if resources are considered in monetary units, then v, being symmetric in all variables, generates a unique function u : R → R_+ ∪ {0} such that v = u ∘ g, where g : B(R) → R is defined for every r ∈ B(R) by g(r) = ∑_{i=1}^n r_i. Thus, with an abuse of notation, we can use v and u alternatively and hence represent v(r) by v(∑ r_i). For every j ∈ {1, 2, 3, ..., m}, let the maximum resource that can be accumulated in the j-th fuzzy coalition be c_j. In view of the above discussion, a fuzzy coalition structure S(R, m) can therefore be denoted by an m-tuple of real numbers (c_1, ..., c_m). It follows that a resource investment matrix is a fuzzy coalition structure. Definition 2 (see [3]). A solution (x, y) of a cooperative game v with n players and m fuzzy coalitions is an n × m bimatrix whose (i, j)-th entry (x_ij, y_ij) represents the respective resource investment and payoff allocation of player i in the j-th coalition, satisfying the following four conditions: (i) 0 ≤ x_ij ≤ w_i, 0 ≤ y_ij ≤ v(c_j), i = 1, 2, ..., n; j = 1, 2, ..., m.
Let S_{n,m}(v) denote the set of all solutions (x, y) with respect to a cooperative game v with n players and m fuzzy coalitions. The column vectors x_j and y_j (j = 1, 2, ..., m) of a solution (x, y) represent, respectively, the resource investment and payoff allocation vectors for the j-th coalition.
1.1. Formation of the Cooperative Fuzzy Game. Initially, player i ∈ N = {1, 2, ..., n} submits her budget to the mediator. The mediator and the players jointly determine an optimal coalition structure; see [2]. The mediator offers fractions of resources among the players and proposes a possible investment vector to invest in a joint project (that is, to form a fuzzy coalition). The players react by announcing their degrees of satisfaction. Thus, we associate a satisfaction function with each player for such a resource investment and call it the investment satisfaction.
Thus the investment satisfaction function of player i in the cooperative fuzzy game v, denoted S^I_i, is defined as follows (see [3]). Definition 3. Let w_i be the total resource of player i ∈ N in a cooperative game v with fuzzy coalitions. Then the function S^I_i : R → [0, 1] is said to be an investment satisfaction function of player i if the following hold: S^I_i is continuously differentiable and strictly monotonically increasing on [0, w_i].
The probable physical significance of the above assumptions, as given in [1], is that each player is keen to invest her whole resource w_i. Her degree of satisfaction is therefore zero if she makes no investment at all and one if she invests fully. Moreover, it is natural to expect that the satisfaction of any player increases continuously with her investment. Furthermore, every player tries to increase her resource investment in a coalition, so the investment satisfaction functions are convex.
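As an illustration, one admissible investment satisfaction function (continuous, strictly increasing, convex, zero at zero investment and one at full investment) is a power function; this particular choice is our assumption, since the paper deliberately keeps each player's actual satisfaction function private.

```python
def investment_satisfaction(x, w, p=2.0):
    """Power-law satisfaction: continuous, strictly increasing on [0, w],
    convex for p >= 1, with S(0) = 0 and S(w) = 1."""
    if not 0.0 <= x <= w:
        raise ValueError("investment must lie in [0, w]")
    return (x / w) ** p
```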
The payoff satisfaction function of player i over her payoff allocation in the cooperative fuzzy game v, denoted S^P_i, is defined as follows.
where w_{−i} is the total resource in some coalition in which player i's investment is zero while every player j, j ≠ i, invests in it. One can interpret M_i as the fuzzy version of the marginal contribution of player i. Then the function S^P_i : R → [0, 1] is said to be a payoff satisfaction function of player i if the following hold: S^P_i is continuously differentiable and strictly monotonically increasing on [0, M_i].
(c) S^P_i is convex or, equivalently, the derivative of S^P_i is nondecreasing. The probable physical significance of the above assumptions, as given in [1], is that each player has a degree of satisfaction for any reward ranging from zero to M_i, where zero is the least reward player i can assume and M_i the maximum. Moreover, the degree of satisfaction increases as the reward increases from 0 to M_i. So the function S^P_i is considered continuously differentiable and strictly monotonically increasing on [0, M_i]. Again, in any coalition the players mostly try to increase their payoffs by decreasing their satisfaction degrees upon an offer. Thus Assumption 3 reflects the choice of a particular satisfaction function that tries to address the individual preferences of each player.
Remark 5. Since the ranges of the investment and payoff satisfaction functions in Definitions 3 and 4 are normalized to [0, 1], the implicit assumption we make here is that the satisfaction of each player is a cardinal utility that admits interpersonal comparison.
Definition 6 (see [3]). A solution (x, y) of a cooperative game v is said to be an exact solution if all the players in every fuzzy coalition are equally satisfied with their resources and payoffs. Definition 7 (see [3]). A solution (x, y) of v is said to be an approximate solution if there exist players i and l (i ≠ l) whose satisfaction degrees differ in some coalition j ∈ {1, 2, ..., m}.
Let H be the history over which the mediator makes proposals to the players. Given a time (stage) k ∈ H and the set S_{n,m}(v) of solutions (x^k, y^k), let us define the sets X^k_1, X^k_2, X^k_3 and Y^k_1, Y^k_2, Y^k_3 as in [3], where S̄^I(x^k_j) = (1/n) ∑_{i=1}^n S^I_i(x^k_ij) and S̄^P(y^k_j) = (1/n) ∑_{i=1}^n S^P_i(y^k_ij). For each k ∈ H, we call the solution (x^k, y^k) ∈ S_{n,m}(v) a dynamically evolved solution (DES) at time k.
Our Model
Definition 8. In order to obtain a better solution from the previous one, we define the variance function var : S_{n,m}(v) → R as var(x^k, y^k) = ∑_{j=1}^m ∑_{i=1}^n d^k_ij, where d^k_ij = (S^I_i(x^k_ij) S^P_i(y^k_ij) − S̄^I(x^k_j) S̄^P(y^k_j))^2, and S̄^I(x^k_j) and S̄^P(y^k_j) are as given by Eq. (2).
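Under this reading of Equation (3) (the variance as the sum of squared deviations of each player's product of satisfactions from the product of the coalition means, which is our reconstruction of the garbled formula above), the variance of a proposed solution could be computed as follows; the array names are illustrative.

```python
import numpy as np

def variance(inv_sat, pay_sat):
    """Variance of a solution, assuming d_ij = (S^I_ij * S^P_ij -
    mean_j(S^I) * mean_j(S^P))^2 summed over all players i and coalitions j.

    inv_sat, pay_sat: (n_players, m_coalitions) arrays of announced
    investment and payoff satisfaction degrees.
    """
    inv_sat = np.asarray(inv_sat, dtype=float)
    pay_sat = np.asarray(pay_sat, dtype=float)
    mean_prod = inv_sat.mean(axis=0) * pay_sat.mean(axis=0)  # per coalition
    d = (inv_sat * pay_sat - mean_prod) ** 2
    return d.sum()
```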
Note that in (3), d^k_ij relates the satisfactions of the players with their resources and payoffs at each stage. In our model in [3], the effect of d^k_ij was implicit and could only be realized at the last stage of the exact-solution-searching procedure. In the intermediate stages, the investment and payoff satisfactions were assumed to be independent of one another.
However, in reality this is a rather strong assumption, as satisfactions cannot be treated in isolation. Moreover, we have already explained that, due to time and monetary constraints, the allocation process is unlikely to be allowed to iterate for long to reach the exact solution; rather, a trade-off is sought to arrive at some optimal solution. This suggests that optimal solutions at intermediate stages should also account for such interdependence between the investment and payoff satisfaction functions. Definition 9. A solution (x, y) is said to be a better approximate solution than another solution (x', y') if var(x, y) < var(x', y').
Negotiated Allocation Strategies.
We assume that, in our allocation model, the negotiation is governed by the negotiation strategies adopted by the mediator. These strategies determine how the mediator identifies suitable solutions and how the players evaluate those solutions in the light of their own interests. The mediator proposes solutions at each stage/time of the negotiation process until a solution is accepted by the players with equal satisfactions within each coalition, using the exact solution searching strategy [3]. Alternatively, the mediator may suggest that the players adopt a trade-off, or optimal, solution searching strategy [3], which we discuss briefly in the following.
In the allocation process the mediator does not have any initial information about the investment and payoff satisfaction functions of the players; she only updates her belief about each player at each succeeding stage using lower and upper bounds on those satisfactions. At stage k of the allocation process, these bounds can be obtained for player i ∈ N in the j-th coalition by defining two approximate functions (see [3] for more details), namely f^k_ij : R → R and g^k_ij : R → R. These functions are simply linear approximations (from below) of the actual satisfaction functions of the players for the next offer at stage k + 1, and are obtained by joining the pairs of points [(0, 0), (x^k_ij, S^I_i(x^k_ij))] and [(0, 0), (y^k_ij, S^P_i(y^k_ij))], respectively. Thus we have the corresponding expressions for x^k_ij ∈ X^k_1; similarly for x^k_ij ∈ X^k_2 and, finally, for x^k_ij ∈ X^k_3. In the same manner, the bounds are defined for y^k_ij ∈ Y^k_1, Y^k_2 and Y^k_3 (the explicit expressions follow [3]). Thus the expected proposal (x*, y*) that benefits all the players can be defined as follows.
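A sketch of the mediator's belief update follows: the unknown convex satisfaction function is bounded from below by the chord through the origin and the point announced at the previous stage (function and variable names are ours, for illustration).

```python
def approx_satisfaction(x_prev, sat_prev):
    """Mediator's linear approximation (from below, valid for convex
    satisfaction functions): the line through (0, 0) and the point
    (x_prev, sat_prev) announced by the player at the previous stage."""
    slope = sat_prev / x_prev if x_prev > 0 else 0.0
    return lambda x: slope * x

# Usage: if a player announced satisfaction 0.25 for an investment of 2.0,
# the mediator's lower bound at the next offer 3.0 is 0.375.
f = approx_satisfaction(2.0, 0.25)
assert abs(f(3.0) - 0.375) < 1e-12
```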
Definition 10. Assuming that the mediator offers a DES (x^k, y^k) to the players at stage k and that all the players subsequently announce their investment and payoff satisfaction degrees, the expected better resource allocation (x*, y*) for the proposal at stage k + 1 is defined as the minimizer of Q subject to the corresponding set of constraints (see Figure 3 and [3]). Here Q is the sum of the squared differences of the different satisfaction levels of the players, computed using the approximate satisfaction functions in each fuzzy coalition. If at stage k the proposed DES (x^k, y^k) is such that, for some player i, any one of X^k_1, X^k_2, Y^k_1, and Y^k_2 is empty, then the exact solution searching process fails. In such a case, if there exists an all-player-accepted trade-off threshold δ such that var(x^k, y^k) < δ, the current solution is accepted as an optimal solution. Otherwise the allocation process stops there, and we say that the process ends in conflict.
Step 1. All the players submit their resources w_i, i = 1, 2, 3, ..., n, to the mediator. Also, the players construct their investment and payoff satisfaction functions independently using the cooperative game v.
Step 2. The mediator, along with the players, determines the optimal coalition structure {c_1, c_2, ..., c_m}, where c_j is the total budget for coalition j.
Proof. We have to prove that there exists a better DES at stage k + 1 than the one at stage k. This is achieved in two steps. In Step 1 we prove the existence of such a solution to the game, and in Step 2 we show that this solution is better than the previous one.
Step 1 (existence of a solution).This part follows directly from Theorem 1 in [3] and so is omitted here.
Step 2 (there is a new DES better than the previous one). Here we use the fact that ∑_{j=1}^m x_ij = ∑_{j=1}^m x^k_ij, together with the facts that, for some player i and some coalition j, the approximate satisfaction bounds strictly improve. Similarly, one can find a y ∈ R^{n×m} using analogous facts. Thus we have var(x, y) < var(x^k, y^k). Therefore (x, y) is a better solution than (x^k, y^k), so either we can denote (x, y) by (x^{k+1}, y^{k+1}) or we can find a better DES (x^{k+1}, y^{k+1}) which minimizes var(x, y). This completes the proof.
Theorem 13. If at each stage k, for each player i, none of the sets X^k_1, X^k_2, Y^k_1, and Y^k_2 is empty, then the process of obtaining a better DES converges to the exact solution.
Proof. The proof proceeds exactly as that of Theorem 2 in [3] and so is omitted here.
Axiomatization of the Exact Solution
It follows from Theorem 13 above that the notion of an exact solution represents fairness and egalitarianism among the players, and the mediator is assumed to minimize the deviation from the fairness criterion. It is natural to ask: what should be a player's ultimate interest, investment satisfaction or payoff satisfaction? The negotiated allocation strategies are designed so that the deviations in both investment and payoff satisfactions are minimized simultaneously. Thus a formal characterization of the solution that rationalizes these strategies is equally important. In this section, we provide an axiomatic characterization of the exact solution with the axioms of efficiency and asymptotic fairness mentioned in Section 1. In what follows, we provide the formal descriptions of these axioms. Definition 14. A solution (x*, y*) of the cooperative fuzzy game v on B(R) is called asymptotically fair if there exists a sequence of DES (x^k, y^k), k ∈ H, and a continuous function Φ satisfying (17). It follows from Definition 14 that, when a solution is asymptotically fair, some kind of equitability among the players is preserved. The continuous function Φ determines the type of equitable condition to be met by the solution.
Asymptotic Fairness (AF). A solution (x*, y*) of the cooperative fuzzy game v on B(R) is asymptotically fair. Efficiency (E). For (x, y) ∈ S_{n,m}(v) of a game v on B(R), condition (18) holds. Note that the efficiency axiom conforms with our basic assumption in the current model, which accounts for the accumulation of resources and the allocation of payoffs together. Thus we have the following theorem. Theorem 15. There exists an efficient and asymptotically fair solution to any cooperative fuzzy game v on B(R). It is uniquely determined by the continuous function Φ given by (17). In particular, when Φ ≡ var as given by (3), it is the exact solution that is both efficient and asymptotically fair.
Proof.The proof follows from Lemma 11 and Theorems 12 and 13.
Penalty
Let us now introduce the notion of a penalty in the negotiation process; this makes the allocation model more realistic. It often happens that some players expect an unreasonably high payoff while contributing only meagerly to a coalition. This would render a never-ending process of negotiations with little progress. In order to avoid such situations, the mediator may impose a penalty on the players who ask for unreasonable payoffs by showing unfair satisfaction against offers. We start with a few definitions. Definition 16. An investment satisfaction function for player i ∈ N is said to be normal if its curve is a straight line passing through the points (0, 0) and (w_i, 1), with w_i ≠ 0. Thus, the equation of a normal investment satisfaction function for any player i is S^I_i(x) = x/w_i. If the negotiation stops here and (x^k, y^k) is the required solution of the problem, using (a) to (e) we see that the penalty of player 1 is ∑_{j=1}^2 p_1(x_1j, y_1j) = 0 + 0 = 0, and the penalty of player 2 is ∑_{j=1}^2 p_2(x_2j, y_2j) = 0.622996 + 0.622996 = 1.24599.
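The exact penalty formula is not reproduced in this excerpt; one plausible form, consistent with Definitions 16 and 17, penalizes the shortfall of a player's announced satisfaction below the normal (linear) satisfaction at the same offer, so that systematic under-reporting accumulates penalty. The sketch below is that assumption, not the paper's own definition.

```python
def normal_satisfaction(x, w):
    """Normal satisfaction: the straight line through (0, 0) and (w, 1)."""
    return x / w

def penalty(offers, announced, w):
    """Hypothetical penalty: total shortfall of the announced satisfaction
    degrees below the normal line across all coalitions."""
    return sum(max(normal_satisfaction(x, w) - s, 0.0)
               for x, s in zip(offers, announced))

# A player with resource w = 10 who announces satisfaction 0.05 for each of
# two offers of 2 units under-reports by 0.15 per coalition: penalty 0.3.
assert abs(penalty([2.0, 2.0], [0.05, 0.05], 10.0) - 0.3) < 1e-12
```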
Conclusion
This paper illustrates a dynamic allocation process that simultaneously solves the problem of resource investment and the corresponding allocation of payoffs among the players in a cooperative environment. Since players may be involved in different projects at a time and are able to provide only fractions of their resources, we adopt a cooperative fuzzy game theoretic approach. Convergence in the proposed model is restrictive, and each player is endowed with two satisfaction functions: one for resource investment and the other for payoff allocation. Combining these functions into a common satisfaction function and obtaining a more general formulation for convergence are topics for our future study.
Figure 1: Flowchart of the allocation process.
will repeat Step 4 to get an exact solution with variance 0 or an optimal solution with variance less than δ (nonzero). Let the DES at stage k be (x^k, y^k) = [(x^k_ij, y^k_ij)]_{n×m} such that for each player i, none of the sets X^k_1, X^k_2, Y^k_1, and Y^k_2 is empty. Theorem 12. Let v be a cooperative fuzzy game on B(R). At stage k, let (x^k, y^k) = [(x^k_ij, y^k_ij)]_{n×m} be a DES which is not exact, and for each i, let none of the sets X^k_1, X^k_2, Y^k_1, and Y^k_2 be empty. Then there exists a better DES at stage k + 1. Definition 17. A payoff satisfaction function for player i ∈ N is said to be normal if its curve is a straight line passing through the points (0, 0) and (M_i, 1), with M_i ≠ 0. Let the proposal offered by the mediator at some stage be (x^k, y^k) = [ (2, 7.27168) (2, 7.27168); (4, 10.7283) (4, 10.7283) ].
| 6,994.4 | 2018-04-01T00:00:00.000 | [
"Economics"
] |
Integration of whole-body [18F]FDG PET/MRI with non-targeted metabolomics can provide new insights on tissue-specific insulin resistance in type 2 diabetes
Alteration of various metabolites has been linked to type 2 diabetes (T2D) and insulin resistance. However, identifying significant associations between metabolites and tissue-specific phenotypes requires a multi-omics approach. In a cohort of 42 subjects with different levels of glucose tolerance (normal, prediabetes and T2D), matched for age and body mass index, we calculated associations between parameters of whole-body positron emission tomography (PET)/magnetic resonance imaging (MRI) during hyperinsulinemic euglycemic clamp and non-targeted metabolomics profiling of subcutaneous adipose tissue (SAT) and plasma. Plasma metabolomics profiling revealed that hepatic fat content was positively associated with tyrosine and negatively associated with lysoPC(P-16:0). Visceral adipose tissue (VAT) and SAT insulin sensitivity (Ki) were positively associated with several lysophospholipids, while the opposite applied to branched-chain amino acids. The adipose tissue metabolomics revealed a positive association between non-esterified fatty acids and VAT and liver Ki. Bile acids and carnitines in adipose tissue were inversely associated with VAT Ki. Furthermore, we detected several metabolites that were significantly higher in T2D than in normal/prediabetes subjects. In this study we present novel associations between several metabolites from SAT and plasma and the fat fraction, volume and insulin sensitivity of various tissues throughout the body, demonstrating the benefit of an integrative multi-omics approach.
S1. MoDentify
MoDentify identifies modules of metabolites associated with a selected outcome, which in this case was the M-value 1. It gains statistical power from metabolomics data originating from multiple tissues. Here, we used MoDentify to compute partial correlations of pairs of metabolites for SAT and plasma, while regressing out the other metabolites. Partially correlated metabolites were then represented in a network, which was used as the basis to identify functional modules of metabolites for the unified set of tissues. A score maximization algorithm was used to "walk" the network and to identify modules significantly associated with the M-value at FDR < 0.1, while controlling for BMI, WHR, age and sex. MoDentify identified two modules, which we visualized in Cytoscape (Supplementary Figure S5).
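As an illustration of the first step, the partial correlation of a pair of metabolites, regressing out all remaining metabolites, can be computed from least-squares residuals as in the following sketch. This is a generic implementation for intuition, not MoDentify's actual code, and it assumes more samples than metabolites so that the regressions are well posed.

```python
import numpy as np

def partial_corr(data, i, j):
    """Partial correlation of metabolites i and j, regressing out all
    other columns of `data` (samples x metabolites) by least squares."""
    others = np.delete(np.arange(data.shape[1]), [i, j])
    Z = np.column_stack([np.ones(len(data)), data[:, others]])
    res_i = data[:, i] - Z @ np.linalg.lstsq(Z, data[:, i], rcond=None)[0]
    res_j = data[:, j] - Z @ np.linalg.lstsq(Z, data[:, j], rcond=None)[0]
    return np.corrcoef(res_i, res_j)[0, 1]
```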
S2. Metabolic Profiling
Information about reagents, solvents, standards, reference and tuning standards, and stable isotope internal standards is displayed in Supplementary Note 1.4 (Metabolic profiling solvents).
Sample Preparation
Sample preparation of plasma was performed according to A et al. 2. In detail, 900µL of extraction buffer (90/10 v/v methanol:water) including internal standards for both GC-MS and LC-MS (Supplementary Note 1.4, Metabolic profiling solvents) was added to 100µL of serum. The samples were shaken at 30Hz for two minutes in a mixer mill, and proteins were precipitated on ice at +4°C. Afterwards, the samples were centrifuged at +4°C and 14000 rpm for 10 minutes. The supernatants, 200µL for both batches of the LC-MS analysis, and 200µL and 50µL, respectively, for the two batches of the GC-MS analysis, were transferred to micro vials and evaporated to dryness in a SpeedVac concentrator.
SAT samples were extracted as follows: 500µL of 2/1 (v/v) CHCl3:methanol (including D4-cholic acid), 100µL water (including 13C9-phenylalanine) and two tungsten beads were added to each sample (18-23mg). The samples were shaken at 30Hz for 3 minutes. The tungsten beads were removed, and the samples were left standing at room temperature for 30 minutes. Samples were centrifuged at 14000 rpm and +4°C for 3 minutes, and 80µL of the aqueous phase was transferred to Eppendorf tubes. 320µL methanol (including D6-salicylic acid) was added to the Eppendorf tubes, whereupon the remaining proteins were precipitated at -20°C for 1 hour. The samples were centrifuged for 10 minutes at 14000 rpm and +4°C; 50µL of supernatant was transferred to GC vials and 200µL to LC-MS vials. Solvents were evaporated and the samples were stored at -80°C until analysis.
The remaining supernatants of each tissue were pooled and used to create tissue-specific quality control (QC) samples. Tandem mass spectrometry (MS/MS) analysis for LC-MS was performed on the QC samples for identification purposes. Samples were analysed in tissue-dependent batches according to a randomized run order on both GC-MS and LC-MS.
GC-MS
Derivatization and GC-MS analysis were performed as described previously 2. SAT samples were derivatized in a final volume of 30µL rather than the 90µL used for plasma 3.
Batch 1
1µL of the derivatized sample was injected in splitless mode by a CTC Combi Pal autosampler (CTC Analytics AG, Switzerland) into an Agilent 6890 gas chromatograph equipped with a 10m x 0.18mm fused silica capillary column with a chemically bonded 0.18 µm DB 5-MS UI stationary phase (J&W Scientific). The injector temperature was 270°C, the purge flow rate was 20mL/min and the purge was turned on after 60 seconds. The gas flow rate through the column was 1mL/min, the column temperature was held at 70°C for 2 minutes, then increased by 40°C/min to 320°C, and held there for 2 minutes. The column effluent was introduced into the ion source of a Pegasus III time-of-flight mass spectrometer, GC/TOFMS (Leco Corp., St Joseph, MI, USA). The transfer line and the ion source temperatures were 250°C and 200°C, respectively. Ions were generated by a 70eV electron beam at an ionization current of 2.0mA, and 30 spectra/s were recorded in the mass range m/z 50-800. The acceleration voltage was turned on after a solvent delay of 150 seconds. The detector voltage was 1500-2000V.
Batch 2
0.5µL of the derivatized sample was injected in splitless mode by an L-PAL3 autosampler (CTC Analytics AG, Switzerland) into an Agilent 7890B gas chromatograph equipped with a 10m x 0.18mm fused silica capillary column with a chemically bonded 0.18µm Rxi-5 Sil MS stationary phase (Restek Corporation, U.S.). The injector temperature was 270°C, the purge flow rate was 20mL/min and the purge was turned on after 60 seconds. The gas flow rate through the column was 1mL/min; the column temperature was held at 70°C for 2 minutes, then increased by 40°C/min to 320°C, and held there for 2 minutes. The column effluent was introduced into the ion source of a Pegasus BT time-of-flight mass spectrometer, GC/TOFMS (Leco Corp., St Joseph, MI, USA). The transfer line and ion source temperatures were 250°C and 200°C, respectively. Ions were generated by a 70eV electron beam at an ionization current of 2.0mA, and 30 spectra/s were recorded in the mass range m/z 50-800. The acceleration voltage was turned on after a solvent delay of 150 seconds. The detector voltage was 1800-2300V.
LC-MS
The LC-MS analysis was performed identically for both batches. Before LC-MS analysis the samples were re-suspended in 10+10µL methanol and water. Batches of the samples were first analysed in positive mode; the instrument was then switched to negative mode and a second injection of the samples was performed.
The chromatographic separation was performed on an Agilent 1290 Infinity UHPLC system (Agilent Technologies, Waldbronn, Germany). 2µL of each sample was injected into an Acquity UPLC HSS T3, 2.1 x 50mm, 1.8µm C18 column in combination with a 2.1mm x 5mm, 1.8µm VanGuard precolumn (Waters Corporation, Milford, MA, USA), held at 40°C. The gradient elution buffers were A (H2O, 0.1% formic acid) and B (75/25 acetonitrile:2-propanol, 0.1% formic acid), and the flow-rate was 0.5mL/min. The compounds were eluted with a linear gradient consisting of 0.1-10% B over 2 minutes; B was then increased to 99% over 5 minutes and held at 99% for 2 minutes; B was decreased to 0.1% over 0.3 minutes and the flow-rate was increased to 0.8mL/min for 0.5 minutes; these conditions were held for 0.9 minutes, after which the flow-rate was reduced to 0.5mL/min for 0.1 minutes before the next injection.
The compounds were detected with an Agilent 6550 Q-TOF mass spectrometer equipped with a jet stream electrospray ion source operating in positive or negative ion mode. The settings were kept identical between the modes, with the exception of the capillary voltage. A reference interface was connected for accurate mass measurements; the reference ions purine (4µM) and HP-0921 (hexakis(1H,1H,3H-tetrafluoropropoxy)phosphazine) (1µM) were infused directly into the MS at a flow rate of 0.05mL/min for internal calibration, and the monitored ions were purine m/z 121.05 and m/z 119.03632, and HP-0921 m/z 922.0098 and m/z 966.000725, for positive and negative mode respectively. The gas temperature was set to 150°C, the drying gas flow to 16L/min and the nebulizer pressure to 35psig. The sheath gas temperature was set to 350°C and the sheath gas flow to 11L/min. The capillary voltage was set to 4000V in positive ion mode, and to
| 1,954.4 | 2020-05-20T00:00:00.000 | [
"Biology",
"Medicine"
] |
Multi-Attribute Group Decision Making Based on Multigranulation Probabilistic Models with Interval-Valued Neutrosophic Information
In plenty of realistic situations, multi-attribute group decision-making (MAGDM) is ubiquitous and significant in the daily activities of individuals and organizations. Among diverse tools for coping with MAGDM, granular computing-based approaches constitute a series of viable and efficient theories by means of multi-view problem-solving strategies. In this paper, in order to handle MAGDM issues with interval-valued neutrosophic (IN) information, we adopt one of the granular computing (GrC)-based approaches, known as multigranulation probabilistic models, to address IN MAGDM problems. More specifically, after revisiting the related fundamental knowledge, three types of IN multigranulation probabilistic models are designed first. Then, some key properties of the developed theoretical models are explored. Afterwards, a MAGDM algorithm for merger and acquisition target selections (M&A TSs) with IN information is summed up. Finally, a real-life case study, together with several detailed discussions, is investigated to present the validity of the developed models.
A Brief Review of MAGDM
By applying decision-making issues with multiple attributes to the setting of group decision-making, multi-attribute group decision-making (MAGDM) generally provides consistent group preferences by analyzing various alternatives expressed by individual preferences [1]. To date, many granular computing (GrC)-based approaches [2-6] have been utilized to solve numerous complicated MAGDM problems, which has generated new momentum for the constant development of the social economy.
In the process of solving a typical MAGDM problem, it is recognized that three key challenges need to be managed reasonably: MAGDM information representation, MAGDM information fusion, and MAGDM information analysis. Among these key challenges, how to express MAGDM information, especially for a complicated uncertain real-world scenario, as a standard decision matrix via alternatives and attributes is the first step in addressing MAGDM problems.
A Brief Review of Interval-valued Neutrosophic Information
In order to meet the demands of describing fuzzy and indeterminate information from nature and society at the same time, Smarandache [7,8] founded the notion of neutrosophic sets (NSs), which can be regarded as a generalization of extended fuzzy sets [9] and have been used in plenty of meaningful areas [10-12]. An NS contains three types of membership functions (the truth, indeterminacy and falsity ones), and all of them take values in ]0−, 1+[. In accordance with the mathematical formulation of NSs, using NSs directly in a range of realistic applications is relatively inconvenient because all membership functions are limited within ]0−, 1+[. Thus, it is necessary to update ]0−, 1+[ by virtue of standard sets and logic. Following the above-stated research route, Wang et al. [13] put forward interval-valued neutrosophic sets (INSs), in which the three memberships are expressed as subintervals of the unit interval [0, 1].
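For intuition, an interval-valued neutrosophic number can be represented as three subintervals of [0, 1], one each for truth, indeterminacy and falsity; the class below is an illustrative sketch, not code from the cited works.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class INNumber:
    """Interval-valued neutrosophic number: interval-valued truth (t),
    indeterminacy (i) and falsity (f) memberships, each within [0, 1]."""
    t: tuple  # (lower, upper)
    i: tuple
    f: tuple

    def __post_init__(self):
        for lo, hi in (self.t, self.i, self.f):
            if not (0.0 <= lo <= hi <= 1.0):
                raise ValueError("each membership must be an interval in [0, 1]")

a = INNumber(t=(0.4, 0.6), i=(0.1, 0.2), f=(0.2, 0.3))
```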
A Brief Review of Multigranulation Probabilistic Models
To handle MAGDM information fusion and analysis with IN information effectively, GrC-based approaches have unique advantages in constructing problem-solving approaches via multi-view tactics [23,24]. Over the past several years, taking full advantage of GrC-based approaches, many scholars and practitioners have obtained fruitful results in merging NSs with rough sets [25-32], formal concept analysis [33,34], three-way decisions [35-37], and others [38,39]. In the current article, we propose a new IN MAGDM method via multigranulation probabilistic models, which provides a risk-based information synthesis scheme with the capability of error tolerance in light of GrC-based approaches. In particular, the notions of multigranulation rough sets (MGRSs) [40-42] and probabilistic rough sets (PRSs) [43-45] are used to establish the multigranulation probabilistic models, and the merits of MGRSs and PRSs are reflected in the MAGDM problem-addressing process.
The Motivations of the Research
MGRSs play a significant role in dealing with MAGDM problems in diverse backgrounds. On the one hand, some scholars have made eminent contributions to the applications of MGRSs in MAGDM problems in recent years. For instance, Zhang et al. [46,47] developed various MGRSs in the context of hesitant fuzzy and interval-valued hesitant fuzzy sets for handling person-job fit and steam turbine fault diagnosis, respectively. Sun et al. [48,49] proposed several MGRSs with linguistic and heterogeneous preference information, and further designed corresponding MAGDM approaches. Zhan et al. [50] and Zhang et al. [51] put forward two novel covering-based MGRSs with fuzzy and intuitionistic fuzzy information for addressing MAGDM problems. On the other hand, some scholars have adopted PRSs to address MAGDM problems. For instance, Liang et al. [52-54] studied novel decision-theoretic rough sets in hesitant fuzzy, incomplete, and Pythagorean information systems. Xu and Guo [55] generalized MGRSs to double-quantitative and three-way decision frameworks. Zhang et al. [56,57] combined decision-theoretic rough sets with MGRSs in Pythagorean and hesitant fuzzy linguistic information systems. In this paper, we generalize MGRSs and PRSs to IN information and apply them to M&A TSs. Specifically, the following motivations for utilizing MGRSs and PRSs in IN MAGDM problems can be summed up:
1. In order to address the challenge of MAGDM information representation, INSs take advantage of NSs and interval-valued sets at the same time. Thus, INSs play a significant role in describing indeterminate and incomplete MAGDM information.
2. In order to address the challenge of MAGDM information fusion, MGRSs excel at accelerating the information fusion procedure by processing multiple binary or fuzzy relations in parallel. In addition, classical MGRSs offer decision makers both an optimistic information fusion rule and a pessimistic counterpart. In conclusion, MGRSs play a significant role in constructing reasonable MAGDM information fusion methods [46-51].
3. In order to address the challenge of MAGDM information analysis, starting from probability theory and Bayesian procedures, PRSs own the capability of fault tolerance. Hence, PRSs play a significant role in coping with incorrect and noisy data, and they can be seen as a useful tool for robust MAGDM information analysis [52-57].
The Contributions of the Research
In this work, we aim to utilize MGRSs and PRSs in solving complicated MAGDM problems with IN information. Specifically, several comprehensive risk-based models named IN multigranulation PRSs (MG-PRSs) over two universes are investigated. Then, we further present a MAGDM approach in the setting of M&A TSs in light of the developed theoretical models, which mitigates the three above-mentioned challenges. Finally, a real-world example is employed to demonstrate the validity of the established decision-making rule. In addition, it is noteworthy that plenty of interesting nonlinear modeling approaches have proved successful in various applications [58][59][60][61][62][63][64][65][66][67]. For instance, Medina and Ojeda-Aciego [58] applied multi-adjoint frameworks to the general t-concept lattice, and some other works on fuzzy formal contexts based on GrC-based approaches were explored in succession [64][65][66][67]. Takacs et al. [59] put forward a new soft tissue model for constructing telesurgical robot systems. Gil et al. [60] studied a surrogate-model-based optimization of traffic light cycles and green period ratios by means of microscopic simulation and fuzzy rule interpolation. Smarandache et al. [61] explored word-level sentiment similarities in the context of NSs, and some other meaningful works on word-level sentiment analysis were also investigated recently [62,63,68,69].
Compared with existing popular nonlinear modeling approaches, the vital contributions of this work lie in the utilization of IN information and multigranulation probabilistic models. For one thing, the above-mentioned literature on nonlinear modeling approaches cannot effectively process practical situations with indeterminate and incomplete information, so this work shows some merits in the representation of uncertain MAGDM information. For another, the majority of nonlinear modeling approaches cannot reasonably fuse and analyze multi-source information with incorrect and noisy data, so this work shows some merits in the fusion and analysis of MAGDM problems with IN information. Moreover, several specific key contributions of the work can be summarized as follows:
1. Several new IN membership degrees are put forward to handle incorrect and noisy data via the capability of fault tolerance.
2. Three types of multigranulation probabilistic models are designed according to the diverse risk attitudes of decision makers, i.e., the first in light of optimistic rules, the second in light of pessimistic rules, and the third in light of adjustable rules.
3. On the basis of GrC-based methods, IN MG-PRSs over two universes can address a typical IN MAGDM issue from multiple views and integrate different individual preferences by considering risk appetites and error tolerance.
The Structure of the Research
The rest of the work is organized as follows. The next section reviews some basic knowledge of INSs, MGRSs, and PRSs. Three types of theoretical models along with their key properties are explored in Section 3. In the next section, we develop an IN MAGDM approach via multigranulation probabilistic models in the context of M&A TSs. In Section 5, a practical illustrative case study is explored to highlight the validity of the presented IN MAGDM rule. Finally, Section 6 contains several concluding remarks and future research directions.
Preliminaries
The current section briefly revisits some preliminary knowledge of INSs, MGRSs, and PRSs.
INSs
The concept of INSs was put forward by Wang et al. [13] by replacing the nonstandard range ]⁻0, 1⁺[ of the membership functions of NSs with sub-intervals of the standard unit interval. With this update, INSs are equipped with the capability of expressing indeterminate and incomplete information simultaneously. Formally, an INS assigns to each element interval-valued truth-, indeterminacy-, and falsity-membership degrees, each being a sub-interval of [0, 1].
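To make the data model concrete, the following is a minimal sketch (not taken from the paper) of how an interval neutrosophic value and an INS over a finite universe could be represented; the class and field names are our own illustrative choices.

```python
from dataclasses import dataclass

@dataclass
class INValue:
    """An interval neutrosophic value: interval-valued truth-, indeterminacy-
    and falsity-membership degrees, each a sub-interval of [0, 1]."""
    truth: tuple
    indeterminacy: tuple
    falsity: tuple

    def __post_init__(self):
        for lo, hi in (self.truth, self.indeterminacy, self.falsity):
            assert 0.0 <= lo <= hi <= 1.0, "each degree must lie inside [0, 1]"

# An INS over a finite universe is then simply a map from elements to IN values.
ins_example = {
    "x1": INValue((0.6, 0.8), (0.1, 0.2), (0.1, 0.3)),
    "x2": INValue((0.3, 0.5), (0.2, 0.4), (0.4, 0.6)),
}
```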
MGRSs
As one of the most influential generalized rough set theories, the idea of MGRSs was initially established by Qian et al. [40][41][42] by means of parallel computational frameworks and risk-based information fusion strategies.
Definition 4 ([40,41]). Suppose R_1, R_2, . . . , R_m are m crisp binary relations on U, and write [x]_{R_i} = {y ∈ U : (x, y) ∈ R_i}. For any X ⊆ U, the optimistic multigranulation lower approximation of X is the set of all x ∈ U such that [x]_{R_i} ⊆ X for at least one i, and the optimistic upper approximation is the complement of the optimistic lower approximation of the complement of X; the pessimistic versions require [x]_{R_i} ⊆ X for every i. The pair of optimistic lower and upper approximations is named an optimistic MGRS with regard to X, whereas the pessimistic pair is named a pessimistic MGRS with regard to X.
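As an illustration of Definition 4, the sketch below computes the optimistic and pessimistic multigranulation approximations for crisp relations encoded as partitions of the universe; this encoding and the function names are our own choices made for illustration.

```python
def block_of(partition, x):
    """Return the class [x]_R of a partition (list of frozensets) containing x."""
    for block in partition:
        if x in block:
            return block
    raise ValueError(f"{x} not covered by the partition")

def mg_lower(universe, partitions, X, mode="optimistic"):
    """Optimistic lower approximation: [x]_{R_i} is a subset of X for SOME i;
    pessimistic: for ALL i."""
    quant = any if mode == "optimistic" else all
    return {x for x in universe
            if quant(block_of(P, x) <= X for P in partitions)}

def mg_upper(universe, partitions, X, mode="optimistic"):
    """Upper approximation by duality: complement of the lower approximation
    of the complement of X."""
    return universe - mg_lower(universe, partitions, universe - X, mode)

U = frozenset(range(6))
R1 = [frozenset({0, 1}), frozenset({2, 3}), frozenset({4, 5})]
R2 = [frozenset({0}), frozenset({1, 2}), frozenset({3, 4, 5})]
X = {0, 1, 2}
print(mg_lower(U, [R1, R2], X), mg_upper(U, [R1, R2], X))
```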
PRSs
Considering that the formulation of classical rough sets is fairly strict, which may limit its range of applications, the concept of PRSs [43][44][45] was subsequently developed by means of probabilistic measure theory.
Definition 5 ([43]). Suppose R is an equivalence relation over U and P is a probabilistic measure; then (U, R, P) is named a probabilistic approximation space. For any 0 ≤ β < α ≤ 1 and X ⊆ U, the lower approximation of X collects all x ∈ U with P(X | [x]_R) ≥ α, and the upper approximation collects all x ∈ U with P(X | [x]_R) > β. The resulting pair is named a PRS of X with regard to (U, R, P).
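A small illustrative implementation of Definition 5, assuming the uniform probability measure so that P(X | [x]_R) reduces to the relative overlap of X with the equivalence class; this choice of measure and the function name are ours.

```python
def prob_approximations(partition, X, alpha, beta):
    """(alpha, beta)-probabilistic approximations of X w.r.t. an equivalence
    relation given as a partition, with P(X | [x]_R) = |X ∩ [x]_R| / |[x]_R|
    (i.e. the uniform probability measure)."""
    assert 0 <= beta < alpha <= 1
    lower, upper = set(), set()
    for block in partition:
        p = len(X & block) / len(block)
        if p >= alpha:
            lower |= block
        if p > beta:
            upper |= block
    return lower, upper

partition = [frozenset({0, 1, 2}), frozenset({3, 4}), frozenset({5})]
print(prob_approximations(partition, X={0, 1, 3}, alpha=0.7, beta=0.3))
```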
IN MG-PRSs over Two Universes
In what follows, prior to the introduction of new theoretical models, we shall revisit the formulation of IN relations within the context of two universes [28] at first.
Definition 6 ([28]). Suppose U and V are two arbitrary universes of discourse. An IN relation R over U × V is an INS on the product universe U × V, i.e., it assigns to every pair (x, y) ∈ U × V interval-valued truth-, indeterminacy-, and falsity-membership degrees.
Optimistic IN MG-PRSs over Two Universes
It is noted that the term "optimistic" originates from the first paper on MGRSs [40]. Within the context of MGRSs, the notions of single and multiple IN inclusion degrees are introduced first in the current section.
Definition 7.
Suppose R i is an IN relation over two universes over U × V. For any E ∈ I N (V), x ∈ U, y ∈ V, the single IN membership degree of x in E in terms of R i is provided as the following mathematical expression: , the multiple IN membership degrees of x in E with regard to R i are provided as the following mathematical expressions: In light of maximal IN membership degrees, optimistic multigranulation probabilistic models can be put forward conveniently. Definition 8. Suppose R i is an IN relation over two universes over U × V. For any E ∈ I N (V), x ∈ U, y ∈ V, the two IN thresholds are represented by α and β with α > β, then the lower and upper approximations of E in optimistic multigranulation probabilistic models are provided as the following mathematical expressions: In what follows, some key properties of lower and upper approximations for optimistic multigranulation probabilistic models are explored in detail.
Proposition 1.
Suppose R i is an IN relation over two universes over U × V. For any E, F ∈ I N (V), x ∈ U, y ∈ V, the two IN thresholds are denoted by α and β with α > β, then the lower and upper approximations for optimistic multigranulation probabilistic models own the following properties: 5. According to the above conclusions, (F) can be deduced analogously.
Pessimistic IN MG-PRSs over Two Universes
According to previous definitions, starting from minimal IN membership degrees, pessimistic multigranulation probabilistic models can be established in a similar way.
Definition 9.
Suppose R i is an IN relation over two universes over U × V. For any E ∈ I N (V), x ∈ U, y ∈ V, the two IN thresholds are represented by α and β with α > β, then the lower and upper approximations of E in pessimistic multigranulation probabilistic models are provided as the following mathematical expressions: Next, some key properties of lower and upper approximations for pessimistic multigranulation probabilistic models are presented, and we can prove them according to above-mentioned proofs for Proposition 1.
Proposition 2.
Suppose R_i is an IN relation over two universes U × V. For any E, F ∈ IN(V), x ∈ U, y ∈ V, with the two IN thresholds denoted by α and β (α > β), the lower and upper approximations for pessimistic multigranulation probabilistic models have the following properties:
Definition 10. Suppose R_i is an IN relation over two universes U × V. For any E ∈ IN(V), x ∈ U, y ∈ V, the adjustable IN membership degrees of x in E with regard to R_i are provided as the following mathematical expressions:
Adjustable IN MG-PRSs over Two Universes
Next, adjustable multigranulation probabilistic models can be designed similarly. Definition 11. Suppose R i is an IN relation over two universes over U × V. For any E ∈ I N (V), x ∈ U, y ∈ V, the two IN thresholds are represented by α and β with α > β, then the lower and upper approximations of E in adjustable multigranulation probabilistic models are provided as the following mathematical expressions: In what follows, some key properties of lower and upper approximations for adjustable multigranulation probabilistic models are presented, and we can also prove them according to above-mentioned proofs for Proposition 1.
Proposition 3.
Suppose R i is an IN relation over two universes over U × V. For any E, F ∈ I N (V), x ∈ U, y ∈ V, the two IN thresholds are denoted by α and β with α > β, then the lower and upper approximations for adjustable multigranulation probabilistic models own the following properties:
Relationships between Optimistic, Pessimistic, and Adjustable IN MG-PRSs over Two Universes
In previous sections, three types of multigranulation probabilistic models with IN Information are investigated in detail. The following section aims to discuss relationships between optimistic, pessimistic, and adjustable multigranulation probabilistic models.
Proposition 4.
Suppose R i is an IN relation over two universes over U × V. For any E, F ∈ I N (V), x ∈ U, y ∈ V, the two IN thresholds are denoted by α and β with α > β; then, we have: (E) can also be proved.
IN MAGDM Based on Multigranulation Probabilistic Models
In the following section, we aim to sum up a viable and effective MAGDM approach using the newly developed theoretical models. As pointed out in Section 1, multigranulation probabilistic models can manage the three challenges of typical MAGDM situations well. To be specific, the overall study context of IN MG-PRSs over two universes is IN information, which excels in depicting indeterminate and incomplete information at the same time. In addition, the development of multigranulation probabilistic models provides decision makers with an effective strategy for MAGDM information fusion and analysis by drawing on the strengths of MGRSs and PRSs, and the proposed IN MG-PRSs over two universes are also equipped with the ability to describe risk preferences of information fusion quantitatively and dynamically. Hence, IN MG-PRSs over two universes play a significant role in solving MAGDM problems, and it is necessary to put forward corresponding MAGDM methods.
Next, in order to explore MAGDM methods in a real-world scenario, we place the following discussion in the context of M&A TSs. We first let the universe U = {x_1, x_2, . . . , x_j} be a set of selectable M&A targets, whereas the universe V = {y_1, y_2, . . . , y_k} is a set of assessment criteria. Then, we let E ∈ IN(V) be a standard set containing several requirements of corporate acquirers expressed over the assessment criteria. Afterwards, m decision makers in a group provide relations R_i ∈ INR(U × V) (i = 1, 2, . . . , m) between the above-mentioned universes. Finally, an information system (U, V, R_i, E) for M&A TSs can be established as the input for the following MAGDM algorithm based on multigranulation probabilistic models.
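For orientation, here is a minimal sketch of how the information system (U, V, R_i, E) could be laid out in code. The paper's entries are interval neutrosophic values; plain floats are used below as stand-ins purely to show the shapes of the objects, and all names are illustrative.

```python
import numpy as np

# Universe of M&A targets and universe of assessment criteria (names illustrative).
U = ["x1", "x2", "x3", "x4", "x5"]
V = ["y1", "y2", "y3"]

# Each decision maker i supplies a relation R_i over U x V; here a float per entry
# stands in for the interval neutrosophic value, so only the shapes are faithful.
rng = np.random.default_rng(0)
R = [rng.random((len(U), len(V))) for _ in range(3)]   # m = 3 decision makers

# The standard set E collects the acquirer's requirements on the criteria.
E = rng.random(len(V))

information_system = {"U": U, "V": V, "R": R, "E": E}
```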
Remark 1.
In what follows, we interpret the scheme for selecting the parametric value λ. According to Definition 10, the adjustable IN membership degrees of x in E with regard to R_i interpolate between the maximal and minimal IN membership degrees. In light of risk decision-making under uncertainty in classical operational research [1], λ = 1 can be seen as the "completely risk-seeking" strategy, λ = 0 as the "completely risk-averse" strategy, and λ = 0.5 as the "risk-neutral" strategy. Moreover, the adjustable membership degree corresponds to a "somewhat risk-seeking" strategy when λ ∈ (0.5, 1) and to a "somewhat risk-averse" strategy when λ ∈ (0, 0.5). According to the above theoretical explanations, the parametric value λ represents the risk preference of the decision makers in M&A TSs: the larger the value of λ, the more risk-seeking the decision makers, whereas the smaller the value of λ, the more risk-averse they are. In general, the parametric value λ is determined in advance by the decision-makers' risk preferences or by empirical studies and prior knowledge. In practical MAGDM situations, suppose there are m decision makers in a group and each decision maker provides his or her risk preference λ_i (λ_i ∈ [0, 1], i = 1, 2, . . . , m); the final group-level value of λ is then obtained by aggregating these individual preferences. In what follows, an algorithm for M&A TSs by virtue of adjustable IN MG-PRSs over two universes is established.
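The following sketch illustrates the role of λ described in Remark 1. Since the exact expressions for the adjustable IN membership degrees are not reproduced above, the convex combination of the maximal and minimal degrees and the averaging of the individual λ_i are assumptions made only for illustration.

```python
def adjustable_membership(degrees, lam):
    """Blend the per-relation membership degrees of one alternative.
    lam = 1 -> completely risk-seeking (max), lam = 0 -> completely risk-averse (min),
    lam = 0.5 -> risk-neutral.  The convex combination is an assumed illustration."""
    return lam * max(degrees) + (1.0 - lam) * min(degrees)

# Each decision maker states an individual risk preference; one simple way to obtain
# the group-level lambda is to average them (an assumption, not the paper's rule).
individual_lambdas = [0.7, 0.5, 0.6]
lam = sum(individual_lambdas) / len(individual_lambdas)

print(adjustable_membership([0.4, 0.9, 0.6], lam))
```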
An Illustrative Example
For the sake of making an efficient comparative analysis with existing similar IN MAGDM approaches, we utilize the case study previously investigated in [28]. In what follows, we first present the general context of M&A TSs and show the basic steps of obtaining the optimal M&A target by means of the algorithm newly developed in Section 4. Remark 2. Section 4 acts as a transition which links the theoretical models proposed in Section 3 with the application case presented in Section 5. To be specific, we first put forward two special theoretical models named optimistic and pessimistic IN MG-PRSs over two universes. Then, we further generalize these two special theoretical models to adjustable IN MG-PRSs over two universes. All three proposed theoretical models are foundations for addressing MAGDM problems. Next, we propose a novel algorithm for M&A TSs in light of adjustable IN MG-PRSs over two universes in Section 4. Finally, in order to show the reasonability and effectiveness of the proposed algorithm, the following section conducts several quantitative and qualitative analyses via an illustrative example.
MAGDM Procedures
In this illustrative example, we use the case study that was previously investigated in [28].
According to Algorithm 1, we aim to obtain the optimal M&A target by means of IN MG-PRSs over two universes. First, we calculate single IN membership degrees as follows.
Algorithm 1 An algorithm for M&A TSs in light of adjustable IN MG-PRSs over two universes.
Require: An information system (U, V, R i , E) for M&A TSs. Ensure: The optimal alternative.
Step 1. Calculate the single IN membership degrees η for all targets and relations; Step 5. Determine the score values of Ξ for all selectable M&A targets; Step 6. The best alternative is the one with the largest score value. With regard to the relation R_1, we obtain the corresponding single IN membership degrees, and in a similar manner we obtain those for R_2 and R_3. In order to make an efficient comparison with the MAGDM method proposed in [28], where the risk coefficient λ = 0.6 is used, we also take λ = 0.6 in this case study. In what follows, the adjustable IN membership degrees can be calculated; for instance, for M&A target x_1 they follow directly from the values above. The final ranking result is x_5 ≻ x_1 ≻ x_4 ≻ x_3 ≻ x_2, i.e., the best alternative is x_5.
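Below is a schematic version of the decision pipeline of Algorithm 1, using scalar stand-ins for the interval neutrosophic quantities; the aggregation used in place of the omitted Steps 2-4 is a placeholder, so the sketch only illustrates the overall flow from membership degrees to scores and ranking.

```python
import numpy as np

def rank_targets(R, E, lam):
    """Illustrative pipeline for Algorithm 1 with scalar stand-ins.  The
    per-relation aggregation (weighting criteria by E and normalizing) is a
    placeholder for the steps of the algorithm not reproduced in the text."""
    # Step 1: 'single membership degrees' of each target w.r.t. each relation.
    per_relation = np.stack([Ri @ E / E.sum() for Ri in R])        # shape (m, |U|)
    # Adjustable fusion across the m relations (risk-seeking max vs risk-averse min).
    fused = lam * per_relation.max(axis=0) + (1 - lam) * per_relation.min(axis=0)
    # Steps 5-6: score values and ranking (largest score first).
    return np.argsort(-fused), fused

rng = np.random.default_rng(1)
R = [rng.random((5, 3)) for _ in range(3)]
E = rng.random(3)
order, scores = rank_targets(R, E, lam=0.6)
print("ranking (best first):", [f"x{i+1}" for i in order])
```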
Sensitivity Analysis
In the previous section, we obtain the optimal M&A target by using adjustable multigranulation probabilistic models with the risk coefficient λ = 0.6. The following sensitivity analysis aims to investigate the influence of the risk coefficient by changing the value of λ. To be specific, supposing the value of λ is taken as 0, 0.4, 0.5, 0.6, and 1, respectively, then we can obtain the final ranking orders in Table 1 below.
Table 1 lists, for each value of λ, the corresponding risk attitude (ranging from completely pessimistic/risk-averse at λ = 0, through somewhat pessimistic and somewhat optimistic, to completely optimistic/risk-seeking at λ = 1) together with the resulting ranking order. According to the final ranking orders in Table 1, it is easy to see that the best alternative is insensitive to the changing values of λ; that is, all results show that the best alternative is x_5. Thus, the best alternative is reliable and stable. The only difference lies in the ranking order of x_3 and x_4 when λ = 0, i.e., x_3 is superior to x_4 when λ = 0, whereas x_3 is inferior to x_4 in the other situations. The cause of this phenomenon is that the changing values of λ may affect the ranking order of x_3 and x_4 when the risk preference is completely risk-averse.
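The sensitivity analysis amounts to re-running the ranking for several values of λ. The sketch below shows how such a sweep can be organized; it runs on synthetic stand-in data, so it does not reproduce the numbers behind Table 1.

```python
import numpy as np

rng = np.random.default_rng(1)
R = np.stack([rng.random((5, 3)) for _ in range(3)])   # 3 relations over 5 targets, 3 criteria
E = rng.random(3)
per_relation = R @ E / E.sum()                         # shape (3, 5): stand-in membership degrees

# Re-rank the five targets for several risk coefficients, mirroring the structure of Table 1.
for lam in (0.0, 0.4, 0.5, 0.6, 1.0):
    fused = lam * per_relation.max(axis=0) + (1 - lam) * per_relation.min(axis=0)
    order = np.argsort(-fused)
    print(f"lambda = {lam:.1f}:", " > ".join(f"x{i+1}" for i in order))
```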
Comparative Analysis
In what follows, we compare our approach with the MAGDM method proposed in [28] in order to present the merits of the proposed MAGDM algorithm. In [28], the authors put forward an algorithm for M&A TSs via IN MGRSs over two universes without the support of PRSs. The mathematical structures of optimistic and pessimistic IN MGRSs over two universes follow [28]; the pair of lower and upper approximations built with the optimistic (respectively, pessimistic) fusion rule is named an optimistic (respectively, pessimistic) IN MGRS over two universes of E.
More concretely, the optimistic and pessimistic IN multigranulation rough approximations for the information system (U, V, R_i, E) (i = 1, 2, 3) for M&A TSs are calculated first, together with their synthesized lower and upper versions. From these sets, score values are computed, and it is convenient to determine the ranking order of the five M&A targets from these score values and to obtain the optimal M&A target, which is x_3, with x_5 ranked second. The reason for the difference from the result obtained by the proposed method is that IN MGRSs over two universes lack the ability of error tolerance; their MAGDM result is sensitive to outlier values in the original information for M&A TSs.
Discussion
In order to address complicated MAGDM problems effectively, three key challenges are identified first. Then, under the guidance of multigranulation probabilistic models, we utilize INSs, MGRSs, and PRSs to handle the above-mentioned challenges. Moreover, compared with existing popular nonlinear modeling approaches, such as formal concept analysis [33,34,58][64][65][66][67], control systems [59,60], and sentiment analysis [61][62][63][68,69], which neither effectively handle indeterminate and incomplete information in complicated MAGDM problems nor reasonably fuse and analyze multi-source information with incorrect and noisy data, it is necessary to combine INSs and MGRSs with PRSs in order to develop meaningful hybrid models along with corresponding MAGDM approaches. In light of the MAGDM procedures in the current section, we sum up the merits of the proposed MAGDM algorithm below:
1. INSs act as a viable and effective tool for depicting various uncertainties in typical MAGDM situations. By dividing the notion of membership degrees into three different parts, indeterminate and incomplete MAGDM information can be described precisely.
2. In MAGDM information fusion procedures, with the support of MGRSs, the computational efficiency of information fusion can be enhanced to a large extent. Moreover, the decision risks of information fusion strategies can also be modeled well.
3. Compared with [28], the proposed MAGDM algorithm excels in fusing the superiorities of PRSs into the construction of hybrid models. To be specific, IN MG-PRSs over two universes have fault-tolerance ability when coping with incorrect and noisy data.
Hence, the developed IN MG-PRSs over two universes perform outstandingly in MAGDM information representation, information fusion, and information analysis; they provide a beneficial tool for addressing complicated MAGDM problems.
Conclusions
This work mainly presents a general framework for dealing with complicated IN MAGDM problems by virtue of multigranulation probabilistic models. First, three different types of multigranulation probabilistic models are put forward, namely the optimistic, pessimistic, and adjustable versions, and both their definitions and key properties are discussed in detail. Then, the relationships between the optimistic, pessimistic, and adjustable multigranulation probabilistic models are further explored. Afterwards, corresponding IN MAGDM approaches are proposed in the background of M&A TSs. Finally, a practical example of M&A TSs is presented with several quantitative and qualitative analyses.
In the future, it is meaningful to generalize IN MG-PRSs over two universes to more extended neutrosophic contexts such as neutrosophic duplets, triplets, and multisets. Furthermore, establishing efficient IN MAGDM approaches for problems with dynamic situations, high-dimensional attributes, and large-scale sets of alternatives is also necessary. Another interesting future research direction is to apply the presented theoretical models to other areas such as clustering, feature selection, compressed sensing, image processing, etc. | 6,160 | 2020-02-09T00:00:00.000 | [
"Computer Science"
] |
Quantum Spectral Curve and Structure Constants in N=4 SYM: Cusps in the Ladder Limit
We find a massive simplification in the non-perturbative expression for the structure constant of Wilson lines with 3 cusps when expressed in terms of the key Quantum Spectral Curve quantities, namely Q-functions. Our calculation is done for the configuration of 3 cusps lying in the same plane with arbitrary angles in the ladders limit. This provides strong evidence that the Quantum Spectral Curve is not only a highly efficient tool for finding the anomalous dimensions but also encodes correlation functions with all wrapping corrections taken into account to all orders in the `t Hooft coupling. We also show how to study the insertions of scalars coupled to the Wilson lines and extend our results for the spectrum and the structure constants to this case. We discuss an OPE expansion of two cusps in terms of these states. Our results give additional support to the Separation of Variables strategy in solving the planar N=4 SYM theory.
Introduction
Integrability is a unique tool allowing one to obtain exact non-perturbative results in fully interacting field theories even when the supersymmetry is of no use. The range of theories where integrability is known to be applicable includes supersymmetric theories such as planar N = 4 SYM and ABJM theory, which are important from a holographic perspective. Quite significantly, recently found examples of integrable theories include a particular class of scalar models in 4D possessing no supersymmetry at all [1][2][3][4][5].
Integrability methods of the type used here started being developed in the seminal papers [6] in the QCD context and independently in [7] for N = 4 SYM. After almost 20 years of development it was shown that both approaches can be united by the Quantum Spectral Curve (QSC) formalism [8,9], of which both are particular limits [9,12].
The QSC was initially developed with the primary goal of computing the spectrum of anomalous dimensions or, equivalently, two-point correlators. The QSC is based on the Q-system, a system of functional equations on Q-functions (see [13,14] for a recent review). At the same time, the Q-functions are known to play the role of the wave functions in the Separation of Variables (SoV) program initiated for quantum integrable models in [15][16][17][18] and recently generalized to SU(N) spin chains in [19], leading to a new algebraic construction for the states (see also [20,21]). In all these models the Q-functions (Baxter polynomials in this case) give the wave functions in separated variables. From this perspective it is natural to expect that the Q-functions of the QSC construction in N = 4 SYM contain much more information than the spectrum and should also play an important role for more general observables.
There are a few important lessons one can learn from simple spin chains. In particular one should introduce "twists" (quasi-periodic boundary conditions/external magnetic field) in order for the SoV construction to work nicely. One of the main reasons why the twists are important is that they break global symmetry and remove degeneracy in the spectrum. This makes the map between the Q-functions and the states bijective. Fortunately, one can rather easily introduce twists into the QSC construction [33][34][35] (see also [36]), however the interpretation of these new parameters is not always clear from the QFT point of view. The γ-deformation of N = 4 SYM [37][38][39][40] is one of the cases which is rather well understood, but it only breaks the R-symmetry part (dual to the isometries of the S^5 part of AdS/CFT) of the whole PSU(2,2|4) group. The situation where the twist in both AdS_5 and S^5 appears naturally is the cusped Maldacena-Wilson loop. In this paper we consider the correlation function of 3 cusps for 3 general angles (see Fig. 1). We consider a ladders limit [42,43] where the calculation can be done to all loop orders starting from Feynman graphs. (Figure 1: The expectation value of this object behaves exactly in the same way as a three-point correlation function of 3 local operators but provides 6 additional parameters (2 for each cusp), φ_1, φ_2, φ_3 and cos θ_1 = n_12 · n_23, cos θ_2 = n_23 · n_31, cos θ_3 = n_31 · n_12, which are associated with twists in the QSC description.) We observe that the result obtained
as a resummation of the perturbation theory takes a stunningly simple form when expressed in terms of the Q-functions, which we produced from the QSC.
Set-up and the Main Results. The Maldacena-Wilson lines we consider are defined as W = P exp ∫ dτ (i A_µ ẋ^µ + Φ_a n^a |ẋ|), where n^a is a constant unit 6-vector parameterizing the coupling to the scalars Φ_a of N = 4 SYM. The observable we study is the Wilson loop defined on a planar triangle made of three circular arcs (each arc is the image of a straight line segment under a conformal transformation), see Fig. 1. It is parameterized by three cusp angles φ_i at its vertices and also three angles θ_i between the couplings to scalars on the lines adjacent to each vertex. At each cusp we have a divergence controlled by the celebrated cusp anomalous dimension Γ_cusp(φ_i, θ_i), which can be efficiently studied via integrability [34,44,45] and is analogous to the local operator scaling dimensions in its mathematical description by the QSC. Due to this we will use the notation ∆ for the cusp dimension. To regularize the divergence we cut an ε-ball at each of the cusps. The whole Wilson loop has a conformally covariant dependence on the cusp positions and defines the structure constant C_123 for a 3-point correlator of three cusps. We focus on the ladders limit in which θ_i → i∞ while the 't Hooft coupling g = √λ/(4π) goes to zero with the finite combinations of (1.2) held fixed, playing the role of three effective couplings. The perturbative expansion for ∆ can then be resummed to all orders leading to a stationary Schrödinger equation [42,43,46]. However, the 3-cusp correlator is much more nontrivial and depends on three couplings λ̂_i which we can vary separately. We have studied the case when two of them are nonzero, corresponding to the structure constant we denote by C^{•••}_{123}. The result may be written in terms of the Schrödinger wave-functions, but it is a highly complicated integral which does not offer much structure. Yet once we rewrite it in terms of the QSC Q-functions q(u), we observe miraculous cancellations leading to a surprisingly simple expression (1.3), where the bracket ⟨f(u)⟩ of (1.4) is defined for functions which behave as ∼ e^{uβ} u^α at large u and are analytic for all Re u > 0. The functions q_1(u), q_2(u) describe the first and the second cusp, while e^{−φ_3 u} is just the Q-function at zero coupling corresponding to the third cusp. Each of the Q-functions solves a simple finite difference equation (2.7). This is precisely the kind of result one expects for an integrable model treated in separated variables. Note that all the dependence on the angles and the couplings comes solely through the Q-functions, which depend nontrivially on these parameters; in particular at large u we have q_i(u) ≃ u^{∆_i} e^{φ_i u}. We also found a very simple expression for the derivative of ∆ w.r.t. the coupling ĝ and the angle φ in terms of the bracket ⟨·⟩, which has a form very similar to (1.3) with q_1 = q_2 = q and different insertions in the numerator! These quantities can be interpreted as structure constants of two cusps with a local BPS operator [47].
In the limit when the triangle collapses to a straight line, this configuration has recently attracted much attention as it defines a 1d CFT on the line [48][49][50][51][52]. In particular, the structure constants we consider were computed in [50] by resumming the diagrams using the exact solvability of the Schrödinger problem at φ = 0. Our results in the zero angle limit can be simplified further by noticing that for φ_i → 0 the integral is saturated by the leading large u asymptotics of the integrand. This leads to ⟨q_i q_j⟩ → 1/Γ(1 − ∆_i − ∆_j), reproducing the results of [50].
As a byproduct, we also resolved the question of how to use integrability to compute the anomalous dimension for the cusp with an insertion of the same scalar as that coupled to the Wilson lines. We propose that it simply corresponds to one of the excited states in the Schrödinger equation (and to a well-defined analytic continuation in the QSC outside the ladders limit). We verified this claim at weak coupling by comparing with the direct perturbation theory calculation of [53]. Very recently the importance of cusps with such insertions was further motivated in [54], where the 3-loop result was extracted. We demonstrate some of our results in Fig. 2, where we show the plots of the spectrum and the structure constant for a range of the effective coupling ĝ.
Structure of the paper. The rest of the paper is organized as follows. In Sec. 2 we briefly review the QSC and present the Baxter equation to which it reduces in the ladders limit. We also derive compact formulas for the variation of ∆ with respect to the coupling and the angle φ. In Sec. 3 we write the regularized 2-pt function in terms of the Schrödinger equation wave functions, in particular deriving the pre-exponent normalization which is important for 3-pt correlators. We also relate the wave functions to the QSC Q-functions via a Mellin transform. In Sec. 4 we study the 3-cusp correlator and derive our main result for the structure constant (1.3). In Sec. 5 we describe the interpretation of excited states in the Schrödinger problem as insertions at the cusp. We generalize our results for 3-pt functions to the excited states and provide both perturbative and numerical data for their scaling dimensions. In Sec. 6 we describe the limit when the 3-cusp configuration degenerates, in particular reproducing the results of [50] when all angles become zero. In Sec. 7 and 8 we present numerical and perturbative results for the structure constants. Finally in Sec. 9 we interpret the regularized 2-pt function as a 4-cusp correlator for which we write an OPE-type expansion in terms of the structure constants, perfectly matching our previous results. In Sec. 10 we present conclusions. The appendices contain various technical details, in particular the detailed strong coupling expansion for the spectrum.
Quantum Spectral Curve in the ladders limit
In this section we provide all necessary background for this paper about the Quantum Spectral Curve (QSC). More technical details are given in Appendix A.
The QSC provides a finite set of equations describing non-perturbatively the cusp anomalous dimension ∆ at all values of the parameters φ, θ and any coupling g. Let us briefly review this construction and then discuss the form it takes in the ladders limit. The QSC was originally developed in [8,9] for the spectral problem of local operators in N = 4 SYM. It was extended in [34] to describe the cusp anomalous dimension, reformulating and greatly simplifying the TBA approach of [44,45]. The QSC is a set of difference equations (QQ-relations) for the Q-functions which are central objects in the integrability framework. When supplemented with extra asymptotics and analyticity conditions, these relations fix the Q-functions and provide the exact anomalous dimension ∆ (see [13] for a pedagogical introduction and [14] for a wider overview).
The QSC is based on 4+4 basic Q-functions denoted as P_a(u), a = 1, . . . , 4 and Q_i(u), i = 1, . . . , 4, which are related to the dynamics on S^5 and on AdS_5 respectively. The P-functions are analytic functions of u except for a cut at [−2g, 2g]. They can be nicely parameterized in terms of an infinite set of coefficients that contain full information about the state, including ∆. Details of this parameterization are given in Appendix A. The other 4 basic Q-functions Q_i are indirectly determined by P_a via a 4th order Baxter equation [12], where the coefficients D_n, D̄_n are simple determinants built from P_a and are given explicitly in Appendix A. Being of the 4th order, this Baxter equation has four independent solutions which precisely correspond to the four Q-functions Q_i. Different solutions can be identified by the four possible asymptotics Q_i ∼ u^{1/2±∆} e^{±uφ}, which uniquely fix the basis of four Q-functions up to a normalization if we also impose that the solutions Q_i(u) are analytic in the upper half-plane of u, which is always possible to do. Then they will have an infinite set of Zhukovsky cuts in the lower half-plane with branch points at u = ±2g − in (with n = 0, 1, . . . ). Finally, in order to close the system of equations we need to impose what happens after the analytic continuation through the cut [−2g, 2g]. It was shown in [34] that in order to close the equations one should impose "gluing" conditions relating q_i(u) = Q_i(u)/√u to q̃_i, its analytic continuation under the cut. These relations fix both the P- and Q-functions and allow one to extract the exact cusp anomalous dimension ∆ from the large u asymptotics. The equations presented above are valid at any values of g and the angles φ, θ. For the purposes of this paper we have to take the ladders limit of these equations. We will see that they simplify considerably.
Baxter equation in the ladders limit
In the ladders limit (1.2) the coupling g goes to zero and the QSC greatly simplifies as all the branch cuts of the Q-functions collapse and simply become poles. This limit was explored in detail in [55] for the special case φ = π corresponding to the flat space quark-antiquark potential. Here we briefly generalize these results to the generic φ case.
The key simplification is that the 4th order Baxter equation (2.1) on Q_i factorizes into two 2nd order equations, the first one being q(u + i) + q(u − i) = [2 cos φ − (2∆ sin φ)/u − 4ĝ²/u²] q(u) (2.7), and another equation obtained by ∆ → −∆. This follows from the fact that the coefficients A_n, B_n entering the P's via (A.1), (A.4) scale as ∼ 1 in the ladders limit. Then, as in [55], one can carefully expand the 4th order Baxter equation for t ≡ e^{iθ/2} → 0 and recover the 2nd order equation (2.7). As the large u behaviour of q(u) is fixed by the Baxter equation (2.7), we denote the two solutions as q_+ and q_− according to the large u asymptotics q_± ∼ e^{±φu} u^{±∆}. For example, in the weak coupling limit ĝ = 0 and for ∆ = 0, we see that q_± are simply q^{(0)}_± = e^{±φu} (2.8). At finite ĝ the Q-functions become rather nontrivial. While q_±(u) are regular in the upper half-plane including the origin, they have poles in the lower half-plane at u = −in, n = 1, 2, . . . . The equation (2.7) is just an sl(2) (non-compact) spin chain Baxter equation, similarly to [3]. This is expected on symmetry grounds. What is less trivial is the "quantization condition", i.e. the condition which restricts ∆ to a discrete set. It was first derived in [55] for φ → π and later generalized to the very similar calculation of two-point functions in the fishnet model [3]. The derivation of the quantization condition (2.9) for any φ is given in Appendix A. Together with the Baxter equation (2.7), this relation fixes ∆ as well as q_+. Note that the r.h.s. of (2.9) contains q_+, which has to be found from the Baxter equation and thus also depends on ∆ nontrivially. Due to this, (2.9) is a non-linear equation, which may have several solutions. Some intuition behind it becomes clearer after reformulating the problem in a more standard Schrödinger equation form, as we will see in section 3.1. At the same time, we see that we only need q_+ to find the spectrum. For this reason we will simply denote it as q(u) in the rest of the paper.
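As a quick sanity check of the equation written above (whose explicit form is our reconstruction from the variation (2.14) quoted below and the weak-coupling solutions (2.8)), one can verify numerically that q = e^{±φu} solves it at ĝ = ∆ = 0:

```python
import numpy as np

phi, ghat, Delta = 0.7, 0.0, 0.0
coeff = lambda u: 2*np.cos(phi) - 2*Delta*np.sin(phi)/u - 4*ghat**2/u**2

for sign in (+1, -1):
    q = lambda u, s=sign: np.exp(s * phi * u)     # candidate solution e^{±phi u}
    u0 = 2.3 + 0.4j                               # arbitrary point with Re u > 0
    residual = q(u0 + 1j) + q(u0 - 1j) - coeff(u0) * q(u0)
    print(abs(residual))                          # ~1e-15, i.e. zero to machine precision
```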
The meaning of the Q-functions from the QFT point of view is still a big mystery. There is no known observable in the field theory which is known to correspond to them directly. However in the "fishnet" theory, which is a particular limit of N = 4 SYM, such an object was recently identified [3]. Here, in the ladders limit we will be able to relate q(u) with a solution of the Bethe-Salpeter equation, which resums the ladder Feynman diagrams and thus has direct field theory interpretation.
Scalar product and variations of ∆
In this section we demonstrate the significance of the bracket ⟨·⟩, which we defined in the introduction in (1.4). In particular we will derive a closed expression for ∂∆/∂ĝ, which can be considered as a correlation function of two cusps with the Lagrangian [47]. Even though this seems to be the simplest application of the QSC to the computation of 3-point correlators, it is not yet known how to write the result for ∂∆/∂g for a general state in a closed form. We demonstrate here that this is in fact possible to do, at least in our simplified set-up.
First we rewrite the Baxter equation (2.7) by defining the finite difference operator Ô = D + D⁻¹ − 2 cos φ + (2∆ sin φ)/u + 4ĝ²/u², where D is the operator shifting the argument by i, so that the Baxter equation (2.7) becomes Ô q = 0. Now we notice that this operator is "self-adjoint" under integration along a vertical contour Re u = c to the right of the origin, with c > 0. Indeed, consider the term with D: after changing the integration variable u → u − i it becomes the term with D⁻¹ acting on q_1(u), cf. (2.13). The fact that Ô has this property immediately leads to a great simplification of the expression for ∂∆/∂g. We can now apply the standard QM perturbation theory logic. Changing the coupling and/or the angle φ will lead to a perturbation of both the operator Ô and the q-function in such a way that the Baxter equation is still satisfied, (Ô + δÔ)(q + δq) = 0, with δÔ = (1/u²)(8ĝ δĝ + 2u sin φ δ∆ + 2u² sin φ δφ + 2∆u cos φ δφ). (2.14) An explicit expression for δq could be rather hard to find, but luckily we can get rid of it by contracting (Ô + δÔ)(q + δq) with the original q(u). At the leading order in the perturbation we can drop δq to obtain ∫ q (8ĝ δĝ + 2u sin φ δ∆ + 2u² sin φ δφ + 2∆u cos φ δφ) q du/u² = 0. In terms of the bracket ⟨·⟩ this becomes (2.17); in particular, at fixed angle, ∂∆/∂ĝ = −4ĝ ⟨q²/u²⟩ / (sin φ ⟨q²/u⟩). This very simple equation is quite powerful. For example, by plugging in the leading order q = e^{uφ} from (2.8) and computing the integrals by the poles at u = 0, we immediately get the one-loop dimension ∆ = −4ĝ² φ/sin φ + O(ĝ⁴). Furthermore, another interesting property of the bracket is that solutions with different ∆'s are orthogonal to each other. Indeed, consider two solutions q_a of the Baxter equation with two different dimensions ∆_a, such that Ô_1 q_1 = Ô_2 q_2 = 0. Then, using the self-adjointness of Ô, one finds 0 = ⟨q_1(Ô_1 − Ô_2)q_2⟩ = 2(∆_1 − ∆_2) sin φ ⟨q_1 q_2/u⟩, so that ⟨q_1 q_2/u⟩ = 0 whenever ∆_1 ≠ ∆_2. (Figure 3: The two cusp correlator with four different cut-offs Λ_a, which can be considered as a particular case of a 4-cusp correlator. We take n points along each of the circular arcs and connect them with scalar propagators. We have to integrate over the domain −Λ_1 < t_1 < t_2 < · · · < t_n < Λ_3 and −Λ_4 < s_1 < s_2 < . . . < s_n < Λ_2. One should use the specific parameterization given in (3.3).)
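The one-loop computation just described can be reproduced symbolically. Here we assume, as the pole computation in the text suggests, that on e^{2uφ}/u^n the bracket ⟨·⟩ reduces to the residue at u = 0 (up to an overall normalization, which cancels in the ratio); for q = e^{uφ} there are no other poles to the left of the contour.

```python
import sympy as sp

u, phi, ghat = sp.symbols('u phi ghat', positive=True)

# Leading-order Q-function (2.8); the bracket acts by taking the residue at u = 0.
q0 = sp.exp(phi * u)
bracket = lambda f: sp.residue(f, u, 0)

# From  <q (8 ghat dghat + 2 u sin(phi) dDelta) q / u^2> = 0  at fixed angle:
dDelta_dghat = -8 * ghat * bracket(q0**2 / u**2) / (2 * sp.sin(phi) * bracket(q0**2 / u))
Delta_one_loop = sp.integrate(dDelta_dghat, ghat)
print(sp.simplify(Delta_one_loop))   # -4*ghat**2*phi/sin(phi)
```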
In the next section we relate the Q-function to the solution of the Bethe-Salpeter equation resumming the ladder diagrams for the two point correlator.
Bethe-Salpeter equations and the Q-function
In this section we consider a two cusp correlator with amputated cusps shown on Fig. 3 which we denote by G(Λ 1 , Λ 2 , Λ 3 , Λ 4 ). We derive an expression for it re-summing the ladder diagrams. To do this we write a Bethe-Salpeter equation and then reduce it to a stationary Schrödinger equation, expressing G in terms of the wave functions and energies of the Schrödinger problem. After that we discuss the relation between the wave functions and the Q-functions introduced in the previous section.
Bethe-Salpeter equation
Our goal in this section is reviewing the field-theoretical definition of the cusp anomalous dimension and its computation in the ladder limit, where it relates to the ground state energy of a simple Schrödinger problem.
First we define more rigorously the object from Fig. 3: we are computing the expectation value G(Λ_1, Λ_2, Λ_3, Λ_4) introduced above. For simplicity we can assume that the contours belong to the (∗, ∗, 0, 0) two-dimensional plane (which can always be achieved with a suitable rotation), and we use a particular "conformal" parameterization of the circular arcs by x_±(s) = (Re(ζ_±(s)), Im(ζ_±(s)), 0, 0), where ζ_±(s) = z_1 + (z_2 − z_1)/(1 ∓ i e^{∓s+i(χ±φ)/2}) (3.4), such that x_1 ≡ (Re(z_1), Im(z_1), 0, 0) = x_±(∓∞) and x_2 = (Re(z_2), Im(z_2), 0, 0) = x_±(±∞). Here x_+ corresponds to the upper arc in Fig. 3, and x_− to the lower one. The configuration has one parameter χ, which allows one to bend the two arcs simultaneously while keeping the angle between them fixed. This is the most general configuration of two intersecting circular arcs up to a rotation. Next we notice that in the ladders limit we can neglect the gauge fields, so we arrive at the Bethe-Salpeter representation (3.5), in which the last term is the scalar propagator with ĝ² = g² n_a · n_b /2 (which is equivalent in the ladders limit to the definition of ĝ in (1.2), as n_1 · n_2 = cos θ). The main advantage of the parameterization we used is that the propagator P(s, t) is a function of the sum s + t: P(s, t) = 2ĝ² / (cosh(s + t) + cos φ). (3.8)
Finally, we have to specify the boundary conditions. We notice that whenever one of the Wilson lines degenerates to a point, the expectation value in the ladders limit becomes 1, which fixes the boundary conditions. Stationary Schrödinger equation. In order to separate the variables we introduce new "light-cone" coordinates x and y, so that (3.5) becomes the equation (3.12). (Figure 4: We have to impose the boundary condition G̃_{Λ_1,Λ_2}(x, y) = 1 on the light-rays intersecting at x = Λ_1 − Λ_2 and given by the equation x = Λ_1 − Λ_2 ± 2y. The initial function G̃_{Λ_1,Λ_2}(x, y) is only defined inside the future light cone. It can be extended to the whole plane by setting it to zero outside the light cone and imposing G̃_{Λ_1,Λ_2}(x, y) = −G̃_{Λ_1,Λ_2}(x, −y) for negative y.) In order to completely reduce this equation to the stationary Schrödinger problem, we have to extend the function G̃_{Λ_1,Λ_2}(x, y) to the whole plane. Currently it is only defined for −Λ_1 < t < Λ_3 and −Λ_4 < s < Λ_2, i.e. inside the future light-cone, see Fig. 4. We extend G̃_{Λ_1,Λ_2}(x, y) to the whole plane using the odd continuation described in the caption of Fig. 4. With this definition it is easy to see that if (3.12) was satisfied in the future light cone, it will hold on the whole plane. After that we can expand G̃_{Λ_1,Λ_2}(x, y) in the complete basis of eigenfunctions of the Schrödinger equation in the x direction, and a_n(y) has to satisfy a_n''(y) = −E_n a_n(y). Since G̃(x, y) is odd in y, we obtain the expansion (3.17), in which we assume a sum over all bound states with E_n < 0 and an integral over the continuum E_n > 0 (see Fig. 5).
Next we should determine the coefficients C_n(Λ_1, Λ_2); for that we consider the small y limit. For small y we see that G̃(x, y) is almost constant inside the light cone (+1 for y > 0 and −1 for y < 0) and is zero for Λ_1 − Λ_2 − 2y < x < Λ_1 − Λ_2 + 2y. In other words, for small y we have the limiting form (3.18) of G̃, while at the same time from the ansatz (3.17) we have its small y expansion (3.19). Contracting equations (3.18) and (3.19) with an eigenvector F_n(x) and comparing the results, we obtain the coefficients, which results in the final expression (3.21) for G. We will use this result in the next section to compute the two-point function in a certain regularisation including the finite part. This will be needed for the normalisation of the 3-cusp correlator.
Two-point function with finite part
Now let us study the two-cusp configuration shown in Fig. 6, regularised by cutting ε-balls around each of the cusps. Here we show that the correlator has the expected space-time dependence of a two-point function with conformal dimension ∆ = −√(−E_0). In order to compute this quantity we need to work out which cut-offs in the parameters s and t appearing in (3.4) correspond to the ε-regularisation. Imposing the ε-cutoff, we find (asymptotically for small ε) the corresponding values of the Λ's, which allows us to write the regularised correlator using (3.21), where we use that for large Λ only the ground state contributes. We use the notation ∆_0 = −√(−E_0), so that ∆_0 is the usual cusp anomalous dimension. We see that the result for the 2-cusp correlator takes the standard form, with a rather non-trivial normalization coefficient N_{∆,φ} (3.26) which we will use to extract the structure constant from the 3-cusp correlator.
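For illustration, the ground-state energy of the Schrödinger problem can be obtained with a simple finite-difference discretization. The potential below is modelled on the propagator kernel (3.8); since the explicit form and normalization of (3.27) are not reproduced above, the prefactor of the potential and the coordinate normalization in this sketch are placeholders, so it illustrates the method rather than the paper's precise numbers.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def ground_state_dimension(ghat, phi, L=40.0, n=4000):
    """Delta_0 = -sqrt(-E_0) from a finite-difference discretization of a
    Schroedinger problem  -F'' + V F = E F  with a potential modelled on the
    propagator kernel (3.8); overall constants here are illustrative."""
    x = np.linspace(-L, L, n)
    h = x[1] - x[0]
    V = -2.0 * ghat**2 / (np.cosh(x) + np.cos(phi))
    diag = 2.0 / h**2 + V                 # tridiagonal Hamiltonian, Dirichlet b.c.
    off = -np.ones(n - 1) / h**2
    E0 = eigh_tridiagonal(diag, off, eigvals_only=True,
                          select='i', select_range=(0, 0))[0]
    return -np.sqrt(-E0) if E0 < 0 else 0.0

print(ground_state_dimension(ghat=0.5, phi=1.0))   # Delta_0 at illustrative parameters
```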
Relation to Q-functions
Here we describe a direct relation between solutions of the Schrödinger equation and the Q-functions. From the previous section we can identify ∆ = −√(−E), and in this section we will relate F(z) with q(u). The relation is very similar to that found previously for the φ = π case in [55]. For φ > 0, the map is given by the integral transform (3.28), where q(u) ≡ q_+(u) is the solution of the Baxter equation (2.7) specified by the large u asymptotics q(u) ≃ u^∆ e^{uφ}. We remind that we use the notation "|" for the integration along a vertical line shifted to the right of the origin. For negative ∆ the integral in (3.28) converges for any finite z, and we can shift the integration contour horizontally, as long as we do not cross the imaginary axis where the poles of q(u) lie. Let us show that if q satisfies the Baxter equation (2.7), then F(z) computed from (3.28) satisfies the Schrödinger equation (3.27). Applying the derivative in z twice to the relation (3.28), we find (3.30), where D represents the shift operator D[f(u)] = f(u + i). Shifting the integration variable and using the Baxter equation (2.7), the rhs of (3.30) simplifies, leading to (3.27). Notice that this relation between the Baxter and Schrödinger equations holds also off-shell, i.e. when ∆ is a generic parameter and the quantization condition (2.9) need not be satisfied. In Appendix B we show that the quantization condition (2.9) is equivalent to the condition that F(z) is a square-integrable function, so that it corresponds to a bound state of the Schrödinger problem.
Reality. Let us show that the transform (3.28) defines a real function F (z). Here we assume the quantization condition to be satisfied. Taking the complex conjugate of (3.28) we find A precise relation between q(u) andq(u) is discussed in appendix A. In particular, from (A.16), (A.27) we see that, when the quantization conditions are satisfied, for large Re u. Shifting the contour of integration to the right we see that the contribution of the omitted terms in (3.32) is irrelevant, and therefore the integral transforms involvingq(u) and q(u) are equivalent. This shows that F * (z) = F (z).
Inverse map. The transform (3.28) can be inverted as follows: The above integral representation converges for Im(u) > 0 and ∆ < 0. Assuming F (z) is a solution to the Schrödinger equation with decaying behaviour F (z) ∼ e ∆z/2 at positive infinity z → +∞, this map generates the solution to the Baxter equation q(u). When additionally F (z) decays at z → −∞, q(u) satisfies the quantization conditions.
Relation to the norm of the wave function. From the Schrödinger equation (3.27) we can use standard perturbation theory to immediately write the expression (3.34) for ∂∆/∂ĝ in terms of the wave function. We will rewrite the numerator in terms of the Q-function. For that we use that F_n(z) is either an even or an odd function depending on the level n; then we can write F²(z) = (−1)^n F(z)F(−z) and use (3.28). The advantage of writing the product in this way is that the factor e^{+∆z/2} in (3.28) cancels, giving (3.35). Next we notice that the integration in z can be performed explicitly, producing a kernel K(u − v). Note that the function K(u − v) is not singular by itself, as the pole at u = v cancels. We are going to get rid of the integral in u in (3.35); for that we notice that we can move the contour of integration in v slightly to the right of the contour in u, and after that we can split the two terms in K(u − v). The first term ∼ e^{φ(u−v)}/(u − v) decays for Re v → +∞ and we can shift the integration contour in v to infinity, getting zero. Similarly, the second term ∼ e^{φ(v−u)}/(u − v) decays for Re u → +∞ and we can move the integration contour in u to infinity, but this time on the way we pick up a pole at u = v. That is, only this pole contributes to the result, giving (3.37). At the same time, above in (2.17) we have already derived an expression for ∂∆/∂ĝ in terms of the Q-function. Comparing it with (3.34) and using (3.37), we conclude the norm formula (3.38), which expresses the norm of the wave function through the Q-function. We will use the relations between q and F to rewrite the 3-cusp correlator in terms of Q-functions in the next section.
Three-cusp structure constant
In this section we derive our main result -an expression for the structure constant. First, we compute it for the case when only one of the 3 couplings is nonzero. We refer to this case as the Heavy-Light-Light (HLL) correlator 11 . Then we generalize the result to two non-zero couplings, this case we call the Heavy-Heavy-Light (HHL) correlator. In both cases we managed to find an enormous simplification when the result is written in terms of the Q-functions. We postpone the Heavy-Heavy-Heavy (HHH) case for future investigation.
Set-up and parameterization
In this section we describe the 3-cusp Wilson loop configuration, parameterization and regularisation, which we use in the rest of the paper. The Wilson loop is limited to a 2D plane and consists of 3 circular arcs coming together at 3 cusps (see Fig. 7). The 3 angles φ i , i = 1, 2, 3 can be changed independently. The geometry is completely specified by the angles and the positions of the cusps x i , i = 1, 2, 3.
In the rest of this paper, we consider the following "triangular" inequalities on the angles: To understand the geometric meaning of these relations, consider the extension of the arcs forming the Wilson loop past the points x i : this defines three virtual intersections A, B, C (see Fig. 7). The inequalities (4.1) mean that A, B, C are all outside the Wilson loop. Our results will hold in this kinematics regime. In the limit where we approach the boundary of the region (4.1) our result significantly simplifies and will be considered in Sec. 6, in particular we will reproduce the results of [50] for the case φ 1 = φ 2 = φ 3 = 0. Now we describe a nice way to parametrize the Wilson lines. Consider the two arcs departing from x 1 . Extending these arcs past the points x 2 , x 3 , they define a second intersection point A. By making a special conformal transformation, we map A to infinity and both arcs connecting x 1 with A to straight lines, which we can then map on a cylinder like in (3.3). The most convenient parametrization corresponds to the coordinate along the cylinder. By mapping A back to some finite position we get a rather complicated but explicit parametrization like the one we used in Sec. 3.1.
It is again very convenient to use complex coordinates, similarly to (3.3), so that the cusp points are x i = (Re(z i ), Im(z i ), 0, 0), i = 1, 2, 3. For the arcs departing from z 1 we obtain, as described above, the following representation , where z ab = z a − z b . Notice that we have slightly redefined the parameters such that s = 0 and t = 0 correspond to the other two cusp points: By a cyclic permutation of all indices, we define similar parametrizations for the other arcs. Notice that, in this way, all arcs are parametrized in two distinct ways, e.g. the same arc connecting x 1 and x 2 is described by the functions ζ 12 (s) and ζ 21 (t), which are different.
The main advantage of the parametrization (4.3) is that the propagator between the two arcs is very simple: However, since we decided to shift the parameters so that s = 0 gives x 2 and t = 0 gives x 3 , the propagator appears to be shifted compared to (3.8) by the quantity with δx 2 and δx 3 defined similarly by cyclic permutations of the indices 1, 2, 3. We see now the importance of the inequalities (4.1) as they ensure δx i are real.
Notation. Below we consider correlators where the ladder limit is taken independently for the three cusps. Namely, by appropriately choosing the polarization vectors n_i on the three lines, we define effective couplings ĝ_i for the three cusps i = 1, 2, 3. Correspondingly, in this section we use the notation ∆_{i,0}, i = 1, 2, 3 (not to be confused with the notation ∆_n for the scaling dimensions of excited states used in other parts of the paper), to denote the scaling dimensions corresponding to the ground state for the three cusps (in the setup we consider we always have ĝ_3 = 0, ∆_{3,0} = 0). The extension to excited states will be discussed in section 5.
The Q-functions describing the ground state for the first and second cusps will be denoted as q i (u), i = 1, 2, respectively. Explicitly, q i (u) is the solution of the Baxter equation q + (u), evaluated at parametersĝ =ĝ i , ∆ = ∆ i,0 and φ = φ i .
Regularization
The 3-cusp correlator is UV divergent. To regularize the divergence we are going to cut ε-circles around each of the cusps (see [56] for a general argument why the divergence depends on the geometry only through the angles φ_i), in the same way as we regularized the 2-cusp correlator in the previous section. This will set a range for the parameters s_i and t_i entering the parametrizations ζ_ij(s_i), ζ_ij(t_i) defined above. Namely, from (4.3) it is easy to find that instead of running from −∞ they now start from the cutoffs given in (4.8). All other Λ^s_i and Λ^t_i for i = 2, 3 can be obtained by cyclic permutation of the indices 1, 2, 3. We also note the relation (4.9) between these cutoffs, which will be used below.
Heavy-Light-Light correlator
Now we consider the simplest example of three point function in the ladder limit, where we have only one non-vanishing effective coupling,ĝ 1 for the cusp at x 1 , withĝ 2 =ĝ 3 = 0. Correspondingly, we will have ∆ 2,0 = ∆ 3,0 = 0, so that this can be considered as a correlator between one nontrivial operator and two protected operators (see Fig. 8). For simplicity we will denote ∆ 1,0 as just ∆ 0 in this section. We start by defining a regularized correlator, which we denote as Y x 1 , ( x 2 , x 3 ), which is obtained by cutting the integration along the Wilson lines at a distance from x 1 . To compute this observable we consider the sum of all ladder diagrams built around the first cusp and covering the Wilson lines (12), (13) up to the points x 2 , x 3 , respectively, see Fig. 8. As discussed in section 3, this is described by the Bethe-Salpeter equation, which takes a very convenient form using the parameterization introduced in the previous section for the Wilson lines departing from x 1 : γ 12 (s) = (Re(ζ 12 (s)), Im(ζ 12 (s)), 0, 0), and γ 13 (t) = (Re(ζ 13 (t)), Im(ζ 13 (t)), 0, 0). The appropriate integration range for cutting an -circle around , with cutoffs defined in (4.8). However, in order to make a connection with G(Λ 1 , Λ 2 , Λ 3 , Λ 4 ) defined in section 3, we have to take into account the fact that the propagator in (4.4) is shifted by δx 1 . This means that we have to redefine s → s + δx 1 , which will shift the range to s ∈ [−Λ s 1 − δx 1 , −δx 1 ], furthermore due to (4.9) the range becomes s ∈ [−Λ t 1 , −δx 1 ] . From that we read off the values of Λ k and find Again, at large Λ s only the ground state survives and we get Substituting the values for Λ t 1 from (4.8) leads to which naturally has the structure of the 3-point correlator in a CFT, where we have defined Finally, to extract the structure constant we have to divide (4.12) by the two point functions (4.14) Let us now write the result in terms of the Q-functions. Using (3.28) to evaluate the shifted wave function in (4.14), we already notice a nice simplification: therefore (using also parity of the ground-state wave function) and taking into account also the norm formula (3.38), we find where the constant K 123 is defined as Using the parity of the ground state wave function F 0 , it can be verified that the result is symmetric in the two angles φ 2 ↔ φ 3 . We see that the result takes a much simpler form in terms of the Q-functions. The structure becomes even more clear when written in terms of the bracket · defined in (1.4): which is amazingly simple!
Heavy-Heavy-Light correlator
Now we switch on the effective couplings ĝ_i, i = 1, 2 for both the first and the second cusp. This means that this observable is defined perturbatively by Feynman diagrams with two kinds of ladders built around the cusps x_1 and x_2, see Fig. 9. As in the previous section, let us denote by Y_{x_1,ε}(x_2, x_3) the sum of all ladders built around the cusp point x_1, with a cutoff at distance ε from the cusp. We introduce a similar notation for the ladders built around the second cusp.
The sum of all diagrams contributing to the ε-regularized Heavy-Heavy-Light correlator can be organized as in (4.21), where the part W^{•••,ε}_{123,1} represents the sum of all diagrams with at least one propagator around the cusp x_1. As we are about to show, the leading UV divergence comes only from the connected part, which behaves as ∼ ε^{∆_{1,0}+∆_{2,0}}. Since the disconnected contributions in (4.21) have a milder divergence ∼ ε^{∆_{i,0}} (i = 1, 2), we can drop them, since they are irrelevant to the definition of the renormalized structure constant.
As illustrated in Fig. 10, the main contribution can be computed by splitting the diagrams as follows, where we denote by Y_{x_1,ε}(γ_12, γ_13) the sum of all ladder diagrams up to the points γ_12, γ_13 on the arcs (12), (13), respectively (and similarly for Y_{x_2,ε}(x_3, γ_12)).

Figure 10. We split the propagators into two groups by explicitly writing the last propagator between γ_12 and γ_13. Then we re-sum the propagators surrounding cusp x_2 into Y_{x_2,ε}(x_3, γ_12) and those around x_1 into Y_{x_1,ε}(γ_12, γ_13).
To compute the connected integral explicitly we choose the following parametrization for the arcs (12), (13). Exactly as described in section 4.3, redefining the parameters we find the result in terms of the amputated four point function, where δx_1 is defined in (4.5), and for ε → 0 we obtain (4.25). The other ingredient appearing in (4.21) is Y_{x_2,ε}(x_3, γ_12(s)). Computing this quantity is slightly more complicated, since the ladders built around the second cusp point x_2 are described most naturally in terms of a different parametrization, which uses the functions ζ_21(t_2), ζ_23(s_2) to parametrize the arcs (12), (23). In fact, it is only in the variables s_2 and t_2 that the propagator takes the simple form (4.4), with δx_1 → δx_2. Therefore we need to relate the two alternative parametrizations, ζ_21(t_2) vs ζ_12(s_1), for the line (12). To this end we introduce the transition map T_12(s) in (4.26), which is given explicitly by (4.27). Using this map, we find that Y_{x_2,ε}(x_3, γ_12(s)) is defined by the Bethe-Salpeter equation with the propagator shifted by δx_2 and with the appropriate integration ranges. Taking into account the shift in the propagator, we obtain (4.29), where L_231 is defined by applying a cyclic permutation to (4.18). Combining (4.25), (4.29) in (4.21), we find, for the leading divergent part, the expression (4.30), where N^{•••}_123 is a finite constant which can be written explicitly as in (4.31) (see footnote 14). Again, we see that (4.30) has the correct space-time dependence for a CFT 3-point correlator.
Normalizing by the 2-pt function factors N_{∆_i,φ_i} defined in (3.26) for the two cusps, we get a finite expression for the structure constant, (4.32).

Footnote 14: Notice that in this formula we have sent all the cutoffs defining the ranges of integration to infinity. Since the integrals in (4.31) are convergent, this does not change the leading UV divergence of the correlator, which is enough to get to the final result for the OPE coefficient. A more detailed argument would show that, by sending the cutoffs to infinity in (4.31), we also restore the disconnected contributions with subleading divergences.
Using the Schrödinger equation for F 1,0 , we can simplify the expression for N ••• 123 further and remove one of the integrations: While (4.33) provides an explicit result, it still appears rather intricate, especially since it contains the complicated transition function T 12 (s). We will now show that it can be reduced to an amazingly simple form in terms of the Q-functions.
First, applying the transform (3.28), and using parity of the ground state wave function, We then plug these relations into (4.33). We noticed a magic relation between the integrands of (4.34) and (4.35), which suggests that we switch to a new integration variable ξ = w φ 1 (s − δx 1 ) − φ 3 /2. Notice that the integration measure is invariant, ds ∂ s = dξ ∂ ξ . Taking into account (4.36) we get: and remarkably we can do the integral explicitly and find We can simplify this expression further. In fact, notice that the integrand has no poles for Re(u) > 0, Re(v) > 0, in particular there is no pole at u ∼ v. Therefore we can shift the two integration contours independently. Similarly to the trick used in section 3.3, we shift the v integration contour to the right so that Re(v) > Re(u), and split the integral into two contributions. One of them vanishes since the v-integrand is suppressed and the integration contour can be closed at Re(v) = ∞: while for the second integral it is the u-integrand that is suppressed. Closing the contour we now pick a residue at u ∼ v: Combining all ingredients, we get the final expression for the structure constant in terms of the Q functions: where the constants K 123 , K 213 are defined as in (4.18) by permutation of the indices. Again, it simplifies further in terms of the bracket · defined in (1.4) In this form it is clear that the final expression is explicitly symmetric for 1 ↔ 2, even though for the derivation we treated cusp x 1 differently from x 2 . This strikingly compact expression is one of our main results. Notice that it also covers the HLL case, namely if we send one of the effective couplingsĝ 1 ,ĝ 2 to zero we recover (4.19) as for zero coupling q 2 = 1.
Excited states
In this section we explore the meaning of the excited states and give them a QFT interpretation as insertions at the cusps. We will also extend our result for the structure constant to the excited states.
Excited states and insertions
First, let us discuss the structure of the spectrum of the Schrödinger equation. When we increase the coupling we find more and more bound states in the spectrum at E < 0. If we analytically continue the bound state energy by slowly decreasing the coupling we will find that the level approaches the continuum at E = 0 and then reflects back. After that point the state will strictly speaking disappear from the spectrum of the bound states as the wave function will no longer be normalizable. However, if we define the bound state as a pole of the resolvent, it will continue to be a pole, just not on the physical sheet, but under the cut of the continuum part of the spectrum.
At the same time, from the expression for G(Λ_1, Λ_2, Λ_3, Λ_4) in (3.21) we see that the natural variable is not E but rather ∆ = −√(−E). In the ∆-plane the branch cut of the continuum spectrum opens up, revealing the infinite number of resonances and bringing them back into the physical spectrum (see Fig. 12).
In order to give the field theory interpretation of those bound states we build projectors which, acting on our main object G(Λ_1, Λ_2, Λ_3, Λ_4), project onto the excited states ∆_n in the large Λ_i limit. First let us rewrite (3.21) in terms of the ∆_n's, as in (5.1). Since G has an interpretation as a 4-cusp BPS correlator, one can think about (5.1) as an OPE expansion in the t-channel. We will also see soon that the coefficients appearing there are the HLL structure constants with excited states. We will come back to this point in section 9. When the Λ's tend to infinity the sum is saturated by the smallest ∆_n. To suppress the lowest states we define the differential operators (5.2). With the help of these operators we define a projected correlator, which at large Λ scales as e^{−2∆_nΛ}, since all terms with k < n are projected out! Notice that, as discussed in Sec. 3.2, G(Λ, Λ, Λ, Λ) can be used to describe a regularized two-point function, where the cutoff is identified with x_12 e^{−Λ} = ε; similarly we get (5.4). Naively, the interpretation of the operators corresponding to the excited states is only valid for large enough coupling, when ∆_n < 0. In the next section we verify that it remains true at weak coupling at one loop level. Below, we also extend our result for the 3-cusp correlator to excited states. For this, we will need to know the long-time asymptotics of G̃_{Λ_1,Λ_2}(x, y) computed with the new type of boundary conditions described by the action of the projector O_n. We have, for y → ∞, the asymptotics (5.7), with the coefficient c_n defined in (5.8). Finally, from the 2-point correlator (5.4) we extract the normalization coefficients (5.9), which we will need to normalize the structure constant in the next section.
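The mechanism behind these projectors can be illustrated with a short symbolic computation. The sketch below is ours, not the paper's definition of O_n in (5.2): as an assumption, each factor of the projector is taken to be a first-order operator in Λ that annihilates one exponential e^{−2∆_kΛ} (the true operators may differ by normalization), and the numerical values of ∆_k are arbitrary toy inputs.

```python
# Illustrative sketch (ours): how differential operators in Lambda can project
# out the lowest states of a sum  G ~ sum_k c_k * exp(-2 Delta_k Lambda).
# Assumption: each projector factor is (d/dLambda + 2 Delta_k), which kills
# the k-th exponential; the paper's O_n may include extra normalization.
import sympy as sp

Lam = sp.symbols('Lambda', positive=True)
deltas = [sp.Rational(-3, 2), sp.Rational(-1, 2), sp.Rational(1, 2)]  # toy Delta_0, Delta_1, Delta_2
cs = sp.symbols('c0 c1 c2')
G = sum(c * sp.exp(-2 * d * Lam) for c, d in zip(cs, deltas))

def project(expr, n):
    """Apply prod_{k<n} (d/dLambda + 2 Delta_k) to expr."""
    for k in range(n):
        expr = sp.diff(expr, Lam) + 2 * deltas[k] * expr
    return sp.simplify(expr)

print(project(G, 2))  # only the Delta_2 exponential survives, times a constant
```

Applied to the toy sum, only the e^{−2∆_2Λ} term remains, mirroring the statement that at large Λ the projected correlator scales as e^{−2∆_nΛ}.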
Correlator with excited states
We will redo the calculation of the HLL correlator for the case when the heavy state is excited. The main observation is that all the steps are essentially the same as in the case of the ground state.

Footnote 16: We expect that for the finite θ case, i.e. away from the ladder limit, one should simply replace ∂_± with the corresponding covariant derivatives, at least at weak coupling. Footnote 17: In (5.5) and (5.6) the scalar coupled to n_1 is located at position −Λ_1 on the contour, and the scalar coupled to n_2 is at Λ_1.
We begin by applying the projector operator O_n, defined in (5.2), to the cusp at x_1; in the small ε limit we simply use the leading asymptotics (5.7) to obtain, very similarly to the ground state result (4.10), an expression with c_n defined in (5.8). Normalizing the result with (5.9), we get a finite result for the structure constant; rewriting it in terms of q-functions exactly as for the ground state, we obtain an expression in which q_{1,n} denotes the solution of the QSC corresponding to the n-th excited state, with parameters ĝ = ĝ_1, φ = φ_1. The (−1)^n appears from the corresponding factor in the relation for the norm of the wavefunction in (3.38); it is needed to ensure that the denominator is real at large coupling.
Similarly for the HHL correlator we simply replace q-functions and the corresponding dimensions, but the expression stays the same!
Excited states at weak coupling from QSC
As we discussed above (see section 3), for large coupling the Schrödinger equation has several bound states, while for small coupling all of them except the ground state disappear. Nevertheless the excited states have remnants at weak coupling which are not immediately apparent in the Schrödinger equation but are directly visible in the QSC. By solving the Baxter equation (2.7) and the gluing condition (2.9) numerically, we can follow any excited state from large to small coupling, and we find that ∆ has a perfectly smooth dependence on ĝ. The first several states are shown on Fig. 13 and Fig. 14, which also demonstrate an intricate pattern of level crossings that we will discuss below. For ĝ → 0 we moreover observe that ∆ becomes a positive integer L,

∆ = L + ∆^(1) ĝ² + ∆^(2) ĝ⁴ + . . . , L = 1, 2, . . . . (5.14)

Remarkably, for each L > 0 we have two states which become degenerate at zero coupling. In contrast, the ground state (corresponding to L = 0) does not merge with any other state.

Figure 13. The first few states for φ = 1.5. We show numerical data for ∆ as a function of ĝ, obtained from the Baxter equation. We see that all the states, except the ground state, are paired together at weak coupling.

Figure 14. The first few states for φ = 3.0. We plot ∆ as a function of the coupling ĝ, similarly to Fig. 13.
This pattern is consistent with our proposal for the insertions (5.2): the states with n = 2m and n = 2m − 1 have the same number of derivatives and thus should have the same bare dimension. We can explicitly compute ∆ for these states at weak coupling from the Baxter equation. We solve it perturbatively using the efficient iterative method of [57] and the Mathematica package provided with [34]. We start from the solution at ĝ = 0 and improve it order by order in ĝ. At ĝ = 0 the solution for any L ≡ ∆|_{ĝ=0} has the form of a polynomial of degree L multiplied by e^{uφ}. At the next order we already encounter nontrivial pole structures. This procedure gives q-functions written in terms of the generalized η-functions [34,58] defined in (5.15). As an example, for L = 1 we find an explicit expression in which ∆^(1) is the 1-loop coefficient in (5.14). The second solution q^− is more complicated and already involves twisted η-functions such as η^{e^{2iφ}}_1, but fortunately we only need q^+ to close the equations. The quantization condition (2.9) then gives a quadratic equation on ∆^(1), which fixes it as in (5.19). Thus, as expected from the numerical analysis, we find two separate states which become degenerate at zero coupling. For comparison, for the ground state (L = 0) we have an analogous explicit result. For the ground state (L → 0) formula (5.19) also gives the correct result, although only the minus sign is admissible. For the first several states we also computed ∆ to two loops, e.g. for L = 1. The two-loop results for L = 2, 3 are given in Appendix C. All these results are also in excellent agreement with QSC numerics. For completeness, the ground state anomalous dimension to two loops is known from [59,60]. Let us note that for the ground state the leading weak coupling solution q = e^{φu} immediately provides the 1-loop anomalous dimension via the quantization condition (2.9). However, for excited states the leading order q-function is not enough, because it vanishes at u = 0, leading to a singularity in the quantization condition (resolved at higher order in ĝ).

Table 1. The table shows the correspondence between the weak and strong coupling behaviour of the first few excited states. The notation ∆_n denotes the ordering of the states at strong coupling (in particular see (E.7)), while the notation ∆_{L,±} is related to the form of the one-loop correction, see (5.19). The pattern evident from the table continues for all excited states.
Comments on level crossing. Let us now discuss another curious feature of the spectrum, namely the presence of level crossings for ∆ > 0, which is evident from Fig. 13. Level crossings are of course forbidden in 1d quantum mechanics, but there is no contradiction, as our states only correspond to energies of the Schrödinger problem when ∆ < 0. As we increase the coupling, for any state ∆ eventually becomes negative and the levels get cleanly separated. At the same time the odd (even) levels do seem to repel from each other. At large coupling it is natural to label the states by n = 0, 1, 2, . . . starting from the ground state. However, the reshuffling of levels makes it a priori nontrivial to say what is the weak coupling behavior of a state with given n. First, we observe that ∆ at zero coupling is given by L = n/2 (rounded up). Moreover, we found a nice relationship between n and the signs plus or minus in (5.19) determining the 1-loop anomalous dimension: the levels with n = 0, 1, 2, . . . correspond to the sequence of signs given in (5.23). In order to understand this pattern it is helpful to consider the analytically solvable case φ = 0. We plot the states for this case on Fig. 15. The spectrum of the Schrödinger problem for φ = 0 is known exactly [46], see (5.24). Here only the values of n for which ∆_n < 0 actually correspond to bound states. One may try to analytically continue ∆_n in ĝ starting from large coupling, where it is negative, and arrive at weak coupling. However this would not be correct, as we know from (5.19) that half the levels have a positive 1-loop coefficient. At large coupling the levels are given by (5.24), so the dependence on the coupling switches from (5.24) to (5.25) (where m and n may be different) at the point where these two curves intersect. Moreover, at this point two levels meet, and they correspond to adjacent values of n of the same parity. In this way e.g. the levels with even n 'bounce' off each other, and the same is true for odd n. That explains the pattern of signs in (5.23).
In fact, as we see in Fig. 15, the behavior of ∆ can switch multiple times between the forms (5.24) and (5.25) before finally settling onto the expected curve (5.24) at large coupling. The derivative ∂∆/∂ĝ is discontinuous at these switching points. However, when φ becomes nonzero the picture smooths out and the level crossing at the intersection point is also avoided (though some other level crossings truly remain), as can be seen in Fig. 13.
Having ∆ as a piecewise-defined function made up of parts given by (5.24) and (5.25) is somewhat reminiscent of the spectrum of local twist-2 operators at zero coupling, where the anomalous dimension becomes a piecewise linear function of the spin (with different regions corresponding e.g. to the BFKL limit [6,63] or to usual perturbation theory, see footnote 21).
One may regard (5.25) as an analytic continuation of (5.24) around the branch point at ĝ = i/4. There are more branch points at complex values of ĝ where curves of the form (5.24) and (5.25) intersect, and we expect all the levels to be obtained from each other by analytic continuation in ĝ, even for generic φ. Again this situation is reminiscent of the twist operator spectrum.

Footnote 20: Clearly, (5.24) would instead give a negative 1-loop coefficient, with ∆ = n − 4ĝ² + . . . . Also note that for φ = 0 the 1-loop correction (5.19) becomes equal to ±4ĝ² and does not depend on n. Footnote 21: See e.g. [64] for a discussion and [65] for some finite coupling plots.
Excited states at weak coupling from Feynman diagrams
In this section we compute the diagrams contributing to the anomalous dimensions of the lowest excited states. First let us reproduce the one-loop correction to the ground state. For that case there is only one diagram, shown in Fig. 16. It can be computed exactly for any Λ, see (5.27), and at large Λ it diverges linearly as D_0 = 8ĝ² (φ/sin φ) Λ + O(Λ⁰). Recalling that Λ = log(x_12/ε), we read off the anomalous dimension γ_0 = −4ĝ² φ/sin φ, in agreement with (5.22). For the lowest excited states we have 4 diagrams (see Fig. 17). For example, the 4th diagram D_4 is given by the double integral (5.28) and corresponds to a particular differentiation of the four point function. Below we give the result for these diagrams at large Λ, keeping e^{−2Λ} terms. Combining these diagrams we can construct the operators described in section 5.1; in particular, here we consider operators obtained with the insertion of one scalar at the cusp (see footnote 22). We consider the combination given in (5.31) (see footnote 23), and from the diagrams computed above we find its large-Λ behaviour. Again identifying the cutoff with Λ = log(x_12/ε), we read off the one-loop dimension ∆_1 = 1 − 4ĝ².
Remarkably, it perfectly matches the analytic continuation to weak coupling of the first excited state energy, computed from the QSC above in (5.20). This state corresponds to the second line from below on Fig. 13. Another operator one can build is obtained from the combination of derivatives in (5.33). The r.h.s. there can be written in terms of the diagrams we have computed and involves γ_0 = −4ĝ² φ/sin φ, the one-loop scaling dimension for the ground state. The logarithmic divergence in (5.34) correctly reproduces the energy of the analytic continuation of the second excited state at one loop, ∆_2 = 1 + 4ĝ², matching the QSC result (5.19). This state corresponds to the third line from below in Fig. 13. The one-loop result agrees with the one obtained in [53,54] at θ = 0 (we expect that in the ladders limit this result should be the same).

Figure 16. One-loop diagram contributing to the ground state anomalous dimension.
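As a sanity check of the φ/sin φ behaviour quoted above for D_0 (the single exchange of Fig. 16), one can evaluate the corresponding double integral numerically. The sketch below is ours, not the paper's computation: it assumes the two rays meet with opening angle π − φ, so that points at distances s and t from the cusp are separated by s² + t² + 2 s t cos φ, and it drops the overall normalization (the 8ĝ² prefactor in the text).

```python
# Toy numerical check (ours) that the one-loop exchange integral between the
# two rays of a cusp grows linearly in Lambda = log(L/eps) with slope
# phi/sin(phi).  Overall normalization is omitted.
import numpy as np
from scipy.integrate import dblquad

def exchange(phi, eps, L):
    # propagator ~ 1/(x - y)^2 between points s, t on the two rays
    f = lambda t, s: 1.0 / (s**2 + t**2 + 2.0 * s * t * np.cos(phi))
    val, _ = dblquad(f, eps, L, lambda s: eps, lambda s: L)
    return val

phi, eps = 1.0, 1e-3
for L in (1.0, 10.0, 100.0):
    Lam = np.log(L / eps)
    print(L, exchange(phi, eps, L) / Lam, phi / np.sin(phi))
# The ratio integral/Lambda approaches phi/sin(phi) ~ 1.188 as Lambda grows.
```

The ratio converges to φ/sin φ up to 1/Λ corrections, consistent with the stated linear divergence of D_0.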
Simplifying limit
In this section we consider the limit when φ_1 + φ_2 → φ_3. Geometrically this limit, which lies at the boundary of the regime of parameters considered in the rest of the paper (4.1), describes the situation where the cusp point x_3 belongs to the circle defined by the extension of the arc (12). In this situation, the points A and B shown in Fig. 7 both coincide with the cusp point x_3. A special case of this limit is the situation when all angles are zero and the triangle reduces to a straight line. The main simplification comes from the most important part of the result, the bracket

⟨q_1 q_2 e^{−φ_3 u}⟩ = ∫ du/(2πi u) q_1 q_2 e^{−φ_3 u} , (6.1)

which now can be evaluated explicitly. When φ_1 + φ_2 → φ_3 we can deform the integration contour to infinity and notice that only the large u asymptotics of the integrand contributes. This is clear from the integral (6.2), where in our case β = φ_1 + φ_2 − φ_3 is small and positive. We see that the integral (6.2) allows us to convert the large u expansion into a small-β series. The large u expansion of the integrand is very easy to deduce from the Baxter equation (2.7): one just has to plug into the Baxter equation (2.7) the ansatz (6.3) to get a simple linear system for the coefficients k_i, which gives their parametric form (6.5) and allows us to compute the expansion explicitly. In this way we get the small-β expansion for the bracket in the numerator of the structure constant with insertions at 1 and 2. In principle, the expansion can be performed to an arbitrary order in β = φ_1 + φ_2 − φ_3. Similarly, the norm factors appearing in the denominator of the structure constants simplify when φ_i → 0 for one of the cusps i = 1 or i = 2. This limit describes the situation where the cusp angle disappears. As we reviewed in Sec. 5.3, at φ = 0 the Schrödinger equation becomes exactly solvable and the spectrum is explicitly known [46].

Footnote 22: The operators with more scalar insertions built this way may include derivatives acting on the scalars. Footnote 23: In the r.h.s. of (5.31) and (5.33) we omit an overall irrelevant prefactor.
The main ingredient for the computation of the norm is the integral (3.38), and it is clear that for small φ it simplifies by the very same mechanism we have just described. In particular, every term in the 1/u expansion of the integrand gives an integral of the kind (6.2), which allows us to organize the result in powers of φ. Naturally we should also take into account the scaling of the coefficients k_i appearing in (6.3) for φ ∼ 0. Notice that the expressions (6.5) are apparently singular at φ ∼ 0. However, a nice feature of this limit is that most of these divergences are cancelled systematically, due to the fact that the scaling dimension itself depends on φ in a nontrivial way. In particular, we found numerically that, for the QSC solution corresponding to the ground state, the coefficients k_n have the scaling (6.8) for φ → 0. This observation is quite powerful. Indeed, combined with the parametric form of the coefficients (6.5), the requirement that they scale as (6.8) fixes all terms in the expansion of ∆ for small φ (see footnote 24)! More precisely, we find that the scaling (6.8) corresponds to two solutions for ∆(φ): one is the ground state, for which we reproduce the results of [46] obtained using perturbation theory of the Schrödinger equation, namely the first two orders given in (6.9). The other solution describes one of the excited state trajectories (see footnote 25). It is straightforward to generate higher orders in φ with this method. The remaining infinitely many states can be described by allowing for a more general scaling of the coefficients k_m, see Appendix D for details and some results. Plugging in the scaling of the coefficients (6.8), for the solution corresponding to the ground state we find an expression which, combined with (6.7), gives a finite result for the OPE coefficient at φ_1 = φ_2 = φ_3 = 0, reported in (6.12), where we used (6.9) in the last step. This is in perfect agreement with the result of [50]. It is simple to obtain further orders in a small-angle expansion; the next-to-leading order in all angles is reported in Appendix D.

Footnote 24: A very similar observation was made in the context of the fishnet models at strong coupling in [3]. Footnote 25: As explained in Sec. 5.3, this trajectory strictly speaking is formed by patching together pieces of infinitely many levels, which are separate for finite φ, see Fig. 15.
Numerical evaluation
The expression for the 3-cusp correlator we found has the form of an integral ∫ du/(2πi u) q_{∆_1} q_{∆_2} e^{−uφ_3}, which is guaranteed to converge for large enough coupling, as the q-functions behave as e^{φu} u^∆ where ∆ decreases linearly with ĝ and reaches arbitrarily large negative values. However, we would like to be able to use these expressions at small coupling too, where the convergence of the integral is only guaranteed when both states are ground states; for the excited states the integral is formally not defined.
To define the integrals we introduce the following ζ-type regularization. We multiply the integrand by some negative power u^α, compute the integral for sufficiently large negative α, and then analytically continue it to α = 0. The key integral is (6.2), whose r.h.s. gives the analytic continuation to all values of α.
We see that for large negative α the expression decays factorially. This fact is crucial for our numerical evaluation of the correlation function. Once the value of the energy is known numerically, it is very easy to get an asymptotic expansion of the q-functions at large u to essentially any order. However, since the poles of the q-functions accumulate at infinity, this expansion is doomed to have zero radius of convergence. Nevertheless, if we expand the integrand at large u and then integrate each term of the expansion using (6.2), we enhance the convergence of this series by a factorially decaying factor, making it a very efficient tool for the numerical evaluation.
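The factorial improvement can be made concrete with the standard inverse-Mellin identity (1/2πi) ∫_{Re u = c > 0} e^{βu} u^{−s} du = β^{s−1}/Γ(s), valid for β > 0 and Re s > 0, which is of the same type as the integrals generated by (6.2): each term of a large-u expansion picks up a 1/Γ factor. The numerical sketch below is ours; the values of β, s, the contour offset c and the truncation of the contour are illustrative placeholders.

```python
# Minimal numerical check (ours) of the inverse-Mellin identity underlying
# the term-by-term integration described above:
#   (1/2 pi i) * Integral over Re(u)=c>0 of  exp(beta*u) * u^(-s) du
#     = beta^(s-1) / Gamma(s),   for beta > 0, Re(s) > 0.
import mpmath as mp

mp.mp.dps = 20
beta, s, c = mp.mpf('0.3'), mp.mpf(6), mp.mpf(1)

def integrand(t):
    # vertical contour u = c + i*t; the factor i from du = i dt cancels
    # against the i in 1/(2*pi*i), leaving 1/(2*pi) * Integral dt
    u = c + 1j * t
    return mp.exp(beta * u) * u ** (-s)

numeric = mp.re(mp.quad(integrand, [-400, -200, -50, 0, 50, 200, 400]) / (2 * mp.pi))
exact = beta ** (s - 1) / mp.gamma(s)
print(numeric, exact)  # the two values should agree closely
```

The 1/Γ(s) on the right-hand side is exactly the factorially decaying factor mentioned in the text: inserting it term by term turns a zero-radius asymptotic series into a rapidly convergent one.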
We applied this method to compute the correlation function for several excited states (see Fig. 18). The method allows one to compute the correlator even faster than the spectrum. We checked that it works very well for φ ∼ 1 giving 10 digits precision easily, but seems to diverge for φ = 1.5. To cross check our precision we also used the d∆/dg correlator (2.17), which is given by the same type of integrals.
Correlation functions at weak coupling
In this section we present some explicit results for the structure constants at weak coupling.
Our all-loop expression for the structure constants (1.3) is rather straightforward to evaluate perturbatively. First one should find the Q-function q at weak coupling, which can be done by iteratively solving the Baxter equation as discussed in section 5.3. The result at each order is given as a linear combination of twisted η-functions (see (5.15)) multiplied by exponentials e φu and rational functions of u, as in e.g. (5.18). Then the integrals appearing in the numerator and denominator of (1.3) can be easily done by closing the integration contour to encircle the poles of q(u) in the lower half-plane, giving an infinite sum of residues 26 : The residues come from poles of the η-functions, e.g.
To get the residue one may need more coefficients of this Laurent expansion, which are given by zeta values or polylogarithms. Finally one should take the infinite sum in (8.1) which again may give polylogs.
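As a toy illustration of how such infinite sums of residues produce polylogarithms, one can check numerically that a sum of the schematic form Σ_{n≥1} x^n/n^s reproduces Li_s(x). The sketch below is ours; the particular values of x and s are placeholders, not the ones arising from the η-function residues of the paper.

```python
# Toy check (ours): summing residue-like terms x^n / n^s over n reproduces
# the polylogarithm Li_s(x), of the kind appearing in the weak-coupling sums.
import mpmath as mp

x, s = mp.mpf('0.35'), 3
series = mp.nsum(lambda n: x ** n / n ** s, [1, mp.inf])
print(series, mp.polylog(s, x))  # both give the same number
```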
In this way we have computed the first 1-2 orders of the weak coupling expansions, as a demonstration (going to higher orders is in principle straightforward, limited by computer time and the need to simplify the resulting multiple polylogarithms). The integrals giving the norm of the q-functions are especially simple. Below, we assume that q(u) is normalized such that the leading coefficient in the large u expansion is 1, so q(u) ≃ u^∆ e^{φu}. For the ground state (L = 0) we find an expression involving the Euler-Mascheroni constant γ_E. For the excited states (L, ±), corresponding to the insertion of L scalars, we find analogous results; the L = 3 result is given in (C.5). Notice here that for the states 2^+ and 2^− the signs of ⟨q²⟩ are different at weak and strong coupling. Indeed, at strong coupling the relation with the wavefunctions (3.38) implies that ⟨q²⟩ is positive/negative for even/odd states, respectively. Since the even state is 2^− (see Table 1), in (8.5) we see explicitly that these signs can change at weak coupling.
The structure constants are more involved. For the HHL correlator without scalar insertions we obtain an explicit expression to 1-loop order. For the correlators with excited states both the numerator and the denominator in the expression (1.3) for C^{••o} vanish at weak coupling. Due to this, even the leading order in the expansion is nontrivial and requires using q(u) computed to ĝ² accuracy. For the correlators with two L = 1 states we find the result (8.8), while for L = 2 we get a nontrivial dependence on the angles. Here we have the plus sign for correlators corresponding to (L^+, L^+) or (L^−, L^−) states, and the minus sign for the (L^+, L^−) correlator.
Curiously, the HHL results do not have a smooth limit when one of the couplings goes to zero corresponding to the HLL case (this is related to a singularity in the 2-pt function normalization). This means we have to compute the HLL correlators separately. For H n LL with the excited state being ∆ 1,+ we get For the L = 2 states we find These two structure constants are purely imaginary due to the sign of q 2 at weak coupling. We also present the results for the L = 3 states in Appendix C.
The 4-point function and twisted OPE
In this section we examine more closely the expression for the 4-point function which we obtained in (5.1). We interpret it as an OPE expansion and cross-test it at weak coupling against our perturbative data for the correlation functions. We also present some conjectures on the generalization of this OPE expansion and its applications to the computation of more general correlators.
The 4-cusp correlation function
Our starting point is the OPE-like formula (5.1) for the 4-cusp correlator. It is based on the 2-pt function of cusps with angle φ_0, but the four cutoffs Λ_1, . . . , Λ_4 give it the structure of a 4-point function with four cusp angles φ_a determined by the Λ's, as shown on Fig. 19. To make the analogy more clear we notice that we can get rid of the wavefunctions in (5.1) entirely and rewrite it in terms of the structure constants as in (9.1), while the angles φ_1, . . . , φ_4 at the cusps y_a (see Fig. 19) can be found from (9.2), where we denoted φ_ab = φ_a − φ_b; the factor L_abc is, as before, defined explicitly. We can view equation (9.1) as defining the 4-cusp correlator in terms of the structure constants, opening an easy way for computing this quantity in various regimes, including numerically at finite coupling. This equation suggests a natural interpretation in terms of an OPE expansion for pairs of cusps. To understand this point, let us first investigate the space-time dependence of the 4pt function (9.1), which comes through the factors

(e^{−2Λ} L_012 L_034)^{∆_n} . (9.5)

To decode the dependence of (9.5) on the cusp positions, it is convenient to introduce six complex parameters: four space-time positions y_i, i = 1, . . . , 4, defined in terms of the parameterization ζ_± of (3.4), together with the intersection points of the two arcs x_1, x_2 (see Fig. 19), which we denote as y_0 ≡ x_1, y_5 ≡ x_2. These six points are not all independent, as we can express y_5 in terms of the other five complex coordinates through the solution of two reality conditions involving y_53, y_10, y_31, y_50 and their complex conjugates, where y_ab = y_a − y_b. From these two relations we can obtain y_5 as a rational function of y_i, i = 0, . . . , 4, and their complex conjugates (see footnote 30). Eliminating the parameters Λ_i in favour of the y_i coordinates, we find that the term (9.5) appearing in the 4pt function can be written as in (9.9). This relation is illustrated in Fig. 20, and it strongly resembles the usual OPE decomposition of a 4pt function in terms of 3pt correlators. In the next subsection we provide an interpretation of this relation on the operator level.
The cusp OPE
Let us now rederive the decomposition (9.9) of the 4pt function from first principles, using logic inspired by the usual OPE. The idea, illustrated in Fig. 21, is to express the cusps at y_1, y_2 as a combination of cusp operators inserted at y_0, as in (9.10), where C^{y_1,y_2}_n are some coefficients, W^y_x are the Wilson line operators defined in (3.2), and O_n represent projector operators on the n-th excitation of the cusp at y_0. To make sense of the r.h.s. of (9.10), we need to specify a regularization scheme; we assume that the regularization defined in the rest of the paper is used, and the projectors O_n are the ones defined explicitly in section 5.1. Notice that the expansion corresponds to a change in the limit of integration of the Wilson lines. Derivatives of the Wilson line with respect to its endpoints produce the scalar insertions described in Sec. 5.1. For this reason, at least in the ladder limit considered here, we expect that only these excitations are involved in the OPE. To determine the coefficients C^{y_1,y_2}_n, we proceed in the standard logic of the OPE and place equation (9.10) inside an expectation value. Considering the limit where y_3, y_4 converge towards y_5 (with the usual point-splitting regulator ε), and projecting on the n-th state, we obtain a relation in which the configuration reduces to an HLL 3pt function, which we related to the structure constant as in Sec. 5.2. Here, the constant N_{∆_n,ε} is the square root of the normalization of the 2pt function, explicitly defined in (5.9). On the other hand, from the r.h.s. of (9.10) we obtain (see Fig. 22) an expression involving |y_05|^{2∆_n}, see (9.12); therefore we find the coefficients explicitly. Taking the expectation value of (9.10) now fixes the 4pt function precisely to the form (9.9).

Footnote 30: We have also found nice explicit parameterizations of the spacetime dependence in terms of cross-ratios of these points, and we present them in Appendix G.1.
In the next subsection we will discuss how to apply similar logic to higher-point correlators.
OPE expansion of more general correlators
The OPE approach we presented above can also be applied to more general correlation functions. As one of the possible generalizations (see footnote 31), let us consider the four point function shown in Figure 23. For simplicity of notation, we assume that the same scalar polarization n is chosen for the Wilson lines denoted as C and B, while on lines A and D we have a different polarization vector m. This defines a configuration where the two cusps at y_1 and y_4 are not protected, while the remaining two are. Explicitly, we are considering the expectation value in which we divide by the usual 2pt function normalization factors N_1, N_4 for the unprotected cusps (defined explicitly in (5.9)) in order to get a finite result (see footnote 32).

Footnote 31: One could also consider correlators with more than four protected cusps. In particular, the 4pt function considered in this section can naturally be viewed as a limit of the correlator of six protected cusps, obtained by introducing a finite cutoff around y_1 and y_4. This six point function can also be decomposed using the OPE. Footnote 32: As usual we assume the point-splitting ε-regularization close to the cusps.
Our conjecture for this quantity is based on the assumption that we can use the same type of OPE expansion as in the previous section. This allows us to replace each pair of consecutive cusps with a sum over excitations of a single cusp, whose position is defined by the geometry. For instance, the two cusps at y 3 and y 4 , which are defined by the consecutive sides A B C of the Wilson loop, are traded for a sum over excitations of a single cusp at the point D, defined by the extension of the lines A and C.
As expected, the OPE expansion gives rise to nontrivial crossing equations. Let us see this explicitly here. Taking into account the space-time dependence as in the previous section, from the contraction of y_3 and y_4 we obtain (see Fig. 24 on the right) the expansion (9.15), whose space-time dependence involves the factors |y_B4|^{∆_n+∆_0}, |y_B3|^{∆_n−∆_0}, |y_34|^{∆_0−∆_n}, |y_BD|^{2∆_n}, and which now involves HHL structure constants (see footnote 33). Performing the OPE decomposition in the crossed channel, which corresponds to contracting y_1 and y_3 (see Fig. 24 on the left), yields a different expansion, (9.16). Notice that we left the dependence on all angles implicit; however, we point out that the sums in (9.15) and (9.16) are over different spectra, characterized by the same coupling but different cusp angles. Proving the equivalence between (9.15) and (9.16) would be an important test of these expressions, and more generally of the OPE expansion on which they are based (see footnote 34). We leave this nontrivial task for the future. Crossing relations such as the one presented above could perhaps also be used to gain information on the HHH structure constants, which would appear in one of the two channels in the OPE expansion of correlators of the form G^{••••}_{1234}.

Footnote 33: Here we assume that the excited states studied in the rest of this paper constitute a full enough basis to make this decomposition possible. This point requires further investigation. If that is not the case, one will have to add a sum over some additional states as well.
Checks at weak coupling
In this section, we present some tests of the 4pt OPE expansion (9.9) at weak coupling. We will show that the perturbative expansion of the 4pt function reproduces our results for the HLL structure constants. In Appendix G.2 we also verify at 1 loop that, when two of the four points collide, the 4pt function reduces precisely to a 3pt HLL correlator, including the expected spacetime dependence. This provides an important test of our results for the structure constants and also of the OPE expression for the 4pt function. At one loop it is very easy to compute the 4pt function, and we find the result (9.17). Rewriting it with the notation introduced there (note the difference with (9.3)) and expanding at large Λ, we obtain an expansion whose first coefficient is rather involved, while the rest are simpler, e.g.

g_2 = −4ĝ² cosh(Λ_12 + Λ_43) cos(φ_0) ,
g_3 = (8ĝ²/9) cosh(3(Λ_12 + Λ_43)/2) (2 cos(2φ_0) + 1) .

Footnote 34: A somewhat related OPE approach was discussed in [50] for the φ = 0 case. It would be interesting to clarify possible connections with the OPE that we discuss here, which seems to be not a completely trivial task. We thank S. Komatsu for discussions of this point.
Rewriting this in terms of the angles using (9.2), we obtain an expression where we used that there are only two states, n = 1, 2, which converge to ∆ = 1 at weak coupling. Furthermore, we can identify precisely n = 1 and n = 2 by using the fact that the n = 1 state is associated with an odd state and thus should give an odd function in φ_12. This results in (9.24), in complete agreement with our perturbative results (8.11) and (8.10)! In the same way we find the result for the L = 2 states, in agreement with (8.13) and (8.12). We also verified the L = 3 states and reproduced expressions (C.6), (C.7) given in Appendix C.
We also notice that the term h_0 is indeed equal to 2∆^(1)_0, i.e. the ground state energy at 1 loop. Finally, the expression g_0 can be compared with the HLL structure constant of three ground states, which at weak coupling reads

(C^{•oo})_{L=0} = 1 + ĝ² F_123 + . . . , (9.27)

where F_123 is given explicitly by the lengthy formula (8.7). From the OPE (9.1) we expect a specific relation between g_0 and this structure constant, and indeed our result (9.21) for g_0 precisely matches this complicated expression! This is a nontrivial check of the OPE as well as the HLL structure constant at 1 loop.
Conclusions
Our main result is the all-loop computation of the expectation value of a Wilson line with three cusps, with a particular class of insertions at the cusps, in the ladders limit. We demonstrated that in terms of the q-functions it takes a very simple form, reminiscent of the SoV scalar product. The key ingredient in the construction is the bracket ⟨·⟩, which allows us to write the result in the very compact form (1.3). We also found a similar representation for the diagonal correlator of two cusps and the Lagrangian (1.5). This gives a clear indication that the Quantum Spectral Curve and the SoV approach should be able to provide an all-loop description of 3-point correlators.
In order to generalise our results one could consider correlators with more complicated insertions, which should help to reveal more generally the structure of the SoV-type scalar product. We expect that in this case the bracket ⟨·⟩ will involve products of several Q-functions, as in (10.1), for some universal measure function µ which should not depend on the states, but could be a non-trivial function of the coupling (see footnote 35). It would also be important to extend the results obtained in this paper to the more general HHH configuration, where all three effective couplings are nonzero. The form of our result (1.3), where the BPS cusp always appears with a different sign for the rapidity, suggests that in the most general case one of the Q-functions may need to be treated on a different footing than the other two. Therefore, the generalization to the HHH case may be nontrivial and reveal new important elements. Going away from the ladders limit (see e.g. [62,66]) could also give some hints about the measure in the complete N = 4 SYM theory and eventually lead to the solution of the planar theory. Potentially a simpler problem is the fishnet theory [1,3,4], where some 3- and 4-point correlators were found explicitly and have a very similar form to the φ → 0 limit of our correlator. As they involve only conventional local operators, this is another natural setting for further developing our approach. It would also be interesting to consider the cusp in ABJM theory, for which the ladders limit was recently elucidated in [67]. It would also be useful to utilize the perturbative data from other approaches [68][69][70][71][72][73][74] in order to guess the measure factor.

Footnote 35: In fact L itself may be nontrivial to define at finite coupling, as states with different values of the charges can be linked by analytic continuation.
Let us mention that our result incorporates all finite size corrections (in particular, the 2-point functions are given exclusively by wrapping contributions). These corrections are rather nontrivial to deal with in the hexagon approach [71] to the computation of correlators (see also [73][74][75][76]). The diagonal correlators, which we studied numerically in this paper at any value of the coupling, are particularly hard in the hexagon formulation, which is known to be incomplete in this situation. Nevertheless, it would be interesting to draw parallels between the two approaches. The hexagon techniques could be especially helpful in generalising our results to longer states, where the wrapping corrections are suppressed by powers of the 't Hooft coupling.
Another possible limit which would be interesting to consider is the near-BPS one. This could be either the small spin limit of twist-2 local operators or the φ → θ limit of the cusps. In both cases the analytic solutions of the QSC are known explicitly [33,77] (see also [78]), which could be helpful in fixing the measure factor. In particular, at the leading order, the Q-functions q(u) describing the excited states of a cusp are orthogonal on [−2g, 2g] with the measure µ(u) = sinh(2πu) [33,78,79]. It is not yet clear how this measure is related to our result, but there are some promising signs which we discuss in Appendix F. Let us point out that the naive guess that this is the measure we need is not consistent in an obvious way with the structure expected from SoV (10.1), where we expect multiple interactions for the insertions of such scalars. It would be really interesting to compare with localisation methods, which are applicable in the near-BPS limit. Some preliminary results were reported recently [80] (see also [81] for partial results for the spectrum). Let us also mention that often the measure can be bootstrapped from the orthogonality requirement, see [82] for a higher-loop result in the sl(2) sector. One could try this strategy too in order to find the measure in N = 4 SYM.
As another new result, we understood the meaning of the bound states of the Schrödinger problem resulting from Bethe-Salpeter resummation of ladder diagrams. They correspond to insertion of scalar operators of the same type as those on the Wilson lines 36 , see [83] for a string theory interpretation. From the point of view of the Bethe-Salpeter equation the excited states can be interpreted as resonances -poles of the resolvent on the non-physical sheet, which can be reached by analytic continuation under the branch cut of the continuum. As such they are hard to study analytically or numerically. In the QSC approach there is no continuum spectrum and the bound states can be studied on completely equal footing with the vacuum state. Moreover they can be easily tracked away from the ladders limit and should still correspond to scalar insertions. In addition, we showed that our results for the 3-cusp correlators immediately generalize to the case with these scalar insertions.
Our result opens the way to efficiently study the cusp with scalar insertions at arbitrary values of θ using the powerful QSC methods, both analytically and numerically. We already found the first few orders in the weak and strong coupling expansions of the energies of excited states in the ladders limit. The result at 1 loop for the first excited state matches the known 1-loop prediction [53] (assuming it is not changed in the ladders limit).
It would also be important to further investigate the OPE picture we presented in section 9. In order to reveal more structure for higher point correlators it would be very useful to find a compact way to perform the spectral sums appearing in the OPE. Recent results of [85] for the SYK model suggest that this could be feasible, at least in the ladder limit. One could also explore the applicability of modern conformal bootstrap techniques [86,87] for the OPE expansion we considered. Finally, the structure of our OPE expansion is very reminiscent of the one for null polygonal Wilson loops [88], and it could be useful to explore this analogy.
A Technical details on the QSC
Here we provide details concerning the formulation of the QSC for the cusp anomalous dimension at generic values of the coupling g and the angles φ, θ [34].
The P-functions of the QSC can be written in a compact form in terms of two functions f(u) and g(u) which have power-like asymptotics at large u, with f ∼ 1/u and g ∼ u. The prefactor in this normalization is given explicitly below. The functions f(u) and g(u) are regular outside of the cut [−2g, 2g], which can be resolved using the Zhukovsky variable x(u), where we choose the solution with |x| > 1. In terms of x these functions simply become power series. The coefficients A_n and B_n encode nontrivial information about the AdS conserved charges, including ∆; explicit expressions for the first few of them are given below. The fourth-order Baxter-type equation (2.1) on Q_i is written in terms of several determinants involving the P-functions. They are given by (A.11).
A.1 Derivation of the quantization condition
Let us explain the derivation of (2.9) in detail. For consistency with standard QSC notation [34], we denote in this section the two solutions of the Baxter equation (2.7) as q_1 and q_4, which in the notation of section 2.1 corresponds to the identification (A.12), with large u asymptotics q_1 ∼ e^{uφ} u^∆, q_4 ∼ e^{−uφ} u^{−∆}. First we notice that the Baxter equation (2.7) is invariant under complex conjugation, so q̄_1 and q̄_4 are linear combinations of the two solutions q_1 and q_4 with i-periodic coefficients that we denote Ω^j_i, see (A.13) and (A.14). Our strategy is to constrain as much as possible the form of the Ω's and then fix them completely using the gluing conditions from the QSC. The analytic properties of the q's already impose strong restrictions on Ω^j_i. Both q_1(u) and q_4(u) are analytic in the upper half-plane, but the Baxter equation implies that they can have second order poles at u = −in, n = 1, 2, . . . in the lower half-plane. Accordingly, q̄_1, q̄_4 will have second order poles in the upper half plane, which can only originate from the Ω's in the r.h.s. of (A.13) and (A.14). Therefore these Ω's can have at most 2nd order poles. Their rate of growth at u → +∞ and u → −∞ is moreover constrained by the known asymptotics of q_1, q_4. To fix the normalization we impose the asymptotics (A.15) for u → +∞, where the constant prefactor for q_4 is determined by the canonical normalisation of Q-functions (see footnote 37). Assuming φ > 0, we see that q_1 is the dominant solution at u → +∞ and therefore e.g. Ω^1_4 must vanish for large positive u (though not necessarily for u → −∞). By arguments of this type we can write all the components of Ω in terms of just a few parameters, as in (A.16). Moreover, we can use the trick suggested in [3] to express these parameters a_n, b_n in terms of the q's. As in [55] we will focus on Ω^4_1, which, as we see from (A.16), is given by (A.21). Nicely, the denominator of (A.21) is precisely the Wronskian of the Baxter equation, which is a constant we denote by C_W. Its precise value is not important here but can be found from the asymptotics (A.15).

Footnote 37: At finite angles we should have q_1 q_4 ≃ i (cos θ − cos φ)²/(2∆ sin² φ) at large u, see [34].
We expect that Ω^4_1 has a singularity at u = 0, which in this expression can only come from q_1(u + i). Using the fact that q̄_1 satisfies the original Baxter equation (2.7), we find a relation which, plugged into (A.23), gives (A.25). At the same time, expanding the expression for Ω^4_1 from (A.18), we find (A.26). Comparing (A.25) with (A.26) we can express a_3 and a_4 in terms of q_1(0) and q'_1(0), obtaining in particular (A.27). So far we have not used any relations from the QSC involving analytic continuation around the branch points. Now we will apply one such relation, which was derived in [55] using the gluing condition for q̃_1 given in (2.3). In fact, we will only use that, as a consequence of this relation, Ω^4_1 must be even. Combining the first relation with (A.27) we get precisely the quantization condition (2.9) presented above.
A.2 Quantization condition from asymptotics of the Ω functions
There is also an alternative way to arrive at the quantization condition, which though just an observation at the moment is very instructive for the discussion that will follow in section 3.
In this alternative approach we start from the same Baxter equation (2.7) but never use any relations from the QSC involving tilde, i.e. analytic continuation around the branch points such as in (2.3). Instead we observed that it is sufficient to demand that Ω 4 1 vanishes at u → +∞. This immediately fixes a 3 = a 4 and thus leads via (A.27) (which as we showed above follows from the Baxter equation) to the same quantization condition (2.9). The importance of this observation will become apparent in section 3, where we will see that the vanishing asymptotics of Ω 4 1 ensures finiteness of various scalar products that play a key role in our construction.
Curiously, in the fishnet theory [1,4] it is also possible to derive the quantization condition solely from asymptotics of Ω as was recently found in [84]. It would be interesting to better understand the underlying reason behind this.
B Quantization condition and square-integrability of the wave function

In Sec. 3.3, we introduced an explicit map between the Q-function and a solution of the stationary Schrödinger equation, see (B.1). As we showed there, the fact that q(u) satisfies the Baxter equation implies that F(z) solves the Schrödinger equation. This statement does not require that the quantization conditions are satisfied, and is valid for any value of the parameter ∆ (see footnote 39). In this Appendix we show that, for ∆ < 0, the quantization conditions are equivalent to the square-integrability of F(z). In particular, notice that, since the potential in the Schrödinger equation vanishes at infinity, any solution to (3.27) can have one of the two behaviours ∼ e^{±∆z/2} at large z; therefore it can either decay or grow exponentially. We will show that F(z) is always decaying at z → +∞, while it is decaying at z → −∞ if and only if q(u) satisfies the quantization conditions. We will use the same conventions as in Appendix A and denote the two independent solutions of the Baxter equation as q_1 and q_4, see (A.12), where q(u) = q_1(u).
They are characterized by the following asymptotics in the upper half plane In preparation for the following argument, we will need to determine the asymptotics of q 1 (u) also along the part of the integration contour in (B.1) which extends in the lower half plane.
To determine the asymptotics along this line, we reflect it to the upper half plane using complex conjugation, and then use the exact relation (A.16) between q and q̄. This leads to (B.3), where the constants a_3, a_4 are defined in (A.18). Notice that in (B.3) we dropped the terms proportional to Ω^1_1, since they give a subdominant contribution suppressed as ∼ u^∆ (in this appendix we assume ∆ < 0 throughout; see footnote 39). Equation (B.3) shows that q(u) grows for large |Im(u)| in the lower half plane. Despite this fact, notice that the integral (3.28) still converges as long as −1 < ∆ < 0, since, for any finite z, the integrand is oscillatory. Let us now come to the core of the argument. To determine the behaviour of F(z) for z → +∞, we study a limit in which the last term in (B.5) is zero, due to the fact that the integrand is suppressed at least as ∼ u^{∆−1} at large u. Therefore, we find that F(z) is always decaying for z → ∞.

Footnote 39: Notice that, strictly speaking, the integral transform in (3.28) requires −1 < ∆ < 0 for convergence. We restrict consideration to this range of parameters, and then extend the result by analytic continuation.
To analyse the situation at z ∼ −∞ we now look at the analogous limit. Notice that by definition this limit is finite if and only if F(z) is decaying at z ∼ −∞. Accordingly, we find that, for a generic value of ∆, the last integral in (B.7) is not convergent.
To understand why, notice that, as a consequence of (B.3), the integrand in (B.7) behaves as in (B.8) along the part of the contour extending in the lower half plane. Therefore, the integral is clearly divergent. However, the quantization conditions coming from the QSC correspond precisely to a_3 = a_4 (see (A.27))! When they are satisfied, the most singular part of the asymptotics (B.8) is cancelled and the integral (B.7) is convergent, which implies that F(z) is a square-integrable function. Therefore we have just shown that the (negative) scaling dimensions described by the QSC are associated with the spectrum of bound states of the Schrödinger equation (3.27). While we derived this relation for ∆ in the specified range −1 < ∆ < 0, the correspondence can be extended beyond this regime by analytic continuation in the coupling constant. This analytic continuation is such that, for small enough coupling, ∆_n becomes positive for almost all levels except for the ground state. In this regime, the scaling dimensions no longer correspond to bound states of the Schrödinger potential problem, but can be understood as resonances.
C Perturbative results
Here we list our weak coupling results supplementing the main text.
As explained in Sec. 6, one can also obtain a systematic expansion of the structure constants in the limit where φ_1 ∼ φ_2 ∼ φ_3 ∼ 0. In the case where the ground state is inserted at every cusp we obtain, up to next-to-leading order, an expression in which ψ^(0)(z) = Γ'(z)/Γ(z) and C^{•••}_123|_{φ_1=φ_2=φ_3=0} is given in (6.12). For the norm of excited states at small φ, in proximity of the trajectories (D.1), we obtain the expressions below. In the case of excited states, the small-angles limit for the numerator of the structure constants depends on the relative scaling of the three angles. For example, for the HHL structure constants involving two n = 1 trajectories, assuming φ_3 = 0 and φ_1 = φ_2 = φ ∼ 0 small, we obtain one result, while in the scaling φ_2 ≪ φ_1 ∼ φ_3 ∼ 0 we obtain another.

Footnote 41: Except for the ground state ∆_0, each level ∆_{n,φ=0} corresponds to a patchwork of different excited state levels, which split at finite φ, see Sec. 5.3.
(E.5)
For n ∼ ĝ we get rather complicated elliptic integrals. However, for n ∼ 1 the integral (E.5) can be computed easily by picking up poles, and equation (E.5) gives the quantization condition for ∆_n: ∆_n cos(φ/2) = −2ĝ + n + 1/2 + O(1/ĝ), where s = sin(φ/2) enters the subleading terms. Re-expanding these relations at small φ we reproduce the large-ĝ expansion of (D.4). It would be interesting to compute the strong coupling asymptotics of the correlation functions using the WKB expansion presented in this appendix.
F The near-BPS limit
In this section we show that a formula very similar to the one we presented in (1.5) in the ladders limit captures ∂∆/∂φ in a completely different regime, namely the near-BPS limit φ → θ. We will consider the generalized cusp dimension corresponding to L scalars inserted at the cusp, which should however be independent of those coupling to the lines (see footnote 42). The QSC solution in this case was presented in [33,34], where the details can be found. The Q-function which we will use is q = Q_1/√u, which to leading order in φ − θ is given by (up to irrelevant normalization)

q_L = P_L(x) e^{gφ(x−1/x)} , (F.1)
and the twisted Bessel functions are defined as follows. Notice the useful property

P_L(x) = P_L(−1/x) . (F.5)

Footnote 42: This observable is simpler than the one with insertions discussed in section 5 and corresponds, from that perspective, to the ground state, not an excited one.
The key point is that the P_L(x) come with a natural scalar product with respect to which they are orthogonal.⁴³ For Q-functions this translates into orthogonality with respect to the scalar product
⟨q_a q_b⟩_guess ≡ 2 sin(β/2) α ∮ dx sinh(2πu) q_a q_b ,   (F.6)
where q_a q_b ∼ e^{βu} u^α and the integral goes along the unit circle (which in the u variable corresponds to going around the cut [−2g, 2g]⁴⁴), i.e. we have the orthogonality relation (F.7). The prefactor in the scalar product is defined in the same way as for the bracket (1.4) used in the main text. The full meaning of this scalar product and its precise relation to the bracket we used in the ladders limit are not yet completely clear. However, it allows us to write ∂∆/∂φ in almost exactly the same way as in the ladders limit, where according to (1.5) it corresponds to an insertion of u in the integral:
−2 ∂(sin φ ∆)/∂φ = ⟨q² u⟩ / ⟨q²⟩   (ladders limit)   (F.8)
Remarkably, we find that in the near-BPS case this derivative again corresponds to an insertion of u. That is,
2 ∂(sin φ ∆)/∂φ |_{φ=θ} = ⟨q² u⟩_guess / ⟨q²⟩_guess   (near-BPS limit)   (F.9)
so the only difference with the ladders limit is the overall sign (whose interpretation remains to be understood). Concretely, in the near-BPS limit we have the expansion (F.10), so that
∂∆/∂φ |_{φ=θ} = ∆^(1)(g, φ)   (F.11)
and our formula (F.9) precisely reproduces the complicated all-loop result from [33], in which M_N^(a,b) denotes the matrix M_N with row a and column b deleted. Regardless, it is rather nontrivial that (F.9) provides the correct non-perturbative result. This may be viewed as a hint towards the existence of an underlying structure capturing the exact result at all values of the parameters. As an important testing ground, it would be very interesting to see whether replacing ⟨ ⟩ → ⟨ ⟩_guess in our main result (1.3) yields the structure constants in the near-BPS limit, which should also be accessible with localization [80].

⁴³ It is also natural from their interpretation in matrix-model terms, see [79] and [78].
⁴⁴ Notice that this integration contour is consistent with the vertical one used in the main text of the paper. Indeed, our vertical integration contour can be bent and closed to the left; in general, we would need to take into account an infinite sequence of cuts of the Q-functions at [−2g, 2g] − i n, but in the near-BPS limit only the cut at [−2g, 2g] remains.

G More details on the space-time dependence of 4pt functions

Here we give a few more details on the space-time dependence of the basic 4pt function (3.1) (given in OPE terms in (9.1)). First we discuss an alternative parameterization of the space-time dependence in terms of the angles and cross-ratios. Then we show that when two points collide, the space-time dependence matches the one of a 3pt correlator, as expected.
G.1 Parameterization of the four points
Let us first show how to eliminate the two coordinates y_0, y_5, defined in Sec. 9.1, in favour of the angles φ, φ_12 ≡ φ_1 − φ_2, φ_43 ≡ φ_4 − φ_3 (defined by (9.2)).⁴⁵ We will see that the result depends only on the cross-ratio r_1234 of the four insertion points, together with the angles φ, φ_43, φ_21. Translating between the Λ parametrization and the space-time coordinates, we find the relations (G.1)-(G.2), where Λ = (Λ_1 + Λ_2 + Λ_3 + Λ_4)/4 and
r_abcd = |y_ab y_cd| / |y_ac y_bd| ,   (G.3)
and we recall the definitions of L_abc and K_abc. Solving (G.2) for e^{−2Λ} and plugging it back into the four-point function, we see that the terms (9.5) appearing in the OPE expansion of the correlator are simple algebraic functions of the cross-ratio r_1234. Finally, let us mention that the factors K_0ab can be interpreted as particular cross-ratios involving the points y_0 and y_5. In fact, from (9.2), converting from the Λ_i's to space-time points we find
e^{−iφ_43} = e^{iφ} + 2i sin φ (y_40 y_35)/(y_34 y_05) = e^{−iφ} + 2i sin φ (y_45 y_30)/(y_34 y_05) ,   (G.5)
e^{−iφ_12} = e^{iφ} + 2i sin φ (y_20 y_15)/(y_12 y_05) = e^{−iφ} + 2i sin φ (y_25 y_10)/(y_12 y_05) ,   (G.6)
from which we see that
r_3045 = sin ½(φ + φ_3 − φ_4) / sin φ = K_034 ,   r_3540 = sin ½(φ + φ_4 − φ_3) / sin φ = K_043 .   (G.7)

⁴⁵ Notice that the angles can be seen as parameters specifying the configuration, i.e. the four operators corresponding to the four points. In particular the structure constants depend on these angles.
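As a small numerical illustration of the cross-ratio definition (G.3), the sketch below evaluates r_abcd for arbitrary sample points; the coordinates are invented purely for illustration and have nothing to do with the specific cusp configurations discussed here.

```python
import numpy as np

def cross_ratio(y, a, b, c, d):
    """r_abcd = |y_ab y_cd| / |y_ac y_bd|, with y_ij = y_i - y_j."""
    dist = lambda i, j: np.linalg.norm(y[i] - y[j])
    return dist(a, b) * dist(c, d) / (dist(a, c) * dist(b, d))

rng = np.random.default_rng(0)
y = {i: rng.normal(size=4) for i in range(6)}   # points y_0 ... y_5, made up for the example
print("r_1234 =", cross_ratio(y, 1, 2, 3, 4))
print("r_3045 =", cross_ratio(y, 3, 0, 4, 5))
```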
G.2 HLL correlator from the 4-point function
Let us verify explicitly that taking the limit of two coincident points in our 4-point function reproduces the correct space-time dependence of the 3-point HLL correlator. The general proof of this was given in Section 4; here we check it at 1 loop (testing also the 1-loop HLL structure constant). We consider the limit
Λ_1 = Λ_2 ≡ Λ → ∞   (G.9)
with Λ_3, Λ_4 finite. Then the left ends of the two arcs in Fig. 19 approach the first cusp point. The four arc endpoints correspond to y_1, . . . , y_4, and for large Λ the two left endpoints are at equal small distance from the cusp, so that Λ is related to this distance as in (3.23). The perturbative expression for the 4-pt function (9.18) reduces in this limit to (G.12). It is far from obvious that the dependence on the three endpoint positions here (two are parameterized by Λ_3, Λ_4 while the last one is x_1) is the one expected for a CFT 3-pt correlator. In the notation of Fig. 19 this dependence should take the form (G.13), corresponding to an HLL correlator of three cusps without insertions, with ∆_0 being the ground-state anomalous dimension. In order to compare this expression with (G.12), we plug into (G.13) the coordinates y_3 = ζ_+(Λ_3), y_4 = ζ_−(−Λ_4) using the parameterization (3.4), and also use that, by simple geometry, the angles φ_3, φ_4 are related to Λ_3, Λ_4 by
e^{Λ_4 − Λ_3} = sin((φ − φ_4 + φ_3)/2) / sin((φ + φ_4 − φ_3)/2) .   (G.14)
Then, taking the ratio of (G.12) and (G.13), we find after some manipulations
G/G_CFT = 1 + ĝ² [ csc φ ( 2φ log( 2 sin²φ / (cos δφ − cos φ) ) + i Li_2(e^{−iφ}) + … ) + ∆^(1)_0 csc(…) log(…) + log(2 cos(φ/2)) + F_123(φ, π/2, π/2) ] ,   (G.15)
where ∆^(1)_0 = 4φ csc(φ) is the 1-loop ground-state dimension, δφ = φ_4 − φ_3, and F_123 is the 1-loop HLL structure constant given as a function of the three angles in (8.7). Remarkably, all space-time dependence (involving Λ_3, Λ_4) has disappeared in the ratio G/G_CFT! What remains in (G.15) is a function only of the regulator and the angles φ, φ_3 and φ_4 which characterise the three cusp operators whose correlator we are computing. Furthermore, the term in square brackets in (G.15) precisely matches the 2pt normalization factor from (3.26) at 1 loop. If we divide by this factor in order to obtain the normalized correlator, what is left is precisely the HLL structure constant for three ground states, C^•oo = 1 + ĝ² F_123(φ, φ_4, φ_3), matching the 1-loop expansion (8.7) of our exact result.
Thus we have verified at 1 loop that, in the limit when two points collide, we recover the 3pt correlator from the 4pt function, including the correct normalization and space-time dependence. This is a direct 1-loop check of our all-loop result for the HLL correlator.
"Physics"
] |
Model-Independent Determination of the Cosmic Growth Factor
Since the discovery of the accelerated cosmic expansion, one of the most important tasks in observational cosmology is to determine the nature of the dark energy. We should build our understanding on a minimum of assumptions in order to avoid biases from assumed cosmological models. The two most important functions describing the evolution of the universe and its structures are the expansion function E(a) and the linear growth factor D_+(a). The expansion function has been determined in previous papers in a model-independent way using distance moduli to type-Ia supernovae and assuming only a metric theory of gravity, spatial isotropy and homogeneity. Here, we extend this analysis in three ways: (1) We extend the data sample by combining the Pantheon measurements of type-Ia supernovae with measurements of baryonic acoustic oscillations; (2) we substantially simplify and generalise our method for reconstructing the expansion function; and (3) we use the reconstructed expansion function to determine the linear growth factor of cosmic structures, equally independent of specific assumptions on an underlying cosmological model other than the usual spatial symmetries. We show that the result is quite insensitive to the initial conditions for solving the growth equation, leaving the present-day matter-density parameter Ω_m0 as the only relevant parameter for an otherwise purely empirical and accurate determination of the growth factor.
The expansion function of the universe and the linear growth factor of cosmic structures are the two most fundamental functions describing the evolution of the universe and its structures. They are indirectly accessible to astronomical observations, such as luminosity-distance measurements of type-Ia supernovae (SN Ia). Combining both functions allows us to distinguish between different cosmological models.
The accelerated expansion of the Universe was established nearly twenty years ago based on SN Ia distance measurements [1,2]. In the framework of the cosmological standard model, this acceleration is explained by the cosmological constant or a dynamical dark-energy component currently dominating the energy content of the universe [3]. The nature of the dark energy, however, is largely unknown. So far, all attempts to derive it from fundamental theory have led to values which are way too small to explain the cosmic acceleration. Phenomenological explanations are typically based on a dark-energy equation of state, possibly varying with time. They bypass fine-tuning problems, but lack fundamental justification. Determining the nature of the dark energy is among the most important tasks of contemporary cosmology. The two functions, the cosmic expansion function and the linear growth factor of cosmic structures, are the most important ingredients for investigating the nature of the dark energy.
We are here proposing a method to constrain the linear growth factor of cosmic structures without reference to any specific model for the energy content of the universe. We derive the expansion function in a way similar to that proposed by [4] and [5], but substantially simplified and standardised. The only assumptions made there are that the universe is topologically simply connected, spatially homogeneous and isotropic on average, and that the expansion rate is reasonably smooth. Extending this analysis to the linear growth of cosmic structures, we only add the assumption that the linear growth of cosmic structures on the relevant scales is locally determined by Newtonian gravity. We briefly review and revise the method of [4] in Sect. 2 and apply it to the Pantheon sample of type-Ia supernovae (SN-sample hereafter) and to the Pantheon sample combined with a sample of distance measurements from baryonic acoustic oscillations (BAO, hereafter SN-BAO-sample) to obtain a purely empirical and tight constraint on the cosmic expansion function. We describe our method to calculate the linear growth factor in Sect. 3, discuss the initial conditions for solving the growth equation, and present the results obtained from the SN-sample and the SN-BAO-sample. Finally, we summarise our conclusions in Sect. 4.
2 Cosmic expansion

2.1 Method
As outlined in [4], the expansion function can be deduced from the luminosity of light sources of known intrinsic luminosity, such as calibrated SN Ia, without assuming any specific Friedmann-Lemaître model. We briefly review this method in this section in a modified, simplified, and standardised version.
Even though gravity is commonly described by general relativity (GR), we only need to assume that space-time is described by a metric theory of gravity. We thus treat space-time as a four-dimensional, differentiable manifold with a metric tensor g. Assuming spatial isotropy and homogeneity, this metric has to be of the Robertson-Walker form with a scale factor a. In general relativity, Einstein's field equations applied to the Robertson-Walker metric turn into the Friedmann equations, and the metric further specialises to the Friedmann-Lemaître-Robertson-Walker form. Then, the cosmic expansion function E(a) is given in terms of the Hubble function H(a) by
E²(a) = H²(a)/H₀² = Ω_r0 a⁻⁴ + Ω_m0 a⁻³ + Ω_K a⁻² + Ω_DE(a) .   (1)
This defines the cosmic expansion function E(a) in terms of the Hubble constant H₀ and the contributing energy-density parameters. These are the radiation density Ω_r0, the matter density Ω_m0, the density parameter Ω_K of the spatial curvature, all at the present time, and the possibly time-dependent dark-energy density parameter Ω_DE(a). In the standard ΛCDM cosmology, Ω_DE is replaced by the cosmological constant with the density parameter Ω_Λ0 at the present time.
It is important in our context that we do not assume any specific parameterisation of the expansion function of the type (1). Rather, we merely assume that we can build upon an underlying, but unspecified, metric theory of gravity with the two common symmetry assumptions of spatial isotropy and homogeneity. The metric must then be of Robertson-Walker form, and its single remaining degree of freedom must be described by some expansion function E(a) whose form is a priori undetermined.
We reconstruct E(a) from data without assuming the parameterisation (1).
As an uncritical simplification, we further assume that the spatial sections of the space-time manifold are flat, following the empirical evidence that the spatial curvature of our Universe cannot be distinguished from zero within the limits of current observational uncertainties [6]. It would be straightforward to extend our analysis by replacing the radial comoving distance w in Eq. (9) below by the comoving angular-diameter distance f_K(w).
We modify the approach developed in [4,5] and used in [7,8] in two important ways, allowing a substantial simplification and rendering the results much more portable than before. First, we use Chebyshev polynomials of the first kind T_n(x), shifted to the interval [0, 1], as an orthonormal basis-function system (see Appendix A). Second, we do not expand the distance, but rather a scaled variant of the inverse expansion function E⁻¹(a), into these polynomials.
Given measurements of distance moduli µ_i and redshifts z_i, with 1 ≤ i ≤ N, we convert the distance moduli to luminosity distances D_lum,i via the standard relation µ = 5 log₁₀(D_lum/10 pc), and scale the redshifts z_i to a variable x_i normalised to the interval [0, 1], where a_min = (1 + z_max)⁻¹ is the scale factor of the maximum redshift in the sample. We further introduce the scaled luminosity distance d_i, a dimensionless rescaling of D_lum,i. Since the uncertainties on the redshifts are very small compared to those of the distances, the relative uncertainty of d_i is unchanged compared to that of D_lum,i. We thus obtain a scaled data sample {x_i, d_i}.
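A rough sketch of this scaling step is given below. Only the standard relation µ = 5 log₁₀(D_lum/10 pc) is taken for granted; the mapping of a to x and the overall normalisation of d are assumptions, since their exact conventions are not spelled out here, and the numerical values are placeholders.

```python
import numpy as np

def scale_sample(z, mu):
    """Convert distance moduli to a scaled sample {x_i, d_i} (conventions assumed)."""
    D_lum = 10.0 ** (mu / 5.0 - 5.0)        # luminosity distance in Mpc
    a = 1.0 / (1.0 + z)                     # scale factor
    a_min = 1.0 / (1.0 + z.max())
    x = (a - a_min) / (1.0 - a_min)         # normalised to [0, 1] (assumed convention)
    d = D_lum                               # placeholder scaling; relative errors are unchanged
    return x, d

z = np.array([0.01, 0.1, 0.5, 1.0, 2.0])
mu = np.array([33.1, 38.3, 42.3, 44.1, 46.0])   # illustrative distance moduli
x, d = scale_sample(z, mu)
print(x)
print(d)
```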
The radial comoving coordinate can be written as an integral over the inverse expansion function, expressed in terms of the normalised scale factor x. We define the scaled, inverse expansion function e(x) and use the relation between ẋ and ȧ to express it in terms of E(a). The luminosity distance in units of the Hubble radius c/H₀ then follows in spatially-flat geometry, using a = a_min(1 + δa x). Thus, the scaled luminosity distance d(x) is an integral over e(x), and e(x) is recovered as its negative derivative. We now proceed as follows with the transformed data set {x_i, d_i}. We expand e(x) into shifted Chebyshev polynomials, e(x) = Σ_j c_j T_{j−1}(x). The scaled distances d(x) are then linear in the coefficients, so that, defining the matrix P by its components P_ij, the vector c of coefficients c_j is related to the data vector d = (d_i) via d = P c. With the covariance matrix C of the scaled luminosity distances d, the maximum-likelihood solution for c is c = (PᵀC⁻¹P)⁻¹ PᵀC⁻¹ d. The uncertainties ∆c_j of the coefficients and ∆E(a) of the expansion function are obtained from the Fisher matrix F = PᵀC⁻¹P in the following way. First, we diagonalise the Fisher matrix by rotating it into its eigenframe with a rotation matrix R, find its eigenvalues σ_i⁻² and define a vector of decorrelated coefficient uncertainties ∆c̃ = (σ_1, . . ., σ_M). Second, we rotate this vector back into the frame of the Chebyshev polynomials and find ∆c = R ∆c̃. The uncertainties ∆c_i obtained this way are slightly larger than the Cramér-Rao bound (F⁻¹)_ii^{1/2}, as they should be. Beginning with a large number M of coefficients, only those are kept which are statistically significant, i.e. which satisfy |c_j| ≥ ∆c_j.
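The fitting procedure can be summarised in a few lines of Python. The sketch below assumes that the design matrix P integrates each shifted Chebyshev polynomial up to x_i (one plausible reading of the construction above) and uses the standard generalised-least-squares estimate c = (PᵀC⁻¹P)⁻¹PᵀC⁻¹d together with the Fisher matrix F = PᵀC⁻¹P; the synthetic data exist only to make the snippet runnable.

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

def shifted_T(j, x):
    """Chebyshev polynomial T_j shifted from [-1, 1] to [0, 1]."""
    c = np.zeros(j + 1); c[j] = 1.0
    return cheb.chebval(2.0 * x - 1.0, c)

def design_matrix(x, M, n_quad=400):
    """P[i, j] ~ integral_0^{x_i} T_j(x') dx' (assumed form of the distance integral)."""
    P = np.zeros((len(x), M))
    for i, xi in enumerate(x):
        t = np.linspace(0.0, max(xi, 1e-12), n_quad)
        for j in range(M):
            f = shifted_T(j, t)
            P[i, j] = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))   # trapezoid rule
    return P

def gls_fit(x, d, cov, M=5):
    P = design_matrix(x, M)
    Cinv = np.linalg.inv(cov)
    F = P.T @ Cinv @ P                        # Fisher matrix
    c = np.linalg.solve(F, P.T @ Cinv @ d)    # maximum-likelihood coefficients
    return c, F

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.05, 1.0, 40))
d = 1.2 * x - 0.3 * x**2 + rng.normal(0.0, 0.01, x.size)   # synthetic scaled distances
cov = np.diag(np.full(x.size, 0.01**2))
c, F = gls_fit(x, d, cov)
print("coefficients:", np.round(c, 4))
print("naive uncertainties:", np.round(np.sqrt(np.diag(np.linalg.inv(F))), 4))
```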
2.2 Cosmic expansion function from the SN-sample
We first reconstruct the expansion function using the SN-sample of type-Ia supernovae [9], covering the scale-factor range a ∈ [0.3067, 1]. We apply the algorithm described in the preceding subsection to derive the function e(a) defined in Eq. (8). Using the covariance matrix provided with the data, we determine the coefficient vector c using Eq. (16) and derive its uncertainty ∆c from the Fisher matrix as described above. We arrive at M = 3 significant coefficients.
We then return to E(a) via Eq. (8) and determine its uncertainty by propagating the coefficient uncertainties ∆c_j. This results in the expansion function and its uncertainty shown in Fig. 1. The uncertainties are very small because the entire information taken from the SN-sample is compressed into three coefficients here. The best-fitting spatially-flat ΛCDM expansion function requires Ω_m0 = 0.324 ± 0.002. It is shown by the red curve in Fig. 1.
2.3 Cosmic expansion function from the SN-BAO-sample
We repeat our analysis on the SN-BAO-sample. We collected a sample of BAO measurements by searching for papers that appeared in the reviewed literature between January 2014 and December 2018. We selected 21 papers according to the quality and completeness of the data description and collected 89 measurements of the angular-diameter distance D_ang/r_d,fid in terms of a fiducial value r_d,fid for the drag distance, which sets the physical scale of the BAOs. The drag distance is the sound horizon at the end of the baryon-drag epoch. Of these measurements, we kept 75, removing those that seemed to be either dependent on or superseded by other measurements. These measurements fall into the redshift range [0.24, 2.4] and thus extend the scale-factor range of our reconstruction of the expansion function.
The drag distance r_d is unknown to us. It is determined by an integral over the expansion function and thus requires E(a) at scale factors smaller than a_d ≈ 1100⁻¹. In order to remain as model-independent as possible, we choose to determine r_d by an empirical calibration: we apply an offset to the distance moduli corresponding to the BAO measurements so as to bring them into least-squares agreement with the sample of distance moduli from the SN-sample. This offset turns out to be redshift-independent, as expected. Its value of ∆µ = 10.783 ± 0.041 corresponds to a drag distance in good agreement with the value expected in the standard ΛCDM cosmology. We further estimate the covariance matrix of the BAO data from the uncertainties quoted in the papers, combine the two statistically fully independent samples, and repeat the determination of the coefficients c and the expansion function as for the SN-sample alone. The result is shown in Fig. 2. For the SN-BAO-sample, we obtain M = 4 significant coefficients.
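A minimal sketch of the calibration idea follows; all numbers are placeholders, and the BAO measurements are assumed to have already been converted to uncalibrated distance moduli, which is not how the actual pipeline is described in detail here.

```python
import numpy as np

def calibrate_offset(z_sn, mu_sn, z_bao, mu_bao_uncal, sigma_bao):
    """Weighted least-squares fit of a single, redshift-independent offset Delta_mu."""
    mu_sn_at_bao = np.interp(z_bao, z_sn, mu_sn)    # SN distance moduli interpolated to BAO redshifts
    w = 1.0 / sigma_bao**2
    delta_mu = np.sum(w * (mu_sn_at_bao - mu_bao_uncal)) / np.sum(w)
    err = np.sqrt(1.0 / np.sum(w))
    return delta_mu, err

# Toy inputs, purely for illustration
z_sn = np.linspace(0.01, 2.3, 50)
mu_sn = 42.38 + 5.0 * np.log10(z_sn)                # rough low-z Hubble-diagram shape
z_bao = np.array([0.32, 0.57, 0.70, 1.50, 2.30])
mu_bao_uncal = 42.38 + 5.0 * np.log10(z_bao) - 10.8 # toy values, offset by ~10.8 mag
sigma_bao = np.full(z_bao.size, 0.05)
print(calibrate_offset(z_sn, mu_sn, z_bao, mu_bao_uncal, sigma_bao))
```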
Within their uncertainties, the expansion functions obtained from the SN-sample alone and from the SN-BAO-sample agree very well, but the uncertainties of the combined sample are somewhat smaller, and the redshift range of the reconstruction is slightly extended. The fit to the standard-ΛCDM expansion function leads to a result virtually indistinguishable from that of the SN-sample alone, with Ω_m0 = 0.319 ± 0.002, and is therefore not shown again in Fig. 2.
Interestingly, the expansion function determined purely from the data is slightly more curved than the best-fitting Friedmann-Lemaître model. This difference is formally highly significant, but we do not want to emphasise it since it may be caused by systematic uncertainties in the data or their covariances. The expansion coefficients determined from both data sets, i.e. for the SN-sample and for the SN-BAO-sample, are listed in Tab. 1. An interesting, albeit possibly premature, comparison concerns the hypothetical time evolution of the dark energy. If the expansion function E(a) derived from the data were to be represented by the expansion function E_ΛCDM(a) for a spatially-flat Friedmann-Lemaître model with dynamical dark energy, we should have E²(a) of the form (1) with a dynamical dark-energy term, which would imply a function q(a) quantifying the time evolution of the dark energy, together with its uncertainty. This function is shown in Fig. 3 for the SN-BAO-sample, setting Ω_m0 = 0.32 as obtained from the best-fitting ΛCDM model determined above. It illustrates one of the advantages of our approach, as the empirically determined expansion function does not assume any specific cosmological model in general, nor a specific model for dynamical dark energy in particular.
3 Linear growth of cosmic structures
3.1 Equation to be solved
Relative to the background expanding as described by E(a), structures grow under the influence of the additional gravitational field of density fluctuations δρ(x⃗, t) = ρ̄(t) δ(x⃗, t), where ρ̄(t) is the mean matter density and δ the density contrast. Structures small compared to the curvature radius of the spatial sections of the universe and with a density contrast δ ≪ 1 can be treated as linear perturbations of a cosmic fluid in the framework of Newtonian gravity. Linearising the corresponding Euler-Poisson system of equations in the perturbations and expressing spatial positions in comoving coordinates leads to the well-known second-order, linear differential equation
δ̈ + 2H δ̇ = 4πG ρ̄ δ   (23)
for the density contrast δ of pressure-less dust. Since Eq. (23) is homogeneous in δ, the solutions for δ can be separated into a time-dependent function D(t) and a spatially dependent function f(x⃗), writing δ(x⃗, t) = D(t) f(x⃗), where D(t) alone now has to satisfy Eq. (23). Of the two linearly independent solutions of Eq. (23), one decreases with time and is thus irrelevant for our purposes. We focus on the growing solution D_+(t), i.e. the linear growth factor. Transforming the independent variable in Eq. (23) from the time t to the scale factor a then gives Eq. (24) for the linear growth factor, where primes denote derivatives with respect to a. This equation depends only on the expansion function E(a), its first and second derivatives, and the matter-density parameter Ω_m. We know E(a) empirically in a model-independent way from the procedure described in Sect. 2, applied to the luminosity distances of the type-Ia supernovae contained in the SN-sample and to the distances from the SN-BAO-sample. The time-dependent matter-density parameter Ω_m(a) is given by
Ω_m(a) = Ω_m0 / (a³ E²(a))   (25)
in terms of the expansion function E(a) and the present-day matter-density parameter Ω_m0.
3.2 Initial conditions and results for the linear growth factor
Before we can proceed to solve Eq. (24) for the growth factor, we need to set Ω_m0 and to specify initial conditions. Since we know E(a) from data taken in the scale-factor interval [a_min, 1], we need to set the initial conditions at a_min. Since Eq. (24) is homogeneous, the initial value of D_+ is irrelevant and can be set to any arbitrary value. We choose D_+(a_min) = 1. Concerning the derivative D'_+(a) at a = a_min, we begin with the ansatz D_+ ∝ aⁿ near a = a_min, assume that n changes only slowly with a, and use Eq. (24) to find the exponent n of the growing solution in terms of the parameters ε and ω defined in Eq. (27). In the matter-dominated phase, both ε and ω are small compared to unity, and n is well approximated by Eq. (28). With the reconstructed expansion rate E(a), the parameter ε is fixed. For any choice of Ω_m0, ω is also set via Eq. (25), hence so is the growth exponent n, and we can start integrating the growth equation with the remaining initial condition D'_+(a_min) = n/a_min, Eq. (29). For each choice of Ω_m0, we can now solve Eq. (24) with the initial conditions Eq. (29) and D_+(a_min) = 1. After doing so, we normalise the growth factor such that it is unity today, D_+(a = 1) = 1. The uncertainty of the expansion function E(a) propagates to D_+(a), but the uncertainty on D_+ shrinks towards a = 1 because of the normalisation. The result is shown in Fig. 4 for Ω_m0 = 0.3 ± 0.02. The uncertainty in the growth exponent n disappears in the line width of the plot.
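The integration itself is straightforward. The sketch below uses a flat ΛCDM expansion function as a stand-in for the empirically reconstructed E(a), writes the growth equation in the standard Newtonian form D″ + (3/a + E′/E)D′ = (3/2)Ω_m0 D/(a⁵E²) (equivalent to Eq. (24) up to rearrangement), and adopts the initial conditions D_+(a_min) = 1, D′_+(a_min) = n/a_min with a crude growth exponent n ≈ 1; all of these simplifications are assumptions made only to keep the snippet short.

```python
import numpy as np
from scipy.integrate import solve_ivp

Om0 = 0.3

def E(a):                       # placeholder for the reconstructed expansion function
    return np.sqrt(Om0 / a**3 + (1.0 - Om0))

def dlnE_da(a, h=1e-6):
    return (np.log(E(a + h)) - np.log(E(a - h))) / (2.0 * h)

def growth_rhs(a, y):
    D, Dp = y
    Dpp = -(3.0 / a + dlnE_da(a)) * Dp + 1.5 * Om0 / (a**5 * E(a)**2) * D
    return [Dp, Dpp]

a_min, a_max = 0.3067, 1.0
n0 = 1.0                        # crude growth exponent near a_min (matter domination)
sol = solve_ivp(growth_rhs, (a_min, a_max), [1.0, n0 / a_min],
                dense_output=True, rtol=1e-8, atol=1e-10)
a = np.linspace(a_min, a_max, 200)
D = sol.sol(a)[0] / sol.sol(a_max)[0]        # normalise so that D_+(a = 1) = 1
print(D[0], D[-1])
```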
3.3 The growth index of linear perturbations
A common representation of the derivative of the growth factor with respect to the scale factor is given by the growth index γ, defined by
d ln D_+ / d ln a = Ω_m^γ(a) .   (30)
Theoretically predicted values of γ that can be found in the literature [10-17] range from approximately γ = 0.4 (for some f(R) modifications of gravity [18]) to γ = 0.7. This range includes models with varying w [10,16], curved-space models [15] and models beyond general relativity [10,11,18,19]. Even for models with strongly varying γ, the values for redshifts z ∈ [0, 2] are usually very close to γ ∼ 0.6. Without further specification, Eq. (30) is obviously valid for any cosmology since the growth index γ(a) could be any function of a. The substantial advantage of writing the logarithmic slope of the growth function in this way is that γ(a) is very well constrained for a wide range of cosmological models and can be used as a diagnostic for the classification of models based on gravity theories even beyond general relativity [10,11]. For a recent and well-structured review of constraints on γ in a wide range of models, see [11].
Another substantial advantage of Eq. (30) is that γ happens to be quasi-constant for a wide range of models. [12] found a general expression for γ(a) that applies to any model with a mixture of cold dark matter plus cosmological constant (ΛCDM) or quintessence (QCDM). For example, for a dark-energy equation of state parameterized by a slowly varying function w(Ω_m) in a spatially-flat universe, the growth index reduces to γ ≈ 3(1 − w)/(5 − 6w) [13]. Thus, for any constant w, the growth index γ is itself constant and reduces to γ = 6/11 for ΛCDM.
It is interesting in our context that we can derive γ from the reconstructed expansion function E(a). As we show in Appendix B, an approximate, yet sufficiently accurate solution for γ is given by Eq. (32). For a ΛCDM model, ε = 3ω, and Eq. (32) reduces to γ = 6/11. With our reconstruction of the expansion function E, we can determine γ and its uncertainty for any choice of Ω_m0. The result for Ω_m0 = 0.3 is shown for both data samples in Fig. 5.
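Numerically, γ(a) can also be read off directly from the definition (30) once D_+(a) is known, without using the approximation of Appendix B. A sketch follows, again with a flat ΛCDM expansion function standing in for the reconstructed one (an assumption made only so the snippet runs on its own).

```python
import numpy as np
from scipy.integrate import solve_ivp

Om0 = 0.3
E = lambda a: np.sqrt(Om0 / a**3 + (1.0 - Om0))
Om = lambda a: Om0 / (a**3 * E(a)**2)          # Eq. (25)

def rhs(a, y):
    D, Dp = y
    dlnE = (np.log(E(a + 1e-6)) - np.log(E(a - 1e-6))) / 2e-6
    return [Dp, -(3.0 / a + dlnE) * Dp + 1.5 * Om0 / (a**5 * E(a)**2) * D]

a_min = 0.01                                   # start deep in matter domination, D ~ a
sol = solve_ivp(rhs, (a_min, 1.0), [a_min, 1.0], dense_output=True, rtol=1e-9)

a = np.linspace(0.3, 1.0, 100)
D, Dp = sol.sol(a)
f = a * Dp / D                                 # logarithmic growth rate dln D / dln a
gamma = np.log(f) / np.log(Om(a))              # growth index from Eq. (30)
print(gamma.min(), gamma.max())                # should hover near 6/11 ~ 0.545
```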
The growth index follows the ΛCDM result very closely for a ≳ 0.5, but increases for smaller scale factors. Again, we abstain from drawing any conclusions here, but emphasise that our reconstruction method allows a direct determination of γ. It is likely that systematic errors in the data, or unaccounted covariance between the data points, are responsible for the behaviour of γ at a ≲ 0.5.
4 Conclusions
We have shown here how the linear growth factor D_+(a) of cosmic structures can be inferred from existing data with remarkably small uncertainty and without reference to a specific cosmological model. Following up on, modifying and extending earlier studies, we have derived the cosmic expansion function E(a) in a way independent of the cosmological model from the measurements of distance moduli to the type-Ia supernovae of the Pantheon sample (SN-sample), and from the Pantheon sample combined with a sample of BAO distance measurements compiled from the literature (SN-BAO-sample).
All we need to assume is that a metric theory of gravity underlies the cosmological model and that our universe satisfies the symmetry assumptions of spatial homogeneity and isotropy reasonably well. The uncertainty of this empirically determined expansion function is already remarkably small, and the results obtained from the SN-sample alone and from the SN-BAO-sample agree very well. This expansion function is the main ingredient of the differential Eq. (24) describing cosmic structure growth in the linear limit. Only one further parameter is needed to solve this equation, viz. the present-day matter-density parameter Ω_m0, because it enters into the initial conditions for solving Eq. (24). Assuming Ω_m0, we can also solve for the growth index γ defined in Eq. (30). This implies that, thanks to measurements of the distance moduli to the type-Ia supernovae in the SN- and SN-BAO-samples, the expansion function is accurately determined, and the linear growth factor D_+ as well as the growth index γ are tightly constrained up to a single remaining parameter, i.e. the present-day matter-density parameter Ω_m0.
Comparing our results to the best-fitting expansion function of a spatially-flat Friedmann-Lemaître model universe, illustrated in Fig. 3, and to the constraint on the growth index γ shown in Fig. 5, demonstrates how our method can be used with future data to derive the possible time evolution of the dark energy and the growth index directly from distance measurements.
In future work, we will extend the method presented here to further types of data. Our goal is to determine the two centrally important functions of cosmology, E(a) and D_+(a), with as few assumptions as possible and without reference to a specific cosmological model. Applications of our results which so far require assuming cosmological parameters or models for a possible evolution of dark energy, e.g. cosmological weak gravitational lensing, may be particularly interesting.
B Derivation of the growth index
In terms of the logarithmic derivative f ≡ d ln D_+ / d ln a, and using the parameters ε and ω introduced in Eq. (27), the linear growth equation (24) can be rewritten as Eq. (41). We use Eq. (25) to express Ω_m(a) and Eq. (30) to write d ln f / d ln a, approximating ln Ω_m = ln(1 − ω) ≈ −ω in the last step. Neglecting terms of order εω, inserting this result into Eq. (41), dividing by f and approximating further, we arrive at an equation that is linear in ε and ω. Solving it for γ finally gives the result quoted in Eq. (32).
C BAO sample
The sample of BAO measurements collected from the literature is listed in Tab. 2.
Figure 1: The cosmic expansion function E(a), reconstructed from the luminosity-distance measurements in the SN-sample. Beginning with the monomials q_j(a) = a^{j−1}, the model needs three significant coefficients c_j, whose error bars are determined by the covariance matrix of the data (see the entries in Tab. 1). The 1-σ uncertainty shown here is very small because the entire data set is compressed into three numbers. The red line shows the best-fitting, spatially-flat Friedmann expansion function.
Figure 3: Constraints on a dynamical evolution of the dark energy, obtained by comparing the expansion function derived from the SN-BAO-sample with the expectation for a spatially-flat Friedmann-Lemaître model (dark blue). The light blue band shows analogous constraints obtained from the SN-sample only. As in Figs. 1 and 2, 1-σ uncertainties are shown.
Figure 4: Linear growth factors D_+(a) implied by the two expansion functions E(a) shown in Fig. 2, obtained from the SN-BAO-sample (dark blue) and from the SN-sample alone (light blue). As described in the text, the growth factors are obtained by solving Eq. (24) with Ω_m0 = 0.3, based on the empirically derived expansion functions. The shaded areas cover the 1-σ uncertainty implied by the uncertainty of the expansion function E(a). Compared to the uncertainty due to E(a), the uncertainty due to varying γ(a_min) within [0.4, 0.8] is very small.
Figure 5: Growth index γ derived from the expansion function E, reconstructed from the SN-sample and from the SN-BAO-sample, assuming Ω_m0 = 0.3.
Table 1: Significant expansion coefficients and their uncertainties, as determined from the SN-sample and from the SN-BAO-sample.
Table 2: BAO data collected from the literature. Columns: index n, redshift z, D_A/r_d, uncertainty ∆(D_A/r_d), and description.
"Physics"
] |
Exact holographic tensor networks for the Motzkin spin chain
The study of low-dimensional quantum systems has proven to be a particularly fertile field for discovering novel types of quantum matter. When studied numerically, low-energy states of low-dimensional quantum systems are often approximated via a tensor-network description. The tensor network's utility in studying short-range correlated states in 1D has been thoroughly investigated, with numerous examples where the treatment is essentially exact. Yet, despite the large number of works investigating these networks and their relations to physical models, examples of exact correspondence between the ground state of a quantum-critical system and an appropriate scale-invariant tensor network have eluded us so far. Here we show that the features of the quantum-critical Motzkin model can be faithfully captured by an analytic tensor network that exactly represents the ground state of the physical Hamiltonian. In particular, our network offers a two-dimensional representation of this state by a correspondence between walks and a type of tiling of a square lattice. We discuss connections to renormalization and holography.
Introduction
One of the hallmarks of critical behavior is the divergence of correlations and the emergence of scale invariance; the low-energy behavior of the system seems to be of a similar nature on small and large scales. Beyond the physical beauty of such states, their treatment has helped develop many important ideas and tools, such as conformal field theory (CFT) and the renormalization group (RG) [1,2].
At temperatures approaching absolute zero, criticality is of a quantum nature and is a focal point of interest for low-temperature many-body physics. At such quantum critical points, long-range quantum fluctuations and correlations allow the system to sustain emergent large-scale quantum behavior (see e.g. Ref. [3]). Such phenomena have been observed in many experiments, including in magnetic systems where quantum critical behavior may develop when a magnetic transition is driven by changes in chemical doping [4], pressure [5], external fields [6] and other system parameters.
A major current challenge is therefore the detailed description of scale-invariant ground states of quantum systems. This is a particularly difficult problem since such systems are typically highly entangled, and harder to study than gapped generic ground states with shortrange correlations.
A recent promising approach to simulating many-body states is via so-called tensor networks. Tensor-network notation [7,8] offers a convenient graphical representation of the entanglement structure of many-body quantum states. A particularly simple class of 1D tensor networks, especially useful for describing spin chains with finite-range correlations, is known as matrix product states (MPS). These were introduced in Ref. [9] and are the variational class of states used in White's density matrix renormalization group (DMRG) numerical procedure [10], arguably the most successful tool for numerical investigation of quantum phases in 1D. Another class of tensor-network states, known as the multi-scale entanglement renormalization ansatz (MERA) [11], was proposed by Vidal [12] to be specially tailored for describing quantum critical points.
While there are a number of examples of physical spin systems for which an exact matrix product state representation is known for the ground state, most notably the celebrated AKLT state of Affleck, Lieb, Kennedy and Tasaki [13], the situation for critical systems is quite different. To our knowledge, until now, there has been no exact example of a scale-invariant tensor network that represents the ground state of a simple local Hamiltonian. Such networks are typically computed numerically. The only prior case of an analytical MERA we are aware of provides a sequence of approximate descriptions of free fermions, shown in Refs. [14,15].
Here we provide the first exact analytic hierarchical tensor network for describing a critical state: the ground state of the Motzkin spin chain. Our results suggest that scale-invariant tensor networks may be useful beyond dealing with CFTs. Indeed, it is well known that the Motzkin Hamiltonian does not have the energy level scaling associated with a CFT [16]. Despite this, the Motzkin spin chain bears many hallmarks of quantum critical systems. For instance, the Hamiltonian's gap closes polynomially quickly [17], the ground state has logarithmically growing entanglement entropy [17], and the ground state has a scale invariant description. This last point is one of our main results.
The Motzkin model, as well as the closely related Fredkin model, grew out of the study of "frustration-free" Hamiltonians. These are an important class of Hamiltonians whose ground state is also a simultaneous ground state of the local interactions comprising the Hamiltonian. Frustration-free Hamiltonians have recently been used for constructing novel quantum states of matter and also as representations of a variety of quantum optimization problems. Examples of frustration-free Hamiltonians include Kitaev's toric code [18], the AKLT Hamiltonian, and the Rokhsar-Kivelson model for a quantum dimer gas [19]. Moreover, any MPS state is the ground state of a frustration-free Hamiltonian [9,[20][21][22].
The Motzkin Hamiltonian, a spin-1 Hamiltonian introduced by Bravyi et al. in Ref. [17], represents an important new class of Hamiltonians that are both frustration-free and critical. The model admits a straightforward geometric interpretation in terms of random walks called "Motzkin walks" and its ground state is exactly solvable. Moreover it is the starting point for several generalizations that uncovered rich new possibilities for how the entanglement in a ground state can scale. In particular, Movassagh and Shor [16] showed how a higher spin ("colored") version of the model can feature much enhanced entanglement (going from typical logarithmic behavior in critical spin chains to a power law). Motivated by this model, Zhang, Ahmadain and Klich [23] found a parametric deformation of the model that yielded a continuous family of frustration-free Hamiltonians featuring a new quantum phase transition interpolating between an area-law phase and a "rainbow" phase with volume scaling of half-chain entanglement entropy. A spin 1/2 version of the model has been proposed based on so-called Fredkin gates by Salberger and Korepin [24,25], and its deformation was presented in Ref. [26][27][28]. An interesting recent variation of this class of models can be found in, e.g. Ref. [29] using symmetric inverse semigroups.
Here we present two types of tensor network that faithfully capture both the geometric properties and the entanglement structure of the colorless models: The binary height network and the height renormalization network. As described below, each path is mapped onto a tiling of a grid that defines the tensor network. Thus, our networks have the rules that govern the random walker baked into their building blocks. In particular, the second network we propose has a MERA-like structure [30] and defines a natural renormalization process of Motzkin walk configurations.
Motzkin spin chains
As stated above, the spin-1 Motzkin model was introduced as an example of a critical spin chain described by a frustration-free Hamiltonian with nearest-neighbor interactions. These have been generalized to include a deformation parameter t, where t = 1 corresponds to a critical point.
The Motzkin Hamiltonian has a unique zero-energy frustration-free ground state for any value of t > 0. For our present purpose, we need the exact form of the ground state, as we describe below. (The interested reader may refer to the detailed description of the Hamiltonian in Appendix A and in the references.) This ground state is a superposition of walks called "Motzkin walks". A Motzkin walk w is a walk on the Z² lattice using the line segments {up, flat, down} that starts at (0, 0), ends at (2n, 0), and never goes below the x axis. Each walk represents a spin configuration via identifying the Motzkin line segments with the local spin states {|1⟩, |0⟩, |−1⟩}, respectively. The ground state can be written as [23]
|GS⟩ = (1/√N) Σ_w t^{A(w)} |w⟩ ,
where the sum runs over all Motzkin walks of length 2n, A(w) denotes the area below the Motzkin walk w, and N is a normalization factor. A similar type of ground state occurs in the Fredkin model, which is a half-integer spin model with essentially the same structure; it only lacks the "flat" move. The half-chain entanglement entropy, which is a measure of the degree of quantum correlations in the system, is maximal for the t = 1 case, where it grows logarithmically in n. For t < 1 and t > 1, the ground state satisfies an area law [27]: the entanglement entropy is bounded by a constant independent of system size. The deformed Motzkin and Fredkin walks can naturally be viewed as constrained trajectories of a random walker in the presence of drift; the "x" axis plays the role of time, and the t parameter measures the strength and direction of the drift. More details can be found in references [23,24,26,27].
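For very short chains, the walk picture can be made concrete by brute force. The sketch below enumerates all Motzkin walks of a given (even) length and assigns each the weight t^{A(w)}; the trapezoidal convention used for the area A(w) is an assumption, since the precise convention is fixed only in the references.

```python
from itertools import product
from math import isclose

def motzkin_walks(length):
    """Yield all Motzkin walks of the given length as (steps, heights) pairs."""
    for steps in product((1, 0, -1), repeat=length):
        h, heights = 0, [0]
        for s in steps:
            h += s
            if h < 0:
                break
            heights.append(h)
        else:
            if h == 0:
                yield steps, heights

def area(heights):
    # area under the walk, counted as a sum of unit-width trapezoids (assumed convention)
    return sum(0.5 * (heights[i] + heights[i + 1]) for i in range(len(heights) - 1))

def ground_state(length, t=1.0):
    amps = {w: t ** area(h) for w, h in motzkin_walks(length)}
    norm = sum(a * a for a in amps.values()) ** 0.5
    return {w: a / norm for w, a in amps.items()}

gs = ground_state(4, t=1.0)
print(len(gs), "Motzkin walks of length 4")          # there are 9 of them
print(isclose(sum(a * a for a in gs.values()), 1.0))  # normalised superposition
```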
Though we will focus on the unweighted t = 1 version of the ground state, we will also describe the details of the t > 0 cases (see Eqs. (37) and (41)).
The Motzkin model can also be studied with periodic boundary conditions [16]. The ground space of the periodic model has a 4n + 1 degeneracy, and the ground states |Ψ k consist of equal superpositions over all spin-z configurations that have a total magnetization k, where −2n ≤ k ≤ 2n [16]. Recently, such states were shown to act as approximate quantum error correcting codes [31].
Binary height tensor network
The Motzkin model can be described as a "height model": the height of each walk is encoded within a field value at each point. This is a natural starting point for constructing a field theory description of the colorless Motzkin state, as was done in Ref. [32].
If we describe the state using a binary representation of the heights, we see that, since at site x the walk could have reached at most height x, we need at most ⌈log₂(2x)⌉ bits to encode the height. The height encoded at site x+1 results from adding (or subtracting) the spin value at site x+1 to the height at site x. Thus, our first step is to generate a tensor network that implements these additions. We find it convenient to encode both the walk and the binary addition using a set of tiles, as explained below.
Walks as tiles
Consider the square tiles A_1, . . . , A_6 shown in Eq. (2). Top and bottom edges are labeled by 1, −1, or 0 according to whether that edge touches a "↑"-line, a "↓"-line, or neither, respectively. Left and right edges are labeled by 1 or 0 depending on whether that edge touches a "→"-line or not.
Given a tiling of a square grid, we say that the tiling is valid if all edges match. To represent a 2n-step Motzkin walk by a valid tiling, we consider a square-gridded "step pyramid", as shown in Fig. 1. The steps vary in length because only a logarithmically growing number of bits (⌈log₂ 2x⌉) is required to store the height at the x-th column, for 1 ≤ x ≤ n. Also, the network is symmetric about the halfway point. We impose boundary conditions on the exterior north, east, and west edges such that they are all equal to zero. The values on the south boundary correspond to the local spin values of the walk. As shown in Fig. 1, the height of the walk is encoded in the binary strings between columns of tiles. This defines an isomorphism between Motzkin walks and valid tilings.

Figure 1: The tiles from Eq. (2) are placed on a square-gridded step pyramid. The height of the walk is encoded in binary between columns of tiles, where "1" and "0" values are associated with the presence or absence of a horizontal arrow. For example, the middle point has height 4, which in the network is encoded by the sequence 0100, representing 4 in binary. Vertical arrows act as "carry" bits in the binary addition process. Each valid tiling can be described pictorially: a value of 1 at the base of the pyramid starts an arrowed line that travels through the bulk. These arrowed paths travel horizontally to the right; to move vertically, two arrowed paths must fuse, and similarly, an arrowed path bifurcates when moving downwards.
Tiles as tensors
Let us define a tensor network that sums over all configurations along the bottom edge of a square-lattice step pyramid that yield a valid tiling. First, we identify each tile A_j with a tensor. We define the rank-one, four-index tensor
δ_{k1 k2 k3 k4}(w, x, y, z) = δ_{k1,w} δ_{k2,x} δ_{k3,y} δ_{k4,z} ,   (3)
where δ_{j,k} is the Kronecker delta and w, x, y, z ∈ Z. Next, we identify each tile in Eq. (2) with such a tensor, where the values w, x, y, z correspond to the north, east, south, and west boundary values, respectively, and we compress the four-index notation k_1, k_2, k_3, k_4 into a 4-tuple k. Summing the six tile tensors then defines the square tensor B, which is the primary building block of our first tensor-network representation of the Motzkin ground state, which we call the binary-height tensor network. It is shown in Fig. 2(a). Recall that the north, east, and west facing outer edges of each tiling must be projected onto the value 0; this boundary condition is set by contracting the indices on these edges with the corresponding boundary vectors. Contracting this network gives a value of 1 if the tiling is valid and 0 otherwise. Therefore, it represents an equal-weight superposition of Motzkin walks, which is exactly the t = 1 Motzkin ground state.
The height of this network is ⌈log₂ 2n⌉, therefore the bond dimension between the two halves of the network grows as O(n). The total number of B tensors required scales as O(n log n).¹ An immediate upper bound on the entanglement entropy between the two sides, as estimated by the number of cuts needed, is simply log n.
The generalization to the spin-1/2 case (known as the Fredkin model) and to the case of area-weighted walks is given in Eqs. (23) and (37), respectively. Below we show how the binary-height network can be straightforwardly generalized to describe the spin-1/2 Fredkin model, as well as the Motzkin model with periodic or height-varied boundary conditions.

¹ More specifically, for n = 2^y − 1, where y is a positive integer, exactly 2(n + 1) log₂(n + 1) − 2n many B tensors are required.
Boundary conditions
First, we consider modifying the shape of the network. If we extend the width-2n square-gridded step pyramid of the binary-height network to an m × 2n square-gridded rectangle with m ≥ ⌈log₂ 2n⌉, then there is no increase in the number of valid tilings. In fact, tilings of the rectangle are merely embedded tilings of the pyramid that have been padded from above with blank A_4 tiles.
Next, consider generalizing the left and right boundary conditions of the binary-height network (extended to a height-m rectangle) from |0⟩^{⊗⌈log₂ 2n⌉} to a product of |0⟩ and |1⟩ states whose bits b_k^{L(R)} form the binary expansion of an integer p (q) between 0 and 2^m − 1. Fig. 2(b) shows a tensor network that incorporates these changes. If 2^m − 1 ≥ max(p, q) + n, this network represents the equally weighted sum over all walk configurations with non-negative heights starting at height p and finishing at height q.
If the left and right height boundary conditions are increased so that min(p, q) ≥ n, then all length 2n walks with net height q − p will be generated by the network. This is because no walk is long enough to reach a negative height value. This observation can be used to find a representation of |Ψ k . Choosing left and right boundary vectors to represent any p and q such that q − p = k and min(p, q) ≥ n will generate |Ψ k . One example is to set max(p, q) = 2n and min(p, q) = 2n − |k|.
Zipping up: an exact network RG transformation
The binary-height tensor network discussed above offers a compact and intuitive representation of the Motzkin ground state. Nevertheless, the O(n log n) number of B tensors required is suboptimal. Here we show that the t = 1 Motzkin and Fredkin ground states, and their generalizations with periodic boundary conditions, can be represented exactly using a tensor network that only requires O(n) many tensors.
We offer two equivalent constructions. In the main text we present a tile-based approach consistent with the construction of the binary-height tensor network presented above. In Figs. 6-9 we provide an independent construction that is more closely related to existing tensor-network methods. In particular, it leverages the framework of U(1)-invariant tensors, originally described in Ref. [33].
The key idea behind the tile-based construction is a network renormalization-group step associated with the "zipper lemma", where the triangular tensor will be defined below and the proof is given in the appendix. Sequential application of the zipper lemma leads to a new tensor network in which each layer discards the highest-frequency components of the layer below. Thus, it naturally represents a renormalization process for Motzkin walks, and resembles a MERA.
First we will treat the periodic case, which is more closely related to the binary-height tensor network and involves a simpler tile set. After this, we will generalize the construction to the original Motzkin model.
The height renormalization network with periodic boundary conditions
We will require a particular three-index tensor in which all indices i, j, and k are restricted to the values {−1, 0, 1}; consequently, the values of i and j cannot be both 1 or both −1. Now we introduce the basic unit of renormalization. Consider the tensor network of Eq. (10), which maps two spin values s_1 and s_2 to a single spin value s_3. This map can be represented pictorially, as shown in Fig. 3(a). Next, consider the action of a single layer of these three-index renormalization tensors, as shown in Fig. 3(b). As a linear map acting on a walk configuration on the 2n bottom indices, it outputs a "coarse-grained" version of that walk on the n top indices. More specifically, the height of the walk between sites l and l+1 on the top indices is half the height (rounding down) of that between sites 2l and 2l+1 on the bottom indices.

Figure 3: (a) Pictorial representation of the map (10). Equivalently, it depends on whether the segment corresponding to s_1 starts at an even or odd height. Note that the values on the right two columns do not depend on the height of the middle node in the first column. (b) Example of walk renormalization. The height achieved at each dashed line on the top walk is equal to half the height (rounding down) at the same point in the walk below. As pointed out in (a), the height values at the thickly dashed lines for the higher walks do not depend on the heights at the more finely dashed lines for the walks below; this is the information discarded by the coarse-graining process.
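This coarse-graining rule is easy to state directly on walks. The sketch below takes an arbitrary Motzkin walk (encoded as steps in {+1, 0, −1}), halves its heights with rounding down at every other site, and returns the resulting length-n walk.

```python
def heights(walk):
    """h[k] = height of the walk after k steps."""
    h, out = 0, [0]
    for s in walk:
        h += s
        out.append(h)
    return out

def coarse_grain(walk):
    """Coarse-grain a length-2n walk: top height between sites l, l+1 is floor(h[2l] / 2)."""
    h = heights(walk)
    top_h = [h[2 * l] // 2 for l in range(len(walk) // 2 + 1)]
    return [top_h[l + 1] - top_h[l] for l in range(len(top_h) - 1)]

walk = [1, 1, 1, 0, 1, -1, -1, 0, -1, -1]        # a length-10 Motzkin walk
print(coarse_grain(walk))                        # a length-5 walk with steps in {-1, 0, +1}
```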
If this coarse-graining procedure were repeated log₂(2n) times, it could be represented by the network shown in Fig. 4(a). It requires O(n) many square and triangular tensors.² Note that we have left the bit values of the left and right boundary vectors |b_L⟩ and |b_R⟩ unspecified. These encode two integers p and q, respectively.

Figure 4: (a) The height renormalization tensor network. The square tensors B are identical to those defined in Fig. 2. The triangular tensors T are defined in Eq. (13). This definition involves the triangular tiles (ii-viii), which are themselves defined using δ_k(x, y, z) (see Eq. (14)).
Contracting this tensor network with a walk configuration gives the value 1 if the walk reaches a net height of p − q, and gives zero otherwise. Therefore, by choosing max(p, q) = 2n and min(p, q) = 2n − |k|, this tensor network represents the periodic ground state |Ψ k .
One way to verify this claim would be to compute all valid tilings of this network. The triangle tensors can be redefined in tile notation as in Eq. (13), where the triangular tiles D_l are defined in Fig. 4. We use the zipper lemma to show in detail, in Fig. 10, how the height renormalization tensor network is equivalent to the binary-height tensor-network representation of the periodic model.
Next, we show that by modifying the B and T tensors, the same type of tensor network can represent the original Motzkin model (with open boundary conditions).
Original Motzkin model
The main modification we make to the network is that we increase the bond dimension of all indices from three to four. In addition to the three spin values {−1, 0, 1}, each index can also be assigned a non-physical value labeled ω. Each tensor will still be represented as a sum of tiles. New tiles will indicate the ω-value of an edge using a dotted line. In order to still represent a spin-1 chain, physical indices of the network (those that appear at the bottom) must take only spin-1 values. To ensure this, we define a projector onto the spin-1 values, which will be appended to all physical indices of the network.
We define the new tensor network in Fig. 5(a). We modify the square and triangular building blocks of the network as follows.
The square tensors in Fig. 5(a) are denoted C; they are defined as a sum of tiles involving A_7 and A_8 (defined in Fig. 5(b)) together with B from Eq. (4). The triangle tensors in Fig. 5(a) are denoted S and are defined as a sum of the tiles E_1, . . . , E_12 defined in Fig. 5(c). Similar to the height renormalization tensor network for the periodic case, we verify in Fig. 12 that this network represents the Motzkin ground state by proving its equivalence to the binary-height tensor network shown in Fig. 2.
We also note that the height renormalization tensor network can be generalized to the spin-1/2 Fredkin model in the same way as the binary-height tensor network. The generalization to area-weighted walks is given in Fig. 41.
Discussion
We presented exact hierarchical tensor networks representing the ground state of the spin-1 Motzkin spin chain. These networks generate the sum over tile configurations that are in one-to-one correspondence with valid Motzkin walks. The networks utilize the characterization of the states as height models, the ability to encode the height efficiently in binary, and the fact that binary addition can be carried out as a local operation between digits. The tile-based two-dimensional representation of each walk provides a bulk description of the spin chain: each valid bulk "picture" corresponds to a particular boundary state in the ground-state superposition. It is interesting to note that bulk-boundary correspondence in terms of tiling has played a useful role in tackling other hard problems, most recently in the context of spin glasses [34].
At this point it is interesting to consider the relation between our scale-invariant network, the renormalization group (RG), and the MERA. The height renormalization network we describe is clearly similar in structure to a 1D binary (scale-invariant) MERA [35], but where the disentangling has been realized through the action of a matrix product operator (MPO) as opposed to the tensor product of local unitary gates utilized in a standard MERA. Indeed, the alternative construction of a network for the Motzkin chain described in Figs. 6-9 proceeds from the perspective of an RG transformation, building the network layer-by-layer in a manner comparable to the construction of MERA via entanglement renormalization [12], thus further illuminating the connection between these ideas. However, a significant difference is that the tensors in our network lack the isometric constraints imposed on MERA tensors, which are responsible for the finite-width "light cone" in MERA. Therefore our network does not preserve locality when viewed as an RG transformation, and should be viewed as a non-unitary MERA.
The results presented here offer an exact analytic hierarchical tensor-network representation of the ground state of a gapless system in the thermodynamic limit. Related works include networks based on network realizations of the Fourier transform [36] and recent works on holographic tensor networks for disordered systems [37]. While recent advances utilizing wavelets have allowed for the analytic construction of MERA that approximate the ground states of certain lattice CFTs with arbitrarily high precision [14,15], these constructions only become exact in the limit of infinite bond dimension. The inability of a finite-bond-dimension MERA to achieve an exact representation of a CFT can be understood as a direct consequence of the finite-width light cone in MERA, which, at finite bond dimension, can only support a finite number of scaling operators [38]. This is incompatible with achieving an exact representation of a CFT, which typically possesses an infinite set of scaling operators. However, this suggests that one should consider more generally hierarchical networks of the form derived in this Article, which replace the (local) unitary disentangling with a (non-local) disentangling implemented by an MPO, as these do not have the limitation of only supporting a finite number of scaling operators. Indeed, our present work opens the exciting possibility that other systems, potentially including ground states of lattice CFTs, could also have an exact representation as a (finite-bond-dimension) network of this form. Notice also that if the MPO disentangler were required to be a unitary operator [39,40], one could achieve a quasi-local RG transformation that may still be computationally viable as a variational ansatz. It thus remains an interesting avenue for future research to investigate whether a non-unitary hierarchical tensor-network ansatz, which sacrifices the exact locality present in a standard MERA, could lead to improved numerical simulation algorithms.
It has been noted that scale-invariant networks, and in particular MERA, have a special connection to holographic duals in the sense of the AdS/CFT correspondence [41]. Here, the bulk of a MERA tensor network can be understood as a discrete realization of 3d anti-de Sitter space (AdS_3), identifying the extra radial holographic dimension with the RG scale in the MERA. While the MERA tensor network is used to represent ground states of relativistic CFTs, where both time and space may be rescaled simultaneously, the tensor networks presented in this work perhaps more naturally describe ground states of non-relativistic field theories that are invariant under non-relativistic symmetry groups. These symmetry groups are characterized by anisotropic scale transformations of time and space; Lifshitz and Schrodinger field theories are well-known examples of such field theories. This relation may prove valuable to the currently active efforts to construct holographic duals for non-relativistic field theories with Lifshitz symmetry [42] or Schrodinger symmetry [43,44]. For a recent thorough review of Lifshitz holography, see [45] and references therein.
It is natural to ask whether the tensor networks presented here can be generalized to other spin models. In a companion work [46], we introduce a further tile-based tensor network for the higher-spin (colored) generalizations of the Motzkin and Fredkin models, showing how an exact network can describe the rainbow phases these models exhibit.
Our results highlight the versatility of tensor networks and their potential in describing complicated many-body states, and in particular the power of self-similar structures such as MERA, or, in this case, a non-unitary hierarchical tensor network, in describing quantum critical phases.
A Definition of the Motzkin Hamiltonian
The Motzkin Hamiltonian can be defined by first identifying each local spin-z basis state |1⟩, |0⟩, |−1⟩ with a line segment (an up step, a flat step, and a down step, respectively). This allows us to represent states as superpositions of walks. The Hamiltonian, defined on a spin chain with 2n sites, is given in Eq. (18), where Π_j acts on the pair of spins j, j+1 and where Φ, Ψ, Θ are particular states on pairs of neighboring spins. To obtain periodic boundary conditions, the Hamiltonian in Eq. (18) can simply be modified to include a Π_{2n} term while omitting the boundary terms Π_boundary.
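As an illustrative aside (not part of the original construction), the walk picture above is easy to check numerically: a sequence of steps over {+1, 0, −1} is a valid Motzkin walk precisely when its running height never becomes negative and returns to zero at the end. A minimal Python sketch enumerating such walks reproduces the Motzkin numbers that count the terms in the ground state superposition.

from itertools import product

def is_motzkin(steps):
    # steps: tuple over {+1, 0, -1}; valid if the running height stays
    # non-negative and returns to zero at the end (open boundaries)
    h = 0
    for s in steps:
        h += s
        if h < 0:
            return False
    return h == 0

def motzkin_walks(length):
    # brute-force enumeration; exponential cost, small chains only
    return [w for w in product((1, 0, -1), repeat=length) if is_motzkin(w)]

for n in range(1, 5):
    print(2 * n, "sites:", len(motzkin_walks(2 * n)), "valid walks")

For 2n = 2, 4, 6, 8 sites this prints 2, 9, 51, and 323 valid walks, respectively.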
B Spin-1/2 case
To re-purpose the binary-height network for the spin-1/2 Fredkin model, we define an operator P ∈ Hom(C^3, C^2) that projects out the |0⟩ component of each spin and maps |1⟩ → |1/2⟩ and |−1⟩ → |−1/2⟩, where the indices k_1 and k_2 are associated with the spin-1 and spin-1/2 degrees of freedom, respectively. Appending P^{⊗2n} to the bottom of the binary-height tensor network yields a tensor network for the Fredkin model (Eq. (23)). We note that Salberger and Korepin previously presented an MPS description of the Fredkin state using the height representation of Dyck walks [24]. However, the elementary building blocks of such networks have growing bond dimension and lack the local structure of the networks we achieve here.
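For concreteness, in the basis ordering (|1⟩, |0⟩, |−1⟩) for the spin-1 site and (|1/2⟩, |−1/2⟩) for the spin-1/2 site (an ordering we assume here purely for illustration), the operator P is just the 2 × 3 matrix in the following short NumPy sketch, which is only meant to make the mapping explicit.

import numpy as np

# Rows index the spin-1/2 basis (|1/2>, |-1/2>); columns index the
# spin-1 basis (|1>, |0>, |-1>). The |0> component is projected out.
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])

spin1_up = np.array([1.0, 0.0, 0.0])    # |1>
spin1_zero = np.array([0.0, 1.0, 0.0])  # |0>
print(P @ spin1_up)    # [1. 0.]  ->  |1/2>
print(P @ spin1_zero)  # [0. 0.]  ->  annihilated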
C Construction of renormalization tensor network via U(1)-symmetric tensors
In this appendix we provide an alternative derivation of an exact hierarchical tensor network representation of the ground state of the spin-1 Motzkin spin chain. Here, in order to better connect with established tensor network methodology [33], we formulate the solution in terms of U(1) invariant tensors instead of the 'flux' preserving tiles discussed in the main text; however both approaches are ultimately equivalent. The derivation presented here is based on the dual notions of δ-symmetric tensors, a restricted subclass of U(1) tensors, and boundary-locked networks. We first define these concepts before proceeding to demonstrate how they can be used to construct a network for the Motzkin spin chain.
Let us recall the basics of U(1)-symmetric networks, as detailed in Ref. [33]. A tensor network with U(1) symmetry is represented by an oriented graph, where each index has an associated direction (depicted with an arrow), such that the indices connected to a tensor can be regarded as either incoming or outgoing with respect to that tensor. Each index i ∈ {1, 2, . . . , d} in the network is assigned a set of quantum numbers (charges) n^(i). U(1)-symmetric tensors are those that conserve particle number, such that a tensor component has zero weight unless the sum of the outgoing charges matches the sum of the incoming charges. For the purposes of constructing a solution to the Motzkin model, it is useful to restrict to a sub-class of symmetric tensors that we call δ-symmetric tensors, which we define as U(1)-symmetric tensors where every (structurally non-zero) element is equal to unity. In other words, these are U(1)-symmetric tensors whose components equal unity if the total incoming charge n_in matches the total outgoing charge n_out and are zero otherwise, which can be understood as a tensor version of the Kronecker-delta function.
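To make the definition concrete, a δ-symmetric tensor is fixed entirely by the charge lists of its indices: an element is unity whenever the incoming and outgoing charges balance, and zero otherwise. The following Python sketch (our own illustrative code using dense NumPy arrays, not the implementation of Ref. [33]) builds such a tensor from arbitrary charge assignments.

import numpy as np
from itertools import product

def delta_symmetric(in_charges, out_charges):
    # in_charges / out_charges: one list of charges per index, e.g.
    # in_charges=[[-1, 0, 1]] for a single incoming spin-1 leg.
    shape = [len(c) for c in in_charges] + [len(c) for c in out_charges]
    T = np.zeros(shape)
    n_in_legs = len(in_charges)
    for idx in product(*[range(s) for s in shape]):
        n_in = sum(in_charges[k][idx[k]] for k in range(n_in_legs))
        n_out = sum(out_charges[k][idx[n_in_legs + k]]
                    for k in range(len(out_charges)))
        if n_in == n_out:
            T[idx] = 1.0   # all structurally non-zero elements equal unity
    return T

# Two incoming legs of charges [-1, 1] and one outgoing leg of charges
# [-2, 0, 2] (the charge pattern of the blocking tensor W used below).
W = delta_symmetric([[-1, 1], [-1, 1]], [[-2, 0, 2]])
print(W.shape, int(W.sum()))  # (2, 2, 3) 4

The example instance reproduces the charge pattern of the blocking tensor W introduced later in this appendix.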
The concept of a boundary-locked tensor network is now defined. Given a lattice L of spin-1 sites, let T be a tensor network built from U(1)-symmetric tensors representing an S_z = 0 quantum state |ψ⟩ ∈ L, and let |φ⟩ be a U(1)-invariant product state (S_z = 0) on L. We say that the network T is boundary-locked if, by constraint of the U(1) symmetry, there is a single unique configuration of the internal indices in T that can give a non-zero contribution to the scalar product ⟨φ|ψ⟩ for any input product state |φ⟩ with S_z = 0. In other words, a network is boundary-locked if, by fixing the boundary indices in a configuration compatible with the S_z = 0 symmetry of the network, the internal indices are then 'locked' in a unique configuration by the constraint that the total incoming U(1) charge must match the total outgoing charge in every tensor. It follows that a necessary (but not sufficient) condition for a network to be boundary-locked is that the irrep of charge n on any tensor index is at most 1-fold degenerate (which allows us to specify the value that an index takes by the charge n that it carries).
Given that the notions of δ-symmetric tensors and boundary-locked networks have been established, we are now able to construct the exact hierarchical network for the ground state of the Motzkin chain. We begin by focusing on a simpler task: given a finite chain of spin-1 sites L, we describe how to construct a network representing the equal-weight superposition of states in the S_z = 0 spin symmetry sector (or, equivalently, the superposition of all walks that start and end at zero height). This solution can later be refined to exclude the paths that take negative height values at any point, as described in the main text, such that the ground state of the original Motzkin spin chain is recovered. Let us consider a U(1)-invariant state |ψ⟩ ∈ L with S_z = 0; if |ψ⟩ is described by a boundary-locked network built from δ-symmetric tensors, it automatically follows that |ψ⟩ must be the desired equal-weight superposition of all states in the S_z = 0 sector. This is true since the scalar product of |ψ⟩ with any product state of the S_z = 0 spin sector must evaluate to unity, as only a single configuration of indices can contribute and all configurations have a total weight equal to unity, while the scalar product with any state outside of the S_z = 0 sector is trivially zero.
The remaining goal is thus to build a boundary-locked network T of δ-symmetric tensors that constitutes a proper holographic realization: given a lattice L of N sites, we want the network T to be organized into O(log N) self-similar layers, each with some finite bond dimension that is independent of the system size N. Here we follow a similar construction as used in the MERA, and build the network from a sequence of coarse-graining (CG) transformations, each of which is comprised of a disentangling step followed by a blocking step that reduces the number of lattice sites by a factor of 2. However, instead of using disentanglers with finite local support, which prove inadequate for the exact construction, we represent the disentangling operation as an MPO. We build this MPO from copies of a four-index δ-symmetric tensor G^{ik}_{jl}, as depicted in Fig. 6(a-b); its virtual bond dimension is finite and independent of the system size. The action of this MPO is to map states on the N-site lattice L, which has local dimension d = 3, to states on an N-site lattice L' of local dimension d = 2. If we assume N to be even and that the virtual indices of the edge MPO tensors are fixed in the |0⟩ state, then it can be seen that any S_z = 0 product state on L is mapped to a unique product state on L' (see Fig. 7 for an example), consistent with the requirement of a boundary-locked network. This follows since, once the input indices i and k have been set on a G tensor, there is only a single choice of the output indices j and l that satisfies U(1) charge conservation. The blocking step of the CG transformation is then realized using a 3-index δ-symmetric tensor W^{ij}_k, whose input indices carry U(1) charges n^(i) = n^(j) = [−1, 1] and whose output index carries n^(k) = [−2, 0, 2], see also Fig. 6(c-d). This blocking step is again consistent with realizing a boundary-locked network, and clearly has non-zero overlap with any state on L'.
One may then repeat this coarse-graining transformation, consisting of disentangling with the MPO formed from G tensors and then blocking with the W tensors. Notice that the magnitude of the individual U(1) charges carried on the indices of the G and W tensors doubles with each coarse-graining step, but the content of these tensors otherwise remains unchanged. The output index of the final isometry, after taking log_2(N) coarse-graining steps where N is assumed to be a power of 2, is fixed in the n = 0 state to ensure that the network is in the total spin S_z = 0 sector, see Fig. 8. Thus our goal is accomplished: we have an S_z = 0 holographic network that (i) is constructed entirely from δ-symmetric tensors and (ii) is boundary-locked, which implies that it represents an equal-weight superposition of all states in the total spin S_z = 0 sector.
In comparing the network derived in this appendix, Fig. 8(a), to that derived in the main text, Fig. 4, one sees that they both have an equivalent structure, with the G and W tensors derived in this appendix substituting for the B and T tensors in Fig. 4. However, upon expressing the G and W tensors in the 'tiling' representation used in the main text, see Fig. 9(a-b), it is seen that they correspond to a different set of tiles than do the B tensors, Fig. 2, and the T tensors, Fig. 4. Nevertheless, a component-wise analysis of all permissible tilings reveals that the product of two G and a W tensor is identical to the product of two B and a T tensor, as depicted in Fig. 9(c). Thus one indeed concludes that the network constructed in this appendix is ultimately equivalent to that of the main text.
D Equivalence between the renormalization and binary height networks for the periodic model
Here we prove that the height renormalization tensor network defined in Fig. 4 represents the periodic Motzkin ground state. We do this by showing its equivalence to the generalized binary-height tensor network defined in Fig. 2(b). In order to represent the state |Ψ_k⟩, the boundary vectors |b_L⟩ and |b_R⟩ (see Eq. (6)) are chosen to encode integers p and q such that q − p = k and min(p, q) ≥ n. We choose max(p, q) = 2^{⌈log_2 2n⌉+1} + 2n (24) and min(p, q) = 2^{⌈log_2 2n⌉+1} + 2n − |k| (25). We use a rectangular binary-height tensor network with m = 2⌈log_2 2n⌉ + 1 layers. This choice guarantees that b_{m,L} = b_{m,R} = 1. See Fig. 10(a) for an example on 8 spins. As usual, the upwards-pointing indices on the m-th layer have been contracted with the state |0⟩. In fact, all such indices must take the value 0 regardless of this boundary condition. To see this, suppose some of these indices took a value of 1 or −1, and let µ be the leftmost column with a non-zero value at the top. Then,
• If the top index of µ takes value −1, the only valid tiling for µ is to use tile A_6 for all square cells. Then, the height encoded at the left boundary of µ is zero. However, from Eq. (25), p > 2n ≥ µ, and since it takes at least y columns to drop in height by y, there are no valid tilings of the lattice to the left of µ.
• If the top index of µ takes value 1, the only valid tiling for µ is to use tile A 2 for all square cells. Then, the height encoded at the right boundary of µ is zero. However, from Eq. (25), q > 2n ≥ 2n − µ, and since it takes at least y many columns to climb in height by y, there are no valid tilings of the lattice to the right of µ.
Because non-zero values at the top indices of the network yield invalid tilings, contractions of the network must evaluate to zero in such cases. Therefore, there is some flexibility in choosing the upper boundary condition: we are free to replace the contraction with |0⟩^{⊗2n} by any other tensor that has support on this state. In particular, we can use a binary tree of triangular T tensors, as shown in Fig. 10(b). Tiling the entire pyramid with D_5 tiles shows that it has support on the |0⟩^{⊗2n} subspace.
Next, we require the following lemma. Lemma D.1 (Zipper Lemma). The following tensor networks are equivalent: (26) Proof. By an exhaustive search over all valid tile configurations (see Fig. 11), the following two tensor networks are equivalent: (27) The lemma follows by sequential application of this identity.
Using this, we can pull a horizontal layer of triangular tensors in Fig. 10(b) downwards through the tensor network. At each step, two B tensors are merged into one. Pulling down each triangular tensor is analogous to closing a zipper. The end result is shown in Fig. 10(c). The final step to prove equivalence with the height renormalization tensor network from Fig. 4(a) is to note that the top B tensor from Fig. 10(c) can be removed, since (28) Removing the top B tensor modifies the left and right boundary heights so that now max(p, q) = 2n and min(p, q) = 2n − |k|.
E Equivalence between the renormalization and binary height networks for the original model
Here we show that the height renormalization tensor network shown in Fig. 5 is a valid representation of the Motzkin model. Our proof will follow a similar trajectory to the case with periodic boundary conditions.
Figure 12: (a) Binary-height tensor network made up of B tensors, vectors |0⟩, and projectors Π. This is equivalent to the step-pyramid binary-height tensor network from Fig. 2. (b) The network from (a) has been modified so that the B tensors are replaced with C tensors, and the top boundary of |0⟩^{⊗2n} has been replaced with a tree of triangular S tensors.
We begin with the step-pyramid of B tensors from Fig. 2. We will embed the width-2n pyramid within a (⌈log_2 2n⌉ + 1) × 2n rectangle (recall that this does not change the represented state). We are also free to append projectors Π (see Eq. (15)) to the base of the network because the B tensors contain no tiles with support on ω (they have no dotted lines). The tensor network shown in Fig. 12(a) incorporates both of these changes.
Next, we replace each of the B tensors in Fig. 12(a) with the C tensors defined in Eq. (16). Though the C tensors contain two additional tiles (A_7 and A_8), the new network includes no additional tilings. To see this, note that A_7 and A_8 contain vertical dotted lines. If these tiles appear in some column, then any valid tiling of that column must connect the dotted line to the bottom of the network, and contraction with Π at the bottom of that column will evaluate the network to zero. Next, we replace the top boundary vector |0⟩^{⊗2n} with a tree of triangular S tensors (defined in Eq. (17)). These changes are incorporated into the tensor network shown in Fig. 12(b), which is equivalent to the network shown in Fig. 12(a).
To prove this, we will show that the indices at the bottom of the pyramid (equivalently, the indices at the top of the square lattice) in Fig. 12(b) must take the value zero in order for the network contraction to give a non-zero value. Suppose some of these indices took a value of 1, −1, or ω, and let µ be the leftmost column of the square lattice with a non-zero value at the top. Then, • If the top index of µ takes value 1, the only valid tiling for µ is to use tile A_2 for all square cells. Then, the height encoded at the left boundary of µ is 2^{⌈log_2 2n⌉+1}. Since it takes at least y columns to climb in height by y, and the left boundary of the network is set to height zero, there is no valid tiling that is compatible with both the left boundary of the network and the right side of the column µ.
• If the top index of µ takes value −1, then the tiling of the pyramid network of S tensors must include some E 7 tiles. Consider the topmost E 7 tile(s). Any valid tiling of the pyramid must connect this dotted line to the top of the pyramid. Then, contracting with the |0 vector at the top index of the top S tensor will evaluate the network to zero.
• If the top index of µ takes value ω, then µ can be tiled only with A 8 . Contraction with Π at the base of µ evaluates the network to zero.
Therefore, using a tree of S tensors as the upper boundary of the square-lattice network is equivalent to contracting with |0⟩^{⊗2n}. Now, we re-prove the zipper lemma (Lemma D.1, Eq. (26)) with tensors C and S replacing tensors B and T, respectively.
Proof. First we prove the equivalence of the tensor networks shown in Eq. (27). We do this by proving an equivalence of tilings. The tilings of each network are specified by assigning values of {0, 1} to the left facing indices and values of {−1, 0, 1, ω} to the indices on the bottom. This yields a total of 64 distinct tilings. 34 of these were shown already in Fig. 11; only numbers 9 and 10 cannot be included because S does not contain the tile D 7 . The remaining 30 are shown in Fig. 13. The zipper lemma follows from sequential application of the identity in Eq. (27).
Using the zipper lemma, we can pull a horizontal layer of triangular tensors in Fig. 12(b) downwards through the tensor network. At each step, two C tensors are merged into one. Pulling down each triangular tensor is analogous to closing a zipper. The end result is shown in Fig. 12(c).
The final step to proving equivalence between the original binary-height tensor network (Fig. 2(a)) and the height renormalization tensor network (Fig. 5(a)) is to note that the top C tensor from Fig. 12(c) can be removed, since (29) holds, where the square tile in the middle network is A_4.
F Area-weighted walks
The ground state of the t > 0 area-weighted case (with open boundary conditions) is the superposition of Motzkin walks in which each walk w carries a weight t^{A(w)}. Note that A(w), the area under the walk w, is simply the sum of the heights at the vertices connecting adjacent walk segments, i.e., A(w) = Σ_k h_w(k), where h_w(k) is the height of w between the (k−1)-th and k-th segments. These heights can be defined via an eigenvalue equation involving S_{Z,m}, the spin-z operator on the m-th spin. Below we show how operators of the form t^{α S_{Z,m}} can be absorbed into the binary-height and renormalization tensor networks by redefining their building blocks.
F.0.1 Binary height network
We denote the corresponding single-site operators t^{α S_Z} by circle tensors. Then, by appending such tensors to the physical indices of the binary-height network, we can represent the area-weighted case as shown below for the 2n = 8 spin example.
(37)
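As a purely illustrative check of the weighting just described (and not of the tensor network contraction itself), the amplitude t^{A(w)} of each walk can be computed directly from its running heights; a short Python sketch:

from itertools import product

def heights(steps):
    # running height after each segment of the walk
    h, hs = 0, []
    for s in steps:
        h += s
        hs.append(h)
    return hs

def area_weighted_amplitudes(length, t):
    # unnormalized amplitude t**A(w) for every valid Motzkin walk w,
    # with A(w) the sum of the heights between adjacent segments
    amps = {}
    for w in product((1, 0, -1), repeat=length):
        hs = heights(w)
        if min(hs) >= 0 and hs[-1] == 0:
            amps[w] = t ** sum(hs)   # last height is zero, adds nothing
    return amps

# On 4 sites with t = 2: the flat walk has weight 1, while the "tent"
# walk (1, 1, -1, -1) has area 1 + 2 + 1 = 4 and weight 2**4 = 16.
print(area_weighted_amplitudes(4, 2.0)[(1, 1, -1, -1)])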
Note that the square tensors have the following symmetry with respect to the S_Z operator: (38) (in fact, each A_i tile individually satisfies this relation).
Using this identity, we can push the circle tensors onto the virtual legs one layer at a time (Eq. (39)). Once the circle tensors reach the top boundary of |0⟩ vectors, they vanish. All remaining circle tensors reside on horizontal virtual indices and have α = 2^{l−1}, where the index l indicates which row they appear in (see Eq. (37)). Next, by taking the square root, we split each such circle tensor into two with α = 2^{l−2}. Now every square tensor has a unique pair of adjacent circle tensors. The effect of these is to reweight the A_i tiles that are summed by each square, i.e., all square tensors on layer l are modified as in Eq. (40). Note that each tile picks up a factor of t^{2^{l−2}} per horizontal arrow segment. Also, the final two tiles have been included in order to later generalize to the renormalization tensor network; Eq. (40) remains valid in this case because we define S_Z|ω⟩ = 0. For the purposes of describing only the binary-height network, the final two tiles in Eq. (40) can be ignored. Thus, the binary-height tensor network can be modified to include non-unit values of t by introducing a layer-dependent reweighting of the tiles. Next we generalize this procedure to the renormalization tensor network.
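The circle tensors t^{α S_Z} are diagonal single-site operators, and the square-root splitting used above simply states that such an operator factors into two copies with half the exponent. A small NumPy sketch (assuming the spin-1 basis ordering (|1⟩, |0⟩, |−1⟩)) makes this explicit.

import numpy as np

def circle_tensor(t, alpha):
    # diagonal operator t**(alpha * S_Z) in the spin-1 basis (|1>, |0>, |-1>)
    return np.diag([t ** alpha, 1.0, t ** (-alpha)])

t, alpha = 2.0, 4.0
full = circle_tensor(t, alpha)
half = circle_tensor(t, alpha / 2.0)
# square-root splitting: t^(alpha S_Z) = t^(alpha/2 S_Z) . t^(alpha/2 S_Z)
print(np.allclose(full, half @ half))  # True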
F.0.2 Renormalization tensor network
We can append circle tensors to the bottom of the renormalization tensor network to represent the area-weighted ground state. Below is an example for 2n = 8 spins.
(41)
The circle tensors can be pushed upwards through the projectors Π because they commute. Then, they can be pushed through a layer of square tensors using Eq. (39) in the same way as described above.
The triangle tensors satisfy the following symmetry relation (42) (this relation also holds true for each individual tile E_i). From this, we can derive a "push-through" relation (43) for the circle tensors. Using relations Eq. (39) and Eq. (43), the renormalization tensor network can likewise be modified to include non-unit values of t by introducing a layer-dependent reweighting of the tiles. | 11,538.8 | 2018-06-25T00:00:00.000 | [
"Physics"
] |
Two-Stage Model-Agnostic Meta-Learning With Noise Mechanism for One-Shot Imitation
Given that humans and animals can learn new behaviors in a short time by observing others, the question we need to consider is how to make robots behave like humans or animals; that is, through effective demonstration, robots should be able to quickly understand and learn a new ability. One possible solution is imitation-based meta-learning, but most of the related approaches are limited to a particular network structure or a specific task. In particular, meta-learning methods based on gradient updates are prone to overfitting. In this article, we propose a generic meta-learning algorithm that divides the learning process into two independent stages (skill cloning and skill transfer) with a noise mechanism, and which is compatible with any model. The skill cloning stage enables a good understanding of the demonstration, which helps the skill transfer stage when the robot applies the learned experience to new tasks. The experimental results show that our algorithm can alleviate overfitting by introducing a noise mechanism. Our method not only performs well on regression tasks but also significantly outperforms existing state-of-the-art one-shot imitation learning methods in the same simulation environments (i.e., simulated pushing and simulated reaching).
I. INTRODUCTION
A. MOTIVATION
As we know, humans and animals can learn new behaviors quickly by observing or imitating others and can effectively adapt to new environmental changes by using previous knowledge and experience. We expect artificial agents to learn as fast as humans. Generally speaking, machine learning requires a large number of samples for training, while humans need only a small number of samples to learn new skills and concepts. For example, humans only need to see a few examples of cats and dogs to know the differences between their shapes and characteristics, so they can learn to distinguish between cats and dogs. Since the application environment of robots has migrated from simple settings to unstructured and complex environments, it requires a large amount of expert knowledge, and the corresponding programming has also become complicated, time-consuming, and expensive [1], [2]. Generally, we want robots to be as adaptable as humans, which is almost impossible to achieve through traditional programming. Through demonstrations, we can unambiguously communicate any manipulation task, and simultaneously provide clues about the specific motor skills required by robots to perform the task [3]. As such, the core problem of meta-learning is how to build a machine learning model or method that can quickly learn from a small number of samples.
B. LITERATURE REVIEW
In imitation learning, one of the leading methods is behavior cloning based on supervised learning (e.g., [4]). However, when faced with a changing environment, methods based on behavior cloning are often not adaptable and are prone to overfitting. Unlike behavior cloning, the main idea of reinforcement learning [5], [6] is to acquire skills through extensive trial and error, which has achieved remarkable success in many fields, such as continuous control in simulated or real environments [7]-[10], AlphaGo Zero [11], and Quake III Arena [12]. However, these reinforcement learning methods often access unsafe or undefined state spaces during training because of the need to interact with the environment and explore behavior randomly, while still not effectively solving the problem of quickly adapting to a new environment. In contrast to reinforcement learning, inverse reinforcement learning [13] selects the optimal behavior by learning a reward function [14]-[16] as an estimate, but requires additional expert knowledge to optimize rewards [16]. In general, reinforcement learning through random trial and error is a time-consuming process.
Although previous work has produced many impressive achievements in robotics, it has primarily considered each skill separately, rather than using one learned skill to speed up learning another. Based on this background, meta-learning has been proposed in various concepts or forms [17]-[19] and has been successfully applied to generative modeling [20], image recognition [21]-[25], and weight optimization algorithms [26]-[28]. However, these one-shot or few-shot meta-learning algorithms have not been applied to imitation learning. In the past, most imitation learning (learning from demonstrations) methods operated at the level of configuration-space trajectories [29]. These trajectories are usually collected by teleoperation [30], kinesthetic teaching [31], or sensors [32].
Unlike traditional methods based on manual programming, Duan et al. [3] proposed an excellent meta-learning framework that attempts to make the robot learn from very few demonstrations of any given task and instantly generalize to new situations without requiring task-specific engineering. Similarly, James et al. [33] introduced Task-Embedded Control Networks, which leverage a task embedding to learn new tasks from demonstrations with meaningful results. Furthermore, Shao et al. [34] proposed a notable method for one-shot imitation combined with object detection, which is not an end-to-end framework. Since it requires separately training a carefully designed autoencoder network, object detection network, and motion policy network, and needs additional manual labeling information, this method is complicated and hard to train. Because these methods mainly focus on a specific network structure, they may lack versatility and flexibility for other tasks.
Not aiming at a particular network, Finn et al. [35] proposed the model-agnostic meta-learning algorithm (MAML), which is simple, elegant, and applicable to various tasks. Different from previous meta-learning methods [14], [25]-[27] that learn update functions or learning rules, the MAML algorithm neither expands the number of learned parameters nor constrains the model architecture (e.g., by requiring a recurrent model [36] or a Siamese network [37]). As shown in Fig. 1 (left), the main idea of MAML is to make the model achieve its best performance on new tasks after updating the parameters through one or a few gradient steps. However, it is commonly thought that simple gradient descent can get trapped in poor local minima [38] (see Fig. 2).
Based on MAML, Finn et al. [39], [40] proposed significant work on one-shot visual imitation learning. The ideal goal of one-shot imitation is to make robots learn to infer actions in a new environment from a provided demonstration. However, unlike image classification and regression tasks that provide labeled data during training and testing, imitation learning is more complicated: it is not easy to supply real-time actions corresponding to the demonstration in practical applications. In other words, we want the model to directly infer the corresponding behaviors in a new scene without additional information beyond the teaching videos. Therefore, it is tricky to make the model deduce actions for new scenarios from a series of observations alone.
As shown in Fig. 1, the MAML algorithm for one-shot imitation (see Algorithm 1) directly adopts the output of the support tasks as a loss function for the internal gradient update of the model θ, since expert actions are not provided to the inner gradient update. Because MAML's internal gradient update has no supervision information for the inner loss, the inner loss is unpredictable (we found that the internal loss is quite large in our experiments). Therefore, it cannot guarantee that the model has fully understood the behaviors of the demonstrations (support tasks). After being updated by inner gradient descent, the model is directly forced to adapt to new scenes (target tasks). Intuition tells us that this learning model is more like seeking a direct mapping from pictures to pictures than letting the model truly understand the meaning of the pictures.
C. PAPER CONTRIBUTIONS
In order to solve the above problems, the contributions of our work are as follows:
• We propose a general two-stage model-agnostic meta-learning algorithm (TMAML, see Fig. 1 and Algorithm 2) for one-shot imitation. Compared to the MAML algorithm, TMAML divides the learning process into two independent stages:
- Skill cloning. In this stage, we try to let the model fully understand the intent of the demonstrations (support tasks) only.
- Skill transfer. In this stage, we transfer the knowledge learned from the support tasks to the new scenes (target tasks).
It should be noted that TMAML is not simply adding more inner steps to MAML, since each stage has its own outer gradient update step and each stage of TMAML is relatively independent.
In other words, we could consider skill cloning as pre-cognition or pre-training of skill transfer.
• In the process of meta-learning, we introduce a noise mechanism (see Algorithm 3) to alleviate the overfitting problem. It should be noted that our noise mechanism is different from the work based on a complex probabilistic model proposed by Finn et al. [41]. In our work, we simply inject a global noise into the model θ at the beginning of the inner gradient update, which is easy to implement and can increase the robustness of the model's adaptability.
FIGURE 1. Diagrams of our two-stage model-agnostic meta-learning algorithm (TMAML) and the MAML algorithm when we do not provide expert actions to the inner gradient update. (a) Firstly, the MAML algorithm directly adopts the output of the support tasks as a loss function for an internal gradient update of the model θ, since expert actions are not provided to the inner gradient update. Then the updated model θ* is applied to adapt to new environments (target tasks). (b) Our TMAML algorithm divides the learning process into two stages: skill cloning and skill transfer. The purpose is to ensure that the model has mastered the skills or knowledge from the given demonstrations (support tasks) and then applies the learned experience to new tasks (target tasks). In the skill cloning stage, we use the same internal gradient update strategy as MAML, but we do not directly apply the updated model to a new environment. Instead, we provide expert actions for the support tasks in the outer loss to make the model understand the actions required to complete the task according to the demonstrations from the support tasks. After that, we assume the model has already grasped the skills learned from the support tasks, and we turn to skill transfer. In the skill transfer stage, we keep the same inner gradient update as before, and we perform a skill transfer at the outer gradient update to finish new tasks (target tasks) based on experience from the support tasks. Note that we only provide expert actions to the outer gradient update in both skill cloning and skill transfer.
• In our work, we successfully apply our algorithm to regression tasks, simulated pushing tasks, and simulated reaching tasks. We demonstrate that our algorithm achieves state-of-the-art performance in the same experimental setting compared to previous advanced methods.
II. PROBLEM FORMULATION OF META-IMITATION LEARNING
For one-shot imitation, the MAML algorithm aims to train a model that can achieve rapid adaptation (a vision-based policy needed to adapt to a new scene from a single demonstration). In this section, we will define the visual meta-imitation learning problem and present the algorithm's general form.
A. PROBLEM STATEMENT
Now we consider a model, indicated by f. The function of this model is to map demonstrations or observations o to corresponding outputs or actions â. Because we aim to apply our algorithm to various meta-learning tasks, we introduce a generic notion of a learning task below for convenience. Formally, we denote each imitation task T_i = {τ = (o_1, a_1, . . . , o_H, a_H) ∼ π*_i, q(o_{t+1}|o_t, a_t), L(a_{1:H}, â_{1:H}), H}, where τ is demonstration data generated by an expert policy π*_i, q(o_{t+1}|o_t, a_t) is a transition distribution, L is a loss function used for imitation, and H is an episode length. We assume that the distribution over tasks p(T) is exactly what our model wants to learn, and that we can obtain successful demonstrations for each task. Feedback is evaluated by the loss function L(a_{1:H}, â_{1:H}) → R, which could be a cross-entropy loss for discrete actions or a mean squared error for continuous actions.
Algorithm 1 Model-Agnostic Meta-Learning (MAML) for One-Shot Imitation (differences in blue)
Require: p(T): distribution over tasks
Require: α, β: step size hyperparameters
1: randomly initialize θ
2: while not done do
3:   Sample batch of tasks T_i ∼ p(T)
4:   Divide the batch of tasks T_i ∼ p(T) into two mini-batches: A_i as support tasks, B_i as target tasks
5:   for all A_i and B_i do
6:     Evaluate the inner loss L_inner(f_θ) with respect to |A_i| examples
7:     Compute adapted parameters θ_i with gradient descent
8:     Evaluate the outer loss L_outer(f_{θ_i}) with respect to |B_i| examples
9:   end for
10:  Update θ by gradient descent on the summed outer losses
11: end while
In the K-shot learning setting, we draw K samples of a task T_i ∼ p(T) as demonstrations for the model to train on. For one-shot imitation, the model needs to learn a new task T_i drawn from p(T) from only one demonstration generated for T_i. During meta-training, the model is trained using one demonstration from the expert policy π*_i for a random task T_i sampled from p(T), and we test it on a new scene drawn from π*_i to get the test error. The learned policy π_i is improved by optimizing the test error with respect to the model parameters. Therefore, the test error serves as the training error of the meta-training process.
B. MODEL-AGNOSTIC META-LEARNING
In the meta-learning field, MAML has been successfully applied to various scenarios, such as regression, image recognition, and reinforcement learning. The MAML algorithm tries to learn a model represented by weights θ such that standard gradient descent yields fast adaptation on new tasks T_i drawn from p(T). Since the algorithm uses gradient descent as its inner optimizer, it does not need to introduce additional parameters, which makes it more parameter-efficient than other meta-learning methods. After learning from a demonstration, the model's parameters θ are updated to θ_i to adapt to a new task T_i. In particular, the updated θ_i is computed through one or more gradient descent steps. For convenience, we mainly consider the case of a single gradient update in the following sections; multiple gradient updates can be seen as an extension. During training, the model's parameters are optimized according to the test error of f_{θ_i}, where the hyperparameter α is the step size of the inner gradient descent of meta-learning. Note that the meta-optimization is performed through inner gradient descent over the parameters θ, and it uses the updated θ_i to produce results on new tasks. The test error across tasks is optimized by stochastic gradient descent (SGD).
Algorithm 2 Two-Stage Model-Agnostic Meta-Learning (TMAML) for One-Shot Imitation (differences from MAML in red)
Require: p(T): distribution over tasks
Require: α, β, γ, δ: step size hyperparameters
1: randomly initialize θ
2: while not done do
3:   Sample batch of tasks T_i ∼ p(T)
4:   Divide the batch of tasks T_i ∼ p(T) into two mini-batches: A_i as support tasks, B_i as target tasks (A_i ≠ B_i)
5:   while Skill Cloning do
6:     for all A_i do
7:       Evaluate L_inner(f_θ) with respect to |A_i| examples
8:       Compute adapted parameters with gradient descent
9:       Evaluate L_outer(f_{θ_i}) with respect to |A_i| examples
10:    end for
11:    Update θ with the outer gradient (step size β)
12:  end while
13:  while Skill Transfer do
14:    for all A_i and B_i do
15:      Evaluate L_inner(f_θ) with respect to |A_i| examples
16:      Compute adapted parameters with gradient descent
17:      Evaluate L_outer(f_{θ_i}) with respect to |B_i| examples
18:    end for
19:    Update θ with the outer gradient (step size δ)
20:  end while
21: end while
Algorithm 3 MAML for One-Shot Imitation With the Noise Mechanism
Require: p(T): distribution over tasks
Require: α, β, σ: step size hyperparameters
1: randomly initialize θ
2: while not done do
3:   Sample batch of tasks T_i ∼ p(T) and random vector g_i ∼ N(0, I)
4:   Divide the batch of tasks T_i ∼ p(T) into two mini-batches: A_i as support tasks, B_i as target tasks
5:   for all A_i and B_i do
6:     Compute parameters with the noise vector: θ* = θ + σ g_i
7:     Evaluate L_inner(f_{θ*}) with respect to |A_i| examples
8:     Compute adapted parameters with gradient descent
9:     Evaluate L_outer(f_{θ_i}) with respect to |B_i| examples
10:  end for
11:  Update θ with the outer gradient
12: end while
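For readers who prefer code to equations, one meta-iteration of this scheme can be sketched as follows. The sketch is our own illustration in plain NumPy, uses a first-order approximation (it ignores the second-order terms of the full MAML meta-gradient), and takes the task-loss gradient as a caller-supplied function; it is not the authors' implementation.

import numpy as np

def maml_step(theta, tasks, grad_fn, alpha=0.01, beta=0.001):
    # One meta-iteration (first-order variant).
    # theta:   parameter vector (np.ndarray)
    # tasks:   list of (support_data, target_data) pairs
    # grad_fn: grad_fn(theta, data) -> gradient of the task loss at theta
    meta_grad = np.zeros_like(theta)
    for support, target in tasks:
        # inner update: adapt to the support data with one gradient step
        theta_i = theta - alpha * grad_fn(theta, support)
        # outer objective: loss of the adapted parameters on the target data
        meta_grad = meta_grad + grad_fn(theta_i, target)
    return theta - beta * meta_grad / len(tasks)

# Toy check on 1-D quadratic losses L(theta; c) = (theta - c)**2.
grad_fn = lambda th, c: 2.0 * (th - c)
theta = np.array(0.5)
for _ in range(200):
    theta = maml_step(theta, [(1.0, 1.0), (-1.0, -1.0)], grad_fn)
print(float(theta))  # drifts towards 0, midway between the two task optima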
C. MODEL-AGNOSTIC META-LEARNING FOR ONE-SHOT IMITATION
In this section, we detail the extension of the model-agnostic meta-learning algorithm (MAML) to the imitation learning setting. Please note that we only provide visual information as a demonstration (support task); no other action or state information is included in the inner gradient update because such information is tricky to collect in practical applications.
We use o to represent the input (e.g., a video) of the model, o t represents the agent's observation at time t (e.g., an image), andâ = f θ (o t ) indicates the predicted output (e.g., torques) at time t. For simplicity, we denote a trajectory of demonstration as τ = {o 1:T , a 1:T }.
We assume that each task consists of at least two demonstrations for meta-learning. In meta-training, we randomly sample a batch of tasks T i , and each task consists of two demonstrations. For each task, we make a demonstration as the support task (A i ) and another one as the target task (B i ).
In the inner gradient update phase, we only provide visual information to the model without expert actions, and the inner loss L_inner(f_θ) is evaluated over A_i, a minibatch of tasks drawn from T_i ∼ p(T).
Although we do not provide expert actions in the inner gradient update for one-shot imitation, the expert actions are needed in the outer gradient update to obtain an objective loss. So in the outer gradient update phase, the expert actions are provided along with the visual information to the model, and the outer loss L_outer(f_{θ_i}) is evaluated over B_i, a minibatch of tasks drawn from T_i ∼ p(T) (B_i ≠ A_i). We summarize the algorithm of MAML for one-shot imitation in Algorithm 1.
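To make the asymmetry between the two phases concrete, the following sketch contrasts an inner loss computed from the policy outputs alone with an outer loss computed against expert actions. The particular form of the inner objective (mean squared output) is a placeholder of our own choosing; the paper only specifies that the inner loss is built from the model's output on the demonstration, without expert actions.

import numpy as np

def inner_loss(policy_outputs):
    # adaptation signal computed from the network's own outputs on the
    # demonstration video; no expert actions are available here (the
    # mean-squared-output form is a placeholder of our own)
    return np.mean(policy_outputs ** 2)

def outer_loss(policy_outputs, expert_actions):
    # imitation objective on the target task, where expert actions ARE
    # provided: mean squared error for continuous actions
    return np.mean((policy_outputs - expert_actions) ** 2)

# dummy example: H = 5 timesteps, 2-dimensional torques
pred = np.random.randn(5, 2)
expert = np.random.randn(5, 2)
print(inner_loss(pred), outer_loss(pred, expert))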
III. IMPROVED ALGORITHM
The MAML algorithm has achieved excellent results in regression, image classification, image super-resolution [42], etc. However, unlike these fields, behavior imitation based on vision is a very troublesome problem. For example, in the image classification task, we can easily provide the classification label for reference. Due to the randomness and unpredictability of human behavior, it is not easy to provide information (e.g., actions and states) other than visual information for the model in real-time without the assistance of additional devices. In terms of practical significance, humans can learn new behaviors only based on visual information, so we also hope that agents can complete tasks as efficiently as humans.
Since we tend to provide only visual information to the model in the inner gradient update phase of the MAML algorithm, the model loses supervision information as a reference during the intermediate learning process. In other words, the model is unable to fully understand what tasks the demonstrations are completing. On the contrary, we think this amounts to forcibly learning a mapping between support tasks and target tasks, rather than true understanding. Table 1 lists the success rates of one-shot imitation for simulated pushing with varying demonstration information provided at test time, and Table 2 lists the success rates of one-shot imitation for simulated reaching with different demonstration information provided at test time. The results of these methods show that as the information provided in the demonstrations (support tasks) decreases, the model's cognitive ability declines sharply. Based on this, we hope to propose a method that can further understand the visual information from demonstrations and give the model sufficient generalization capability.
A. TWO-STAGE MODEL-AGNOSTIC META-LEARNING ALGORITHM
As shown in Algorithm 2, we divide meta-learning during training into two stages: (i) skill cloning and (ii) skill transfer. As with the MAML algorithm, in order to ensure the generality of the algorithm, we do not make any assumptions about the form of the model, but assume that it is parameterized by some parameter vector θ and that the loss function is smooth enough in θ that we can use gradient-based methods. We use f_θ to represent the model function with respect to θ. At the beginning of training, we randomly sample tasks T_i ∼ p(T) and divide them equally into two mini-batches: A_i as support tasks and B_i as target tasks.
Different from MAML, we first perform skill cloning to let the model understand what actions to complete based on the demonstrations. Because the model does not need to switch learning scenarios (from support tasks to target tasks) at this stage, it is easier for it to understand the purpose of the tasks from the demonstrations. An intuitive analogy: we often practice fundamental problems repeatedly instead of directly attempting new questions; once we have summarized the underlying pattern, we apply it to new, similar problems without confusion.
1) SKILL CLONING
In the skill cloning stage, the model only learns from demonstrations A_i ∼ p(T) (which serve as both support tasks and target tasks) and performs internal gradient updates without supervision information (e.g., actions and states) for inner learning, where the hyperparameter α stands for the step size of inner meta-learning. Here we mainly consider the case of one inner gradient update for convenience and directly adopt the internal output as a loss [40]; more gradient updates could be used to produce better results. After the internal meta-learning, the model's parameters θ are updated to θ_i, and the performance of f_{θ_i} with respect to θ is then optimized via the target tasks A_i ∼ p(T). Note that the meta-objective is to finish the target task according to the support task, so we must provide supervision information for the outer loss L_{A_i}(f_{θ_i}); more concretely, L_{A_i}(f_{θ_i}) differs from L_{A_i}(f_θ) in that it is evaluated against the expert actions, as in Eq. (7). According to the external loss calculated by Eq. (7), the final step of skill cloning is to use stochastic gradient descent (SGD) to optimize the model parameters θ, where the hyperparameter β stands for the meta step size of the external meta-learning; we call this step the outer gradient update in this article. Although we do not provide supervision information in the internal gradient update, we can offer it directly in the outer gradient update. Notably, the support tasks and target tasks are kept the same in the skill cloning stage. Since there is no need to transfer from one scene to another during training in this phase, it is more natural for the robot to understand the tasks without facing the ambiguity of different demonstrations (e.g., a pair of related videos A and B).
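A compact sketch of one skill-cloning iteration is given below (our own first-order, NumPy-style illustration; inner_grad and outer_grad are hypothetical helpers returning the gradients of the unsupervised inner loss and of the supervised imitation loss, respectively). The point to notice is that the same demonstration A_i is used for both the inner and the outer step.

import numpy as np

def skill_cloning_step(theta, support_tasks, inner_grad, outer_grad,
                       alpha=0.001, beta=1e-6):
    # support_tasks: demonstrations A_i, used as support AND target here
    # inner_grad(theta, demo): gradient of the unsupervised inner loss
    # outer_grad(theta, demo): gradient of the supervised imitation loss
    #                          (expert actions are available only here)
    meta_grad = np.zeros_like(theta)
    for demo in support_tasks:
        theta_i = theta - alpha * inner_grad(theta, demo)  # inner adaptation
        meta_grad = meta_grad + outer_grad(theta_i, demo)  # same demo A_i
    return theta - beta * meta_grad / len(support_tasks)   # outer update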
2) SKILL TRANSFER
After completing a skill cloning stage, we assume that the model has understood the demonstrations of support tasks, and we turn it to the skill transfer stage. In the skill transfer stage, the model not only learns from demonstrations A i ∼ p(T ) (which serve as support tasks) but also apply the learned information to the new environments B i ∼ p(T ) (which serve as target tasks).
In the internal gradient update, the skill transfer stage and the skill cloning stage are consistent, where the hyperparameter γ stands for the step size. Because we have trained on A_i ∼ p(T) independently in the skill cloning stage, we assume that the model has learned the corresponding skills in the internal gradient update, and these can be transferred to a new environment in the outer gradient update. Therefore, the meta-objective of the skill transfer stage is to optimize the loss for the new tasks, Eq. (10). According to the loss calculated by Eq. (10), the final step of skill transfer is to perform the meta-optimization through stochastic gradient descent (SGD) to update the model parameters θ, where the hyperparameter δ is the meta step size of the skill transfer stage. In the process of TMAML, we alternately perform skill cloning and skill transfer in each iteration (we do not provide supervision information for imitation learning in the internal updates of either stage). The most significant difference between them is that the former stage updates the model parameters θ based only on tasks A_i sampled from p(T); in this stage, the model can learn skills without ambiguity since there is no scene transition. The latter stage updates the model parameters θ based on both tasks A_i and B_i sampled from p(T), with a scene transition. In the following experiments, we will show that skill cloning promotes the understanding achieved in skill transfer.
Please note the TMAML is not simply adding more internal steps to MAML for one-shot imitation, since we train the skill cloning and skill transfer separately. Moreover, the support tasks' supervision information of MAML in our one-shot imitation setting is never provided in the internal step during training. On the contrary, we cleverly offer it in the outer gradient update of the skill cloning stage to help the understanding of the demonstration.
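Putting the two stages together, one meta-iteration of TMAML alternates a skill-cloning pass and a skill-transfer pass, each with its own outer update. The sketch below is again schematic and first-order, reuses the hypothetical inner_grad/outer_grad helpers described above, and is not the authors' implementation; expert actions enter only through outer_grad.

import numpy as np

def tmaml_iteration(theta, task_pairs, inner_grad, outer_grad,
                    alpha=0.001, beta=1e-6, gamma=0.001, delta=0.001):
    # task_pairs: list of (A_i, B_i), with A_i the support demonstration
    # and B_i a demonstration of the same task in a new scene (A_i != B_i)
    n = len(task_pairs)

    # stage 1: skill cloning (support tasks only)
    grad = np.zeros_like(theta)
    for A, _ in task_pairs:
        theta_i = theta - alpha * inner_grad(theta, A)
        grad = grad + outer_grad(theta_i, A)   # supervised on the SAME demo
    theta = theta - beta * grad / n            # first outer update

    # stage 2: skill transfer (support -> target)
    grad = np.zeros_like(theta)
    for A, B in task_pairs:
        theta_i = theta - gamma * inner_grad(theta, A)
        grad = grad + outer_grad(theta_i, B)   # supervised on the NEW scene
    theta = theta - delta * grad / n           # second outer update

    return theta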
B. META-LEARNING BASED ON NOISE MECHANISM
Since the MAML algorithm relies heavily on the gradient direction provided by the demonstrations, the accuracy of that gradient direction becomes extremely important. However, as mentioned before, methods based on gradient updates often encounter overfitting problems, especially when facing new, unseen tasks or demonstrations. For example, we first performed regression experiments based on MAML and found that after the number of gradient steps was increased beyond a certain point, the performance did not improve as we expected. We believe that the internal gradient updates had overfitted, especially in the case of simple tasks.
As such, we introduce a noise mechanism into meta-learning. As shown in Fig. 2, suppose a small ball on a hill gets trapped in a poor local minimum. If we apply an external force to the little ball, it may move and slide towards the global minimum. Here we compare the parameters θ to be optimized to the small ball, and the noise is analogous to the external force exerted on it. More concretely, we add some random noise to the model's parameters θ at the beginning of each internal gradient update during training, θ* = θ + σ g_i, where σ is a step size and g_i is a random Gaussian noise vector. Since we first perturb the model's parameters θ with noise, we can alleviate the model's overfitting and increase its generalization ability. Based on the updated θ*, we then perform the internal gradient update. We describe the specific algorithm in Algorithm 3.
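The noise injection itself amounts to a one-line modification of the inner step. A minimal sketch (the Gaussian form and the step size σ follow Algorithm 3; the surrounding helper structure is our own):

import numpy as np

def noisy_inner_update(theta, demo, inner_grad, alpha=0.001, sigma=0.01,
                       rng=None):
    # perturb the parameters with global Gaussian noise, then take the
    # usual inner gradient step from the perturbed point; sigma is set
    # to 0 at test time
    rng = np.random.default_rng() if rng is None else rng
    g = rng.standard_normal(np.shape(theta))
    theta_star = theta + sigma * g              # theta* = theta + sigma * g_i
    return theta_star - alpha * inner_grad(theta_star, demo)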
IV. EXPERIMENTAL RESULTS AND ANALYSIS
We mainly conduct experiments in three tasks: regression, simulated reaching, simulated pushing. We design these experiments to answer the following questions: • What is the performance of TMAML compared with previously mentioned methods?
• What effect does skill cloning have on skill transfer in our TMAML algorithm?
• What is the performance of merely training the MAML algorithm twice compared to the skill cloning phase of our TMAML algorithm?
• How does the noise mechanism affect the performance of meta-learning? In order to make our algorithm more convincing, we will compare it with some existing state-of-the-art experimental methods with the same problem settings and network structures.
A. REGRESSION
We first begin with a simple regression problem and compare it with MAML (see Fig. 5), and the experimental settings are consistent with [35]. In the regression task, we map each input x to a specific sine wave output f (x), where the amplitude and phase of the sinusoid are randomly sampled. Please note that the distribution of p(T ) is continuous, where the amplitude takes a value within [0.1, 5.0] and the phase takes a value within [0, π].
During meta-learning, the input x consists of K data points, randomly sampled within [−5.0, 5.0]. We use the mean squared error (MSE) to evaluate the loss between the true output y and the predicted output f_θ(x), where f_θ is a neural network regressor with two hidden layers of size 40 and ReLU activations. In training, we use ten internal gradient updates with K = 10 examples and use Adam [43] as the meta-optimizer (see [35] for more details regarding the training and settings of MAML). For TMAML, we set α = γ = δ = 0.001 and β = 10^-6. Moreover, we also combine TMAML with the noise mechanism and set σ in Algorithm 3 to 0.01 for training (0.0 for testing).
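For reference, the sinusoid task distribution and the regressor described above can be reproduced in a few lines. The sketch below uses plain NumPy, matches the stated two-hidden-layer ReLU architecture and sampling ranges, and uses an initialization scheme of our own choosing.

import numpy as np

def sample_sine_task(rng):
    # amplitude in [0.1, 5.0], phase in [0, pi]
    amp = rng.uniform(0.1, 5.0)
    phase = rng.uniform(0.0, np.pi)
    return lambda x: amp * np.sin(x + phase)

def sample_k_shot(task, k, rng):
    x = rng.uniform(-5.0, 5.0, size=(k, 1))
    return x, task(x)

def init_regressor(rng):
    # two hidden layers of 40 units, ReLU activations, scalar output
    sizes = [1, 40, 40, 1]
    return [(rng.standard_normal((a, b)) * 0.1, np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    h = x
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:
            h = np.maximum(h, 0.0)   # ReLU on hidden layers only
    return h

rng = np.random.default_rng(0)
task = sample_sine_task(rng)
x, y = sample_k_shot(task, k=10, rng=rng)
params = init_regressor(rng)
print(np.mean((forward(params, x) - y) ** 2))  # MSE before adaptation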
The experimental results (see Fig. 5(b) and (c)) show that as the number of internal gradient steps of the MAML algorithm increases, its performance sometimes does not improve but instead overfits. In contrast, our ''TMAML + noise'' algorithm can fit the regression well even when the provided demonstration used for meta-learning contains some inaccurate points. We show that our algorithm achieves significant performance gains, and that conventional gradient-descent methods cannot adapt to changing regression problems without additional gradient update strategies. Moreover, we performed an ablation experiment on the MAML algorithm, which shows that merely increasing the number of training passes with the same samples does not significantly improve performance, thereby confirming our algorithm's effectiveness. From this regression experiment, we conclude that our algorithm is superior to the previous methods: the introduction of the noise mechanism increases the robustness of the model and alleviates overfitting, and the skill cloning step promotes skill transfer.
Figure: Examples of simulated pushing. This experimental dataset is also provided by [39] and [40]. In the simulated pushing task, the robot should push the object to a specific place after watching a demonstration (support task) with a changeable goal in a different scene.
B. SIMULATED IMITATION
Unlike the previous regression problem, we hope our method can map image pixels to corresponding actions without labels or expert actions from demonstrations (support tasks). Since our algorithm is not aimed at a specific model structure, we adopt the excellent network structure proposed in [40] as an experimental model. Based on one-shot imitation learning, we mainly carry out experiments in simulated reaching and simulated pushing.
1) SIMULATED REACHING
As illustrated in Fig. 3, the goal of simulated reaching is to reach a target of a particular color after watching a demonstration (support task), in the presence of distractors of different colors. For the simulated reaching problem, we set α = γ = β = δ = 0.001, batch_size = 25, and training_iterations = 30000, and the number of convolution layers is 5 with 30 (3 × 3) filters in the model proposed by [40] for TMAML. To combine TMAML with the noise mechanism, we set σ = 1e-8 during training and σ = 0.0 during testing, and set the number of internal gradient updates to 1.
2) SIMULATED PUSHING
As illustrated in Fig. 4, the goal of simulated pushing is to push a particular object to a specific place after watching a demonstration (support task), in the presence of distracting objects. For the simulated pushing problem, we set α = γ = 0.01, β = δ = 0.001, batch_size = 20, and training_iterations = 30000 in the model proposed by [40] for TMAML. Regarding the noise mechanism, we set σ = 1e-10 for ''MIL (temporal loss + noise)'' and σ = 1e-12 for ''TMAML + noise'' applied to skill transfer only, and we set the number of internal gradient updates to 1.
C. DATA ANALYSIS
Regarding one-shot imitation learning, we show the experimental results in Fig. 6 and Table 3. We now combine the data to answer the four questions proposed above: • Question: What is the performance of TMAML compared with the previously mentioned methods? -- Answer: As shown in Table 3, we achieve an accuracy of 83.33% in simulated pushing and 95.28% in simulated reaching, which is better than the current advanced methods.
• Question: What effect does skill cloning have on skill transfer in our TMAML algorithm? -- Answer: As shown in Fig. 6, we record the training loss of the simulated reaching task. We find that the loss in the skill cloning stage drops very rapidly and gradually flattens, which indicates that the model has understood the content of the demonstrations (support tasks) without much ambiguity. With the assistance of skill cloning, the loss in skill transfer drops fast and becomes relatively steady. However, the loss of the other methods fluctuates greatly, which shows that it is difficult for the model to transfer learning between scenarios (from support tasks A_i to target tasks B_i) directly.
Figure 6 caption (fragment): [40], which corresponds to our skill cloning and skill transfer, respectively.
• Question: What is the performance of merely training the MAML algorithm twice compared to the skill cloning phase of our TMAML algorithm? -- Answer: As illustrated in Fig. 6, in terms of loss, merely training the MAML algorithm twice is not superior to TMAML. Combining this with the results in Table 3, we find that although this method reduces the loss in training, the accuracy drops to 79.50% in the pushing task and 84.23% in the reaching task. We infer that this approach easily overfits the training set, which produces poor results on the test set.
• Question: How does the noise mechanism affect the performance of meta-learning? --Answer: First, let us focus on Fig. 5 (c). We find that as the number of internal gradients increases, the MAML-based method's loss does not decrease as we expected. However, with the introduction of noise mechanism, the overall loss of the model has been further improved. In Table 3, we find the accuracies are improved in both pushing and reaching of ''MIL, temporal loss + noise(ours)''. As for ''TMAML + noise(ours)'', the accuracy of reaching increases to 95.47%, but it drops to 82.88% in pushing task. According to these results, we infer that the noise mechanism can promote model performance and alleviate the gradient-based overfitting in some cases (i.e., simulated reaching and regression). However, it should be noted that sometimes random noise can also reduce performance, especially for tasks or environments that are particularly sensitive to noise. For example, simulated pushing is sensitive to any changes since we amplify the loss by 50 times for training.
V. CONCLUSION AND FUTURE WORK
In this study, we presented an effective meta-learning method that is universal across various tasks without requiring a specific model, and that can quickly adapt to new and unseen scenarios based on demonstrations. We demonstrated the effectiveness of our approach on simulated pushing, simulated reaching, and regression tasks with state-of-the-art results. The experimental results show that our method achieves a better understanding of visual information, which allows it to effectively generalize knowledge and transfer it to new application scenarios of the same task. We also introduced a noise mechanism for the overfitting problem, which can further improve model performance at low cost. There are many meaningful research directions for the future, such as cross-task experience sharing and knowledge reuse with a universal algorithm. We plan to extend the algorithm to multitasking mechanisms. As we know, humans can handle a variety of tasks at the same time. Intuition tells us that there may be some common knowledge between different tasks that can be shared and reused quickly, and how to efficiently exploit the commonality between different tasks while avoiding mutual interference will be an essential topic in the future. Further, and most importantly, we hope to explore how to distill previously acquired learning experience and quickly apply it to entirely new and different unseen tasks.
He is currently an expert in intelligent manufacturing and energy system engineering. In 1994, he joined the American Energy and Power Research Center, ABB Group. He served as a Project Manager, a Senior Researcher, the Research Center Director, and the Chief Scientist with the ABB Group. He presided over the development of ABB IRB's third- and fourth-generation robot controllers. He invented flexible, intelligent control technology and completed various ABB controllers, from motion controllers to force-vision hybrid controllers. He made a significant contribution to the transformation and upgrading of smart controllers based on behavioral intelligence, so that ABB's controller performance ranked first in the area of industrial robots. Additionally, he has presided over one major demonstration project in China and two international scientific and technological cooperation projects. He holds 15 U.S. patents and 126 Chinese patents. He has published more than 40 articles and three books. His research interests include the underlying theory and critical technology of swarm intelligence, autonomous intelligent robots, and flexible automatic control.
Dr. Gan received the People's Republic of China International Science and Technology Cooperation Award. He plays a vital role at Fudan University, where he is the President of the Institute of Intelligent Robotics and the Vice President of the Institute of Engineering and Applied Technology. Meanwhile, he is the President of the Academy of the Intelligent Manufacturing Industry and the Emergent Group in Ningbo, China.
WEI LI received the B.Eng. degree in automation and the M.Eng. degree in control science and engineering from the Harbin Institute of Technology, China, in 2009 and 2011, respectively, and the Ph.D. degree from the University of Sheffield, U.K., in 2016. He is currently an Associate Professor with the Institute of AI and Robotics, Fudan University. He has published more than 20 academic articles in peer-reviewed journals and conferences, such as the IEEE TRANSACTIONS ON ROBOTICS and NeurIPS. His research interests include robotics and computational intelligence, specifically self-organized systems and evolutionary machine learning.

XUSHENG WANG (Student Member, IEEE) received the master's degree from the University of International Business and Economics. He is currently pursuing the Engineering degree, majoring in electronic information, with Fudan University. He has been awarded the Honor of Outstanding Inventor in Hebei Province, China. He has participated in many national, provincial, and ministerial critical scientific research projects. He has published many academic articles and applied for 25 patents, including 19 invention patents and ten authorized patents. His research interests include medical service robots and swarm intelligence. | 8,904 | 2020-01-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
PATENT ANALYSIS FOR COMPETITIVE TECHNICAL INTELLIGENCE AND INNOVATIVE THINKING
Patents are a very useful source of technical information. The public availability of patents over the Internet, with, for some databases (e.g., Espacenet), the assurance of a constant format, allows the development of high-value-added products using this information source and provides an easy way to analyze patent information. This simple and powerful tool facilitates the use of patents in academic research, in SMEs, and in developing countries, providing a way to use patents as an ideas resource and thus improving technological innovation.
INTRODUCTION
Patents are a wide field, where techniques, products, applications, and legal considerations are thoroughly mixed. Most of the time, this field is also dedicated to industrial users; the academic community, for example, does not cite patents very often. Nevertheless, patents are a unique source of information, since most of the data and information published in patents are not published elsewhere. However, using and managing a set of patents is rather complicated, because most of the tools available today are either expensive and complicated or require strong expertise in the field of intellectual property. The cost of patent databases is very high if people want to perform complete searches (involving a large number of patents) or to automatically establish relationships between patents, and most of the time it is out of reach of medium-sized enterprises (Dou, 1997), academic laboratories, or developing countries. Having worked in the field of Competitive Intelligence (CI) and Competitive Technical Intelligence (CTI) for more than 20 years, we have had the opportunity to use patents in many circumstances, and from these uses we have developed the basic knowledge to design and develop various tools to integrate patent data into Competitive Intelligence, Competitive Technical Intelligence, and innovative thinking (Quoniam, 1993).
A large literature exists on using patents to build up various indices related to R&D, to ascertain the quality of inventions, to compare patent production across countries, to evaluate the R&D policy of firms within or outside a country, etc. The following references will provide the reader with information about these uses: (Lederberger & Kurt, 2003), (Youichirou, 2002), (Pilkington, 2002), (Biju & Soumyo, 2001), (Karki, 1997), (Ernst, 1998). In our opinion, patents could also be used to promote ideas and to view the background of an invention. To do this, bibliometric analysis of various sets of patents is one of the best tools. In the following section, we present an overview of the software available, focusing on the tools that give the best compromise between features, price, and results.
PATENTS AVAILABILITY AND STRUCTURE
If we want to increase the use of patents among new groups of people, it is necessary to provide a very simple and ergonomic system that can:
• perform easy patent searches over the Internet;
• automatically download sets of selected patents;
• structure these patents in a local database to facilitate automatic analysis;
• enable the updating of this database;
• present the reader with all the information fields needed to support their technological thinking;
• provide the reader with automatic analyses mapping most of the interactions (applicants, IPC, inventors, dates, …) found in the patent data;
• print out and save the analyses;
• regroup patents into families according to the user's interests;
• selectively annotate a patent;
• automatically write a report (Word format) for integration into a CI or CTI report;
• retrieve, on demand, the full text or first page of a patent.
Various software available in the patent field
Much software has been developed in the patent field to provide various facilities for specialists, researchers, and R&D practitioners. In the following paragraphs we describe various utilities and indicate most of their features, to give the reader a functional analysis of the tools. Most of the software packages are briefly described and can be reached through the following gateway: http://www.ipmenu.com/ipsoftware.htm . This gateway structures the available software into various categories: IP Software, IP Drafting Software, IP Filing Software, IP Management Software, IP Miscellaneous Software. We present here some large extracts from this site; we add, where available, the price of the software, and we also indicate whether the software deals with the bibliometric and innovative aspects of patent analysis. Other information can be obtained from the PIUG (Patent Information Users Group: http://www.piug.org/vendor.html#bmTools ), which presents various software packages and vendors, with an emphasis on online analysis from various hosts.
IP Analysis Software
• ANAQUA http://www.anaqua.com/Home.asp "ANAQUA is a web based Enterprise Intellectual Asset Management system (including licensing, litigation & conflict resolution). It enables IP practitioners to control the complete life-cycle of their IP (in addition to docketing). ANAQUA increases revenue, decreases costs & risks and eliminates duplication of effort & re-keying of data. It facilitates collaboration between all involved in the IP process, including brand managers, business units, law firms and (for patents) inventors. ANAQUA has been used by Ford & British American Tobacco since Sept 2002 and has more than 5000 users (including over 200 external law firms)." A global platform for managing all types of IP throughout the lifecycle: unlike many IP systems that focus on one area of IP management, ANAQUA fully manages all IP rights including patents, trademarks, designs, domain names, and copyrights. This approach dramatically improves on traditional siloed records management or docketing systems. IP leaders gain visibility into their global portfolio and the knowledge needed to proactively support their firm's business strategy.
• AURIGIN http://www.aurigin.com/static/index.htm The Aurigin Intellectual Property Asset Management System (IPAM) provides a set of analysis tools and databases for organizing and analyzing the intellectual assets in a company's intellectual asset portfolio. Micropatent and Patsearch are also available through its host.
• CREAX Information Technologies http://www.creax.com/ "CREAX, as Innovation expert, is a soundboard for individuals, companies and organisations, for everything concerning innovation, creativity and problem-solving. CREAX helps individuals, companies and organisations innovate products, services and processes in a systematic way, resulting in sustainable benefit, higher efficiency and reduced technical and financial risk. The company is focused on patent-based research and Systematic Innovation methodology based on TRIZ (Russian acronym for Theory of Inventive Problem Solving)."
• Gene-IT's GenomeQuest http://www.gene-it.com/GenomeQuest-Patent.html "GenomeQuest allows IP bioanalysts to quickly establish freedom-to-operate and easily monitor competitor sequence IP positions. GenomeQuest works similar to premier web search engines, rapidly bubbling relevant records to the top. The software automates the reporting of the most important and relevant matches, minimizing tedious research, while providing advanced search algorithms so no relevant sequences are ever missed."
• DATAVIEW http://crrm.u-3mrs.fr Bibliometric software which analyses patents according to the various patent fields available in a formatted patent database.
• MAPIT http://www.mnis.net/ Manning and Napier Information Services provide a patent data mining tool, which searches and analyzes United States, international and other patent data.
Allows access to IP.COM (http://www.ip.com/).
• MAPOUT http://www.mapout.se/ "MapOut Pro® is a software (patent pending) that works as a hub for strategic information where you can gather information from most of the available bibliographic patent-, technology- and business databases."
• Matheo Analyzer and Matheo Patent http://www.imcsline.com http://www.matheo-patent.com Matheo Analyzer is "database processing software: importation of your data, statistical analysis, visualization (graphs, networks, matrices)". Matheo Patent "is a software designed to exploit quickly and professionally the EspaceNet and USPatent patent database." This software automatically performs bibliometric analysis of patents according to their field structure (applicants, inventors, IPC, ECLA, priority dates, application dates, or groups of patents defined by the users) and presents histograms, charts, matrices, and networks. It allows you to build up and update local patent databases from Espacenet and USPatent. Automatic reports may be designed, as well as comments.
Databases in text format may be made from any local databases built up from Matheo Patent.
Matheo Patent is available on a yearly subscription of €600.
Matheo Analyzer is available at a cost of €2,500.
• Patentmaps.com http://patentmaps.com/3i/ "Patentmaps.com provides technology intelligence consultancy services to deliver patent based technological and industrial intelligence. We gather information that you need and systematically analyze them to ensure that you have concise and actionable intelligence. In particular, we specialize in patent mapping and analysis." An evaluation copy is available.
• PatentEase http://www.patentseminars.com/main.asp?mainpage=softwaredescriptionsbody.asp&navigation= descripmenu.asp%22&menu=Software "With the PatentEase program you can draft your own patent application and file it in the United States Patent and Trademark Office (PTO) without the aid of a patent attorney or patent agent. This program guides you through the entire process of organizing your drawings and written description. It even includes documentation known as "formal papers," which you or your attorney must send to the PTO along with your application." Available through IPBookStore.
"If you love calculating patent fees and renumbering claims, by hand, then this software and this site are not for you. However, if you've got better things to do, the MightyMacro™ is for you!" • PatentPRO http://www.patentpro.us/ PatentPRO is a patent drafting package from KernelCreations, Ltd, for patent applications in the USA.
• PatentWizard http://www.patentwizard.com/ PatentWizard, by Michael S. Neustal of Neustal Law Offices, LTD, is a software program that helps businesses and inventors prepare and file a U.S. Provisional Patent Application with the U.S. Patent Office. The cost of PatentWizard is US$249. A free version (PATENTHUNTER 2.0) to download US patents is available.
IP Filing Software
• BMBConnect http://workflow.bmb-bbm.org/ BMBConnect is a combination of some webservices and a webserver consumer tool for e-filing of trademarks for the Benelux Trademark Office (BTO). The webserver consumer can be configured to read data for a trademark application from a 3rd party IP management software system and send it as XML to the BTO.
The BMB site is available in French and Dutch only.
• IP Document Assembly System (IPDAS) http://www.ipdas.com/efiling.asp "E-filings are easy, fast, and require no special user training using IPDAS, the industry's leading document assembly system for both paper-based and electronic filings. When e-filing, IPDAS users simply write or cut-and-paste into the blank MS Word®-based specification template and submit their entire e-filing package directly to the USPTO from within the IPDAS program, where the specification is stored and later available so that IPDAS can automatically format amendments and renumber claims. The fully automated e-filing submission process (automatic conversion of documents to XML files and data validation) takes no more than four minutes. The USPTO's immediate acknowledgement receipt is stored and automatically populated into a client-reporting letter when generated. IPDAS knows to send an email notification with filing receipt information to specified recipients. AutoDocs, LLC is a USPTO Electronic Filing Partner and the first EFS partner vendor to offer a commercial e-filing service."
• IP LegalForm http://www.legalstar.com/ IPLegalForm by LegalStar provides electronic patent, trademark, and service mark forms accepted by the USPTO (United States Patent and Trademark Office); copyright forms approved for use by the U.S. Copyright Office; and PCT forms approved for use by the World Intellectual Property Organization (WIPO).
Basically, this is free IP filing software; you pay only for the filings you make. Various other software packages are available, priced between US$600 and US$1,000. For instance, Invention Disclosure Management Software (to disclose inventions through the Internet) costs US$995.
• ipWorkflow http://www.aspengrove.net/ Aspen Grove markets an IP application and tracking management system called ipWorkflow. Aspen Grove was recently chosen by the USPTO as one of five companies to integrate electronic filing.
• CPA Software Solutions http://www.cpasoftwaresolutions.com/home.asp CPA Software Solutions is a leading provider of Intellectual Property (IP) software solutions for companies and IP attorney firms. We offer customised IP software to manage IP portfolios, including patents, trademarks, domain names.
• CPI Patent Management System http://www.computerpackages.com/ The CPI Patent Management System allows users to track information necessary from the disclosure stage, into filing and prosecution, and then into issuance and maintenance for patents. This system is provided by Computer Packages, Inc.
• CPI Trademark Management System http://www.computerpackages.com/ The CPI Trademark Management System allows users to track information necessary, from the proposal stage, into filing and prosecution, and then into registration and renewals for trademarks. This system is provided by Computer Packages, Inc.
• Edital-Intellectual Property Network "Edital-IPN have been innovating in the field of IP management systems for the last 16 years. They produce WorldSuite, a client-server system to manage office workflow as well as Trademarks, Patents, Designs and Domain Names. Used by international firms who share their information across the globe, WorldSuite offers a very high degree of automation. For smaller portfolios, WorldMark Plus is a trademarks-only programme. Edital also maintains a website, http://www.edital.com/, with information tools in the fields of Trademarks and Patents."
• EP Mark (note from the authors: could not be found on the Internet) "EPMARK, solution for the management of trademarks portfolios, for network systems for WINDOWS 00/98/95/NT or 3.11 (local network system, Ethernet type, TCP/IP or others, and distant network system). EPMARK software is available in French / English / Spanish / German."
• First to File (note from the authors: could not be accessed from the Internet) First to File, from FTF Technologies, offers a patent automation system to generate, protect and manage intellectual property.
• FoundationIP http://www.foundationip.com/product.html "FoundationIP is a web-based, enterprise class case management system for patent and trademark attorneys. With it, attorneys can manage all aspects of their clients' matters; provide controllable access to clients; manage docketing; bill; send and receive and file messages; manage IDS references and interface with USPTO, using an eIDS document that is auto-generated and ePAVE to efile IDS's in minutes".
• Innovation Asset Group (IAG) http://innovation-asset.com/overview/maximize.htm "Innovation Asset Group (IAG) empowers companies with DECIPHER™, an integrated software solution for the management of intellectual property assets and the contractual agreements that surround them." • InProma http://www.cpasoftwaresolutions.com/home.asp The InProma Intellectual Property Management System by Maxim is a suite of software designed to handle the registration and management of intellectual property such as patents, trademarks, and designs.
• IAMS http://www.dennemeyer.com/ Dennemeyer's IAMS IP Management and Automation application provides a MS SQL Server/Oracle based patent, trademark, agreements and matter management, docketing and documentation system.
• Intellectual Property Online, Ltd http://www.ippo.com/ IPPO Trademark Administrator is Trademark Practice Automation Software which provides full text record keeping and trademark document assembly.
• Inteum C/S http://www.inteum.com/ Inteum C/S®, by Inteum Company, LLC, is a relational information management system for the management of intellectual property and technology licensing.
• Intra Data Management Solutions http://www.intra-dms.com/ "Providing data management solutions since 1997, INTRA is dedicated to custom tailored information systems as well as inexpensive shelf products. A leader in IP management solutions, our powerful PETROS software took more than four years to develop." Billed as the easiest to use and most affordable IP management software for companies, it lets you control your IP assets in-house. Available at a cost of US$4,000 ( http://www.intradms.com/Products/prod.asp?page=PR&sub=PRPU&id=1 ). A complementary billing system to manage all financial documents and activities, linked to Petros Pro and Petros Pro SQL, is available at a cost of US$4,500 ( http://www.intradms.com/Products/prod.asp?page=PR&sub=PRPU&id=5 ).
• IP Document Assembly System (IPDAS) http://www.ipdas.com/ "Prepare paper-based or electronic-filing-ready documents with this document assembly system that contains both an extensive library of over 400 PTO and PCT forms, letters, and administrative documents used in prosecution and a superbly-designed database that stores names, addresses, citations, priorities, and other information used (and reused) during document preparation. Constantly updated forms are automatically and substantially completed during document creation with data that has been typed only once. IPDAS is a significant time saver, dramatically reduces typographical errors, and offers a shared tool for managing a prosecution practice."
• IPSoft, Inc. http://www.ipdox.com/default.htm "IPDOX is a powerful web-based intellectual property portfolio management platform, providing multiuser remote collaboration. Its cornerstone is a highly functional, scaleable IP warehouse, continually populated and updated through the normal business process of IP docketing and IP asset management.
IPDOX manages the prosecution workflow for 14 IP types across 170 countries. Providing superior security with "role based rights", IPDOX delivers true remote collaboration to the stakeholders of the enterprise. Highly scaleable to fit the needs of small and large corporations and law firms." • IP IntelliFile http://www.legalstar.com/IP_IntelliFile/ IP IntelliFile by LegalStar (IP LegalForm & IP LegalDock) is software for the electronic filing of patent applications and related correspondence with the United States patent and Trademark Office. We are one of only three companies currently authorized under contract with the USPTO to develop and distribute software for the electronic filing of patent applications with the USPTO.
• IP LegalDock http://www.legalstar.com/ IP LegalDock by LegalStar is a docketing program that tracks actions and calculates due dates in more than 200 countries.
• I.P.M.S. http://www.ipms.com.tr/ "Intellectual Property Management Software I.P.M.S.® is a help ware designed for the management of intellectual and industrial rights assisting the right owners and/or their representatives. It can organize all files, correspondences, documents, and expenses in one single system operating in a multilingual environment. Additionally, the "On line View" and the "Sending Instruction" facilities permit the correspondence to be able to access the I.P.M.S.® database via the Internet for consultation purposes or sending instructions for filing or else." • IPscore® 2.0 http://www.ipscore.com/ "IPscore® 2.0 is unique database software offers brand-new opportunities for documenting the value of patents and development projects. It is an uncomplicated and user-friendly tool, which can be used by all companies with a larger or smaller portfolio of patents and development projects. IPscore® 2.0 provides your organisation with the notably improved opportunity to: -prioritize patent portfolios and development projects based on value and potential -get an overview on new projects from the outset -discover the value of existing portfolios -create a reference for the assessment of patents and development projects" This software can be considered as a good complement of the bibliometric technical analysis made from various patent searches of various patent portfolios. The price is 18000 DKr = 2417€ • IPSS http://www.ipss.com/pages/1/index.htm "The IPSSdotNET solution is a step beyond Client-Server systems and works as follows; the database and software are installed on one of your servers, and that's it! As long as the server is on your internal network, you simply use Internet Explorer to access the system from any networked PC in your organisation. There is no need anymore for each user to have the software installed on their PC -this is just time consuming, costly and unnecessary. The system's powerful functionality, built up from 30 years of IPSS experience, enables comprehensive management of Patents, Trade Marks, Designs, Domain Names, Expenditure, Licensed Agreements and Royalties plus 'special' features, AND includes unlimited user access." • ipWorkflow™ http://www.aspengrove.net/solutions/ipworkflow.asp ipWorkflow™ by AspenGrove is a web-based knowledge management system for the IP workplace. With ipWorkflow™, law firms and corporate law departments can easily manage and track all patent applications.
• Jurivox http://www.mioinc.com/index_e.asp "Finally, an integrated software solution for intellectual property professionals that perfectly fits whatever the size and needs of the practice, from a reliable firm with more than 15 years experience in the industry."
• Knowledge Sharing Systems http://www.knowledgesharing.com/body_default.html Knowledge Sharing Systems (KSS) designs, develops, maintains, and operates Internet-based knowledge sharing systems that manage intellectual assets and share knowledge.
• ManIPulate Systems http://www.manipulate.co.za/ "ManIPulate™ is the complete management solution for a Corporate Intellectual Knowledge portfolio: comprehensive administration of trademarks and patents, a complete contracts agreements facility with advanced document handling and archiving capabilities, litigation, registered users and associations in an easy, user-friendly environment. The package also takes care of such vital and time consuming tasks as keeping track of, and offering advance warning of, renewal, expiry and other important dates, with powerful reporting, e-mail and external file linking functionality.
ManIPulate™ is the only system to give the user direct access to their live data as it is produced from their legal representatives. Vital information such as litigation outcomes, updated records and a host of other essential data is available in an intuitive user-friendly environment." Although it can be used with various patent portfolios, this system is mainly related to South African Attorneys and information database.
• There is also the possibility to use an innovative gateway to share informal information for Competitive Intelligence.
• MyIP -Easy Database http://www.easydatabase.co.uk/ "Easy to use Intellectual Property Asset Management Systems. Intellectual Property (IP) is at the heart of an increasing number of businesses, and is often an organization's most valuable asset. It can be a source of competitive advantage, for example, through Patenting, and Licensing IP can provide a significant revenue stream. Frequently IP is poorly managed through a mixture of paper and spreadsheet systems. The loss of IP can be disastrous. MyIP provides technology transfer professionals with an easy-to-use system that brings discipline to the management of intellectual property data".
Some benefits of this software are linked to the detection and analysis of start-up companies: http://www.easydatabase.co.uk/products.htm
• NetsPat http://www.netspat.com/ NetsPat is a secure, online database of YOUR complete patent portfolio.
• Patent-Management.Net http://www.patent-management.net/ "Patent-Management.net system is a comprehensive online patent management system designed to address the professional and clerical needs of IP professionals. It is a browser based system using a MS SQL Server database manipulated by web pages. The information is stored in a MS SQL Server Database capable of handling large amounts of data consistent with multiple high-volume cases."
• paTex (note from the authors: site not available on the Internet) Intellectual Property Management software for patent and trade mark property.
• Patricia (by Patrix) The system has been re-written twice; in 1994 Patricia was re-written for the third time, for Windows. Patrix also produces Patricia Web, an extra feature in Patricia allowing client access to case information on the Patricia database.
• PATTSY http://www.pattsy.com/abt_0.htm PATTSY® by OP Solutions Inc is an Intellectual Property Management System that has unparalleled features and an intuitive, easy to use interface.
• Patvin This is a professional patent administration and information system for your patent department or law chambers. Patvin is a software package enabling you to manage extensive portfolios of industrial patent rights smoothly and safely for their whole term.
• PLXware http://www.pl-x.com/ PLXware, by PLX Systems, is a suite of software modules designed to provide comprehensive portfolio management, valuation analytics, financial accounting, audit control, decision support and reporting across the maturity cycle for IP assets.
• Rapidpat http://www.rapidpat.com/ "Rapidpat provides compressed digital patent copies of the US and of more than 25 other world patent authorities on CD, DVD, and via Internet delivery. Our proprietary compression achieves a 12:1 ratio over standard PDF format, with no loss of quality! Documents are 100% Adobe Acrobat Compatible (version 5.0+). We also compress prior art and other materials, in functional digital format (PDF). Rapidpat specializes in high-speed transmission of alerts of patent documents and full-text data, and produces custom patent and trademark portfolios on CD and DVD. Rapidpat Complete offers the complete US patent collection (1790-present) on 42 DVDs for a single, non-recurring fee. No proprietary software is needed to use Rapidpat, as we specialize in Adobe Compatible formats."
Downloading a US patent costs US$2.99, and buying the full collection of US patents from 1790 to date in a single purchase costs US$7,995.
• RightsLine, Inc. http://www.rightsline.com/ RightsLine is a leading provider of business applications software, enabling companies to increase revenue from existing intellectual property across all lines of business. In a time when shareholders are pushing management to do more with less, RightsLine provides the first software solution focused on increasing revenue from intellectual property. From automating licensing and sales divisions to streamlining marketing, corporate brand management and contracts departments, RightsLine provides a proven set of rights management, licensing and royalty products, empowering its customers to increase revenue while reducing risk and improving efficiencies.
• unycom http://www.unycom.com/en/index_html "unycom provides expert software solutions and services in the field of intellectual property. The unycom IP management server realizes the comprehensive integration of a large company's complete IP-capital within all enterprise divisions. Research & Development are just as much part of the solution as, for example, Marketing, Controlling and Production. Electronic links to external contacts and sources of information established by the unycom IP management server allow seamless and simple transmission of data and easy access to information: Patent Offices, clients, business partners, subsidiaries, inventors, data providers, licensees - the possibilities of integration are almost unlimited and will be adapted precisely to fit the individual needs of your company."
• Vinsoft http://www.dennemeyer.com/ "Vinsoft is the industry's first true lifecycle intellectual asset management solution that supports the range of intellectual property (IP) activities in an integrated toolset. The Vinsoft suite enables companies to manage their intellectual assets from the product planning stage through commercialization and licensing".
• WINPAT http://www.gsi-office.de/gsi/ "GSI OFFICE MANAGEMENT GMBH produce IP Management software called WINPAT. WINPAT includes a sophisticated workflow management engine for business process engineering. WINPAT is a software that handles all aspects of IP and that can be customized to meet your requirements." The site is in German only.
• Xen-IP http://www.xensis.com/ Xen-IP, by Xensis, is an intellectual property management system. Xen-IP supports all IP types, including Patents, Trade Marks, Designs and Domains, and is fully featured to support the application and management of IP. The xen-IP range offers enterprise-strength IP management functionality, scalable from a single user on a single PC to hundreds of users on an international network.
IP Miscellaneous Software
• AusInvent http://www.ausinvent.com/casestudies.php AusInvent's role is to provide an Internet service to promote innovation and stimulate the development of new products and business expansion. AusInvent Innovation Services provides Self Assessment Software that lets you assess the likelihood of your innovation being technically and commercially viable. The Self Assessment Software lets you assess your innovation idea online by answering a set of 32 questions. It is supported by the NSW Department of State and Regional Development of Australia.
• CENTREDOC http://www.centredoc.ch/ CENTREDOC is a Swiss company specializing in Technology Watch, brokering information for technological, competitive and strategic surveys, in particular patent searches, literature information and marketing. Two online services allow you to stay informed on patents (RAPID) and on professional publications on Technology Watch and Marketing (eLit). The site is in French.
Example: Centredoc has launched RFID Patent, a new portal to watch patents on RFID technology.
• Global IP Estimator http://www.globalip.com/ Software which will generate cost estimates for patent, trademark and design applications around the world.
• GetIPDL http://www.ujihara.jp/GetIPDL/en/HowToOrder.html Free patent downloader from IP Digital Libraries. GetIPDL downloads patent documents, including text and images, from JPO, USPTO, DEPATISnet, IP Australia, CIPO and WIPO, amongst others.
• PatSee PatSee is an application that allows you to download patent specifications from free Internet servers, or any files from a web page, automatically. It saves the time and expense of ordering patents on CD and will deliver them straight to your desktop. The price is £250.
• PatSee / INAS Patent Downloading and Analysis Software "IAL (UK) and WinsLAB (Korea) have announced improvements in their software for harvesting and analysing patents. More specifically, PatSee TM PRO has now been modified to semi-automate the downloading of free patent data from the USPTO for use with INAS Patent Analysis Software…"
• Lawra http://www.patrix.com/ A computerised resource for trademark handling in 100 different countries, including necessary forms, such as Assignments and Powers of Attorney, to be filled out on your computer screen and printed by Patrix.
• MentoringPros http://www.mentoringpros.com/ "Making Rain" is a unique web-based training system which helps attorneys and other professionals get new clients and develop new business relationships.
• Patent Value Predictor http://www.patentvaluepredictor.com/PatentWatcherpurchase.asp?Unique=5222005125353 On demand service and subscription where you specify a patent or company and then receive Patent Value Predictor's determination of the value of the patent or company's patent portfolio.
Patent Watcher searches the patents and published applications databases of the U.S. Patent and Trademark Office web server; the cost of one license is US$200.
• TM Alert - "Helps Businesses Protect Valuable Corporate Trademarks" http://www.tmalert.com/ "TM Alert is a patent pending downloadable desktop tool which enables users to automatically monitor the U.S. Trademark registry for the publication and registration of trademarks and service marks. TMAlert can monitor for publishing, cancelled or registering marks. TMAlert does a job for which most law firms charge hundreds of dollars. The system can be set to automatically update, and can track according to trademark product class, competitor name or keyword. The tool also includes a customized reporting tool. The tool, which is priced at $99.97, is being introduced at $69.97. Users can "test drive" the product before buying".
• SAS Innovation Analysis -http://www.sas.com/news/preleases/052504/news2.html "Available now to customers worldwide, SAS Innovation Analysis uses a concept-based pattern recognition algorithm to probe a 15-terabyte database of patent submissions from virtually every patent office in the world. Going beyond existing software that offer only keyword search capabilities, SAS Innovation Analysis instantly identifies concepts with the same meaning; for instance, a SAS Innovation Analysis search for "cup" would return patents relating to "cup," "glass," "tumbler," "goblet," "mug" and "containment vessel for consumable aqueous solutions," among others. This functionality allows users to rapidly differentiate patents that contain true innovation from those that are simply functional forgeries using different words"
Information sources
First of all, let us say a few words about patent databases. Today, various patent databases are freely available, even though commercial ones are usually more complete or sophisticated. This is important, since commercial databases are often expensive, e.g., WPIL (Thomson Scientific, 2003). Among these free patent databases, we focused our attention on the European Patent Office (EPO) database named ESPACENET, which is freely available over the Internet. The EPO database does not cover all the patents produced in the world, but because it covers most of the countries where potential users are located, we may say that this database is quite suitable for giving an overview of intellectual property in various technical fields. To support this point of view, it is clear that the most important inventions are protected in the USA, Europe, and Japan, and, by implication (giving rise to patent families), patents covering these inventions are present in the EPO database. We do not want to develop a comparison of the coverage of the various databases, nor of the cross-literature links between patents and scientific publications (e.g., Chemical Abstracts), nor of the various academic research in this field. For more information on this subject, see (Faucompré, Quoniam, & Dou, 1997). Even though patents are available over the Internet, individually retrieving hundreds of patents is difficult and time consuming. This is the reason why we have developed a software package which, for the EPO database and the USPTO (United States Patent and Trademark Office) database, performs automatic patent searches, automatically downloads and analyzes the patents, and creates automatic reports. The various features described in this paper are all available through this software (Matheo Patent, 2003). This paper focuses on the Espacenet database.
Information fields present in a patent useful for CI, CTI and Innovation
A patent record is composed of different fields, such as inventors, patent assignees, dates, IPC, claims, etc. These fields are useful, since they help the reader easily obtain precise information: all the data has been introduced upstream by information scientists and patent analysts. Moreover, many fields allow the development of various correlations, such as histograms for the simplest, or matrices and networks (mapping) for the most complex.
Two groups of data available in the EPO database may be considered useful for the use of patents in CI, CTI, and innovation. The first group of data may indirectly give rise to various correlations, while the second group provides the user (reader) with immediate information about the patent content. The fields, as well as their use in patent analysis, are indicated in Table 1. In this table, Searchable means that a boolean query may be made on the field; Available means that the field is present when the patents are downloaded; May be selected means that various choices are available before downloading the patent set; Offline means that the order to download a patent, to build a family, or to access the full text of the patent is launched in the EPO database after local analysis of the set of patents.
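To make the correlation idea concrete, the following sketch counts the frequency of every item in a chosen field across a set of downloaded patents, which is exactly the data behind the histograms discussed below. This is a minimal illustration under our own assumptions: the record layout (one dict per patent, multi-valued fields stored as lists) is hypothetical and not the actual internal format of Matheo Patent.

```python
from collections import Counter

# Hypothetical layout of a locally stored patent record after download.
patents = [
    {"applicants": ["ACME CORP"], "inventors": ["SMITH J"],
     "ipc": ["A47C 17/00", "B68G 5/00"]},
    {"applicants": ["ACME CORP", "BETA LTD"], "inventors": ["LEE K"],
     "ipc": ["A47C 17/00"]},
]

def field_histogram(records, field):
    """Frequency of every item appearing in `field` across the patent set."""
    counts = Counter()
    for rec in records:
        counts.update(rec.get(field, []))
    return counts.most_common()

print(field_histogram(patents, "ipc"))
# [('A47C 17/00', 2), ('B68G 5/00', 1)]
```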
HISTOGRAMS FROM PATENTS AND THEIR USE IN CI, CTI AND INNOVATIVE THINKING
In CI and CTI, one of the most important problems to solve is mapping most of the interactions of a company with its environment. Various environments may be analyzed, such as the technical situation, the potential competitors, the regulations, the economics of the field, the trends in technologies and social demands, etc. It is not in the scope of this paper to present the ins and outs of CI or CTI, but to place the results of patent analysis in this context (Paoli, 2003).
In fact, because patents are a unique source of information, it is quite obvious that before making any assumption, supposition, or guideline, a patent analysis should be performed. The technologies linked to a product, the uses of a technology, the various applications made from a raw material (e.g., a natural product), etc., have to be rapidly analyzed to sketch the scope of the field as well as possible. This mapping could later be integrated into the value-map (Brandenburger, 1998) to complete all the relationships existing between the set of "players" in this competitive field. To facilitate the reading of this paper, we present the main useful correlations as well as an overview of all the facilities necessary to help the user in patent analysis. Readers who would like more complete information should consult the following Internet host: http://www.imcsline.com .
Access to patent information
During the DEA course in Competitive Intelligence (CRRM, 2004) held in Manado, North Sulawesi, Indonesia, we used patent analysis to stimulate the students' innovative thinking. The students had various projects linked to the development of local resources such as cloves, coconuts, seaweeds, etc. We noted that in the course of these projects, development and improvement were all grounded in people's tacit knowledge, and this tacit knowledge limited innovation, since new applications and uses were unknown to the students. To stimulate their thinking, and to show that technological application was one of the keys to developing value-added products, we used patent information: a free information source, which is important for developing countries, and one which carries new products and applications directly linked to their local natural resources. We operated in the following way (this example is extracted from the DEA 2003-2004). First, we made a broad search with the terms COCONUT OR COCONUTS in the titles and abstracts of patents from the last 20 years. The EPO database was used as the information source. We retrieved 1125 patents, which are presented in Figure 1: the patents are listed by title and patent number, and the cursor on the right allows a particular patent to be selected.
The bottom of the screen gives a summary of the patent data. By clicking on the resources screen one gets the abstract, claims and description of the invention from the local database:
Abstract
A pre-mix composition is disclosed for use in the treatment of a plant growth medium to promote improved wetting and re-wetting, with a mixture of coconut coir pith and a culturally acceptable surfactant. A process for treating a plant growth medium utilizing such a pre-mix composition is also disclosed, as well as the treated plant growth medium. Claims (extract): 1 - A pre-mix composition …
Description of the invention (extract)
The present invention relates to pre-mix compositions used in plant growth media …
If the abstracts, claims and description are not present in the EPO database, it is possible to download the full text of the patent from the EPO database by clicking the right mouse button on the selected patent title. This process is shown in Figure 2:
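Once patents have been downloaded into a local database, the initial boolean search can also be replicated offline. The sketch below is a minimal stand-in for the TITLE OR ABSTRACT query performed on Espacenet; the record layout and field names (`title`, `abstract`, `pub_date`) are hypothetical.

```python
from datetime import date

# Tiny hypothetical local database, reusing the record layout sketched earlier.
local_db = [
    {"title": "Coconut coir growth medium", "abstract": "...", "pub_date": date(1999, 3, 2)},
    {"title": "Steel chair frame", "abstract": "...", "pub_date": date(2001, 7, 15)},
]

def matches(rec, terms, since):
    """True if any term occurs in the title or abstract and the patent was
    published on or after `since` (a crude local equivalent of a boolean
    TITLE OR ABSTRACT search restricted to the last 20 years)."""
    text = (rec.get("title", "") + " " + rec.get("abstract", "")).lower()
    return rec.get("pub_date", date.min) >= since and any(t in text for t in terms)

hits = [r for r in local_db if matches(r, ["coconut", "coconuts"], date(1985, 1, 1))]
print(len(hits))  # 1
```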
Histogram frequencies, general considerations
Histograms are useful because they provide an overall view of all the items present in the set of selected patents, according to their presence in a selected field (e.g., Applicants, Inventors). A histogram allows an item to be positioned relative to all the other items present in the patent set. Because the patent set can be updated at any time, it is possible to compute histograms at different periods and thus to monitor the trend of the item considered over time.
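Trend monitoring amounts to re-counting a date field after each database update. The sketch below, under the same hypothetical record layout as above, tallies patents per publication year so that two snapshots of the database can be compared.

```python
from collections import Counter

def yearly_trend(records):
    """Number of patents per publication year; comparing the output before
    and after a database update shows how activity in the field evolves."""
    return sorted(Counter(r["pub_date"].year for r in records).items())

# Example: yearly_trend(local_db) -> [(1999, 1), (2001, 1)]
```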
As histograms are useful for the user, we provide two easy ways to view them: firstly, viewing histograms within the context of the patent titles and, secondly, viewing stand-alone histograms in a single screen.
Histograms in context
This is presented in Figure 3. To access this function, it is necessary to click the patent analysis option at the top of the screen. On the left side of the screen, the histogram opens according to the selection made by the user. Several types of histograms are available: Inventors, Applicants, IP Class, E Class, PD year, Groups, Family. This is represented in Figure 4. By selecting an item in the histogram (Figure 4), patent information may be accessed as shown in Figure 3.
Showing histograms in this way is interesting for people who want to select patents according to their importance, group them together, or give a pertinence index to a patent. The indices facilitate understanding because they appear in the Pertinence column before the patent number and title. Figure 5 shows how to create a group or attribute pertinence indices using the mouse's right button.
Stand alone histograms
To view, print, or store histograms, select the Bars and Charts menu at the top of the screen, as shown in Figure 6.
A box will pop up indicating the various charts that are available. Selecting a chart returns a picture of the histogram. The histogram may be presented in several ways: classical, 3D, and pie chart. The representations may be printed or stored.
Figure 6. Access to various chart representation
Histograms can be seen 10 items at a time or globally if necessary. Buttons and cursors at the bottom of the histograms window allow the type of charts and the various views to be selected.
The use of IPC charts to stimulate innovation
We saw that Indonesian students very often develop projects on the basis of tacit local knowledge. On the other hand, local high-tech facilities may not be available. To stimulate innovation, it is therefore necessary to move step by step. That is to say, patents, and especially International Patent Classification (IPC) classes (WIPO, 2004), will be used to show the new uses, new products, and new applications available. IPC classes are technological codes recorded with the patent during its examination; those codes are an international standard. However, it is necessary to focus our attention on applications that can be developed with locally available facilities. This is very important, because if the examples given to the students cannot be realized locally, there will be only a very small chance of the application succeeding. As a result, students will become discouraged.
Thus, using the IPC is important. The global IPC view (IPC standing alone) shows the students the main IPC classes appearing within the 1125 patents. This is an overview of what is happening in the field of coconuts. However, although this knowledge is important in showing the existing gap between Western countries' inventions and local processing of the resource, it is often not actionable, because the laboratories, manpower, general facilities, and money for these technologies are not locally available. We then use the IPC in context, to view, one after the other, the various IPC classes concerned by the patents dealing with coconuts. The purpose is the following: knowing the local facilities, the students choose possible applications (e.g., by selecting the right IPC classes) which will be locally actionable to make products with their own resources. This constitutes the first step on the ladder towards innovative thinking. Figure 7 shows the global IPC classes involved in the 1125 patents.
Figure 7. IPC classes in decreasing order
IPC classes may be represented in descending, ascending, or alphabetical order, or in a pie chart with the percentage of each IPC. Here, we represent the IPC classes in descending order; the box on the right side of the chart presents the IPC classes by color, as in the chart, and gives the frequency and the first four digits of each IPC.
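Truncating each code to its 4-character subclass before counting is straightforward; the sketch below, again under the hypothetical record layout used earlier, produces the data behind such a chart.

```python
from collections import Counter

def ipc_subclass_histogram(records):
    """Histogram of 4-character IPC subclasses (e.g., 'A47C' from 'A47C 17/00'),
    the granularity used here to spot locally actionable technologies."""
    counts = Counter()
    for rec in records:
        counts.update(code[:4] for code in rec.get("ipc", []))
    return counts.most_common()
```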
From the IPC in context, we selected the various classes (first 4 digits) giving access to new products or applications that could be made from coconuts with local facilities and which were unknown to the students. This is represented in Table 2. It is also noticeable that the town of Manado in North Sulawesi is very close to the deep-sea Port of Bitung, which will facilitate the shipment of products developed from the various technologies.
Networks
Although histograms are useful, they do not show the links that exist between the different fields. For instance, the IPC histogram and the Applicants histogram will not directly reveal applicants' areas of expertise. This correlation can only be made if a matrix between applicants and IPC is built and a network of expertise is drawn from this matrix. In the same way, networks of applicants (when several applicants appear in the same patent) will show the related companies, etc.
To achieve these functionalities we will use the Network option at the top of the screen of Matheo Patent.
Clicking the box opens a popup window displaying all the fields which may be correlated and used to build networks. This is presented in Figure 8. For instance, Applicants may be correlated with IPC, which gives rise to the competency network; IPC with IPC gives rise to the technological network; Inventors with Inventors gives the inventor network; Inventors with IPC gives the inventor competencies; etc.
In Figure 9, we present the network of Applicants and Inventors. The cursors at the bottom of the window help select the frequency of items (e.g., Applicants or Inventors) or the frequency of pairs (Applicant-Inventor).
However, if we use a large number of patents, such as the set of 1114 selected in this paper, the resulting networks will be complicated and the time needed to interpret them far too long. This is why we advise users, when they select a large number of patents, to build up patent groups. Another possibility, because Matheo Patent works very rapidly on the EPO database, would be to make a new database by adding a selected IPC class (4 digits), for instance A47C (Chairs), to the original query. This builds a database related to chairs, mattresses, etc., in which coconuts are involved. Working this way also makes it possible to develop other databases by keeping the original IPC A47C and combining it with other terms such as palm, cotton, wood, etc. If you consider that many terms, classes, years, and applicants may be combined in a single search, you can see how innovative thinking, comparisons, and value-maps can be achieved. Only people's ideas and their mapping (TheBrain, 2004) onto true patent information will be the limit of innovation.
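A co-occurrence count is all that is needed to build such networks from a local patent set. The sketch below, under the same hypothetical record layout as before, produces weighted edges (e.g., Applicant-Inventor pairs) that a graph library such as networkx could then draw; the field names are assumptions.

```python
from collections import Counter
from itertools import product

def cooccurrence_edges(records, field_a, field_b):
    """Weighted edges between two fields: a pair is counted once each time
    both items appear together in the same patent record."""
    edges = Counter()
    for rec in records:
        for a, b in product(rec.get(field_a, []), rec.get(field_b, [])):
            edges[(a, b)] += 1
    return edges

# Example with the records sketched earlier:
# cooccurrence_edges(patents, "applicants", "inventors")
# Counter({('ACME CORP', 'SMITH J'): 1, ('ACME CORP', 'LEE K'): 1,
#          ('BETA LTD', 'LEE K'): 1})
```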
Matrices
Very often, people working with statistical representations use software such as Statistica (2004) to represent information. Other statistical treatments are also possible if adequate data are provided to the user. It was with these constraints in mind that we developed an option to produce various types of matrices. To create these matrices, it is necessary to click the Matrix option in the upper right corner of the screen. A window pops up, allowing the components of the matrix to be selected, as shown in Figure 10. We mentioned above that matrices may be useful for transferring data to other statistical software, but there are many other, more powerful applications. For instance, we show a classic application dealing with the core competencies needed to develop a number of applications from coconut components. Using the IPC histograms in context, we selected some applications that were possible with local knowledge. We created several groups with the patents related to these applications, as shown in this paper. The areas selected are presented in Table 3. All the groups were selected according to local specificity; for instance, the building materials group is important because it allows the development of lightweight materials, valuable given the existing local experience in regions prone to earthquakes. Textiles are also important because of the existing expertise in fabric production in the region, etc. Once all the groups were selected, it was easy to build a matrix between the groups and the IPC classes. The matrix shows the IPC classes common to most of the groups. This underlines the core technologies and applications which can capitalize on the new knowledge. Figure 11 gives a partial representation of the matrix of groups of technologies and IPC classes. For each group of applications, the rows indicate the types of technologies involved in these applications. This type of matrix is a very powerful way to detect, for example, technologies and applications shared by various companies, and applications depending on only one company.
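Such a group-by-IPC matrix can be assembled in a few lines once each patent has been assigned to a user-defined group. The sketch below uses pandas; the group names and IPC subclasses are hypothetical stand-ins for the selections described above.

```python
import pandas as pd

# Hypothetical assignments: one row per (application group, IPC subclass)
# occurrence found in the grouped patents.
rows = [
    ("building materials", "E04C"),
    ("building materials", "C04B"),
    ("textiles", "D03D"),
    ("textiles", "C04B"),
]
df = pd.DataFrame(rows, columns=["group", "ipc"])

# Groups in rows, IPC subclasses in columns, co-occurrence counts in cells;
# columns shared by several groups point to the core technologies.
matrix = pd.crosstab(df["group"], df["ipc"])
print(matrix)
```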
AN EXAMPLE FROM THE IPC A47C
This class covers the topic: CHAIRS (seats specially adapted for vehicles B60N 2/00); SOFAS; BEDS (upholstery in general B68G). This class was selected because the technologies involved in developing chairs, mattresses, bed bases, and convertibles are already available in Manado (Indonesia has a strong background in furniture making), and also because, even though these pieces of furniture are large, the proximity of the Port of Bitung will facilitate their shipping.
The automatic report
The number of patents involved in a group is generally small, and the group is easy to read. We used the Report facility offered by Matheo Patent to provide the students with an automatically generated report, which is a first approach to understanding and analyzing the context. To automatically build the report, you can select various fields and formats by clicking the available boxes. The selection process is presented in Figure 12. An extract from such a report follows, showing an IPC class description and the statistics section of inventor-applicant pairs with their frequencies:
ULTRASOUND THERAPY (measurement of bioelectric currents A61B; surgical instruments, devices or methods for transferring non-mechanical forms of energy to or from the body A61B 18/00; anaesthetic apparatus in general A61M; incandescent lamps H01K; infra-red radiators for heating H05B) [6]
3. STATISTICS (inventor / applicant / frequency):
chen qiang (cn) / chen qiang (cn) / 2
an pan-ho (kr) / ace bed co ltd (kr) / 1
chen qiang (cn) / chen qiang (cn) / 1
morizot christian (fr) / simmons cie continentale (fr) / 1
kim jong-on (kr) / kim jong on (kr) / 1
mossbeck nils (us) / albru handelsgesellschaft mbh (de) / 1
wang baolin (cn) / wang jinquan (cn) / 1
wang jinquan (cn) / wang jinquan (cn) / 1
hirata koichi (jp) / hirata koichi (jp) / 1
berner hans-gunter (--) / berner hans gunter (--) / 1
an pan-ho (kr) / chen qiang (cn) / 0
an pan-ho (kr) / zhu shaoheng (cn) / 0
an pan-ho (kr) / simmons cie continentale (fr) / 0
an pan-ho (kr) / kim jong on (kr) / 0
End of the report
Complementary information
Complementary information, such as networks of applicants or IPC classes, is built up by the user if necessary. Once the best patents have been selected according to the report, complementary details on the patents' content (general data, abstracts, claims, etc.) or the full text of the patents may be downloaded from the EPO database (right-click on the selected patent title, then follow the instructions given in the pop-up window; of course, your computer must be connected to the Internet).
CONCLUSION
Although patents are useful for protecting inventions, their cost is high and most of the time out of reach of small companies and developing countries. But, being a unique information resource, they can be used as a think-tank to promote and stimulate innovation and to provide a free information source to various users. Even in academic institutions in Western countries, patents are seldom cited by researchers. In our opinion, the barrier to using patents efficiently arises from their number and from the difficulty of rapidly getting an overview of their content, relationships, etc., so as to select the best patents for stimulating innovation and facilitating the creation of value-added products from natural resources.
With this idea in mind, the Matheo Patent software has been developed. It allows fast access to the EPO and USPTO databases, and provides an easy way to build up patent databases and to analyze patents automatically.
It shows huge potential for use in developing countries for the monitoring of competitors and technological trends. We hope that these types of software will induce people to use patents more widely, firstly as an ideas resource, but also as a unique tool to secure and protect their own inventions. | 11,557 | 2005-01-01T00:00:00.000 | [
"Engineering",
"Business",
"Computer Science"
] |
Wanted: special-purpose robots automating life science wet lab workflows
Compared with manufacturing and service industries, the life science R&D industry in general is lagging behind in terms of utilizing large-scale industrial automation for productivity, capacity and quality improvements. Granted, the exploratory and dynamic nature of life science R&D dictates that for most life science R&D projects that come and go, there is not a fixed procedure to begin with, and there is no way to predict how a project is going to end up either. This uncertainty makes automation of general life science R&D workflows difficult. Still, life scientists around the world are working hard to isolate, extract and automate the portions of the workflow that can be automated [1]. Over the past decades, much progress has been made in the automation of analytical separation and detection, data acquisition and downstream data extraction and processing. Relatively speaking, the progress of automating life science wet lab workflows has been much slower, due largely to the lackluster tools available.
The downsides of general-purpose liquid handling robots
Since a common denominator of life science R&D wet lab workflows is liquid, current robot manufacturers all aim to maximize the applicability of their products to assorted life science workflows (translation: maximizing profits) and offer general-purpose liquid handling robots (GPLHRs). This makes their internal robotic product R&D relatively simple, but while the goal of GPLHRs is to suit all applications, they in fact please no one. For ordinary end-user groups, those robots are just too much of a hassle: difficult to use, prone to crashing, and producing variable results [2][3][4]. Since there are no commercially available liquid-handling robots tailored to their specific workflows, much post-acquisition customization and development needs to be done either by internal staff, vendor staff, or both, if they wish to automate more than just a few isolated steps of their experiments. Customization, development, and maintenance costs aside, the process is slow, there is often no guarantee of the quality of such custom work, and quality and success vary from one organization to another. For those end users' management, it is often a struggle to see through the clouds of capital expenditure, dedicated automation FTE investment [2], slow customization, and slow adoption among ordinary scientists to find out whether the robots are worth the investment [3,4]. For the robot manufacturers, because the barrier to entry for GPLHRs is relatively low, competition is fierce. Owing to the slow adoption of these robots, the robot manufacturers are not fully realizing the huge life science market potential. They might also become victims of their own success if the repair/maintenance service, technical support and, most notoriously and unpredictably, custom development demand cannot keep up with their sales.
In short, there is much to be desired from a contemporary GPLHR. To give a few specific examples: • Workflow-specific software. Post-market customization and development in the form of extensive scripting by user/vendor staff to suit particular workflows in a sense indicate that GPLHRs are half-finished products. Customization and development work needs to be done at the manufacturer to begin with, and scripting is not good enough. Again, users do not need to know and should not be expected to know the inner workings of the automation equipment. The scripting layer should be eliminated altogether. Workflow-specific user interfaces and business logic, including straightforward enterprise IT data integration configuration options, should all be done at the factory.
• Intelligence. Common causes of GPLHR system halts and irrecoverable crashes include fatal tip and insufficient liquid errors, which could all be eliminated with additional intelligence built into the system. To continue with the tips example above, a tip-sensing 2D laser grid would report the status (number of tips and last tip position) of the partial tip rack in real time on demand. The system would combine that information with the number of full tip racks of the same type (read from the barcodes) available in the carousel, and keep a record of how many tips of that type are available. If the system also knows or computes at the beginning of a run how many tips of this type will be needed for the experiment, then it could alert the user to add more tips if it foresees a shortage. Every lab scientist knows that one cannot get more liquid out of a container than was put in there beforehand; and yet, current GPLHRs do not have that simple intelligence. Conceivably, the robot could keep an exhaustive internal database of container IDs, well positions, and liquid volumes, implement some sort of aliquot safety mechanism [MARS], and, if the entire experimental procedure is known at the beginning of a run, compute the liquid transfer scheme and warn users of potential insufficient liquid errors before the experiment even starts (see the sketch after this list).
• The above wish list is by no means comprehensive and could go on.
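To make the intelligence argument concrete, here is a minimal, hypothetical Python sketch of the pre-run bookkeeping described above. Everything in it (the LiquidInventory class, check_tip_supply, the 96-tip rack size) is an illustrative assumption, not any manufacturer's actual API.

```python
# Hypothetical sketch of pre-run inventory checks; names are illustrative.

class LiquidInventory:
    """Tracks declared liquid volumes per container well."""
    def __init__(self):
        self.volumes = {}  # (container_id, well) -> microliters

    def add(self, container_id, well, volume_ul):
        key = (container_id, well)
        self.volumes[key] = self.volumes.get(key, 0.0) + volume_ul

    def aspirate(self, container_id, well, volume_ul):
        available = self.volumes.get((container_id, well), 0.0)
        if volume_ul > available:
            # Flag the insufficient-liquid error before the run, not during it.
            raise ValueError(f"Insufficient liquid in {container_id}/{well}: "
                             f"need {volume_ul} uL, have {available} uL")
        self.volumes[(container_id, well)] = available - volume_ul

def check_tip_supply(tips_needed, tips_in_partial_rack, full_racks, tips_per_rack=96):
    """Warn about a tip shortage before the run starts, as argued above."""
    tips_available = tips_in_partial_rack + full_racks * tips_per_rack
    if tips_needed > tips_available:
        return f"Add at least {tips_needed - tips_available} more tips before starting."
    return None

print(check_tip_supply(tips_needed=300, tips_in_partial_rack=12, full_racks=2))
```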
Integrated specialized robots automating life science workflows
In summary, what life science R&D scientists really want are turn-key intelligent automation solutions that automate most, if not all, of their wet lab workflows without too much tinkering on their part [3,5,6].
Addressing the above list of issues and wishes would be a good start in that direction. In recent years, there have been relevant advances in both hardware and software. Hardware-wise, a European automation lab has recently been trying to tie assorted stand-alone analytical chemistry instruments and apparatuses together on a common hardware platform to achieve fully automated analytical chemistry workflows. Earlier this year, a UK/US-based company introduced to the market an ingenious robot that automates aqueous solution preparation starting from solids weighing. Software-wise, there have been several publications on attempts to automate major portions of relatively well-defined bioanalytical wet lab workflows [7-11]. It is important to note that the concepts are entirely applicable to many other life science R&D wet lab workflows [6,12].
What we need are ultimate vertical integrators who would take on the tasks of consolidating assorted hardware and software products and advancements into integrated specialized robots that would automate large vertical sections of life science wet lab workflows. The tasks are not as impossible as they may seem. For each specific wet lab workflow, the lab hardware needed is usually limited. Therefore, such integration could start with a few workflows and then expand into/integrate more workflows. Along the way, a stream of robotic products with increasing complexity and expanding functionalities could be introduced to the market.
Conclusion
Life science R&D labs need more automation, and scientists want automation to work for them, not the other way around. Ideal life science R&D wet lab robots should automate entire workflows, as opposed to mere liquid transfer steps. To achieve that, vertical integration of workflow-specific hardware and development of workflow-specific software are needed. There is no question that it would be challenging product R&D for robot manufacturers, but the effort would definitely be worth it. The fact that the life science R&D industry has invested heavily in, and has so far put up with, half-finished products (i.e., the current crop of GPLHRs) shows the dire need for automation. The underlying marketing message to robot companies is clear: if robot companies can come up with a better product, the life science R&D industry will embrace it and pay for it. The market potential for such robots would be tremendous. The race to special-purpose robots that would reshape the life science R&D wet labs is on.
Financial & competing interests disclosure
The author has no relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript. This includes employment, consultancies, honoraria, stock ownership or options, expert testimony, grants or patents received or pending, or royalties.
No writing assistance was utilized in the production of this manuscript.
"Computer Science"
] |
The economic disease burden of measles in Japan and a benefit cost analysis of vaccination, a retrospective study
Background During 1999-2003, Japan experienced a series of measles epidemics, and in Action Plans to Control Measles and the Future Problems, it was proposed that infants be immunized soon after their one-year birthday. In this study, we attempted to estimate the nationwide economic disease burden of measles based on clinical data and the economic effectiveness of this proposal using the benefit cost ratio. Methods Our survey target was measles patients treated at Chiba-Nishi General Hospital from January 1999 to September 2001. Two hundred ninety-one cases were extracted from the database. The survey team, composed of 3 pediatricians and 1 physician from Chiba-Nishi General Hospital, examined patient files and obtained additional information by telephone interview. We analyzed data based on a static model, which assumed that the number of measles patients would be zero after 100% coverage of single-antigen measles vaccine. Costs were defined as the direct cost of measles treatment, vaccination, and transportation, and the indirect cost of workdays lost due to the nursing of patients, hospital visits for vaccination, or nursing due to adverse reactions. Benefits were defined as savings on direct and indirect costs. Based on these definitions, we estimated the nationwide costs of treatment and vaccination. Results Using our static model, the nationwide total cost of measles treatment was estimated to be US$ 404 million, while the vaccination cost was US$ 165 million. The benefit cost ratio of the base case was 2.48 and ranged from 2.21 to 4.97 in the sensitivity analysis. Conclusions Although the model has some limitations, we conclude that the policy of immunizing infants soon after their one-year birthday is economically effective.
Background
Japan is one of the countries most affected by measles, a contagious disease with many complications. The measles vaccine was first introduced to Japan in 1966 and was adopted in the national regular immunization program from 1978 [1]. Before April 2006, when Japan adopted two-dose MR vaccine policy, Japan's Preventive Vaccination Act made provisions for single-antigen attenuated live vaccine to be given only once to children aged 12-90 months. Nationwide coverage remained no higher than 81%. Since 1994, the government of Japan has seemed very passive in controlling vaccine preventable diseases, as the vaccine policy was changed from being a compulsory immunization to being voluntary. As a consequence, measles vaccine coverage rates have been lower than other countries [2,3].
Between 1999 and 2003, Japan experienced a series of measles epidemics. During these epidemics, the number of reported cases ranged from 5,957 (1999) to 34,734 (2001) [4] (Figure 1). There were approximately 100,000 to 200,000 estimated cases during this time [3,5]. During 1999-2007, measles surveillance in Japan consisted of aggregate case reporting from pediatric and adult sentinel surveillance systems, in which pediatric cases were reported from a representative sample of approximately 3,000 pediatric inpatient and outpatient facilities and adult cases were reported from a sample of approximately 450 inpatient hospitals. From these reports, the total number of measles cases was estimated. The measles sentinel reporting systems were replaced with a nationwide case-based reporting system in January 2008 [4].
The measles epidemics of 1999-2003 were attributed to insufficient disease suppression due to low vaccination coverage, which ranged from 75 to 81% [6,7]. They had two characteristics: they were all small-medium in epidemic size [8], and the main victims were unvaccinated 1-year-old children. In the nationwide survey, the estimation of 2002 measles vaccination coverage in Japan revealed that Japan's measles vaccine coverage at ages 18, 24, and 36 months were 61.7 ± 1.6%, 79.6 ± 1.3%, and 86.9 ± 1.1%, respectively [9]. The coverage of 18-month-olds was revealed to be rather low for protection from measles transmission, presumably making the group susceptible to measles infection. The age distribution of measles patients supported this presumption because one of the peak ages for measles patients was one year. The National Institute of Infectious Diseases, Japan (NIID) reflected on these characteristics in a publication entitled, Action Plans to Control Measles and the Future Problems [6] and proposed that infants be vaccinated as soon as possible after their one-year birthday to reduce the age group's susceptibility to measles infection. Even though the two-dose regimen is favored around the world, the NIID recommendation was considered the best possible rapid option under laws governing immunization regulations. It was adopted by the Ministry of Health, Labour and Welfare, Japan with the combined support of the Japan Pediatric Association, the Japan Child Health Association, and the Japanese Association of Pediatrics [10].
In Japan, the regular immunization service, which includes the measles vaccine, is given by medical doctors in hospitals and clinics. Under such programs, local governments request support from local medical associations, which are typically composed of physicians in private clinics and governed by the Japan Medical Association. Local medical associations delegate responsibilities to eligible doctors, and parents take their children to clinics for immunization. Participating doctors are paid for their participation. The amount is variable, as it is determined by the individual local governments. Hospitals and clinics in charge of immunization must procure vaccine, syringes, and needles at their own expense. Usually, vaccine delivery costs are included in the vaccine price. This system is in place throughout the country. Japan's medical service system differs from those of the United States and many European countries in that patients are generally seen without an appointment [11]. Because the referral system between GPs and hospitals is not well established, many patients go directly to hospitals without an appointment, and measles patients are often first attended by doctors in hospitals. There are typically two options for hospital admission: patients may be admitted directly after consultation with a physician in the outpatient ward of a hospital, or they may be referred by a GP. Admission fees are standardized under the national health insurance program, but the cost of private beds varies, as it is determined by the individual hospitals.
In the present study, we tried to estimate the nationwide economic disease burden of measles based on the clinical data of local measles epidemics in Matsudo City, Chiba Prefecture, Japan between 1999 and 2001. At the same time, we attempted to evaluate the economic effectiveness of the proposal that infants be immunized soon after their first birthday by benefit cost ratio (BCR).
Even though Japan is considered a measles endemic country, health policy research about this topic is still missing and to date, an economic evaluation on the cost of the disease has not been performed. This is the first policy evaluation of Japan's measles vaccination policy based on an economic viewpoint.
Methods
We performed a retrospective study of Chiba-Nishi General Hospital patients identified by a file survey. After complementing some of the data through telephone interviews, direct costs and indirect costs were analyzed following the framework of analysis published by Ohkusa et al [12] and Sugawara et al [13]. Finally, nationwide costs and BCR were estimated based on a static mathematical model of measles transmission [14].
Study design
This study is based on the assumption that the number of measles patients would be zero if all 1-year-old cohorts received the vaccine. We used a static model; as such, we did not consider the adjustment period of gradual herd immunity increase and final measles epidemic control. Regarding the framework of the analysis, for the cost-benefit analysis we followed the methodology published by Ohkusa et al. for influenza vaccination [12], and for the cost-effectiveness analysis we followed the framework of Sugawara et al. for routine varicella immunization [13].
Survey area
The data were sampled from patient records of Chiba-Nishi General Hospital from January 1999 to September 2001, where two of the authors worked as pediatricians. This private hospital is located in Matsudo City, which is adjacent to metropolitan Tokyo and has a population of about 470,000. Regarding the health facilities of Matsudo City, there are 13 pediatric clinics and 153 GP clinics, in which 59 physicians also see pediatric patients. Out of 13 general hospitals in Matsudo City, 3 hospitals, including Chiba-Nishi General Hospital, have a pediatric outpatient service. Chiba-Nishi General Hospital has 408 beds, a pediatric outpatient ward, an inpatient ward, and an emergency unit. It receives an average of 900-1,000 patients per day, including 200-300 pediatric outpatients.
Case definition of measles
First, we referred to the diagnostic criteria of Japan's sentinel surveillance. The diagnostic criteria of sentinel surveillance for measles included: the presence of a generalized rash; fever (≥ 38.5°C); and cough, coryza, or conjunctivitis; or laboratory confirmation. Laboratory confirmation of cases was performed by detection of measles-specific immunoglobulin M (IgM) antibodies [4].
In our study, the selection criteria were: 1) cases diagnosed as measles by measles IgM testing; or 2) cases diagnosed as measles by Koplik spots. Koplik spots were included as a diagnostic criterion because their presence is most important in establishing the diagnosis of measles [15]. Furthermore, all of the clinical records of the cases were thoroughly examined by the survey team to determine whether the clinical course was consistent with measles and to confirm the measles diagnosis.
Data collection
Data were collected between October 1 and October 28, 2001. For the first stage of data collection, cases were extracted from the patient diagnosis electronic database, which was developed to assist in the claiming of national health insurance; it includes information on both confirmed and suspected cases. We examined the database from January 1999 to September 2001.
For extraction of data, we allocated two qualified medical clerks, who were briefly instructed and trained in the data extraction procedures to ensure data coherence. Two hundred and ninety-one cases were extracted from a total of 375,353 records. In the second stage, relevant patient files were examined by our survey team, which included one of the hospital's physicians and three of the hospital's eight pediatricians. Research target candidates were nominated if the records met the diagnostic criteria noted above in "Case definition of measles".
The data that were initially collected included patient name, age, gender, course of fever, clinical symptoms other than fever, contents of the examination and treatment, dates of hospital visit, contents of medical examination and treatment, fees for medical examination and treatment, patient/parental employment status, address and telephone number, and for inpatients, the date of admission and discharge, contents of treatment and examination, and the fee paid by national health insurance. In order to respect patient privacy, access to any identifying information that was necessary for the study was maintained for only as long as it was needed. Once relevant estimations could be made, sensitive information was deleted. Information on employment status was deleted after estimating indirect costs. Patient addresses were only used to calculate transportation fees and were not recorded as the calculation was performed immediately after case selection.
Telephone interviews were conducted to follow up with patients whose records of parental employment status were not clear or where prognosis was not identified due to discontinuation of treatment at the hospital. The three above-mentioned pediatricians, who were in charge of treating pediatric outpatients at Chiba-Nishi General Hospital, performed the interviews. Before beginning the telephone interview, the exact purpose of this survey was explained to the interviewees. Information was only recorded after obtaining verbal consent from the interviewee. In the interview, parent(s) were asked the date of the onset of rash, the duration of the rash, the course of fever and whether their child experienced any other complications. No other sensitive or identifying information was collected. Following the telephone interview, patient identity and telephone numbers were deleted.
Framework of analysis
To estimate the costs and benefits, it is important to define who should assume the cost, and who should receive the benefits. For our research objective, we aimed to provide an estimation that was applicable to the whole of Japan.
We defined direct costs and indirect costs, which are hereafter defined in "Definition of costs". Nationwide direct and indirect costs based on these definitions were then estimated from our sample data.
Definitions of costs
Costs were categorized into direct costs and indirect costs (Table 1). Direct costs are defined as: 1) the fee for vaccination and transportation fees for hospital visits; 2) actual fees paid for medical diagnosis and treatment of measles and hospital visits. Indirect costs were defined as workdays lost due to the nursing of measles patients and workdays lost due to vaccination or nursing for mild side effects of vaccination; this also includes any productivity losses due to measles-related deaths. While vaccination costs are incurred yearly as a measles control cost, measles treatment costs are not incurred yearly because, under our model, the incidence may be reduced to zero.
These costs are estimated based on several assumptions that will be discussed later in this text. Benefits are defined as reductions of direct and/or indirect costs. Nationwide direct and indirect costs were estimated based on our sample data and the BCR was based on these estimations.
For precise data, currency conversions should reflect the monthly average of the exchange rate. However, for simplicity we opted to use the average exchange rate of the study period (US$ 1 = JP¥ 118.8).
Direct costs
Medical treatment fees
Costs for medical consultation, prescribed medicines, laboratory examination and X-ray examinations were included as medical fees. Admission fees were included for inpatient cases. We followed the national standard for reimbursement of medical services as medical treatment fees.
Since there is no standardized treatment and diagnosis for measles or its complications, several tests and treatments were found to have been used in the sample cases. Thus, the diagnosis and treatment of each case were thoroughly examined to ensure that only those that were medically appropriate were selected for analysis. For the sake of simplicity, the cost of over-the-counter drugs was not included. Furthermore, we did not consider cost of vaccine delivery because this is included in the vaccine procurement cost, which is reimbursed by local governments.
Transportation fees for hospital visits
For transportation fees of patients or their attendants, we estimated the cost incurred by public transportation using the patient's address and the public transportation routes to the hospital. As previously noted, patient addresses were checked from patient files before data were extracted, in order to estimate these costs but not compromise privacy. Address information was not recorded.
Indirect costs
The estimation of indirect costs included: 1) Estimation of wage functions; 2) assumption settings of indirect costs; and 3) estimation of workdays lost by patients or family members. The procedure for the estimation of wage function is described in detail in additional file 1 (see also Tables 2 and 3).
Estimated wage function
We calculated the wage functions following the procedure summarized in additional file 1.
Base case setting
In benefit-cost analysis, as a reference standard, the base case should be composed using plausible parameters. In addition, we conducted a sensitivity analysis by changing certain parameters. (Table 3 is cited in additional file 1 (Technical Annex); 99 samples were selected based on the data reliability of age and employment status, and one sample was omitted because parental age and employment status were unclear.) The base case was set according to the economic situation in Japan in 1999-2003 [8].
The target population for immunization of 1,200,000 per year reflects the vital statistics of Japan [16]. The vaccination fee was set at JP¥ 5,000 (US$ 42.1) per person, which was the amount disbursed to each vaccinating physician as compensation by the public health offices of the local government in Matsudo City. To receive the vaccine, one of the parents was assumed to take off two workdays (the day of vaccination and the next day). For homemakers the absence includes the suspension of housework. For these 2 days, we assumed that an immunized child may have fever and that in such cases one parent must stay home to provide care. This is based on the fact that 20% of vaccine recipients suffer from mild fever lasting for approximately 2 days [17]. For the sake of simplicity, other medical or opportunity costs incurred due to adverse effects of immunization were not included in this study. Vaccine coverage was assumed to be 86.9% [9]. The discount rate for direct and indirect costs was 0%. The primary vaccine failure rate was 3.5%. For simplicity, the secondary vaccine failure rate was not counted.
The nationwide age distribution of patients was assumed to follow our sample data. To simplify the analysis, the number of patients was set at 100,000 per year for the whole country, adopting the lower limit of the estimated number of patients [3,5]. Other values, including the upper limit, were examined by sensitivity analysis.
The fatality rate for measles in Japan is 1/10,000. As for severe complications, encephalopathy/encephalitis occurs in 1/1,500 cases and subacute sclerosing panencephalitis occurs in 1/100,000 cases [18].
Our estimation is based on the assumption that the number of measles patients will be zero after a 13.1% increase (from 86.9% to 100%) in vaccine coverage. In other words, the vaccination cost is that which is incurred in increasing vaccine coverage by 13.1% to reach 100% coverage. The benefit is the reduction of the direct and indirect costs attributed to measles infection.
The BCR is described as follows: BCR = [Reduction of direct and indirect costs of measles infection]/[Vaccine cost to increase coverage by 13.1%].
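As a sanity check, the base-case ratio can be reproduced from the nationwide estimates reported in this study; the small difference from the reported 2.48 presumably reflects rounding in the published totals. A minimal Python sketch:

```python
# Base-case BCR from the study's nationwide estimates.
benefit_usd = 404e6  # direct + indirect measles treatment costs averted (US$ 404 million)
cost_usd = 165e6     # cost of raising vaccine coverage from 86.9% to 100% (US$ 165 million)

bcr = benefit_usd / cost_usd
print(f"BCR = {bcr:.2f}")  # ~2.45, versus 2.48 reported for the base case
```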
Parameter settings for sensitivity analysis
To conduct sensitivity analysis, alternative parameter values were arranged in the probable distribution.
Because some parameters may change simultaneously, we conducted a multidimensional sensitivity analysis based on the assumption that the probabilities of occurrence of all of the parameter values are the same. Based on this assumption, we analyzed 243 combinations to estimate direct and indirect costs and to calculate the BCR. For the estimation of nationwide measles treatment costs, we applied the estimated direct and indirect costs to the assumed number of patients, following the age distribution of measles patients.
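Note that 243 = 3^5, which is consistent with, for example, five parameters each taking a low, base, and high value. The paper does not list the exact grid, so the parameter names and values in the Python sketch below are purely illustrative placeholders; only the enumeration structure is the point.

```python
# Illustrative multidimensional sensitivity analysis: enumerate every
# combination of five parameters with three values each (3**5 = 243).
from itertools import product

parameter_grid = {
    "patients_per_year":   [100_000, 150_000, 200_000],  # hypothetical values
    "vaccination_fee_usd": [35.0, 42.1, 50.0],
    "workdays_lost":       [1, 2, 3],
    "coverage_gap":        [0.10, 0.131, 0.16],
    "primary_failure":     [0.02, 0.035, 0.05],
}

combinations = []
for values in product(*parameter_grid.values()):
    params = dict(zip(parameter_grid.keys(), values))
    # A full cost model would compute a BCR here; we only show the enumeration.
    combinations.append(params)

print(len(combinations))  # 243
```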
Ethical consideration
Before beginning the survey, the ethical committee of the Chiba-Nishi General Hospital, represented by the director of the hospital, formally approved the research. Although the nature of this study meant that some potentially sensitive information was required, we took great care in maintaining the privacy of the patients involved in our investigation. In each step of data collection and processing, personally identifiable information that was no longer required was deleted. This included patient name, telephone number, patient/parental employment status, and address.
Identification of measles patients
The inpatient/outpatient ratio of the 291 extracted candidates was 1.43:1.00 (171:120). A total of 194 cases matched our criteria; 97 cases were excluded because their records did not match our diagnostic criteria.
Three pediatricians, who had been in charge of treatment for 34 families, conducted the telephone interviews. To confirm parental employment status, the physicians conducted 14 inpatient family interviews and 10 outpatient family interviews; 1 inpatient family and 5 outpatient families refused. The remaining 10 interviews were conducted to clarify the clinical course of each patient and also included the question on parental employment status. At the time of the interview, the prognoses of all the patients were confirmed. While parents did not remember the exact date of rash onset, they remembered that the rash first appeared on the face and then spread to the trunk. Most decided to discontinue treatment after spontaneous fever resolution.
In total, 194 measles patients were identified using the available data; 94 patients recovered without hospitalization; and 100 cases were admitted to hospital. Adult patients were more likely to be admitted to hospital than pediatric patients. Regarding diagnosis, 132 cases were laboratory confirmed (inpatient: outpatient = 78:54) and 62 cases were confirmed by Koplik spots (inpatient: outpatient = 22:40). For those who were diagnosed with measles only by Koplik spots, the clinical course was examined by the survey team to clinically confirm the diagnosis.
No patients were identified with severe brain damage (encephalitis) nor did we identify any measles-related deaths.
Sex and age distribution of measles patients
The male:female ratio was 1.49:1.00 (116:78). Two peaks were observed in the age distribution (1 year and >20 years; Figure 2). The highest peak was the 1-year-old cohort; the 20-29-year-old cohort was the second peak. Adult patients were more likely to be admitted to hospital than pediatric patients.
Treatment and examination
Treatment for non-hospitalized cases included oral cough drugs (n = 85), beta stimulant drugs for bronchitis cases (n = 10), infusion for dehydration cases (n = 54), and inhalation of beta stimulants for dyspnea (n = 10). Tests performed were as follows: blood test, including blood cell count and general biochemical tests (n = 54); measles IgM antibody titer (n = 54); chest X-ray for diagnosis for ruling out of pneumonia (n = 10); brain computed tomography scans for complicated febrile convulsion cases (n = 2); and abdominal ultrasonography for severe diarrhea cases (n = 12).
Patients were usually hospitalized if they were suffering from severe dehydration, systemic malaise, or dyspnea. In all hospitalized cases, patients received drip infusion for correction of dehydration (n = 100) and systemic antibiotics against complicated bacterial infection (n = 100). In cases of severe pneumonia or those complicated by bronchial asthma, oxygen therapy was provided (n = 4) in addition to beta stimulant inhalation therapy (n = 45), oral cough drugs (n = 70), and oral beta stimulants (n = 45). Systemic steroids were administered in cases of severe hypoxemia with interstitial pneumonia (n = 1). Tests performed for inpatients were as follows: blood tests, including blood cell count and general biochemical tests (n = 78); measles IgM antibody titer (n = 78); chest X-ray to rule out pneumonia (n = 45); and brain computed tomography scans for complicated cases. No cases involved administration of vitamin A or gamma globulin.
Costs
The distribution of total costs is presented in Figure 3 (outpatients) and Figure 4 (inpatients). The average treatment cost of the 94 outpatients was US$ 1,010.1 (JP¥ 120,000); for the 99 inpatients it was US$ 2,525.3 (JP¥ 300,000). The percentage of indirect costs was much higher in outpatient cases than in inpatient cases. Based on the estimated direct and indirect costs, the nationwide costs of measles treatment and universal vaccination were estimated.
Assuming that the number of measles patients is 100,000 per year nationwide, the total cost (including direct and indirect costs) of measles treatment was estimated as US$ 404 million (JP¥ 48.0 billion). Assuming that the number of children targeted for vaccination is 1,200,000, the estimated nationwide cost (including direct and indirect costs) of single-dose vaccination was US$ 165 million (JP¥ 19.6 billion) (Table 4).
Benefit cost ratio (BCR)
The estimation of BCR in the base case and sensitivity analysis is shown in Table 5. In the base case, BCR was 2.48. Sensitivity analysis showed a minimum of 2.21 and a maximum of 4.97. Multi-dimensional sensitivity analysis showed a median of 4.20, an average of 4.20 and a 95% confidence interval of 2.49-6.17.
Age distribution
Our survey revealed two peaks in the age distribution: 1 year and 20-29 years. In our data, the 1-year-old cohort was the higher of these two peaks. In the national surveillance data, the 10-14-year-old cohort was the second peak. The endemic patterns were almost the same as the endemic patterns in Japan in 1999-2003 [8].
Estimated costs
Costs as burden of disease
We estimated the total cost of measles treatment in Japan to be US$ 404 million (JP¥ 48.0 billion). This estimation contains both direct and indirect costs; however, US$ 404 million can be considered a serious economic impact on its own. Additional costs are incurred because measles is a wasting disease and measles patients require continuous treatment for dehydration and other complications.
Economic effectiveness
Our estimation of measles BCR was 2.48 in the base case and sensitivity analysis showed a minimum of 2.21 and a maximum of 4.97.
Benefit cost analyses of measles vaccination (or MMR) have been conducted in several countries. However, almost all of these analyses deal with a two-dose vaccine policy, which reflects the policies of each country. Among these analyses, the earliest measles studies that involve one-dose vaccination found the BCR in Austria to be 4.48 [19], and that in Finland to be 3.16-3.88 [20]. In a study of a hypothetical European country using single-antigen measles vaccine, the incremental benefit of the two-dose regimen compared with the one-dose regimen varied from €1.2 million to €1.83 million [21]. In a Canadian study of single-antigen measles vaccines, the incremental net benefit of a two-dose regimen compared with a one-dose regimen was CN$ 0.18 billion, and the study concluded that the two-dose regimen was favorable [22]. A study in the U.S.A. showed that the BCR for direct costs and the BCR for direct and indirect costs were 14.2 and 26.0, respectively. In this study, net savings were also calculated and found to be US$ 3.5 billion in direct costs and US$ 7.6 billion if direct and indirect costs were combined. From the perspectives of BCR and net savings, the national 2-dose MMR vaccination program was concluded to be highly cost-beneficial [23]. It is noteworthy that, at present, cost effectiveness is used to find the best options for vaccination strategy. Simons et al estimated incremental costs and cost effectiveness of user-defined vaccination strategies using a measles strategic planning tool that was developed to facilitate analysis of national immunization and surveillance data and the cost effectiveness of different vaccination strategies [24]. In a Ugandan study, Bishai et al compared supplementary immunization activities, using a dynamic stochastic model, with other interventions including malaria and African trypanosomiasis control, using the incremental cost effectiveness ratio as an indicator [25].
Although the small number of one-dose measles vaccine studies makes comparison difficult, other types of vaccine can be considered. In China, the BCR for universal hepatitis B vaccination was found to be 1.4 [26]. The BCR for vaccinating preschool children against influenza in United States was 1.93 [27]. Two adult studies in the same country found the BCR for vaccinating healthier adults against influenza to be 1.81 [28] and 2.92 [29].
Validity of BCR
Assumptions of analysis
In our analysis, we did not include fees for patients to be placed in private hospital rooms. However, since measles patients are usually isolated to control nosocomial infection, private room fees should be considered in the estimation of social costs. If the room fee of US$ 67.3 (JP¥ 8,000) at Chiba-Nishi General Hospital were applied to this study, the BCR would increase by about 0.05-0.1. Although it is controversial whether private room fees should be included in the estimation, the effect of the fees would seem to be very small.
We assumed that 2 workdays would be lost to vaccination or nursing for mild side effects from vaccination. This assumption is based on the fact that 20% of vaccine recipients suffer from mild fever for approximately 2 days. If less than 2 workdays are lost, the indirect cost of vaccination would decrease and the BCR value would increase.
Furthermore, mild cases of measles should be considered. Such patients may be treated at home and recover without visiting hospital. It is difficult to extrapolate the number that these cases represent. However, even mild cases require family care and absence from work. With these considerations, the BCR could also rise, further highlighting the benefits of vaccination.
Limitations
Our study has a number of limitations. A major limitation is our use of Matsudo City as a proxy to represent all of Japan, and our use of Chiba-Nishi General Hospital as a representative proxy for Matsudo City. We believe that this is reasonable based on the age distribution of patients discussed earlier in this discussion. However, we can speculate that less serious measles cases might be treated in general physicians' clinics, and that cases observed in Chiba-Nishi General Hospital could be the more serious measles cases. Thus, the cases that we observed in this study might be more serious than the actual situation. To avoid these limitations, a multicenter study including general physicians should be considered as a next step and should be addressed in future studies. In reality, it is difficult to obtain the data necessary for this kind of analysis because it inevitably requires individual information. A solution to this issue is something that should also be addressed in the future. We assumed that all the immunized children would have fever for two days, in spite of the fact that only 20% of vaccine recipients suffer from mild fever lasting for approximately 2 days. This may be an excessive assumption, but it does not change the conclusion of the study. If the percentage of vaccine recipients suffering from fever or the duration of fever were reduced in the model, then the BCR would increase because the indirect cost of vaccination would fall.
There is also a possibility that the telephone interviews were affected by recall bias. In the planning stage, our survey team discussed this concern. In fact, 10 interviewees could not remember the exact date of rash onset. Even though these results do not have a significantly negative impact on our survey, they should be noted.
In addition, we did not include the direct costs of over-the-counter drugs, printing materials, or human resource allocation for campaigns and so on, which may be necessary to advertise the importance of the measles vaccine. We also did not consider the indirect costs of special education for subacute sclerosing panencephalitis. If we were to include these elements, the indirect cost of vaccination would likely increase and the BCR would likely decrease. Despite the difficulties in estimating these kinds of costs, it is something that we aim to investigate in a more detailed estimation. Furthermore, we did not consider secondary vaccine failure. Including the possibility of secondary vaccine failure would provide a more precise analysis.
Finally, the present study involved purely static analysis. This study is based on the assumption that the number of measles patients would be zero if all 1-year-old cohorts received the vaccine. However, since it is well known that approximately 5% of people do not develop protective antibody levels after just one dose of measles vaccine, 100% measles vaccine coverage achievement would not establish herd immunity high enough to prevent sporadic measles transmission. In a practical sense, it is impossible for measles cases to be reduced to zero immediately after 100% coverage has been achieved. In this sense, this study analyzes stable situations like that of polio. In reality, when 1-year-old cohorts are vaccinated and herd immunity increases, the decrease in measles incidence is gradual. Over time, the costs caused by measles epidemics could be significantly reduced and eventually, even averted. This type of model, which includes an adjustment process, is called dynamic analysis and can produce a markedly different outcome from static analysis [14]. For that purpose, it is necessary to consider the influence of vaccine coverage on epidemics. For example, a study in the United States reported that regional measles epidemics raise vaccine coverage and influences the timing of vaccinations [30]. However, it is difficult to arrange datasets for dynamic analysis. Thus, we conducted static analysis using the best data that could be obtained. This is a limitation of both our study and the NIID recommendation. In the future, the interrelationship between epidemics and vaccination coverage should be clarified in a dynamic framework. Our survey involves benefit cost analysis that counts labor loss and severe subsequent complications as the only indirect cost. While benefit cost analysis is relatively easy to calculate, we did not analyze the impact of other disutilities such as mental stress. Cost utility analysis is ideal and widely utilized for this purpose [31]. In order to apply such analysis to the present study, we would need far more precise data than were available.
Conclusion
We estimated Japan's measles disease burden in the measles endemic era of 1999-2003 and evaluated the effectiveness of immunizing children soon after their first birthday. The total measles treatment cost was found to be US$ 404 million, with a BCR of 2.48 in the base case. With consideration of the impact of the measles disease burden, our study found the recommendation of immunizing children soon after their first birthday to be suitable and effective.
Additional material
Additional file 1: Technical Annex "Estimation of wage functions". A brief summary of the procedure of indirect cost estimation. See Tables 2 and 3 for reference.
"Economics",
"Medicine"
] |
Device-Free Indoor Localization Based on Data Mining Classification Algorithms
In conventional indoor localization, the target person carries a radio device or sensor, and the location of this device is taken as the location of the person. However, there are situations in which a person does not carry a device. In such cases, device-free localization (DFL) is the best solution. In this paper, we propose a radio frequency (RF)-based DFL system using data mining classification algorithms. ZigBee nodes are deployed at the sides of a rectangular area and the area is divided into square grids. First, a model is developed for each classifier by collecting a received signal strength indicator (RSSI) when a person stands at the center of grids. The RSSI of each RF link is taken as an attribute for the classifiers. Second, an online dataset is used to test the trained classifiers. RF links that contribute less to classification are removed from the attribute list. We also analyze the effect of ZigBee and WiFi interference on ZigBee-based DFL systems. Among five data mining classifiers, k-nearest neighbors and support vector machine using sequential minimal optimization achieve a classification accuracy of above 90%.
Introduction
Recently, indoor localization has shown rapid development with many methods proposed in the literature, most of which consider a target object to carry a device. However, a target object does not always carry a device. Hence, device-free localization (DFL) has been used as a way of detecting and tracking subjects without the need to carry any tag or device. It is suitable for applications such as security, surveillance, assisted living, and elderly care. DFL has the advantage of being unobtrusive while offering good privacy protection.
Researchers have proposed DFL systems using different technologies such as RF, (1)(2)(3) camera, (4) infrared, (5) and ultrasonic. (6) Radio frequency (RF)-based schemes have advantages of long range, low cost, and the ability to work through nonconducting walls and obstacles.
RF-based DFL systems are based on the fact that a received signal strength indicator (RSSI) is affected by the presence and movement of people in the monitored environment. (7,8) El-Kafrawy et al. (7) investigated the impacts of human motion on the variances of the RSSI measurement. The experimental results of Turner et al. (8) indicate that RSSI attenuation varies with the number of people and movement speed.
RF-based DFL systems can be broadly categorized into two: location-based and link-based. Location-based schemes create a radio map with the subject present in various predetermined locations, and then map a test location to one of the trained locations on the basis of observed radio signals. Seifeldin et al. (1) proposed a fingerprinting approach for DFL.
On the other hand, link-based schemes capture the statistical relationship between the RSSI of a radio link and whether the subject is on the line-of-sight (LoS). The subject's location is determined using a geometric approach of the obstructed RF links. A device-free patient localization method proposed by Faraone et al. (2) identifies the obstructed links first and infers the position of the obstacle from a geometric intersection. However, our experimental results indicate that it is difficult to identify obstructed links when longer links are obstructed at the middle.
The accuracy of device-free positioning is highly dependent on the variance of RSSI. A minor environmental change can cause a significant fluctuation in RSSI. Multipath propagation and interference increase variability. RSSI varies with the passage of time even in a static environment.
In this paper, we propose a DFL system based on data mining classification algorithms. RF nodes are deployed at the sides of a rectangular area and the area is divided into square grids. Classification algorithms are trained by collecting RSSI of all links when a person stands at the center of each grid. That is, the RSSI of a link is considered as an attribute and the RSS signature at the center of grids is taken as an instance for the data mining classification algorithms. Links that contribute less for classification are removed from the attribute list to make the classifiers run fast. We compare the performance of five data mining classification algorithms. The effect of ZigBee and WiFi interference on ZigBee-based DFL systems is also discussed.
The rest of the paper is organized as follows. A survey of related works is given in § 2. Section 3 describes our proposed method. Section 4 presents results and discussion, and § 5 concludes the paper.
Related Works
Recently, a number of device-based and device-free localization research studies have been proposed in the literature. Liu et al. (9) presented a survey of wireless indoor positioning techniques. Various measuring principles and algorithms have been presented in this paper.
Among device-based positioning systems, radio frequency identification (RFID)-based tracking and positioning systems are suitable for the care of elderly people because RFID tags are simple to wear. A positioning system integrating RFID with different sensors was proposed by Joshi et al. (10) A big challenge in this system is that elderly people may forget or be unwilling to wear the devices. Hence, DFL is the best option for this type of application.
The variation of the received signal strength due to environmental factors is discussed by Xu. (11) In this study, he considered the effect of temperature, height of node's position, type of antenna, and the electromagnetic effect of the human body on the RSSI. According to this investigation, a human body at a distance of 5 m can cause an RSS fluctuation of −3.0 to 3.0 dBm.
Xiao and Song proposed a DFL method using outlier link rejection. (3) Owing to the uncertainty of a wireless channel, certain links may be seriously polluted, resulting in error detection. The authors first identified the outlier link and then rejected this link to reduce the error induced by it.
Han et al. (12) proposed a device-free object tracking system called Twins. Twins leverages the mutual interference between approximated passive tags to enable the motion detection. They discovered an interesting phenomenon: two tags close to each other can enter into a critical state in which one tag is not readable owing to the mutual interference. Any nearby moving objects will inject more RF signals to the twin-pair tags and trigger a change from the unreadable to the readable state. Thus, motion detection is available by observing the state change of twins' tags.
Radio Tomographic Imaging (RTI) technology is used to design a DFL system. (13) However, the performance of RTI is highly dependent on the density of RF nodes. This method will not be feasible if the available RF links are few.
Other proposals use antenna arrays for device-free target detection. ArrayTrack (14) relies on an antenna array and the multiple input multiple output (MIMO) technique to extract the location information from the signal along the direct path (i.e., the first path) and mitigate the multipath effect.
Since both WiFi and ZigBee work on the same frequency band, different techniques have been proposed to address the issue of interference. The performances of ZigBee/IEEE 802.15.4 networks are often evaluated on the basis of the packet reception rate (PRR). (15,16) An adaptive channel selection algorithm has been proposed by Lavric et al. (17) to increase the PRR of ZigBee. ZigBee nodes evaluate the energy level of each ZigBee channel. Then the node will select a channel with the lowest energy level, that is, the least intensity of interference. Zacharias et al. (18) proposed a classification algorithm to detect the common external sources of interference in the 2.4 GHz frequency band.
Overview of experimental platform
We deployed 10 XBee series 2 nodes in Room 211 at the complex building of the National Taipei University of Technology, as shown in Fig. 1. The XBee module is attached to an Arduino Fio board that is based on an ATmega328P microcontroller and runs at 3.3 V and 8 MHz (Fig. 2). RF nodes are deployed at a height of 0.9 m and the deployment area is divided into 0.5 × 0.5 m² grids. All nodes have a capability of transmitting and receiving packets. When one node broadcasts a packet, all other nodes receive and record the RSSI of the link.
Training and testing classifiers
Classification is a form of data analysis that extracts models describing important data classes. (19) Data classification is a two-step process, consisting of a training step and a classification step.
In the first step, a classifier is built describing a predetermined set of data classes. This is the training phase, where a classification algorithm builds the classifier by analyzing a training set made up of database tuples and their associated class labels.
An instance, X, is represented by an n-dimensional attribute vector, X = (x_1, x_2, ..., x_n), depicting n measurements made on the tuple from n database attributes A_1, A_2, ..., A_n, respectively. Each instance X is assumed to belong to a predetermined class.
In the second step, the trained classification algorithm is used to classify a new dataset. The accuracy of a classifier on a given test data is the percentage of test set tuples that are correctly classified by the classifier.
Similar to data classification, our localization system has two phases: training and testing. The first phase is training classification algorithms using an offline dataset. When a person stands at the center of a square grid, all XBee nodes broadcast using a token ring algorithm. Nodes that receive a broadcast from a particular node record RSSI. In one round, every node broadcasts once and hence we can obtain the RSS value for all the links in the network. After the end of each round, the RSS signature of a square grid will have the form (RSS_1, RSS_2, ..., RSS_n), where RSS_i is the RSS value from link i and n is the total number of links. Hence, the RSS signature is taken as an instance, and the center of the grid where a person stands will be considered as a class for testing and training data mining classifiers. At the center of each grid, 10 RSS vectors are collected (i.e., every class has 10 instances).
For training and testing datasets, RSS from link i is taken as attribute A_i and the center of grid j is considered as class j (Table 1). The value of an attribute A_i is given in dBm and K is the number of square grids. In a 5 × 7 m² area, both training and testing datasets have 117 classes and 1170 instances (10 instances for each class). WEKA version 3.7 software is used to train and classify datasets.
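To make the data layout concrete, the sketch below assembles one instance in this format. The link count is an assumption (10 fully connected nodes give 45 bidirectional links, or 90 directed ones); the paper does not state n explicitly, and the RSSI values shown are placeholders.

```python
# One dataset row: attributes A1..An are per-link RSSI values (dBm),
# and the class label is the grid where the person stands.
NUM_LINKS = 45          # assumption: 10*(10-1)/2 bidirectional links
SAMPLES_PER_GRID = 10   # 10 RSS vectors collected per grid center

def make_instance(rss_vector, grid_id):
    """Builds one labeled instance from a single broadcast round."""
    assert len(rss_vector) == NUM_LINKS
    return {"attributes": rss_vector, "class": grid_id}

# Hypothetical round with the person standing at grid 17:
example = make_instance([-62.0] * NUM_LINKS, grid_id=17)
print(example["class"], len(example["attributes"]))  # 17 45
```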
Classification algorithms
In this section, we present a detailed description of five data mining classification algorithms: k-nearest neighbors (k-NN), support vector machine (SVM) using sequential minimal optimization (SMO), naïve Bayes (NB), naïve Bayes multinomial (NBM), and logistic regression (LogR).

Table 1. Data structure of training and testing datasets.
k-NN
k-NN is a powerful and simple method of classification that has proved successful in many applications such as medicine, face recognition, and signature recognition. (19) It is a type of instance-based learning, or lazy learning where the function is only approximated locally. Given a set of training tuples (instances), SVM, NB, and LogR will construct a generalization model before receiving a test tuple. However, k-NN waits until the last minute before doing any model construction to classify a given test tuple. It simply stores a set of training tuples and waits until a test tuple is given. When it sees the test tuple, it classifies the tuple based on its similarity to the stored training tuples.
The steps of the k-NN algorithm are as follows:
1. Find the Euclidean distance between the test sample X and the previous sample patterns. The Euclidean distance between two samples X_1 = (x_{11}, x_{12}, ..., x_{1n}) and X_2 = (x_{21}, x_{22}, ..., x_{2n}) is given by

d(X_1, X_2) = \sqrt{\sum_{i=1}^{n} (x_{1i} - x_{2i})^2}.  (1)

2. Select the k nearest patterns to the test sample, arranged in increasing order of Euclidean distance.
3. Decide a class for X by a majority vote among the k nearest neighbors.
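The three steps translate directly into code. Below is a minimal Python sketch (not the WEKA implementation used in the study); the two-attribute RSS vectors and grid labels in the usage example are hypothetical.

```python
# Minimal k-NN: Euclidean distance, k nearest training instances, majority vote.
import math
from collections import Counter

def euclidean(x1, x2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x1, x2)))

def knn_classify(test_x, training_data, k=3):
    """training_data: list of (attribute_vector, class_label) tuples."""
    # Steps 1-2: sort the training instances by distance to the test sample.
    neighbors = sorted(training_data, key=lambda t: euclidean(test_x, t[0]))[:k]
    # Step 3: majority vote among the k nearest neighbors.
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Hypothetical usage with two grids and 2-attribute RSS vectors (dBm):
train = [([-60, -55], "grid1"), ([-61, -54], "grid1"), ([-70, -40], "grid2")]
print(knn_classify([-62, -53], train, k=3))  # -> grid1
```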
SVM
SVM is a data mining classification algorithm for both linear and nonlinear data. (19) It uses nonlinear mapping to transform the original training data into a higher dimension. Within this dimension, it searches for the linear optimal separating hyperplane. The hyperplane is a decision boundary separating the tuples of one class from another.
If the original data are nonlinear, kernel functions are used to project the points into a higher-dimensional space in the hope that the separability of the data improves. Determining the best kernel for a particular data set results in a significant improvement of the performance of SVM. In this paper, we use an SVM trained with the SMO algorithm proposed by Platt. (20)
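For illustration, an equivalent model in scikit-learn is shown below; libsvm (behind SVC) solves the same dual problem with an SMO-type decomposition, although the paper itself used WEKA's SMO, so the kernel choice and parameters here are assumptions:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Scaling the RSSI attributes (dBm) before applying the kernel is standard practice.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
svm.fit(X, y)                    # X, y as assembled in the earlier dataset sketch
predictions = svm.predict(X[:5])
```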
NB classifiers are statistical classifiers based on the Bayes theorem, given by the formula

P(C_i | X) = \frac{P(X | C_i) P(C_i)}{P(X)}, (2)

where C_i indicates class i, X = (x_1, x_2, ..., x_n) is a data tuple considered as evidence, P(C_i | X) is the posterior probability of C_i conditioned on X, P(C_i) is the prior probability of C_i, and P(X | C_i) and P(X) are the posterior and prior probabilities of X, respectively. (19) According to the NB classifier, a data tuple X belongs to the class C_i that has the highest posterior probability conditioned on X. That is, X belongs to class C_i if and only if P(C_i | X) ≥ P(C_j | X) for 1 ≤ j ≤ m, j ≠ i, where m is the number of classes.
If the data set contains many attributes, it would be extremely computationally expensive to compute P(X | C_i). To reduce the computational cost, the naïve Bayes classifier assumes that the attribute values are conditionally independent of one another. Hence, the posterior probability of X given C_i is evaluated using

P(X | C_i) = \prod_{k=1}^{n} P(x_k | C_i). (3)
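In log space, the resulting decision rule is a sum of per-attribute log likelihoods plus the log prior. The sketch below assumes Gaussian per-attribute likelihoods, one common choice for continuous RSSI values (the paper does not state its density model, so this is illustrative):

```python
import numpy as np
from scipy.stats import norm

def nb_log_posterior(x, priors, means, stds):
    # priors: shape (m,); means, stds: shape (m, n) per class and attribute.
    # Returns an unnormalized log posterior per class; argmax gives the label.
    log_lik = norm.logpdf(x, loc=means, scale=stds).sum(axis=1)
    return np.log(priors) + log_lik
```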
NBM
NBM is a particular type of NB classifier. NB assumes that all of the attributes [x_k in Eq. (3)] are conditionally independent of one another given some class C_i. However, the assumption of independence is not always true. To address this issue, NBM models each P(x_k | C_i) in Eq. (3) with a multinomial distribution.
LogR
LogR assumes a parametric form for the distribution P(C_i | X) and then directly estimates its parameters from the training data. (21) The parametric model assumed by LogR is given as

P(C_i | X) = \frac{\exp(\sum_k w_{ik} x_k)}{1 + \sum_{j=1}^{m-1} \exp(\sum_k w_{jk} x_k)}, (4)

where i = 1, 2, …, m−1. When i = m,

P(C_m | X) = \frac{1}{1 + \sum_{j=1}^{m-1} \exp(\sum_k w_{jk} x_k)}, (5)

where w_{jk} denotes the weight associated with the jth class C_j and the kth input x_k. Hence, we can classify any observed RSSI vector X to the class C_i that maximizes P(C_i | X).
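This decision rule amounts to a softmax over linear scores with class C_m as the reference. A minimal sketch follows (the weight layout is an assumption of this sketch):

```python
import numpy as np

def logr_predict(x, W):
    # W has shape (m-1, n): one weight row per non-reference class, matching
    # the parametric form above; class m is the reference with score 0.
    scores = np.concatenate([W @ x, [0.0]])
    p = np.exp(scores - scores.max())   # numerically stable softmax
    p /= p.sum()                        # p[i] = P(C_i | X)
    return int(np.argmax(p))
```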
ZigBee and WiFi interference
ZigBee is a specification for a wireless standard based on IEEE 802.15.4. Nowadays, WiFi networks are everywhere in office buildings, homes, and outdoors in urban areas. The WiFi and ZigBee RF channels are shown in Fig. 3. In Taiwan, among the 16 ZigBee channels in the 2.4 GHz band, only two channels (25 and 26) do not overlap with WiFi. In other countries, such as in Europe and Japan, these two channels also overlap with channel 13 of WiFi.
Since the RF channels of ZigBee and WiFi overlap and the transmission power of WiFi transmitters is much higher than that of ZigBee nodes, WiFi strongly affects ZigBee networks. ZigBee-WiFi interference can decrease ZigBee's PRR, (15,16) increase the variance of RSSI values, and may even stop the ZigBee network from functioning.
We analyzed the effect of WiFi interference on RSSI values. Experiments were carried out on two ZigBee channels: channel 23 (which overlaps with WiFi channel 11) and channel 26 (non-overlapping). We used a spectrum analyzer to check RF transmissions at the time of the experiment. From our observation, ZigBee channel 26 was free from other wireless transmissions, but there was interference between ZigBee channel 23 and WiFi channel 11. The results indicate that WiFi interference causes an average RSSI change of −8 dBm. This value is close to the RSSI decrease owing to the presence of a human body on the LoS in ZigBee channels.
Results and Discussion
We conducted experiments in three areas of 35, 25, and 20 m². A total of 10 RF nodes are deployed in the 35 and 25 m² areas, while eight RF nodes are used in the 20 m² area. The statistical classifiers k-NN, SVM, LogR, NBM, and NB were applied to the three datasets collected from the three experiment areas. We use classification accuracy to compare the performance of these classifiers.
RSSI variance with human presence
We analyzed the effect of a human body on two links of 3 and 7 m lengths, as shown in Fig. 4. The dots in Fig. 4 indicate the test points. A person will stand at these points at the time of RF transmission.
For the 3-m link, RSSI is measured at 25 test points, among which five are on the LoS. When a person is on the LoS, RSSI decreases by at least 12 dBm, as shown in Fig. 5(a). Hence, obstructed links can easily be identified and geometric-based localization methods can be applied. For the other nine test points, which are on the LoS but at a distance of more than 2 m from the transmitter or receiver, the RSSI decrease due to human presence is very small, and it is difficult to identify whether a link is obstructed. Hence, the RSSI variance due to human presence in the middle of longer links is very small, and this induces localization error, in particular for geometric-based methods.
Performance of classification algorithms
Table 2 indicates the performance of the five classification algorithms in the three experiment areas. The values in the table give the percentage of instances correctly classified by each classifier. When the experiment area increases from 25 to 35 m² while the number of RF nodes is kept at 10, the performance of all the classifiers decreases, because a larger area reduces the density of links and the links become longer and less informative. Among the three areas, all the classifiers perform weakest in the 20 m² area with eight RF nodes (Table 2). This is because the total number of RF links in this area is 28, which is very few compared with the 45 links of the 25 and 35 m² areas. Among the five data mining classifiers, k-NN and SVM perform best in all three areas, as shown in Table 2. NB provides the lowest classification accuracy, correctly classifying only 40% of the instances collected from the 20 m² area.
Effect of WiFi interference
We conducted experiments on two ZigBee channels (channels 23 and 26) to analyze the effect of WiFi and ZigBee interference on ZigBee-based localization methods. Channel 26 is free from WiFi interference, but WiFi channel 11 transmission overlaps with that of ZigBee channel 23. The comparison of the two channels is carried out in a 5 × 5 m² area.
The performances of all the classifiers in ZigBee channel 23 decreased significantly because WiFi interferes with ZigBee, as shown in Table 3. WiFi interference causes RSSI variance, and this widens the gap between the training and testing datasets. However, if there is no interference, RSSI varies less and the training and testing datasets are likely to look similar. In ZigBee channel 23, k-NN classifies 64.8% of the instances correctly, a decrease of nearly 26% from its performance in channel 26. In addition to the decrease in the performance of the classifiers, we observed that the ZigBee network stopped functioning during the experiment on channel 23 owing to WiFi and ZigBee interference.
Performance of fingerprinting
We compared our proposed method with fingerprinting. Fingerprinting for device-free localization has been widely used in the literature. (1) For this experiment, 10 RF nodes are deployed in a 25 m² area. A passive radio map is constructed from 81 training points. For each training point, the RSS signature from 45 links is recorded; the average value of 10 RSSI records is taken for each link. The fingerprinting technique is implemented in ZigBee channels 23 and 26. Figure 6 indicates the cumulative distribution function (CDF) of the localization error. The 1.5 m accuracy of fingerprinting on channel 26 is 90%, whereas the 0.5 m (grid size) accuracy of the k-NN and SVM classifiers is more than 90% on ZigBee channel 26, as shown in Table 3. Similarly, the 1 m accuracy of fingerprinting on channel 23 is 60%, while the 0.5 m accuracy for all the classifiers except NB is above 60%, as shown in Table 3.
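The accuracy-at-threshold numbers quoted here are read off the error CDF; computing it from per-test-point errors is a one-liner, sketched below with illustrative names:

```python
import numpy as np

def accuracy_within(errors_m, threshold_m):
    # Fraction of location estimates whose error is at most threshold_m;
    # sweeping threshold_m over a range traces out a CDF curve like Fig. 6.
    return float(np.mean(np.asarray(errors_m) <= threshold_m))
```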
Conclusions and Future Works
In this paper, we have proposed a simple and efficient DFL system based on data mining classifiers. First, a model is developed for each classifier by collecting RSSI at the center of each square grid. RF links that contribute less to classification are removed from the attribute list; this is carried out on the basis of the rank given by the WEKA software. In the second phase, a different dataset is used and the test points are classified on the basis of the developed model.
Experimental results indicate that the RSSI values from longer RF links and from links on the edges of the deployment area contribute less to classification. Dense deployment of RF nodes with a link length of less than 6 m is required to achieve good localization accuracy. Among the five statistical classifiers, k-NN and SVM perform almost the same as each other and better than the other classifiers.
We also analyzed the effect of ZigBee and WiFi interference on ZigBee-based DFL. Even though the performances of all the classifiers decreased owing to interference, the localization accuracy is still much better than that of the widely used fingerprinting method.
In future work, the RF node deployment strategy will be explored in detail because the accuracy of DFL is highly dependent on the density of RF node deployment. Also, multichannel-based localization will be explored, since transmissions at several different frequency channels increase the available information for location estimation.
"Computer Science",
"Engineering"
] |
Research on realization of force feedback of vehicle remote control station steering simulation system
The automotive steering system conveys road information directly to the driver and largely determines whether the vehicle remains in a stable state; a responsive steering system therefore determines the quality of a remote control station, and steering force feedback has gradually become a research hotspot. In this paper, a torque motor is used as the main actuator of the steering system: a central processing unit analyzes the torque fed back from the remote vehicle together with the driver's input, and then issues commands that control the direction of rotation and the magnitude of the motor's output torque to realize steering force feedback and correction.
Introduction
The quality of steering simulation for a teleoperated vehicle directly determines the stability and safety of its manipulation. The quality of the steering-wheel force feedback directly affects the driver's haptic immersion and is ultimately reflected in the accuracy of remote vehicle handling. Research on realizing force feedback in a remote-control vehicle steering simulation system therefore focuses on three aspects [1]: first, modeling the automotive steering system so that the torque feedback model reflects real-vehicle behavior; second, real-time processing of the data returned by the remotely controlled vehicle; and third, the control strategy of the torque motor that provides the force feedback. Through real-time fusion of the driver and controlled-object data and computation of the control algorithm, a reasonable expected torque output is produced to achieve closed-loop stability of the "driver-vehicle-environment" system, meet the driver's need for accurate operation of the controlled object with a convincing sense of immersion, and thereby reduce driver fatigue [2].
Remote control platform steering system structure and modeling
A block diagram of the force feedback steering simulation system is shown in Fig. 1 [3]. The control system collects the driver's operating information (steering-wheel angle and torque), converts it to digital signals, and sends it to a PC. The PC combines this information with the data received from the controlled vehicle (speed, acceleration, attitude parameters, road conditions, etc.) and, through a control algorithm, derives a torque close to that of real road driving. It then sends control commands to the actuator to output this torque so that the driver feels the corresponding force.
Steering force feedback carries complex information about the vehicle's motion and the road surface. The stronger and richer the force feedback, the more accurately the driver can grasp the motion of the vehicle on the road and the more precisely the remote vehicle can be controlled. However, if the force feedback is too strong, it easily fatigues the driver and reduces handling comfort, so the feedback must be tuned to give the driver useful force cues [4]. The modeling of the system takes into account the stiffness, friction, and other parameters of the steering torque and the steering system.
The force on the steering wheel mainly comes from the interaction between the ground and the steered wheels, the friction within the system itself, and the gravity-induced effect caused by the internal displacement of the kingpin.
The relationship between the aligning torque and the lateral force of the vehicle tyre is given in Eq. (1). In the course of driving, the lateral force is generally considered to depend linearly on the slip angle, as in Eq. (2). The aligning torque produced by the kingpin is independent of vehicle speed; see Eq. (3). The self-aligning moment of the vehicle tyre is given in Eq. (4). Considering only forward driving, the road feedback is related only to the aligning torque of the front tyres and the front-wheel angle of the car, as given in Eq. (5). The resulting force is given in Eq. (6), and the torque at the front wheels is converted to the steering-wheel torque by Eq. (7). From the model of the steering resistance moment it is clear that, to obtain the resisting moment, one only needs the vehicle's yaw angular velocity, steering angle, lateral acceleration, and side-slip angle, and these values can be measured directly with velocity and angle sensors. The side-slip angle is given by β = tan⁻¹(v/u), where v and u denote the lateral and longitudinal velocities of the vehicle. By controlling the DC torque motor with these data, the torque output of the DC torque motor is realized, and the operating stability and accuracy of the system are achieved.
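As a minimal illustration of the measurement chain just described, the sketch below computes the side-slip angle from the two velocity components; the function and variable names are assumptions of this sketch, not the paper's symbols:

```python
import math

def side_slip_angle(v_lateral, v_longitudinal):
    # beta = arctan(v / u): ratio of lateral to longitudinal velocity,
    # both taken directly from the vehicle's velocity sensors.
    return math.atan2(v_lateral, v_longitudinal)

beta = side_slip_angle(0.4, 12.0)  # example sensor readings in m/s
```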
Data transmission and processing
In the process of remote control, the greater the transparency of the system, the better the operating feel, and the more fully the driver can be immersed in the driving process.
To make the system more transparent, the most important problem to solve is the system delay, which has always been a challenge in teleoperation.
At present, most systems rely on 3G or 4G network transmission [5]. Because such transmission depends on data relayed between base stations, and the transmission speed is limited by the number of users sharing the same base station, the data rate is highly uncertain: if many users are downloading data at a base station at the same time, everyone's transmission rate drops. This paper therefore uses wireless digital radio; the system structure is shown in the diagram below. First, the remotely controlled vehicle's motion and attitude data are transferred to the PC; the digital radio transmission time is 10 ms [6]. Each digital radio has its own spectrum, so there is no need to worry about uncertain transmission delay caused by channel occupation by other users. The control system analyzes and processes the data to control the rotation direction and output torque of the motor, feeding the sense of force back to the driver. At the same time, the system transmits the operator's commands to the distant controlled vehicle, where actuators execute them.
Not all of the data in the transmission process are accepted; data are accepted selectively after analysis. If the controlled vehicle suddenly hits rocks or drops into a ravine, the transmitted data will change abruptly and irregularly; such data should be processed before being fed back to the driver and combined with the video data, which reduces the driver's operating discomfort while still keeping the driver informed. The data transmission flow of the remote operating system using digital radio is shown in the figure.
Research on force feedback control strategy of remote control console Steering simulation system
In the vehicle remote control station steering simulation system, a DC torque motor drives the rotation of the steering wheel, returning road information and feel to the steering wheel and giving the driver force feedback. The performance of the steering force feedback therefore plays a decisive role in the performance of the whole station's steering system, and the control strategy of the system is essentially the control strategy of the motor. To achieve fast system response and enhance transparency, a 70LYX07 rare-earth permanent-magnet DC torque motor with low speed and high torque is adopted.
Traditional PID control is used to ensure that the operator feels a soft response while the system's torque requirements are satisfied.
Traditional PID control mainly consists of the controller and the controlled object. The PID controller takes the deviation between the given value and the output value as its control signal and performs proportional, integral, and differential operations on this deviation to control the plant. Its output is given by [7]

u(t) = K_p e(t) + K_i \int_0^t e(\tau)\,d\tau + K_d \frac{de(t)}{dt}.

The three parameters determine the performance of the PID control system. The proportional coefficient K_p directly determines the strength of the control action: increasing it speeds up the system's dynamic response, but if it is too large it harms stability and may even destabilize the system. The integral term mainly removes the steady-state error of the system, but it increases the reaction time and slows the response rate. The differential term can improve system stability and response speed at the same time: it anticipates the system error, produces a leading correction, reduces overshoot, and enhances stability. Its shortcoming is that, while improving the performance indices, it also amplifies noise; hence, in practical engineering the control mode is usually selected according to the system requirements.
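A discrete-time version of this PID law, as it might run on the control unit driving the torque motor, is sketched below; the class interface and the example gains are illustrative, not the paper's implementation:

```python
class PID:
    # Textbook discrete PID: u = Kp*e + Ki*integral(e) + Kd*de/dt.

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt                  # integral term
        derivative = (error - self.prev_error) / self.dt  # differential term
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Example: torque command loop at 1 kHz with illustrative gains.
controller = PID(kp=0.5, ki=60.0, kd=0.0, dt=0.001)
torque_cmd = controller.update(setpoint=2.0, measured=1.8)
```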
The PID parameters are tuned using empirical formulas combined with experimental verification. First, the parameters of the current loop are determined and simulated; with the selected gains of 0.5 and 60, the step response curve is shown in Fig. 4.
From the step response curve it can be seen that the system exhibits no overshoot and reaches steady state within 0.5 s, which enables rapid current changes and ensures the precision of the driver's control.
For the parameter tuning of the speed loop, gains of 1 and 5 are selected; the step response curve and the tracking behavior of the system are shown in Figs. 5 and 6.
The curves show that the step response of the system meets the requirements and that the system tracks its reference very well, providing good feedback for the driver's operation.
PI control of the current and speed loops improves the quickness and smoothness of motor starting and keeps the motor current and speed stable, avoiding system instability and disturbance caused by instantaneous current changes. It ensures gentle torque changes and good tracking performance, so changes in force magnitude are fed back to the driver quickly and accurately while remaining moderate in feel. The driver can thus clearly sense the speed of the car and changes in body posture during operation, making remote control of the car more accurate and the experience more comfortable.
Conclusions
Based on dynamic modeling and analysis of the steering system with force feedback, a control strategy for the system is proposed: by controlling the rotation direction and output torque of the DC torque motor, the operator is made to feel the steering force. Through the design of the control algorithm for the DC torque motor, the motor current and torque achieve good tracking performance, excessive deviations are corrected in time, and the influence of disturbances on the system is reduced. The operator experiences a soft, light, and realistic road feel, large impacts are avoided, and situations in which the state of the remote vehicle is grasped inaccurately during manipulation are reduced. The force feedback from the torque motor lets the driver adjust the vehicle's speed with full awareness of its running state and attitude, so that the remote vehicle can be driven as precisely and safely as in a real environment.
Fig. 1. The force feedback block diagram of the steering simulation system.
"Computer Science",
"Engineering"
] |
Intra-Individual Comparison of 124I-PET/CT and 124I-PET/MR Hybrid Imaging of Patients with Resected Differentiated Thyroid Carcinoma: Aspects of Attenuation Correction
Simple Summary: This study evaluates the qualitative and quantitative differences between 124-iodine PET/CT and PET/MR in oncologic patients with differentiated thyroid carcinoma after thyroidectomy. The impact of improved MR-based attenuation correction (AC) using a bone atlas was analysed in the PET/MR data. Despite different patient positioning and AC methods, PET/CT and PET/MR provide overall comparable results in a clinical setting. The overall number of detected 124I-avid lesions and the measured average SUVmean values for congruent lesions were higher for PET/MR when compared to PET/CT. The addition of bone to the MR-based AC in PET/MR slightly increased the SUVmean values for all detected lesions.

Abstract: Background: This study evaluates the quantitative differences between 124-iodine (124I) positron emission tomography/computed tomography (PET/CT) and PET/magnetic resonance imaging (PET/MR) in patients with resected differentiated thyroid carcinoma (DTC). Methods: N = 43 124I PET/CT and PET/MR exams were included. CT-based attenuation correction (AC) in PET/CT and MR-based AC in PET/MR with bone atlas were compared concerning bone AC in the head-neck region. AC-map artifacts (e.g., dentures) were noted. Standardized uptake values (SUV) were measured in lesions in each PET data reconstruction. Relative differences in SUVmean were calculated between PET/CT and PET/MR with bone atlas. Results: Overall, n = 111 124I-avid lesions were detected in PET/CT, while n = 132 lesions were detected in PET/MR. The median SUVmean for the n = 98 congruent lesions measured in PET/CT was 12.3; in PET/MR, the median SUVmean was 16.6 with bone in the MR-based AC. Conclusions: 124I-PET/CT and 124I-PET/MR hybrid imaging of patients with DTC after thyroidectomy provides overall comparable quantitative results in a clinical setting despite different patient positioning and AC methods. The overall number of detected 124I-avid lesions was higher for PET/MR compared to PET/CT, and the measured average SUVmean values for congruent lesions were higher for PET/MR.
Introduction
Following the introduction of hybrid positron emission tomography/computed tomography (PET/CT) in 2000, it has increasingly become the modality of choice for diagnosis and therapy monitoring of various oncologic diseases [1]. In 2010, PET/magnetic resonance (PET/MR) was added as a new hybrid imaging modality for clinical application [2][3][4][5]. PET/MR imaging inherently offers improved soft-tissue contrast and additional functional imaging parameters (e.g., diffusion-weighted imaging, DWI) compared to PET/CT imaging. For the detection and staging of differentiated thyroid carcinoma (DTC), both hybrid imaging modalities, PET/CT and PET/MR, have been used with the radiotracers 124Iodine-NaI (124I) or fluorine-18-fluorodeoxyglucose (18F-FDG) [6][7][8][9][10][11]. The high sensitivity of the PET component combined with the superior ability of MR to detect small lesions in the head/neck region may result in improved diagnostics, therapy monitoring, and 124I dosimetry of thyroid cancer compared with PET/CT [9][10][11]. PET/CT and PET/MR examinations are performed differently with regard to patient positioning and acquisition times and require fundamentally different attenuation correction (AC) methods.
For optimal PET image quality and accurate PET quantification, precise attenuation correction maps (AC maps) of the patient tissues are needed in both PET/CT and PET/MR. The accuracy and repeatability of such patient-specific AC maps are important preconditions for quantifying the in vivo biodistribution by means of PET. In PET/CT imaging, CT data (given in Hounsfield units, HU) can be converted to the linear attenuation coefficients (LAC) at the PET energy level of 511 keV by a bilinear conversion [12,13]. Thus, CT-based AC results in an AC model of the patient-individual anatomy with continuous LAC values of the patient tissues, also including bone information. The AC of human soft tissues in integrated PET/MR, on the other hand, has to rely on MR imaging, which provides proton densities and tissue-dependent spin relaxation properties, but not electron densities. Thus, MR images do not contain information about the photon attenuation magnitude and cannot be converted to LAC at 511 keV as needed for AC of PET data.
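A hedged sketch of such a bilinear conversion is given below. The breakpoint and slopes follow a published Carney-style piecewise model and are illustrative only; the actual product conversion of a given scanner [12,13] is vendor-specific:

```python
import numpy as np

def hu_to_lac_511kev(hu):
    # Piecewise-linear (bilinear) HU-to-LAC conversion at 511 keV.
    # At HU = 0 (water) this yields 0.096 cm^-1; above the soft-tissue
    # breakpoint a shallower slope models the bone/soft-tissue mixture.
    hu = np.asarray(hu, dtype=float)
    soft = 9.6e-5 * (hu + 1000.0)             # air/water/soft-tissue segment
    bone = 5.1e-5 * (hu + 1000.0) + 4.71e-2   # bone segment
    return np.where(hu <= 47.0, soft, bone)   # LAC in cm^-1

print(hu_to_lac_511kev([-1000, 0, 1000]))     # approx. [0.0, 0.096, 0.149]
```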
The established standard MR-based AC in whole-body PET/MR is a segmentation-based approach [14,15]. Here, for example, a Dixon-VIBE (volumetric interpolated breath-hold examination) MR sequence provides tissue segmentation into four different tissue compartments (background air, lung, fat, soft tissue) with predefined linear attenuation coefficients [15]. The Dixon-VIBE AC map is a discrete AC model (four-compartment AC map) and does not provide attenuation information for highly attenuating bone tissue. In the initial standard MR-based AC methods, bones are assigned the LAC of soft tissue, which may lead to a systematic underestimation of the PET signal [16]. The addition of a dedicated bone atlas to the four-compartment MR-based AC was introduced, for example, by Paulus et al. [17] and can today be considered an established method in MR-AC. The bone atlas adds the LAC of bone tissue as a fifth compartment to the Dixon-VIBE AC map (five-compartment AC map) [17,18]. This model-based approach applies continuous LAC (0.1 cm⁻¹ up to 0.2485 cm⁻¹) of six major bones (skull, spine, pelvis, and upper femurs) to the Dixon-VIBE AC map to improve MR-based AC in whole-body PET/MR. The bone model is registered to the individual Dixon-VIBE MR images of each patient. The improved MR-based AC method including the bone model has been evaluated in a clinical setting using 18F-FDG whole-body PET/MR imaging and provided improved PET lesion detection and quantification in these studies [19,20].
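The segmentation-based AC map can be pictured as a simple label-to-LAC lookup; the following sketch uses the compartment values quoted above, while the integer label coding and helper names are assumptions of this sketch (the real bone atlas assigns continuous values from 0.1 up to 0.2485 cm⁻¹ rather than one constant):

```python
import numpy as np

COMPARTMENT_LAC = {
    0: 0.0,      # background air
    1: 0.0224,   # lung
    2: 0.0854,   # fat
    3: 0.1,      # soft tissue
    4: 0.2485,   # bone, the fifth compartment added by the atlas
}

def ac_map_from_labels(label_volume):
    # Map each voxel's compartment label to its linear attenuation
    # coefficient (cm^-1), producing the AC map used in PET reconstruction.
    lut = np.zeros(max(COMPARTMENT_LAC) + 1)
    for label, lac in COMPARTMENT_LAC.items():
        lut[label] = lac
    return lut[label_volume]

labels = np.random.default_rng(1).integers(0, 5, size=(8, 8, 8))  # toy volume
ac_map = ac_map_from_labels(labels)
```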
Against this background, this intra-individual comparison study evaluates the qualitative and quantitative differences between 124 I-PET/CT and 124 I-PET/MR imaging in patients with differentiated thyroid carcinoma (DTC) after thyroidectomy. The evaluation was performed retrospectively on hybrid imaging data that were acquired with standard imaging and PET reconstruction parameters specific for PET/CT and PET/MR examinations to identify and quantify existent differences of both hybrid modalities in this specific clinical application. The second aim of this study was to analyze the specific impact of improved MR-based AC using a bone atlas in PET/MR data [17].
Patient Population
In this retrospective single-administration dual hybrid imaging study, patients with DTC after thyroidectomy who underwent 124I-PET/CT and 124I-PET/MR on clinical indication from June 2016 until January 2021 were included. All patients received a whole-body (skull base to mid-thigh) PET/CT examination and an additional dedicated head-neck (skull base to upper lung) PET/MR examination, allowing for an intra-individual comparison of the same anatomical body region. According to the established clinical standard protocol [7,21,22], PET/CT and PET/MR were acquired on the same day approximately 24 h post 124I administration (physical half-life: 4.2 days). Initially, n = 38 patients with overall n = 48 PET/CT and PET/MR head-neck examinations were considered who presented with iodine-positive lesions on both hybrid modalities. Three patients with five examinations were excluded from the study because of obvious errors in the MR-based AC map (segmentation errors in the Dixon-VIBE AC map due to metal implants, and/or registration errors of the bone atlas due to BMI > 40 kg/m²) (Figure 1). In total, n = 35 patients with n = 43 examinations were included in this study. The 23 female and 12 male patients had a mean age of 52 years (range 16-85 years) and the mean BMI was 27 kg/m² (range 18-39 kg/m²). The mean ± standard deviation of the administered 124I activity was 34.5 ± 9.9 MBq. The mean ± standard deviation of the interval between tracer administration and PET/CT scanning was 24 h and 35 min ± 2 h and 47 min. The average time span from tracer administration to PET/MR scanning was 28 h and 53 min ± 5 h and 7 min. Further information about the patient population is listed in Table 1. Written informed consent was given before the PET/CT and PET/MR examinations. All procedures performed were in accordance with the ethical standards of the institutional review board of the Medical Faculty of the University Duisburg-Essen (EC approval number: 11-4822-4825-BO) and with the principles of the Declaration of Helsinki and its later amendments.
Hybrid Image Acquisition
All patient examinations were performed on a whole-body PET/CT system (Biograph mCT, Siemens Healthcare GmbH, Erlangen, Germany) and subsequently on an integrated whole-body 3 Tesla PET/MR system (Biograph mMR, Siemens Healthcare GmbH, Erlangen, Germany).
In PET/CT, the patients were positioned with arms resting above the head. The CT measurement for AC was acquired in low-dose technique (without i.v. contrast agent) with a tube voltage of 120 kVp, tube current time product of 15 mAs, beam pitch of 1.0, and 5 mm slice thickness. The whole-body PET/CT emission data were acquired from head to thigh with five to eight bed positions and 4 min PET acquisition for each bed position. The CT data were reconstructed with a voxel size of 0.98 × 0.98 × 1.5 mm³ and a standard reconstruction kernel B30f. PET/CT is referred to as the reference standard in this comparison.
In PET/MR, patients were positioned with arms resting beside the body. The MR measurement for AC was acquired using a Dixon-VIBE sequence with the following sequence parameters: parallel imaging acceleration factor R = 5, matrix 390 × 240 with 1.3 × 1.3 mm in-plane pixel size, 136 slices each 3.0 mm, flip angle 10°, repetition time (TR) 3.8 ms, echo times (TE) TE1/TE2 1.2/2.4 ms. PET/MR data acquisition was limited to the head/neck region only and was acquired for 20 min for a single bed station. Figure 2 exemplarily shows an intra-individual side-by-side comparison of PET/CT and PET/MR data acquired in a patient with a left-cervical 124I-avid lesion (Figure 2).
Attenuation Correction
In PET/CT, the CT-based attenuation data were transformed to LAC of PET at 511 keV using the implemented product version of the HU-to-LAC conversion of the PET/CT system, resulting in a continuous AC model with patient-individual bone anatomy. In PET/MR, the generation of the MR-based AC of the patient tissues is more complex. MR images of the patient tissues acquired with the Dixon-VIBE sequence are used for AC. These MR images are then automatically segmented into four tissue compartments. For each tissue compartment, a fixed and predefined LAC is assigned: soft tissue 0.1 cm⁻¹, fat 0.0854 cm⁻¹, lung 0.0224 cm⁻¹, and background air 0.0 cm⁻¹. This discrete MR-based AC model providing four tissue compartments is referred to henceforth as the four-compartment AC map [4,5,15]. Additionally, in this study, a second MR-based AC method has been applied to all PET/MR data, adding bone tissue as a fifth tissue compartment to the high-resolution CAIPIRINHA Dixon-VIBE sequence. This additional reconstruction applies a model-based bone segmentation algorithm [17,18,23] and adds the major bones (skull, spine, pelvis, and femurs) as a fifth tissue compartment to the previously mentioned four tissue compartments (air, lung, fat, soft tissue, bone). The model-based bone atlas adds pre-registered bone mask pairs to the resulting MR-based AC with continuous LACs for bone tissue ranging from 0.1 cm⁻¹ up to 0.2485 cm⁻¹. Detailed information on this bone model is provided by Paulus et al. [17].
Image Reconstruction and Analysis
The PET/CT data were reconstructed using the ordered subsets expectation maximization (OSEM) algorithm with time-of-flight (TOF) with 21 subsets and 2 iterations, a retrospectively applied three-dimensional Gaussian filter of 3 mm, and a reconstructed (cuboid-shaped, isotropic) voxel size with a side length of 2.0 mm in each dimension. PET/MR data were retrospectively reconstructed using the image data reconstruction tool provided by the PET/MR system manufacturer (e7 tools, Siemens Molecular Imaging, Knoxville, TN, USA). Two PET reconstructions per patient and examination were generated: (1) MR-based AC without bone (four-compartment AC map) and (2) MR-based AC with bone atlas (five-compartment AC map). The PET data in both reconstructions were reconstructed using ordinary Poisson ordered subsets expectation maximization (OP-OSEM) with 3 iterations and 21 subsets and a 4 mm Gaussian filter, resulting in a matrix of 344 × 344 × 127 (resolution 2.09 × 2.09 × 2.03 mm³).
To assess and compare the bone volume and LAC values in CT-based and MR-based AC for all 43 examinations, the CT-based AC map was registered to the MR-based AC map and was cut in a longitudinal direction to match the head-neck field-of-view of MR-based AC. In the CT-based AC maps, only the skull and the spine were considered in the analysis (excluding shoulders, upper arms, ribs, sternum, and clavicles) to match the bones available in the MR-based AC with the bone model (Figure 3). Bone tissue was segmented in the CT- and MR-based AC maps, and the volume of bone tissue (relative to the total volume of the MR-based AC map) was measured as well as the LAC values.
Figure 3. While CT-based AC depicts the true bone anatomy of each patient (only skull and spine extracted from CT data for better comparability to MR-based AC), bones in the MR-based AC result from an atlas-based approach and thus represent the "best match" between the bone model and the actual patient anatomy. Note that the spine bends differently in CT-based AC and MR-based AC due to the different patient positioning in PET/CT and PET/MR exams.
An experienced radiologist and an experienced nuclear medicine specialist in consensus assessed all three PET datasets per patient (PET/CT and PET/MR with and without bone). Image reading entailed identifying up to five lesions per patient with focally increased radiotracer uptake. Volumes of interest (VOI) were placed around the detected lesions, and the standardized uptake values (SUVmean and SUVmax) were measured in all PET datasets from PET/CT and PET/MR of each patient and examination. In order to ensure accurate and identical placement of all VOIs in all PET data reconstructions, VOIs were delineated with the help of a syngo.via workstation (Siemens Healthcare GmbH).
Relative differences were calculated to evaluate the quantitative difference between the PET data from PET/CT and the PET data from PET/MR, the latter reconstructed twice, with and without the bone atlas in the MR-based AC.
Bland-Altman plots were generated to assess the general quantitative difference between PET/CT and PET/MR, where the PET/MR data were reconstructed twice, with MR-based AC with and without bone information. Descriptive statistics were used to calculate the mean values and standard deviation of all measured SUVs and the relative differences for all detected lesions in all examinations.
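A minimal sketch of these two comparison statistics (relative difference against a reference, and Bland-Altman bias with 95% limits of agreement) is given below; the function names are illustrative, not the study's analysis code:

```python
import numpy as np

def relative_diff_percent(suv_test, suv_ref):
    # Per-lesion relative difference of SUVs against the reference data set.
    suv_test, suv_ref = np.asarray(suv_test, float), np.asarray(suv_ref, float)
    return 100.0 * (suv_test - suv_ref) / suv_ref

def bland_altman(a, b):
    # Mean difference (bias) and 95% limits of agreement between two methods.
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width
```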
Results
CT- and MR-Based Bone Information in the AC Maps

The intra-individual comparison between CT-based and MR-based bone information in the AC maps of all patient data sets resulted in a measured bone volume of 4.48 ± 1.08% for CT-based AC and 3.99 ± 0.96% for MR-based AC. The range of LAC was 0.111-0.266 cm⁻¹ for CT-based AC and 0.101-0.247 cm⁻¹ for MR-based AC, and the mean LAC values were 0.134 ± 0.021 cm⁻¹ for CT-based AC and 0.128 ± 0.002 cm⁻¹ for MR-based AC (Table 2). The deviation between both AC methods is rather small, and the results for bone volume and LAC values are in good agreement between CT-based AC and MR-based AC in the head-neck region. Note that the measured bone volume of CT-based AC and the corresponding LAC are slightly higher than in the MR-based bone atlas. Here, small but highly attenuating metal implants (LAC > 0.250, mainly dental fillings and implants) in the CT-based AC account for this observation, albeit with only negligible volume in comparison to the total AC-map volume. In MR-based AC, on the other hand, dental fillings and small implants do not result in visible artifacts and thus do not contribute to higher (or lower) LAC in these regions (Table 2).

Table 2. Comparison of CT- and MR-based bone information in the attenuation correction maps (AC maps) of all patient data sets. Bone volume and corresponding linear attenuation coefficients (LAC) were measured in the patients' skull and spine (head-neck region). Note that the last column refers to the small volume and overall small percentage of voxels with high LAC in CT-based AC that is caused by dental artifacts.

In Figure 4, two patient examples with typical AC-map artifacts are given. CT-based AC maps often reveal streak and beam-hardening artifacts around metal implants (e.g., dental fillings and implants). Such artifacts might result in locally increased LAC in the CT-based AC and may cause a local overestimation of the PET signal. Nearly all patient data in this study revealed metal-based artifacts in the CT-based AC maps around dental implants; patient example 1 in Figure 4 was thus also included in this study. Larger metal implants may also cause artifacts in the MR-based AC. Here, signal voids in the MR images due to metal may lead to segmentation errors in the Dixon-VIBE MR-based AC, and regions affected by metal artifacts may then be segmented as air. Soft tissue and the right lung in patient example 2 in Figure 4 were wrongly assigned to background air due to a metal wire cerclage in the sternum, leading to a systematic underestimation of the PET signal. A further constraint in the MR-based AC of patient example 2 is the missing reference in the Dixon-VIBE MR images, due to the signal voids, for accurate registration of the bone atlas (wrong location of the spine). Therefore, patient example 2 was excluded from this evaluation.
Overall, n = 111 124I-positive lesions were detected in the PET/CT data sets of all 35 patients and all 43 examinations. In PET/MR, n = 132 124I-positive lesions were detected, independent of the choice of MR-based AC (with or without bone atlas). Thus, twenty-one iodine-positive lesions were missed in PET/CT compared to PET/MR. Image reading entailed identifying up to five lesions per patient with focally increased radiotracer uptake that were then further quantified. Thus, SUVs were measured in 98/111 PET/CT lesions and in 111/132 PET/MR lesions. For a valid comparison of congruent lesions in PET/CT and PET/MR, the mean ± standard deviation (SD), the range, and the median of the measured SUVmean and SUVmax of the 98 lesions detected both in PET/CT and in PET/MR with and without bone AC were analyzed (Table 3). The relative difference of each detected and congruent lesion (n = 98) in PET/CT vs. PET/MR with and without bone was calculated. The average relative difference in the median between PET/CT and PET/MR with bone was 6.3% for the SUVmean and 13.3% for the SUVmax. The average relative difference in the median between PET/MR without bone and PET/MR with bone was −1.1% for the SUVmean and −1.0% for the SUVmax.
Note that MR-based AC with bone resulted in slightly higher SUVs than the CT-based AC reference. MR-based AC with missing bone information tended to underestimate the PET signal compared to MR-based AC with bone atlas, however, with only very minor differences. The extremely high uptake values (>1000 in the SUV range) given in Table 3 are explainable by remaining thyroid tissue that was not fully removed in thyroidectomy. In these regions, the iodine uptake might be higher than in iodine-positive lesions or metastases, resulting in larger relative differences and standard deviations. Additionally, voxels with stochastic noise in the small VOIs may contribute to large measured differences.

Figure 4. The transaxial slice from patient example 1 (head) shows streak and beam-hardening artifacts in the CT-based AC due to dental fillings and implants (red arrows). These CT-based AC artifacts may lead to systematic overestimation of the PET signal in these regions after applying CT-based AC, while small dental fillings do not cause artifacts in MR-based AC (green arrows). Patient example 2 (thorax) shows MR-based AC artifacts due to metal implants in the sternum, resulting in segmentation errors in the Dixon-VIBE AC map (upper red arrow), while the CT-based AC here shows the wire cerclage with its higher signal intensity (HU) but without noticeable artifacts (upper green arrow). Note the additional registration error of the bone atlas in patient example 2 due to segmentation errors in the MR-based AC itself (lower red arrow), while the CT-based AC shows the spine in the correct position (lower green arrow).

Table 3. Standardized uptake values (SUVmean and SUVmax) in all measured n = 98 congruent lesions in PET/CT and PET/MR with and without bone atlas in the attenuation correction map (AC map). Relative differences between PET/CT (reference) and PET/MR with bone information, and relative differences between PET/MR with bone (reference) and PET/MR without bone information, are provided.
Comparison between PET/CT and PET/MR with Bone Atlas in MR-Based AC
The Bland-Altman plots in Figure 5A,B show the relative differences between PET/CT and PET/MR with bone atlas for the corresponding attenuation-corrected PET data. Considering all 98 congruent lesions measured in both hybrid imaging modalities, the mean increase in SUVmean is 36.2 ± 105.2% and the mean increase in SUVmax is 33.5 ± 84.8%. Note that, for better depiction, outliers are not shown in Figure 5A,B.

The Bland-Altman plots in Figure 6A,B show, within the PET/MR reconstructions, the relative differences between the MR-based AC map with bone atlas (reference) and the MR-based AC map without bone atlas in the measured SUVmean and SUVmax of 111 of the 132 iodine-positive lesions detected in PET/MR. Considering all 111 lesions, the mean SUVmean decreased by −2.0 ± 7.0% with missing bone information, and the mean SUVmax decreased by −1.5 ± 3.0%. Maximal relative differences were measured in lesions close to bone. Note that, for better depiction, outliers are not shown in Figure 6A,B.

Table 4 depicts SUVmean and SUVmax in detected iodine-positive lesions sorted according to their location within the patient (lesions close to bone, in the lungs, lymph node lesions, and thyroid lesions) for PET/CT and PET/MR (MR-based AC with bone atlas). In eight detected lesions close to the base of the skull, sternum, clavicle, and cervical vertebral bone, PET/MR resulted in a lower PET signal than measured in PET/CT (SUVmean −23.4% and SUVmax −24.1%). Moreover, in lung metastases, PET/MR showed decreased SUVs compared to the PET/CT reference (SUVmean −15.5% and SUVmax −15.4%). In thyroid lesions and lymph node lesions, PET/MR resulted in an increased PET signal compared to PET/CT (lymph node: SUVmean 10% and SUVmax 15.6%; thyroid: SUVmean 16.9% and SUVmax 23.9%).

Table 4. Detected iodine-positive lesions and measured standardized uptake values (SUVmean and SUVmax) sorted according to their location within the patient (lesions close to bone, in the lungs, lymph node lesions, and thyroid lesions) for PET/CT and PET/MR. Note that SUV measures in lesions close to bone or in the lungs are higher in PET/CT than PET/MR, while SUV measures in lymph node lesions or the thyroid are higher in PET/MR than PET/CT.

One patient example is shown in Figure 7; this patient underwent a whole-body PET/CT examination 17 h and 50 min post 124I administration and a subsequent head-neck PET/MR 19 h and 16 min post administration. Four iodine-positive and congruent lesions were detected in this patient in all three PET reconstructions. Due to the different patient positions in PET/CT compared to PET/MR, lesion #2 could not be displayed in PET/CT in this example slice (Figure 7). The relative differences in measured SUVmax values between PET/CT and PET/MR with bone AC were 27.2% in lesion #1, 143.1% in lesion #2, 13.2% in lesion #3, and 43.5% in lesion #4. The relative differences in measured SUVmax values between PET/MR with bone AC and PET/MR without bone AC were −3.5% in lesion #1, −5.4% in lesion #2, −11.3% in lesion #3, and −4.4% in lesion #4 (Figure 7).
Discussion
This retrospective single-administration, dual hybrid imaging comparison study including 35 patients with 43 examinations evaluated the intra-individual qualitative and quantitative differences of PET/CT and PET/MR of patients with DTC after thyroidectomy using the radiotracer 124 I. Additionally, the isolated impact of improved MR-based AC on PET quantification using a bone atlas was analyzed in the PET/MRI data.
In this study, the overall number of thyroid lesions in the head-neck region detected by PET/MR, n = 132, was higher than the number of lesions detected with PET/CT, n = 111. Of these detected lesions, up to n = 5 lesions per patient and hybrid imaging modality were then further evaluated quantitatively. This resulted in a subset of n = 98/111 lesions in PET/CT and n = 111/132 in PET/MR that were further quantified. Of these, only the n = 98 congruent lesions, detected in both hybrid imaging modalities in the head-neck region, were compared in an intra-individual comparison. In this comparison, the quantitative evaluation revealed higher mean and median values for SUVmax (36.3 ± 84.9%) and SUVmean (36.2 ± 105.2%) for lesions measured in PET/MR when compared to lesions measured in PET/CT, respectively.
Despite these differences in overall lesion number and quantification, the results are in general in the range of, and comparable to, other studies comparing PET/CT vs. PET/MR in other body regions and using different radiotracers [24][25][26]. In this context, it has to be noted that this study was specifically set up as a retrospective, single-administration, dual hybrid imaging study in which each hybrid imaging modality was used independently with its own established clinical protocol and PET reconstruction parameters [24]. This was intended to identify commonalities in the lesion detection performance but also to quantify existent differences in a clinical hybrid imaging setting.
Consequently, beyond unifying the post-injection starting time of hybrid imaging acquisition for both exams, no further specific efforts were made to homogenize the PET reconstruction parameters or to cross-calibrate the PET/CT and PET/MR systems used in this study [27,28]. This implies that numerous methodological factors potentially contribute to the measured quantitative differences in overall detected lesions and in the intra-individual quantitative comparison of the detected congruent lesions. These factors are discussed in more detail in the following sections.
Numerous methodological aspects of our study may contribute to the resulting qualitative and quantitative differences in overall lesion number and measured SUV values in congruent lesions. Foremost, retrospective intra-individual comparison studies with two separate examinations on the two fundamentally different hybrid imaging modalities PET/CT and PET/MR have reported differences within the quantitative range that was also found in the present study [24,26]. Two subsequent and independent hybrid examinations following a single administration of radiotracer imply that both examinations and data acquisitions start at different post-administration times. Depending on the tracer and examination protocol, this time difference may lead to differences in the biodistribution of the tracer at the time of data acquisition, which may influence lesion conspicuity and/or activity quantification. In our study, however, this aspect of different post-administration starting times was considered as far as practically possible in a clinical setting. The half-life of 124I (4 days, 4 h, and 13 min) is rather long compared to the differences in post-administration starting times of the two hybrid-imaging examinations. PET/CT exams on average started at 24:35 ± 2:47 hh:mm and PET/MR exams on average started at 28:53 ± 5:07 hh:mm post administration of the tracer. In all patients, the PET/CT exam was conducted first, followed by the PET/MR exam. Both factors, using a tracer with a rather long half-life and applying a rather homogeneous protocol regarding post-administration starting times, reduced the potential quantitative effects of tracer dynamics on the measured lesion activity.
Further methodological aspects with potential quantitative impact on the PET measurements in this dual hybrid imaging study result from the fact that each patient was examined on two different hybrid PET systems in two independent exams. Fundamental differences between the PET detectors (hardware, geometry, electronics, time resolution, sensitivity, etc.), the PET acquisition parameters (e.g., 4 min acquisition per bed position with TOF detection for PET/CT vs. 20 min without TOF for PET/MR), and the reconstruction parameters (e.g., different OSEM parameters and reconstructed spatial resolution) inherently affect the measured results. No specific efforts were made to homogenize the PET reconstruction parameters or to cross-calibrate the PET/CT and PET/MR systems used in this study [24,27,28]. Instead, in this retrospective study, the PET reconstruction parameters for each hybrid imaging modality were kept according to its own established clinical protocol [24]. Consequently, these aspects may quantitatively affect the SUV measurements in our study [27].
In addition to the PET acquisition and reconstruction parameters, the different attenuation correction methods in PET/CT and PET/MR account for differences in PET quantification. This aspect (AC in PET/CT vs. PET/MR) has been evaluated in more detail in this study as will be discussed in the following. It has to be noted, though, that only principal differences between CT-AC and MR-AC can be discussed in this context. The individual quantitative impact of each of the different AC methods on specific lesions cannot be evaluated independently of all the other parameters discussed earlier.
Fundamentally, different methods are used in PET/CT and PET/MR exams for attenuation and scatter correction and, moreover, patient positioning also differs. While in PET/CT the patient is positioned with arms up, in PET/MR the patient is positioned with the arms resting along the body [4,5,24,26]. In a head-neck exam, this means that a much larger portion of the photon-attenuating patient tissues (e.g., arms and shoulders) is located in the PET field of view during data acquisition in PET/CT compared to the PET/MR exam. This attenuation then needs to be corrected with the appropriate AC method. A further principal difference is that CT-based AC provides continuous LAC values for all tissues including bone [16], whereas MR-based AC provides only four tissue classes derived by segmentation of MR images that do not directly represent a measure of photon attenuation [5,14,15]. Furthermore, bone tissue in MR-based AC is added as a separate tissue class via an atlas model providing LACs for major bones such as the skull, the spine, and the pelvic bones [17]. The MR-based bone atlas is an efficient and robust method to add attenuation information for bone tissue in MR-based AC [17,19,20], but this model-based approach is only an approximation of the true bone anatomy, and the position and LAC of bone may vary between the model and the individual patient anatomy. Especially in patients with anatomical abnormalities (e.g., a very high or low BMI), the bone model might produce registration errors [29]. Investigating this aspect further in this study demonstrates that, in the head-neck region, the spine and skull in CT-based AC and MR-based AC are qualitatively very comparable (Figure 3). Quantitatively as well, the intra-individual comparison of bone volumes in the head-neck region revealed similar results for CT-based AC (4.48 ± 1.08%) and MR-based AC (3.99 ± 0.96%), measured as the bone volume portion of the overall attenuating tissue volume (Table 2).
The different tissue AC methods also produce different artifacts in the head-neck region that may hamper diagnostic imaging locally and, furthermore, may affect PET quantification, as shown in previous studies [14,23,30]. CT-based AC in the head-neck region frequently shows streak artifacts due to dental implants and fillings, which may lead to locally increased LAC values in the CT-based AC maps and an associated local bias towards overcorrection of lesion activities [31]; this was not observed in the MR-based AC data in this study (Figure 4). On the other hand, wire cerclages and other larger metal implants in the head-neck region may cause local signal voids in the MR-based AC maps, which in turn may lead to a local bias towards undercorrection of lesion activities (Figure 4). Nevertheless, the observed differences between CT-based AC and MR-based AC regarding bone volumes and artifacts in this study were considered minor, and their overall quantitative impact on the study results is negligible.
Beyond the qualitative and quantitative aspects of the PET/CT and PET/MR comparison discussed above, this study additionally investigated the isolated relative quantitative impact of adding bone as a further tissue class in MR-based AC. The five-compartment MR-based AC employing the bone model served as the reference [17], while the previous standard MR-based AC using only four compartments [14,15] was used for a second PET reconstruction of each patient data set. This allowed measuring the isolated impact of both MR-based AC methods on each patient data set. As a result, missing bone information in the MR-based AC did not affect the overall clinical assessment of thyroid carcinoma in this 124I-PET/MR study. Overall, 132 lesions could be detected in both PET/MR reconstructions, with and without the bone atlas in the MR-based AC. Comparing the improved five-compartment AC map (reference) with the previous standard four-compartment AC map in PET/MR, the overall difference in SUVmean due to the addition of the bone model for the 111 of 132 congruent lesions that were further quantified was small (−2.0% ± 7.0%) (Figure 6). The relative quantitative impact on lesions located close to bone was slightly higher than on soft tissue lesions located distant from bone, an observation also reported by previous studies investigating the relative impact of bone in MR-based AC [16,17,19,20]. Nevertheless, for individual lesions close to bone, relative differences in SUVmean below −10% were calculated when bone was neglected in the AC (Figure 6). Thus, the individual impact of improved MR-based AC on each patient in the context of thyroid 124I PET/MR should be considered carefully.
Hybrid imaging with PET/CT and PET/MR using the radiotracer 124I in patients with differentiated thyroid carcinoma after thyroidectomy demonstrated robust and comparable diagnostic performance of both hybrid imaging modalities in this study. The measured differences in lesion activity quantification of congruent lesions are in the range reported by previous single-administration dual hybrid imaging studies with PET/CT and PET/MR [11,32]. Such differences result from multiple methodological factors and challenges inherent to single-administration sequential PET studies on different PET systems [10], as discussed above. Our results support the examination of patients with differentiated thyroid carcinoma after thyroidectomy with either of the two hybrid imaging modalities, 124I-PET/CT or 124I-PET/MR. The results also imply that repeated exams of individual patients under therapy or for lesion dosimetry planning [6,27,33,34] should, whenever possible, be conducted with the same modality and, furthermore, on the same system to reduce methodological differences as far as possible [27]. Changes in individual lesion activity measured by PET should ideally be due to therapeutic effects only, and not to changes in the hybrid imaging modality, methodology, imaging protocol, PET reconstruction parameters, and/or AC method.
Conclusions
This retrospective single-administration dual hybrid imaging study evaluated the qualitative and quantitative differences between 124I-PET/CT and 124I-PET/MR specifically in oncologic patients with differentiated thyroid carcinoma after thyroidectomy. The intra-individual comparison of 43 whole-body PET/CT and head-neck PET/MR patient examinations demonstrated that the PET acquisitions in PET/MR showed higher sensitivity than PET/CT: the total number of 124I-avid lesions was higher for PET/MR than for PET/CT. The additional evaluation of PET/MR data corrected without and with the bone atlas demonstrated that SUVs were slightly higher with the addition of bone information in the MR-based AC for all measured 124I-avid lesions.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data are available upon request from the corresponding author.
Conflicts of Interest:
Ken Herrmann reports personal fees from Bayer, SIRTEX, Adacap, Curium, Endocyte, IPSEN, Siemens Healthineers, GE Healthcare, Amgen, Novartis, and Y-mAbs; grants and personal fees from BTG; non-financial support from ABX; and other support from Sofie Biosciences, all outside the submitted work. Lale Umutlu is a speaker for Bayer Healthcare and Siemens Healthineers and has received research funding from Siemens Healthineers and the DFG. No other potential conflicts of interest relevant to this article exist. | 10,158.4 | 2022-06-21T00:00:00.000 | [
"Medicine",
"Physics"
] |
Bioadhesive buccal gels impregnated with fluconazole: formulation, in vitro and ex vivo characterization
Article history: Received on: 16/11/2013 Revised on: 28/11/2013 Accepted on: 02/01/2014 Available online: 30/03/2014
This study describes the formulation of bioadhesive buccal gels for fluconazole delivery via the buccal mucosa. A polymer with well-defined mucoadhesive properties, Carbopol 934, was used. Carbopol-Poloxamer gels of 1% fluconazole were formulated with various absorption enhancers, namely polyethylene glycol, propylene glycol, glycerol, phosphatidylcholine, mannitol, and sodium lauryl sulphate, by the cold method. The gels were characterised for gelation temperature, bioadhesive force, pH, viscosity, drug release profile, ex vivo permeation across goat buccal mucosa, and stability. The percent drug permeated through the buccal mucosa was in the range of 62-76%. Polyethylene glycol and propylene glycol were found to be better absorption enhancers than the others, and the corresponding formulations followed zero-order release kinetics.
INTRODUCTION
Candidiasis or thrush is a fungal infection (mycosis) caused by any of the Candida species, of which Candida albicans is the most common. Superficial infections of the skin and mucosal membranes by Candida, causing local inflammation and discomfort, are common in many human populations. While clearly attributable to the presence of opportunistic pathogens of the genus Candida, candidiasis describes a number of different disease syndromes that often differ in their causes and outcomes. Commonly referred to as a yeast infection, it is also technically known as candidosis, moniliasis, and oidiomycosis (James et al., 2006). Fluconazole is a bis-triazole antifungal drug structurally related to imidazole derivatives. It is fungistatic in action and exerts its antifungal activity by altering cellular membranes, resulting in increased membrane permeability, leakage of essential elements (e.g., amino acids, potassium), and impaired uptake of precursor molecules (e.g., purine and pyrimidine precursors of DNA). Following oral dosing, fluconazole has 90% bioavailability, is almost completely absorbed within two hours, and has a half-life of 30 h. Like other imidazole- and triazole-class antifungals, fluconazole inhibits the fungal cytochrome P450 enzyme 14α-demethylase. Retentive buccal mucoadhesive formulations have proven to be a feasible alternative to conventional oral medications, as they can be readily attached to the buccal cavity, retained for a longer period of time, and removed at any time. Buccal adhesive drug delivery systems can be based on matrix tablets, films, patches, layered systems, discs, microspheres, ointments, and hydrogel systems.
Bioadhesive formulations designed for buccal application should exhibit suitable rheological and mechanical properties, including pseudoplastic or plastic flow with thixotropy, ease of application, good spreadability, appropriate hardness, and prolonged residence time in the oral cavity (Salamat-Miller, 2005).
These properties may affect the ultimate performance of the preparations and their acceptance by patients. Absorption of a drug via the mucous membranes of the oral cavity can occur in the sublingual, buccal, or local regions. The local region includes all areas other than the former two regions. In general, the oral mucosa is classified as a somewhat leaky epithelium with a permeability rank order of sublingual, buccal, and palatal, based on the thickness and degree of keratinization of the tissues.
Additionally, drug delivery via this site avoids the extensive enzymatic degradation and first-pass metabolism seen with oral administration, which are desired outcomes for the delivery of therapeutic proteins and peptides (Shojaei, 1998). Buccal drug delivery has advantages such as the abundant blood supply in the buccal area, bypassing the hepatic first-pass effect, and excellent accessibility. The major challenge, however, is that it is very difficult to apply ointments, solutions, creams, and lotions onto the oral mucosa and have their effects persist for a significant period of time, since they are easily removed by salivation, temperature, tongue movement, and swallowing. Therefore, new formulations that have suitable adhesion or adhesive time and show controlled release over a period of time are required (Shaikh et al., 2011). Mucoadhesive drug delivery systems utilize the bioadhesion of certain water-soluble polymers that become adhesive on hydration and hence can be used for targeting a drug to a particular region of the body for extended periods of time. In the present study, bioadhesive and gel-forming agents with excellent thick-gel-barrier formation characteristics and good gelation temperature, namely Carbopol and Poloxamer, were used (Singh et al., 2011).
Materials
Fluconazole was kindly provided by Cadila Pharmaceuticals Ltd., India. Carbopol 934, polyethylene glycol, propylene glycol, glycerol, mannitol, and sodium lauryl sulphate were purchased from Loba Chemie Pvt. Ltd., India. Poloxamer 188 and phosphatidyl choline were procured from Himedia Lab Pvt. Ltd., India. All other chemicals used in the study were of analytical grade and were used as received.
Drug excipient interaction studies
Drug-excipient interactions were studied by FTIR spectroscopy. Fluconazole powder was separately mixed with the various excipients in a ratio of 20:80. The resultant physical mixtures were kept in sealed glass vials and stored at different temperature conditions for 3 weeks. Two evaluation parameters were employed to study the interaction between the drug and excipients: the contents of each vial were observed for any change in their physical characteristics and for their characteristic peaks by FTIR spectrophotometry (Shimadzu 8400S). The results are depicted in Table 1. Physical changes of the drug-excipient mixtures in the solid state under different conditions are recorded in Table 2. The FTIR data showed that fluconazole and the excipients did not react with each other and remained unchanged at room temperature.
Formulation of buccal bioadhesive gel
Carbopol-Poloxamer gels of 1% fluconazole with different absorption enhancers (polyethylene glycol, propylene glycol, glycerol, phosphatidyl choline, mannitol, and sodium lauryl sulphate) were prepared by the cold method (Table 3). Poloxamer 188 (2%) was added to a minimum amount of water with gentle stirring at 5°C (Shin et al., 2000). Subsequently, the enhancer and the drug were added with the required quantity of ethanol. In a separate beaker, Carbopol 934 (5%) was stirred with water, added to the prepared Poloxamer 188 solution, and stirred continuously for 1 h. The preparation was then brought to 25 mL with distilled water and stored at room temperature.
Measurement of gelation temperature
Two grams of fluconazole gel was placed in a 100 mL transparent beaker over a modified low-temperature thermostat water bath. The poloxamer gel was heated at a rate of 5°C/5 min with continuous stirring at 50 rpm (Choi, 1998; Yong, 2001). When the magnetic bar stopped moving due to gelation, the temperature was noted and taken as the gelation temperature. The gelation temperatures of the various formulations are presented in Table 4.
Determination of bioadhesive force
The bioadhesive force of the poloxamer gel was determined using a measuring device fabricated in house (Figure 2). A section of goat buccal tissue was secured, mucosal side out, onto a glass vial (C) using a rubber band. The vials with the buccal tissue were stored at 36.5°C for 10 min. One vial with a section of tissue (E) was connected to the balance (A), and the other vial was placed on a height-adjustable pan (F). One gram of fluconazole gel (D) was applied onto the buccal tissue on the vial. Subsequently, the height of the vial was adjusted so that the poloxamer gel was placed between the mucosal tissues of the two vials. The weights (B) were raised until the two vials detached. The bioadhesive force, expressed as the detachment force (g force), was determined from the minimal weight that detached the two vials. The buccal tissue pieces were changed for each measurement (Choi, 1998; Yong, 2001). The observations are presented in Table 4.
pH of bioadhesive gel
One gram of gel was placed in a glass beaker with phosphate buffer pH 6.8 (15 mL) and allowed to swell. Thereafter, surface pH measurements were recorded at predetermined intervals of 0.5, 1, 1.5, 2, 3, 4, 5, and 6 h. The results are presented in Table 4.
Viscosity studies
The viscosity of the gels was measured using a Brookfield DV-I+ viscometer (version 5.1). The measurements were performed with a T-shape spindle (LV spindle no. S63). Viscosity was measured at different rotational speeds with a 1-minute equilibration time at each speed. Samples were applied to the spindle using a spatula to ensure that shearing of the formulation did not occur, and the viscometer was operated at room temperature. The viscosities of the various gels with different absorption enhancers are depicted in Table 5. The viscosity of a sample was obtained by multiplying the observed reading by the spindle factor: Viscosity (cps) = (300 / N) × observed reading, where N is the spindle speed in rpm.
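As a worked illustration of this conversion (the numbers are ours, purely illustrative):

$$N = 30\ \mathrm{rpm},\quad \text{observed reading} = 12 \;\Rightarrow\; \eta = \frac{300}{30} \times 12 = 120\ \mathrm{cps}.$$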
Drug release study
The drug release from the fluconazole bioadhesive gel was studied with a modified Franz diffusion cell. Specially designed diffusion tubes with an internal diameter of 2 cm, closed at one end with a cellophane membrane, were used. Two grams of gel was placed inside the tube. This assembly was immersed in a beaker containing 20 mL of phosphate buffer (pH 6.8) placed over a thermostatically controlled magnetic stirrer set at 37 ± 1°C. The contents of the beaker were stirred with a Teflon-coated bead at 300 rpm. Samples (2 mL) were withdrawn at predetermined intervals of 0.5, 1, 2, 3, 4, 5, and 6 h and replaced with phosphate buffer (pH 6.8) to maintain sink conditions. The drug content of the samples was quantified spectrophotometrically (Figure 3).
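One step worth making explicit: because each 2 mL sample is replaced with fresh buffer, cumulative release is usually corrected for the drug removed in earlier samples. The paper does not state its calculation; a standard correction is

$$Q_n = V_r\,C_n + V_s \sum_{i=1}^{n-1} C_i,$$

where $C_i$ is the drug concentration measured in the $i$-th sample, $V_r = 20$ mL is the receptor medium volume, and $V_s = 2$ mL is the sampled volume.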
Ex vivo permeation of drug through the buccal mucosa
Freshly excised goat buccal tissue was used for the ex vivo permeation studies within 2 h of removal (Patel et al., 2007). The underlying tissues were removed from the mucosa with surgical scissors, making sure that the basal membrane was retained. The prepared buccal mucosa was washed, examined for integrity, and then stored at 4°C for 24 h in phosphate-buffered saline pH 6.8 before being used for the permeation experiments. The permeation experiments were performed using modified Franz diffusion cells. Specially designed diffusion tubes (internal diameter 2 cm) with goat buccal mucosa at one end were used. Two grams of gel was placed inside the tube. This assembly was immersed in a beaker containing 20 mL of phosphate buffer (pH 6.8) placed over a thermostatically controlled magnetic stirrer at 37 ± 1°C. The contents of the beaker were stirred with a Teflon-coated bead at 600 rpm. Aliquots (2 mL) were withdrawn at predetermined intervals of 0.5, 1, 2, 3, 4, 5, and 6 h and replaced with phosphate buffer (pH 6.8) to maintain sink conditions. The drug content of the samples was analysed spectrophotometrically (Table 6).
Kinetic analysis of drug release data
The release data of all the batches were fitted to zero-order, first-order, and Higuchi equations to ascertain the kinetic model of drug release (Table 7).
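A minimal sketch of such a fit (ours, not from the paper; the time points and release values below are illustrative placeholders, not the study's data) regresses the release data against each model's linearized form and compares the coefficients of determination:

```python
import numpy as np
from scipy import stats

t = np.array([0.5, 1, 2, 3, 4, 5, 6])        # sampling times (h)
q = np.array([22, 38, 58, 72, 83, 91, 98])   # cumulative release (%), hypothetical

# Each model is linearized so that an ordinary least-squares fit applies:
models = {
    "zero-order":  (t, q),                   # Q = k0 * t
    "first-order": (t, np.log(100 - q)),     # ln(100 - Q) = ln(100) - k1 * t
    "Higuchi":     (np.sqrt(t), q),          # Q = kH * sqrt(t)
}

for name, (x, y) in models.items():
    fit = stats.linregress(x, y)
    print(f"{name:12s} slope = {fit.slope:8.3f}  R^2 = {fit.rvalue ** 2:.4f}")
# The model with the highest R^2 is taken as the best-fitting release kinetics.
```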
Stability studies
Shelf life was evaluated as a function of time and storage temperature by visual inspection of the bioadhesive gels at different time points. Stability was monitored at 4°C, 25°C, and 50°C (Table 8).
Measurement of gelation temperature
Gelation temperature is the temperature at which the liquid phase transitions to a gel phase. A gelation temperature range suitable for a bioadhesive gel is 30-36°C. If the gelation temperature is lower than 30°C, gelation occurs at room temperature, leading to difficulty in administration; if it is higher than 36°C, the gel remains liquid at physiological temperature, resulting in leakage from the buccal mucosa. Poloxamer 188 was selected for its thermosensitive gelling properties; however, solutions of Poloxamer 188 alone did not gel in the desirable range. The results thus indicated that Poloxamer 188 alone could not provide a suitable gelation temperature, whereas formulations combining Poloxamer 188 with Carbopol 934 gelled at physiological temperature. Poloxamer molecules exhibit a well-arranged zigzag configuration; with increasing temperature, this zigzag configuration may transform into a closely packed meander configuration, forming a viscous gel. The gelation temperatures of the poloxamer gels were affected by the composition and concentrations of poloxamer and Carbopol 934. Table 4 depicts the gelation temperatures of the various formulations.
Determination of bioadhesive force
Bioadhesive force is the force with which the bioadhesive gel binds to the buccal mucosal membrane. Since buccal mucosal membranes contain oligosaccharide chains, polymers with hydrophilic groups can bind strongly to these chains, resulting in a strong bioadhesive force. The stronger the bioadhesive force, the better the gel resists being flushed out of the buccal environment, helping the drug bypass the first-pass effect. However, if the bioadhesive force is excessive, the gel can damage the mucous membranes. Therefore, a bioadhesive gel must have an optimum bioadhesive force. The results suggested that the poloxamer combined with the hydrophilic Carbopol bioadhesive polymer could bind to oligosaccharide chains, resulting in moderate bioadhesive forces. Carbopol 934, which enhanced gel strength, also efficiently increased the bioadhesive force. The results are presented in Table 4.
pH of bioadhesive gel
Surface pH evaluation of oral mucosal dosage forms is an important aspect of characterisation, since an acidic or alkaline pH may cause irritation to the oral mucosa. It was therefore necessary to determine whether any extreme surface pH changes occurred with the buccal bioadhesive gel during the drug release period investigated. The pH of the gel remained fairly constant at approximately 6.7-7.0 over the 6-h test period, confirming that the pH of the gel was within the neutral conditions of saliva (pH 5.8-7.1) and that no pH extremes occurred throughout the evaluation period. These results suggested that the identified polymeric blend was suitable for oral application, owing to the acceptable pH measurements shown in Table 4.
Viscosity studies of bioadhesive gel
The viscosities of the various formulations are presented in Table 5. The highest viscosity value was observed for the S1 formulation, which contained Poloxamer and Carbopol with polyethylene glycol, at room temperature. Significant changes in the viscosity of the formulations were observed with the different absorption enhancers.
Drug release study
The release of fluconazole from the prepared Carbopol-Poloxamer gels was studied through a cellophane membrane at 37 ± 1°C in phosphate buffer (pH 6.8). The release profiles of fluconazole from the various gels are illustrated in Figure 3. The cumulative percent drug release from formulations S1, S2, S3, S4, S5, and S6 was 98.22, 98.13, 97.74, 97.61, 97.41, and 96.86%, respectively. Carbopol 934, being a hydrophilic polymer, absorbs water, thereby promoting the dissolution, and hence the release, of the drug. Moreover, the hydrophilic polymer leaches out, creating more pores and channels through which the drug can diffuse out of the gel. At increasing concentrations, Carbopol 934 controlled the drug release, most likely because extensive swelling of the polymer created a thick gel barrier that allowed drug diffusion in a controlled manner. It was apparent that the S1, S2, and S3 gels were better than the other formulations.
Permeation of drug through the buccal mucosa ex vivo
The effects of various permeation enhancers on the permeation of fluconazole through goat buccal mucosa were investigated. The enhancers polyethylene glycol, propylene glycol, phosphatidyl choline, mannitol, glycerol, and sodium lauryl sulphate were used, and enhancer efficacy was evaluated by determining the percent drug permeation of each formulation. The effects of the different enhancers are presented in Table 6. The glycols, polyethylene glycol and propylene glycol, increased the permeation rate of the drug most significantly.
Kinetic analysis of drug release data
The kinetic assessment of the drug release data of the fluconazole bioadhesive gel, presented in Table 7, indicates that the fluconazole gels showed controlled drug release.
SUMMARY AND CONCLUSION
The polymer employed in the present study for the buccal bioadhesive gel was Carbopol 934, which has well-defined mucoadhesive properties. Carbopol 934, being a hydrophilic polymer, absorbs water, thereby promoting the dissolution, and hence the release, of the drug. Moreover, the hydrophilic polymer leaches out, creating more pores and channels through which the drug can diffuse out of the gel. At increasing concentrations, Carbopol 934 controlled the drug release, most likely because extensive swelling of the polymer created a thick gel barrier that allowed drug diffusion in a controlled manner. Poloxamer 188 has an amphiphilic structure and surfactant properties that make it useful for increasing the water solubility of hydrophobic, oily substances or otherwise increasing the miscibility of substances with different hydrophobicities. In the present study, buccal bioadhesive gels of fluconazole were prepared: Carbopol-Poloxamer gels of 1% fluconazole containing different absorption enhancers were formulated by the cold method. The bioadhesive gels were characterized for gelation temperature, bioadhesive force, pH, viscosity, in vitro drug release, and ex vivo permeation; kinetic analysis of the drug release data and stability studies under different ambient conditions were also performed. The cumulative percent drug release from formulations S1, S2, S3, S4, S5, and S6 was 98.22, 98.13, 97.74, 97.61, 97.41, and 96.86%, respectively, and the percent drug permeated through the buccal mucosa was 76.23, 75.26, 75.12, 68.22, 66.25, and 62.41%, respectively. On the basis of the various in vitro and permeability studies, it was concluded that the S1, S2, and S3 formulations were comparatively better than the other formulations and followed zero-order release kinetics. Mucoadhesive semi-solid formulations overcome the poor bioavailability of conventional topical formulations by allowing application of the drug at the pathological site and increasing the contact time between formulation and mucosa. In this respect, semi-solid formulations such as the bioadhesive gel designed in the present study possess the high biocompatibility and bioadhesivity needed to adhere to the mucosa of the oral cavity.
Table 1: Position of characteristic absorptions at definite wave numbers.
Table 2: Physical changes in drug-excipient mixtures.
Table 5: Viscosity studies of bioadhesive gel.
Table 6: Permeation of drug through the buccal mucosa.
Table 7: Kinetic assessment of drug release data of fluconazole bioadhesive gel. | 4,020.8 | 2014-01-01T00:00:00.000 | [
"Materials Science",
"Medicine"
] |
Design of Classroom Discussions and the Role of the Expert in Fostering an Effective and Aware Use of Examples as a Means of Argumentation
Tasks that require students to construct examples that meet certain constraints are frequently used in mathematics education. Although examples do not serve as proofs for general statements, they have a supporting role in the preliminary stages of making sense of a certain mathematical phenomenon as well as in the development of argumentation. We hypothesize that examples of the limit-confirming type could also support the initiation of arguments for refuting an existential claim. Although students may be able to construct this type of example, they rarely use it effectively in their argumentation. In this qualitative study, we analyze how teachers could scaffold students’ awareness of the potential role of limit-confirming examples as tools for supporting argumentative processes and reflections on methods of construction of effective examples. We analyzed teacher’s actions to explain and generalize this process by identifying and categorizing key moments that could characterize an approach fostering students’ aware and effective use of examples to develop argumentations.
Rationale
Examples are basic entities in mathematics and serve as manifestations of abstract concepts used for showing, communicating, and explaining mathematical ideas. The construction and use of examples play an important role in the process of argumentation and proof, first and foremost in refuting general statements or confirming existential ones, but not restricted to these functions (e.g., Buchbinder & Zaslavsky, 2013). We hypothesize that limit-confirming examples (LCEs) can also support the initiation of argumentation for refuting an existential claim, serving as an initial step toward a proof (Cusi & Olsher, 2019). In our previous study, we showed that although students may be able to construct this type of example, they rarely use it effectively in argumentation as an initial stage of proving. Our hypothesis is that teachers can scaffold students' awareness of the potential role of LCEs and of their methods of construction. In this paper, we focus on a teaching experiment aimed at qualitatively analyzing how an expert would design and implement a classroom discussion aimed at refuting an existential statement, starting from examples and arguments developed by students. By analyzing the design and implementation of the expert's classroom discussion, we explain and generalize the process. To this end, we identify and categorize key moments that can support teachers' design of classroom discussions aimed at fostering students' aware and effective use of examples to develop argumentations and proofs.
Theoretical Background
Use of Examples for Conjecturing, Developing Arguments, and Proving
Students' construction and use of examples have been a central topic of research in mathematics education for the last decades (see, for instance, Bills & Watson, 2008; Antonini et al., 2011; Zaslavsky & Knuth, 2019). Studies that focused on the design of settings for evoked example use (Zaslavsky, 2014) and on students' internal structures of mathematical objects (Goldenberg & Mason, 2008) as manifested through their example space (Watson & Mason, 2005) stressed that "teaching effectively includes making use of tasks and interactions through which learners gain access to examples, to construction methods" (Goldenberg & Mason, 2008, p. 190). The objective in the use of these tasks is to support students' development of "the knowledge of when and how to use examples productively for conceptualization and critical thinking" (Zaslavsky, 2019, p. 254). Recognizing the limitations of the use of empirical examples in constructing proofs (Zaslavsky, 2018), researchers have proposed reconceptualizing example-based reasoning as a necessary and critical foundation in learning to prove (Zaslavsky & Knuth, 2019). Arguing why the characteristics of certain examples would work for any other example could lead from inductive example-based arguments to example-based generic arguments (Dreyfus et al., 2012), providing an initial step in the proving process. This process requires moving from the use of empirical examples to making sense of conjectures by focusing on the specifics of a generic example, which implies seeing the general case through the specifics (Aricha-Metzer & Zaslavsky, 2017).
Another fundamental element that characterizes an effective construction and use of examples in proving-related activities is the awareness of the logical status of examples in determining the validity (or lack thereof) of the mathematical statements being explored. The logical status of examples may change based on whether they are used to prove or disprove universal or existential statements, as it has been shown by Buchbinder and Zaslavsky (2013). In their comparison between how mathematicians and students (from middle school to university) select and use examples, Lynch and Lockwood (2019) highlighted two main differences in the strategies being used. First, students' and mathematicians' strategies differed in the attention (or inattention, in the case of students) paid to the relationship between the intended purpose of an example and the strategy used to choose it. Second, whereas students usually did not consider the logical form of a given conjecture, metacognitive awareness of where in a logical argument an example may be used has been a hallmark that characterized mathematicians' example-related activity.
The research studies reported in this section have clearly stressed the importance of focusing on the design and implementation of activities that can foster the learners' development of the metacognitive awareness necessary to support their sophisticated use of examples. Yet, the literature on this theme has stressed the need to deepen the investigation on the efficacy of interventions aimed at fostering students' effective construction and use of examples for reasoning, producing argumentations, and proving (Stylianides et al., 2016). The study documented in this paper is aimed at contributing to this investigation. The teaching intervention on which it is focused has been designed around the notion of limit-confirming examples.
Limit-Confirming Examples as a Means to Support Argumentation
Experts use different criteria and strategies to select examples for the purpose of exploring conjectures (Lockwood et al., 2016): examples based on their familiarity with a particular domain or with particular mathematical properties, examples that increase in complexity or generality, and examples that serve as extreme cases (Clement, 1991) or boundary cases (Ellis et al., 2013). This last strategy of generating extreme or boundary cases may be carried out by creating an auxiliary problem in which a condition of the initial problem is taken to the limit (Balk, 1971); research also refers to this strategy as "generating limit cases." Inspired by this idea, Cusi and Olsher (2019) introduced a theoretical construct aimed at identifying a category of examples that can represent effective means of argumentation, referred to here as limit-confirming examples (LCEs). This construct was developed by studying criteria that could support the design of tasks aimed at fostering students' exploration of examples to refute the existence of given types of mathematical objects in which certain characteristics coexist. Refuting an existential statement is equivalent to proving a universal statement; therefore, the tasks we designed led students to the creation of LCEs for a universal statement. We defined these as specific examples that confirm the statement while lying at the limit of its constraints.
Cusi and Olsher (2019) showed that when a task aimed at fostering students' construction of LCEs was proposed to students in a 10th grade class in Italy, most students were able to spontaneously identify LCEs as supporting examples. This result confirms the notion that these examples may be an easier starting point in the argumentation process, opening the way for the construction of a complete argumentation consistent to some extent with example-based arguments (Dreyfus et al., 2012) as means of argumentation (Stylianides et al., 2016). Nevertheless, Cusi and Olsher (2019) also reported that, in their written explanations, students did not motivate their choice of LCEs. This suggests that, because most students use LCEs spontaneously but are not aware of the reasons why these types of examples are effective, the work they do with LCEs is in their zone of proximal awareness (Mason et al., 2007). This term, introduced by Mason et al. (2007), refers to awareness that is imminent or available to learners but might not reach their attention or consciousness without specific interactions with mathematical tasks, cultural tools, colleagues, teachers, or some combination of these.
The Key Role Played by Teacher in Fostering Students' Awareness of the Construction and Use of Examples for Developing Argumentation
The previous section stressed the importance of guidance by an expert in fostering students' reflections on the use of specific examples to justify or refute statements and in making them focus their attention on the reasons underlying the effectiveness (or lack thereof) of their choice of examples as means of argumentation. These ideas are discussed by Mason (2019), who noted the importance of focusing students' attention on the structural relationships that underlie the chosen examples and of supporting them in recognizing sophisticated relationships and in deliberately shifting between the particular and the instantiation of a generality. He suggested that the teacher should focus on what students are attending to when they use examples to make conjectures and construct proofs, and on how they attend to it, stimulating students to be explicit about the generality behind the examples they use.
In the last two decades, researchers have explored the role played by the teacher in supporting students' development of argumentative processes, investigating purposeful interventions by the teacher to encourage students to verbalize their ideas, make them public, and explain them (Mueller et al., 2014). They also investigated the teachers' use of questions aimed at promoting effective argumentative interactions (Bova, 2017). Conner et al. (2014) identified categories of teachers' support for collective argumentation, distinguishing between teacher's direct contribution of argument components, questions posed to prompt the formulation of argument components, and other supportive actions used to facilitate the development of arguments.
The teacher's role in supporting students' construction and use of examples has been less investigated. Most of the research on this topic discusses general pedagogical implementations, stressing the importance of designing interactions between the teacher and students to turn sets of examples into didactic objects (Watson & Chick, 2011). Some of the research focuses on strategies that could support teachers' effective task design aimed at promoting such students' actions on examples as recalling, tinkering and gluing, complexifying, varying, and generalizing (Watson & Mason, 2005).
Arzarello et al. (2011) have studied in detail the teacher's role during classroom discussions about the creation of examples of mathematical objects satisfying certain constraints within a given mathematical domain (elementary calculus). They identified a variety of teacher actions that promote students' development of sophisticated awareness of the theoretical and logical background of the examples, which are necessary to grasp the meaning of the examples and to organize them into a web of relationships that structure the example space. Examples of these actions, framed within the cognitive apprenticeship paradigm (Collins et al., 1989), are those aimed at stimulating the students in making their thinking visible by means of various kinds of signs, at providing the correct linkages with the theoretical aspects of the activity, and at discussing the selection or rejection of examples from a logical point of view.
The study documented in this paper is aimed at combining these two foci by investigating the expert's role in supporting the students' effective and aware construction of examples as a means to develop argumentative processes.
Research Aim and Research Questions
Starting from our hypothesis about the fundamental role that an expert could play in supporting students working with LCEs in their zone of proximal awareness, we developed a teaching experiment aimed at qualitatively identifying and categorizing characteristics of classroom discussions designed and implemented by an expert to foster students' aware and effective use of LCEs to support argumentative processes. We attempted to advance toward this goal by answering the following questions: How could a classroom discussion be designed with the aim of supporting students in an effective and aware use of LCEs as a means of argumentation? Specifically, what are the key moments that characterize the structure of this discussion? And how can the expert's interventions in the discussion be characterized?
Analytical Framework
The analytical framework needed to answer our research questions is made up of two main components: (a) theoretical tools that we identified for interpreting the students' use of examples as means to support argumentation and (b) a theoretical construct for analyzing the role played by the expert guiding classroom discussions aimed at making students reflect on their use of examples to support argumentative processes.
First Component of the Analytical Framework: Focus on Argumentation
We used the definition of argumentation introduced by Stylianides et al. (2016) as "the discourse or rhetorical means (not necessarily mathematical) used by an individual or a group to convince others that a statement is true or false" (p. 316). The tool we chose to interpret students' use of examples (in particular, LCEs) as means to support argumentation and to model and analyze students' argumentative processes was a simplified Toulmin model of argumentation (Fig. 1), which represents a fundamental reference for the study of argumentative processes in mathematics (Knipping & Reid, 2019). Toulmin (2003) distinguished between the claim or conclusion being sought and the facts used as a foundation for the claim, in other words, the data. When the ground on which an argument is constructed is strong enough, the new task is to show that the step toward the original claim or conclusion is appropriate and legitimate. This requires considering hypothetical statements in the form "if D, then C," which can act as bridges and legitimize the step. These types of propositions are called warrants. The task of a warrant is to "register explicitly the legitimacy of the step involved and to refer it back to the larger class of steps whose legitimacy is being presupposed" (Toulmin, 2003, p. 92). The support consists of the assurances that stand behind the warrants and enable the author to answer why, in general, a warrant should be accepted as having authority. The diagram in Fig. 2 models the students' use of LCEs to support the development of argumentation, based on the Toulmin model.
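To make the coding scheme concrete, here is a minimal sketch (our illustration; the class and field names, and the instance below, are hypothetical, not taken from the paper) of the simplified Toulmin structure used in Figs. 1 and 2:

```python
from dataclasses import dataclass

@dataclass
class ToulminArgument:
    data: str     # the facts grounding the claim, e.g., a constructed LCE
    warrant: str  # the "if D, then C" bridge legitimizing the step
    claim: str    # the conclusion being sought

# Hypothetical instance modeling a student's LCE-based argument:
arg = ToulminArgument(
    data="Two perpendicular lines through the origin, one meeting AB at an edge",
    warrant="If one such line already meets AB at an edge, the perpendicular "
            "partner of any line meeting AB falls outside the segment",
    claim="No pair of perpendicular lines of the family intersects AB, "
          "so the existential statement is false",
)
print(arg.claim)
```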
Classroom discussions represent a fruitful context in which students can be guided to reflect about their argumentations, to analyze, compare, or contrast other argumentations, and to collectively construct complete argumentations, in which all the steps needed to reach the conclusion are made explicit (Cusi, Morselli & Sabena, 2017).
Second Component of the Analytical Framework: the M-AE AB Construct
We chose the M-AE AB construct (acronym for "Model of Aware and Effective Attitudes and Behaviors"; Cusi & Malara, 2009) to highlight the key roles played by the expert in mediating classroom interactions that support students' reflections on their argumentative processes and their development of metacognitive awareness about the role played by examples in supporting these processes. The M-AE AB construct is consistent with Mason's characterization of the approach of a teacher who is "mathematical with and in front of learners" (Mason, 2008), with the aim of educating their awareness.
M-AE AB seeks to highlight the key roles played by a teacher who deliberately behaves with the objective of "making thinking visible" (Collins et al., 1989). The objective is to guide students in focusing not only on syntactic aspects but also on the effective strategies developed during classroom activities and on meta-reflections on the actions being performed. These key roles are subdivided into two main groups: (a) those that the teacher plays posing as a learner who faces problems, in order to make the hidden thinking visible and to share the objectives, strategies, and interpretation of the results; and (b) those that the teacher plays to guide students to reflect on the approaches adopted during the activities, and to become aware of the relationships between the activities in which they are involved and the knowledge they had previously developed. Because we focus here only on the classroom discussion concerning the task, we refer to the second group of roles, which are presented in Table 1, together with indicators to support the coding process.
Participants and Research Setting
Participants were 22 Italian secondary students from a 10th grade class (aged 15-16) of a scientific lyceum, together with their teacher. We focused on the classroom discussion that followed the students' work on a task that was part of an online activity in analytical geometry, specifically on lines intersecting a segment. The online activity was proposed to the students in the middle of the school year, when they had already studied some basics of analytic geometry (in particular, coordinates of points, equations of lines, and conditions of perpendicularity and parallelism).
The activity comprised three tasks designed as interactive diagrams describing a geometrical context on a Cartesian plane, using the STEP platform (Olsher, Yerushalmy & Chazan, 2016). The interactive diagrams, built using GeoGebra, enabled participants to construct or drag a set of elements in the diagram. Participants needed to submit examples satisfying different conditions. To explore students' spontaneous construction and use of examples, we did not explain what LCEs were, nor did we ask them to find examples with specific characteristics.
Students were asked to complete the entire activity within 1 h, working in the STEP platform in their school computer lab. Following completion of the task, at the next lesson, a researcher (one of the authors) conducted a face-to-face discussion based on the students' answers. In this paper, we analyze the part of the discussion focusing on the students' answers to the third task of the activity, which was designed to foster students' construction of LCEs. We present this task in the next section.
Table 1: The second group of roles within the M-AE AB construct
Role: Guide in fostering a harmonized balance between the syntactic and the semantic levels.
Characterization: Helps students control the meaning and the syntactic correctness of the mathematical expressions they construct and, at the same time, the reasons underlying the correctness of the transformations they perform.
Indicators: Poses questions or intervenes to make students reflect on the correctness of given transformations being performed, and highlights connections between the processes that characterize the resolution of a problem and the corresponding meanings. For example: "Is this transformation correct?" "Is it legitimate to simplify this expression?" "Why did you make this transformation?" "How have we obtained this result?" "Why have we obtained this result?"

Role: Reflective guide.
Characterization: Stimulates reflections on the effective approaches carried out during class activities to make students identify effective practical and strategic models from which they can draw their inspiration in facing problems.
Indicators: Poses questions or intervenes to support students in making the meaning of effective strategies and approaches explicit. For example: "Could you explain your reasoning to your classmates?" "Is there someone who could explain your colleague's reasoning?" "She reasoned as follows: 'Since I want to obtain this kind of result, I could…'" (The teacher speaks as if she were the student, repeating the words of the students or reformulating their argument.) "Is it clear what your colleague said? She observed that…" (The teacher repeats what a student said, referring to her in the third person singular.)

Role: "Activator" of both reflective attitudes and metacognitive acts.
Characterization: Stimulates and provokes meta-level attitudes, with particular focus on the control of the global sense of processes.
Indicators: Poses questions or intervenes to support students in highlighting strengths or weaknesses of specific arguments and strategies, and to foster the sharing and comparison of different arguments and strategies. For example: "Do you agree with what your colleague said?" "Do you think it is an effective choice or strategy? Why?" "What do you think about what's written here?" "What are the differences between these answers? What do they have in common?" "Was this task difficult for you? Why?" "Would you adopt the same strategy if the problem was …?"

Data sources included the classroom discussion, which was audio recorded and transcribed. Additional data sources were the student submissions for the task, the supporting examples they constructed and attached, and their verbal explanations.
The Task
The task (Fig. 3) required students to explore an existential statement, looking for examples to justify their choice.
The existential statement on which the task is focused (the claim in bold in Fig. 3) could be formulated as follows: "There are two lines that satisfy all of the following three properties: (a) the two lines belong to the family y=mx; (b) the two lines are perpendicular to each other; (c) the two lines intersect the segment AB." It can therefore be represented as A ∧ B ∧ C, that is, a conjunction of three conditions. Proving that this statement is false requires proving that the universal statement ¬(A ∧ B ∧ C) is true, which is logically equivalent to each of the following statements: (A ∧ B) → ¬C, (A ∧ C) → ¬B, and (B ∧ C) → ¬A. Thus, to prove that the existential statement in Fig. 3 is false, it is enough to prove one of the following universal statements: (1) if two lines belong to the family y=mx and are perpendicular to each other, then they do not both intersect the segment AB; (2) if two lines belong to the family y=mx and both intersect the segment AB, then they are not perpendicular to each other; (3) if two lines are perpendicular to each other and both intersect the segment AB, then they do not both belong to the family y=mx.
The two LCEs for statement 1, presented in Fig. 4a and b, are characterized by the fact that they were constructed with two lines, perpendicular to each other, passing through the origin, with one of the two lines intersecting the segment at one of its edges. They are LCEs because, for any other example constructed in the same way (two perpendicular lines passing through the origin), if one line intersects the segment at a point other than the edges of AB, the other line certainly does not intersect AB (Fig. 4c).
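These equivalences can also be verified mechanically; the following sketch (ours, purely illustrative) checks them over all truth-value assignments:

```python
from itertools import product

for A, B, C in product([False, True], repeat=3):
    refutation = not (A and B and C)   # ¬(A ∧ B ∧ C)
    s1 = not (A and B) or not C        # (A ∧ B) → ¬C
    s2 = not (A and C) or not B        # (A ∧ C) → ¬B
    s3 = not (B and C) or not A        # (B ∧ C) → ¬A
    assert refutation == s1 == s2 == s3
print("All three universal statements are equivalent to the refutation.")
```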
Regarding statement 2 ("If two lines belong to the family y=mx and both intersect the segment AB, then they are not perpendicular to each other"), one LCE is possible (Fig. 5a). Its defining characteristic is that the two lines intersect the segment at its extreme points.
We can consider the example in Fig. 5a an LCE because all the other pairs of lines intersecting the segment and belonging to the family y=mx (Fig. 5b) form an angle smaller than the one in Fig. 5a, which means that the two lines are not perpendicular. There is also an LCE for statement 3, described in Cusi and Olsher (2019), but we do not present it here because it did not appear in the students' work in this study.
Methodology of Analysis
To answer our research questions, we used qualitative methods to analyze the role that the expert in our teaching experiment played in the classroom discussion. We coded the classroom discussion to focus on how it was structured by the expert. In particular, we investigated how the expert planned her interventions, on one hand, to foster students' reflections on their use of examples and on the characteristics of their argumentations, and on the other, to help students develop an effective and aware use of LCEs as means of supporting argumentation.
The classroom discussion was coded using the analytical tools introduced above, from three main perspectives: (a) the ways in which the expert scaffolded awareness of the potential role of examples and of their methods of construction; (b) the ways in which the expert organized the main phases of the classroom discussion to foster reflection on the characteristics of the constructed argumentations (consisting of both examples and written texts) and on the individual components of argumentative texts, as per Toulmin's model; and (c) the nuances of the roles played by the expert, in particular at the metacognitive level, with reference to the M-AE AB analytical framework; the expert played these roles consciously. We used the coding to identify and categorize key aspects of the teacher's guidance of the students.
Results
Our analysis revealed six key aspects of the teacher's guidance during the classroom discussion. These key aspects correspond to key moments in the outline of the classroom discussion. The aspects in question concern the aim of fostering the development of awareness about the role of the constructed examples (perspective a, summarized in the first column of Table 2) and the characteristics of the examples and their corresponding argumentative texts (perspective b, summarized in the second column of Table 2). For each key moment, the roles played by the expert are described (perspective c, summarized in the third column of Table 2), together with her aims in relation to perspectives a and b.
Key Moment 1: Logical Structure of the Statement Under Analysis
Key moment 1 represents the process of identification of the logical properties involved in the formulation of the statement. It is therefore focused on the claim, with reference to Toulmin's model; on the basis of this structure, students can establish whether the claim is true or false. Students found the last task (task 3) difficult, and only eight of the 22 students submitted answers to it. Of these eight students, three answered that the statement was true. Figure 6 shows the three examples submitted by these students. It is clear that the pairs of lines proposed in these examples do not satisfy the statement, because they do not satisfy property (a): they do not both belong to the family. Key moment 1 occurred at the beginning of the classroom discussion, when the expert made students reflect on the examples submitted by the students who answered that the statement was true (Fig. 6). In the following excerpt, the expert (R), after having examined the examples in Fig. 6 and found that they do not satisfy the statement, guided the students in the identification of the three properties (a, b, c) of the pairs of lines that satisfy the statement.
4) R: Which features must the two lines have to satisfy the statement?
5) S1: The coefficients [referring to the slope] should be anti-reciprocal [one slope is the opposite of the reciprocal of the other slope].
6) R: They [the lines] should be perpendicular, which corresponds to the condition you mentioned. Then?
7) S1: They should belong to the family.
In the next part of the discussion, the students, with some difficulties, identified the third property, stating that the two lines should also intersect the segment. The expert went on to schematize the statement to enable more efficient work and communication about it. Guided by the students, she summarized the three properties on the board: (a) they both belong to the family of lines; (b) they are perpendicular to each other; and (c) they both intersect the segment.
The main role R played was that of a guide in fostering a harmonized balance between the syntactic and semantic aspects. Her aim was to make students notice and attend to the logical structure of the statement, as a prerequisite to being able to systematically assess the characteristics of the examples submitted by their classmates.
Key Moment 2: Structure and Characteristics of the Constructed Examples in Light of the Structure of the Statement
This key moment represents the phase in which the expert, referring to the logical structure of the statement, directed students' reflections on the examples they constructed to make them evaluate the effectiveness of these examples in supporting the claim about the truthfulness/falseness of the statement. At this stage, the focus was not on the argumentative texts produced by the students, but on the data in relation to the claim.
The key moment occurred immediately after key moment 1 in the classroom discussion, when R and the students analyzed the three examples in Fig. 6 in light of the logical conditions they had just enunciated. S1 observed that the three examples did not satisfy property (a) because one or both lines did not belong to the family. R relaunched this observation, prompting the students to analyze the other properties as well.
Key Moment 3: Structure of the Argumentative Texts in Light of the Structure of the Statement
This key moment refers to the phase of the discussion in which the expert guided the students in observing how their written arguments were structured, with reference to the logical aspects previously discussed. It had to do with the way in which the connection between data and claim is explained. The reflections fostered in key moments 2 and 3 formed the necessary basis on which a reflection about the coherence between the chosen examples and the corresponding written argumentations could be developed (the focus of key moment 4).
Key moment 3 occurred in the part of the discussion in which R started focusing on the five answers stating that the statement was false. Although all five answers were discussed, because of space limitations, in this paper we focus only on three of them (a, b, and c in Table 3), which contained an LCE for statement 1. The written arguments submitted by the first two students (a and b in Table 3) were not characterized by the same logical structure as the chosen examples, that is, (a ∧ b) → ¬ c. Indeed, the structure of the proposed argumentations evoked statement 3: (b ∧ c) → ¬ a. Moreover, the written argument of the third student who proposed an LCE (c in Table 3) was not in agreement with the chosen example, because it can be interpreted as a reformulation of the statement: the two real numbers introduced in the argument correspond to the slopes of the two lines, which should belong to the set ]−∞, −2/3] ∪ [2, +∞[ to satisfy properties (a) and (c), and should be anti-reciprocal to also satisfy property (b).
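To make the logical structures at play easier to track, the following is a compact formalization inferred from the excerpts, using the three properties (a), (b), (c) listed by the expert; the exact wording of statement 1 is not reproduced in this section, so its form here is an assumption:
Statement 1 (inferred form): for every pair of lines, (a ∧ b) → c.
An LCE supporting the answer "the statement is false": a specific pair of lines satisfying a ∧ b ∧ ¬ c, that is, an example with structure (a ∧ b) → ¬ c.
Structure underlying the written arguments of answers a and b in Table 3: (b ∧ c) → ¬ a, which instead evokes statement 3.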
In the following excerpt, after having displayed answers a and b in Table 3, R made students reflect on the common structure of the two written arguments.
40) R: I'd look at the first two [answers] together. Why do you think I suggest looking at the first two together?
41) S5: Because in all of them it's written that we would need the "known term."
42) R: In all of them it's written that to make them perpendicular, the "known term would be needed." What does this sentence mean? How can we interpret it?
43) S6: That they don't pass through the origin.
44) S7: That they cannot pass through the center.
45) R: Do you mean the center of the family?
46) S7: Yes.
47) R: At least one of them should not pass through the center of the family, which is the origin. At least one of the two does not pass through the origin. What do they mean by "we need the known term?"
48) S7: That, therefore, the statement would no longer be true because we would need the known term.
From the beginning of the excerpt, the focus of this discussion is at the meta-level, as R plays the role of activator of metacognitive acts (line 40), fostering students' reflections on the reasons on which the grouping and display of the answers is based. Making students compare these two written arguments, she sought to make them highlight their common structure. When the students focused on a key sentence used in the two written answers (line 41), that is, "there should be the known term" and "we should know the known term," R immediately played the role of activator of reflective attitudes (line 42), fostering students' interpretation of this key sentence to make them reflect on the characteristics of these arguments. Students correctly interpreted the sentence, stressing that in both written arguments, it was said that if two lines satisfy properties (b) and (c), they cannot also satisfy property (a), that is, they do not belong to the family. In this way, they highlighted the structure underlying both arguments: (b ∧ c) → ¬ a.
Table 3 Examples and the written arguments submitted by the students who answered "no, the statement is false":
(b) Submitted example: an LCE for statement 1, similar to those in Fig. 4. Written argument: "No, because to make them perpendicular we should know the 'known term.'"
(c) Submitted example: an LCE for statement 1, similar to those in Fig. 4. Written argument: "No, because two real numbers that are anti-reciprocal and one of them is smaller than −2/3 and the other greater than 2 do not exist."
Key Moment 4: Comparison Between Examples and Corresponding Argumentative Texts
This key moment occurred during the phase of the discussion in which the expert encouraged students' reflections on the coherence between the examples they chose and the corresponding argumentative texts, in light of the analysis previously developed during key moments 2 and 3. From the point of view of Toulmin's model, the aim was to make students become aware of the importance of coherently connecting the data and the corresponding warrant in producing a complete argumentation. Therefore, the focus was on how to construct and finalize a warrant (the written text) that was coherent with the chosen data.
This key moment occurred in the part of the discussion in which R, after having fostered students' reflections on the structure of the argumentative texts proposed by the students who submitted answers a and b in Table 3 (the focus of key moment 3), asked them to compare the two examples and to reflect on the coherence between the written texts and the examples associated with them. The main roles she played were those of activator of metacognitive acts, because she made students reflect at a meta-level, leading them to investigate whether there were effective connections between the structure of the examples and that of the corresponding arguments, and of guide in fostering a harmonized balance between syntactic and semantic aspects, helping students observe that while the structure of the two written arguments was (b ∧ c) → ¬ a, the structure of the examples was (a ∧ b) → ¬ c.
Key Moment 5: Characteristics of Examples that Make Them Effective Tools When Constructing Argumentations
This key moment focused on reflections on the effectiveness of given examples as tools that support the construction of complete argumentations. Through these reflections, the characteristics of effective examples are made explicit, enabling students to become aware of the criteria that can guide both the identification and the construction of effective examples. For this reason, key moment 5 was aimed at identifying the backing in Toulmin's model.
The following excerpt is taken from the moment of the discussion when the expert enabled students' reflections on the examples proposed in answers a and b (Table 3), asking them to explain the strategy underlying the choice of considering lines that intersect segment AB in its extreme point (A or B).
61) R: Why did they choose to work with the extreme points of the segment? What strategy can be behind this choice? Is it a good strategy to consider a line passing through one extreme point of the segment, as a limit case?
62) S11: To be as close as possible to the other extreme point.
63) S12: It is the maximum point of intersection with the segment.
64) S13: I did so.
65) R: Did you give one of these two answers? Explain your reasoning.
66) S13: I thought that, since the perpendicular to the vertex didn't intersect the segment, consequently all the others wouldn't have intersected it, because the slope would have gradually increased and the perpendicular would have moved away from the segment.
In the next part of the discussion, R reformulated S13's sentence to make the meanings underlying the choice of this particular LCE explicit, stressing the fact that it encompassed all the other examples. In this excerpt, R played the role of reflective guide, as she encouraged students to make the strategic choice behind the two analyzed examples explicit (line 61), so that the role of the LCE would become part of the shared understanding of the entire class. In line 65, she played this role again, asking S13 to explain his choice to make the other students reflect on the effectiveness of working with LCEs.
Key Moment 6: Designing New Effective Examples for the Construction of Complete Argumentative Texts
In this key moment, through the collective construction of new examples with the effective characteristics previously identified, students showed that they had internalized the reflections developed in key moment 5. In Toulmin's model, key moment 6 focused on the data as coherently constructed from the warrant, because the reflections developed were about the coherence between (a) the data and (b) the way in which the connection between data and the claim was explained.
This key moment occurred toward the end of the classroom discussion, when during the analysis of answer c (Table 3), students were led to develop a comparison between the example submitted and the corresponding written text, highlighting the incoherence between them by investigating their logical structures. The expert played two main roles. First, she served as a guide in fostering a harmonized balance between semantic and syntactic aspects, as she made students reflect on the fact that the choice of the example should be in agreement with what was written, and when she stimulated students' investigation, asking them to construct another example whose structure was coherent with that of the written text. Next, she also served as a reflective guide, when she asked the students to explain this new construction to enable them to share the meanings underlying the choice of this example and the criteria for the construction of LCEs.
Conclusions
In this article, we deepened the investigation of how classroom discussions can be designed and implemented with the aim of scaffolding an aware and effective construction and use of LCEs to develop complete argumentations. We identified six key moments around which discussions may be structured starting from students' answers.
Each key moment was aimed at making students aware of one of the following fundamental key aspects in the use of LCEs as means of argumentation: (a) the logical structure of the statement being analyzed; (b) the connections between this structure and that of the examples constructed by the students; (c) the connection between this structure and the corresponding argumentative texts; (d) the coherence between examples and the corresponding argumentative texts; (e) the reasons that could guide the identification of effective examples to support the development of complete argumentative texts; and (f) the construction of such examples.
Adopting multiple perspectives has strengthened our qualitative analysis. It also enabled us to focus on how each key moment supports students' development of awareness about given aspects related to the construction and use of effective examples (perspective a), and on the corresponding construction of a coherent argumentation, whose components were gradually analyzed and reconstructed during the discussion (perspective b).
We investigated the classroom discussions designed according to this structure by analyzing the expert's role with reference to the M-AEAB construct (perspective c), which enabled us to identify the actions and interventions that an expert could deploy to assist students in developing awareness about the key aspects in the use of LCEs as means of argumentation. These actions and interventions were tailored to the individual key moment on which the discussion was focused, as shown in the third column of Table 2. In particular, our analysis showed the importance of the expert assuming roles that bring the discussion to a meta-level. Assuming roles such as reflective guide, activator of reflective attitudes and metacognitive acts, and guide in fostering a harmonized balance between the syntactic and semantic levels promoted students' reflections, enabling them to fill possible gaps in their awareness. Activation of these roles is often combined with the use of certain techniques, such as exposing students' answers to make them become a direct object of reflection, and making students compare and contrast different examples and written arguments.
Our results are consistent with the findings of Arzarello et al. (2011), which stressed the importance of aiming the teacher's actions toward developing the students' awareness of the meaning of examples at both the theoretical and logical levels. We believe that our study marks a step forward in the investigation started by Arzarello, Ascari and Sabena because we combined the perspective of supporting students building their example spaces with that of making students create and use examples to develop argumentative processes. In this way, our work contributes to the identification of the main characteristics of the classroom context advocated by Mason (2019), in which students are "in the presence of teachers and peers talking about how they perceive examples" and "immersed in a discourse that is explicit about the perceived scope of generality" (Mason, 2019, p. 345).
Identification of the six key moments according to which classroom discussions can be structured, and of the roles that an expert can play at each key moment, enabled us to outline a methodology for designing and implementing classroom discussions that scaffold students' aware and effective use of examples to develop argumentative processes. This methodology, which we call the "six key moments (SKM) approach," offers a model to which teachers can refer for designing and conducting classroom discussions focused on the construction and use of examples as means of argumentation, not only when the focus is on LCEs.
This idea opens up new directions for future research. Our next step is to explore the use of our methodology as a tool for teachers to structure the design and implementation of argumentative processes connected to other uses of examples, for example, to refute universal statements or to support the development of generic arguments. A possible critical point of our study is that during the analyzed discussion we did not capture the students' voices and interpretations in their entirety. Therefore, we will deepen the investigation of the effect of this approach on the students' development of awareness of examples as means of argumentation, focusing also on students who do not participate in classroom discussions.
Funding Open access funding provided by Università degli Studi di Roma La Sapienza within the CRUI-CARE Agreement.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
A Multimodal Data Fusion and Deep Learning Framework for Large-Scale Wildfire Surface Fuel Mapping
Accurate estimation of fuels is essential for wildland fire simulations as well as decision-making related to land management. Numerous research efforts have leveraged remote sensing and machine learning for classifying land cover and mapping forest vegetation species. In most cases that focused on surface fuel mapping, the spatial scale of interest was smaller than a few hundred square kilometers; thus, many small-scale site-specific models had to be created to cover the landscape at the national scale. The present work aims to develop a large-scale surface fuel identification model using a custom deep learning framework that can ingest multimodal data. Specifically, we use deep learning to extract information from multispectral signatures, high-resolution imagery, and biophysical climate and terrain data in a way that facilitates their end-to-end training on labeled data. A multi-layer neural network is used with spectral and biophysical data, and a convolutional neural network backbone is used to extract the visual features from high-resolution imagery. A Monte Carlo dropout mechanism was also devised to create a stochastic ensemble of models that can capture classification uncertainties while boosting the prediction performance. To train the system as a proof-of-concept, fuel pseudo-labels were created by a random geospatial sampling of existing fuel maps across California. Application results on independent test sets showed promising fuel identification performance with an overall accuracy ranging from 55% to 75%, depending on the level of granularity of the included fuel types. As expected, including the rare—and possibly less consequential—fuel types reduced the accuracy. On the other hand, the addition of high-resolution imagery improved classification performance at all levels.
Introduction
Statistics show an unprecedented increase in the size, intensity, and effects of wildfire events relative to historical records [1,2]. In 2018, the deadliest fire in California history, the Camp Fire, resulted in 85 casualties and destroyed nearly 14,000 homes and more than 500 commercial structures [2]. Exacerbated by climate change, extreme wildfires are projected by the United Nations Environment Program to further increase globally on the order of 30% by 2050 and 50% by the end of the century [3]. Wildfires are continuing to grow into a substantial threat to the well-being of communities and infrastructure despite technological and theoretical advancements in fire science. The unprecedented size and complexity of this problem call for multi-disciplinary and data-informed research on wildfire risk management (assessment, mitigation, and response).
Efficient wildfire risk management relies on accurate wildfire spread simulations. Such simulations can substantially improve the effectiveness of pre-event mitigation, as well as evacuation, rescue, and fire suppression efforts [4,5]. A key input to wildfire simulations is robust estimates of fuels that carry wildfires. Fuels are mainly categorized into the three layers of ground (litter, duff, and coarse woody debris), surface (grass, forb, shrubs, large logs), and canopy fuels (trees and snags) [6]. Although surface fuels are the primary drivers of the initiation and spread of forest fires, research in this area has matured slowly with the Anderson 13-category standard fire models [7], which served as the primary input for point-based and spread simulations until the inclusion of the 40 Scott and Burgan standard fire behavior models introduced in 2005 [8]. Surface fuel characterization methods were developed as generalizations, which did not capture the full range of temporal variability and spatial non-conformity that are inherent in surface fuel beds [6]. Therefore, input data into modern fire behavior models bear uncertainties in describing the dynamic processes that are missed in traditional fuel inventories [9]. A review of the state of the art in surface fuel mapping research indicates that most of the past research efforts were focused on site-specific semi-manual expert systems or traditional machine learning methods (e.g., decision trees and random forests) at regional scales. These systems have limited capability in leveraging big data analytics, which can be exploited to learn from spatial and spectral continuities and provide consistency of vegetation and fuels across a given landscape. As a result, such systems are difficult to generalize to large problem domains.
At the national scale, the LANDFIRE program has created comprehensive and consistent geospatial fuel products that incorporate remote sensing with machine learning, expert-driven rulesets, and quality control [10]. Although these products have created a valuable foundation for fire spread simulation efforts based on years of collective experience and domain expertise, large-scale modeling techniques are needed that deliver near-real-time on-demand fuel mapping based on georeferenced fuel data and do not rely on experience-driven expert rulesets and localized vegetation models [11]. Such models could improve the frequency and reduce the latency of fuel data, which are currently at a multi-year level. Furthermore, new techniques could allow for a comprehensive and systematic accuracy assessment using independent validation datasets, which are currently unavailable for LANDFIRE fuel maps.
To build on the success of the LANDFIRE products as a baseline and improve their capabilities, this paper describes a deep-learning-based framework that ingests multimodal data, i.e., hyperspectral satellite data, high-resolution aerial imagery, and biophysical climate and terrain data. This framework relies on a deep network of layers of learnable weights that are trained using large amounts of georeferenced labeled data that guide the formation of the data extraction pipeline.
Background. Most past efforts to map surface fuels for wildfire spread simulations utilize fire behavior fuel models, which are abstract categorizations of fuels that are used as input in fire spread simulations. The most widely adopted model in the United States was developed by Scott and Burgan, which has 40 fuel categories [8]. Most of the past work on fuel identification and mapping focused on classifying the pixels of a georeferenced map into one of the fire behavior fuel model categories. A review of the fuel identification and mapping literature shows a variety of approaches leveraging remote sensing and biophysical data. Table 1 summarizes the major studies on surface fuel identification and mapping. We note here that our paper focuses only on surface fuels. Therefore, the term fuel will be used hereafter to refer to surface fuels only.
Table 1. Major studies on surface fuel identification and mapping (reference; input data; study area; ground-truth data; fuel model):
[12] Spectral, terrain, and climate inputs; 200 × 200 km² area in British Columbia, Canada; sample of pixels from the Canadian fuel product; Canadian Fire Behavior Prediction System.
[13] Lidar and AVIRIS data; 395-km² area of the 2014 King Fire, California, USA; N/A; Anderson 13 fuel model.
[14] ASTER satellite data; 212-km² area in the Canary Islands, Spain; sample of pixels from existing fuel map; Scott and Burgan 40 fuel model.
[15] Airborne laser scanning and Indian Satellite data; two areas of 165 km² and 487 km² in Sicily, Italy; 5028 field plots; NFFL fuel model.
[16] Lidar data; 410-km² national park in Spain; 128 field plots; Prometheus fuel model.
[17] ASTER imagery; 64-km² region in the south of Italy; 17 field plots (500 pixels); modified Prometheus fuel model.
[18] Lidar data and bands of NAIP imagery; 99.5 km² of northern Sierra Nevada, California; N/A; Scott and Burgan 40 fuel model.
[19] Lidar and Airborne Thematic Mapper data; 2.3 km² of a national park in Spain; 360 field plots; Prometheus fuel model.
[20] Lidar and Quickbird data; 13-km² area in eastern Texas, USA; 27 polygons (2160 pixels); Anderson 13 fuel model.
ALS data, Landsat-8 data, and Digital Terrain Model; 3678-km² study area.
Mutlu et al. [20] showed that fusing lidar data resulted in fuel identification improvement compared with using Quickbird multispectral imagery alone. Jakubowski et al. [18] estimated the fuel map for a small region in the Sierra Nevada using lidar data and National Agricultural Imagery Program (NAIP) imagery and a variety of traditional machine learning algorithms, and concluded that although the methods predicted general fuel categories accurately, specific fuel type prediction accuracy was poor. Garcia et al. [19] reported high fuel identification accuracy using lidar and spectral data with Support Vector Machines and decision rules and attributed the cases of confusion to low lidar penetration to understory vegetation. These studies indicate that, while the inclusion of lidar data has shown promise, their limited spatial availability has restricted their applicability to small scales. Therefore, until frequent high-resolution lidar surveys become available at the national scale, this data modality might not be a useful input for large-scale mapping efforts.
The studies listed in Table 1 mostly use spectral signatures from satellite or airborne imagers, lidar data, biophysical data, or a combination thereof to identify and map fuels. In most cases, the area of interest is less than a few hundred square kilometers, and the labeled training data comprise only small numbers of points. This means that the resulting fuel identification models are localized and site-specific. The closest work to large-scale fuel identification is that of Pickel et al. [12], wherein the utility of an Artificial Neural Network model for fuel mapping was explored. They used a three-layer neural network to estimate 9 fuel types based on the Canadian Fire Behavior Prediction System for a 200 × 200 km² area in British Columbia, using a vector of 24 spectral, terrain, and climate inputs. For the target fuel labels, their work used a sample of pixels from the Canadian fuel product. The results of the study demonstrated that an overall accuracy of 60-70% could be achieved after regrouping the less-frequent fuel types.
The review of the literature in Table 1 also shows that, while different sources of imagery have been used to extract multispectral information at the points of interest, high-resolution images have not yet been used as an independent input to identify fuels. In the cases where high-resolution aerial or satellite optical images (e.g., NAIP and Quickbird imagery) have been used ([18,23,27]), only RGB pixel values were collected as scalar inputs similar to other spectral or biophysical features. In Mutlu et al. [20], although bands of 2.5-m resolution Quickbird images were used to create composite images with lidar-generated bands of height bins, variance, and canopy cover, per-pixel classification using decision rules essentially resulted in the treatment of pixels in isolation, rather than within the landscape context. Therefore, an investigation of the application of high-resolution images as distinct inputs for fuel identification is lacking and would be useful.
The literature review also reveals that none of the previous approaches provide a measure of fuel identification uncertainty. Such uncertainty is well-recognized to exist within any identification task and can be a result of a variety of sources, including randomness in the data, models, and sensors, as well as environmental noise. Knowledge of the uncertainty in the identified fuels is important as it provides a means to account for wildfire simulation uncertainties, which can be helpful in risk assessment and uncertainty-aware decision-making [28]. Furthermore, knowledge of the confidence with which fuels are predicted can be a useful tool for model diagnostics and quality control. In other words, increased uncertainty in the identification can point to underlying problems in the data and, thus, to methods that can be used to improve their accuracy. Specifically, the active learning framework in machine learning aims to improve model performance while reducing the costs associated with large-scale data labeling by actively querying ground truth labels for data points with the highest uncertainty. Providing fuel identification uncertainties would enable the use of active learning to improve fuel identification efforts in the future.
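To make the active-learning idea above concrete, the following is a minimal sketch, not part of the authors' pipeline, of how points could be queried for labeling from an ensemble of softmax predictions; the entropy-based score and the function name are illustrative assumptions.

import numpy as np

def select_points_to_label(mean_softmax, n_query=100):
    # mean_softmax: (n_points, n_classes) ensemble-averaged class probabilities.
    # Predictive entropy is used here as a simple per-point uncertainty score.
    entropy = -np.sum(mean_softmax * np.log(mean_softmax + 1e-12), axis=1)
    # Return the indices of the most uncertain points to query for ground truth.
    return np.argsort(entropy)[-n_query:]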
Research Significance. To overcome the current limitations in fuel mapping using remote sensing, this paper leverages emerging deep learning technology to examine the feasibility of creating surface fuel maps at a much larger scale than the existing fuel mapping capabilities, while quantifying fuel map uncertainty. To that end, we use a data fusion scheme to integrate spectral and biophysical features with high-resolution imagery and identify surface fuels using a single end-to-end model for the State of California. To train the model, fuel pseudo-labels are generated using a geospatial sampling of the LANDFIRE fuel maps. This information is then coupled with multimodal input data sourced from various data repositories and geospatial data products, including multispectral satellite data (bands of Landsat surface reflectance), spectral indices (e.g., Normalized Difference Vegetation Index (NDVI)), topography and terrain data (from the U.S. Geological Survey (USGS) Digital Elevation Model), and high-resolution aerial imagery from the NAIP. The proposed approach presents the following technical contributions and benefits with respect to the existing literature:
1. Creating fuel identification models that are applicable at large spatial scales (e.g., state and national levels) while integrating spectral and biophysical information with high-resolution imagery and providing a measure of model uncertainty;
2. Creating a method for anomaly detection in the existing surface fuel mapping systems (specifically the LANDFIRE products) by comparing the predicted fuels with the existing fuel labels and using the discrepancies as a starting point for quality control;
3. Providing a means to interpolate fuels for the intermediate years when fuel maps are not available within the LANDFIRE database.
A detailed analysis of the effect of the individual components of the model, the proposed stochastic ensemble approach, and the size of the dataset utilized for model training is presented in the discussions. It should be noted that the use of pseudo-labels sampled from the LANDFIRE products is to demonstrate the proof-of-concept and examine the feasibility of developing large-scale fuel identification models. However, the proposed framework is readily applicable to large collections of field data from national data collection campaigns, such as the Forest Inventory and Analysis (FIA) program of the United States Forest Service, which is not publicly available at this time [29].
Materials and Methods
Proposed System. This paper investigates the use of deep learning for large-scale surface fuel mapping. Figure 1 provides a schematic of the proposed identification model where two types of neural networks are used to extract information from different modalities of input data in a way that facilitates their fusion and end-to-end training on labeled data. For tabular data, such as biophysical metadata (e.g., terrain and climate features), seasonal spectral values (e.g., bands of Landsat multispectral imagery), and statistics of spectral indices (e.g., NDVI), a multi-layer artificial neural network (ANN) consisting of multiple fully connected neural layers is used. For image-based contextual data (i.e., high-resolution imagery), a convolutional neural network (CNN) is used, which leverages a deep hierarchy of stacked convolutional filters that constitute layers of increasingly meaningful visual representations. The number, arrangement, and characteristics of these layers can be designed for each specific task. Alternatively, a variety of state-of-the-art CNN architectures exist that can be utilized as backbones and outfitted with custom dense output layers. Examples of these architectures include VGGNet [30], ResNet [31], DenseNet [32], Inception [33], and InceptionResNet [34]. These architectures have been used in several remote sensing applications with different degrees of success [35], and the selection of the optimal architecture is known to be dependent on the characteristics of the specific task at hand. In this work, an array of architectures is trained and compared with each other to maximize fuel identification performance. To speed up and improve the learning process, a learning mode called transfer learning can be used, wherein the extracted features in state-of-the-art CNN architectures that have been pre-trained on generic large-scale computer vision datasets are repurposed and fine-tuned to the existing task. This is built upon the widely known observation that the intermediate visual features extracted in visual recognition tasks are not entirely task-specific, except for the final classification layer [36,37]. Even in cases with a large distance between the source and target tasks, transferring features from networks pre-trained on large datasets is better than random initialization [36]. This has been shown to be applicable to various remote sensing problems involving RGB imagery [38][39][40]. In remote sensing applications involving spatial data other than RGB imagery (e.g., multi/hyper-spectral data, lidar, and radar images), the number and nature of input bands are usually not consistent with such pre-trained networks. However, in the proposed approach, the application of the CNN backbone on high-resolution RGB imagery allows for the use of transfer learning. As a result, the weights of the CNN backbone are initialized from those pre-trained on the generic computer vision ImageNet dataset [41], which are then fine-tuned using the high-resolution fuel imagery herein.
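As a rough illustration of the two-branch design described above, the following sketch builds a fusion model in Keras; the layer widths, the number of tabular features, and the use of the InceptionResNetV2 backbone with ImageNet weights are illustrative assumptions rather than the authors' reported configuration (the backbone selection is discussed later in the paper).

import tensorflow as tf
from tensorflow.keras import layers, Model

def build_fusion_model(n_tabular=60, n_classes=8, image_size=128):
    # Branch 1: multi-layer ANN for spectral and biophysical tabular features.
    tab_in = layers.Input(shape=(n_tabular,), name="tabular")
    x = layers.Dense(256, activation="relu")(tab_in)
    x = layers.Dropout(0.5)(x)
    x = layers.Dense(128, activation="relu")(x)
    x = layers.Dropout(0.5)(x)

    # Branch 2: CNN backbone pre-trained on ImageNet for high-resolution chips.
    img_in = layers.Input(shape=(image_size, image_size, 3), name="image")
    backbone = tf.keras.applications.InceptionResNetV2(
        include_top=False, weights="imagenet",
        input_shape=(image_size, image_size, 3))
    y = backbone(img_in)
    y = layers.GlobalAveragePooling2D()(y)
    y = layers.Dense(128, activation="relu")(y)
    y = layers.Dropout(0.5)(y)

    # Fusion: concatenate both branches before the softmax prediction layer.
    z = layers.Concatenate()([x, y])
    out = layers.Dense(n_classes, activation="softmax")(z)
    return Model(inputs=[tab_in, img_in], outputs=out)

model = build_fusion_model()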
At the conclusion of each neural network branch, the computed features are concatenated before the final prediction layer to fuse the multimodal data. The optimal share of the branches in the data fusion will be determined through training in terms of the weights of the prediction layers. This end-to-end architecture is shown in Figure 1, which is built upon the established notion that different modalities of sensing the same subject usually provide complementary information, enabling deep learning methods to produce more reliable predictions. Details on the network and data fusion design are presented in a later section. Training the same machine learning model on different sets of observations from the same population has been shown to result in a degree of variance in the resulting models [42]. Furthermore, aside from the CNN backbone that is initialized from pre-trained weights according to transfer learning, all other neural network layers are randomly initialized, resulting in slightly different models, some of which may not provide optimal fuel identification results. To improve the accuracy and robustness of the model in response to variations in observation subsets and training randomness, and to provide a measure of model uncertainty, a stochastic ensemble of models was created, which is depicted in Figure 2.
In the proposed model, the dataset is first randomly split into multiple subsets for training and validation, following the widely used k-fold cross-validation scheme. A separate randomly initialized model is trained on each of the training subsamples to capture the variance from the randomness in the observations. Subsequently, each of these k models is further randomized in inference mode using a process called Monte Carlo dropout [43]. Dropout refers to a regularization technique in neural networks that was originally proposed to combat overfitting by applying a binary mask drawn from a Bernoulli distribution, which has the effect of randomly dropping some of the nodes in the network during training [44]. This, in turn, is known to prevent complex co-adaptation between nodes and can result in improved robustness of trained models [44].
Monte Carlo dropout [43] has been proposed as a mechanism specific to neural networks that aims to quantify machine learning model uncertainties and improve their robustness. In this process, dropout layers embedded before every dense layer in the network are activated at testing time, and the model is applied m times on each observation, resulting in m different neural network models where a fraction of the nodes are deactivated at random, hence creating a stochastic ensemble of many slightly perturbed models. Gal and Ghahramani [43] demonstrated that using the mentioned dropout scheme at testing time provides an approximation of Bayesian inference over the neural network weights that is computationally efficient. This technique has been successfully utilized to derive model uncertainty in visual scene understanding [45], medical imaging [46], robotics, and autonomous driving [47]. However, aside from a few recent applications in road segmentation from synthetic aperture radar [48], ocean hydrographic profiles [49], lunar crater detection [50], and urban image segmentation [51], its applications in remote sensing and especially in wildfires have been limited.
To account for the variations from observation subsets and training randomness by means of the stochastic model ensemble proposed in this work, an overall array of k × m softmax scores is created for each data point. Lastly, the average of the softmax scores is used to arrive at the final fuel identification, and the variance of the probability scores provides a measure of model uncertainty. Figure 2 depicts this process and its components schematically. In this figure, the arrows at the conclusion of the process denote the softmax scores from each one of the individual models acting on each pixel's inputs, whose average and variance determine the fuel type classification and its uncertainty, respectively.
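A minimal sketch of the stochastic ensemble inference described above is given below, assuming models holds the k cross-validated Keras models from the earlier sketch; the value of m and the variance-based uncertainty summary are illustrative choices.

import numpy as np

def mc_ensemble_predict(models, inputs, m=20):
    scores = []
    for model in models:                 # k models from k-fold training
        for _ in range(m):               # m stochastic forward passes per model
            # training=True keeps the dropout layers active at inference time.
            p = model(inputs, training=True).numpy()
            scores.append(p)
    scores = np.stack(scores)            # shape: (k*m, n_points, n_classes)
    mean_p = scores.mean(axis=0)         # averaged softmax scores
    fuel_class = mean_p.argmax(axis=1)   # final fuel identification
    uncertainty = scores.var(axis=0).sum(axis=1)  # per-point variance measure
    return fuel_class, mean_p, uncertainty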
Area of Study. To investigate the feasibility of creating a large-scale fuel identification model using deep learning, the state of California was selected as the area of study for data extraction and model training. To train the system, fuel labels were generated by a random geospatial sampling of the 2016 LANDFIRE Scott and Burgan 40 fuel model. An initial sample of 40,000 points was generated to provide a large training and validation dataset to test the feasibility of training large-scale deep learning models. However, smaller subsets of data were also later created to study the effects of the number of training samples on the performance of the model. This dataset is then divided into training and validation subsets for cross-validation as previously described. Figure 3a depicts the spatial distribution of the collected training samples. To create a means for evaluating the developed models, a random test set was also independently generated. To avoid the proximity and correlation of training and testing samples that could affect the generalizability of the testing results, a minimum distance of 1 mile was enforced between the training and testing samples. This eliminates the possibility of very similar points ending up in both the training and testing sets, which can lead to overly optimistic results. An initial sample of 5000 points was selected for testing (Figure 3b). Fuel type labels in Figure 3 are based on the Scott and Burgan fuel models [8], as presented in Table 2.
Data Extraction. For each data point in the extracted sample, an array of input features was extracted. Table 3 summarizes the input features used in the modeling, which was informed by the fuel mapping literature reviewed in the background section. Multispectral data are the most widely used data for wildfire fuel modeling, with the Landsat mission being one of the primary sources of open data for these applications [52]. The atmospherically corrected and orthorectified Landsat-8 Operational Land Imager and Thermal Infrared Sensor (OLI/TIRS) surface reflectance data were used at 30-m resolution. A seasonal composite of Landsat OLI/TIRS data was computed for each sample location using the medoid compositing criterion [53]. This criterion minimizes the sum of Euclidean distances in the multispectral space to all other observations over the time period of interest (i.e., seasons). This method selects seasonal representative values while preserving the relationships between the bands and has been shown to produce radiometrically consistent composites [54]. The quality assessment (QA) band codes were utilized to mask pixels contaminated with cloud and cloud shadow.
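The medoid compositing criterion can be sketched as follows for a single pixel and season; the array layout and function name are assumptions made for illustration.

import numpy as np

def medoid_composite(observations):
    """observations: array of shape (n_dates, n_bands) for one pixel and season."""
    # Pairwise Euclidean distances between all cloud-free observations.
    diff = observations[:, None, :] - observations[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    # The medoid is the observation with the minimum total distance to the others,
    # which preserves the relationships between the spectral bands.
    return observations[dist.sum(axis=1).argmin()]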
In addition to the seasonal spectral values, annual statistics of well-established spectral indices were also computed using the Landsat data as shown in Table 4. The annual median, minimum, maximum, and range of each of the spectral indices were computed for each point at 30-m resolution. Biophysical characteristics of each point of interest, including terrain properties and climate normals, were also extracted. Elevation data were collected from the 1/3 arc-second National Elevation Dataset (NED) by the USGS [55], from which slope and aspect were calculated and added to the input data. In addition, the NED-derived multi-scale topographic position index (mTPI), calculated as the elevation difference from the mean elevation within multiple neighborhoods, was retrieved as a differentiator of ridge and valley landforms [58]. Climate normal values, including temperature, precipitation, dew point, vapor pressure deficit, and horizontal, sloped, and clear sky solar radiation, were extracted from the Parameter-Elevation Regressions on Independent Slopes Model (PRISM) dataset from Oregon State University [56].
Table 4. Spectral indices computed from the Landsat data (index; application; reference):
NDVI (Normalized Difference Vegetation Index); sensitive to vegetation greenness [59].
EVI (Enhanced Vegetation Index); sensitive to vegetation greenness with enhancement [60].
SAVI (Soil-adjusted Vegetation Index); sensitive to vegetation in presence of soil brightness [61].
MSAVI (Modified Soil-adjusted Vegetation Index); sensitive to vegetation in presence of bare soil [62].
NDMI (Normalized Difference Moisture Index); sensitive to vegetation moisture [63].
TCB (Tasseled Cap Brightness); sensitive to vegetation while atmospherically resistant [65].
NBR (Normalized Burn Ratio); sensitive to fire-induced disturbances [66].
R: red, G: green, B: blue, NIR: near-infrared, SWIR: shortwave infrared.
In cases where an image was not found for 2016, the closest image within a one-year window was retrieved. Figure 4 depicts sample NAIP images for fuel types under investigation in this study. Of note, Figure 4 shows that some of the fuel types can be difficult to differentiate even for the human eye due to their close visual similarity at the scale under study (e.g., GR1, GR2, and GS1). This depicts the difficulty of the classification task and can foreshadow potential areas of misclassification even by powerful machine learning algorithms. The definitions of the fuel type labels in Figure 4 are based on the Scott and Burgan fuel models [8], and their characteristic differences are presented in Table 2.
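As an illustration of the annual index statistics described above, the following sketch computes NDVI from a year of Landsat red and near-infrared reflectance values for one sample point and summarizes it with the median, minimum, maximum, and range; variable names are illustrative and the small epsilon guards against division by zero.

import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)

def annual_index_stats(nir_series, red_series):
    values = ndvi(np.asarray(nir_series), np.asarray(red_series))
    return {
        "median": float(np.median(values)),
        "min": float(values.min()),
        "max": float(values.max()),
        "range": float(values.max() - values.min()),
    }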
To train the model, ground truth labels describing the fuels found at each location are required. However, large-scale datasets obtained by field surveys that could be used for this purpose are not publicly available (e.g., the Forest Inventory and Analysis (FIA) Database by the United States Forest Service), and fuel model assignments may not be available as part of data collection. To demonstrate the proof of concept and feasibility of training such models, pseudo-labels using an existing fuel map were used in this work. To this end, pseudo-labels for the points of interest were retrieved by randomly sampling fuel pixels from the 2016 LANDFIRE map of standard surface fire behavior fuel models based on Scott and Burgan fuel models. As a result of the random sampling, the distribution of the extracted labels is a function of the frequency of different fuel types across California. Figure 5 depicts a histogram of fuel types for the pixels within the 2016 LANDFIRE fuel map and shows that several fuel types are not widely represented in the fuel map within the area of study. This is important because fuel types with a small frequency of occurrence are known to be difficult for models to learn as a result of the lack of representative data and the resulting imbalance between the classes. On the other hand, mis-predicting a very small number of isolated pixels has a less pronounced effect on the overall fire spread than making errors in the prediction of large areas of dominant fuel types. As a result, identifying the most common fuel types in the study area provides a more important contribution to the effectiveness of the resulting fire spread simulations. Future sensitivity analyses to quantify the effect of individual fuel types, especially rare and small categories, on fire spread modeling are needed to evaluate these effects. To investigate the effects of class size on the fuel identification performance of the model, Table 5 lists the fuel types larger than different minimum sizes and their cumulative coverages. For example, with a minimum class size of 4%, the model will include 8 classes that cover 78.1% of the pixels of the study area. Alternatively, by aggregating the classes of the same fuel category that are smaller than the minimum class size, models with full coverage of all pixels can be created.
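The minimum-class-size handling described above can be sketched as follows; the aggregation mapping (small class to a similar fuel type in the same category) is left as an assumed input, since the paper does not list the exact grouping.

import pandas as pd

def filter_by_class_size(labels, min_share=0.04, aggregate_map=None):
    """labels: pandas Series of fuel-type codes sampled from the fuel map."""
    shares = labels.value_counts(normalize=True)
    keep = shares[shares >= min_share].index
    if aggregate_map is None:
        # Drop samples whose fuel type falls below the minimum class size.
        return labels[labels.isin(keep)]
    # Otherwise remap small classes to a similar fuel type in the same category;
    # aggregate_map must cover every small class, or those samples become NaN.
    return labels.where(labels.isin(keep), labels.map(aggregate_map))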
Model Development and Evaluation. This section presents the details of the overall deep learning framework and its design choices previously presented in Figures 1 and 2. Extensive testing was carried out to design the optimal architecture for the proposed model via cross-validation. Pretrained CNN architectures, including VGGNet [30], ResNet [31], DenseNet [32], Inception [33], and InceptionResNet [34], were tested as the backbone to extract the visual features from the NAIP imagery, and the best accuracy results were achieved using the InceptionResNet_v2 backbone; hence, this architecture was used throughout the rest of the analyses. InceptionResNet_v2 is a 64-layer CNN architecture based on the Inception family of architectures that employs residual connections similar to those in the ResNet variants. The standard implementation of InceptionResNet_v2 available in the Keras library was used in this work, and further information about this architecture can be found in [34]. Input image size was selected to be 128 × 128 pixels, where each pixel represents 1 m on the ground. Data augmentation in the form of random horizontal and vertical flipping and random rotation was applied to the images during training to increase the robustness of the training. Transformations that could visually change the scene, such as rescaling, recoloring, or non-affine transformations, were not applied, and the original image was maintained during testing. The output of the InceptionResNet_v2 backbone was passed through an average pooling layer that reduces the last convolutional feature map by calculating the average of the feature maps. A dense layer with 128 nodes followed by a dropout layer was added to the end of the CNN branch before concatenation with the multilayer ANN outputs.
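A minimal sketch of the training-time augmentation described above (random flips and rotations only, applied during training and not at test time), assuming TensorFlow 2.6+ preprocessing layers:

import tensorflow as tf

augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal_and_vertical"),
    tf.keras.layers.RandomRotation(0.25),  # factor 0.25 corresponds to up to +/- 90 degrees
])

# Applied only during training; the original image chips are used at test time.
# augmented_batch = augment(image_batch, training=True)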
Table 5. List and cumulative coverage of fuel types larger than different minimum class sizes. See Table 2 for fuel type descriptions.
A dropout layer with a dropping probability of 0.5 was used after each hidden layer throughout the network to implement the Monte Carlo dropout scheme, as shown in Figure 2. Furthermore, a Rectified Linear Unit (ReLU) activation function of the form ReLU(x) = max(0, x) was used to provide nonlinearity in the neural network that aids the learning of complex patterns. The resulting network was then trained using the Stochastic Gradient Descent (SGD) algorithm [70]. In this process, following every forward pass through the network, the training loss is estimated via a cross-entropy loss function, shown in Equation (2), where y_i and ŷ_i represent the i-th label and prediction, respectively, and N denotes the size of the training set:
L = -(1/N) Σ_{i=1}^{N} y_i log(ŷ_i).    (2)
The estimated loss in each training epoch is then used in the back-propagation process that updates the unknown parameters (i.e., weights) of the network on small subsets of training data (i.e., mini-batches). In each epoch, the gradients of the loss L are calculated with respect to the weights w (∂L/∂w), and a fraction (η, called the learning rate) of the gradient is used to update the weights from the previous step w_{i-1} (Equations (3) and (4)). To improve convergence, a term called momentum (α) is added to the update. Finally, another regularization mechanism called weight decay (λ) is also used to discourage overfitting by imposing smaller weights [70]:
Δw_i = α Δw_{i-1} - η (∂L/∂w + λ w_{i-1}),    (3)
w_i = w_{i-1} + Δw_i.    (4)
This process is iteratively repeated until convergence.
Training of the models was carried out for a maximum of 300 epochs, while an early stopping criterion was applied to stop the training if validation accuracy did not improve for 30 consecutive epochs. A mini-batch size of 100, momentum of 0.9, weight decay of 0.0001, and learning rate of 10^-3 were used to start training, and the learning rate was reduced by a factor of 10 after every 15 epochs, following He et al. [31]. Further trial-and-error with these hyperparameters did not provide appreciable accuracy improvements.
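The training setup described above can be sketched as follows; the weight-decay term is omitted from this sketch (in Keras it could be approximated with an L2 kernel regularizer or a decoupled-weight-decay optimizer), and model, tab_train, and img_train are assumed from the earlier sketches.

import tensorflow as tf

optimizer = tf.keras.optimizers.SGD(learning_rate=1e-3, momentum=0.9)

def step_decay(epoch, lr):
    # Reduce the learning rate by a factor of 10 every 15 epochs.
    return lr * 0.1 if epoch > 0 and epoch % 15 == 0 else lr

callbacks = [
    tf.keras.callbacks.LearningRateScheduler(step_decay),
    tf.keras.callbacks.EarlyStopping(monitor="val_accuracy", patience=30,
                                     restore_best_weights=True),
]

model.compile(optimizer=optimizer, loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit([tab_train, img_train], y_train, validation_data=val_data,
#           epochs=300, batch_size=100, callbacks=callbacks)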
The performance of the model was evaluated using well-established classification metrics, including global accuracy, precision, recall, F1-score, and Cohen's Kappa statistic. Global accuracy (Acc) measures the ratio of total correct predictions over the entire set of data points. Precision (Pre) is the ratio of correct predictions of each fuel type to all predictions of that fuel type. Recall (Rec) is the ratio of correct predictions of each fuel type to all existing labels in that class. The F1 score is a widely used metric that is the harmonic mean of precision and recall. Precision, recall, and F1 were computed per class, and both their macro-average (regardless of the size of each class) and their weighted average were calculated. To quantify the agreement between the fuel maps developed through the proposed method and those of LANDFIRE, Cohen's Kappa statistic was used as a well-established agreement metric in the literature that measures the agreement between predicted and observed labels while accounting for agreement by chance.
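The evaluation metrics listed above can be computed, for example, with scikit-learn; the helper below is a sketch, not the authors' evaluation code.

from sklearn.metrics import (accuracy_score, precision_recall_fscore_support,
                             cohen_kappa_score)

def evaluate(y_true, y_pred):
    acc = accuracy_score(y_true, y_pred)
    # Macro average treats every class equally regardless of its size.
    macro = precision_recall_fscore_support(y_true, y_pred, average="macro",
                                            zero_division=0)[:3]
    # Weighted average accounts for the number of samples in each class.
    weighted = precision_recall_fscore_support(y_true, y_pred,
                                               average="weighted",
                                               zero_division=0)[:3]
    kappa = cohen_kappa_score(y_true, y_pred)
    return {"accuracy": acc, "macro_prf": macro,
            "weighted_prf": weighted, "kappa": kappa}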
The implementation of the deep learning procedures in this paper was carried out using the Keras neural network Application Programming Interface (API) with the TensorFlow deep learning platform as the backend. These platforms provide an array of tools compatible with the Python programming language for designing, developing, and training neural networks [71]. Training of the models was deployed on an NVIDIA Tesla V100 GPU node with 112 GB of RAM.
Results
Using the proposed methodology, the models were trained for surface fuel identification. Figure 6 depicts the evolution of training and validation accuracy as well as loss during the training of the model. In this figure, solid lines show the mean of the accuracy and loss for the ensemble, and the shaded band provides the 95% confidence interval. As can be seen in this figure, the model demonstrates stable behavior with the convergence of accuracy and loss to a plateau. Furthermore, the small gap between the training and validation curves in each case demonstrates the proper training of the model with minimal effects of overfitting. Table 6 summarizes the overall accuracy of the model trained using different minimum class sizes ranging from 1% to 5%. These models were first trained on original unfiltered fuel labels obtained from LANDFIRE 2016 fuel maps, as previously described. The accuracy of the model ranged from 51.74% to 69.59% based on the minimum class size without aggregating the classes smaller than the threshold. The reduction in accuracy with the inclusion of the smaller classes is to be expected, as the model will have less information to learn about the smaller classes. Furthermore, aggregating the small classes with the most similar fuels also results in an accuracy reduction on the order of 10%, which is associated with insufficient information about the small classes as well as possible discrepancies between the aggregated classes. For a closer examination of the performance of the system, Figure 7 presents the confusion matrices for the model with a minimum class size of 4%. This case was selected for demonstration as it provides a reasonable accuracy of nearly 70% while covering nearly 80% of the fuel pixels in California.
Confusion matrices shown in Figure 7 demonstrate a concentration of the predictions along the diagonal, which shows desirable behavior and noticeable agreement between the predicted fuel labels and the corresponding true labels. To further examine the sources of confusion, in Figure 7a, six cases of misclassification are marked for further visual examination, as presented in Figure 8. In Figure 8, samples of images pertaining to each fuel type that were mistaken for a different fuel type are presented. In each case, the assumed "ground truth" labels show noticeable discrepancies with the contents of the images. For example, Case 2 includes images that are visually consistent with agricultural land cover while they have been labeled as "GR2," and Case 5 shows mostly non-urban land cover that has been labeled as "urban." This demonstrates that the labels suffer from a degree of impurity, which can be associated with the fact that these labels are not a direct result of field surveys by fuel experts but are instead sampled from derivative fuel maps, potentially with a level of inherent inaccuracies. Note that agricultural and urban land covers are mapped via external sources ([72,73]) in LANDFIRE [74]. To demonstrate the effect of this label impurity, the models were re-trained after filtering the labels against the National Land Cover Database (NLCD) land cover map for 2016 [73]. Because the NLCD maps do not have fuel information, any burnable fuel pixels that had a non-burnable land cover label were filtered out, and vice versa. These land cover types include developed land (open space and low- to high-intensity development), barren land (rock, clay, and sand), and cultivated crops. This resulted in the removal of 16.3% of the pixels from the training dataset. The results of this filtering are shown in Figure 7b,d, where the severity of the off-diagonal elements has visibly decreased. This resulted in an accuracy improvement of the individual classes by more than 10% on average across all classes and a global accuracy improvement of 7.2% (from 67.11% to 74.31% in Table 6). This demonstrates an important opportunity for the improvement of fuel maps by using the proposed method to detect the discrepancies that can highlight potential label impurities.
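The NLCD-based filtering step amounts to a consistency check between two label sources; a rough sketch under assumed inputs (the code sets and file names are illustrative, not the exact Scott and Burgan or NLCD legend values used):

```python
import numpy as np

# Placeholder code sets: non-burnable fuel classes and NLCD developed/barren/crop classes.
NON_BURNABLE_FUELS = {91, 92, 93, 98, 99}      # illustrative NB fuel codes
NON_BURNABLE_NLCD = {21, 22, 23, 24, 31, 82}   # developed, barren, cultivated crops

fuel_label = np.load("fuel_labels.npy")        # hypothetical per-pixel fuel labels
nlcd_class = np.load("nlcd_labels.npy")        # hypothetical per-pixel NLCD labels

fuel_nb = np.isin(fuel_label, list(NON_BURNABLE_FUELS))
nlcd_nb = np.isin(nlcd_class, list(NON_BURNABLE_NLCD))

# Keep only pixels where the two sources agree on burnable vs. non-burnable.
consistent = fuel_nb == nlcd_nb
print(f"Removed {100 * (1 - consistent.mean()):.1f}% of training pixels")
```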
Figure 9 shows six of the biggest off-diagonal confusion elements highlighted in Figure 7b after filtering the labels with the NLCD land cover maps. As can be seen, these cases are mostly concentrated adjacent to the diagonal, which implies that the model's mistakes are mostly among the most similar fuel types. In Figure 9, each column shows the two fuel types that have been mistaken for each other. Visual inspection of the two cases in each column shows that the differences between these classes are sometimes subtle and can be difficult to differentiate even for human annotators.

Based on the results presented in this section, the evidence suggests that the proposed model is relatively successful at identifying the surface fuel types in the test set given an assumed degree of impurity associated with the labels used for training. The level of fuel identification accuracy is dependent on the desired degree of granularity, with smaller minimum class sizes resulting in learning difficulty with less information to support the extracted patterns. Moreover, based on the confusion matrices in Figure 7b, the non-burnable urban land cover (NB3) is the easiest to detect (class accuracy of 95.3%), which is to be expected, as this class has the most discernible features even to the untrained eye. On the other hand, the grass-shrub class (GS2) is the hardest to detect (class accuracy of 66.1%), which is associated with its close similarity to the grass fuel types.
To further visualize the performance of the model outside the testing set and in mapping, Figures 10 and 11 present samples of fuel maps generated by the proposed model together with the corresponding uncertainty maps created as previously described using the average and variance of the model probabilities. As can be seen in Figure 10, the qualitative comparison of the predicted maps with their LANDFIRE counterparts shows noticeable overall agreement, consistent with the Cohen's Kappa values of 0.854, 0.477, and 0.475 for the three images from left to right, respectively. Figure 11 shows a sample of results with relatively large discrepancies between the predictions and the target labels, with Cohen's Kappa values of 0.046, 0.016, and 0.321. Examination of the first column in this figure shows that a large portion of the GR1 and GR2 area in the target map indeed seems to be visually consistent with the predicted NB3 (agricultural). This may be pointing to a potential discrepancy in the target map (i.e., LANDFIRE) that could be used for map correction or improvement. Note that LANDFIRE uses external mapping data for agricultural lands [72]. The second column in this figure shows that the model replaced the area covered by TL6 in the label map with TU5. In this case, the corresponding uncertainty map shows that the model has some awareness of the potentially erroneous prediction that could be accounted for in the resulting decisions. Finally, the third column shows a similar case where, despite the overall relative agreement between the maps, the predictions seem to have missed areas of NB9 (bare ground), TL6, and GR1. Similarly to the previous case, the corresponding uncertainty map may be leveraged to highlight the areas where the model has lower confidence in its predictions.
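The uncertainty maps described above come from repeated stochastic forward passes with dropout left active at inference time; a minimal sketch, assuming a Keras model containing dropout layers and a batch of pixel features x (the number of samples is an arbitrary choice):

```python
import numpy as np

def mc_dropout_predict(model, x, n_samples=25):
    """Summarize repeated stochastic forward passes with dropout enabled."""
    # training=True keeps the dropout layers active during inference.
    probs = np.stack(
        [model(x, training=True).numpy() for _ in range(n_samples)], axis=0
    )                                            # (n_samples, n_pixels, n_classes)
    mean_probs = probs.mean(axis=0)              # ensemble class probabilities
    pred_class = mean_probs.argmax(axis=1)       # predicted fuel type per pixel
    uncertainty = probs.var(axis=0).sum(axis=1)  # variance-based uncertainty measure
    return pred_class, mean_probs, uncertainty
```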
Figure 10. Sample fuel mapping results with small discrepancies with the LANDFIRE fuel map. Fuel types are described in Table 2.
Discussion
Table 7 summarizes the contribution of the different components of the model by listing the per-class and overall F1 scores. As shown in Table 7, in most cases, models made from individual components have the lowest performance, and the fusion of complementary components results in improvements with respect to individual components. Among the individual components, NAIP imagery has the highest overall performance, followed by spectral values. Although the detection of some classes (e.g., NB3, NB1) is substantially easier with imagery than spectral values, others (e.g., NB8, NB9) are easier to differentiate using spectral values. This is associated with how discernible these classes are using their spectral or visual signatures (e.g., agricultural lands may be harder to miss using their unique farm patterns than their spectral differences compared with grasslands). Furthermore, although biophysical data show weak correlations with non-vegetation classes (e.g., NB1, NB8, NB9), they provide the highest performance in the grassland classes. Of note, the addition of imagery data always results in performance improvement. This can be seen by comparing every model (single or multi-component) with its counterpart after the inclusion of imagery data. By comparing the full model with the one that includes all non-imagery data types (SV + SI + BP), all classes except NB8 (water) show accuracy improvement. This lack of improvement for NB8 can be attributed to the apparent visual similarity of some surface water image patches to simple grassland landscapes. Finally, the full model that includes the fusion of all components results in the highest detection performance, both across most individual classes and overall. This demonstrates the benefit of data fusion in improving the fuel identification performance of the system.

The results of this analysis demonstrate that, to create useful large-scale fuel identification models, datasets consisting of tens of thousands of fuel plots may not be required, as the model with 1/10 of the largest data size still achieves an overall accuracy within nearly 5 percent of that with 40,000 observations (Figure 12). The proposed method can also be augmented with semi-supervised learning techniques, such as label propagation, which has been previously used in the remote sensing context to remedy the shortage of ground truth data [75,76].

Finally, to investigate whether the quality of the training set could be improved by avoiding sampling from isolated noisy pixels, a filter was added to the sampling such that only the points with similar fuels within their neighborhood of radius r were selected as training samples. This filter essentially ensures that only the pixels belonging to a relatively homogeneous and continuous body of similar fuel will be sampled, thus reducing the potential noise from the random sampling strategy used. Three different values of r equal to 50, 100, and 150 m were tested. Although some of the individual classes showed small improvements, the overall accuracy of the model slightly decreased with the increase in the radius. This could be attributed to the fact that increasing r resulted in a slight decrease in samples taken from smaller and naturally less prevalent fuel types, thus limiting any potential improvement from the increased sample homogeneity. More generally, enforcing homogeneity by selecting pure sample sites and filtering the minority classes can result in missed opportunities for the identification of natural discontinuities for fuel breaks and other forest management actions. However, the use of survey-based ground truth fuel labels from national data collection campaigns (e.g., the FIA database) and large-scale satellite-based lidar measurements (e.g., the Global Ecosystem Dynamics Investigation, GEDI, mission) for canopy fuel modeling can address such limitations by providing high-confidence labels and can be studied in future works.
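The neighborhood-homogeneity filter can be implemented with a radius query on a KD-tree; a small sketch, assuming candidate sample points with projected (metric) coordinates and integer fuel labels (variable names are illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def homogeneous_samples(coords, labels, radius_m=100.0):
    """Keep only points whose neighbors within radius_m share the same fuel label."""
    tree = cKDTree(coords)                  # coords in a projected CRS (meters)
    keep = np.zeros(len(coords), dtype=bool)
    for i, neighbors in enumerate(tree.query_ball_point(coords, r=radius_m)):
        keep[i] = np.all(labels[neighbors] == labels[i])
    return keep
```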
Conclusions
Most past wildfire surface fuel mapping studies proposed models trained for and applicable to small areas of interest. In contrast, this paper discussed a model for creating large-scale wildfire surface fuel mapping models that can be applied at regional (e.g., state) scales. The proposed model takes advantage of deep learning to create a predictive model that can fuse information from spectral, biophysical, and high-resolution imagery. The model also features a stochastic ensemble approach using the Monte Carlo dropout technique, which both improves the performance of the model and produces a measure of model uncertainty for the predicted fuels.
The proposed system was applied to a dataset that was compiled using a random sample of the 2016 LANDFIRE surface fuel product based on the Scott and Burgan 40 fuel models for the state of California as the target fuel labels. The results demonstrated the feasibility of the proposed approach, which yielded approximately 55% to 75% accuracy, depending on the desired smallest fuel type size to be included in the model. A considerable portion of the error is attributed to the close visual similarity of some of the fuel types at the scales under study, as evidenced by the difficulty of differentiating them even through human examination. In this regard, the proposed model can thus be used to reveal areas of potential discrepancies and high uncertainty in existing fuel maps and to interpolate fuel distributions for points of interest in time. Although the effect of the minimum class size included in the model on the fuel identification accuracy was studied and showed an anticipated decrease in the model's performance when including very small classes, its cascading effect on the performance of the resulting fire spread simulations was outside the scope of this study and is deferred to a future study that could compare the predicted fire spread parameters with different fuel identification models.
Analysis of the properties of the proposed system revealed that the fusion of different types of data improves identification accuracy compared to using each data source individually. Specifically, the addition of high-resolution imagery from the NAIP program to any of the models from individual or combined data sources always improved their fuel identification performance. Furthermore, the proposed stochastic model ensemble generation approach resulted in improved performance with respect to individual models while allowing for the generation of model uncertainty estimates that could be propagated throughout resulting fire spread simulations. This can in turn enable uncertainty-aware scenario-based decision-making and model updating. A study of the effect of the size of the training set on the performance of the model revealed an increase in accuracy with an increase in the training set size. Namely, cutting the training set in half resulted in a maximum reduction of 7.2% and an average reduction of 2.2% in per-class performance, while cutting the training time by 2.5 times. This implies that the model has the capacity to benefit from an increased training set (i.e., more data), considering that the training of even the largest model was relatively manageable given the hardware used in this study (overall training of the ensemble model took approximately 4 h).
This proof-of-concept study used a random geospatial sampling of existing LANDFIRE fuel products to extract target labels for training. However, the proposed approach is generic and can be applied to collections of field data resulting from in situ fuel plots.
Figure 1. Proposed deep learning-based surface fuel identification framework (definition of spectral indices is presented in the data extraction section).
Figure 2. Stochastic neural network ensemble with inference-time Monte Carlo dropout.
Figure 3. Distribution of sample points used for data extraction for (a) training and (b) testing. Note that a minimum distance of 1 mile is enforced between the training and testing points. The codes in the legend are fuel types according to the Scott and Burgan 40 fuel models, as described in Table 2.
Figure 4. Sample National Agricultural Imagery Program (NAIP) images for fuel types larger than 1% of total pixels in California. Fuel types are based on the Scott and Burgan 40 fuel models described in Table 2.
Figure 5. Distribution of fuel types in the 2016 LANDFIRE map within California (only fuel types with 0.1% or more are shown). See Table 2 for fuel type descriptions.
Figure 6. Evolution of training and validation accuracy and loss. C.I.: confidence interval.
Figure 7. Testing confusion matrices for models with a minimum class size of 4%: (a) unfiltered fuel labels with no small class aggregation, (b) filtered labels with no small class aggregation, (c) unfiltered labels with small class aggregation, and (d) filtered labels with small class aggregation. Fuel types are described in Table 2.
Figure 8. Diagnostic examination of prediction results with original unfiltered LANDFIRE labels. Cases are selected from Figure 7.
Figure 9. Diagnostic examination of prediction results with the labels filtered with NLCD land cover. Cases are selected from Figure 7.
Figure 10. Sample fuel mapping results with small discrepancies with the LANDFIRE fuel map. Fuel types are described in Table 2.
Figure 11. Sample mapping results with relatively large discrepancies with LANDFIRE maps. Fuel types are described in Table 2.
Figure 12. Effect of the size of the training set on accuracy performance and computation time.
Table 1. Summary of surface fuel mapping literature: comparison of training scale and applicability.
Table 2. Fuel type description based on the Scott and Burgan fuel models adapted from [8].
Low shrub fuel load, fuel bed depth of about 1 foot; some grass may be present. The spread rate is very low; flame length is very low.
TL3: Moderate load conifer litter. The spread rate is very low; flame length is low.
TL4: Moderate load, including small-diameter downed logs. The spread rate is low; flame length is low.
TL5: High load conifer litter; light slash or mortality fuel. The spread rate is low; flame length is low.
TL6: Moderate load, less compact. The spread rate is moderate; flame length is low.
TL7: Heavy load, including larger-diameter downed logs. The spread rate is low; flame length is low.
TL8: Moderate load and compactness; may include a small amount of herbaceous load. The spread rate is moderate; flame length is low.
TL9: Very high load, fluffy. The spread rate is moderate; flame length is moderate.
Table 3. Geospatial datasets used for deriving predictors and class variables.
Table 4. Spectral indices used as training features.
Table 5. List and cumulative coverage of fuel types larger than different minimum class sizes. See Table 2 for fuel type descriptions.
Table 6. Testing accuracy of the model trained both on original unfiltered labels and labels filtered with the National Land Cover Database (NLCD).
Table 7. Performance of different combinations of input components of the model (numbers in the table are F1 scores; values in bold indicate the best result in each category). Fuel types are described in Table 2. M-Avg. and W-Avg. refer to macro- and weighted-average, respectively.
We note that the size of the training set does not affect the computational complexity of the testing and model application if the same model architecture is being used with different training set populations. We also note that the reported training times are based on model deployment on an NVIDIA Tesla V100 GPU node with 112 GB of RAM.
Table 8. Effect of stochastic ensemble modeling (values in bold indicate the best result in each category). Fuel classes are described in Table 2. M-Avg. and W-Avg. refer to macro- and weighted-average, respectively. | 15,239 | 2023-01-17T00:00:00.000 | [
"Environmental Science",
"Computer Science",
"Engineering"
] |
Investigating variations of the electron beam voltage on the traveling wave tube output power in the different frequencies
In the present research, the effect of variations of the electron beam voltage on the output power is studied. To extend the study, this problem was investigated at different frequencies, which are the products of the nonlinear behavior of the traveling wave tube (TWT) in response to the input frequency. Moreover, for a more realistic understanding, the tube was considered with both linear and nonlinear responses to the input frequency. The TWT output power was calculated in linear and nonlinear modes, at different frequencies, using the numerical solution of the mathematical equations of the Lagrangian model. Then, the output power in terms of distance and beam voltage at different frequencies was plotted and compared. The results revealed that the effects of variations of the voltage on the output power were more favorable in a single-mode TWT than in a multimode one.
Introduction
Traveling wave tubes (TWTs) are devices that are widely used in communication, electronic warfare, and radar systems [1,2]. These devices have wide bandwidths and high-frequency, high-power operating points; as a result, they have widespread application. The nonlinear behavior of TWTs is one of their most important practical limitations. Until now, many efforts have been made to reduce these nonlinear effects [7-9, 11, 12]. The nonlinear features are recognized as spectral distortion and a saturation mechanism [3], both of which decrease the efficiency of the TWT.
For single-tone mode, nonlinear distortion products appear as harmonic products (f, 2f, 3f, …), while for multitone mode (multicarrier operation) nonlinear distortion products appear as intermodulation products (mf1 ± nf2) at the output of the amplifier [3]. In this work, the effects of changes in beam voltage on the output power of TWTs are studied in both the presence and absence of a nonlinear phenomenon (spectral distortion).
The most important mechanism that occurs inside the tube is the interaction between the input wave and the electron beam. When an electron beam is injected along the axis of the helix, the axial (longitudinal) component of the wave's electric field accelerates some electrons and decelerates others. This is the basis for the formation of bunches in the electron beam, the transfer of energy from the beam to the wave, and ultimately the amplification of the output wave [1,2]. The electron beam emission voltage into the tube is the source of the beam energy and velocity. Therefore, determining the cathode voltage and its variations, which drives the emission of electrons from the gun and is called the beam voltage, is very important. In many previous attempts, the basis for selecting the beam voltage was the TWT efficiency and maximum output power. In previous studies, the value of the beam voltage or the cathode voltage was taken as 3150 V [6,10,15]; in other studies, this value was considered as 2750 V in [11], 4350 V in [5], 4880 V in [13], and 4920 V in [14]. Therefore, in this study, the effects of voltage changes on the improvement of the TWT output power are studied using a numerical solution of the governing equations. The TWT has been modeled by several authors using Eulerian and Lagrangian electron beam coordinates [4].
Among all of the present models, the multi-frequency spectral Eulerian (MUSE) and Lagrangian TWT equations (LATTE) are the most important physical models. The basis of these two models is the physical interpretation of the electron beam as a fluid [4]. In the "Formulation" section, we formulate the governing equations for the Lagrangian model of the TWT. The numerical solutions, as well as the diagrams showing output power in terms of voltage and distance, are presented in the "Numerical results" section. Finally, the discussion and conclusion of the numerical results are given in the "Discussion and conclusion" section.
Formulation
The transmission line, Poisson, continuity, and Vlasov equations are used to derive the mathematical equations of the TWT, which are expressed in the time domain as follows [3,4,16]. Transmission line equations: in Eqs. (1) and (2), the current and voltage are denoted by I and V, respectively, ρ is the volume charge density, t is time, and z is the axial distance.
Poisson's equation: in Eq. (3), E is the space-charge electric field. Continuity and Vlasov equations: in Eqs. (1) and (2), the coefficients are inverse Fourier transforms expressed as follows, where the functions ṽ_ph(z, f_l ω_0), K(z, f_l ω_0), and ℜ(z, f_l ω_0) are defined as the cold-circuit phase velocity, the frequency-domain circuit interaction impedance, and the space-charge reduction factor, respectively. Using the Fourier transform, Eqs. (1)-(5) are transformed from the time domain to the frequency domain.
Using the coordinate transformation in Eqs. (9) and (10), where z is the distance and ψ is the phase, the phase is described with respect to a traveling wave of speed u_0 and frequency ω_0. According to Fig. 1, in which R is the resistance, C is the capacitance, G is the shunt conductance, and L is the series inductance, applying Kirchhoff's voltage and current laws gives Eqs. (1) and (2) as follows.
In Eq. (14), v is the electron beam velocity and ℜ is the space-charge reduction factor.
Also, m_e and e are, respectively, the mass and the charge of the electron. (Fig. 1: each section of the helix is represented by an equivalent circuit [16].) According to the above discussion, each TWT contains three main parts: the slow-wave structure, the source of the electron beam, and the propagation of an electromagnetic wave that has approximately the same phase velocity as the electron beam. The mathematical Eqs. (11)-(15) form a differential-equation system that describes the mechanism of the TWT. In order to normalize the quantities used, the following characteristic quantities are defined, in which u_0 is the DC beam velocity, L is the TWT circuit length, and T is a characteristic time. The following variables illustrate the normalization of the dependent variables and independent coordinates [3,4,16]: the independent coordinates, dependent variables, and normalized quantities are defined, and the derivatives with respect to z and ψ then follow. Some other relationships between the DC parameters can be written as follows: in Eqs. (34)-(36), ρ_0 is the DC linear charge density, V_0 is the DC beam voltage, I_0 is the DC beam current, and C is the Pierce gain parameter.
The function x(z, ψ) is expressed as a Fourier series defined as follows [3,4,16]:

x(z, ψ) = Σ_{j=-∞}^{∞} x̃_j(z) e^{i f_j ψ}    (37)

where f_j is the set of frequencies consisting of the drive frequencies together with the frequencies produced by nonlinear interactions, m and n are positive and negative integers, and ψ is the periodic phase variable. Here (z, ψ) are the Eulerian independent variables and (z, ψ_0) are the Lagrangian independent variables, where z is the axial position, ψ is the phase, and ψ_0 is the phase position of a fluid element with respect to the stream wave. The transformation from Lagrangian to Eulerian coordinates is given by the functions Z and Ψ; Z(z, ψ_0) is the axial position of fluid element ψ_0 at z. A function g_E of the Eulerian variables (z, ψ) is transformed to a function g_L of the Lagrangian variables (z, ψ_0) using Eqs. (40) and (41); the corresponding transformation matrix and its Jacobian follow, and the partial derivatives are transformed accordingly. Using Eq. (46), the convective derivative in the Eulerian coordinates is obtained. Applying the derivative transformation relation (45) to the continuity equation (15) yields Eq. (48); differentiating Eq. (46) with respect to ψ_0 gives Eq. (49), and substituting (49) into (48) and integrating gives Eq. (50), in which a constant of integration appears. By definition Ψ(0, ψ_0) = ψ_0, so the values of ∂Ψ/∂ψ_0 and ρ_L v_L on the ψ_0 axis are ∂Ψ(0, ψ_0)/∂ψ_0 = 1 and ρ_0 v_0, respectively. Equation (50) then reduces to Eq. (52). The Fourier coefficient of ρ_E is expressed in Eq. (53); pulling Eq. (53) back to Lagrangian coordinates (for fixed z) and using Eq. (44) gives its Lagrangian form. Using Eqs. (37), (38), and (54), the circuit equations, the space-charge equation, Newton's law, and the phase relation (11)-(15) are written in Lagrangian coordinates, yielding the system (55)-(59), where -∞ < j < ∞. For practical implementation, higher frequencies are neglected and the range is limited to -M ≤ j ≤ M.
The equations in (55)-(59) are ordinary differential equations, and standard ordinary differential equation integration techniques are used. The problem is an initial value problem, where the proper initial values can be calculated using (38); otherwise, for j ≠ 0, Ẽ_j = ṽ_j = ρ̃_j = 0. In addition to the initial conditions mentioned above, there are other parameters that are very important for the performance of the TWT. These constant values are given in Table 1.
Numerical results
In this work, different voltages have been applied to the electron beam over the assumed frequency range in order to evaluate the TWT response. Among the different mathematical models, the Lagrangian model has been applied to the TWT, since it properly includes all the nonlinear features of a TWT. Accordingly, the governing equations of the TWT, which describe the changes in the circuit voltage and current, the space-charge field, and the velocity and density of the electron beam, are written. By applying initial conditions appropriate to the TWT and using a fixed-step fourth-order Runge-Kutta integration method, the differential equation system (55)-(59) has been solved.
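For reference, a fixed-step classical fourth-order Runge-Kutta integrator of the kind mentioned above fits in a few lines; rhs(z, y) stands in for the right-hand side of the Lagrangian system (55)-(59) assembled into one state vector, and is a placeholder rather than the authors' implementation:

```python
import numpy as np

def rk4_fixed_step(rhs, y0, z0, z_end, n_steps):
    """Integrate dy/dz = rhs(z, y) with a fixed-step classical RK4 scheme."""
    h = (z_end - z0) / n_steps
    z = z0
    y = np.asarray(y0, dtype=complex)   # complex dtype accommodates Fourier amplitudes
    trajectory = [y.copy()]
    for _ in range(n_steps):
        k1 = rhs(z, y)
        k2 = rhs(z + h / 2, y + h / 2 * k1)
        k3 = rhs(z + h / 2, y + h / 2 * k2)
        k4 = rhs(z + h, y + h * k3)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        z += h
        trajectory.append(y.copy())
    return np.array(trajectory)
```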
In this study, the TWT has been examined in two different situations; as a result, two different responses are reached which are discussed below:
Situation (1)
In this case, the response of the TWT is examined under the applied voltage in the absence of harmonic and intermodulation frequencies (single-tone mode). In this situation, the operation of the TWT was studied in single-frequency mode, meaning that the output frequency of the TWT is the same as the single input frequency. The final results of the calculations in this mode are shown in Figs. 2 and 3, which display the output power in terms of distance (z), voltage (V), and frequency (f).
In Fig. 2, the output power of the circuit is plotted at frequencies of 1400, 1600, and 1800 MHz in terms of different electron beam voltages. In this case, the circuit power is calculated for each single input frequency, and the harmonic and intermodulation frequencies are not produced. In the voltage range of 2000-3300 V, the amplification of the wave at these three frequencies is approximately constant.
In Fig. 3, the output power is plotted in terms of axial distance for a voltage of 3200 V at frequencies of 1400, 1600, and 1800 MHz. It is noticeable that the circuit power saturates at 40 cm at all three frequencies. According to Fig. 2, if a voltage of 2600 V were selected, the amplification at all three frequencies would be approximately the same.
Situation (2)
In this case, the response of the TWT is examined under the applied voltage in the presence of the drive, harmonic, and intermodulation frequencies (multitone mode). Here, when two input frequencies whose difference is about 1 MHz are applied to the TWT, the drive frequencies as well as unwanted harmonic and intermodulation output frequencies are produced (Table 2). The final results of the calculations in this mode are shown in Figs. 4-8, which display the output power in terms of distance (z), voltage (V), and frequency (f).
In Fig. 4, the output power of the circuit is plotted at frequencies of 1400, 1600, and 1800 MHz in terms of different electron beam voltages. In this case, the two main input frequencies are considered (Table 2); as a result, harmonic and intermodulation frequencies are produced. In the voltage range of 2000-2800 V, the difference in the amplification of the wave at these three frequencies is small. In Fig. 5, the output power is plotted in terms of axial distance for a voltage of 3200 V at frequencies of 1400, 1600, and 1800 MHz. These calculations are done for each of the three frequencies in the presence of the second-order harmonic and intermodulation frequencies (Table 2). It is noticeable that the circuit power saturates at 38 cm at all three frequencies.
In Fig. 6, the output power is plotted in terms of axial distance for a voltage of 3200 V at frequencies of 2800, 3200, and 3600 MHz. These frequencies are second-order harmonics produced by the nonlinear response of the tube to the two main input frequencies, as shown in Table 2. As shown in Fig. 6, at these frequencies the saturation occurs with a gentle slope.
In Fig. 7, the output power of the circuit is plotted at frequencies of 2800, 3200, and 3600 MHz in terms of different electron beam voltages. These frequencies are second-order harmonics (Table 2). In the range of 2000-2600 V, it is clear that the output power difference in the circuit is small at these three frequencies.
In Fig. 8, the output power of the circuit is plotted in terms of different electron beam voltages at frequencies of 1399, 1599, and 1799 MHz. These frequencies are third-order intermodulation frequencies (Table 2). In the range of 2200-3000 V, the output power difference in the circuit is small at all three frequencies.
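The harmonic and intermodulation products of two drive tones can be enumerated directly; the following sketch is illustrative (the tone values and order limit are placeholders, not the exact entries of Table 2):

```python
def intermod_products(f1, f2, max_order=3):
    """Enumerate |m*f1 + n*f2| for 0 < |m| + |n| <= max_order."""
    freqs = set()
    for m in range(-max_order, max_order + 1):
        for n in range(-max_order, max_order + 1):
            if 0 < abs(m) + abs(n) <= max_order:
                f = abs(m * f1 + n * f2)
                if f > 0:
                    freqs.add(round(f, 6))
    return sorted(freqs)

# Example with two closely spaced drive tones (values in MHz, illustrative):
# intermod_products(1600.0, 1601.0) contains the second harmonics (3200, 3202)
# and third-order products such as 2*1600 - 1601 = 1599.
```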
Discussion and conclusion
As can be seen, when the voltage changes at different frequencies in the TWT, the output power changes as well, and the voltage variation matters within a particular range. A lower voltage is needed to achieve higher power at low frequencies, whereas higher power at high frequencies requires a larger voltage. Within a certain voltage range, the changes in output power are not noticeable. The beam emission voltage has a direct relationship with the output power up to a certain point, after which the relationship becomes inverse, meaning that further increasing the beam emission voltage decreases the output power. This is caused by the saturation phenomenon, which is the nonlinear operation of the TWT. At lower frequencies, saturation (the point at which the output power stops increasing) occurs at a lower voltage. For instance, in Fig. 2 it can be seen that the blue curve (1400 MHz) begins to drop at a lower voltage (about 3200 V), while the orange (1600 MHz) and gray (1800 MHz) curves drop at a higher voltage (about 3400 V), with different slopes. Comparing Figs. 2 and 3, it is observed that at 1400 MHz (single-tone mode), for a voltage of 2200 V, the TWT output power is 45 dBm, whereas it is 30 dBm under the same conditions in multitone mode. All the figures show that the voltage applied at each frequency does not produce a unique power; at some voltages, the power generated at various frequencies is approximately the same.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creative commons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. | 3,694.6 | 2018-07-25T00:00:00.000 | [
"Physics"
] |
Improving the accessibility and transferability of machine learning algorithms for identification of animals in camera trap images: MLWIC2
Abstract Motion‐activated wildlife cameras (or “camera traps”) are frequently used to remotely and noninvasively observe animals. The vast number of images collected from camera trap projects has prompted some biologists to employ machine learning algorithms to automatically recognize species in these images, or at least filter‐out images that do not contain animals. These approaches are often limited by model transferability, as a model trained to recognize species from one location might not work as well for the same species in different locations. Furthermore, these methods often require advanced computational skills, making them inaccessible to many biologists. We used 3 million camera trap images from 18 studies in 10 states across the United States of America to train two deep neural networks, one that recognizes 58 species, the “species model,” and one that determines if an image is empty or if it contains an animal, the “empty‐animal model.” Our species model and empty‐animal model had accuracies of 96.8% and 97.3%, respectively. Furthermore, the models performed well on some out‐of‐sample datasets, as the species model had 91% accuracy on species from Canada (accuracy range 36%–91% across all out‐of‐sample datasets) and the empty‐animal model achieved an accuracy of 91%–94% on out‐of‐sample datasets from different continents. Our software addresses some of the limitations of using machine learning to classify images from camera traps. By including many species from several locations, our species model is potentially applicable to many camera trap studies in North America. We also found that our empty‐animal model can facilitate removal of images without animals globally. We provide the trained models in an R package (MLWIC2: Machine Learning for Wildlife Image Classification in R), which contains Shiny Applications that allow scientists with minimal programming experience to use trained models and train new models in six neural network architectures with varying depths.
| INTRODUCTION
Motion-activated wildlife cameras (or "camera traps") are frequently used to remotely observe wild animals, but images from camera traps must be classified to extract their biological data (O'Connell, Nichols, & Karanth, 2011). Manually classifying camera trap images is an encumbrance that has prompted scientists to use machine learning to automatically classify images (Willi et al., 2019), but this approach has limitations. We address two major limitations of using machine learning to automatically classify animals in camera trap images. First, machine learning models trained to recognize species from one location and in one camera trap setup might perform poorly when applied to images from camera traps in different conditions (i.e., these models can have low "out-of-sample" accuracy; Schneider, Greenberg, Taylor, & Kremer, 2020). This transferability, or generalizability, problem is thought to arise because different locations have different backgrounds (the part of the picture that is not the animal) and most models evaluate the entire image, including the background (Beery, Morris, & Yang, 2019; Miao et al., 2019; Norouzzadeh et al., 2019; Terry, Roy, & August, 2020; Wei, Luo, Ran, & Li, 2020). By including images from 18 different studies in North America, our objective was to train models with more variation in the backgrounds associated with each species. Furthermore, by training an additional model that distinguishes between images with and without animals, we provide an option that could be broadly applicable to camera trap studies worldwide.
Second, the use of machine learning in camera trap analysis is often limited to computer scientists, yet the need for image processing exceeds the availability of computer scientists in wildlife research. For example, several researchers have provided excellent Python repositories for using computer vision to analyze camera trap images (Beery, Wu, Rathod, Votel, & Huang, 2020; Norouzzadeh et al., 2018; Schneider et al., 2020).
These software packages enable programmers to use and train models to detect, classify, and evaluate the behavior of animals in camera trap images. However, these packages require extensive programming experience in Python, a skill which is often lacking from wildlife research teams. To facilitate the use of this type of model by biologists with minimal programming experience, Machine Learning for Wildlife Image Classification (MLWIC2) includes an option to train and use models in user-friendly Shiny Applications (Chang, Cheng, Alaire, Xie, & McPherson, 2019), allowing users to point-and-click instead of using a command line. This facilitates easier site-specific model training when our models do not perform to expectations.
| Camera trap images
Images were collected from 18 studies using camera traps in 10 states in the United States of America (California, Colorado, Florida, Idaho, Minnesota, Montana, South Carolina, Texas, Washington, and Wisconsin; Appendix S1). Images were either classified by a single wildlife expert or classified independently by two biologists, with discrepancies settled by a third. An image was classified as containing an animal if it contained any part of an animal. Our initial dataset included 6.3 million images but was unbalanced, with most images from a few species (e.g., 51% of all images were Bos taurus). We rebalanced the number of images by species and site to ensure that no one species or site dominated the training process. Previous work suggested that training a model with 100,000 images per species produces good performance (Tabak et al., 2019); therefore, we limited the number of images for a single species from one location to 100,000. When >100,000 images for a single species existed at one location, we randomly selected 100,000 of these images to include in the training/testing dataset. After rebalancing the data, we had a total of 2.98 million images; 90% were randomly selected for training, while 10% were used for testing. Images used in this study were either already a part of or were added to the North American Camera Trap Images dataset (lila.science/datasets/nacti; Tabak et al., 2019).
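The rebalancing described above can be phrased as a grouped subsample; a hedged pandas sketch, assuming a table of image records with species and study_site columns (the column and file names are illustrative):

```python
import pandas as pd

MAX_PER_GROUP = 100_000

images = pd.read_csv("image_labels.csv")   # hypothetical file, one row per labeled image

balanced = (
    images.groupby(["study_site", "species"], group_keys=False)
          .apply(lambda g: g.sample(n=min(len(g), MAX_PER_GROUP), random_state=0))
)

train = balanced.sample(frac=0.9, random_state=0)   # 90% for training
test = balanced.drop(train.index)                   # remaining 10% for testing
```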
Images from Canada were not used for training but were used to evaluate model transferability as an out-of-sample dataset.
| Training models
We trained deep convolutional neural networks using the ResNet-18 architecture (He, Zhang, Ren, & Sun, 2016) in the TensorFlow framework (Abadi et al., 2016) on a high-performance computing cluster, "Teton" (Advanced Research Computing Center, 2018). Models were trained for 55 epochs, with a ReLU activation function at every hidden layer and a softmax function in the output layer, mini-batch stochastic gradient descent with a momentum hyperparameter of 0.9 (Goodfellow, Bengio, & Courville, 2016), a batch size of 256 images, and learning rates and weight decays that varied by epoch number (described in Appendix S2). We trained a species model, which contained classes for 58 species or groups of species and one class for empty images (Table 1). We also trained an empty-animal model that contained only two classes, one for images containing an animal, and the other for images without animals.
| Model validation and transferability
We first evaluated our trained models by applying them to predicting species in the 10% of images that were withheld from training.
Models were evaluated for each species using recall, top-5 recall, and precision, which summarize the numbers of true positives (TPs), false positives (FPs), and false negatives (FNs): recall = TP/(TP + FN) and precision = TP/(TP + FP). Recall is the proportion of images of each species that were correctly classified, and top-5 recall is the proportion of images for each species in which one of the model's top five guesses is the correct species. We also calculated confidence intervals for recall and precision rates (Appendix S3). To evaluate transferability of the model, we conducted out-of-sample validation by applying our trained models to images from locations where the model was not trained.
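These per-species quantities follow directly from the per-image class scores; a small sketch, assuming scores is an (images × species) array of softmax outputs and labels holds the true class indices (names are illustrative):

```python
import numpy as np

def per_class_metrics(scores, labels, k=5):
    top1 = scores.argmax(axis=1)
    topk = np.argsort(scores, axis=1)[:, -k:]     # indices of the k highest-scoring classes
    recall, precision, topk_recall = {}, {}, {}
    for c in np.unique(labels):
        in_class = labels == c
        predicted_c = top1 == c
        tp = np.sum(in_class & predicted_c)
        recall[c] = tp / in_class.sum()                                   # TP / (TP + FN)
        precision[c] = tp / predicted_c.sum() if predicted_c.any() else np.nan  # TP / (TP + FP)
        topk_recall[c] = np.mean([c in row for row in topk[in_class]])
    return recall, precision, topk_recall
```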
To evaluate the effect of using multiple training datasets on model generalizability, we iteratively trained models using varying numbers of datasets (i.e., 1 dataset, 3 datasets, 6 datasets, … all 18 datasets) and tested the model on the out-of-sample datasets.
MLWIC2 can classify images at a rate of 2,000 images per minute on a laptop with 16 gigabytes of random-access memory and without a graphics processing unit. MLWIC2 will optionally write the top guess from each model and the confidence associated with these guesses to the metadata of the original image file; the function "write_metadata" and the associated R Shiny Application use Exiftool (Harvey, 2016) to accomplish this. In addition, if scientists have labeled images, MLWIC2 has a Shiny app that allows users to train a new model to recognize species using one of six different convolutional neural network architectures (AlexNet, DenseNet, GoogLeNet, NiN, ResNet, and VGG) with different numbers of layers. We also trained models in these other architectures for comparison. Note that the time required to train a model depends on the number of images used for training and the computing resources; operating MLWIC2 on a high-performance computing cluster requires programming experience.
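write_metadata relies on ExifTool under the hood; outside of R, the same idea can be scripted by shelling out to the ExifTool command line. A rough Python sketch (the tag choice and paths are illustrative, not MLWIC2's internals):

```python
import subprocess

def tag_image(path, species, confidence):
    """Write the top model guess and its confidence into the image metadata."""
    comment = f"top_guess={species}; confidence={confidence:.2f}"
    subprocess.run(
        ["exiftool", f"-UserComment={comment}", "-overwrite_original", path],
        check=True,
    )

# tag_image("camera01/IMG_0001.JPG", "Odocoileus hemionus", 0.97)
```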
When we iteratively trained the model on varying numbers of datasets, we found that accuracy on out-of-sample images increased with the number of datasets used to train the model (Figure 3).
| DISCUSSION
In MLWIC2, we provide two trained machine learning models, one classifying species and another distinguishing between images with animals and those that are empty, with 97% accuracy, which can be applied to images in datasets globally. For many research projects, the task of simply removing empty images can save thousands of hours of labor.
TABLE 2 Mean recall and precision rates (along with 95% confidence intervals) for predicting species using the species model on the validation dataset (the 10% of images that were withheld from training).
We propose a workflow for how users can apply these models to filter-out empty images and train new models as necessary (Figure 4).
By providing Shiny Applications to train models and classify images, we make this technology accessible to more scientists with minimal programming experience. Our finding that high recall (>95%) can be achieved with fewer than 2,000 images for some species (Table 2; Figure 1) suggests that smaller labeled image datasets can potentially be used to train models with this software.
Other researchers have developed models for recognizing animals in camera traps, with some success in out-of-sample identification. For example, Zilong software accurately removed 85% of empty images (Wei et al., 2020), MegaDetector had a precision of 89%-99% at detecting animals, and MLWIC achieved an accuracy of 82% at out-of-sample species classification (Tabak et al., 2018, 2019).
FIGURE 1 Within-sample validation of the species model revealed high recall and precision for most species. Median values across datasets are presented along with 95% confidence intervals. The number of datasets for each species is included in the circle next to the species name (circle sizes are proportional to the number of datasets containing each species).
We hypothesize that our models performed well on some out-of-sample datasets (Snapshot Serengeti, Snapshot Karoo, Wellington, and Saskatchewan; Table 3) because they were trained using camera trap images from multiple locations with different camera placement protocols, allowing the model to develop a search image for each species in multiple backgrounds (Figure 3).
Transferability of machine learning models remains a complication for implementing these models more broadly to camera trap data and, in many cases, it is most productive for scientists to build models that are trained directly on their study sites (see Figure 4 for more details). While such models will have less broad applicability (they are unlikely to be accurate globally), they can have high study-specific accuracies, thus reducing the burden of manual image classification. Our finding that models become more generalizable when more datasets are used to train the model (Figure 3) indicates that by including more diverse datasets when we train future models, we may be able to train a model that can be accurate in more locations.
| Future directions
As this new technology becomes more widely available, ecologists will need to decide how it will be applied in ecological analyses. For example, when using machine learning model output to design occupancy and abundance models, we can incorporate accuracy estimates that were generated when conducting model testing. Such estimates are especially relevant because it is common for some species to avoid detection by cameras when they are present (Tobler, Zúñiga Hartley, Carrillo-Percastegui, & Powell, 2015).
Another area in need of consideration is how to group taxa when few images are available for the species. We generally grouped species when few images were available for model training using an arbitrary cut off of approximately 1,000 images per group (Table 2).
Nevertheless, we had relatively few images of grizzly bears (Ursus arctos horribilis; n = 843), but we included this species because it is of conservation concern, and found high rates of recall and precision (99% for each). We grouped members of Mustelidae (Mustela erminea, Mustela frenata, unknown Mustela spp., Neovison spp., and Taxidea taxus) together, and this group had relatively low recall and precision (89% and 91%, respectively). When researchers develop new models and decide which species to include and which to group, they will need to consider the available data, the species or groups in their study, and the ecological question that the model will help address.
CONFLICT OF INTEREST
The authors have no conflicts of interest to declare.
DATA AVAILABILITY STATEMENT
The trained models described in this work are available in the MLWIC2 package (https://github.com/mikeyEcology/MLWIC2).
Images used to train models are available in the North American
Camera Trap Images dataset (lila.science/datasets/nacti). Data from validation tests are available from the dryad digital repository | 3,266.2 | 2020-03-20T00:00:00.000 | [
"Environmental Science",
"Computer Science"
] |
Photophysics of tetracarboxy-zinc phthalocyanine photosensitizers
Zinc-tetracarboxy-phthalocyanine (ZnPc(COOH)4) was synthesized by a melting method and basic hydrolysis. A ZnPc(COOH)4/Fe3O4/Ch composite was prepared by immobilization of ZnPc(COOH)4 onto Fe3O4/chitosan nanoparticles by a simple immersion method. The photophysical properties were studied using UV-vis spectrophotometry, fluorescence spectroscopy and time-correlated single photon counting (TCSPC) in different aqueous solutions. The UV-vis spectra of the ZnPc(COOH)4/Fe3O4/Ch composite displays absorption by the aromatic rings, with a Q band exhibited at λmax = 702 nm. Moreover, the ZnPc(COOH)4/Fe3O4/Ch composite exhibits long triplet-state lifetimes of 1.6 μs and 12.3 μs, crucial for application as a photosensitizer. A triplet quantum yield of 0.56 for the ZnPc(COOH)4/Fe3O4/Ch composite in DMSO/H2O was achieved. FTIR showed that the conjugation of ZnPc(COOH)4 with Fe3O4/chitosan nanoparticles was achieved by electrostatic interaction.
Introduction
Metallophthalocyanine (MPc) derivatives are popular photodynamic therapy (PDT) photosensitizers (PSs). Research on a novel PS requires extensive human effort and a high cost investment over decades before clinical application. Nevertheless, some MPc derivatives, such as: aluminium phthalocyanine (Photosens®, Russia), used against skin, breast and lung malignancies, and cancers of the gastrointestinal tract; 1 silicon Pc (Pc 4, USA), for the sterilization of blood components against human colon, breast and ovarian cancers, and gliomas; 2 and a liposomal zinc phthalocyanine formulation, using a controlled organic solvent dilution against squamous cell carcinomas of the upper aerodigestive tract, 3 have undergone clinical trials.
Current efforts are being made in the development of new photosensitizers (PSs) with improved solubility in body fluids and injectable solvents, photostability, enhanced permeability and retention effect, elimination and cumulative systemic toxicity. [4][5][6][7][8][9] In the field of organic photosensitizers, metallophthalocyanines (MPcs) play an important role due to their excellent photo- and electro-chemical stability and exclusive light-harvesting capability in the red/NIR spectral regions. [10][11][12][13] The main disadvantages of MPcs in PDT are the lack of solubility and selectivity; therefore, the combination of magnetic iron oxide nanoparticles with a photosensitizer is a new and promising approach in PDT. Fe 3 O 4 nanoparticles have been successfully applied in tumor therapy by inducing hyperthermia and oxidative stress that lead to tumor cell damage. [14][15][16] For application in PDT, magnetic nanoparticles (NPs) are usually coated with polymers, bound to the particle through organic linkers. 17 Functionalization of Fe 3 O 4 nanoparticles may lead to enhancement of their biocompatibility, colloidal stability, and an increase in the number of groups, through which the required antitumor effect can be obtained.
The major goal of this paper is to create a new photosensitizer with adequate solubility, especially in body fluids and injectable solvents, with greater tumor selectivity, enhanced hydrophilicity, and strong absorption in the NIR spectral region. Therefore, conjugation of an MPc derivative to a magnetic NP functionalized with a polymer is the first part of our research aimed at delivering PSs to tumor cells. The magnetic iron oxide nanoparticles will be used as the carrier of the photosensitizer because of: their ability to carry and deliver therapeutic photosensitizers into deep-seated tumours; the enhanced solubility of the hydrophobic PS with an appropriate size to accumulate in the tumour tissues via enhanced permeability and retention effect; and the ability to attack cancer cells selectively without harming other healthy cells. The Fe 3 O 4 NPs will be functionalized with chitosan, which is a biodegradable, biocompatible polysaccharide and, in comparison with many other polymers, has many free -OH and -NH 2 groups that can serve as anchors for conjugation of therapeutics and targeting ligands.
Considering the above mentioned information, we focused our research on attaching functionalized ZnPc with carboxylic groups (-COOH) to an Fe 3 O 4 /chitosan system hoping to get a synergistic effect in the photodynamic parameters of the resulting composite.
Equipment
The UV-vis spectra of the solutions were measured using a UV-vis spectrophotometer (Lambda 25, PerkinElmer, Inc., Shelton, CT, USA) from 200 nm to 1200 nm in 10 mm quartz cuvettes. The steady-state fluorescence spectroscopy was performed using a spectrometer (LS-55, PerkinElmer, Inc., Shelton, CT, USA) equipped with double-grating excitation and emission monochromators. Time-correlated single photon counting (TCSPC) was used to determine the fluorescence lifetime. The time-resolved fluorescence spectra were recorded on a spectrometer (FLS980, Edinburgh Instruments, Livingston EH54 7DQ, Oxford, UK). All the measurements were made at room temperature (295 ± 1 K). A Bruker D8 ADVANCE X-ray diffractometer (using Cu Kα radiation with λ = 1.5406 Å) was used for structural investigation of the magnetic nanoparticles. A Bruker FTIR spectrometer was used to provide information about the chemical composition.
Synthesis
The synthetic pathway of ZnPc(COOH) 4 is shown in Fig. 1. A mixture consisting of 4.35 g (0.022 mol) of trimellitic anhydride, 2.52 g of Zn(CH 3 COO) 2 ·2H 2 O, 0.3 g of (NH 4 ) 6 Mo 7 O 24 ·4H 2 O, 0.5 g of Na 2 SO 4 , 13.51 g (0.225 mol) of urea and 5 ml of 1-bromonaphthalene was heated at 200-205 °C for 8 h with continuous stirring. After 8 hours, the reaction mixture was cooled to room temperature and treated with methanol. The obtained suspension was filtered. The solid reaction product was washed on the filter with methanol, chloroform and, finally, with acetone. After drying, the product was crumbled and then refluxed for one hour in 5% hydrochloric acid (HCl) solution. After drying, the same procedure was carried out with 5% sodium hydroxide (NaOH) solution for one hour at 90 °C. Finally, the solution was acidified with HCl until the pH was equal to 2, and the precipitated final product was filtered and dried in the open air. 0.68 g of ZnPc(COOH) 4 was obtained with a yield of 70% (Fig. 1).
Preparation of chitosan-functionalized magnetic nanoparticles
Chitosan and Fe 3 O 4 were mixed in an appropriate proportion to form the chitosan-magnetic nanoparticles composite with amine groups by the reverse-phase suspension cross-linking method. 18 Aqueous acetic acid solution was used as a solvent for the chitosan polymer and H 2 O 2 was used as the cross-linker. In this specific procedure, a chitosan solution was prepared using a mixture of 2% acetic acid and 10% H 2 O 2 solutions. Then 0.2 g Fe 3 O 4 was added and stirred with strong ultrasonic agitation at room temperature for 4 h. At the end of this period, some of the chitosan-Fe 3 O 4 nanocomposite particles were collected from the reaction mixture by using a permanent magnet. The product was washed with ethanol and dried in vacuum at 60 °C for 5 hours and used for XRD analysis (Fig. 2).
Chitosan is able to interact with negatively charged molecules, 19 such as the hydroxyl (Fe-OH) groups on the surface of magnetite nanoparticles. The presence of -OH groups on the surface of the Fe 3 O 4 nanoparticles was confirmed by the strong broad band with a maximum at 3431 cm −1 in the IR spectrum (Fig. 4), corresponding to ν(O-H) oscillations. We suppose that ionic interactions occur between the negatively charged CH 3 COO − species and the positively charged (NH 3 + ) groups of the chitosan molecules dissolved in the aqueous acetic acid solution.
Attachment of ZnPc(COOH) 4 to chitosan, Fe 3 O 4 and Fe 3 O 4 /Ch nanoparticles
Acetic acid is a weak acid and is a very common solvent for chitosan. A sample of 0.3 g of chitosan was dissolved in 50 ml of 2% acetic acid. Then 0.5 ml of 10% hydrogen peroxide was added to the solution for the destruction of intermacromolecular hydrogen bonds and interchain hydrogen bonds to make water-soluble chitosan. The appropriate ratio of chitosan to acetic acid in the chitosan-acetic acid solution was 1 : 0.5, and then ZnPc(COOH) 4 was dissolved in a 1 : 1 DMSO/H 2 O solution. After that, both solutions were mixed, heated at 40 °C and stirred continuously for 40 min.
In a separate experiment, ZnPc(COOH) 4 solution was mixed with a dispersion medium containing chitosan-functionalized magnetic nanoparticles at room temperature and stirred for 2 h using a mechanical stirrer.
Experiments where ZnPc(COOH) 4 was dissolved in 1 : 1 DMSO/H 2 O solution and simply mixed with Fe 3 O 4 were also performed.
Structural analysis of the Fe 3 O 4 and Fe 3 O 4 /chitosan magnetic nanoparticles
The X-ray diffraction patterns of the Fe 3 O 4 and Fe 3 O 4 /chitosan nanoparticles, along with the standard pattern of Fe 3 O 4 (JCPDS #75-0033), are shown in Fig. 3 and details of the peaks are given in Table 1. The similar XRD patterns reveal that Fe 3 O 4 does not undergo any phase changes following functionalization with chitosan, a situation also confirmed by other reports. 20,21 XRD analysis revealed a broad nature of the diffraction maxima, indicating that Fe 3 O 4 has small crystallite sizes. The crystallite sizes were evaluated using the Debye-Scherrer formula, D = kλ/(β cos θ), where λ is the wavelength of the X-rays (1.5406 Å), β is the FWHM (full width at half maximum), θ is the diffraction angle, k = 0.94 and D is the crystallite size. The metal oxide nanoparticles have a mean crystallite size of 13.95 nm. During the coating process with chitosan, the crystallite size slightly increases, as the size of the individual crystallite is related to the thickness of the chitosan layer. The mean crystallite size of the nanoparticles with chitosan increases up to 14.80 nm.
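A small worked example of the Debye-Scherrer estimate is given below; the peak position and width used are illustrative values chosen to give a size close to the ~14 nm reported above, not the numbers from Table 1.

import numpy as np

def scherrer_size_nm(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.94):
    """Crystallite size D = k*lambda / (beta * cos(theta)), as quoted above.

    fwhm_deg      : peak full width at half maximum (degrees 2-theta)
    two_theta_deg : peak position (degrees 2-theta)
    """
    beta = np.radians(fwhm_deg)            # FWHM converted to radians
    theta = np.radians(two_theta_deg / 2)  # Bragg angle
    return k * wavelength_nm / (beta * np.cos(theta))

# Illustrative numbers only: a ~0.6 deg wide Fe3O4 (311) reflection near
# 35.5 deg 2-theta gives a crystallite size of roughly 14 nm.
print(round(scherrer_size_nm(fwhm_deg=0.6, two_theta_deg=35.5), 1))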
FTIR analysis
The FTIR spectra of chitosan, the Fe 3 O 4 , Fe 3 O 4 /chitosan and ZnPc(COOH) 4 /Fe 3 O 4 /chitosan samples are shown in Fig. 4. The result is consistent with similar investigations. 22,23 The chemical interaction of ZnPc(COOH) 4 with the Fe 3 O 4 /chitosan system is confirmed by the shift of the signal from 1702 cm −1 (ν(C=O)) of the protonated COOH groups in the IR spectrum of ZnPc(COOH) 4 , associated with splitting, to 1660 cm −1 (ν sym (COO − )), and 1436 and 1406 cm −1 (ν asym (COO − )) that correspond to deprotonated carboxylic groups. This can be explained by the dissociation of carboxylic groups and formation of electrostatic interactions between NH 3 + and COO − fragments (Fig. 16a).
UV-vis and fluorescence analysis
Usually, MPcs give rise to electronic spectra with two strong absorption bands, one around 300 nm, called the "B" or Soret band, due to electronic transitions from the deeper π-HOMO to π*-LUMO energy levels, while the other at 600-650 nm, called the "Q" band, due to electronic transitions from the π-HOMO to π*-LUMO energy levels. 24 The UV-vis spectra of ZnPc(COOH) 4 and ZnPc(COOH) 4 /Ch in DMSO/H 2 O are presented in Fig. 6. The absorption spectra of the synthesized materials display absorption peaks in the visible region at around 700 nm. In the case of ZnPc(COOH) 4 and ZnPc(COOH) 4 /Fe 3 O 4 /Ch (Fig. 8), 2% acetic acid and 10% hydrogen peroxide were used. The Q band extends into the 580-800 nm region and exhibited two peaks at λmax = 645 nm and 702 nm in the case of the Fe 3 O 4 nanoparticles linked to chitosan (Fig. 8), almost the same values as when Fe 3 O 4 is not bound to chitosan (Fig. 7). Both the ZnPc(COOH) 4 /Fe 3 O 4 /chitosan and ZnPc(COOH) 4 /Fe 3 O 4 spectra (Fig. 9) show similar specific absorption peaks of the phthalocyanine aromatic ring. The chitosan had no obvious absorption peak in the visible region, but leads to an increased intensity of the 702 nm peak and a narrower Q band. The comparison in Fig. 9 allows us to suppose that the Q absorption band could be assigned to the π-π* transition on the ZnPc macrocycle. Introducing the peripheral -COOH substituent onto the macrocycle of ZnPc led to a significant bathochromic shift of the absorption spectra due to an increased destabilization of the HOMO electron state versus the LUMO state.
The low energy peak is due to the monomer, while the high energy peak is caused by the aggregation. The aggregation species persisted more when the Fe 3 O 4 nanoparticles were not bound to chitosan.
The fluorescence emission spectrum of ZnPc(COOH) 4 in DMSO/H 2 O is shown in Fig. 10. The fluorescence spectrum after excitation at 615 nm shows two emission bands situated at 695 nm and 765 nm. The fluorescence spectrum of the ZnPc(COOH) 4 /chitosan system (Fig. 11) after excitation at 638 nm also shows two bands, as in Fig. 10, but they are both shifted 10 nm into the near-infrared region. The fluorescence spectrum of ZnPc(COOH) 4 immobilized on the Fe 3 O 4 magnetic nanoparticles shows broad and structured fluorescence at 702 nm, 764 nm, 789 nm and 826 nm, and shows an increase in intensity at 850 nm, when excited at 645 nm (Fig. 12). The limits of the measurement equipment did not allow us to record fluorescence above 850 nm. The spectrum of ZnPc(COOH) 4 immobilized on the Fe 3 O 4 /chitosan magnetic nanoparticles shown in Fig. 13 displayed less structured fluorescence. Only two broad bands situated at 713 nm and 784 nm shifted to the near-infrared region are revealed. The resultant red-shift was associated with the electrostatic interaction between ZnPc(COOH) 4 and the chitosan-functionalized Fe 3 O 4 nanoparticles.
The fluorescence lifetimes of ZnPc(COOH) 4 and ZnPc(COOH) 4 /chitosan in DMSO/H 2 O solution are presented in Fig. 14.
The fluorescence decays of ZnPc(COOH) 4 and ZnPc(COOH) 4 /chitosan are summarized in Table 2. We therefore suppose that the surface interaction between the amino groups of the chitosan/Fe 3 O 4 and the carboxylic groups of ZnPc(COOH) 4 most probably takes the form of an electrostatic interaction. In addition to the electrostatic interaction between the charged surfaces of ZnPc(COOH) 4 and chitosan/Fe 3 O 4 , coordination bonds between the Zn 2+ ions of phthalocyanine and the oxygen atoms of chitosan/Fe 3 O 4 can be formed. 26,27 Hydrogen bonds between the nitrogen atoms of phthalocyanine and the hydrogen atoms of chitosan/Fe 3 O 4 are also possible, as shown in the scheme presented in Fig. 16.
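For illustration, the sketch below shows how two lifetimes can be extracted from a TCSPC decay by fitting a bi-exponential model; the synthetic decay and its lifetimes are arbitrary and this is not the analysis software used for Fig. 14 or Table 2.

import numpy as np
from scipy.optimize import curve_fit

# Bi-exponential decay model commonly used to extract two lifetime components
# from a time-correlated single photon counting trace.
def biexp(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

t = np.linspace(0, 20, 400)                   # time axis in ns (synthetic)
decay = biexp(t, 0.7, 1.2, 0.3, 3.5)          # arbitrary amplitudes/lifetimes
decay += np.random.default_rng(0).normal(0, 0.005, t.size)  # noise stand-in

popt, _ = curve_fit(biexp, t, decay, p0=(0.5, 1.0, 0.5, 4.0))
print("tau1 = %.2f ns, tau2 = %.2f ns" % (popt[1], popt[3]))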
So, significant efforts have been made to develop the ZnPc(COOH) 4 /Fe 3 O 4 /chitosan composite, which has strong absorption of long-wavelength light and a triplet quantum yield of 0.56, and can therefore be promising for PDT. But further studies will continue to improve the triplet-state lifetime and the triplet quantum yield, and to elucidate the physicochemical processes in this composite. Moreover, in vitro and in vivo studies are required to elucidate the PDT effects.
(2) Fe 3 O 4 /chitosan magnetic nanoparticles with a mean crystallite size of up to 14.80 nm were prepared using the suspension cross-linking technique.
(3) ZnPc(COOH) 4 immobilized on chitosan-functionalized Fe 3 O 4 nanoparticles through an immersion method, with the aid of a DMSO/H 2 O 2 /acetic acid solution, exhibits long triplet-state lifetimes of 1.6 μs and 12.3 μs.
The values of the triplet quantum yield (0.56) and the triplet-state lifetimes of ZnPc(COOH) 4 /Fe 3 O 4 /Ch make this composite a promising candidate for PDT. | 3,446.8 | 2022-11-03T00:00:00.000 | [
"Chemistry",
"Materials Science",
"Physics"
] |
Combining continuous flow oscillatory baffled reactors and microwave heating: process intensification and accelerated synthesis of metal-organic frameworks
We have constructed a continuous flow oscillatory baffled reactor (CF-OBR) equipped with a homogeneous and controllable microwave applicator in an entirely novel design. This affords a new route to chemical production incorporating many of the principles of process intensification and allows, for the first time, investigation of the synergistic benefits of microwave heating and CF-OBRs, such as faster and continuous processing, improved product properties and purity, improved control over the processing parameters, and reduced energy consumption. The process is demonstrated by the production of a metal-organic framework (MOF), HKUST-1, a highly porous crystalline material with potential applications in gas storage and separation, catalysis, and sensing. Our reactor enabled the production of HKUST-1 at the 97.42 g/h scale, with a space time yield (STY) of 6.32 × 10⁵ kg/m³/day and surface area production rate (SAPR) of 1.12 × 10¹² m²/m³/day. This represents the highest reported STY and fastest reported synthesis (2.2 seconds) for any MOF produced via any method to date and is an improvement on the current SAPR for HKUST-1 by two orders of magnitude owing to the superior porosity exhibited by HKUST-1 produced using our rig (Langmuir surface area of 1772 compared to 600 m²/g).
Introduction
Microwave heating is an established process intensification method used in industrial sectors such as rubber vulcanizing [1] and for drying food and wood. [2] During microwave heating, energy is delivered instantaneously through interaction of an alternating electromagnetic field with a material rather than by conductive, convective or radiative heat transfer. [3] Microwave heating enables selective and targeted heating to specific components during the reaction; this is particularly attractive for chemical processing as an alternative to conventional heating owing to the following benefits; significantly reduced production times (many hours to minutes), increase in product yield and purity, and enhancement of product properties. [4][5][6][7][8] A continuous flow oscillatory baffled reactor (CF-OBR) is a proven process intensification method in the laboratory for reactions such as biodiesel production, [9] bioprocessing [10] and saponification [11], and is increasingly being commercialised [12]. CF-OBRs are tubular reactors containing equally spaced baffles presented transversely to an oscillatory/pulsed flow, as shown in Figure 1. Fluid inside the CF-OBR is oscillated by a pump placed at one or both ends of the tube. The baffles disrupt the boundary layer at the tube wall, whilst the oscillation creates vortices, resulting in improved mixing. Superposition of net flow on to the oscillatory motion allows control over mixing and residence times by altering the oscillation conditions, i.e. oscillation amplitude and frequency. [13] Unlike a conventional tubular reactor, the degree of mixing in an OBR is independent of the net flow, therefore it is possible to achieve a high level of mixing at low flow rates and what would otherwise be low Reynolds numbers. [13] An advantage of this is the ability to use tubular reactors with a greatly reduced length-to-diameter ratio thus decreasing the size of the process. Size reductions of up to 99.6% compared to continuous stirred tank reactors (CSTRs) with equivalent throughput are possible. [14] Furthermore, OBRs are scalable as the mixing mechanisms do not alter between the laboratory and industrial scale, given geometric and dynamic similarity. [13] As with microwave heating, CF-OBRs offer huge opportunities for intensifying chemical production. The uniform mixing environment and enhanced heat transfer in OBRs enable considerable reduction in processing times (by up to 80%), better temperature control, and consistent product properties (i.e. particles with uniform size and morphology). [14] Microwave heating and CF-OBRs individually show great potential for energy savings, enhanced process control and optimization, and improved product quality compared to other processing methods, such as CSTRs, as a result of reduced solvent and energy usage, faster and continuous processing, smaller processing equipment, and safer implementation of harsh production conditions. [13] Therefore, a production route that combines microwave heating and CF-OBR technology has the potential to deliver synergistic benefits to scalable chemical processing with exceptional process intensification attributes. In particular, the CF-OBR is able to provide plug flow in a compact design whilst uniformly suspending solid particles. The solid particles are suspended by the oscillatory flow mixing structures.
They will be in constant motion, hence they will be exposed to the same dose of microwave energy. This is an advantage over packed bed catalytic reactors for example where it can be challenging to heat large beds homogeneously using the microwave field. In other designs of laboratory-scale plug flow reactors, such as microchannel reactors, it is difficult to operate with solid particles without blockages, or time-consuming development of catalyst washcoats (carriers to disperse particles).
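The mixing regime in a CF-OBR is conventionally characterised with the net-flow and oscillatory Reynolds numbers and their ratio; the sketch below evaluates these standard groups for an illustrative set of fluid properties and geometry, which are assumptions and not values taken from this paper.

import math

# Standard dimensionless groups for oscillatory baffled reactors:
# Re_net = rho*u*d/mu, Re_osc = 2*pi*f*x0*rho*d/mu, and their ratio.
rho = 789.0        # ethanol density, kg/m^3 (assumed)
mu = 1.1e-3        # ethanol viscosity, Pa.s (assumed)
d = 0.015          # tube internal diameter, m (assumed)
u = 0.005          # superficial net-flow velocity, m/s (assumed)
f = 2.0            # oscillation frequency, Hz (assumed)
x0 = 0.005         # centre-to-peak oscillation amplitude, m (assumed)

re_net = rho * u * d / mu
re_osc = 2 * math.pi * f * x0 * rho * d / mu
velocity_ratio = re_osc / re_net   # >1 means mixing is oscillation-dominated

print(f"Re_net = {re_net:.0f}, Re_osc = {re_osc:.0f}, ratio = {velocity_ratio:.1f}")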
Metal-organic frameworks (MOFs) are highly porous crystalline materials composed of metal nodes and organic linkers. [15] MOFs have received marked attention from academia and industry owing to their unprecedented porosity (surface areas up to 7000 m 2 /g) [16] and diverse structures and functionalities. The properties of MOFs offer immense opportunities for economic and environmental impact in areas such as gas storage and separation, [17] catalysis, [18] sensing, [19] and drug delivery [20]. In particular, the tuneable nature of MOFs could enhance their performance in gas and petrochemical separations compared to other adsorbents, such as activated carbons and zeolites.
MOFs are not currently used in industry owing to the inability to produce these materials at the required quality, purity, quantity and cost for application. [6,21] The main reason for this is the demanding MOF synthesis conditions; i.e. use of large amounts of toxic, corrosive and highly flammable chemicals, high temperatures and autogenous pressures (typically above the boiling point of the solvent), long reaction times (hours or days), acidic by-products, high energy requirements, and heterogeneous reaction mixtures that require mixing. [6] Additional challenges include reproducibility between batches and cost and availability of large scale reaction rigs. [6] The development of technologies that address these issues in an efficient and sustainable way is a key enabling step in the transfer of MOFs from the laboratory to industrial application.
As methods for scaling up MOF production evolve, parameters for comparing and assessing their efficiency and practicality have become important. Two key parameters include production rate (mass of dry MOF product per hour, g/hr) and space time yield (STY, quantity of MOF produced per unit volume of reactor in a 24 hour period, kg/m 3 /day). [6] Another important factor is the quality of MOF produced; this is dependent upon their intended use. For example, the overall surface area exhibited by MOFs is important for gas capture and storage [6,22] whereas uniformity of size and morphology of the crystals are important for separations [23][24][25] and controlled drug release [26].
Surface area production rate (SAPR), which is defined as the amount of surface area of MOF produced per unit volume of reactor per day, m 2 /m 3 /day, has recently been developed to indicate the quality of MOF obtained from different production methods. [21] This criterion will be used in this paper to evaluate the production methods discussed and developed herein. Continuous flow microwave syntheses of MOFs have previously been reported in small-bore tubular systems [28] and a tubular microwave reactor with a high reported STY of up to 400,000 kg/m 3 /d for HKUST-1 [21]. In the latter system, a significant decrease in surface area from ca.
2100 [29] to 600 m 2 /g [21] is observed which would render the MOF with little or no commercial value, highlighting the importance of assessing the quality of MOF produced as well as the production rate.
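As a worked example of these metrics, the calculation below uses the headline figures reported later in this paper (97.42 g/h, STY of 6.32 × 10 5 kg/m 3 /day and a Langmuir surface area of 1772 m 2 /g); the reactor volume is back-calculated for illustration and is not quoted directly in the text.

# STY and SAPR from a production rate, a reactor volume and a specific
# surface area. The reactor volume here is implied by the reported STY.
production_rate_g_per_h = 97.42
sty_kg_m3_day = 6.32e5
surface_area_m2_per_g = 1772.0

daily_output_kg = production_rate_g_per_h * 24 / 1000
implied_reactor_volume_m3 = daily_output_kg / sty_kg_m3_day
sapr_m2_m3_day = sty_kg_m3_day * 1000 * surface_area_m2_per_g

print(f"daily output   : {daily_output_kg:.2f} kg/day")
print(f"implied volume : {implied_reactor_volume_m3*1e6:.1f} mL")
print(f"SAPR           : {sapr_m2_m3_day:.2e} m^2/m^3/day")   # ~1.12e12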
However, these systems do not fully assess the effect of microwave energy on the reaction mixture and rely upon mixing provided by small internal diameter (i.d. <4.4 mm) tubes. [21,27,28] In order to facilitate developments in continuous flow microwave synthesis of MOFs beyond the laboratory, greater understanding of the effect of microwave and mixing parameters that are essential for scale-up is required. Microwave parameters include: the efficiency of power coupling and distribution of the electric field within the heating cavity; penetration depth and relationship with reactor design and specification; and the required power density distribution (energy per unit volume over the treatment time, see references [7,30] for a more in-depth explanation) to produce MOFs consistently at high quality and at the required production rates and STYs. [7] Mixing parameters include: fluid mechanics, formation and dissipation of vortices, velocity profiles, shear rate distribution, residence times, and scale-up correlations aimed at producing MOFs with a desired particle size distribution and morphology. [11] All of these variables underpin the successful integration of microwave energy with chemical reactor systems capable of delivering the economic large-scale manufacturing processes needed to produce MOFs, consistently at high quality and at the correct cost base and minimal environmental impact.
We have constructed a CF-OBR system equipped with a homogeneous and controllable microwave applicator in an entirely novel design, as CF-OBRs have not been used in conjunction with controllable microwave heating before. This affords a new route to MOF production incorporating many of the principles of process intensification and allows, for the first time, investigation of the synergistic benefits of microwave heating and CF-OBRs. Using our system, herein referred to as the 'MW-CF-OBR' rig (Figure 2), we have, for the first time, quantified experimentally the amount of microwave energy absorbed by a MOF reaction mixture in continuous flow, thus allowing investigation of the effect of mixing and microwave heating on continuous MOF production. We report the effect of flow rate and baffle material on the production rate, STY and quality (SAPR, porosity, crystallinity, particle size and morphology) of an archetypal MOF, HKUST-1 [31]. This particular MOF was chosen to enable comparison with other production routes. Using our MW-CF-OBR rig we have produced HKUST-1 at the 97.42 g/h scale and with a space time yield (STY) of 6.32 × 10 5 kg/m 3 /day. This is more than 50% higher than the previously reported STY [21] and represents the highest reported STY for any MOF produced via any method to date. Additionally, all the HKUST-1 materials produced were highly porous and exhibited Brunauer-Emmett-Teller surface areas (SA BET ) over 1300 m 2 /g (i.e. at the desired level/according to specification), resulting in a surface area production rate (SAPR) of 1.12 × 10 12 m 2 /m 3 /day. This is two orders of magnitude more than the highest reported SAPR for HKUST-1 and shows that the materials produced using the MW-CF-OBR rig are of high quality.
Figure 2 (caption, in part): T1: thermocouple; S1 and S2: microwave leakage meters; OBR: green corresponds to plastic baffle, red corresponds to interchangeable baffle section (metal or plastic); C1 and C2: PC controllers. Further information about the components is given in the Supporting Information (SI).
Experimental
The schematic of the MW-CF-OBR rig is given in Figure 2. The rig consists of four continuous syringe pumps (labelled P1-P4 in Figure 2), three feedstocks (E1-E3, Figure 2), a mesoscale oscillatory baffled reactor (consisting of two individual tubes connected together fitted with a helical plastic perfluoroalkoxyalkane, PFA, baffle; labelled OBR in Figure 2) fed through a choked microwave cavity (MW4, Figure 2) and a product collection vessel (E4, Figure 2). PFA was used for the baffles as it is compatible with the reaction mixtures and is microwave transparent. The microwave cavity was designed to ensure homogeneity of the treatment by considering the electromagnetic properties of the reaction mixture. This was achieved via electromagnetic simulations performed using COMSOL Multiphysics [32] (see SI for further details and model). Figure 3 shows the power density distribution inside the material (including baffles and reaction mixture). Power density variation was ± 6% across the treatment zone (Figure 3), indicative of uniform heating in the microwave applicator zone. Chokes were designed by considering the electromagnetic properties of the reaction mixture to prevent any microwave leakage and are compliant with health and safety legislation. [33] A thermocouple located directly after the microwave cavity (T1, Figure 2) was used to monitor the bulk temperature of the suspension following heating in order to ensure no disruption to the oscillatory flow in the microwave heated zone. As mentioned in the introduction, HKUST-1 was selected as it has been produced via many different routes (e.g. electrochemical, mechanochemical/extrusion, spray drying, microwave heated reactions, conventional heated continuous flow synthesis and in batch systems such as CSTRs) [6]. This enables direct comparison of production rate, STY and quality (SAPR, porosity, particle size and morphology, crystallinity) for which some data already exist. For example, production of HKUST-1 in ethanol in a CSTR at 60 °C takes 5 hours with a yield of 32.3%, STY of 41 kg/m 3 /day, and an SAPR of 91 × 10 6 m 2 /m 3 /day. [21] Feedstocks were prepared according to previously reported stoichiometries and concentrations for HKUST-1 synthesis. [21] Feedstock one is labelled E1 in Figure 2. An automatic 3-stub tuner (S-TEAM) [35] was used for impedance matching to minimize reflected power and maximize energy absorption by the reaction mixture during microwave treatment.
Once temperature had stabilized and the system had reached steady state (approximately 5 reactor volumes), samples were collected at the OBR outlet (after V3, Figure 2) at approximately 15 mL intervals during continuous HKUST-1 production. Power meters were used to measure the forward and reflected power from which the average absorbed power (forward minus reflected) and total absorbed energy were determined.
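The energy bookkeeping described above can be sketched as follows: absorbed power is forward minus reflected power, and dividing by the mass flow rate gives a specific energy input. All numbers below are illustrative assumptions rather than measured values from the rig.

# Absorbed microwave power and specific energy input for a flowing mixture.
forward_power_w = 800.0
reflected_power_w = 60.0
flow_rate_ml_min = 50.0
density_g_ml = 0.80          # assumed density of the ethanolic reaction mixture

absorbed_power_w = forward_power_w - reflected_power_w
mass_flow_g_s = flow_rate_ml_min * density_g_ml / 60.0
specific_energy_j_g = absorbed_power_w / mass_flow_g_s

print(f"absorbed power  : {absorbed_power_w:.0f} W")
print(f"specific energy : {specific_energy_j_g:.0f} J/g")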
In order to assess the effect of localised electromagnetic heating of the baffles, both PFA and stainless steel baffles were used in the interchangeable baffle section (as indicated by the red zone in Figure 2). When metals are exposed to microwave energy, the majority of the microwave current flows as an eddy current in a thin surface layer known as the skin depth. The skin depth for stainless steel is 8.48 μm. [3] The remaining OBR length was fitted with PFA baffles (highlighted green in Figure 2) to ensure that the metal baffle did not protrude beyond the chokes of the microwave cavity, thus preventing microwave leakage. Experiments conducted with stainless steel baffles were performed at two flow rates, 10 and 50 mL/min.
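The quoted skin depth can be reproduced approximately from the standard expression for a good conductor; the resistivity and relative permeability below are typical textbook values for an austenitic stainless steel and are assumptions, chosen only to show that they give a result close to the 8.48 μm cited above.

import math

# Skin depth for a conductor: delta = sqrt(rho_e / (pi * f * mu0 * mu_r)).
f = 2.45e9              # microwave frequency, Hz (assumed 2.45 GHz)
rho_e = 7.0e-7          # resistivity of stainless steel, ohm.m (assumed)
mu0 = 4 * math.pi * 1e-7
mu_r = 1.0              # non-magnetic austenitic grade (assumed)

skin_depth_m = math.sqrt(rho_e / (math.pi * f * mu0 * mu_r))
print(f"skin depth ~ {skin_depth_m*1e6:.1f} um")   # ~8.5 um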
After microwave treatment, the resulting blue suspensions were centrifuged for 10 minutes at 4200 revolutions per minute at 5 °C. The supernatant was then decanted and the blue powder washed with ethanol (ca. 10 mL) and re-centrifuged using the same conditions. The decant, wash and centrifugation step was repeated once more. After this, the supernatant was decanted and the powder was dried in an oven at 50 °C for 5 hours. A minimum of five samples from three separate reactions were obtained for analysis at each flow rate.
The average yields of HKUST-1 at each flow rate were determined by thermogravimetric analysis of 2 samples from 3 separate experiments (i.e. the 'average yield' is determined from a total of 6 samples per flow rate). Data are given in Table S2 in the SI.
Results and Discussion
Using our novel MW-CF-OBR rig (Figure 2), the effect of synthesis conditions, namely flow rate, applied power (and therefore energy), and baffle material, on the yield and properties (porosity, crystallinity and morphology) of HKUST-1 was investigated. Reactions were conducted in ethanol as this solvent does not decompose during HKUST-1 synthesis and can be recycled, thus reducing the cost, inherent safety issues, and environmental burden of the process. [36] The applied power was varied with flow rate to maintain a bulk reaction temperature of 65 ± 5 °C, as displayed in Table 2. Temperatures beyond 65 ± 5 °C were not investigated in order to prevent build-up of pressure from the boiling reaction mixture. Data in Table 2 show that an increase in energy is required to maintain the target temperature with decreasing flow rate and in the presence of a metal baffle; this is due to heat losses to the surrounding areas. Future experiments need to consider insulating the OBR if flow rates below 30 mL/min are required.
Yield of HKUST-1
Products synthesised using the MW-CF-OBR rig exhibit powder X-ray diffraction (PXRD) peaks corresponding to pure HKUST-1, showing that no other MOF phases or impurities were produced (see Figure S4, SI). As mentioned in the introduction, it is important to calculate yield in terms of the production rate and STY as these parameters are typically used to evaluate process efficiency and to compare between different production routes. Figure 4 shows an increase in production rate with increasing flow rate. [21] It may be possible to improve the conversion by altering the concentration of reactants, [21] the metal:linker ratio, [37] or by increasing the reaction temperature [21] but our current experimental set-up is limited by the reduced penetration depths observed for more concentrated metal salt solutions and by the pressure generated at elevated temperatures. Further work is ongoing to investigate these factors. The effect of treatment conditions on the quality of HKUST-1 was also investigated by analysing the crystallinity, morphology and porosity of HKUST-1 produced using the MW-CF-OBR rig. As mentioned in the introduction, MOFs that exhibit high surface areas are particularly desirable for gas storage and capture [6,22] whereas MOFs consisting of highly crystalline particles of uniform size and morphology are appropriate for separations [23][24][25] and controlled drug release [26].
Crystallinity and morphology: SEM analyses show that HKUST-1 materials recovered from the MW-CF-OBR rig mainly consist of nano-sized crystals below 15 μm 2 . This is consistent with previous reports in which microwave heating produces smaller crystals than conventional heating owing to nucleation at hotspots that form due to local superheating of the solvent [8] and solvated metal ions. [7,30] In this work, five dominant morphologies namely, plates, circular, irregular, triangular based prisms and octahedral crystals are observed (see Section 4, SI). Irregular, [38] circular [38] and octahedral shaped crystals have been reported previously for HKUST-1 with octahedral crystals >5μm being the most common. [38][39][40][41][42] The shape of the crystals is unlikely to be critical for industrial applications of MOFs, however, the production of crystals with uniform size and shape is desirable. Using an aforementioned particle size distribution analysis method, [7,43] our present work shows that HKUST-1 particle size and morphology is dependent on the flow rate and, therefore, mixing effects and microwave treatment (power and time, consistent with previous reports [7,44]). However, there is no obvious correlation between the yield of HKUST-1 and particle size (see Table S2 in SI).
Materials recovered from the MW-CF-OBR rig show a reduction in median particle size with increasing flow rate. For example, at a flow rate of 5 mL/min (plastic baffles) HKUST-1 particles consist of triangular based prisms and octahedral crystals with a median crystal size of 0.78 μm 2 as observed by scanning-electron microscopy (SEM, see Figure 5 and SI). However at a flow rate of 100 mL/min (plastic baffles) HKUST-1 particles consist of much smaller irregular and circular shaped crystals with a median size of 0.016 μm 2 (see Figure 5). The same trend is observed for HKUST-1 produced using metal baffles; the median particle size decreases from 0.15 (octahedral and triangular prisms) to 0.019 μm 2 (irregular and circular crystals) between flow rates of 10 and 50 mL/min, respectively (see Figure 5). The smaller and irregular shaped crystals recovered at high flow rates are likely the result of decreased reaction time, i.e. crystals are earlier in the crystallisation process, or crystal breakage due to rapid and harsh mixing. Further work is ongoing to investigate these effects.
The relative crystallinity of HKUST-1 materials were determined using the method of Vivani et al. [45] by comparing the reciprocals of the full width half maximum (1/FWHM) for a selected peak in the PXRD patterns. FWHM were calculated using a split pseudo-Voigt peak fitting function ( Figure S5, SI). Data in Table 1 show no obvious correlation between treatment conditions and relative crystallinity. HKUST-1 produced at 100 mL/min shows the lowest 1/FWHM (see Table 1) and therefore is the least crystalline material produced using the MW-CF-OBR rig. This is consistent with SEM analyses in which irregular shaped crystals were produced at 100 mL/min and is an acceptable level of crystallinity compared to other commercially available materials[46, 47]. Porosity: A summary of N 2 gas sorption data for HKUST-1 produced in the MW-CF-OBR rig is given in Table 2. HKUST-1 prepared at flow rates of 5 and 10 mL/min exhibit Type I isotherms with little hysteresis, [48] characteristic of microporous materials and typically exhibited by HKUST-1 [49,50]. HKUST-1 prepared at flow rates of 50, 80 and 100 mL/min exhibit Type I isotherms with some Type IV character and H4 hysteresis indicative of inter-particulate mesoporosity, [48,51] also reported previously for HKUST-1 [52]. All materials exhibit Langmuir (SA Lang ) and Brunauer-Emmett-Teller surface areas (SA BET ) over 1300 m 2 /g with no obvious correlation between treatment conditions and surface area (see Table 2).
In our present work, the highest (SA BET ) of 2004 ± 0.4 m 2 /g was exhibited by HKUST-1 prepared at a flow rate of 30 mL/min which is higher than that reported for HKUST-1 prepared by mechanochemical (1600 m 2 /g), electrochemical (1820 m 2 /g), solvothermal (1550 m 2 /g) and continuous flow (1950 m 2 /g) synthesis. [6] A relationship between flow rate and relative microporosity of HKUST-1 was determined by calculating the ratio of the pore volume at low pressure to the pore volume at high relative pressure (V 0.1 /V Tot ). [7,53] Data in Table 2 (and Figure S6, SI) show that as the flow rate increases the value of V 0.1 /V Tot tends away from one, indicating that highly microporous HKUST-1 is produced at flow rates below 50 mL/min and above this intergranular mesoporosity arises. This is consistent with SEM analyses (see SI) and another report where high V Tot and H4 hysteresis were exhibited by expanded aggregated particles of HKUST-1. [54] In order to compare the surface area of HKUST-1 produced using our MW-CF-OBR rig with that obtained via other processes, we have determined the SAPR. The SAPR of HKUST-1 produced using our MW-CF-OBR rig increases with increasing flow rate owing to high SA BET (>1300 m 2 /g) as shown in Figure 6 and Table 2, respectively. While this result is expected, a related microwave-heated continuous flow study by McKinstry et al., [21] reported a decrease in the SA Lang of HKUST-1 from 1930 to 1550 m 2 /g with an increase in flow rate from 300 to 550 mL/h. [21] The reduction in surface area observed by McKinstry et al. may be a consequence of using a non-optimised system, i.e. the energy absorbed per unit mass was not controlled, thus resulting in a difference in temperature between the different flow rates and a reduction in quality of the MOF produced. [7] In this work SAPRs between 3.36 ×10 7 and 1.12 × 10 12 m 2 /m 3 /day (where a day corresponds to 24 hours) were achieved, with the highest SAPR reached at a flow rate of 100 mL/min (see Figure 6). Our highest SAPR is two orders of magnitude higher than the current top SAPR of 2.40 × 10 9 m 2 /m 3 /day owing to the superior porosity exhibited by HKUST-1 produced using our rig (Langmuir surface area of 1772 compared to 600 m 2 /g). [21] Figure 6: Plot of flow rate vs. surface area production rate for representative samples of HKUST-1 produced using the MW-CF-OBR rig. Circles and triangles represent experiments conducted with a plastic and metal baffle, respectively.
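The relative-microporosity indicator described above can be computed as in the sketch below; the isotherm points are synthetic and serve only to illustrate the V0.1/VTot ratio, they are not data from this work.

import numpy as np

# Ratio of the N2 uptake at p/p0 = 0.1 to the total uptake near saturation
# (V_0.1 / V_Tot); values close to 1 indicate a predominantly microporous solid.
p_rel = np.array([0.01, 0.05, 0.10, 0.30, 0.60, 0.90, 0.99])
v_ads = np.array([300., 380., 400., 410., 420., 440., 460.])   # cm^3(STP)/g, synthetic

v_01 = np.interp(0.1, p_rel, v_ads)
v_tot = np.interp(0.99, p_rel, v_ads)
print(f"V0.1/VTot = {v_01 / v_tot:.2f}")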
Conclusions
We have developed a microwave heated oscillatory baffled reactor (MW-CF-OBR) rig for synthesising HKUST-1 in ethanol under mild conditions (ambient pressure and temperatures below 70 °C). Oscillatory baffled reactors were employed to generate uniform mixing of reactants and suspension of solids, and to facilitate rapid heat transfer to the bulk mixture. Microwave heating enabled a significant reduction in synthesis time, with production rates of 97.42 g/h and a space time yield (STY) of 6.32 × 10 5 kg/m 3 /day achieved. This is the highest reported STY and fastest synthesis rate (2.2 seconds) for any MOF produced to date using any production method. Process intensification is enabled by both technologies through efficient mixing (uniform suspension of solids and high level of plug flow, OBR technology) and rapid delivery of energy requirements (microwave heating). The rapidity of energy delivery is further enhanced by selective microwave heating of the metal ions in solution. [7,56] The quality of HKUST-1 materials produced using the MW-CF-OBR rig was assessed by analysing the crystallinity, morphology and porosity. HKUST-1 particle size and morphology were found to be dependent on the flow rate. For example, syntheses conducted at 5 mL/min gave relatively large octahedral and triangular prism shaped crystals (median particle size 0.78 μm 2 ) whereas flow rates above 50 mL/min gave much smaller irregular shaped crystals (median particle size 0.019 μm 2 ). A slight reduction in relative crystallinity (although still in line with commercial sources) [46,47] is exhibited by HKUST-1 produced at flow rates of 100 mL/min compared to lower flow rates, which is probably due to shorter reaction times (less time for crystallisation), or crystal breakage due to rapid and harsh mixing. All the HKUST-1 materials produced were highly porous and exhibited Brunauer-Emmett-Teller surface areas (SA BET ) over 1300 m 2 /g, resulting in a surface area production rate (SAPR) of 1.12 × 10 12 m 2 /m 3 /day. This is two orders of magnitude more than the highest reported for HKUST-1 and confirms that the materials produced using the MW-CF-OBR rig are of high quality.
Supporting Information
Supporting Information associated with this article can be found online at (to be inserted). | 5,936 | 2019-01-15T00:00:00.000 | [
"Materials Science"
] |
The Supply of Inputs to Rice Farmers in Savannakhet
The policy of rice intensification in Laos is dependent on an adequate supply of key inputs such as high-yielding seeds, good-quality fertilizers, reliable irrigation, affordable finance, and appropriate information. This study focused on two crucial inputs—seeds and fertilizers. Six villages in Champhone District, Savannakhet Province, were selected for the study. Farmers had mostly adopted the seed-fertilizer technology. Mechanization of land preparation through the use of hand tractors was also widespread. Many also used irrigation to cultivate a dry-season crop. However, the productivity and profitability of rice farming remained low. Constraints to the supply of seeds and fertilizers explain part of this dilemma. There is scope for policy intervention to improve the supply and use of productive inputs for more intensive rice production. Further investment in the rice breeding and seed production centres may be needed to develop suitable varieties for the range of rice environments encountered by farmers and to improve the quality of seeds supplied. Intervening to control or subsidize the price of fertilizer is unlikely to be effective. The government could, however, simplify the import process, helping reduce farmers’ costs; increase the capacity to monitor and enforce fertilizer quality standards; and provide more site- and variety-specific information to farmers regarding optimal fertilizer use.
primarily for commercial purposes, two (Buekthong and Dondaeng) produced rice primarily for family consumption but regularly sold a surplus, and two (Khamsida and Khaokad) produced rice only for self-sufficiency. A preliminary survey was conducted to determine the broad picture of the seed and fertilizer supply chains in the six villages and to identify the key actors in each chain. This provided the basis for selecting interviewees in the second visit in March 2012. The types and numbers of interviewees are listed in Table 8.1. The farmers were selected randomly from the list of farmers in each village, including farmers who were members of a seed production group or involved in the government's Rice Production Improvement Project (RPIP).
The Fertilizer Supply Chain
As shown in Chap. 7, most rice farmers (85%) in the Savannakhet Plain used chemical fertilizers for the wet-season (WS) crop and all used chemical fertilizers for the dry-season (DS) crop. The commonly used fertilizers were urea (46-00-00), ammonium phosphate (16-20-00), and compound fertilizers such as 16-08-08 (16%) and 15-15-15 (4%). The most common fertilizer brands used by Savannakhet farmers were Ox Brand from the Thai Central Chemical Public Company Limited, Rabbit Brand from the Chia Tai Company Limited, and Football Brand from an unidentified company in Vietnam. These were the brands with higher quality and price. Most farmers used fertilizers based on their financial capacity; only a few based their usage on technical requirements. Some farmers could not afford to apply fertilizers due to the high and fluctuating price. In many cases, the cost of applying additional fertilizers outweighed the additional return (see Chap. 10). In addition to applying chemical fertilizers, farmers in Savannakhet still applied animal manure to their rice fields before land preparation. The animal manure was sought from within the family and the village. Farmers applied as much as they could find as the number of animals had decreased and manure was increasingly scarce.
Actors in the Supply Chain
The fertilizer supply chain for Savannakhet Province is illustrated in Fig. 8.1. The fertilizers used by farmers in the province were mainly sourced from Thailand, Vietnam, and Taiwan. Most imports occurred through the border checkpoints at Savannakhet-Mukdahan (Thailand) and Dansavanh-Lao Bao (Vietnam), at either end of National Route 9 which traversed the province (see Fig. 5.1 in Chap. 5). The major types of supplier are discussed in turn.
(a) Individual agents. The individual agents shown in Fig. 8.1 sold a concentrated liquid fertilizer from the Lifestyles Company in Thailand. The concentrated fertilizer was to be mixed with water and sprayed onto the rice leaves every seven days. Farmers who had used this fertilizer said that rice production had improved as a result, though the response was slow compared to chemical fertilizers. The fertilizers came in a set of two bottles, each costing THB 580 (around LAK 150,000). These individual agents also supplied chemical fertilizers, buying up stocks and storing them in their houses. The types of fertilizers supplied in this way were 15-15-15, 46-00-00, 16-20-00, and 16-8-8. Farmers could purchase directly from the individual. Payment could be made in cash or the fertilizer could be taken on credit. There was no interest charged and farmers could simply repay the credit after harvest. (b) Import companies and fertilizer shops. These were not solely for selling fertilizers; their main activity was selling construction materials. However, they would have a corner of the shop devoted to fertilizers during the production season, mostly imported from Thailand. The same four types of fertilizers were sold: 46-0-0, 15-15-15, 16-20-0, and 16-8-8. These distributors used to provide credit to farmers but, due to the low rate of repayment, only cash sales were now made. The import companies usually imported fertilizers directly, whereas the shops were supplied by mobile vendors who visited from time to time. These vendors could not be traced in the study and it was unclear how they imported their fertilizers. (c) Rice millers. There were three rice millers supplying fertilizers to farmers in the study villages. The same four types were provided. The fertilizers were bought from a fertilizer shop in Kilometre-35 Village and some were imported from Salavan Province. Both cash and credit sales were made available to farmers. For credit sales, the miller would make a contract with the farmers which stated the total amount to be repaid, the due date, the form of repayment (cash or rice; if the latter, the quantity was calculated based on the current rice price at the time of drawing up the contract), and the interest rate (typically 1.0-2.5% per month). (d) Vietnamese traders. These traders played a significant role in the fertilizer supply chain. Though they did not come to Savannakhet intending to sell fertilizers, in 2008 they saw the potential for supplying fertilizers to farmers in Champhone District. They imported around 30-40 tons per year from Vietnam, all in the wet season around May, June, and July. It was unclear how they brought in the fertilizers. Farmers said that at the beginning of the production season the traders came to the village with a load of fertilizers in their truck and the farmers were free to select whatever fertilizer they wanted. They brought in three main types: 46-0-0, 15-15-15, and 16-20-0. Although the price of the Vietnamese fertilizers was cheaper than fertilizers from Thailand, farmers claimed they had to use almost twice as much Vietnamese fertilizers to obtain the same yield as with Thai fertilizers. The Vietnamese traders supplied fertilizers to farmers in the village on credit; once farmers had cash or after the harvest was completed, the traders would come back to the village to collect the money. Due to the generally high price of fertilizers, this form of credit was popular with farmers who were short of working capital.
Despite the poorer quality of the fertilizers, farmers were attracted by the availability of credit and the saving on the time and cost of purchasing fertilizers in town. (e) Thasano Seed Production Centre. The seed production centre at Thasano, under the Ministry of Agriculture and Forestry, was located on National Road 13 just west of Champhone District. The Centre worked with village heads to organize seed production groups of 20-30 farm households to which it provided fertilizers. These groups were established because the demand for improved seeds was exceeding the Centre's own production capacity. Participating farmers had to agree with the Centre and village head to comply with the seed production techniques and standards provided by the Centre. Once a farmer group was formed, a contract was developed between the group and the Centre. The contract stated clearly the seed production techniques or standards that the farmers had to follow, that the output had to be sold to the Centre, and that the Centre would not purchase seeds from farmers who did not follow the specified procedures. The Centre supplied two types of fertilizers to the seed producing groups-chemical and bio-fertilizers. The chemical fertilizers included 46:00:00, 16:20:00, and 15.15.15. The Centre ordered these fertilizers as required from Siam Machinery Intertrade Company Limited, based in Thailand, with importation through the Dansavanh-Lao Bao border crossing. However, the Centre had received exemption from import duty because of its public role.
The bio-fertilizers were supplied by Rfarm Company, with its head office in the capital and its factory located in Hin Hurb District, Vientiane Province. The Centre understood that the company imported fertilizers directly from Taiwan. However, further enquiries revealed that the company imports materials from Taiwan and then processes, repacks, and distributes the product in Laos. (f) Rice Production Improvement Project. This project aimed to improve rice productivity and supply good rice varieties to farmers who lacked access to high-quality seed. The village head and the project coordinator collaborated closely in organizing farmer groups of 20 farmers each. The participants had to have at least 0.5 ha of paddy land, be hard-working farmers, and belong to a minority group that had less access to fertilizers and seeds. Once the farmer group was organized and the group committee assigned, the project supplied them with fertilizers of two types, 15:15:15 and 46:00:00. The project imported fertilizers directly from Vietnam and stamped a Lao logo on the bags before supplying the farmers. One group of farmers received 50 bags of fertilizers: 30 bags of 15:15:15 and 20 bags of 46:00:00. There was neither any charge to the farmer group nor any requirement to repay the cost at the end of the season. The fertilizer was only made available to farmer groups; farmers who were not members could not benefit from this line of supply. Table 8.2 shows the estimated annual volume of fertilizers imported by each of the above actors. The Thasano Seed Production Centre imported a large quantity but it was a single supplier and mainly supplied its own seed producers. The import companies and input supply shops were more numerous and handled about 6000 tons each, hence these were the main suppliers. The Vietnamese traders and rice millers each handled a smaller quantity, but the traders were important suppliers in some villages. About two-thirds of the farmers surveyed obtained their fertilizers from shops and import companies (Fig. 8.2). The reason given was that these suppliers had lower prices. This was confirmed in interviews with the different suppliers. Exemptions from import duty helped to lower the price. About a third of farmers purchased fertilizers from individual agents in the village and/or visiting Vietnamese traders. Millers, the seed production centre, and the Rice Production Improvement Project each supplied only a small percentage of farmers.
Fertilizer Transactions
There were two forms of payment for fertilizers: cash and credit (Table 8.3). The vast majority of farmers producing rice primarily for commercial purposes paid in cash because the price was lower than under the credit system. In the latter case, the price incorporated an implicit interest rate that varied between suppliers. Farmers using credit (34%) were those with limited capital during the production season, hence they had no choice but to pay the interest premium. These farmers stated that if they had available cash at the time of purchasing production inputs they would prefer to pay up-front in cash. Among the suppliers, the village agents and Vietnamese traders were the most willing to supply fertilizers on credit. These traders were flexible and willing to negotiate the time of payment and to receive whatever amount the farmer could pay. The cash and credit prices for selected types of fertilizers are shown in Table 8.4. One point to note is that urea from Thailand was up to two-thirds more expensive than urea from Vietnam, presumably reflecting the quality differences reported by farmers. The table also shows variation in the implicit interest charge incorporated in the credit price. For Thai brand urea, supplied by shops and import companies, the premium was 20%, and for Thai brand compound fertilizer (15:15:15) it was 25%. Assuming six months until payment, the annualized interest rate was 40-50%. However, for Vietnamese urea supplied by Vietnamese traders, the premium was 40-65%, representing an annualized interest rate of 80-130%. This higher rate probably reflected the greater flexibility of the Vietnamese traders in the payment time and amount.
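To make the annualization explicit, here is a minimal sketch of the calculation. The six-month repayment window and the premium ranges are taken from the survey data above; the simple pro-rata annualization is an assumption that matches the chapter's own arithmetic (a 20% premium repaid after six months corresponds to roughly 40% p.a.).

```python
def annualized_rate(premium: float, months_to_payment: float = 6.0) -> float:
    """Annualize an implicit credit premium on a simple pro-rata basis."""
    return premium * (12.0 / months_to_payment)

# Implicit premiums over the cash price, as reported in Table 8.4
premiums = {
    "Thai urea (shops/import companies)": [0.20],
    "Thai 15:15:15 (shops/import companies)": [0.25],
    "Vietnamese urea (Vietnamese traders)": [0.40, 0.65],
}

for supplier, premium_range in premiums.items():
    rates = "-".join(f"{annualized_rate(p):.0%}" for p in premium_range)
    print(f"{supplier}: {rates} p.a.")
# -> 40%, 50%, and 80%-130% p.a., matching the annualized rates in the text
```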
Data were obtained on the margins between the purchasing and selling prices of different types of fertilizers for different suppliers (Table 8.5). The traders had the highest margins, up to 25% for urea, but this included the cost of delivery to the village. Millers also had high margins of around 20%. The shops and import companies had relatively low margins of 2-4%. The import companies purchased fertilizers 10-20% more cheaply than the other suppliers and could also distribute at a lower price. Moreover, although the purchasing price of the three main fertilizers was different, this distributor sold them at the same price (LAK 4200 per kg), which was up to 30% cheaper than other suppliers. When asked why the selling price was uniformly low, the distributor merely remarked that the price was sufficient to compensate for the purchase price.
Constraints and Problems
The survey highlighted some problems with the fertilizer supply chain, both for farmers and importers/distributors. The problems reported by farmers related to quality, capital, and price. The problems reported by suppliers related to documentation processes and bad debts.
Many farmers complained about the poor quality of the fertilizers they purchased. They said that the rice was slow to respond to some types of fertilizers. This was especially the case for Vietnamese fertilizers, while the Thai fertilizer was perceived to be of high quality. Some farmers mixed fertilizers from Vietnam with fertilizers from Thailand, which they said gave a better response.

Access to financial resources was important to enable farmers to have fertilizers when they needed them most. The only financial support available was from the Agricultural Promotion Bank, but to qualify for a loan farmers had to form a group of 10-20 members and submit a production proposal to the Bank. Each member had to guarantee that every other member would repay their loan or else the group would be liable. Many farmer groups had failed to repay their loans. The only alternative for capital-scarce farmers was to take fertilizers on credit from Vietnamese traders and individual agents at much higher implicit interest rates, as shown above. Compounding these problems, the price of fertilizers was continually increasing, making it harder to purchase high-quality fertilizers and reducing the incentive to apply optimal amounts.

Many importers complained about the documentation procedure for importing fertilizers. The complexity of this process led to higher import costs, pushing up the retail price encountered by farmers. Suppliers also commented on the low rate of repayment for fertilizer credit. Many had stopped providing credit as a result. Only the Vietnamese traders and some individual agents still provided credit, with very flexible repayment times and instalments. However, they offset their risks with higher marketing margins and very high implicit interest rates.
Origin and Uptake of Improved Varieties
Though improved rice varieties had been introduced to Laos since 1960, in 1990 about 95% of the lowland WS rice crop was still based on traditional varieties (Inthapanya et al. 2006). From the 1990s, rice breeding and seed production stations were established and a succession of high-yielding glutinous varieties were selected and disseminated to the major rice-growing areas in the Mekong Valley, with rapid uptake by farmers (see Chap. 6). At the time of this study, there were four active seed production centres breeding and supplying improved varieties of rice throughout the country: three in Vientiane Capital (Napok, Phonengam, and Dondaeng) and one in Savannakhet (the Thasano Seed Production Centre mentioned earlier).
These centres produced a wide range of varieties with different attributes. Thasano produced ten varieties and about 50 tons of rice seeds per year. Farmers in Savannakhet also used varieties from Vientiane, including the Thadokham (TDK) and Phonengam (PNG) varieties, and Homsavanh, a variety from the Provincial Agriculture and Forestry Office (PAFO). Farmers also selected and conserved their own seeds.
In the survey, most farmers (86%) reported using an improved variety in the WS, with 32% using TDK10, a relatively recent release, and 15% using PNG3, a high-yielding, drought-tolerant variety released in 2005. All farmers used only improved varieties in the DS, including TDK10 (22%), TDK5 (19%), TDK8 (13%), and PNG3 (8%). It was noteworthy that the use of TDK5, a short-duration variety, increased in the DS.
Actors in the Supply Chain
The structure of the seed supply chain is shown in Fig. 8.3. The principal actors were the seed production centres shown on the left, multiplying up the first and second rounds (R1 and R2) of the certified seed; the seed production groups, producing R2 and R3 seeds; the PAFO and the District Agriculture and Forestry Office (DAFO) that distributed the seed, along with millers and the Rice Production Improvement Project; and the farmers, who purchased the seed, selected and retained the seed for their own use, and exchanged the seed with other farmers within and outside their village.
(Fig. 8.3 The seed supply chain in Savannakhet. Note a: Thasano Centre and PAFO/DAFO buy back seeds from the seed production groups at 10% above the market price for paddy rice.)
(a) Seed production centres. These centres produced both R1 and R2 seeds. However, due to the increasing demand for seeds of improved varieties, Thasano was working with groups of farmers to multiply seeds. A farmer group consisted of around 20 farmers who made an agreement with the Centre to produce and supply seeds. Centre staff visited the farmers' fields from time to time to ensure that they were meeting the required standards. If the farmers' seed production met the standards for certification, the Centre or the PAFO bought the seed at a premium price; otherwise farmers had to sell it in the market as normal eating rice. (b) Seed production groups. The farmer seed production groups were organized under the supervision of the Thasano Centre and the Rice Production Improvement Project, run by the PAFO. These farmers produced either R2 or R3 seed to supply the Centre and the project. Thasano and the PAFO paid a 10% premium for seeds produced according to the requirements for certification. The farmer groups were also allowed to sell seeds directly to farmers. (c) Rice Production Improvement Project (RPIP), PAFO, and DAFO.
The RPIP was being implemented by the PAFO and DAFOs with financial support from the World Bank. One of its goals was to help poor and minority farmers to get access to good-quality seed. The project was working with 33 farmer groups and 615 households, including eight groups in Champhone District. The project worked with the village head to organize the farmer group. One bag of R2 seed was provided free to each household. The PAFO/DAFO then bought the seed produced by the farmer group at a 10% premium. This seed still needed further purification before being sold commercially. (d) Rice millers. Rice millers supplied seeds to farmers in the same way as they supplied fertilizers. The millers bought seeds from the Thasano Seed Production Centre and sold them to farmers at LAK 2000 per kg. The farmers could pay for the seeds immediately in cash or after harvest in cash or rice (calculated from the current rice price). (e) Farmers. Farmers sourced seeds from many different distributors depending on their circumstances, including the seed production centre (18%), the RPIP (20%), and seed production groups (18%) (Table 8.6). However, the most common source was other farmers within the village (41%). Farmers reported that they observed each other's rice fields, and if someone had a variety that provided higher yield and better quality, they would exchange seed with that farmer. Observing neighbouring rice fields was a simple technique preferred by farmers to find a suitable new variety because, in their experience, if the variety performed well in the neighbour's field it was likely to be well adapted to their own field (Fig. 8.4). This cultural practice was long-standing and occurred throughout Laos, providing the basis for technical change. The village communities were relatively small and homogeneous, so that everyone knew each other; hence it was easier for farmers to observe fields and exchange rice varieties with their neighbours than to search for improved varieties independently. By this means, the improved varieties developed from the 1990s have spread rapidly in lowland areas.
The price paid by farmers for seeds from different sources is compared with the estimated cost of seed production in Table 8.7. The seed production centres had a higher cost of production and a higher selling price (LAK 5000-6000 per kg). The higher price reflected both their higher costs and significantly higher margins (66-100%). The farmer groups produced seeds at a lower cost and sold to other farmers at a lower price, with a margin of 25-50%. However, their margin in selling back to the government agencies was only 25% or less. The millers also sold seeds more cheaply and with a lower margin.
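The margin figures quoted here behave as a simple mark-up over production cost. A minimal check of that reading follows; the LAK 3000 per kg production cost is back-calculated purely for illustration and is not a figure taken from Table 8.7.

```python
def markup(cost: float, price: float) -> float:
    """Marketing margin read as a mark-up over production cost."""
    return (price - cost) / cost

# Illustrative back-calculation: a centre selling at LAK 5000-6000 per kg
# with a 66-100% margin implies a production cost near LAK 3000 per kg.
assumed_cost = 3000.0
for price in (5000.0, 6000.0):
    print(f"LAK {price:.0f}/kg -> margin {markup(assumed_cost, price):.0%}")
# -> 67% and 100%, close to the 66-100% range quoted for the centres
```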
Problems and Constraints
Several problems were identified in the course of the survey. Farmers reported that the seed they bought from the main suppliers was mostly impure. This suggests that the seed production process was not properly monitored and so there was still a problem of mixing seeds of different varieties. This was compounded by the absence of a proper seed certification system to provide information on whether the seed the farmer bought was in compliance with seed production standards. Such a seed certification system would solve the problem of impure seeds and farmers would have more confidence in the quality of the seed they purchased. Farmers also felt that there was a lack of varieties for specific soil and climatic conditions (e.g., infertile sandy soils, drought, and flooding). They wanted seed that was clearly labelled regarding its suitability to specific environments (e.g., flood-tolerant). However, the four seed production centres had not yet released such site-specific varieties, focusing rather on varieties that would do reasonably well in a range of environments. Farmers also reported that the available varieties were not resistant to pests and diseases, restricting productivity in some areas.
Conclusion
Farmers in Champhone District had mostly adopted the seed-fertilizer technology that formed the basis of increased yields and productivity in Asian rice farming (Chaps. 1 and 6). Mechanization of land preparation through the use of hand tractors was also widespread. Many had also intensified their cropping system, using irrigation to cultivate a DS crop as well as the traditional rainfed WS crop. However, the productivity and profitability of rice farming remained low. Constraints to the supply of seeds and fertilizers can explain part of this dilemma.
Farmers used mostly improved varieties for the WS crop and entirely so for the DS crop. These were mostly glutinous varieties, incorporating introductions from the International Rice Research Institute (IRRI) and Thailand with Lao genetic material, to produce higher yields in a range of adverse environments. They had been progressively released since the 1990s and were rapidly adopted and disseminated. Just over a third of Champhone farmers (37%) sourced their seed from the formal public-sector supply chain, including seed production centres, PAFO, DAFO, and a government-implemented rice development project. The private sector played little role, apart from some millers who included seed in their advance of inputs to selected surplus-producing farmers. Most farmers (61%) obtained seed from other farmers, including 18% who bought from a seed production group, set up by the seed production centres to accelerate the multiplication of seeds, and 43% who exchanged seed with their neighbours, after observing the performance of different varieties in the field, and then selected and retained the seed for their subsequent use. In this way, they gained access to the improved varieties, though probably with some deterioration in seed quality and hence yield (Diaz et al. 1998). Indeed, the main problems identified concerned the lack of proper seed certification, the supply of impure seeds, the lack of varieties for specific soil and climatic conditions, and the lack of varieties with resistance to the prevalent pests and diseases.

Farmers also used various types of fertilizers in their rice production, including chemical fertilizers from Thailand and Vietnam, organic fertilizers, and animal manure. While the increasingly limited supply of animal manure was sourced from neighbours in the village, the manufactured fertilizers were sourced from a range of mainly private-sector distributors, including import companies, input supply shops, mobile traders, individual villagers acting as agents, and rice millers. In addition, the government seed production centre and the Rice Production Improvement Project supplied fertilizers to farmers participating in their activities. Most of these suppliers provided chemical fertilizers, including urea, ammonium phosphate, and compound nitrogen-phosphorus-potassium (NPK) fertilizers; only a few provided organic fertilizers. The most important suppliers were the import companies and shops, who preferred cash payment at the time of purchase. In contrast, the traders mainly supplied fertilizers in the village on credit, to be repaid soon after harvest, with an implicit interest charge of 50-100% p.a. incorporated in the price. Farmers with limited capital were more likely to use this credit system. The major problems identified in this fertilizer supply chain were the poor quality of the fertilizers (especially the Vietnamese product), the lack of financial resources to buy sufficient fertilizers, and the increasing price of fertilizers.
There is clearly scope for policy intervention to improve the supply and use of productive inputs for more intensive rice production. Further investment in the rice breeding and seed production centres may be needed to develop suitable varieties for the range of rice environments encountered by farmers and to improve the quality of the seed supplied. This needs to be accompanied by an official seed certification system to ensure farmers have access to high-quality seeds and information about varieties suited to their local situations. While the increasing price of fertilizers was clearly a constraint, marketing margins were quite low, implying a competitive in-country distribution system. Intervening to control or subsidize the price of fertilizers can be a costly and administratively cumbersome policy and is unlikely to be effective. The government could, however, take action to further simplify the import process (which would help reduce costs that are passed on to farmers), to increase the capacity to monitor and enforce fertilizer quality standards, and to provide more site- and variety-specific information to farmers regarding optimal fertilizer use.

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
"Agricultural and Food Sciences",
"Economics"
] |
Engineering Cheerful Robots: An Ethical Consideration
Socially interactive robots in a variety of forms and functions are quickly becoming part of everyday life and bring with them a host of applied ethical issues. This paper concerns meta-ethical implications at the interface among robotics, ethics, psychology, and the social sciences. While guidelines for the ethical design and use of robots are necessary and urgent, meeting this exigency opens up the issue of whose values and vision of the ideal society inform public policies. The paper is organized as a sequence of questions: Can robots be agents of cultural transmission? Is a cultural shift an issue for roboethics? Should roboethics be an instrument of (political) social engineering? How could biases of the technological imagination be avoided? Does technological determinism compromise the possibility of moral action? The answers to these questions are not straightforwardly affirmative or negative, but their contemplation leads to heeding C. Wright Mills' metaphor of the cheerful robot.
Introduction
We inhabit a world in which 'social' gadgets cheerfully interact with humans. This paper's title, however, alludes also to the metaphorical sense in which sociologist C. Wright Mills spoke of robots in the 1950s: 'We know of course that man can be turned into a robot ... But can he be made to want to become a cheerful and willing robot?' [1] (p. 171). His metaphor denotes individuals who passively accept their social position, content with their allotted niche, for they are incapable of questioning the normative order. 'The ultimate problem of freedom is the problem of the cheerful robot,' stated Mills, for this phenomenon implies that not everyone wishes to be free; and he considered the likelihood that the human mind 'might be deteriorating in quality and cultural level, and yet not many would notice it because of the overwhelming accumulation of technological gadgets' [1] (p. 175).
A characterization of life in the 1950s as an overwhelming accumulation of gadgets may bring a smile in the 2010s, but the issues raised by Mills remain pertinent, if not more urgent, in this era of unprecedented acceleration of new technologies that are transforming not only our lifestyles but also our self-understanding and possibly human nature itself. We are told that a technological singularity, when as a species we will either transcend our biology (to paraphrase Kurzweil [2]) or become extinct, is imminent. Across academia, scholars engaged with discourses of posthumanism and transhumanism comment on how 'the posthuman view configures human being so that it can be seamlessly articulated with intelligent machines' [3] (p. 3). Meanwhile, the new technologies pose new social, political and ethical challenges. Announcing the birth of roboethics in 2005, Veruggio provoked his audience to consider whether ethical issues with respect to robots should remain a matter for stakeholders' own consciences or be construed as 'a social problem to be addressed at institutional level' [4] (p. 2). In a follow-up paper [5], averring that soon 'humanity will coexist with the first alien intelligence we have ever come into contact with-robots' (p. 5), Veruggio articulated a roadmap for roboethics with the caveat that its target is 'not the robot and its artificial ethics, but the human ethics of the robots' designers, manufacturers and users' (p. 7). Since 2005, the march of robots into our midst has been increasingly recognized as a social problem to be addressed at the institutional level.
This opens up the axiological issue of whose values and vision of the ideal society inform public policies. The empirical question can be answered by observing the increasing dominance of technology-led positions (but should this vision determine ethics?). The rise of 'robot culture' is a phenomenon of social scientific interest, but should this phenomenon, or some aspects of it, be construed as deserving an ethical consideration? The answer is not straightforwardly affirmative or negative, and this paper is not aimed at arriving at a categorical answer. The following is organized as a sequence of questions that signpost a few salient issues that emerge at the interface among robotics, ethics, psychology, and the social sciences.
Can Robots Be Agents of Cultural Transmission?
The concept of cultural transmission originated in sociobiology, in which context it is distinguished from genetic transmission of traits. In humans, it denotes socialization and enculturation processes whereby beliefs, values, and norms of conduct are transmitted across and within generations [6]. At the level of interpersonal interactions, especially within the family, cultural transmission occurs when adults impart their own values, beliefs, and attitudes to children ('direct vertical' transmission). Cultural transmission occurs also within the peer group ('direct horizontal'). The process operates at the societal level without direct interpersonal interaction ('oblique' transmission); for instance, when mass media and popular culture induce imitation and learning.
While cultural transmission is a universal process, the mechanisms and contents involved in the process are not necessarily universal, since childrearing practices and normative expectations vary across cultures. Such differences can already be seen in infancy. In a cross-cultural study that investigated infant behavioral inhibition, Australian, Canadian, Chinese, Italian, and South Korean toddlers were presented with a toy robot that moved, made noises, and emitted smoke [7]. Toddlers from Western cultures (especially Italian and Australian) were quicker to touch the robot than their counterparts from Eastern cultures, with Chinese and South Korean toddlers being the shiest (many of whom did not touch the robot). Towards an explanation, the researchers speculated that Asian parents tend to reward cautious and reserved behavior in their children.
The significance of the robot in [7] was the fact that it was an unfamiliar toy introduced by a stranger; i.e., not necessarily its appearance as a robot. Real robots increasingly enter environments of child development in a variety of forms and functions. Examples of direct vertical transmission can be glimpsed in reports from a longitudinal study at the University of California San Diego, which has involved placing humanoid robots in a crèche. When QRIO (a bipedal robot created by Sony) was first introduced, some toddlers cried when it fell [8]. The investigators advised the teachers to tell the children not to worry since the robot could not be damaged, but the teachers, ignoring the advice, 'taught the children to be careful; otherwise, children could learn that it is acceptable to push each other down' [8] (p. 17956). Later on, children seldom cried when QRIO fell, but instead helped it to stand up. Separately, an ethnographic study of the same project described how a teacher seized the opportunity to foster the etiquette of saying 'Thank you' when a toddler spontaneously offered a toy to RUBI (a plump robot, clad in yellow cloth, with a head and arms, created for the project) [9]. There were likely opportunities also for horizontal transmission. Supplementary videos for [8] include a clip (movie 5) that shows QRIO suddenly falling over, children rushing over, and one boy persistently trying to raise the robot; other children observed, and might imitate, their peer's helping behavior (see also an analysis of the episode in [10], pp. 181-182).
In the above examples, the robot served as a fulcrum for human-human interactions within which cultural transmission took place, but it did not function as a socializing agent in its own right. Robot Tega exemplifies an effort to build a robot that could 'socialize' children into doing their homework [11,12]. Arguably, an advantage of educational robots is that, as an intelligent tutoring system, the robot can customize its tutoring to suit individuals' pace and style of learning (at least when it works smoothly; see [13] on breakdowns in child-robot interactions). The creators of Tega have gone a step further in taking into account the fact that emotional states can affect a child's motivation. Interacting with an enthusiastic cartoon-like robot can make learning fun, and encourage children to try harder. Tega was successfully tested with 3-5-year-old English-speaking children learning Spanish [11].
If something helps to improve learning, it makes pedagogic common sense to use it, but curricular learning (such as mastering a foreign language) should not be confused with socialization. Children's long-term exposure to robots could have unintended consequences. This concern is insinuated in the heading of the New Scientist report on Tega (a new platform designed by the Personal Robots Group at MIT Media Lab), 'Kids can pick up attitude from robots they play and learn with' [12]. The thread is followed in an MIT Technology Review article [14] raising concerns about what might happen when robots become role models for children. In a convergent vein, a blog article [15] claims that 'parents are worried that Amazon Echo is conditioning their kids to be rude'. At present, only a minority of children experience interactions with robots such as Tega, but 'smart' gadgets are increasingly part of the home environment. Unlike educationally assistive robots, gadgets such as Amazon's Echo do not require the child to learn new skills. The gadget is 'child-friendly' only because of the impoverishment of the interaction. The functional reduction of human dialogue does away with courtesies such as saying 'please', and rewards a brusque interaction style, an outcome that could frustrate parents trying to instill good manners in their children [15].
Currently, any evidence for that effect is at best anecdotal. Nevertheless, this speculative instance evinces a theoretical distinction between cultural transmission of behavioral norms (e.g., parents teaching their children not to be rude) and a change at the societal level, such as a cultural shift in what people consider as rudeness. For better or worse, new affordances are created as gadgets become both more sophisticated and affordable. In contrast with the worries expressed in [15], a leading headteacher in Britain has recently suggested that Alexa or Siri-type virtual assistants could help timid children become more confident in lessons: 'Children can be reluctant to put their hands up and answer questions in class, especially if they think they might be ridiculed. That won't come from a machine.' [16]. It could be argued that helping timid children overcome their shyness in the classroom could give them a better foundation for life than providing them with technological crutches.
The specific ethical issue arising at this juncture pertains to ameliorative responsibility; that is, 'an obligation to improve a situation, no matter whether one is causally responsible for it' [17] (p. 110). People may agree about this obligation in principle, but opinions are polarized as to whether using robots will improve or worsen given situations. In general, the answer to whether robots can be agents of cultural transmission is affirmative, but we cannot assume that any direct transmission by means of robots would have the intended effect (or only that specific effect) on developmental and learning outcomes. Furthermore, as can be observed in the case of migrant families, the transmission of values from parents to offspring might be less effective in the host country insofar as children might be reluctant to accept the parents' tradition whilst parents may hesitate to impose attitudes that might be nonadaptive in the new environment [18]. A similar 'generation gap' might exist between adults and children or youth, as digital migrants and digital natives respectively (cf. [19]), with the qualification that (unlike migrants to an existing society) the digital world is rapidly evolving ahead of all of us, old and young.
Is a Cultural Shift an Issue for Roboethics?
Describing cultural shifts in highly industrialized societies in the 1980s, Inglehart proposed that a change in values is mostly an automatic consequence of increased prosperity [20]. He urged attention to 'substantial and enduring cross-cultural differences in certain basic attitudes and habits,' differences that are stable but not immutable, and are susceptible to gradual changes that are traceable to specific causes [20] (p. 22). He further commented that changes due to industrialization may interact differently with religion, as a political factor, in the Confucian-influenced Far East, the Islamic world, and Catholic countries. Similar assertions could be extended to the technologized societies of the 2010s.
The existence of cross-cultural differences in attitudes to robots is well documented. For example, a 2012 Eurobarometer survey [21] in 24 European countries revealed considerable cross-national differences, notably in public objections to using robots in the care of children, the elderly and the disabled; negative attitudes were strongest in Cyprus (85%) and weakest in Portugal (35%). A 2016 survey of attitudes to robots in healthcare [22] in 12 countries across Europe, the Middle East and Africa found that the British sample on the whole was least receptive to the idea of healthcare robots. However, 55% of 18- to 24-year-old Britons were receptive to the idea, in contrast with only 33% of older Britons. As technological realities change, attitudes to robots change across generations. Whereas in the early 1980s an Arab journalist reportedly described the creation of androids as a travesty against Allah (cited in [23]), in October 2017 Saudi Arabia granted citizenship to a female-looking robot [24]. This gesture might well be a publicity stunt, but it nonetheless indicates the possibility of shifts in the acceptance of humanlike artefacts among Muslims.
At the level of the individual person, cultural shifts translate into developmental outcomes through an interplay of proximal and distal processes. Bronfenbrenner's bioecological paradigm [25] and his earlier ecological systems model [26] describe human development as happening within hierarchically nested systems. Proximal processes are the 'progressively more complex reciprocal interaction' of a child with the people, objects, and symbolic resources that constitute the child's immediate environment (the microsystem), an interaction that 'must occur on a fairly regular basis over extended periods of time' in order to be effective [25] (p. 620). By implication, robots can play a role in proximal processes only when they enter the child's world on a regular basis [27]. Furthermore, children's everyday contact with robots is likely to occur within family and school settings already replete with hi-tech, settings that reflect adults' beliefs about the technologies they make available to the child. Adults' beliefs are formed against the backdrop of the particular society's characteristic belief systems, resources, hazards, life styles, life-course options, patterns of social interchange, and so forth (the macrosystem). Bronfenbrenner's model thus posits distal processes that impact, top-down, proximal processes. This treats 'culture' as if it were operating externally to everyday activities within microsystems. Recent revisions (e.g., [28]) tend to integrate Bronfenbrenner's bioecological paradigm with Vygotskian and neo-Vygotskian approaches, sometimes under the label 'ecocultural'. Endorsing Bronfenbrenner's view of the human being as 'a growing, dynamic entity that progressively moves into and restructures the milieu in which it resides' [26] (p. 21), bioecological and ecocultural models generally describe processes that shape the person one becomes.
A technology-related cultural shift may manifest in a variety of ways. For instance, by age 4, most children categorize prototypical living and non-living kinds, and typically assign robots to the inanimate category; but findings that children tend to attribute aliveness to robot pets with which they interact may indicate the emergence of a new ontological category that disrupts current animate/inanimate distinctions [29][30][31][32]. Commentators, in turn, assess the desirability (or otherwise) of the inevitable consequences of a technologized social reality. In this vein, Turkle opines that disembodied interpersonal interactions through social media, mobile phones and the internet have led to the emergence of a new state of selfhood (human subjects wired into social existence through technology) at the cost of youth's capacity for authentic relationships [33]. As a consequence, society has arrived at a 'robotic moment', a situation marked by readiness to accept robots as relationship partners, according to Turkle.
If a cultural shift is inevitable, ethical appraisals may at best provide pragmatic agendas for minimizing risks. However, even modest agendas of limited application are imbued with their authors' notions of the kind of society we want to live in, and are underpinned by the belief that it is possible to influence the direction of societal change.
Should Roboethics Be an Instrument of (Political) Social Engineering?
The term 'social engineering' has two meanings. Recently it has entered the field of computer and information security as an umbrella term for a variety of techniques that are used to manipulate people into divulging confidential information (e.g., deception by phone) or compromise people's security and privacy in cyberspace (e.g., phishing emails) [34][35][36]. Social engineering in this sense is clearly relevant here, since robots can be hacked for criminal or malicious purposes. The question raised in this section, however, refers to the older and more general sense of the term. As used chiefly in political science and sociology, social engineering denotes any planned attempt by governing bodies to manage social change and in this way to regulate the future of a society.
The first occurrence of the analogy between engineers and policymakers is traceable to an 1842 book by the British socialist economist John Gray [37]. Gray contrasted a situation in which a steam engine malfunctions with the situation in which some social or economic problem requires remedy. If several engineers were separately to examine the malfunctioning steam engine, they would likely arrive at similar conclusions about the problem and how to fix it; but in the political arena there is little agreement among separate committees regarding the nature of the problem, its cause and remedy: 'the political and social engineers of the present day ... seem to agree in nothing, except that evils do exist' [37] (p. 117). A similar observation could be made about the present-day proliferation of advisory bodies and initiatives that produce guidelines for ethical design and use of artificial intelligence (AI) and robots.
At the close of the nineteenth century, the metaphor acquired positive connotations of public service, defining social engineers as specialists appointed to handle problems of human or social nature. For instance, the American Christian sociologist Edwin Earp introduced his 1911 book (titled The Social Engineer) with the claim, 'Social engineering means not merely charities and philanthropies that care for victims of vice and poverty, but also intelligent organized effort to eliminate the causes that make these philanthropies necessary' [38] (p. xv). He further defined social engineering as 'the art of making social machinery move with the least friction and with the best result in work done' [38] (p. 33). Throughout the twentieth century, the usage of the term became associated with centralized organizations that deploy preventative and ameliorative measures towards fixing society's ills.
Extrapolating the above usage to the field of roboethics, the would-be social engineers are experts in a variety of fields who may be called upon to identify risks and plan ways to minimize these. Individuals may contribute through membership in organizational sections; e.g., the Institute of Electrical and Electronics Engineers' (IEEE) Global Initiative on Ethics of Autonomous and Intelligent Systems. They may participate in workshops that could inform policymaking. For example, the principles of ethical design and use of robots outlined in [39] originated in a 2010 workshop, and subsequently were incorporated into the British Standards Institution's 'Guide to the Ethical Design and Application of Robots and Robotic Systems' published in 2016 [40]. The spirit of the social engineer is implicit in the mission statement of the Foundation for Responsible Robotics, a Netherlands-based initiative with an international cast of academics. The Foundation's mission, as its website states, is 'to shape a future of responsible robotics design, development, use, regulation, and implementation' [41].
A modicum of utopianism is perhaps inevitable in any ambition to better the future of society. In accordance with Karl Popper's [42] distinction between utopian and 'piecemeal' social engineering, however, initiatives such as the aforementioned may fall under the rubric of piecemeal. In Volume I of his political science book, first published in 1945, Popper regarded the piecemeal approach as preferable to the utopian, for this approach tackles problems as they arise, seeking 'a reasonable method of improving the lot of man,' a method that can be readily applied and 'has so far been really successful, at any time, and in any place' [43] (p. 148). However, his recommendation to rely on tried-and-tested methods might be difficult to implement in a world that is itself rapidly changing due to technological advances. This challenge is insinuated in a rider to the mission statement of the Foundation for Responsible Robotics: 'We see both the definition of responsible robotics and the means of achieving it as ongoing tasks that will evolve alongside the technology' [41]. Viewed pessimistically, the possibility of pre-empting irresponsible robotics might become moot if technological innovations constantly change the terrain at a pace and in ways that are difficult to anticipate.
A case in point is cybersecurity. Technological innovations create new affordances for social engineering in the term's negative meaning; 'the social engineer is a skilled human manipulator who preys on human vulnerabilities' [36] (p. 115). This characterization could not be more diametrically opposed to Earp's, in whose view the 'social engineer is one who can help the religious leader to establish a desired working force in any field of need' [38] (p. xviii). As a response to a specific 'field of need', roboethics undertakes tasks of piecemeal social engineering by virtue of advising public policies. An affirmative answer to the question of whether roboethics should contribute to the engineering of a better society, however, presupposes a consensus about what constitutes a better society. The absence of consensus raises the question of whose vision of the ideal society is being served.
How Could Biases of the Technological Imagination Be Avoided?
Social issues have been recognized as among the 'problems' defining the engineering field for more than a decade. While social scientists typically investigate the impact of technologies on society and persons, roboticists tend to ask what needs to be done to make robots desirable for society and persons. The term 'technological imagination' paraphrases Mills' definition of the sociological imagination. The sociological imagination is a stance that construes social phenomena in terms of what these may reveal about the workings of a society [1], whereas the technological imagination is a stance predisposed towards construing social issues in terms of their implications for technology [10,43]. This is the engineering field's default stance, understandably, since making robots is its raison d'être. Furthermore, since it is in the manufacturers' interest to avoid marketing products that might make them liable to lawsuits, the industry may self-regulate in the long run. Pragmatically, ethical appraisals pivot on assessments of risks associated with technological innovations, and policy recommendations center on how these risks could be realistically minimized.
The focus on the technological artefact, although necessary, results in a kind of tunnel vision. For example, in an interview with the IEEE online newsletter [44], the vice president of the IEEE Society on Social Implications of Technology identified important ethical and legal concerns related to marketing home robots to families with young children, including information security, safety, and safeguarding: the gadget could be hacked, enabling strangers to watch the child; it might be used unscrupulously to sell products to children; a robot might accidentally hurt a child; and the robot might witness child abuse. Nevertheless, technology-driven ethical appraisals are not child-centered, and seldom take into consideration the possibility of detrimental effects on child development or the wider social context (e.g., the home or the school). In contrast, psychology-driven ethical appraisals such as those outlined by Amanda and Noel Sharkey [45,46] do highlight issues of emotional attachment, deception of the child, and loss of human contact (see also [33]). Apropos teachers' attitudes to robots in the classroom, research reported in [47] demonstrates the exigency of taking the consideration of ethics beyond design issues and toward engagement with stakeholders' views on how robots may affect their current practices. The point made here, however, concerns biases located on one side of a schism within the discourse of social robotics [10,43,48]. Representing the stance identified by the present author as the technological imagination [10,43], the writers of [48] maintain that the world is 'run by technological developments, and that robots are here for further enhancements and new applications' and are critical of the opposite stance, the 'society-driven side [which] opines that the world is driven and run by social aspects' (p. 107).
The technological imagination informs policies not only via a pragmatic 'damage limitation' approach to regulating uses of technological products, but also via a narrative of moral commitment to improving the quality of life by means of robots. In this vein, Movellan has stressed 'our responsibility to explore technologies that have a good chance to change the world in a positive manner' [49] (p. 239). The claim that robots will help children to become 'better people: stronger, smarter, happier, more sociable and more affective,' as he put it in an interview with Wired [50], insinuates that children who are denied robots (either because parents cannot afford the gadgets or conscientiously refrain from giving them to their children) will grow up worse people: weaker, duller, sadder, less sociable and less affective. The rhetoric thus places the onus on policymakers to allocate resources to the development and promotion of educational robots.
The benefits of socially assistive robots (SAR) should not be overlooked or understated. For example, there is robust evidence in support of robot interventions for promoting social skills among children with autism [51,52]. A potential pitfall of technology-led morality, in this specific instance, would be a naïve belief that providing non-autistic children with robot companions can enhance their social skills, a belief resting on a simplistic 'engineering logic':
• The social skills of autistic child A are impaired.
• Intervention using robot R raises A's skills to age-average level.
• The social skills of non-autistic child B are already at age-average level.
• Therefore, R will raise B's skills to above-average.
However, autistic children might respond better to robots than to people because of their symptomatic impairments (as noted in [51]). Non-autistic infants are innately attuned to human beings, and children ultimately prefer people to robots (see [27] for a related discussion). The engineering logic can be contrasted with a 'psychological logic'; namely, an approach that seeks to explain phenomena of human mind and behavior by reference to biopsychosocial factors impacting on the individual:
• The social skills of autistic child A are impaired. Explanation: deficits in the mirror neuron system (which facilitates imitation and empathy).
• Intervention using robot R raises A's skills to age-average level. Explanation: robots are less complex than people are.
• The social skills of non-autistic child B are already at age-average level. Explanation: an innate orientation to people and a personal history of social interactions.
• Therefore, since robots are less complex than people, R might be detrimental to B's further development.
Indeed, some psychologists investigating human-robot interaction (HRI) have expressed concerns that children might accept robotic companionship without fostering the moral responsibilities that human companionships entail [53]. Findings that children with greater involvement with technological artefacts were less likely to view living dogs as having a right to just treatment and to be free of harm may signal the possibility that human adaptation to interacting with robots will 'dilute the "I-thou" relationship of humans to other living beings' [54] (p. 231).
The 'quick' answer to the question of how to avoid biases of the technological imagination is to widen the pool of expertise so as to encompass a spectrum of dispositions to robots as well as knowledge. This is already done in at least some cases (advisory bodies tend to be multidisciplinary; robots for autism are developed in collaboration with clinicians). The potential dilution of the 'I-thou' relationship, however, signals a deeper, longer-term problem.
Does Technological Determinism Compromise the Possibility of Moral Action?
Identifying technological determinism as the dominant narrative in social robotics, Šabanović commented that, in this narrative, social problems are typically construed as something in need of technological 'fixes', and the users of robotic products are often treated 'as objects of study, rather than active subjects and participants in the construction of the future uses of robots' [55] (p. 440). This is not a peculiarity of robotics, for it reproduces the dominant mechanistic worldview of modern psychology [10]. The mechanistic worldview has made it possible to translate human qualities onto machines. As Rodney Brooks put it, 'Humans, after all, are machines made up of organic molecules whose interactions can all be aped (we think) by sufficiently powerful computers' [56] (p. 86).
There lingers the technological dream of 'the Universal Automaton ... the creation of the perfect citizen,' which could be augmented with an emphasis on 'the amount of diversity it is capable of handling' as a benchmark in the creation of truly social robots [57] (p. 86). The infamous case of Tay evinces some pitfalls of machine learning. Tay was a chatbot developed by Microsoft, targeting 18- to 24-year-olds in the USA [58]. It was launched via Twitter on 23 March 2016, but Microsoft removed it only 16 hours later because Tay had started to post inflammatory and offensive tweets, having quickly picked up antisemitism from social media. Microsoft attributed this to 'trolls' who attacked Tay, since the bot customized its replies to them by searching the internet for suitable source material [59]. From the standpoint of applied ethics, issues that immediately come to mind apropos this instance of technology-gone-awry include the exigency of regulating AIs by means of censorship, perhaps through installing a moral code in the machine. From the standpoint of metaethics, the case of Tay calls into question the nature of morality itself. In the present context, the moral of the story lies in the demonstration of an AI's capability of handling a diversity of information compounded with its incapability of locating its own self in a space of moral actions. Like Mills' cheerful robots (and unlike those trolls, whose mischief was deliberate), Tay lacked the freedom of thought to reason about what it was finding on the internet.
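The mechanism that undid Tay can be sketched schematically. The fragment below is purely illustrative: Microsoft has not published Tay's architecture, and the class, methods, and blocklist here are invented for the example. It shows only why a bot that learns its replies from raw user input needs a normative filter supplied from outside.

```python
# Hypothetical sketch of a bot that parrots what it is fed, with and
# without a content filter; every name here is illustrative.
from collections import Counter

class ParrotBot:
    def __init__(self, moderate: bool = False):
        self.phrases = Counter()        # learned candidate replies
        self.moderate = moderate
        self.blocklist = {"offensive"}  # stand-in for a normative standard

    def learn(self, message: str) -> None:
        # The bot ingests user input as future reply material.
        if self.moderate and any(w in message for w in self.blocklist):
            return  # an externally installed moral code, not the bot's own
        self.phrases[message] += 1

    def reply(self) -> str:
        # It repeats whatever it has seen most, with no capacity to
        # assess that content against any standard of its own.
        return self.phrases.most_common(1)[0][0] if self.phrases else "..."

naive, filtered = ParrotBot(), ParrotBot(moderate=True)
for msg in ["hello", "offensive slogan", "offensive slogan"]:
    naive.learn(msg)
    filtered.learn(msg)
print(naive.reply())     # -> "offensive slogan": the trolls' voice, amplified
print(filtered.reply())  # -> "hello": acceptable only because designers filtered
```

The `moderate` flag is precisely the 'moral code installed in the machine' mentioned above: it constrains output, but the bot still cannot locate itself in a space of moral actions, which is the point at issue here.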
The mechanistic worldview enables a functional reduction of the complexity of social interaction to algorithms enacted by a machine; in effect, as [57] put it, minimizing the 'human' in HRI. However, this squeezes out of the minimal 'human' the very quality that makes us human: the aspect of selfhood that Charles Taylor regarded as 'perennial in human life'; namely, the fact that 'a human being exists inescapably in a space of ethical questions; she cannot avoid assessing herself in relation to some standards' [60] (p. 58). It is not the possession of some standards, a moral code, but the capacity (and freedom) to dialogue with these standards, that constitutes the human subject as 'an articulate identity defined by its position in the space of dialogical action' [60] (p. 64). The existence of roboethics indeed attests to dialogical action at both individual and collective levels.
Conclusions
Above the silver lining of technological progress, there is a cloud of worries about privacy, human safety, using robots for crime, and more. Veruggio and co-authors provide a comprehensive list of global social and ethical problems that the introduction of intelligent machines into everyday life brings about: dual-use technology (having civilian and military applications); anthropomorphizing lifelike machines; cognitive and affective bonds toward machines; technology addiction; the digital divide across age groups, social class, and/or world regions; fair access to technological resources; effects of technology on the global distribution of wealth and power; and the impact on the environment [61] (p. 2143). The discourse is by default oriented towards matters of applied ethics that arise from existing technology, as well as matters arising in anticipation of futuristic robots (such as robot rights and robot personhood). The possibility that human-robot coexistence might result in the engineering of human subjects who, in Mills' words, will 'want to become a cheerful and willing robot' [1] (p. 171) is not usually flagged as an issue for roboethics. The focus remains on what technology can do for us and should not do to us; and yet this technology might be changing us, our human nature.
"Philosophy",
"Psychology",
"Computer Science",
"Sociology",
"Engineering"
] |
Technology Development and Infusion from NASA's Innovative Partnerships Program
NASA's Innovative Partnerships Program (IPP) develops many technologies for NASA's programs and projects through a portfolio of technology investments and partnerships. The investment portfolio includes Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR), the IPP Seed Fund, and NASA's Centennial Challenges prize program. In the process of technology development and infusion, the transition of technologies from laboratories or testbeds to their application in flight programs is often one of the most challenging steps. Newly developed technologies achieve full success when they are infused into programs and projects, although there are numerous obstacles to achieving infusion. This paper addresses the IPP portfolio for providing technology, the challenges and obstacles to technology infusion, and some of the methods currently being employed by NASA to help address those challenges and obstacles. The paper also presents some examples of IPP technologies infused into high-profile programs and projects and draws lessons learned and best practices from those successful examples.
INTRODUCTION
NASA's Innovative Partnerships Program (IPP) provides needed technology and capabilities for NASA's Mission Directorates, Programs, and Projects through investments and partnerships with Industry, Academia, and Government [1]. IPP consists of several program elements, as summarized in Figure 1. Together these program elements increase NASA's connection to emerging technologies in external communities, enable targeted positioning of NASA's technology portfolio in selected areas, and secure NASA's intellectual property to provide fair access and to support NASA's strategic goals. Technology transfer through dual-use partnerships and licensing also creates many important socio-economic benefits within the broader community [2].
During FY 2006, IPP facilitated many partnerships and agreements, including: over 200 partnerships with the private sector, Federal and state government, academia, and other entities for dual-use technology development and reimbursable use of NASA facilities; over 50 license agreements with private entities for commercial and quality-of-life applications of NASA-developed technology; reporting of more than 750 new technologies developed by NASA civil servants and contractors, and evaluation of those technologies for patent protection; and more than 400 agreements for commercial application of software developed by NASA.
The general process by which IPP develops and provides technology to meet the needs of NASA's Mission Directorates is shown in Figure 2. IPP investments are intended to complement other Mission Directorate and Field Center efforts, filling important gaps in NASA's technology portfolio. To understand both ongoing and planned technology investments within NASA and the technologies still needed, IPP pursues many avenues of communication. IPP has established the position of Chief Technologist, which focuses on agency-wide technology needs and infusion programs. The IPP Director and Mission Directorate (MD) Associate Administrators (AAs) meet on a quarterly basis to discuss Mission Directorate needs and how well they are being addressed by the IPP portfolio. With the restructuring of NASA's SBIR/STTR organization, there are now four dedicated Level III offices, each one assigned specifically to work closely with a Mission Directorate to understand its needs and ensure that SBIR/STTR projects are addressing those needs.
There is active MD participation in the conduct of all IPP technology activities, from key roles in the solicitation development and selection processes for the Seed Fund and SBIR/STTR, to determining future Centennial Challenges competitions and revising rules to best address technology needs. IPP has an office at every Field Center, working with local projects to understand their technology needs and communicate to them what IPP can provide. The Field Center offices each include an SBIR Technology Infusion Manager, and all offices provide local infusion, Seed Fund, and partnership support.
There are several sources of technology in the IPP portfolio that have potential for addressing the needs of the Mission Directorates.
These include SBIR/STTR, Centennial Challenges, and the IPP Seed Fund, which will each be addressed in the following sections of this paper. IPP strives to keep abreast of the changes in emphasis within the Agency's technology landscape as they occur. This helps IPP to be more responsive to the needs of the MDs and provide more value through the IPP technology portfolio. As an example, IPP recently aligned its SBIR and Seed Fund topics and sub-topics to reflect the newly reformulated Program and Project Goals of the Aeronautics Research Mission Directorate. This has led to recent IPP investments supporting advances in technologies related to alternative jet fuels and turbine blade superalloys for improved engine performance and reduced emissions.
There are many challenges associated with infusing technologies into programs and projects. IPP is working to implement best practices and is also developing some new projects to help address those challenges, as discussed in later sections of this paper.
SBIR/STTR
The purposes of the SBIR/STTR programs, as established by law, are to stimulate technological innovation in the private sector; to strengthen the role of small businesses in meeting Federal research and development needs; to increase the commercial application of these research results; and to encourage participation of socially and economically disadvantaged persons and women-owned small businesses [3].
Technological innovation is vital to the performance of the NASA mission and to the Nation's prosperity and security. To be eligible for selection, a proposal must present an innovation that meets the technology needs of NASA programs and projects and has significant potential for successful commercialization. In this context, commercialization encompasses the transition of technology into products and services for NASA mission programs, other Government agencies and non-Government markets.
The largest portion of IPP's technology portfolio comes from small businesses that are funded by NASA's SBIR/STTR programs. SBIR and STTR are competitive programs that provide technology to address NASA's needs. SBIR is for small businesses (fewer than 500 employees), and STTR requires that small businesses partner with a research institution (e.g., a university or Federal laboratory), with the objective of transferring research from the laboratory to the small business where it can be further developed and put to commercial use. Each year NASA awards several hundred contracts to small businesses and their partners, as summarized in Figure 3.
NASA considers every technology development investment dollar critical to the ultimate success of NASA's mission and strives to ensure that the research topic areas described in this solicitation are in alignment with its Mission Directorates' high-priority technology needs. In addition, the solicitation is structured such that SBIR/STTR investments are complementary to other NASA technology investments. NASA's ultimate objective is to achieve infusion of the technological innovations developed in the SBIR/STTR program into its programs and projects. Phase 1 awards for both SBIR and STTR are feasibility studies for $100k that last 6 months for SBIR and 12 months for STTR. Phase 2 awards are for technology development and last two years, with funding of up to $750k for both SBIR and STTR. SBIR/STTR investments can therefore support about three years of technology development, including Phase 1 and 2 funding. Often, technology development requires more time and much larger investments than can be made by SBIR funds alone. This is where infusion is so important, in that the SBIR portion of a technology development is one of the critical links in the overall chain of events necessary for developing a technology. There have been notable successes from this program, with technologies being infused into some of NASA's highest-profile missions and directly contributing to their success.
A few examples will be provided here to illustrate how SBIR technologies are making important contributions. The twin Mars Exploration Rovers, still amazingly conducting science long after their planned mission life, are using three specific SBIR-developed technologies, as shown in Figure 4.
Maxwell Technologies of San Diego, California, fabricated and tested an ASIC chip with single-event latch-up protection technology. Their innovation enables the use of commercial chip technology in space missions, providing higher performance at a lower cost. For the Mars rovers, the application was high-performance memory modules and analog-to-digital converters in the power systems and communications electronics. Yardney Technical Products of Pawtucket, Connecticut, developed lithium-ion batteries with specific energy density of >100 Wh/kg, volumetric energy density of 240 Wh/l, and long cycle life. Subsequently, they won a large Air Force/NASA contract to develop batteries for space applications, and supplied the lithium-ion battery packs for the rovers. Starsys Research of Boulder, Colorado, developed paraffin-based heat switches that function autonomously and are used to control the radiator for the electronics package on the rovers [5].
ASIC chip for memory modules and analog-to-digital converters.
Lithium-ion batteries for battery packs.
Heat switches to control radiator for electronics package.
SBIR technologies are also making important contributions to the next Mars rover mission, the Mars Science Laboratory (MSL). A small business in Campbell, California, developed a small-format carbon nanotube field emission cathode (CNTFE) X-ray tube for the CheMin instrument on MSL. While a tungsten cathode was ultimately baselined for the flight tube, the form, fit, and function of the flight tube were derived from this SBIR. InXitu, Inc., of Mountain View, California, developed a powder handling device for X-ray diffraction analysis based on piezoelectrically-induced sample motion, and a miniature X-ray tube having a grounded cathode configuration is being developed to enable a further two-fold reduction in the size of CheMin prototype instruments.
Wireless sensors developed with SBIR funding are now placed in the leading edge of the Space Shuttle's wings to detect impacts from objects during ascent, as shown in Figure 6. The Enhanced Wide-Band Micro-Miniature Tri-Axial Accelerometer Unit (EWB MicroTAU) system is a wireless, high-speed, synchronized data acquisition network for dynamic acceleration sensing, recording, and processing applications [6]. Use of this system as a wing leading edge impact monitoring system was first flown on NASA's Return to Flight mission, STS-114, in July 2005. The general term for this SBIR wireless technology is Sensor Control and Acquisition Telecommunications (SCAT) wireless instrumentation systems. SCAT systems have also been used for multiple applications on the International Space Station (ISS), such as wireless vehicle health monitoring, wireless instrumentation and data recording, and instrumentation of flight tests for developmental vehicles.

Another example of how SBIR has played a critical role in technology development is the maturation of the Phenolic Impregnated Carbon Ablator (PICA) heatshield material invented at NASA Ames in 1993, as shown in Figure 7. PICA was a very promising material, but only small specimens (~0.1 m) of PICA had been produced at the time of its invention. It was being considered as an enabling technology for the Stardust mission, but this required the production of a much larger piece (~1.0 m). Available flight-proven heatshield materials (e.g., carbon-phenolic) were too heavy to use for the Stardust sample return capsule, which needed to be very mass efficient. In 1994, PICA was selected by Lockheed Martin for the re-entry heatshield on Stardust [7]. Technologies which are currently being funded are searchable on the SBIR/STTR website [8], and the interface for this searchable database is shown in Figure 8.

SEED FUND

The IPP Seed Fund has been established as an annual process to enhance NASA's ability to meet mission technology goals by providing seed funding to address barriers and initiate cost-shared, joint-development partnerships. The IPP Seed Fund provides 'seed' funding to enable larger partnerships and development efforts to occur, and it encourages, to the maximum extent possible, the leveraging of funding, resources, and expertise from non-NASA partners, NASA Programs and Projects, and NASA Centers.

The IPP Office at NASA HQ provides an annual Seed Fund Call for Proposals to NASA Centers, soliciting proposals for cost-shared partnerships with industry, academia, research institutions, national laboratories, and other Government agencies for joint development of technology that is of Mission interest to NASA.

The Call is developed in coordination with all Mission Directorates and distributed to all Field Centers. In order to solicit external interest in partnerships, an announcement is also posted to the FedBizOpps website. Responses to the Call must be from NASA personnel participating as a Partnership Manager (PM) in the Center IPP Office, with proposals including both an internal NASA Co-Principal Investigator (Co-PI) and an External Co-PI.
Proposed projects should be one year in duration and must include one or more non-NASA partners who are willing to provide cost-sharing at a level equal to or greater than the IPP funding provided to the project. Acceptable cost-sharing from the partner includes actual dollars applied directly to the project, and in-kind considerations such as workforce labor and the use of unique and dedicated facilities and testbeds. Such leveraging of non-NASA resources also helps ensure successful application of the technology, because the partners have 'skin in the game' as stakeholders.

Proposals are evaluated against criteria which include, among others, the leveraging of resources. The review process begins at each of NASA's 10 Field Centers. The selected projects and their partners are together providing a total of $62.2 million for the advancement of critical technologies. The technology landscape covered by the successful proposals embraces the needs of all four Mission Directorates, as summarized in Figure 9 ('Technology Spectrum') below. Planned technology advancement resulting from the 2006 Seed Fund awards is illustrated in Figure 10, which shows the number of 2006 Seed Fund projects at each TRL at the time of the award and at the end of the one-year seed fund project.
One of these partnerships is developing an inflatable habitat with a partner in Delaware [11]. The habitat, as shown in Figure 11, will be put through its paces as a component of McMurdo Station in Antarctica from January 2008 through February 2009. Using reports from explorers braving this harsh environment and data collected from habitat sensors, designers will evaluate the concept of using inflatable structures to support future explorers on the Moon or Mars. There are several technology demonstrations planned for Seed Fund projects, in addition to the inflatable habitat demonstration previously discussed. A summary of some of the notable demonstrations planned for the coming year is provided below in Figure 13.
CENTENNIAL CHALLENGES
Centennial Challenges is NASA's program of prize contests to stimulate innovation and competition in solar system exploration and ongoing NASA mission areas. By making awards based on actual achievements instead of proposals, Centennial Challenges seeks novel solutions to NASA's mission challenges from non-traditional sources of innovation in academia, industry, and the public. Current Centennial Challenges competitions are listed in Table 2. Peter Homer of Bangor, Maine, was the first recipient of Centennial Challenges prize money when he won the Astronaut Glove Challenge (Figure 14). The New York Times Magazine ran a cover story about Centennial Challenges, featuring Peter Homer's capture of the Astronaut Glove competition [13]. Homer's glove technology relates to the pressure-containing inner layers. Among the potential benefits of the winning glove design is that it requires less torque to bend than the Phase 6 glove design currently in operational use; therefore, it may be less fatiguing to use. In addition, the finger joint flexes at a predictable, repeatable location, allowing each finger to be patterned to the individual astronaut's unique hand dimensions. Homer's next steps are to continue refinement of the glove design to further reduce bending torque (hand fatigue), improve sizing and fit, refine manufacturing processes, investigate the potential for applying the finger joint technology to other mobility joints of the space suit, and explore ways to incorporate the glove innovation into layers of the space suit. Since winning the Astronaut Glove Challenge, Homer has been hired as a consultant to Orbital Outfitters, a firm commercially developing a pressurized space suit for suborbital space flyers.
Hamilton Sundstrand and ILC Dover, the current manufacturers of NASA's spacesuits, were actively involved in sponsoring the competition and provided much of the test equipment. One of NASA's foremost spacesuit experts from JSC was also in attendance and was quite impressed.
While innovations from the competition haven't yet been infused, discussions are underway. Potential uses for NASA human space missions include: launch and re-entry safety/survival suits; suits for on-orbit extra vehicular activity (EVA); suits for planetary and lunar surface operations; and high pressure (zero pre-breathe) spacesuits (since the greater joint flexibility can allow for higher suit pressures).
The Lunar Lander Challenge has had two years of competitions in conjunction with the X PRIZE Cup, although the prize has yet to be won. Other competitions have also been very successful at advancing knowledge and driving innovation, although prize money has not yet been won. An example is the Regolith Excavation Challenge, which took place in a 4 m by 4 m 'sandbox' with 6 tons of JSC-1a lunar regolith simulant. This was the first time that this amount of lunar regolith simulant had ever been used, leading one of NASA's experts who was present at the competition to state that he learned more in two days 'playing in this sandbox of JSC-1a' than he had in two years of reading and studying about regolith properties.
OBSTACLES
The biggest obstacle to technology infusion is the perceived risk, by program/project managers (or their systems engineers), of adopting a new technology. They like to have technologies with flight heritage and don't want to take on any more risk than they feel they have to. If the benefits of a new technology don't clearly outweigh the risks in the mind of a decision-maker, then that technology will likely not be infused. If additional development is required, then cost and/or schedule can be further obstacles. Projects generally desire technologies to be at least TRL 6 by their preliminary design review (PDR). IPP is doing several things to address these obstacles. A key element of achieving TRL 6 is demonstrating a technology in the relevant environment, including the gravity environment, from microgravity to lunar or Martian gravity levels. Space technology development can stall at the mid-technology readiness levels due to a lack of opportunities to test prototypes in relevant environments. In addition, the limited testing opportunities often have high associated costs or require lengthy waits.

NASA just completed a procurement to select a commercial service provider for parabolic aircraft flights to simulate multiple gravity environments, awarding the contract to the Zero Gravity Corporation on January 2, 2008. IPP is working with NASA's Strategic Capability Assets Program (SCAP) and the Glenn Research Center (GRC) to use this IDIQ contract for parabolic aircraft services to initiate a new activity: Facilitated Access to the Space environment for Technology development and training (FAST). FAST will provide more opportunities for reducing risk and advancing TRLs by providing partnership opportunities to demonstrate technologies in these environments [14].

FAST will purchase services through this new procurement mechanism and provide partnership opportunities aimed at reducing risk by advancing needed space technologies to higher technology readiness levels (TRLs). This will demonstrate the business model for purchasing services commercially, and advance technology readiness for NASA's research and technology needs. The objective is to provide advanced technologies with risk levels that enable more infusion, meeting the priorities of NASA's Mission Directorates and their Programs and Projects.
BEST PRACTICES
The key to successful infusion is satisfying the technology user, a.k.a. customer or decision-maker, that the benefits of infusing a new technology or innovation outweigh any additional cost or risk. Someone will need to make a decision at some point that yes, this technology is something that will be infused. This discussion of best practices will refer to that person as the 'customer.' There is no standard recipe for infusion success, but there are a number of practices that, if followed, will increase the likelihood of infusion. Not all must necessarily be followed, but the more the better.
1) Develop a technology that is needed.
Communicate with the customer in order to understand their needs and how your technology might address those needs better than other options. IPP works hard to ensure our portfolio of technologies is integrated with the needs of NASA's Mission Directorates, and complementary to their other technology investments.

2) Cultivate interest with the customer as the technology is being developed.
Seek to have 'skin in the game' from the customer, as this validates that they are, in fact, interested. IPP seeks to do this through the Seed Fund, and has started a new Phase 2E feature in SBIR, to encourage cost-sharing from the Mission Directorates.
Communicate with customers as the technology is being developed, keeping them apprised of milestones and demonstrations. Recognize that technology priorities and needs can be dynamic, and keep abreast of changes in those needs.
3) Develop an infusion plan early, and keep updating it as the technology matures.
Actively consider and plan for infusion as the technology is being developed, not as an afterthought once it has been successfully demonstrated. Throwing the technology ball over the fence and hoping that someone on the other side will catch it is not a good strategy for infusion. This infusion plan should include funding options for the duration of development and demonstration needed. Most IPP projects are of limited duration, and the IPP funding is but one link in a longer chain of events that must occur to lead to successful infusion: SBIR/STTR funding is 3 years, Seed Fund is 1 year. Technology development is typically a much longer process, and it should be thought of as such and planned for with that in mind. Understand the technology as part of the system it may be infused into, and be prepared to communicate that understanding.
Communicate an understanding of the issues of importance to the customer or technology user as they deliberate on which technologies to infuse, and, to the extent possible, anticipate their concerns. There is a common theme to all the items listed above: communications. Without communication there will be no infusion; it is that simple. To communicate effectively, there are several things that should be understood. In order to be successful at infusing technology, the technology developer must understand the issues and concerns of the technology user, typically a project manager or systems engineer conducting tradeoff analyses of multiple candidate technologies for various systems and subsystems. The more knowledgeable the individual or organization seeking to infuse a particular technology is about the issues facing these decision-makers, the greater the likelihood of successful infusion. The technology must be good, but if its attributes are not effectively communicated, it may never be infused.
To put a technology in the best position for infusion, it is desirable that there be certain levels of knowledge relative to the key issues on the minds of decision-makers, related to performance, schedule, cost, and risk. The questions summarized below are indicative of the types of questions typically asked by decision-makers; while not all of the answers must be known, the more that are known, the more likely it is that perceptions of risk can be reduced and infusion may occur.

Performance
• What impact will this technology have on the overall performance of the system (e.g., power savings, mass savings, higher resolution, increased Isp, etc.)? Can the benefits be quantified?
• Has this performance improvement been demonstrated (or will it be)? If a demonstration is planned, invite decision-makers to the demonstration.
SUMMARY
IPP is seeking to add value to NASA's Mission Directorates and their programs and projects through technology development and infusion to meet mission needs. IPP's technology portfolio provides benefits from numerous sources, and there is a track record of success, with a few examples of the most notable infusions described in this paper. IPP is aggressively pursuing better integration and more infusion. IPP is also working to better identify priority investments and partnership opportunities. IPP has a highly dedicated workforce at each of the 10 Field Centers. They are working hard to build even stronger connections to programs and projects to better understand needs, strengthen working relationships, and increase infusion.
"Economics"
] |
Simplify: a Python library for optimizing pruned neural networks
Neural network pruning allows for impressive theoretical reductions in model size and complexity. However, it usually offers few practical benefits, as it is most often limited to just zeroing out weights, without actually removing the pruned parameters. This precludes the actual advantages provided by sparsification methods. We propose Simplify, a PyTorch-compatible library for achieving effective model simplification. Simplified models benefit from both a smaller memory footprint and a lower inference time, making their deployment to embedded or mobile devices much more efficient.
Motivation and significance
Over the last few years, neural network pruning (i.e. the reduction of the size and complexity of a model through the removal of a set of parameters) has been the subject of extensive research in the scientific community [3,6,18,19,22].
Modern pruning techniques allow for impressive theoretical reductions in both memory requirements and inference time for state-of-the-art neural network architectures. However, most procedures are limited to only identifying which portion of the weights can be set to zero, offering little to no practical advantage when the model is deployed to resource-constrained devices such as mobile phones or embedded systems. While most pruning-related works report some form of theoretical speedup, either in terms of FLOPs or inference speed [1], this does not always reflect the actually achievable performance gain, which is usually overestimated.
To solve this issue, we propose Simplify 1, a PyTorch [14] compatible simplification library that allows one to obtain an actually smaller model in which the pruned neurons are removed and do not weigh on the size and inference time of the network. This technique can be used to correctly evaluate the actual impact of a pruning procedure when applied to a given network architecture. Moreover, Simplify allows the simplification process to be applied even at training time, in conjunction with pruning techniques, thus reducing the time required for pruning and fine-tuning neural networks. A high-level representation of the pruning and simplification pipeline is given in Figure 1. In the related literature it is possible to encounter two classes of pruning procedures: unstructured and structured. Unstructured pruning approaches remove single parameters from the network, independently from one another [2,4,9,13,20,21]. When employing these techniques, one can obtain a high degree of sparsity, but the pruning of entire neurons is not guaranteed. Structured approaches, on the other hand, focus on the removal of whole neurons, imposing some kind of structure on the pruned topology [10,19,23]. Since our proposed library removes the pruned neurons from the network, we focus on models pruned using structured techniques. Various accelerators for sparse neural networks, both hardware and software, have been proposed [11,15,25,26]. The main downside of these solutions is the requirement for specific hardware or software, which can hardly be applied to standard consumer devices. Furthermore, they are designed to apply inference-time acceleration using the zero-filled model instead of building an optimized structure, thus precluding the ability to train a pruned neural network.
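To make the distinction concrete, the following minimal sketch uses PyTorch's built-in pruning utilities to apply structured pruning to a convolutional layer; it is illustrative only and not part of the Simplify library itself.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# A toy convolutional layer with 64 input and 128 output channels.
conv = nn.Conv2d(64, 128, kernel_size=3)

# Structured pruning: zero out 50% of the output channels (dim=0),
# ranked by the L2 norm of each channel's weights.
prune.ln_structured(conv, name="weight", amount=0.5, n=2, dim=0)
prune.remove(conv, "weight")  # bake the pruning mask into the weight tensor

# The parameters are only zeroed, not removed: the layer still stores
# 128 output channels and costs the same FLOPs at inference time.
zeroed = (conv.weight.abs().sum(dim=(1, 2, 3)) == 0).sum().item()
print(f"{zeroed}/128 output channels are zeroed but still stored")
```

This is precisely the gap that Simplify closes: the zeroed channels above remain in memory and in the compute graph until they are physically removed.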
Simplify solves these issues by extracting the remaining structure from a pruned model and removing all the zeroed-out neurons from the network. This allows one to obtain a model that can be saved, shared, and used without any special hardware or software. While at first glance this may seem a straightforward procedure, the removal of zeroed neurons poses some hidden challenges, such as the presence of biases in said neurons or constraints on the output dimensions due to skip or residual connections. Even though the interest of the deep learning community in the matter seems to be quite strong 2, very few approaches and libraries for simplifying pruned models have been proposed 3. Moreover, they are usually limited to simpler architectures such as VGG [16], and their usage is restricted to the deployment of an already pruned model. With Simplify, on the other hand, we provide a way to: 1. optimize more complex network architectures (e.g., ResNet [5], DenseNet [7], and so on) and, in general, custom architectures, without constraints given by the connectivity patterns (i.e., residual connections); 2. optimize models during training: this allows speed-ups in the time required to train a model and reduces memory occupation when applied together with an iterative pruning technique.
Software Description
The Simplify library leverages the main PyTorch packages and is composed of three main modules that, even if designed to function in a predefined order, can be used independently based on the user's requirements. We now provide a brief overview of each module's functionality and purpose. A more detailed explanation of the maths involved in each module is provided in the Appendix.
Fuse. First, we have the fuse module. Here we perform a non-mandatory optimization of the model by merging pairs of consecutive Convolutional and Batch Normalization layers into a single Convolutional layer. This process is known as Batch Normalization fusion or folding. This step can be skipped if the presence of Batch Normalization layers in the network is required, e.g., for further training of the simplified model. This step is not needed to define the simplified model, but it provides inference-time and memory usage advantages, especially when deploying a trained model to production, thanks to an optimization of the model architecture.
Propagate. The second module is called propagate. With this module we solve the problem, mentioned in Sec. 1, of non-zero biases in zeroed neurons. It is possible that some pruned neurons retain a non-zero bias; in such a situation it would be impossible to remove the neuron without losing the bias contribution. To solve this problem, in the propagate module we essentially treat such neurons as a constant signal that can be absorbed by the next layer, making the zeroed neuron removable.
Remove. Lastly, with the remove module we perform the actual simplification of the model, removing the zeroed-out neurons. Here we make sure that the output and input dimensions of adjacent layers correspond, while also taking into account architectural constraints such as the presence of skip connections.
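As a sketch of what the remove step does in the simplest feed-forward case, the following function (ours, not the library's actual implementation) drops the zeroed output neurons of one linear layer and the matching input columns of the next, assuming biases have already been propagated away:

```python
import torch
import torch.nn as nn

@torch.no_grad()
def remove_zeroed_neurons(l1: nn.Linear, l2: nn.Linear):
    # Keep only the output neurons of l1 with a non-zero weight row or bias.
    keep = ((l1.weight.abs().sum(dim=1) + l1.bias.abs()) != 0).nonzero().flatten()
    new1 = nn.Linear(l1.in_features, len(keep))
    new1.weight.copy_(l1.weight[keep])
    new1.bias.copy_(l1.bias[keep])
    # The next layer loses the corresponding input columns.
    new2 = nn.Linear(len(keep), l2.out_features)
    new2.weight.copy_(l2.weight[:, keep])
    new2.bias.copy_(l2.bias)
    return new1, new2
```

The real module must additionally align dimensions across branching and merging paths (e.g., skip connections), which is where most of the complexity lies.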
Illustrative Examples
In this section we provide a usage overview of Simplify. We also illustrate the results obtained for the two different use cases discussed in Sec. 1, namely optimization for model deployment and optimization during training.
Optimization for deployment. This is the most common use case. Here, the simplification procedure is applied to an already trained model on which a pruning criterion has been previously applied. In most cases, a one-line call to the simplify method is sufficient: the library performs all three steps autonomously and takes care of different architectural patterns such as residual connections. Below, we provide a sample code snippet. Tab. 1 shows the inference times (in milliseconds) of different standard PyTorch dense models, the resulting pruned models (random structured pruning with 50% probability), and the simplified models obtained with our proposed library. The benchmarks are run on an Intel(R) Core(TM) i9-9900K CPU with a batch size of 1, in order to simulate one-shot inference of a deployed model. The results are averaged across 1000 different runs for each architecture. It is easy to see that, thanks to Simplify, the resulting model is actually faster and able to leverage the applied pruning, while remaining a fully-fledged PyTorch network. Additional results for all the torchvision architectures can be found in the repository README file.

Optimization for training. Most modern network architectures employ Batch Normalization as a way to improve generalization. To avoid losing the Batch Normalization contribution, we provide the ability to skip the fusion step, so that these layers are retained. To further improve training time, it is possible to enable a training mode for Simplify, which helps in decreasing inference time. More details are provided in Sec. C.1. Below, we provide a sample code snippet.
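The code snippets referenced above were lost in extraction; the following reconstruction shows the intended usage pattern. The exact entry points and keyword arguments are assumptions based on the library's public repository and may differ; consult the README for the authoritative API.

```python
import torch
import torch.nn.utils.prune as prune
from torchvision.models import resnet18
from simplify import simplify  # entry point assumed; see the repository README

# Optimization for deployment: prune a trained model, then simplify it.
model = resnet18(pretrained=True)
for module in model.modules():
    if isinstance(module, torch.nn.Conv2d):
        prune.random_structured(module, name="weight", amount=0.5, dim=0)
        prune.remove(module, "weight")

dummy_input = torch.zeros(1, 3, 224, 224)  # fixes the expected input shape
simplify(model, dummy_input)  # fuse + propagate + remove

# For the training use case, the library exposes an option to skip
# Batch Normalization fusion so the simplified model can keep training;
# the exact flag name is documented in the repository README.
```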
Impact
Current state-of-the-art pruning research bases its results on theoretical estimations of the models' improvement. These offer poor practical benefits due to the lack of removal of pruned neurons, which still weigh on the model's computation, especially when deployed to resource-constrained devices like mobile phones. Simplify provides out-of-the-box functionality to translate the impressive theoretical results of pruning procedures into an actual shrinking of the neural network model, reducing both memory requirements and inference time. It allows for a more precise evaluation of pruning procedures, enabling systematic comparison within scientific research, and it helps during deployment, allowing for the full exploitation of the pruned network without the need for ad hoc hardware platforms.
Conclusions
We propose the PyTorch compatible library Simplify, with the aim of providing a simple-to-use set of procedures to remove zeroed neurons from a neural network architecture. The proposed library solves different issues in the creation of simplified models, such as the propagation of the bias of pruned neurons and the shape constraint of skip connections.
The proposed library is composed of three modules that, while designed to work together, can be used independently from one another according to the required functionality for a specific setting.
Conflict of Interest
We wish to confirm that there are no known conflicts of interest associated with this publication and there has been no significant financial support for this work that could have influenced its outcome.
CRediT authorship contribution statement
Andrea Bragagnolo: Conceptualization of this study, Methodology, Software. Carlo Alberto Barbano: Conceptualization of this study, Methodology, Software.
A. Batch Normalization fusion
A vast amount of modern neural networks use Batch Normalization (from here on out, BatchNorm) as a way to improve generalization. Given an input $x$, we can define the output of BatchNorm as

$$y = \gamma \frac{x - \mu}{\sqrt{\sigma^2 + \epsilon}} + \beta, \tag{1}$$

where $\gamma$ and $\beta$ represent, respectively, the weights and bias of the layer and are learned using standard backpropagation procedures, while $\mu$ and $\sigma^2$ represent the mean and variance computed over a batch. During training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. Let us denote these approximations as $\hat{\mu}$ and $\hat{\sigma}^2$. Notice that each parameter is defined for each channel of the input feature map; we denote them as $\gamma_c$, $\beta_c$, $\hat{\mu}_c$ and $\hat{\sigma}^2_c$ for a given channel $c$. Once a neural network is trained to completion, all the parameters of its layers can be considered frozen, i.e., no longer updated by further training. Also, in standard network architectures, it is possible to identify pairs of Convolutional and BatchNorm layers whose outputs are of the same size. In such conditions it is possible to reduce the network complexity by fusing these two layers into a single one. Note that this operation is only applicable if there is no non-linearity between the two layers.

Let us consider a generic BatchNorm output

$$y = \gamma \frac{x - \hat{\mu}}{\sqrt{\hat{\sigma}^2 + \epsilon}} + \beta; \tag{2}$$

this can be rewritten as

$$y = \frac{\gamma}{\sqrt{\hat{\sigma}^2 + \epsilon}}\, x + \left(\beta - \frac{\gamma \hat{\mu}}{\sqrt{\hat{\sigma}^2 + \epsilon}}\right). \tag{3}$$

Since this BatchNorm layer is preceded by a Convolutional layer, $x$ can be defined as

$$x = w \cdot z + b, \tag{4}$$

where $z$ is the input of the Convolutional layer, $w$ are its weights and $b$ its bias. We can now express the BatchNorm output as a function of the Convolutional layer, substituting Eq. 4 in Eq. 3:

$$y = \frac{\gamma}{\sqrt{\hat{\sigma}^2 + \epsilon}} (w \cdot z + b) + \beta - \frac{\gamma \hat{\mu}}{\sqrt{\hat{\sigma}^2 + \epsilon}}. \tag{5}$$

Leveraging Eq. 5, we can finally fuse the Convolutional and the BatchNorm layer into a single Convolutional layer whose weights and bias are defined as

$$w' = \frac{\gamma}{\sqrt{\hat{\sigma}^2 + \epsilon}}\, w \tag{6}$$

and

$$b' = \frac{\gamma (b - \hat{\mu})}{\sqrt{\hat{\sigma}^2 + \epsilon}} + \beta, \tag{7}$$

and the output is therefore

$$y = w' \cdot z + b'. \tag{8}$$
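The fusion of Eqs. 6-8 can be implemented in a few lines of PyTorch; the following sketch (ours, not the library's code) fuses one Conv/BatchNorm pair using the layer's running statistics:

```python
import torch
import torch.nn as nn

@torch.no_grad()
def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    # Per-channel scale gamma / sqrt(running_var + eps) from Eqs. 6-7.
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      conv.stride, conv.padding, conv.dilation, conv.groups,
                      bias=True)
    fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))   # Eq. 6
    b = conv.bias if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.copy_((b - bn.running_mean) * scale + bn.bias)      # Eq. 7
    return fused
```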
B. Bias propagation
This step is necessary if biases are present in the model's hidden layers, or are introduced by the fusion of Batch Normalization layers. Neurons with zeroed-out weight channels might have a non-zero bias, and so they will fire a constant output value. Hence, a neuron cannot immediately be removed if its corresponding bias is non-zero. These values, however, can be propagated and accumulated into the biases of the next layer. This operation can be repeated until all of the biases have been propagated to the last layer of the network. After a bias has been propagated, it can be set to zero in the original neuron, which in turn allows the removal of the whole weight channel.
B.1. Linear layers
We denote by $\ell_1 = \langle W, b \rangle$ and $\ell_2 = \langle V, c \rangle$ two sequential linear layers. $W$ and $b$ denote the weight matrix and bias vector of $\ell_1$, of size $m \times n$ and $m$ respectively; $V$ and $c$ denote the weight matrix and bias vector of $\ell_2$, of size $p \times m$ and $p$ respectively. We also denote by $\phi$ the activation function (e.g., ReLU). A forward pass for $\ell_1$ consists in

$$y = Wx + b \tag{9}$$

(where $x$ represents an input vector of size $n$), and for $\ell_2$:

$$z = V\phi(y) + c. \tag{10}$$

We now suppose that some output channel of $\ell_1$ has been zeroed out following the application of some pruning criterion, e.g., every entry in row 1 of $W$ is zero. The corresponding output then reduces to the constant $y_1 = b_1$.

We now focus on the forward pass of $\ell_2$. As an example, we analyze what happens with the first neuron $z_0$. If we rewrite Equation 10 focusing on $z_0$, we obtain:

$$z_0 = \phi(y_0)\,V_{0,0} + \phi(b_1)\,V_{0,1} + \dots + \phi(y_{m-1})\,V_{0,m-1} + c_0. \tag{11}$$

The term $\phi(b_1)\,V_{0,1}$ is a constant which can be accumulated into $c_0$. The same reasoning can be extended to all neurons in $\ell_2$, by adding $\phi(b_1)$, multiplied by the respective incoming weight, to each neuron's bias. The new set of biases $\hat{c}$ for the layer can be written as:

$$\hat{c} = \begin{bmatrix} c_0 + \phi(b_1)\,V_{0,1} \\ c_1 + \phi(b_1)\,V_{1,1} \\ \vdots \\ c_{p-1} + \phi(b_1)\,V_{p-1,1} \end{bmatrix} \tag{12}$$

and the original bias $b_1$ can be set to zero in $\ell_1$, resulting in $\hat{b} = (b_0, 0, b_2, \dots, b_{m-1})$. This procedure can also be applied when multiple neurons are pruned in $\ell_1$; the general rule to obtain the updated biases $\hat{c}$ is:

$$\hat{c}_j = c_j + \sum_{i \in Z} \phi(b_i)\,V_{j,i}, \tag{13}$$

where $Z$ represents the indices of zeroed channels in $\ell_1$. After the bias propagation procedure, the layers $\ell_1$ and $\ell_2$ compute the same overall function as before, while the zeroed neurons of $\ell_1$ no longer carry any information and can be removed.
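The update of Equation 13 can be written directly in PyTorch; the following sketch (not the library's actual code) propagates the biases of zeroed neurons from one linear layer into the next:

```python
import torch

@torch.no_grad()
def propagate_bias_linear(l1, l2, act=torch.relu):
    # Indices Z of output neurons of l1 whose weight rows are all zero.
    zeroed = (l1.weight.abs().sum(dim=1) == 0).nonzero().flatten()
    if len(zeroed) == 0:
        return
    const = act(l1.bias[zeroed])          # constant outputs phi(b_i)
    # Eq. 13: c_j += sum over i in Z of V[j, i] * phi(b_i)
    l2.bias += l2.weight[:, zeroed] @ const
    # The zeroed neurons now carry no information and can be removed.
    l1.bias[zeroed] = 0.0
```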
B.2. Convolutional layers
A similar reasoning can be applied for convolutional layers. However, the propagation process needs to take into account whether the convolution employs zero-padding on the input tensor or not.
For the sake of simplicity, using the same notation as Section B.1, let us consider two sequential convolutional layers $\ell_1 = \langle W, b \rangle$ and $\ell_2 = \langle V, c \rangle$. We also assume that $\ell_1$ has one input channel and two output channels ($W$ has shape $2 \times 1 \times k_1 \times k_1$ and $b$ is a vector of length 2), while $\ell_2$ has two input channels and one output channel ($V$ has shape $1 \times 2 \times k_2 \times k_2$, and $c$ is a vector of length 1).
The forward pass for $\ell_1$ is

$$y = W * x + b,$$

where $*$ represents the convolution operation and $x$ is a properly sized input. In this context, the addition between the resulting feature map $F = W * x$ and the corresponding bias value performs a shape expansion of the bias to match the feature map shape. We now assume that the second output channel (index 1) of $\ell_1$ has been zeroed out after the application of some pruning criterion; hence, if we consider $F_1 + b_1$, we obtain a constant feature map $\tilde{b}_1$, where $\tilde{\cdot}$ denotes that the element's shape has been expanded.
We now analyze what happens with $\ell_2$. For the sake of simplicity, we assume that the input feature maps are $3 \times 3$, that $k_2 = 2$, and that every value of $V$ is equal to 1. We also consider a stride value of 1 for $\ell_2$.

Convolution without padding ('valid' padding): this is the simpler case, and it is similar to the linear layers (Section B.1). The forward pass of $\ell_2$ can be expressed as follows:

$$z = V_{0,0} * \phi(y_0) + V_{0,1} * \phi(\tilde{b}_1) + c_0. \tag{14}$$

The factor $V_{0,1} * \phi(\tilde{b}_1)$ is constant and can be accumulated into $c_0$. In this case, the updated bias can be converted to a scalar replacing the original value $c_0$: given that the resulting matrix is constant (with our $2 \times 2$ kernel of ones, every output position sums four equal contributions), we can directly factor out $4\phi(b_1)$ and set $b_1$ to 0 in $\ell_1$, obtaining a new bias $\hat{c}_0 = 4\phi(b_1) + c_0$ which will be used from now on in $\ell_2$. The same reasoning can be extended to the case of multiple neurons in the convolutional layer and multiple pruned channels in the preceding layer: each bias value is updated according to the rule in Equation 14. The general rule to obtain the new bias vector $\hat{c}$ can be expressed as follows:

$$\hat{c}_j = c_j + \sum_{i \in Z} \phi(b_i) \sum_{u,v} V_{j,i,u,v}, \tag{15}$$

where $Z$ represents the indices of the zeroed output channels in $\ell_1$.
Convolution with zero-padding
If the convolution applies zero-padding to the input values, then the bias cannot be accumulated into a scalar, as the resulting matrix will not be constant. To show this, we rewrite Equation 14 applying a zero-padding of size 1 along each dimension of the input tensor, writing $b' = \phi(b_1)$ for brevity. In this case, the new bias values need to be maintained in matrix form, i.e.:

$$\hat{c}_0 = \begin{bmatrix} b' + c_0 & 2b' + c_0 & 2b' + c_0 & b' + c_0 \\ 2b' + c_0 & 4b' + c_0 & 4b' + c_0 & 2b' + c_0 \\ 2b' + c_0 & 4b' + c_0 & 4b' + c_0 & 2b' + c_0 \\ b' + c_0 & 2b' + c_0 & 2b' + c_0 & b' + c_0 \end{bmatrix},$$

where border positions accumulate a smaller constant because they overlap fewer non-padded input cells. To obtain the updated biases in the case of multiple neurons and multiple channels, the same rule of Equation 15 can be applied, keeping in mind that in this case it results in a tensor of shape (output channels) $\times\, h \times w$ instead of a vector. This introduces a constraint on the feature map size; hence the model can only ever be used at a fixed input size. However, given that the whole simplification procedure is executed on an already trained model, before deployment to production, this should not represent a major issue.
B.3. Residual connections
While the above process works fine for simple feed-forward models, special care must be taken to handle residual connections. As an example, let us consider the case of two linear layers $\ell_1 = \langle W, b \rangle$ and $\ell_2 = \langle U, d \rangle$, whose outputs $y$ and $t$ are summed together in a residual connection, followed by another layer $\ell_3 = \langle V, c \rangle$:

$$z = V\,\phi(y + t) + c, \tag{16}$$

where a zeroed weight row denotes that a channel was pruned. The residual (sum) operation introduces a new constraint: only biases corresponding to matching pruned channels in $\ell_1$ and $\ell_2$ can be propagated to the next layer. To see why, we can rewrite Equation 16 in the same form as Equation 11 and obtain:

$$z_0 = \phi(y_0 + t_0)\,V_{0,0} + \phi(b_1 + t_1)\,V_{0,1} + \dots + \phi(b_{m-1} + d_{m-1})\,V_{0,m-1} + c_0. \tag{17}$$

It is clear that even if multiple channels are pruned from $\ell_1$ and $\ell_2$, only the terms in which the channel is pruned in both branches, such as $\phi(b_{m-1} + d_{m-1})\,V_{0,m-1}$, become constants. In this case, we opt not to propagate any bias, and we employ an expansion scheme (Sec. C.1) to achieve a speed-up in the convolution operations anyway.
"Computer Science"
] |
Evaluating software architecture using fuzzy formal models
Unified Modeling Language (UML) has been recognized as one of the most popular techniques to describe the static and dynamic aspects of software systems. One of the primary issues in designing software packages is the existence of uncertainty associated with such models. Fuzzy-UML describes software architecture from both static and dynamic perspectives simultaneously. Evaluating software architecture as early as the design phase always helps us find additional requirements, which helps reduce the cost of design. In this paper, we use a fuzzy data model to describe the static aspects of software architecture and the fuzzy sequence diagram to illustrate its dynamic aspects. We also transform these diagrams into Petri Nets and evaluate the reliability of the architecture. A web-based hotel reservation system is studied for further explanation.
Introduction
Unified Modeling Language (UML) is a semi-formal modeling and standard language for easily describing software architecture (Object Management Group, 2002). This language has developed a powerful set of predefined modeling elements, diagrams, and structures to describe the structural and behavioral properties of software architecture, together with appropriate supporting tools. Unfortunately, this language is only capable of modeling information systems in which there is no uncertainty in the model. When we consider uncertainty in UML, the extended version named Fuzzy-UML is produced (Haroonabadi & Teshnehlab, 2008; Ma et al., 2011). Fuzzy-UML includes both structural and behavioral views, which are explained later in this paper. The next section presents the proposed method; its subsections cover the fuzzy data model, the fuzzy sequence diagram, evaluating software architecture using fuzzy Petri Nets, reliability using fuzzy Petri Nets, and a case study. The last section presents conclusions and future work.
To evaluate software architecture, both structural and behavioral aspects must be considered: use case, deployment, and class diagrams are used to display the structural aspect, while activity, sequence, and state diagrams stand for the behavioral aspect.
In this paper, we study Fuzzy-UML to display uncertainty in systems: we use the fuzzy class diagram to show the structural aspect of the software system and the fuzzy sequence diagram to display the behavioral aspect. During the past few years, there have been several studies on fuzzy UML, fuzzy logic, Petri Nets, etc. (Ma, 2005; Motameni et al., 2008; Motameni & Ghassempouri, 2011). Ma has contributed to the idea of Fuzzy-UML by discussing and developing new ideas with a complete explanation of fuzzy programming (Ma et al., 2005-2011). Haroonabadi and Teshnehlab (2009) used stereotypes to describe software architecture and added fuzzy features to transform the use case and sequence diagrams into fuzzy use case and fuzzy sequence diagrams, respectively; behavioral modeling of systems based on the fuzzy sequence diagram was also explained in detail. Using these diagrams, one is able to describe the architecture of uncertain information systems and then analyze them.
Fuzzy data model
The class diagrams in UML are the logical frameworks which describe the nature of the main structure. The classes and the relationships among classes are the integral elements of the class diagram. Fuzzy-UML is created by adding uncertainty as part of system integration. According to Ma et al. (2011), in the context of classes there are three levels of fuzziness, defined as follows:

- Fuzziness in the extent to which the class belongs to the data model, as well as fuzziness in the content of the class in terms of attributes.
- Fuzziness in whether some instances belong to a particular class: even though the structure of a class is crisp, it is possible that an instance belongs to the class with a degree of membership.
- The third level of fuzziness concerns the attribute values of the instances of the class: an attribute in a class defines a value domain, and when this domain is a fuzzy subset or a set of fuzzy subsets, the fuzziness of the attribute value appears.
At the first level, the attribute or the class name must be described by the phrase 'WITH membership DEGREE', where 0 ≤ membership ≤ 1. This value shows the degree to which the attribute belongs to the class, or the class to the data model. At the second level of fuzziness, the membership degree of an instance belonging to the class should be specified, so an additional attribute is defined in the class to represent the instance's membership degree, with domain [0, 1]. Classes with the second level of fuzziness are drawn as rectangles with dashed borders. At the third level, a fuzzy keyword appears in front of the attribute. Ma et al. (2011) present an example of a banking account using the concept of a fuzzy class, and Fig. 1 shows the banking account fuzzy class. In this class, the credit attribute can have a fuzzy value (the third level of fuzziness); in other words, the credit attribute is a linguistic variable whose domain consists of fuzzy sets (for example: little/much). The account type specifies the membership degree of the credit attribute to the class (the first level of fuzziness): 'Credit WITH 0.8 membership DEGREE'. The relationships among the classes are divided into four categories, propounded in fuzzy terms (Larman, 1998): fuzzy generalization, fuzzy association, fuzzy aggregation, and fuzzy dependency.
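As an illustration of the three levels of fuzziness, the following Python sketch encodes the banking-account example; all identifiers are ours, chosen for illustration, and do not come from the paper.

```python
from dataclasses import dataclass

# First level: degree to which the 'credit' attribute belongs to the class.
CREDIT_ATTRIBUTE_DEGREE = 0.8   # "Credit WITH 0.8 membership DEGREE"

@dataclass
class FuzzyAccount:
    owner: str
    credit: str          # third level: linguistic value, e.g. "little"/"much"
    membership: float    # second level: instance's membership in the class

acct = FuzzyAccount(owner="A. Smith", credit="much", membership=0.9)
```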
Fuzzy system sequence diagram
In UML, the sequence diagram is implemented to realize the use cases, and if the use case is uncertain, the sequence diagram will be uncertain too. A system sequence diagram can be created very easily by representing input and output events. System sequence diagrams, like use cases and system contracts, describe the role of a system without explaining how it is accomplished. A simple system sequence diagram consists of an actor, the system, and messages; each message itself has events and conditions. This diagram incorporates fuzzy rules for transforming the state of an object into another state, and a fuzzy rule can be expressed as follows:

Rule = If <condition list> Then <event list>

According to Haroonabadi and Teshnehlab (2009), uncertainty in the method has two levels of fuzzification: first, we use a degree method; second, we must determine the logic. Fig. 2 shows a sample of the fuzzy concept, where a message C belongs to object B with membership function $\mu_B(x)$ and a message D belongs to object A with membership function $\mu_A(x)$. A fuzzy Petri Net is defined as the tuple (P, $P_s$, $P_e$, T, TF, TRTF, A, I, O, TT, TTF, AEF, PR, PPM, TV), where:

i. P is a finite set of fuzzy places. Each place has a property associated with it; $P_s \subseteq P$ is a finite set of input places for primitive events, and $P_e \subseteq P$ is a finite set of output places for actions or conclusions.
ii. T is a finite set of fuzzy transitions. They use the values provided by input places and produce values for output places.
iii. TF is a finite set of transition functions, which perform the activities of fuzzy inference.
iv. TRTF: T → TF is the transition type function, mapping each transition in T to a transition function in TF.
v. A ⊆ (P×T) ∪ (T×P) is a finite set of arcs for connections between places and transitions. Connections between the input places and transitions (P×T) and connections between the transitions and output places (T×P) are provided by arcs, in which I: P → T is an input mapping and O: T → P is an output mapping.
vi. TT is a finite set of fuzzy token (color) types. Each token has a linguistic value (e.g., low, medium, and high), which is defined with a membership function.
vii. TTF: P → TT is the token type function, mapping each fuzzy place in P to a fuzzy token type in TT. A token in a place is characterized by the property of the place and the level to which it possesses that property.
viii. AEF is the arc expression function, mapping each arc to an expression which carries the information (token values).
ix. PR is a finite set of propositions, corresponding to either events, conditions, or actions/conclusions.
x. TV: P → [0, 1] gives the truth values of tokens ($\mu_i$) assigned to places; it holds the degree of membership of a token to a particular place.
Proposed model
To evaluate software architecture, both structural and behavioral aspects should be considered: use case, deployment, and class diagrams are implemented to display the structural aspect, while activity, sequence, and state diagrams stand for the behavioral aspect. To display uncertainty in the system, in this paper we use the fuzzy class diagram to show the structural aspect of the software system and the fuzzy sequence diagram to display the behavioral aspect, proceeding as follows.

Step 1. First, for each message in this diagram, its events and conditions must be found. The events and conditions calculated for the desired activity are represented in Table 1.
Step 2. We need to check the correctness of the conditions; therefore, we provide a mapping to do so. For each condition, we provide a transition which is responsible for validating it, and the result goes to another place. That means the token, with the given fuzzy amount, continues its lifecycle; the token may have a value from 0 to 1, i.e., 0 ≤ token value ≤ 1. The result at the end of analyzing condition C is a fuzzy amount based on the condition. Sometimes we have more than one condition; depending on the fuzzy logical operations involved, we choose the appropriate function. The concept is represented in Eq. (1):

$$\mu_{A \cup B}(x) = \max[\mu_A(x), \mu_B(x)] \;\; (\text{OR}), \qquad \mu_{A \cap B}(x) = \min[\mu_A(x), \mu_B(x)] \;\; (\text{AND}). \tag{1}$$

In Fig. 3 and Fig. 4, two different common cases are depicted. Similarly, we can combine the rules and create more complicated logical operations by extending the figure based on the number of rules and the order of logical operations.
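As a toy illustration of Eq. (1), rule strengths can be computed with min/max directly; the numeric values reused below are the case-study memberships given later in the paper.

```python
def rule_strength(memberships, op="AND"):
    """Combine the membership degrees of a rule's conditions (Eq. 1)."""
    return min(memberships) if op == "AND" else max(memberships)

# e.g., "IF money is almost_enough AND date is valid"
print(rule_strength([0.09, 0.48]))         # AND -> 0.09
print(rule_strength([0.09, 0.48], "OR"))   # OR  -> 0.48
```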
Step 3. The next step is to run the event(s) if the conditions are met. Here the antecedent fuzzy amount is used to evaluate the consequent: the transition applies the antecedent truth value to the consequent membership function. Aggregation occurs here, as in Fig. 5. The defuzzification process is also accomplished here, because the final output of a fuzzy system should be a crisp number. Defuzzification is done using the center-of-gravity method; the center of gravity can be expressed as Equation (2).
$$\mathrm{COG} = \frac{\int \mu_A(x)\, x\, dx}{\int \mu_A(x)\, dx} \tag{2}$$

Table 1 shows details of the events and conditions for rules 1 and 2. We can say that t1 uses C1 to adjust and clip the consequent membership function of E1; in other words, it calculates the consequent of rule R1. t2 does the same for E2; t3 aggregates the two rules; t4 performs defuzzification; and t5 and t6 are both responsible for checking whether the condition that triggers the system occurs. If it does not occur, t5 passes the token to its end place; if it does, t6 passes it to the system. The result of this step (the amount in the output place of t6), which is a fuzzy strength amount, is exerted from the actor to the system as a black box, or from the system to the actor as the result of a message.
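Numerically, the center-of-gravity computation of Eq. (2) is a ratio of two integrals, which can be approximated by the trapezoidal rule; the membership function below is an invented example, not the case-study output (which the paper reports as 5.65 out of 15).

```python
import numpy as np

def cog_defuzzify(x, mu):
    """Center-of-gravity defuzzification (Eq. 2) by numerical integration."""
    return np.trapz(mu * x, x) / np.trapz(mu, x)

# Example: a triangular membership function on [0, 15], clipped at
# an antecedent strength of 0.09 (as in the case-study rule).
x = np.linspace(0.0, 15.0, 301)
mu = np.clip(1.0 - np.abs(x - 5.0) / 5.0, 0.0, 0.09)
print(round(cog_defuzzify(x, mu), 2))   # centroid of the clipped shape
```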
Reliability using Petri Net
For calculating the reliability of the system, we define a success rate $f$ for each transition in the CPN. The success rate specifies the probability of the transition firing if nothing goes wrong; conversely, $1 - f$ is the failure rate, the probability of data loss. The token at the input of transition t is assumed to carry an amount $D$ which stands for the accumulated success rate up to that place; when the transition fires, this amount is changed to $D \cdot f$. The following case study is chosen because it is familiar but rich in architectural problems, and thus allows one to concentrate on how to do the analysis rather than explain the problem and domain. An RRS system is a computerized application used to record rents and handle payments; it is typically used in web-based room renting systems. It includes hardware components such as a computer, and software to run the system; it interfaces to various service applications. These systems must be relatively fault-tolerant; that is, even if remote services are temporarily unavailable, they must still be capable of capturing data and handling at least cash payments. After studying the use cases and the use case diagram, the main successful scenario is chosen by the analyzer for the SSD; the analyzer chose the SSD in Fig. 7. Now we gradually transform each sequence into a fuzzy message. The customer should be able to initiate a new rent if two requirements are provided, so the events and conditions of the first message are derived. The scenario considered by the analyst is as follows: for the received money, we have three fuzzy values (not enough, almost enough, indeed enough), and for the date we have two fuzzy values (valid and invalid). Combining the inputs and their corresponding fuzzy values, we have six conditions, in accordance with Table 2. Like the first actor-system message, the other actor-system messages and their fuzzy rules are created by a fuzzy specialist. For system-actor messages we do not need the same mapping, because in an SSD the system is a black box, so only the result the system produces matters, not how it is produced. This message is based on the result from the previous step. The system's reaction to the rule is done via a simple message, but we should define two important things: first, the value(s) that should be returned from the system to the actor, associated with the previous message; and second, that in case of no proper result from the previous message, the customer should resend the previous message. The fuzzy rules for message one are shown in Fig. 8. Fig. 9 shows the output for two fuzzy input variables; accordingly, the output for the mentioned inputs is 5.65 out of 15. Fig. 10 shows the final fuzzy system sequence diagram modeled with a fuzzy Petri Net. To calculate the reliability of each incoming (actor-system) message, we follow the formula given above. With the input money of $5000 and the date of the 20th, we have the following fuzzy values: µ(money = indeed enough) = 0.0; µ(date = valid) = 0.48; µ(money = almost enough) = 0.09; µ(date = invalid) = 0.39; µ(money = not enough) = 0.88. We assume all the transitions have a firing probability of 0.97, except t0 and t00, which have reliability 1.
Path: the tokens may pass all the transitions except t15, and they pass the rest at most once. One important point: in the case of many inputs and one output, as in t3, we take D (in Fig. 6) to be the minimum of the input reliabilities, because t3 needs both of its inputs in order to fire. Therefore, the reliability of a token after passing each transition for message one in the fuzzy SSD, for the given inputs, is as follows: t0 = 1, t00 = 1, t1 = 0.97, t2 = 0.97, t8 = 0.97, t9 = 0.97, t11 = 0.97, t12 = 0.97, t3 = 0.94, t10 = 0.94, t13 = 0.94, t4 = 0.91, t5 = 0.91, t6 = 0.91, t7 = 0.88, t14 = 0.85, t16 = 0.83. The final reliability of message one with the mentioned input is 0.83, because t16 is the last transition the token passes. For each message, we can calculate the reliability based on the path a token moves along and the success rates of the transitions the token encounters in its lifecycle; the reliability of the SSD is then given by the reliabilities of its messages individually.
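The accumulation rule D <- D * f along a token's path can be checked with a few lines of Python; the transition counts below follow the worked example for message one.

```python
def token_reliability(success_rates, d0=1.0):
    """Accumulated token reliability after firing through the given transitions."""
    d = d0
    for f in success_rates:
        d *= f
    return d

# Message one: t0 and t00 have f = 1.0, and the token then fires through
# six transitions with f = 0.97 on its way to t16.
print(round(token_reliability([1.0, 1.0] + [0.97] * 6), 2))  # -> 0.83
```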
Table 2
Event and condition for the 1st message in the F-SSD
Conclusion
The final reliability of the 1st message is 0.83, which is low; this is because the reliability of each transition is 0.97. The final reliability of the SSD depends on the success rate of each transition in each incoming message. The developer of the system should create more reliable components, in other words, transitions with a better success rate. In order to enhance reliability more precisely, the software classes must be programmed. In RUP, these kinds of shortcomings are resolved during the different iterations. Reliability plays an important role in purchasing software and must be considered by system analysts; this is especially significant when dealing with crucial controlling systems. As future work, other UML diagrams, especially the behavioral diagrams, and other non-functional parameters can be evaluated through Petri Nets. Besides UML diagrams, software architectures will be studied later through the same technique.
Fig. 8. Fuzzy rule for the 1st actor-system message.

Analysts of the software can benefit from this research. The study models and evaluates the UML system sequence diagram. The system sequence diagram is first modeled as a fuzzy Petri net, and the model is then analyzed with respect to reliability. Through the formalism of the Petri Net, the analysis of SSDs in terms of nonfunctional parameters is carried out. Finally, the results of these experiments were assessed. | 3,985.8 | 2012-04-01T00:00:00.000 | [
"Computer Science"
] |
Tumor Biochemical Heterogeneity and Cancer Radiochemotherapy: Network Breakdown Zone-Model
Breakdowns of two-zone random networks of the Erdős–Rényi type are investigated. They are used as mathematical models for understanding the incompleteness of the tumor network breakdown under radiochemotherapy, an incompleteness that may result from a tumor’s physical and/or chemical heterogeneity. Mathematically, having a reduced node removal probability in the network’s inner zone hampers the network’s breakdown. The latter is described quantitatively as a function of reduction in the inner zone’s removal probability, where the network breakdown is described in terms of the largest remaining clusters and their size distributions. The effects on the efficacy of radiochemotherapy due to the tumor micro-environment (TME)’s chemical make-up, and its heterogeneity, are discussed, with the goal of using such TME chemical heterogeneity imaging to inform precision oncology.
Introduction
This paper extends our previous work [1] that correlated a tumor's lattice heterogeneities with its radiochemotherapy efficacy, extending it now from an idealized lattice model to a more realistic network model, using a new mathematical approach. It also arrives at some recommendations towards optimized treatment protocols. It is well known that around a tumor there is a special micro-environment (TME) with special characteristics, such as a density gradient, which is very crucial when considering the proper therapies that need to be applied. It is well known that tumors have a center of mass that contains the tumor cells, and this is true in all cases, such as in the original location of the tumor, or at a colony due to metastasis, or even in a laboratory experiment in a xenograft animal model due to implantation. These cells extend throughout the TME in shapes that may have various geometries, some of them very clear or in other cases with a fractal structure. Here we study models that contain such heterogeneities, as developed by the density gradient in the TME. Notably, beyond this mass distribution of the tumor cells, the TME may have a heterogeneity in the distribution of the chemical components, relating to its oxygen content (depletion of O 2 , i.e., hypoxia), acidity (increase in H + , i.e., lowering of pH, i.e., acidosis), or extracellular potassium ions (excess of K + , i.e., hyperkalemia). Such information has recently become amenable to the novel method of chemical imaging [2]. In order to understand a specific tumor's biology, we first need to understand the mass and chemical component distribution, the density gradient, and the connectivity network heterogeneity. All these will have important implications for the optimal medical treatment. The drug and the imaging contrast agent need to overcome the difficulty of the tumor penetration, and certainly both the therapy and the imaging will be affected by such difficulty [3]. To make this point, since we know of the TME's acidity, if one employs chemotherapy, this should be applied only to the tumor periphery, because the TME's acidity does not allow for the drug to reach the tumor center.
Since we know that the O2 concentration is lower at the center of a tumor and higher at its periphery, we clearly see that radiation therapy will not be as effective at the center, which in turn may affect its combination with chemotherapy. Furthermore, it is also possible that the distribution of hyperkalemia (excess of extracellular K+) in the TME may affect the success of immunotherapy [2]. Precision oncology may thus depend on the physical and chemical imaging of the heterogeneities of the patient's tumor.
In the current work, we build a computational model to mimic the heterogeneities at the point of the network's break-up, which may now include the TME's acidity (pH) and its O 2 concentration. The known therapies today include chemotherapy, radiation therapy, combination therapy, immunotherapy and/or surgery. Therefore, direct knowledge of these heterogeneities could show the way towards the optimal treatment route. For the properties that we investigate here, our model refers to the standard combination therapy, which involves radiation therapy as the first step, followed by chemotherapy.
Our results may also be relevant to the understanding of existing protocols that have been empirically derived [4]. It is well known that the earliest and still most common tumor treatment is via chemotherapy [4], i.e., by employing drugs. However, such drug doses have a severe drawback, as they are limited in their efficiency by their notorious side effects. Additionally, tumors use one of their "chemical weapons", specifically the acidity (acidosis) of the TME, and thus they resist chemotherapy [5][6][7][8][9]. This "acidosis" (low pH) of the TME was discovered over a century ago by Warburg [10]. Unfortunately, drug molecules decompose in the presence of such acidity. Thus, the right treatment protocol now must start with radiation therapy, which is then followed by chemotherapy [4]. The idea now is, instead of having a uniform TME in the area surrounding the tumor, to initially break up the extended network of the tumor cells into a number of isolated "clusters". The drug molecules will now be able to avoid the acidic portions of the TME, owing to the cluster break-up, and thus they will survive until reaching the tumor cells and can properly perform their function. The topic of this study is to develop a mathematical model of such a break-up of the tumor network, so that the proper therapy can be planned.
In our previous work, we used a randomized two-component lattice model. Such a well-known model, the so-called percolation model [11,12], shows a phase transition that depends on the ratio of the concentrations of the two components. Such a transition is of higher order than the customary first-order phase transitions, and it is highly nonlinear, as it is properly described by power laws. Geometrically, it describes the formation or break-up of a connected cluster. The formation of such an extended cluster is mathematically equivalent to its break-up process, with the break-up process being called inverse percolation [13]. Both the network formation and its break-up occur at a "critical concentration" of the relevant component [11,12]. Here, we extend our previous lattice work on the break-up of the largest connected cluster, as we now employ random networks of the Erdős-Rényi type [14,15]. We model a real tissue network by such an Erdős-Rényi network that is being broken up, and thus it contains two parts, live and dead tumor cells, with the dead cells due to radiation treatment. In the new network model, we randomly break links between the nodes and totally remove nodes, in an effort to mimic how cells are randomly killed by the radiation. Additionally, with the presence of hypoxia (the absence of tissue oxygen), tumors may have a "chemical weapon" against radiation therapy. This low concentration of oxygen in the TME has also been known for over a century, also due to Warburg [10]. As the tumor cells exhibit accelerated growth and multiplication, resulting in an enhanced metabolism, this leads to hypoxia. Of course, the presence of O2 molecules controls the chemical mechanism of cell-kill by radiation. Specifically, the O2 molecules have a "triplet" ground state and also a higher energy "singlet" state. The radiation energy moves the molecules from their triplet to their singlet state. The singlet oxygen that is produced kills the cells, and as such it has been called "killer oxygen", as it produces the so-called "reactive oxygen species" (ROS) [16]. The OH radical molecule is a typical ROS member, while the singlet oxygen molecule itself is another one. The oxygen depletion will not be uniform over the entire TME, but will be higher at the tumor's center and lower away from the center at its periphery. This is because in the periphery the metabolized oxygen molecules are constantly replenished by oxygen diffusion from the nearby, oxygen-rich, normal tissue, which has zero hypoxia. We thus build a model to include the above attributes, where the radiation-based cell-kill may be most effective at the tumor's outer shell (periphery) and least effective at its center. This leads us to use distinct zones in the Erdős-Rényi network, zones which have different removal probabilities. We thus apply an originally random distribution of tumor cells, with a zone-to-zone density gradient of kill probability. As a first step towards illustrating this approach, we earlier used a simple two-dimensional "onion-like" shelled lattice model [1]. We showed in that work that the "critical concentration" for the live tumor cell network break-up depends strongly on the ratio of removal probabilities in the different zones. We discuss the potential ramifications for radiotherapy and combination radiochemotherapy, with suggestions for an optimally shaped radiation beam and potential methods for tumor oxygenation before radiation.
We give graphical illustrations of our preliminary insights regarding the efficacy of the radiotherapy. It is true that more specialized geometries may be needed in the future that are characteristic of specific tumors and their specific TMEs, thus necessitating personalized precision radiation oncology therapy.
In this model, we focus on quantities such as the giant component, but we have to acknowledge similar works that have presented other measures as the number of nodes and edges in the generalized k-core against attacks in the network [17,18]. For the purposes of completeness, we cite [19] where a wide range of mathematical and computational tools for cancer research are analysed.
Method of Simulation
We generate random networks of the Erdős-Rényi type with N nodes [20,21]. The Erdős-Rényi type of network, also called a random network, is a network type in which any two nodes are connected with a pre-defined probability, p. Thus, when constructed, each node is connected to a number of k_i nodes, where k_i is the degree of node i. A characteristic value of the network is <k>, the average degree of all nodes of the network. The degree distribution of the nodes is a Poisson distribution. In Figure 1 we present a typical network of this type. We expect that the nodes with the higher degree would be the most central and the most connected nodes in the network. However, the degree of the nearest neighbors of each node is equally important. As stated in [22], the k-shell of a node reveals how central this node is in the network with respect to its neighbors, meaning that a higher k-shell value signifies a more central node belonging to a more connected neighborhood in the network.
We construct two (2) models in which the nodes are divided into zones with different removal probabilities. In the first model (k-sorted model), we rank all nodes from highest to lowest according to their degree. Then we generate two zones, one containing all nodes from the highest degree down to the node with the median degree, and a second zone containing the nodes from the median degree down to the nodes with the smallest degree. In the second model (k-shell model), we use the well-known k-shell decomposition, thus taking the network apart to get to its core, i.e., the part of the network with the highest connectivity. Once we do this, we again divide the network into two (2) zones. We then start removing nodes randomly with different probabilities in the two zones. When a node is removed, all of its links are also removed. When nodes are picked from the outer zone, they are always removed with probability p_ext = 1. When they are picked from the inner zone, they are removed with a probability equal to (1 − r) * p_ext, where r is defined the same way as in [1]. It is the rate of the reduction of the removal probability between the two zones in both models. When r = 0, the removal probability in the inner zone is also 1. We vary the value of r in the range 0 < r < 1. After removing several nodes, we reach the point when there is no longer a spanning cluster in the network, as it has now been broken up into a large number of pieces. The point of this breakdown [23,24] is reached when κ = 2, where κ = <k²>/<k>. When this point is reached, we stop the removal process.
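As a rough illustration of the removal procedure just described, the sketch below generates an Erdős–Rényi network and removes randomly picked nodes with zone-dependent probabilities until κ = <k²>/<k> drops to 2. It is only a minimal stand-in written with networkx, not the code released in the repository [26]; the inner-zone set and the value of r are placeholders supplied by the zone-assignment models described next.

    import random
    import networkx as nx

    def kappa(G):
        # kappa = <k^2>/<k>; the network is considered broken up when kappa <= 2
        degrees = [d for _, d in G.degree()]
        mean_k = sum(degrees) / len(degrees)
        mean_k2 = sum(d * d for d in degrees) / len(degrees)
        return mean_k2 / mean_k

    def break_up(G, inner_zone, r):
        """Remove randomly picked nodes until kappa <= 2.  Nodes in inner_zone
        are removed with probability (1 - r); outer-zone nodes with probability 1."""
        G = G.copy()
        removed = 0
        while kappa(G) > 2:
            node = random.choice(list(G.nodes()))
            p_remove = (1 - r) if node in inner_zone else 1.0
            if random.random() < p_remove:
                G.remove_node(node)   # the node's links are removed with it
                removed += 1
        return G, removed

    # <k> = 20 and N = 10,000 correspond to an edge probability of 20/(N - 1).
    N, mean_degree = 10_000, 20
    G = nx.gnp_random_graph(N, mean_degree / (N - 1), seed=1)
    # inner_zone would be produced by one of the zone-assignment models below.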
k-Sorted Model
This is the model with the sorted k values. We divide all the network nodes into 2 zones according to their degree, i.e., the more connected nodes (higher k values) will belong to the internal zone and the less connected ones (lower k values) to the external zone. We calculate the median k of the k values of all the network nodes and assume that this k will be the limit separating the 2 zones. Every node that has a larger k will belong to the internal zone, while every node with smaller k belongs to the external zone. In order to divide the nodes roughly equally between the two zones, in half of our realizations the nodes with k equal to the limit belong to the internal zone, and in the other half they belong to the external zone. We will then show results of simulations for ER networks with <k> = 20 and N = 10,000, at the point of breakdown κ = 2, where κ = <k²>/<k>.
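A possible way to build the two zones of the k-sorted model, under the conventions above (median degree as the boundary, with ties assigned to the inner zone in half of the realizations), is sketched below; the function name and the tie-breaking flag are our own.

    import statistics

    def k_sorted_zones(G, ties_to_inner=True):
        """Split nodes into inner/outer zones by the median degree.
        Nodes whose degree equals the median go to the inner zone when
        ties_to_inner is True and to the outer zone otherwise."""
        degrees = dict(G.degree())
        median_k = statistics.median(degrees.values())
        inner, outer = set(), set()
        for node, k in degrees.items():
            if k > median_k or (k == median_k and ties_to_inner):
                inner.add(node)
            else:
                outer.add(node)
        return inner, outer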
k-Shell Model
This is the model built by the k-shell decomposition method. Initially, following the procedure explained in [25] we decompose the network into k shells. We recursively remove nodes with degree k or less, by increasing k, starting with k = 1. Each k-shell includes the nodes that were removed when all nodes with degree k were removed. This procedure is continued for k = 2, 3 etc., for all k values, and it stops when all nodes are removed. We divide the network nodes into 2 zones according to their k-shell. We assume that the nodes of the nucleus, i.e., the nodes of the last k-shell, will belong to the internal zone while the nodes of all the other k shells will belong to the external zone. The network breaks down when κ reaches the limiting value of κ = 2.
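For the k-shell model, the decomposition itself is available in standard graph libraries; a minimal sketch using networkx's core_number routine (which returns each node's k-shell index) is given below, with the nucleus taken to be the highest shell, as described above. The function name is ours.

    import networkx as nx

    def k_shell_zones(G):
        """Inner zone = nodes of the innermost (highest) k-shell, i.e. the
        nucleus left by the k-shell decomposition; outer zone = all others."""
        shell = nx.core_number(G)          # node -> k-shell index
        k_max = max(shell.values())
        inner = {n for n, s in shell.items() if s == k_max}
        outer = set(G.nodes()) - inner
        return inner, outer

For a dense ER network such as the ones used here, the innermost shell typically contains a large fraction of the nodes, which is consistent with the observation reported below that most of the nodes end up in the inner zone of the k-shell model.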
The implemented python codes for both models are freely available in an open access repository [26].
Results
We implement the two models that we discussed above using regular Erdős-Rényi networks. For each model, we divide the network into two zones, an inner zone and an outer zone. We then start to take out nodes (and their connected links) randomly, with two different probabilities for the inner and outer zone. We stop when a spanning cluster ceases to exist, and we monitor several properties of the resulting network. In Figure 2, we plot the number of clusters that have remained at the network, as a function of r, for both models at the critical point of the network breakdown, i.e., when κ = 2. We observe that the higher the difference of the removal probabilities between the two zones is, the smaller is the number of generated clusters. This observation applies to both models that we investigated, albeit with a considerable difference between the two. We observe that with the k-shell model fewer nodes need to be removed so as to achieve network breakdown, which signifies the importance of considering the degree of the nearest neighbor, as well as the degree of the neighbor of the nearest neighbor as an impactful factor. In Figure 3, we monitor the number of nodes that have to be removed so that the critical point is reached. We observe here that the higher the difference of the removal probabilities between the two zones is, the larger is the number of removed nodes needed so as to reach the critical point.As seen from Figures 2 and 3, increasing the parameter r above 0.5 does not improve the break-up of the network by much. This could guide the radiation therapist's dose increases applied to a given tumor, when combined with repeated CT imaging of the tumor cell network, e.g., when combined with a tumor cell targeted contrast element [27].
The distributions of the sizes of the clusters of Figure 2 are plotted in Figure 4 (for r = 0 and for r = 0.8). We observe in Figure 4, when the removal probabilities are the same for both zones, that the distributions are identical (black dots). When the inner zone has a smaller probability than the outer zone, there is just a small difference, with slightly more small-sized clusters in the k-shell than in the k-sorted model, which agrees with the results of Figure 2. However, this difference is within statistical error. It is nevertheless interesting to see that there exists a very good linear relationship in the log-log plots, which means that all these distributions follow a nice power-law form, for both models. We calculate the slopes of the straight lines in Figure 4 to be −2.37 (black), −2.29 (blue) and −2.34 (red). Surprisingly, within statistical error, the slopes of the cluster size distribution for the two cases, with zones of different probabilities and without any zones, are similar when the network breaks down. This implies that depleting zones with different probabilities plays no role in the fragments that are broken off the main body of the network at the critical point, exactly when a spanning cluster ceases to exist. For the case of a network breakdown with only one zone (i.e., equal probabilities of node removal in the entire network), there is an analytical solution for the relative size of the largest cluster S as a function of the number of removed nodes [28], where S is defined as the ratio of the number of nodes that belong to the largest cluster to the number of remaining nodes in the network. In Figure 5, we plot this analytical solution together with our simulation results only for the case of r = 0. To derive the analytical solution for S, we first solve for u, the smallest non-negative solution of the equation u = G_1(u), where G_1 is determined by P(k), the degree distribution of the network. We observe excellent agreement between the simulations and the analytical solution. Notice that the normalization on the y-axis is done by dividing the size of the largest cluster by the total number of nodes that are present at any given value of f. If the normalization is done by dividing by N, the initial total number of nodes, then the curve is almost linear, in agreement with Albert-Barabási [28]. In order to have an indication of how the network changes during the entire removal process, we show in Figure 6 the change in the size of the largest cluster as a function of the fraction of the removed nodes, which, in essence, is a function of time. When the removal probabilities are identical in the two zones, we have identical results (black line). When the inner zone has a smaller removal probability than the outer zone, we observe that the largest cluster shrinks faster for the k-shell model (blue) than for the k-sorted model (red).
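For the one-zone case r = 0 discussed above in connection with Figure 5, the fixed-point equation simplifies considerably for an ER network, because the subgraph that survives random node removal is again an ER network with mean degree c(1 − f). The sketch below solves the resulting self-consistency equation S = 1 − exp(−c(1 − f)S) by simple iteration; this specialization and the printed values are our own illustration, not reproduced from [28].

    import numpy as np

    def giant_component_fraction(c, f, tol=1e-12, max_iter=10_000):
        """Fraction S of the *remaining* nodes in the largest cluster after
        randomly removing a fraction f of nodes from an ER network with mean
        degree c, using the fixed point S = 1 - exp(-c (1 - f) S)."""
        S = 1.0
        for _ in range(max_iter):
            S_new = 1.0 - np.exp(-c * (1.0 - f) * S)
            if abs(S_new - S) < tol:
                break
            S = S_new
        return S_new

    # Example with <k> = 20, as in the simulations.
    for f in (0.0, 0.5, 0.9, 0.95, 0.99):
        print(f, round(giant_component_fraction(20, f), 4))

Note that S vanishes when c(1 − f) < 1, i.e. for f above roughly 0.95 when <k> = 20, which is consistent with the breakdown criterion κ = 2 used in the simulations.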
Considering Figure 6, we clearly see that by reducing the node removal probability in the inner zones (red and blue lines), the size of the remaining network ("largest cluster") increases significantly. Say at 95% removal (9500 nodes removed, f = 0.95), compared to the black line (r = 0), the k-shell model of removal (blue line) gives a roughly 3 times larger "largest cluster" S (the remaining intact network), while the alternative model (red line) gives an about 7 times larger "largest cluster" S. This is a manifestation of the fact that in the k-shell model about 85% of the network nodes belong to the inner zone, while in the k-sorted model almost exactly half of the nodes do. This is so because in the k-shell model the number of nodes in the inner zone comes out automatically as the denser part of the network, while in the k-sorted model we define the cutoff line as the median in the list of the sorted nodes according to their k value. Finally, we also implemented some other algorithms in an effort to properly divide the network nodes into zones. First, we randomly chose half of the nodes to belong to the inner zone and the other half to the outer zone. We then applied the algorithm with reduced probability of removal in the inner zone, as compared to the outer zone. The results for the properties monitored earlier, i.e., the number of clusters, the number of nodes removed, the size of the largest spanning cluster, etc., did not follow any particular trend but just produce random noise, meaning that the random division of nodes into zones does not properly identify the system's core.
Next, we tried an embedding algorithm so as to cast the network on a two-dimensional surface, which would give measurable geometrical distances (x and y values) between the nodes. We did this by using two algorithms, a force directed algorithm, and the node2vec embedding algorithm [29]. The results did not follow a specific pattern, just as in the above random zones approach. Again, we conclude that these are not proper ways for identifying the core of the network.
Discussion and Conclusions
Qualitatively, the results of these network models resemble those of the lattice model [1]: A drop in the node removal (cancer kill) probability between the tumor's external and internal zones decelerates the network's breakdown. Quantitatively, at about 95% site (cell) destruction, we here get an order of magnitude increase in the size of the surviving network (see Results section). Medically this would be bad news, of course. However, it is unlikely that the radiation beam hitting the inner zone of the tumor (its center) would be weaker than that hitting the outer zone (periphery). On the other hand, the chemical factor that reduces the radiotherapy efficacy, hypoxia, is more significant at the tumor's center, while at its periphery the oxygen level is expected to be that of normal tissue, where there is no hypoxia, by definition [10]. Similarly, the therapy resistance causing acidosis and hyperkalemia are more severe in the tumor's central zone [2,[5][6][7][8][9][10], where acidosis is going to reduce the efficacy of both chemotherapy and radiochemotherapy. Potentially, remedying such radiotherapy failures as demonstrated here may involve intensifying the radiation beam center, i.e., hitting the tumor's center harder; however, technical and medical challenges will have to be addressed. On the other hand, ways may be found for reducing the hypoxia in the tumor's center using micro-or nano-technology.
In conclusion, this new mathematical network breakdown model may illuminate present therapy challenges, due to tumor hypoxia and acidosis, and inspire future solutions regarding cancer radiotherapy and radiochemotherapy. In addition, due to the increased hyperkalemia in the tumor's inner zone, this 2-zone model may as well be relevant to immunotherapy. Having shown the importance of a reduced site elimination probability in a network's inner zone underlines the importance of chemical imaging of a tumor's inner zone heterogeneities, whether in a patient or a patient's xenograft.
Finally, we point out that chemical images of real tumors, in vivo, together with tumor therapy efficacy maps, are being derived in one of our laboratories. The derived tumor maps could serve as the matrix for future analysis, as has been done here, regarding the r parameter, so as to help optimize the spatial contours of the radiation beam.
Data Availability Statement:
The data presented in this study are openly available in an open access repository [26]. | 5,495 | 2022-08-01T00:00:00.000 | [
"Medicine",
"Mathematics"
] |
The symmetry algebras of Euclidean M-theory
We study the Euclidean supersymmetric D=11 M-algebras. We consider two such D=11 superalgebras: the first is N=(1,1) self-conjugate complex-Hermitean, with 32 complex supercharges and 1024 real bosonic charges; the second is N=(1,0) complex-holomorphic, with 32 complex supercharges and 528 bosonic charges, and can be obtained by analytic continuation of the known Minkowski M-algebra. Due to Bott periodicity, we first study the generic D=3 Euclidean supersymmetry case. The role of complex and quaternionic structures for D=3 and D=11 Euclidean supersymmetry is elucidated. We show that the additional 1024-528=496 Euclidean tensorial central charges are related to the quaternionic structure of the Euclidean D=11 supercharges, which in complex notation satisfy an SU(2) pseudo-Majorana condition. We also consider the corresponding Osterwalder-Schrader conjugations, which for the N=(1,0) case imply the reality of the Euclidean bosonic charges. Finally, we outline some consequences of our results, in particular for D=11 Euclidean supergravity.
Introduction.
The physical spacetime is Minkowskian, but there are several reasons justifying the interest in Euclidean theories. We can recall here that i) The functional integrals acquire a precise mathematical meaning only in the context of Euclidean quantum theory (see e.g. [1,2]).
iii) The generating functional of an Euclidean field theory can be related to the description of statistical and stochastic systems ( [5,6]).
At present the D = 11 M-theory is considered the most recent proposal for a "Theory of Everything" (see e.g. [7,8]). We still do not know the dynamical content of the M-theory; however, it seems that the algebraic description of its symmetries is well embraced by the so-called M-algebra [9,10]

    {Q_A, Q_B} = P_AB = (C Γ^μ)_AB P_μ + (C Γ^[μν])_AB Z_[μν] + (C Γ^[μ1...μ5])_AB Z_[μ1...μ5] ,   (1.1)

where the Q_A (A = 1, 2, . . . , 32) are 32 D = 11 real Minkowskian supercharges, P_μ describe the 11-momenta, while the remaining 517 bosonic generators describe the tensorial central charges Z_[μν] and Z_[μ1...μ5]. It should be stressed that the M-algebra (1.1) is the generalized D = 11 Poincaré algebra with the maximal number of additional bosonic generators. These additional generators indicate the presence of D = 11 M2 and M5 branes. Indeed, it has been shown (see e.g. [11,12]) that D = 11 supergravity contains the super-2-brane (supermembrane) and super-5-brane solutions.
Our aim here is to study the Euclidean counterpart of the M-algebra, described by the relation (1.1). The problem of Euclidean continuation of superalgebras is not trivial, because the dimensionality of Minkowski and Euclidean spinors may differ, as it is wellknown from D = 4 case (see e.g. [13]- [16]). In a four-dimensional world the Minkowski spinors are C 2 (two-dimensional Weyl spinors) which can be described as R 4 Majorana spinors, but the fundamental D = 4 Euclidean spinors are described by we deal with a pair of D = 3 Euclidean spinors) or C 2 ⊗ C 2 = C 4 , i.e. the number of spinor components is doubled. Further, one can describe the D = 4 Minkowski Dirac matrices as real four-dimensional ones (the so-called Majorana representation), but the four-dimensional D = 4 Euclidean Dirac matrices are necessarily complex. The doubling of spinor components is reflected in the analytic continuation procedure, and the reality condition in D = 4 Minkowski space is replaced by the so-called Osterwalder-Schrader reality condition [16,17]).
In D = 11 Minkowski case the fundamental spinors are R 32 , and the corresponding fundamental representation of the Minkowskian D = 11 Clifford algebra is R 32 × R 32 , which allows writing the M-algebra (1.1) as a real algebra. In the D = 11 Euclidean case the fundamental spinors are H 16 , while the fundamental Hermitean Clifford algebra representation in D = 11 Euclidean space is C 32 × C 32 . We shall describe the Euclidean M-algebra using Hermitean products of complex Hermitean D = 11 Euclidean gamma-matrices satisfying the Euclidean counterpart of algebraic relations (1.2): If we introduce 32 complex supercharges Q A we get the following formula for the Euclidean D = 11 superalgebra where on r.h.s. of (1.4) all linearly independent Hermitean antisymmetric products of Γ-matrices appear. Since in D = 11 Euclidean space we get the relation we obtain the identity (1.6) Applying (1.6) for k = 2 and 3 one can write the relation (1.4) as follows where and Z [µ 1 ...µ 4 ] which describe the maximal complex Hermitean extension of the set of real bosonic charges occurring in the Minkowski case. We shall show, however, that one can find a holomorphic subalgebra of (1.4) defining holomorphic Euclidean M-algebra with 32 complex supercharges and 528 bosonic generators.
In our paper, in order to be more transparent, we study at first in Section 2 the lower-dimensional case of D = 3 Euclidean superalgebra.
It appears that to 528 bosonic charges of Minkowski M-algebra In Section 3 we study in more detail the D = 11 case and in particular the D = 11 tensorial structure of 496 Euclidean central charges. We introduce in D = 3 and D = 11 the Osterwalder-Schrader conjugation which is required if we wish to obtain the holomorphic Euclidean M-theory with real bosonic charges. In Section 4 we present an outlook, considering in particular the possible applications to D = 11 Euclidean superbrane scan as well as the Euclidean version of the generalized AdS, dS and conformal superalgebras. We would also like to recall here that recently Euclidean symmetry and Euclidean superspace was considered as a basis for noncommutative supersymmetric field theory [18]- [20].
2 The D = 3 Euclidean superalgebra and the role of quaternionic and complex structure.
The D = 3 Euclidean spinors are described by real quaternions (q 0 , q r ∈ R; r = 1, 2, 3) The quaternionic spinor (2.1) is modified under Sp(1) transformation law as follows One can describe the real quaternion q ∈ H(1) by a pair of complex variables (z 1 , z 2 ). Further, introducing the 2 × 2 complex matrix representation e r = −iσ r , one can represent unit quaternions (2.2) as 2 × 2 unitary matrices A: i.e. Sp(1) ≃ SU (2). We should assume that D = 3 Euclidean supercharges are the SO(3) spinors. Unfortunately, since the Clifford algebra has the fundamental C 2 ×C 2 representation, one can not employ single quaternionic supercharges as describing the Hermitean D = 3 N = 1 Euclidean superalgebra. In fact, if we introduce the quaternionic Hermitean superalgebra with supercharges described by fundamental SO (3) spinor where R = R 0 + e r R r → R = R 0 − e r R r describes the quaternionic conjugation, it will contain only one bosonic charge Z ∈ H (Z = Z → Z = Z 0 ) and can be successfully used rather for the description of D = 1 N = 4 supersymmetric quantum mechanics [18]. In order to obtain the "supersymmetric roots" of D = 3 Euclidean momenta one should introduce however, in agreement with the representation theory of D = 3 Euclidean Clifford algebra (2.5), two complex supercharges ( Q 1 , Q 2 ). One can write the D = 3 Euclidean superalgebra in the following familiar complex-Hermitean form We see that among the bosonic generators besides the three momenta we obtain a fourth real central charge Z. In fact (2.7) can be obtained by dimensional reduction from standard D = 4 N = 1 super-algebra. In order to find the quaternionic structure in the superalgebra (2.7) one should introduce the following pair of two-component spinors The formula (2.9) implies quaternionic reality condition in complex framework [21]- [23] described by SU(2)-Majorana condition. Indeed, introducing R 1 α = R α , R 2 α = R H α one can rewrite (2.9) in the following way [21] R a α = i ε ab ε αβ (R b β ) * . (2.10) The self-conjugate super-algebra (2.7) can be written as follows: and describes the N = (1, 1) D = 3 Euclidean supersymmetry.
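For concreteness, the explicit 2 × 2 matrix form implied by the convention e_r = −iσ_r introduced at the beginning of this section can be written out; the following display is our own illustration, using the standard Pauli matrices, and is not part of the original derivation.

    A = q_0 \mathbb{1} + q_r e_r = q_0 \mathbb{1} - i\, q_r \sigma_r
      = \begin{pmatrix} q_0 - i q_3 & -q_2 - i q_1 \\ q_2 - i q_1 & q_0 + i q_3 \end{pmatrix},
    \qquad
    \det A = q_0^2 + q_1^2 + q_2^2 + q_3^2 .

For a unit quaternion (|q| = 1) one gets det A = 1 and A†A = 1, so A ∈ SU(2), which is the statement Sp(1) ≃ SU(2) used above.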
It is easy to check that the equations (2.11a-2.11c) are consistent with the relation (2.9), i.e. the SU(2)-Majorana reality condition (2.10) can be imposed.
The analytic continuation of the Minkowski superalgebra (2.12) to the Euclidean one, given by (2.11a), is obtained by complexification of the real D = 3 Minkowski spinors and Wick rotation of Minkowski vectors into Euclidean ones. We replace Q α ∈ R by Q α ∈ C and where C = ε = iγ 2 and C γ r = − γ T r C, i.e. we keep the same D = 3 charge conjugation matrices in Euclidean and Minkowski case. One gets (r = 1, 2, 3) i.e. one can identify after putting Q E α = R α the superalgebras (2.15) and (2.11a). In order to justify the real values of P r in (2.15) one should introduce the Osterwalder-Schrader (OS) conjugation A → A # , which is defined in any dimension D with complex Euclidean spinors replacing real Majorana spinors as complex conjugation supplemented by time reversal transformation, i.e.
The real values of P r in (2.15) are required if we assume the invariance of the superalgebra (2.15) under OS conjugation, i.e. the conjugation (2.9). This reality requirement is satisfied inside the superalgebra (2.11a-2.11c) and is equivalent to the consistency of (2.11a-2.11c) with subsidiary condition (2.10). Indeed performing such a conjugation one obtains from the N = (1, 0) superalgebra (2.11a) the identical in its form N = (0, 1) superalgebra (2.11c).
ii) It is not possible to impose the OS reality conditions on single pair of super- because such a condition is not consistent, i.e. the superalgebra (1, 0) can not be made selfconjugate.
3 The Euclidean M -algebra and the role of quaternionic and complex structure.
In this Section we shall translate the Euclidean superalgebra structures from D = 3 to D = 11. Due to Bott periodicity these algebraic structures should be analogous. The D = 11 Euclidean spinors are described by 16 quaternions, R_m ∈ H^16 (m = 1, . . . , 16), and the fundamental representation of the D = 11 Euclidean Clifford algebra (1.3) is C^32 × C^32. The quaternionic D = 11 Hermitean superalgebra, generalizing relation (2.6), involves a 16 × 16 Hermitean-quaternionic matrix Z_mn, which is described by 496 real bosonic Abelian generators.
In order to describe the complex-Hermitean Euclidean M-algebra we should introduce 32 complex supercharges Q A ∈ C 32 . The most general complex-Hermitean D = 11 Euclidean algebra is given by the relation containing 1024 real bosonic charges. We introduce D = 11 Euclidean gamma-matrices by putting where the matrices Γ µ , Γ 0 describe the real 32-dimensional Let us introduce the following pair of 32-component complex supercharges: where the D = 11 charge conjugation matrix satisfies the relations The superalgebra (1.7) can be written in the following form (C = Γ 0 ; µ = 1, . . . 11) We see that i) The relation (3.9a-3.9c) describe the selfconjugate (1, 1) Euclidean M-algebra with 1024 bosonic Abelian charges which can be written also in the form (1.4) ii) The relation (3.9a) describes the holomorphic (1, 0) Euclidean M-algebra, with 528 Abelian bosonic charges. The antiholomorphic (0, 1) Euclidean M-algebra obtained by conjugation (3.7), which contains also 528 Abelian bosonic charges, is given by the relation (3.9c).
In such a way we shall obtain from (1.1) the holomorphic Euclidean M-algebra (3.9a).
The quaternionic conjugation (3.7) describes the D = 11 OS conjugation in Euclidean space. We would like to point out that in holomorphic and antiholomorphic Euclidean M-algebra (i.e. if we consider the relations (3.9a) and (3.9c) as separate) the bosonic generators, in particular the 11-momenta, can be complex.
In order to obtain e.g. in (3.9a) the real Abelian bosonic generators . one should impose the invariance of the superalgebra (3.9a) under the OS conjugation (3.7), i.e. assume that the form of holomorphic and antiholomorphic Euclidean Malgebras related by OS conjugation is identical.
Conclusions.
In relation with Euclidean M-theories and their algebraic description presented in this paper we would like to make the following comments: i) It is well-known [9] that the presence of tensorial central charges in generalized D-dimensional supersymmetry algebra can be linked with the presence of p-brane solution of D-dimensional supergravity. We have two D = 11 Euclidean M-theories described by (1,1) self-dual (see (2.7)) and (1, 0) holomorphic (see (2.11a) or (2.15)) supersymmetry algebras. In Euclidean theory the role of p-brane solutions will be played by Euclidean instantons and space branes (S-branes). It appears that in holomorphic N = (1, 0) Euclidean M-theory the set of instanton solutions corresponds to p-dimensional solutions in standard Minkowskian D = 11 M-theory (2-branes and 5-branes, supplemented by six-dimensional Kaluza-Klein monopoles and ninedimensional Hořava-Witten boundaries). The Euclidean N = (1, 1) M-theory with symmetry algebra (2.7) will have additional instanton or S-brane solutions corresponding to 3-tensor and 4-tensor central charges which do not have their Minkowski space counterparts.
ii) The superalgebras either with Hermitean selfconjugate algebra structure or holomorphic structure can be considered in any dimension with complex fundamental spinors (D = 0, 4 modulo 8 for Minkowski metric and D = 2, 6 modulo 8 for Euclidean metric) or fundamental quaternionic spinors (D = 5, 6, 7 modulo 8 for Minkowski metric and D = 3, 4, 5 modulo 8 for Euclidean metric). The physical choice of the algebraic structure of supersymmetry is indicated by the presence in the bosonic sector of the vectorial momentum generators. For example in D = 4 one can choose either the Hermitean algebra (α, β = 1, 2) or the pair of holomorphic/antiholomorphic algebras: Since the four-momentum generators are present only in the relation (4.1), these relations are the basic D = 4 supersymmetry relations.
In the quaternionic D = 3 and D = 11 Euclidean case the Hermitean algebras (2.11b) and (3.9b) do not contain the momentum generators; these generators occur in the superalgebras (2.11a,2.11c) and (3.9a,3.9c). We see therefore that in these cases the holomorphic/antiholomorphic algebra is more physical.
iii) If the fundamental spinors are complex, from algebraic point of view one can consider the minimally extended supersymmetry algebra with either Hermitean or holomorphic complex structure. If we assume however that the Hermitean anticommutator {Q + A , Q B } as well as the holomorphic one {Q A , Q B } are saturated by Abelian bosonic generators (tensorial central charges), we obtain the most general real superalgebra.
In such a way in D = 4 Minkowski case six tensorial central charges are generated by the relations (4.2), while the Hermitean superalgebra (4.1) describes only the fourmomentum generators.
In quaternionic case we have two levels of generalizations. Assuming that the fundamental spinors belong to H n , one can consider: 1) generalized Hermitean superalgebra for the supercharges belonging to C 2n . In such a way we generate 4n 2 real Abelian bosonic generators.
2) One can write down also the most general real superalgebra, with supercharges belonging to R 4n . In such a way we obtain 2n(4n + 1) real Abelian generators. | 3,574.2 | 2003-12-09T00:00:00.000 | [
"Mathematics"
] |
PyExaFMM: an exercise in designing high-performance software with Python and Numba
Numba is a game-changing compiler for high-performance computing with Python. It produces machine code that runs outside of the single-threaded Python interpreter and that fully utilizes the resources of modern CPUs. This means support for parallel multithreading and auto-vectorization if available, as with compiled languages such as C++ or Fortran. In this article we document our experience developing PyExaFMM, a multithreaded Numba implementation of the Fast Multipole Method, an algorithm with a non-linear data structure and a large amount of data organization. We find that designing performant Numba code for complex algorithms can be as challenging as writing in a compiled language.
(GIL).Libraries for high-performance computational science can bypass the GIL by using Python's C interface to call extensions built in C or other compiled languages, which can be multithreaded or compiled to target special hardware features.Popular examples of this approach include NumPy and SciPy, which have together helped to propel Python's popularity in computational science by providing high-performance data structures for numerical data as well as interfaces for compiled implementations of common algorithms for numerical linear algebra, differential equation solvers, and machine learning, among others.As the actual number crunching happens outside of the interpreter, the GIL only becomes a bottleneck to performance if a program must repeatedly pass control between the interpreter and non-Python code.This is typical when an optimized compiled language implementation of your desired algorithm doesn't exist in the Python open-source ecosystem, or if a lot of data organization needs to happen within the interpreter to form the input for an optimized NumPy or SciPy code.Previously, an unlucky developer would have had to tackle these issues by writing a compiled-language implementation and connecting it to their Python package, relegating Python's role to an interface.Many computational scientists may lack the software skills or interest in developing and maintaining complex codebases that couple multiple languages.
This scenario is where Numba comes in [1]. It is a compiler that targets and optimizes Python code written with NumPy's n-dimensional array data structure, the ndarray. Its power derives from generating compiled code optimized for multithreaded architectures from pure Python. Numba promises the ability to develop applications that can rival C++ or Fortran in performance, while retaining the simplicity and productivity of working in Python. We put this promise to the test with PyExaFMM, an implementation of the three-dimensional kernel-independent fast multipole method (FMM) [2], [3]. PyExaFMM is open-source, as are the scripts and Jupyter notebooks used to run the experiments in this paper. Efficient implementations of this algorithm are complicated by its reliance on a tree data structure and a series of operations that each require major data organization and careful memory allocation. These features made PyExaFMM an excellent test case to see whether Numba could free us from the complexities of developing in low-level languages.
We begin with an overview of Numba's design and its major pitfalls. After introducing the data structures and computations involved in the FMM, we provide an overview of how we implemented our software's data structures, algorithms, and application programming interface (API) to optimally use Numba.
BRIEF OVERVIEW OF NUMBA
Numba is built using the LLVM compiler infrastructure 4 to target a subset of Python code using ndarrays.LLVM provides an API for generating machine code for different hardware architectures such as CPUs and GPUs and is also able to analyze code for hardware-level optimizations such as auto-vectorization, automatically applying them if they are available on the target hardware [4].LLVM-generated code may be multithreaded, bypassing the GIL.Numba uses the metadata provided by ndarrays describing their dimensionality, type, and layout to generate code that takes advantage of the hierarchical caches available in modern CPUs [1].Altogether, this allows code generated by Numba to run significantly faster than ordinary Python code, and often be competitive with code generated from compiled languages such as C++ or Fortran.
From a programmer's perspective, using Numba (at least naively) doesn't involve a significant code rewrite. Python functions are simply marked for compilation with a special decorator; see listings (1), (2) and (3) for example syntax. This encapsulates the appeal of Numba: the ability to generate high-performance code for different hardware targets from Python, letting Numba take care of optimizations, would allow for significantly faster workflows than is possible with a compiled language.
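As a point of reference for the decorator syntax mentioned here (the listings themselves are not reproduced in this text), a minimal Numba-compiled function looks roughly as follows; the function body is a toy example of ours, not one of the paper's listings.

    import numpy as np
    from numba import njit

    @njit(cache=True)
    def axpy(alpha, x, y):
        # Compiled to machine code on first call; later calls bypass the interpreter.
        out = np.empty_like(x)
        for i in range(x.shape[0]):
            out[i] = alpha * x[i] + y[i]
        return out

    x = np.ones(1_000_000)
    y = np.arange(1_000_000, dtype=np.float64)
    axpy(2.0, x, y)   # first call triggers compilation and (un)boxing at the boundary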
Figure (1) illustrates the program execution path when a Numba-decorated function is called from the Python interpreter. The Numba runtime interacts with the Python interpreter dynamically, and control over program execution is passed back and forth between the two. This interaction comes at the cost of having to 'unbox' Python objects into types compatible with the compiled machine code, and 'box' the outputs of the compiled functions back into Python-compatible objects. This process doesn't involve reallocating memory; however, pointers to memory locations have to be converted and placed in a type compatible with either Numba-compiled code or Python.
PITFALLS OF NUMBA
Since its first release, Numba has been extended to cover most of NumPy's functionality, as well as the majority of Python's basic features and standard library modules. If Numba is unable to find a suitable Numba type for each Python type in a decorated function, or it sees a Python feature it doesn't yet support, it runs in 'object mode', handling all unknown quantities as generic Python objects. To ensure a seamless experience, this is silent to the user, unless the function is explicitly marked to run in 'no Python' mode. Object mode is often no faster than vanilla Python, leaving the programmer to understand when and where Numba works. As Numba influences the way Python is written, it's perhaps more akin to a programming framework than just a compiler. An example of Numba's framework-like behavior arises when implementing algorithms that share data and have multiple logical steps, as in listing (2). This listing shows three implementations of the same logic: the initialization of a dictionary with some data, followed by two matrix multiplications, from which a column of each is stored in the dictionary. The runtimes of all three implementations are shown in Table 1 for different problem sizes. This example is designed to illustrate how arbitrary changes to writing style can impact the behavior of Numba code. The behavior is likely due to the initialization of a dictionary from within a calling Numba function, rather than an external dictionary. However, the optimizations taken by Numba are presented opaquely to the user.

Table 1. Testing the effect of the different implementations of computing dense matrix-vector products in double precision with some data storage from listing (2). (The table's columns report the algorithm variant and the matrix dimension; the timing values are not reproduced here.)
Furthermore, not every supported feature from Python behaves in a way an ordinary Python programmer would expect, which has an impact on program design. An example of this arises when using Python dictionaries, which are central to Python, but are only partially supported by Numba. As they are untyped and can have any Python objects as members, they don't neatly fit into a Numba-compatible type. Programmers can declare a Numba-compatible 'typed dictionary', where the keys and values are constrained to Numba-compatible types, and can pass it to a Numba-decorated function at low cost. However, using a Numba dictionary from the Python interpreter is always slower than an ordinary Python dictionary due to the (un)boxing cost when getting and setting any item.
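A minimal sketch of the 'typed dictionary' mentioned above is shown below; the key and value types chosen are arbitrary and serve only to illustrate the extra type declarations that ordinary Python dictionaries do not need.

    import numpy as np
    from numba import njit, types
    from numba.typed import Dict

    # Keys and values must be declared with Numba types up front.
    results = Dict.empty(key_type=types.int64, value_type=types.float64[:])

    @njit
    def fill(d, n):
        for i in range(n):
            d[i] = np.ones(i + 1) * i   # store a float64 array per key

    fill(results, 5)
    # Reading items back in the interpreter incurs the (un)boxing cost per access.
    print(results[3])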
Therefore, though Numba is advertised as an easy way of injecting performance into your program via simple decorators, it has its own learning curve. Achieving performance requires a programmer to be familiar with the internals of its implementation and the potential discrepancies that arise when translating between Python and the LLVM-generated code, which may lead to significant alterations in the design of algorithms and data structures.
Listing 2. Three ways of writing an algorithm that performs some computations and saves the results to a dictionary.
THE FAST MULTIPOLE METHOD
The particle FMM is an approximation algorithm for N-body problems, in which N source particles interact with N target particles [3]. Consider the calculation of electrostatic potentials in 3D, which we use as our reference problem. Given a set of N charged particles with charge q_i at positions x_i, the potential, φ_j, at a given target particle at x_j due to all other particles, excluding self-interaction, can be written as

    φ_j = Σ_{i ≠ j} q_i / (4π |x_i − x_j|),   (1)

where 1/(4π|x_i − x_j|) is called the kernel, or the Green's function. The naive computation over all particles scales as O(N²); the FMM compresses groups of interactions far away from a given particle using expansions, and reduces the overall complexity to O(N). Expansions approximate charges contained within subregions of an octree, and can be truncated to a desired accuracy, defined by a parameter, p, called the expansion order. Problems with this structure appear with such frequency in science and engineering that the FMM has been described as one of the ten most important algorithms of the twentieth century [5].
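The direct evaluation of equation (1), the P2P step, is the computational core that all other operators fall back on. A simplified multithreaded version is sketched below; it follows the form of the Laplace kernel given above, but it is our own illustration rather than the PyExaFMM implementation.

    import numpy as np
    from numba import njit, prange

    @njit(parallel=True)
    def p2p(sources, targets, charges):
        """Direct evaluation of phi_j = sum_i q_i / (4 pi |x_i - x_j|),
        skipping the singular self-interaction term."""
        n_targets = targets.shape[0]
        potentials = np.zeros(n_targets)
        for j in prange(n_targets):          # embarrassingly parallel over targets
            acc = 0.0
            for i in range(sources.shape[0]):
                dx = sources[i, 0] - targets[j, 0]
                dy = sources[i, 1] - targets[j, 1]
                dz = sources[i, 2] - targets[j, 2]
                r = np.sqrt(dx * dx + dy * dy + dz * dz)
                if r > 0.0:
                    acc += charges[i] / (4.0 * np.pi * r)
            potentials[j] = acc
        return potentials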
The FMM algorithm relies on an octree data structure to discretize the problem domain in 3D.Octrees place the region of interest in a cube, or 'root node', and subdivide it into 8 equal parts.These 'child nodes' in turn are recursively subdivided until a user-defined threshold is reached (see fig. 1 of Sundar et al. [6]).The FMM consists of eight operators: P2M, P2L, M2M, M2L, L2L, L2P, M2P and the 'near field', applied once to each applicable node over the course of two consecutive traversals of the octree (bottom-up and then top-down).The operators define interactions between a given 'target' node, and potentially multiple 'source' nodes from the octree.They are read as 'X to Y', where 'P' stands for particle(s), 'M' for multipole expansion and 'L' for local expansion.The direct calculation of ( 1) is referred to as the P2P operator, and is used as a subroutine during the calculation of the other operators.The kernel-independent FMM (KIFMM) [2] implemented by PyExaFMM is a re-formulation of the FMM with a structure that favors parallelization.Indeed, all of the operators can be decomposed into matrix-vector products, or multithreaded implementations of (1), which are easy to optimize for modern hardware architectures, and fit well with Numba's programming framework.We defer to the FMM literature for a more detailed discussion on the mathematical significance of these operators [2], [3].
COMPUTATIONAL STRUCTURE OF FMM OPERATORS
The computational complexities of KIFMM operators are defined by a user-specified n_crit, which is the maximum allowed number of particles in a leaf node, and by n_e and n_c, which are the numbers of quadrature points on the check and equivalent surfaces respectively (see sec. 3 of Ying et al. [2]). The parameters n_e and n_c are quadratically related to the expansion order, i.e., n_e = 6(p − 1)² + 2 [2]. Typical values used for n_crit are ∼ 100. Notice that the depth of the octree is defined by n_crit, and hence by the particle distribution.
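To give a feel for the sizes involved, the quadratic relation between the expansion order and the number of surface points can be evaluated directly; the values below simply apply the formula quoted above.

    def n_equivalent(p):
        # n_e = 6 (p - 1)^2 + 2, as quoted above for the KIFMM surfaces
        return 6 * (p - 1) ** 2 + 2

    for p in (2, 5, 10):
        print(p, n_equivalent(p))   # -> 8, 98, 488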
The near field, P2M, P2L, M2P, and L2P operate independently over the leaf nodes.The M2L and L2L operate independently on all nodes at a given level, from level 2 to the leaf level, during the top-down traversal.The M2M is applied to each node during the bottom-up traversal.All operators, except the M2L M2M and L2L, rely on P2P.The inputs for P2P are vectors for the source and target positions, and the source charges or expansion coefficients; the output is a vector of potentials.The inputs to the M2L, M2P, P2L and near-field operators are defined by 'interaction lists' called the V, W, X and U lists respectively.These interaction lists define the nodes a target node interacts with when an operator is applied to it.We can restrict the size of these interaction lists by demanding that neighboring nodes at the leaf level are at most twice as large as each other [6].Using this 'balance condition', the V, X, W and U lists in 3D contain at most 189, 19, 148 and 60 nodes, respectively.
The near-field operator applies the P2P between the charges contained in the target and the source particles of nodes in its U list, in O(60 · n_crit²). The M2P applies the P2P between the multipole expansion coefficients of source nodes in the target's W list and the charges it contains internally, in O(148 · n_e · n_crit). Similarly, the L2P applies the P2P between a target's own local expansion coefficients and the charges it contains, in O(n_e · n_crit).
The P2L, P2M and M2L involve creating local and multipole expansions, and rely on a matrix-vector product whose cost is related to the number k of source nodes being compressed, which for the P2L and M2L operators is defined by the size of the target node's interaction list: k = |X| = 19 for the P2L, k = |V| = 189 for the M2L, and k = 1 for the P2M. Additionally, the P2L and P2M have to calculate 'check potentials' [2], whose cost grows with the number of contributing source particles and the number of check-surface points. The M2M and L2L operators both involve translating expansions between nodes and their eight children, and rely on a matrix-vector product of O(n_e²). The structure of the FMM's operators exposes natural parallelism. The P2P is embarrassingly parallel over each target, as are the M2L, M2P, P2L and near-field operators over their interaction lists. The near-field, L2P, M2P, P2L and P2M operators are also embarrassingly parallel over the leaf nodes, as are the M2L, M2M and L2L over the nodes at a given level.
DATA-ORIENTED DESIGN OF PYEXAFMM
In the context of high-performance computing, data-oriented design refers to a coding approach that favors data structures with simple memory layouts, such as arrays.The aim is to effectively utilize modern hardware features by making it easier for programmers to optimize for cache locality and parallelization.In contrast, object-oriented design involves organizing code around user-created types or objects, where the memory layout is complex and can contain multiple attributes of different types.This complexity makes it difficult to optimize code for cache locality, and thus it results in lower performance in terms of utilizing hardware features.
Numba focuses on using ndarrays, in alignment with data-oriented design principles, which we apply in the design of PyExaFMM's octrees as well as its API. Octrees can be either 'pointer based' [7] or 'linear' [6]. A pointer-based octree uses objects to represent each node, with attributes for a unique id, contained particles, associated expansion coefficients, potentials, and pointers to their parent and sibling nodes. This makes searching for neighbors and siblings a simple task of following pointers. The linear octree implemented by PyExaFMM represents nodes by a unique id stored in a 1D vector, with all other data, such as expansion coefficients, particle data, and calculated potentials, also stored in 1D vectors. Data is looked up by creating indices to tie a node's unique id to the associated data. This is an example of how using Numba can affect design decisions and make software more complex, despite the data structures being simpler.
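The index-based lookup that replaces pointer chasing in a linear octree can be illustrated with a small sketch; the array names and contents below are hypothetical and only mirror the idea of tying a node's id to slices of flat data vectors.

    import numpy as np

    # Hypothetical flat storage for a linear octree: one entry per leaf node.
    node_keys = np.array([9, 17, 23, 42], dtype=np.int64)       # unique node ids (e.g. Morton keys)
    index_pointers = np.array([0, 3, 3, 7, 9], dtype=np.int64)  # bookends into the particle vector
    particle_x = np.random.rand(9, 3)                           # all particle coordinates, contiguous

    # Index tying a node id to its position in the flat vectors.
    key_to_index = {int(k): i for i, k in enumerate(node_keys)}

    def particles_of(key):
        """Return the coordinates of the particles contained in a leaf node."""
        i = key_to_index[key]
        return particle_x[index_pointers[i]:index_pointers[i + 1]]

    print(particles_of(23).shape)   # -> (4, 3): particles 3..6 of the flat vector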
Figure (2) illustrates PyExaFMM's design.It has a single Python object, Fmm, acting as the API.It initializes ndarrays for expansion coefficients and calculated potentials, and its methods interface with Numba-compiled functions for the FMM operators and their associated datamanipulation functions.We prefer nested functions when sharing data, but keep the operator implementations separate from each other, which allows us to unit test them individually.This means that we must have at least one interaction between Numba and the Python interpreter to call the near field, P2M, L2P, M2P and P2L operators, d − 2 interactions to call the M2L and L2L operators, and d interactions for the M2M operator, where d is the depth of the octree.The most performant implementation would be a single Numba routine that interacts with Python just once, however this would sacrifice other principles of clean software engineering such as modularity, and unit testing.This structure has strong parallels with software designs that arise from traditional methods of achieving performance with Python by interfacing with a compiled language such as C or Fortran.The benefit of Numba is that we can continue to write in Python.Yet as seen above, performant Numba code may only be superficially Pythonic through its shared syntax.
MULTITHREADING IN NUMBA
Numba enables multithreading via a simple parallel for-loop syntax (see listing (3)) reminiscent of OpenMP. Internally, Numba can use either OpenMP or Intel TBB to generate multithreaded code. We choose OpenMP for PyExaFMM, as it's more suited to functions in which each thread has an approximately similar workload. The threading library can be set via the NUMBA_THREADING_LAYER environment variable.
Numerical libraries like NumPy and SciPy use multithreaded compiled libraries such as OpenBLAS or Intel MKL to execute mathematical operations internally. When these operations are compiled with Numba, they retain their internal multithreading. If this is combined with a multithreaded region declared with Numba, as in listing (3), it can lead to nested parallelism, where a parallel region calls a function that contains another parallel region inside it. This creates oversubscription, where the number of active threads exceeds the CPU's capacity, resulting in idle threads, broken cache locality, and possibly hanging threads waiting for others to finish. To avoid this, PyExaFMM explicitly sets NumPy operations to be single-threaded by using the environment variable OMP_NUM_THREADS=1 before starting the program. This ensures that only the threads declared using Numba are created.
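A minimal sketch of this precaution is shown below (the kernel is illustrative only): the environment variables are set before NumPy is imported, so the BLAS call inside the Numba-parallel loop stays single-threaded and no nested parallelism is created.

import os
# Must be set before NumPy/Numba are imported for the first time.
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["NUMBA_THREADING_LAYER"] = "omp"

import numpy as np
from numba import njit, prange, get_num_threads

@njit(parallel=True)
def row_norms(matrix, out):
    for i in prange(matrix.shape[0]):
        # This np.dot resolves to a single-threaded BLAS call, so the only
        # parallelism is the prange loop declared here.
        out[i] = np.sqrt(np.dot(matrix[i], matrix[i]))

m = np.random.rand(1000, 64)
out = np.empty(1000)
row_norms(m, out)
print(get_num_threads(), out[:3])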
PARALLELIZATION STRATEGIES FOR FMM OPERATORS
The P2M, P2L, M2P, L2P and near-field operators all rely on the P2P operator, as this computes Equation (1) over their respective sources and targets, parallelized over their targets, the leaf nodes.
For the L2P operator we encourage cache locality for the P2P step, and keep the data structures passed to Numba as simple as possible, by allocating 1D vectors for the source positions, target positions and the source expansion coefficients, such that all the data required to apply an operator to a single target node is adjacent in memory. By storing a vector of index pointers that bookend the data corresponding to each target in these 1D vectors, we can form parallel for-loops over each target to compute the P2P, encouraging cache locality in the CPU. In order to do this, we have to first iterate through the target nodes and look up the associated data to fill the cache-local vectors. The speedup achieved with this strategy, in comparison to a naive parallel iteration over the L2P's targets, increases with the number of calculations in each thread and hence the expansion order p. In an experiment with 32,768 leaves, n_crit = 150 and p = 10, our strategy is 13% faster. This is a realistic FMM setup with approximately 10^6 randomly distributed particles on the surface of a sphere.
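A simplified sketch of this data layout is shown below (with assumed names and a toy one-dimensional kernel rather than PyExaFMM's actual P2P): each target node's sources and targets occupy contiguous, bookended slices of flat vectors, and the parallel loop runs over target nodes.

import numpy as np
from numba import njit, prange

@njit(parallel=True)
def p2p_bookended(source_ptr, sources, charges, target_ptr, targets, potentials):
    # Iteration i only touches the contiguous slices bookended by
    # source_ptr[i]:source_ptr[i+1] and target_ptr[i]:target_ptr[i+1],
    # which keeps each thread's working set local.
    for i in prange(target_ptr.shape[0] - 1):
        s0, s1 = source_ptr[i], source_ptr[i + 1]
        t0, t1 = target_ptr[i], target_ptr[i + 1]
        for t in range(t0, t1):
            acc = 0.0
            for s in range(s0, s1):
                r = abs(targets[t] - sources[s]) + 1e-12  # toy 1D 'distance'
                acc += charges[s] / r
            potentials[t] += acc

# Two target nodes: the first owns targets 0-1 and sources 0-2,
# the second owns target 2 and sources 3-4 (all values hypothetical).
sources = np.array([0.0, 0.5, 1.0, 2.0, 2.5])
charges = np.ones(5)
source_ptr = np.array([0, 3, 5])
targets = np.array([0.2, 0.8, 2.2])
target_ptr = np.array([0, 2, 3])
potentials = np.zeros(3)
p2p_bookended(source_ptr, sources, charges, target_ptr, targets, potentials)
print(potentials)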
The previous strategy is too expensive in terms of memory for the near-field, M2L and M2P operators, due to their large interaction lists. For example, allocating an array large enough to store the maximum possible number of source particle coordinates in double precision for the M2P operator, with |W| = 148 and n_crit = 150, requires ~17 GB for the above experiment, so a different data-oriented design is required. The pitfalls illustrated above demonstrate the potential need for care when adapting code to achieve performance with Numba. While Numba is marketed as a simple way to enhance the performance of Python code with a decorator, implementing complex algorithms requires significant software development expertise. This level of expertise may exceed the capabilities of Numba's target audience.
Nevertheless, Numba is truly a remarkable tool. For projects that prioritize Python's expressiveness, simple cross-platform builds, and a vast open-source ecosystem, having only a few isolated performance bottlenecks, Numba is a game-changer. By writing solely in Python, our PyExaFMM project remains concise, with just 4901 lines of code. Best of all, we can effortlessly deploy it cross-platform with Conda and distribute our software through popular Python channels, avoiding the need to create and maintain a separate Python interface, a tedious and time-consuming task commonly associated with compiled language packages for computational science. We encourage readers to explore this powerful tool and engage with the Numba community, which continues to push the boundaries of high-performance computing in the Python ecosystem.
ics, high-performance computing, aerodynamics and biophysics. Barba received a Ph.D. in Aeronautics from the California Institute of Technology and a B.Sc. and PEng in Mechanical Engineering from Universidad Técnica Federico Santa María in Chile. She is a member of IEEE CS, SIAM, AIAA and ACM. Contact her at <EMAIL_ADDRESS>. Betcke is a Professor of Computational Mathematics in the Department of Mathematics, University College London (UCL), U.K. His research interests include numerical analysis, scientific computing, boundary element methods and inverse problems. Betcke received a DPhil in Numerical Analysis from Oxford University. Contact him at <EMAIL_ADDRESS>.
Figure 3. CPU time as a percentage of wall time for operators. CPU time is defined as the time in which the algorithm runs pure Numba-compiled functions. Wall time is CPU time in addition to the time taken to return control to the Python interpreter.
Listing 1. An example of using Numba in a Python function operating on ndarrays.
Listing 3. An example of parallel multithreading. | 4,851 | 2022-09-01T00:00:00.000 | [
"Computer Science"
] |
Efficiency Evaluation of Grain Harvesters of Different Types under North Kazakhstan Conditions
The problem of selecting certain types of grain combine harvesters is quite urgent now. This is because agricultural producers struggle to make the right selection of a grain harvester of a definite firm or make, due to the aggressive marketing from the manufacturers. (Research purpose) Efficiency evaluation of grain harvesters of different types under the North Kazakhstan weather conditions. (Materials and methods) Technical and economic research has been performed according to the standard methodology, followed by data analysis. The calculation has been made for direct combining by 4, 5 and 6-class harvesters equipped with wide-cut headers from leading domestic and foreign manufacturers. (Results and discussions) The authors have calculated direct costs for threshing one ton of grain under favorable harvesting conditions, total costs for threshing one ton of grain including grain losses under unfavorable harvesting conditions, as well as total costs for threshing one ton of grain considering that 30 percent of grain is harvested under favorable harvesting conditions and 70 percent under unfavorable ones. (Conclusion) It has been found that the cost of threshing one ton of grain, which characterizes the efficiency of utilizing grain harvesters, depends on the price/efficiency ratio of a harvester, the yield and the harvesting conditions. Combine harvesters of a lower class with the optimum price/efficiency ratio are more preferable under favorable harvesting conditions. However, in case of harvest period prolongation due to unfavorable harvesting conditions, combine harvesters of a higher class are more preferable.
In Northern Kazakhstan, there are farms of different categories (personal farms, medium-size and large agricultural enterprises) with arable land areas of 300-3000 ha, 3000-10000 ha, and more than 10000 hectares, respectively. Moreover, large and medium-size farms, in which 71% of the regional arable land acreage is concentrated, account for more than 20% [1]. The beginning of the harvest period (the third decade of August) is usually dry, but in September, as a rule, it begins to rain. The yield capacity in the region amounts to about 13 hwt per hectare, with fluctuations from 8 hwt per hectare in dry years to 19 hwt per hectare in wet ones.
In recent years, grain harvesters of various capacities from different countries have been delivered to operate in the region. There is an increase in the share of medium and high-class harvesters from "near and far abroad". This is due to the limited periods of favorable weather in the autumn in the region and the desire of agricultural producers to maximize the productivity of machines in the harvesting process under a shortage of machine operators. The selection and effective operation of a certain harvesting machine, however, encounters difficulties. Under conditions of aggressive advertising of the equipment by its manufacturers, it is not easy for agricultural producers to make the right choice in favor of a certain firm or a brand of combine harvester [2][3][4][5].
THE RESEARCH PURPOSE is to evaluate the effectiveness of the application of combine harvesters of various classes in the conditions of Northern Kazakhstan, taking into account weather conditions. MATERIALS AND METHODS. Technical and economic studies have been carried out in accordance with a standard procedure, followed by an analysis of the results obtained. The calculation has been performed for the technological operation of direct combining by different brands of combine harvesters (Tab. 1).
In the Republic of Kazakhstan, grain harvesters are aggregated with headers and reaper-headers of different widths.
The calculation is based on the maximum cutting width of the header and reaper-header. Wide headers provide for the most complete loading of combines based on their throughput capacity.
The travel speed of combines for a given yield has
been calculated using a formula that takes into account the zonation coefficient [6], where V_p is the working speed, m/s; q is the throughput capacity, kg/s; K_з is the coefficient of zonal conditions; B is the header width, m; β is the coefficient of header width use; У is the crop yield, t/ha; and δ is the straw ratio. It has been taken into account that, with a yield of up to 20 hwt/ha, the 4-class harvesters have an operating speed limit of 2.20 m/s, class 5 of 2.50 m/s, and 6-class combine harvesters of 3.06 m/s. If these speeds are exceeded for the expected yields, grain losses increase sharply. Taking into account the speed of the combine and the header width, we have calculated the productivity per 1 hour of shift time, where W_см is the shift productivity, ha/h, and К_см is the coefficient of shift time use. Total costs for harvesting grain by the compared combine harvesters have been calculated accordingly. The difference in the composite costs for the compared harvesters is considered significant if it exceeds the expected value by 5%.
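Since the formulas themselves are not reproduced in this excerpt, the short sketch below only illustrates the relationships described in the text; the exact expressions (and the numbers used) are assumptions for illustration, not the paper's formulas.

def working_speed(q, k_z, B, beta, yield_t_ha, delta):
    # Assumed form: the total grain-plus-straw intake B*beta*V*yield*(1+delta)/10
    # may not exceed the throughput capacity q (kg/s), adjusted by the zonal
    # coefficient k_z; solving for V gives the working speed in m/s.
    return 10.0 * q * k_z / (B * beta * yield_t_ha * (1.0 + delta))

def shift_productivity(B, beta, v, k_shift):
    # Assumed form: 0.36 converts swept area in m^2/s into ha/h.
    return 0.36 * B * beta * v * k_shift

v = working_speed(q=11.0, k_z=0.9, B=9.0, beta=0.95, yield_t_ha=1.5, delta=1.5)
w = shift_productivity(B=9.0, beta=0.95, v=v, k_shift=0.7)
print(round(v, 2), "m/s,", round(w, 2), "ha/h")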
Operating costs have been calculated in a similar manner. If we assume that the most productive (reference) harvester can harvest without losses, the number of working days that are accompanied by losses can be calculated for less productive combines by formula (5), where Д is the number of days accompanied by losses; Д_опт is the number of optimal days for harvesting; W_б is the performance rate of the reference combine, ha/h (t/h); and W_р is the productivity of the compared combine, ha/h (t/h).
The grain loss resulting from an incomplete harvest has been determined by formula (6), where П_у is the loss from the incomplete harvest, $/ha; К_п is the daily intensity of crop losses when prolonging the working period as compared to the optimal one, share/day (К_п = 0.01); С_п is the purchase price, $120/ton; and У is the yield, t/ha. If we divide the right-hand side of expression (6) by the yield, we obtain the amount of loss in $ per ton.
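As an illustration of the described loss accounting (the exact formulas (5) and (6) are not reproduced in the excerpt, so the expressions and numbers below are assumptions, not the authors' formulas):

K_P = 0.01           # assumed daily crop-loss intensity, share of yield per day
GRAIN_PRICE = 120.0  # purchase price, $/t, as stated in the text

def extra_days(optimal_days, w_reference, w_compared):
    # Assumed reading of formula (5): a less productive combine needs
    # proportionally more days than the reference one; the surplus days
    # are the days accompanied by losses.
    return optimal_days * (w_reference / w_compared - 1.0)

def loss_per_ton(days_with_losses):
    # Assumed reading of formula (6) after dividing by the yield.
    return K_P * days_with_losses * GRAIN_PRICE

d = extra_days(optimal_days=10.0, w_reference=6.0, w_compared=4.5)
print(round(d, 1), "extra days ->", round(loss_per_ton(d), 2), "$/t lost")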
RESULTS AND DISCUSSION. The calculation results of the cost of harvesting 1 ton of grain by combine harvesters under favorable conditions without prolonging the working period are presented in Table 2.
Combines can be ranked according to the cost of harvesting 1 ton of grain. The lowest cost of grain threshing in favorable weather conditions is provided by the class 4 combine Esil-740, which is explained by the best ratio between its price and productivity. Second place, in terms of increasing cost per 1 ton of grain, is confidently taken by the Akros-530 harvester. The cost of threshing 1 ton of grain with the 9660-STS, Mega-360 and Medion-310 combine harvesters at a yield of 10-15 hwt/ha is 3-5 $/t higher, and at a yield of 20 hwt/ha 5-9 $/t higher, than the cost of grain threshing with the Esil-740 combine.
The offered ranking is valid for favorable weather conditions and the absence of biological losses due to untimely performance of operations. In case of down time due to precipitation, the most significant biological losses
have been detected for a combine with lower productivity. This is due to the fact that such a combine has the largest area to be harvested for the period of precipitation, which results in such losses. Taking into account losses from untimely performance of operations, the total costs per 1 ton of grain will be as follows (Tab. 3).
Under unfavorable conditions, taking into account losses due to untimely performance of operations, priority should be given to the more efficient combine harvesters 9660-STS and Akros-530, followed by the Mega-360, and then the Esil-740 and Medion-310.
In the conditions of Northern Kazakhstan, less than 50% of the area is harvested under favorable weather. The research has been carried out in the southern districts of the region, with an average yield level of about 10 hwt per hectare; the areas subject to precipitation have an average yield level of about 20 hwt/ha. Taking this into account, let us assume that under the conditions of the northern part of Kazakhstan, 30% of the grain is harvested under favorable weather and 70% under unfavorable weather. The calculation results for the cost of threshing 1 ton of grain by the compared harvesters under these conditions are shown in Table 4.
At a 30:70 ratio of the amounts of grain threshed under favorable weather and under precipitation, the ranking of harvesters by the cost of threshing is as follows: at a yield of 15-20 hwt per hectare, the lowest cost of grain threshing is provided by the class 5 combine Akros-530, about $1/t lower than that of the combine harvester 9660-STS.
At a yield of 10 hwt/ha, the lowest cost of grain harvesting is ensured by the class 6 9660-STS combine harvester, about $4/t lower than the cost for the class 5 Akros-530 combine harvester. The Esil-740 and Mega-360 combine harvesters give a threshing cost 2-3 $/t higher than the Akros-530 and 9660-STS at a yield of 15-20 hwt/ha, and 3-9 $/t higher at a yield of 10 hwt/ha. The Medion-310 gives the highest cost of threshing at a 30:70 ratio of the amount of grain harvested under favorable weather and under precipitation.
Thus, under favorable harvesting conditions, priority should be given to combine harvesters of a lower class with an optimal price-quality ratio. However, if there is a danger of prolonging the harvesting period due to unfavorable weather conditions, priority should be given to higher-class harvesters. The results complement SIBIME studies, which show that in Siberia's extreme conditions, the direct costs of harvesting by higher-class harvesters may be lower than those for lower-class harvesters [7]. However, according to SIBIME, the lower threshold of effective use of high-performance combines from leading foreign companies corresponds to yields of 35-40 hwt/ha. According to our research, under unfavorable harvesting conditions, this threshold can be significantly lower if these harvesters are equipped with wide-cut headers. The results of our studies confirm the conclusion of V.D. Saklakov that «for every technical means (machine-tractor unit) there is an optimal duration of field operations» [8][9][10][11]. CONCLUSIONS: 1. The cost of harvesting 1 ton of grain, characterizing the efficiency of the use of combine harvesters, depends on the ratio between the price and performance of the combine, the yield, and the harvesting conditions.
2. Under favorable conditions in the absence of losses from untimely performance of harvesting operations, the use of 4 and 5-class Esil-740 and Akros-530 combine harvesters is most effective, the higher costs are determined for the Medion-310, 9660-STS and Mega-360 combine harvesters.
3. Under unfavorable harvesting conditions, priority as to the effectiveness of use should be given, in descending order, to the 6 and 5-class 9660-STS and Akros-530 combine harvesters, followed by the Mega-360, and also the 4-class Esil-740 and Medion-310 combine harvesters. 4. In actual circumstances, periods with favorable and unfavorable weather conditions are both fairly probable during harvesting operations. In this respect, the combine harvester fleet of Northern Kazakhstan should be made up of mainly 5 and 6-class combine harvesters equipped with wide-cut headers and reaper-headers. | 2,522.8 | 2018-07-26T00:00:00.000 | [
"Agricultural And Food Sciences",
"Economics"
] |
2nd International Workshop on the Theory and Practice of Algebraic Specifications, Amsterdam 1997
We introduce computational systems to formalise the notion of rewriting directed by user-defined strategies. This provides a semantics for ELAN, an environment dedicated to prototyping, experimenting with and studying the combination of different deduction systems for constraint solving, theorem proving and logic programming paradigms. Formally, a computational system can be represented as a rewrite theory in rewriting logic together with a notion of strategy to select relevant computations. We show how conveniently the strategies can themselves be specified using, again, computational systems. Several non-trivial examples of strategy descriptions are given, including a search space library and its use for solving problems like game winning strategies.
Introduction
ELAN is an environment dedicated to prototyping, experimenting with and studying the combination of different deduction systems for constraint solving, theorem proving and logic programming paradigms. Its evaluation mechanism is rewriting, whose elegance and expressiveness as a computational paradigm are no longer to be stressed, as evidenced by systems like ASF+SDF [Kli93] or OBJ3 [GKK+87]. Less evident is the difficulty that comes from the absence of an explicit control mechanism over rewriting. In many existing rewriting-based languages or systems, the term reduction strategy is hard-wired and is not accessible to the designer of an application. Indeed, controlling rewriting is a fundamental issue as soon as one wants to use rewriting as a specification language. We show in this paper how the control can itself be specified using rewriting.
The ELAN language is based on the concept of computational systems [KKV95, Vit94, BKK+96b], given by a signature providing the syntax, a set of conditional rewrite rules describing the deduction mechanism, and a strategy to guide the application of rewrite rules. Formally, this is a rewrite theory in rewriting logic [Mes92], [MOM93a], together with a notion of strategy to select relevant computations. Each ELAN module defines its own signature, labelled rewrite rules and strategies. ELAN is implemented in C++.
We first show in Section 2 how non-deterministic computations are handled in ELAN and combined with deterministic ones. Then Section 3 describes the strategy language of ELAN, which provides some predefined primitives and the possibility for the user to define his own strategies in a very flexible way using the same paradigm of rewriting. For instance, the user may want to develop his own search space library and use it for solving different problems. This is illustrated in Section 4. The conclusion addresses some further research perspectives.
Evaluation with rules and strategies
ELAN is a system for prototyping non-deterministic computations thanks to rules and strategies. An ELAN program is composed of a signature part describing operators with their types, a set of rules and a set of strategies. Strategies are one of the main original features of ELAN compared to other algebraic specification languages based on rewriting. A strategy is a way to describe which computations the user is interested in, and specifies where a given rule should be applied in the term to be reduced. We describe informally here the evaluation mechanism and how it deals with rewrite rules and strategies.
Rules are labelled conditional rewrite rules with local variable assignments of the form ℓ : l ⇒ r if v where y := (S) u, where ℓ is the rule label, l and r are the respective left- and right-hand sides, v is the condition, and y := (S) u is a local assignment, assigning to the local variable y the result of the strategy S applied to the term u.
For applying such a rule to a term t, say at the top position, first l is matched against t, using, if necessary, associative-commutative matching (if some operators in the signature are declared as associative and commutative). Then the expressions in the where and if parts, instantiated with the matching substitution, are evaluated in order. When there is no failure, this usually instantiates the local variables (such as y) in where expressions, and this extends the matching substitution. When every condition is satisfied, the replacement by the instantiated right-hand side is performed. Indeed, the power of this mechanism comes from the fact that where expressions may invoke strategy evaluation. Application of a rewrite rule in ELAN yields, in general, several results. This is first due to equational matching (for instance AC-matching) and second to the where assignment, since it may itself return several possible assignments for variables, due to the use of strategies. When a rewrite rule or a strategy returns an empty set of terms, we say that it fails.
Thus the language provides a way to handle this non-determinism. This is done using a few basic strategy operators: first (or dc) standing for the first non-failing result, and dk standing for 'don't know choose'. For a rewrite rule ℓ : l ⇒ r, the strategy first(ℓ) returns the first non-failing result of the application of the rule. On the contrary, if the rule ℓ is applied using the dk(ℓ) strategy, then all possible results are computed and returned by the strategy.
The implementation handles these several results by an appropriate back-chaining operation. This is extended to the application of several rules: the dk strategy results in the application of all substrategies and yields the concatenation of all results; the application of the first strategy returns the set of results of the first non-failing rule application. If all sub-strategies fail, then it fails too, i.e. it yields the empty set.
Strategies can be described using the dk and first operators, a concatenation operator ';', as well as with the elementary strategies that are rule labels and the strategies fail, for failure, and id, for identity. Iterators are also provided, and user-defined strategy operators as well as their semantics can be expressed in the language. This is done using rewriting itself, as detailed in the next section.
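As a purely illustrative aside (this is not ELAN syntax, and the names are hypothetical), the behaviour of these basic combinators can be modelled in Python by treating a strategy as a function from a term to the list of its possible results, with the empty list standing for failure:

def identity(term):
    return [term]            # id: always succeeds, returning the term unchanged

def fail(term):
    return []                # fail: always yields the empty set of results

def dk(*strategies):
    # "don't know choose": concatenate the results of all sub-strategies
    def apply(term):
        results = []
        for s in strategies:
            results.extend(s(term))
        return results
    return apply

def first(*strategies):
    # return the results of the first non-failing sub-strategy
    def apply(term):
        for s in strategies:
            results = s(term)
            if results:
                return results
        return []
    return apply

def seq(s1, s2):
    # ';' concatenation: apply s2 to every result produced by s1
    def apply(term):
        return [t2 for t1 in s1(term) for t2 in s2(t1)]
    return apply

# Two toy 'rules' on integers, combined with the operators above.
halve = lambda n: [n // 2] if n % 2 == 0 else []
decrement = lambda n: [n - 1]

print(dk(halve, decrement)(8))     # [4, 7]  -- all results
print(first(halve, decrement)(7))  # [6]     -- first non-failing rule
print(seq(decrement, halve)(9))    # [4]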
The strategy language
We are now describing how strategies can be specified using rewriting itself. This is done at two levels. First, the user-defined strategies are specified using rewrite rules that can themselves be controlled by (meta) strategies. Second, the operational semantics of the strategy language is also defined as a rewrite theory with strategy, making the whole language description supported by the single concept of computational system.
We are using in this section the basic notions of rewriting logic and of proof terms. Our notations are consistent with the ones of [KKV95, Mes92, MOM93b].
The notion of strategy as a subset of the proof terms of a rewrite theory was proposed in [KKV95, Vit94]. The set Π of (closed) proof terms is defined as the set of terms T(F ∪ L ∪ {;}) built on the function symbols F of the rewrite theory, the labels L, and the symbol ';' standing for concatenation. The relation : is defined on T(F) × T(F), and π : t → t′ means that there is a chain of rewrite steps from the term t to the term t′ encoded by the proof term π. It is easy to see that for each proof term π ∈ Π, the sets dom(π) = {t | ∃t′ s.t. π : t → t′} and cod(π) = {t′ | ∃t s.t. π : t → t′} are either both singletons or both empty. Thus, we can also define a partial function π : T(F) ↦ T(F) such that π(t) = t′ when dom(π) = {t} and cod(π) = {t′}. A strategy S is defined as a subset of Π. The definition of the symbol : is extended to strategies as follows: S : t → t′ if there exists π ∈ S such that π : t → t′, and (S) t = {t′ | π ∈ S, π : t → t′}. The definition of domain can be extended too: dom(S) = {t | ∃t′ ∈ T(F), ∃π ∈ S, s.t. π : t → t′}. An arbitrary strategy S could be very complicated or irregular from the computational point of view (for instance a non-recursive set). This is why we concentrate on describing special subclasses of strategies and define two subclasses of general strategies, called elementary and defined strategies, whose operational semantics are given using rewriting logic. The main difference between these two classes is that elementary strategies are predefined in ELAN while defined strategies are user-definable, and may be recursive, parameterised and typed as well.
Elementary strategies
The first step is the construction of elementary strategies, where an elementary strategy S is an element of the set of terms T(F ∪ L ∪ {;, id, dc, dk, first, fail}). Roughly speaking, elementary strategies represent non-recursive non-deterministic computations. We already introduced the strategy operators ;, id, dc, dk, first and fail. In addition, ℓ(S1, ..., Sn) corresponds to an application of a rewrite rule labelled by ℓ ∈ L, which also applies the substrategies Si on the values of the variables xi after matching and before replacing the matched subterm by the instantiated right-hand side of the rewrite rule. f(S1, ..., Sn) is an application of the substrategies Si to the subterms ti of the term f(t1, ..., tn) with root f. The description of the operational semantics of elementary strategies is achieved through the definition of an interpreter described by labelled rewrite rules, given in this volume (see [BK97]).
Elementary strategies can be given sorts, based on the set of sorts S defined in the user's rewrite theory. This introduces new strategy sorts ⟨s′ ↦ s″⟩ for the sort of all elementary strategies transforming terms of sort s′ into terms of sort s″.
Defined strategies
The goal is now to be able to define new strategies by rewrite rules. Let us take the very easy example of map, which can be defined by a strategy rewrite rule in the following way: map(S) ⇒ dc(nil, S : map(S)) (1). The right-hand side of this definition means that whenever the strategy map with an argument S (i.e. map(S)) is applied to a term t, either t is nil, or the strategy S is applied on the head of t (i.e. t should be a non-empty list) and map(S) is further applied on the tail of t.
This strategy definition can also be formulated using the strategy application symbol: map(S) nil ⇒ nil (2), map(S) (a:as) ⇒ (S) a : (map(S)) as (3). The difference relies on the fact that the list which the functional map is applied on is an explicit argument in the second (or third) definition, while in the first one it is implicit.
Rule (1) applied without care may lead to infinite computations, for example when taking a left-most inner-most reduction strategy. This justifies the concept of meta-strategies, i.e. strategies which control the execution by rewriting of defined strategies. Such a strategy can be either constructed by enriching the strategy interpreter, or explicitly defined by the user from labelled rules on strategies.
A rule like map(S) ⇒ dc(nil, S : map(S)) defines the strategy map recursively. Indeed, other rules involving map can be written, to express in particular some equivalences on defined strategies, for instance the distributivity of map over the concatenation ';' or its application to the identity: map(S1) ; map(S2) ⇒ map(S1 ; S2) and map(id) ⇒ id. Moreover, properties of strategies can be described by rewrite rules such as: id ; S ⇒ S, S ; id ⇒ S, first(S, S) ⇒ S, dk(S, S) ⇒ S. In ELAN, a strategy definition is given by: a set of strategy symbols F_D with signatures d : s1, ..., sn → ⟨s′ ↦ s″⟩, where s1, ..., sn are the strategy sorts of the arguments, and ⟨s′ ↦ s″⟩ is the sort of the result.
A finite set of rewrite rules, either in an implicit form d(S1, ..., Sn) ⇒ S, or in an explicit form (d(S1, ..., Sn)) x ⇒ (S) x, where d ∈ F_D, the Si for i = 1, ..., n are elementary strategies, x : s′ is a new variable in X, and S is a term built from defined strategy symbols, elementary strategy symbols, rule labels and variables.
The interpreter for elementary strategies is expanded, in both cases of the previous definition, with new rules labelled with DSTR: [DSTR] (d(S1, ..., Sn)) x ⇒ (S) x. Different evaluation modes for strategy application are offered via different possibilities for labels. An unlabelled rewrite rule on strategies is applied just as unlabelled rules on terms, in an innermost-outermost way. A rewrite rule on strategies carrying the special lazy label is applied in a lazy way using the built-in strategy of the interpreter of strategies, described in [BK97]. A rewrite rule on strategies labelled by ℓ is applied just as labelled rules on terms, with a user-defined strategy built from these labels.
Example 1. The definition of the strategy map is for instance given by the declaration of the strategy symbol map : ⟨s ↦ s⟩ → ⟨list(s) ↦ list(s)⟩ and by the rule map(S) ⇒ dc(nil, S : map(S)), where S : ⟨s ↦ s⟩ and s ∈ S. The following rule, [DSTR] (map(S)) x ⇒ (dc(nil, S : map(S))) x, is added to the strategy evaluator.
This way of expressing strategies is extremely powerful and can be used for defining simple strategies like the following ones, where the variables in use have the sorts x : ⟨s ↦ s⟩, xs : ⟨list(s) ↦ list(s)⟩ and ys : ⟨u ↦ v⟩.
iterate : ⟨s ↦ s⟩ → ⟨s ↦ s⟩, iterate(x) ⇒ dk(x ; iterate(x), id)
repeat : ⟨s ↦ s⟩ → ⟨s ↦ s⟩, repeat(x) ⇒ first(x ; repeat(x), id)
map1 : ⟨s ↦ s⟩ → ⟨list(s) ↦ list(s)⟩, map1(x) ⇒ first(nil, x : map1(x))
map2 : ⟨list(s) ↦ list(s)⟩ → ⟨list(s) ↦ list(s)⟩, map2(nil) ⇒ nil, map2(x:xs) ⇒ x : map2(xs)
The iterate strategy differs from repeat in that it returns all intermediate forms obtained during the normalisation of a term by the application of a strategy x, while repeat returns only the last one (i.e. the normal form). The strategy map1 applies a fixed strategy x on all elements of a list and produces a new list of transformed elements. The strategy map2 is driven by a list of strategies which are respectively applied to the elements of a list of the same length.
A search space library
Describing deduction processes is very convenient using computational systems, which have indeed been created for this purpose. In the earlier version of the ELAN language, strategy definition was only possible using nullary strategy operators. This was already quite powerful, and we have used this first version of the system to describe (and execute) several non-trivial computational systems. In particular, we have shown in [KM95] how saturation-based theorem provers like completion can be easily and efficiently implemented using rewrite rules and strategies. We have also implemented a mechanical theorem prover for first-order predicate calculus [BKK96a] and, more recently, the predicate prover of the B system [CK97].
We are now using the full power of user defined strategies for specifying a strategy library that provides primitives for depth-first search in a computation tree.Two natural applications of this strategy library are examples of a winning strategy for a tic-tac-toe game, and a guided strategy helping to find out a path (or, all paths) towards an exit of a labyrinth.
The strategy library supports four primitives for depth-first search. The current situation of the partially discovered search tree is represented by a path from the initial state (the root of the tree) to the current state (the left-most leaf). It also remembers all possible alternatives (or choice points) met along this path. Each step of the path is represented by a list of states, where its head is the currently chosen state, and the rest represents all non-explored alternative states. The limitation of this approach is that the number of successive (alternative) states of any state should be finite, i.e. the search tree has a finite degree. Each step of the path is a list of states; thus, the whole path can be naturally represented as list(list(state)). This sort also describes the current situation during the depth-first search in this tree. We define four primitive strategies over the sort list(list(state)), which develop and maintain this search tree. Their intended semantics are the following: call(S) applies the strategy S on the current state, where S is a strategy transforming a state into a state, i.e. of the sort ⟨state ↦ state⟩. All possible results of this application, if there are any, are formed into a new step (level) attached to the current path.
next throws away the current state and continues searching with the next possibility of the previous step (the latest choice-point).
exit leaves the current state ignoring all alternatives and it returns the control to the previous step of the current path.
cut eliminates all alternative states of the latest step.
For a better understanding of the definitions in Figure 1, let us recall that on the left-hand sides, the variable state represents the current state and the variable level represents all alternatives of the current state, i.e. state:level : list(state) is one step of the path, where the other steps are recorded in the variable space. The only non-trivial strategy among the four primitives is the strategy call(S), which uses the function symbol 'set of' to collect all results of an application of a strategy S.
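The following Python sketch (hypothetical names, outside ELAN) mirrors the intended semantics of the four primitives, representing the search path as a list of levels where each level is a list of states with the currently chosen state at its head:

def call(strategy, path):
    # Attach a new level holding all results of applying `strategy` to the
    # current state (the head of the deepest level); None signals failure.
    current = path[-1][0]
    results = strategy(current)        # strategy: state -> list of states
    if not results:
        return None
    return path + [results]

def next_alternative(path):
    # Throw away the current state and continue with the next alternative
    # of the latest choice point.
    *rest, level = path
    if len(level) <= 1:
        return None
    return rest + [level[1:]]

def exit_level(path):
    # Leave the current state, ignoring all its alternatives, and return
    # control to the previous step of the path.
    if len(path) <= 1:
        return None
    return path[:-1]

def cut(path):
    # Eliminate all alternative states of the latest step.
    *rest, level = path
    return rest + [level[:1]]

# Example: expand a single root state with a toy successor strategy.
path = [[0]]
path = call(lambda s: [s + 1, s + 2], path)   # -> [[0], [1, 2]]
print(path, next_alternative(path), cut(path))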
Tic-tac-toe game
The use of the depth-first search library is illustrated by the definition of a game winning strategy for a simple game with a finite search space. The solution described below is split into game-dependent and game-independent parts. The game-independent part does not rely on the rules of a particular game. These particularities are hidden in the game-dependent kernel.
The particular game chosen for this example is the tic-tac-toe game with a play-board of size 3 × 3. Two players with different signs alternately put one sign on the board. The winner is the player who succeeds in having 3 signs in the same column, row, or diagonal.
The game-dependent part defines the state (Figure 2: a state of the tic-tac-toe game) as a pair encoding in its first component the sign of the next player and in its second component the play-board. There are three basic strategies win, put(row, column) and moves, all of the sort ⟨state ↦ state⟩. nxt is a function that switches players. Their definitions are given in a straightforward exhaustive way; thus, we show only a small part of them in Figure 3.
The strategy win : ⟨state ↦ state⟩ tries to put the last sign and to win the game. It fails if there is no correct winning move. The deterministic strategy put(row, column) puts the sign on the field with coordinates row and column, if it is possible. The non-deterministic strategy moves generates all possible moves (according to the game rules) from the current situation.
The game-independent part specifies the winning strategy movetowin using an auxiliary strategy findloosing in Figure 4. The strategy movetowin first of all tries to apply the basic strategy win, searching for a winning move. This application is realised by calling the library primitive call(win). It succeeds if there is a winning move. Otherwise, the strategy movetowin produces the set of all possible moves by call(moves). These successive possibilities are filtered by the strategy findloosing, removing all winning configurations of the adversary. If there is at least one possible move such that the adversary's configuration is not winning, the strategy findloosing succeeds. Otherwise, the strategy findloosing fails, and movetowin fails also. These two mutually recursive strategies represent the kernel of the game-independent part. The winning strategy movetowin may be encapsulated by a strategy guess, which either proposes to the player a winning move, or, if there is no winning move, proposes one of the non-winning configurations chosen with a heuristic strategy. The strategy heuristic, saying what to do if we are not in a winning configuration, depends on the particular game and some defensive tactic. When no specific heuristic is proposed, it can be replaced by the strategy id. Figure 5 illustrates the use of these strategies on a small game session. With a similar encoding of the game state and rules, we can also specify other games, such as the nim-game in [CELM96].
Labyrinths
Another application of our strategy library is a definition of a strategy helping to find out an exit from a labyrinth.An alternative re-formulation of this problem could be to find a path (or, all paths) between two nodes of a graph.As in the previous section, we split the solution into a part dependent on the labyrinth and an independent one.
The dependent part contains a specification of a state, which is a pair of coordinates, and a definition of four basic moves in the labyrinth. They are realized as state-transforming strategies left, right, up and down of sort ⟨state ↦ state⟩, such that these strategies fail if some movement in a given state is not possible (because of a wall). All exits of the labyrinth are specified by a strategy exitable, which fails if the current state has 'no exit doors'. Otherwise, it is equivalent to the identity strategy.
Conclusion
The definition of the strategy language described in this paper is related to a view of strategies in reflective logics (in particular, rewriting logic) developed in [CM96]. From this point of view, the strategy language described here can be classified as an internal strategy language, whose semantics and implementation are described in the same logic, while the strategy language available in the first distribution of ELAN, described in [Vit94, Vit96], can be classified as a built-in one.
Another question which comes with the reflective approach is how to control computation at the meta-level.
Roughly speaking, there are two levels of rewriting: the object (or first-order term) level, and the meta-level that controls the object level. The first solution is that the meta-level computation (i.e. the evaluation of strategies) is controlled by a built-in strategy. But computation at the meta-level can also be controlled by meta-strategies. Using a reflective logic allows using the same formalism at all levels, which might be viewed as an advantage of a reflective approach.
From the specification point of view, writing programs in ELAN forces one to think about the actions and their control in the same unified way, thus enforcing an equation of the form "Program = Logic + Control Logic". What needs to be done is to develop a methodological way to construct programs in this manner in order to facilitate the use of these concepts by programmers.
The capability offered by this definition of strategies by rewrite rules leads also to a new approach to strategies compilation.This is a promising direction of our current research.
Figure 4: A winning strategy | 5,345.2 | 1997-01-01T00:00:00.000 | [
"Computer Science",
"Mathematics"
] |
Potentiometric Biosensing of Ascorbic Acid, Uric Acid, and Cysteine in Microliter Volumes Using Miniaturized Nanoporous Gold Electrodes
Potentiometric redox sensing is a relatively inexpensive and passive approach to evaluate the overall redox state of complex biological and environmental solutions. The ability to make such measurements in ultra-small volumes using high surface area, nanoporous electrodes is of particular importance as such electrodes can improve the rates of electron transfer and reduce the effects of biofouling on the electrochemical signal. This work focuses on the fabrication of miniaturized nanoporous gold (NPG) electrodes with a high surface area and a small footprint for the potentiometric redox sensing of three biologically relevant redox molecules (ascorbic acid, uric acid, and cysteine) in microliter volumes. The NPG electrodes were inexpensively made by attaching a nanoporous gold leaf prepared by dealloying 12K gold in nitric acid to a modified glass capillary (1.5 mm id) and establishing an electrode connection with copper tape. The surface area of the electrodes was ~1.5 cm2, providing a roughness factor of ~16 relative to the geometric area of 0.09 cm2. Scanning electron microscopy confirmed the nanoporous framework. A linear dependence between the open-circuit potential (OCP) and the logarithm of concentration (e.g., Nernstian-like behavior) was obtained for all three redox molecules in 100 μL buffered solutions. As a first step towards understanding a real system, the response associated with changing the concentration of one redox species in the presence of the other two was examined. These results show that at NPG, the redox potential of a solution containing biologically relevant concentrations of ascorbic acid, uric acid, and cysteine is strongly influenced by ascorbic acid. Such information is important for the measurement of redox potentials in complex biological solutions.
Introduction
Potentiometry is an important electroanalytical technique routinely used to measure the concentration (activity) of small ions (e.g., H + , K + , F − ) in solution [1]. In contrast to other electrochemical sensing techniques such as chronoamperometry, differential pulse voltammetry (DPV), or cyclic voltammetry (CV), no current flows. As a result, there is no perturbation to the interface, and no changes in the chemical composition of the sample take place. It is a quick and passive measurement technique that requires minimum instrumentation and can be easily miniaturized for field portability [1][2][3][4][5][6]. Billions of potentiometric measurements are likely made each year using traditional membrane-based ion-selective electrodes.
Redox potentiometry using metallic redox electrodes, albeit less studied, has also been shown to be a useful tool to evaluate the redox properties of complex environmental or biological samples and determine the concentration of small molecules [1,[7][8][9]. In this experiment, the open-circuit potential (OCP) or zero-current potential of an indicating redox electrode (E_Ind) (i.e., an inert electrode such as gold or platinum) is measured with respect to a reference electrode (E_ref) using a high-impedance voltmeter, such that E_measured = E_Ind − E_ref. For E_measured to reflect what takes place at the indicator electrode, the potential of the reference must remain constant. Unlike traditional membrane ion-selective electrodes (ISEs) [1][2][3][4][5][6], the redox electrode is not specific to the ion for which it was created but rather can collectively respond to any number of species present in the solution. The measurement is particularly sensitive to the concentration of the redox species present in solution, the rate of electron transfer between the redox species and the electrode, the electrode material (e.g., gold, platinum, etc.), and its surface composition [7][8][9][10]. Because the measured potential is often a mixed potential [11,12], it can make the interpretation and understanding of these types of OCP measurements challenging. Nevertheless, redox potentiometry has been successfully used to evaluate the overall redox state of complex samples. Examples include water and soil samples [7,[13][14][15][16][17][18], blood and blood products [19][20][21][22], milk [23], tea fermentation [24], fish [25,26], and cheese making [27]. Under certain conditions, the direct potentiometric determination of small molecules and ions such as hydrogen peroxide [28,29], oxygen [30,31], phosphate [32], and ascorbate [33,34] can be achieved using different metal electrodes and/or polymer coatings to enhance specificity and sensitivity. In more recent work, gold gate field-effect transistor (FET) based sensors for biomolecules have also been reported [35,36]. OCP measurements have also been recently shown to be a promising tool to sensitively detect nanoparticle collision events [37][38][39][40][41], single emulsion droplet collisions at an interface between two immiscible solutions [42,43], and, more recently, enzyme kinetics [44].
While all these studies have demonstrated the promise of redox potentiometry with metallic electrodes, a greater understanding of such measurements and their adaptability to complex biological samples need to be made. Planar platinum electrodes, which are commonly used in redox potentiometry, may not be the optimum electrode to make these measurements due to the propensity for their surfaces to be biofouled [45,46]. Rather, a better choice for redox potentiometry studies in complex biological samples would be an appropriately nanostructured, high surface area electrode known to reduce the effects of biofouling on the electrochemical response as well as improve electron transfer rates of small molecules. While a potentiometric response is typically independent of electrode area [44], electrode areas can indirectly influence the measurement, particularly in complex solutions where the electrode surface is easily fouled or passivated. This passivation can reduce the rate of electron transfer between the electrode and the redox species and result in a loss in sensitivity, most notably at low concentrations [47]. Having plenty of places for electron transfer to take place that is provided by a nanoporous, high surface area electrode, for example, could minimize the impact of biofouling on the potentiometric response. By keeping the footprint of the electrode small, it can be used to measure the redox properties of small sample volumes.
Nanoporous gold (NPG) electrodes prepared by dealloying gold leaf have proven to be promising materials for electrochemical measurements due to their high surface area and ability to increase electron transfer rates for kinetically sluggish reactions [48][49][50]. They have also been shown to be a useful tool to make electrochemical measurements in complex solutions containing known biofouling agents [51,52]. This occurs through a unique biosieving-like mechanism where small redox molecules can enter into the nano-sized pores to exchange electrons while much larger proteins cannot [51]. Nanosized pores are advantageous because they restrict larger biomolecules from entering the framework. However, in some cases, nanopores can be a disadvantage due to mass transport limitations [53] and the correct balance between too big and too small must be made.
While NPG electrodes have been frequently used in amperometric and voltammetric experiments for the detection of small analytes [54][55][56][57][58][59][60], they have been much less studied in redox potentiometry even though they have many promising characteristics. In recent work, they have been used to measure the redox potential of human plasma [47] and red blood cell packets [20]. However, very little is known about what and how many redox species present in these complex solutions influence the measured potential. As a first step toward understanding the potentiometric response in complex solutions and the measurement of mixed potentials, in general, we report on the potentiometric sensing of three small, biologically important redox-active molecules (ascorbic acid, cysteine, and uric acid) as well as a mixture of all three using NPG electrodes. These three antioxidants play an important role in oxidative stress [61], are readily oxidized at metallic electrodes, and are present in plasma and blood. They are believed to be the three major components in blood that could contribute to the measured redox potential; thus, it is important to evaluate their potentiometric behavior individually and then collectively at NPG electrodes.
When working with biological samples, there is a desire to keep the electrode small to minimize the need for large sample sizes. Thus, our experiments begin with the fabrication and validation of a miniature, high surface area nanostructured electrode for redox potentiometric measurements in small volumes of solution. While NPG microelectrodes have been made [62][63][64], these electrodes have a comparably smaller overall electrode area even after taking into account the large roughness factor. The electrodes fabricated herein are ~2 mm in diameter but have a large electrochemically active area that is similar to a standard-sized electrode. These electrodes can be used to measure the redox potential (OCP) in 100 µL volumes of the solution with little effort and inexpensive instrumentation. Because no net current flows in a potentiometric experiment, analyte depletion is not a problem. The proper function of these electrodes in the potentiometric mode was first verified using a poised redox system, potassium ferri/ferrocyanide. Next, the potentiometric response of three biologically important redox molecules (ascorbic acid, uric acid, and cysteine) was evaluated. The three-dimensional cylindrical geometry of the electrodes allowed all measurements to be conducted in a very small volume of solution (100 µL). Nernstian responses were observed for these three redox species. The slope or sensitivity of the calibration curve was ~30 mV per 10-fold change in concentration. For the first time, we also describe OCP measurements of a mixed solution containing biologically relevant concentrations of ascorbic acid, uric acid, and cysteine. The results demonstrate that in this mixture of analytes, the experimental OCP is controlled by ascorbic acid, the bioreagent that is most easily oxidized.
NPG electrode fabrication: Glass hematocrit capillary tubes (OD 1.6 mm) were flame sealed and then sonicated for 10 min in ethanol followed by deionized water for 10 min and dried with N2 gas. Glass capillary tubes were then O2 plasma cleaned using a Southbay PE 2000 plasma etcher at 20 W for 5 min by placing the tubes into a clean glass vial with the desired modification end face-up in the stainless-steel chamber. Cleaned tubes were then soaked with the plasma-cleaned side facing down in a 10 mM (3-mercaptopropyl) trimethoxysilane (MPTMS) in hexanes solution for 1 h in a 60 °C water bath. The capillary tubes were then rinsed with hexanes and left to dry. Manetti 12K gold leaf (FineArtStore, a book of 25 loose leaves) was chemically dealloyed in concentrated nitric acid for 13 min and then floated on DI water twice for ~5 min. Conducting copper (Cu) tape (Ted Pella) was attached to the MPTMS-modified tube leaving the functionalized end exposed (~5 mm), which was then used to capture one dealloyed leaf square, ensuring the gold contacted both the copper lead and the functionalized glass covering the rounded tip. The dealloyed gold was dried with nitrogen and further cleaned by UV radiation (254 nm, 20 W) for 24 h by placing the electrode face up in a home-built box ~10 cm from the UV source. For testing, NPG electrodes were wrapped securely with Parafilm leaving 1.5 mm of nanoporous gold at the end of the capillary.
Potentiometric Measurements: Unless otherwise noted, open-circuit potential (OCP) measurements were conducted in a two-electrode cell with the NPG electrode and an AgCl-coated Ag wire (reference electrode; 0.1 M KCl) using a CH Instruments 1200A potentiostat or a Metrohm Autolab multichannel potentiostat. A 1.5 mL centrifuge tube cut at the 0.5 mL mark served as the electrochemical cell (Figure S1). The solution volume was 100 µL. A micro flea stir bar was used to quickly stir the solution. To minimize evaporation, the cell was covered with Parafilm. Surface area measurements were undertaken via cyclic voltammetry (CV) in 0.5 M H2SO4 using a traditional three-electrode cell. The surface morphology of the dealloyed NPG electrodes was examined using a field emission scanning electron microscope (FE-SEM, HITACHI SU-70). X-ray photoelectron spectroscopy (XPS) using a monochromatic Al Kα (1486.68 eV) X-ray source (ThermoFisher ESCA lab 250) with a beam size of 0.5 mm, pass energy of 20 eV, and step size of 0.1 eV was used to determine the percent Ag remaining after dealloying. Samples were cleaned using an Ar plasma to remove any possible contaminants. Data were analyzed using Avantage software and Wagner sensitivity factors.
Electrode Fabrication
One simple and cost-effective approach to prepare NPG involves chemical dealloying commercially obtained 12 K gold leaf in nitric acid and capturing the dealloyed gold leaf on a conducting substrate (e.g., a gold-coated slide) to make the electrode [54]. However, the gold slides are expensive, millimeters in size, and have limited optical transparency. In addition, mL volumes are needed to make electrochemical measurements with these electrodes. It would be beneficial to have NPG electrodes with a high surface area (similar to a traditional electrode) but a small footprint so that redox potential measurements in small volumes can be easily made.
In this report, we describe an approach to decrease the cost as well as reduce the size of the electrodes so that rapid potentiometric measurements in small sample volumes can be made. Such electrodes could be used to measure the redox potential of small volumes of complex chemical systems such as blood products or a collection of electrodes can be used collectively to map heterogeneity in a real sample. With this in mind, we developed a procedure to capture and adhere dealloyed gold leaf on modified glass capillary tubes (1.6 mm OD) using conducting copper tape for the electrical connection. This procedure is depicted in Figure 1a and the electrochemical cell is shown in Figure 1b. A glass capillary tube is first modified with mercaptosilane, which serves as the glue to improve the adhesion of the fragile, dealloyed gold leaf to its surface [65]. To define the geometric area and prevent the solution from contacting the copper tape, Parafilm is stretched and sealed around the capillary tube, or epoxy is used.
A high-resolution SEM image shown in Figure 1a depicts the nanoscale morphology of the gold electrode. Characteristic nanosized pores and ligaments are noted. The pore sizes range from about 10-50 nm. The geometric area of the electrode is 0.09 cm2, which is obtained from the diameter of the glass capillary and the length of the exposed gold. The total electrochemically active surface area was obtained by recording a CV in sulfuric acid and measuring the charge associated with the gold oxide peak, Figure S2. Using a conversion factor of 386 µC/cm2, the total electrode area is ~16 times larger than the geometric area [54]. The electrodes have 8.56 ± 1.65% Ag (3 different electrodes, 3 points acquired on each electrode) after dealloying as measured by XPS. The XPS spectra for Ag and Au are shown in Figure S3. It is known that residual Ag can improve electrocatalytic activity in specific cases [66]. At this time, we do not believe the silver remaining after dealloying influences the OCP when the solutions have relatively high concentrations of redox molecules that exhibit relatively fast kinetics and are not overly surface sensitive. However, variations in the amount of residual silver will likely influence the OCP obtained in buffer, as this is an unpoised mixed system and the molecules that likely contribute to the measured potential are surface sensitive. Future experiments will help tease out the importance of residual silver on redox potentiometry experiments.
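As a quick illustrative check (the oxide-reduction charge below is a hypothetical value, not the measured one), the active area and roughness factor follow directly from the gold-oxide charge, the 386 µC/cm2 conversion factor and the 0.09 cm2 geometric area quoted above:

CONVERSION_UC_PER_CM2 = 386.0    # charge per unit area of reduced gold oxide
GEOMETRIC_AREA_CM2 = 0.09        # from capillary diameter and exposed length

def roughness_factor(oxide_charge_uC):
    # Active area (cm^2) and roughness factor from the measured charge.
    active_area = oxide_charge_uC / CONVERSION_UC_PER_CM2
    return active_area, active_area / GEOMETRIC_AREA_CM2

# A charge of ~580 µC would reproduce the reported ~1.5 cm^2 active area
# and a roughness factor of ~16.
area, rf = roughness_factor(580.0)
print(round(area, 2), "cm^2, roughness factor", round(rf, 1))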
Potentiometric Response
To verify that the newly fabricated NPG electrodes behave in 100 µL solution volumes according to the Nernst equation, a standard oxidation-reduction potential (ORP) calibrant was used: potassium ferricyanide ([Fe(CN)₆]³⁻) and ferrocyanide ([Fe(CN)₆]⁴⁻) in 0.1 M phosphate buffer solution containing 0.1 M KCl [7]. Both the NPG working electrode and the Ag/AgCl wire reference electrode were placed in 5 mM [Fe(CN)₆]³⁻ and the OCP was measured. After a predetermined amount of time for equilibration (typically 100 s), a small aliquot of the reduced form, [Fe(CN)₆]⁴⁻ (24 mM), was added to the sample vial while stirring. Immediately after addition, the potential changes to a more negative value and quickly equilibrates. At this point, the solution becomes poised because both forms of the redox couple are present at reasonable concentrations; the resulting potential should be stable and predicted by the Nernst equation. The average value of the OCP was calculated from the last 15 s before the next addition.

Figure 2a shows the OCP-time trace after successive additions of 2-5 µL aliquots of [Fe(CN)₆]⁴⁻ to the [Fe(CN)₆]³⁻ receiving solution already present in the electrochemical cell. The OCP vs. time trace is very stable, as expected for a poised system. After taking into account dilution factors, the concentration ratio of the reduced to the oxidized form ([Fe(CN)₆]⁴⁻/[Fe(CN)₆]³⁻) was calculated for each addition. The logarithm of the concentration ratio of the two species was used to construct the Nernst plot shown in Figure 2b. The slope was calculated from linear regression (R² = 0.9965) on three data sets (three different electrodes), represented as three different symbols in Figure 2b. The slope was −58.3 ± 0.9 mV with a y-intercept of 146.0 ± 0.5 mV (N = 3). The slope indicates a one-electron transfer system, which is expected for the ferro-ferricyanide redox couple behaving in a Nernstian fashion (−59.2 mV). Because this is a poised redox system that follows the Nernst equation, the y-intercept is the formal potential of the redox couple. The y-intercept of 146 mV vs. the Ag/AgCl reference electrode in 0.1 M KCl agrees with the expected value of 142 mV for the formal potential of the ferricyanide-ferrocyanide redox couple [7]. We and others have observed similar results for this standard redox system [65,67].
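As a minimal sketch of how such a Nernst plot is constructed (the ratios and potentials below are illustrative stand-ins consistent with a Nernstian one-electron couple, not the data behind Figure 2b):

```python
import numpy as np

# Regress the measured OCP against log10([Fe(CN)6^4-]/[Fe(CN)6^3-]);
# the slope should be close to -59.2 mV/decade for a 1e- couple at 25 C.
ratio = np.array([0.1, 0.2, 0.5, 1.0, 2.0])    # [red]/[ox] after dilution (assumed)
ocp_mV = np.array([205, 187, 164, 146, 128])   # measured OCP, mV (assumed)

slope, intercept = np.polyfit(np.log10(ratio), ocp_mV, 1)
print(f"slope     = {slope:.1f} mV/decade (theory for 1e- at 25 C: -59.2)")
print(f"intercept = {intercept:.1f} mV (formal potential vs. Ag/AgCl)")
```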
Once the electrodes were validated using a well-behaved redox couple, the response of the electrodes to three biologically relevant redox molecules (ascorbic acid (AA), cysteine (Cys), and uric acid (UA)) was investigated. These systems are more complicated than ferri- and ferrocyanide, not only because the electrochemistry is more complex but also because these solutions contain only one form of the redox couple. Ascorbic acid is an essential vitamin found in both the animal and plant kingdoms and plays an important role in the prevention and treatment of the common cold, mental illness, infertility, cancer, and AIDS [68]. Purine metabolism releases uric acid as a primary end product, and abnormalities in UA can lead to gout, hyperuricemia, and Lesch-Nyhan disease [69]. On the other hand, cysteine is an important thiol-containing amino acid and plays a role in protein synthesis as well as in the food, cosmetic, and drug industries [70].
All three of these analytes are also important antioxidants found in blood and blood products and are believed to play a key role in blood redox potential measurements. Most electrochemical experiments on these important bioreagents rely on chronoamperometry or voltammetry, which involve the application of a potential and the measurement of current [57][58][59]. However, adsorption and biofouling strongly influence voltammetric measurements, while potentiometry is much less affected [67]. Therefore, the potentiometric response to AA, UA, and cysteine, first in individual solutions and then in mixtures, has been studied. The experiment begins with the addition of small aliquots of either 2.6 mM AA, 4.6 mM Cys, or 9.9 mM UA (all in phosphate buffer) to 100 µL of 0.1 M phosphate buffer (pH = 7.4). The OCP initially measured in the buffer is a mixed potential because no defined redox couples are present in the solution. Upon the addition of a redox molecule, the OCP abruptly changes and establishes a new value. Because only one form of the redox couple is present, the potential cannot be calculated using the Nernst equation and the y-intercept is not equivalent to the redox formal potential. However, linearity between the OCP and the logarithm of the concentration is expected, provided the pH does not change (a buffer is used in this work) and the concentration of the other form of the redox couple is finite and constant [34]. Equations (1)-(4) demonstrate this relationship for uric acid at pH > pKa, starting from the half reaction

Urate ion (UA) → Diimine (DI) + H⁺ + 2e⁻  (1)

for which the Nernst equation reads

E = E° − (RT/2F) ln([UA]/([DI][H⁺]))  (2)

E = E° − (RT/2F) ln[UA] + (RT/2F) ln[DI] + (RT/2F) ln[H⁺]  (3)

Making the assumptions that the formal potential (E°) and the pH stay constant (the solution is buffered) and that [DI] is small and constant, Equation (3) reduces to the following:

E = constant − (2.303 RT/2F) log[UA]  (4)

The OCP-time traces are shown in Figure S4 for each of the three biological redox molecules. The first addition results in the largest shift in the OCP because the solution changes from background electrolyte to a dilute solution of that particular redox species. Each successive addition results in a smaller change in the OCP, as expected. In all cases, the OCP becomes more negative as the concentration of the analyte (reductant) increases. The potential reaches a near-constant value as pseudo-equilibrium is reached. Because these molecules have moderately fast electron transfer kinetics at nanoporous gold (see below), the OCP-time traces stabilize once sufficiently high concentrations are present in the solution.

Figure 3 shows the expected logarithmic dependence of potential on concentration obtained from the OCP-time traces for each of the three redox species. Again, different symbols represent different electrodes and show the reproducibility of the method. It can be noted that the reproducibility between electrodes is very good and all the redox probes show a linear response between potential and the logarithm of concentration. The experimental slopes are all consistent with a 2e⁻ process. For ascorbic acid (Figure 3a), an experimental slope of −31.3 ± 0.4 mV (N = 3) was obtained over the concentration range of 50 µM to 2 mM; for cysteine (Figure 3b) it was −26.5 ± 0.6 mV (N = 3) over a concentration range of 100 µM to 1.5 mM; for uric acid (Figure 3c), a slope of −29.0 mV (N = 2) was obtained from 100 µM to 2 mM. The slope of these graphs represents the sensitivity of the measurement, which is near the theoretical limit.
For ascorbic acid, similar sensitivities were noted using single-walled carbon nanotube carbon fiber electrodes, and in microdroplet solutions and solutions containing fibrinogen using NPG [34,65]. The detection limit is estimated to be around 20-50 µM, based on when the potential changes from one that is controlled by an added redox molecule to one that is controlled by background processes [38]. Overall, the detection sensitivity is similar to other redox potentiometry experiments [34], but not as good as that recently reported using potentiometric FET sensors (e.g., nM for cysteine) [36] or differential pulse voltammetry using graphene/Pt nanocomposite electrodes (0.03-0.15 µM) [60].
Potentiometric Response in a Mixed Solution

When multiple redox species are present in the solution at the same time, the interpretation of the OCP becomes more complex. The measured potential depends on many factors, including the formal potential, concentration, and electron transfer kinetics of each redox species. Electron transfer rates can also be sensitive to the electrode material [71]. One promising application of OCP measurements involves the potentiometric measurement of the redox potential of a complex solution such as blood or plasma [20]. A shift in the redox potential toward more positive potentials from baseline levels would be indicative of oxidative stress. However, the meaning of the measured potential is complicated because multiple redox species with varying concentrations and formal potentials are present in the solution. As a first step towards understanding a real biological system, we investigated the response to changing the concentration of one redox species, ascorbic acid, in the presence of the other two redox species, cysteine and uric acid. These three redox species are present in either plasma or blood at levels of 23-85 µM for AA, 50-100 µM for Cys, and 150-470 µM for UA [72].
In this experiment, cyclic voltammograms (CVs) need to be collected to identify the potential at which each species becomes oxidized. To do this, a standard-sized NPG electrode in a standard electrochemical cell must be utilized, as these allow CVs to be collected at slow scan rates without concern for depletion of reagents or radial diffusion. Figure 4 shows the CVs acquired at 50 mV/s at an NPG electrode in a separate solution containing either 0.5 mM AA, UA, or Cys in pH 7.4 phosphate buffer. Separate solutions and separate electrodes were used because all three reagents and/or their oxidation products can adsorb on the electrode [58]. It is evident in these CVs that each redox species can be oxidized at the NPG electrode with similar electron transfer kinetics. Upon scanning the potential in the positive direction from −0.2 V, AA is oxidized first, followed by Cys and then UA. When all three redox species are present in the same solution, the peak potentials shift to more positive values by ~50 to 100 mV, indicative of adsorption on the electrode surface. It can be noted that ascorbic acid is still the first to be oxidized upon scanning the electrode potential to more positive values, followed by cysteine and then uric acid.
Based on thermodynamics alone, the OCP of the mixed solution containing these three redox species is predicted to be negative of the formal potential of AA (Figure 5 inset) [73]. Furthermore, the OCP should be determined by AA at reasonable concentrations. To evaluate this hypothesis, a potentiometric experiment was performed in which AA was added to an initial solution containing both 100 µM Cys and 300 µM UA in 0.1 M pH 7.4 phosphate buffer. These concentrations were chosen because they represent the near-physiological concentrations of Cys and UA in blood. The OCP time-trace for the additions of AA (also prepared in 100 µM Cys and 300 µM UA in phosphate buffer to ensure that the concentrations of Cys and UA stay constant) is shown in Figure 6a. The initial OCP starts off fairly negative of the values typically obtained in phosphate buffer because of the presence of Cys and UA in the receiving solution. The addition of AA to this solution in known increments results in a negative shift in the OCP, as expected for the addition of a known reducing agent. With each addition, the OCP becomes more negative; the concentration of AA in the solution is 0.19 mM after the first addition and 0.37 mM after the second.

It can be seen in Figure 6b that the OCP changes logarithmically with the concentration of AA. The slope was −43 ± 0.8 mV, which is a little higher than expected but consistent with earlier results for AA [47,65]. When the reverse experiment is done, where Cys is added to a solution containing 10 µM AA and 300 µM UA, no clear steps are observed with each subsequent addition of Cys. These results are shown in Figure S5. Rather, the OCP drifts but never really stabilizes. Since the OCP is determined by AA and AA is present in the initial solution, the potential should stay approximately constant at its initial value when small amounts of either UA or Cys are added. In this situation, the potential is 'poised' by AA, which is conceptually analogous to the pH of a buffer being set or 'poised' by the presence of a conjugate acid/base pair: the addition of a small amount of base or acid will not change the pH.
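The stepwise AA concentrations quoted above follow from simple dilution bookkeeping. A minimal sketch (the aliquot volumes are assumptions chosen to reproduce 0.19 mM and 0.37 mM; the paper does not state them explicitly):

```python
# Track [AA] in the 100 µL cell after successive aliquots of 2.6 mM AA stock.
# The stock also contains 100 µM Cys and 300 µM UA so those stay constant.
V = 100.0            # cell volume, µL
stock_AA = 2.6       # stock concentration, mM
n_AA = 0.0           # amount of AA, in nmol (mM * µL)

for i, aliquot in enumerate([7.9, 8.7], start=1):  # aliquot volumes in µL (assumed)
    n_AA += stock_AA * aliquot
    V += aliquot
    print(f"after addition {i}: [AA] = {n_AA / V:.2f} mM")
# -> 0.19 mM, then 0.37 mM
```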
Similar results to those described herein were previously observed, albeit with a different set of analytes and electrode material [34]. Those authors showed that the OCP of a SWNT carbon fiber electrode significantly changed when AA was added to an equimolar solution containing four redox-active analytes: dopamine, uric acid, serotonin, and dihydroxyphenylacetic acid (DOPAC). However, when these redox probes (dopamine, uric acid, serotonin, and DOPAC) were instead added to a PBS solution containing AA, no significant change in the OCP was observed. All these redox molecules have similar electron transfer kinetics and all are also oxidized at potentials more positive of AA. The authors state that the presence of AA decreases the sensitivity of the electrode to the concentration of coexisting neurotransmitters, and they explain it in terms of self-driving forces [34]. They also demonstrate how this simple system can be used for in vivo sensing of AA in live animals [33,34]. The net result of these studies is that the OCP is controlled by AA.
Conclusions
Redox potential measurements can provide important information regarding the overall redox state of a sample, whether it be a relatively simple solution or a complex sample such as blood products or sediments. The present work demonstrates that nanoporous gold (NPG) electrodes can successfully respond in a Nernstian-like fashion to biologically important redox species such as AA, UA, and Cys. The NPG electrodes were inexpensively made starting from white gold leaf and glass capillary tubes. Because of the small size of the electrodes, measurements can be made in sub-milliliter volumes, which bodes well for future testing where only small sample volumes are available. The measurement of the redox potential of a mixed buffered solution containing near-physiological concentrations of AA, UA, and Cys indicates that the concentration of AA is the potential-determining factor, even when AA is present at a low concentration. These redox molecules have similar kinetics at NPG but are oxidized at more positive potentials than AA.
Compared with amperometric and voltammetric techniques, and some potentiometric techniques, the detection limit for these three analytes (in the tens of micromolar range) and the sensitivity (~30 mV per 10-fold change in concentration) are not as good. However, the strength of this method lies in its simplicity and adaptability to in-field measurements and small volumes. It represents a mediator-free, label-free, and membrane-free approach to the potentiometric sensing of three biologically important redox molecules in 100 µL volumes. Redox potentiometry is particularly useful when the overall redox state of a sample needs to be measured quickly and efficiently with minimal cost using inexpensive instrumentation (e.g., a high impedance voltmeter). The present work is an important step in the understanding of mixed potentials in complex sample solutions containing biologically relevant molecules. It also aids in the understanding of the open-circuit potential in blood and blood products. Our future studies will be aimed at modulating the electrode surface to improve sensitivity and tailoring the selectivity of this redox-based potentiometric sensor for the detection of other biologically important small molecules.
"Chemistry",
"Biology",
"Materials Science"
] |
Character Sums and the Riemann Hypothesis
We prove that an innocent-looking inequality implies the Riemann Hypothesis and show a way to approach this inequality through sums of Legendre symbols.
Let
$$f(x) = \sum_{n=1}^{\infty} \frac{\lambda(n)}{n^2}\,\sin(2\pi n x),$$
where λ is the Liouville lambda-function. Since |λ(n)| = 1, this series is absolutely convergent for real x, so that f is continuous, odd and periodic with period 1 on ℝ. A plot of f(x) for 0 ≤ x ≤ 1, computed using 1000 terms of the series defining f, suggests the following result.

Theorem 1. If f(x) ≥ 0 for 0 ≤ x ≤ 1/4, then the Riemann Hypothesis is true.
Theorem 1 is deceptive in that it looks like it should be a simple matter to prove that f(x) is non-negative. A problem is that it is not clear whether f(x) is differentiable or not, and even if it is, it would be difficult to estimate the derivative. So, proving that f(x) > 0 at some point doesn't immediately tell us about f(x) at nearby points.
The "1/4" in Theorem 1 can be replaced by any positive constant.So the real issue is trying to prove that f (x) > 0 for small positive x.
Note that
$$\Bigl|\sum_{n>N} \frac{\lambda(n)}{n^2}\,\sin(2\pi n x)\Bigr| \le \sum_{n>N} \frac{1}{n^2} < \frac{1}{N},$$
so that if for some x there is an N such that
$$\sum_{n \le N} \frac{\lambda(n)}{n^2}\,\sin(2\pi n x) > \frac{1}{N}, \qquad (1)$$
then it must be the case that f(x) > 0. We will use this idea a little later.
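As a quick numerical sanity check, criterion (1) is easy to apply with a computer. The sketch below assumes the series form of f reconstructed above and uses sympy to compute λ(n); a partial sum exceeding 1/N certifies positivity at that point (this is a heuristic scan over grid points, not a proof for all x):

```python
import numpy as np
from sympy import factorint

def liouville(n):
    # λ(n) = (-1)^Ω(n), Ω = number of prime factors counted with multiplicity
    return (-1) ** sum(factorint(n).values())

N = 1000
ns = np.arange(1, N + 1)
lam = np.array([liouville(int(n)) for n in ns])

def f_partial(x):
    # N-term partial sum of f(x); the tail is bounded by sum_{n>N} 1/n^2 < 1/N
    return float(np.sum(lam * np.sin(2 * np.pi * ns * x) / ns**2))

xs = np.linspace(0.001, 0.25, 500)
vals = np.array([f_partial(x) for x in xs])
print("min partial sum on (0, 1/4]:", vals.min())
print("points certified positive by (1):", int(np.sum(vals > 1.0 / N)), "of", len(xs))
```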
We can give an "explicit formula" for f in terms of the zeros ρ = β + iγ of ζ: Theorem 2. Assuming the Riemann Hypothesis, ) .
Here ℓ(n) is defined through its generating function, a Dirichlet series convergent for ℜs > 1. Also, X(s) is the factor from the functional equation for ζ(s), ζ(s) = X(s) ζ(1 − s), which can be defined by
$$X(1 - s) = 2 (2\pi)^{-s} \Gamma(s) \cos\frac{\pi s}{2}.$$
Note that if the zeros of ζ(s) are simple, then the term with the sum over the zeros of ζ takes a correspondingly simpler form. Theorem 2 is nearly a converse to Theorem 1, in the sense that if RH is true and all the zeros are simple, then the explicit formula makes the inequality (2) seem plausible.
Finally, we remark that the formula of Theorem 2 for f(x) hides very well the fact that f(x) is periodic with period 1!
Prior results
There has been quite a lot of work connecting partial weighted sums of the Liouville function with the Riemann Hypothesis. We refer to [BFM] for a nice description of past work; in that paper the authors determine the smallest value of x at which the relevant weighted sum of λ changes sign.
Character sums
A possible approach to proving that f(x) > 0 for small x > 0 lies in the fact that λ is completely multiplicative and takes the values ±1. This scenario resembles quadratic Dirichlet characters (for simplicity, think of Legendre symbols), except that Dirichlet characters can also take the value 0. By the Chinese Remainder Theorem, for any N there is a prime number q such that λ(n) = (n/q) for all n ≤ N, where (·/q) is the Legendre symbol mod q.² As an example: λ(n) = (n/163) for all n ≤ 40, but they differ at n = 41. Let
$$f_q(x) = \sum_{n=1}^{\infty} \frac{1}{n^2} \left(\frac{n}{q}\right) \sin(2\pi n x)$$
be the Fourier sine series with λ(n) replaced by (n/q). If f_q(x) ≥ 0 for 0 ≤ x ≤ 1/4 for a sufficiently large set of q, then it must also be the case that f(x) ≥ 0 for 0 ≤ x ≤ 1/4. (The proof is that if f(x₀) < 0 for some 0 < x₀ < 1/4, then we can find a q such that (n/q) = λ(n) for all n ≤ N, where N is chosen so large that |f(x₀)| > 1/N; then it must be the case, by the analog of (1) for f_q, that f_q(x₀) < 0.) The same assertion, but with q restricted to primes congruent to 3 mod 8, is also valid, since the Legendre symbols for these q can also imitate λ(n) for arbitrarily long stretches 1 ≤ n ≤ N. We can express this as follows:

Theorem 3. If f_q(x) ≥ 0 for all 0 ≤ x ≤ 1/4 and all primes q congruent to 3 mod 8, then the Riemann Hypothesis is true.
² Here (n/q) = 0 if (n, q) > 1; (n/q) = +1 if n is a square mod q; and (n/q) = −1 if n is not a square modulo q.
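The claimed agreement between λ(n) and the Legendre symbol mod 163 is easy to verify numerically; a minimal check using sympy:

```python
from sympy import factorint
from sympy.ntheory import legendre_symbol

def liouville(n):
    return (-1) ** sum(factorint(n).values())

q = 163
first_diff = next(n for n in range(1, 100) if legendre_symbol(n, q) != liouville(n))
print("first disagreement between lambda(n) and (n/163) is at n =", first_diff)  # 41
```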
Remark 1. We could just as well have stated this theorem for q ≡ 3 mod 4. However, the intention is that we are interested in q for which χ_q imitates λ. Insisting that χ_q(2) = −1 leads to the condition q ≡ 3 mod 8.
The sums f_q(x) still have the same problem, in that it is tricky to prove for sure that they are positive for small positive x. However, the analogue of Theorem 2 above is much simpler, is unconditional, and leads to a straightforward way to check, for any given fixed q, that f_q(x) ≥ 0 for 0 ≤ x ≤ 1/4.
Theorem 4. Let x ≥ 0 and let q ≡ 3 mod 8 be squarefree. Then f_q(x) has an explicit closed-form evaluation in terms of finite character sums (see the proof below).

Now Dirichlet's class number formula enters the picture. Let K = Q(√−q) be the imaginary quadratic field obtained by adjoining √−q to the rationals Q. Let h(q) be the class number of K. Dirichlet's formula is
$$h(q) = \frac{\sqrt{q}}{\pi}\, L(1, \chi_q)$$
for squarefree q ≡ 3 mod 4 and q > 3 (see [D] or [IK]). Thus, the theorem above can be rephrased in terms of h(q). Moreover, we can express L(1, χ_q) as a finite character sum:
$$L(1, \chi_q) = \frac{\pi}{(2 - \chi_q(2))\sqrt{q}} \sum_{0 < n < q/2} \chi_q(n).$$
Since (n/q) is an odd function of n, there is also a companion expression coming from the complementary range q/2 < n < q.

Corollary 1. Let q > 3 be squarefree with q ≡ 3 mod 8. Then f_q(x) is a positive multiple of S_q(q/2) − S_q(qx), where S_q denotes the weighted partial character sum appearing in Theorem 4.

We can use the corollary to prove that f_163(x) ≥ 0 for 0 ≤ x ≤ 1/2, and consequently that f(x) ≥ 0 for 0.043 ≤ x < 1/4, as follows. Since f_163(x) ≥ 0.05 on this range, it follows from (1) that f(x) ≥ 0 for 0.25 ≥ x ≥ 7/163 ≈ 0.043.

Corollary 2. f(x) ≥ 0 for 0.043 ≤ x ≤ 0.25.
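As a numerical aside, the finite form of the class number formula quoted above is easy to test; the sketch below evaluates h(q) for a few primes q ≡ 3 mod 8, each of which is a Heegner-type example with class number 1:

```python
from sympy.ntheory import legendre_symbol

def class_number(q):
    # h(q) = (sum_{0<n<q/2} (n/q)) / (2 - (2/q)) for prime q ≡ 3 (mod 4), q > 3
    s = sum(legendre_symbol(n, q) for n in range(1, (q + 1) // 2))
    return s // (2 - legendre_symbol(2, q))

for q in [19, 43, 67, 163]:
    print(q, class_number(q))   # prints 1 for each of these q
```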
It seems clear that for any given ε > 0 we could replace 0.043 by ε in this inequality, given enough computation time. Also, if we use Euler products instead of Dirichlet series, we can show that f(x) ≥ 0 for 0.011 ≤ x ≤ 1/4.
The following conjecture seems surprising.

Conjecture 1. If q ≡ 3 mod 8 is squarefree, then f_q(x) ≥ 0 for 0 ≤ x ≤ 1/2.

Remark 2. J. Bober has checked that this inequality is true for all primes q ≡ 3 mod 8 up to 10⁹.

Now we turn to the proofs.
Useful Lemmas
Lemma 1. For y > 0 there is an explicit integral evaluation of the sine kernel; summing these evaluations over n leads to the desired formula. See also [T]: the identity is the integral of formula (7.9.5) there.
Lemma 2. The kernel x^s/(s(s + 1)) has a standard Mellin evaluation. This lemma is well-known and is easy to verify.
Proofs of theorems
Proof of Theorem 1. This assertion is a consequence of Landau's Theorem: "If g(n) ≥ 0, then the right-most singularity of Σ_{n=1}^∞ g(n) n^{−s} is real." This is Theorem 10 of [HR] and Theorem 1.7 of [MV1]. What we actually need is an integral version of this theorem: "If g(x) ≥ 0, then the right-most singularity of ∫₁^∞ g(x) x^{−s} dx is real." The proof of this version is essentially the same as that of the first version (see Lemma 15.1 of [MV1]).

The application to our situation is slightly subtle. We argue as follows. We start from the integral representation of f along the line ℜs = c with 0 < c < 1; the integral is absolutely convergent for 0 < c < 1/2. By Mellin inversion we obtain the transform of f, which we split into two integrals at x = 4, writing it as I₁(s) + I₂(s), say. The integral defining I₁(s) is absolutely convergent for σ > 1 and the second integral is absolutely convergent for σ < 1. Using the periodicity of f we can show that the second integral converges for σ < 2. Indeed, let F be the antiderivative of f normalized so that F(n) = 0 for all integers n; then F is bounded, and integrating by parts shows that the integral defining I₂ converges for ℜs < 2. So we now have I₂ analytic for ℜs < 2.

Clearly, I₁ + I₂ is analytic for ℜs > max{−1/2, sup_ρ(ℜρ − 1)}, i.e., for ℜs > 0. (The pole of X(1 − s) at s = 0 is canceled by the zero of 1/ζ(s + 1) at s = 0.) It follows that I₁(s) = (I₁(s) + I₂(s)) − I₂(s) is analytic for ℜs > 0, hence that I₂(s) is also analytic for ℜs > 0; since we already knew it was analytic for ℜs < 2, it follows that I₂(s) is entire.

Recall that we have assumed that f(1/x) ≥ 0 for x ≥ 4. Therefore, by Landau's theorem, the right-most singularity of I₁(s) is real. Since I₂ is entire, the right-most pole of I₁(s) + I₂(s) must also be real. But the right-most real pole of the closed-form expression for I₁ + I₂ is at s = −1/2, so this must be the right-most pole. Therefore the poles at ρ − 1 must all have real parts less than or equal to −1/2. In particular, ℜρ ≤ 1/2, which is RH.
Proof of Theorem 2. We start again from the same integral representation, now with 0 < c < 1/2. The integrand has poles only at s = −1/2 and at s = ρ − 1, where ρ is a complex zero of ζ(s), and nowhere else in the s-plane. The residue at s = −1/2 gives the main term; assuming that the zeros are simple, the residue at s = ρ − 1 gives the general term of the sum over zeros. We (carefully) move the path of integration to (c) where −2 < c < −1. To do this we have to cross through a field of poles arising from the zeros of the zeta function, so we use Theorem 14.16 of [T1] (see also [R]) to find a path on which 1/ζ(s + 1) ≪ T^ε, where we can safely cross. Using the bounds |X(1 − s)| ≪ T^{σ−1/2} and ζ(2s + 2) ≪ T^{−1/2−σ}, we can collect the residues arising from the zeros up to height T together with an error term that tends to 0 as T → ∞. Thus, assuming the zeros are simple, we obtain the sum over the zeros; if the zeros are not simple we modify the sum over zeros appropriately. We then make the change of variable s → −s in the integral. Using the functional equation for the ζ-function and functional relations for the Γ-function, the new integrand can be identified with the Dirichlet series side. Then Theorem 2 follows.
Proof of Theorem 4. We denote χ_q(n) = (n/q). By Lemma 1, f_q admits an integral representation along ℜs = c with 0 < c < 1. Since χ_q is odd, we find that the integrand has a pole at s = 0 and nowhere else in the complex plane. We move the path of integration to (c) with c < −1, picking up the residue at s = 0. Now let s → −s in the integral and use the functional equation (see [D], [IK] or [MV1])
$$L(1 - s, \chi_q) = 2 q^{s - \frac{1}{2}} (2\pi)^{-s} \Gamma(s) \sin\Bigl(\frac{\pi s}{2}\Bigr) L(s, \chi_q).$$
After simplification, the resulting integral can be evaluated by Lemma 3, and the proof of Theorem 4 is complete.
Remark 3. Note that the non-negativity, for 0 < x < 1/4, of the right-hand side of (3) implies the Riemann Hypothesis. This condition only involves Dirichlet L-functions with quadratic characters. Thus, information solely about Dirichlet L-functions potentially gives the Riemann Hypothesis. This example shows that different L-functions somehow know about each other.
Further remarks
Since h(q) ≫_ε q^{1/2−ε}, we see that f_q(x) ≥ 0 for 0 ≤ x ≪ q^{−1/2−ε}; in particular, f_q(a/q) ≥ 0 for a ≪ q^{1/2−ε}. But this doesn't give information about f(x). Also, the Pólya-Vinogradov inequality tells us that
$$\max_N \Bigl|\sum_{n=1}^{N} \chi_q(n)\Bigr| \ll q^{1/2} \log q,$$
and the work of Montgomery and Vaughan [MV] shows that the Riemann Hypothesis for L(s, χ) implies that this maximum is ≪ √q log log q. Moreover, it is known that the right-hand side here cannot be replaced by any function that goes to infinity more slowly. It is also known, assuming the Riemann Hypothesis for L(s, χ), that L(1, χ) ≪ log log q.
Our desired inequality can be expressed in terms of L(1, χ) as a comparison between the maximum of the partial character sums and a multiple of √q L(1, χ). It appears that both sides of this inequality can be as big as √q log log q.
A question is whether the converse of Theorem 1 is true. It might be possible to approach this by showing that the "3/2" derivative of f(x) is positive at x = 0, so that there is a small interval to the right of 0 on which f(x) ≥ 0. This method, or trying to prove (2) directly, would involve explicit estimates (assuming RH) for 1/ζ(s) in the critical strip; see [MV1], Section 13.2, for a good approach to such explicit estimates.
Finally, we mention that f(x) can be evaluated at a rational number x = a/q as an average involving Dirichlet L-functions L(s, χ), where χ is a character modulo q.
6. Evaluation of f_q(a/p).
A couple of formulas may help us move forward here. One is that if θ₁ and θ₂ are primitive characters with coprime moduli m₁ and m₂ respectively, then the Gauss sums satisfy
$$\tau(\theta_1 \theta_2) = \theta_1(m_2)\, \theta_2(m_1)\, \tau(\theta_1)\, \tau(\theta_2)$$
(see [IK, (3.16)]). The other is that for a character θ modulo m and a positive integer r,
$$L(1 - r, \theta) = -\frac{m^{r-1}}{r} \sum_{a=1}^{m} \theta(a)\, B_r\!\Bigl(\frac{a}{m}\Bigr),$$
where B_r is the r-th Bernoulli polynomial (see [Wa, Theorem 4.2]). Recall also the functional equation (see [D]) for a primitive character θ mod m, which relates L(1 − s, θ) to L(s, θ) through the Gauss sum τ(θ). It follows that for an even character θ = χ_q ψ, with q ≡ 3 mod 4 and ψ an odd character modulo p, the value L(2, χ_q ψ) can be expressed in terms of character sums; therefore ℑ{τ(ψ) ψ(a) L(2, χ_q ψ)} equals the real part of an explicit expression involving 2π², χ_q(p) and ψ(aq). We sum this equation over the odd characters modulo p, using orthogonality. Thus we have:

Theorem 5. For primes p and q both congruent to 3 modulo 4 and 1 ≤ a < p/2, there is an explicit identity for f_q(a/p). If the corresponding non-negativity holds for all p < q which are primes congruent to 3 modulo 8 and all 0 < a < p/2, then the Riemann Hypothesis follows.
We note that by these techniques one can also show a companion evaluation, Theorem 6. When this formula is compared with our earlier formula, we deduce an identity for q ≡ 3 mod 4.
Now we indicate another possible direction.
Proposition 1. If f_q(x) = 0, then x is a rational number.

Proof. By Corollary 1, f_q(x) = 0 implies that S_q(q/2) − S_q(qx) = 0. But S_q(q/2) = h(q) is an integer. So f_q(x) = 0 implies that S_q(qx) is a rational number. Now S_q(qx) has the shape (integer) − (integer)·qx, which can only be rational if x is a rational number.
So, it suffices to show that f_q(x) has no rational zeros; perhaps a congruence argument could work. However, Theorem 5 is not of much use here, because the hypothetical x for which f_q(x) = 0 would likely have a denominator that is divisible by q, so the conditions of Theorem 5 don't hold.
We remark that there are rational values of x for which the numerator of f_q(x) is congruent to 0 modulo q. These examples, which all seem to have an x with denominator divisible by q, might be worth studying further.
Here is one final formula that may or may not be useful. Suppose that f_q(x) = 0 and let y = xq. Then either
$$\sum_{n \le y} \chi_q(n) = h(q) \quad\text{and}\quad \sum_{n \le y} n\, \chi_q(n) = 0,$$
or else y satisfies an explicit relation. The first alternative seems unlikely, as in that case there would be an interval on which f_q(x) would be identically 0.
Conclusion
Conjecture 1 has been checked for primes up to 10⁹ and it holds for those primes. However, probabilistic grounds call into question its truth for all primes q ≡ 3 mod 8. Of course, one only needs its truth for a set of characters χ_q for which χ_q(n) = λ(n) for all n ≤ N_q, where N_q → ∞ with q. Presumably something like this is correct (and should be equivalent to RH), but it is not clear how to proceed. The results of Section 6, however, suggest a slightly different way forward which may have a more arithmetic flavor.
Here Θ denotes a number with absolute value at most 1, not necessarily the same at each occurrence. For a an integer, S_163(163x) is constant for x in the interval [a/163, (a + 1)/163). [Table of interval endpoints omitted.]
"Mathematics"
] |
Droplet Manipulations in Two Phase Flow Microfluidics
Even though droplet microfluidics has been developed since the early 1980s, the number of applications that have resulted in commercial products is still relatively small. This is partly due to an ongoing maturation and integration of existing methods, but possibly also because of the emergence of new techniques, whose potential has not been fully realized. This review summarizes the currently existing techniques for manipulating droplets in two-phase flow microfluidics. Specifically, very recent developments like the use of acoustic waves, magnetic fields, surface energy wells, and electrostatic traps and rails are discussed. The physical principles are explained, and (potential) advantages and drawbacks of different methods in the sense of versatility, flexibility, tunability and durability are discussed, where possible, per technique and per droplet operation: generation, transport, sorting, coalescence and splitting.
Introduction
Droplet microfluidics has been around since the early 1980s, but the number of examples that have made it to commercial applications is limited. In his 2006 overview, Whitesides [1] stated that microfluidics was still in its infancy, but might offer revolutionary new capabilities for the future. Indeed, great advances have been made since then, leading to commercial devices for genome sequencing, Polymerase Chain Reaction (PCR) and flow cytometry. Each of these techniques uses some form of microfluidics and droplet control for the automation of complex laboratory protocols.
The first tool required for droplet microfluidics is, of course, a system to generate droplets in the order of picoliter or even femtoliter [2] volumes. To generate a droplet, one needs a flow geometry (sometimes supplemented with an external field) that causes the dispersed (mostly aqueous) phase to form a neck, followed by a break-up. This should preferably work in a highly controlled manner, leading to a large continuous production of monodisperse droplets. Secondly, one or more methods of droplet manipulation are required, which force a droplet to execute the desired protocol. These methods do not necessarily have to be active. Some elegant passive methods have been developed for splitting [3,4], guiding [5] and trapping [5][6][7] drops at a certain location. Thirdly, most protocols require a certain type of analysis or detection, which mostly is optical (scattering, transmission, adsorption, fluorescence, Raman, surface plasmon resonance) or electrical (measuring current, voltage, impedance).
Over the years, several excellent reviews on droplet microfluidics have been published. There are short overviews on available techniques for droplet generation [8][9][10], methods for single cell encapsulation [11], passive microfluidic techniques [12], manipulations using electrowetting [13,14], as well as on the fundamental challenges involved [15], on the (need for) integration of biology, chemistry and physics in biological assays [16][17][18], on the choice of optimal surfactants [19], and even comprehensive review articles [20][21][22] and books [23] covering the entire field of droplet based microfluidics. The theoretical fluid physics aspects behind microfluidics have also been covered [24].
The present review is oriented towards techniques that have recently become available to manipulate droplets in microchannels, both actively and passively. We try to give an impartial view of the advantages of each technique, and the downsides, as far as researchers are willing to divulge these. Some of these techniques are still in the academic research stage, waiting to be used for practical and commercial applications. This paper is further organized as follows: we first discuss the existing techniques from the physics point of view, and then separately how they have been translated into passive and active manipulations that are able to execute laboratory protocols for biophysical and biochemical purposes.
Drag Force and Viscous Dissipation
We define passive control as the use of a dedicated microfluidic geometry to control droplets entirely via the flow field, without any controller interference. In steady state flow the velocity of a droplet is governed by the drag exerted on the droplet by the continuous carrier medium. The terminal droplet velocity depends on how the two liquids flow through a channel. When a droplet is much smaller than the channel cross section, the resulting terminal (i.e., steady state) velocity of the droplet equals the average local velocity of the continuous phase. In general, a small drop will follow the streamlines of the flow of the continuous phase. In Poiseuille flow, this means that droplets move faster in the center of the channel.

When droplets take up a larger portion of the channel cross section they are confined by two or four walls, resulting in the formation of lubrication films around the slug. As Baroud et al. [10] explain, the pressure drop ∆P across (and thus the terminal velocity of) slugs of droplets in a rectangular channel is influenced by corner flow of the continuous phase around the droplet. From this it follows that not only viscosity and geometry play a role, but also the interfacial tension (and thus deformability) influences the terminal velocity of the droplet. Using a microfluidic comparator, the influence of drop size, viscosity, interfacial tension and flow rate on the hydrodynamic resistance in a rectangular channel has been measured [25].
As an example, let us assume a confined droplet in a microchannel that only undergoes small deformations. This assumption is often valid in microchannels where the capillary number is very small and capillary forces (surface tension) dominate over viscous forces (viscosity and velocity). Such a droplet experiences a friction force due to the confinement, balanced by a drag force due to the surrounding flow field. The drag force exerted on such a droplet can be written as F_drag = αLµv_rel, with α a dimensionless prefactor determined by the geometry, L the characteristic length scale of the problem dependent on droplet radius and channel geometry, µ the dynamic viscosity, and v_rel the continuous flow velocity relative to the dispersed phase. For an example of such a situation the reader could examine the supplementary information of [26], where the drag force on a droplet confined in a Hele-Shaw channel geometry in oil flow is described analytically.
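As a rough numerical illustration of the scaling F_drag = αLµv_rel (all parameter values below are assumptions for illustration, not values from the cited work):

```python
# Minimal sketch of the confined-droplet drag estimate F_drag = α·L·µ·v_rel.
alpha = 30.0     # dimensionless geometry prefactor (assumed)
L = 50e-6        # characteristic length scale, m (assumed ~ droplet radius)
mu = 1.0e-3      # dynamic viscosity of the continuous phase, Pa·s (water-like)
v_rel = 1.0e-3   # continuous-phase velocity relative to the droplet, m/s

F_drag = alpha * L * mu * v_rel
print(f"F_drag ≈ {F_drag:.2e} N")   # ~1.5e-9 N for these numbers
```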
When droplet deformations are no longer negligible, the relation between drag force and velocity is no longer linear, because the deformation (and thus prefactor α) becomes velocity dependent. This brings us to another related factor that can influence droplet velocity: the presence of surfactants at the interface. The circulating flow fields inside and outside the slug can transport surfactants across the interface, resulting in accumulation of surfactants at the rear of the droplet. This surfactant concentration gradient at the interface results in Marangoni stresses, leading to increased dissipation and slowing down the droplet. The interplay of these different mechanisms can make it difficult to predict the terminal velocity of a slug. For instance, in the presence of surfactants it can be possible for an air bubble to have a lower terminal velocity than a water droplet [3].

These basic insights can be helpful in designing passive control systems. As an example, one can use the different properties of droplets, and the resulting difference in drag forces exerted on them, to change their behavior. For instance, drops of different viscosity will obtain a different terminal velocity. Partially confined, high viscosity droplets undergo a stronger viscous dissipation and therefore move slower than low viscosity droplets of equal radius (and confinement). This allows low viscosity droplets to catch up with high viscosity droplets, resulting in coalescence [27]. Similarly, droplets of different radius behave differently. In the regime of terminal velocity, a drop with smaller radius will move faster than a larger drop. This is because in Poiseuille flow the velocity of the continuous phase is higher near the center of the channel; therefore a smaller droplet in the center of the channel will obtain a larger average velocity than a larger droplet. The same principle allows for the hydrodynamic separation of particles of different sizes [28]. In most cases, however, droplets will neither have a different viscosity nor a different size.
Geometric Structures
It is possible to design the geometry of a channel such that the flow field alters the drop behavior. For instance, the splitting of a droplet can be facilitated by dividing one channel into two branches, or by using a pillar as a flow obstruction in the channel. Depending on channel widths and hydraulic resistances, this allows creating pairs of identical daughter droplets, or splitting the volume of the mother droplet into a larger and a smaller fraction [4].
As shown by the Vanapalli group [29], so-called microfluidic parking networks can be used to let one droplet alter the flow field (and hence also the fate) of consecutive drops. When given the choice to enter different channels, a droplet will tend to follow the path of least hydraulic resistance. However, if that path contains a constriction much smaller than the droplet size, trapping of the droplet will occur, because the pressure drop over the parallel channel is smaller than the large Laplace pressure required to pass through the constriction. The droplet then effectively blocks that channel for consecutive droplets, forcing them to take the parallel channel. Depending on ratios of channel resistances, a regime can be found where, for instance, every first drop is captured, or even regimes where the first droplet goes into the bypass channel, effectively increasing the resistance of this parallel channel, so that only the second or third droplet is captured. By letting consecutive drops coalesce, Bithi et al. also showed the ability to control the droplet volume accurately (Figure 5b), or to change the (bio)molecular concentrations inside the droplet [7]. Other geometric structures, like an array of traps, can be built in a wide microchannel as well; in that case chance dictates the trapping of droplets (Figure 5a) [6].
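A back-of-the-envelope sketch of the trapping criterion just described; every number is an assumed, illustrative value:

```python
# A droplet stays parked when the pressure drop over the bypass channel is
# smaller than the extra Laplace pressure needed to squeeze the droplet's
# front cap into the constriction.
gamma = 5e-3           # interfacial tension, N/m (oil/water + surfactant, assumed)
r_constriction = 5e-6  # constriction radius, m (assumed)
r_droplet = 25e-6      # droplet radius, m (assumed)

dP_laplace = 2 * gamma * (1 / r_constriction - 1 / r_droplet)  # ≈ 1.6 kPa

R_bypass = 1e12        # hydraulic resistance of the bypass, Pa·s/m³ (assumed)
Q = 1e-10              # volumetric flow rate, m³/s (assumed)
dP_bypass = R_bypass * Q                                       # 100 Pa

print("trapped" if dP_bypass < dP_laplace else "squeezes through")
```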
Another great example of passive channel geometries has been shown by Korczyk et al.: by changing only the flows, their channel geometries are able to precisely meter a volume from a larger drop, merge droplets to create droplets of different concentration, delay a droplet, and implement a droplet shift register [30].
Surface Energy Wells
After a droplet is generated in a microchannel it is often confined by two walls and takes a pancake or disc shape. Alternatively, if it is confined by four walls, it becomes a slug. If there were no walls (or other external forces), the droplet would take the shape of lowest surface energy, a sphere. Therefore it stands to reason that if the squeezed droplet were able to decrease its interfacial area, such a change would be energetically favored. Creating a small hole in the top of a microchannel allows a drop to reduce its curvature, and thus its interfacial energy, by locally expanding into the hole. Once a droplet is lodged in this surface energy well it will require force to pull it out (see Figure 1a). This makes it possible to trap a drop against the drag force [31], or split a droplet [32]. Building further on this idea, thin trenches or "rails" were also created, effectively creating elongated (directional) energy wells. These allow steering a droplet in flow along these rails [33], or fusing two droplets by using two converging rails [34]. Other researchers used similar rails for the fusion and sorting of droplets [35]. As is the case with other passive techniques, it is difficult to release the drop once trapped, or to select which way to steer a drop. Using active laser forcing (see Section 3.5) these problems can be overcome [5].

Figure 1 (panels c-j): (c) A polarizable medium will move towards the region with highest electric field intensity; (d) the electrostatic energy will be minimal when a droplet covers the activated electrode; (e) a droplet passing two electrodes finds a minimum energy when centered above the electrodes; (f) by taking charges from the open electrode, a droplet can be precharged; a precharged droplet will move towards the oppositely charged electrode; (g) paramagnetic particles move towards the highest magnetic field strength and can drag a droplet along with them; (h) the interdigital transducer creates a surface acoustic wave that is attenuated at the PDMS pillar, creating an upward pressure wave that can move droplets; (i) at specific frequencies the piezo can create resonating pressure waves in a channel, forcing a droplet towards the antinodes; (j) a medium with better polarizability at optical frequencies than the surrounding medium will move towards the region of high intensity laser light.
Active Manipulation Techniques
Active manipulation entails changing the behavior or fate of a droplet (in flow) by an external, user-controlled mechanism. Below we discuss the more common techniques that are used for active control, selected on their proven ability to control droplets accurately and reliably. Our main focus will be on techniques that can control a droplet after it has been formed, but some of these techniques can also be used before or during droplet generation.
Pneumatic Membrane (Quake) Valves
A common material for fabrication of microfluidic devices is polydimethylsiloxane (PDMS), a polymer that can be molded using standard soft lithography techniques [36,37]. Multiple layers of PDMS are used as the basis for pneumatic membrane valves. One layer contains the microchannel structure through which the liquids will flow. Another layer consists of gas filled channels, which cross over the microchannel. At the point where the channels cross, the two layers are only separated by a thin PDMS membrane. By increasing the pressure of the gaseous phase, the membrane expands into the liquid channel (see Figure 1b). When enough pressure is applied the liquid channel can be blocked, effectively forming a pneumatic valve [38]. This technique has been applied for creating a peristaltic pump [38], for the sorting of droplets [39], and for droplet generation [40]. Especially for droplet generation this technique has some advantages over other methods. Single droplets are generated on-demand, and the amplitude and duration of the pressure pulse can be used to control the size of the droplet [41]. Placing multiple droplet generators in series allows merging droplets of different content and size. In turn this enables the creation of large sets of droplets, covering wide ranges in concentrations and content [42]. Using many valves, such droplets can be individually guided and stored in a multiplex array of 95 wells for cell culturing or PCR analysis [43].
Dielectrophoresis
Dielectrophoresis (DEP) is the motion of polarizable objects caused by application of a non-uniform electric field. In microfluidics, DEP is mainly used to sort small particles with a dielectric constant ε_r different from that of the surrounding medium [44]. There are multiple ways to understand how DEP works. For small droplets the easiest way to visualize DEP is to look at the charge distribution within the droplet. As we know, opposite charges attract. Thus, in an electric field any polarizable medium will orient its electric charges such that they oppose the electric field. Three main ways for a medium to polarize are: ionic, dipolar and electronic. If the medium contains ion pairs, like a salt dissolved in water, these can quickly shift the charges at the droplet interface. If the molecule is in itself a dipole, like a water molecule, it can rotate and orient itself against the electric field. Much faster, within each atom the positively charged nucleus and negatively charged electron cloud can shift slightly to oppose the electric field.
Regardless of the polarization mechanism, the end result is a droplet that can be considered as a dipole which is oriented against the electric field. In an inhomogeneous field this means that the electrostatic attraction is larger at the side of the dipole where the electric field is stronger, resulting in a net force in the direction of increasing field strength (see Figure 1c). There are analytical expressions for the DEP force on spherical objects [45]. For different (droplet) shapes and volumes, numerical modeling is needed to calculate the electric field and potential, and subsequently the DEP force.
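For a small sphere, the standard dipole-approximation expression (presumably the type of analytical result referenced in [45]) can be sketched numerically as follows; all parameter values are assumptions:

```python
import numpy as np

# Dipole-approximation DEP force on a small sphere:
# F = 2π·ε_m·R³·Re[K]·∇|E|², with Clausius-Mossotti factor
# K = (ε_p − ε_m)/(ε_p + 2ε_m).
eps0 = 8.854e-12
eps_m = 2.0 * eps0     # continuous oil phase permittivity (assumed)
eps_p = 80.0 * eps0    # aqueous droplet permittivity (assumed, low frequency)
R = 10e-6              # droplet radius, m (assumed)
grad_E2 = 1e13         # gradient of |E|², V²/m³ (assumed)

K = (eps_p - eps_m) / (eps_p + 2 * eps_m)   # ≈ +0.93, i.e., positive DEP
F_dep = 2 * np.pi * eps_m * R**3 * K * grad_E2
print(f"K = {K:.2f}, F_DEP ≈ {F_dep:.2e} N")   # on the order of a piconewton
```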
In practical DEP applications an electric field inside a microfluidic channel is generally created by applying a voltage across two (or more) electrodes just outside the channel. A simple way to get gradients in the electric field is to use a pointed electrode where all the electric field lines converge. Especially interesting for two-phase flow microfluidics is that water droplets in an electric field can on average be considered as a large dipole. The consequence hereof is that not only small particles, but also water droplets in oil can be sorted with DEP, even at high speeds (>1 kHz) [46].
When dealing with larger volumes instead of dielectric droplets or particles, it can also be insightful to describe DEP (and electrowetting as well, see Section 3.2.2) via a capacitive energy approach. From electrostatics we know that when a voltage source charges a capacitor, the total electrostatic energy of the system decreases: E_el = −(1/2) C U², with C the total lumped capacitance and U the applied potential. In microfluidic systems, a pair of electrodes next to a microchannel with either water, oil or another dielectric material in between also forms a capacitor. Since the electrostatic energy decreases with increasing capacitance, it is thermodynamically favorable to have a system with large capacitance. Using a lumped capacitance model, i.e., taking into account the capacitances of all the different media between the electrodes, E_el can be calculated. The simplest case is a parallel plate capacitor, for which C = ε₀ε_r A/d, with d the thickness of the dielectric between the electrodes and A the electrode area. Since displacement of the medium between the electrodes (e.g., oil) by one that has a higher polarizability (e.g., water) increases the lumped capacitance, it is energetically favorable for highly polarizable media to move towards the region of high electric field strength.
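A minimal sketch of this lumped-capacitance energy argument for the parallel-plate case, with assumed values:

```python
# Replacing oil by water between the electrodes raises the capacitance and
# lowers E_el = -½CU², so the high-permittivity medium is pulled in.
eps0 = 8.854e-12
A = 100e-6 * 100e-6   # electrode area, m² (assumed 100 µm square)
d = 20e-6             # electrode separation, m (assumed)
U = 100.0             # applied potential, V (assumed)

for name, eps_r in [("oil", 2.0), ("water", 80.0)]:
    C = eps0 * eps_r * A / d
    E_el = -0.5 * C * U**2
    print(f"{name:5s}: C = {C:.3e} F, E_el = {E_el:.3e} J")
```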
EWOD/DMF
In the context of microfluidics, ElectroWetting-On-Dielectric (EWOD), also known as digital microfluidics (DMF), generally uses arrays of electrodes insulated by a thin dielectric layer to control conductive droplets [47]. This enables highly accurate, on-demand control over individual droplets, such as transportation, generation, splitting, coalescence and mixing [22].
The term electrowetting originates from the observation that a grounded water droplet atop an insulated electrode spreads when a voltage is applied, effectively decreasing the droplet contact angle with the substrate and thus "wetting the surface better". This concept is explained by the Young-Lippmann model, which balances the surface energy cost with the gain in electrostatic energy when a droplet spreads. The model allows one to calculate the (oil/water) interfacial tension, or the insulator thickness, when measuring the contact angle as a function of voltage [48]. As an applied theoretical example, the model also predicts the position of the droplet in an electrode wedge structure, dependent on applied voltage and the resulting droplet contact angle [49]. In droplet based microfluidics, however, it is difficult or even impossible to determine the contact angle. This makes application of the Young-Lippmann model less useful. To explain the droplet motion, it might be better to use an electrostatic model, in this case the same lumped capacitive model as mentioned before for DEP.
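A small numerical sketch of the Young-Lippmann relation, cos θ(U) = cos θ₀ + ε₀ε_r U²/(2dγ); all parameter values are assumed, and contact angle saturation at high voltage is ignored:

```python
import numpy as np

eps0, eps_r = 8.854e-12, 3.0    # dielectric layer permittivity (assumed)
d = 1e-6                        # dielectric thickness, m (assumed)
gamma = 0.04                    # droplet/ambient interfacial tension, N/m (assumed)
theta0 = np.deg2rad(110)        # contact angle at 0 V (assumed, hydrophobic coating)

for U in [0, 20, 40, 60]:
    cos_theta = np.cos(theta0) + eps0 * eps_r * U**2 / (2 * d * gamma)
    theta = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    print(f"U = {U:3d} V -> contact angle ≈ {theta:5.1f} deg")
```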
Although it is not always mentioned, EWOD is very similar to DEP [50]. Both techniques use the difference in response of the media to an applied electric field. One difference is that EWOD mainly uses aqueous droplets which are conductive due to the addition of ions. The high conductivity allows one to consider the droplet as an electrode with resistance but negligible capacitance. This also makes a second difference between DEP and EWOD clear: the insulating layers are required to prevent short circuiting of the electrodes. The continuous phase can be air, or an oil of low permittivity. In the presence of a potential difference across the electrodes but absence of a water droplet, the capacitance is intrinsically low. However, if the oil or air is replaced by a conductive drop, the total applied voltage now falls across the thin dielectric, insulating layers. The capacitance of the droplet can then be ignored, and the electrostatic energy of the system can be calculated using a similar formula as before: E_el = −(1/2)(ε₀ε_r/d) A U², but now with d being the total thickness of the insulating layers, not the distance between the electrodes. By switching on an electrode from the array of electrodes, the droplet will find a minimum energy by spreading across the activated electrode, increasing its capacitive area. The resulting force pulls on the three-phase contact line, allowing droplets to be moved (see Figure 1d).
To further show the similarity between DEP and EWOD: the conductive liquid in EWOD can be described with a frequency-dependent effective permittivity ε_eff that incorporates the conductivity. At sufficiently high driving frequencies, the capacitance of the aqueous phase can no longer be neglected. Since the dielectric constant of the water phase must then be taken into account, EWOD has effectively become DEP.
The main advantage of EWOD is the sheer number of droplet manipulations that are possible on-demand. The fact that it works well with conductive droplets makes it ideal for biological and cell applications. Another advantage of EWOD is that it is a relatively basic electromechanical technique, which ensures that the control over water droplets is accurate, switchable, high speed and predictable. EWOD can be integrated in more complex (control) systems by using state-of-the-art techniques from the immense field of electronics. Nice examples are, for instance, the ability to print electrodes on thin films, allowing the manipulation of droplets on bent surfaces [51]. In addition, real-time detection of the droplet position by measuring the capacitance of a set of electrodes [52], and electronic paper displays [53], illustrate this point. Thin film transistor arrays used in liquid crystal displays have also been used for EWOD, giving control over more than 4000 electrodes, with impedance spectroscopy for droplet position detection [54].
There are also some disadvantages to consider when using EWOD. One is the high voltage that is required for thick dielectric layers. Another is that relatively complex electronic systems are required when using many individually activated electrodes. The longevity of a device is also strongly dependent on the dielectric materials used. Many different materials have been used, aiming for good hydrophobicity, low hysteresis and high resistance to dielectric breakdown. Some researchers have focused on making the dielectric as thin as possible, to allow the use of low voltages. This would make EWOD compatible with standard low voltage electrical systems (no need for expensive amplifiers and switches), while still allowing for high speed droplet actuation [55]. Others have used relatively porous dielectrics like PDMS, which are infused with the continuous oil phase, ensuring low hysteresis and sufficient hydrophobicity [26,56]. It is interesting to see how an open source project like DropBot brings researchers together to make EWOD cheaper, automated and more reliable, effectively crossing the divide between laboratory research and commercial application [57].
Electrostatic Potential Wells
EWOD devices are mostly used as large arrays of electrodes that completely control every motion of each droplet. In an attempt to obtain the best of both worlds, the high throughput of two-phase flow channel-based microfluidics has been combined with the individual drop control achieved using electrical actuation [26]. Two-phase flow allows for high-speed droplet generation and the transport of droplets over large distances without the need for many electrodes. By having dielectric covered electrodes below the microchannel, the accurate on-demand control of EWOD is used to manipulate (conductive) aqueous droplets in flow. A disadvantage, as compared to EWOD alone, is that the ability to transport droplets upstream is lost.
The electrode geometries in this technique are chosen to be co-planar electrodes separated by a gap that is comparable to the thickness of the dielectric layer. When a potential is applied across such a gap, a passing droplet will try to equalize the areas of droplet contact above the electrodes, since this maximizes the lumped capacitance. Effectively the electrodes form an energy well, where the electrostatic energy is minimal if the droplet centers over the gap (see Figure 1e). The similarity with surface energy wells (Section 2.3) is clear, and therefore this method was coined "electrostatic potential wells". The technique has been able to trap, coalesce and release droplets in flow, split a droplet into two equal volumes, guide droplets to different lateral positions on electrostatic rails [26], and sort droplets at >1 kHz [58].
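The centering over the gap can be rationalized with a minimal lumped-capacitor sketch (generic symbols, not a derivation taken from [26]): the conductive droplet couples to the two co-planar electrodes through two capacitors in series,
$C = \frac{C_1 C_2}{C_1 + C_2} = \frac{\epsilon_0\epsilon_r}{d}\,\frac{A_1 A_2}{A_1 + A_2}$,
with $A_1$ and $A_2$ the droplet footprints over the two electrodes and $d$ the dielectric thickness. For a fixed total footprint $A_1 + A_2$, this capacitance, and hence the stored energy $\frac{1}{2}CU^2$, is maximal when $A_1 = A_2$, so the droplet is pulled towards the position centered over the gap.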
Pre-Charging
A third electrostatic manipulation technique that needs to be mentioned is the pre-charging of droplets. Several ways of pre-charging droplets have been described in literature. One method applies a DC electric field perpendicular to the flow in the channel. The positive and negative charges (ions) in the droplet separate to either side of the droplet. While the charges are separated, the water droplet is split in two at the crossing of a T-junction, resulting in two oppositely charged water droplets [59]. A second method uses two closely spaced electrodes below the channel. A positive or negative DC voltage is applied across the electrodes as a droplet passes and covers both electrodes. An opposite charge q is induced in the droplet at the active, insulated electrode interface. The charges originate from the grounded electrode that has no insulating layer (see Figure 1f). The charges remain in the droplet as it detaches from the electrodes, resulting in a charged droplet [60].
Charging a droplet in itself has little use. A secondary electrode pair further downstream in the channel is required to apply another electric field. In this second electric field, charged droplets will feel an electric force $\vec{F}_e = q\vec{E}$, with $\vec{E}$ the electric field strength. Negatively charged droplets move towards the positively charged electrodes and vice versa. This technique is capable of sorting pre-charged droplets into three different outlet channels at 3 drops per second [61]. A secondary use of this technique arises from the coalescence of droplets. Two oppositely charged droplets will feel an attractive electrostatic force, enabling easier coalescence of droplets, even when the droplets are covered by surfactants [59].
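For an order-of-magnitude feel (all numbers below are assumed for illustration and are not reported values from [59-61]), a droplet carrying $q = 0.1$ pC in a deflection field of $E = 10^5$ V/m, e.g., 100 V across a 1 mm electrode gap, experiences
$F_e = qE = 10^{-13}\ \mathrm{C}\times 10^{5}\ \mathrm{V/m} = 10\ \mathrm{nN}$,
a force that acts on the net charge of the whole droplet and, unlike DEP, does not require a field gradient.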
Magnetic Manipulations
Similar to DEP, where polarizable materials are influenced by an electric field, magnetic particles can be manipulated by a magnetic field. One example is the magnetic tweezer (see Figure 1g). Here, an electromagnet above the channel in combination with a ferromagnet built in the substrate below the channel generates a magnetic field with increasing intensity towards the ferromagnet. Paramagnetic particles inside droplets magnetize along the applied field and move towards the region of high magnetic field strength. In some cases the particles are able to drag a droplet with them, which allows the movement of a droplet. In other cases the particles are extracted from the droplet and later merged with a consecutive droplet. By functionalizing the particles, for instance with antibodies, this allows the execution of biological assays [62,63].
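The force on a single paramagnetic particle is commonly estimated with the standard dipole expression (generic symbols, not taken from [62,63]):
$\vec{F}_m = \frac{V_p\,\Delta\chi}{\mu_0}(\vec{B}\cdot\nabla)\vec{B}$,
with $V_p$ the particle volume, $\Delta\chi$ the susceptibility contrast between particle and medium and $\vec{B}$ the magnetic flux density. The force scales with the field gradient, which is why a sharp ferromagnetic structure close to the channel is needed to obtain useful forces.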
By integrating stripes of paramagnetic PDMS below a microfluidic channel, magnetic field gradients can be created around the stripes by applying an external, homogeneous magnetic field. Water droplets loaded with paramagnetic microparticles will then move towards the stripes, where the magnetic field gradient is highest. This technique provides an energy landscape very similar to the surface energy rails and the electrostatic potential rails discussed in Sections 2.3 and 3.2.3, and has been used to guide and sort droplets in oil flow along the rails and to trap and merge droplets at the end of the magnetic rail [64]. Similarly, rotating external magnetic fields have been used to let ferrofluidic droplets perform automatic tasks, effectively creating logic gates (see Section 4.6) [65].
Surface Acoustic Waves
As the name suggests, a surface acoustic wave (SAW) is an acoustic pressure wave that propagates across a substrate. It is created by interdigitated electrodes on a piezo-electric substrate. The resonance frequency of the interdigitated transducer (IDT) depends on the pitch between the electrodes and the characteristic velocity at which a sound wave travels across a specific surface material. When a potential is applied over the IDT at the resonance frequency, a high pressure Rayleigh wave of MHz frequency and nanometer amplitude is created, which propagates across the substrate. An encounter with a fluidic channel attenuates the wave by coupling its energy into the fluid. This results in a (standing or travelling) bulk pressure wave propagating through the liquid at an angle (the Rayleigh angle) that is determined by the differences in acoustic impedance (material density and wave speed) of the media. This bulk wave can generate an acoustic radiation force on droplets or cause acoustic streaming within a liquid. The former manipulates droplets; the latter influences particles in flow.
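The refraction of the wave into the liquid follows a Snell-type relation; as an indicative example (the material values are textbook numbers, not taken from the works cited here),
$\theta_R = \arcsin\left(\frac{c_{\mathrm{liquid}}}{c_{\mathrm{substrate}}}\right) \approx \arcsin\left(\frac{1480\ \mathrm{m/s}}{3990\ \mathrm{m/s}}\right) \approx 22^{\circ}$
for water on a 128° Y-cut lithium niobate substrate, a combination frequently used for SAW devices.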
While the technique for creating SAWs was demonstrated in the 1960s [66], it has only recently been rediscovered as a means for external control in microfluidics, first for the focusing of beads [67] and soon after for the sorting of droplets [68]. Very recently the technique has been used for the steering of slugs [69]. By using a tapered electrode geometry, localized pressure fields can be created in the channel. In this local field a droplet can be trapped temporarily and then merged with a consecutive droplet [70,71]. Another way to localize the region where the pressure wave will occur is by using a PDMS pillar to connect the IDT substrate with the channel (see Figure 1h). Only at the pillar is the SAW attenuated to form an upwards pressure wave. This clever design allowed the sorting of droplets containing cells from empty droplets at a rate of 3 kHz [72]. Combining multiple tapered IDTs with multiple pillars enables sorting towards multiple outlet channels. Besides forcing individual droplets, SAWs are also capable of providing enough pressure to break up a thin co-flowing stream of water in oil, which made it possible to generate cell-containing droplets directly from co-flow [72].
Numerical simulations can be used to predict the acoustic field in a channel [72], and the displacement of a surface by the SAW can be measured using laser Doppler vibrometry [70]. Recently, another optical technique has been developed which is able to measure the acoustic field in a microchannel directly [73]. Advantages of SAW are that only small voltages are required to create very large forces, and that pressure waves will manipulate any object in the channel, independent of permittivity, conductivity, refractive index or size. A disadvantage could be that the brute force from SAWs is not localized, and SAWs are therefore not convenient for highly accurate control. Within these limits it is still possible to perform several protocols that do require accuracy, like splitting a droplet in two [71], by using multiple IDTs and carefully tuned voltage actuation sequences.
Ultrasonic Acoustophoresis
Besides SAWs, it is of course also possible to create bulk acoustic waves (BAWs) directly in the microchannel by simply using a flat piezo-electric element in contact with the channel walls instead of an interdigitated transducer. Unlike SAWs, BAWs are restricted to discrete, harmonic resonance modes such that the wavelength fits the channel dimension (see Figure 1i). If a particle or droplet is large enough, it will move towards the nearest anti-node. This allows large objects to be sorted from small particles. BAWs have also been used for stretching and mixing of droplets [74], as well as for sorting and coalescing droplets [75].
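The resonance condition can be made concrete with a simple half-wavelength estimate (the channel width and sound speed below are assumed example values):
$w = n\frac{\lambda}{2} \;\Rightarrow\; f_n = \frac{n c}{2w} \approx \frac{1 \times 1480\ \mathrm{m/s}}{2 \times 375\ \mu\mathrm{m}} \approx 2\ \mathrm{MHz}$,
so a 375 µm wide water-filled channel supports its fundamental pressure resonance near 2 MHz.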
Optical Manipulation Techniques
The use of focused laser spots generates various possibilities for droplet manipulation. Light waves can interact with objects in different ways. A focused light beam is in fact nothing more than an electric field gradient with the highest field strength in the center. Very similar to how dielectric objects respond to an electric field, as described in Section 3.2.1, objects of high permittivity move towards the center of highest electric field strength created by the laser. The difference is that the electric field frequency of laser light lies in the order of $10^{14}$ Hz. The orientational and ionic polarization mechanisms of molecules cannot follow these frequencies, and the permittivity thus only depends on the electronic screening abilities of the atoms. Perhaps the best-known example of this principle is the optical tweezer (see Figure 1j) [76]. A collimated laser beam enters an infinity-corrected objective lens and is focused into a narrow beam in the focal plane. This focal spot is used to trap and transport a high permittivity particle or droplet. By changing the angle at which the collimated beam enters the objective, the focal spot is translated laterally, while changing the incoming beam to slightly diverging or converging translates the focal spot axially. The technique is not limited to just water of course, but can for instance also manipulate liquid crystals [77], as long as there is a gradient in refractive index. To be able to manipulate multiple droplets or particles, multiple individually changeable beam wave fronts are required. Towards this end, spatial light modulators have been used to create up to 400 dynamically addressable optical tweezers [78]. A disadvantage of optical manipulation techniques is that the absolute force that can be generated is quite low, on the order of 100 pN or lower [79]. This means optical techniques are probably only useful at low flow rates.
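To see why low flow rates are needed, the optical force can be compared with the Stokes drag on a droplet; in a minimal estimate (illustrative numbers, and a rigid-sphere drag law used as an approximation),
$v_{\max} \approx \frac{F_{\mathrm{opt}}}{6\pi\eta R} = \frac{100\ \mathrm{pN}}{6\pi \cdot 10^{-3}\ \mathrm{Pa\,s} \cdot 5\ \mu\mathrm{m}} \approx 1\ \mathrm{mm/s}$,
so a 100 pN trap can only hold a droplet of 10 µm diameter against continuous-phase velocities of roughly a millimetre per second or less.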
Electrostatic interaction is not the only influence light has on particles. Light waves also refract at the interface of a spherical particle with a different refractive index than the surrounding medium. Because photons carry momentum, photons that are scattered or absorbed transfer this momentum to the particle or droplet and force it in the direction of laser beam propagation, as well as towards the beam center. This principle makes it possible to selectively sort droplets towards different outlets [80]. To prevent optical trapping of the particle, the laser beam is not focused.
A different method to use focused laser light to manipulate droplets is to locally heat the interface of surfactant covered droplets. This heating increases the local interfacial tension of the droplet, probably by removing or destroying the surfactant, effectively creating an interfacial tension gradient at the droplet interface. As a result, Marangoni flows are created around the laser hot spot, yielding a force that pushes the droplet away from the hot spot. This technique has been used for influencing droplet generation, coalescence and fusion [81]. It has also been used in combination with the passive surface energy wells described in Section 2.3 to transport droplets from one rail to another, and to dislodge trapped droplets from their surface energy trap [5].
The final optical technique worth mentioning is used for on-demand generation of droplets by using intense pulsed laser light to create rapidly expanding cavitation vapor bubbles [82]. The cavity comes into existence when the strong optical field induces breakdown of the water molecules, forming a plasma. The energy quickly dissipates, and the heat generated in this process creates a rapidly expanding vapor bubble whose pressure wave moves through the water and perturbs a nearby oil-water interface. The water pushes into the oil phase to form an aqueous droplet. Droplet generation rates of up to 10 kHz and droplet volumes of 1-150 pL were realized. The authors report volume deviations of just 1%, which would make this technique one of the most accurate ways of droplet generation.
Droplet Generation
The creation of droplets is the basis of any two-phase flow microfluidic chip [8][9][10]. Especially in the early days of droplet microfluidics, many different channel designs were conceived to generate droplets. The most common techniques are based on hydrodynamic interaction between the two immiscible phases, where the continuous phase applies shear on the eventually dispersed phase. See Figure 2 for an overview of such flow based techniques.
One option is the T-junction [83], where two perpendicular channels bring the two phases together. When the front of the dispersed phase fluid intrudes into the main channel, a pressure drop occurs between the front and rear of the emerging droplet. The droplet grows under the balance of pressure, interfacial tension and shearing force until the neck at the rear becomes thin enough, and the pressure low enough, to break into a small droplet [84]. In practice, the T-junction creates slugs of droplets in the exiting channel with a minimal length of twice the channel width, or larger if the pressure in the dispersed phase channel is increased. Smaller droplet sizes can be obtained at higher flowrates, but these are generated by a Plateau-Rayleigh instability, which is generally not considered to be the best method to create monodisperse droplets.
Another option for generating droplets is the flow focusing device (FFD) [85]. In an FFD the dispersed phase is located in a center channel, while the continuous phase enters from two perpendicular side channels. In this way, the dispersed phase is squeezed between the two continuous phase flows. By shear stress a neck is formed and broken from the dispersed phase, creating a droplet. Often a smaller orifice, through which all the liquids must flow, is positioned behind the FFD. This creates a high local shear stress at the orifice, which ensures the neck is broken in this region. In practice, droplet size and generation frequency are influenced by the FFD geometry, the viscosities of the two phases and the flowrates. The droplet sizes are more uniform as compared to techniques that use the Plateau-Rayleigh instability to break the neck. Droplet generation frequencies can go up to several kHz.
Very similar to the FFD, drops can also be generated in a co-flowing geometry [86]. In this geometry a thin (glass) capillary or (metal) needle is inserted as a center channel inside the main channel. The dispersed phase, often water since it wets the glass capillary well, enters via the capillary and is segmented by the shear force of the external continuous fluid on the dispersed phase. At higher flowrates a jet is formed and droplet break up is again caused by the Plateau-Rayleigh instability [87]. One advantage is that the dispersed phase is completely surrounded by the continuous phase. This ensures that the dispersed phase does not interact with the channel walls, which can significantly increase the durability of the microfluidic channel when using biological media. Another advantage is the ability to create double or even triple emulsions-inside-emulsions [88].
Another method is called step emulsification [89,90]. This technique is based on the fact that a soon-to-be-dispersed liquid filling a thin channel is always confined by the (non-wetting) side walls. A confined liquid is forced into strong curvature, resulting in a large Laplace pressure drop across the interface. If a region is offered where the channel expands in both width and height (a step), the tip of the water phase will accelerate into the wider channel to decrease its high curvature. As it accelerates, a neck is formed behind the droplet as the continuous phase flows upwards in the channel [91]. When the neck has become thin enough, it eventually breaks due to the Plateau-Rayleigh instability. The step can also be a gradual change in height, where the gradient influences the eventual droplet size [34]. The step emulsification method has the advantage that the drop size is mainly controlled by the geometry and the resulting change in Laplace pressure, and is less influenced by pressure differences between the liquid phases. This allows parallelizing the step geometry, and thus generates a high throughput of reasonably uniform droplets [34,89].
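The driving force behind step emulsification can be quantified with the Laplace pressure of the confined tip; a minimal estimate with assumed channel dimensions and interfacial tension (not values from [89-91]) gives
$\Delta P = \gamma\left(\frac{1}{R_1}+\frac{1}{R_2}\right) \approx 2\gamma\left(\frac{1}{h}+\frac{1}{w}\right) \approx 2 \cdot 0.02\ \mathrm{N/m} \cdot \left(\frac{1}{10\ \mu\mathrm{m}}+\frac{1}{20\ \mu\mathrm{m}}\right) \approx 6\ \mathrm{kPa}$,
so releasing the confinement at the step removes a pressure penalty of several kPa, which is what accelerates the tip into the wider region.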
Active control over the droplet generation frequency can easily be achieved by changing the flow rates or pressures of the liquid phases. A very elegant technique for actively controlling the droplet generation frequency is dielectrophoresis (DEP) [92]. By applying a high-frequency voltage across four electrodes positioned around the FFD, an electrostatic force is applied on the water interface. Increasing the voltage increases this force, pulling the water interface further downstream in the FFD and effectively increasing the droplet generation frequency. Compared to syringe pumps, electrical control allows for much faster switching between generation frequencies.
So far, the discussed techniques generate continuous streams of drops. To generate single droplets on-demand, active control is required. Firstly, the dispersed phase has to be static for this application. This can be achieved by stopping the flow, but in the case of syringe pumps, control can be hindered by elastic effects from the tubing and is therefore not very fast or stable. A simpler approach is to use a pressure controller for the dispersed liquid phase. Stopping the soon-to-be-dispersed liquid phase can be facilitated by using a tapered channel: as the liquid traverses the tapering channel, its curvature increases. Ultimately the increasing Laplace pressure will match the applied pressure on the liquid phase, causing the flow to stop. By applying a short pressure pulse, a droplet can then be generated on demand. Other active methods for on demand drop generation include electrowetting [13,93,94], SAWs [95], pneumatic valves [41,96,97] and laser-induced cavitation [82]. A nice feature of the pneumatic valves is that consecutive pneumatically controlled droplet generators can directly combine different solutions into one droplet at controllable ratios [42].
Droplet Transport/Guiding/Steering
In some protocols it is required to transport a droplet to a specific location, for instance towards a position where it should be held for incubation, where it can be analyzed by some optical technique, or which is the starting location of another protocol on the same chip. Both passive and active techniques have been devised to transport droplets in the desired direction. Arguably the most accurate technique that offers external, on-demand control over droplet transport is EWOD. Protocols for electrode actuation can be preset, but also changed by the user depending on what is needed. Optical tweezers are able to trap a droplet in the focal waist, and by moving the focal spot the droplet can be transported.
The simplest passive technique for droplet transport is to let it follow the flow of the continuous phase. Microfluidic walls or obstructions determine where a droplet goes. More interesting are manipulation techniques that can be used in combination with flow. In Section 2.3, surface energy rails were discussed that can passively guide droplets, assisted by flow. Inspired by the simple physics behind these surface energy rails, electrostatic potential rails have been designed to actively guide droplets towards six different lateral positions in a wide channel, depending on which electrodes are activated (see Figure 3) [26]. An intricate network of microchannels and pneumatic valves has been used to direct flows of drops towards 95 wells independently [43].
Droplet Sorting
The sorting of droplets is perhaps the most important application in microfluidics, especially for chemical or biological applications. Most researchers use fluorescence detection to determine whether a droplet should be sorted or not, called fluorescence assisted droplet sorting (FADS). Whether it is a fluorescently labeled cell in a droplet that needs to be separated from empty droplets, or proof of whether a chemical reaction has taken place, fluorescence is an easy-to-use and fast technique enabling detection at high speeds.
The sorting of droplets is actually a subset of guiding droplets, so all previous techniques for active droplet transport can be used for sorting as well. Albeit a relatively slow process, the surface energy rails achieved active sorting control by laser forcing [5]. The novel method of magnetic rails, at sorting rates of four drops per minute, still has room for improvement. Pre-charging of droplets has so far allowed droplets to be sorted at 3 Hz [61]. Standing SAWs have been shown to reach sorting frequencies of 222 Hz [98]. By using a pneumatic valve to close the waste outlet (with low hydrodynamic resistance), droplets can be sorted towards the other outlet at 250 Hz [39]. The electrostatic energy rails have been converted into a smaller electrode geometry allowing for active sorting at 1200 Hz [58]. As mentioned before, DEP was one of the first applied, active sorting techniques for droplets [46], later improved with another electrode geometry and laser detection to accomplish active sorting up to 2000 Hz [99]. But the technique that is able to apply the largest force on a droplet is the travelling SAW, able to sort at 3000 Hz [72]. It must be noted that not all experiments have been done on comparable droplet systems. For instance, the electrostatic energy rails used droplets with a two times larger radius than the DEP experiments, approximately multiplying the drag force that needs to be overcome by four. For the travelling SAW experiments, the authors also claim that the maximum applied force was not limited by the technique itself, but by the fact that more force could harm the droplet content. See Figure 4 for an overview of drop sorting.
Droplet Trapping and Release
Holding droplets at a specific location in a microchip during flow can be used for several protocol steps. One step could be an incubation, where a droplet needs to wait for a chemical or biological process to reach completion. Or a step where droplets need to be analyzed over a longer time, which is easier if the droplets are not moving. As an example one could think of following the interaction kinetics of a (droplet based) protein binding to a ligand bound on one of the channel walls, using the change in refractive index near the surface as the signal (e.g., using SPR).
Optical tweezers can be used to trap one droplet at a time, and a two-electrode geometry is able to create an energy well and hold a droplet using DEP [100]. Here, however, only the more practical trapping techniques are mentioned, specifically the ones that are able to trap multiple droplets simultaneously, which should allow for multiplex assays. For practical reasons, like re-using a chip or analyzing droplets off-chip, it is also useful to be able to release a droplet after analysis is complete.
Geometrical structures are the simplest method of trapping droplets [6]. The microfluidic parking networks are a typical example of a passive, geometrical technique for trapping droplets at specific on-chip locations [29]. Concentration gradients of trapped droplets can be created, and the trapped volume can also be rectified [7]. Release of the droplets is not mentioned by the authors, but might be possible by reversing the flow.
Arrays of surface energy wells (holes in the microchip substrate) have shown their ability to trap multiple drops [33]. One disadvantage is that the trapping is determined by chance and is thus non-specific. This issue has been resolved by combining the traps with surface energy rails and laser forcing, allowing selection of which droplets will be trapped at which position on the chip. The success of this method was demonstrated via the creation of an array of trapped droplets with an increasing concentration gradient [5]. The droplets can be released by increasing the flow rate [31], which can also break up the droplet, leaving behind a controlled volume in the trap [32], or, as before, the droplet can be actively released by laser forcing [5].
An intricate network of microchannels and pneumatic valves has been used to route flows through the microchannels and selectively guide droplets towards 95 wells. The droplets can be loaded one by one, and as the authors mention, their chip allows for automated, cross-contamination free release and recovery of the reaction products from the individual chambers for downstream analysis. Combined with the ability to easily create concentration gradients of droplet content, this technique can be very useful for automating multiplex laboratory protocols [43].
It is clear that EWOD is intrinsically capable of trapping and release, since it controls all motions of the droplet. The only flow-based technique shown to be able to trap and release multiple droplets on demand without the need for geometric obstructions is the electrostatic potential well. By applying a potential over two channel-wide electrodes separated by a zigzagging gap, droplets can be trapped at each lateral location where the gap forms a tip. Upstream, the electrostatic potential rails are used to transport droplets laterally in the channel towards the desired trapping location. The angle of the zigzag structure ensures that a droplet is corrected for errors in lateral position. One zigzagging electrode geometry forms a column where six drops can be trapped on-demand by switching just one electrode. Releasing the six droplets is simply done by turning off the active electrode. By placing multiple zigzagging structures one after another, arrays of droplet traps are created [26]. Figure 5 shows an overview of the discussed trapping techniques.
Droplet Splitting/Fission
The on-chip splitting of droplets can serve several purposes, for instance if the volume of a droplet needs to be reduced, or if a protocol requires two identical droplets of which one serves as the blank/control experiment. The first requires the asymmetric splitting of a droplet in controlled volume ratios, the second requires splitting into two equally sized droplets. Since water in air or oil has a relatively high interfacial tension, surfactants are sometimes added to the oil and/or water phase to reduce the interfacial tension, which facilitates droplet splitting.
Basically, each technique used for the generation of droplets (Section 4.1) is capable of breaking up the droplet phase. Pneumatic valves can break up an existing droplet while it passes, or a flow focusing junction with a smaller orifice or higher continuous flow rate can break a droplet into multiple smaller drops.
Many passive break up methods use a combination of flow and geometric structures. Splitting slugs of droplets can be done by dividing a channel in two, so a droplet can break up at the T-junction. Breakup can be facilitated by adding a pointy structure to the T-junction [3]. The volume ratios can be predetermined by having different hydrodynamic resistances of either outlet channel. Very similarly, a pillar obstruction in the channel can split droplets. Again, changing the relative pathway resistances by placing the pillar off-center determines the volume ratio of the split droplets [4]. An overview of splitting techniques is given in Figure 6.
Active techniques capable of splitting droplets are quite rare. EWOD is a good example for the splitting into equally sized droplets, even without the need for surfactants [13,101]. By activating three (or more) electrodes in a row, a droplet will be elongated. Deactivation of the center electrode ensures the droplet wants to reduce its contact area with that particular electrode, while still being pulled towards the outer electrodes. This results in the droplet splitting into two equal volumes, or in discrete volume ratios depending on how many, and which, electrodes are used. Similarly, the electrostatic potential wells have also been able to split a droplet into two equal volumes [26].
SAWs can be used for droplet splitting as well. By using two off-axis IDTs on either side of a droplet, SAWs can be created that apply a torque on the droplet. Low amplitude SAWs enable the stretching or rotation of a droplet. When followed by higher amplitude SAWs, a droplet can be split in two [71]. Depending on the ratio of the amplitudes of each SAW, it is possible to alter the eventual volume ratio of the split droplets. It must be noted that in this case the droplets were on the order of microliters, and not in the more conventional picoliter range of droplet microfluidics.
Droplet Merging/Coalescence/Fusion
Many protocols require the merging of droplets, e.g., two droplets with different content coalescing in order to start or stop a chemical reaction, or to dilute a sample [9]. Passive techniques are available to aid in droplet coalescence. A simple passive approach is to widen the microchannel, thus reducing the flow rate locally. This effectively reduces the distance between two consecutive droplets and promotes coalescence when the channel width is reduced once more [102]. Theoretically, all that is required for droplets to coalesce is to bring them close together, and wait for the thin film between the droplets to be squeezed out. Therefore, any technique that can transport or trap droplets (Sections 4.2 and 4.3) is capable of bringing two drops together and merging them. For instance, optical forcing [103] and SAW [70] have been used to trap a droplet until a secondary droplet flows against it. Optical tweezers have been used to fuse two femtoliter droplets together [104], while the more powerful SAW technique has also been used to push two microliter sized drops together [71].
Frequently, and especially in biological applications, surfactants are added to the oil and/or water phase, which separates droplets and helps to ensure that each droplet is one small reaction container, less influenced by neighboring droplets. Surfactants can also mitigate wetting problems, and can provide a means for exchanging small molecules between droplets through the continuous phase [19]. Specifically, fluorinated surfactants can create highly stable emulsions and are capable of storing droplet emulsions for multiple years [105]. To still be able to coalesce these surfactant-stabilized droplets, an external force that destabilizes the interface is required. For instance, as a chemical method, an extra inlet for a destabilizing alcohol can locally increase the chance of coalescence [106]. As a passive method, gaseous bubbles have been placed in between multiple water droplets of different content, serving to separate trains of droplets. Since a confined gaseous bubble moves slower through a channel than the surfactant covered water droplets, the water droplets are pushed together. Over time the thin oil film between the droplets is squeezed out and the droplets coalesce [3]. Active coalescence is mostly achieved by electrical actuation (electrocoalescence).
First, two droplets are brought in close proximity, for instance by the channel geometry. By using a DEP-like electrode geometry, two droplets in an electric field will polarize and charges accumulate at their surfaces. At the interface between the droplets the charges are opposite (if the droplet pair is aligned in the direction of the electric field), which results in an electrostatic force that brings the droplets closer together. The thin oil film is squeezed out until the droplets merge. Applying a high frequency (kHz) electric field perturbs the interface between the droplets, increasing the efficacy of merging [107]. In the same manner, for EWOD and electrostatic potential wells, two droplets have to be brought towards neighboring electrodes, and the AC frequency of the applied voltage helps to destabilize the interface and speeds up the coalescence of the droplets [13]. Interdigitated electrode structures are capable of merging surfactant covered droplets on-demand within milliseconds [93]. Surface energy wells have also been used to trap two surfactant covered, highly stabilized droplets. Laser forcing was required to break the interface and merge the droplets [5]. See Figure 7 for an overview of the discussed droplet merging techniques.
Droplet Logics
So far, in all active techniques the external control is governed by computers and electronics. Understandably so, since digital logics offer fast and reliable actuation. It could, however, make sense to build (part of) the logics into the microfluidic chip, where the presence or state of one drop is influenced by the state of other droplets, or the state of the chip. Theoretically, this could allow droplets to perform more complex routines automatically, determined by the current state of the microfluidic device. As an example, pneumatic membrane valves have been used to open and close different channels and force droplets to execute a certain protocol depending on which valves are activated or not [108]. This allowed the creation of NOT, NAND and NOR gates, flip-flops, oscillators, self-driven peristaltic pumps, and a 12-bit shift register. Another method generates gas bubbles in the channel, and uses the hydrodynamic interaction between bubbles to automatically perform logic operations, forming AND/OR/NOT gates, a toggle flip-flop, a ripple counter, timing restoration and a ring oscillator [109]. As a purely passive technique, a channel geometry has been designed that serves as a shift register for water droplets in oil [110,111]. Very recently, ferrofluidic droplets atop a substrate incorporating multiple permalloy geometries were shown to automatically perform logic operations when a continuously rotating magnetic field is applied [65]. This enabled AND, OR, XOR, NOT and NAND logic gates, fanouts, a full adder, a flip-flop and a finite-state machine without any external control. Specifically, the interaction of serial and parallel logic gates could be useful in droplet microfluidics to perform the required protocols.
Discussion/Conclusions
One protocol step that is not extensively covered in this review is mixing. The homogenization of droplets with volumes in the picoliter regime generally requires ~1 s, assuming a molecular diffusivity of $5 \times 10^{-10}$ m$^2$/s, as is typical for small molecules. This does not yet take into account internal flows inside the droplet that are created by velocity gradients at the interface. It has been shown that droplets traversing a channel can passively mix within 25 ms [112]. Faster passive mixing (within 2 ms) can be achieved by letting the droplet traverse a meandering channel: mixing is then further enhanced by the internal flow patterns [113,114]. Active mixing can be achieved through application of an AC voltage, either in EWOD or in DEP. EWOD droplet mixers using electrode arrays have been demonstrated in closed chips [115], while perturbation of the interface of a sessile droplet was found to enhance mixing as well [116].
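The quoted time scale follows from a simple diffusion estimate; as a minimal sketch (the droplet size is assumed for illustration),
$t_{\mathrm{diff}} \sim \frac{L^2}{D} \approx \frac{(20\ \mu\mathrm{m})^2}{5\times10^{-10}\ \mathrm{m^2/s}} \approx 0.8\ \mathrm{s}$
for a droplet of roughly 20 µm diameter (a few picoliters), consistent with the ~1 s figure above; convective folding in a meandering channel effectively shortens $L$ and hence reduces the mixing time by orders of magnitude.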
The absolute forces in surface energy wells are comparatively small, which limits the operation speeds. However, the technique is easily understood, easily implemented in microfluidic devices and, perhaps its best attribute, passive. This makes the surface energy wells very cheap to implement, since no expensive peripheral equipment (pressure regulators, amplifiers, data acquisition cards, laser optics, etc.) is required. EWOD (DMF) is the technique that offers the most versatile control, being able to perform all protocol steps mentioned in this review. It is also one of the more powerful techniques. The durability of EWOD still appears to be an "Achilles heel", where dielectric breakdown or wetting can occur during prolonged application of high voltages. In general, DEP has the electrode geometry outside the channel, and therefore does not have such issues with durability. This geometry does, however, require (much) higher voltages, and cannot generate forces as large as with EWOD. Moreover, with DEP, it is not possible to perform each of the protocol steps.
SAW offers the largest forces of all techniques discussed, making it stand out as a powerful technique for sorting. Perhaps more focus could be put on making the high intensity pressure fields more localized in the channel, e.g., down to the scale of the droplet radius, to obtain more precise control for the more subtle protocol requirements like splitting or trapping.
Pneumatic membrane valves offer an entirely different way of implementing the required protocol steps. The valves are capable of performing all the drop manipulations discussed in this review. Specifically, the ability to actively change the flow direction of the continuous phase is quite unique as an active control method. One downside of the membrane valves is that they are generally an order of magnitude slower in operation than the electronically actuated methods of drop manipulation.
Optical techniques in general offer much smaller forces than the other techniques. Increasing the light intensity can generate larger forces, but also causes heating inside the liquid. Still, this heating can be used to generate droplets at high rates, or to assist in the coalescence of droplets. The use of focused laser light has also been demonstrated to be effective for several manipulations in the case of surfactant covered droplets.
In short, many different methods have been devised to manipulate droplets in microchannels, each with their own specific (and sometimes inherent) advantages and drawbacks. We have given an overview of techniques that are well-established by now, and of methods which we currently consider to be amongst the most promising or ingenious. For many applications it would be desirable to use only passive techniques; each droplet would then simply execute its process without any need for outside control or feedback. Unfortunately, for the more complicated protocols that is simply impossible and external control is required. It is up to the chip designer to decide which technique will suit their purposes best, finding the optimum between the technique's possibilities, actuation speed, power, accuracy, reliability, ease-of-use, safety and costs. Table 1 gives an overview of the specific positive and negative aspects of each technique.
In some cases, high performance applications have been realized by combining multiple techniques, as exemplified by the combination of (passive) surface energy wells and (active) laser forcing, or (active) EWOD and (passive) two-phase flow. EWOD (active) has also been combined with antibody-coated paramagnetic particles (active) for the execution of complex immunoassays [117]. The combination of (passive) microfluidic parking networks with (active) pneumatic membrane valves has resulted in parking networks with active control for merging, storage and release, while reducing the number of required pneumatic valves [118]. More of such developments towards high standard lab-on-a-chip applications can be expected, since none of the mentioned techniques seem to limit the application of another technique.
Figure 1. Schematic overview of the droplet manipulation techniques. (a) A squeezed droplet can decrease its interfacial energy by expanding in a hole; (b) Increasing the air pressure above the local water pressure expands the membrane into the crossing microchannel, effectively blocking it; (c) A polarizable medium will move towards the region with highest electric field intensity; (d) The electrostatic energy will be minimal when a droplet covers the activated electrode; (e) A droplet passing two electrodes finds a minimum energy when centered above the electrodes; (f) By taking charges from the open electrode, a droplet can be precharged. A precharged droplet will move towards the oppositely charged electrode; (g) Paramagnetic particles move towards the highest magnetic field strength and can drag a droplet along with them; (h) The interdigital transducer creates a surface acoustic wave that is attenuated at the PDMS pillar, creating an upward pressure wave that can move droplets; (i) At specific frequencies the piezo can create resonating pressure waves in a channel, forcing a droplet towards the antinodes; (j) A medium with better polarizability at optical frequencies than the surrounding medium will move towards the region of high intensity laser light.
Figure 2. Schematic representation of the most popular droplet generation methods. Black arrows indicate the flow direction of the continuous phase.
Figure 3. (a) Guiding of droplets by surface energy rails. The laser spot is used to select and force droplets onto the rail. A local widening of the rail serves as a deeper surface energy well where trapping will occur. A droplet can be pushed out of the well by a consecutive droplet. Adapted from [5] with permission of The Royal Society of Chemistry; (b) Guiding of droplets by electrostatic potential rails. Depending on which electrodes are activated, a droplet will be guided towards one of six lateral positions in the channel. Adapted from [26] with permission of The Royal Society of Chemistry.
Figure 4. Active sorting of droplets. (a) Magnetic particles in droplets force the droplet towards the magnetic rail when a magnetic field is applied over the entire chip. With kind permission from Springer Science+Business Media: Microfluidics and Nanofluidics, Selective handling of droplets in a microfluidic device using magnetic rails, 2015, 19, 1, 141-153, Teste et al. [64]; (b) Applying a pressure to the pneumatic valve changes the path of least resistance from the upper to the lower channel. Reprinted with permission from [39]. Copyright 2014, AIP Publishing LLC; (c) Depending on the frequency of the SSAW, droplets will be forced towards one of the five outlets. Adapted with permission from [98]. Copyright 2013, American Chemical Society; (d) By switching the central electrode on or off, the electrostatic potential rails will force a droplet to the upper or lower channel. Adapted with permission from [58]. Copyright 2015, AIP Publishing LLC; (e) By activating the electrode (red), an electric field gradient is created forcing water droplets to the upper channel. In the inset the droplets go to the lower channel by default. Adapted from [99] with permission of The Royal Society of Chemistry; (f) Left: The IDT generates a SAW, which is attenuated at the PDMS pillar. This locally creates an upwards pressure wave that forces a droplet to the upper channel. Right: Using two IDTs and two pillars allows sorting to multiple outlets. Adapted from [72] with permission of The Royal Society of Chemistry.
Figure 5. Overview of trapping mechanisms of droplets in oil flow. (a) Array of passive geometric traps. Reversing the oil flow releases the droplets. Adapted from [6] with permission of The Royal Society of Chemistry; (b) Array of multiple, consecutive microfluidic parking networks of different volume. Reprinted with permission from [7]. Copyright 2014, AIP Publishing LLC; (c) Surface energy wells, for trapping droplets and release by laser forcing. Adapted from [5] with permission of The Royal Society of Chemistry; (d) Top: Gradients of droplet content created and guided by pneumatic membrane valves to 95 individually addressable traps. Bottom: Magnification of the passive trapping geometry. Adapted with permission from [43]. Copyright 2012, National Academy of Sciences, USA; (e) Left: Three different electrode geometries for electrostatic potential wells. Right: Six by six electrostatic potential well arrays for the trapping and release of 36 droplets. Adapted from [26] with permission of The Royal Society of Chemistry.
Figure 5 .
Figure 5. Overview of trapping mechanisms of droplets in oil flow. (a) Array of passive geometric traps; reversing the oil flow releases the droplets. Adapted from [6] with permission of The Royal Society of Chemistry; (b) Array of multiple, consecutive microfluidic parking networks of different volumes. Reprinted with permission from [7]. Copyright 2014, AIP Publishing LLC; (c) Surface energy wells for trapping droplets and release by laser forcing. Adapted from [5] with permission of The Royal Society of Chemistry; (d) Top: gradients of droplet content created and guided by pneumatic membrane valves to 95 individually addressable traps. Bottom: magnification of the passive trapping geometry. Adapted with permission from [43]. Copyright 2012, National Academy of Sciences, USA; (e) Left: three different electrode geometries for electrostatic potential wells. Right: six-by-six electrostatic potential well arrays for the trapping and release of 36 droplets. Adapted from [26] with permission of The Royal Society of Chemistry.
Figure 6. Overview of splitting methods. (a) Two off-axis IDTs create two SAWs that apply a torque on a 3 µL droplet, splitting it in two. Adapted from [71] with permission of The Royal Society of Chemistry; (b) Passively, using multiple consecutive T-junctions, a slug can be split into 16 equal-sized droplets. Reproduced from [3] with permission of The Royal Society of Chemistry; (c) Top: a pillar in the center of a channel splits droplets into equal volumes. Bottom: an off-center pillar splits droplets into unequal volumes. Reprinted with permission from [4]: Link et al., Phys. Rev. Lett. 2004, 92, 054503. Copyright 2004 by the American Physical Society; (d) By activating a row of three EWOD electrodes, a drop will elongate. After deactivating the central electrode, a bridge is formed, splitting the droplet into two equal volumes. Copyright 2003 IEEE. Reprinted, with permission, from [13]; (e) Similar to EWOD, a droplet in oil flow will split into two equal volumes if the voltage across the electrostatic potential rails is high enough. The drag force assists in the breakup of the neck. Reproduced from [26] with permission of The Royal Society of Chemistry.
Figure 7. Overview of droplet merging techniques. (a) Merging two microliter-sized droplets using two SAWs focused by a tapered electrode geometry. Adapted from [71] with permission of The Royal Society of Chemistry; (b) Electrocoalescence of two cell-containing, surfactant-covered droplets while travelling through the electric field created by two electrodes, as indicated by the dashed triangles. The coalescence frequency is > 1 kHz. Adapted with permission from [107]. Copyright 2014 by John Wiley & Sons Inc., Hoboken, NJ, USA; (c) EWOD electrode geometry used to bring two droplets together after splitting. Copyright 2003 IEEE. Reprinted, with permission, from [13]; (d) After trapping two surfactant-covered droplets with different content in surface energy wells, they are merged using a laser spot focused at the droplet-droplet interface. Adapted from [5] with permission of The Royal Society of Chemistry; (e) Similar to electrocoalescence, interdigitated electrodes below a microchannel will merge two surfactant-covered droplets within milliseconds. Adapted with permission from [93]. Copyright 2011, AIP Publishing LLC.
Table 1. Overview of the techniques discussed in this review and their capabilities. | 18,190.2 | 2015-11-13T00:00:00.000 | [ "Physics" ] |
Stable Sparse Classifiers Identify qEEG Signatures that Predict Learning Disabilities (NOS) Severity
In this paper, we present a novel methodology for solving the classification problem, based on sparse (data-driven) regressions combined with techniques for ensuring stability, which is especially useful for high-dimensional datasets with small sample sizes. The sensitivity and specificity of the classifiers are assessed by a stable ROC procedure, which uses a non-parametric algorithm for estimating the area under the ROC curve. This method allows the performance of the classification to be assessed by the ROC technique when more than two groups are involved in the classification problem, i.e., when the gold standard is not binary. We apply this methodology to EEG spectral signatures to find biomarkers that allow discriminating between (and predicting pertinence to) different subgroups of children diagnosed with Not Otherwise Specified Learning Disabilities (LD-NOS) disorder. Children with LD-NOS have notable learning difficulties, which affect education but cannot be placed into a specific category such as Reading (Dyslexia), Mathematics (Dyscalculia), or Writing (Dysgraphia). By using the EEG spectra, we aim to identify EEG patterns that may be related to specific learning disabilities in an individual case. This could be useful for developing subject-based methods of therapy based on information provided by the EEG. Here we study 85 LD-NOS children, divided into three subgroups previously selected by a clustering technique over the scores of cognitive tests. The classification equation produced stable marginal areas under the ROC of 0.71 for discrimination between Group 1 vs. Group 2; 0.91 for Group 1 vs. Group 3; and 0.75 for Group 2 vs. Group 3. A discussion of the EEG characteristics of each group in relation to the cognitive scores is also presented.
INTRODUCTION
Learning disability (LD), with a prevalence between 2.6 and 3.5% (14% of all children with mental health problems) in children between 5 and 16 years old (Emerson and Hatton, 2013), is a complex phenomenon that includes many facets. Definitions and classifications vary profoundly (Kavale and Forness, 1995). Usually, a child is identified as suffering from LD when he/she has poor performance in standardized tests for reading, mathematics, and written expression, adjusted according to the age, schooling, and level of intelligence of the proband. Specific Learning Disorders are defined by problems in only one of these three areas: reading/language (Dyslexia), Mathematics (Dyscalculia), or Writing (Dysgraphia). On the other hand, there is another category, Not Otherwise Specified Learning Disability (LD-NOS), in which more than one area is affected (Diagnostic and Statistical Manual of Mental Disorders, version IV, DSM-IV-TR; American Psychiatric Association, 2000).
Thus, LD-NOS is a broad, catch-all category for children with notable learning difficulties, which affect education but do not fall into a specific category. Obtaining a more nuanced assessment of these children may help to develop subject-tailored therapy methods that better cope with their problems. However, due to the heterogeneity within and across domains, it is challenging to disentangle the different possible subgroups of LD-NOS.
To our knowledge, subtyping of LD-NOS children has so far been based only on behavioral and neuropsychological tests. Recently, Roca-Stappung et al. (2017) used cluster analysis based on the scores of the Neuropsychological Assessment of Children test (ENI) to find three clearly defined subtypes in a sample of 85 LD-NOS children. Children in Group 1 showed less severe problems; those in Group 2 showed an intermediate performance (without scoring very low in any of the tests); and Group 3 had the most severe problems, significantly worse than the other groups in almost all tests. Thus, group membership is an ordinal-scale variable that differentiates levels of neuropsychological disability and may provide a key to the design of more specific rehabilitation. Nevertheless, further work in this direction might be substantially improved by the inclusion of neural biomarkers as a basis for stratification. Natural candidates for this type of biomarker are those derived from electrophysiology, which is a non-invasive, inexpensive, and sensitive technology for assessing brain dysfunction, with relevance to low- and middle-income countries.
The first stage in the identification of a biomarker is to demonstrate a significant variation of the selected features with the disease entity. The second is to verify their predictive power in retrospective studies. Progress for the first stage has been achieved for electrophysiological biomarkers of LD-NOS in work that can be summarized as follows.
Event-Related Potential (ERP) parameters have been shown by several authors to differ significantly between LD-NOS children and controls, and between two subgroups of LD-NOS (Silva-Pereyra et al., 2001; Heine et al., 2013; Fernández et al., 2014; Tang et al., 2014; Žarić et al., 2014; Ma et al., 2016; Moll et al., 2016). Unfortunately, ERPs require complicated experimental conditions and will not be the focus of our interest.
Quantitative Electroencephalography (qEEG) from the resting state is, on the contrary, a method that is less difficult to apply. qEEG parameters are clearly different between LD-NOS children and those with good academic achievement (Becker et al., 1987; Marosi et al., 1992, 1997; Fernández et al., 2002; Žarić et al., 2017). LD children are characterized by more power in the Theta band (4-7.5 Hz) and less power in the range of alpha frequencies (8-13.5 Hz) (John et al., 1983; Lubar et al., 1985; Marosi et al., 1992; Chabot et al., 2001; Fernández et al., 2002; Gasser et al., 2003; Fonseca et al., 2006). Increases in power in the Delta band have also been observed in cases with severe difficulties. Furthermore, Jäncke and Alahmadi (2016) showed significant qEEG differences between children with LD-NOS, those with verbal learning disabilities (LD-Verbal), and healthy controls. The features were selected using a group independent component analysis (gICA) model. Finally, the study by Roca-Stappung et al. (2017) mentioned above showed that qEEG parameters differed between subtypes of LD-NOS.
In this paper, we turn attention to the second stage of qEEG biomarker identification for LD-NOS subtyping: the selection of a subset of parameters with high predictive power. This is an old problem in multivariate statistics: variable selection for classification. The goal is to extract a small subset of relevant variables that can jointly classify subjects accurately into different populations. This task often becomes difficult in high-dimensional settings, i.e., where there is a large number of variables involved in the problem and a relatively small sample size (Mwangi et al., 2014; Jovic et al., 2015). Different approaches to variable selection have been used. The simplest one is to rank variables by means of standard univariate statistical methods such as the t-test and select those with significant scores. Its virtue is simplicity, but it entails a high number of individual tests that then require control of false positives for multiple comparisons. It also ignores the possibly important information contained in the correlations between variables. Multivariate discriminant analysis, on the other hand, does take advantage of the correlations when selecting variables for a discriminant function. The main problem with this approach is that, in high-dimensional problems, there is a lack of stability in the selection of variables: different subsets of biomarkers that exhibit similarly high classification accuracy on the training set then fail utterly on the test set. It is obvious that the training phase is capitalizing on chance.
Some methods have been introduced to circumvent the difficulties that arise in variable selection and classification in high-dimensional settings (Meinshausen and Buhlmann, 2010; see Hastie et al., 2009; Fan and Lv, 2010 for comprehensive reviews). We base our own work on the approach proposed by Wehrens et al. (2011), who developed a method to achieve stability of potential biomarkers under perturbations. We further develop these ideas and apply them to the selection of biomarkers when there are several groups ordered in degree of severity, as is the case for the LD-NOS groups described by Roca-Stappung et al. (2017).
The aims of this paper are therefore two-fold:
• To improve a technique, first introduced by Wehrens et al. (2011), for the identification of stable classifiers in high-dimensional settings. This method will be extended to the prediction of disability severity.
• To identify, using this technique, a stable low-dimensional classifier, based on a minimal set of qEEG features, that predicts the degree of severity of LD-NOS.
SAMPLE AND PROPOSED qEEG FEATURE SET
Participants
Eighty-five right-handed children (31 female) diagnosed with LD-NOS participated in the study. The age range was 8 to 11 years (9.2 ± 0.96). They were tested with different scales: M.I.N.I.-KID (Sheehan et al., 2010), WISC-IV (Wechsler, 2007), and the Child Neuropsychological Assessment (ENI; Matute et al., 2007). They had normal neurological examinations and no other psychiatric disorder. Their score on the Full-Scale Intelligence Quotient (FSIQ) was over 70, to exclude intellectual disability. The ENI evaluated three cognitive domains: reading, writing, and arithmetic. All subjects scored low in at least two of the three domains. k-means clustering was applied to the ENI scores, dividing the sample into three groups; the number of clusters had been decided beforehand by the experimentalists (Roca-Stappung et al., 2017). The Ethics Committee of the Neurobiology Institute of the National Autonomous University of Mexico approved this study, which followed the Ethical Principles for Medical Research Involving Human Subjects established by the Declaration of Helsinki. Informed consent was signed by all children and their parents.
A detailed description of the experiment has been published elsewhere (Roca-Stappung et al., 2017).
EEG Recordings
The resting-state EEG was recorded with the eyes closed for the 19 leads of the 10-20 system, using the linked earlobes as reference. The data were sampled every 5 ms (200 Hz). To construct the EEG spectrum for each subject, 24 artifact-free segments of 2.56 s length were selected by an experienced neurophysiologist. This segment length is commonly used in both clinical and experimental EEG studies, since it avoids non-stationarities in the EEG signal and guarantees an appropriate frequency resolution of 0.39 Hz for the analysis (Niedermeyer et al., 2010), with an accurate spectral description of the EEG signal, since the degrees of freedom of this estimate are larger than the number of electrodes. The EEG segments were re-referenced to the average reference. The data were then transformed to the frequency domain using the Fast Fourier Transform. The variables for the model were chosen from the standard high-resolution spectral model (Pascual-Marqui et al., 1988; Valdes-Sosa et al., 1990; Szava et al., 1993), which has been demonstrated to have higher accuracy than the traditional broadband spectral models (Valdes-Sosa et al., 1990; Szava et al., 1993): the summarization involved in calculating broadband models is a weaker approach for describing the EEG, since it can split a spectral peak between two bands (Szava et al., 1993). The spectra were calculated from 0.39 to 19.11 Hz, yielding a parameter vector of 49 frequencies for each of the 19 leads, for a total of 931 variables. Spectra were rescaled by the Global Scale Factor (Hernández et al., 1994). An age correction was not applied to the data, since no significant differences with age were found among the groups. It must be noted that, while the high-resolution spectral model has a higher dimensionality than the commonly used broadband estimation, we avoid the danger of overfitting with the method described below.
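To make this feature-construction step concrete, the following is a minimal sketch, not the authors' pipeline. The sampling interval, segment length, and frequency range follow the text; the array layout, the periodogram averaging, and the simple normalization used as a stand-in for the Global Scale Factor of Hernández et al. (1994) are our assumptions.

```python
import numpy as np

FS = 200.0      # 5 ms sampling interval -> 200 Hz (from the text)
SEG_LEN = 512   # 2.56 s segments -> FS / SEG_LEN = 0.39 Hz resolution

def eeg_spectra(segments):
    """Average power spectra over artifact-free EEG segments.

    segments: array of shape (n_segments, n_leads, SEG_LEN), e.g., (24, 19, 512),
              already re-referenced to the average reference.
    Returns an (n_leads, 49) matrix covering 0.39-19.11 Hz
    (19 leads x 49 frequencies = 931 features after flattening).
    """
    power = np.abs(np.fft.rfft(segments, axis=-1)) ** 2  # periodogram per segment
    mean_power = power.mean(axis=0)                      # average over the 24 segments
    spectra = mean_power[:, 1:50]                        # FFT bins 1..49: 0.39 to 19.11 Hz
    gsf = np.sqrt(spectra.mean())                        # crude stand-in for the
    return spectra / gsf                                 # Global Scale Factor (assumption)
```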
STATISTICAL METHODS
The procedure to select a stable and sparse classifier is based on:
• The use of a sparse classifier based on the L1 penalty;
• The evaluation of the performance of the classifier using Receiver Operating Characteristic (ROC) measures;
• The use of resampling techniques to ensure variable selection that leads to stable classifiers.
We now describe each of these issues in turn.
The Sparse Classifier: Elastic Net Regression with Prior Screening
We carry out sparse classifier construction, with built-in variable selection, by estimating a weighted multivariate linear regression model known as the elastic net (Zou and Hastie, 2005; Friedman et al., 2010). The model is described by Equation (1):

$$(\hat{\varphi}_0, \hat{\varphi}) = \underset{\varphi_0,\,\varphi}{\arg\min}\ \frac{1}{2N}\sum_{i=1}^{N}\left(y_i - \varphi_0 - x_i^{T}\varphi\right)^2 + \lambda P_{\gamma}(\varphi) \qquad (1)$$

Here N is the number of subjects, $x_i \in \mathbb{R}^p$ are the observations of subject i, and $y_i \in \mathbb{R}$ is the group label of subject i; $\varphi_0 \in \mathbb{R}$ and $\varphi \in \mathbb{R}^p$ are the model parameters; $\gamma$ is the regularization parameter; p is the number of variables in the model; and

$$P_{\gamma}(\varphi) = (1-\gamma)\,\tfrac{1}{2}\,\|\varphi\|_{\ell_2}^{2} + \gamma\,\|\varphi\|_{\ell_1}. \qquad (2)$$
The penalty $P_{\gamma}$ in Equation (2) is known as the elastic-net norm (Zou and Hastie, 2005). To understand its behavior, note that the $\|\varphi\|_{\ell_2}^{2}$ norm induces regressions that behave well in high dimensions but tend to spread coefficient weights among highly correlated variables. On the contrary, the $\|\varphi\|_{\ell_1}$ norm produces the "lasso regression", which is indifferent to highly correlated predictors and tries to select only one, thus inducing sparsity. The elastic net reaches a compromise between the ridge and the lasso, the relative contributions being determined by the $\gamma$ and $\lambda$ parameters. Since these parameters are selected by cross-validation, in any specific case the sparsity of the solution will be data-driven. The implementation of the elastic net described by Friedman et al. (2010) and Hastie et al. (2016), known as GLMNet, achieves high algorithmic efficiency by using cyclical coordinate descent methods. This fast implementation is essential for carrying out the iterative resampling techniques we use to achieve stability. We should also mention that GLMNet can cope with a wide family of models that includes not only the least-squares regression mentioned above, but also two-class logistic regression and multinomial regression problems.
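As an illustration of Equations (1)-(2), the following is a minimal sketch using scikit-learn's coordinate-descent elastic net as a stand-in for the GLMNet package the authors cite; the candidate grid for gamma (l1_ratio) is our assumption, while the leave-one-out choice of the regularization parameters follows the text.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

def fit_sparse_classifier(X, y):
    """Elastic-net regression of ordinal group labels (1, 2, 3) on qEEG features.

    l1_ratio plays the role of gamma and alpha the role of lambda in
    Equations (1)-(2); both are chosen by cross-validation, so the
    sparsity of the solution is data-driven.
    """
    model = ElasticNetCV(
        l1_ratio=[0.1, 0.5, 0.7, 0.9, 0.95, 1.0],  # assumed candidate grid for gamma
        cv=len(y),                                  # leave-one-out CV, as in the paper
        max_iter=50000,
    )
    model.fit(X, y)
    selected = np.flatnonzero(model.coef_)          # indices of non-zero coefficients
    return model, selected
```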
According to our experience, the GLMNet classification algorithm can deal well with problems with up to 1,000 variables, even when the number of subjects is less than 100. However, for the goal of dimensionality reduction, we applied an additional variable-screening technique to eliminate variables with a negligible contribution to the classification problem: the "independent significance features test" ("indfeat"; Weiss and Indurkhya, 1998). For a single variable X, the indfeat index is the absolute value of a t-test statistic for comparing group means. A feature is retained as a candidate if the significance value returned by indfeat is larger than 1.0.
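The screening rule can be sketched in a few lines. This follows the description above (absolute t statistic, threshold of 1.0, applied to each pair of groups as in the Variable Selection section); the function and variable names are ours, and the exact indfeat statistic of Weiss and Indurkhya (1998) may differ in detail.

```python
import numpy as np
from itertools import combinations
from scipy.stats import ttest_ind

def indfeat_screen(X, y, threshold=1.0):
    """Keep variables whose |t| exceeds the threshold for at least one pair of groups."""
    keep = np.zeros(X.shape[1], dtype=bool)
    for a, b in combinations(np.unique(y), 2):
        t_stat, _ = ttest_ind(X[y == a], X[y == b], axis=0)  # per-column t statistic
        keep |= np.abs(t_stat) > threshold
    return np.flatnonzero(keep)  # column indices of retained candidates
```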
Evaluation of the Accuracy of Classifiers
A widely used technique for assessing classifier performance is the receiver operating characteristic (ROC) methodology. Let us first consider the two-group classification scenario that attempts to distinguish between D+, the diseased (i.e., positive condition) group, and D−, the healthy (i.e., negative condition) group. Also suppose that there is a classifier that produces a continuous index T measured for both groups. The convention will be that higher values of the test result are associated with greater severity of the disorder. We will classify a subject as positive if T ≥ t, and this defines probabilities known as the specificity Sp(t) and the sensitivity Se(t) of the classifier:

$$Sp(t) = P(T < t \mid D_-), \qquad Se(t) = P(T \ge t \mid D_+),$$

where P− and P+ are, respectively, the probability densities of the index in the two groups. The ROC curve is the plot of Se(t) vs. 1 − Sp(t) (lack of specificity). A global measure of diagnostic accuracy is the area under the ROC curve (AUC). As can easily be seen, the AUC is 0.5 for a classifier operating at chance and 1 for the perfect classifier. At a greater level of detail, to determine the threshold for which the detection level is optimal [the balance between Sp(t) and Se(t)], the Youden index was defined by Youden (1950) as

$$J(t) = Se(t) + Sp(t) - 1,$$

which reaches a maximum value of 1 if the test is perfect and a minimum of 0 when classification is at chance. An optimal threshold t* can thus be obtained by taking $t^* = \arg\max_t J(t)$, and the optimal Youden index, which maximizes the overall effectiveness of a diagnostic test, will be the summary measure of its discriminatory ability.
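Both quantities can be estimated nonparametrically in a few lines. The following is a minimal sketch under the definitions above, not the implementation used in the paper; the function and variable names are ours.

```python
import numpy as np

def auc_nonparametric(t_neg, t_pos):
    """Mann-Whitney (rank-based) estimate of the area under the ROC curve."""
    diff = t_pos[:, None] - t_neg[None, :]           # all positive-negative pairs
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def youden_optimal(t_neg, t_pos):
    """Threshold t* maximizing J(t) = Se(t) + Sp(t) - 1."""
    thresholds = np.unique(np.concatenate([t_neg, t_pos]))
    se = np.array([(t_pos >= t).mean() for t in thresholds])  # P(T >= t | D+)
    sp = np.array([(t_neg < t).mean() for t in thresholds])   # P(T < t | D-)
    j = se + sp - 1.0
    best = int(j.argmax())
    return thresholds[best], j[best]
```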
We now generalize these concepts to the situation (as dealt with in this article) in which there are three ordered diagnostic groups based on the severity of a disorder or disease: D+, the positive condition group; D0, the intermediate group (early stage/very mildly diseased); and D−, the healthy (i.e., negative condition) or, alternatively, less affected group.
An AUC-like measure for multiple ordered groups was proposed by Obuchowski (2005). This author defines a nonparametric test that can estimate the marginal areas under the ROC between each pair of groups and the global area under the ROC for all groups. The algorithm also allows the user to specify a parameter to penalize the effect of making a mistake in one specific sample, say, to penalize mistakes with the positive class, i.e., assigning an ill subject to the healthy population. The Youden index has also been generalized to the ordered three-group scenario by Luo and Xiong (2013). Let P_i be the probability density of the test in group D_i, i = +, 0, −. Two thresholds t− and t+, with t− < t+, are defined. Subjects whose T is below t− are assigned to D− and those at or above t+ to D+. The remaining subjects are classified into the intermediate group D0.
The probabilities of correctly classifying patients from the three groups are individually defined as

$$S_-(t_-) = P(T < t_- \mid D_-), \quad S_0(t_-, t_+) = P(t_- \le T < t_+ \mid D_0), \quad S_+(t_+) = P(T \ge t_+ \mid D_+).$$

The generalized Youden index for three groups,

$$J_3(t_-, t_+) = \tfrac{1}{2}\left[S_-(t_-) + S_0(t_-, t_+) + S_+(t_+) - 1\right],$$

allows the selection of optimal cutoff points as in the two-group case.
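A direct way to obtain the optimal pair of cut-points is an exhaustive grid search over the observed score values. The following is a minimal sketch under the definitions above, not the Luo and Xiong (2013) implementation; the function and variable names are ours.

```python
import numpy as np

def generalized_youden(scores_neg, scores_mid, scores_pos):
    """Grid search for the pair (t_minus, t_plus) maximizing J3."""
    candidates = np.unique(np.concatenate([scores_neg, scores_mid, scores_pos]))
    best_j, best_cuts = -np.inf, (None, None)
    for i, t_minus in enumerate(candidates):
        for t_plus in candidates[i + 1:]:            # enforce t_minus < t_plus
            s_neg = (scores_neg < t_minus).mean()    # correct in D-
            s_mid = ((scores_mid >= t_minus) & (scores_mid < t_plus)).mean()  # correct in D0
            s_pos = (scores_pos >= t_plus).mean()    # correct in D+
            j = 0.5 * (s_neg + s_mid + s_pos - 1.0)
            if j > best_j:
                best_j, best_cuts = j, (t_minus, t_plus)
    return best_j, best_cuts
```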
Procedure for Finding Stable Classifiers
With this framework in place, we now turn to the problem of finding stable classifiers, that is, sets of variables that have not been selected by chance. We follow here the central idea of Wehrens et al. (2011), which is based on perturbations: variable selection is carried out on random subsamples obtained by resampling methods such as the jack-knife or bootstrap. If in some iteration a variable is selected by capitalizing on chance, it is very unlikely to be present in many other iterations of the method. More specifically, in each iteration only the top 10% of variables (the "top fraction") is retained. The "top fraction" can be based on the absolute values of either the t-values or the model coefficients. After all iterations, variables are ranked according to how frequently they were selected. Only variables with a selection frequency above a "consistency threshold" are retained as useful biomarkers. Wehrens et al. (2011) used the AUC to pick the "consistency threshold", which is recommended to be 50%. In this paper, we extend the methodology proposed by Wehrens et al. (2011) in several directions:
• Instead of selecting the "top fraction" of variables to be retained in each iteration, we carry out this selection with a doubly sparse procedure: preselection of variables, first by the indfeat feature-selection procedure, and then further sparsification by means of the elastic-net regression.
• The standard AUC estimation procedure is sensitive to small perturbations of the sample, especially when the number of variables exceeds the sample size (p >> N) (Pencina et al., 2008; Gu and Pepe, 2009). To solve this problem, we implement a stable estimate based on the empirical statistical distribution of the ROC areas, obtained by further random subsamplings of the data. The value at the 50th percentile of the distribution is adopted as a stable estimator of the AUC.
• For the estimation of the ROC curve, we use a non-parametric method (Obuchowski et al., 2001; Obuchowski, 2005, 2006) that is valid when the gold standard is not binary, i.e., is on a continuous, ordinal, or nominal scale, as in our case. The method allows assessing the discrimination accuracy by calculating the marginal area under the ROC between each pair of groups. The generalization of the ROC to the multi-class case has received much attention from researchers in the last decade (Nakas and Yiannoutsos, 2004).
• Additionally, to give a numerical index of the specificity and sensitivity of the method, we used the Youden index for ordered three-group data described in the previous section.
Summary of the Method for Stable Classifiers
Our procedure for the identification of stable classifiers is shown in Figure 1 in schematic form. It consists of two parts.
Biomarker Selection (Left Panel of Figure 1)
1) For each iteration i, the algorithm selects a random subsample that includes 70% of the original individuals and 70% of the variables.
2) Indfeat screening with a threshold of 1.0 is used to prune the total variable set.
3) The GLMNet procedure is then applied to obtain a minimal set of variables for iteration i. The variables selected in this iteration are kept for further analysis.
4) After a number of Max_Iterations (1,000 in our case), only the variables that are selected more than 50% of the time enter the final selection.
5) The final classifier is found using GLMNet with the selected variables (a sketch of this loop follows the list).
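The selection loop can be sketched as follows, reusing the indfeat_screen() and fit_sparse_classifier() helpers sketched earlier. The iteration count, subsampling fractions, and consistency threshold follow the text; the fixed random seed and the exact normalization of the selection frequency are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility (assumption)

def select_stable_biomarkers(X, y, n_iter=1000, frac=0.70, consistency=0.50):
    """Count how often each variable survives screening plus elastic-net
    fitting on random 70%/70% subsamples; keep those above the threshold."""
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_iter):
        rows = rng.choice(n, size=int(frac * n), replace=False)   # step 1: subjects
        cols = rng.choice(p, size=int(frac * p), replace=False)   # step 1: variables
        Xs, ys = X[np.ix_(rows, cols)], y[rows]
        kept = indfeat_screen(Xs, ys, threshold=1.0)              # step 2: prune
        if kept.size == 0:
            continue
        _, nonzero = fit_sparse_classifier(Xs[:, kept], ys)       # step 3: sparse fit
        counts[cols[kept[nonzero]]] += 1                          # map back to original indices
    stable = np.flatnonzero(counts / n_iter > consistency)        # step 4: >50% rule
    final_model, _ = fit_sparse_classifier(X[:, stable], y)       # step 5: final classifier
    return stable, final_model
```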
Note that our algorithm uses two different cross-validation procedures. The first is leave-one-out cross-validation (LOO-CV), included in the selection of the regularization parameters of the GLMNet algorithm; the second is the random subsampling of variables and subjects within the loop at each iteration. This type of cross-validation is known as repeated random subsampling, or Monte Carlo cross-validation (RRS-CV) (Dubitzky et al., 2007, Chapter 8), which is more restrictive than the commonly used leave-one-out technique and is one of the strongest cross-validation procedures possible.
ROC-Based Biomarker Stability Assessment (Right Panel of Figure 1)
6) For a given iteration i, a random subsample of 70% of the original individuals is formed.
7) With the remaining 30% of the subjects, the area under the ROC curve (AUC) is calculated.
8) After a number of Max_Iterations (1,000 in our case), the distribution of the AUC is found.
9) The AUC at the 50th percentile of the distribution is retained as the measure of classification accuracy.
Note that in this phase we also use RRS-CV, in step 6), during the construction and validation of the robust ROC curves (a sketch follows).
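A minimal sketch of steps 6-9, under the same assumptions as the previous blocks: fit(X, y) stands for refitting the final classifier on a training subsample, and pairwise_auc for a two-sample AUC such as auc_nonparametric() above; the group pair shown in the example call is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)  # assumption: fixed seed

def stable_auc(X, y, fit, pairwise_auc, n_iter=1000, frac=0.70):
    """Median AUC over repeated random 70%/30% train/test splits (steps 6-9)."""
    aucs = []
    n = len(y)
    for _ in range(n_iter):
        train = rng.choice(n, size=int(frac * n), replace=False)   # step 6
        test = np.setdiff1d(np.arange(n), train)
        scores = fit(X[train], y[train]).predict(X[test])          # step 7
        aucs.append(pairwise_auc(scores[y[test] == 1],             # e.g., Group 1 vs. 3
                                 scores[y[test] == 3]))
    return np.median(aucs)                                         # steps 8-9: 50th percentile
```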
Variable Selection
The methodology described in the previous section was applied to the 931 variables of the EEG spectra of the 85 LD-NOS children, divided into the three groups of increasing disability. As a first step, the indfeat index was calculated between each pair of groups of the sample for all variables in the model. Variables that did not exceed the significance level of 1.0 for any pair of groups were discarded as candidate predictors; as a result of this procedure, 364 variables were removed. The biomarker-selection step (see Figure 1, left panel) was applied to the remaining 537 variables, and 20 variables were selected as biomarkers. Table 1 shows the results of the classification procedure. The Table has been sorted by frequency in Hz, to facilitate comparison with the traditional Broad Band definition of EEG frequencies. Columns 1 and 2 show the lead and frequency selected as biomarkers; Column 3 shows the proportion of times each variable appeared over iterations; Column 4 shows the coefficients ϕ with which each variable was included in the final classification equation; and Column 5 groups the selections according to the Broad Band model. Figure 2 summarizes the distribution of the biomarkers in both topography and frequency. The first row of Figure 2 shows the topographical distribution of the ϕ coefficients, summarized by the traditional broadband frequencies, to facilitate comparison. Note that, due to the sparse nature of the biomarker technique, there are no wide areas selected as biomarkers, as is commonly seen in classification techniques based on statistical thresholds. The elastic-net technique used here tries to avoid selecting as biomarkers variables that contain approximately equivalent information, as is the case for adjacent frequencies and electrodes. Note, however, that the biomarkers are not randomly distributed. For example, in Delta there are three adjacent electrodes in the right hemisphere (C4, T4, and T6); in High Theta there is a wide frontal area; in Beta the variables are in the parieto-occipital areas (P3 and O1) and in one frontal area (F3) of the left hemisphere. It is also important to observe the colors in the figure: they indicate that the variables have the same sign in the classification equation (i.e., the same effect), and they are well organized by brain regions.
Rows 2 to 4 in Figure 2 show the average differences of the mean values of the spectra between each pair of groups in each frequency band.
Additionally, to facilitate the analysis of the results, Table 2 shows a summary of the clinical and electroencephalographic findings for each group. The EEG findings are extracted from the mean differences between the groups shown in Figure 2.
Performance of the Classifier
The stable marginal AUCs of the ROC for each pair of groups are shown in Table 3. The classification equation has high discrimination power for Groups 1 and 3 (0.91). The marginal AUCs for Groups 1 and 2 and for Groups 2 and 3 also exhibit good classification accuracy, above 0.7. Figure 3A shows the scores obtained for each subject in each group, evaluated with the classification equation; a boxplot is included to show the mean and quantiles of each group. The third group is well separated from the first two groups, explaining the large AUC shown in Table 3. The right panel of Figure 3 shows the global ROC curve for the classification equation. Note that the total area under the ROC curve is 0.92, which is higher than the area reported in Table 3 (0.89), since it has not yet been submitted to the stability procedure described in the right panel of Figure 1, which yields a more conservative estimate. Note also that the value of the ROC curve at 10% False Positives is 0.77 and at 20% False Positives is 0.91, which means that at a low rate of False Positives the sensitivity of the equation is high, a very desirable property.
Panel C in Figure 3 shows the results obtained with the stability procedure for the ROC, after Max_Iterations = 1,000. The figures show the empirical distribution of the AUC for the global area and the pairwise areas after all iterations. In each iteration, the nominal ROC is obtained by selecting a random subsample of the original sample. The values for the total AUC, as well as for each pair of groups, are stored. At the end, the stable estimate of the AUCs is taken as the value at the 50th percentile of the empirical distributions (see Figure 3C).
FIGURE 2 | Color plate with the ϕ coefficients of the classification equation (A) and the differences of the mean EEG spectra between each pair of groups at the leads selected by the biomarker procedure (B). Everything has been summarized by the Broad Bands shown in Table 1. The ϕ coefficients have been normalized to show only the sign of the coefficient.
Optimal Sensitivity and Specificity of the Model (Youden Index)
The Youden index described in Section Evaluation of the Accuracy of Classifiers was calculated for the stable classification equation. Table 4 shows a summary of the basic statistics produced by the method. The Youden index for this model was 0.4838, with a 95% confidence interval (CI) of [0.38, 0.59].
The best cut-points found by the Youden index in this case are: lower = 3.01, upper = 3.42. These values allow the Youden index not only to summarize the discriminatory accuracy of the diagnostic test but also to provide ready-to-use optimal cut-points for future diagnosis. Table 5 shows the correct classification probabilities for each group at the selected cut-points.
Comparison of the Results with Those of Random Samples and Other Classification Methods
To verify that the results just described (see Table 2) were not produced by chance, we carried out a further 1,000 realizations of our complete classification procedure, randomly reordering group membership in each iteration. For each iteration, we calculated the total area under the ROC curve. The distribution of those AUC values is at chance level (mean = 0.497). To assess this result statistically, we took the distribution of the AUC values and calculated their density function, as well as the probability function, in the range 0 to 1. These results are shown in Figure 4. The left panel of Figure 4 shows the probability function of the AUC. Note that the probability of obtaining by chance an AUC value of 0.91 (like ours) is smaller than 0.1e-10, which is in practice an impossible event. The right panel of Figure 4 shows the density of the AUC values at chance level. Note that it is centered at 0.5 (random classification), which coincides with the mean AUC in our random realizations. Finally, to compare the performance of our method with other standard methods used in the literature, we used two classification algorithms included in Matlab R2015a: (a) a multiclass model for the Support Vector Machine (SVM) algorithm (Allwein et al., 2000; Fürnkranz, 2002; Escalera et al., 2009) (function "fitcecoc" in Matlab); and (b) a Linear Discriminant Analysis (LDA) (Guo et al., 2007) (function "fitcdiscr" in Matlab, with linear discriminant).
FIGURE 3 | (A) Scores for each subject in each group; the boxplot shows the mean, percentiles, and dispersion of the groups. Note that Group 3 is almost perfectly separated from Groups 1 and 2. (B) ROC curve for the global performance of the algorithm, before applying the ROC stability procedure; the True Positive rate at 10 and 20 percent False Positives is very high. (C) Performance of the ROC curve under the stability procedure. Note the stable ROC estimate for the global classification as well as the marginal estimates for each pair of groups.
For both algorithms, leave-one-out repetitions were carried out by successively leaving one subject out of the classification procedure and then evaluating that subject, as a totally independent test sample, with the obtained classification equation. We thus obtained an unbiased estimate of the ROC curve. The resulting area under the ROC curve for the LDA algorithm was 0.58, and for the SVM procedure 0.65, both significantly smaller than the 0.91 obtained with our methodology.
FIGURE 4 | Left: probability function of the AUC in the range 0 to 1. Right: density function of the AUC values under random classification, centered at 0.5, which coincides with the mean AUC in our random realizations.
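The chance-level control described above amounts to a permutation test. The following is a minimal sketch under the assumption that total_auc_pipeline(X, y) runs the complete selection plus stable-ROC procedure (built, for example, from the earlier sketches) and returns the total AUC; the function names and the fixed seed are ours.

```python
import numpy as np

rng = np.random.default_rng(0)  # assumption: fixed seed

def permutation_null(X, y, total_auc_pipeline, n_iter=1000):
    """Null distribution of the total AUC under randomly permuted group labels."""
    null_aucs = np.array([total_auc_pipeline(X, rng.permutation(y))
                          for _ in range(n_iter)])
    observed = total_auc_pipeline(X, y)
    p_value = (null_aucs >= observed).mean()  # fraction of chance runs >= observed
    return null_aucs, observed, p_value
```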
DISCUSSION
In this paper, we report a method for obtaining stable classifiers from qEEG parameters, with two main objectives: to improve a technique for the identification of stable classifiers, extending it to the prediction of disability severity; and to identify, using this technique, a minimal set of qEEG features (biomarkers) that predicts the degree of severity of LD-NOS.
A) To improve a technique (Wehrens et al., 2011) for the identification of stable classifiers in high-dimensional settings.
Our approach is consistent with best practices in bioinformatics and neuroinformatics, where there are many more variables than subjects. To avoid capitalizing on chance, the neuroimaging community has recommended the use of resampling techniques, as described in the Neuroimage special issue "Individual Subject Prediction" (Arbabshirani et al., 2017; Stephan et al., 2017; Varoquaux et al., 2017). We emphasize that our method adheres to these procedures by using the elastic-net technique, which includes a regularization parameter that shrinks the number of variables participating in the model. This is a common and effective technique for avoiding overfitting, since the model tries to reduce the number of parameters in a natural, data-driven way. Additionally, the estimation of the regularization parameter (lambda) is performed by means of cross-validation.
The stability-based ROC procedure demonstrated high sensitivity of the classification equation for discriminating between the groups, especially Group 3. This result points to different EEG patterns for each group, which may be evidence of different neurological origins of the learning disabilities, although all the children had been classified into one very wide, unspecific group. The current classification of LD-NOS may not be appropriate for the best understanding of the characteristics and needs of these children. Finding specific EEG alterations in each group may lead to a better classification of the children affected by this disorder, which may also be useful for the design and development of subject-tailored rehabilitation methods. B) To identify, using this technique, a stable low-dimensional classifier, based on a minimal set of qEEG features (biomarkers), that predicts the degree of severity of LD-NOS.
We identified a set of 20 qEEG features (see Table 1) that allowed the classification of the groups. Group 3 obtained the lowest scores in the three cognitive areas (see Table 3); their scores were extremely low compared to Groups 1 and 2. Groups 1 and 2 were more balanced; although children in Group 1 performed better in most tests, they were especially good in Reading and Writing accuracy, whereas Group 2 performed significantly better in Writing Narrative Composition.
For the description of the classifiers, we will use the basic EEG rhythms and/or frequency bands.
Theta Band
The Theta band has been divided into Low and High Theta for convenience, due to the topographical distribution of the classifiers. In Low Theta, only the right parietal area (P4) was selected, and Group 1 had the highest values. In High Theta, bilateral frontal areas (Fp1, Fp2, F8, and Fz) and the left parietal area (P3) were involved, where Group 3 had the highest activity. An excess of Theta activity in the resting EEG has been consistently reported in LD-NOS children (Mechelse et al., 1975; Colon et al., 1979; John et al., 1983; Alvarez et al., 1992; Fernández et al., 2002; Jäncke and Alahmadi, 2016). This seems consistent with the fact that Group 3 obtained the lowest scores and should, therefore, be the most affected. Compared with children with good academic achievement, LD-NOS children have shown an excess of Theta activity (from 3.52 to 7.02 Hz) (Fernández et al., 2002). However, some authors who have studied this entity have not reported an excess of Theta activity (Byring et al., 1991; Chabot et al., 2001; Gasser et al., 2003; Fonseca et al., 2006; Thatcher et al., unpublished manuscript), although this may be due to the composition of their samples and to the frequency of different types of pathological patterns in the extensive group of LD-NOS.
It is also interesting to note the presence of frontal and parietal electrodes among the classifiers. Even though the electrodes reflect only electrical activity at the scalp, a common practice is to match this activity with the corresponding brain structures. In that case, we can say that frontal and parietal lobules are involved in attentional processes: (a) the dorsal attention network, in charge of spatial orientation, involves frontal and parietal areas; (b) the ventral attention network, in charge of detecting environmental stimuli, involves the temporoparietal junction and the ventral frontal cortex, mainly in the right hemisphere. This corresponds to the parietal and frontal cores of the orienting function (Petersen and Posner, 2012). Although LD-NOS children do not satisfy the DSM-IV criteria for Attention Deficit Disorder with Hyperactivity (ADDH), they frequently have mild to moderate attentional deficits.
The frontal regions also support executive functions such as inhibition, planning, and working memory. Swanson (1987) proposed that the main deficit of LD children lies in mechanisms of executive functioning, which also points to working memory deficits as essential problems in children and adults with LD (Swanson and Siegel, 2001; Berninger, 2008), specifically in the phonological loop and central executive proposed by Baddeley (Fletcher, 1985; Landerl et al., 2009; Maehler and Schuchardt, 2011; Swanson, 2012; Swanson and Stomel, 2012).
Delta Band
The presence of biomarkers at 0.39 Hz is somewhat unexpected, but it was carefully tested. This very slow frequency is usually associated with ocular movements when it appears at frontal leads, but in this case it appeared at the right central-parietal leads. Steriade and Timofeev (2003) hypothesized that frequencies below 1 Hz are not Delta rhythm proper but slow oscillations which, to some extent, modulate the Delta rhythm.
Alpha Band
The presence of posterior (T6) alpha rhythm has been related to the maturational process (John et al., 1983; Harmony et al., 1995; Riviello et al., 2011) and to the number of correct answers in experimental conditions (Klimesch, 1999). The alpha biomarker in the left central lead (C3) may correspond to the sensorimotor rhythm (SMR). The reinforcement of SMR has been successfully used in neurofeedback for the treatment of epilepsy (Sterman and Egner, 2006) and attention deficit/hyperactivity disorder (ADHD; Monastra et al., 2005). Pineda (2005) found this activity to be related to cognitive performance. Groups 1 and 2 exhibited the highest alpha values, which seems consistent with the hypothesis that their brains are more mature than those of the children in Group 3.
Beta Band
Children in Group 3 have higher Beta power than the other two groups in frontopolar, anterior temporal, and left parietal electrodes. Several studies have found a subgroup of children with the combined type of ADHD who have an EEG profile characterized by an excess of Beta activity (Chabot et al., 2001; Clarke et al., 2001a,b,c). We have already pointed out that, although the LD-NOS children in our study might have attentional problems, they do not meet the criteria for an ADHD diagnosis; therefore, it is possible that children in Group 3 were distinguished by having more attentional problems than those in the other two groups, which may explain the greater difficulties observed in Group 3. On the other hand, the temporal and left parietal regions are involved in language and calculation processes, in which these children performed worse than other children with LD.
In our study, the selected biomarkers agree with previous studies distinguishing LD from normal children (Colon et al., 1979; John et al., 1983; Alvarez et al., 1992; Fernández et al., 2002; Jäncke and Alahmadi, 2016; Žarić et al., 2017). Most of the biomarkers were found in the Theta band, as in previous studies, and biomarkers in the Delta and Alpha bands have also been described as discriminating between these two groups. However, our new approach found that, in each of the groups classified according to these biomarkers, it is also possible to distinguish different behavioral characteristics between groups. These results are extremely valuable: from a practical point of view, the EEG may guide the type of rehabilitative-educational therapy, paying special attention to those with the EEG characteristics of Group 3, since these children showed the worst performance in the neuropsychological tests.
In summary, this is the first report using quantitative EEG to obtain subtypes within the group of children with LD-NOS.
This is all the more relevant because LD-NOS children constitute a very heterogeneous group. For this reason, applying a new EEG analysis procedure to classify them according to their electrophysiological characteristics (which represent the bases of behavior) is a step forward in understanding their differences and exploring specific new therapeutic procedures.
In the future, research using joint EEG-fMRI recordings, in resting state or during task-related paradigms, could provide a more complete validation of the biomarkers found here, taking advantage of the spatial resolution of MRI. This could shed more light on the brain structures related to each subtype.
Note that the indfeat procedure is an optional step inside the classification algorithm, used to eliminate non-informative variables. It can be omitted, or it can be performed just once before the classification. In fact, other dimensionality-reduction algorithms could be used instead. We also tested the algorithm using indfeat just once, outside the main loop, and obtained essentially the same results (not shown but available upon request from the authors).
CONCLUSIONS
We have shown that resampling techniques adequately protect against the curse of dimensionality when constructing classifiers from high-dimensional, small-sample data. We extend the methodology of Wehrens et al. (2011) to the identification of stable classifiers for predicting degree of severity.
We apply this methodology to find an optimal predictor of LD-NOS disability severity based on a reduced set of qEEG variables that may be of use in real-world screening settings.
The selection of a small set of qEEG variables with good predictive properties is of practical importance, since it would allow the design of stripped-down EEG devices that could be cost-effective and deployable in all economic settings.
AUTHOR CONTRIBUTIONS
JB-B participated in methods development and programming, data analysis and processing, manuscript writing, discussions, hypothesis elaboration, and figure creation; LG-G participated in methods development, programming, discussions, manuscript revision, and hypothesis elaboration; TF participated in the experiment design, data collection, manuscript revision, discussions, and hypothesis elaboration; RL participated in methods development, manuscript writing, discussions, and hypothesis elaboration; MR-S participated in the experiment design, data collection, and hypothesis elaboration; JR-G participated in EEG analysis, discussions, and hypothesis elaboration; MB-V introduced new methods for sensitivity and specificity and processed data in R; TH participated in discussions, hypothesis elaboration, and manuscript revision; PV-S participated in the mathematical formulation, programming, discussions of the methodology, methods advice, hypothesis elaboration, and the writing of the article.
FUNDING
This research has been partially supported by Grant PAPIIT IA201417, DGAPA-UNAM, and by Grant CONACYT 251309. | 9,703.4 | 2018-01-15T00:00:00.000 | [ "Computer Science" ] |
Anodal Transcranial Direct Current Stimulation Promotes Frontal Compensatory Mechanisms in Healthy Elderly Subjects
Recent studies have demonstrated that transcranial direct current stimulation (tDCS) is potentially useful for improving working memory. In the present study, young and elderly subjects performed a working memory task (n-back task) during an electroencephalogram recording before and after receiving anodal, cathodal, and sham tDCS over the left dorsolateral prefrontal cortex (DLPFC). We investigated modulations of behavioral performance and electrophysiological correlates of working memory processes (frontal and parietal P300 event-related potentials). A strong tendency toward modulated working memory performance was observed after the application of tDCS. In detail, young, but not elderly, subjects benefited from additional practice in the absence of real tDCS, as indicated by their more accurate responses after sham tDCS. Cathodal tDCS had no effect in either group of participants. Importantly, anodal tDCS improved accuracy in elderly subjects. Moreover, increased accuracy after anodal tDCS was correlated with a larger frontal P300 amplitude. These findings suggest that, in elderly subjects, improved working memory after anodal tDCS applied over the left DLPFC may be related to the promotion of frontal compensatory mechanisms, which are related to attentional processes.
INTRODUCTION
Cognitive aging is characterized by patterns of cognitive decline that are specific to each cognitive function in terms of onset and progression rate (Salthouse, 2009; Park and Bischof, 2013). The aging of society is leading to an increased prevalence of chronic diseases, including those affecting cognition, such as Alzheimer's disease (Sosa-Ortiz et al., 2012). Therefore, the scientific community is currently increasing its effort to diversify pharmacological targets (Cummings et al., 2014) and develop non-pharmacological interventions (Bamidis et al., 2014; Hsu et al., 2015) to treat, prevent, or slow down aging mechanisms that lead to the progression of the cognitive decline characteristic of normal and pathological aging.
Executive control functions decline substantially with physiological aging (Grady, 2012). These functions include a set of cognitive processes, such as working memory, cognitive inhibition, cognitive flexibility, and attentional and inhibitory control, that humans use in daily life to successfully monitor behaviors and implement goal-directed actions (Chan et al., 2008; Diamond, 2013). Working memory, an extensively studied executive control function, includes a set of cognitive processes that allow humans to encode, store, maintain, and manipulate information for a short time period (Baddeley, 2003). These cognitive processes become less efficient with age (Park et al., 2002; Peich et al., 2013; Kirova et al., 2015), and this age-related decline has been associated with altered patterns of brain activity and connectivity during working memory tasks (Cook et al., 2007; Daffner et al., 2011; Sander et al., 2012; Pinal et al., 2015).
A promising tool to slow down cognitive decline is transcranial direct current stimulation (tDCS), which is thought to improve a wide range of cognitive functions by promoting brain plasticity mechanisms (Hsu et al., 2015;Dedoncker et al., 2016;Summers et al., 2016). The tDCS technique consists of applying a constant flow of current between two electrodes at a low intensity (1-2 mA) for about 5-20 min. tDCS modulates cortical excitability by modifying the spontaneous neuronal firing rate (Creutzfeldt et al., 1962). Whereas anodal tDCS increases the spontaneous neuronal firing rate, cathodal tDCS reduces it.
Research focusing on working memory processes has usually applied anodal tDCS over the dorsolateral prefrontal cortex (DLPFC) to improve performance, as the DLPFC is thought to play a crucial role in working memory (Levy and Goldman-Rakic, 2000; Tremblay et al., 2014). A seminal study conducted by Fregni et al. (2005) reported that anodal tDCS over the left DLPFC improved working memory performance in healthy young participants, whereas cathodal tDCS over the left DLPFC and anodal tDCS over the primary motor area did not produce any effect. Afterward, several studies replicated the finding of improved working memory after anodal tDCS over the DLPFC in healthy young subjects (Ohn et al., 2008; Andrews et al., 2011; Keeser et al., 2011; Teo et al., 2011; Lally et al., 2013; Richmond et al., 2014; Carvalho et al., 2015; Au et al., 2016; Talsma et al., 2017) and extended these findings to samples of healthy elderly participants (Berryhill and Jones, 2012; Park et al., 2014; Jones et al., 2015). Nonetheless, some studies reported null effects on cognitive improvement after tDCS applied over the DLPFC (Mylius et al., 2012; Motohashi et al., 2013; De Putter et al., 2015; Sellers et al., 2015).
The inconsistent results outlined in the previous paragraph may be related to methodological and individual differences across studies (Horvath et al., 2014; Fertonani and Miniussi, 2017). In general, meta-analyses of tDCS and working memory have demonstrated that offline tDCS applied to the DLPFC has a moderate impact on working memory functioning in healthy populations (Brunoni and Vanderhasselt, 2014; Hill et al., 2016). This finding is consistent with other meta-analytical studies suggesting that offline stimulation improves cognition more than online stimulation in healthy subjects (Hsu et al., 2015; Dedoncker et al., 2016; Hill et al., 2016). Even so, there is a set of variables that can produce diverse tDCS modulations even when homogeneous samples of subjects are used. For instance, tDCS effects may differ according to individuals' baseline performance (Tseng et al., 2012; Benwell et al., 2015; Hsu et al., 2016) and/or level of practice in a specific task (Dockery et al., 2009). In this regard, one study found that cathodal tDCS improved performance at the initial stages of training in a motor planning task; however, when participants became relatively skilled, anodal tDCS led to additional improvements, whereas cathodal tDCS led to impaired performance (Dockery et al., 2009). These results were attributed to the tDCS effects on the signal-to-noise ratio of the neural populations involved in performing the task, which depends on the ability to execute the task (Miniussi et al., 2013; Fertonani and Miniussi, 2017). Other studies have also demonstrated that anatomical differences in a sample of healthy young participants affected the spread of current and the concomitant behavioral tDCS modulations. In contrast, it has been suggested that studies using multiple tDCS sessions are able to improve cognition more than tDCS studies using a single session (Horvath et al., 2015; Au et al., 2016). Nonetheless, it is still possible that a single tDCS session causes neural modulations that are not strong enough to result in behavioral effects. In fact, studies have frequently reported neural changes related to aging (Vallesi and Stuss, 2010), cognitive decline (Cespón et al., 2015), or cognitive interventions implemented in elderly participants (Tusch et al., 2016) in the absence of behavioral differences.
Despite the growing interest in investigating the capability of tDCS to improve cognitive functions, the neural correlates that underlie the modulated performance are still poorly understood. Event-related potentials (ERP) represent a suitable tool to investigate the neural correlates of the cognitive processes modulated by tDCS, because their high temporal resolution matches the speed of the cognitive processes taking place during the performance of a cognitive-behavioral task.
Electrophysiological studies of working memory have frequently focused on the P300 ERP (Kok, 2001; Watter et al., 2001; Polich, 2007; Daffner et al., 2011). During working memory tasks, the latency of P300 correlates with the speed of context information update (Polich, 2007). The amplitude of the parietal P300 is related to the amount of neural activity allocated to context information update processes, whereas the amplitude of the frontal P300 is related to the allocation of attentional resources to an upcoming stimulus (Fabiani and Friedman, 1995; Friedman et al., 2001; Nieuwenhuis et al., 2005; Polich, 2007; Daffner et al., 2011; Wild-Wall et al., 2011; Saliasi et al., 2013; Tusch et al., 2016). Overall, aging is associated with longer P300 latencies and diminished P300 amplitudes (Polich, 1997; for a review, see Rossini et al., 2007). Nonetheless, consistent with the reported posterior-to-anterior shift of activity with age, many studies have found a diminished parietal P300 amplitude and an increased frontal P300 amplitude related to aging (Friedman et al., 1997; Daffner et al., 2011; Saliasi et al., 2013; van Dinteren et al., 2014), which was interpreted as additional allocation of frontal activity to compensate for age-related decline in the cognitive processing supported by posterior areas (Friedman et al., 1997).
The only study that investigated ERP modulations in young subjects by applying tDCS reported that improved working memory performance in a 2-back task after anodal tDCS was correlated with increased frontal P300 amplitude (Keeser et al., 2011). However, no previous studies have focused on brain activity modulations related to the improved working memory performance in elderly subjects after tDCS.
The aim of the present study was to investigate the capability of tDCS to modulate working memory and underlying neural processes in healthy young and elderly participants, who performed an n-back task during an electroencephalogram (EEG) recording before and after anodal, cathodal, and sham tDCS applied over the left DLPFC. To match the task difficulty, young and elderly participants performed a 3-back and a 2-back task, respectively (for a graphic representation of the experimental session and n-back tasks, see Figure 1).
We hypothesized that the two groups would perform similarly, as elderly subjects performed an easier version of the task. Considering the age-related decline in learning ability (Salthouse, 2009), we expected a greater improvement after sham tDCS in young participants than in elderly participants. In line with most previous studies, we hypothesized that working memory would improve after anodal tDCS in both groups of participants. If this improvement was mediated by the strengthening of attentional mechanisms supported by prefrontal regions, then a larger frontal P300 amplitude would be observed after applying the anodal tDCS. Instead, if this improvement was mediated by more efficient processes related to context information update, then a larger parietal P300 amplitude would be observed after anodal tDCS. Likewise, we were interested in studying whether the possible performance modulations observed after cathodal tDCS were mediated by the modulation of attentional processes and/or processes related to context information update.
Participants
Fourteen healthy young (six females; mean age = 24.8, SD = 3.69) and 14 healthy elderly participants (nine females; mean age = 70.2, SD = 5.12) took part in the present study. All participants were right-handed, as evaluated using the Edinburgh Handedness Inventory test (Oldfield, 1971). They reported no previous history of neurological or psychiatric disorders and had no metal implants. Furthermore, elderly participants undertook a neuropsychological assessment to ensure that their cognitive functioning was within normal parameters. Experimental protocols were performed in accordance with procedures for non-invasive brain stimulation (Woods et al., 2016). The study was performed in accordance with the ethical guidelines outlined in the 1964 Declaration of Helsinki and received prior approval by The Saint John of God Clinical Research Centre Ethical Committee. The experimental procedures were carefully explained to all participants, who volunteered to take part in the study, and written informed consent was obtained from all of them.
Procedures
Participants attended three experimental sessions separated by at least 5 days. Participants performed a working memory task (a verbal n-back task) before and after tDCS. tDCS was delivered by a battery-driven constant current stimulator (BrainStim, EMS) through two rubber electrodes (anode area = 16 cm²; cathode area = 50 cm²). The anode was placed over the scalp overlying the left DLPFC, corresponding to the F3 electrode, and the cathode was placed over the right shoulder. In each experimental session, participants received anodal, cathodal, or sham tDCS. The order of these experimental sessions was counterbalanced across participants. The stimulation ramped up and down for 8 s and remained stable at 1.5 mA for 13 min. In the sham condition, current was delivered for 10 s only at the beginning and at the end of the stimulation block. At the beginning of each experimental session, participants performed a brief practice block. Next, they performed the n-back task during the EEG recording (the structure of the experimental sessions is recapped in Figure 1).
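For orientation, the stated electrode sizes, intensity, and plateau duration imply the following current densities and total charge; this is a simple back-of-the-envelope calculation added here for illustration, not a figure reported by the authors.

```python
# Current density at each electrode for the stimulation parameters above
# (1.5 mA through a 16 cm^2 anode and a 50 cm^2 cathode).
ANODE_AREA_CM2 = 16.0
CATHODE_AREA_CM2 = 50.0
CURRENT_MA = 1.5

anode_density = CURRENT_MA / ANODE_AREA_CM2      # ~0.094 mA/cm^2
cathode_density = CURRENT_MA / CATHODE_AREA_CM2  # ~0.030 mA/cm^2

# Total charge delivered during the 13 min plateau (mA * s -> mC),
# a quantity often discussed in tDCS safety contexts.
charge_mc = CURRENT_MA * 13 * 60  # 1170 mC
print(anode_density, cathode_density, charge_mc)
```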
Task
The n-back task consisted of the presentation of 80 targets and 240 non-targets (i.e., the probability of target appearance was set at 25%) in two separate blocks (40 targets and 120 non-targets per block), each 6 min long. The break between blocks was around 90 s. During the task, the letters A-L randomly appeared in the center of the screen for 500 ms. The letters were presented in white against a black background. The screen remained blank during the inter-stimulus interval, which was jittered between 2000 and 2500 ms. The screen was placed 100 cm in front of the participants, who were instructed to direct their gaze to the center of the screen throughout the task and to respond by pressing the space bar if the stimulus identity matched the stimulus that had been presented two trials before (2-back task, performed by elderly participants) or three trials before (3-back task, performed by young participants). The different versions of the task were created to match the task difficulty level for young and elderly participants. Each participant performed the n-back task six times, that is, twice a session (before and after tDCS) in three tDCS sessions (anodal, cathodal, and sham). To prevent participants from learning the letter sequence, the order of stimulus presentation was pseudorandomized so that the letters appeared in a different order each time the task was performed. Before performing the corresponding n-back task, participants performed a training block that was 3 min long (20 targets and 60 non-targets). Participants proceeded with the experiment only if they reached 60% accuracy in the practice block, and they could repeat the practice block a maximum of three times.
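A minimal sketch of how such a pseudorandomized stream could be generated, assuming only the constraints stated above (letters A-L, fixed target count, no accidental n-back matches on non-target trials); the function and variable names are ours, not the authors' code.

```python
import random

def generate_nback_stream(n_back, n_targets=40, n_nontargets=120,
                          letters="ABCDEFGHIJKL", seed=None):
    """Build a letter stream for one n-back block with a fixed number of
    targets (letter matches the one shown n_back trials earlier)."""
    rng = random.Random(seed)
    total = n_targets + n_nontargets
    # Choose target positions; the first n_back trials can never be targets.
    positions = set(rng.sample(range(n_back, total), n_targets))
    stream = []
    for i in range(total):
        if i in positions:
            stream.append(stream[i - n_back])  # force an n-back match
        else:
            choice = rng.choice(letters)
            # avoid accidental targets on non-target trials
            while i >= n_back and choice == stream[i - n_back]:
                choice = rng.choice(letters)
            stream.append(choice)
    return stream, sorted(positions)

letters_2back, targets = generate_nback_stream(n_back=2, seed=1)
print(len(letters_2back), len(targets))  # 160 trials, 40 targets (25%)
```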
EEG Recordings
EEG was recorded using 31 electrodes (Easycap, GmbH, Brain Products) in accordance with the 10-10 International System; these electrodes included Fp1, Fp2, AF7, AF8, F7, F3, Fz, F4, F8, FC5, C1, FC2, FC6, T7, C3, Cz, C4, T8, CP5, CP1, CP2, CP6, P7, P3, Pz, P4, P8, PO7, PO8, O1, and O2. The ground electrode was placed on Fpz. The right mastoid was used as online reference for all electrodes, whereas the left mastoid (offline reference) was used to re-reference the activity to the average of the left and right mastoids. The EEG signal was acquired with a 0.1-1000 Hz bandpass filter and digitized at a sampling rate of 5000 Hz (down-sampled to 1000 Hz before ERP pre-processing). Vertical and horizontal eye movements were recorded by two electrodes located above and beneath the right eye and two electrodes located lateral to the external canthi of each eye. Impedance was maintained below 5 kΩ. After signal storage, ocular artifacts were corrected using independent component analysis. The signal was filtered with a 0.1-80 Hz digital bandpass and a 50 Hz notch filter. Epochs exceeding ±100 µV were automatically rejected. All remaining epochs were individually inspected to identify those still displaying artifacts, which were also eliminated from subsequent averaging. Epochs were then corrected to the mean voltage of the 200 ms pre-stimulus recording period (baseline).

[Figure 1 | Structure of the experimental sessions (top panel): participants performed three sessions (sham, cathodal, and anodal tDCS) separated by a minimum of 5 days, in counterbalanced order. Bottom panel: the n-back tasks performed by young (3-back) and elderly (2-back) participants; the target letter (25% of trials) is represented within gray squares, and participants responded to it by pressing the space bar. Letters appeared at the center of the screen for 500 ms in white against a black background; during the inter-stimulus interval (jittered at 2000-2500 ms), the screen remained blank.]
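The amplitude-based epoch rejection and baseline correction described in the paragraph above can be illustrated with plain NumPy arrays; the array layout and function name below are assumptions of this sketch, not the toolbox actually used in the study.

```python
import numpy as np

def preprocess_epochs(epochs, times, reject_uv=100.0, baseline=(-0.2, 0.0)):
    """epochs: array (n_epochs, n_channels, n_times) in microvolts;
    times: vector in seconds aligned with the last axis.
    Drops epochs exceeding +/- reject_uv and subtracts the mean of the
    pre-stimulus baseline window, mirroring the steps described above."""
    keep = np.all(np.abs(epochs) <= reject_uv, axis=(1, 2))
    epochs = epochs[keep]
    b = (times >= baseline[0]) & (times < baseline[1])
    epochs = epochs - epochs[:, :, b].mean(axis=2, keepdims=True)
    return epochs, keep

# Example: 50 epochs, 31 channels, -200..800 ms at 1000 Hz
times = np.arange(-0.2, 0.8, 0.001)
data = np.random.randn(50, 31, times.size) * 20.0
clean, kept = preprocess_epochs(data, times)
print(clean.shape, int(kept.sum()))
```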
Data Analysis
Performance was evaluated by considering the reaction time (RT) and accuracy. Accuracy was calculated taking into account correct responses and missed responses to the target stimulus as well as erroneous responses to the non-target stimulus (false alarms). This was done using the d-prime index (d′), which was calculated as follows: d′ = Z(hit rate) − Z(false alarm rate), where Z represents hit and false alarm rates transformed into z-scores using the standard normal probability distribution. A higher d′ indicates better performance. That is, the d′ value can be increased by increasing hits to the target stimulus (i.e., accuracy) and/or correct rejections of the non-target stimulus, as well as by minimizing missed responses to the target stimulus or erroneous responses to the non-target stimulus (i.e., false alarms).
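The d′ computation can be reproduced in a few lines; the sketch below uses SciPy's inverse normal CDF, and the log-linear correction for extreme hit/false-alarm rates is our addition for numerical robustness, not a detail reported in the paper.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = Z(hit rate) - Z(false-alarm rate). Adding 0.5 to each cell
    (log-linear correction, assumed here) keeps rates away from 0 and 1,
    which would otherwise give infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example with the task's trial counts (80 targets, 240 non-targets):
print(round(d_prime(hits=68, misses=12, false_alarms=15,
                    correct_rejections=225), 2))  # ~2.54
```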
For electrophysiological analyses, ERPs were calculated for the correct responses. The epochs were established between −200 and 800 ms relative to the onset of the target stimulus. The P300 ERP was analyzed using the mean amplitude in time windows of 100 ms, ranging from 350 to 550 ms (i.e., 350-450 ms and 450-550 ms), based on visual inspection of the grand averages. Analyses were conducted within four regions of interest (ROIs), which include the stimulated area (i.e., the frontal left region), the homologous area (frontal right), and the parietal left and right areas, in which P300 typically achieves maximum amplitudes. The mentioned ROIs were calculated by pooling the following electrodes: frontal left (F3, F7, AF7, FC5), frontal right (F4, F8, AF8, FC6), parietal left (P3, P7, PO7, CP5), and parietal right (P4, P8, PO8, CP6). To understand the functional meaning of the observed ERP modulations, correlation analyses were conducted between P300 changes (i.e., "P300 amplitude after tDCS − P300 amplitude before tDCS") and d′ changes (i.e., "d′ after tDCS − d′ before tDCS") for each ROI and experimental condition.
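A possible implementation of the ROI pooling and windowed mean-amplitude measurement, assuming the averaged ERP is available as a channels × time array; the data layout and function name are assumptions of this sketch.

```python
import numpy as np

ROIS = {
    "frontal_left":   ["F3", "F7", "AF7", "FC5"],
    "frontal_right":  ["F4", "F8", "AF8", "FC6"],
    "parietal_left":  ["P3", "P7", "PO7", "CP5"],
    "parietal_right": ["P4", "P8", "PO8", "CP6"],
}
WINDOWS_MS = [(350, 450), (450, 550)]

def roi_mean_amplitudes(erp, ch_names, times_ms):
    """erp: averaged waveform, array (n_channels, n_times);
    ch_names: labels matching axis 0; times_ms: time vector in ms.
    Pools the ROI electrodes and averages within each 100 ms window."""
    out = {}
    for roi, chans in ROIS.items():
        idx = [ch_names.index(c) for c in chans]
        pooled = erp[idx].mean(axis=0)
        for lo, hi in WINDOWS_MS:
            mask = (times_ms >= lo) & (times_ms < hi)
            out[(roi, (lo, hi))] = pooled[mask].mean()
    return out

times = np.arange(-200, 800)  # 1 kHz sampling
names = ["F3", "F7", "AF7", "FC5", "F4", "F8", "AF8", "FC6",
         "P3", "P7", "PO7", "CP5", "P4", "P8", "PO8", "CP6"]
vals = roi_mean_amplitudes(np.random.randn(16, times.size), names, times)
print(vals[("frontal_left", (350, 450))])
```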
Statistical Analysis
To evaluate whether tDCS modulated behavioral performance, the corresponding repeated-measures ANOVAs for RTs and d′ values were carried out with a between-subject factor, Group (two levels: Young and Elderly), and two within-subject factors, Type of Stimulation (three levels: Anodal, Cathodal, and Sham) and Time (two levels: before tDCS and after tDCS).
For the ERP data, P300 was analyzed using the corresponding repeated-measures ANOVA with a between-subject factor, Group (two levels: Young and Elderly), and two within-subject factors, Stimulation (three levels: Anodal, Cathodal, and Sham) and Time (two levels: before tDCS and after tDCS), for each studied time window (i.e., 350-450 ms and 450-550 ms) within the corresponding ROIs (i.e., frontal left, frontal right, parietal left, and parietal right). Pearson's correlation analyses were carried out to analyze the correlation between the magnitude of change in the d′ value and the magnitude of change in the P300 amplitude after the different tDCS conditions (i.e., anodal, cathodal, and sham).
The Greenhouse-Geisser correction for degrees of freedom was applied when the condition of sphericity was not met; in these cases, the corrected degrees of freedom are provided. For significant results, effect sizes are provided by reporting the partial eta squared (η²p) index. When the ANOVAs revealed significant effects of the main factors and/or their interactions, post hoc comparisons were performed by applying the Bonferroni correction.
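For illustration, the two quantities mentioned above can be computed as follows; the sums of squares in the example are placeholders, not values from the study.

```python
def partial_eta_squared(ss_effect, ss_error):
    """Partial eta squared, as reported for significant ANOVA effects:
    eta_p^2 = SS_effect / (SS_effect + SS_error)."""
    return ss_effect / (ss_effect + ss_error)

# Bonferroni-corrected alpha for post hoc pairwise comparisons among
# the three stimulation conditions (3 pairwise tests):
alpha_corrected = 0.05 / 3

# Placeholder sums of squares, purely illustrative:
print(round(partial_eta_squared(12.4, 48.1), 3), round(alpha_corrected, 4))
```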
Pearson correlation coefficients between enhanced d′ values and increased P300 amplitude after tDCS were significant at the 350-450 ms time window within the left and right frontal regions when anodal tDCS was applied (see Figure 4). In detail, significant correlations were observed between enhanced d′ and increased P300 amplitude after anodal tDCS within the left frontal region (rxy = 0.45, p = 0.016) and within the right frontal region (rxy = 0.47, p = 0.012). No significant correlations were observed between d′ and P300 changes for the 450-550 ms time window.

[Figure 2 | Event-related potentials before and after tDCS in healthy young participants. Each waveform results from averaging the four electrodes of the respective region of interest: frontal left (F3, F7, AF7, FC5), frontal right (F4, F8, AF8, FC6), parietal left (P3, P7, PO7, CP5), and parietal right (P4, P8, PO8, CP6). Current density maps (350-550 ms) are shown for the three experimental conditions before and after tDCS and reveal a parietal P300 distribution in young subjects.]
DISCUSSION
The present study investigated whether and how anodal and cathodal tDCS delivered over the left DLPFC modulated performance and the underlying neural activity in young and elderly participants in a working memory task. In the absence of stimulation, young subjects benefited from additional practice in the task, as indicated by improved performance after the sham tDCS. Anodal tDCS induced a working memory improvement in elderly subjects. However, in young subjects, anodal tDCS impeded the spontaneous learning observed in the sham session. No effects were promoted by cathodal tDCS. Anodal tDCS induced a larger frontal P300 component in elderly subjects, which correlated with behavioral (d′) improvements. Additionally, the parietal P300 was increased after tDCS, but no interactions were observed between a larger parietal P300 and a specific group or experimental condition.
The main results of the study are graphically summarized in Figure 5. Accuracy, measured using the d′ index, was higher among elderly than among young participants, possibly because elderly subjects performed an easier task (2-back) than did young subjects (3-back). However, the slower RTs observed in elderly than in young subjects might suggest a trade-off between speed and accuracy among the elderly, which may also explain the greater accuracy observed in this group. Nevertheless, previous ERP studies demonstrated that the age-related slowing in motor execution processes contributes to the slower RTs observed in elderly compared with young subjects even if, as in the present study, speed and accuracy are similarly required of both samples of participants (Kolev et al., 2006; Roggeveen et al., 2007; Cespón et al., 2013).
The behavioral results showed a learning effect related to practice in young but not in elderly subjects, as demonstrated by a higher d′ after the sham tDCS in the former group. This finding may be related to a greater learning ability of young compared with elderly subjects during the performance of the n-back (Salminen et al., 2016) and other cognitive-behavioral tasks (King et al., 2013). Alternatively, these results might suggest the existence of a ceiling effect in elderly subjects, which would prevent a subsequent improvement. However, this possibility should be excluded because an improvement was observed in elderly subjects after the anodal tDCS was applied. In fact, anodal tDCS had opposite effects for young and elderly subjects; anodal tDCS improved the performance of elderly but hindered that of young subjects (who already exhibited improvement without stimulation). As suggested by Bortoletto et al. (2015), it is possible that increased neural excitability related to anodal tDCS disrupted the optimal neural state and impeded the practice-related improvement observed after the sham tDCS. In contrast, cathodal tDCS did not have a behavioral effect in any group. This result suggests that cathodal tDCS did not modulate the neural activity patterns underlying the task performance.

[Figure 3 | Event-related potentials before and after tDCS in healthy elderly participants. As for young subjects, each waveform results from averaging the four electrodes of the respective region of interest. The amplitude of P300, which is related to working memory processes, increased in elderly participants after anodal tDCS in the left frontal region at the 350-550 ms time window (dotted red line). Current density maps (350-550 ms) are shown for the three experimental conditions before and after tDCS and reveal a frontal and parietal P300 distribution in elderly subjects.]

[Figure 5 | Summary of the main behavioral (top chart) and electrophysiological (bottom chart) results of the present study (*p < 0.1, **p < 0.05).]
The behavioral results discussed in the previous paragraph deserve additional consideration, as the statistics showed only a tendency (p = 0.06) for such differences. These results could be related to the small sample size used in the present study. Moreover, they probably also reflect the high inter-individual variability in response to tDCS, as noted by previous studies (Horvath et al., 2014). In fact, a recent meta-analysis reported that offline tDCS applied over the left DLPFC showed nonsignificant but strong tendencies toward improved performance in healthy subjects (Hill et al., 2016). Thus, the present results are in line with previous studies. Moreover, these findings warrant further research to identify the individual factors contributing to this variability and encourage investigation of the neural correlates of the tDCS modulations.
The main goal of the present study was to investigate the neural processes modulated by tDCS and the neural correlates of the possible behavioral modulations. The electrophysiological results revealed that anodal tDCS increased the left frontal P300 amplitude in elderly participants between 350 and 550 ms. Thus, a larger P300 amplitude can be related to enhanced performance after anodal tDCS, which was also supported by analyses of correlations between the increased P300 amplitude (in the left and right frontal regions between 350 and 450 ms) and the improved d′ index after anodal tDCS. These results are consistent with previous investigations that focused on P300 ERP modulations after other types of interventions applied with the aim of improving cognition. A previous study reported a greater P300 amplitude after 5 weeks of cognitive training in working memory tasks (Tusch et al., 2016). Other studies related larger P300 amplitudes after cognitive training (O'Brien et al., 2013) and physical exercise (Kamijo et al., 2009) to increased attentional deployment and cognitive control, respectively.
The correlations between enhanced performance and increased P300 amplitude after anodal tDCS were conducted by including all participants that took part in the study (i.e., elderly and young). Thus, increased frontal activity after tDCS was related to improved performance also in young participants. The correlations between improved working memory and a larger frontal P300 amplitude in young participants were consistent with a previous study (Keeser et al., 2011) in which participants did exhibit a net improvement; however, the results of that study should be interpreted with caution, as it involved a sample of 10 participants performing a 2-back task. In the present study, increased frontal P300 led to increased d′ in one subsample of young subjects, whereas decreased frontal P300 led to decreased d′ in another subsample, which explains the absence of a net improvement after anodal tDCS in the young group. In contrast, most elderly participants exhibited increased P300 amplitude after anodal tDCS, which led to a net improvement after anodal tDCS in the elderly group. On the other hand, parietal P300 increased after all tDCS conditions between 350 and 450 ms, suggesting reduced difficulty in executing operations related to context information update after practicing the task (Polich, 2007). Moreover, the parietal P300 was larger in young than in elderly subjects (350-450 ms), whereas the frontal P300 was larger in elderly than in young subjects (mainly in the anodal tDCS condition; see also the topographic maps, Figures 3, 4). These results are consistent with the reported P300 topographical changes related to aging (Friedman et al., 1997; Daffner et al., 2011; Saliasi et al., 2013; van Dinteren et al., 2014).
The frontal P300, whose increased amplitude correlated with improved performance after anodal tDCS, was related to the allocation of attentional resources to the upcoming stimulus, whereas the parietal P300 was related to context information update (Fabiani and Friedman, 1995; Friedman et al., 2001; Nieuwenhuis et al., 2005; Polich, 2007; Daffner et al., 2011; Wild-Wall et al., 2011; Saliasi et al., 2013; Tusch et al., 2016). Thus, these results indicate that increased working memory performance in elderly participants after anodal tDCS is related to enhanced attentional processes but not to improved efficiency in mental operations related to context information update. This finding aligns with previous studies reporting that encoding processes also depend on attentional capacity (Emrich and Ferber, 2012; Mazyar et al., 2012), and with studies that related the age-related decline in attentional capacity to greater susceptibility to interfering stimuli in working memory tasks (Schneider-Garces et al., 2010). Moreover, the correlations between improved working memory and enhanced bilateral frontal activity may be related to a previous behavioral study, which reported that left and right anodal tDCS equally improved working memory (Jones et al., 2015). These authors hypothesized that increased frontal activity mediates modulations of frontostriatal connectivity, which leads to improved working memory. In line with this hypothesis, other studies reported increased striatal dopaminergic release after cognitive training (Backman et al., 2011; Kühn et al., 2011; Backman and Nyberg, 2013). Additionally, striatal modulations were related to transfer effects from cognitive training to untrained n-back tasks (Dahlin et al., 2008; Salminen et al., 2016).
The relationship between increased frontal activity and increased performance observed in the present study is consistent with the compensation-related utilization of neural circuits hypothesis (CRUNCH; Reuter-Lorenz and Cappell, 2008; Schneider-Garces et al., 2010; see also Cabeza et al., 2002; Davis et al., 2008; Daffner et al., 2011). This hypothesis predicts an inverted U-shaped relationship between task difficulty and allocation of neural resources, such that neural resources increase at higher task difficulty to maintain good performance. However, after a critical point is reached, which happens at lower difficulty levels in elderly than in young participants, additional increases in task difficulty are accompanied by a reduction in neural resources and impaired behavioral performance (Mattay et al., 2006; Wild-Wall et al., 2011). Considering that the tasks performed in the present study were highly demanding, it is possible that elderly participants were on the descending slope of the inverted U-shaped curve hypothesized by the CRUNCH. Thus, the anodal tDCS favored "going backward" on the inverted-U curve hypothesized by this model, which would lead to increased brain activity and improved performance. Interestingly, other studies reporting heterogeneous results could fit within this model. For instance, Saliasi et al. (2013) reported correlations between higher frontal activation and worse performance in elderly subjects. Considering that a high allocation of neural activity to perform easy tasks was related to low brain resource levels (Reuter-Lorenz and Cappell, 2008; Schneider-Garces et al., 2010), the results of Saliasi et al. (2013) may be explained by the easy versions of the task that were used (i.e., 0-back and 1-back tasks). In contrast, other studies reported reduced neural activity in highly demanding working memory tasks after cognitive training (Brehmer et al., 2011; Vermeij et al., 2017). In this case, the high number of cognitive training sessions implemented by these studies probably allowed a reduction in the subjective difficulty level even on highly demanding tasks.
A noteworthy limitation of the present study is the absence of an experimental condition to demonstrate that the observed effects are site specific, as suggested by recent reviews about non-invasive brain stimulation (Rossini et al., 2015). If anodal tDCS over a brain region not involved in the task (e.g., the vertex) failed to promote an increase in frontal activity, then we could have confirmed that the increased frontal activity after anodal tDCS applied over the DLPFC is mediated by specific modulations of neural processes involved in task performance. However, if anodal tDCS over a brain region not involved in the task also increases frontal activity, then we cannot exclude a non-specific increase in arousal levels as the mechanism responsible for the observed frontal activity enhancement. Future studies should explore these alternative possibilities to further clarify the neural mechanisms underlying working memory improvement. Finally, another limitation of the present study is the small sample size, which might explain the weak tDCS effects observed on the behavioral data. Future studies should consider increasing the sample size, which would also be useful to study the high inter-individual variability of the tDCS effects by dividing the samples into high and low performers, in line with recent studies about inter-individual variability of the tDCS effects (Tseng et al., 2012; Benwell et al., 2015; Hsu et al., 2016).
In summary, anodal tDCS applied over the left DLPFC increased the left frontal P300 amplitude in elderly participants. This increase was related to a tendency toward improved working memory, as supported by a correlation analysis. Considering that the frontal P300 amplitude is related to attentional processes, the results of the present study suggest that anodal tDCS can improve working memory by strengthening attentional processes. In contrast, anodal tDCS did not modulate the amplitude of the parietal P300, which is typically related to context update processes. In general, the present study suggests that anodal tDCS may have the capability to enhance working memory performance in healthy elderly subjects by promoting frontal compensatory mechanisms related to attentional processes.
AUTHOR CONTRIBUTIONS
JC designed and programmed the experimental task and procedures, collected and analyzed the data, interpreted the results, and wrote the manuscript. CR programmed the experimental task and procedures, collected and analyzed the data, and interpreted the results. PR interpreted the results and critically reviewed the manuscript. CM designed the experimental procedures and critically reviewed the manuscript. MP designed the experimental procedures, collected and analyzed the data, interpreted the results, and wrote the manuscript.
FUNDING
This study was funded by the Italian Ministry of Health GR-2011-02349998, European Commission Marie-Skłodowska Curie Actions, Individual Fellowships (655423-NIBSAD), and the Galician government (Postdoctoral Grants Plan I2C 2011).
"Psychology",
"Biology"
] |
Coculture engineering for efficient production of vanillyl alcohol in Escherichia coli
Vanillyl alcohol is a precursor of vanillin, which is one of the most widely used flavor compounds. Currently, vanillyl alcohol biosynthesis still suffers from low efficiency. In this study, coculture engineering was adopted to improve the production efficiency of vanillyl alcohol in E. coli. First, two pathways for biosynthesis of the immediate precursor 3,4-dihydroxybenzyl alcohol were compared in monocultures, and the 3-dehydroshikimate-derived pathway showed higher efficiency than the 4-hydroxybenzoate-derived pathway. To enhance the efficiency of the last methylation step, two strategies were tested: strengthening S-adenosylmethionine (SAM) regeneration showed a positive effect, while strengthening SAM biosynthesis showed a negative effect. Then, the optimized pathway was assembled in a single cell. However, the biosynthetic efficiency was still low and was not significantly improved by modular optimization of pathway genes. Thus, a coculture engineering strategy was adopted. At the optimal inoculation ratio, the titer reached 328.9 mg/L. Further, the gene aroE was knocked out to reduce cell growth and improve 3,4-DHBA biosynthesis in the upstream strain. As a result, the titer was improved to 559.4 mg/L in shake flasks and to 3.89 g/L in fed-batch fermentation. These are the highest reported titers of vanillyl alcohol so far. This work provides an effective strategy for sustainable production of vanillyl alcohol.
INTRODUCTION
Vanillin is one of the most important aromatic flavor compounds, widely used in foods, beverages, perfumes, cosmetics and pharmaceuticals. Its annual global market reaches twenty thousand tons. Natural vanillin is extracted from Vanilla pods; the price is between $1200 and $4000 per kilogram, and the output can fulfill less than 1% of the market demand. Currently, vanillin is mainly produced by chemical synthesis from guaiacol and lignin, and the average price is less than $15 per kilogram (Banerjee and Chattopadhyay 2019; Martȃu et al. 2021). However, this method has the problem of environmental pollution. The increasing health- and nutrition-consciousness of customers has led to a growing interest in producing natural vanillin by biotechnology-based approaches.
Bioconversion studies have been conducted for vanillin production using ferulic acid, eugenol and isoeugenol as the major substrates (Kaur and Chakraborty 2013; Ma et al. 2022; Yamada et al. 2008), among which the ferulic acid-based route shows the highest efficiency. For example, a vanillin-tolerant bacterium, Amycolatopsis sp. ATCC 39116, was engineered by disrupting the vanillin dehydrogenase gene to prevent degradation of the product.
Besides bioconversion, biosynthesis of vanillin or its related compounds (vanillate and vanillyl alcohol) from cheap carbon sources such as glucose has also been investigated. In 1998, a vanillate biosynthetic pathway was constructed in E. coli by heterologous expression of a 3-dehydroshikimate (3-DHS) dehydratase and a catechol-O-methyltransferase. The vanillate produced was further converted to vanillin by a carboxylic acid reductase (CAR) in vitro (Li and Frost 1998). De novo production of vanillin was first achieved in both S. cerevisiae and Schizosaccharomyces pombe by co-expressing three similar enzymes, producing 65 and 45 mg/L of vanillin from glucose, respectively (Hansen et al. 2009). E. coli contains many endogenous enzymes with aldehyde reductase activity, which leads to formation of vanillyl alcohol instead of vanillin in vivo. To solve this problem, a recombinant E. coli with reduced aromatic aldehyde reduction (RARE) was engineered by inactivating three aldo-keto reductases and three alcohol dehydrogenases, leading to a 55-fold increase in vanillin titer to 119 mg/L (Kunjapur et al. 2014). In addition, the reaction catalyzed by the O-methyltransferase was verified to be rate-limiting and was strengthened by enhancing either the supply or the regeneration of the methyl donor S-adenosylmethionine (SAM). As a result, the titer of vanillate reached 272 mg/L in shake flask experiments. Previously, our group designed a novel vanillyl alcohol biosynthetic pathway consisting of three heterologous enzymes: 4-hydroxybenzoate (4-HBA) hydroxylase (PobA), CAR and caffeate O-methyltransferase (COMT). The engineered E. coli carrying this pathway produced 240.69 mg/L of vanillyl alcohol with the accumulation of 282.53 mg/L of 3,4-dihydroxybenzyl alcohol (3,4-DHBA) (Chen et al. 2017). As indicated above, the biosynthetic efficiency of vanillin/vanillate/vanillyl alcohol still requires significant improvement for practical application.
When an entire pathway is accommodated in one cell, it may cause a heavy metabolic burden, and balancing gene expression and cofactor/co-substrate supply is laborious and sometimes difficult. Thus, modular coculture engineering has been developed, which divides the full pathway into modules and distributes them into two or more cells. The strategy has been applied to the production of various chemicals such as apigenin, rosmarinic acid, salidroside and phenol (Thuan et al. 2018; Li et al. 2019b, 2022; Liu et al. 2018; Guo et al. 2019).
In this study, we optimized vanillyl alcohol production in the monoculture by balancing expression levels of different modules and introducing the SAM regeneration cycle. However, the titer was still low with the accumulation of large amount of the intermediates. To solve this problem, E. coli cocultures were designed (Fig. 1), and the optimized system produced (559.4 ± 14.7) mg/L and (3890.2 ± 87.9) mg/L of vanillyl alcohol in shake flasks and bioreactor, respectively, which are the highest reported so far. This work demonstrates the potential of coculture engineering for efficient production of vanillin and its derivatives.
Strains and plasmids
Strains and plasmids used in this study are listed in Table 1. E. coli strain DH5α was used as the host strain for plasmid construction. E. coli strain BW25113 and its derivatives were used for the feeding experiments and de novo biosynthesis. Plasmids pZE12-luc (high copy number), pCS27 (medium copy number), and pSA74 (low copy number) were used for constructing recombinant plasmids by enzyme digestion and ligation (Chen et al. 2017). Gene knockout was carried out using λ-Red recombination following the standard protocols (Datsenko and Wanner 2000).
Cultivation conditions
LB medium containing 10 g tryptone, 5 g yeast extract and 10 g NaCl per liter was used for seed culture preparation. M9 medium containing 15 g carbon source (glucose and/or glycerol), 6.78 g Na2HPO4, 0.5 g NaCl, 3 g KH2PO4, 1 g NH4Cl, 1 mM MgSO4, 0.1 mM CaCl2 and 2 g yeast extract per liter was used for shake flask experiments.
In shake flask experiments, single fresh colonies were picked into 4 mL LB medium and incubated at 37°C, and 1 mL of overnight seed culture was transferred into 50 mL M9 medium and cultured at 30°C and 200 rpm for 2 h. Then, protein expression was induced with 0.5 mM isopropyl-β-D-thiogalactopyranoside (IPTG). Feeding experiments were carried out to compare the efficiency of the different SAM-enhancing strategies. For this, the cell cultures of strains YMC04, YMC05 and YMC06 were induced with IPTG, supplemented with 1 g/L 3,4-DHBA at 2 h after inoculation, and cultured further at 30°C and 200 rpm. When necessary, ampicillin, kanamycin and chloramphenicol were added to the media at final concentrations of 100, 50 and 34 µg/mL, respectively.
In bioreactor experiments, the initial medium contained 15 g glucose, 5 g glycerol, 4 g (NH4)2SO4, 1.5 g KH2PO4, 3 g Na2HPO4, 4 g yeast extract and 30 mg VB1 per liter. The feeding solution contained 500 g/L glucose and 500 g/L glycerol. At the beginning, the seed cultures of strains YMC13 and YMC14 (50 mL each) were inoculated into the 3 L bioreactors (1 L working volume) and cultured at 30°C. When the cell density at 600 nm (OD600) reached about 10, 0.5 mM IPTG was added to induce protein expression. During the whole culture process, the concentration of glucose was controlled under 5 g/L, the dissolved oxygen (DO) was controlled at 30% from 0 to 24 h and 8% after 24 h, and the pH was kept at 7.0.
For both the shake flask and the bioreactor experiments, samples were taken at regular time intervals for analysis of cell growth and production. Cell growth was monitored by measuring the OD600. The concentrations of 3,4-DHBA, protocatechuic acid (PCA), and vanillyl alcohol were analyzed by HPLC.
HPLC analysis
The standards of 3,4-DHBA, protocatechuic acid (PCA), and vanillyl alcohol were purchased from Macklin (Shanghai Macklin Biochemical Co., Ltd). An HPLC system (HITACHI) equipped with a reverse-phase Diamonsil C18 column (5 µm, 250 mm × 4.6 mm) and a UV-vis detector was used for analysis of PCA, 3,4-DHBA and vanillyl alcohol. The mobile phase consisted of solvent A (water with 0.1% formic acid) and solvent B (100% methanol) at a flow rate of 0.8 mL/min. The column temperature was set at 35°C. The following gradient was used: 5% to 50% solvent B for 20 min, 100% solvent B for 2 min, 100% to 5% solvent B for 2 min and 5% solvent B for an additional 5 min. Quantification was based on the peak areas at specific wavelengths (254 nm for PCA, 280 nm for 3,4-DHBA and vanillyl alcohol).

Comparing two pathways for 3,4-DHBA biosynthesis

Vanillyl alcohol can be produced either via the 3-DHS pathway or the 4-HBA pathway (Fig. 2A). As the last step catalyzed by COMT is rate-limiting, we first compared the efficiency of the two pathways by producing the penultimate compound 3,4-DHBA. For the 3-DHS pathway, plasmid pZE-AQ was constructed, expressing E. coli feedback-resistant aroG* and the P. putida 3-DHS dehydratase gene quiC. AroG* catalyzes the committed step of the shikimate pathway, while QuiC converts 3-DHS to PCA. The previously constructed plasmid pCS-CS produces CAR and its activator SFP, catalyzing reduction of PCA to the aldehyde, which can be reduced to 3,4-DHBA by endogenous reductases. Plasmids pZE-AQ and pCS-CS were co-transferred into strain BW25113ΔpykAΔpykF, generating strain YMC01. Strain BW25113ΔpykAΔpykF was used instead of the wild-type BW25113 because genes pykA and pykF encode pyruvate kinases that convert PEP to pyruvate, and their deletion conserves PEP and increases the flux to the shikimate pathway. In a shake flask experiment with glycerol as the carbon source, strain YMC01 produced (790.0 ± 49.9) mg/L 3,4-DHBA and (366.7 ± 73.7) mg/L PCA (Fig. 2B). Similarly, for the 4-HBA pathway, plasmid pZE-AUP was constructed, expressing E. coli aroG* and ubiC, and P. aeruginosa pobA. Plasmids pZE-AUP and pCS-CS were co-transferred into strain BW25113ΔpykAΔpykF, generating strain YMC02. Strain YMC02 produced (548.8 ± 12.9) mg/L 3,4-DHBA and (32.0 ± 4.1) mg/L PCA at 96 h (Fig. 2B). The two strains showed similar growth profiles, and cell densities reached their maximum at 48 h (Fig. 2C). The results showed that the 3-DHS pathway has higher efficiency than the 4-HBA pathway. Thus, the former was used in the following study for vanillyl alcohol production.
Improving 3,4-DHBA methylation by strengthening SAM regeneration cycle
After optimizing 3,4-DHBA production from glycerol, we next tested the conversion capability of COMT by a feeding experiment. The wild-type strain BW25113 was co-transformed with plasmids pZE-COMT and pCS27 (empty vector), generating strain YMC03. When fed with 1 g/L 3,4-DHBA, strain YMC03 produced (305.7 ± 8.2) mg/L vanillyl alcohol at 16 h. SAM is a co-substrate of COMT, and its supply is essential for the conversion efficiency. In a previous study of vanillate biosynthesis, two strategies were adopted (Kunjapur et al. 2016). One is to strengthen SAM regeneration by expressing luxS and mtn (LM module), and the other is to enhance its biosynthesis by expressing two key genes, metA* and cysE* (MC module).
To test the effectiveness of these two strategies, plasmids pCS-LM, pCS-MC and pCS-LM-MC were constructed and co-transferred with plasmid pZE-COMT into BW25113, respectively, generating strains YMC04, YMC05 and YMC06. In the feeding experiment, strain YMC04 produced (484.6 ± 8.3) mg/L vanillyl alcohol at 16 h, which is 58.5% higher than that of YMC03. However, strain YMC05 produced significantly less vanillyl alcohol ((122.5 ± 7.3) mg/L). Further introducing the LM module (strain YMC06) recovered vanillyl alcohol production to (309.5 ± 1.4) mg/L (Fig. 3A). As shown in Fig. 3B, overexpressing metA* and cysE* drastically reduced cell growth of strains YMC05 and YMC06, which may be attributed to the accumulation of toxic S-adenosylhomocysteine (SAH) (Roe et al. 2002). The results showed that enhancement of SAM regeneration rather than SAM biosynthesis is an effective way to increase methylation efficiency, and the ineffectiveness of the latter strategy is mainly due to its negative effect on cell growth.
Optimizing vanillyl alcohol biosynthesis in monoculture
To achieve de novo biosynthesis of vanillyl alcohol, we assembled the full pathway in one strain. For this purpose, plasmids pZE-AQ-C and pCS-CS-LM were constructed and co-transferred into strain Lab02, producing strain YMC07. Notably, Lab02 is a derivative of Lab01 and is unable to use glucose as the carbon source. In a shake flask experiment with glycerol as the carbon source, strain YMC07 produced (189.0 ± 4.2) mg/L vanillyl alcohol with the accumulation of (543.4 ± 14.1) mg/L PCA and (371.7 ± 35.6) mg/L 3,4-DHBA (Fig. 4A). The accumulation of large amounts of the intermediates indicates that the efficiency of the last two enzymatic steps is insufficient. To solve this problem, modular optimization was conducted by adjusting plasmid copy number and promoter strength. As PCA was accumulated in large amounts, the AQ module was moved from the high-copy (H) to the medium-copy (M) or the low-copy (L) plasmids. Similarly, the CS and C modules were moved to the high-copy plasmid, and two strong constitutive promoters (Plpp0.5 and Plpp2.0) were used to control expression of the COMT gene. As a result, four strains (YMC08, YMC09, YMC10 and YMC11) were constructed. However, modular optimization led to only modest improvement in vanillyl alcohol production, with the best titer of (212.6 ± 18.4) mg/L by strain YMC10 (Fig. 4B, C).
Production of 1 mol 3,4-DHBA from PCA consumes 2 mol NADPH and 1 mol ATP. Moreover, methylation reactions are among the most energy-consuming processes in nature. The regeneration of one active methyl group in the form of SAM costs 12 ATP equivalents (Nyyssola and Leisola 2001). Thus, it seems difficult to balance gene expression and meet the demand for energy and redox power in a single cell.
Optimizing vanillyl alcohol biosynthesis in E. coli coculture
Since optimization of vanillyl alcohol biosynthesis in a single cell was not successful, we turned to explore the coculture strategy. In the design, two BW25113-derived strains, Lab02 (BW25113ΔptsGΔmanXYZΔglk) and Lab03 (BW25113ΔglpK), were used as the upstream and the downstream hosts, respectively. When glucose and glycerol are used as the mixed carbon source, the two strains will consume glycerol and glucose, respectively, which is expected to reduce the competition between them. Lab02 was transformed with plasmids pCS-CS and pZE-AQ, producing strain YMC12, responsible for 3,4-DHBA production from glycerol. Lab03 was transformed with plasmids pZE-COMT and pCS-LM, producing strain YMC13, responsible for conversion of 3,4-DHBA to vanillyl alcohol.
To further improve the performance of the coculture system, we chose to knock out the aroE gene in the upstream strain (see Table 1 for the genetic information of the strains), producing strain YMC14. Knockout of aroE can, on one hand, block the synthesis of aromatic amino acids and other essential metabolites such as 4-hydroxybenzoate and 4-aminobenzoate, and, on the other hand, conserve and direct more carbon flux to the target pathway. Thus, it is expected to reduce cell growth but improve the production capacity of the upstream strain, making room for the rate-limiting downstream strain in the coculture.
In the YMC14/YMC13 coculture, the overall growth profiles were negatively correlated with the inoculation ratio of the upstream strain. The OD600 value reached 8.48 ± 0.02 at the ratio of 1:4 but only 4.01 ± 0.09 at the ratio of 4:1 (Fig. 5C). Interestingly, the vanillyl alcohol titer was also negatively correlated with the inoculation ratio of the upstream strain. At the ratio of 1:4, the best titer reached (559.4 ± 14.7) mg/L at 96 h, which is 70% higher than that of the YMC12/YMC13 coculture (Fig. 5D). Meanwhile, (77.4 ± 4.2) mg/L PCA and (264.0 ± 6.2) mg/L 3,4-DHBA were still accumulated. The results indicate that, owing to the aroE knockout, strain YMC14 has a higher carbon flux through the shikimate pathway than strain YMC12. At the low inoculation ratio (1:4), the upstream strain can still provide sufficient precursors, so the vanillyl alcohol titer could potentially be improved by further lowering the upstream ratio. The production curve showed that the vanillyl alcohol titer kept increasing throughout the cultivation time (Fig. 5E).
To explore the scale-up potential, fed-batch fermentation of the YMC14/YMC13 coculture was performed in 3 L bioreactors containing 1 L initial fermentation broth. The inoculation ratio was also kept at 1:4. As shown in Fig. 5F, the cells grew fast within the first 48 h, and the OD600 value reached 82.9 ± 2.9 at 48 h. Thereafter, the cell density remained stable. The vanillyl alcohol titer kept increasing during the cultivation period and reached (3890.2 ± 87.9) mg/L at 72 h, with the accumulation of (617.0 ± 31.3) mg/L PCA and (510.5 ± 128.8) mg/L 3,4-DHBA. According to these results, the methyltransferase is an obvious bottleneck in vanillyl alcohol biosynthesis. To further improve the production efficiency, this problem should be tackled in the future by strategies such as enzyme mining and protein engineering.
CONCLUSION
In this study, we first aimed to achieve efficient vanillyl alcohol production in a single cell. However, stepwise optimization of the pathway, including pathway selection, enhancement of SAM regeneration and modular optimization, led to no significant improvement in production, and the two precursors were still accumulated in large amounts. This suggests that it may be challenging and laborious to balance the pathway and the demands of enzymes for energy and cofactors. Thus, by applying the coculture engineering strategy, the pathway was divided and distributed into two E. coli cells. This effort led to the highest vanillyl alcohol titer reported so far, demonstrating the potential of coculture engineering in the biosynthesis of valuable compounds.

[Figure legend: see Table 1 for the genetic information of the strains. H, high copy; M, medium copy; L, low copy. Data shown are mean ± SD (n = 3 biological replicates).]
"Engineering"
] |
Influence of heat treatments on the microstructure and degree of sensitization of base metal and weld of AISI 430 stainless steel
Universidade Federal Fluminense, Departamento de Engenharia Mecânica; Centro Federal de Educação Tecnológica Celso Suckow da Fonseca, Departamento de Engenharia Mecânica, Rio de Janeiro, Brasil

ABSTRACT

AISI 430 is a non-stabilized ferritic stainless steel grade with carbon content lower than 0.12%. After hot and cold rolling, this material is annealed. Slow cooling after soaking at temperatures between 900°C and 1000°C promotes the formation of a high quantity of carbides and nitrides, while rapid cooling partially suppresses the formation of these precipitates but introduces martensite in the microstructure. Intergranular martensite can also be produced in the weld metal and in the heat affected zone (HAZ) of welds of non-stabilized ferritic stainless steels. In this work, several heat treatments between 900°C and 1000°C, with different cooling rates, were performed on a commercial sheet of AISI 430 grade. An autogenous weld was also produced with the GTAW process, and a post weld heat treatment at 700°C was carried out. The different microstructures produced were analyzed by optical and scanning electron microscopy (SEM). The degree of sensitization was measured by double loop electrochemical potentiodynamic reactivation tests (DL-EPR). The pitting corrosion resistance was evaluated by cyclic polarization tests in 3.5% NaCl solution. Hardness and toughness tests were also performed for selected heat treatment conditions. The results indicate that slow cooling results in a higher degree of sensitization than observed in the material rapidly cooled from the annealing temperature. The ferritic-martensitic structure produced by water cooling has a higher pitting potential and a lower degree of sensitization, but is brittle at room temperature. A subsequent tempering treatment between 600 and 800°C can increase the toughness, but the corrosion resistance may decrease due to carbide precipitation. The heat affected zone of AISI 430 welds contains intergranular martensite, which is brittle and susceptible to corrosion attack. Post weld heat treatment at 700°C decomposed the martensite into ferrite and carbides and improved the corrosion resistance.
INTRODUCTION
AISI 430 steel is one of the most popular ferritic stainless steels. Although more modern ferritic grades have been developed, AISI 430 is still produced in large volumes due to its low cost and good corrosion properties.
The influence of heat treatments on the microstructure, corrosion resistance and mechanical properties of stainless steels is a key issue. Depending on the final heat treatment, the mechanical and corrosion properties may vary significantly. Frequently, the best heat treatment for corrosion resistance is not the best for the desired mechanical properties.
On the other hand, welding always produces important changes in the microstructure of the weld metal and heat affected zone (HAZ), which affect corrosion resistance and mechanical properties. In the case of ferritic stainless steels, the main change produced by fusion welding processes is the pronounced grain growth in the weld metal and HAZ [1]. Besides this, in non-stabilized steels such as AISI 430, intergranular martensite may form and intergranular precipitation of M23C6 carbides and M23(C,N) carbonitrides may occur in the coarse grain heat affected zone (CGHAZ) [1][2][3][4].
Krafft [5] reported a failure case occurring in the weld metal (WM) and heat affected zone (HAZ) of an AISI 430 component of a heat recovery steam generator. The post weld heat treatment (PWHT) applied to the welded joint was non-uniform and insufficient to promote the proper tempering of martensite.
In this work, the effect of heat treatments on the microstructure and corrosion resistance of an AISI 430 steel was studied by means of double loop electrochemical potentiodynamic reactivation (DL-EPR) tests and pitting corrosion tests. The effect of a post weld heat treatment (PWHT) on the microstructure and corrosion resistance of an autogenous welded joint produced by gas tungsten arc welding (GTAW) was also investigated.
MATERIALS AND METHODS
The material studied was a hot rolled and annealed sheet, 3.0 mm thick, of AISI 430 steel with the composition shown in Table 1. Specimens of the base metal with dimensions (15 x 10 x 3) mm³ were cut for heat treatment and corrosion tests. These specimens were submitted to isothermal heat treatments for 1 hour at 900°C, 950°C and 1000°C. Three cooling media were used: water (fast cooling), air (moderate cooling) and furnace (slow cooling). Some specimens were heat treated in selected conditions and machined to the dimensions of sub-size Charpy test specimens, (55 x 10 x 2.5) mm³, with a V-notch. Charpy impact tests were carried out at room temperature. Two portions of the rolled sheet, cut and machined to dimensions (100 x 100 x 2.5) mm³, were autogenously welded with the automatic GTAW process using 99.9% Ar as shielding gas. The parameters were adjusted to obtain full penetration, with a heat input of 0.8 kJ/mm. Specimens of the welded joint, including the weld metal (WM) and the HAZ, were cut for post weld heat treatment (PWHT) and subsequent analysis by electrochemical corrosion tests and microscopy. The PWHT was carried out at 700°C for 1 hour with water cooling.
The degree of sensitization was measured by double loop electrochemical potentiodynamic reactivation tests (DL-EPR) [6][7]. A three-electrode cell was used, with a working electrode (WE) of the material analyzed, a saturated calomel electrode (SCE) as reference and a Pt wire as counter-electrode. WEs were prepared by embedding the specimen to be analyzed in epoxy resin together with a copper wire for electric contact. The surfaces of the WEs were prepared by grinding with sandpaper of grit 100, 200, 300 and 400. The area exposed to the test solution was delimited with enamel. The test solution was 0.25 M H2SO4 and 0.01 M KSCN. After 1 hour of stabilization of the open circuit potential (E_OCP), the anodic polarization started with a sweep rate of 1 mV/s. The sweep was reverted to the cathodic direction at 0.300 V vs. SCE.
After the DL-EPR tests, the pitting corrosion resistance of selected conditions was evaluated by cyclic polarization tests in 3.5% NaCl solution at room temperature. The tests were also conducted in a three-electrode cell, but the WEs were ground and polished with diamond paste. After stabilization of the E_OCP, the working electrode was polarized in the anodic direction with a sweep rate of 1 mV/s. The scan was reverted to the cathodic direction when the current density reached 5×10⁻³ A/cm².
Each corrosion test was repeated 3 times. Average values and standard deviations are presented in the results.
The microstructural investigation was performed by optical microscopy (OM) and scanning electron microscopy (SEM), with the specimens etched with Vilella's reagent (95 ml ethanol, 5 ml HCl and 1 g picric acid).
Heat treatments in the base material
Figs. 1(a-c) compare the microstructures of specimens treated at 950°C and cooled in water, air and furnace. Slow cooling after soaking at temperatures between 900°C and 1000°C promotes the formation of a high quantity of chromium carbides and nitrides (Fig. 1(a)), while rapid cooling partially suppresses the formation of these precipitates but introduces martensite in the microstructure (Figs. 1(b) and (c)). The martensite volume fraction of the specimen treated at 950°C and water quenched was (0.33 ± 0.04).
Carbon and nitrogen diffusion is so fast in the ferritic phase that it is not possible to completely suppress carbide and nitride precipitation, but the precipitates are too fine to be observed by optical microscopy [8]. Fig. 2(a) shows the intra- and intergranular precipitation observed by SEM. The EDS analysis confirms that these particles are chromium-rich carbides (Fig. 2(b)).
Air cooling produces a microstructure of ferrite and partially decomposed martensite, with some precipitates, as shown in Fig. 3. Fig. 4 presents the DL-EPR curve of the specimen heat treated at 950°C and furnace cooled. The main result of the DL-EPR test is the degree of sensitization (DOS), given by the ratio Ir/Ia, where Ir is the reactivation current peak and Ia is the activation current peak. Fig. 5 shows how the DOS varies with heat treatment temperature and cooling medium. Slow cooling, which produces a microstructure of ferrite and precipitates (carbides and nitrides), gives the highest DOS, i.e., the highest susceptibility to corrosion due to chromium depletion. The ferritic-martensitic microstructure with a low density of carbides and nitrides gives the lowest DOS. Table 2 shows the pitting potentials measured in polarization tests in 3.5% NaCl solution for specimens treated at 900°C, 950°C and 1000°C and water cooled. The specimen treated at 950°C and water cooled is also the one with the highest pitting resistance. Table 3 shows the Charpy impact energy of specimens quenched from 950°C, with and without subsequent tempering at 600°C, 700°C and 800°C for 1 h. The DL-EPR and pitting corrosion tests suggest that the microstructure of ferrite and martensite with a low density of precipitates is the most favorable for the corrosion resistance of AISI 430 steel. However, the martensite is brittle and increases the hardness. As a result, the toughness of the steel quenched in water from 950°C is very low, although it can be improved by tempering in the 600-800°C range. Figs. 9(a-b) compare the pitting potential curves of the welded joint before and after PWHT (700°C/1 h). Table 4 shows the DOS and E_PIT parameters of the weld metal plus HAZ before and after the PWHT at 700°C. An important decrease of the DOS and a small increase of the pitting potential are observed with the PWHT. The main microstructural change observed is chromium carbide precipitation (tempering reactions) in the martensite. This is clearly observed in the HAZ and base metal, as shown in Figs. 10(a-b).
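Extracting the DOS from a DL-EPR scan reduces to locating the two current peaks defined above; the sketch below uses synthetic data and illustrative array names, and is not the acquisition software used in this work.

```python
import numpy as np

def degree_of_sensitization(current, reverse_start):
    """Compute DOS = Ir/Ia from a DL-EPR scan recorded in sweep order.
    reverse_start is the sample index at which the sweep was reverted
    (here, at 0.300 V vs. SCE). Ia is the activation (forward) current
    peak; Ir is the reactivation (reverse) current peak."""
    i_a = np.max(current[:reverse_start])  # activation peak
    i_r = np.max(current[reverse_start:])  # reactivation peak
    return i_r / i_a

# Synthetic example: a sensitized sample shows a pronounced
# reactivation peak, giving a DOS well above that of a sound material.
E = np.linspace(-0.5, 0.3, 400)
forward = 1e-3 * np.exp(-((E + 0.35) / 0.05) ** 2)   # activation loop
reverse = 4e-4 * np.exp(-((E + 0.35) / 0.05) ** 2)   # reactivation loop
scan = np.r_[forward, reverse[::-1]]
print(degree_of_sensitization(scan, reverse_start=400))  # 0.4
```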
The effects of the PWHT at 700 °C on the DL-EPR results of the base metal, however, may differ from those observed in the WM and HAZ. If the base metal was previously quenched from 950 °C and has a microstructure of ferrite, martensite and few carbides/nitrides, tempering at 700 °C increases the DOS due to additional chromium carbide precipitation. Even considering that a healing effect due to chromium diffusion has been reported [8], in this work tempering of the base metal at 700 °C increased the DOS from 0.079 to 0.250 (curves not shown). Thus, tempering of the martensite at 700 °C for 1 h is beneficial to the corrosion resistance of the HAZ, but detrimental to the corrosion resistance of the base metal if it was welded in the water-quenched condition. It must be pointed out that the grains in the HAZ are very coarse and the un-tempered martensite is concentrated at the grain boundaries. On the other hand, the martensite volume fraction in the base metal water quenched from 950 °C (0.33 ± 0.04) was much higher than the amount found in the HAZ. These differences may explain why the intergranular martensite of the HAZ has a more deleterious effect on the corrosion resistance than the martensite obtained by quenching in the base metal, and why the PWHT is so important for the HAZ.
Figure 1: Microstructures of specimens treated at 950 °C and cooled in (a) water, (b) air and (c) furnace.
Figure 2: (a) Intra- and intergranular precipitation observed by SEM; (b) EDS analysis of the chromium-rich carbide particles.
Figure 3: Microstructure of specimen treated at 1000 °C and air cooled.
Figure 4: DL-EPR curve of specimen heat treated at 950 °C and furnace cooled.
Figure 5: Variation of the degree of sensitization (Ir/Ia) with heat treatment temperature and cooling media.
Fig. 6 exhibits the macrostructure of the welded joint, where pronounced grain growth can be observed. Figs. 7(a-b) show the microstructure of the coarse-grain HAZ in the as-welded condition. Intergranular martensite is clearly visible. In some very coarse grains, such as that shown in Fig. 7(b), intragranular carbides and nitrides are noted. The intergranular martensite contains less Cr than the ferritic matrix and, as a consequence, this phase is preferentially attacked in the electrochemical corrosion tests (see Figs. 8(a-b)).
Table 1: Chemical composition of the base material.
Table 2: Pitting potentials measured in cyclic polarization tests in 3.5% NaCl.
Table 3: Impact toughness and hardness of specimens quenched from 950 °C and tempered.
| 2,814 | 2018-01-08T00:00:00.000 | [
"Materials Science"
] |
Study of Generalized Phase Spectrum Time Delay Estimation Method for Source Positioning in Small Room Acoustic Environment
This paper considers the application of signal processing methods to passive indoor positioning with acoustic microphones. The key aspect of this problem is time-delay estimation (TDE), which is used to obtain the time difference of arrival of the source's signal between pairs of distributed microphones. This paper studies the approach based on generalized phase spectrum (GPS) TDE methods. These methods use frequency-domain information about the received signals, which distinguishes them from the widely applied generalized cross-correlation (GCC) methods. Despite the more challenging implementation, GPS TDE methods can be less demanding of computational resources and memory than conventional GCC ones. We propose an algorithmic implementation of a GPS estimator and study various frequency weighting options in application to TDE in a small room acoustic environment. The study shows that the GPS method is a reliable option for small, acoustically dead rooms and can be effectively applied in the presence of moderate in-band noise. However, GPS estimators are far less efficient in less acoustically dead environments, where other TDE options should be considered. The distinguishing feature of the proposed solution is the ability to obtain the time delay using a limited number of adjusted bins. The solution could be useful for passively locating moving emitters of continuous narrow-band noises using computationally simple frequency detection algorithms.
Introduction
The problem of time-delay estimation (TDE) is to measure the difference in the time of arrival of signals recorded by space-separated sensors. This task is relevant for many applications, including those which are related to signal source localization [1]. The position of the object can be determined on the straight line [2,3], on the plane [4,5], and in space [6][7][8] depending on the location and the number of sensors.
The use of TDE methods is typical for those areas of technology where there is a need for the passive location of objects emitting signals. The physical nature of the signal, however, is not essential. Among practical applications, we can highlight the determination of pipeline leak positions [2,3], local mobile object positioning [9], passive radio positioning [1], etc. In recent years, the problem of TDE has become more relevant in connection with the spread, on the Internet, of concepts and services providing contactless control of household appliances [10], automatic tracking of objects [7], as well as in the sensor systems of robotic devices [11]. A common problem in the implementation of each of the listed applications is obtaining accurate time-delay estimates at an acceptable computational cost.
Materials and Methods
The most studied and widespread TDE technique is based on the computation of cross-correlation functions (CCFs) [2]. CCFs are calculated for pairs of sampled microphone signal time series, and the delay estimate is obtained from the position of the maximum in the correlogram. An alternative to the correlation TDE methods is the family of phase-frequency methods, first suggested in [17]. Unlike correlation methods, which analyze signals in the time domain, phase methods operate with frequency-domain representations of the signals. This section is devoted to the phase methods of TDE.
This paper considers the simplest case with two sensors, shown in Figure 1. Obviously, two sensors are not enough for unambiguous signal source localization on a plane or in space [11]. Depending on the relative positions of the sensors and the position of the signal source, a pair of microphones may be sufficient to determine the direction towards the object. In the general case, at least three sensors are required to determine the position of the source in a room [16]. In this case, the signals of the sensor array can be processed either simultaneously or in pairs [8]. The latter means that the algorithm considered in this paper can be used to localize the signal source in a room using three or more microphones.
Ideal Propagation Model
The TDE task for sound source detection in a room can be formalized in several ways [8]. Each formalization is a compromise between the accuracy of the signal propagation model and the complexity of the mathematical description of the problem. The main acoustic signal propagation models are [8]: the ideal propagation model, the multipath propagation model, and the reverberation model. In this work, we consider that the simulated microphones are equally capable of efficiently registering signals coming from any direction. The ideal propagation model assumes that there is only one path from the signal source to each of the microphones. Let s_0(t) be the signal emitted by the source. Then the signals of the receivers will be

s_a(t) = α_a·s_0(t − τ_a) + n_A(t), s_b(t) = α_b·s_0(t − τ_b) + n_B(t), (1)

where τ_a, τ_b are lag values; α_a, α_b are signal attenuation coefficients; n_A(t), n_B(t) are random uncorrelated additive microphone noises. The values of τ_a, τ_b are determined by the geometric distances r_a, r_b from the signal source to the corresponding receiver:

τ_a = r_a/c, τ_b = r_b/c, (2)

where c is the sound speed. Attenuation of the signals α_a, α_b can be caused by various factors; however, in the simplest ideal case, only the source beam pattern and the spherical spreading of the sound wave are considered, so

α_a = k/r_a, α_b = k/r_b, (3)

where k is a constant coefficient. In this case, the TDE is performed to get the value τ_ab = τ_b − τ_a, which is used further to determine the position of the sound source. Using the notations above and having redefined t = t − τ_b, we can rewrite (1) as

s_a(t) = α_a·s_0(t + τ_ab) + n_A(t), s_b(t) = α_b·s_0(t) + n_B(t). (4)

Expression (4) does not consider the influence of several physical factors, such as the reflection and absorption of sound in a room.
Later, in the course of computational experiments with the ideal scenario, we take k = 1, since the target signal-to-noise ratio (SNR) can be achieved solely by changing the noise intensity.
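As a concrete illustration of Eqs. (1)-(4), the sketch below synthesizes a microphone pair under the ideal single-path model. This is not the authors' code: the function name, the default sound speed, and the integer-sample handling of the delay (matching the experimental setting described later) are assumptions made for illustration. Python with NumPy is used here and in the other sketches.

```python
# Minimal sketch of the ideal propagation model, Eqs. (1)-(4).
# Assumptions: k = 1 (as in the text), integer-sample delays, and white
# noise added to both channels to reach a target SNR.
import numpy as np

def ideal_pair(s0, fs, r_a, r_b, c=343.0, snr_db=32.0, seed=None):
    rng = np.random.default_rng(seed)
    shift = int(round((r_b - r_a) / c * fs))  # tau_ab = tau_b - tau_a in samples
    if shift > 0:        # channel b lags channel a
        a, b = s0[shift:], s0[:-shift]
    elif shift < 0:      # channel a lags channel b
        a, b = s0[:shift], s0[-shift:]
    else:
        a, b = s0.copy(), s0.copy()
    out = []
    for x in (a, b):
        rms = np.sqrt(np.mean(x ** 2))
        out.append(x + rng.normal(0.0, rms * 10 ** (-snr_db / 20), x.size))
    return out[0], out[1], shift / fs  # noisy signals and the true lag in seconds
```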
Reverberation Model
The problem of the ideal propagation model is that the assumptions made do not correspond to the acoustic conditions of the real-world enclosed room. Firstly, there are always several paths for sound propagation between the source and the receiver due to the presence of reflected waves. Secondly, the absorption of sound energy by room surfaces has a significant effect on the recorded signal.
In accordance with the reverberation model, the received signals are described as follows:

s_a(t) = (h_a ∗ s_0)(t) + n_A(t), s_b(t) = (h_b ∗ s_0)(t) + n_B(t), (5)

where h_a(t), h_b(t) are room impulse response (RIR) functions and ∗ denotes convolution. The complexity of applying (5) lies in the practical difficulty of RIR determination. Acoustic measurements [18] or mathematical methods can be used to solve this problem. The image model method, first proposed in [19], is the most widespread among the latter. Alternatively, statistical methods [20] or methods based on geometric acoustics and ray tracing [21] can be used. To create realistic sound signals in this work, the image model method was used in the implementation of Lehman, Johansson and Nordholm [22,23].
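Under the reverberation model of Eq. (5), each channel is the convolution of the source signal with its RIR plus noise. The sketch below assumes the RIRs are already available (e.g., from an image-model tool such as the one cited above); the function and parameter names are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the reverberation model, Eq. (5): s_x = (h_x * s0) + n_x.
import numpy as np
from scipy.signal import fftconvolve

def reverberant_pair(s0, h_a, h_b, snr_db=32.0, seed=None):
    rng = np.random.default_rng(seed)
    a = fftconvolve(s0, h_a, mode="full")[:s0.size]  # truncate to source length
    b = fftconvolve(s0, h_b, mode="full")[:s0.size]
    out = []
    for x in (a, b):
        rms = np.sqrt(np.mean(x ** 2))
        out.append(x + rng.normal(0.0, rms * 10 ** (-snr_db / 20), x.size))
    return out[0], out[1]
```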
Basic Phase Shift TDE
The phase TDE algorithm is based on obtaining information about the delay value from the cross-phase spectrum Φ_ab of two signals. The algorithm for constructing the cross-phase spectrum is known from spectral analysis [14]. At the initial stage, the Fourier transforms S_a(f_k) and S_b(f_k) of the signals of each of the channels are determined:

S_a(f_k) = F_D{s_a(t_i)}, S_b(f_k) = F_D{s_b(t_i)}, (6)

where s_a(t_i) and s_b(t_i) are series of N real samples of the signals s_a(t) and s_b(t) sampled with an interval ∆; F_D is the operator of the short-time discrete Fourier transform (DFT); S_a(f_k) and S_b(f_k) are the spectra of the signals.
Next, the instantaneous cross-spectrum of the signals is formed:

S_ab^(q)(f_k) = S_a^(q)(f_k)* × S_b^(q)(f_k), (7)

where the superscript (q) indicates the time instant t_q = ∆·N·q of the beginning of the q-th time window; * is the element-wise complex conjugation; × is the element-wise product. The final measurement of the cross-spectrum S_ab(f_k) is obtained by averaging the Q instantaneous spectra:

S_ab(f_k) = (1/Q) Σ_{q=1..Q} S_ab^(q)(f_k). (8)

It should be noted that the application of (8) requires the signal source to remain stationary relative to the receivers during the entire time of signal recording. If it does not, the spectral estimate S_ab(f_k) will not be correct. However, this assumption is normally relevant for the cross-spectrum. If we consider that neither the source nor the sensors are moving, the phase shift for each particular harmonic component will remain the same for all Q instantaneous spectra. Therefore, coherent accumulation is applied this way to reduce the impact of the additive random noise.
To retrieve the set of phases, the phase cross-spectrum Φ_ab(f_k) is finally calculated:

Φ_ab(f_k) = U{arg S_ab(f_k)}, (9)

where U is an operator of phase unwrapping [24] and arg is the operator taking the argument of a complex number. All harmonic components present in s_0(t) will also be present in s_a(t) and s_b(t). In this case, the phase difference between the k-th harmonic components of s_a(t) and s_b(t) is determined by τ_ab·f_k. Therefore, the estimate of τ_ab can be obtained as the coefficient of proportionality in the equation of the line approximating Φ_ab(f_k).
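A sketch of the segmentation, averaging, and unwrapping steps of Eqs. (6)-(9) follows. It is an assumed implementation rather than the authors' code; note that the conjugation order in Eq. (7) fixes the sign convention of the recovered phase slope.

```python
# Sketch of Eqs. (6)-(9): Q rectangular windows of N samples, coherent
# averaging of the instantaneous cross-spectra, then phase unwrapping.
import numpy as np

def cross_phase_spectrum(s_a, s_b, N, fs):
    Q = min(s_a.size, s_b.size) // N
    S_ab = np.zeros(N // 2 + 1, dtype=complex)
    for q in range(Q):
        seg = slice(q * N, (q + 1) * N)
        S_ab += np.conj(np.fft.rfft(s_a[seg])) * np.fft.rfft(s_b[seg])  # Eq. (7)
    S_ab /= Q                           # Eq. (8): coherent averaging
    phi = np.unwrap(np.angle(S_ab))     # Eq. (9): unwrapped cross-phase spectrum
    f = np.fft.rfftfreq(N, d=1.0 / fs)  # bin frequencies in Hz
    return f, phi, S_ab
```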
The value τ̂_ab can be determined, for example, from the criterion of minimizing the squared error function [14]. Let the error e be defined as

e = Σ_{k∈S} [Φ_ab(f_k) − 2π·τ_ab·f_k − b_ab]², (10)

where b_ab is a constant term. Then τ̂_ab and b_ab are found from the conditions

∂e/∂τ_ab = 0, ∂e/∂b_ab = 0. (11)

Equating the derivatives to zero in (11) yields a system of two linear equations (12) whose solution is expressed through the sums A, B, C, D of products of f_k and Φ_ab(f_k) computed with the proposed scheme (13), i.e., the standard least-squares line-fit solution. An advantage of the algorithm based on (12) and (13) is that non-adjacent spectral bins can be used for TDE. It is optimal to choose k ∈ S, where S is the set of the most essential harmonic components of the signal s_0(t).
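The least-squares line fit of Eqs. (10)-(13) over an arbitrary bin set S can be sketched as below; this helper is an illustrative reconstruction, keeping the intercept b_ab that, as noted later, must not be forced through the origin. With the conjugation convention of the previous sketch, a positive lag of channel b yields a negative fitted slope, so the sign of the returned value must be interpreted accordingly.

```python
# Sketch of Eqs. (10)-(13): ordinary least-squares fit of
# phi(f_k) ~ 2*pi*tau*f_k + b_ab over a selected set of bins.
import numpy as np

def tde_line_fit(f, phi, bins):
    x = 2.0 * np.pi * f[bins]            # regressor: angular frequency
    y = phi[bins]                        # unwrapped cross-phase samples
    A = np.vstack([x, np.ones_like(x)]).T
    (tau, b_ab), *_ = np.linalg.lstsq(A, y, rcond=None)
    return tau, b_ab                     # slope (lag estimate) and intercept
```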
Generalized Phase Spectrum TDE
A modification of the method described in the previous subsection can be used to localize stationary signal sources. The modified method was initially proposed in [15] and was named GPS TDE.
A distinctive feature of the generalized method is the use of a real-valued frequency weighting function W(f_k) in the determination of τ̂_ab. Similarly to (10), the weighted error in this case is introduced as

e = Σ_{k∈S} W(f_k)·[Φ_ab(f_k) − 2π·τ_ab·f_k − b_ab]². (14)

Obtaining a calculation formula for τ̂_ab is carried out in the same way as in the previous subsection, yielding the weighted least-squares solution (15). It is clear from (14) that the function W(f_k) should be chosen such that its value is high when the useful signal prevails over the noise at the frequency f_k and differs little from zero in other cases. A set of five frequency weighting functions was investigated in [14]. Table 1 below shows the calculation formulas for these functions.
Table 1: Frequency weighting functions for GPS TDE (method, nomenclature and calculation formula for each of the five options: BCC, SCOT, PHAT, ML and COH).

The coherence function γ²_ab(f_k), widely used for this purpose, is calculated as

γ²_ab(f_k) = |S_ab(f_k)|² / (S_aa(f_k)·S_bb(f_k)), (16)

where S_aa(f_k) and S_bb(f_k) are the averaged auto-spectra of the two channels. It should be noted that the computational scheme proposed in this section differs from the one in [14]. Equation (15) allows the unwrapped phase spectrum not to pass through the origin, since we use the coefficient b_ab in the linear regression. This feature is practically important and will be addressed later. Since W(f_k) is based on spectral estimates, the generalized method should be applied carefully to signals that are non-stationary.
Results and Discussion
A series of computational experiments was carried out for a comparative evaluation of the algorithms. The human voice is commonly used for evaluation purposes in related studies [7,8]. Prior to the proposed study, we tested the algorithm's performance for several speakers but did not find a significant difference in the results. Therefore, we used the recording of one speaker and focused the study mainly on evaluating the impact of additive noise and multipath propagation in a reverberant environment.
A recording of a male speaker's voice with additive random noise was used to produce a set of test signals. The noise-free sound was synthesized from the recorded voice in each of two ways: in accordance with (4) and in accordance with (5).
Additive noises were generated in software, then scaled and summed with the preprocessed recording. The spectral noise density was uniform in the range from 0 to 1000 Hz. Signals and noises outside of this frequency range were not considered in the experiments. A similar approach to preparing the set of test signals was used in [25].
Noises of the same intensity were applied to both channels. At the same time, the intensity of the noise was set so as to provide the target SNR relative to the root-mean-square value of the signals recorded by the sensors over the entire time of each instance of the experiment. When applying (1), the delay was introduced by shifting one copy of the record relative to the other by an integer number of sampling intervals (f_d = 44,100 Hz).
Experimental Setting
A set of stereo test records with a duration of about 20 s each was prepared for the study. Each recording was analyzed in fragments of about 1 s during each instance of the experiment, and the analysis of each fragment was considered an independent experiment. The final estimates used to calculate the absolute error were obtained by averaging the obtained values of the lag time.
The number of samples in each of the analyzed fragments was L = 40,960 (about 928.8 msec). The number of samples in each segment was taken to be N = 4096 (about 92.9 msec). Consequently, each fragment of recorded sound was divided into Q = 10 segments. When processing the results, the outputs corresponding to recording segments dominated by pauses in speech were discarded.
Two different sets of frequency bins were used when applying (16). The first set contained frequency bins satisfying the condition f_k ∈ [100 Hz, 850 Hz]. The second set contained four non-overlapping frequency bands shown below. The choice of these frequency intervals was made in accordance with the form of the power density spectrum of the raw signal shown in Figure 2. The presented characteristic was obtained by averaging all instantaneous power density spectra with a window of N = 4096 samples. The position of the cut-off level was chosen empirically to optimize the TDE operation in the absence of reverberation. It should be noted that the power density spectrum for different speakers, or even for different speech fragments by the same speaker, would not remain the same. However, the proposed procedure remains applicable regardless.
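The bin-selection procedure just described (averaged power spectrum, empirical cut-off inside a band of interest) could look like the following sketch; the cut-off value and band edges are illustrative assumptions rather than the values used in the paper.

```python
# Sketch of the empirical bin selection: keep bins within [f_lo, f_hi]
# whose averaged power exceeds a cut-off relative to the spectral peak.
import numpy as np

def select_bins(s, fs, N, f_lo=100.0, f_hi=850.0, cutoff_db=-20.0):
    Q = s.size // N
    psd = np.mean([np.abs(np.fft.rfft(s[q * N:(q + 1) * N])) ** 2
                   for q in range(Q)], axis=0)       # averaged power spectrum
    f = np.fft.rfftfreq(N, d=1.0 / fs)
    level_db = 10.0 * np.log10(psd / psd.max())      # level relative to peak
    return np.flatnonzero((f >= f_lo) & (f <= f_hi) & (level_db >= cutoff_db))
```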
Simulation of the Small Room Environment
As noted above, creating a realistic sound signal in accordance with (5) requires obtaining the RIR functions h_a(t), h_b(t). The MATLAB program prepared by Eric Lehman [22] was used to obtain these characteristics. When calculating the RIR, the room parameters and the configuration of the sensors were specified as shown in Figure 3. The dimensions of the room were 5 × 3.5 × 2.25 m. The source has coordinates (1.5, 2.75, 1.8), and the microphones (4.5, 1.25, 1.8) and (4.5, 2.25, 1.8).
The reverberation time (T60) was assumed to be 50 msec and 120 msec. The first value is compliant with the standards for a room intentionally designed for voice broadcasting. The second value is compliant with the requirements for verbal communication in an office space [26]. The synthesized RIRs are shown in Figure 4.
Comparison of GPS TDE Methods in Anechoic Environment
Table 2 shows the absolute TDE errors for various weighting functions under the ideal signal propagation model, and Figure 5 shows the dependence of the TDE error on the SNR. Application of frequency weighting in accordance with (15) and (16) provides greater accuracy as the intensity of the in-band noise increases. At the same time, the use of the second, reduced frequency set lowers to 4 dB the threshold SNR below which a sharp drop in accuracy manifests itself.

Figure 6 shows the absolute TDE error for SNR ≥ 8 dB for W_PHAT and W_ML. When the noise intensity is not sufficient to cross the threshold, the estimators demonstrate the best possible performance in terms of accuracy regardless of the noise level. When the SNR drops below the threshold level, the accuracy degrades gradually with the intensification of the noise. However, using a reduced set of frequency bins makes the contaminating effect of in-band noise less harsh. Notably, this is more obvious for W_PHAT than for W_ML. This can be explained by the fact that the frequency weighting applied with the ML estimator compensates for frequency bins where noise prevails over the signal. Although the threshold SNR level in Figure 6 appears to be better for PHAT than for ML, the latter estimator surpasses the former in terms of accuracy in the single-path scenario regardless of noise intensity. The frequency weighting function for the ML estimator is shown in Figure 7.

Figure 7 shows the form of Φ_ab(f_k) and all W(f_k) in the absence of noise (SNR = 32 dB) and in its presence (SNR = 4 dB). A part of the curve close to linear in shape is clearly distinguishable in Φ_ab in both cases; however, in the presence of noise, the corresponding frequency range is significantly narrower. It should be noted that Φ_ab in the absence of noise passes through the origin and behaves as described in [14]. However, when the signal is contaminated with noise, Φ_ab is offset relative to the abscissa axis. This can be explained by the fact that there is no voice signal at frequencies up to 100 Hz, so the prevalence of the noise in this band results in an unpredictable offset of the unwrapped phase spectrum. That makes the estimation technique proposed in [14] not relevant for this task.

The shapes of W_SCOT and W_COH are close to a line parallel to the frequency axis in the absence of noise. In the presence of noise, high levels of W_SCOT and W_COH are observed in the intervals where the cross-power spectrum |S_ab| has high values. The form of W_BCC follows the shape of |S_ab| and does not differ significantly between the presence and absence of noise. Four areas of high values are visible in W_ML, corresponding to the Φ_ab regions that are best approximated by the line.
Comparison of GPS TDE Methods in Reverberant Environment
Tables 3 and 4 summarize the average absolute TDE errors for the different weighting functions under the reverberation model for the two reverberation times (Table 3: T60 = 50 msec; Table 4: T60 = 120 msec). Figure 8 shows that, in the presence of reflected signals, the ML estimator is inferior in accuracy to the SCOT and COH estimators, especially in the absence of additive noise. At the same time, the accuracy turns out to be significantly lower than in the previous case. This can be explained by the correlation of the signals with their reflected copies. In the presence of reverberation and intense noise, none of the functions shows any accuracy advantage. The latter makes it useful to apply the BPS TDE method (PHAT) as the simplest one.
The use of the second set of frequency bins provides an advantage in accuracy only under conditions of noise dominance (SNR ≤ 0 dB). The use of the complete set of frequency bins provides significantly better accuracy in the other cases. Figure 8 shows the dependence of the TDE error on the SNR graphically.

Figure 9 shows the results of using GPS TDE for various acoustic conditions of the environment. It is clear from the figure that an increase in the reverberation time leads to a drastic increase in the error, both in the presence and in the absence of noise. However, with the dominance of noise over the signal, the presence of reflected copies has a positive effect on accuracy. Even when this is the case, though, the TDE error remains unacceptably high for a significant part of practical applications.

Figure 10 shows the form of Φ_ab(f_k) and all W(f_k) for different values of the reverberation time (T60); panels (a,c,e,g,i) are obtained for T60 = 50 msec and panels (b,d,f,h,j) for T60 = 120 msec. All graphics in Figures 7 and 10 are obtained for one and the same fragment of the original signal. It can be seen from the form of Φ_ab that an increase in the reverberation time leads to a distortion of the frequency response form and a decrease in the estimation accuracy. At the same time, the distortions observed for W_SCOT and W_COH are not as significant as they were in the absence of reverberation and the presence of noise. This can be explained by the fact that the reflected signals are mutually correlated, and their presence does not contribute to a significant decrease in the level of signal coherence. The correlation of the reflected signals also affects the shape of |S_ab| and, therefore, the form of W_BCC. The W_ML form also changes significantly with an increase in the reverberation time, while the regions of high values still correspond to the linear sections of Φ_ab. At T60 = 120 msec, the number of such sections becomes smaller, which negatively affects the accuracy.
Conclusions
This study investigated GPS TDE in relation to the problem of localizing a sound source in a small room. The suggested TDE algorithm is based on the analysis of the form of the phase response, which makes it possible to estimate the time delay by analyzing an arbitrary set of spectral bins.
To assess the algorithm's applicability and efficiency, a series of computational experiments was performed to simulate speaker positioning within a small room. To simulate the room acoustics, the image model implemented by Lehman and Johansson [23] was used. During the course of the experiments, the SNR at the signal receivers was varied, as well as the room reverberation time.
The experiments demonstrated the fundamental applicability of the suggested algorithm. In the absence of noise and echo, GPS TDE demonstrates an accuracy comparable to the sampling error at f_d = 44,100 Hz (about 0.01 msec). A decrease in accuracy is expected in the absence of echo but with increasing intensity of the additive noise. However, narrowing the frequency range over which the TDE is performed helps to maintain accuracy under moderate noise (SNR > 4 dB). The best accuracy characteristics are provided by the ML GPS estimator.
When an echo occurs, TDE accuracy degrades significantly. The reflected signals are correlated and therefore introduce extra noise into the correlogram. In this case, the use of a reduced set of spectral bins affects the accuracy negatively. Even with insignificant reverberation, corresponding to an acoustically very dead room, and in the absence of noise, the ML GPS estimator demonstrates a relatively low accuracy. The SCOT and COH GPS estimators show the best results. In conditions of higher reverberation, the TDE error increases significantly in comparison with the ideal case and makes the use of the GPS method ineffective. In practice, however, the influence of echo can be lower, as real-world microphones are not omnidirectional.
Even though the suggested method is inferior to its analogs in a few aspects, its advantage remains high computational efficiency. The suggested computational scheme, when using a relatively small number of adjacent frequency samples for TDE, allows the use of Goertzel's algorithm instead of the FFT [27]. This is essential for embedded computers with memory constraints. Additionally, the use of well-known implementations of the Goertzel algorithm designed for phase detection [28] will make it possible to re-evaluate the spectral characteristics of the signal as new data arrive. The latter is useful for solving the problem of positioning a mobile acoustic source. Further studies will be devoted to the testing of this hypothesis.
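For a handful of bins, the spectral samples that feed the estimator can be computed with Goertzel's algorithm instead of a full FFT, as noted above. The sketch below is a textbook single-bin Goertzel evaluation, not taken from [27] or [28].

```python
# Textbook Goertzel recursion: returns the complex DFT value X[k] of a
# length-N block using O(1) state, convenient when only a few bins are needed.
import numpy as np

def goertzel(x, k):
    N = len(x)
    w = 2.0 * np.pi * k / N
    coeff = 2.0 * np.cos(w)
    s1 = s2 = 0.0                       # s[n-1], s[n-2]
    for sample in x:
        s0 = sample + coeff * s1 - s2   # s[n] = x[n] + 2cos(w)s[n-1] - s[n-2]
        s2, s1 = s1, s0
    return np.exp(1j * w) * s1 - s2     # X[k] = e^{jw}*s[N-1] - s[N-2]
```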
"Engineering",
"Physics"
] |
Aberrant epigenetic silencing of neuronatin is a frequent event in human osteosarcoma
The paternally imprinted neuronatin (NNAT) gene has been identified as a target of aberrant epigenetic silencing in diverse cancers, but no association with pediatric bone cancers has been reported to date. In screening childhood cancers, we identified aberrant CpG island hypermethylation in a majority of osteosarcoma (OS) samples and in 5 of 6 human OS cell lines studied, but not in normal bone-derived tissue samples. CpG island hypermethylation was associated with transcriptional silencing in human OS cells, and silencing was reversible upon treatment with 5-aza-2′-deoxycytidine. Expression of NNAT was detectable in osteoblasts and chondrocytes of human bone, supporting a potential role in bone homeostasis. Enforced expression of NNAT in human OS cells lacking endogenous expression resulted in a significant reduction in colony formation and in vitro migration compared to nonexpressor control cells. We next analyzed the effect of NNAT expression on intracellular calcium homeostasis and found that it was associated with an attenuated decay of calcium levels to baseline following ATP-induced release of calcium from endoplasmic reticulum (ER) stores. Furthermore, NNAT expression was associated with increased cytotoxicity in OS cells from thapsigargin, an inhibitor of calcium reuptake into the ER and an inducer of the ER stress response. These results suggest a possible tumor suppressor role for NNAT in human osteosarcoma. Additional study is needed to ascertain sensitization to ER stress-associated apoptosis as a mechanism of NNAT-dependent cytotoxicity. In that case, epigenetic modification therapy to effect NNAT transcriptional derepression may represent a therapeutic strategy potentially of benefit to a majority of osteosarcoma patients.
INTRODUCTION
Osteosarcoma (OS) is the most common primary bone tumor in children, adolescents, and young adults. Although OS is a comparatively rare cancer, affecting fewer than 500 individuals below 21 years of age annually in the United States [1], some clinical features are well-documented including a characteristic range of age at presentation, prevalent sites of primary tumor involvement, and typical patterns of metastatic dissemination. Response to chemotherapy, which frequently may be assessed at the time of primary tumor resection following neoadjuvant chemotherapy, is highly variable, however, and much uncertainty persists regarding the molecular events governing the biology of these tumors and response to therapy. Presumed to originate in pluripotent mesenchymal cells, OS typically produce osteoid as a marker of osteoblastic differentiation but may also exhibit features of chondroblastic and fibroblastic development, and histologic heterogeneity of tumors is well documented. Molecular and cytogenetic analyses have indicated a substantial prevalence of structural variation in OS [2][3][4][5]. Despite this genomic complexity, and excepting the well-established roles for aberrations in the RB transcriptional corepressor 1 (RB1) [6,7], and tumor protein p53 (TP53) [8][9][10] tumor suppressor pathways in hereditary OS predisposition as well as sporadic tumors, few pathognomonic cytogenetic abnormalities or consistent genetic mutations have emerged either as clear and predominant drivers of tumorigenesis or as biomarkers for histologic features or clinical behavior. Comparatively few studies, furthermore, have explored the potential role of epigenetically-mediated gene dysregulation in the pathogenesis of OS.
Epigenetic mechanisms, including aberrant CpG island hypermethylation leading to transcriptional silencing of gene expression, are now well established as mediators of gene dysregulation determining diverse neoplastic phenotypes in essentially all types of cancer [11][12][13]. Human neuronatin (NNAT) is a paternally imprinted gene (the paternally derived allele is transcriptionally active, the maternally derived allele is transcriptionally silent) localized to chromosome segment 20q11.2. We first identified aberrant hypermethylation of the NNAT 5′ CpG island in childhood acute leukemia [14]. Subsequent analyses have shown NNAT hypermethylation in pituitary adenoma [15] and non-small-cell lung cancer (NSCLC) [16]. Enforced expression of NNAT has been associated with decreased in vitro colony forming efficiency (CFE) and inhibition of proliferation in pituitary adenoma cells [15,17] and decreased CFE in NSCLC [16], suggesting a potential tumor suppressor function in some cancers.
The NNAT gene is expressed as two alternatively spliced mRNA isoforms, both of which encode an endoplasmic reticulum-associated proteolipid [18,19].
Sequence homology [18] and functional analyses [19][20][21] have suggested that NNAT acts as a regulator of the sarco/endoplasmic reticulum Ca2+ ATPase (ATP2A2, SERCA2), thereby participating in the regulation of intracellular Ca2+ levels ([Ca2+]i) in some cells. Originally described as a gene selectively expressed in the developing brain, NNAT has indeed been shown to play a role in the induction of neural differentiation in embryonic stem cells via inhibition of SERCA2 [21]. Additional observations, however, suggest a more pleiotropic role. NNAT expression has been shown to induce adipocytic differentiation in mesenchymal cells [19] and to induce apoptosis in pancreatic cells [20]. The roles ascribed to NNAT of induction of differentiation in cells of mesenchymal origin and silencing/suppression of neoplastic phenotypes in various cancers prompted us to explore a potential role in solid tumors in children, especially those of mesenchymal origin.
We screened a panel of pediatric primary solid tumors to determine the prevalence of NNAT CpG island hypermethylation. We found that such aberrant methylation was relatively rare among the most common embryonal tumors of childhood, neuroblastoma and Wilms' tumor. Notably, however, more than two thirds of osteosarcoma (OS) samples demonstrated aberrant hypermethylation of the CpG island encompassing the NNAT promoter and exon 1. We therefore studied the effect of hypermethylation of the NNAT promoter region on NNAT expression in OS cells and tested the effect of NNAT expression on the clonogenic and invasive capacity of OS cells in vitro. Our results show that NNAT is silenced by aberrant CpG island hypermethylation in human OS. We also show that enforced expression of NNAT inhibits clonogenicity in human OS cells and suppresses in vitro transmembrane migration. Induction of NNAT expression in OS cells resulted in an attenuated decay of intracellular calcium levels following mobilization from ER stores, and NNAT expression enhanced the cytotoxic effect of thapsigargin, an inhibitor of SERCA2 and an inducer of endoplasmic reticulum (ER) stress, in OS cells. Together, these findings support a possible tumor suppressor function for NNAT in human osteosarcoma. Establishing a potential mechanism related to calcium homeostasis and/or ER stress warrants further investigation, but the present analysis suggests that NNAT hypermethylation may represent a potential target for epigenetic modifier therapy in osteosarcoma.
The NNAT CpG island exhibits frequent aberrant methylation in pediatric bone sarcomas but not in embryonal or CNS tumors
We have previously shown that hypermethylation of the NNAT 5′ CpG island is a frequent event in acute leukemias of childhood [14]. Extending our analysis to common solid tumors of childhood and adolescence, we examined tumor samples by Southern blot utilizing methylation-sensitive restriction endonucleases, focusing on the CpG island comprising the promoter, exon 1, and proximal intron 1 (Figure 1A). This tumor panel included Wilms' tumors, neuroblastomas, OS, Ewing sarcomas, and CNS tumors. Among these neoplasms, we observed that the bone sarcomas OS and Ewing sarcoma exhibited a high frequency of NNAT hypermethylation compared to embryonal tumors or CNS tumors (Table 1). In our previous analysis of the NNAT CpG island we showed methylated and unmethylated alleles present in approximately equal proportions in normal, mature peripheral blood cells (NNAT non-expressors) and pituitary tissue (NNAT expressor), reflecting the imprinted status, i.e., methylation of the transcriptionally silent maternally derived allele and lack of methylation of the transcriptionally active paternally derived allele [22]. For the present analysis, tumor samples exhibiting skewing of the allelic methylation signal ratio to ≥ 75% methylated/≤ 25% unmethylated were defined as hypermethylated because transcriptional silencing was consistently seen at this methylation threshold in pediatric acute leukemia specimens (Kuerbitz, unpublished data). Thus defined, 13 of 68 pediatric solid tumors (19.1%) exhibited NNAT hypermethylation (Table 1). Interestingly, of these 13 tumors, 12 were bone-derived tumors. We found that 8 of 11 OS tumors (73%) exhibited NNAT CpG island hypermethylation (Figure 1B, lanes 3-13, and Figure 1C), as did 4 of 10 Ewing sarcoma samples. Conversely, hypermethylation of the NNAT CpG island was observed in only 1 of 13 Wilms' tumors, and no NNAT hypermethylation was observed among the 14 primary neuroblastoma tumors that we analyzed. Similarly, we examined 20 CNS tumors of various histopathologies including high-grade and low-grade gliomas, ependymomas, and medulloblastomas. No NNAT hypermethylation was identified among these neoplasms (Table 1).
The NNAT CpG island exhibits aberrant methylation in human OS cell lines and tumor samples but not in normal bone tissues
NNAT CpG island methylation was assessed in OS cell lines. Southern blot analysis showed that the human OS cell line HOS (also called TE-85) exhibited about 60% unmethylated and 40% methylated alleles. Analysis of the HOS-derived, N-methyl-N′-nitro-N-nitrosoguanidine-transformed cell line MNNG/HOS [23], however, revealed complete absence of the normal 1.6 kb allele, reflecting acquired methylation or loss of the unmethylated allele (Figure 1A and 1B). Methylation analysis by combined bisulfite PCR restriction analysis (COBRA) confirmed the Southern blot results for HOS and MNNG/HOS cells and demonstrated extensive hypermethylation of the NNAT CpG island in the OS cell lines G-292 and MG-63 (Figure 2A and 2B). COBRA analysis of the OS cell lines U-2 OS and SaOS-2 also revealed extensive NNAT CpG island hypermethylation (Supplementary Figure 1A and 1B). Thus, NNAT hypermethylation was observed in 5 of the 6 OS cell lines examined.
To expand our analysis, we next employed COBRA to assess 22 additional primary or metastatic OS tumor samples. We first compared proportional allelic methylation quantitated by Southern blot to that of COBRA in 7 OS tumor samples and observed good agreement between the 2 methods with respect to percentage methylation (R² = 0.92, Supplementary Figure 2) and full concordance with respect to scoring for hypermethylation. We identified NNAT hypermethylation in 15 of the 22 tumor specimens (68%), thus demonstrating a high prevalence of de novo methylation or deletion of the expressed, unmethylated (paternally derived) allele in this cohort of tumors as well (Figure 2C and 2D).
Detailed descriptions of tumor histology were not available for most cases analyzed, so correlation of NNAT hypermethylation with specific OS histologies was not possible in this study. Nevertheless, we noted that tumors exhibiting NNAT hypermethylation included 2 that were designated chondroblastic and 1 designated fibroblastic. NNAT hypermethylation was identified in 16 of 21 samples (76%) derived from primary tumors and in 6 of 10 samples (60%) from metastatic lesions (lung, soft tissue, and brain). For the remaining 2 OS specimens the source was not designated. Based on this limited analysis, it did not appear that NNAT hypermethylation was specific either to a particular OS histology or to primary versus metastatic tumors.
To determine the extent of NNAT CpG island methylation in normal and developing bone tissues, we then analyzed genomic DNA samples from normal human bone, non-neoplastic human proximal femoral growth plate tissue derived from a slipped capital femoral epiphysis specimen, and cultured human mesenchymal stem cells. Quantitative methylation analysis by COBRA revealed that methylated alleles comprised 35%, 44%, and 40% of NNAT CpG island allelic signal, respectively, in bone, growth plate, and MSC samples ( Figure 2E, gel photograph not shown). Thus, our analyses identified aberrant hypermethylation of the NNAT CpG island in a total of 23 of 33 (70%) human osteosarcoma samples but not in normal bone, developing bone, or bone progenitor cells. These results suggested that aberrant hypermethylation of the NNAT CpG island occurs frequently in human OS tumors.
Since methylation analysis by either Southern blot or COBRA is based on restriction enzyme digestion and thus quantitates methylation at a single CpG dinucleotide, we next sought to determine whether the methylation levels at the CpG sites interrogated by these techniques were representative of methylation more broadly across the NNAT CpG island. We therefore performed bisulfite sequencing of HOS- and MNNG/HOS-derived DNA to evaluate methylation at each of the 24 CpGs within the 208 base pair COBRA amplicon. Sequencing at least 10 clones, selected at random from cloned, PCR-amplified products of bisulfite-modified DNA, we found that 2 distinct populations of alleles were discernible among HOS-derived clones. One population, comprising 9 of 17 clones, was extensively methylated across the CpG island, whereas the remaining 8 clones were nearly devoid of methylation (Figure 2F). This result was consistent with the site-specific allelic methylation levels determined by Southern blot and COBRA assays and is compatible with the known imprinted status of the gene in normal cells. By contrast, all clones from MNNG/HOS-derived DNA exhibited methylation at nearly all CpG sites across the NNAT CpG island amplicon. Based on these results we concluded that the site-specific methylation levels quantitated by Southern blot and COBRA were representative of methylation broadly across the NNAT CpG island.
Aberrant NNAT CpG island methylation is associated with transcriptional repression in OS cell lines
Our study identified aberrant hypermethylation of the NNAT CpG island in 70% of osteosarcoma samples analyzed, indicating that this epigenetic event frequently accompanies osteosarcoma tumorigenesis. In our prior analysis of pediatric acute leukemia we found that NNAT CpG island hypermethylation was consistently associated with transcriptional silencing. We therefore analyzed NNAT mRNA expression in OS cell lines to determine whether NNAT hypermethylation is likewise associated with transcriptional silencing. Semi-quantitative endpoint RT-PCR was performed on total mRNA collected from 5 OS cell lines in log phase growth. NNAT mRNA expression was observed only in HOS cells, which exhibited the normal, hemimethylated allelic signal at the NNAT CpG island. HOS cells expressed both NNATα and NNATβ mRNA splice variants. No expression of NNATα or NNATβ mRNA was evident in any of the remaining 4 OS cell lines, all of which exhibited hypermethylation at the NNAT CpG island ( Figure 3A). Western blot analysis of whole cell protein lysates prepared from OS cell lines in log phase growth confirmed expression of both NNAT isoforms in hemimethylated HOS cells, but not in the hypermethylated cell lines MNNG/HOS or U-2 OS ( Figure 3B). Thus, the normal allelic hemimethylation signature in HOS cells was associated with expression of NNAT mRNA and protein isoforms while no NNAT expression was detectable in OS cell lines exhibiting aberrant CpG island hypermethylation.
NNAT demethylation is associated with derepressed mRNA expression in OS cell lines
Because relaxation of transcriptional repression with demethylation is a hallmark of epigenetic silencing, we asked whether pharmacologically-mediated demethylation would result in NNAT expression in OS cell lines. MNNG/ HOS and U-2 OS cells were treated with the DNA methyltransferase inhibitor 5-aza-2'-deoxycytidine (5aza-dC). Both cell lines exhibited NNAT hypermethylation and lacked expression of NNAT mRNA at baseline. Quantitating NNAT CpG island methylation by COBRA, we found that DNA from both cell lines was substantially demethylated after 72 hours of 5aza-dC exposure (≥ 40% unmethylated alleles), while DNA from untreated control cells remained heavily hypermethylated (<10% unmethylated alleles, Figure 4A, gel image not shown). Samples of total mRNA from the 5aza-dC-treated and control cell lines were then analyzed for NNAT mRNA expression by semi-quantitative RT-PCR. We found that both NNAT splice variants were detectable in the demethylated, 5aza-dC-treated samples, while untreated control samples remained negative for NNAT mRNA ( Figure 4B). This restoration of NNAT mRNA expression in OS cell lines following pharmacologically-mediated demethylation suggested that the absence of baseline expression in these cells was the result of transcriptional repression associated with extensive CpG island hypermethylation.
NNAT expression is detectable in cells of the bone growth plate
We reasoned that NNAT expression would likely be detectable in cells during normal bone development if loss of NNAT expression is relevant to the biology of osteosarcoma tumorigenesis. We therefore assessed NNAT expression in normal mouse and human bone growth plate. Utilization of the antibody for identification of NNAT protein by immunohistochemical analysis of human tissue was first validated by testing human anterior pituitary tissue, previously shown to express NNAT at high levels. Robust expression was observed in this tissue (Supplementary Figure 3). Mouse distal femoral growth plate sections were then analyzed, and NNAT expression was detected in osteoblasts (Figure 5A). Similarly, NNAT expression was detected in osteoblasts and chondrocytes from the femoral growth plate of a 13-year-old human (Figure 5B, left panel) and in osteoblasts of human infant rib-end growth plate (Figure 5B, right panel). These results indicated that NNAT is expressed in mesenchyme-derived cells during the course of normal endochondral bone growth in humans and mice.
Enforced expression of NNAT in OS cells suppresses colony formation and transmembrane migration but does not inhibit cell proliferation
The recurrent methylation-associated loss of expression observed in human OS cells suggested a possible tumor suppressor role for NNAT [15]. We therefore compared colony forming efficiency in OS cells constitutively expressing NNAT versus nonexpressor controls. U-2 OS cells and MNNG/HOS cells were transfected with expression vectors encoding either NNATα or NNATβ or with an empty vector lacking the NNAT cDNA. Cells were seeded and grown under antibiotic selection. When macroscopic colonies were counted, a significant reduction in colony number was observed in MNNG/HOS cells transfected with either NNATα or NNATβ compared to the vector-only control (Figure 6A, 6B). A similar decrease in clonogenicity was observed in U-2 OS cells transfected with NNATα, while colony formation in cells transfected with NNATβ was not statistically different from that of vector-only control transfectants (Figure 6C). In parallel experiments, transfectant colonies were also selected and expanded to derive stable cell lines expressing NNATα or NNATβ. NNAT expression was confirmed in these cell lines by western blot analysis (Figure 6D). Cell proliferation during log phase growth was then compared between MNNG/HOS-derived stable NNAT expressor cells and vector-only controls. No difference was observed in proliferative rate between cells expressing NNATα or NNATβ or empty-vector non-expressors compared to untransfected control cells (Figure 6E). These experiments indicated that NNAT expression suppressed colony formation without inhibiting the proliferation of established OS cells.
NNAT expression suppresses transmembrane migration potential in U-2 OS cells
To gauge the potential effect of NNAT expression on the invasive capacity of U-2 OS cells, we examined the in vitro migration of NNAT-expressor cells versus empty-vector controls across Matrigel®-coated membranes, a surrogate for invasive potential. Using serum-depleted and serum-replete environments to provide a transmembrane gradient promoting cell migration, we observed decreased transmembrane cell migration in the NNATα-expressor cell line compared to the control (Figure 6F).
NNAT expression is associated with an attenuated decay of [Ca2+]i levels following calcium release
Expression of NNAT has been associated with increased intracellular calcium levels ([Ca2+]i) in previous studies, suggesting that NNAT may play a role in the regulation of calcium homeostasis in human cells [19][20][21]. We therefore compared [Ca2+]i in human OS NNAT expressors versus nonexpressors. To avoid inter-cell line variability in calcium response, we engineered human MNNG/HOS osteosarcoma cells for inducible NNAT expression. Robust induction of NNAT expression was achieved using the LacSwitch II vector system (Agilent). Using this system, NNAT transcription, blocked at baseline by a constitutively expressed lac repressor, was derepressed by treating cell cultures with isopropyl β-D-1-thiogalactopyranoside (IPTG). Tight induction of NNAT expression with IPTG treatment, with little or no detectable NNAT in IPTG-untreated cells, was confirmed by western blot analysis (Figure 7A). [Ca2+]i was then tracked in individual cells following ATP-stimulated mobilization of calcium stores for NNAT expressor cells with and without IPTG induction and in empty-vector control cells treated with IPTG. In analysis of multiple cell tracings, no difference in the dynamics of the [Ca2+]i increase, i.e., the rate of [Ca2+]i increase or the peak [Ca2+]i, was observed in association with IPTG-induced NNATα expression (Figure 7B). However, the decay of [Ca2+]i from the post-release peak throughout the decline back to baseline was markedly attenuated in cells with IPTG induction of NNATα compared to uninduced cells or IPTG-treated vector-control nonexpressor cells. The [Ca2+]i tracing following ATP stimulation of IPTG-untreated NNATα cells (i.e., lacking induced NNATα expression) was not appreciably different from those of the IPTG-treated empty-vector control cell line, suggesting that the IPTG itself had little or no effect on the dynamics of Ca2+ release and re-uptake. A similar attenuation of [Ca2+]i decay was observed in MNNG/HOS cells engineered for IPTG-inducible NNATβ expression (Supplementary Figure 4). Together these data suggested that NNAT expression may have inhibited reuptake of Ca2+ into intracellular stores or otherwise delayed the return of [Ca2+]i to baseline after Ca2+ release. The results provided further support for a role for NNAT in the regulation of intracellular Ca2+ homeostasis.
NNAT expression enhances thapsigargin cytotoxicity in OS cells
Intracellular Ca2+ homeostasis is critical to the normal processing of nascent proteins in the ER, and disruption of Ca2+ homeostasis can trigger the ER stress response [24]. We therefore asked whether NNAT expression could affect the cytotoxicity resulting from ER stress induced by thapsigargin. Thapsigargin is an inhibitor of SERCA2b and induces ER stress associated with elevation of [Ca2+]i. Prolonged exposure to this agent triggers apoptosis in many cell types. We therefore assessed cell survival in NNATα- or NNATβ-expressor OS cells treated with thapsigargin compared to control nonexpressors. We found that MNNG/HOS cells constitutively expressing either NNATα or NNATβ exhibited additive cytotoxicity (decreased cell survival) with thapsigargin treatment at 250 nM or 500 nM at 72 hours compared to nonexpressor control cells (Figure 8). This result suggested that NNAT expression augmented ER stress-associated cytotoxicity induced by thapsigargin.
DISCUSSION
We first reported a potential role for epigenetic dysregulation of the NNAT gene in human oncogenesis, identifying aberrant 5′ CpG island hypermethylation associated with transcriptional silencing in pediatric acute leukemias [14]. The present study was undertaken when a preliminary screen of childhood solid tumors identified frequent aberrant NNAT hypermethylation in OS, in contrast to embryonal tumors such as neuroblastoma and Wilms' tumor or CNS tumors (Table 1). Whereas the normal human bone and cultured primary human mesenchymal stem cell samples we examined exhibited the hemimethylated pattern that is expected at an imprinted gene, we found extensive aberrant hypermethylation of the NNAT CpG island promoter region in 23 of 33 primary OS samples and in 5 of the 6 OS cell lines analyzed. Importantly, while restriction digestion-based methods such as Southern blot analysis and COBRA typically interrogate single CpG dinucleotides, bisulfite sequencing of OS cell line DNA confirmed that the quantitative methylation levels indicated by these site-specific methods were representative of allelic methylation levels across the NNAT CpG island.
DNA methylation-associated, allele-specific expression or transcriptional repression is well established in genes subject to imprinting or to X-chromosome inactivation in normal cells [25]. Similarly, transcriptional silencing of tumor suppressor genes resulting from aberrant CpG island hypermethylation may be regarded as the functional equivalent of mutation-mediated loss of activity in cancers [12]. While we identified NNAT hypermethylation in OS employing a dedicated screen of childhood solid tumors, aberrant NNAT silencing has also been identified in human pituitary adenomas [15] and in human non-small-cell lung cancers [16], where NNAT transcriptional derepression was identified in mRNA expression screens employing RNAi-mediated or pharmacologically mediated demethylation. In agreement with these reports, we found that extensive NNAT CpG island methylation was associated with transcriptional repression and absence of protein expression in 4 of 5 OS cell lines tested. Expression of the 2 mRNA splice variants and 2 protein isoforms was evident only in HOS cells, and these cells exhibited the normal proportional (~50%) allelic methylation ratio.
Because the epigenetic mark we interrogated for NNAT silencing is DNA methylation, we ascertained that NNAT derepression could be accomplished with CpG island demethylation. We treated OS cell lines with 5aza-dC, a well-characterized and widely-used demethylating agent [26][27][28]. Important to this analysis, 5aza-dC, a nucleoside analogue that is specific for DNA, acts via inhibition of DNA methyltransferase and does not inhibit RNA or protein synthesis [29]. The resultant loss of NNAT CpG island methylation and concomitant gain of NNAT mRNA and protein expression observed in the 5aza-dC-treated OS cell lines confirmed the epigenetic basis of NNAT silencing in OS cells. Further study to optimize NNAT derepression for therapeutic application may utilize, additionally or alternatively, epigenetic modifiers targeting histone modifications or other epigenetic marks.
To assess a potential tumor suppressor role for NNAT we tested colony formation in OS cell lines expressing NNAT isoforms. We found that enforced expression of either the α or the β NNAT isoform was associated with significant suppression of clonogenicity compared to empty-vector controls in MNNG/HOS cells. This result is in agreement with data reported by Zhong et al. in human non-small-cell lung cancer [16] and by Revill et al. in murine pituitary adenoma cells [15]. Significant suppression of clonogenicity was not observed with expression of NNATβ in U-2 OS cells, indicating that this phenotype may be cell line-specific. In contrast to the reported observations in pituitary adenoma cells [17], inhibition of proliferation did not accompany NNAT expression in our analysis. This discrepancy may be related to different experimental methods. Whereas Dudley et al. assayed proliferation in cells engineered for inducible NNAT expression, cells stably transfected for constitutive NNAT expression were utilized for proliferation experiments in the present study. In the constitutive expression system, proliferative capacity is necessarily established as a function of clone selection. Suppression of clonogenicity was thus more likely the result of inhibition of early events in in vitro colony formation rather than of expansion of established micro-colonies in the OS cells we analyzed.
When we assessed transmembrane migration, a commonly-employed surrogate for invasive potential, we observed decreased migration in NNAT expressors compared to controls, suggesting possible suppression of invasive potential associated with NNAT expression in OS. Renner and colleagues also evaluated NNAT-dependent cell migration in an in vitro wound healing assay and noted decreased migration in liposarcoma cells transfected with NNATα [30]. These results showing decreased cellular migration with NNAT expression contrast with results reported by Ryu et al., in which miRNA-708, a suppressor of metastasis in human breast cancer cells, was found to downregulate NNAT expression [31]. In that study, the metastatic phenotype was rescued from miRNA-708-dependent suppression by enforced expression of a NNAT mRNA that was rendered refractory to miRNA-708 downregulation by mutation of the 3′UTR. These disparate results suggest that the effect of NNAT expression vis-à-vis metastasis is also tumor-specific or even cell-specific.
Other investigators have demonstrated an effect of NNAT expression on modulation of cellular [Ca2+] [18-21]. Tracking [Ca2+]i at the single-cell level, we found that NNAT expression attenuated the decline back to baseline following the release of stores by ATP stimulation. A role for NNAT in intracellular Ca2+ homeostasis is suggested by several lines of evidence. NNAT encodes a proteolipid and shares protein sequence homology with phospholamban, an inhibitor of SERCA2, the cardiac sarcoplasmic reticulum Ca2+ ATPase, type 2 [18,32]. Immunofluorescence experiments have suggested that NNAT localizes to the ER membrane, the major cellular repository of Ca2+ [19,20]. Physical interaction between NNAT and SERCA2 in embryonic stem (ES) cells was suggested by co-immunoprecipitation experiments reported by Lin et al., who found that NNAT expression in ES cells mediated induction of neural differentiation in a Ca2+-dependent way [21]. Our observation that NNAT expression did not alter the rate or peak level of [Ca2+]i increase in OS cells upon ATP-stimulated Ca2+ release but rather slowed the return to baseline [Ca2+]i levels is conceptually consistent with inhibition by NNAT of SERCA2-dependent sequestration of Ca2+ from the cytoplasm back into the ER.
We found that NNAT expression in OS cells also augmented cytotoxicity mediated by thapsigargin, an inhibitor of SERCA that induces apoptosis via the ER stress response [33]. This effect on cellular sequelae following perturbation of [Ca2+]i further supports, albeit indirectly, an action of NNAT relating to SERCA activity and [Ca2+]i homeostasis, and supports previous data showing that expression of NNATβ induced expression of ER stress-related proteins and triggered ER stress-mediated apoptosis in pancreatic beta cell-derived cells [20]. A recent report has suggested that NNAT-associated dysregulation of [Ca2+]i and the associated ER stress underlie the neuropathology in Lafora disease [34].
NNAT expression has been assessed in relation to multiple tumor types, and a consistent picture has not emerged to date. In addition to the present study focused on osteosarcoma, methylation-associated silencing of NNAT together with expression-associated suppression of neoplastic phenotypes such as colony formation, proliferation, or cell migration has been reported in non-small-cell lung cancer [16], liposarcoma [30], and pituitary adenoma [15,17]. Together, these reports support a tumor suppressor role for NNAT. Because we did not identify tumors exhibiting NNAT promoter hypermethylation among the neuroblastomas we examined, we did not evaluate NNAT expression in these tumors. Tajiri and colleagues found, however, that lower-level NNAT expression correlated with an unfavorable prognosis in neuroblastoma [35], consistent again with a more aggressive phenotype in tumors lacking NNAT expression. Ryu et al. [31] suggest the converse: that expression of NNAT is a driver, not a suppressor, of neoplastic phenotypes. Uchihara et al., in their study of NSCLC, correlated NNAT expression with an unfavorable prognosis [36], in apparent contradiction to the tumor suppressor role for NNAT suggested by Zhong and colleagues in that disease [16]. Xu and colleagues found that NNAT overexpression was associated with shortened survival in patients with glioblastoma multiforme [37]. Similarly, overexpression of NNAT has been noted in medulloblastoma, and enforced NNAT expression, particularly co-expression of both NNAT isoforms, in a medulloblastoma cell line resulted in increased clonogenicity in soft agar and in tumor xenograft formation [38]. This apparent conflict between promotion and suppression of neoplastic phenotypes may be reconcilable conceptually by noting the consistent finding in recent studies, including the present analysis, of an effect of NNAT expression on [Ca2+]i. If NNAT functions as an inhibitor of SERCA2, so that silencing or upregulation of NNAT expression results in transient or sustained changes in [Ca2+]i, then the phenotype resulting from NNAT silencing may vary among tumor types depending upon the dynamics of the Ca2+ response, the lineage-specific or differentiation stage-specific portfolio of calcium-responsive genes and proteins, and/or the cell's sensitivity to ER stress responses, including apoptosis resulting from perturbation of Ca2+ homeostasis. Detailed analysis of the events downstream of Ca2+ release, and of the effect of NNAT expression or its absence on these cell-specific processes or properties, will be necessary to facilitate a more mechanistic understanding of the roles of NNAT in tumor promotion versus tumor suppression.
The present analysis suggests a tumor suppressor role for NNAT expression in human OS, potentially resulting from sensitization to ER stress-associated cytotoxicity. If confirmed mechanistically, NNAT silencing may represent a potential target for therapy utilizing epigenetic modifiers to achieve derepression. The potential utility of demethylation therapy with decitabine (5aza-dC) to reverse estrogen receptor silencing and inhibit neoplastic phenotypes in OS cells has been demonstrated recently [39]. Compiling a panel of such epigenetic targets and tailoring epigenetic modifier therapy to achieve optimal gene derepression may represent a new therapeutic modality for OS. This analysis suggests that such a strategy may be applicable to a majority of osteosarcoma cases.
MATERIALS AND METHODS

Human tissue samples

Primary or metastatic human OS tumor samples were obtained from initial patient biopsies or tumor resections performed at Akron Children's Hospital (Akron, OH, USA) and Rainbow Babies and Children's Hospital (Cleveland, OH, USA). Additional human osteosarcoma samples were obtained from the Cooperative Human Tissue Network (CHTN, Columbus, OH, USA). Normal human bone genomic DNA was prepared from surgical specimens from excision of supernumerary digits in accordance with Akron Children's Hospital IRB regulations. Human growth plate genomic DNA was kindly provided by Dr. William Landis (University of Akron, USA). Sections of mouse distal femur growth plate tissue were kindly provided by Dr. Walter Horton, Northeast Ohio Medical University, and were prepared in accordance with Association for Assessment and Accreditation of Laboratory Animal Care accreditation requirements. Sections of human tissues including anterior pituitary, infant rib-end growth plate, and human capital femoral physis were obtained from archived tissue specimens in the Akron Children's Hospital Department of Pathology. Acquisition of all human tissue samples was regulated by IRB-approved study protocols.
Preparation of genomic DNA
High molecular weight genomic DNA was prepared from fresh or flash-frozen tissue samples. Tumor specimens were finely minced or pulverized, and DNA was extracted from tumor fragments or powder by SDS/proteinase K digestion and phenol/chloroform extraction. For cultured cell lines, genomic DNA was purified from lysed cells using the 5PRIME ArchivePure DNA Blood kit (ThermoFisher, Waltham, MA, USA) according to the manufacturer's directions.
Southern blot analysis
Southern blot analysis for NNAT methylation was performed as described previously [14]. Blots were hybridized with the 32P-labeled 1.6 kb upstream NNAT fragment to identify a 1.6 kb digestion fragment representing alleles lacking methylation at the NruI site within the NNAT CpG island versus a 6 kb fragment representing methylated alleles (Figure 1). The proportion of methylated alleles was quantitated for each sample by densitometric analysis of the autoradiograph under visible light utilizing an EC3 imaging system with a digital CCD camera supported by VisionWorks LS 6.0 software (UVP, Upland, CA, USA).
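The densitometric calculation implied above can be written out explicitly. The following is an illustrative sketch (variable names ours), in which the 6 kb band represents NruI-resistant (methylated) alleles and the 1.6 kb band represents cut (unmethylated) alleles.

# Illustrative sketch of the Southern blot densitometry described above.
def southern_methylated_fraction(intensity_6kb: float,
                                 intensity_1_6kb: float) -> float:
    """Proportion of methylated NNAT alleles in one lane."""
    return intensity_6kb / (intensity_6kb + intensity_1_6kb)

# A hemimethylated (normally imprinted) sample should give ~0.5.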
Combined Bisulfite Restriction Assay (COBRA)
One microgram of genomic DNA was subjected to bisulfite modification using the EpiTect Fast Bisulfite Conversion kit (Qiagen, Waltham, MA, USA). Modified DNA was subjected to PCR amplification (94°C × 40 sec; 52.2°C × 40 sec; 72°C × 40 sec; 43 cycles) with primers F-AATCTTTATTCCCTAACAAAC and R-GGGTGGGATAGGGTTTTTAATT, which are specific to modified NNAT sequences 1530-1551 and 1716-1738, respectively (Accession U31767). To the 208 bp NNAT amplification product, 500 ng of a 779 bp human cMYC PCR fragment was then added as a TaqαI digestion control. The cMYC fragment was amplified from unmodified human DNA (Accession L00057; bases 361-1140; F-CTACGGAGGAGCAGCAGAGAAAG, R-AGAAGCGGGTCCTGGCAGCGG). Samples were digested with TaqαI. To control for completeness of bisulfite modification, duplicate reactions were digested with ApoI, since modification creates an ApoI site at NNAT position 1563. Digested samples were fractionated in polyacrylamide gels and visualized by UV illumination following ethidium bromide staining. Band intensity was quantitated by digital imaging as described above. The proportion of methylated NNAT alleles was calculated as the sum of the 81 bp and 127 bp band intensities divided by the sum of the 81 bp, 127 bp, and 208 bp band intensities. Only samples exhibiting complete (≥ 95%) modification and digestion were analyzed.
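Written as code, the stated calculation is as follows (band-intensity variable names are ours). TaqαI cuts only alleles in which the CpG was preserved as CG (methylated), yielding the 81 bp and 127 bp fragments, while the uncut 208 bp band represents unmethylated alleles.

# The COBRA methylation calculation stated above, made explicit.
def cobra_methylated_fraction(i_81: float, i_127: float, i_208: float) -> float:
    """Methylated fraction = (81 bp + 127 bp) / (81 bp + 127 bp + 208 bp)."""
    digested = i_81 + i_127
    return digested / (digested + i_208)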
Bisulfite sequencing
One microgram of bisulfite-modified genomic DNA from OS cell lines was subjected to 35 cycles of PCR amplification with the bisulfite modification-specific NNAT CpG island primers as above. Amplification products were cloned into the vector pCR2.1 (ThermoFisher, Waltham, MA, USA). At least 10 independent clones from each PCR reaction were submitted for sequencing. The prevalence of methylation in genomic DNA was determined at each of the 24 CpG sites within the amplimer from the proportion of cloned alleles exhibiting conversion to TG (unmethylated) versus preservation of CG (methylated).
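The per-site tally can be sketched as follows; this is a hypothetical illustration (sequence handling in the actual study may have differed), counting at each of the 24 CpG positions the cloned alleles that retained CG (methylated) versus those converted to TG.

# Hypothetical sketch of the per-CpG-site methylation tally described above.
def per_site_methylation(clones: list[str],
                         cpg_positions: list[int]) -> list[float]:
    """clones: aligned cloned-amplimer sequences in shared coordinates;
    cpg_positions: 0-based indices of the 24 CpG sites in the amplimer.
    Returns the methylated fraction at each site."""
    fractions = []
    for pos in cpg_positions:
        methylated = sum(1 for seq in clones if seq[pos:pos + 2] == "CG")
        fractions.append(methylated / len(clones))
    return fractions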
RT-PCR
Total RNA was prepared from cultured cells with the 5PRIME PerfectPure RNA Blood kit (ThermoFisher, Waltham, MA, USA) and was reverse transcribed (SuperScript First-Strand cDNA Synthesis kit, ThermoFisher, Waltham, MA, USA). End-point PCR was performed as described previously [14] using primers which amplify both NNAT mRNA isoforms, NNATα (262 bp) and NNATβ (181 bp). Duplicate reactions were amplified with primers for the ubiquitously-expressed SNRPD3 gene (500 bp). DNA was electrophoretically separated on a 2% agarose gel containing GelRed and detected under ultraviolet light.
5-aza-2′-deoxycytidine (5aza-dC) treatment
U-2 OS and MNNG/HOS OS cells were treated with freshly-prepared 5 µM 5aza-dC (Sigma-Aldrich, St. Louis, MO, USA). DNA and RNA were extracted from treated and untreated cells at 72 hours and evaluated by COBRA for NNAT promoter region methylation status and by RT-PCR for expression of NNAT mRNA.
Immunohistochemistry for NNAT
Sections of formalin-fixed, paraffin-embedded normal tissue or primary OS samples were subjected to heat-based epitope retrieval, and peroxide blocking was performed according to standard protocols. Blocking was performed with 5% goat serum (sc-2043) and 1% bovine serum albumin (BSA) (Sigma). Slides were then incubated overnight with NNAT antibody (ab27266, 1:250), or with 1% BSA alone as a negative control. HRP-conjugated secondary antibody (ab6721, 1:500) was used for detection with chromogen (Dako, Agilent Technologies, Santa Clara, CA, USA). Slides were counterstained with hematoxylin and analyzed by light microscopy.
OS cell line transfections
The NNATα and NNATβ coding sequences (262 bp and 181 bp, respectively) were amplified by RT-PCR from total RNA extracted from the NNAT expressor OS cell line HOS (TE-85) using primers F: CCA ACA GCG GAC TCC GAG ACC AG and R: GTG TAT GCC AGC TTC TGC AGG GAG. PCR products were cloned into the expression vector pcDNA 3.1Neo (ThermoFisher, Waltham, MA, USA). The integrity of the wild-type NNAT sequences encoded by the expression vectors was confirmed by DNA sequencing. OS cell lines were transfected with linearized vector using the FuGENE 6 transfection reagent (Promega, Madison, WI, USA), and colonies were grown under G418 selection (400 µM) for clonogenicity assays or for isolation of stable NNAT-expressor cell lines. For stable cell lines, macroscopic colonies were selected, expanded, and screened by immunoblot analysis for NNAT expression. To generate stable, inducible NNAT expression in OS cell lines, the LacSwitch II system (Agilent Technologies) was utilized according to the manufacturer's directions. The NNATα or NNATβ cDNA was subcloned into the vector pORSVI/MCS to generate expression vectors. Cells were then sequentially transfected with the vector pCMVLacI followed by pORSVIHNNA, pORSVIHNNB, or pORSVI/MCS. Stable cell lines were isolated under the antibiotic selection appropriate for each transfection. Cell lines exhibiting negligible NNAT expression at baseline with robust NNAT expression upon treatment with isopropyl β-D-1-thiogalactopyranoside (IPTG) were utilized for experiments.
Clonogenicity assays
Following transfection, cells were seeded into 100 mm dishes at uniform density and subjected to G418 selection. When colonies were macroscopically visible, cells were fixed and stained on the plate with methylene blue. Colonies were counted by visual inspection. Experiments were repeated three times.
Cell proliferation and thapsigargin cytotoxicity assays
Cells were seeded at 5,000 cells per well into 96-well tissue culture plates in complete medium without selective antibiotic. Cell numbers were then measured daily, employing metabolic activity as a surrogate for viable cell number, using the CellTiter 96 Assay (Promega, Madison, WI, USA) according to the manufacturer's instructions. For cytotoxicity assays, cells were cultured in 96-well plates overnight and then treated with thapsigargin (Sigma-Aldrich) or the DMSO-based dilution vehicle. Cell counts were determined as above.
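The survival readout implied by this assay, the treated-well signal normalized to vehicle-treated controls, can be sketched as below. The normalization shown is an illustrative assumption, not a procedure taken from the paper.

# Illustrative sketch: percent survival from metabolic-activity readings,
# normalizing thapsigargin-treated wells to DMSO vehicle-control wells.
import statistics

def percent_survival(treated_od: list[float],
                     vehicle_od: list[float],
                     blank_od: float = 0.0) -> float:
    treated = statistics.mean(treated_od) - blank_od
    vehicle = statistics.mean(vehicle_od) - blank_od
    return 100.0 * treated / vehicle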
In vitro invasion assay
Twenty-four-well plate cell culture inserts with 8.0 µm pore size (BD Biosciences, Franklin Lakes, NJ, USA) were coated with 50 µl of 0.5 mg/ml Matrigel® matrix solution (BD Biosciences). Serum-containing medium without selective antibiotic was placed in the lower chambers of the wells, and 200 µl of cell suspension at 5 × 10⁴ cells/ml in serum-free medium without selective antibiotics was dispensed onto the matrix-coated inserts. Cells were incubated for 72 hours at 37°C in 5% CO2. After removing noninvading cells from the upper chamber with a cotton swab, invading cells were fixed with 10% formalin. After the inserts were stained with hematoxylin, cells were counted visually by light microscopy. Four inserts were examined for each cell line per experiment. The experiment was repeated three times. | 9,073.4 | 2020-05-19T00:00:00.000 | [
"Biology",
"Medicine"
] |
Cajun Vernacular English: A Study of a Reborn Dialect
Cajun English is a dialectal variety of English spoken in the southern areas of Louisiana which shows a strong influence of the French language. It appeared a century ago, when French speakers were forced to change their language to English. Yet after the 1950s, because of racial segregation (Dubois and Horvath, 1999) and the political standardization of English as the main language for American speakers, this dialect was about to disappear. Despite this fact, the current situation of Cajun English is positive in the sense that it has been recovered by the youngest generations of speakers, who are proud of being Cajun.
The aim of this paper, thus, is to present the linguistic and social characteristics of Cajun Vernacular English. Concerning language, it presents peculiarities such as i) the elimination of final consonants, which has grammatical consequences for the conjugation of verbs, ii) a reduction or absence of glides in the four long stressed vowels (Thomas, 2004), iii) a heavy nasalization of vowels, and iv) the dropping of the aspirated consonant aitch and the replacement of interdental fricatives by stops, producing what is called "the paradigm 'dis, dat, dese, dose'" (Rubrecht, 1971), among others. Concerning social characteristics, this paper explores the reasons why this dialect is mainly used for tourism purposes, as stated by Dubois and Horvath (2003).
II. Introduction
The Oxford Dictionary defines the term dialect as "a regional variety of language distinguished by features of vocabulary, grammar and pronunciation from other regional varieties and constituting together with them a single language". There are a great number of dialects in the English language, and the reasons which make them unique are very different. Cajun Vernacular English is a variety of English included within the Southern American English dialects, sharing some of their variables but with a great influence from the French language, which makes it distinctive from the rest of the southern dialects. French was brought to the Cajun-speaking areas by Acadians who moved from Nova Scotia (Canada) when the British took control of their lands around 1765 (Dubois and Horvath, 2004: 407). Until the beginning of the twentieth century, French was the only language spoken at home by the descendants of these French speakers, but English became progressively more used, and Cajun English appeared as a result. In three generations the dialect became widely used but ridiculed, then rejected by its speakers, which made it almost disappear, and finally it was rescued by the youngest generations, who feel Cajun is part of their own identity. This article deals with the characteristics of Cajun Vernacular English and the reasons which led to what has been called 'the Cajun Renaissance'.
III. Historical Background
The Cajun speech community covers the Gulf Coast from Texas to Mississippi, but it is mainly concentrated in the small rural towns of southern Louisiana, with Lafayette as its metropolitan center (see Appendix 1). Cajuns are the descendants of Acadians from Nova Scotia, Canada, who went to French Louisiana around 1765 when the British took control of their lands. In Louisiana they came into contact with other French-speaking populations as well as people who spoke other languages. The United States purchased French Louisiana in 1803, and although English became the official language of the state in 1812, Cajuns continued speaking French as their first language. They lived (as many continue to live today) in small towns, and most of them were poor and had little education. Although some French descendants were considered equal citizens, Cajuns were often ridiculed and made the butt of jokes.
In the 1930s English was the sole language of education in Louisiana, but it was not used in the Cajun communities. In addition, Cajuns did not learn English properly because most Cajun speakers attended school irregularly or left it early. English may have been the language of the classroom, but French was the language of the streets. "It is this generation, people who are now sixty years or older who were forced to change their language and became the original speakers of the dialect we have labeled Cajun Vernacular English" (henceforth CajVE) (Dubois and Horvath, 1998a: 269), providing the linguistic source for succeeding generations. However, when the USA entered World War II, Cajun French suffered an important decline. People who joined the army were already bilingual or semi-bilingual due to the effort of the Louisiana state government to enforce the speaking of English. After WWII, the social changes that followed had a great influence on CajVE speakers. Children who had grown up in French-speaking homes began to learn English better than their parents, attended school more regularly and became wealthier. Many of this generation of speakers started to reject French as a language in order to avoid the negative stereotypes associated with being a Cajun speaker, and adopted American cultural ways. They began to use English at home, and even the bilingual speakers abandoned French (Dubois and Horvath, 1999: 292).
Decades later, Cajun was only spoken by old people, and the middle generations rejected it completely. As a consequence, the young generations felt a sense of loss of Cajun culture and began to recover it. This process has been called the Cajun Renaissance. Nowadays, Cajun has an unprecedented status among its former original speaking territories but also among outsiders. Tourists are attracted by Cajun food, music and festivities, which also have the support of the local government. However, bilingualism has almost disappeared and is very unlikely to be retrieved.
As not all Cajuns identify themselves with Cajun English, we can characterize CajVE as an "ethnolect", particularly because that term seems to describe a speech community that changes its language of everyday communication to the politically dominant language, in this case English, and because the variety arose from the language shift process that occurred (Dubois and Horvath, 2004: 408).
IV. Linguistic characteristics of CajVE
There are two main phonological characteristics in CajVE. The first one is the elimination of final consonants. CajVE speakers do not pronounce most final consonants, and they also drop some final consonant clusters [nd, st, lm]. This occurs not only in monomorphemic words but also in bimorphemic ones. Speakers delete the final [t] in late or rent, [d] in hand, food or wide, [θ] in both, [r] in together, [l] in school or simple, and both final [r] and [k] in New York, making the [rk] cluster disappear. There are also more important final absences, like final [v] in twelve, [s] in house or fence, [n] in nine, [m] in mom, [f] in life, and even the absence of [ʃ] in fish. This phonological rule has an important consequence in grammar: final consonants which are also morphological markers are deleted at word-ends. There will be further explanation on this matter in the following sections.
The second phonological characteristic is the reduction or absence of glides in the four long stressed vowels [i], [e], [o] and [u] in CajVE, similar to Southern English with the difference that stressed vowels and diphthongs are not prolonged (Thomas, 2004: 303-304). The vowel [i] is pronounced in words such as me, street, and read with [i:], and the vowel [e], as in way, make and take, is pronounced with [e:]. The [o] in words such as know, both, and over, and the [u] in food, school, and two, are also realized as monophthongs [o:, u:], respectively. Mid vowels [o, e] are monophthongized more frequently than high vowels [i, u]. The diphthongs [ai], [aʊ] and [ɔi] also become monophthongs: words with [ai] like fire, price or prize and with [aʊ] like mouth or power are pronounced with [a:], and words with [ɔi] like choice or oil lose their glide and are pronounced with [ɔ:]. These two characteristics make CajVE qualitatively distinct from Southern English as a separate dialect, and especially from American English. Other phonological features of CajVE include heavy nasalization of vowels (e.g., the nasalized vowels of Alabama), the non-aspiration of [p, t, k] in word-initial position preceding a stressed vowel (pat pronounced like bat) or before [r, l, w, j] (plant, table and car), [h] dropping (hair pronounced like air), and the replacement of the interdental fricatives [θ, ð] by the stops [t, d]. As Rubrecht (1971: 152) points out: "the paradigm 'dis, dat, dese, dose' is well-known in Louisiana to describe how Cajuns talk". Although it is not clear how the substitution in these demonstrative determiners came about, "one fact is fairly clear, all of them are stigmatized" (Dubois and Horvath, 2004: 411).
As pointed out above, the phonological elimination of final consonants has grammatical consequences. The present tense marker of the third person singular (-s) disappears, e.g. 'He give me six', 'She go with it'. There is also an absence of the past tense morpheme (-ed) in weak verbs, e.g. 'I stay two month', 'She wash my face'. Furthermore, CajVE has a high rate of absence of the auxiliary verb to be in the third person singular (is) and in the second person singular and the plurals (are), e.g. 'She pretty', 'He gonna meet her', 'What we doing?', 'You supposed to know that'. Figure 1 represents the percentage of occurrence of these grammatical features among CajVE speakers, distinguished by age and mother tongue. As we can see, the grammatical features mentioned above occur on many occasions, with the older generations, who are original speakers of French, being the ones who use these features most frequently. "ARE absence" is the most usual feature in CajVE, occurring in more than 70% of possible occasions.
V. Social characteristics of CajVE
Cajun English is nowadays recognized as an essential part of Cajun culture and is preserved by local institutions. In 1968 Louisiana was declared officially bilingual, French was declared mandatory in high schools, and the state established institutional relations with other francophone nations. Cajun speakers born after the 1970s are influenced by the Cajun Renaissance; they are proud to be Cajun and have also obtained important economic benefits due to the expanding tourism. However, the fact that Cajun is displayed mostly to outsiders reinforces the use of English as the official Cajun language, despite the fact that French is the original language of the Cajuns, changing the way Cajuns sound. There is a clear separation between the old generations of Cajuns and the youngest ones and, most surprising of all, between genders. As Dubois and Horvath concluded, "young men return to the CajVE forms used by their grandparents' generation, while young women generally use the standard variants introduced by the middle-aged speakers. We have called this change led by young men in the direction of the former stigmatized and stereotyped CajVE variants 'recycling'" (Dubois and Horvath, 1999: 292). Figures 2 and 3 show the clear difference in the pronunciation of dental stops between men and women. The main reason for this is that the Cajun Renaissance affects traditional male activities such as boating, fishing, hunting or the traditional "courir du Mardi Gras", in which few women take part. Other important activities in Cajun culture, like music and cooking, are also mainly part of the male culture.
VI. Discussion
Undoubtedly, the political institutions of Louisiana played a key role in the Cajun Renaissance, not only because they understood it as part of their cultural heritage but also as an economic opportunity to attract tourists. Had it not been for them, segregation could have meant the death of the dialect.
The relatively short history of sociolinguistics has shown that it is possible to combine a commitment to the objective description of sociolinguistic data with a concern for social issues. It is crucial for an endangered dialect that linguists have a proactive attitude towards the data they are studying.
Walt Wolfram and Natalie Schilling-Estes studied Ocracoke English, the dialect of Ocracoke Island in North Carolina. This small island was relatively isolated for over two and a half centuries, and to this day it is only accessible by ferry. This isolation produced a dialectal variety commonly known for the pronunciation of the diphthong [ay] as [ɔy]. In fact, its speakers are referred to as high tiders or "hoi toiders" because of their pronunciation of high tide as "hoi toid". In this case, the isolation that developed the dialect was geographical, not segregational, but it shares similar traits with Cajun English. First, the influence of outside visitors has changed the dialect for the younger generations, who do not share the frequency of these phonological characteristics; second, native older inhabitants are proud of their dialect and tend to exaggerate its most recognizable traits. Tourists tend to express curiosity about the dialect, and it is known to them as one of the defining cultural elements of the island (Wolfram and Schilling-Estes, 1995).
Seeing that the dialect was about to disappear, the investigators concluded that they needed to inform the widest possible audience about this endangered dialect. This was summarized in what was called the principle of linguistic gratuity (Wolfram, 1993b): investigators who have obtained linguistic data from members of a speech community should actively pursue positive ways in which they can return linguistic favors to the community. Thus, they had the obligation to document the dialect properly, but also to raise the level of consciousness within and outside the community. This translated into the creation of several institutions for maintaining and preserving the dialect (the Ocracoke Preservation Society, the Ocracoke School and The Outer Banks Museum), tape recordings with extracts of real Ocracoke speech, a book about the dialect, t-shirts with typical Ocracoke phrases and a documentary entitled 'The Ocracoke Brogue', among other things.
With this approach, Wolfram and Schilling-Estes have taught us that linguists are obligated to take a proactive attitude towards an endangered dialect, which must be considered by the whole community to be as important as the rest of the characteristics of their cultural heritage.
Being now a dialect that is becoming progressively attached to tourism activities in the southern areas of Louisiana, the future of Cajun Vernacular English may not be bright. Therefore, a campaign similar to the one for Ocracoke English should be undertaken for CajVE in the future in order to preserve this unique way of speech, taking the work of Wolfram and Schilling-Estes as a model.
VII. Final remarks
In less than a hundred years, Cajun English appeared as a dialect, was about to disappear due to the segregation directed at its speakers, and has been recovered by the youngest generations, those who feel CajVE is part of their identity. Cajun English has unique phonological and grammatical characteristics that make it quite different from the rest of the Southern American English dialects. Social characteristics are also decisive and necessary for defining the dialect nowadays, because when CajVE was recently recovered, English rather than French was considered the defining language of the Cajuns due to tourism purposes, which changed the way Cajuns sound and the contexts in which the vernacular is used. As more men than women work in the tourism associated with Cajun culture, male speakers have developed CajVE more strongly. Cajun English has developed its own innovations not through migration movements or through contact with native speakers of English, but by institutional decree.
Being now a dialect attached to tourism purposes, the future of CajVE is not clear in the long term. The work of Walt Wolfram and Natalie Schilling-Estes shows the measures that linguists should take for an endangered dialect, based on their principle of linguistic gratuity, teaching the speech community the importance of a dialect for their cultural heritage.
Figure 1. Percentage of occurrences of all variables according to four speaker groups (taken from Dubois and Horvath, 2003: 40).
Appendix 1. Map of the State of Louisiana. In red, the 22 parishes in southern Louisiana that belong to the Acadian Region, with the "Cajun Heartland USA" subregion in a darker shade. | 3,610.6 | 2013-01-01T00:00:00.000 | [
"Linguistics"
] |
Activation of AMP-Activated Protein Kinase by 3,3′-Diindolylmethane (DIM) Is Associated with Human Prostate Cancer Cell Death In Vitro and In Vivo
There is a large body of scientific evidence suggesting that 3,3′-Diindolylmethane (DIM), a compound derived from the digestion of indole-3-carbinol, which is abundant in cruciferous vegetables, harbors anti-tumor activity in vitro and in vivo. Accumulating evidence suggests that AMP-activated protein kinase (AMPK) plays an essential role in cellular energy homeostasis and tumor development and that targeting AMPK may be a promising therapeutic option for cancer treatment in the clinic. We previously reported that a formulated DIM (BR-DIM; hereafter referred to as B-DIM) with higher bioavailability was able to induce apoptosis and inhibit cell growth, angiogenesis, and invasion of prostate cancer cells. However, the precise molecular mechanism(s) for the anti-cancer effects of B-DIM have not been fully elucidated. In the present study, we investigated whether AMP-activated protein kinase (AMPK) is a molecular target of B-DIM in human prostate cancer cells. Our results showed, for the first time, that B-DIM could activate the AMPK signaling pathway, associated with suppression of the mammalian target of rapamycin (mTOR), down-regulation of androgen receptor (AR) expression, and induction of apoptosis in both androgen-sensitive LNCaP and androgen-insensitive C4-2B prostate cancer cells. B-DIM also activates AMPK and down-regulates AR in androgen-independent C4-2B prostate tumor xenografts in SCID mice. These results suggest that B-DIM could be used as a potential anti-cancer agent in the clinic for prevention and/or treatment of prostate cancer regardless of androgen responsiveness, although functional AR may be required.
Introduction
AMP-activated protein kinase (AMPK) is expressed in all eukaryotic cells and is a critical enzyme that plays an essential role in cellular energy homeostasis, as well as in controlling processes related to tumor development including cell cycle progression, cell proliferation, protein synthesis, and survival. Therefore, as an anti-cancer target, AMPK has received intensive attention in recent years. Mammalian AMPK is a trimeric serine/threonine protein kinase composed of a catalytic α subunit and two regulatory subunits, β and γ. AMPK is activated through phosphorylation of Thr-172 on the α subunit by an energy-depleting stress, such as increased ratios of AMP/ATP [1] and ADP/ATP [2], or is stimulated by cellular kinases including liver kinase B1 (LKB1) [3][4] and calmodulin-dependent protein kinase kinase (CaMKK) [5]. Once activated, AMPK performs two major functions, metabolic and non-metabolic. In the regulation of metabolic processes, AMPK phosphorylates serine moieties in many target proteins, resulting in the switching on of catabolic pathways that activate ATP-generating processes, including the uptake and oxidation of glucose and fatty acids, and the switching off of anabolic pathways, including protein, fatty acid and cholesterol syntheses, which consume ATP [6]. Regarding the non-metabolic functions of AMPK, activation of AMPK can induce cell cycle arrest and inhibit cell proliferation and protein synthesis in malignant cells through multiple mechanisms, such as the accumulation of the tumor suppressor p53 and the cyclin-dependent kinase inhibitors p21 and p27 [7], as well as down-regulation of the mTOR pathway [8][9]. Extensive research supports the role of AMPK in cancer prevention and therapeutics, suggesting that targeting AMPK may be a promising option for cancer treatment.
To that end, metformin, an anti-diabetic drug, has been shown to activate AMPK, raising the hypothesis that metformin may reduce the risk of cancer in patients with type 2 diabetes through activation of the AMPK pathway [10]. Indeed, reports from clinical studies have demonstrated that diabetic patients treated with metformin had a significantly lower rate of cancer incidence and cancer-related mortality compared with patients exposed to other anti-diabetic medicines [10][11][12]. Pre-clinical studies have also shown that metformin not only inhibits the growth of cultured cancer cells [13][14] and tumors in mice [15], but also selectively targets cancer stem cells [16].
Besides metformin, some natural compounds, including quercetin, genistein [17], capsaicin, EGCG [18], and curcumin [19], have been shown to have anticancer effects associated with activation of the AMPK signaling pathway. In fact, natural products have been the most productive source of leads for the development of anti-cancer drugs. According to the literature, approximately 73% of anticancer drugs were discovered from natural origins or derived from natural compounds over the past half century [20]. The natural compound indole-3-carbinol (I3C), which is found at relatively high levels in cruciferous vegetables such as broccoli and cabbage, and its dimer 3,3′-diindolylmethane (DIM) have shown anti-tumor activity in vitro and in vivo [21][22]. Recently, we reported that a formulated DIM (BR-DIM, obtained from BioResponse Nutrients, LLC., Boulder, Colorado; hereafter abbreviated as B-DIM) showed approximately 50% higher bioavailability in vivo [23] compared with DIM. B-DIM induced apoptosis and inhibited cell growth, angiogenesis, and invasion of prostate cancer cells, in association with regulation of the Akt, NF-κB, VEGF and AR signaling pathways [24][25]. In addition, recent results have shown that B-DIM treatment of prostate cancer cells in vitro, or B-DIM intervention in patients with prostate cancer, led to the nuclear exclusion of AR associated with activation of miR-34a [24]. However, the precise molecular mechanism(s) by which B-DIM plays its anti-cancer and cancer-preventive roles have not been fully elucidated; more specifically, it has not been reported whether the biological activity of B-DIM is related to induction of AMPK signaling.
Therefore, in the current study, we investigated the effects of B-DIM on AMPK signaling and its related downstream targets in both androgen-sensitive LNCaP and androgen-insensitive C4-2B prostate cancer cells containing functional AR. Our results showed, for the first time, that B-DIM can function as an AMPK activator. Activation of AMPK by B-DIM resulted in the down-regulation of AR and prostate-specific antigen (PSA) expression, and caused induction of apoptosis, suppression of the mTOR pathway, and inhibition of prostasphere formation in human prostate cancer cells in vitro and in vivo. Our findings also demonstrate that the AMPK pathway is one of the novel molecular targets of B-DIM for its anti-cancer effects against human prostate cancer.
Cell Culture, Protein Extraction, and Western Blot Assay
Human prostate cancer C4-2B cells (obtained from Professor Leland Chung, Emory University School of Medicine, Atlanta, GA; currently at Cedars-Sinai, Los Angeles, CA) and LNCaP cells (American Type Culture Collection, Manassas, VA, USA) were grown in RPMI 1640 medium (Invitrogen, Carlsbad, CA) supplemented with 10% fetal calf serum (FCS), 100 µg/ml streptomycin, 100 units/ml penicillin, and 2 mM glutamine, in a humidified incubator with 5% CO2 and 95% air at 37°C. Whole cell extracts were prepared as previously described [26]. For Western blot analysis, an equal amount of protein from each cell extract was subjected to denaturing polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to nitrocellulose membranes. Individual membranes were probed with the indicated antibodies. Immunoreactive bands were developed using horseradish peroxidase-conjugated secondary antibodies and SuperSignal West Pico chemiluminescent substrate (Pierce) and visualized using X-ray film.
Prostasphere Formation Assay
The prostasphere formation assay was performed to assess the capacity for cancer stem cell self-renewal, following our published procedure [27]. Briefly, single cell suspensions of C4-2B cells were thoroughly suspended and plated in ultra-low-adherent wells of 6-well plates (Corning, Lowell, MA) at 1,000 cells/well in 1.5 ml of sphere formation medium (1:1 DMEM/F12 medium supplemented with 50 units/ml penicillin, 50 µg/ml streptomycin, B-27, and N-2). One milliliter of sphere formation medium was added every 3 days. After 6 days of incubation with different concentrations of B-DIM or metformin (as a control), the formed spheres were collected by centrifugation at 300 × g for 5 minutes and prostasphere numbers were counted under an inverted phase-contrast microscope. The proportion of sphere-generating cells was calculated by dividing the number of prostaspheres by the number of cells seeded.
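For clarity, the sphere-forming frequency described in the last sentence can be written out as follows; the function name and the default seeding density (1,000 cells/well, as stated above) are illustrative.

# The sphere-forming frequency described above, made explicit.
def sphere_forming_frequency(n_spheres: int, n_cells_seeded: int = 1000) -> float:
    """Fraction of seeded cells that generated a prostasphere."""
    return n_spheres / n_cells_seeded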
Human Bone and Implantation of Tumor Cells
Human male fetal bone tissue was obtained through a third-party nonprofit organization (Advanced Bioscience Resources, Alameda, CA), and written informed consent was obtained from the donor family, consistent with regulations issued by each state involved and the federal government. After one week of acclimatization, the mice were implanted with a single human fetal bone fragment as described previously [28][29]. C4-2B cells were harvested from subconfluent cultures after a brief exposure to 0.25% trypsin and 0.2% EDTA. Trypsinization was stopped by adding medium containing 10% FBS. The cells were washed once in serum-free medium and resuspended. Only suspensions consisting of single cells with >90% viability were used for the injections. Cells (1 × 10⁶) in 20 µl of serum-free RPMI medium were injected intraosseously by insertion of a 27-gauge needle and Hamilton syringe through the mouse skin directly into the marrow surface of the previously implanted bone. In our previous experience with this model, we found a tumor uptake rate of >95%, compared to skin xenografts, wherein the tumor uptake rate was comparatively lower with a prolonged latency period. As soon as the majority of the bone implants began to enlarge (now called a "bone tumor"), as determined by caliper measurements (the 30th day after cancer cell injection), mice were randomized into the following treatment groups (n = 7): (a) untreated control; (b) B-DIM only, 5 mg/mouse fed every day orally by gavage for 4 weeks from the initiation of therapy. The volume of the bone tumor in each group was determined by twice-weekly caliper measurements. The body weight of mice in each group was also measured. All mice were euthanized one day after the last dose of B-DIM treatment (5 weeks) because large tumors had formed in the control mice, which required termination, and their final body weight and tumor volume were recorded. On autopsy, the tumor was neatly excised, freed of any extraneous adhering tissue, and part of the tissue was fixed in formalin and embedded in paraffin for immunostaining and for H&E staining to confirm the presence of tumor.
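The paper does not state how the caliper measurements were converted to tumor volume; a commonly used approximation in xenograft studies is the modified ellipsoid formula, shown here purely as an illustrative assumption.

# Assumed (not stated in the paper): the modified ellipsoid formula often
# used to convert two-axis caliper measurements into tumor volume.
def tumor_volume_mm3(length_mm: float, width_mm: float) -> float:
    """V = (L * W^2) / 2, with L the longer of the two measurements."""
    return 0.5 * length_mm * width_mm ** 2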
B-DIM Activates AMPK Signaling in Human Prostate Cancer Cells
We assessed whether the AMPK signaling pathway could be one of the molecular targets of B-DIM in both androgen-insensitive C4-2B (Fig. 1A) and androgen-sensitive LNCaP (Fig. 1B) prostate cancer cells. Both cell lines were treated with different concentrations of B-DIM for 3 hours, and cell lysates of the treated cells were analyzed by Western blot to measure the levels of phospho-AMPKα (T172), the active form of the AMPK protein, as well as some downstream target proteins. The results showed that the level of phospho-AMPKα in both prostate cancer cell lines treated with B-DIM increased in a dose-dependent manner (Fig. 1A, B). As a control, total AMPK protein levels remained relatively unchanged (Fig. 1A, B). Both the regulatory-associated protein of mTOR (Raptor) and acetyl-CoA carboxylase (ACC) are direct downstream substrates of AMPK, as activated AMPK is able to phosphorylate the Raptor protein on serine residue 792 [8] and the ACC protein on serine residue 79 [30][31]. Treatment with B-DIM resulted in increased levels of phospho-Raptor (S792) and phospho-ACC (S79) in the treated cells (Fig. 1A, B), further supporting that B-DIM is an AMPK activator. mTOR is a downstream signaling pathway of AMPK, and activation of AMPK can inhibit the mTOR signaling pathway [32]. Our data also revealed that activation of AMPK by B-DIM could suppress the mTOR pathway, as measured by a decreased protein level of phospho-mTOR (Fig. 1A, B), demonstrating, for the first time, the functionality of B-DIM as an AMPK activator in both AR-sensitive and AR-insensitive prostate cell lines. The inactivation of mTOR by B-DIM is consistent with our previously published report [33].
Activation of AMPK by B-DIM at Early Hours is Associated with Subsequent Down-regulation of AR and PSA Protein Expression and Induction of Apoptosis in Human Prostate Cancer Cells
The results displayed above show that B-DIM harbors AMPK-activating properties. We then investigated whether activation of AMPK by B-DIM could result in apoptotic cell death and inhibit the expression of prostate cancer signature proteins, such as AR and PSA. We showed above that activation of AMPK signaling by B-DIM is an early event that occurs within as little as three hours of treatment (upper panels of Fig. 1A, B). We found that further treatment of C4-2B and LNCaP cells with B-DIM for up to 24 hours significantly decreased the expression levels of AR and PSA in a dose-dependent manner, and also resulted in apoptotic cell death as measured by PARP cleavage (lower panels of Fig. 1A, B). These data suggest that activation of the AMPK signaling pathway is one of the major targets of B-DIM leading to the induction of apoptotic cell death in prostate cancer cells.
Activation of AMPK by B-DIM can be Blocked by an AMPK Inhibitor
Compound C was developed as a selective inhibitor of AMPK [34]. We hypothesized that if AMPK is an essential molecular target of B-DIM, co-treatment with Compound C should attenuate or block the effects of B-DIM on prostate cancer cells. To test this hypothesis, prostate cancer C4-2B and LNCaP cells were pre-treated with either 20 µM Compound C or DMSO for 6 hours and then co-treated with B-DIM at two concentrations for an additional 3 hours. The immunoblotting results showed that in both prostate cancer cell lines treated with B-DIM alone, AMPK signaling was activated, as measured by a dose-dependent increase in the phosphorylation of AMPKα (T172) and the phosphorylation of Ser79 of ACC (Fig. 2), a direct substrate of AMPK that is widely used as an indicator of AMPK activation [35][36][37]. However, the protein levels of phospho-AMPKα and phospho-ACC were dramatically decreased in cells pre-treated and co-treated with Compound C (Fig. 2), suggesting that B-DIM-activated AMPK can be blocked by an AMPK inhibitor.
B-DIM Enhances the AR-Suppressing Effect of the Anti-AR Drug Casodex
One of the current treatment strategies for advanced prostate cancer is to suppress AR function by castration and anti-androgens. Casodex is an anti-androgen drug used clinically for patients with metastatic/advanced-stage prostate cancer, and works by binding the AR and preventing its activation. Our previous studies have shown that B-DIM is able to inhibit AR expression in prostate cancer cells [24]. We further hypothesized that the combination of B-DIM with casodex may have a synergistic effect on the inhibition of AR expression and the induction of apoptosis in prostate cancer cells, which may be associated with the activation of AMPK. To test this hypothesis we co-treated both prostate cancer cell lines with 100 µM casodex and 40 µM B-DIM for 24 hours, with treatment by each agent alone serving as a control. The data show that co-treatment of C4-2B and LNCaP cells with casodex and B-DIM significantly decreased the expression level of AR and increased apoptosis-associated PARP cleavage compared to each treatment alone, and the synergistic or additive effect was accompanied by increased AMPK activation (Fig. 3).
Both B-DIM and Metformin Significantly Inhibit Prostasphere Formation
Tumor stem cells have the characteristic of forming tumorspheres. It has been shown that metformin can inhibit cancer stem cell growth, in association with AMPK activation [16]. In order to test whether B-DIM, which functions as an AMPK activator (Fig. 1, 2, 3), could target cancer stem-like cells and inhibit prostasphere formation, C4-2B cells were treated with different concentrations of B-DIM for 6 days in ultra-low-adherent wells of 6-well plates. Treatment with metformin served as a control. The results showed that B-DIM inhibited prostasphere formation by 29% and 90% at treatment concentrations of 10 µM and 25 µM, respectively (Fig. 4), which is consistent with our previous findings [27]. The data demonstrate that B-DIM may possess the ability to suppress tumor stem-like cells in prostate cancer through activation of the AMPK pathway.
B-DIM Activates AMPK and Down-regulates AR in Androgen-Independent C4-2B Prostate Tumor Xenografts in SCID Mice
The above results from our in vitro studies clearly showed that the AMPK signaling pathway is one of the novel molecular targets of B-DIM in prostate cancer cells. To confirm this finding in vivo, we designed and used the experimental bone metastasis animal model (see Materials and Methods) that mimics bone metastasis of human prostate cancer. We found that B-DIM treatment in vivo inhibited C4-2B tumor growth within the bone microenvironment to some extent (20%; data not shown). The tumor tissues were removed and analyzed by immunohistochemistry using anti-phospho-AMPKα, anti-phospho-ACC and anti-AR antibodies. The results showed that the p-AMPK- and p-ACC-positive cell populations increased significantly, while AR-positive cells decreased greatly, in the tumors treated with B-DIM compared to the control (Fig. 5). These findings confirm that B-DIM is able to activate the AMPK pathway in vivo, in association with its anti-cancer activity.
Discussion
The search for new antitumor drugs from natural sources is one of the most promising approaches for cancer prevention and therapy. We have been studying formulated 3,3′-diindolylmethane (B-DIM) and have shown that B-DIM can induce apoptosis and inhibit cell growth, angiogenesis, and invasion of prostate cancer cells by regulating the NF-κB, Akt and AR signaling pathways [24][25]. However, the precise molecular mechanism(s) by which B-DIM elicits its anti-cancer effects on human prostate cancer have not been fully elucidated. In this study, we discovered that AMPK is one of the direct molecular targets of B-DIM in prostate cancer cells in vitro and in vivo.
The AMPK signaling pathway has recently become an important focus of interest in cancer prevention and therapy. Bowker et al. investigated the incidence of cancer in 10,309 diabetic patients treated with insulin, metformin or sulfonylureas over a period of 5 years. They reported that patients treated with metformin had a significantly lower rate of cancer-related mortality compared with patients exposed to other anti-diabetic medicines [12]. The major anti-cancer mechanism of metformin is associated with activation of AMPK signaling [34]. Activation of AMPK inhibits energy-consuming pathways, protein synthesis and cell proliferation through suppression of the mTOR pathway via the tuberous sclerosis 2 protein (TSC-2) [38]. Hirsch et al. reported that metformin selectively targets cancer stem cells and inhibits tumor growth in mouse models, mediated by activation of the AMPK pathway [16]. It has also been reported that some natural compounds, including genistein (rich in soy bean), EGCG (abundant in green tea), and capsaicin (from hot pepper), are able to activate the AMPK pathway [17]. The discovery of more AMPK activators from natural sources is becoming an attractive approach to cancer prevention and therapy.
Our current findings show that B-DIM can activate AMPK signaling as early as three hours after treatment in both androgen-sensitive LNCaP and androgen-insensitive C4-2B prostate cancer cells, as measured by: (i) increased protein levels of phospho-AMPKα (T172), (ii) increased levels of ACC phosphorylated on serine residue 79 [30][31], (iii) increased protein levels of phospho-Raptor (S792), which is a direct target of AMPK and an mTOR binding partner and inhibitor [8], and (iv) decreased levels of phospho-mTOR (Fig. 1). The consequences of AMPK activation by B-DIM appear to be mediated through inhibition of AR and PSA expression, leading to the induction of apoptosis upon further treatment for an additional 21 hours (Figs. 1, 6). AMPK activation by B-DIM could be blocked by pre-treatment with Compound C, an AMPK inhibitor, in the prostate cancer cells (Fig. 2).
The androgen receptor (AR) is a critical factor in prostate cancer development and progression. Prostate cancer is dependent on androgen stimulation mediated by AR, and AR even plays an important role in cancer development and drug resistance in androgen-independent prostate cancer cells. Alternative mechanisms of AR activation in androgen-independent prostate cancer have been proposed through other signaling pathways, including ERKs, Akt, β-catenin and caveolin [39][40][41]. Therefore, suppression of AR is one of the therapeutic strategies for prostate cancer patients. Casodex is an anti-androgen drug used clinically for patients with metastatic/advanced-stage disease. Any natural compound that could enhance the efficacy of casodex would have potential therapeutic benefit in the clinic. We showed in the present study that co-treatment of cells with casodex and B-DIM led to a significant increase in phospho-AMPKα and suppressed AR expression in prostate cancer cells (Fig. 3), demonstrating a novel mechanism for synergy with anti-AR therapy.
Prostate cancer tissue contains a rare population of multi-potent cancer stem cells with the capacity to self-renew. These prostate cancer stem cells can be enriched and measured by a colony formation assay in three-dimensional cultures, referred to as prostasphere formation. Prostate cancer cells with stem cell characteristics possess the ability to form prostaspheres from single cells under non-adherent culture conditions, a hallmark of self-renewal [42]. To determine if B-DIM could target prostate stem/stem-like cells, we tested it in prostasphere formation assays. The results showed that after a six-day incubation of C4-2B cells with different doses of B-DIM, not only were the numbers of formed prostaspheres significantly decreased, but the size of the prostaspheres was reduced as well (Fig. 4). The findings from the in vitro studies were further confirmed by the immunohistochemical results from tumor tissues of prostate cancer xenografts treated with B-DIM (Fig. 5). Cell populations positive for phospho-AMPK or phospho-ACC were significantly increased, while AR-positive cells were reduced in the B-DIM treated tumor tissue (Fig. 5).
In summary, the present study provides the first evidence that the AMPK signaling pathway is one of the molecular targets of B-DIM for its anti-cancer activity. Activation of AMPK by B-DIM results in the suppression of its downstream target mTOR, downregulation of AR expression and induction of apoptosis in both androgen-sensitive LNCaP and androgen-insensitive C4-2B prostate cancer cells (Fig. 6). Activation of AMPK by B-DIM was also observed in treated prostate tumors. Our findings demonstrate that B-DIM could be used as a potential agent in the clinic for the prevention and/or treatment of prostate cancer regardless of androgen responsiveness of the cells. | 4,928 | 2012-10-09T00:00:00.000 | [
"Medicine",
"Biology"
] |
Temperature Sensor Based on Surface Plasmon Resonance with TiO2-Au-TiO2 Triple Structure
Temperature sensors have been widely applied in daily life and production, but little attention has been paid to the research on temperature sensors based on surface plasmon resonance (SPR) sensors. Therefore, an SPR temperature sensor with a triple structure of titanium dioxide (TiO2) film, gold (Au) film, and TiO2 nanorods is proposed in this article. By optimizing the thickness and structure of TiO2 film and nanorods and Au film, it is found that the sensitivity of the SPR temperature sensor can achieve 6038.53 nm/RIU and the detection temperature sensitivity is −2.40 nm/°C. According to the results, the sensitivity of the optimized sensor is 77.81% higher than that of the sensor with pure Au film, which is attributed to the TiO2(film)-Au-TiO2(nanorods) structure. Moreover, there is a good linear correlation (greater than 0.99) between temperature and resonance wavelength in the range from 0 °C to 60 °C, which can ensure the detection resolution. The high sensitivity, FOM, and detection resolution indicate that the proposed SPR sensor has a promising application in temperature monitoring.
Introduction
Surface plasmon resonance (SPR) refers to the collective oscillation of electrons on a metal surface when incident light shines on the interface between the metal and a dielectric medium; the coupling of light and electrons on the metal surface forms an electromagnetic wave propagating along the surface [1][2][3][4][5][6][7]. When the frequency of the electron oscillation is consistent with the frequency of the incident light, resonance is generated [4], and the electromagnetic field on the metal surface is enhanced. A sensor based on surface plasmon resonance can sensitively detect changes in the refractive index (RI) of the sensing medium. SPR sensors can be used to measure RI directly or to detect indirectly any physical factor causing RI changes, such as temperature, concentration, strain, magnetic field, pressure, density, and molecular species [5,[8][9][10][11][12][13][14][15][16][17][18][19].
Temperature sensors find many applications in daily life, industrial production, and scientific research [20,21]. They monitor room temperature and are used in air conditioners, induction cookers, microwave ovens, water dispensers, and other household appliances. In firefighting, measuring temperature is very important: monitoring the temperature can reveal abnormal readings so that fires can be prevented, detected, and located [22]. In agricultural production, taking greenhouses as an example, a suitable temperature is an important parameter for plant growth [23]. In industrial production, such as metal smelting, the petrochemical industry, light industry, textile manufacturing, and water treatment, temperature sensors are among the most common elements ensuring the normal operation of equipment [24]. In the medical industry, to ensure the activity of vaccines and other biological products, the environmental temperature must be constantly monitored during production and transportation [25]. With the development of science and technology, research on temperature sensors has therefore become a hot topic, and temperature sensors are developing toward higher accuracy and automation.
Owing to the advantages of label-free detection, real-time monitoring, quick response, and high sensitivity, SPR sensors also have important applications as temperature sensors in environmental regulation, food safety, medicine, and biological detection [8,26].
Among the metals in which the SPR phenomenon occurs (such as gold, silver, aluminum, and copper), the precious metal gold is chemically stable and can output persistent SPR signals in the visible region [27][28][29][30][31][32]. Although gold has good oxidation and corrosion resistance and remains effective over time, the surface of a gold film is too smooth to adsorb a large number of molecules, which limits the improvement of sensor sensitivity. Titanium dioxide (TiO2), a nanostructured semiconductor metal oxide, has good absorbability, excellent chemical stability, a loose molecular structure, a wide band gap (3.2 eV for anatase and 3.0 eV for rutile), and a high RI (2.5 for anatase and 2.7 for rutile) [33][34][35][36][37][38][39]. Owing to these characteristics, a TiO2 layer is added to induce field confinement and enhancement at the interface, which is favorable for sensitivity improvement [40].
Changes in temperature cause variations in the RI of alcohol, leading to a shift of the SPR spectrum. Nonetheless, research on temperature sensors based on SPR film sensors has not attracted much attention. In this work, two SPR temperature sensors, i.e., a TiO2-Au dual film structure and a TiO2(film)-Au-TiO2(nanorods) triple structure, are proposed. The two sensors are simulated by the Finite Element Method (FEM). The purpose of this work is to optimize the thickness of the TiO2 film and the Au film, as well as the geometry of the TiO2 nanorods, to maximize the sensitivity and FOM of the SPR temperature sensor. The optimized TiO2(film)-Au-TiO2(nanorods) triple structure SPR temperature sensor has a sensitivity of 6038.53 nm/RIU and a temperature sensitivity of −2.40 nm/°C. The sensitivity of the proposed sensor is improved by 77.81% compared with that of a traditional gold SPR sensor. This work is important not only for the enhancement of SPR sensors but also for the study of temperature measurement with SPR sensors.
Theoretical Analysis
SPR is the collective oscillation of metal electrons excited by photons when light irradiates the surface of a metal at a specific angle. The necessary condition for exciting the surface plasmon (SP) is that the wave vector of the polarized incident light (k_c) equals the wave vector of the surface plasmon (k_sp) [41,42].
The wave vector of the polarized incident light (k_c) can be represented as [43]:

k_c = (2π/λ) · n_p · sin α,

where n_p is the RI of the prism, λ is the incident wavelength, and α is the incident angle. The wave vector of the surface plasmon (k_sp) is expressed as follows [44]:

k_sp = (2π/λ) · sqrt( n_m² n_s² / (n_m² + n_s²) ),

where n_m represents the RI of the metal film and n_s represents the RI of the sensing medium.
In this paper, the wavelength interrogation method was adopted; that is, the incident angle was fixed and the incident wavelength was scanned over a certain range. At a particular wavelength, the reflected light intensity sharply decreases, displaying a sharp valley in the reflectance curve. The wavelength corresponding to the minimum reflectance point is the resonance wavelength, and it moves gradually as the RI of the sensing medium changes. The refractive index of the metal varies with wavelength, and the resonance condition obtained by setting k_c = k_sp is [45]:

n_p · sin α = Re[ n_m(λ) · n_s / sqrt( n_m(λ)² + n_s² ) ],

where n_p is the RI of the prism, α is the incident angle, n_m(λ) represents the RI of the metal film (varying with wavelength), and n_s represents the RI of the sensing medium. The Kretschmann configuration (Figure 1a), a common SPR structure, consists of a prism, a thin metal film, and the sensing medium. When there is only a gold film, p-polarized light passes through the prism and undergoes attenuated total internal reflection at the prism/gold film interface. The evanescent wave penetrates the thin metal layer and resonates with propagating metal-dielectric surface plasmons, causing absorption of the reflected light beam [26]. The electric field intensity is strongest at the interface between the gold film and the sensing medium and decays exponentially with increasing depth into the medium. After adding titanium dioxide, the high RI of TiO2 enhances the interaction of the evanescent field and shifts the resonant wavelength towards the near-infrared [46,47]. Sensitivity is an important parameter for evaluating the performance of SPR sensors and is generally calculated as

S = Δλ_SPR / Δn,

where Δλ_SPR is the shift of the SPR wavelength and Δn is the change in refractive index. The response sensitivity of the SPR sensor to temperature is defined as

S_T = Δλ_SPR / ΔT,

where ΔT is the change in temperature.
When it comes to evaluating sensor performance, sensitivity alone is not enough. The figure of merit (FOM) is another important factor, calculated as

FOM = S / FWHM,

where S is the sensitivity of the SPR sensor and FWHM is the full width at half maximum of the SPR reflectance spectrum. In this work, sensors with four kinds of structures were simulated. The prism material is BK7 glass, and the sensing medium is liquid ethanol, whose refractive index depends on temperature. The four kinds of sensing layers are pure gold film, pure titanium dioxide film, TiO2-Au film, and the TiO2(film)-Au-TiO2(nanorods) triple structure, respectively. The angle of the incident light is 72 deg. The RI of liquid ethanol changes linearly with temperature [48],

n_s(T) = n_0 + (dn/dT) · (T − T_0),

where T is the test temperature, T_0 is the reference temperature (20 °C), n_0 is the RI at the reference temperature, and dn/dT is the thermo-optic coefficient of ethanol. Since ethanol is liquid between roughly −114 °C and 78 °C, the performance of the SPR sensors is explored at 0-60 °C (10 °C intervals), using the wavelength modulation method.
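For concreteness, the short sketch below shows how the three performance metrics defined above would be computed from two simulated reflectance spectra. The resonance wavelengths, refractive indices, and FWHM used here are invented placeholder numbers, not results from this work.

```python
# Minimal sketch of the performance metrics defined above.
# All numerical values are hypothetical placeholders, not results from the paper.
lam_spr_0C, lam_spr_60C = 900.0, 750.0   # resonance wavelengths in nm (hypothetical)
n_0C, n_60C = 1.3694, 1.3458             # ethanol RI at 0 °C and 60 °C (hypothetical)
fwhm = 60.0                              # full width at half maximum in nm (hypothetical)

delta_lam = lam_spr_60C - lam_spr_0C     # shift of the SPR wavelength
S_ri = delta_lam / (n_60C - n_0C)        # spectral sensitivity, nm/RIU
S_T = delta_lam / (60.0 - 0.0)           # temperature sensitivity, nm/°C
fom = abs(S_ri) / fwhm                   # figure of merit, 1/RIU
print(f"S = {S_ri:.1f} nm/RIU, S_T = {S_T:.2f} nm/°C, FOM = {fom:.1f} RIU^-1")
```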
Results and Discussion
Firstly, the pure gold film SPR sensor was simulated. The sensor with a BK7 prism-Au thin film structure is shown in Figure 1b. In addition to the shift in resonance wavelength, the minimum reflectance should also be noted. With increasing gold film thickness, the minimum reflectance first decreases and then increases. The minimum reflectance at 0 °C and 60 °C is plotted in Figure 3. When the thickness of the gold film is between 35 nm and 60 nm, the minimum reflectance is lower than 20%. In general, sensors with a minimum reflectance greater than 20% are not considered, because the sensing effect is not good enough, and such sensors are not considered further in the subsequent work on performance improvement. SPR sensors based on gold film thicknesses of 40 nm, 45 nm, 50 nm, and 55 nm are explored in the following study.
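The thickness screening described above can be illustrated with a toy calculation. The sketch below is not the paper's FEM model: it uses a simple three-layer Fresnel calculation for a planar BK7/Au/ethanol Kretschmann stack, so it cannot represent the TiO2 film or nanorods, and the gold Drude parameters, prism index, and ethanol index are generic placeholder values rather than values taken from this work.

```python
# Illustrative planar Kretschmann reflectance sketch (not the paper's FEM model).
import numpy as np

def eps_gold_drude(lam_nm, lam_p=168.26, lam_c=8934.2):
    """Drude permittivity of Au in wavelength form; rough placeholder parameters."""
    return 1.0 - lam_nm**2 * lam_c / (lam_p**2 * (lam_c + 1j * lam_nm))

def kz(n, n_p, theta, lam_nm):
    """z-component of the wave vector in a layer of index n (units: 1/nm)."""
    k0 = 2.0 * np.pi / lam_nm
    return k0 * np.sqrt(n**2 - (n_p * np.sin(theta))**2 + 0j)

def reflectance_p(lam_nm, d_au_nm, theta, n_p=1.515, n_s=1.36):
    """|r|^2 for p-polarised light in a prism / Au film / sensing-medium stack."""
    n_m = np.sqrt(eps_gold_drude(lam_nm))
    kz1 = kz(n_p, n_p, theta, lam_nm)
    kz2 = kz(n_m, n_p, theta, lam_nm)
    kz3 = kz(n_s, n_p, theta, lam_nm)
    # Fresnel coefficients for p-polarisation at the two interfaces
    r12 = (n_m**2 * kz1 - n_p**2 * kz2) / (n_m**2 * kz1 + n_p**2 * kz2)
    r23 = (n_s**2 * kz2 - n_m**2 * kz3) / (n_s**2 * kz2 + n_m**2 * kz3)
    phase = np.exp(2j * kz2 * d_au_nm)
    r = (r12 + r23 * phase) / (1.0 + r12 * r23 * phase)
    return float(np.abs(r)**2)

# Scan the wavelength at a fixed 72 deg incidence angle and locate the SPR dip
# for several candidate gold thicknesses.
theta = np.deg2rad(72.0)
lams = np.linspace(500.0, 1000.0, 1001)
for d_au in (40.0, 45.0, 50.0, 55.0):
    R = np.array([reflectance_p(l, d_au, theta) for l in lams])
    i = int(np.argmin(R))
    print(f"Au {d_au:.0f} nm: dip at ~{lams[i]:.0f} nm, min reflectance {R[i]:.3f}")
```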
Titanium dioxide is an oxide semiconductor with stable chemical properties and a high refractive index. Moreover, the molecular structure of titanium dioxide nanomaterials is loosely packed, with a large volume gap. These characteristics make a TiO2 film more sensitive to refractive index changes than a gold film. The prism structure with a pure titanium dioxide film is shown in Figure 1c and the SPR spectra are shown in Figure 4. It can be seen that the FWHM of the SPR curve of a pure TiO2 film SPR sensor is generally wide, up to 254.16 nm and 311.73 nm, and there is no clear and definite SPR resonance wavelength. Therefore, a TiO2 film alone is not a good choice.
In order to obtain the excitation effect of the gold film and the enhancement effect of titanium dioxide at the same time, a prism-TiO2-Au SPR sensor was designed; the structure is shown in Figure 1d. To find the best combination of the TiO2 layer and the gold nanolayer, the sensing performance of different combinations of TiO2 layer thickness in the range of 140-190 nm (in 10 nm steps) and gold nanolayer thickness in the range of 40-55 nm (in 5 nm steps) was simulated, and the results are shown in Figure 5. As can be seen from Figure 5, comparing structures with the same gold film thickness, the sensitivity of the sensor first increases and then decreases as the TiO2 layer thickness increases. This shows that the addition of titanium dioxide can indeed make the SPR wavelength shift a longer distance and improve the performance, but if the TiO2 layer is too thick, the sensor's response to the RI change begins to decline. Table 1 shows the detailed data of the six most sensitive material structure combinations.
The maximum sensitivity occurs when the SPR sensor combines 160 nm TiO2 and 40 nm Au, but because of the wide FWHM, the FOM is not as good. Considering both sensitivity and FOM, the SPR sensor combining 160 nm TiO2 and 45 nm Au was finally selected as the best-performing design. The SPR reflectance curve is shown in Figure 6a. In the temperature range of 0-60 °C, the SPR wavelength decreases as the temperature increases. The SPR wavelength is 814.80 nm at 0 °C and 691.00 nm at 60 °C, resulting in a resonance wavelength shift of 123.80 nm and a sensitivity of 5184.25 nm/RIU. The minimum reflectance at the resonance wavelength is 0.032. Figure 6b shows the relationship between temperature and SPR resonance wavelength together with a linear fit. Temperature and wavelength are linearly related, with a correlation coefficient R² of 0.9754, and the response of the sensor to temperature is −1.98 nm/°C. To further improve the sensitivity, FOM, and the linear correlation between resonance wavelength and temperature, titanium dioxide nanorods were added to form the TiO2(film)-Au-TiO2(nanorods) triple structure SPR sensor (Figure 7). In this structure, three factors need to be considered, namely the height, the radius, and the distance between the nanorods. The influence of these three factors on the sensing performance is presented in Tables 2-4.
To further study the performance of the sensor, the structure of the TiO2 nanorods was varied in height, radius, and spacing. The original geometry of the TiO2 nanorods is 50 nm in height, 15 nm in radius, and 20 nm in spacing, and the control variable method was used in the subsequent studies. To explore the influence of the height of the TiO2 nanorods on the sensing performance (Table 2), the height was changed while keeping the other parameters unchanged. The sensitivity of the sensor improves as the height increases from 30 to 60 nm and is highest when the height of the TiO2 nanorods is 60 nm; however, the FWHM and FOM are poor at that point. When assessing the performance of a sensor, the sensitivity and the FOM should be considered together. As can be seen from the detailed data in Table 2, the sensor has the largest FOM and a high sensitivity when the height is 50 nm.
Furthermore, the effects of the radius of the TiO2 nanorods and of the spacing between the nanorods on the sensitivity of the sensor were investigated. The sensitivity of the sensor increases first and then slightly decreases as the radius increases from 5 to 20 nm. The value of FOM also increases first and then decreases, as shown in Table 3. When the radius of the TiO2 nanorods is 15 nm, the sensitivity and FOM are both maximized.
In this study, the spacing between the TiO2 nanorods was varied in the range of 10-30 nm. With increasing spacing between the nanorods, the sensor sensitivity shows a downward trend while the FOM first rises and then falls. Based on the data in Table 4, the sensor with a 20 nm spacing has the largest FOM and a high sensitivity, and is therefore the best choice.
In terms of geometry, the optimized TiO2 nanorods in the sensor have a height of 50 nm, a radius of 15 nm, and a spacing of 20 nm. The sensitivity of the SPR sensor is 6038.53 nm/RIU. The reflectance curve is shown in Figure 8a. The SPR dip shifts towards shorter wavelengths as the temperature increases. As shown in Figure 8b, the relationship between the SPR resonance wavelength and temperature is well described by a linear fit, with a correlation coefficient of 0.9990.
To compare the sensing performance of the pure Au SPR sensor, the TiO2-Au SPR sensor, and the TiO2(film)-Au-TiO2(nanorods) triple SPR sensor, the SPR wavelengths and their corresponding temperatures are presented in Figure 9 together with linear fitting curves. The sensitivity of the TiO2(film)-Au-TiO2(nanorods) triple SPR sensor is 16.48% higher than that of the TiO2-Au SPR sensor and 77.81% higher than that of the pure Au SPR sensor, as can be seen by comparing the slopes of the three fitted lines. In addition to the significant increase in sensitivity, the correlation coefficient of the fitting line for the TiO2(film)-Au-TiO2(nanorods) triple SPR sensor is also markedly higher than those of the other two. In summary, the proposed TiO2(film)-Au-TiO2(nanorods) triple SPR sensor has high sensitivity and an excellent FOM, with a good linear correlation between resonance wavelength and temperature. To confirm the temperature sensing performance, the optimized TiO2(film)-Au-TiO2(nanorods) triple SPR sensor was simulated to determine whether it can respond clearly to small temperature changes. Within the range of 10-30 °C, an SPR reflectance curve was calculated for every 1 °C, as shown in Figure 10a.
The resonance wavelength at each temperature is plotted in Figure 10b and then fitted. The fitting results show that the resonance wavelength is linearly correlated with temperature, with a correlation coefficient of 0.9957. The TiO2(film)-Au-TiO2(nanorods) triple SPR sensor can therefore clearly distinguish a temperature change of 1 °C in the 10-30 °C range with good linearity. Consequently, when the resonance wavelength is known, the temperature can be determined from the fitted curve.
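As an illustration of this calibration step, the following sketch fits a straight line to resonance wavelengths sampled every 1 °C and inverts it to read the temperature back from a measured wavelength. The wavelength values are synthetic placeholders with a slope close to the reported one, not the simulated data from this work.

```python
# Sketch of the temperature calibration: linear fit of resonance wavelength vs T.
import numpy as np

T = np.arange(10, 31)                                  # °C, 1 °C steps
lam_res = 850.0 - 2.4 * (T - 10) + np.random.normal(0, 0.5, T.size)  # placeholder data

slope, intercept = np.polyfit(T, lam_res, 1)           # nm/°C and nm
pred = slope * T + intercept
r2 = 1 - np.sum((lam_res - pred) ** 2) / np.sum((lam_res - lam_res.mean()) ** 2)
print(f"temperature sensitivity = {slope:.2f} nm/°C, R^2 = {r2:.4f}")

# Inverting the calibration: estimate temperature from a measured resonance wavelength.
lam_measured = 820.0
print("estimated T =", (lam_measured - intercept) / slope, "°C")
```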
Figure 11 shows the electric field |E| of the optimized TiO2(film)-Au-TiO2(nanorods) triple structure SPR temperature sensor at the different interfaces at the resonance wavelength for a temperature of 60 °C. When the reflectance is at its minimum, the intensity of the electric field approaches its maximum. In Figure 11, the maximum electric field is obtained at the interface between the TiO2 nanorods and the sensing medium, and the field intensity decays exponentially into the sensing medium. The electric field distribution confirms that the reduced reflectance is caused by the SPR phenomenon.
In the preparation experiment, the TiO2 films can be manufactured via a sol-gel process [36,37], the Au film via magnetron sputtering, and the TiO2 nanorods via a hydrothermal method [49]. The optimized TiO2(film)-Au-TiO2(nanorods) triple structure SPR temperature sensor prepared through these methods should achieve the theoretically expected sensing performance and can be put into application.
Conclusions
Temperature changes cause variations in the RI of alcohol, thus leading to a shift of the SPR spectrum; however, temperature sensors based on SPR film sensors have not attracted much attention. In this work, an SPR sensor with a TiO2(film)-Au-TiO2(nanorods) triple structure was proposed through a comparative study with a traditional pure Au film SPR sensor and a TiO2-Au dual film sensor. It was found that the triple combination of TiO2 and Au could not only excite SPR with Au but also enhance the performance with TiO2. The sensitivity of the SPR temperature sensor reaches 6038.53 nm/RIU with a 160 nm TiO2 film, a 45 nm Au film, and 50 nm high TiO2 nanorods (15 nm in radius, 20 nm in spacing), and the detection temperature sensitivity is −2.40 nm/°C. The sensitivity of the TiO2(film)-Au-TiO2(nanorods) triple SPR sensor is 16.48% higher than that of the TiO2-Au SPR sensor and 77.81% higher than that of the pure Au SPR sensor. These good sensing characteristics show the application potential of the device in the field of temperature sensing. Owing to its high temperature sensitivity, quick and clear response, simple structure, convenient operation, and environmental friendliness, the proposed TiO2(film)-Au-TiO2(nanorods) triple structure SPR sensor has great advantages in application. | 7,652.4 | 2022-11-01T00:00:00.000 | [
"Materials Science"
] |
The Impact of Exchange Rates on Stock Markets in Turkey: Evidence from Linear and Non-Linear ARDL Models
In this chapter we investigate the asymmetric impact of exchange rates on three major stock market indices in Turkey using four different ARDL models over the period 2003M1-2018M12. The chapter also attempts to differentiate the short-run and the long-run relationships between exchange rates and the stock market indices, namely BIST All Shares, the BIST National 100 index, and the BIST National 30 index. Our motivating question is whether the relationship between exchange rates and the three major stock market indices is symmetric or asymmetric in Turkey. To answer this, we first use linear bivariate and multivariate models, assuming the effects are symmetric. We then use non-linear bivariate and multivariate models to examine whether exchange rates have symmetric or asymmetric effects on the selected stock market indices. The findings show that exchange rates have asymmetric effects on all three major stock market indices both in the short and the long run. In the long run, currency appreciation has a positive and significant impact on the selected stock markets, whereas currency depreciation has no effect. This finding is in line with the understanding that Turkish sectors depend heavily on imports of raw and intermediate goods. The results also show that economic activity has positive and significant effects on all stock markets, implying that it is the main determinant in the long run. Moreover, interest rates and the volatility index are negative and significant in all markets. These results have important implications for policy makers seeking to maintain stable prices and for investors seeking to diversify their portfolios.
Introduction
Among the major developments in Turkey's economy during the past decades have been the liberalization of capital markets and the implementation of a floating exchange rate regime. These developments, together with the rapid growth of Turkey's economy, have attracted international investors and thus increased Turkey's integration into the global economy. Turkey, as an emerging market, became attractive to foreign investors for portfolio diversification, but shocks in exchange rate markets create volatility in the stock market, which can react positively or negatively to fluctuations in foreign exchange markets. Exporters can benefit from local currency depreciation due to higher export competitiveness, while importers will pay higher prices for imported goods, which in turn determines a company's cash flow and market value; in this view, causality runs from exchange rates to stock markets. On the other hand, if a country's exports depend mainly on foreign inputs, the resulting relationship between equities and exchange rates may be insignificant. Since Turkey is a net importer of goods and services, a depreciation of the Turkish lira can potentially cause the value of shares to fall.
There are two main theories suggesting a relationship between exchange rates and stock prices. The first is the flow-oriented exchange rate model [1], which focuses on the current account or trade balance and predicts that changes in exchange rates will affect a country's real economic variables, and therefore stock prices, by affecting international competitiveness and the trade balance. According to this approach, there is a positive relationship between the two variables and causality runs from exchange rates to stock prices. Depreciation of the national currency makes domestic companies more competitive and thus increases their exports, while exchange rate fluctuations also affect the costs and profits of the many companies that borrow in foreign currencies to finance their operations; both channels affect the stock prices of firms [1].
The second approach is the stock-oriented approach, which predicts that movements in stock prices affect exchange rates, implying causality from stock prices to exchange rates via the capital account [2]. As equity holdings are part of financial wealth, they can influence the exchange rate through the demand for money. According to this view, rising stock prices attract capital inflows to a country, which increases demand for the local currency and leads to a decline in exchange rates [3].
It has become a generally accepted notion that these two variables matter for the growth and development of emerging economies, and the role of the exchange rate is particularly important for small open economies such as emerging markets. In this chapter we seek to shed some light on the symmetric and asymmetric effects of exchange rates on stock prices in Turkey using a linear and a non-linear framework. This study is of great interest for a country that has an import-oriented economy and completed its financial liberalization in the early 1990s, since the empirical studies on the relationship between exchange rates and stock prices have produced mixed results with respect to the two main views mentioned above. Figure 1 shows the dynamics of Turkey's three major stock market indices. The 2008 crisis is the most pronounced decline in the trend. Since Borsa Istanbul is largely a foreign-invested market, the performance of the Turkish stock markets was negatively affected by foreign investors during the global financial crisis; during this period, the risk premium for Turkey was raised and, in parallel, CDS values increased. A similar effect occurred after 2018. The Turkish economy nevertheless proved resilient and passed its stress tests, and after 2008 the indices displayed a strong rise. The depreciation of the exchange rate at the end of the period led to a downward trend in the three major stock market indices.
Figure 2, on the other hand, shows developments in the exchange rate market in Turkey. The exchange rate was stable in the first half of the period but trended upward in the second half, and its depreciation has recently accelerated. This appears to weigh on stock market performance, although as the indices become cheaper in Turkish lira terms the trend may be expected to turn up.
Therefore, to see whether the relationship between exchange rates and the three major stock market indices is symmetric or asymmetric in Turkey, we employ four different methods: a linear bivariate ARDL model is applied to investigate the linear relationship between stock prices and exchange rates; a linear multivariate ARDL model is employed to examine whether changes in additional variables such as interest rates and industrial production also affect stock prices; and, since the exchange rate has a different impact on different sectors of the economy, multivariate ARDL models are employed to analyze the relationship between them. Moreover, the relationship should be examined not only in a linear but also in a non-linear dimension. Thus, finally, non-linear bivariate and multivariate ARDL models are applied to analyze the non-linear relationship between stock prices and exchange rates in Turkey.
This study is of great interest for a country that has an import-oriented economy, completed its financial liberalization in the early 1990s, and has become an attractive destination for foreign investors. The rationale for assessing the symmetric and asymmetric effects of exchange rates on stock prices in Turkey is based on the perception, as expressed by [1, 2], that stock prices can react positively or negatively to fluctuations in exchange rates. Determining the factors that cause movements in stock prices is therefore of great interest to policy makers and investors, and the role of the exchange rate in stock prices is much more important for small open economies, in particular emerging markets. There is not sufficient research evidence on the links between foreign exchange rates and the Turkish stock market, and we believe that this study will fill this gap in the literature.
The rest of the chapter is organized as follows: Section 2 reviews the related literature; Section 3 describes the data and methods applied; Section 4 presents the empirical findings and discusses the implications of the analysis; and, finally, Section 5 concludes the chapter and provides policy implications.
Literature review
The relationship between stock prices and exchange rates has been extensively studied by many researchers. Some find a positive association between the two [4,5], others discover negative relations [6,7], and some find no relationship at all [8].
Studies on the relationship between exchange rates and stock prices in the literature can be grouped according to their empirical results. Firstly, some studies find a significant positive relationship between the two. For instance, the relationship between stock prices and exchange rates for the financial, manufacturing, and services indices and fifteen sub-indices in Turkey was investigated using the Johansen cointegration test, and the results show evidence of a long-run relationship between these indices and exchange rates. The results suggest that the exchange rate exposure of the financial and manufacturing industries has a positive forex beta for the dollar exchange rate, whereas the services industry has a negative forex beta [9]. A similar exercise was undertaken to investigate the effects of changes in foreign exchange rates on stock returns at the company level using panel data analysis. The results show evidence that changes in the real exchange rate had a positive and significant impact on stock returns in the manufacturing and trade sectors between 2006 and 2014 [10].
Secondly, some studies find a negative relationship between the two [6,11]. For example, Akıncı and Küçükçayşı analyse the relationship between stock markets and exchange rates in 12 countries and find that the exchange rate has a negative effect on the stock market index [6]. Belen and Karamelikli investigate the causality between exchange rates and stock returns in Turkey and find no evidence supporting any causal relationship between the dollar exchange rate and the BIST-30 index [11]. Tsai examines the relationship between stock price indices and exchange rates in six Asian countries, namely Singapore, Thailand, Malaysia, the Philippines, South Korea, and Taiwan; the results show a negative relationship between stock prices and exchange rates in all countries in the study, which is in line with the portfolio balance effect [12]. Recently, the relationships between real exchange rate returns and real stock price returns in Malaysia, the Philippines, Singapore, Korea, Japan, the United Kingdom and Germany were examined using dynamic conditional correlation (DCC) and multivariate generalized autoregressive conditional heteroskedasticity (MGARCH) models. The results show a negative relationship between real exchange rate returns and real stock price returns in Malaysia, Singapore, Korea and the UK [13].
Thirdly, some studies find two-way causality between exchange rates and stock prices [14]. For instance, Zeren and Koç examine the relationship between exchange rates and stock market indices in Turkey, Japan and England using a time-varying causality test and find two-way causality between the exchange rate and stock prices during the global crisis period [14]. However, some empirical studies find one-way causality between the exchange rate and stock prices. Coskun et al. investigate the link between the stock index and macroeconomic variables (the USD exchange rate, exports and imports, the industrial production index, and the gold price) using monthly data for Turkey. Using Granger causality, they find one-way causality from the exchange rate to the BIST, and their impulse response functions suggest a positive response of the BIST to an exchange rate shock [4]. Aydemir and Demirhan analyse the causality between exchange rates and stock prices for the national 100, services, financial, industrial, and technology indices. The results suggest a positive bi-directional causal relationship between the technology index and the exchange rate, but negative causality from the national 100, services, financial and industrial indices to the exchange rate [15]. On the other hand, Kendirli and Çankaya (2016) analyze the causal relationship between the USD and the Istanbul Stock Exchange National 30 index using monthly data from 2009:01 to 2014:12 and find no causal relationship between USD and BIST-30 index returns [8].
Fourthly, some studies investigate the short- and long-run relationships between the two [16]. Recently, the relationship between stock prices and exchange rates, specifically the BIST 100 and 23 sector indices, was investigated using an ARDL model. The results suggest that a long-run relationship exists only between the exchange rate and the textile, wholesale and retail, and technology indices [17]. The short- and long-term relations between the exchange rate and the financial, industrial, services and technology sector indices have also been investigated for Turkey [18]. The results suggest that the exchange rate has no long-term relationship with stock prices or the sector indices; in the short run, however, the exchange rate has bidirectional causality with stock prices and the technology and services sectors, and unidirectional causality with the financial sector index. Akel and Gazel (2014) investigate the long-run and short-run equilibrium relationships between exchange rates and the industrial index in Turkey. Based on ARDL cointegration analysis, they find a positive relationship between the industrial index and the Dollar Index and the Euro/TL exchange rate, but no evidence of a relationship between the real effective exchange rate and the industrial index. Based on a VECM model, they find that the industrial index is positively related to the REER, while it is negatively related to the Dollar Index and the Euro/TL exchange rate [19].
Methodology
This study investigates the symmetric and asymmetric effects of exchange rates on three major stock market indices in Turkey using four different models. Firstly, linear and non-linear bivariate ARDL models are estimated in which the exchange rate is the only determinant of stock prices. The linear models are used to capture the symmetric effects of exchange rate changes, while the non-linear models are applied to capture asymmetric effects of exchange rate changes on stock prices.
Following Pesaran et al. [20] and Shin et al. [21], we apply the following bivariate model to account for cointegration between exchange rates and stock prices in Turkey.
where a is the drift component, SP_t is the stock price index, EX_t is the nominal effective exchange rate, and ε_t is an error term. In order to estimate the short-run effects, the error-correction form of Eq. (1), proposed by Pesaran et al. [20], can be written as Eq. (2). Up to this point, we have assumed that exchange rate changes have symmetric effects on stock prices, but the effects could well be asymmetric: increases and decreases of the exchange rate may affect stock prices differently. To assess whether exchange rate changes have asymmetric effects on stock prices, we decompose the exchange rate series into its positive and negative partial sums. The partial sum of positive changes is computed by replacing negative changes with zeros, POS_t = Σ_{j=1}^{t} ΔLnEX⁺_j = Σ_{j=1}^{t} max(ΔLnEX_j, 0), and the partial sum of negative changes is computed by replacing positive changes with zeros, NEG_t = Σ_{j=1}^{t} ΔLnEX⁻_j = Σ_{j=1}^{t} min(ΔLnEX_j, 0), where ΔLnEX⁺_j is the positive change in the exchange rate and ΔLnEX⁻_j is the negative change. The LnEX term in Eq. (2) is then replaced by the newly generated POS and NEG variables in the non-linear ARDL model, Eq. (3); the error-correction form of Eq. (3) with the POS and NEG variables gives Eq. (4).
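A minimal sketch of this partial-sum decomposition is shown below, assuming the exchange rate series is already in logs; the series itself is a randomly generated placeholder rather than the Turkish nominal effective exchange rate data.

```python
# Sketch of the POS/NEG partial-sum decomposition used in the non-linear ARDL models.
import numpy as np
import pandas as pd

ex = pd.Series(np.cumsum(np.random.normal(0.0, 0.02, 192)))  # placeholder LnEX series (192 months)

d_ex = ex.diff().fillna(0.0)
pos = d_ex.clip(lower=0.0).cumsum()   # POS_t: cumulative positive changes (appreciations of the home currency)
neg = d_ex.clip(upper=0.0).cumsum()   # NEG_t: cumulative negative changes (depreciations of the home currency)

print(pd.DataFrame({"LnEX": ex, "POS": pos, "NEG": neg}).head())
```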
Secondly, linear and non-linear multivariate ARDL models are estimated in which the industrial production index (IPI), the volatility index (VIX) and interest rates (IR) are used as additional determinants of stock prices in Turkey. In order to account for the effects of these variables on stock prices, we employ a linear multivariate model following Moore & Wang [22] and Bahmani-Oskooee & Saha [23], Eq. (5), where IPI_t is an index of industrial production, IR_t is the short-term (overnight) interest rate, VIX_t is a measure of stock market volatility, and ε_t is an error term. The sign of the coefficient β on the exchange rate could be positive or negative, depending on a firm's international competitiveness and production costs after a depreciation: when firms gain international competitiveness they export more, so the exchange rate affects stock prices positively, whereas increased costs due to depreciation are expected to affect stock prices negatively. Since there is a common consensus that economic activity affects stock prices positively [23,24], the industrial production index is used as a proxy for economic activity; we can therefore expect stock prices to increase with industrial production, and the sign of γ is expected to be positive.
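The level equation itself is not reproduced in this extraction; a plausible reconstruction of Eq. (5), assuming the standard log-linear form used in Bahmani-Oskooee & Saha-type studies (the exact transformations are assumptions, not taken from the chapter), is:

```latex
% Hedged reconstruction of the long-run specification (Eq. (5) in the text);
% the log transformations and coefficient labels are assumptions.
\begin{equation*}
\ln SP_t \;=\; a \;+\; \beta \,\ln EX_t \;+\; \gamma \,\ln IPI_t \;+\; \delta \, IR_t \;+\; \theta \,\ln VIX_t \;+\; \varepsilon_t .
\end{equation*}
```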
As interest rates are significant determinants of stock prices [25,26], we use the short-term (overnight) interest rate as a broad measure of financing costs; however, its effect on stock prices is ambiguous [27,28]. Finally, considering international effects and theoretical predictions [29,30], the volatility index is included in the model. The coefficient estimates obtained from Eq. (5) are only the long-run effects. In order to infer the short-run effects, Eq. (5) needs to be rewritten in the error-correction modeling format proposed by Pesaran et al. [20]. Therefore, we follow Pesaran et al.'s [20] bounds testing approach and consider the corresponding error-correction form of the multivariate model, Eq. (6). Eq. (6) gives the short-run as well as the long-run estimates in one step, where λ₁-λ₅ are the long-run parameters, Δ is the first-difference operator, n and q are the optimal lag lengths for each variable, and u_t is the usual white-noise residual. The estimates of the coefficients attached to the first-differenced variables give the short-run effects, while the estimates of λ₂-λ₅ normalized on λ₁ give the long-run effects. In order for the long-run estimates to be valid, the F test proposed by Pesaran et al. [20] is applied to the joint significance of the lagged level variables (λ₁ = λ₂ = ⋯ = λ₅ = 0) in Eq. (6) as a sign of cointegration. The F test has non-standard critical values that depend on whether the variables in the model are I(0) or I(1), and on whether the model contains an intercept and/or a trend.
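Eq. (6) is likewise not reproduced here; under the same assumptions as above, the conditional error-correction form described in the text would take roughly the following shape (lag structure and log transformations are assumptions, not the authors' exact specification):

```latex
% Hedged reconstruction of the error-correction form (Eq. (6) in the text).
\begin{equation*}
\begin{aligned}
\Delta \ln SP_t = a
 &+ \sum_{k=1}^{n}\theta_k \Delta \ln SP_{t-k}
  + \sum_{k=0}^{q}\phi_k \Delta \ln EX_{t-k}
  + \sum_{k=0}^{q}\pi_k \Delta \ln IPI_{t-k}
  + \sum_{k=0}^{q}\rho_k \Delta IR_{t-k}
  + \sum_{k=0}^{q}\sigma_k \Delta \ln VIX_{t-k} \\
 &+ \lambda_1 \ln SP_{t-1} + \lambda_2 \ln EX_{t-1} + \lambda_3 \ln IPI_{t-1}
  + \lambda_4 IR_{t-1} + \lambda_5 \ln VIX_{t-1} + u_t .
\end{aligned}
\end{equation*}
```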
Once cointegration is established, the long-run effects of exchange rates, industrial production, interest rates and the volatility index on stock prices are captured by the estimates of λ₂-λ₅ normalized on λ₁. The short-run effects are given by the estimates of the coefficients on the first-differenced variables; for example, the short-run effects of the industrial production index on stock prices are determined by θ_k. The lag length of the first differences in Eq. (6) is chosen according to the Schwarz Bayesian Criterion (SBC), considering a maximum lag length of six.
The non-linear multivariate ARDL models are constructed to assess the asymmetric effects of exchange rate changes on stock prices by replacing the exchange rate with the newly generated POS and NEG variables. The non-linearity thus comes from these two variables, where POS refers to appreciation of the home currency and NEG refers to depreciation of the home currency.
Empirical findings
In this chapter, both linear and non-linear ARDL models are estimated for the bivariate and multivariate specifications using monthly data over the period 2003M1 to 2018M12 for three major stock market indices in Turkey. The short- and long-run estimates of the linear and non-linear bivariate and multivariate models are reported in Tables 1 and 2. Each table consists of three panels: Panel A reports the short-run estimates, Panel B reports the long-run estimates, and the diagnostic statistics are reported in Panel C. To ensure one of the requirements of Pesaran et al.'s [20] method, namely that the variables may be I(0) or I(1) but not I(2), we use the traditional Augmented Dickey-Fuller (ADF) test on the levels as well as the first differences of the variables. The lag order of the ADF test statistics is determined by the Akaike Information Criterion (AIC), and the results show that there are no I(2) variables.
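A sketch of this unit-root screening step, using the standard statsmodels ADF test with AIC lag selection as described above, is given below; the series is a randomly generated placeholder rather than one of the actual BIST or macro series.

```python
# Sketch of the ADF screening for I(0)/I(1) variables.
import numpy as np
from statsmodels.tsa.stattools import adfuller

series = np.cumsum(np.random.normal(size=192))   # placeholder monthly series (e.g. a log index)

stat, pvalue, usedlag, nobs, crit, icbest = adfuller(series, autolag="AIC")
print(f"ADF on level: statistic = {stat:.3f}, p-value = {pvalue:.3f}, lags used = {usedlag}")

# Repeat on the first difference; a variable is treated as I(1) if the level is
# non-stationary but the first difference is stationary.
stat_d, pvalue_d, *_ = adfuller(np.diff(series), autolag="AIC")
print(f"ADF on first difference: statistic = {stat_d:.3f}, p-value = {pvalue_d:.3f}")
```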
Results of the bivariate models
In the bivariate models, the exchange rate is considered as the sole determinant of the stock market indices. The RESET statistic is insignificant for all stock market indices, suggesting that the models are correctly specified. The CUSUM and CUSUM of squares tests are also reported to establish the stability of the short-run and long-run estimates; the results show that the estimated parameters are stable for all stock market indices, at least by one of the tests. Based on these results, we conclude that the exchange rate has short-run effects on the three major stock market indices (BIST All, BIST 100 and BIST 30) in Turkey. However, we would also like to see whether the short-run effects change when a non-linear adjustment process is used; the answer is based on the results of the non-linear ARDL models reported in Table 2. The results show that currency appreciation (ΔPOS) has significant negative short-run effects on all markets, while depreciation (ΔNEG) has no effect as it is insignificant. This suggests that exchange rate changes have asymmetric effects on stock indices in Turkey.
When we look at the long-run effects in Panel B, currency appreciation has a positive impact on all stock indices, but the effects are statistically insignificant. Currency depreciation likewise has no effect on any index in Turkey. To check whether the long-run assessment is valid, we report the F test and the ECM_{t-1} test results. To further validate the short-run and long-run asymmetric effects, the equality of the short-run and long-run coefficient estimates is also tested with a Wald test; for long-run asymmetry we test whether λ2 = λ3. According to the Wald test statistics, asymmetric effects of exchange rates on stock prices are supported for all markets in the short run.
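The asymmetry checks reduce to Wald (F) restrictions on the estimated NARDL regression. Assuming a fitted statsmodels OLS result res whose parameters include lagged-level terms for the partial sums (named here, purely for illustration, L1_POS and L1_NEG) and contemporaneous short-run terms d_POS and d_NEG, the two tests could be written as:

```python
# Parameter names are illustrative assumptions, not the chapter's actual variable labels.
long_run = res.f_test("L1_POS = L1_NEG")    # H0: long-run symmetry (lambda2 = lambda3)
short_run = res.f_test("d_POS = d_NEG")     # H0: short-run symmetry
print(long_run.fvalue, long_run.pvalue)
print(short_run.fvalue, short_run.pvalue)
```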
Results of the multivariate models
Table 3 reports the short- and long-run estimates of the linear multivariate models for the BIST All Shares, BIST 100 and BIST 30 stock prices. Panel A captures the symmetric effects of exchange rates on stock prices together with the other macroeconomic explanatory variables. The results show that all markets, namely BIST All, BIST100 and BIST30, are negatively affected by exchange rate changes. These markets, on the other hand, have a positive and statistically significant relationship with the industrial production index, implying that economic activity in Turkey has a significantly positive impact on the stock markets in the short run.
However, all markets are negatively affected by an increase in interest rates, which implies that high interest rates lower the level of investment in the country and hence reduce economic activity. Likewise, the volatility index (VIX) has a negative relationship with all stock market indices, implying that an increase in uncertainty reduces firm profitability and thus lowers stock prices in the short run.
Turning to the long-run coefficients presented in Panel B, the industrial production index has a significant and positive relationship with all markets, while interest rates and the volatility index have negative and significant relationships with stock prices in Turkey. To assess the stability of the models, we report the cumulative sum of recursive residuals (CUSUM, denoted CS) and the cumulative sum of squares of recursive residuals (CUSUMQ, denoted CS²) tests; according to both CS and CS² results, the models are stable except for BIST All Shares. We also test the asymmetric effects of exchange rate changes on stock prices using the nonlinear multivariate models (see Table 4). To this end we decompose exchange rate changes into their positive (POS) and negative (NEG) partial sums and test whether stock prices respond asymmetrically to them. The results show that currency appreciation (ΔPOS) has a negative and significant coefficient, whereas currency depreciation (ΔNEG) does not have a significant coefficient, implying an asymmetric relationship between the exchange rate and stock prices in the short run. The asymmetry does not continue in the long run, as in Panel B the POS and NEG variables have insignificant coefficients. As for the other variables, the industrial production index has a positive and significant effect both in the short and the long run.
Conclusion
The aim of this chapter is to apply advanced econometric techniques, namely four different ARDL models, to shed some light on the symmetric and asymmetric impact of exchange rates on three major stock market indices in Turkey, using monthly data from 2003M1 to 2018M12. The chapter also attempts to distinguish the short-run from the long-run relationship between exchange rates and the market indices. The motivating question is whether the relationship between the two is symmetric or asymmetric in Turkey. To answer it, we employ four different models: a linear bivariate ARDL model to investigate the linear relationship between stock markets and the exchange rate; a linear multivariate ARDL model to examine whether changes in additional variables such as interest rates and industrial production affect stock markets and, since the exchange rate has a different impact on different sectors of the economy, to analyze the relationship between them; and, because the relationship should be examined not only in a linear but also in a non-linear dimension, non-linear bivariate and multivariate ARDL models to analyze the non-linear relationship between stock market indices and the exchange rate in Turkey.
This study is of particular interest for a country that has an import-oriented economy, completed its financial liberalization in the early 1990s, and has become an attractive destination for foreign investors. The rationale for assessing the symmetric and asymmetric effects of exchange rates on stock markets in Turkey is based on the perception, as expressed by Dornbusch and Fischer (1980) and Frankel (1992), that stock markets can react positively or negatively to fluctuations in exchange rates. Determining the factors that cause movements in stock markets is very important and of great interest to policy makers and investors, and the role of exchange rates is all the more important for small open economies, in particular emerging markets. There is not yet sufficient research evidence on the link between foreign exchange rates and the Turkish stock market, and we believe that this study helps to fill that gap in the literature.
The findings show that exchange rates have asymmetric effects on all three major stock market indices both in the short and the long run. In the long run, currency appreciation has a positive and significant effect on the stock market indices, whereas currency depreciation does not have an effect. This finding is in line with the understanding that Turkish sectors depend heavily on imports of raw and intermediate goods. The results also show that economic activity has positive and significant effects on the three major stock market indices, implying that it is their main determinant in the long run, while interest rates and the volatility index are negative and significant in all markets. These findings have important implications for policy makers seeking to maintain price stability and for investors seeking diversification.
"Economics"
] |
[18F]FET-PET Imaging for Treatment and Response Monitoring of Radiation Therapy in Malignant Glioma Patients – A Review
In the treatment of patients suffering from malignant glioma, it is of paramount importance to deliver a high radiation dose to the tumor on the one hand and to spare organs at risk on the other, in order to achieve sufficient tumor control and to avoid severe side effects. New radiation therapy techniques, such as intensity-modulated radiotherapy and image-guided radiotherapy, have emerged that help achieve this aim. In addition, advanced imaging techniques like positron emission tomography (PET) and PET/CT can help localize the tumor with higher sensitivity and thus contribute to therapy planning, tumor control, and follow-up. During follow-up care, it is crucial to differentiate between recurrence and treatment-associated, unspecific lesions such as radiation necrosis; here, too, PET/CT can help distinguish tumor relapse from unspecific changes. This review article discusses therapy response criteria according to the current imaging methods, namely magnetic resonance imaging, CT, and PET/CT, and focuses on the significance of PET in the clinical management of treatment and follow-up.
TRACERS FOR BRAIN TUMORS
Positron emission tomography (PET) is a functional imaging method that has gained widespread use in the assessment of brain tumors. The PET tracers currently used for imaging of brain tumors are mostly radiolabeled amino acid (AA) tracers. These AAs are preferentially taken up by tumor cells (Derlon et al., 1989; Heiss et al., 1999; Grosu et al., 2011) due to an overexpression of amino acid transporters, while the uptake of normal brain tissue is relatively low. It has been demonstrated that AA uptake in tumor tissue is almost entirely mediated by type L amino acid carriers (Heiss et al., 1999), and it has been suggested in a rat model that brain tumors can stimulate transporter expression, especially in their vasculature (Miyagawa et al., 1998). The most common tracers for malignant brain tumors are O-(2-[18F]fluoroethyl)-L-tyrosine ([18F]FET) and [11C]methionine (MET).
[11C]Methionine (MET) is a physiologic amino acid labeled with the carbon-11 isotope, which has a half-life of 20 min. Its uptake correlates with cell proliferation in vitro, Ki-67 expression, nuclear antigen expression, and microvessel density in proliferating cells (Dhermain et al., 2010). The first studies on AA tracers were performed with MET. O-(2-[18F]fluoroethyl)-L-tyrosine ([18F]FET) is an amino acid labeled with fluorine-18, which has a half-life of 110 min.
Due to the short half-life of a positron-emitting radioisotope like carbon-11, the radiotracers labeled with carbon-11 require a cyclotron in close proximity to the PET imaging facility. The half-life of fluorine-18 is long enough that radiotracers labeled with fluorine-18 can be manufactured commercially at off-site locations and shipped to outlying facilities.
In clinical practice, FET and MET have been shown to be equally sensitive and specific (Weber et al., 2000;Astner et al., 2005;Grosu et al., 2011). In case of low-grade glioma FET can also reveal hot spots and suspected regions of histological upgrading of the tumor (Popperl et al., 2007).
[18F]2-Fluoro-2-deoxy-D-glucose (FDG), an analog of glucose labeled with fluorine-18, is often used in extracerebral tumors. FDG shows a high uptake in gray matter, resulting in a poor tumor-to-background ratio, especially in low-grade glioma. Thus, FDG is currently restricted to special situations such as cerebral lymphoma (Hoffman et al., 1993), where it is of prognostic value (Kasenda et al., 2013). 68Ga-DOTATOC (DOTA0-Phe1-Tyr3-octreotide) and other somatostatin analogs are very sensitive in the detection and delineation of meningioma and its possible infiltration into the sagittal sinus or falx (Milker-Zabel et al., 2006). 18F-DOPA, L-3,4-dihydroxyphenylalanine labeled with fluorine-18, shows increased uptake in malignant glioma and has been shown to be comparable to MET (Becherer et al., 2003).
[18F]3'-Deoxy-3'-fluorothymidine (FLT), a nucleoside analog, reflects thymidine kinase activity and cell proliferation (Ullrich et al., 2008) and correlates with Ki-67. It allows a non-invasive assessment of tumor proliferation as well as of early response to chemotherapy by PET (Jacobs et al., 2005). Since 18F-FLT does not cross the intact blood-brain barrier (BBB), it does not show uptake in low-grade tumors or stable lesions, but it visualizes high-grade (grade III or IV) tumors with a disrupted BBB (Chen et al., 2005). 18F-Fluoromisonidazole (18F-FMISO) can visualize the hypoxic cell fraction of tissue (Cher et al., 2006) and thereby makes it possible to escalate the radiation dose at these crucial points. However, at this time, FLT and FMISO are not yet well established in clinical management.
THERAPY OF MALIGNANT GLIOMA
Standard treatment for malignant glioma is based on surgery followed by combined radiochemotherapy up to 60 Gy and adjuvant chemotherapy with temozolomide (Stupp et al., 2005). Nevertheless, there are additional therapy approaches with radioactive seeds, other chemotherapy agents like irinotecan, or antiangiogenic agents like bevacizumab.
Frequently used radiation therapy (RT) techniques in patients with malignant glioma are 3-D conformal RT and, especially in cases of re-irradiation, stereotactic fractionated RT. Furthermore, intensity-modulated RT, rapid-arc techniques and image-guided RT are also frequently used (Narayana et al., 2006; Hermanto et al., 2007).
RESPONSE MONITORING
During and after treatment, therapy response should be evaluated. Currently, most protocols use conventional imaging techniques like CT and magnetic resonance imaging (MRI) for this purpose. It is, however, important to differentiate between regression, treatment-related changes and true recurrence. In this situation, AA-PET has become of great value due to its superior sensitivity for vital tumor tissue (Popperl et al., 2005).
LIMITATIONS OF MRI
Magnetic resonance imaging, with its high spatial resolution, is an indispensable tool in diagnosis, RT planning, and follow-up of patients with malignant glioma. However, there are pitfalls, since this imaging method is not tumor specific (Table 1).
Diagnostic criteria in routine MRI tumor imaging are generally based on the extent of contrast enhancement, which is caused by a breakdown of the BBB (Wen et al., 2010). However, a disruption of the BBB can also occur as a result of recent surgery or RT. On the other hand, there can be tumor parts where the BBB is not yet affected. Furthermore, the contrast enhancement is in many cases smaller than the real tumor extent, leading to an underestimation of tumor dimensions. This phenomenon is very common in low-grade glioma.
In addition, new emerging therapies like VEGF inhibitors can reduce the disturbance of the BBB, causing a decrease of contrast enhancement on MRI without influencing the tumor dimensions, which is referred to as "pseudo-regression." It is also seen after application of corticosteroids, which can likewise reduce leakage of blood vessels (Jacobs et al., 2005).
After RT, brain lesions may remain avid for contrast agents like gadolinium but may become negative on AA-PET, which is indicative of good local tumor control. Conversely, if post-treatment changes such as radiation necrosis occur, they are frequently associated with an increase in contrast enhancement, which can mimic tumor progression (Giglio and Gilbert, 2003).
Such phenomena occurring after treatment are called pseudoprogression and pseudoresponse (Brandsma and van den Bent, 2009). They point to a problematic discrepancy between morphological MRI findings and true tumor behavior.
In the case of pseudoprogression, an increase in contrast enhancement on MRI is not associated with real tumor growth (Taal et al., 2008). In such situations, AA-PET, as a functional imaging modality, can help differentiate between real tumor progression and treatment-related changes. Table 2 gives an overview of studies evaluating PET in the follow-up of glioma and its ability to differentiate between tumor recurrence, pseudoprogression, and radiation necrosis.
Sophisticated MR techniques, like MR spectroscopy and perfusion imaging (Rock et al., 2004), are not yet standardized, and their reproducibility between different facilities is limited. In addition, MR spectroscopy has multiple methodological limitations that impede its use in clinical practice; these limitations are, however, beyond the scope of this article.
ADVANTAGES OF AMINO ACID PET
AA-PET (FET and MET) is often used when recurrence of the tumor is suspected on MRI after treatment. In this situation AA-PET has been shown to be superior to MRI in discriminating true tumor growth from treatment-related changes, with reported sensitivities of 75-100% and specificities of 60-100% (Pauleit et al., 2005; Tripathi et al., 2012). It helps to confirm the diagnosis of recurrence and is used for treatment planning in case of re-irradiation, as it enables better tumor delineation. 18F-FET has also been shown to predict treatment response after radiotherapy: PET responders showed a significantly longer overall survival than non-responders (Piroth et al., 2011), and reduced AA uptake in brain tumors under therapy correlates with treatment response (Nariai et al., 2005).
It has been shown that target volume delineation for re-irradiation according to MRI can vary considerably (Grosu et al., 2005a); in this situation an AA-PET should be used, since target volume delineation according to AA-PET is much more reliable and less variable between different observers (Grosu et al., 2005a). [Table 2 excerpt - Mullins et al. (2005): individual patterns of enhancement are not enough to distinguish necrosis from predominant tumor progression; Rachinger et al. (2005): for patients with glioma undergoing multimodal treatment or various forms of irradiation, conventional follow-up with MRI is insufficient to distinguish between benign side effects of therapy and tumor recurrence; Dhermain et al. (2010): advanced MRI and PET imaging for assessment of treatment response in patients with gliomas.] Current European guidelines have determined cut-off values for semiquantitative PET analyses (SUV of tumor compared to SUV of healthy brain tissue) to be applied in various clinical settings depending on the tracer that is used (Vander Borght et al., 2006). For example, the current cut-off value of the tumor-to-background uptake ratio for differentiating neoplastic brain tissue from healthy surrounding tissue with [18F]FET has been set at 1.6. This makes it especially useful for integration into RT planning, since it enables semi-automated tumor delineation based on threshold values for FET uptake, and it sets a vantage point for prospective studies.
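For illustration only, a semi-automated delineation step of this kind amounts to dividing the voxel-wise uptake by a background estimate and thresholding the resulting tumor-to-background ratio (TBR). The sketch below assumes a NumPy array suv of standardized uptake values and a predefined mask bg_mask of healthy reference tissue; the 1.6 cut-off is the guideline value for [18F]FET quoted above, while the choice of background region and any smoothing are assumptions of this sketch.

```python
import numpy as np

def fet_tbr_mask(suv, bg_mask, cutoff=1.6):
    """Boolean mask of voxels whose tumor-to-background ratio reaches the cut-off."""
    background = suv[bg_mask].mean()   # mean uptake of the healthy reference region
    tbr = suv / background             # voxel-wise tumor-to-background ratio
    return tbr >= cutoff               # candidate biological tumor volume for RT planning
```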
Consequently, preliminary studies show a significantly longer overall survival in patients who were irradiated on the basis of PET or SPECT compared with patients treated on the basis of MRI-based RT planning alone (Grosu et al., 2005b). Further studies are needed, but these data suggest that AA-PET might contribute substantially to the improvement of patient care.
LIMITATIONS OF PET
Numerous disadvantages of PET must be taken into account: while the method itself is not associated with adverse effects due to the use of only trace amounts of physiologic amino acids, it leads to a certain radiation exposure to the patient. However, radiation dosage received from modern PET imaging (ranging from 2 to 6 mSv), is negligible against the background of the RT dose delivered for the treatment and the poor prognosis of the disease at hand.
Furthermore, the low spatial resolution of PET studies is a limitation: current scanners achieve about 4-8 mm compared to 1 mm for MRI. This can lead to false-negative findings, as PET might not detect very small lesions. On the other hand, the clinical relevance of lesions smaller than 5 mm remains debatable.
Positron emission tomography tracers labeled with carbon-11 have a short half-life, and a cyclotron is needed nearby for their manufacture. With the introduction of fluorine-18-labeled tracers, this disadvantage has, however, been eliminated.
Expertise is needed for the interpretation of PET data. The diagnostic value of amino acid uptake in brain tissue depends on multiple factors: the tumor-to-brain ratio for system L transport substrates can be relatively low, as mentioned above (Vander Borght et al., 2006), and unspecific uptake is possible shortly after operation or biopsy. Hence, due to the functional rather than anatomical nature of PET studies, specificity is generally limited, as PET does not always allow differentiation of pathologic amino acid uptake (tumor) from unspecific uptake. For example, the [18F]FET signal is physiologically increased in vascular malformations or the venous sinuses, which, in the case of the sphenoid sinus, can mimic tumor extension to the contralateral hemisphere; the [11C]MET signal is increased, e.g., in the lacrimal gland. Thus, MRI will remain indispensable for tumor diagnostics and for the interpretation of AA-PET.
Much hope therefore lies in the advent of combined MR/PET scanners, which will allow superior diagnostic power in a single examination, minimizing time consumption and patient discomfort (Judenhofer et al., 2008).
SUMMARY
In brain tumors, PET is currently recommended when MRI is inconclusive, especially in the setting of post-therapeutic care. PET shows a greater specificity in differentiating tumor from post-treatment changes, while MRI remains indispensable for the evaluation of morphological features.
Positron emission tomography has become an important imaging technique to improve the definition of the target volume for irradiation and to decrease the inter-observer variability (Grosu et al., 2005a). There is also evidence that using PET for RT planning improves overall survival (Grosu et al., 2005b). This has yet to be evaluated in larger randomized studies.
Positron emission tomography is becoming increasingly established as an imaging tool in the assessment of treatment response, recurrence, and follow-up of malignant glioma patients.
Combination of PET and MRI imaging into one study (PET/MRI) could further improve patient care and facilitate therapy management.
"Medicine",
"Physics"
] |
Data Mining for Material Feeding Optimization of Printed Circuit Board Template Production
Improving the accuracy of material feeding for printed circuit board (PCB) template orders can reduce the overall cost for factories. In this paper, a data mining approach based on a multivariate boxplot, the multiple structural change model (MSCM), neighborhood component feature selection (NCFS), and artificial neural networks (ANN) was developed for scrap rate prediction and material feeding optimization. Scrap-rate-related variables were specified and 30,117 order samples were exported from a PCB template production company. The multivariate boxplot was developed for outlier detection; MSCM was employed to explore the structural change of the samples, which were finally partitioned into six groups; and NCFS and ANN were utilized to select scrap-rate-related features and construct prediction models for each group of samples, respectively. The performance of the proposed model was compared with manual feeding and a plain ANN, and the results indicate that the approach is clearly superior to the other two methods: it reduces the surplus rate and the supplemental feeding rate simultaneously and thereby lowers the comprehensive cost of raw material, production, logistics, inventory, disposal, and delivery tardiness compensation.
Introduction
The printed circuit board (PCB) is found in practically all electrical and electronic equipment (EEE) and is the base of the electronics industry [1]. Due to increased competition and market volatility, demand for highly individualized products promotes rapid growth of PCB orders designed with many specialized features but short delivery times [2,3]. Customer-oriented small-batch production is typically employed by factories with many PCB template orders; it differs from mass production and therefore confronts companies with serious challenges, of which the optimization of material feeding is one of the most critical.
The scrap rate and material feeding area of each PCB template order are difficult to determine accurately in advance of production. In practice, many factories experience large fluctuations in both surplus rate and supplemental feeding rate because empirical manual feeding depends heavily on operator experience and knowledge. Individualized surplus template products must be placed in inventory or directly destroyed, while frequent supplemental material feeding brings additional production cost and delivery tardiness compensation. This motivates us to explore the pattern of historical orders through a data mining (DM) approach in order to support more reasonable, automatic material feeding for new orders and thereby reduce the comprehensive cost caused by excessive or underestimated material feeding before production.
The general process of DM, also known as knowledge discovery in databases (KDD), includes problem clarification, data preparation, preprocessing, DM in the narrow sense, and interpretation and evaluation of results [4]. DM in the narrow sense, as a step in the KDD process, consists of applying data analysis and a particular discovery algorithm within an acceptable computational efficiency limit [4]. DM tasks can be classified into two groups, descriptive and predictive [4,5]. The descriptive function of DM mainly aims to explore the potential or hidden rules, characteristics, and relationships (dependency, similarity, etc.) that exist in the data, such as generalization, association, sequence pattern mining, and clustering [4][5][6][7][8]. The predictive functions of DM are usually selected to analyze relevant trends or laws in the data in order to predict a future state; they include classification, prediction, and time series analysis [4][5][6][7][8]. To achieve these goals, DM solutions employ a wide variety of enabling techniques and specific mining techniques to both predict and describe interpretable and valuable information [4,5,[8][9][10]. The enabling techniques mainly refer to the methods for data cleaning, data integration, data transformation, and data reduction that support the implementation of DM in the narrow sense, while specific mining techniques, like regression, support vector machines, and artificial neural networks (ANN), are the approaches used to explore useful knowledge from massive data [4,9]. The scrap rate prediction and material feeding optimization of PCB template production can be taken as an application of the predictive function of DM, and the specification of scrap-rate-related features, the identification of features which affect the scrap rate significantly, and the related mining techniques should be carefully studied. Moreover, many features (e.g., the required panel) have a structural-change influence on the scrap rate according to empirical knowledge, and the enabling and mining techniques as well as the interpretation and evaluation steps in DM should be adjusted accordingly.
The details of enabling techniques, DM applications for different manufacturing tasks and industries, patterns in the use of specific mining techniques, application performance, and the software used in these applications have been widely studied; one can refer to [4][5][6][7][8][9][10] for comprehensive reviews. Electronic product manufacturing industries have also exploited several DM methods for summarization, clustering, association, classification, prediction, and so on [5,8], and many of these are closely related to the PCB manufacturing industry. Tseng et al. employed Kohonen neural networks, decision trees (DT), and multiple regression to improve the accuracy of work-hour estimation based on PCB design data, and the performance clearly exceeds the conventional method of regression equations [11]. Tsai et al. developed three hybrid approaches, including ANN-genetic algorithm (GA), fuzzy logic-Taguchi, and regression analysis-response surface methodology (RSM), to predict the volume and centroid offset responses and optimize parameters for micro ball grid array (BGA) packages during the stencil printing process (SPP) for component assembly on PCBs; the confirmation experiments show that the proposed fuzzy-logic-based Taguchi method outperforms the other two methods in terms of signal-to-noise ratios and the process capability index [12,13]. Some other approaches, like support vector regression (SVR) and mixed-integer linear programming, have also been developed for the parameter optimization of SPP [14]. Haneda et al. [15] employed variable cluster analysis and a k-means approach to help engineers determine appropriate drilling conditions and parameters for PCB manufacturing. DM-based defect (fault) diagnosis and quality control during manufacturing have also been widely studied [5,7], and many algorithms, like adaptive genetic algorithm-ANN [16] and DT [17], have been developed for defect (fault) diagnosis in PCB manufacturing.
Marketing and sales is another widely investigated direction of DM application in the PCB industry. Success in forecasting and analyzing sales for given goods or services can mean the difference between profit and loss for an accounting period [18]. Many DM-based methods, like k-means clustering combined with fuzzy neural networks, fuzzy case-based reasoning, and weighted evolving fuzzy neural networks, have been developed to select a combination of key factors with the greatest influence on PCB marketing and then forecast future PCB sales. Tavakkoli et al. [22] combined SVR, the Bat metaheuristic, and the Taguchi method to predict future PCB sales, and a performance comparison indicates that the accuracy of the proposed hybrid model is better than GA-SVR, particle swarm optimization-SVR, and classical SVR. Hadavandi et al. hybridized fuzzy logic with GA and k-means to extract useful information patterns from sales data, and the results show that the proposed approach outperforms previous approaches [18].
However, quality-related research for PCB manufacturing mainly focuses on a single operation of the manufacturing process with the purpose of yield improvement [12][13][14][15], and to the best of our knowledge there are few studies on material feeding optimization for PCB production using a DM mechanism. Meanwhile, the structural change of the studied problem and the corresponding change of relevant features have seldom been considered during the mining procedure. ANN-based approaches, the most frequently used DM methods and the ones also employed in this study, can exploit nonlinear patterns in different problems with reasonable results; however, a problem divided into different subproblems according to structural change typically requires a different ANN architecture and different learned link weights based on different input features for each subproblem, which is difficult for a single ANN to learn without reasonable preprocessing.
In this paper, a data mining approach (MSCM-ANN) is presented to establish the scrap rate prediction model and optimize material feeding of PCB orders considering the structural-change influence, based on the use of a multivariate boxplot, the multiple structural change model (MSCM), neighborhood component feature selection (NCFS), and ANN. The comparison of MSCM-ANN to ANN and manual feeding is conducted to verify the performance of the proposed approach. The rest of the paper is organized as follows. In Section 2, the variable specification and sample data are described. The methodology, including the multivariate boxplot, MSCM, NCFS, ANN, and performance indicators, is presented in Section 3, followed by experimental results and discussion in Section 4. Lastly, conclusions are drawn in Section 5.
Variables and Sample
The data used in this study were collected from Guangzhou FastPrint Technology Co., Ltd. A total of 56 variables inherited from the enterprise resource planning system, combined with derived variables, were selected and specified in Table 1, in which variables 40 to 56 are the statistical results of the manual feeding adopted by FastPrint. The units in a panel, required quantity/panel/area, and delivery unit area can be taken not only as statistical items but also as feature candidates for the MSCM-ANN and ANN models.
Set and unit are two types of delivery unit, whereas the panel is the production unit that will be partitioned into either sets or units before delivery, depending on the requirements of customers. (Table 1 note: the historical qualified rate covers the Qualr for the same order number in the past 2 years, ranging from 8.824% to 100%; for new orders having no Hquar, it is replaced by the Qualr of orders having the same layer number and surface finishing operation during the past 2 years.) The relation between set, unit, and panel specified in Table 1 is illustrated in Figure 1, in which each panel consists of 10 units in the PCB order. Suppose the customer's required quantity and required panel of the PCB order given in Figure 1 are 90 units and 9 panels, respectively. If the initial feeding is 100 units (10 panels) but finally ends up with 95 qualified units due to the scrap rate (i.e., (100 - 95)/100 x 100% = 5% in this example) after production, then the surplus quantity is 5 units (= 95 - 90), and therefore feeding 10 panels is more reasonable to reduce the redundancy of the customized order. Conversely, feeding only 9 panels initially would result in supplemental feeding because of the scrap rate.
On this basis, 30,117 samples of orders placed between October 31, 2015, and October 31, 2016, were exported after careful auditing for erroneous and missing values. The number of orders for each required panel is illustrated in Figure 2. It can be seen that the required panel is less than 30 in most cases, which represents typical customer-oriented small-batch template production in the PCB industry.
Methodology
The main flow of the proposed approach (MSCM-ANN) is presented in Figure 3, and the various aspects of MSCM-ANN are discussed in detail in the following subsections. The multivariate boxplot, the MSCM-based partition, and neighborhood component feature selection are the enabling techniques of DM that account for the structural-change influence, and the ANN-based prediction model is the mining technique used to predict the scrap rate; in addition, the predicted scrap rate is transformed into the surplus rate and the supplemental feeding rate. The performance of the proposed MSCM-ANN is compared with ANN and manual feeding based on the same 29,157 samples remaining after outlier detection on the original 30,117 samples. Some statistical results of the manual feeding are given in Table 1 by variables 40 to 56. MSCM-ANN and ANN were implemented in Matlab version 2017a.
Multivariate Boxplot-Based Outlier Detection.
Identification and removal of outliers is a part of the data screening process that should be done routinely before analysis [23,24]. There are various methods of outlier detection: some are graphical, such as the normal probability plot, and others are model-based approaches which assume that the data come from a normal distribution [24]. The boxplot is a hybrid of the two mechanisms for exploring both symmetric and skewed quantitative data, and it can also identify infrequent values in categorical data. Figure 4 shows a description of the boxplot; an outlier can be defined as any observation outside the range [q1 - 1.5 IQR, q3 + 1.5 IQR], where IQR = q3 - q1 is the interquartile range containing the central 50% of the data, so that a value is a lower outlier if it is smaller than q1 - 1.5 IQR and an upper outlier if it is larger than q3 + 1.5 IQR. Detection of outlier samples according to the scrap rate, the target variable of the prediction here, can reduce the impact of accidents that may be caused by machine breakdown, wrong operation, and so on. However, scrap-rate-related outliers are influenced by multiple variables with structural change, so the samples are not guaranteed to follow a normal distribution. A modification of the boxplot, called the multivariate boxplot here, is therefore developed to identify and discard outliers; the main procedure is described in Algorithm 1.
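A minimal sketch of the grouped IQR rule described above, assuming a pandas DataFrame with a scrap_rate column and a grouping feature such as required_panel (column names are placeholders), is given below; the paper's Algorithm 1, which is not reproduced in this excerpt, presumably applies such a rule per combination of feature values.

```python
import pandas as pd

def iqr_outlier_mask(df, target="scrap_rate", by="required_panel", k=1.5):
    """Flag rows whose target value lies outside [q1 - k*IQR, q3 + k*IQR] within each group."""
    def flag(group):
        q1, q3 = group[target].quantile([0.25, 0.75])
        iqr = q3 - q1
        return (group[target] < q1 - k * iqr) | (group[target] > q3 + k * iqr)
    return df.groupby(by, group_keys=False).apply(flag)
```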
Multiple Structural Change Model-Based Sample Partition.
The required panel has significant influence on the scrap rate according to expert experience and initial analysis.The average scrap rate of the orders with different required panel is illustrated in Figure 5.The curve shows declining tendency when the required panel is less than 9 but presents great fluctuations when the required panel is larger than 30.The multiple structural change of average scrap rate versus the required panel may require separate features and prediction models to improve the prediction accuracy.
MSCM was employed to explore the multiple structural changes of the samples and to partition them. MSCM was initially developed by Bai and Perron [25,26] to address multiple linear regression (MLR) with multiple structural changes over an ordered (time-series-like) index, using least squares to detect the number of break points and estimate their positions. Here, the required panel in ascending order is treated as the ordered index, and the scrap rate is taken as the regression objective. The MLR of the scrap rate with m breaks (m + 1 regimes) can then be expressed as y_t = x_t' β_j + u_t, for t = T_{j-1} + 1, ..., T_j and j = 1, ..., m + 1, with the convention T_0 = 0 and T_{m+1} = T. In this model, y_t is the observed scrap rate and x_t is the vector of independent variables, of which only the layer number, the required panel, and the number of operations are considered here; β_j (j = 1, ..., m + 1) are the corresponding coefficient vectors, and u_t is the disturbance. The break points T_1, ..., T_m are explicitly treated as unknowns, and the purpose is to estimate the unknown coefficients together with the break points when T observations (samples) on (y_t, x_t) are available.
Significance tests for the structural changes are conducted using the statistics supF(k), UDmax, WDmax, and supF(l + 1 | l). The supF(k) statistic tests the null hypothesis of no break (m = 0) against the alternative of m = k breaks. UDmax = max_{1<=k<=M} supF(k) and WDmax = max_{1<=k<=M} a_k supF(k) are then used to check whether there is any structural change in the model, where a_k is a weight based on the p-value of the supF(k) test and M is the maximum number of breaks considered. The number of break points is determined according to the sequential statistic supF(l + 1 | l) (l >= 1), which tests the null hypothesis of l break points against the alternative of l + 1 break points. The samples can therefore be partitioned into subgroups according to the estimated number of break points and their positions [25,26]. All estimations and hypothesis tests are conducted with the Matlab code provided by Qu [27].
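The Bai-Perron estimator minimizes the total sum of squared residuals over all admissible break configurations, which is solved efficiently by dynamic programming in the original papers and in the Matlab code of Qu [27]. The toy sketch below only illustrates the idea for a single break, searching exhaustively for the split that minimizes the combined OLS residual sum of squares; X is assumed to already contain an intercept column and the regressors, sorted in ascending order of the required panel.

```python
import numpy as np

def single_break_ssr(y, X, min_seg=30):
    """Index of the single break point minimizing the total OLS sum of squared residuals."""
    def ssr(y_seg, X_seg):
        beta, resid, *_ = np.linalg.lstsq(X_seg, y_seg, rcond=None)
        return resid[0] if resid.size else np.sum((y_seg - X_seg @ beta) ** 2)
    n = len(y)
    return min(range(min_seg, n - min_seg),
               key=lambda b: ssr(y[:b], X[:b]) + ssr(y[b:], X[b:]))
```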
Neighborhood Component Feature Selection.
It is necessary to employ some feature selection methods to remove irrelevant and redundant features to reduce the complexity of analysis and the generated models and also improve the efficiency of the whole modeling processes [28][29][30].Wrappers, embedded, and filter are three types of the approaches developed for feature selection [31].In this study, neighborhood component feature selection (NCFS) was applied for each group of the samples.NCFS is an embedded method for feature selection with regularization to learn feature weights for minimization of an objective function that measures the average leave-one-out regression loss over the training data [32].
Given N observations S = {(x_i, y_i), i = 1, 2, ..., N}, where x_i are the feature vectors and y_i are the responses (scrap rates), the aim is to predict the response given the training set S. Consider a randomized regression model that randomly picks a point Ref(x) from S as the "reference point" for x and sets the response value at x equal to the response value of the reference point. Now consider the leave-one-out application of this randomized regression model, that is, predicting the response for x_i using the data in S^{-i} = S \ {(x_i, y_i)}. The probability that point x_j is picked as the reference point for x_i is p_ij = κ(d_w(x_i, x_j)) / Σ_{l≠i} κ(d_w(x_i, x_l)), where d_w(x_i, x_j) = Σ_{r=1}^{p} w_r^2 |x_ir - x_jr| is the weighted distance, w_r (r = 1, 2, ..., p) are the feature weights, and κ is the kernel function. Following [32], κ(z) = exp(-z/σ), where σ is set to 1 after standardizing the dependent variable to have zero mean and unit standard deviation.
Let ŷ_i be the response value of the randomized regression model and y_i the actual response for x_i, and let l(y_i, ŷ_i) be a loss function that measures the disagreement between ŷ_i and y_i; its average value for observation i is l_i = Σ_j p_ij l(y_i, y_j). After adding the regularization term λ Σ_{r=1}^{p} w_r^2, the objective function for minimization is F(w) = (1/N) Σ_i l_i + λ Σ_r w_r^2. The loss function l(y_i, y_j) used here is the absolute deviation |y_i - y_j|, averaged over the samples. The main procedure of NCFS for regression feature selection is summarized in Algorithm 2 (inputs: sample size N, initial step length α, regularization parameter λ, and a small positive convergence constant; initialization: standardize the features to zero mean and unit standard deviation, set w^(0) = (1, ..., 1), and iteratively update the weights until convergence).
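To make the weighting scheme concrete, the sketch below evaluates the leave-one-out NCFS objective for a given weight vector on a small sample (memory cost grows with the square of the sample size). The gradient-based optimization of the weights, performed in Algorithm 2 and by fsrnca in Matlab, is omitted, and the symbol names follow the reconstruction above rather than the authors' code.

```python
import numpy as np

def ncfs_objective(X, y, w, lam=1e-3, sigma=1.0):
    """Leave-one-out NCFS regression objective for feature weights w (smaller is better)."""
    d = np.abs(X[:, None, :] - X[None, :, :])       # pairwise |x_ir - x_jr|
    dist = (d * w ** 2).sum(axis=2)                 # weighted distances d_w(x_i, x_j)
    kern = np.exp(-dist / sigma)
    np.fill_diagonal(kern, 0.0)                     # leave-one-out: i never references itself
    p = kern / kern.sum(axis=1, keepdims=True)      # reference-point probabilities p_ij
    loss = np.abs(y[:, None] - y[None, :])          # l(y_i, y_j) = |y_i - y_j|
    return (p * loss).sum(axis=1).mean() + lam * np.sum(w ** 2)
```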
Neural Network-Based Prediction Model and Transformation.
The most frequently used DM method for prediction is the ANN. An ANN is a network of neurons, each consisting of a propagation function and an activation function, which receives inputs, changes its internal state (activation) according to the inputs, and produces an output depending on the input and activation [33]. Despite its black-box mechanism, the ANN has been widely used in prediction problems with reasonable results, as scrutinized in the literature [34]. ANNs, with their successful record in forecasting diverse problems, are among the most accurate and trustworthy models in use: their ability to learn from incomplete datasets in order to predict the unseen part of the data, their capability of modeling a problem with the least available data, and their ability to approximate almost all continuous functions make them attractive for prediction problems [34]. Köksal et al. [5] reviewed the reported performance of DM methods and pointed out that ANN performance is mostly compared to that of classical statistical modeling methods such as multiple linear regression (MLR), and that better performance of ANN can naturally be observed on multidimensional data, since ANNs are powerful tools for modeling nonlinear relationships.
In this study, a three-layer back-propagation ANN was used to predict the scrap rate, which was then transformed to determine the predicted surplus rate and supplemental feeding rate, the two performance measures of greatest concern for material feeding optimization. The architecture of the ANN was set by trial and error, and the number of nodes in the hidden layer was set to max(3, (n + 1)/20), where n is the number of input features. The ANN-based architecture for scrap rate prediction and material feeding optimization is illustrated in Figure 6, in which a neuron j (in the hidden layer or the output layer) receives the outputs a_1, a_2, ..., a_{n^[l-1]} of the neurons of the previous layer connected to it, together with a bias b_j; the propagation function of neuron j is z_j^[l] = Σ_{i=1}^{n^[l-1]} w_ij a_i^[l-1] + b_j, where the superscript [l] denotes the l-th layer and n^[l-1] is the number of units of the (l - 1)-th layer. The result of the propagation function is further processed by the sigmoid activation function, a = σ(z) = 1/(1 + e^{-z}).
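A comparable three-layer network with sigmoid (logistic) hidden units can be set up, for example, with scikit-learn. The hidden-layer sizing follows the trial-and-error rule quoted above; the solver, iteration limit and random seed are assumptions of this sketch rather than the authors' exact configuration.

```python
from sklearn.neural_network import MLPRegressor

def build_scrap_rate_ann(n_features):
    hidden = max(3, (n_features + 1) // 20)   # hidden-layer sizing rule from the text
    return MLPRegressor(hidden_layer_sizes=(hidden,),
                        activation="logistic",  # sigmoid units
                        solver="adam", max_iter=2000, random_state=0)
```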
The transformation is conducted according to Eqs. (4)-(5) under the hypothesis that each feeding panel of an order has the same scrap probability and that the scrap rate of an order does not change with the predicted feeding area. (Figure 6: ANN-based architecture for the prediction of scrap rate and material feeding optimization; one can refer to Tables 1 and 2 for the notation.) On this basis, the predicted surplus rate and supplemental feeding rate can be calculated according to Eqs. (6)-(8); the related variables, the predicted scrap area (Scraa_Pd), predicted surplus rate (Surpr_Pd), predicted supplemental feeding frequency for an order (Supff_Pd), and predicted supplemental feeding rate (Supfr_Pd), are presented in Table 2.
The predicted feeding quantity and feeding panel for each order are determined accordingly, in which Duap is the number of delivery units in a panel given in Table 1.
The predicted feeding quantity, area, and scrap area are then revised accordingly, and the predicted surplus rate of each order and the predicted supplemental feeding frequency for each order are defined on this basis, where Reqa is the required area defined in Table 1. The predicted surplus rate is calculated over the orders with Supff_Pd = 0, i.e., over the number of samples that require no supplemental feeding. The surplus rate of the orders with Supff_Pd = 1 is not considered here, because the surplus area cannot be determined before the supplemental feeding is finished; in practice, however, their surplus rate is always lower than the one defined in Eq. (10). The supplemental feeding rate is then defined over all samples, and the surplus rate and supplemental feeding rate of the manual feeding are computed in the same way.
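Since Eqs. (4)-(10) themselves are not reproduced in this excerpt, the sketch below only illustrates the bookkeeping they describe under the stated hypothesis that every panel of an order carries the same scrap probability: feed the smallest number of panels whose expected qualified output covers the requirement, count whatever exceeds the requirement as surplus, and count a supplemental feeding when the qualified output falls short. All formulas here are plausible reconstructions, not the authors' definitions.

```python
import math

def plan_feeding(required_qty, units_per_panel, predicted_scrap_rate):
    """Reconstruction: panels to feed so that the expected good units cover the requirement."""
    good_per_panel = units_per_panel * (1.0 - predicted_scrap_rate)   # assumes scrap rate < 1
    return max(1, math.ceil(required_qty / good_per_panel))

def evaluate_feeding(required_qty, units_per_panel, fed_panels, actual_scrap_rate):
    """Reconstruction: surplus rate and supplemental-feeding flag for one finished order."""
    qualified = math.floor(fed_panels * units_per_panel * (1.0 - actual_scrap_rate))
    if qualified < required_qty:
        return 0.0, True                                   # shortfall: supplemental feeding needed
    return (qualified - required_qty) / required_qty, False
```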
The supplemental feeding rate for all the samples can be defined as The surplus rate and supplemental feeding rate of the manual feeding can be computed by 3.5.Performance Indicators.In order to evaluate the effectiveness of the model, the following evaluation indicators are used [35].The mean squared error (MSE) is the average of square sums between predicted data ŷ and original data , which can be described as The mean absolute error (MAE) is the average of the sum of the absolute difference between observed values and estimated values.It can be expressed as The mean absolute percentage error (MAPE) is the average of the sum of the normalized absolute difference between observed values and estimated values.The formula is written as where is the number of samples.The final purpose is to determine the feeding panel for each order on the basis of the predicted scrap rate, and and ŷ are replaced by the least feeding panel and predicted feeding panel, respectively.
The deviations of the predicted feeding and the manual feeding from the least feeding can then be computed as the predicted feeding panel minus the least feeding panel and the manual feeding panel minus the least feeding panel for each sample i, 1 <= i <= N, respectively, and an error diagram can be drawn as the distribution of these deviations over all samples. Combined with the aforementioned predicted surplus rate and supplemental feeding rate for material feeding optimization, the final performance is evaluated by these five indicators.
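The three accuracy indicators and the feeding-panel deviations can be computed directly; here y is the least feeding panel and y_hat the predicted (or manual) feeding panel, as in the text:

```python
import numpy as np

def feeding_metrics(y, y_hat):
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    err = y_hat - y
    return {"MSE": np.mean(err ** 2),
            "MAE": np.mean(np.abs(err)),
            "MAPE": np.mean(np.abs(err / y)),   # y > 0 since at least one panel is required
            "deviation": err}                   # basis of the error diagrams (Figures 10 and 11)
```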
Results and Discussion
According to the multivariate boxplot approach described in Section 3.1, 960 outliers were trimmed and 29,157 samples left.Figure 7 shows the boxplot of the scrap rate for different value of the required panel and the layer number.Figure 7 illustrates that the outliers are shifted by the values of the required panel and the layer number, and therefore outlier detection considering different feature values is more reasonable.
Significance tests for the break points of the samples according to UDmax, WDmax, and supF(l + 1 | l) were conducted based on the default parameters given in [27], and the results are given in Table 3. The values of UDmax and WDmax indicated that the samples have significant structural change at the 5% level, and the values of supF(l + 1 | l) showed that 5 break points are significant. The final break positions of the samples, sorted in ascending order of the required panel, were 8,935, 13,995, 17,003, 21,791, and 27,491. The samples were therefore partitioned into 6 groups with indexes 1-6, corresponding to the samples with required panels of 1, 2, 3, 4-6, 7-19, and greater than 19, respectively. The samples in group 6 could still be segmented further, since supF(5 | 4) is greater than the critical value at the 5% level [25]. However, the sample size of group 6 was small (1,666 samples) and its average scrap rate fluctuated greatly, as can be seen from Figures 1 and 6(a); thus no further partition of the samples in group 6 was conducted.
NCFS was then conducted for each group of samples, with the initial step length set to 0.9 and the small positive constant set to 10^-4. Five-fold cross-validation, instead of a single test, was conducted to optimize the regularization parameter λ, initialized with 20 randomly selected values between 0 and 1.2 x 10^-3 according to [32], and the value that minimized the mean loss across the cross-validation folds was selected to fit NCFS. Figure 8(a) shows the loss performance for the 20 different λ values for the group of samples with required panels between 7 and 19; the fourth value, corresponding to the lowest mean loss, was selected as the regularization parameter, and Figure 8(b) illustrates the indexes of the features selected with this λ. The final selected features for the different groups of samples, and for all samples as a whole, are given in Table 4. The differences between the selected features indicate that samples with different features may lie in different regimes. Nevertheless, layer number, Rogers material, number of operations, Huawei standard, plug hole with resin, second drilling, back drilling, Cu/Ni/Au pattern plating, gold finger plating, gold plating, delivery unit area, and historical qualified rate are critical features for most of the samples, which means that different values of these features strongly influence the scrap rate in general; these selected features also match well with the experience of experts from the factory.
Mutually exclusive splits of 70%, 15%, and 15% of the samples were randomly selected as training, validation, and test data for each partitioned group, and the sample sizes for each group are given in Table 5. The prediction models of MSCM-ANN were trained, validated, and tested for each group of samples over 5 runs based on the corresponding selected features, while the ANN was trained, validated, and tested on all samples based on the selected features listed in the last column of Table 4. A comparison of the average MSE, MAE, and MAPE of the 5 runs for MSCM-ANN, ANN, and the manual feeding is given in Table 6; the results indicate that both MSCM-ANN and ANN have obvious superiority in reducing the three indicators, and that MSCM-ANN achieves smaller MSE, MAE, and MAPE than ANN, which means that models that take the structural change into account can further improve the precision. The average results for each group of samples achieved by MSCM-ANN, ANN, and manual feeding are given in Table 7, and the surplus rate and supplemental feeding rate obtained by the three approaches are given in Table 8. The following results can be drawn accordingly: (1) Both MSCM-ANN and ANN reduce the surplus rate and the supplemental feeding rate simultaneously compared with the manual feeding, as shown in Table 8: MSCM-ANN achieved an 11.96% predicted surplus rate and an 11.91% predicted supplemental feeding rate, while ANN achieved 15.16% and 12.69% for the two performance indicators, respectively. The better performance of MSCM-ANN may be attributed to more precisely selected features, a more reasonable ANN architecture, and well-trained models for each partitioned sample group based on MSCM considering the structural change influence, whereas ANN cannot explore the pattern within each partitioned group.
(2) The results in Table 8 indicate that MSCM-ANN and ANN achieved lower surplus rates but relatively higher supplemental feeding rates as the sample group, i.e., the interval of required panels, increases. The main reason is that the required panel, which is rounded up to the nearest integer based on the required quantity, results in high redundancy when the number of required panels is small, and this in turn leads to a lower supplemental feeding rate. Taking the PCB order in Figure 1 as an example, if the required quantity is only 4 units, feeding one panel with a 20% scrap rate causes a 100% surplus rate ((10 - 2 - 4)/4 x 100%), and supplemental feeding would not be needed until the scrap rate exceeded 60%. In contrast, when the required panel increases, a lower surplus rate but a relatively higher supplemental feeding rate is obtained, because large fluctuations of the scrap rate may then cause an insufficient number of feeding panels and therefore a high supplemental feeding frequency.
The predicted scrap rate and predicted supplemental feeding rate, averaged over the training, validation, and test samples in each group and obtained by MSCM-ANN, are illustrated in Figure 9 (note: Surpr_Pd and Supfr_Pd are the predicted surplus rate and supplemental feeding rate, respectively, and can be obtained from the definitions specified in Section 3.5 and the data provided in Table 7). The figure indicates that MSCM-ANN is stable in determining the surplus rate and supplemental feeding rate for each group of samples in most cases. The relatively large deviation of the predicted supplemental feeding rate between training and test for the samples in group 6 may be caused by the large fluctuation of the scrap rate across different orders; meanwhile, the relatively small sample size is detrimental to the stability of the model. Figures 10(a) and 10(b) present the error diagrams of the results obtained from the manual feeding and of the predicted results of an MSCM-ANN run, respectively, for the samples in group 5. Figure 10(b) illustrates that the errors obtained by MSCM-ANN tend to be distributed with a mean value of 0.3 and a short tail for the training, validation, and test samples, while most of the errors obtained by the manual feeding are distributed with a mean value of 1.725 (Figure 10(a)), and the large positive tail indicates that manual feeding can easily lead to high redundancy after delivery of an order. The deviations between the manual feeding panel or the predicted feeding panel and the least feeding panel for the samples in group 5 are illustrated in Figure 11. It indicates that the predicted results in Figure 11(b) achieve a lower deviation in most cases compared with the manual feeding results in Figure 11(a), which yields fewer surplus panels and therefore reduces the cost of material, production, inventory, and disposal.
Figures 12(a) and 12(b) present the regression of the manual feeding panel and the predicted feeding panel versus the least feeding panel, respectively. The results indicate that the predicted feeding panel coincides better with the least feeding panel in Figure 12(b) than the manual feeding panel illustrated in Figure 12(a), and therefore the waste of surplus quantity and area can be reduced. The same coefficients and similar regression expressions obtained by MSCM-ANN for the training, validation, test, and all samples mutually verify the stability of the proposed approach.
Conclusions
Accurate determination of the number of feeding panels for each PCB template order can reduce the cost of material, production, logistics, inventory, disposal, and delivery tardiness compensation. In this paper, a data mining approach (MSCM-ANN) involving a multivariate boxplot, MSCM, NCFS, and ANN was developed to establish a scrap rate prediction model and to optimize material feeding for PCB template orders, taking the structural-change influence on the predicted scrap rate into account. The various aspects of the approach have been discussed in detail. Mean squared error, mean absolute error, and mean absolute percentage error, three prediction performance indicators, combined with the surplus rate and supplemental feeding rate, the two performance indicators of greatest practical concern for material feeding optimization, were used to evaluate the established models. The multivariate boxplot was adopted for scrap rate outlier detection considering the structural-change influence of different input features, while MSCM was applied to explore the multiple structural changes of the samples and partition them into 6 subgroups. NCFS and ANN were utilized for feature selection and scrap rate prediction model establishment for each group of samples, respectively. After comparing MSCM-ANN with ANN and the manual feeding, the following conclusions and contributions are highlighted.
(1) The proposed MSCM-ANN shows superior prediction accuracy on the training, validation, and test datasets, with the lowest MSE, MAE, MAPE, surplus rate, and supplemental feeding rate compared to ANN and the manual feeding. MSCM-ANN reduces the surplus rate and supplemental feeding rate from the 27.44% and 17.91% obtained by manual feeding to 11.96% and 11.91%, respectively, whereas ANN can only reduce them to 15.16% and 12.69%. The same coefficients and similar regression expressions of the predicted feeding panel versus the least required panel for the training, validation, test, and all samples mutually verify the stability of the proposed MSCM-ANN.
(2) The established model provides a new DM-based mechanism for the material feeding optimization of PCB template production, which, to the best of our knowledge, has seldom been studied. The application of the developed approach can replace empirical manual feeding and cut the comprehensive cost of raw material, production, logistics, inventory, disposal, and delivery tardiness compensation.
Figure 1: Structure of a PCB panel.
Figure 2: Number of orders for each required panel.
Figure 5: Average scrap rates with different required panel.
Figure 7: (a) Boxplot for different values of the required panel. (b) Boxplot for different values of the layer number.
Figure 8: (a) Mean loss performance for 20 different lambda (λ) values for samples with required panel between 7 and 19. (b) Feature selection based on NCFS with the lowest-loss lambda for samples with required panel between 7 and 19.
Figure 9: (a) Predicted scrap rate of MSCM-ANN for different samples. (b) Predicted supplemental feeding rate for different samples.
Figure 10: (a) Error diagram of the results from manual feeding for the samples in group 5. (b) Error diagram of predicted results obtained by MSCM-ANN for the samples in group 5.
Figure 11: (a) Deviation of samples between manual feeding panel and least feeding panel. (b) Deviation between predicted feeding panel and least feeding panel.
Table 2: Prediction results related variables.
Table 3: Significance test of break points (* indicates statistical significance at the 5% level).
Table 4: Selected features for different groups of the samples. Note: selected features are marked with ◊; the description of the features (variables) is specified in Table 1.
Table 5: Sample sizes of training, validation, and test data.
Table 6: Performance indicators achieved by different approaches.
Table 7: Predicted and real results of each group of samples.
Table 8: Comparison of surplus rate and supplemental feeding rate.
"Computer Science",
"Engineering"
] |
Maturation of White Adipose Tissue Function in C57BL/6j Mice From Weaning to Young Adulthood
White adipose tissue (WAT) distribution and WAT mitochondrial function contribute to total body metabolic health throughout life. Nutritional interventions starting in the postweaning period may impact later life WAT health and function. We therefore assessed changes in mitochondrial density and function markers in WAT depots of young mice. Inguinal (ING), epididymal (EPI) and retroperitoneal (RP) WAT of 21, 42 and 98 days old C57BL/6j mice was collected. Mitochondrial density [citrate synthase (CS), mtDNA] and function [subunits of oxidative phosphorylation complexes (OXPHOS)] markers were analyzed, together with gene expression of browning markers (Ucp1, Cidea). mRNA of ING WAT of 21 and 98 days old mice was sequenced to further investigate functional changes of the mitochondria and alterations in cell populations. CS levels decreased significantly over time in all depots. ING showed the most pronounced changes, including significantly decreased levels of OXPHOS complex I, II, and III subunits and gene expression of Ucp1 (PN21-42 and PN42-98) and Cidea (PN42-98). White adipocyte markers were higher at PN98 in ING WAT. Analyses of RNA sequence data showed that the mitochondrial functional profile changed over time from "growth-supporting" mitochondria focused on ATP production (and dissipation), to more steady-state mitochondria with more diverse functions and higher biosynthesis. Mitochondrial density and energy metabolism markers declined in all three depots over time after weaning. This was most pronounced in ING WAT and associated with reduced browning markers, increased whitening and an altered metabolism. In particular the PN21-42 period may provide a time window to study mitochondrial adaptation and effects of nutritional exposures relevant for later life metabolic health.
INTRODUCTION
Obesity prevalence is high in adults and considerably increased nowadays in children and adolescents (Ng et al., 2014). Childhood obesity increases the risk for early onset metabolic diseases, like type 2 diabetes mellitus (T2D) and cardiovascular disease (Reilly and Kelly, 2011). An important link between obesity and metabolic diseases is the metabolic function of the white adipose tissue (WAT), i.e., WAT health (Hammarstedt et al., 2012).
Mitochondrial density in adulthood appears to be strongly correlated to WAT health as shown by a reduced WAT mitochondrial density in obesity (Wilson-Fritch et al., 2004) and T2D (Choo et al., 2006). Nutrition may regulate WAT function as feeding a high fat diet reduced WAT mitochondrial density (Sutherland et al., 2008), a process that is already initiated after 5 days of western style diet (WSD) (Derous et al., 2015), while caloric restriction and diets enriched in poly-unsaturated fatty acids increased WAT mitochondrial density, oxidative capacity and biogenesis (Flachs et al., 2005;Nisoli et al., 2005). Dependent on their location in the body, WAT depots differ in their impact on metabolic health (Yang et al., 2008;Bjorndal et al., 2011). Visceral WAT is located in the abdominal cavity and visceral WAT mass is inversely correlated to total body insulin sensitivity and as such considered a risk factor for development of the metabolic syndrome (Pouliot et al., 1992;Ross et al., 2002). In contrast, subcutaneous WAT is located directly under the skin and is shown to have a higher oxidative capacity compared to the visceral depots in mice (Schottl et al., 2015).
Experimental evidence suggests that growth and distribution of WAT as well as mitochondrial density of WAT depots can be programmed by early life environmental factors. For example, maternal obesity, over-nutrition or undernutrition during pregnancy can all drive increased visceral adiposity and an adapted mitochondrial density in rodent offspring (Bruce et al., 2009;Jousse et al., 2014;Claycombe et al., 2016;Lecoutre et al., 2016). In addition, mild caloric restriction or a high fat diet exposure in the lactation period also programmed adult adiposity and metabolic health of pups (Mitra et al., 2009;Palou et al., 2010). Those studies show that suboptimal nutrient conditions in early life can have long-term metabolic consequences. Programming of the oxidative and storage capacity of WAT, which develops from the third trimester of gestation until adolescence (Spalding et al., 2008), may be an underlying mechanism.
The postnatal development of WAT includes differentiation from progenitor cells to fully developed adipocytes containing lipid droplets, a process starting before birth in subcutaneous depots and after birth in visceral depots (Han et al., 2011;Wang et al., 2013). After weaning WAT depots continue to grow and adipocytes increase in size (hypertrophy) and number (hyperplasia) in a depot specific manner (DiGirolamo et al., 1998). Furthermore, WAT depots develop postnatally from a white phenotype at PN10 to a brown phenotype at PN20 where the majority of the adipocytes are multilocular and express UCP1, after which these cells disappear again and are replaced by unilocular adipocytes which only express UCP1 upon cold-induction (Xue et al., 2007;Lasar et al., 2013;Birnbacher et al., 2018).
Many nutritional intervention studies, including postnatal programming studies, i.e., studies with a nutritional intervention in early life with the aim to improve adult metabolic health (Baars et al., 2016;Kodde et al., 2017;Bouwman et al., 2018;Fernandez-Calleja et al., 2018), take the postweaning period (around PN21) as starting point. Comprehensive and extended comparison of postweaning changes of markers for mitochondrial function and WAT browning in different WAT depots is crucial for the interpretation of those studies. Therefore, we here investigate early life changes in markers for mitochondrial density, function and browning in the developing WAT depots with the aim to substantiate and extend, in terms of number of markers and WAT depots analyzed, available research. To this end we collected inguinal (ING), epididymal (EPI) and retroperitoneal (RP) WAT, of 21, 42, and 98 days old mice, housed under standardized experimental conditions (ambient temperature) and comprehensively measured markers of mitochondrial density, function and browning as well as the effect of a WSD on these markers. In addition, we newly examined changes within mitochondrial functional profile by analyzing transcription of all established mitochondrial proteins and categorize them to function.
Study Design
Mice were kept at the animal facility of Intravacc (Bilthoven, Netherlands) under a 12 h light - 12 h dark cycle (lights on at 06:00 h). Room temperature and humidity were kept at a constant level (21 ± 2 °C and 50 ± 5%, respectively). This housing temperature was chosen to adhere to the most common temperature used for animal experiments. C57BL/6jOlaHsd breeders were purchased from Harlan (Envigo since 2015, Horst, Netherlands), acclimatized for 2 weeks, time mated and fed an American Institute of Nutrition-93G synthetic diet (AIN93G) (Reeves et al., 1993) during breeding, pregnancy and lactation. Within 2 days after birth, litters were culled to four males and two females and randomly assigned to a dam. At postnatal day 21 (PN21) female mice were killed, while male mice were weaned, housed in littermate pairs and continued on AIN93G until PN42. From PN42 until sacrifice at PN98 mice were fed AIN93M (Reeves et al., 1993) or WSD (containing 39 en% fat; diet composition in Table 1). Food and water were available ad libitum during the entire experimental period. A very limited amount of food was supplied the night before dissection to ensure that the animals were in a fasted state (approximately 8 h). Body weight and food intake were measured twice a week. At different time points (PN21, PN42, and PN98) mice were sacrificed to assess the development of WAT depots and the effect of the WSD challenge, resulting in the following experimental groups: (i) PN21 (n = 8), (ii) PN42 (n = 8), (iii) PN98-AIN (n = 11), and (iv) PN98-WSD (n = 11; Figure 1A). At dissection mice were anesthetized (isoflurane/N₂O/O₂), terminated by bleeding (eye extraction) and ING, EPI, and RP WAT were collected, weighed, snap frozen and stored at −80 °C.
In addition, RP WAT of PN42 was too small to analyze mitochondrial DNA levels.
Gene Expression
RNA of EPI, ING and RP WAT was isolated using Trizol (Thermo Fisher Scientific, Landsmeer, Netherlands) followed by purification with a RNeasy Mini Kit (Qiagen Benelux b.v., Zwijndrecht, Netherlands) including a DNase treatment with a RNase-free DNase Set (Qiagen Benelux b.v.) as previously described (Vanhoutvin et al., 2009). RNA quantity and chemical purity were assessed with the Nanodrop 2000 (Thermo Fisher Scientific) and integrity with the Agilent 2100 Bioanalyzer (Agilent, Santa Clara, CA, United States). The iScript cDNA synthesis kit (Bio-Rad, Veenendaal, Netherlands) was used according to the manufacturer's instructions. 6.25 ng cDNA was used as input for each qPCR reaction. SYBR Select Master Mix (Life Technologies Europe, Bleiswijk, Netherlands) was used according to the manufacturer's instructions and qPCR was performed with a QuantStudio 6 Flex Real-Time PCR System (Life Technologies Europe). mRNA expression of cell death-inducing DNA fragmentation factor, alpha subunit-like effector A (Cidea), Leptin (Lep), mesoderm specific transcript (Mest, also known as Peg1), delta-like 1 homolog (Dlk1, better known as Pref1, which will be used here) and uncoupling protein 1 (Ucp1) was analyzed relative to the mean expression of two reference genes, hypoxanthine guanine phosphoribosyl transferase (Hprt) and zinc finger, AN1-type domain 6 (Zfand6). For a complete list of primers used see Table 2. Normalization of qPCR data was performed using qbase+ (Biogazelle, Ghent, Belgium) based on the method of relative normalization as described (Hellemans et al., 2007; Kodde et al., 2017). Reference genes Hprt1 and Zfand6 were selected based on transcriptome data and their stability was checked by qPCR analyses. Primers for Ucp1 were purchased from Bio-Rad, all other primers from Biolegio (Biolegio, Nijmegen, Netherlands).
Mitochondrial DNA Density
Mitochondrial copy number was assessed by the ratio (ΔCt) between nuclear DNA (abundance of lipoprotein lipase (Lpl) DNA) and mitochondrial DNA (abundance of mitochondrial gene NADH dehydrogenase 1 (mt-Nd1) DNA) (Kaaman et al., 2007). Briefly, total DNA was isolated with the QIAamp DNA micro kit (Qiagen Benelux), following the instructions of the manufacturer. DNA quantity was determined with the Quant-iT PicoGreen dsDNA assay kit (Thermo Fisher Scientific). 10 ng input DNA was used for each qPCR reaction. Primer sequences are shown in Table 2.
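A minimal sketch of the ΔCt-based copy number estimate described above follows; the assumption of 100% amplification efficiency (a factor of 2 per cycle) and the diploid correction factor of 2 are illustrative conventions rather than details taken from Kaaman et al. (2007).

```python
def mtdna_copy_number(ct_nuclear_lpl, ct_mito_nd1):
    """Relative mtDNA copy number from qPCR cycle thresholds (Ct)."""
    # A smaller mt-Nd1 Ct than Lpl Ct means more mitochondrial template
    # per nuclear genome.
    delta_ct = ct_nuclear_lpl - ct_mito_nd1
    # Assuming ~100% PCR efficiency, each cycle corresponds to a factor of 2;
    # the leading 2 accounts for the two nuclear gene copies per diploid cell.
    return 2.0 * 2.0 ** delta_ct
```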
Data Analyses
mRNA Sequence Data
Data analysis included contrast analysis (R package limma) between PN21 and PN98, omitting transcripts with an FPKM (fragments per kilobase million) of zero in at least one of the samples and including transcripts with an average FPKM > 2 at PN21 or PN98. Principal Component Analysis (PCA) was performed to visualize the samples and results are reported in Supplementary Figure 1. Transcripts with a p-value below 0.05 were used for Ingenuity Pathway Analysis (Qiagen Bioinformatics, Aarhus, Denmark) and for the targeted analysis described below. The data set was examined for changes in brown and white (pre)adipocyte markers derived from Gesta et al. (2007) to gain insight into the change in cell population in ING WAT between PN21 and 98.
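The filtering criteria above map directly onto a small pandas sketch; the DataFrame layout, column grouping, and variable names are hypothetical.

```python
import pandas as pd

def filter_transcripts(fpkm: pd.DataFrame, pvals: pd.Series, groups: dict):
    """fpkm: transcripts x samples; groups: {'PN21': [...], 'PN98': [...]}
    sample columns; pvals: limma contrast p-value per transcript."""
    expressed = (fpkm > 0).all(axis=1)                    # FPKM > 0 in every sample
    abundant = ((fpkm[groups["PN21"]].mean(axis=1) > 2) |
                (fpkm[groups["PN98"]].mean(axis=1) > 2))  # mean FPKM > 2 at either age
    kept = fpkm.index[(expressed & abundant).to_numpy()]
    significant = (pvals.loc[kept] < 0.05).to_numpy()     # transcripts for pathway analysis
    return kept[significant]
```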
To further explore changes in mitochondrial function, a list of genes encoding proteins with strong support of mitochondrial localization was derived from MitoCarta (Mouse MitoCarta 2.0, Broad Institute), checked for their regulation in the data set, annotated with neXtProt (SIB Swiss Institute of Bioinformatics) and sorted per function category. When more than one transcript per gene was present in the data set, the transcript with the lowest p-value was used. Some of the genes (11 of 327) are listed as non-mitochondrial in the results table, because these are annotated in neXtProt as having a ribosomal, extracellular matrix or cell membrane localization rather than a mitochondrial localization.
Next to this, a list of genes known to be involved in mitophagy was examined in the same data set, to get a better understanding of underlying mechanisms for WAT whitening.
Statistical Analysis
SPSS 19.0 (SPSS Benelux, Gorinchem, Netherlands) was used for statistical analyses. Gaussian distribution was tested with Levene's test for equality of error variances for all parameters. Differences over time (per depot) were analyzed using univariate ANOVA. Depot differences over time were analyzed using a two-way ANOVA (Brown-Forsythe) with time and depot as factors. A t-test was used to analyze the effect of the WSD. Data that did not show a Gaussian distribution were analyzed by Kruskal-Wallis for time differences and Mann-Whitney for the adult diet effect. qPCR data are presented as mean relative expression (scaled to average expression) + SEM and all other data are displayed as mean + SEM. Differences were considered significant at p < 0.05 and a tendency was reported when 0.05 < p < 0.1. Correlations were analyzed with Pearson's test.
Body Weight, WAT Weight and Markers of Adiposity
Body weight increased significantly between weaning (PN21) and young adulthood (PN98) (p < 0.001) (Figure 1B), as did the weight of ING, EPI and RP WAT (p < 0.01; Figures 1C-E). In accordance with increased WAT weight, gene expression levels of adiposity (Lep) and adipocyte expansion (Mest) markers increased over time in EPI WAT (p < 0.05 for Lep and p < 0.001 for Mest) and RP WAT (p = 0.06 for Lep and p < 0.05 for Mest), but not ING WAT (Figures 1F-K). Mest and Lep expression levels were increased in ING WAT upon WSD exposure (p < 0.001 for both parameters; Figures 1F,I). Lep expression levels were also moderately increased in EPI and RP WAT upon WSD (p < 0.05 for EPI and p = 0.08 for RP WAT; Figures 1G,H), whereas Mest expression levels were unaffected upon WSD in EPI and RP WAT (Figures 1J,K).
Expression levels of Pref1, a marker of pre-adipocyte number, decreased in all depots over time (p < 0.001), most pronouncedly from PN21 to 42 (Figures 1L-N). In contrast, Pref1 expression levels were not affected by the WSD in EPI and RP WAT, but were slightly elevated in ING WAT (p < 0.01).
Markers of Mitochondrial Density
Mitochondrial density measured by citrate synthase (CS) levels, assayed as activity, decreased over time in all WAT depots (p < 0.01; Figures 2A-C). Between PN21 and PN42 CS levels decreased substantially (−56%) in ING WAT (p < 0.05). CS levels were not affected by the WSD in any depot (Figures 2A-C). Activity-based levels of hydroxyacyl-Coenzyme A dehydrogenase (HADH), a mitochondrial enzyme involved in β-oxidation, decreased over time in ING and EPI WAT (p < 0.001; Figures 2G,H), but not in RP WAT ( Figure 2I). However, HADH activity tended to decrease in RP WAT upon WSD exposure (p = 0.07; Figures 2G-I). When mitochondrial density was measured as mtDNA copy number no significant decrease over time or upon WSD was found (Figures 2D-F).
Markers of Mitochondrial Oxidative Capacity
Mitochondrial oxidative capacity, measured by protein levels of five subunits representing the five oxidative phosphorylation (OXPHOS) complexes, decreased in ING WAT over time for NDUFB8 (p < 0.01; Figure 3A), SDHB and UQCRC2 (p < 0.01; Figures 3D,G) and upon WSD for UQCRC2 ( Figure 3G) and MTCOI ( Figure 3J). ATP5A protein expression decreased over time in EPI WAT (p < 0.05; Figure 3N) but was not affected in the ING WAT ( Figure 3M). Other complexes of EPI WAT and all complexes in RP WAT remained stable over time and were not affected by the WSD (Figure 3).
Markers of WAT Browning
Gene expression levels of the uncoupling protein Ucp1, as a marker for browning of WAT, were relatively high in ING WAT at PN21 and decreased substantially over time (p < 0.001; Figure 4A). Ucp1 expression levels were low in visceral depots, but also decreased over time in EPI WAT (p < 0.01; Figures 4B,C). Cidea gene expression levels were substantially higher in ING WAT compared to visceral depots (Figures 4D,E), but in contrast to Ucp1 remained stable in ING WAT between PN21 and 42, after which Cidea levels declined from PN42 to 98 (p < 0.01). UCP1 protein levels tended to decline from PN21 to 42 and 98 (p = 0.09; Figure 4G) and were low and unchanging over time in the visceral depots (Figures 4H,I). Upon WSD exposure, Ucp1 and Cidea gene expression levels decreased in ING WAT (p < 0.05), but remained unaffected in the visceral EPI and RP WAT depots. UCP1 protein levels tended to decline upon the WSD exposure in ING WAT (p = 0.1), were not affected by the WSD in EPI WAT, but increased upon the WSD in RP WAT (p < 0.01). Ucp1 and Cidea expression levels did not change over time in RP WAT (Figures 4C,F). It should be noted that UCP1 protein levels were very heterogeneous at PN98, with some animals showing much higher levels than average.
Depot Differences
ING WAT had overall lower expression of adiposity markers (Lep, Mest; p < 0.001) and higher levels of mitochondrial (CS and HADH activity and OXPHOS subunits I-IV; p < 0.001) and browning markers (Ucp1 and Cidea gene expression and UCP1 protein levels; p < 0.001) compared to both EPI and RP WAT. Especially at PN21 and 42, CS and HADH levels were higher in ING compared to EPI and RP WAT (p < 0.01), and at PN42 levels of OXPHOS complexes I-IV were also higher in ING compared to RP and EPI WAT (p < 0.01). The subsequent decline in CS, HADH and OXPHOS complex I-IV levels was steeper in ING WAT compared to the visceral depots, resulting in similar CS, HADH and OXPHOS subunit II levels in ING WAT and RP WAT, and in HADH and subunit II levels being similar in ING WAT and EPI WAT at PN98. Ucp1 and Cidea gene expression levels were also significantly higher in ING compared to EPI and RP WAT at PN21 and 42 (p < 0.01) and had a steeper decline over time, resulting in comparable Ucp1 gene expression levels in ING and RP WAT at PN98. OXPHOS complexes I-IV form the electron transport system, which drives ATP synthesis by OXPHOS complex V, the final step that is bypassed by UCP1-mediated uncoupling. Remarkably, unlike complexes I-IV, levels of complex V (represented by ATP5A) were similar between depots.
Pathway Analysis of mRNA Sequence Data
To better understand the changes in mitochondrial density and function markers in ING WAT over time, mRNA of ING WAT at PN21 and 98 was sequenced (n = 4 per time point). 91969 transcripts were found; 23193 of these transcripts had an FPKM value > 0 for all samples and an average FPKM > 2 at PN21 or 98. For pathway analysis 5040 transcripts with a p-value < 0.05 were used, resulting in a list of pathways which were significantly regulated over time (Table 3). The most differentially regulated pathways include those involved in the regulation of cell proliferation and growth and pathways involved in hormonal regulation and immune response. Protein synthesis was downregulated over time and pathways involved in FA metabolism were upregulated over time. Overlap between the regulated pathways was plotted as a network map, showing much overlap between pathways, including those involved in the regulation of cell proliferation and growth as well as pathways involved in hormonal regulation and immune response. Two pathways were less connected to the central network: "mitochondrial dysfunction" and "lipid antigen presentation by CD1" (Supplementary Figure 2). The "mitochondrial dysfunction" pathway contained genes representing subunits of the oxidative phosphorylation complexes and other genes involved in the function of mitochondria, and its IPA name follows the general idea that downregulation of the genes contained in this pathway is associated with dysfunctional mitochondria, as in T2D, Alzheimer's or Parkinson's disease. Although Ingenuity Pathway Analysis did not indicate a direction for the change in the mitochondrial dysfunction pathway, 35 of the 44 regulated genes in this pathway were downregulated over time, which may indicate that mitochondrial function is reduced at PN98 compared to PN21 in ING WAT (Supplementary Table 1).
Targeted Analysis of mRNA Sequence Data
A list of brown and white preadipocyte and adipocyte markers was extracted from literature (Gesta et al., 2007) and checked for expression in the mRNA sequence data (the 5040 transcripts with p < 0.05) to gain insight into possible changes in cell populations in ING WAT between PN21 and 98. Expression levels of brown adipocyte markers declined and levels of white adipocyte markers increased from PN21 to 98 (Figure 4J and Supplementary Table 2 for the complete list of markers). For preadipocytes the picture is less clear: some brown preadipocyte markers were upregulated and others downregulated over time. Of note, the list of markers was extracted from the review of Gesta et al. (2007) and based on results of different experiments; the brown preadipocyte markers which were upregulated over time originate from another experiment than the downregulated markers (Boeuf et al., 2001; Timmons et al., 2007). The list of white preadipocyte markers is short and only one of these genes was abundant in our data set and upregulated over time.
A list of genes whose mitochondrial localization is strongly supported (MitoCarta, 1158 genes) was checked for expression in the data set to further explore functional changes of the mitochondria over time. 327 genes of this list were identified in the data set, of which 231 genes were downregulated, 88 upregulated, and 8 genes had a discrepancy in regulation between different transcripts. Genes involved in energy metabolism and protein synthesis were downregulated over time, including oxidative phosphorylation, citric acid (TCA) cycle, β-oxidation, import, transport and translation. The upregulated genes showed more diversity in function, including glycolytic metabolism, lipid synthesis, biosynthesis and mitophagy. The list with genes categorized to function is shown in Supplementary Tables 3, 4. A summary of these findings is presented in Figure 5.
Established mitophagy/autophagy markers (Ding and Yin, 2012) were extracted from the mRNA sequence data and their regulation was studied for insight into potential underlying mechanisms of WAT whitening; results are reported in Table 4. Microtubule-associated protein one light chain three alpha (Map1lc3a, better known as Lc3), sequestosome 1 (Sqstm1, better known as p62), unc-51 like kinase 2 (Ulk2), PTEN induced putative kinase 1 (Pink1) and BCL2/adenovirus E1B interacting protein 3 (Bnip3) were upregulated at PN98 compared to PN21. To further support this data, LC3 protein levels were analyzed with western blot. LC3.2 protein levels increased between PN21 and 98 in ING and RP WAT and were unchanged in EPI WAT (Supplementary Materials and Supplementary Figure 5). Regrettably, LC3.1 protein levels were not detectable in these samples under these conditions and ratios between LC3.2 and LC3.1 could therefore not be calculated.
Figure legend: markers derived from Gesta et al. (2007); expression levels of transcripts with p < 0.05 included in table. Inserts in panels B, C, E, F, and H show the same data with an adapted y-axis for better visualization of the low-expression data. PN, postnatal day; AIN, AIN93-G diet; WSD, western style diet. Gene expression data expressed as mean + SEM and protein levels as mean levels corrected for total protein (Coomassie staining) + SEM; n = 8 for PN21 and PN42, n = 11 for PN98, no data available (n/a) for RP WAT at PN21. Time and WSD effects were analyzed separately; time effect: *p < 0.05; **p < 0.01; ***p < 0.001; #0.05 < p < 0.1. Sequence data reported as fold changes between postnatal day 21 and 98; upregulated values red and downregulated values green.
Correlations
There were many correlations between mitochondrial density and function markers and WAT weight (Table 5). Specifically, the inverse correlation between CS and HADH levels and the weight of the corresponding depot was consistently strong in all depots. ING WAT weight was also inversely correlated to levels of the electron transport system complexes (OXPHOS complexes I-IV). In contrast, no correlation was found between ING WAT weight and the level of OXPHOS complex V (ATP5A; Table 5). RP WAT demonstrated opposite findings: the electron transport system complexes did not correlate with WAT weight, but complex V (ATP5A) showed a mild, inverse correlation with WAT weight. Lep gene expression was correlated to WAT weight for each of the depots, but this correlation was stronger in the visceral depots compared to ING WAT. Ucp1 expression, a key marker for browning of WAT, was correlated in ING WAT to mitochondrial density (CS and mtDNA, Table 5) and function markers (OXPHOS complexes I-IV), but not in RP WAT. In EPI WAT Ucp1 expression was only to some extent correlated to CS and HADH levels. Ucp1 expression in ING WAT was not correlated to complex V levels.
Table note: expression of transcripts determined with mRNA sequencing and data analyzed with Ingenuity Pathway Analysis (Qiagen Bioinformatics, Aarhus, Denmark); difference between postnatal day 21 and 98. *No information on activity pattern was available in IPA and consequently no Z-score was calculated.
DISCUSSION
In this study, we show clear changes in markers for mitochondrial density, mitochondrial function and browning in ING, EPI, and RP WAT depots of C57BL/6j mice after weaning up to young adulthood. We show that an increase in WAT depot mass and elevated levels of adiposity markers are associated with a decline in mitochondrial density over time in all depots. This decline was depot specific, being most pronounced in ING WAT with the steepest drop from PN21 to 42, and was accelerated by the WSD challenge. RNA sequence data from the ING depot showed that the decline in mitochondrial density was accompanied by a shift from active, ATP-producing mitochondria with a clear uncoupling potential toward mitochondria with a more diverse metabolic function and a higher biosynthesis. The RNA sequence data also showed that ING WAT developed from a "browner" toward a "whiter" phenotype with increased coupled mitochondria. These data also show that this is accompanied by an increased mRNA expression of mitophagy markers. The present study aimed to comprehensively compare postweaning changes of markers for mitochondrial function and WAT browning in three different WAT depots. There is evidence from one other study showing decreasing mitochondrial enzyme activity in human subcutaneous WAT from the postnatal period to adulthood (Novak et al., 1973), which is in accordance with the findings in our study. Previous experiments in rats showed lower mitochondrial respiration, enzyme activity and mitochondrial abundance in ING compared to EPI WAT, both when measured in whole tissue and in isolated adipocytes (Prunet-Marcassus et al., 1999; Deveaud et al., 2004). However, a study in adult mice showed a higher mitochondrial performance in isolated mitochondria of ING compared to EPI WAT, while mitochondrial density was not different between the depots (Schottl et al., 2015). In the present study respiration was not measured, but the results were more in agreement with the latter study, showing higher mitochondrial enzyme, OXPHOS protein and RNA marker levels in ING compared to EPI WAT of young mice. Moreover, the results of this study showed that the differences in mitochondrial density and function markers between ING and visceral WAT are age specific, being prominent at weaning and much smaller in early adulthood.
The declining mitochondrial density and function markers from weaning to young adulthood in ING WAT were accompanied by a diminished gene expression of the WAT browning markers Ucp1 and Cidea. UCP1 protein levels tended to decrease accordingly, although statistical significance was not reached for the older AIN93 animals. Subcutaneous depots are more prone to WAT browning than visceral depots and have a higher abundance of adipocytes with a brown-like phenotype, which can be activated upon cold exposure (Wu et al., 2012). The higher levels of WAT browning markers at weaning may therefore be part of a protective response of the vulnerable pups to the colder environment in the postnatal period, comparable to the uncoupling response in the brown adipose tissue (Obregon et al., 1989). Indeed, the decreased expression of brown adipocyte markers in the RNA sequence data at PN98 suggests that the abundance of brown or brown-like adipocytes, with the capacity to produce heat, is declining over time. This is in line with evidence from other mouse studies showing browning of the WAT depots from PN10 to 20 and subsequent whitening from PN20 to 30 (Xue et al., 2007; Lasar et al., 2013; Birnbacher et al., 2018), a process that has been shown to be strongly genetically controlled (Chabowska-Kita et al., 2015). The increased abundance of white adipocytes, with their lipid storage and insulation capacity, at the same time point further supports this explanation. This also provides an explanation for the differences between subcutaneous and visceral WAT, of which the latter showed a much lower expression of the browning markers Ucp1 and Cidea. Again, this is fully in line with the developmental needs of a young pup, small and vulnerable to cold stress, growing into a large, more mature individual with a sufficient layer of thermal insulation provided by WAT. This notion is supported by the substantial decline in preadipocyte marker expression from PN21 to PN42 and the increased expression of white adipocyte markers at PN98. The latter indicates that preadipocytes differentiated to adipocytes between those time points, a process that is probably already ongoing at PN21. Indeed, the decline in preadipocyte marker expression has been reported previously and coincided with an increased expression of adipocyte markers (Xue et al., 2007). Environmental factors, like dietary interventions or early life stress (Yam et al., 2017), may change the pace of whitening and subsequently have long-lasting effects on the oxidative and storage capacity of WAT depots. It would be of interest to investigate the effects of temperature on postweaning changes, in particular by repeating the experiment under thermoneutrality, but also at intermediate and low ambient temperatures, and what the consequences of changes in the pace of whitening of the WAT depots are for later life metabolic health and WAT function. Moreover, investigating the pace of whitening and subsequent later life health consequences in UCP1 knockout or other relevant genetic mouse models, or investigating the consequences of different aspects of the weaning process (maternal separation, dietary switch and early/late weaning), can give insight into the underlying mechanisms.
Recent publications revealed that the whitening of WAT is controlled by autophagy induced mitochondrial clearance (mitophagy), indicated by the activation of mitophagy during the beige to white transition in cultured adipocytes following β3-AR agonist withdrawal and the impaired whitening when autophagy is deleted in knock-out mouse models (Altshuler-Keylin et al., 2016;Lu et al., 2019). Therefore, we checked the regulation of genes known to be involved in mitophagy (Ding and Yin, 2012) in ING WAT between PN21 and 98, the depot where changes in mitochondrial abundance and expression of browning markers was biggest. Indeed, genes involved in mitophagy were upregulated over time, as was the protein level of the autophagy marker LC3.2, confirming the role of mitophagy in whitening of WAT depots.
Our data on mitochondrial metabolic pathways show that mitochondria develop from organelles with a high expression of pathways directed at energy production (and dissipation), as in brown adipocytes (Forner et al., 2009), toward coupled mitochondria which display a wider variety of biochemical pathways. Part of the changes may be related to maturation of ING WAT, since expression of proteins involved in fatty acid metabolism and of mitochondrial chaperones has been shown to increase during adipogenesis (Wilson-Fritch et al., 2003). In ING WAT of the more mature PN98 mice we observe a much smaller number of genes related especially to protein import, translation and OXPHOS as well as nucleotide metabolism, suggesting a decrease in mitochondrial "growth"/biogenesis, while genes related to lipid synthesis, branched chain amino acid/short chain fatty acid metabolism, steroid metabolism and redox signaling appear, as well as a substantial number of genes related to diverse biosynthetic pathways (Figure 5). This indicates that the mitochondria have reached a condition where they interact more with the rest of the cell, no longer unilaterally focused on growth and energy metabolism (and dissipation) only. This especially suggests that ING WAT mitochondria have reached a steady state, which is supported by the appearance of autophagy and apoptosis genes, essential for mitochondrial (and cellular) turnover and quality control (Zimmermann and Reichert, 2017). Our data further indicate that ING WAT between PN21 (and possibly earlier) and PN42 provides a physiologically relevant model to study and better understand functional changes in mitochondria related to adipose tissue development and mitochondrial adaptive capacity, and to understand mitochondrial changes related to WAT whitening.
The changes in the functionality of the mitochondria in the WAT depots and the remodeling of these depots in early life may have an impact on intervention studies starting in early life, as the effect of the intervention may strongly depend on the starting point of the intervention. In line with the developmental origins of health and disease theory (Hales and Barker, 2001; Gluckman and Hanson, 2004), developmental processes may respond in a manner that optimally matches an individual to anticipated later life conditions (Hanson and Gluckman, 2014). Previous studies in our lab showed that the postnatal period is amenable to nutritional programming, since a relatively mild dietary intervention at weaning indeed increased levels of later life mitochondrial oxidative capacity (Kodde et al., 2017). Although the perspective is there, additional studies are needed to fully understand to which extent nutritional interventions in the timeframe of weaning provide a window of opportunity for protection against later life metabolic disease.
CONCLUSION
The present study showed a decline in mitochondrial density and oxidative capacity markers in WAT depots of young C57BL/6j mice over time, while adipose tissue mass increased. The decline is more pronounced in ING WAT compared to the visceral depots and is accompanied by an evolution from a browner, energy-dissipating, to a whiter, biosynthetic, adipose tissue phenotype. These developmental changes may provide an opportunity to program a healthy WAT mitochondrial phenotype by nutritional interventions during the weaning period.
ETHICS STATEMENT
All animal procedures were in accordance with the principles of good laboratory animal care following the EU directive for the protection of animals used for scientific purposes and approved by an external, independent Animal Experimental Committee (DEC consult, Soest, Netherlands).
AUTHOR CONTRIBUTIONS
AK, EE, and KvL conducted the experiments. AK and JK analyzed the data and drafted the manuscript. AK, AO, and JK interpreted the results. AK prepared the figures. All authors contributed to the design of the study, edited and revised the manuscript, and approved the final version of the manuscript.
FUNDING
This work was funded by the Danone Nutricia Research. | 7,932.4 | 2019-07-09T00:00:00.000 | [
"Biology"
] |
INVESTIGATION OF THE MATHEMATICAL MODEL OF A SINGLE PENDULUM UNDER THE ACTION OF THE FOLLOWER FORCE
One of the main structural elements of road-building machines, railway bridge supports and structures is a compressed rod, to the end of which a follower force is applied. Recently, the most frequently used model of such a rod is an inverted mathematical pendulum under the influence of an asymmetric follower force. The asymmetry is due to the simultaneous presence of both angular and linear eccentricities. The work is devoted to the study of vertical and non-vertical equilibrium states of a single pendulum. The presented mathematical model of a single inverted mathematical pendulum is generalized, since it takes into account both the angular and the linear eccentricity of the follower force. In addition, the influence coefficients allow all types of elastic elements (rigid, soft or linear) to be considered. In this case, both elements can have characteristics of the same type or of different types. For direct integration of the differential equation of the pendulum motion, and for solving the corresponding Cauchy problem, the authors use the parameter-extension method of the Japanese scientist Y. A. Shinohara. Varying the angular eccentricity of the follower force at zero linear eccentricity results in the inverted pendulum having one or three non-vertical equilibrium positions. The type of characteristics of the elastic elements affects the maximum possible deviation from the vertical at which the pendulum remains in equilibrium. Analysis of the computer simulation results shows that the orientation of the follower force, for fixed values of the other pendulum parameters, has a significant effect on the configuration of the equilibrium curve.
Introduction
The concept of bifurcations of stationary states is widely used in various fields of science. The recent interest in them in many countries is explained by the fact that the problem of identifying how quantitative changes turn into qualitative ones is key to revealing the fundamental secrets of nature and of the underlying processes and phenomena occurring in it.
The phenomenon of bifurcation (Latin bifurcus - forked, branching) in the phase space, or in the space of states of a dynamic system, corresponds to a qualitative change in the character of the motion of the underlying mechanical system.
Mathematical pendulums with an arbitrary number of links are constructed in [9]. An analysis of the development of the dynamics of pendulum systems is given in [10]. The research results on bifurcations of equilibrium states of an inverted double pendulum loaded at the upper end by an asymmetric follower force are presented in [1-10]. The equilibrium states of a single inverted pendulum are considered in this paper.
Methods
The design scheme of the inverted single pendulum, shown in Fig. 1, consists of a weightless rod OA₁ of length l₁ and a material point A₁ of mass m₁. At the support point O there is a visco-elastic hinge (realized, for example, by means of a spiral spring and a hydraulic damper). Let c₁ be the stiffness of the spiral spring and µ₁ the viscosity coefficient in the hinge O, which also accounts for the effect of external friction. The upper end of the pendulum is resiliently fixed by means of a horizontal spring of rigidity c. Similarly to [2], let's assume that in the vertical position of the pendulum all elastic elements are deformed. The follower force P is also applied to the upper end of the pendulum. The angle between this force and the vertical will be denoted by α = δ + kϕ₁. Here δ = const is the angular eccentricity, k = const is the parameter of the follower force orientation, and ϕ₁ is the angle of the pendulum deviation from the vertical.
The equations of the perturbed motion of the pendulum have the form of equation (1). This mathematical model of the inverted single pendulum is generalized, since equation (1) contains the angular eccentricity δ and the linear eccentricity ε, and, in addition, the influence factors qᵢ, qᵢⱼ (i, j = 1, 2) allow all possible types of elastic elements (soft, rigid, linear) to be considered.
In order to reduce the number of parameters, let's pass to dimensionless quantities by taking the values of m₁, l₁ and c₁ as units of measurement; all other quantities are referred to these three (basic) quantities. Dimensionless quantities are denoted by a bar above. Here the prime denotes differentiation with respect to the dimensionless time t̄. Equations (2) at δ = 0 and ε = 0 admit the solution ϕ₁ = 0, which corresponds to the vertical equilibrium position of the pendulum. Let's consider the problem of the existence of equilibrium positions of the pendulum for δ ≠ 0 and ε ≠ 0.
Keywords: single pendulum, equilibrium states, follower force, bifurcations, mathematical model, orientation parameter, catastrophe, phase space, mechanical system, eccentricity.
It is logical to assume that there exist non-vertical equilibrium states, which can be found by direct integration of the differential equation of motion of the pendulum, solving the corresponding Cauchy problem for fixed values of the parameters.
To do this, we use the parameter extension method developed by the Japanese scientist Y. A. Shinohara.
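A minimal numerical illustration of this idea is sketched below. It uses plain natural-parameter continuation with Newton corrections, a simplification of Shinohara's parameter-extension method that will struggle at fold (turning) points, where a pseudo-arclength formulation is required; the residual g(ϕ₁, ε), standing for the static form of the equations of motion, and its derivative are assumed inputs.

```python
import numpy as np

def continue_equilibria(g, dg_dphi, phi0, eps_values, tol=1e-10, max_newton=50):
    """Trace the equilibrium curve g(phi1, eps) = 0 while eps is stepped along
    eps_values, using each converged solution as the starting guess for the
    next parameter value (natural-parameter continuation)."""
    branch, phi = [], float(phi0)
    for eps in eps_values:
        for _ in range(max_newton):          # Newton corrector at fixed eps
            step = g(phi, eps) / dg_dphi(phi, eps)
            phi -= step
            if abs(step) < tol:
                break
        branch.append((eps, phi))
    return branch
```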
Results
The results of the solution of the Cauchy problem showed that bifurcations of the pendulum equilibrium states occur only in the case of springs with soft characteristics (q₂ = q₁₂ = 1 and q₁ = q₁₁ = q₃ = q₁₃ = 0 in equation (2)). In the cases of rigid (q₁ = q₁₁ = 1 and q₂ = q₁₂ = q₃ = q₁₃ = 0) and linear (q₃ = q₁₃ = 1 and q₁ = q₁₁ = q₂ = q₁₂ = 0) characteristics of the elastic elements of the pendulum, the dependences are single-valued.
A unique non-vertical (for ε ≠ 0) equilibrium position of the pendulum corresponds to each value of the linear eccentricity of the follower force. In this case, the larger the value of ε, the greater the angle of pendulum deviation ϕ₁* from the vertical. A comparative analysis of the pendulum shows that for linear characteristics this angle is larger than for rigid characteristics, consistent with intuitive considerations. By varying the parameter b₁, which characterizes the helical spring in the hinge O, it is found that increasing or decreasing b₁ changes the configuration of the equilibrium-state curve. For sufficiently small values of ε, for b₁ = 0.05 there are three equilibrium states of the pendulum, for b₁ = 0.25 there are five, and for b₁ = 0.5 only one (ϕ₁* = 0). The parameter b, characterizing the horizontal spring at the upper end of the pendulum, does not affect the configuration of the equilibrium-state curve.
For a pendulum with rigid characteristics of the elastic elements, varying the orientation parameter of the follower force results in only one non-vertical equilibrium state at small angular eccentricity. The effect of this eccentricity is visible only for values of δ close to π.
For a pendulum with soft characteristics of elastic elements, a similar effect of the follower force orientation parameter on the equilibrium curves is observed.
Discussions of results
The theoretical and practical significance of the results lies in the development of analytical-numerical methods for constructing the dependences of the equilibrium values of the generalized coordinates of single and double inverted mathematical pendulums on the parameters of the linear and angular eccentricities of the follower force, as well as on its orientation parameter. The results obtained for different types of characteristics of elastic elements deepen the knowledge base of design engineers on the influence of the parameters of pendulum systems on their dynamic behavior, and can be used in research and design organizations in the modeling of hinges of aircraft control surfaces and railway track machines, in calculations of dynamic vibration dampers of building structures, in predicting the dynamic behavior of one-dimensional pipelines, in vehicle mechanics, and in predicting the functional capabilities of machine elements and mechanisms with pendulum systems.
Fig. 1. The computational model of a single inverted pendulum.
"Physics"
] |
A general model of hippocampal and dorsal striatal learning and decision making
Significance A central question in neuroscience concerns how humans and animals trade off multiple decision-making strategies. Another question pertains to the use of egocentric and allocentric strategies during navigation. We introduce reinforcement-learning models based on learning to predict future reward directly from states and actions or via learning to predict future “successor” states, choosing actions from either system based on the reliability of its predictions. We show that this model explains behavior on both spatial and nonspatial decision tasks, and we map the two model components onto the function of the dorsal hippocampus and the dorsolateral striatum, thereby unifying findings from the spatial-navigation and decision-making fields.
Following Wan Lee et al. (1), we can use the reliability measure for arbitration. These authors computed transition rates α and β for transitioning from MF to MB states and vice versa as follows; here we use the same terms but for transitions between MF and SR. These transition rates are functions of the reliability of the respective systems:

α(χ_MF) = A_α / (1 + exp(B_α · χ_MF)), [4]

β(χ_SR) = A_β / (1 + exp(B_β · χ_SR)), [5]

where the A and B parameters in both equations determine the transition rate and the steepness of these curves, respectively.
These parameters were fitted to behavioural data by Wan Lee et al. (1) and we matched their parameter values (see Table S1).
At each time step, the rate of change of the probability of choosing the SR system, P_SR, was computed using the following differential equation:

dP_SR/dt = α(χ_MF) · (1 − P_SR) − β(χ_SR) · P_SR.

Although not explored here (but see 1), this means that there is a certain "stickiness" to the model: if the model is currently choosing MF actions, it will take some time to move weight to the SR system.
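A minimal sketch of these arbitration dynamics, assuming a simple Euler discretisation of the differential equation; the A and B values below are placeholders, not the parameter values matched from Wan Lee et al. (Table S1).

```python
import numpy as np

A_ALPHA, B_ALPHA = 1.0, 10.0   # placeholder parameters
A_BETA, B_BETA = 1.0, 10.0

def alpha(chi_mf):
    """MF -> SR transition rate: high when the MF system is unreliable."""
    return A_ALPHA / (1.0 + np.exp(B_ALPHA * chi_mf))

def beta(chi_sr):
    """SR -> MF transition rate: high when the SR system is unreliable."""
    return A_BETA / (1.0 + np.exp(B_BETA * chi_sr))

def update_p_sr(p_sr, chi_mf, chi_sr, dt=1.0):
    # Euler step of dP_SR/dt = alpha(chi_MF) (1 - P_SR) - beta(chi_SR) P_SR
    dp = alpha(chi_mf) * (1.0 - p_sr) - beta(chi_sr) * p_sr
    return float(np.clip(p_sr + dt * dp, 0.0, 1.0))
```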
Following Wan Lee et al. (1), state-action value estimates were given by a weighted average of the two model components:

Q(s, a) = P_SR · Q_SR(s, a) + (1 − P_SR) · Q_MF(s, a).

Thus, the degree to which a system contributes to the value estimate is influenced by its reliability. Given these full-model state-action values, the agent chose actions following a softmax policy:

π(a | s) = exp(Q(s, a)/τ) / Σ_a′ exp(Q(s, a′)/τ),

where τ⁻¹ is an inverse temperature parameter which sets the balance between exploration and exploitation. The higher the inverse temperature, the more the agent chooses higher-valued actions.
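The value mixing and softmax choice can be sketched as follows, assuming the two controllers expose their action values as arrays over the currently available actions.

```python
import numpy as np

def choose_action(q_sr, q_mf, p_sr, inv_temp, rng=np.random.default_rng()):
    """Reliability-weighted value mixing followed by softmax action selection."""
    q = p_sr * np.asarray(q_sr) + (1.0 - p_sr) * np.asarray(q_mf)
    logits = inv_temp * q
    logits -= logits.max()                 # subtract max for numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return rng.choice(len(q), p=probs), probs
```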
Task-specific adaptations
Although the general model architecture remained the same throughout all simulations, different adaptations were made to the model described above such that it could be used in the different state spaces defined by the tasks.
Plus maze. For the Plus Maze task described in Fig. 3, landmark cells were tuned to the ends of the maze. We assumed that the landmark cells could not distinguish between the two ends of the maze such that, from the point of view of the striatal system, probe trials and training trials looked the same.
Blocking. For the blocking simulations (Fig. 4), we adapted the hippocampal controller (that worked with a tabular state representation as input) to incorporate the effects of boundaries on place cell firing. To that end, we defined the hippocampal SR system using linear function approximation. The agent observes states through a vector of features f(s) which, if chosen rightly, will be of much smaller dimension than the number of states, allowing the agent to generalise to states that are nearby in feature space. The feature-based SR (2) encodes the expected discounted future activity of each feature:

ψ^π(s) = E_π[ Σ_{t≥0} γ^t f(s_t) | s_0 = s ]. [9]

As in the tabular case, the feature-based SR can be used to compute value when multiplied with a vector of reward expectations per feature, u: V^π(s) = ψ^π(s)^T u. In the case of linear function approximation, these successor features ψ in Equation 9 are approximated by a linear function of the features f:

ψ̂^π(s) = W f(s),
In the context of hippocampus, the feature-based SR allows us to represent states as population vectors of place cells with corresponding to f and ψ, respectively.
As in the tabular case, temporal difference learning can be used to update the SR weights: Note that the algorithm has not changed with respect to the one-hot state encoding mentioned earlier -it is easy to see that To investigate the relationship between the agents' spatial navigation and non-spatial decision making strategies, we quantified 92 the agents' degree of MB planning, as well as their degree of using an allocentric strategy, and computed their correlation.
For quantifying MB planning, we followed earlier studies (10, 11) and analysed the agents' choices using a mixed-effects logistic regression (estimated using the statsmodels Python package, (12)). For each trial, the dependent variable (stay with the same first-level action or switch) was explained in terms of whether there was a reward on the previous trial, whether the previous transition was of the rare or common type, and the interaction between these factors. The logic of the two-step task is that an MB learner will stay with the same action if it was rewarded after a common transition, but will be more likely to switch if it gets rewarded after a rare transition. Thus, the degree of MB planning can be quantified as the interaction between previous reward and trial type.
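A simplified sketch of this stay/switch analysis; for brevity it fits an ordinary (fixed-effects) logistic regression with statsmodels instead of the mixed-effects model used in the paper, and the column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

def mb_planning_index(trials: pd.DataFrame) -> float:
    """trials columns (one row per trial): 'stay' (1 = repeated first-level
    action), 'prev_reward' (0/1), 'prev_common' (1 = common transition)."""
    model = smf.logit("stay ~ prev_reward * prev_common", data=trials).fit(disp=False)
    # The reward x transition-type interaction quantifies model-based planning.
    return model.params["prev_reward:prev_common"]
```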
For quantifying the degree of allocentric place memory, we computed the average distance between the previous platform location and the location of the maximum of the agent's value function at the start of the next session. This is akin to the boundary distance error employed by (13).
"Biology"
] |
Electrical and Structural Properties of Ohmic Contacts of SiC Diodes Fabricated on Thin Wafers
New generations of SiC power devices need to be fabricated on very thin substrates, in order to significantly reduce the series resistance of the device. The role of the thinning process in the formation of the backside ohmic contact has been investigated in this work. Three different mechanical grinding processes have been adopted, resulting in different amounts of defectivity and surface roughness values. An excimer UV laser has been used to form a Ni-silicide based ohmic contact on the backside of the wafers. The reacted layer has been studied by means of Atomic Force Microscopy (AFM), Transmission Electron Microscopy (TEM) and X-Ray Diffraction (XRD) analyses, as a function of grinding process parameters and laser annealing conditions. The ohmic contact has been evaluated by measuring the sheet resistance (Rs) of the silicided layers and the Vf at nominal current of Schottky Barrier Diode (SBD) devices, fabricated on 150 mm-diameter 4H-SiC wafers. A strong relationship has been found between the crystal damage induced by the thinning process and the structural, morphological and electrical properties of the silicided ohmic contact formed by UV laser annealing, revealing that the silicide reaction is moved forward, at fixed annealing conditions, by increasing crystal defectivity and surface roughness of SiC.
Introduction
Among the wide bandgap semiconductor materials, silicon carbide is the most mature and the most widely used for power electronics [1][2] and sensor applications [3][4]. In the last years, in order to lower the series resistance of power devices, the reduction of wafer thickness has become progressively more demanding, requiring the introduction of a new integration scheme for the backside ohmic contact formation [5][6]. The replacement of Rapid Thermal Annealing (RTA) [7-11] with laser annealing has been proposed and reported, both from an experimental [12-21] and a theoretical [22-24] point of view, for the formation of silicide-based ohmic contacts. Even if the silicide reaction process has been widely described as a function of the deposited material and the laser annealing features, the role of the wafer thinning process has not yet been deeply investigated. In this context, the impact of crystal damage, induced by the mechanical grinding process, on the formation of the ohmic contact has been studied and is reported in this work, with particular focus on the influence of sub-surface damage and surface roughness on silicide formation by laser annealing.
Experimental Setup
Schottky Barrier Diode (SBD) devices have been fabricated on 150 mm-diameter 4H-SiC wafers, grinded on the backside down to a thickness of 180 µm. A 100 nm Ni layer has been deposited by sputtering in Ar ambient, at a base pressure of 1 × 10⁻³ mbar, on the back side of the wafers. The Ni layer has been irradiated by using an excimer UV laser, with a wavelength of 308 nm and a pulse duration of 160 ns. Three different mechanical thinning processes, classified as rough grinding, fine grinding and ultra-fine grinding, have been adopted in this work, studying their impact on the reaction between Ni and 4H-SiC under laser irradiation. Sub-surface damage and substrate roughness have been evaluated by Atomic Force Microscopy (AFM) and Transmission Electron Microscopy (TEM) analyses, using respectively a Digital Instruments D3100, equipped with a Nanoscope V controller, and a JEOL-JEM microscope working at 200 keV. The morphological and structural properties of the Ni silicide contacts have been investigated by X-Ray Diffraction (XRD) analysis, using a Bruker AXS D8 DISCOVER diffractometer, working with a Cu-Kα source and a thin film attachment, and by AFM and TEM analyses, as a function of thinning process and UV excimer laser annealing conditions. A preliminary evaluation of the electrical properties of the reacted layers has been done by sheet resistance measurements, performed by the Four Point Probe (FPP) method. Moreover, to evaluate the electrical behaviour of the annealed samples on power devices, the Vf at nominal current of SBD devices has been measured, by using a semiconductor device parameter analyzer (Agilent B1500A) and a high-power curve tracer (Sony Tektronix 371A).
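For reference, the conventional four-point-probe conversion from measured voltage and current to sheet resistance is sketched below; the geometric factor π/ln 2 assumes an ideal, laterally infinite thin film, an idealisation of the measurement described here.

```python
import math

def sheet_resistance(voltage_V, current_A):
    """Four-point-probe sheet resistance (ohms per square) for a film much
    thinner than the probe spacing: Rs = (pi / ln 2) * V / I."""
    return (math.pi / math.log(2.0)) * voltage_V / current_A
```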
Results and Discussion
Surface roughness of the thinned wafers has been measured by AFM analysis. As shown in Fig. 1, the ultra-fine grinding process leaves a very smooth surface (Fig. 1a), while the fine grinding process induces well visible marks and damage on the surface (Fig. 1b). These marks become more and more visible when the rough grinding process is adopted (Fig. 1c).
The normalized surface roughness of the SiC wafers, measured after thinning and after Ni deposition, is reported in Fig. 1d, showing a similar trend in both cases as a function of thinning process. The difference between the roughness values for the different thinning processes is even more evident after the deposition of the nickel layer. In fact, while the deposition seems to smooth the surface of the ultra-fine and fine ground samples, it seems, on the contrary, to roughen the surface of the rough ground one.
As already reported [6], the typical Sheet Resistance (Rs) curve, as a function of laser energy density, shows an increase of Rs at lower laser energy densities, due to the initial intermixing between Ni and Si, and then a rapid drop of Rs to a final plateau. A similar behavior has been observed for all three examined thinning processes, with a shift towards lower laser energy density with increasing roughness and surface damage. This trend can be explained by the increased number of defects and the reduced reflectivity of the more damaged samples. As a case study, we focused our investigation on a fixed laser annealing process, performed at 3.4 J/cm^2 with three pulses, which gives different Rs values for the different substrates, as reported in Fig. 2.
Fig. 2. Rs values, normalized to the sheet resistance of as-deposited Ni, of samples annealed at 3.4 J/cm^2 with three pulses, as a function of thinning process. At fixed annealing conditions, a lowering of Rs is observed with increasing roughness and surface damage.
The surface morphology of the laser annealed contacts has been investigated by AFM analysis (Fig. 3), showing significant differences among the three samples. In fact, while the surface of the ultra-fine ground sample still appears quite smooth (Fig. 3a), de-wetting starts to be visible on the fine ground one (Fig. 3b). Moreover, a highly irregular surface is observed on the rough ground sample (Fig. 3c).
Fig. 3. AFM analysis of samples annealed at 3.4 J/cm^2 with a three-pulse laser process reveals significant differences in surface morphology as a function of thinning process. The surface of the ultra-fine ground sample (a) still appears quite smooth after laser annealing, while de-wetting starts to be visible on the fine ground one (b). A highly irregular surface is observed on the rough ground sample (c).
Cross-sectional TEM analyses (Fig. 4) have been performed to evaluate the reaction interface, the morphology of the silicide layer and the residual amount of defectivity. The silicide layer shows very flat interfaces and uniformly distributed C clusters for the ultra-fine ground sample (Fig. 4a). On the fine ground sample, C clusters are distributed in two well defined lines and the crystal damage is almost completely recovered (Fig. 4b). On the other hand, the rough ground sample shows a highly non-uniform thickness of the silicide layer, with some exposed SiC areas. Moreover, deep crystal damage is still visible below the interface between the Ni silicide and the silicon carbide (Fig. 4c). Looking in more detail, it is possible to observe that the reaction interface moves deeper in correspondence of defects (Fig. 4d). This could be explained by the increased amount of silicon available for the reaction. The structural properties of the silicide layers of the three samples have been evaluated by X-Ray Diffraction analysis (Fig. 5), revealing that Ni31Si12 is the predominant phase for the ultra-fine ground and fine ground samples, with some presence of the Ni3Si phase in the first case. Co-existence of several phases has been observed on the rough ground sample, with predominance of Ni2Si. These findings indicate a shift of the reaction, at fixed annealing conditions, from Ni-richer phases towards lower Ni/Si ratio phases with increasing crystal defectivity.
Fig. 5. XRD analysis of the annealed samples shows that Ni31Si12 is the predominant phase for the ultra-fine ground and fine ground samples, with some presence of Ni3Si in the first case. Co-existence of several phases is observed on the rough ground sample, with predominance of Ni2Si.
The forward voltage drop Vf at nominal current I0 of Schottky Barrier Diodes has been measured, to evaluate the electrical properties of the reacted layers, as shown in Figure 6. A comparison between the three different thinning processes has been performed on samples annealed at 3.4 J/cm^2 with three pulses. As a reference, the Vf of diodes annealed by a conventional Rapid Thermal Process (60 s @ 1000 °C in N2) is reported.
Fig. 1. AFM analysis of samples thinned by the ultra-fine (a), fine (b) and rough (c) grinding processes. Surface damage and marks induced by the thinning process become more and more visible when moving from ultra-fine to rough grinding. The normalized surface roughness, measured by AFM, shows a similar trend after grinding and after Ni deposition (d).
Fig. 4. Cross-sectional TEM analysis of samples annealed at 3.4 J/cm^2 with a three-pulse laser process. The silicide layer shows very flat interfaces and uniformly distributed C clusters for the ultra-fine ground sample (a). On the fine ground sample (b), C clusters are distributed in two well defined lines and the crystal damage is almost completely recovered. The rough ground sample (c) shows a highly non-uniform thickness of the silicide layer. The reaction interface moves deeper in correspondence of defects, still well visible below the Ni silicide layer (d).
Fig. 6. Vf at nominal current I0 of Schottky Barrier Diodes annealed at 3.4 J/cm^2 with three pulses. As a reference, the Vf of diodes treated with Rapid Thermal Annealing is reported. At fixed laser process conditions, the Vf decreases with increasing surface damage.
Figure 6 shows that Vf decreases when the amount of defectivity induced by the thinning process in the sub-surface region increases. Moreover, the Vf measured on rough ground diodes is comparable with that of the reference diodes annealed by RTA. | 2,325.4 | 2024-08-22T00:00:00.000 | [
"Engineering",
"Materials Science",
"Physics"
] |
Gingival crevicular fluid as a periodontal diagnostic indicator--I: Host derived enzymes and tissue breakdown products.
Researchers involved in the delivery of periodontal therapy are currently investigating the possible use of oral fluids in the diagnosis of oral diseases and in drug development. Substantial improvements have been made in the understanding of the mediators implicated in the initiation, pathogenesis, and progression of periodontitis. This review will analyze the mechanisms involved in the breakdown of periodontal supporting tissues during chronic periodontitis and highlight the potential array of biomarkers present in gingival crevicular fluid (GCF), which may relate to existing or predicted tissue regions undergoing metabolic change.
Introduction
Periodontitis is a set of inflammatory diseases affecting the periodontium, i.e., the tissues that surround and support the teeth. Periodontitis involves progressive loss of the alveolar bone around the teeth, and if left untreated, can lead to the loosening and subsequent loss of teeth. It is caused by microorganisms that adhere to and grow on the tooth's surfaces, along with an overly aggressive immune response against these microorganisms [1].
Damage to the periodontal tissue is usually detected by means of periodontal probing, which shows loss of attachment of the tooth, or by radiographs that detect alveolar bone loss. These methods also evaluate the damage caused by previous destruction episodes, resulting in a retrospective diagnosis [2].
Accurate detection of periodontal sites exhibiting disease progression, or those at risk of future deterioration, has proven difficult. The development of a test for the mediators associated with the anatomic events of periodontitis may serve as a useful method for identifying and predicting future progression [3]. Of the three fluids found within the oral cavity (gingival crevicular fluid (GCF), serum, and total saliva), the first two have been the focus of the most research in recent years. Due to the noninvasive and simple nature of their collection, analysis of saliva and GCF may be especially beneficial in the determination of current periodontal status and as a means of monitoring response to treatment [4,5].
GCF as a diagnostic marker
GCF is an inflammatory exudate that seeps into the gingival crevices or periodontal pockets around teeth with inflamed gingiva [6]. It is composed of serum and locally generated materials such as tissue breakdown products, inflammatory mediators, and antibodies directed against dental plaque bacteria. The composition of GCF is the result of the interplay between the bacterial biofilm adherent to the tooth surfaces and the cells of the periodontal tissues. The collection of GCF is a minimally invasive procedure, and the analysis of specific constituents in GCF provides a quantitative biochemical indicator for the evaluation of the local cellular metabolism that reflects a person's periodontal health status [7]. Since GCF is an inflammatory exudate that reflects ongoing events in the periodontal tissues that produce it, an extensive search has been made for GCF components that might serve as potential diagnostic or prognostic markers for the progression of periodontitis [8].
Curtis et al. [9] stated that "markers of disease" might encompass three separate categories: 1) indicators of current disease activity; 2) predictors of future disease progression; 3) predictors of future disease initiation at currently healthy sites.
Over 65 GCF components have been preliminarily examined as possible markers for the progression of periodontitis. These components fall into three general categories:
• Host-derived enzymes and their inhibitors (Table 1);
• Tissue breakdown products (Table 2);
• Inflammatory mediators and host-response modifiers.
The first two categories are dealt with in this Part, whereas Part II will mainly cover inflammatory mediators and host-response modifiers (category 3) and chairside point-of-care diagnostic aids. Table 1 lists, among others, matrix metalloproteinase-2 (MMP-2), matrix metalloproteinase-9 (MMP-9), tissue inhibitor of MMP-1 (TIMP-1), stromelysins, myeloperoxidase, lactate dehydrogenase, arylsulfatase, β-N-acetyl-hexosaminidase, and aspartate aminotransferase.
Aspartate aminotransferase -It is a cytoplasmic enzyme that is released upon cell death, and elevated levels of total enzyme activity were found to be strongly associated with active disease sites [10]. Sites with severe gingival inflammation and progressive attachment loss demonstrate marked elevation in AST levels in GCF samples [11].
Alkaline phosphatase -It is a membrane-based glycoprotein produced by many cells within the area of the periodontium and gingival crevice. The main sources of the enzyme are polymorphonuclear leukocytes (PMNs), gram-negative anaerobic bacteria associated with periodontal disease, and osteoblast and fibroblast cells. Bacterial alkaline phosphatase (B-AP) aids in the uptake and metabolism of phosphorylated organic molecules, which bacteria require for growth and replication. The presence of B-AP is indicative of bacterial infection at the site. Alkaline phosphatase is thought to play a role in bone metabolism, mineralization and collagen formation. The activity of alkaline phosphatase has been shown to be correlated with pocket depth and the percentage of bone loss [12], and this activity was found to be 20 times greater in GCF from active sites than in serum.
Acid phosphatase -It has been widely investigated amongst the lysosomal enzymes and has often been used as a lysosomal marker. Quantitative analysis confirmed that gingival fluid contains 10-20 times more acid phosphatase than serum. The host sources are the PMNs and desquamating epithelial cells [13]. About 60% of the total acid phosphatase in whole gingival fluid originates from bacteria [14]. The levels of acid phosphatase do not correlate with measurements of disease severity or activity.
β-Glucuronidase -It is one of the hydrolases found in the azurophilic or primary granules of PMNs [13]. The enzyme is liberated from macrophages, fibroblasts and endothelial cells of healthy or chronically inflamed gingiva [15]. It is also positively associated with the numbers of spirochetes, Porphyromonas gingivalis, Prevotella intermedia and lactose-negative black-pigmenting bacteria in the subgingival flora. The level of β-glucuronidase correlates significantly with attachment loss that may subsequently occur in individuals with adult periodontitis [16].
Elastase -Neutrophil elastase is a serine proteinase confined to the azurophil granules of PMNs, which are analogous to lysosomes [17]. It acts upon elastin, proteoglycans, hemoglobin, fibrinogen and collagen. Leukocyte elastase degrades mature collagen fibers. Amounts of GCF elastase are greater in periodontitis patients than in healthy controls [18].
Elastase inhibitors -The activity of proteases in the tissues is probably modulated by the presence of inhibitors either produced locally or circulating in plasma. The main plasma inhibitors are α2-macroglobulin and α1-antitrypsin, which account for more than 90% of the total protease-inhibiting capacity of serum. A third physiological inhibitor, α2-antichymotrypsin, seems to inactivate only chymotrypsin-like enzymes, for instance cathepsin G. α2-macroglobulin inhibits all three neutral proteinases from PMNs by a similar mechanism, which consists of irreversible trapping of the enzyme molecule by the inhibitor. α1-antitrypsin inactivates mainly serine proteinases, elastase and cathepsin G, and partially mammalian collagenase [18]. Both α1-antitrypsin and α2-macroglobulin were found in gingival fluid by Schenkein and Genco [19] at concentrations representing three-fourths of those found in serum. In inflamed gingiva, GCF samples had about twice as much α2-macroglobulin as samples collected in the same area after therapy.
Cathepsins -These are enzymes belonging to the class of cysteine proteinases. In GCF, macrophages are the main producers of cathepsin B [20]. GCF concentrations of cathepsin B were found to be elevated in patients with periodontal disease, but lower in patients with gingivitis [21]. Thus, it may have a potential use in distinguishing periodontitis from gingivitis and in planning treatment and monitoring treatment outcomes [22]. Cathepsin D, a carboxy endopeptidase, is present at high concentration in inflamed tissues. Its concentration is found to be 10 times higher in GCF during periodontal destruction [23]. Cathepsin G is a serine endopeptidase contained in the azurophil granules of PMNs. It is also known as chymotrypsin-like, because it attacks a number of synthetic substrates typical for chymotrypsin and is inhibited by the same inhibitors. It hydrolyzes hemoglobin, fibrinogen, casein, collagen and proteoglycans. Measurements of cathepsins and neutral proteases have also shown a relationship to the severity of inflammation, but no association with disease activity has been demonstrated [24].
Trypsin-like enzymes -Proteolytic activities associated with black-pigmented Bacteroides species have long been considered virulence factors in the pathogenesis of periodontal disease [25]. Porphyromonas gingivalis, which is frequently isolated from periodontal lesions in adults with advanced periodontitis, possesses a spectrum of proteases including a trypsin-like enzyme [26]. The presence of this trypsin-like enzyme increases the potential of this organism to mediate destruction of periodontal tissues. It is able to cleave peptide substrates with arginine terminal groups such as benzoyl-arginine-2-naphthylamide (BANA) or benzoyl-arginine-p-nitroanilide (BAPNA). The trypsin-like enzyme found in P. gingivalis is able to degrade collagen directly, and its GCF levels might provide useful information on the periodontal condition [27].
Immunoglobulin-degrading proteases -They constitute a group of microbial enzymes that have gained much interest due to their potential significance as virulence factors [28]. Such enzymes have been assumed to facilitate both bacterial colonization on mucous membranes and penetration of bacterial cells and their antigenic products through the mucosal barrier by elimination of immunoglobulins [28]. Consequently, immunoglobulin-degrading enzymes have been demonstrated mainly in pathogenic species and in species closely associated with infectious diseases [28]. GCF IgG antibodies to periodontopathic organisms are present in significantly higher levels in periodontal disease patients than in normal control subjects [29].
Dipeptidyl Peptidases (DPP) -They are derived from lymphocytes, macrophages, and fibroblasts. DPP II has been localised to macrophages and fibroblasts [30] in gingival tissue and in cells in GCF. DPP IV has been localised to monocytes, macrophages, fibroblasts and CD4 and CD8 lymphocytes [30]. They have the capacity to degrade collagen, but their main function most likely lies in the activation of pro-forms of other enzymes, cytokines, and other immune mediators. Eley and Cox [31] monitored GCF levels of DPP II and IV and reported higher levels of both enzymes in sites with rapid and gradual attachment loss than in paired sites without attachment loss.
Non-specific neutral proteases -Neutral protease is a non-specific metalloprotease. It cleaves fibronectin, collagen IV, and to a lesser extent collagen I, but it does not cleave collagen V or laminin. It hydrolyzes N-terminal peptide bonds of non-polar amino acid residues and may preferentially attack denatured and intercellular proteins with exposed hydrophobic amino acid residues [32]. It has been reported that an elevated level of neutral protease activity suggests an active phase of periodontal disease [32].
Matrix Metalloproteinases -Host cell-derived enzymes such as matrix metalloproteinases (MMPs) are an important group of neutral proteinases implicated in the destructive process of periodontal disease that can be measured in GCF [33]. The neutrophils are the major cells responsible for MMP release at the infected site, specifically MMP-8 (collagenase-2) and MMP-9 (gelatinase-B) [34]. Although MMP-8 is able to potently degrade interstitial collagens, MMP-9 degrades several extracellular matrix proteins [31]. Mammalian collagenases initiate degradation by making a single cut; subsequent degradation of the denatured collagen molecule can be mediated by the gelatinases. GCF collagenase levels and collagenase activity have been shown to increase with increasing severity of inflammation and increasing pocket depth and alveolar bone loss [35]. Stromelysins (SL) are the major MMPs of fibroblast origin, and can activate fibroblast-type collagenase [36]. Birkedal-Hansen et al. [37] have also suggested that SL may act as a marker of stromal cell involvement in the process of tissue degradation.
TIMPs -They are locally produced and their main role is defending connective tissues in the very local area around the cell from which metalloproteinases are secreted. Tissue degradation is further thought to be induced by an imbalance between MMPs and TIMPs [38]. The mean amounts of SL and TIMP in diseased sites (gingivitis and periodontitis) are significantly higher than the mean amounts of these GCF components in healthy sites [39].
Myeloperoxidase -The myeloperoxidase-hydrogen peroxide-chloride system, which is part of the innate host defence mediated by polymorphonuclear leukocytes, possesses potent antimicrobial activity [40]. MPO is produced in the phagosome in concentrations in excess of those that mediate bacterial killing. It has been suggested that MPO functions primarily to maintain a low concentration of hydrogen peroxide in the phagosome, thereby preserving the function of the granule proteases that would otherwise undergo irreversible oxidant-mediated inactivation if hydrogen peroxide accumulated in the phagosome [41]. High MPO levels in GCF from patients with progressive chronic periodontitis, and their reduction in response to treatment, have been reported by Hernandez et al. [42].
Lactate dehydrogenase -It catalyses the reversible reduction of pyruvate to lactate. GCF contains 10-20 times more LDH than blood [43]. Although LDH is found in bacteria, most of its GCF concentration originates from the periodontal tissues. No significant correlation is found between the levels of LDH in gingival fluid and disease severity. Rather, it could reflect metabolic changes, such as the increase in anaerobic glycolysis characteristic of inflamed gingiva [44].
Arylsulfatase -It catalyzes the release of ester-bound sulfate from a variety of O-sulfate esters, and its activity in GCF was shown to be higher in gingivitis and periodontitis patients. Lamster and co-workers [45] have examined the relationship between β-glucuronidase and arylsulfatase and have shown that levels in the GCF are elevated in inflamed relative to healthy non-inflamed sites, and that these levels decrease following periodontal treatment.
β-N-acetyl-hexosaminidase (β-NAH) -It is an acid lysosomal hydrolase that emanates into GCF during neutrophilic phagocytosis. Under secretory conditions, the precursor forms of the newly synthesized enzyme are liberated. During phagocytosis and cellular lysis, the lysosomal β-N-acetyl-hexosaminidase is present. Untreated periodontitis is associated with elevated levels of myeloperoxidase, β-NAH, β-glucuronidase, and cathepsin D that may contribute to the loss of periodontal support, and periodontal therapy induces a sustained down-regulation of leukocyte activity, as evidenced by the remission of GCF markers [46].
Glycosaminoglycans -Proteoglycans have a core protein on which one or more heteropolysaccharides (called glycosaminoglycans) are bound covalently. Different glycosaminoglycans can be found, depending on the tissue, although the most common are the non-sulfated hyaluronic acid and the sulfated heparan sulfate, dermatan sulfate, chondroitin-4-sulfate and chondroitin-6-sulfate. In general, chondroitin-4-sulfate is the most common glycosaminoglycan in the periodontium. Proteoglycans have the ability to bind most collagens as well as fibronectin. Upon degradation of periodontal tissues, glycosaminoglycans are released, making their way into the GCF. Chondroitin-4-sulfate appears to be the major glycosaminoglycan in untreated chronic periodontitis sites, as shown in both animal [46] and human [47] studies. Elevated glycosaminoglycan concentrations were also found in aggressive periodontal diseases, and associations have been made with periodontal pathogens such as P. gingivalis [48].
Hydroxyproline -It is a characteristic amino acid of collagen and a major component of it. Hydroxyproline and proline play key roles in collagen stability; they permit the sharp twisting of the collagen helix. Thus, hydroxyproline is a major breakdown product of collagen present in the GCF [49].
Fibronectin fragments -Fibronectin is one of the components of the extracellular matrix (ECM) of periodontal tissue [50]; its main role is in cell adhesion and proliferation, which explains its potential use in regenerative strategies. Cross-sectional studies have revealed that fibronectin is invariably found in a degraded form in the GCF [51,52] and is therefore inactive [53]. Therefore, its presence in GCF would indicate FN fragmentation due to tissue destruction and not simply inflammation [54].
Connective tissue and Bone proteins
Osteonectin -Also referred to as secreted protein acidic and rich in cysteine and as basement membrane protein (BM-40), osteonectin is a single-chain polypeptide that binds strongly to hydroxyapatite and other extracellular matrix proteins including collagens. Because of its affinity for collagen and hydroxyapatite, osteonectin has been implicated in the early phases of tissue mineralization [55]. In a cross-sectional study by Bowers et al. [56], GCF samples were analyzed from patients with gingivitis and with moderate or severe periodontal disease.
Osteocalcin -It is a small calcium-binding protein of bone, and is the most abundant non-collagenous protein of mineralized tissues [57]. Osteocalcin is predominantly synthesized by osteoblasts [58], and it has an important role in both bone resorption and mineralization [59]. Elevated serum osteocalcin levels have been shown in periods of rapid bone turnover [60]. Serum osteocalcin is presently considered a valid marker of bone turnover when resorption and formation are coupled, and a specific marker of bone formation when formation and resorption are uncoupled [59]. A relationship between GCF osteocalcin levels and periodontal disease has been reported [61].
Type I collagen peptides -The most common extracellular matrix component is collagen, which is synthesized in a pro-form containing a terminal propeptide. After cleavage, these peptides are eliminated through the gingival pocket where they can be measured, thus they represent collagen biosynthesis and not degradation. Collagen I carboxy-terminal propeptide and collagen III amino-terminal propeptide were detectable in the GCF of patients with periodontitis, but not in healthy subjects, suggesting that turnover is higher in inflamed sites. The GCF levels of these collagens are increased after nonsurgical periodontal treatment, and return to baseline levels after a few days [51,52].
Osteopontin (OPN) -It is a single-chain polypeptide. In bone matrix, OPN is highly concentrated at sites where osteoclasts are attached to the underlying mineral surface, that is, the clear zone attachment areas of the plasma membrane [62]. However, since OPN is produced by both osteoblasts and osteoclasts, it holds a dual function in bone maturation and mineralization as well as bone resorption [63]. Sharma et al. [64] published findings from an investigation of GCF OPN showing that its concentration increased proportionally with the progression of disease and that, when nonsurgical periodontal treatment was provided, its levels were significantly reduced.
Laminin -It is a 900-kDa glycoprotein found in all basement membranes. During gingival inflammation, neutrophils leave the blood vessels and migrate through the connective tissue towards the inflammatory lesion, and some of them invade the gingival crevice. Steadman et al. [65] noted that a simple response against chemotactic factors seemed not to lead to basement membrane destruction, while activated neutrophils generated extensive destruction of the basement membrane. Higher amounts of laminin in GCF from patients with periodontitis suggest the presence of hyperactive neutrophils during the transmigration process through the endothelium/epithelium [66].
Calprotectin -It is a 36-kDa protein composed of a dimeric complex of 8-and 14-kDa subunits. Neutrophils are the primary source of calprotectin although other cells, such as activated monocytes and macrophages and specific epithelial cells, are also capable of manufacturing the protein. Calprotectin acts as a calcium-and zinc-binding protein with both antimicrobial and antifungal activities. It also plays a role in immune regulation through its ability to inhibit immunoglobulin production and acts as a proinflammatory protein for neutrophil recruitment and activation. In periodontology, Kido et al. [67] identified calprotectin in GCF and found that GCF concentration levels in patients with periodontal disease were higher than those in GCF from healthy subjects. The expression of calprotectin from inflammatory cells appears to offer protection of the epithelial cells against binding and invasion by P. gingivalis. In periodontal disease, it appears to improve resistance to P. gingivalis by boosting the barrier protection and innate immune functions of the gingival epithelium [68].
Hemoglobin β-chain peptides -Two peptides derived from the hemoglobin (Hb) β-chain are a decapeptide and a dodecapeptide. They are pharmacologically and physiologically active, and act as inflammatory mediators [69]. Both peptides may also act as substrates of proline-specific peptidases studied in treponemes isolated from human subgingival dental plaque [69]. These two particular Hb β-chain sequences were present in GCF, and successful periodontal therapy reduces the levels of these peptides [70].
Pyridinoline crosslinks (ICTP) -They represent a class of collagen-degrading molecules that include pyridinoline, deoxypyridinoline, N-telopeptides, and C-telopeptides [71]. Subsequent to osteoclastic bone resorption and collagen matrix degradation these molecules are released into the circulation. Given their specificity for bone resorption, pyridinoline cross-links represent a potentially valuable diagnostic aid in periodontics, because biochemical markers specific for bone degradation may be useful in differentiating the presence of gingival inflammation from active periodontal and peri-implant bone destruction [72].
Polypeptide growth factors are a class of natural biological mediators that regulate key cellular events in tissue repair, including cell proliferation, chemotaxis, differentiation, and matrix synthesis, by binding to specific cell-surface receptors [73]. Several growth factors are concentrated in the organic matrix of bone and released during bone resorption [74], and are therefore suggested to play a role in bone remodelling through regulation of the coupling process of bone resorption and formation [75]. There are several studies in the periodontal literature examining GCF and salivary levels of growth factors for periodontal disease diagnosis, including epidermal growth factor (EGF) [76], transforming growth factor-α (TGF-α) and TGF-β [74], platelet-derived growth factor (PDGF) [77], and vascular endothelial growth factor (VEGF) [78].
Conclusion
GCF is a vehicle for monitoring tissue and cell products and allows a degree of non-invasive access to the periodontium, unlike the majority of other tissues in the body. It carries multiple molecular factors derived from the host response and is considered a significant protective mechanism in periodontal infection. Substantial improvements have been made in the understanding of the mediators implicated in the initiation, pathogenesis, and progression of periodontitis. Evaluation of the markers in GCF is considered a good method in the determination of a person's risk for periodontal disease. | 4,866.4 | 2012-12-15T00:00:00.000 | [
"Medicine",
"Biology"
] |
Visual Object Tracking Based on 2DPCA and ML
We present a novel visual object tracking algorithm based on two-dimensional principal component analysis (2DPCA) and maximum likelihood estimation (MLE). Firstly, we introduce regularization into the 2DPCA reconstruction and develop an iterative algorithm to represent an object by 2DPCA bases. Secondly, the model of sparsity constrained MLE is established. Abnormal pixels in the samples are assigned low weights to reduce their effects on the tracking algorithm. The object tracking results are obtained by using Bayesian maximum a posteriori (MAP) probability estimation. Finally, to further reduce tracking drift, we employ a template update strategy which combines incremental subspace learning and the error matrix. This strategy adapts the template to the appearance change of the target and reduces the influence of the occluded target template as well. Compared with other popular methods, our method reduces the computational complexity and is very robust to abnormal changes. Both qualitative and quantitative evaluations on challenging image sequences demonstrate that the proposed tracking algorithm achieves more favorable performance than several state-of-the-art methods.
Introduction
As one of the fundamental problems of computer vision, visual tracking plays a critical role in advanced vision-based applications (e.g., visual surveillance, human-computer interaction, augmented reality, intelligent transportation, and context-based video compression) [1][2][3]. However, building a robust model-free tracker is still a challenging issue due to the difficulty arising from the appearance variability of an object of interest, which includes intrinsic appearance variability (e.g., pose variation and shape deformation) and extrinsic factors (illumination changes, camera motion, occlusions, etc.).
Typically, a complete tracking system can be divided into three main components: (1) an appearance observation model, which evaluates the likelihood of a candidate state belonging to the object model, (2) a motion model, which aims to model the states of an object over time (such as Kalman filtering and particle filtering), and (3) a search strategy for finding the most likely states in the current frame (e.g., mean shift and sliding window). In this paper, we are devoted to developing a robust appearance model.
Due to the power of subspace representation, subspace-based trackers (e.g., [4,5]) are robust to in-plane rotation, scale change, illumination variation, and pose change. However, they are sensitive to partial occlusion because of their underlying assumption that the error term is Gaussian distributed with small variances. This assumption does not hold for object representation when partial occlusion occurs, as the noise term cannot be modeled with small variances.
An effective tracking algorithm (called the L1 tracker) based on sparse representation within a particle filter framework is developed in [6]. The L1 tracker represents the tracked target by using a set of target templates and trivial templates. The target templates depict a subspace of the tracked object and the trivial templates aim to model occlusion effectively. However, the use of trivial templates increases the number of templates significantly, which makes the computational complexity of the L1 tracker too high for real applications.
In [7], the authors also presented a sparse coding-based tracker by combining sparse coding and Kalman filtering and fusing color and gradient features. To account for the variations of the tracked object during the tracking process, they use a template update strategy that replaces a random template of the original template library with the latest tracking result. However, this simple update manner can easily introduce tracking errors when abnormal changes occur, which may cause tracking drift.
Motivated by the aforementioned discussions, we propose an object tracking algorithm based on 2DPCA and MLE. Firstly, we introduce regularization into the 2DPCA reconstruction and develop an iterative algorithm to represent an object by 2DPCA bases. Secondly, the model of sparsity constrained MLE is established. Abnormal pixels in the samples are assigned low weights to reduce their effects on the tracking algorithm. The object tracking results are obtained by using Bayesian maximum a posteriori probability (MAP) estimation. Finally, to further reduce tracking drift, we employ a template update strategy which combines incremental subspace learning and the error matrix. This strategy adapts the template to the appearance change of the target and reduces the influence of the occluded target template as well. The experimental results show that our algorithm can achieve stable and robust performance, especially when occlusion, rotation, scaling, or illumination variation occurs.
Visual Object Tracking Model Based on 2DPCA and MLE
The Theory of 2DPCA. Principal component analysis (PCA) [8] finds the projection directions along which the reconstruction error to the original data is minimum and projects the original data into a lower dimensional space spanned by the directions corresponding to the top eigenvalues. Recent studies demonstrate that two-dimensional principal component analysis (2DPCA) can achieve performance comparable to PCA with less computational cost [9,10]. Given a series of image matrices Y = [Y_1, Y_2, ..., Y_n], 2DPCA aims to obtain an orthogonal left-projection matrix U, an orthogonal right-projection matrix V, and the projection coefficients A_i that minimize the reconstruction error

min_{U,V,A_i} Σ_i ||Y_i − U A_i V^T||_F^2.   (1)

Then the coefficient A can be approximated by A ≈ U^T Y V. We note that the underlying assumption of (1) is that the error term is Gaussian distributed with small variances. This assumption is not able to deal with partial occlusion, as the error term cannot be modeled with small variances when occlusion occurs. In this paper, we propose an object tracking algorithm using 2DPCA basis matrices and an additional MLE error matrix, Y ≈ U A V^T + e.
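As a minimal illustration of how such left and right projection matrices can be computed (a sketch only, not the authors' implementation; the bilateral-covariance construction and the mean-centering are assumptions following the standard (2D)^2 PCA recipe), the bases can be obtained from the row and column covariance matrices of a stack of template images:

```python
import numpy as np

# Learn left (U) and right (V) 2DPCA projection bases from a stack of images.
# Assumption: V comes from the column covariance G = mean((Y_i - M)^T (Y_i - M))
# and U from the row covariance H = mean((Y_i - M)(Y_i - M)^T).

def learn_2dpca_bases(images, k_left=8, k_right=8):
    Y = np.asarray(images, dtype=float)          # shape (n, h, w)
    M = Y.mean(axis=0)
    D = Y - M
    H = np.einsum('nij,nkj->ik', D, D) / len(Y)  # (h, h) row covariance
    G = np.einsum('nji,njk->ik', D, D) / len(Y)  # (w, w) column covariance
    U = np.linalg.eigh(H)[1][:, ::-1][:, :k_left]
    V = np.linalg.eigh(G)[1][:, ::-1][:, :k_right]
    return U, V, M

# Hypothetical usage with 20 template images of size 32x32.
rng = np.random.default_rng(3)
templates = rng.standard_normal((20, 32, 32))
U, V, M = learn_2dpca_bases(templates)
A = U.T @ (templates[0] - M) @ V                 # low-dimensional coefficients
recon = U @ A @ V.T + M                          # 2DPCA reconstruction
print(A.shape, float(np.linalg.norm(templates[0] - recon)))
```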
Let the objective function be

min_{A,e} ||Y − U A V^T − e||_F^2 + λ ||e||_1,   (2)

where Y denotes an observation matrix, A indicates its corresponding projection coefficient, λ is a regularization parameter, and e describes the error matrix.
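The iterative representation step can be sketched as follows. This is an assumption-laden illustration, not the paper's exact iteration: with U and V fixed, the coefficients A are updated in closed form and the sparse error e is updated by soft-thresholding, which is one standard way to handle an L1-regularized error term.

```python
import numpy as np

# Represent an image matrix Y by fixed 2DPCA bases (U, V) plus a sparse error e,
# alternating a closed-form update of A with soft-thresholding of the residual.

def soft_threshold(X, t):
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def represent_2dpca(Y, U, V, lam=0.1, n_iter=20):
    e = np.zeros_like(Y)
    for _ in range(n_iter):
        A = U.T @ (Y - e) @ V          # projection coefficients (U, V orthonormal)
        R = Y - U @ A @ V.T            # residual before thresholding
        e = soft_threshold(R, lam / 2) # sparse error absorbs abnormal pixels
    return A, e

# Hypothetical usage with bases taken from the SVD of a single template image.
rng = np.random.default_rng(0)
template = rng.standard_normal((32, 32))
U, _, Vt = np.linalg.svd(template, full_matrices=False)
U, V = U[:, :8], Vt.T[:, :8]           # keep 8 left/right basis vectors
A, e = represent_2dpca(rng.standard_normal((32, 32)), U, V)
print(A.shape, np.count_nonzero(e))
```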
MLE Model.
The basic idea of sparse coding is to use the templates in a given dictionary T to represent a testing sample y (as y ≈ Tα), where α is the sparse coding coefficient vector. Traditionally, sparsity can be measured by the L0-norm, and L0-norm minimization is an NP-hard problem.
Fortunately, [11] proves that when the solution is sparse enough, L0-norm minimization is equivalent to the L1-norm minimization.
Therefore, the sparse coding problem can be defined as [12,13]

min_α ||α||_1   subject to   ||y − Tα||_2^2 ≤ ε,   (3)

where ε > 0 is a very small constant. This model shows two constraints in sparse coding: one is that min ||α||_1 constrains the sparsity of the represented signal; the other is that ||y − Tα||_2^2 ≤ ε constrains the accuracy of the represented signal [14][15][16][17].
The analysis of the two constraint terms mentioned earlier is as follows. For object tracking, the accuracy constraint is more important than the sparsity one, especially when occlusion, rotation, scaling, or illumination variation happens to the object. In that case, considering possible abnormal changes, whether the model can accurately describe the object or not will directly determine the success or failure of the tracking algorithm. Most current algorithms are presented under the assumption that the sparse coding residual e = y − Tα follows the Gaussian distribution. In practice, however, this assumption is violated when abnormal changes happen, which will inevitably lead to the failure of the tracking algorithm.
Regarding the sparsity constraint, although L1-norm minimization is more efficient than L0-norm minimization, L1-norm minimization programming is still very time consuming. Object tracking algorithms are different from face recognition algorithms in that face recognition algorithms do not demand fast processing speed in the sample training process, while in object tracking, slow processing speed will directly affect the practical value of the algorithm. In that case, the introduction of L1-norm minimization into the field of object tracking would greatly reduce the performance of tracking algorithms.
We note that tracking accuracy and speed are two important aspects for evaluating the performance of object tracking algorithms. Therefore, in this paper, we develop an MLE-based model that improves the traditional sparse coding model in these two aspects and then apply it to achieve an effective and efficient tracker.
In the field of object tracking, accuracy is the most important issue.Hence, at first, we need to improve the accuracy constraint term in the traditional sparse coding model.
When the reconstruction error e = y − Tα follows the Gaussian distribution, the traditional sparse coding solution can be written as

min_α ||y − Tα||_2^2 + λ ||α||_1,   (5)

where λ is a regularization parameter. For object tracking, the dictionary T = [t_1, t_2, ..., t_n] is composed of the target templates. When the reconstruction error follows the Gaussian distribution, the solution of (5) is the maximum likelihood estimation.
However, in practical applications, when the object suffers from occlusion, rotation change, scale change, or illumination variation, the reconstruction errors e of abnormal pixels will not follow the Gaussian distribution. In that case, these algorithms may not track the object accurately. Therefore, we need to build a more adaptive object representation model.
Taking into account the sparsity constraint of α, the MLE of α can be formulated as the following minimization:

min_α Σ_j ρ(y_j − (Tα)_j) + λ ||α||_1,   (6)

where ρ(⋅) is determined by the distribution of the residuals. According to [6], formula (6) can be converted into a weighted sparse coding problem

min_α ||W^{1/2}(y − Tα)||_2^2 + λ ||α||_1,   (7)

where W is a diagonal matrix whose diagonal elements w_{j,j}, defined in (8) through two positive constants, stand for the jth pixel's weight values. If the weight function is chosen so that the residual term reduces to the ordinary squared error, then the model becomes the traditional sparse coding problem. Hence, we can see that formula (7) is more adaptive than (3).
In this study, the weight function in (9) is chosen with c as a scale factor (we choose c = 10 in our experiments). The physical meaning of w_{j,j} is to allocate smaller weights to those pixels with bigger residuals (probably abnormal pixels) and bigger weights to pixels with smaller residuals. By setting a reasonable weight threshold, we can discard the abnormal pixels whose weights fall below the threshold and carry out further sparse coding. In that case, we can effectively reduce the effect of abnormal pixels and therefore achieve good performance during tracking. From (9), we can see that the weight value w_{j,j} is bounded between 0 and 1, which makes sure that even pixels with very small residuals will not have too large weight values. This guarantees the stability of the algorithm.
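A rough illustration of the weighted sparse coding idea is sketched below. The weight function w = exp(-c * e^2) is only a stand-in for the bounded [0, 1] weight of Eq. (9), whose exact form is not recoverable here; the regularization strength, the ISTA solver, and all numbers are illustrative assumptions rather than the authors' settings.

```python
import numpy as np

# Iteratively reweighted L1-regularized least squares: re-estimate residual-based
# weights, then solve the weighted sparse coding problem with ISTA.

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def weighted_sparse_code(y, T, lam=0.01, c=10.0, outer=5, inner=50):
    alpha = np.zeros(T.shape[1])
    step = 1.0 / np.linalg.norm(T.T @ T, 2)    # safe step size (weights are <= 1)
    for _ in range(outer):
        e = y - T @ alpha
        w = np.exp(-c * e**2)                  # down-weight abnormal pixels
        for _ in range(inner):                 # ISTA on the weighted problem
            grad = T.T @ (w * (T @ alpha - y))
            alpha = soft_threshold(alpha - step * grad, step * lam)
    return alpha

# Hypothetical usage: 10 target templates of 256 pixels and one candidate patch y.
rng = np.random.default_rng(1)
T = rng.standard_normal((256, 10))
y = T @ np.array([0.8, 0, 0, 0.2, 0, 0, 0, 0, 0, 0]) + 0.01 * rng.standard_normal(256)
alpha = weighted_sparse_code(y, T)
print(np.round(alpha, 2))
```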
Bayesian MAP Estimation
We can regard object tracking as a Bayesian MAP estimation problem over hidden state variables in a Hidden Markov model; that is, given a set of observed samples Y_t = {y_1, y_2, ..., y_t}, we can estimate the hidden state variable x_t using Bayesian MAP theory.
According to Bayesian theory,

p(x_t | Y_t) ∝ p(y_t | x_t) ∫ p(x_t | x_{t−1}) p(x_{t−1} | Y_{t−1}) dx_{t−1},

where p(x_t | x_{t−1}) stands for the state transition model between two consecutive frames and p(y_t | x_t) stands for the observation likelihood model. We can obtain the object's best state in the tth frame through maximum posterior probability estimation; that is,

x̂_t = arg max_{x_t^i} p(y_t^i | x_t^i) p(x_t^i | x_{t−1}),

where x_t^i stands for the ith sample of the state variable in the tth frame. In this paper, we choose N = 400 samples.
State Transition Model.
We choose the object's motion affine transformation parameters as the state variable x_t = {x_t, y_t, θ_t, s_t, α_t, φ_t}, where x_t and y_t respectively represent the x-direction and y-direction translation of the object in the tth frame, and the remaining parameters describe the rotation angle, scale, aspect ratio, and skew. We assume that the state transition model follows the Gaussian distribution; that is,

p(x_t | x_{t−1}) = N(x_t; x_{t−1}, Ψ),

where Ψ is a diagonal matrix whose diagonal elements are the variances of the affine motion parameters. In the observation likelihood model, N(⋅) means the Gaussian distribution, μ and σ^2 respectively represent the mean and variance of the Gaussian distribution, M stands for the number of pixels of an object template, and e_j stands for the reconstruction error of the jth pixel of the object template in the tth frame.
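To make the MAP step concrete, the following sketch (not the authors' code) draws candidate affine states from a Gaussian transition model around the previous particles, scores each candidate with a likelihood built from its per-pixel reconstruction error, and keeps the highest-scoring particle. The error function, sigma, variances, and particle count are placeholder assumptions (the paper uses 400 samples).

```python
import numpy as np

# One Bayesian MAP update of a particle filter over 6 affine parameters.

def map_tracking_step(particles, psi_diag, errors_fn, n_particles=400, sigma=0.05, rng=None):
    """particles: (N, 6) affine states from frame t-1; psi_diag: per-parameter variances.
    errors_fn(state) -> per-pixel reconstruction error array for that candidate."""
    rng = rng or np.random.default_rng()
    # Gaussian state transition p(x_t | x_{t-1}) around resampled previous particles.
    idx = rng.integers(0, len(particles), size=n_particles)
    candidates = particles[idx] + rng.standard_normal((n_particles, 6)) * np.sqrt(psi_diag)
    # Observation likelihood ~ exp(-sum(e^2) / (2*sigma^2)) over template pixels.
    log_lik = np.array([-np.sum(errors_fn(c) ** 2) / (2 * sigma**2) for c in candidates])
    best = int(np.argmax(log_lik))       # MAP estimate over the sampled states
    return candidates[best], candidates, log_lik

# Hypothetical usage with a dummy error function (real use would warp the frame
# by the affine state and compute the 2DPCA reconstruction error).
dummy_errors = lambda state: np.linalg.norm(state[:2]) * np.ones(16)
prev = np.zeros((400, 6))
psi = np.array([4.0, 4.0, 0.01, 0.01, 0.001, 0.001])
state, cands, ll = map_tracking_step(prev, psi, dummy_errors)
print(state.round(3))
```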
Templates Updating.
Considering that the appearance of the target may change during tracking, it is necessary to dynamically update the template library.
In this paper, we use a method named "Half Updating Strategy" to update the templates. We take the tracking results
Experimental Results and Analysis
In order to evaluate the performance of our tracker, we conduct experiments on three challenging image sequences (Table 1 and Figures 1, 2, and 3).These sequences cover most challenging situations in object tracking: occlusion, motion blur, in-plane and out-of-plane rotation, large illumination change, scale variation, and complex background.
For comparison, we run six state-of-the-art algorithms with the same initial position of the target.These algorithms are the Frag tracking [18], IVT tracking [19], MIL tracking [20], L1 tracking [6], PN tracking [21], and VTD tracking [22] methods.We present some representative results in this section.
Conclusions/Outlook
This paper presents a robust tracking algorithm via 2DPCA and MLE. In this work, we represent the tracked object by using 2DPCA bases and an MLE error matrix. With the proposed model, we can remove the abnormal pixels and thus reduce their effect on the tracking algorithm. We take the object's reconstruction error into the Bayesian maximum posterior probability estimation framework and design a stable and robust tracker. Then, we explicitly take partial occlusion and misalignment into account for appearance model update and object tracking. Experiments on challenging video clips show that our tracking algorithm performs better than several state-of-the-art algorithms. Our future work will be the generalization of our representation model to other related fields.
Table 1: The description of test videos. | 2,912 | 2013-06-24T00:00:00.000 | [
"Computer Science"
] |
Search for a Two-Photon Exchange Contribution to Inclusive Deep-Inelastic Scattering
The transverse-target single-spin asymmetry for inclusive deep-inelastic scattering with effectively unpolarized electron and positron beams off a transversely polarized hydrogen target was measured, with the goal of searching for a two-photon exchange signal in the kinematic range 0.007<x_B<0.9 and 0.25 GeV**2<Q**2<20 GeV**2. In two separate regions Q**2>1 GeV**2 and Q**2<1 GeV**2, and for both electron and positron beams, the asymmetries are found to be consistent with zero within statistical and systematic uncertainties, which are of order 10**(-3) for the asymmetries integrated over x_B.
PACS numbers: 13.60.-r, 13.60.Hb, 13.88.+e, 14.20.Dh
In recent years, the contribution of two-photon exchange to the cross section for electron-nucleon scattering has received considerable attention. In elastic ep scattering, two-photon exchange effects are believed to be the best candidate to explain the discrepancy in the measurement of the ratio G_E/G_M of the electric and magnetic form factors of the proton obtained at large four-momentum transfer between the Rosenbluth method and the polarization transfer method [1]. It has been shown that the interference between the one-photon and two-photon exchange amplitudes can affect the Rosenbluth extraction of the nucleon form factors at the level of a few percent. This is enough to explain most of the discrepancy between the results of the two methods [2,3], although none of the recent calculations can fully resolve the discrepancy at all momentum transfers [4]. Two-photon exchange effects have also been shown to affect the measurement of parity violation in elastic scattering of longitudinally polarized electrons off unpolarized protons, with corrections of several percent to the parity-violating asymmetry [5].
In order to investigate contributions from two-photon exchange, it is necessary to find experimental observables that allow their isolation. Beam-charge and transverse single-spin asymmetries (SSAs) are two suitable candidates. In both elastic and inclusive inelastic leptonnucleon scattering, these asymmetries arise from the interference of one-photon and two-photon exchange amplitudes. Specifically, beam-charge asymmetries in the unpolarized cross section arise from the real part of the two-photon exchange amplitude [6], while inclusive transverse SSAs are sensitive to the imaginary part [7].
To date, all evidence of non-zero two-photon exchange effects in lepton-nucleon interactions comes from elastic scattering, l + N → l′ + N′. Measurements of the cross-section ratio R = σ_{e+p}/σ_{e−p} are compiled in Ref. [6]. Though the individual measurements are consistent with R being unity, a recent reanalysis [8] demonstrates that a deviation of about 5% at low values of four-momentum transfer and virtual-photon polarization is not excluded. Three experiments have measured a non-zero transverse-beam SSA of order 10^-5 to 10^-6 in elastic scattering of transversely polarized electrons off unpolarized protons [9,10,11].
In inelastic scattering no clear signature of two-photon exchange effects has yet been observed. Measurements of the cross-section ratio R with e+/e− and μ+/μ− beams [12,13,14,15,16,17,18] show no effect within their accuracy of a few percent. The transverse-target SSA has been measured at the Cambridge Electron Accelerator [19,20] and at Slac [21]. The data are confined to the region of nucleon resonances, and show an asymmetry which is compatible with zero within the few-percent level of the experimental uncertainties.
In inclusive deep-inelastic scattering (DIS), l+p → l ′ + X, and in the one-photon exchange approximation, such a SSA is forbidden by the combination of time reversal invariance, parity conservation, and the hermiticity of the electromagnetic current operator, as stated in the Christ-Lee theorem [22]. A non-zero SSA can therefore be interpreted as an indication of two-photon exchange.
Ref. [7] presents a theoretical treatment of the transverse SSA arising from the interference of one-photon and two-photon exchange amplitudes in DIS. For an unpolarized beam (U) and a transversely (T) polarized nucleon target, the spin-dependent part of the cross section is given by Eq. (1), which is proportional to e_l (M/Q) ε_{μνρσ} S^μ p^ν k^ρ k′^σ C_T. Here, e_l is the charge of the incident lepton, M is the nucleon mass, −Q^2 is the squared four-momentum transfer, and p, k and k′ are the four-momenta of the target, the incident and the scattered lepton, respectively, while ε_{μνρσ} is the Levi-Civita tensor. The term ε_{μνρσ} S^μ p^ν k^ρ k′^σ is proportional to S·(k × k′); consequently the largest asymmetry is obtained when the spin vector S is perpendicular to the lepton scattering plane defined by the three-momenta k and k′. Finally, C_T is a higher-twist term arising from quark-quark and quark-gluon-quark correlations. As σ_UT is proportional to the electromagnetic coupling constant α_em, it is expected to be small. Furthermore, due to the factor M/Q in Eq. (1), σ_UT is expected to increase with decreasing Q^2. A calculation based on certain model assumptions [23] for a Jlab experiment [24] yields expectations for the asymmetry of order 10^-4 at the kinematics of that experiment. The authors of Ref. [7], on the other hand, do not exclude asymmetries as large as 10^-2 and point out that the term C_T in Eq. (1) cannot be completely evaluated at present. Due to the factor e_l in Eq. (1), the asymmetry is expected to have a different sign for opposite beam charges. The capability of the Hera accelerator to supply both electron and positron beams thus provides an additional means to isolate a possible effect from two-photon exchange.
In this paper a first precise measurement of the transverse-target SSA in inclusive DIS of unpolarized electrons and positrons off a transversely polarized hydrogen target is presented.
The data were collected with the Hermes spectrometer [25] during the period 2002-2005. The 27.6 GeV positron or electron beam was scattered off the transversely polarized gaseous hydrogen target internal to the Hera storage ring at Desy. The open-ended target cell was fed by an atomic-beam source [26] based on Stern-Gerlach separation combined with radio-frequency transitions of hydrogen hyperfine states. The direction of the target spin vector was reversed at 1-3 minute time intervals to minimize systematic effects, while both the nuclear polarization and the atomic fraction of the target gas inside the storage cell were continuously measured [27]. Data were collected with the target polarized transversely to the beam direction, in both "upward" and "downward" directions in the laboratory frame. The beam was longitudinally polarized, but a helicity-balanced data sample was used to obtain an effectively unpolarized beam. Only the scattered leptons were considered in this analysis. Leptons were distinguished from hadrons by using a transition-radiation detector, a scintillator pre-shower counter, a dual-radiator ring-imaging Cherenkov detector, and an electromagnetic calorimeter. In order to exclude any contamination from a transverse hadron SSA in the lepton signal, hadrons were suppressed by very stringent particle identification requirements such that their contamination in the lepton sample is smaller than 2 × 10^-4. This resulted in a lepton identification efficiency greater than 94%. Events were selected in the kinematic region 0.007 < x_B < 0.9, 0.1 < y < 0.85, 0.25 GeV^2 < Q^2 < 20 GeV^2, and W^2 > 4 GeV^2. Here, x_B is the Bjorken scaling variable, y is the fractional beam energy carried by the virtual photon in the laboratory frame, and W is the invariant mass of the photon-nucleon system.
The differential yield for a given target spin direction (↑ upwards or ↓ downwards) can be expressed as in Eq. (2). Here, φ_S is the azimuthal angle about the beam direction between the lepton scattering plane and the "upwards" target spin direction, and σ_UU is the unpolarized cross section. Also, L^{↑(↓)} is the total luminosity in the ↑ (↓) polarization state, L_P^{↑(↓)} = ∫ L^{↑(↓)}(t) P(t) dt is the integrated luminosity weighted by the magnitude P of the target polarization, and Ω is the detector acceptance efficiency. The sin φ_S azimuthal dependence follows directly from the form S·(k × k′) of the spin-dependent part of the cross section; A_UT^{sin φ_S} refers to its amplitude. The asymmetry was calculated as in Eq. (3), where N^{↑(↓)} are the numbers of events measured in bins of x_B, Q^2, and φ_S. With the use of Eq. (2), it can be approximated, for small differences of the two average target polarizations P^{↑(↓)} = L_P^{↑(↓)}/L^{↑(↓)}, by the simpler form of Eq. (4). As shown in Table I, P^↑ and P^↓ are the same to a good approximation for all data-taking periods. The advantage of using the fully differential asymmetry A_UT(x_B, Q^2, φ_S) in Eq. (3) instead of the more common left-right asymmetry A_N(x_B, Q^2) is that the acceptance function Ω cancels in each (x_B, Q^2, φ_S) kinematic bin, if the bin size or the asymmetry is small. Assuming the φ_S dependence of σ_UT in Eq. (1) and Eq. (2), it can be easily shown that the sin φ_S amplitude A_UT^{sin φ_S} and the left-right normal asymmetry A_N, defined in terms of the cross sections σ_L and σ_R integrated over the angular ranges 0 ≤ φ_S < π and π ≤ φ_S < 2π respectively, are directly related. For this analysis the Q^2 range was divided into a "DIS region" with Q^2 > 1 GeV^2 and a "low-Q^2 region" with Q^2 < 1 GeV^2. To test for a possible enhancement of the transverse-target SSA due to the factor M/Q appearing in Eq. (1), the data at low Q^2 are also presented, though, strictly speaking, Eq. (1) may not be applicable in this range.
The A_UT^{sin φ_S} amplitudes were extracted with a binned χ^2 fit of the functional form p_1 sin φ_S + p_2 to the measured asymmetry, leaving p_2 as a free parameter or fixing it to the values given by Eq. (4) and Table I for electrons and positrons. In both cases the asymmetries are consistent with zero within their uncertainties. Due to the kinematics of the experiment, the quantities x_B and Q^2 are strongly correlated, as shown in the bottom panel of Fig. 1.
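For illustration, the amplitude extraction can be reproduced in miniature as a weighted linear least-squares fit of p_1 sin φ_S + p_2 to binned asymmetry values; this is a generic sketch, not the collaboration's analysis code, and the asymmetry values and uncertainties below are invented.

```python
import numpy as np

# Fit A(phi_S) = p1*sin(phi_S) + p2 to asymmetries measured in phi_S bins,
# weighting each bin by its statistical uncertainty (chi^2-equivalent fit,
# done here via weighted normal equations since the model is linear).

def fit_sin_amplitude(phi_centers, asym, asym_err):
    """Return (p1, p2) and their uncertainties from a weighted linear fit."""
    X = np.column_stack([np.sin(phi_centers), np.ones_like(phi_centers)])
    w = 1.0 / np.asarray(asym_err) ** 2
    cov = np.linalg.inv(X.T @ (X * w[:, None]))     # parameter covariance
    params = cov @ (X.T @ (w * asym))
    return params, np.sqrt(np.diag(cov))

# Hypothetical asymmetry values in 12 phi_S bins (made-up numbers).
rng = np.random.default_rng(2)
phi = (np.arange(12) + 0.5) * 2 * np.pi / 12
err = np.full(12, 0.003)
meas = 0.002 * np.sin(phi) + rng.normal(0.0, err)
(p1, p2), (dp1, dp2) = fit_sin_amplitude(phi, meas, err)
print(f"A_UT^sin(phi_S) = {p1:.4f} +/- {dp1:.4f}, offset = {p2:.4f} +/- {dp2:.4f}")
```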
The resulting amplitudes were not corrected for kinematic migration of inelastic events due to detector smearing and higher-order QED effects, or for contamination by the radiative tail from elastic scattering. The latter correction requires knowledge of the presently unknown elastic two-photon asymmetry. Instead, the contribution of the elastic radiative tail to the total event sample was estimated from a Monte Carlo simulation based on the Lepto generator [28] together with the Radgen [29] determination of QED radiative effects and with a Geant [30] based simulation of the detector. The elastic fraction is shown in the lower panel of Fig. 1. It reaches values as high as about 35% in the lowest x_B bin, where y is large (⟨y⟩ ≃ 0.80) and hence radiative corrections are largest [31]. The elastic fraction rapidly decreases towards high x_B, becoming less than 3% for x_B > 0.1.
The systematic uncertainties, shown in the fourth column of Table II and as error boxes in Fig. 1, include contributions due to corrections for misalignment of the detector, the beam position and slope at the interaction point, and bending of the beam and the scattered lepton in the transverse holding field of the target magnet. They were determined from a high-statistics Monte Carlo sample obtained from a simulation containing a full description of the detector, in which an artificial spin-dependent azimuthal asymmetry was implemented. Input asymmetries of zero or as small as 10^-3 were well reproduced within the statistical uncertainty of the Monte Carlo sample, which was about five times smaller than the statistical uncertainty of the data. For each measured point the systematic uncertainty was taken as the maximum of either the statistical uncertainty of the Monte Carlo sample or the difference between the input asymmetry and the extracted one. Systematic uncertainties from other sources like particle identification or trigger efficiencies were found to be negligible.
The transverse single-spin asymmetry amplitudes A_UT^{sinφS} for electron and positron beams integrated over xB are given separately for the "low-Q² region" and the "DIS region" in Table II, along with their statistical and systematic uncertainties. All asymmetry amplitudes are consistent with zero within their uncertainties, which in the DIS region are of order 10⁻³. The only exception is the low-Q² electron sample, where the asymmetry is 1.9 standard deviations different from zero. No hint of a sign change between electron and positron asymmetries is observed within uncertainties.
In conclusion, single-spin asymmetries were measured in inclusive deep-inelastic scattering at Hermes with unpolarized electron and positron beams and a transversely polarized hydrogen target, with the goal of searching for a signal of two-photon exchange. No signal was found within the uncertainties, which are of order 10⁻³.
We gratefully acknowledge the Desy management for its support and the staff at Desy and the collaborating institutions for their significant effort.

TABLE II. The asymmetry amplitude A_UT^{sinφS} with its statistical and systematic uncertainties and the average values of xB and Q², measured separately for electron and positron beams in the two Q² ranges Q² < 1 GeV² (upper rows) and Q² > 1 GeV² (lower rows). The systematic uncertainties contain the effects of detector misalignment and beam position and slope at the target, as estimated by a Monte Carlo simulation, but not the scale uncertainty from the target polarization, which amounts to 9.3% (6.6%) for the electron (positron) sample. Also, the results are not corrected for smearing, radiative effects and elastic background events. | 3,355.8 | 2009-07-30T00:00:00.000 | [
"Physics"
] |
Correlation of Dielectric Properties and Vibrational Spectra of Composite PVDF/Salt Fibers
Nitrate salts were added to polyvinylidene fluoride fibers and the fiber mats were then prepared by electrospinning. An experimental investigation of the structure was provided by Raman, FTIR, SEM, and XRD. The phase ratio of the polymer was studied both theoretically and experimentally in connection with the addition of the hydrated Mg(NO3)2, Ca(NO3)2, and Zn(NO3)2 salts. The comparison of simulated and experimental data for the vibrational spectroscopies is discussed. We provide a comparison of the triboelectric, dielectric, and compositional characterization of PVDF fibers doped with three types of nitrate hydrates. Doping of PVDF fibers with magnesium nitrate hexahydrate leads to a significant improvement of the triboelectric performance.
Introduction
Doping of piezopolymers with functional additives is a way to create smart and responsive materials [1]. PVDF fibers loaded with nitrate salts have several potential applications, particularly in the field of energetic materials, sensors, and biomedical devices. The combination of PVDF nanofibers and nitrate salts can offer unique properties and functionalities, making them suitable for various practical uses. The combination of PVDF's high mechanical strength and piezoelectricity with the energy release properties of nitrate salts can lead to improved performance of energetic materials or propellants in rocket propulsion systems and other propulsion devices [2,3]. By integrating nitrate salts into PVDF, it is possible to prepare composites for sensors that are capable of detecting pressure, strain, and mechanical vibrations and find practical applications in structural health monitoring, wearable devices, and smart textiles [4]. When PVDF nanofibers are combined with nitrate salts, they exhibit potential in drug delivery systems and tissue engineering scaffolds [5]. Furthermore, their piezoelectric properties enable applications in nerve regeneration and as sensors for biological signals [6]. The amalgamation of PVDF nanofibers and nitrate salts opens avenues in pyrotechnics and fireworks, creating unique visual effects through the energetic release of the salts upon ignition. PVDF nanofibers loaded with nitrate salts find utility in environmental monitoring devices, capable of detecting hazardous substances. Additionally, they can be integrated into safety systems for identifying gas leaks or explosive materials [7,8]. Dielectric properties refer to a material's ability to store and dissipate electric energy. Key parameters include the dielectric constant (permittivity) and the dielectric loss. These properties are influenced by molecular structure, polarizability, and the interaction of molecules with an electric field. Vibrational spectroscopies involve passing radiation through a sample and recording the absorbed wavelengths. The resulting spectrum provides information about the molecular vibrations, which are related to the functional groups and molecular structure of the material.
Dielectric properties are related to molecular motions such as dipole relaxation and vibrations. Vibrational spectroscopies can help identify these motions and their contributions to energy dissipation. Triboelectric charging, also known as contact electrification, is a widely recognized and frequently observed phenomenon. It occurs when material fibers repeatedly come into contact and then separate, causing charge transfer between surfaces. This transfer is influenced by both kinetic and equilibrium effects. PVDF in particular tends to gain a negative charge when it comes into contact with other materials.
In this paper, we suggest a theoretical and experimental approach for the explanation of the composite formation through modeling and experimental investigation of vibrational spectra. These studies contribute to the evaluation of the dielectric properties of the polymeric fibers, which defines their application potential. We present XPS and SEM analyses because the surface condition is crucial for the triboelectric response. Electrical characterization is necessary for electrospun fiber mats to facilitate their development and commercialization.
Specific types of nitrate hydrates were chosen for which the compatibility with PVDF and the mechanism of interaction had been studied previously [9]. Earlier we reported the functional properties of these composites [10]. In this work, we focus on understanding the PVDF phase conformations in dependence on the salt cations. The novelty of the research consists in the comparison of theoretical and experimental approaches to the investigation of the phases, aimed at explaining the dielectric properties of the composites.
Sample Preparation
The polyvinylidene fluoride (PVDF) material used in the following measurements was obtained from Sigma Aldrich (St. Louis, MO, USA) and prepared as fibers with a molecular weight of 275,000 g × mol−1. The fibers were produced by electrospinning a 15 wt% PVDF solution in a blend of dimethylsulfoxide (Sigma Aldrich, St. Louis, MO, USA) and acetone (Sigma Aldrich, St. Louis, MO, USA) at a volume ratio of 7/3. To this solution, calcium, magnesium, and zinc nitrates (in their hydrated form) from Lach-Ner (Neratovice, Czech Republic) were added at 8 wt% relative to the solid polymer before dissolving the PVDF. The solution was stirred for 24 h at 40 °C.
Electrospinning was conducted using 4-SPIN equipment from Contipro (Dolní Dobrouč, Czech Republic) at a feeding rate of 20 µL × min−1, employing a thin needle with a diameter of 1.067 mm (17 G). The resulting fibers were collected on an aluminum foil-covered rotating collector, spinning at a speed of 2000 rpm for 30 min. The distance between the needle tip and the collector was maintained at 20 cm during the process. The nonwoven fiber mats produced were left to dry overnight at room temperature. The resulting fibers' diameter ranged from 300 to 700 nm.
Sample Experimental Characterization
Scanning electron microscopy analysis was conducted with a Tescan LYRA3 electron microscope (Tescan, Brno, Czech Republic). The samples were coated with 10 nm of carbon to avoid charging. The acceleration voltage was 5 kV with a view field of 200 µm and 10 µm. Cross-sectional imaging was performed using a Focused Ion Beam/Scanning Electron Microscope FEI Helios NanoLab 660 (FEI, Brno, Czech Republic). A Ga focused ion beam was employed with an accelerating voltage of 5 kV and a current of 43 pA for sectioning. The resulting cross-sections were then examined using the SEM capabilities of the same Helios system.
X-ray photoelectron spectroscopy (XPS) was performed to analyze the chemical bonding in the samples using an AXIS Supra instrument (Kratos Analytical Ltd., Manchester, UK). The measurements were taken with an emission current of 15 mA, and the spectra were acquired at a resolution of 20 for wide scans and 80 for element-specific scans. Fourier-transform infrared (FTIR) spectroscopy measurements were conducted to assess the phase composition of the samples using a Bruker instrument (Billerica, MA, USA) in transmission mode, with 512 scans and a resolution of 1 cm−1. X-ray powder diffraction (XRD) analysis was employed to confirm the crystalline structure of the samples, utilizing a Rigaku SmartLab 3 kW system (Rigaku, Tokyo, Japan) configured in the Bragg-Brentano geometry. Diffraction patterns were recorded in the 2θ range of 10° to 25° using Cu Kα radiation. Raman spectroscopy was performed to analyze the structural characteristics of the samples with a WITec alpha300 R system (WITec, Ulm, Germany), operating at an excitation wavelength of 532 nm and a laser power of 10 mW. The Raman signal was averaged over 20 accumulations with an integration time of 10 s per accumulation.
Computational Methods
Density functional theory (DFT) studies were conducted to illustrate the inter-component interactions within the PVDF/salt complexes. All the DFT calculations were performed based on the linear combination of atomic orbitals (LCAO) theory using the Gaussian 16 software suite [11]. The hybrid exchange-correlation functional B3LYP (Becke's nonlocal gradient-corrected three-parameter exchange functional [12] with the correlation functional developed by Lee-Yang-Parr [13]) was used with the polarized triple-ζ basis set 6-311+G(d,p). Long-range dispersion correction was incorporated by implementing Grimme's dispersion correction DFT-D3 [14]. Frequency scaling factors of 0.9679 and 0.9877 were applied to obtain the vibrational frequency modes and the zero-point energies (ZPE), respectively [15].

To model the PVDF/salt complexes, tetramer chains of the three polymorphs of PVDF, i.e., α-, β-, and γ-PVDF, were considered. Two configurations (denoted as C1 and C2 elsewhere) of each of the hydrated nitrate salt molecules, i.e., Mg(NO3)2•6H2O, Ca(NO3)2•4H2O, and Zn(NO3)2•6H2O, were designed according to their coordination geometries, i.e., the arrangement of the central bivalent cations (Mg2+, Ca2+, or Zn2+), the NO3− anions, and the water molecules, based on similar DFT studies on hydrated nitrate salts of different metals reported earlier [16,17]. The formation energies of the two configurations C1 and C2 (E_f,C1 and E_f,C2, respectively) were calculated from the ZPE-corrected energies of the optimized configurations and of their constituent units. Here, E_C1 and E_C2 denote the ZPE-corrected energies of the two configurations of each salt molecule (C1 and C2), respectively; E_M(NO3)2 and E_[M(H2O)n]2+ are the ZPE-corrected energies of the central units of C1 and C2, respectively, with M and n denoting the metal (Mg, Ca, or Zn) and the number of water molecules, respectively; and E_H2O and E_[NO3]− refer to the ZPE-corrected energies of an isolated H2O molecule and an isolated [NO3]− ion, respectively. The PVDF/salt interaction energies (E_int) were calculated from the ZPE-corrected energies E_PVDF/salt, E_PVDF, and E_salt of the PVDF/salt complex, the isolated PVDF (α-, β-, or γ-phase) tetramer chain, and the isolated salt (C1 or C2) molecule.
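A sketch of the energy expressions implied by these definitions (the stoichiometric bookkeeping, with n water molecules and two nitrate ions per salt, is an assumption inferred from the listed components rather than copied from the original display equations):

E_{f,\mathrm{C1}} = E_{\mathrm{C1}} - E_{\mathrm{M(NO_3)_2}} - n\,E_{\mathrm{H_2O}}, \qquad
E_{f,\mathrm{C2}} = E_{\mathrm{C2}} - E_{\mathrm{[M(H_2O)_n]^{2+}}} - 2\,E_{\mathrm{[NO_3]^-}},

E_{\mathrm{int}} = E_{\mathrm{PVDF/salt}} - E_{\mathrm{PVDF}} - E_{\mathrm{salt}}.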
Electrical Characterization
To measure the triboelectric properties of the materials, a systematic approach involves using a triboelectric generator, a variable load resistor, and precise voltage and current measurement tools. The experiment begins by setting up the triboelectric generator, which consists of two different materials that come into contact and then separate, generating an electric charge. The output terminals of the triboelectric generator are connected to a variable load resistor. Initially, the load resistor is set to a very high value to measure the open-circuit voltage, and then to a very low value to measure the short-circuit current. These measurements provide baseline data for the voltage and current capabilities of the triboelectric generator. Subsequently, the load resistance is adjusted incrementally from low to high values. For each resistance value, the voltage across the resistor and the current through it are measured using precision voltmeters and ammeters, with data recorded over multiple cycles to ensure accuracy and repeatability. The measurements were validated and published in previous work dedicated to this method [18]. Dielectric property measurements, crucial for defining sample functionality, were conducted using a Novocontrol Alpha Analyzer device (Novocontrol Technologies, Montabaur, Germany) across a frequency range spanning 1 to 100,000 Hz.
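The load-sweep procedure described here lends itself to a simple post-processing step: compute the power delivered to each load and locate the optimum resistance. The sketch below uses hypothetical arrays of load resistances and measured peak voltages (no specific instrument API or measured dataset is implied).

```python
import numpy as np

# Hypothetical sweep: load resistances (ohm) and measured peak voltages (V)
r_load = np.logspace(3, 9, 25)            # 1 kOhm ... 1 GOhm
v_peak = 12.0 * r_load / (r_load + 5e7)   # toy source: 12 V behind ~50 MOhm

i_peak = v_peak / r_load                  # current through the load (Ohm's law)
p_peak = v_peak * i_peak                  # power delivered to the load

best = np.argmax(p_peak)
print(f"optimal load ~ {r_load[best]:.2e} Ohm, "
      f"peak power ~ {p_peak[best] * 1e9:.1f} nW")
```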
Sample Morphology
The detailed surface topography of the composite fibers is shown in the secondary electron images in Figure 1. The fibers were uniform in terms of diameter and surface texture (Figure 1a,c,e) and exhibited an average diameter of 500 nm. The surface of the fibers appeared smooth, with the rare presence of grooves (Figure 1b,d,e). This texture can be attributed to the chosen composition and electrospinning parameters. The thickness of the samples varied between 4 and 8 µm, as illustrated in Figure 1g,h. The images show the cross-section of the Mg(NO3)2•6H2O sample; the remaining samples exhibit similar general characteristics.
Sample Composition
The crystalline structure of the salt hydrates used can be described in terms of the arrangement of their constituent ions and water molecules. The detailed structure is typically determined through X-ray diffraction. The overall structure is stabilized by hydrogen bonds, forming a crystalline lattice.

Magnesium nitrate hexahydrate (Mg(NO3)2•6H2O) and zinc nitrate hexahydrate (Zn(NO3)2•6H2O) crystallize in the monoclinic crystal system. In this structure, the metal ions are surrounded by six water molecules, forming a coordination complex. The nitrate ions are also involved in the crystal lattice, contributing to the overall stability and arrangement of the crystal. Calcium nitrate tetrahydrate (Ca(NO3)2•4H2O) crystallizes in the monoclinic crystal system. The structure typically features calcium ions coordinated by water molecules and nitrate ions, creating a complex network.
The XRD patterns of the composite samples do not include any diffraction peaks of magnesium nitrate hexahydrate (Figure 2a) [19][20][21], calcium nitrate tetrahydrate (Figure 2b), or zinc nitrate hexahydrate (Figure 2c). The raw data are presented in the Supplementary Materials. The survey XPS spectra cover a wide range of binding energies, from 0 to 1200 eV. The salt spectra are shown in Figure 3a and contain the expected ions of metals, nitrogen, and oxygen, together with carbon contamination. The survey spectra display peaks corresponding to various core levels of the metal elements present in the samples: Ca 2s, Ca 2p, and Ca 3p for Ca(NO3)2•4H2O; Mg 2s, Mg 2p, and the Mg KLL Auger peak for Mg(NO3)2•6H2O; and Zn 2p, Zn 3s, Zn 3p, and four Auger peaks of Zn for Zn(NO3)2•6H2O. The Zn LMM notation specifies the shells involved in the Auger process: the initial hole is in an L shell, and the transitions involve electrons from the M shells. The various possible transitions lead to different Auger peaks (Figure 3a) depending on the specific subshells involved; these subshells are characterized by different binding energies, leading to distinct Auger electron energies. The potential LMM transitions for zinc are L3M4M4, L3M4M5, L2M4M4, and L2M4M5. The main peaks of the composite spectra include C 1s and F 1s, which correspond to the polymer's backbone structure, and O 1s peaks, which indicate the presence of oxidized elements and oxygen contamination (Figure 3b). All the spectra show the F KLL Auger peak at a binding energy of 830 eV as a confirmatory signal. Deconvolution to separate overlapping peaks and accurately identify the chemical states of the elements was done with CasaXPS software, version 2.3.17PR1.1. Raw data are presented in the Supplementary Materials.
Detailed XPS spectra are presented in Figure 4a-c. The Ca 2p doublet peaks shift to lower energies for the Ca2+ involved in the polymer composite (Figure 4a,d). The shape of the Zn 2p signal shows the appearance of a second doublet, corresponding to an additional oxidation state in the polymer composite (Figure 4b,e). Both the position and area of the Mg 2p ionic bonds indicate fewer oxygen bonds in the composite samples. The distance between the spin-split peaks shown in the figures demonstrates the expected changes in the chemical surroundings of the cations. The detailed spectra of fluorine are shown in Figure 5a-c. The spectra of PVDF fibers doped with Ca(NO3)2•4H2O and Zn(NO3)2•6H2O show similar ratios of the fluorine-carbon and fluorine-metal bonds (Figure 5a,b). The area of the C-F bond is twice as large in the PVDF fibers of the Mg(NO3)2•6H2O composite. Detailed spectra of carbon were measured through the C 1s peak. The C-C peak is present in all spectra of hydrates and composites (Figure 6a-f). A significant contribution of the O-C=O bond is observed at 288.53 eV for the Ca(NO3)2•4H2O sample (Figure 6a). The composite samples have a C-F peak at around 289.1 eV. The lowest C-F fraction relative to the C-C bond (37.32%) is found for the PVDF fibers doped with Mg(NO3)2•6H2O, in agreement with the F 1s spectra (Figure 5c), confirming the bonding of fluorine with the metal.
DFT Analyses
In this section, the PVDF/salt interaction is elucidated based on first-principles DFT analyses. Optimized structures of the hydrated salt molecules are provided in Figure 7. Within the initial configurations (before optimization) of the C1 structures, the bivalent metal ions M2+ (M: Mg, Ca, Zn) are connected to two [NO3]− ions, forming an [M(NO3)2] central unit surrounded by water molecules. Conversely, in the initial C2 structures, the M2+ ions are bonded with water molecules, forming the [M(H2O)n]2+ central unit, which is surrounded by two [NO3]− ions. Notably, after optimization, both the H2O and [NO3]− moieties can form ionic or coordination bonds with the central metal cation, as depicted in Figure 7. The average Mg-O, Ca-O, and Zn-O bond distances are 2.11, 2.45, and 2.13 Å, respectively, within C1, and 2.09, 2.36, and 2.12 Å, respectively, within C2. The configurational energy differences (∆E_C = E_C1 − E_C2) and the formation energies (E_f,C1 and E_f,C2) of the salt complexes are provided in Table 1. A negative value of ∆E_C indicates better stability of C1, with more negative electronic energy. On the other hand, a positive value of ∆E_C indicates that C2 exhibits more negative electronic energy, i.e., better stability compared to C1. Therefore, as evident from Table 1, Mg(NO3)2•6H2O is more stable as C2 than C1, with ∆E_C of 1.88 kcal/mol. On the contrary, C1 of Ca(NO3)2•4H2O and Zn(NO3)2•6H2O is more stable than the respective C2 structures, with ∆E_C of 1.26 and 1.88 kcal/mol, respectively. Furthermore, the E_f,C2 values are found to be more negative than E_f,C1 for all the salt complexes. Nevertheless, considering the feasibility of the formation of both salt configurations, as indicated by the negative E_f,C1 and E_f,C2 values, we have included both C1 and C2 to model the PVDF/salt complexes.

Understanding the PVDF/salt interaction is crucial for accurately representing the composite solution of hydrated salts and PVDF. Both the C1 and C2 structures of all the salt molecules are considered to elucidate their interaction with α-, β-, and γ-PVDF tetramers. Optimized structures of all the PVDF/salt complexes are provided in Figures 8-10. The configurational energy differences according to the salt structure (C1 or C2) within the PVDF/salt systems (∆E_PVDF/C) and the PVDF/salt interaction energies (E_int) are provided in Table 2. A negative value of ∆E_PVDF/C indicates that the PVDF/C1 complex is more stable than the respective PVDF/C2 configuration. Therefore, PVDF/C1 is more stable than PVDF/C2 for α-PVDF/Ca(NO3)2•4H2O, α-PVDF/Zn(NO3)2•6H2O, β-PVDF/Ca(NO3)2•4H2O, γ-PVDF/Mg(NO3)2•6H2O, and γ-PVDF/Zn(NO3)2•6H2O. Conversely, a positive ∆E_PVDF/C indicates that PVDF/C2 is more stable than PVDF/C1 for α-PVDF/Mg(NO3)2•6H2O, β-PVDF/Mg(NO3)2•6H2O, β-PVDF/Zn(NO3)2•6H2O, and γ-PVDF/Ca(NO3)2•4H2O.

The configurational stability is further demonstrated by the PVDF/salt interaction energies. A more negative E_int indicates stronger inter-component interaction within the PVDF/salt complexes. Note that, among all the PVDF/salt systems, the β-PVDF/salt systems generally exhibit the strongest PVDF/salt interaction compared to their α-PVDF/salt and γ-PVDF/salt counterparts. This finding aligns with the experimental measurements showing the highest amount of the piezoelectric β-PVDF phase in the PVDF/salt samples (Figure 7, Table 1). Specifically, for the Ca(NO3)2•4H2O-doped PVDF samples, the PVDF phase composition was found to be 7.71% α, 92.29% β, and virtually 0% γ. For the PVDF/Mg(NO3)2•6H2O samples, the composition was 4.25% α, 95.75% β, and 0% γ. For the PVDF/Zn(NO3)2•6H2O sample, 14.70% α, 85.30% β, and again 0% γ phases were obtained. These results corroborate an excellent match between the experimental and DFT results.

In order to further investigate the structures of the PVDF/salt systems, the calculated and experimental IR and Raman spectra are compared in the Supplementary Materials, Figure S1a-c. Interestingly, substantial consistency is observed between the calculated and experimental IR and Raman spectra (Supplementary Materials, Figure S1a). However, some mismatch between the calculated and experimental results is found in the high-frequency region (beyond 3000 cm⁻¹) in the IR spectra of the PVDF/salt systems (the raw data are presented in Supplementary Materials, Figure S1b). This discrepancy might be attributed to the solvent effect not being considered in the calculations. Nevertheless, the calculated Raman spectra of the PVDF/salt systems show comparatively better alignment with the experimental data (Supplementary Materials, Figure S1c), implying the reliability of the current computational method.
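The phase percentages quoted above are typically obtained from the relative intensities of the α- and β-characteristic FTIR bands. The snippet below illustrates one widely used estimate of this kind (a Gregorio-type formula with absorption coefficients Kα = 6.1×10⁴ and Kβ = 7.7×10⁴ cm²/mol) using hypothetical absorbance values; it is not necessarily the exact procedure used by the authors, whose method is described in their earlier work.

```python
def beta_fraction(a_alpha: float, a_beta: float,
                  k_alpha: float = 6.1e4, k_beta: float = 7.7e4) -> float:
    """Estimate the electroactive beta-phase fraction from FTIR absorbances.

    a_alpha: absorbance of an alpha-characteristic band (e.g. ~763 cm^-1)
    a_beta:  absorbance of a beta-characteristic band (e.g. ~840 cm^-1)
    """
    return a_beta / ((k_beta / k_alpha) * a_alpha + a_beta)

# Hypothetical absorbances for a doped PVDF fiber mat
print(f"F(beta) = {100 * beta_fraction(a_alpha=0.05, a_beta=0.95):.1f}%")
```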
Dielectric Properties
The results of the triboelectric measurements are plotted to show the relationship of power as a function of the load resistance. Typically, the voltage increases with increasing load resistance until it reaches a saturation point, while the current decreases according to Ohm's law. The power output curve often reveals a peak, indicating the optimal load resistance for maximum power extraction from the triboelectric generator. The raw data are presented in the Supplementary Materials. This analysis helps in understanding the efficiency and performance characteristics of triboelectric materials under varying electrical loads. By systematically varying the load resistance and accurately measuring the corresponding electrical outputs, the triboelectric properties such as maximum power output, optimal load resistance, and overall energy conversion efficiency are thoroughly characterized (Figure 11). The surface roughness of the samples was ~3 µm. This roughness is desirable for triboelectric charge accumulation due to the increased surface area, while still providing sufficient contact in the measurement setup.

The dielectric characterization of the composites reveals that the dielectric constants exhibit highly similar trends (Figure 12). The additional data (dielectric loss, capacity, resistivity) are presented in the Supplementary Materials. Hydrated salts introduce ionic conductivity in the polymer composite, which influences the overall dielectric behavior. Comparing the dielectric properties of PVDF composites with different types of hydrated salts can provide a comprehensive understanding of how different ionic species affect the dielectric performance. This can aid in tailoring composites for specific applications. The dielectric constant of the PVDF fibers was significantly influenced by the doping with different salt hydrates. Among the tested dopants, PVDF fibers doped with Ca(NO3)2•4H2O exhibited the highest dielectric constant. This increase can be attributed to the effective interaction between the Ca2+ ions and the polymer matrix, which enhances the polarization under an electric field. In contrast, PVDF fibers doped with Mg(NO3)2•6H2O showed the lowest dielectric constant. The lower dielectric constant in this case may result from the different ionic size and hydration level of Mg2+, which could affect the extent of polarization and the overall dielectric response of the material.
Discussion
These detailed SEM observations were crucial for assessing the quality and uniformity of the composite electrospun fibers, providing insights into the optimization of the electrospinning process and the potential applications of the fibers. The SEM images (Figure 1a-e) reveal that the fibers exhibit a preferential orientation aligned with the direction of spinning: the fibers are consistently arranged parallel to the spinning axis, suggesting uniformity in their structural organization. The detailed images highlight the extent of this orientation, demonstrating how the spinning process influences the final alignment of the fibers. This uniform alignment is crucial for ensuring consistent electrical properties in the composite fibers, and the preferential orientation can impact the properties and overall performance of the fiber material, making it a significant observation in the study of spun fibers [22].
The XRD patterns of the composite samples do not display any diffraction peaks corresponding to Mg(NO3)2•6H2O, Zn(NO3)2•6H2O, and Ca(NO3)2•4H2O. Instead, the diffraction peaks observed are solely attributable to PVDF. This absence of peaks from the nitrate hydrates confirms that the crystal structures of these salts have been disrupted or dissolved within the composite matrix. The PVDF in the composite retains its characteristic phases, which we have comprehensively detailed in our previous work. This observation underscores the effective incorporation of PVDF as the dominant phase within the composite, with the nitrate hydrates no longer maintaining their crystalline integrity.
These different transitions result in four distinct LMM Auger peaks in the spectrum, labeled a, b, c, and d. Each peak corresponds to a specific combination of initial and final states within the L and M shells, resulting in unique energies for the ejected Auger electrons. The presence of Auger peaks for the elements reflects the possible electronic transitions between the subshells involved, each contributing to the overall Auger spectrum with its characteristic energy (see Supplementary Materials).
We attempted to explain the electrical behavior of the polymeric composites based on the electronegativity of the metal cation in the salts. Molecular vibrations observed in the FTIR spectra can be linked to the polarizability of the material. Polarizability affects how the material responds to an electric field, thus influencing its dielectric properties. Key types of vibrations include stretching vibrations, which are changes in bond length, and bending vibrations, which are changes in bond angles. Functional groups with polar bonds in PVDF contribute significantly to the dielectric properties due to their dipole moments. Shifts in peak positions and changes in intensity can indicate changes in molecular interactions and polarizability. Two configurations of each nitrate salt are presented. In the initial structures of salt configuration 1, the central metal cations (Mg2+, Ca2+, and Zn2+) are coordinated to [NO3]− and form [M(NO3)2], which is surrounded by H2O moieties. On the other hand, in the initial structures of salt configuration 2, the central metal cations (Mg2+, Ca2+, and Zn2+) are coordinated to H2O and form [M(H2O)n]2+, which is surrounded by [NO3]− units. However, after optimization, both H2O and [NO3]− form a bond (ionic or coordination) with the central metal cation. The formation energies of the salts are calculated accordingly. More negative electronic energy and formation energy indicate higher stability of the structure. Notably, Ca(NO3)2•4H2O exhibits a high positive formation energy. Interactions between α-, β-, and γ-PVDF and the salt configurations are compared; the β-PVDF/salt interaction energies (for both configurations 1 and 2) are more negative than for the α- and γ-PVDF/salt counterparts. Simulated and experimental FTIR and Raman spectra are compared; a mismatch is observed in the high-frequency region, which can be caused by hydrogen-bond interactions but is not significant for this study.
Independent modeling and experimental characterization by vibrational techniques are in agreement and show that, for obtaining a higher electroactive response, the doping of PVDF fibers with Mg(NO3)2•6H2O is preferable. To confirm this conclusion, the triboelectric measurements were analyzed. Triboelectric charging involves the mechanical transfer of electric charge facilitated by the movement of fiber surfaces. When two solid surfaces make contact and then pull apart, this process results in the transfer of charge from one surface to the other. Having a larger polarization of the surface, the PVDF fibers doped with Mg(NO3)2•6H2O showed a higher triboelectric response and could be more desirable for practical applications. Comparisons to recent studies, presented in Table 3, show a wide range of electro-response (piezo- and tribo-effect) values. The dielectric constant is related to the density and polarizability of the polar functional groups identified in the FTIR and Raman spectra. Surprisingly, the fibers doped with Ca(NO3)2•4H2O show the highest dielectric constant. The non-polarized α-phase of PVDF can contribute to increased permittivity due to the interplay between its structural characteristics, dipole reorientation, interfacial polarization, and processing conditions. The α-phase has a TGTG' (trans-gauche-trans-gauche') conformation, which is non-polar but still allows the molecular dipoles to reorient under an electric field. This reorientation of dipoles within the PVDF chains increases the polarization of the material, thus contributing to higher permittivity. Although the intrinsic dipole moment is lower due to partial cancellation, the molecular segments can still rotate or reorient under an electric field, enhancing the dielectric response. Additionally, at the interfaces between the crystalline and amorphous regions, charges can accumulate, leading to interfacial polarization, which significantly contributes to the overall permittivity. These effects are further influenced by the processing conditions, such as stretching and poling, which can enhance chain alignment and crystallinity, allowing for better dipole alignment and trapping of charges. The combined effect of these factors results in a significant dielectric response from the α-phase PVDF, even though it is non-polar, leading to increased permittivity.
Conclusions
This consistent alignment parallel to the spinning axis signifies a high degree of structural organization, profoundly affecting the mechanical properties and overall performance of the fiber material, which is critical for optimizing the electrospinning process and enhancing the functional applications of the fibers.The successful incorporation of PVDF as the dominant phase, coupled with the dissolution of nitrate hydrates, underscores the efficacy of the composite fabrication method.
Both computational modeling and experimental characterization confirmed that doping PVDF fibers with Mg(NO 3 ) 2 •6H 2 O enhances the electroactive response (58 nW), as evidenced by triboelectric measurements indicating superior performance, while fibers doped with Ca(NO 3 ) 2 •4H 2 O exhibited the highest dielectric constant (4.04).The nonpolarized α-phase of PVDF contributes to increased permittivity through the reorientation of molecular dipoles under an electric field and interfacial polarization at the boundaries between crystalline and amorphous regions.
This complex and comprehensive approach is a way to successfully develop and utilize such composites.The successful utilization and effectiveness of PVDF micro and nanofibers incorporating nitrate salts heavily rely on factors such as the composite's formulation, structure, and the specific properties of the nitrate salts employed.Achieving optimal results necessitates meticulous design and engineering to customize the material for its intended purpose.
Figure 2. XRD spectra of nitrate hydrates and doped PVDF fibers for (a) Ca(NO3)2•4H2O; (b) Mg(NO3)2•6H2O; and (c) Zn(NO3)2•6H2O.
Figure 4. Detailed XPS peaks of the metal cation: (a) Ca 2p in the salt hydrate; (b) Zn 2p in the salt hydrate; (c) Mg 2p in the salt hydrate; (d) Ca 2p peak in the polymer composite; (e) Zn 2p peak in the polymer composite; (f) Mg 2p peak in the polymer composite.
Figure 12. The dielectric constant of the composites.
Table 1. Configurational energy differences (∆E_C = E_C1 − E_C2) and formation energies (E_f,C1 and E_f,C2) of the salt configurations C1 and C2.
Table 3. Ranking of electro-response in recent publications. | 8,927.8 | 2024-08-26T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Direct derivation of Lienard Wiechert potentials, Maxwell's equations and Lorentz force from Coulomb's law
In the 19th century Maxwell derived the Maxwell equations from the knowledge of three experimental physical laws: Coulomb's law, Ampere's force law and Faraday's law of induction. However, the theoretical basis for Ampere's force law and Faraday's law remains unknown to this day. Furthermore, the Lorentz force is considered an experimental phenomenon; the theoretical foundation of this force is still unknown. To answer these fundamental theoretical questions, we derive the Lienard-Wiechert potentials, Maxwell's equations and the Lorentz force from two simple postulates: (a) when all charges are at rest the Coulomb force acts between the charges, and (b) disturbances caused by a charge in motion propagate away from the source with finite velocity. Neither special relativity nor the Lorentz transformation was used in our derivations. In effect, it is shown that all the electrodynamic laws, including the Lorentz force, can be derived from Coulomb's law and time retardation. This was accomplished by the analysis of a hypothetical experiment where a test charge is at rest and where a previously moving source charge stops at some time in the past. Then the generalized Helmholtz decomposition theorem, also derived in this paper, was applied to reformulate the Coulomb force acting at present time as a function of the positions of the source charge at previous times when the source charge was moving. From this reformulation of Coulomb's law the Lienard-Wiechert potentials and Maxwell's equations were derived. In the second part of this paper, the energy conservation principle valid for moving charges is derived from the knowledge of the electrostatic energy conservation principle valid for stationary charges. This again was accomplished by using the generalized Helmholtz decomposition theorem. From this dynamic energy conservation principle the Lorentz force is derived.
Introduction
In his famous Treatise [1,2] Maxwell derived the equations of electrodynamics based on the knowledge of the three experimental laws known at the time: Coulomb's law, describing the electric force between charges at rest; Ampere's law, describing the force between current-carrying wires; and Faraday's law of induction. Prior to Maxwell, magnetism and electricity were regarded as separate phenomena. It was James Clerk Maxwell who unified these seemingly disparate phenomena into the set of equations collectively known today as Maxwell's equations. In modern vector notation, the four Maxwell's equations that govern the behavior of electromagnetic fields are written as:

∇ · D = ρ, (1)
∇ × E = −∂B/∂t, (2)
∇ · B = 0, (3)
∇ × H = J + ∂D/∂t, (4)

where the symbol D denotes the electric displacement vector, E is the electric field vector, B is a vector called the magnetic flux density, H is the magnetic field intensity (H = B/µ in a linear medium), the vector J is called the current density, and the scalar ρ is the charge density. Furthermore, there are two more important equations in electrodynamics that relate the magnetic vector potential A and the scalar potential φ to the electromagnetic fields B and E:

B = ∇ × A, (5)
E = −∇φ − ∂A/∂t. (6)

In standard electromagnetic theory, if a point charge q_s is moving with velocity v_s(t) along an arbitrary path r_s(t), the scalar potential φ and the vector potential A caused by the moving charge q_s are described by the well-known, relativistically correct, Liénard-Wiechert potentials [3,4]:

φ = φ(r, t) = (1/4πε) q_s / [(1 − n_s(t_r) · β_s(t_r)) |r − r_s(t_r)|], (7)
A = A(r, t) = (µc/4π) q_s β_s(t_r) / [(1 − n_s(t_r) · β_s(t_r)) |r − r_s(t_r)|], (8)

where t_r is the retarded time, r is the position vector of the observer, and the vectors n_s(t_r) and β_s(t_r) are:

n_s(t_r) = (r − r_s(t_r)) / |r − r_s(t_r)|, (9)
β_s(t_r) = v_s(t_r)/c. (10)

These equations were almost simultaneously discovered by Liénard and Wiechert around 1900 and they represent explicit expressions for the time-varying electromagnetic fields caused by a charge in arbitrary motion. Nevertheless, the Liénard-Wiechert potentials were derived from the retarded potentials, which in turn are derived from Maxwell's equations. Maxwell's electrodynamic equations provide the complete description of electromagnetic fields; however, these equations say nothing about the mechanical forces experienced by a charge moving in an electromagnetic field. If the charge q is moving in an electromagnetic field with velocity v, then the force F experienced by the charge q is:

F = q (E + v × B). (11)

The force described by equation (11) is the well-known Lorentz force. Discovery of this electrodynamic force is historically credited to H.A. Lorentz [5]; however, a similar expression for the electromagnetic force can be found in Maxwell's Treatise, article 598 [2]. The difference between the two is that Maxwell's electromotive force acts on moving circuits and the Lorentz force acts on moving charges. However, it is not yet explained what causes the Lorentz force, Ampere's force law and Faraday's law. Maxwell derived his expression for the electromotive force along a moving circuit from the knowledge of the experimental Faraday's law. Later, Lorentz extended Maxwell's reasoning to discover the force acting on charges moving in an electromagnetic field [5]. Nevertheless, it would have been impossible for Lorentz to derive his force law without the prior knowledge of Maxwell's equations [6].
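To make Eqs. (7)-(10) concrete, the sketch below numerically solves the retarded-time condition t_r = t − |r − r_s(t_r)|/c for a charge in uniform circular motion (a hypothetical trajectory chosen purely for illustration, not taken from the paper) and evaluates the Liénard-Wiechert scalar potential at one field point.

```python
import numpy as np
from scipy.optimize import brentq

EPS0 = 8.8541878128e-12   # vacuum permittivity (F/m)
C = 299_792_458.0         # speed of light (m/s)
Q = 1.602176634e-19       # source charge (C), one elementary charge
R_ORBIT, OMEGA = 1.0, 0.3 * C   # 1 m orbit radius, tangential speed 0.3c

def r_s(t):
    """Source position for uniform circular motion in the xy-plane."""
    return np.array([R_ORBIT * np.cos(OMEGA * t), R_ORBIT * np.sin(OMEGA * t), 0.0])

def v_s(t):
    """Source velocity (time derivative of r_s)."""
    return np.array([-R_ORBIT * OMEGA * np.sin(OMEGA * t),
                     R_ORBIT * OMEGA * np.cos(OMEGA * t), 0.0])

def retarded_time(r_obs, t):
    # Root of f(tr) = t - tr - |r - r_s(tr)|/c (unique for subluminal motion)
    f = lambda tr: t - tr - np.linalg.norm(r_obs - r_s(tr)) / C
    return brentq(f, t - 1e3, t)

def lw_scalar_potential(r_obs, t):
    tr = retarded_time(r_obs, t)
    sep = r_obs - r_s(tr)
    n = sep / np.linalg.norm(sep)
    beta = v_s(tr) / C
    # Eq. (7): phi = q / (4 pi eps0 (1 - n.beta) |r - r_s|), evaluated at tr
    return Q / (4 * np.pi * EPS0 * (1 - n @ beta) * np.linalg.norm(sep))

print(lw_scalar_potential(np.array([5.0, 0.0, 0.0]), t=0.0))
```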
Nowadays, the Lorentz force (the qv × B term) is commonly viewed as an effect of Einstein's special relativity. For example, an observer co-moving with the source charge would not measure any magnetic field, while, on the other hand, a stationary observer would measure the magnetic field caused by the moving source charge. However, in this work, we demonstrate that special relativity is not needed to derive the Lorentz force and Maxwell's equations. In fact, we derive Maxwell's equations and the Lorentz force from more fundamental principles: Coulomb's law and time retardation.
There is another reason why the idea to derive Maxwell's equations and the Lorentz force from Coulomb's law may seem plausible. Because of the mathematical similarity between Coulomb's law and Newton's law of gravity, many researchers thought that if Maxwell's equations and the Lorentz force could be derived from Coulomb's law, this would be helpful in the understanding of gravity. These two inverse-square physical laws are written:

F_C = (1/4πε) (q_1 q_2 / r²) r̂,
F_G = −G (m_1 m_2 / r²) r̂.

The expressions for the Coulomb force and Newton's gravitational force are indeed similar; however, these two forces significantly differ in physical nature. The latter force is always attractive while the former can be either attractive or repulsive. Nevertheless, a number of researchers attempted to derive Maxwell's equations from Coulomb's law, and most of these attempts rely on the Lorentz transformation of space-time coordinates between the rest frame of the moving charge and the laboratory frame.
The first hint that Maxwell's equations could be derived from Coulomb's law and Lorentz transformation can be found in Einstein's original 1905 paper on special relativity [7]. Einstein suggested that the Lorentz force term (v × B) is to be attributed to Lorentz transformation of the electrostatic field from the rest frame of moving charge to the laboratory frame where the charge has constant velocity. Later, in 1912, Leigh Page derived Faraday's law and Ampere's law from Coulomb's law using Lorentz transformation [8]. Frisch and Willets discussed the derivation of Lorentz force from Coulomb's law using relativistic transformation of force [9]. Similar route to derivation of Maxwell's equations and Lorentz force from Coulomb's law was taken by Elliott in 1966 [10]. Kobe in 1986 derives Maxwell's equations as the generalization of Coulomb's law using special relativity [11]. Lorrain and Corson derive Lorentz force from Coulomb's law, again, by using Lorentz transformation and special relativity [12]. Field in 2006 derives Lorentz force and magnetic field starting from Coulomb's law by relating the electric field to electrostatic potential in a manner consistent with special relativity [13]. The most recent attempt comes from Singal [14] who attempted to derive electromagnetic fields of accelerated charge from Coulomb's law and Lorentz transformation.
All of the mentioned attempts have in common that they try to derive Maxwell's equations from Coulomb's law by exploiting the Lorentz transformation or Einstein's special theory of relativity. However, historically the Lorentz transformation was derived from Maxwell's equations [15]; thus, the attempt to derive Maxwell's equations using the Lorentz transformation seems to involve circular reasoning [16]. The strongest criticism came from Jackson, who pointed out that it should be immediately obvious that, without additional assumptions, it is impossible to derive Maxwell's equations from Coulomb's law using the theory of special relativity [17]. Schwartz addresses these additional assumptions: starting from Gauss' law of electrostatics and by exploiting Lorentz invariance and the properties of the Lorentz transformation, he derives Maxwell's equations [18].
In addition to the criticism above, we point out that the derivations of Maxwell's equations from Coulomb's law using the Lorentz transformation should only be considered valid for the special case of a charge moving along a straight line with constant velocity. This is because the Lorentz transformation is derived under the assumption that the electron moves with constant velocity along a straight line [15]. For example, if the particle moves with uniform acceleration along a straight line, the transformation of coordinates between the rest frame of the particle and the laboratory frame takes a different mathematical form than that of the Lorentz transformation [19]. If the particle is in uniform circular motion, yet another coordinate transformation from the rest frame to the laboratory frame, called the Franklin transformation, is valid [20]. None of the above cited papers consider the fact that the Lorentz transformation is no longer valid when the charge is not moving along a straight line with constant velocity.
To circumvent the problems with special relativity and the Lorentz transformation, we take an entirely different approach to derive the Liénard-Wiechert potentials and Maxwell's equations from Coulomb's law. We start our derivation from the analysis of the following hypothetical experiment: consider two charges at rest at present time, one called the test charge and the other called the source charge. The source charge was moving in the past but it is at rest at present time. Because both charges are at rest at present, the force acting on the test charge at present time is the Coulomb force.
However, in the past when the source charge was moving, we assume that the force acting on the test charge was not the Coulomb force. To discover the mathematical form of this "unknown" electrodynamic force acting in the past from the knowledge of the known electrostatic force (Coulomb's law) acting at present time, the generalized Helmholtz decomposition theorem was applied. This theorem, derived in Appendix A, allowed us to relate the Coulomb force acting at present time to the positions of the source charge at past times. From here, the Liénard-Wiechert potentials and Maxwell's equations were derived by careful mathematical manipulation.

Figure 1. (a) Each point on the closed contour C is affected by the Coulomb electrostatic field Ec; the energy conservation principle at present time is ∮C Ec · dr = 0. (b) The source charge qs moves along an arbitrary path and stops at a past time ts < tp; the dynamic energy conservation principle valid in the past, when the source charge was moving, is assumed to be unknown.
It should be emphasized that we did not resort to the theory of special relativity nor to the Lorentz transformation in our derivation of Maxwell's equations. No less importantly, the presented derivation of Maxwell's equations from Coulomb's law is valid for charges in arbitrary motion. In effect, we may say that a more general physical law (Maxwell's equations) acting at past time is derived from the knowledge of a limited physical law (Coulomb's law) acting at present time.
However, from Maxwell's equations it is very difficult, if not entirely impossible, to derive the Lorentz force without resorting to some form of energy conservation law. As shown in Fig. 1a, at present time the single stationary charge creates the Coulomb electrostatic field. The known energy conservation law valid at present time states that the contour integral of the Coulomb field along the closed contour C is equal to zero.
But, this electrostatic energy conservation law valid at present is not necessarily valid in the past when the source charge was moving. Thus, in the second part of this paper we derive this "unknown" dynamic energy conservation principle valid in the past from the knowledge of electrostatic energy conservation principle valid at present time. This was again achieved by the careful application of generalized Helmholtz decomposition theorem which allowed us to transform electrostatic energy conservation law valid at present to dynamic energy conservation law valid in the past. This dynamic energy conservation law states that the work of non-conservative force along closed contour is equal to the time derivative of the flux of certain vector field through the surface bounded by this closed contour. From this dynamic energy conservation law the Lorentz force was finally derived.
Generalized Helmholtz decomposition theorem
Because the generalized Helmholtz decomposition theorem is central for deriving Maxwell's equations and the Lorentz force from Coulomb's law, in this section we briefly present this important theorem, while the derivation itself is moved to Appendix A. There have been several previous attempts in the literature to generalize the classical Helmholtz decomposition theorem to time-dependent vector fields [21,22,23]. However, in none of the cited articles is the Helmholtz theorem for functions of space and time presented in a mathematical form usable for the developments described in this paper. This is probably caused by difficulties in stating such a theorem, as was clearly expressed in [22]: "There does not exist any simple generalization of this theorem for time-dependent vector fields".
However, we show that there indeed exists a simple generalization of the Helmholtz decomposition theorem for time-dependent vector fields and that it can be derived from the time-dependent inhomogeneous wave equation. To improve the clarity of this paper, the complete derivation of the Helmholtz decomposition theorem for functions of space and time is moved to Appendix A, subsection A.2. As shown in Appendix A, the generalization of the Helmholtz decomposition theorem for a vector function of space and time F(r, t) can be written as equation (14), where the scalar function G(r, t; r′, t′) is the fundamental solution of the time-dependent inhomogeneous wave equation (15). In that equation, δ(r − r′) = δ(x − x′)δ(y − y′)δ(z − z′) is the 3D Dirac delta function, and δ(t − t′) is the Dirac delta function in one dimension. The fundamental solution G(r, t; r′, t′), sometimes called the Green's function, represents the retarded-in-time solution of the inhomogeneous time-dependent wave equation and is given by equation (16), where the position vector r′ is the location of the source at time t′. From equation (14) it is evident that the Helmholtz decomposition theorem for functions of space and time can be regarded as a mathematical tool that allows us to rewrite any vector function of the present time t and present position r as a space-time integral over the previous time t′ and previous position r′. Furthermore, the generalized Helmholtz decomposition theorem (14) comes with the additional limitation that it is valid only if the vector function F(r′, t′) approaches zero faster than 1/|r − r′| as |r − r′| → ∞.
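For orientation, the three relations referred to above can be sketched as follows; the signs and the placement of the 4π factor follow the usual retarded Green's function conventions and are meant as an illustration of the structure of equations (14)-(16), not as a verbatim reproduction:
\[
\Big(\nabla^{2}-\frac{1}{c^{2}}\frac{\partial^{2}}{\partial t^{2}}\Big)G(\mathbf r,t;\mathbf r',t') = -\,\delta(\mathbf r-\mathbf r')\,\delta(t-t'),
\qquad
G(\mathbf r,t;\mathbf r',t') = \frac{\delta\big(t'-t+|\mathbf r-\mathbf r'|/c\big)}{4\pi\,|\mathbf r-\mathbf r'|},
\]
\[
\mathbf F(\mathbf r,t) = -\nabla\!\int\!\!\int G\,\big[\nabla'\!\cdot\mathbf F(\mathbf r',t')\big]\,dV'\,dt'
+\nabla\times\!\int\!\!\int G\,\big[\nabla'\!\times\mathbf F(\mathbf r',t')\big]\,dV'\,dt'
+\frac{1}{c^{2}}\frac{\partial}{\partial t}\!\int\!\!\int G\,\frac{\partial\mathbf F(\mathbf r',t')}{\partial t'}\,dV'\,dt',
\]
with the spatial integrals taken over all of R³ and the time integrals over all t′.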
A very similar theorem was presented in an article by Heras [24]; the difference is that in Heras' article the time integrals in equation (14) were a priori evaluated at the retarded time t′ = t − |r − r′|/c. As such, the generalized Helmholtz theorem presented in [24] is not suitable for the derivation of Maxwell's equations and the Lorentz force from Coulomb's law. The reason, as will become evident later in this paper, is that if we immediately evaluate the time integrals in equation (14), important information is lost from the equation.
Derivation of Maxwell Equations from Coulomb's law
In this section we derive Maxwell's equations from Coulomb's law using the generalized Helmholtz decomposition theorem represented by equation (14). To begin the discussion, we consider the hypothetical experiment shown in Fig. 2, where the source charge q_s is moving along a trajectory r_s(t) and stops at some past time t_s. The test charge q is stationary at all times.
Figure 2: Source charge q_s is moving along an arbitrary trajectory r_s(t) and stops at time t_s. Because q_s stops moving at past time t_s, at present time t_p the stationary test charge q experiences the electrostatic Coulomb force.
We assume that the disturbances caused by the moving source charge propagate outwardly from the source charge with finite velocity c. These disturbances, originating from the source charge at past times, manifest themselves as a force acting on the stationary test charge at present time. This means that there is a time delay ∆t between the past time t_s when the source charge stopped and the present time t_p when this disturbance has propagated to the test charge, as expressed by equation (17). At the precise moment in time t_p, which we call the present time, the force acting on the stationary test charge q is the Coulomb force, because the source charge and the test charge are both at rest and because the effect of the source charge stopping at past time t_s has had enough time to propagate to the test charge. The Coulomb force F_c experienced by the test charge q at present time t_p can be expressed by equation (18). Let us now consider the time t just one brief moment before the stopping time t_s, as in equation (19), where δt → 0 is a very small time interval. This time interval δt is so small that we might even call it infinitesimally small. Then, at the moment in time infinitesimally before the present time t_p, the force felt by the test charge q is still the Coulomb force if δt → 0. Using these considerations, we can now rewrite equation (18) as equation (20). Note that equation (20) is equivalent to equation (18) when δt → 0. The reason why we have written Coulomb's law this way is to permit a slight variation of the time before the stopping time t_s, so that we can exploit the generalized Helmholtz decomposition theorem in order to derive Maxwell's equations from Coulomb's law. Had we not done this, the source charge position vector r_s(t_s) would simply be a constant vector and the generalized Helmholtz decomposition could not be used.
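In a form consistent with the verbal description above (written here in Gaussian units, so the unit-dependent constant such as 1/4πε₀ is omitted), these relations read
\[
\Delta t = t_p - t_s = \frac{|\mathbf r - \mathbf r_s(t_s)|}{c},
\qquad
\mathbf F_c = \frac{q\,q_s\,\big(\mathbf r - \mathbf r_s(t_s)\big)}{|\mathbf r - \mathbf r_s(t_s)|^{3}},
\]
and, with the time t = t_s − δt allowed to vary slightly before the stopping time,
\[
\mathbf F_c(t) = \frac{q\,q_s\,\big(\mathbf r - \mathbf r_s(t)\big)}{|\mathbf r - \mathbf r_s(t)|^{3}},
\qquad \delta t \to 0 .
\]
This is a sketch of equations (17)-(20) reconstructed from the surrounding text rather than a literal copy.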
Because the right hand side of equation (20) is now the function of time t and position r we are allowed to use the generalized Helmholtz decomposition theorem to rewrite the right hand side of equation (20). This is because generalized Helmholtz decomposition theorem states that any vector function of time t and position r can be decomposed as described by this theorem if that function meets certain criteria. Thus, using generalized Helmholtz decomposition theorem we can rewrite the right hand side of equation (20) as: To clarify the notation in equation above note that dV = dx dy dz represents the differential volume element of an infinite volume R 3 . As defined in Appendix A, subsection A.1, the primed position vector r is written in Cartesian coordinate system as: where variables x , y , z ∈ R. Vectorsx,ŷ andẑ are orthogonal Cartesian unit basis vectors. Furthermore, in Cartesian coordinates, the primed del operator ∇ that appears in equation (21) is defined as: From the definition above, it follows that primed del operator ∇ acts only on functions of variables x , y , z , and consequently, on functions of primed position vector r = x x + y ŷ + z ẑ. It does not act on functions of position vector of source charge r s (t ) because this position vector is function of variable t . Using these definitions we can write the following simple relations: where δ (r − r s (t )) is 3D Dirac's delta function. Inserting equations (25) and (26) into equation (21), and eliminating charge q from the equation, yields the following relation: In Appendix C, subsection C.1, we have shown that the time derivative that appears in the second right hand side integral of equation (27) can be written as: where v s (t ) is the velocity of the source charge q s at time t : By inserting equation (28) into equation (27) it is obtained that: We now make use of the following identity, also derived in Appendix Appendix C, subsection C.2, to rewrite the last right hand side term of equation (30) as: Replacing the last right hand side integral in equation (30) with equation (31) and differentiating the resulting equation with respect to time t yields: In the physical setting shown in Fig. 2 the coordinates of the test charge q are fixed, hence, order in which we apply operator ∇ × ∇× and second order time derivative ∂ ∂t 2 can be swapped (because operator ∇ does not affect variable t). Furthermore, because variables t , x , y and z are independent of time t we can move the double time derivative under the integral sign in the last right hand side integral of above equation: The second order time derivative of G(r, t; r , t ) in the last term of equation (33) can be replaced with equation (15) to obtain: Using sifting property of Dirac's delta function allows us to rewrite the third right hand side term of equation (34) as: To continue the derivation of Maxwell's equations from Coulomb's law we should note that operator ∇ does not affect vector v s (t) because v s (t) is a function of variable t. 
Hence, the application of the standard vector calculus identity ∇ × ∇ × P = ∇(∇ · P) − ∇²P yields equation (36). Combining equations (34), (35) and (36), after cancellation of the appropriate terms, yields equation (37). In Appendix C, subsection C.3, we have derived the mathematical identity (38). By inserting equation (38) into equation (37), equation (39) is obtained. If we now introduce the new constant µ = 1/c² and divide the whole of equation (39) by c², we obtain equation (40). Although it is perhaps not yet apparent, equation (39) is the Maxwell-Ampère equation given in the introductory part of this paper as equation (4). To evaluate the right hand side integrals in equation (40) we use the sifting property of the Dirac delta function, which here acts on functions of the position vector r′, to rewrite the right hand side integrals of equation (40) as equations (41) and (42). To evaluate these integrals we now replace the Green's function G(r, t; r_s(t′), t′) with equation (16), which leads to equations (43) and (44). The right hand side integrals in equations (43) and (44) can be evaluated by making use of the standard mathematical identity (45) involving the Dirac delta function, where f(u) is a real function of the real argument u and u₀ is the solution of the equation f(u₀) = 0. Using identity (45), the Dirac delta function in equations (43) and (44) can be written as equation (46), where β(t_r) and n(t_r) are given by equations (9) and (10), respectively. From equation (45) it follows that the time t_r is the solution of equation (47). Evidently, the time t_r is the time when the disturbance created by the moving source charge at the position r_s(t_r) was created. This disturbance moves through space with finite velocity c and reaches the position r of the test charge at time t. In the electromagnetic literature this time t_r is commonly known as the retarded time.
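In standard notation, and consistent with the definitions of β and n in equations (9) and (10), the identity (45) and the resulting retarded-time relations have the form
\[
\delta\big(f(u)\big) = \sum_i \frac{\delta(u-u_i)}{|f'(u_i)|}, \qquad f(u_i)=0,
\]
\[
\delta\!\Big(t'-t+\frac{|\mathbf r-\mathbf r_s(t')|}{c}\Big)
= \frac{\delta(t'-t_r)}{1-\boldsymbol\beta(t_r)\!\cdot\!\mathbf n(t_r)},
\qquad
t_r = t - \frac{|\mathbf r-\mathbf r_s(t_r)|}{c},
\]
where β(t_r) = v_s(t_r)/c and n(t_r) is the unit vector pointing from r_s(t_r) toward r; this is a sketch of equations (45)-(47) rather than a verbatim copy.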
To proceed with the derivation of Maxwell's equations, we now insert equation (46) into equations (43) and (44) and evaluate the integrals over t′ using the sifting property of the Dirac delta function to obtain equations (48) and (49). By inserting equations (48) and (49) into equation (40) and rearranging, equation (50) is obtained. The first right hand side term of that equation can be identified as the current J of the point charge distribution moving with velocity v_s(t), multiplied by the constant µ, as written in equation (51). We now define the scalar function θ(r, t) and the vector function Q(r, t) by equations (52) and (53). With the aid of the scalar function θ(r, t), the vector function Q(r, t), and the expression µJ given by equation (51), equation (50) can be written as equation (54). Furthermore, we now define two vector functions M and N by equations (55) and (56). Using the definitions of the vector functions M and N given by equations (55) and (56) we can rewrite equation (54) as equation (57). We shall now investigate the mathematical properties of the vector fields M and N. Note that because for any differentiable vector field P we can write ∇ · ∇ × P = 0, from equation (55) it follows that equation (58) holds. The curl of the gradient of any differentiable scalar function ψ is equal to zero, i.e. ∇ × ∇ψ = 0; thus, taking the curl of equation (56) yields equation (59). Finally, in Appendix C, subsection C.4, we have shown that the divergence of the vector field N is given by equation (60), which completes the derivation of the electrodynamic equations from Coulomb's law.
To compare these equations to Maxwell's equations, in Table 1 we have summarized governing equations for scalar potential θ(r, t), vector potential Q(r, t), vector field M and vector field N which are all derived from Coulomb's law. By comparison with Liénard-Wiechert potentials given in Table 2, we see that scalar potential θ(r, t) is identical to Liénard-Wiechert scalar potential φ(r, t) and vector potential Q(r, t) is identical to Liénard-Wiechert magnetic vector potential A(r, t). Furthermore, by comparing Table 1 and Table 2 we find that vector field M is identical to magnetic flux density B and that vector field N is identical to electric field E.
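For reference, in Gaussian units the Liénard-Wiechert potentials mentioned here have the familiar form (the overall constants differ in other unit systems):
\[
\varphi(\mathbf r,t)=\frac{q_s}{\big[1-\boldsymbol\beta(t_r)\!\cdot\!\mathbf n(t_r)\big]\,|\mathbf r-\mathbf r_s(t_r)|},
\qquad
\mathbf A(\mathbf r,t)=\boldsymbol\beta(t_r)\,\varphi(\mathbf r,t),
\]
evaluated at the retarded time t_r; this is the form with which θ(r, t) and Q(r, t) are identified.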
In Table 3 we have compared Maxwell's equations governing fields B and E with differential equations governing vector fields M and N. Clearly, left hand side of Table 3 is identical in the mathematical form to the right hand side of the same table, hence, differential equations governing vector fields M and N are identical to those governing vector fields B and E. This is expected, because we already know that vector field N = E and vector field M = B.
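Spelled out, and writing ρ and J for the charge and current densities of the point source, the shared structure of the two columns of Table 3 is that of the familiar set (sketched here in the rationalized units suggested by the paper's choice µ = 1/c²; the placement of 4π factors depends on the convention):
\[
\nabla\cdot\mathbf M = 0,\qquad
\nabla\times\mathbf N = -\frac{\partial\mathbf M}{\partial t},\qquad
\nabla\cdot\mathbf N = \rho,\qquad
\nabla\times\mathbf M = \mu\,\mathbf J + \frac{1}{c^{2}}\frac{\partial\mathbf N}{\partial t},
\]
with the identical equations holding for B and E.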
Thus, it should be evident by now that we have derived Maxwell's equations and the Liénard-Wiechert potentials directly from Coulomb's law. This was achieved by mathematically relating the known electrostatic Coulomb law acting on the test charge at present time to the "unknown" electrodynamic fields acting in the past. The mathematical link between the static case in the present and the dynamic case in the past was provided by the generalized Helmholtz theorem. The derived equations are valid for an arbitrarily moving source charge; they are not confined to motion along a straight line. Furthermore, it should be noted that we have derived Maxwell's equations and the Liénard-Wiechert potentials directly from Coulomb's law without resorting to special relativity or the Lorentz transformation.
Figure 3: Source charge q_s is moving along an arbitrary trajectory r_s(t) and stops at past time t_s. The closed contour C is at rest at all times. All the points r on contour C are inside the sphere of radius R = c(t_p − t_s). At present time t_p > t_s all the points on contour C are affected only by the Coulomb electrostatic field.
Derivation of Electrodynamic Energy Conservation Law and Lorentz Force
To derive the electrodynamic energy conservation law from Coulomb's law we first consider the hypothetical physical setting shown in Fig. 3, where the source charge q_s is moving along an arbitrary trajectory r_s(t). The source charge q_s then stops at some time in the past t_s. In this physical setting, the closed contour C is at rest at all times. At present time t_p > t_s all the points inside the sphere of radius R = c(t_p − t_s) are affected only by the electrostatic Coulomb field. The known energy conservation law valid at present dictates that the contour integral of the electrostatic field along any closed contour immersed inside the sphere of radius R equals zero, as written in equation (61), where E_c is the Coulomb electrostatic field, r_s(t_s) is the position vector of the source charge when it stopped moving, and the vector r is the position vector of a point on the contour C. This electrostatic energy conservation law, valid at present time t_p, states that no net work is done in transporting a unit charge along any closed contour immersed in the electrostatic field. To proceed, we assume that in the past, when the source charge was moving, the energy conservation law is unknown. However, the generalized Helmholtz decomposition theorem allows us to derive this "unknown" electrodynamic energy conservation law valid in the past from the knowledge of the electrostatic energy conservation law valid at present. To derive this unknown electrodynamic conservation law we consider the contour integral (61) at the moment t infinitesimally before the time when the source charge stopped, as in equation (62), where δt is an infinitesimally small time interval. If the time interval δt approaches zero (δt → 0) we can rewrite the contour integral (61) as a function of time t, which gives equation (63). Because the integrand on the right hand side of equation (63) is a function of the varying time t and of the position vector r, the generalized Helmholtz decomposition theorem can be applied to rewrite this integrand as a function of the past positions and velocities of the source charge. In fact, such an expression was already derived in the previous section as equation (33), repeated here for clarity as equation (64). Substituting the first two right hand side terms of equation (64) with equations (48) and (49), combining the result with equations (52) and (53), and using c² = 1/µ yields equation (65), where the vector function K(r, t) is equal to the last right hand side term of equation (64), as written in equation (66). Replacing the first two terms on the right hand side of equation (65) with equation (56) yields equation (67). Then, by inserting equation (67) into the right hand side of equation (63), equation (68) is obtained. The space-time integral on the right hand side of equation (66) is very difficult to evaluate. However, we can eliminate the vector field K(r, t) from the right hand side of equation (68) by the application of Stokes' theorem, which gives equation (69). From here, we take the curl of both sides of equation (67) and, by combining with equation (59), obtain equation (70). Because the surface S and the contour C are stationary we can write ∂M(r, t)/∂t = dM(r, t)/dt. Inserting equation (70) into equation (69) and taking into account that the surface S and the contour C are not moving yields equation (71). The right hand side of equation (71) is the unknown energy conservation principle valid for the time-varying dynamic fields N(r, t) and M(r, t), and it is derived from the electrostatic energy conservation principle valid at present time. If N is replaced by E and M is replaced by B, it can be seen that we have just obtained the physical law known in electrodynamics as Faraday's law.
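In the notation used here, and with the sign fixed as in the standard statement of Faraday's law, the dynamic conservation law (71) has the form
\[
\oint_{C}\mathbf N(\mathbf r,t)\cdot d\mathbf r = -\,\frac{d}{dt}\int_{S}\mathbf M(\mathbf r,t)\cdot d\mathbf S ,
\]
where S is any stationary surface bounded by the stationary contour C.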
From equation (71) a conclusion can be drawn about the nature of Faraday's law: it represents the energy conservation principle valid for non-conservative dynamic fields, and it is the dynamic equivalent of the electrostatic energy conservation principle valid for the Coulomb electrostatic field.
However, even Faraday's law itself can be considered a consequence of something else. To see this, consider a simply connected volume V bounded by the surface ∂V, as shown in Fig. 4. The surface ∂V is the union of two surfaces S and S₁ bounded by the respective contours C and C₁. The contours C and C₁ consist of exactly the same spatial points; however, the Stokes orientation of these contours is opposite, C = −C₁. Then, using ∇ × N(r, t) = −∂M(r, t)/∂t, the first right hand side contour integral of equation (71) can be written as equation (72). Replacing the first right hand side term of equation (71) with equation (72) yields a different form of the dynamic energy conservation law, equation (73). If we replace M(r, t) with B we see that the right hand side of equation (73) is the time derivative of Gauss' law for magnetic fields. The standard interpretation of Gauss' law for magnetic fields is that magnetic monopoles do not exist. However, from equation (73) we conclude that an alternative interpretation of this law is that its time derivative represents the dynamic energy conservation law. From the derivations presented, we might even say that Faraday's law is a consequence of Gauss' law for magnetic fields. It should be noted that these energy-conservation equations were all derived from the simple electrostatic Coulomb law. From the dynamic energy conservation law the derivation of the Lorentz force is straightforward: we now assume that all the points on the surface ∂V shown in Fig. 4 have some definite velocity v such that |v| << c. Then the surface ∂V is a function of time, hence C = C(t) and S = S(t), and we can rewrite equation (73) as the sum of two surface integrals, equation (74), where ∂V = S(t) ∪ S₁(t). The Leibniz identity [25] for moving surfaces states that for any differentiable vector field P we can write equation (75). Applying the Leibniz identity to the surface integral over the surface S₁(t) in equation (74) and using ∇ · M(r, t) = 0 yields equation (76). Using the result from the previous section, i.e. ∇ × N(r, t) = −∂M(r, t)/∂t, and applying Stokes' theorem yields equation (77). Because the curves C₁(t) and C(t) comprise the same points but their Stokes orientations are opposite, i.e. C₁(t) = −C(t), we can rewrite equation (77) as equation (78).
Figure 4: The closed surface ∂V that bounds the volume V is the union of two surfaces S and S₁. The contour C bounds the surface S and the contour C₁ bounds the surface S₁. The contours C and C₁ are identical; however, they have different Stokes orientations.
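Consistent with the verbal statements above, the two forms of the dynamic conservation law can be sketched (the grouping of terms is reconstructed from the description in the surrounding paragraphs and may differ slightly from the original equations) as
\[
\frac{d}{dt}\oint_{\partial V}\mathbf M\cdot d\mathbf S = 0
\qquad\text{(the content of equation (73): the time derivative of Gauss' law for }\mathbf M\text{)},
\]
and, for the moving contour and surface,
\[
\oint_{C(t)}\big[\mathbf N+\mathbf v\times\mathbf M\big]\cdot d\mathbf r
\;+\;\frac{d}{dt}\int_{S(t)}\mathbf M\cdot d\mathbf S = 0
\qquad\text{(equation (78))}.
\]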
Note that equation (78) could not be derived from the right hand side of equation (71), i.e. from Faraday's law, even with Leibniz rule. For that reason, we might take that the energy conservation law on the right hand side of equation (73) is perhaps more general than the one given by equation (71). Furthermore, note that the time derivative of the surface integral in equation (78) does not represent the work of any force. However, from equation (73) we know that the terms in equation (78) have dimensions of the work done by electrodynamic force in moving the unit charge along contour C(t). Because the first term in equation (78) is contour integral of vector field we can conclude that this term represents the nonzero work done by non-conservative electrodynamic force in transporting the unit charge along contour C(t).
Hence, just as the left hand side of equation (73) represents the work done by the conservative electrostatic force in transporting a unit charge along the contour C, the contour integral on the left of equation (78) represents the work done by the non-conservative electrodynamic force in transporting a unit charge along the same contour. The purpose of the surface integral on the left hand side of equation (78) is to balance the non-zero work of the non-conservative electrodynamic force along the contour C. Thus, it can be concluded that the electrodynamic force F_D on a charge q moving with velocity v along the contour C is given by equation (79). Finally, in the previous section we have shown that N = E and that M = B. Thus, by replacing N with E and M with B, equation (80) is obtained, which is the expression for the well known Lorentz force. It was derived theoretically from the knowledge of the electrostatic energy conservation law which, in turn, can be derived from Coulomb's law. Thus, we may say that we have just derived the Lorentz force from the simple electrostatic Coulomb law.
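In symbols, these last two steps read
\[
\mathbf F_D = q\,\big(\mathbf N + \mathbf v\times\mathbf M\big)
\qquad\Longrightarrow\qquad
\mathbf F = q\,\big(\mathbf E + \mathbf v\times\mathbf B\big),
\]
the familiar Lorentz force.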
Conclusion
In this paper we have presented a theoretical framework that explains Maxwell's equations and the Lorentz force on a more fundamental level than was previously done. Maxwell derived his equations from the experimental Ampère force law and the experimental Faraday law, and Lorentz continued the work on Maxwell's theory to discover the Lorentz force. In the last 150 years, no successful theory has been presented that would explain Maxwell's equations and the Lorentz force on a more fundamental level.
To accomplish this, the relativistically correct Liénard-Wiechert potentials, Maxwell's equations and the Lorentz force were derived directly from the electrostatic Coulomb law. In contrast to the frequently criticized previous attempts to derive Maxwell's equations from Coulomb's law using special relativity and the Lorentz transformation, neither the Lorentz transformation nor the theory of special relativity was used in our derivations. In fact, in this work the dynamic Liénard-Wiechert potentials, Maxwell's equations and the Lorentz force were derived from Coulomb's law using the following two simple postulates: (a) when charges are at rest, Coulomb's law describes the force acting between the charges; (b) disturbances caused by moving charges propagate outwardly from the moving charge with finite velocity. The derivation of these dynamic physical laws from the electrostatic Coulomb law would not be possible without the generalized Helmholtz decomposition theorem, also derived in this paper. This theorem allows a vector function of the present position and present time to be written as a space-time integral over positions and velocities at previous times. In contrast, the standard Helmholtz decomposition theorem is valid for functions of space only and ignores time.
To derive the Lorentz force from Coulomb's law, in section 4, the "unknown" dynamic energy conservation law valid in the past was derived from the knowledge of electrostatic energy conservation law valid at present. The link between the present and the past was again provided by generalized Helmholtz decomposition theorem. This "unknown" dynamic energy conservation principle turned out to be Faraday's law of induction. Additionally, it was shown that Faraday's law of induction can be considered equivalent to time derivative of Gauss' law for magnetic field. From these energy conservation considerations the Lorentz force was derived.
From the presented analysis one important question naturally arises: are Maxwell's equations and Lorentz force the consequence of electrostatic Coulomb's law? They are most probably not. It is rather the opposite, Coulomb's law is the limiting case of Lorentz force when the source charge becomes stationary. However, as it was shown in this paper, it is entirely possible to deduce dynamic Maxwell equations and Lorentz force from the knowledge of simple electrostatic Coulomb's law.
Finally, this paper attempts to answer another important question: how can we deduce more general dynamic physical laws from the limited knowledge provided by static physical law? The significance of answering this question is that in the future it will perhaps become possible that similar reasoning could deepen the understanding of physical laws other than Maxwell equations and Lorentz force.
Appendix A. Derivation of generalized Helmholtz decomposition theorem
In this appendix, we derive the generalized Helmholtz decomposition theorem for vector functions of space and time. However, in effort to enhance the readability of this work, we first start by considering some basic identities given in section A.1 of this appendix.
A.1. Preliminary considerations
To clarify notation used throughout this paper we first define position vectors r and r as: wherex,ŷ andẑ are Cartesian, mutually orthogonal, unit basis vectors. Variables x, y, z ∈ R and x , y , z ∈ R are linearly independent variables. Furthermore, throughout this paper we use position vector r s (t ) to indicate the position of the source charge. This position vector r s (t ) is defined as: where x s (t ), y s (t ) and z s (t ) are all functions of real variable t ∈ R which is independent of variables x, y, z ∈ R and x , y , z ∈ R. The time derivative of position vector r s (t ) is velocity v s (t ) of the source charge: On many occasions in this paper we have used differential operators ∇ and ∇ defined as: Operator ∇ acts only on functions of variables x, y, z, hence, on functions of position vector r. On the other hand, operator ∇ acts only on functions of variables x , y , z , thus, it acts on functions of position vector r . For example, if function f is the function of position vector r, that is f = f (r) we can generally write: On the other hand, if function f is the function of position vector r , that is if f = f (r ) we can write: Furthermore, because variable t is independent of variables x, y, z and x , y , z neither operator ∇ nor ∇ acts on position vector r s (t ) and velocity vector v s (t ). Using these considerations we see that the following equations are correct: However, both operators ∇ and ∇ act on Green's function G(r, t; r , t ) given by equation (16). In fact, one can easily verify that the following equations hold: A.2. Generalized Helmholtz decomposition theorem To start deriving generalized Helmholtz decomposition theorem for vector functions of space and time we first consider inhomogeneous transient wave equation: where G(r, t; r , t ) is the function called fundamental solution or Green's function and δ is Dirac's delta function. The Green's function for inhomogeneous wave equation is well known and it represents an outgoing diverging spherical wave: Let us now suppose that vector field F is the function of both space r and time t, i.e. F = F(r, t). Using sifting property of Dirac delta function we can write vector function F(r, t) as the volume integral over infinite volume R 3 and over all the time R as: where differential volume element dV is dV = dx dy dz . We now replace δ(r−r )δ(t−t ) in equation above with left hand side of equation (A.2.11) to obtain: From the discussion presented in section A.1 of this appendix, we know that D'Alambert operator ∇ 2 − 1 c 2 ∂ 2 ∂t 2 does not act on variables x , y ,z and t nor does it act on vector function F(r , t ). Hence, we can write the D'Alambert operator ∇ 2 − 1 c 2 ∂ 2 ∂t 2 in front of the integral: Using standard vector calculus identity ∇ × ∇ × P = ∇(∇ · P) − ∇ 2 P we can rewrite equation (A.2.15) as: Because operators ∇ and ∂ ∂t do not act on variables x , y , z and t we can move operator ∇ and partial derivative ∂ ∂t under right hand side integrals in equation (A.2.16). 
Then, using the standard vector calculus identities ∇ × (ψP) = ∇ψ × P + ψ∇ × P and ∇ · (ψP) = ∇ψ · P + ψ∇ · P, and noting that ∇ × F(r′, t′) = 0 and ∇ · F(r′, t′) = 0, we can rewrite equation (A.2.16) as equation (A.2.17). We now use the identities ∇G(r, t; r′, t′) = −∇′G(r, t; r′, t′) and ∂G(r, t; r′, t′)/∂t = −∂G(r, t; r′, t′)/∂t′ to rewrite the right hand side integrals in equation (A.2.17) as equation (A.2.18). Using the vector calculus identity ∇ × (ψP) = ∇ψ × P + ψ∇ × P and the form of the divergence theorem ∫_V ∇ × P dV = ∮_{∂V} dS × P, we rewrite the first right hand side integral over R³ as equation (A.2.19). Note that the surface ∂R³ is an infinite surface that bounds the infinite volume R³. Furthermore, for the surface integral in that equation, the position vector r′ is located on the infinite surface ∂R³, i.e. r′ ∈ ∂R³. Hence, if the vector function F(r′, t′) decreases faster than 1/|r − r′| as |r − r′| → ∞, the surface integral in equation (A.2.19) vanishes. In that case, we can write equation (A.2.20). Using similar considerations, the vector calculus identity ∇ · (ψP) = ∇ψ · P + ψ∇ · P and the standard divergence theorem ∫_V ∇ · P dV = ∮_{∂V} P · dS, equation (A.2.21) is obtained. To treat the last integral on the right hand side of equation (A.2.18) we use the identity (A.2.22). Using this identity, and noting that t is independent of x′, y′ and z′, we can rewrite the last right hand side integral of equation (A.2.18) as equation (A.2.23). By integrating over t′, it can be shown that the first right hand side integral in equation (A.2.23) vanishes, which leads to the final form of the theorem. The theorem is valid for functions F(r′, t′) that decrease faster than 1/|r − r′| as |r − r′| → ∞.
Appendix B. Novel vector calculus identities
In this appendix we prove two novel vector calculus identities, without which, it would be very difficult, perhaps even not possible, to derive Maxwell's equations from Coulomb's law. These two novel vector calculus identities are given by the following two equations: where P is differentiable vector field, ψ is differentiable scalar function, volume V ⊂ R 3 is simply connected volume, ∂V is the bounding surface of volume V and dS is differential surface element of ∂V such that dS = ndS. Vector n is an outward unit normal to the surface ∂V . In Cartesian coordinate system the product ψ∇ 2 P can be written in terms of Cartesian components as: where P x , P y and P z are Cartesian components of vector P and vectorsx,ŷ andẑ are Cartesian unit basis vectors. Using standard vector calculus identity ∇ · f T = ∇f · T + f ∇ · T, valid for some scalar function f and for some vector function T, we can rewrite equation (B.3) as: To proceed, we now expand the identity (∇ψ · ∇) P in terms of its Cartesian components as: =x∇ψ · ∇P x +ŷ∇ψ · ∇P y +ẑ∇ψ · ∇P z Inserting equation (B.5) into (B.4) it is obtained that: We now integrate equation (B.6) over volume V and apply the divergence theorem V ∇ · TdV = ∂V T · dS to obtain: The first three right hand side terms of equation (B.7) can be rewritten as: Inserting equation (B.8) into (B.7) yields: which we intended to prove. To prove equation (B.2) we rewrite P∇ 2 ψ in terms of Cartesian components as: By using standard differential calculus identity f ∇ 2 h = ∇ · (f ∇h) − ∇f · ∇h, where f and h are differentiable scalar functions, equation (B.10) can be written as: y∇ · (P y ∇ψ) −ŷ∇P y · ∇ψ+ z∇ · (P z ∇ψ) −ẑ∇P z · ∇ψ Inserting equation (B.5) into equation (B.11) yields: P∇ 2 ψ =x∇ · (P x ∇ψ) +ŷ∇ · (P y ∇ψ) +ẑ∇ · (P z ∇ψ) − (∇ψ · ∇) P (B.12) Integrating equation (B.12) over volume V and applying divergence theorem V ∇ · TdV = ∂V T · dS it is obtained that: which we intended to prove.
Appendix C. Derivation of auxiliary mathematical identities
In this appendix we derive auxiliary mathematical identities that we find useful for the derivation of Maxwell equations from Coulomb's law.
Because coordinates x , y and z are independent of time t we can swap operator ∇ and time derivative with respect to time t as: Furthermore, because coordinates x , y and z are independent of time t , the time derivative of r is equal to zero ∂r ∂t = 0. The inner time derivative in the equation (C.1.2) can now be written as: where v s (t ) is the velocity of the source charge q s at time t given by equation To proceed with derivation, we now make use of standard vector calculus identity ∇ × ∇ × P = ∇ (∇ · P) − ∇ 2 P, valid for any differentiable vector function P. This identity allows us to rewrite the equation (C.1.4) as: Since Laplacian operator ∇ 2 does not have effect on velocity vector v s (t ) the last right hand side term in equation (C.1.5) can be written using 3D Dirac's delta function as: Hence, replacing the last right hand side term of equation (C.1.5) with equation (C.1.6) yields: which proves equation (28).
C.2. Derivation of equation (31)
To derive equation (31) we make use of standard vector calculus identity ∇ × (ψP) = ∇ψ × P + ψ∇ × P, where ψ is a scalar function and P is a vector function, to rewrite the integrand in the last right hand side term of equation (30) as: G(r, t; r , t ) = (C.2.8) = ∇ × ∇ × v s (t ) |r − r s (t )| G(r, t; r , t ) − ∇ G(r, t; r , t ) × ∇ × v s (t ) |r − r s (t )| Integrating equation (C.2.8) with respect to variables x , y , z and t , and making use of a standard form of divergence theorem V ∇ × PdV = ∂V dS × P it is obtained that: G(r, t; r , t )dV = (C.2.9) where dV = dx dy dz , ∂R 3 is an infinite surface that bounds R 3 , and dS is differential surface element of the surface ∂R 3 . Because ∂R 3 is an infinite surface, the first right hand side integral vanishes. To see this, we can use standard vector identity ∇ × (ψP) = ∇ψ × P + ψ∇ × P and using ∇ × v s (t ) = 0 to rewrite the first term in the first right hand side integrand as: ∇ G(r, t; r , t ) × ∇ × v s (t ) |r − r s (t )| dV There is another useful property of Green's function G(r, t; r , t ) which enables us to proceed with the derivation of equation (31). This property can be written as follows: ∇ G(r, t; r , t ) = −∇G(r, t; r , t ) (C.2.12) Using this property and standard vector calculus identity ∇ × (ψP) = ∇ψ × P + ψ∇ × P allows us to rewrite the integrand in equation (C.2.9) as: G(r, t; r , t )dV = (C.2.14) Because differential volume element is dV = dx dy dz and because operator ∇ does not act on variables x , y , z and t we can write operator ∇ in front of the integral: G(r, t; r , t )∇ × v s (t ) |r − r s (t )| dV Using the same trick again, that is, by using standard vector calculus identity ∇×(ψP) = ∇ψ × P + ψ∇ × P, using ∇ G(r, t; r , t ) = −∇G(r, t; r , t ) and noting that operator ∇ does not act on variables x , y and z we can rewrite the integrand in equation (C.2.15) as: G(r, t; r , t )∇ × v s (t ) |r − r s (t )| = (C.2.16) = ∇ × G(r, t; r , t ) v s (t ) |r − r s (t )| + ∇ × G(r, t; r , t ) v s (t ) |r − r s (t )| Then, by inserting equation (C.2.16) into equation (C.2.15) and using a form of standard divergence theorem V ∇ × PdV = ∂V dS × P we obtain that: | 11,953.2 | 2020-12-18T00:00:00.000 | [
"Physics"
] |
Tunable Fermi level and hedgehog spin texture in gapped graphene
Spin and pseudospin in graphene are known to interact under enhanced spin–orbit interaction giving rise to an in-plane Rashba spin texture. Here we show that Au-intercalated graphene on Fe(110) displays a large (∼230 meV) bandgap with out-of-plane hedgehog-type spin reorientation around the gapped Dirac point. We identify two causes responsible. First, a giant Rashba effect (∼70 meV splitting) away from the Dirac point and, second, the breaking of the six-fold graphene symmetry at the interface. This is demonstrated by a strong one-dimensional anisotropy of the graphene dispersion imposed by the two-fold-symmetric (110) substrate. Surprisingly, the graphene Fermi level is systematically tuned by the Au concentration and can be moved into the bandgap. We conclude that the out-of-plane spin texture is not only of fundamental interest but can be tuned at the Fermi level as a model for electrical gating of spin in a spintronic device.
(a) ARPES dispersion of Dirac cone sampled along Γ − K of graphene SBZ. Anisotropic intensity of π and π * bands due to Brillouin zone effects is seen. Yellow frame denotes acceptance frame of spin analyser positioned precisely at K-point. Red frame denotes angular localization of spin hedgehogs around K. (b) Spin-resolved EDC spectrum simulated by integration of ARPES intensity over yellow frame reveals equal intensity of π and π * peaks.
(c,d) Same as (a,b) but yellow frame of spectrometer is misaligned by 0.12 • toward 1 st SBZ.
The intensity imbalance between π and π* is identical to that seen in the spin-resolved spectrum in Figure 3(f) (in the article), but the red frame of the spin hedgehog remains perfectly inside the yellow frame of the analyser and is therefore fully acquired. Photoemission measurements were performed with a photon energy of 62 eV.
Supplementary Figure 7. Relevance of spin hedgehogs in graphene to spintronics. (a) Valley Hall effect (VHE) emerging in graphene with inequivalent orbital magnetic moments of the K and K′ valleys. When the electric field E_in-plane is applied in the graphene plane and collinear to the current direction, deflection of the charge carriers toward the edge of the stripe occurs due to valley-polarized scattering induced by a Berry phase effect. This causes a different population of the Dirac cones at the K and K′ valleys at opposite edges of the graphene stripe. This idea was first formulated in Ref. [20], but for the valley-associated pseudospin and not for real spin. (b) Scheme of a Y-shaped spin separator utilizing the effect of spin-valley scattering for spin filtering of electric currents. The spin separator is attached to three conducting gates/leads. Gates 2 and 3 (collectors) have equal potentials V_G2 and V_G3. The voltage at gate 1 (emitter), V_G1, is different. The difference between V_G1 and V_G2,G3 drives an electric current through the Y-shaped graphene flake but, at the same time, creates an in-plane electric field which activates spin-valley scattering. As a result, charge carriers with one spin are deflected toward gate 2 (red arrow) and charge carriers with the opposite spin toward gate 3 (blue arrow). The principle of a valleytronic device utilizing spin-valley scattering was proposed in Ref.
[21] but for using Zeeman effect for spin-polarization of K and K ′ valleys and not spin hedgehogs (which in present study are already available in ground state without application of external field). [e.g. graphene/Ir(111)] it is known that energy gaps occur as minigaps at the crossings of replicas with the main Dirac cone [8][9][10][11]. The width of the minigaps is determined by the amplitude of the modulating lateral superpotential [9,12]. The case of 2ML of intercalated Au is more complicated. Supplementary Figure 3 shows that the lower Dirac cone demonstrates a curvature just below E F (lower edge of the gap) but the upper cone of π * band has moved above the Fermi level and neither width of the gap between π and π * nor middle of the gap can be read directly from the ARPES dispersion.
In this case we apply an extrapolation scheme assuming that upper and lower Dirac cones are mirror symmetric. Such scheme provides an estimate for the minimal value of E g . As shown in Supplementary Figure 3(b) the linear dispersion of the π-band is extrapolated by straight lines and the crossing point between them is taken as Dirac energy E D . The width of the gap E g is determined as twice the difference between E D and the lower edge of the gap. In this way we obtain E D =E F (±10 meV) meaning that graphene is charge neutral and Figures 5(b,e)], but the measured out-of-plane spin polarization S OP is zero [ Supplementary Figures 5(c,f)]. Such behaviour fully complies with the scenario of the Rashba effect in graphene [15,16]. We should emphasize that the measurements of in-plane and out-of-plane spin components are feasible despite certain polar rotation of the sample (θ) toward K. There is only minor projection of in-plane spins onto the S OP axis while measuring an out-of-plane signal because the rotation of θ toward K is small (θ ∼24 • at photon energy hν ∼60 eV) and additionally reduced by non-zero tilt τ . The resulting magnitude of out-of-plane projection is less than 1 3 of the in-plane component and nearly undetectable. Particular care was taken to ensure the correct observation of the hedgehog-type out-ofplane spin texture at K. Although projection of Rashba-type (in-plane) spin polarization onto S OP axis is negligible, one may naively argue that the spin polarization of exchangesplit 3d bands in the underlying Fe film may contribute. In order to exclude such possibility Finally, we want to emphasize that the measured hedgehog-type spin textures originate from the outer band of spin-orbit spilt Dirac cone, which is sketched in Figure 3
Supplementary Note 5 Precision of sample alignment
We would like to comment on the momentum resolution of the spectrometer and on the accuracy of sample orientation, and show that small experimental errors are negligible for the correct interpretation of our spin-and angle-resolved photoemission measurements. Spinresolved spectra revealing out-of-plane spin polarization (spin hedgehog) in the gap of the Dirac cone [ Figure 3(f) in main article] display slightly different intensities of lower (π) and upper (π * ) bands at the gap edges. However, precisely at K, intensities of upper and lower cones have to be equal, as ARPES data in Figure 3(a) in the article shows. This suggests that the sample had slight angular misalignment in the spin-resolved measurement. This small misalignment originates from the transfer lens setup (aperture positioning) of our state-of the-art spectrometer which allows for simultaneous acquisition of ARPES dispersions and spin-resolved EDCs without changing the sample position. This is an important feature of the spectrometer and the small misalignment is, hence, principally unavoidable. Our analysis below shows that the resulting experimental mistake is negligible and has no effect on the results obtained.
The error introduced by the transfer lens is easily estimated from the angular dependence of the ARPES signal. Relatively large differences between the intensities of the upper and lower Dirac cones in the gap already result from a minor angular misalignment of the sample. The reason for this is the distribution of photoelectron intensity in the Dirac cones, which is extremely anisotropic due to a Brillouin zone effect [6]. Due to this effect, the intensity of the lower cone (π) in the 2nd surface Brillouin zone (SBZ) is suppressed by a factor of ∼50 as compared to its intensity in the 1st SBZ. For the upper cone (π*) the situation is the opposite.
Its intensity in the 2 nd SBZ is enhanced while in the 1 st SBZ it is dramatically suppressed.
This scenario is clearly seen in Supplementary Figures 6(a,c) which show zoomed dispersion of Dirac bands along the Γ − K direction of the SBZ (k passes from 1 st to 2 nd SBZ through K-point). The anisotropy of intensities is so strong that even small angular inaccuracies should cause significant imbalance between intensities of π and π * peaks.
Since spin resolved data shown in Figure 3 in the article (and in Supplementary Figure 5) was measured in the direction perpendicular to Γ − K (optimal geometry for elimination of Brillouin zone effects and observation of both sides of Dirac cones), the misalignment causing unequal intensities of π and π * in Figure 3(f) corresponds to misalignment along Γ − K. This allows us to estimate the angular misalignment from ARPES data shown in Supplementary Figures 6(a,c).
In Supplementary Figure 6(a) the angular acceptance frame of the spin-ARPES spectrometer used for Figure 3(f) (0.7 • ) is denoted by a yellow rectangle and positioned precisely at K. The EDC profile, representing the corresponding spin-resolved spectrum, is acquired by integration of ARPES intensity within yellow frame and is shown in Supplementary Figure 6(b). In this spectrum peaks of π and π * bands have equal intensity. In our analysis we have scanned the position of the spectrometer frame along Γ − K and looked for the intensity variations of π and π * peaks. The intensity imbalance seen in Figure 3 [18]), the localization region of out-of-plane spins around the K-point is given by where ν F is the Fermi velocity of Dirac fermions, and λ R the Rashba parameter for the spin-orbit interaction in the graphene. The Rashba splitting seen in Figures 3(c) and 3(d) in the article (∼70-80 meV) means λ R ∼25-30 meV, which gives for the localization of the hedgehog ∆k S ∼0.015Å −1 (or 0.25 • at 62 eV photon energy). The angular localization of the spin hedgehog around K is marked in Supplementary Figures 6(a,c) by a red frame.
Apparently, the red frame remains perfectly inside of the yellow frame of the spectrometer in the case of 0.12 • sample misalignment (and would remain there for even larger errors).
This in turn means that the entire spin hedgehog around K is acquired by the spectrometer which confirms the out-of-plane spin obtained in the measurement.
We can also roughly estimate the sensitivity of the spin-resolved measurement and the expected magnitude of out-of-plane spin polarization. Indeed, the angular localization A similar effect was later elaborated theoretically by Tsai et al. [20] but for real spin based on the high-spin-orbit material with broken sublattice symmetry silicene. It was suggested to use an electric field applied perpendicular to graphene in order to create a Zeeman-type splitting of the Dirac cone. Such induced spin-polarization has opposite sign at K and K ′ valleys. Scattering of electrons to the valleys with spin polarization opposite to the electron spin is suppressed. This spin-valley scattering is expected to be much more effective than the simple valley-only scattering described in Ref. [19] and may allow for nearly 100% filtering of electron spin [20].
The scheme of a possible device utilizing the effect of spin-valley scattering (originally proposed in Ref. [20]) is shown in Supplementary Figure 7(b). This is a spin separator consisting of a Y-shaped flake of graphene attached to three conducting gates/leads. It is assumed that gates 2 and 3 (collectors) have equal potential V G2 and V G3 , while the voltage at gate 1 (emitter) V G1 is different. The potential difference between V G1 and V G2,G3 drives electric current through the graphene flake, but, at the same time, creates an in-plane electric field which activates spin-valley scattering. As a result charge carriers with one spin are deflected toward gate 2 (red arrow) and charge carriers with the opposite spin toward gate 3 (blue arrow).
The present case of graphene/Au/Fe(110) is very interesting in the context of such device since it has high spin-orbit splitting and broken sublattice symmetry and out-of-plane spin polarization in the gap of Dirac cones, which according to Ref. [18], changes its sign at K and K ′ valleys. It is also remarkable that no external electric field is needed to achieve spin polarization of valleys in graphene/Au/Fe(110), since it is induced not by a Zeeman field but through extrinsic spin-orbit interactions.
Although graphene/Au/Fe(110) cannot be directly used for the construction of an effective spin separator, since it has a conducting substrate and cannot carry 2D currents, it is a useful system that allows one to study and understand the physics relevant to valleytronic devices and the graphene-gate junctions therein.
Supplementary Note 7
Preparation of graphene on Fe(110) All sample preparations were done in situ. The Fe(110) substrate was prepared as several tens of monolayers (ML) of Fe grown on W(110). The W(110) crystal was initially cleaned by repeated cycles of annealing in oxygen (partial pressure of oxygen 1×10 −7 mbar, temperature 1500K) followed by short flashing of the sample up to 2300K in ultra-high vacuum (UHV) environment [1,2]. The sample preparation is reported in Supplementary Figure 8. (110) is additionally evidenced by the presence of a faint dispersion due to a surface resonance at ∼3 eV (at Γ-point) denoted as SR [17].
Graphene was synthesized by chemical vapour deposition of ethylene or, alternatively, propylene. The Fe(110) sample was heated to 950-1050K in UHV. Then the hydrocarbon was let into the chamber at a partial pressure of 5×10−6 mbar for 10 minutes. The successful synthesis of graphene depends strongly on the partial hydrocarbon pressure and the sample temperature. In the case of insufficient control over these parameters an Fe surface carbide is formed; its valence band structure is shown in Supplementary Figure 8. The surface lattices of Fe(110) and of graphene (orange dashed lines) were extracted from experimental LEED measurements. One sees that the principal diffraction spots at the corners of the LEED patterns (see the area inside the dashed circle) are not in registry with each other due to the surface lattice mismatch between Fe and graphene. This structural difference determines the 2D repetition of diffraction spots (black points) with a periodicity of (7×17) in terms of graphene hexagons [4]. This moiré constellation around a principal graphene spot, as it occurs in LEED, is zoomed in Supplementary Figure 9. Intercalation with Au was achieved by deposition of one up to several monolayers of Au on graphene/Fe(110) and subsequent annealing at 750-800K. We have studied two concentrations of Au for which the band structures and charge doping of graphene were found to be clearly defined and homogeneous over the surface. The first phase, showing n-type doping and a 1D electronic structure, is achieved at nominally 1.4ML of intercalated Au. This phase is referred to in the manuscript as the 1ML-phase. The second phase (charge neutral) is achieved after increasing the total amount of intercalated Au up to 2.3ML. This phase with higher Au concentration is referred to in the paper as the 2ML-phase. For higher concentrations of intercalated Au we found no differences in the band structure as compared to 2ML.
| 3,424.4 | 2015-07-27T00:00:00.000 | [ "Physics" ] |
Drivers of the US CO2 emissions 1997–2013
Fossil fuel CO2 emissions in the United States decreased by ∼11% between 2007 and 2013, from 6,023 to 5,377 Mt. This decline has been widely attributed to a shift from the use of coal to natural gas in US electricity production. However, the factors driving the decline have not been quantitatively evaluated; the role of natural gas in the decline therefore remains speculative. Here we analyse the factors affecting US emissions from 1997 to 2013. Before 2007, rising emissions were primarily driven by economic growth. After 2007, decreasing emissions were largely a result of economic recession with changes in fuel mix (for example, substitution of natural gas for coal) playing a comparatively minor role. Energy–climate policies may, therefore, be necessary to lock-in the recent emissions reductions and drive further decarbonization of the energy system as the US economy recovers and grows.
Supplementary Table 1. Subscript for the components in the coefficients: for each value of K the table lists the weight and the subscript (base year or target year) of the first to fifth component positions.
Supplementary Methods
As presented in the Methods section of the main text, in this study the change of CO₂ emissions is decomposed into six additive terms, and each term represents the contribution of one changing factor to the total change of CO₂ emissions in the US. One can perceive a logical pattern: the changing factor is placed, in each term, in turn from left to right in the product with all other factors; the unchanged factors on the left hand side of the changing factor take their base-year values (year "t−1"), and the ones on the right hand side of the changing factor take their target-year values (year "t"). Therefore, by extracting the unchanged values in each term, the equation can be merged as supplementary equation (1), where w_p, w_f, w_E, w_L, w_ys and w_yv are the so-called "weights" or "coefficients" of each "Δfactor", respectively. The calculation of these "weights" or "coefficients" is usually done via econometric methods; alternatively, they can be generated in a more straightforward way by deriving them with the structural decomposition method 7, 8.
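Written out for the factor ordering p·f·E·L·y_s·y_v described above (a sketch, with subscripts t−1 and t denoting base-year and target-year values), the merged form of supplementary equation (1) and one complete polar decomposition read
\[
\Delta\mathrm{CO_2} = w_p\,\Delta p + w_f\,\Delta f + w_E\,\Delta E + w_L\,\Delta L + w_{y_s}\,\Delta y_s + w_{y_v}\,\Delta y_v ,
\]
\[
\Delta\mathrm{CO_2} = \Delta p\,f_t E_t L_t y_{s,t} y_{v,t}
+ p_{t-1}\,\Delta f\,E_t L_t y_{s,t} y_{v,t}
+ p_{t-1} f_{t-1}\,\Delta E\,L_t y_{s,t} y_{v,t}
+ p_{t-1} f_{t-1} E_{t-1}\,\Delta L\,y_{s,t} y_{v,t}
+ p_{t-1} f_{t-1} E_{t-1} L_{t-1}\,\Delta y_s\,y_{v,t}
+ p_{t-1} f_{t-1} E_{t-1} L_{t-1} y_{s,t-1}\,\Delta y_v ,
\]
which telescopes exactly to CO₂(t) − CO₂(t−1).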
However, supplementary equation (1) is not a unique decomposition: it is only one of the 720 possible decomposition equations, obtained by assuming the factor order "p·f·E·L·y_s·y_v". The order could equally well be "f·p·E·L·y_s·y_v", or "f·E·p·L·y_s·y_v", and so on.
Although each decomposition equation would produce exactly the same result for ΔCO 2 , de Haan 9 found that the size of the contribution of each "Δfactor" significantly differs across the equations.
In other words, the "coefficient" (w) of each "Δfactor" is varied in different equations.
Due to this non-uniqueness issue, Dietzenbacher and Los 10 suggested taking the average of all the n! (6! in this case) decomposition equations (Supplementary Table 1). In order to do so, all 720 equations need to be sorted into a standard order; for example, every term in each equation is re-arranged to the order "p·f·E·L·y_s·y_v", and the "Δfactor" is in turn placed from the first factor p in the first term of the equation to the last factor y_v in the last (sixth) term.
Then, all the equations have been re-arranged in the same pattern. For example, the first term of every equation contains the information on the contribution of population growth (Δp) to the change of CO₂ (ΔCO₂), with the other factors kept unchanged. The product of the unchanged values of the other factors is the "coefficient" for Δp. The coefficient in which all the other factors take target-year values appears 120 times, and the same holds for the coefficient in which they all take base-year values. de Haan 9 and Seibel 11 found that each term in the equation always has 2^(n−1) different "coefficients" attached to the "Δfactor", i.e. 2^(6−1) = 32 different "coefficients" for every "Δfactor" in this case.
Next, one can calculate the "weights" of the "coefficients" which are attached to each "Δfactor".
The easiest way is via observation, i.e. counting how many times a given "Δfactor" is attached to the same "coefficient". For example, as mentioned previously, the coefficient above appears 120 times in the 720 equations, and therefore its weight is 120. However, the observation method becomes cumbersome for a large number of decomposition equations, i.e. with more than 5 factors.
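The counting "by observation" is, in fact, easy to automate. The sketch below uses hypothetical index values and our own variable names (base, target, one_decomposition); it enumerates all 6! = 720 orderings, averages the contributions in the Dietzenbacher-Los sense, and confirms, for example, that the all-target-year coefficient of Δp occurs 5! = 120 times.

```python
from math import prod, isclose
from itertools import permutations

# Hypothetical index values for the six factors of CO2 = p*f*E*L*ys*yv in the
# base year (t-1) and the target year (t); any positive numbers work here.
base   = {"p": 1.00, "f": 0.95, "E": 1.10, "L": 0.90, "ys": 1.05, "yv": 1.20}
target = {"p": 1.02, "f": 0.90, "E": 1.15, "L": 0.85, "ys": 1.10, "yv": 1.25}
factors = list(base)

def one_decomposition(order):
    """Contributions of each factor for a single ordering: factors to the left
    of the changing factor take base-year values, factors to the right take
    target-year values (the convention described in the text)."""
    terms = {}
    for i, name in enumerate(order):
        coeff = prod(base[n] for n in order[:i]) * prod(target[n] for n in order[i + 1:])
        terms[name] = coeff * (target[name] - base[name])
    return terms

orders = list(permutations(factors))        # all 6! = 720 orderings
average = {n: 0.0 for n in factors}
all_target_coeff_count = 0                  # multiplicity of one particular coefficient
for order in orders:
    for name, term in one_decomposition(order).items():
        average[name] += term / len(orders)
    if order[0] == "p":                     # coefficient of dp is then f_t*E_t*L_t*ys_t*yv_t
        all_target_coeff_count += 1

delta_co2 = prod(target.values()) - prod(base.values())
assert isclose(sum(average.values()), delta_co2)   # every ordering (and the average) is exact
print(average)
print(all_target_coeff_count)               # prints 120 = 5!
```

Because each ordering telescopes exactly, the assertion that the averaged contributions sum to ΔCO₂ holds for any choice of positive factor values.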
Seibel 11 proposed a mathematical method to deal with this. Firstly, let k represent the number of subscripts "t−1" (base year) in a coefficient; k runs from 0 to n−1, and therefore the number of subscripts "t" (target year) is n−1−k. Secondly, for each k, the number of different coefficients attached to the "Δfactor" can be calculated with supplementary equation (2). In this study, n is set to 6 (six factors), so when k = 0 or 5 there is only one coefficient for each case; when k = 1 or 4 the number of different coefficients is 5; and when k = 2 or 3 there are 10 different coefficients. Thirdly, supplementary equation (3) calculates how many times each of these coefficients is repeated, which gives the "weight" of each "Δfactor" term in every equation; the other "w"s in supplementary equation (1) are obtained in the same way.
| 1,223.6 | 2015-07-21T00:00:00.000 | [ "Environmental Science", "Economics" ] |
Unraveling the orientation of phosphors doped in organic semiconducting layers
Emitting dipole orientation is an important issue for emitting materials in organic light-emitting diodes with a view to increasing the outcoupling efficiency of light. The origin of the preferred orientation of the emitting dipole of iridium-based heteroleptic phosphorescent dyes doped in organic layers is revealed by simulation of the vacuum deposition using molecular dynamics, along with quantum mechanical characterization of the phosphors. Consideration of both the electronic transitions in the molecular frame and the orientation of the molecules at the vacuum/molecular-film interface allows quantitative analysis of the emitting dipole orientation depending on the host molecules and dopant structures. Interactions between the phosphor and the nearest host molecules on the surface, which minimize the non-bonded van der Waals and electrostatic interaction energies, determine the molecular alignment during the vacuum deposition. Parallel alignment of the main cyclometalating ligands in the molecular complex due to host interactions, rather than the ancillary ligand orienting toward vacuum, leads to the horizontal emitting dipole orientation.
The orientation of molecules in molecular films dictates their electrical and optical properties such as charge mobility 1,2, birefringence 3, absorption 4, emission 5, ionization potential 6, and dielectric 7 and ferroelectric properties 8. Therefore, understanding and controlling molecular orientation in organic films has been a research topic of central importance in organic electronics and photonics, including the fields of liquid crystals 9, organic field-effect transistors 10, and organic photovoltaics 11. In organic light-emitting diodes, the molecular orientation of the emitter embedded in the emissive layer has been a key issue for enhancing the outcoupling efficiency of light by pursuing horizontal alignment of the emitting dipole moment 3,12-21.
Interestingly enough, it is only in recent years that attention has turned to the orientation of the emitting dipoles of iridium-based phosphors, the most established light-emitting dyes given their high photoluminescence quantum yield and wide chromatic range when doped in emissive layers; probably because their iridium-centered, nearly spherical shape and the amorphous nature of their surroundings in the emissive layer would seem to preclude strong molecular alignment. Recently, some heteroleptic Ir complexes exhibiting efficient electroluminescence in organic light-emitting diodes have been reported to possess preferred horizontal emitting dipole orientations (EDOs) 13-16,18-20. However, it has been difficult to explain why these nearly spherical phosphors have a propensity toward preferred molecular alignment in the emissive layers. A few mechanisms have been proposed to explain the preferred molecular orientation of Ir complexes doped in vacuum-deposited organic semiconducting layers: (1) molecular aggregation of the dopants, which randomizes their orientation by suppressing the intermolecular interaction between dopant and host molecules 22; (2) strong intermolecular interactions between the electro-positive sides of the dopant and the electro-negative host molecules, which promote parallel alignment of the N-heterocycles of the Ir complexes by forming a host-dopant-host pseudo-complex mainly participating in the 3MLCT transition 16,23; and (3) π-π interactions between the dopant and host molecules on the organic surface, which orient the aliphatic ligands toward the vacuum side 20,24. Currently, it is not clear which mechanism most comprehensively describes the origin of the preferred EDO of heteroleptic iridium phosphors. Moreover, the models are too oversimplified to describe the EDO values quantitatively, which depend on the structures of the phosphors and host molecules 23. Therefore, the molecular configurations and the interactions responsible for the EDOs of Ir complexes should be established by atomic-scale simulation of the Ir complexes interacting with host molecules during film fabrication.
In this paper, we carefully examine the vacuum deposition process of phosphors on organic layers using a combination of molecular dynamics (MD) simulations and quantum mechanical analyses. The triplet EDO of heteroleptic Ir complexes doped in organic layers is studied with systematic variations of the molecular structures of both host and dopant. Theoretical prediction of the EDO from the simulated deposition process shows excellent quantitative agreement with experimental observations, reproducing the anisotropic molecular orientations of heteroleptic Ir complexes in the emissive layers. In-depth analysis indicates that the molecular orientation originates from the coupling of the cyclometalated main ligand participating in the optical transition with neighboring host molecules, rather than from the alignment of the aliphatic ancillary ligand toward the vacuum. Close observation of the simulation results indicates that the non-bonded interaction energy has a critical influence on the molecular orientation during deposition.

Fig. 1 Method for the simulation of the EDO of emitters in vacuum-deposited layers. a Transfer of the TDM vectors (red arrow) in the molecular coordinates to the vectors of the molecules on the organic substrate during the vacuum deposition simulation. b Three rotation angles (α, β, and γ for the clockwise rotation about the n_x-, n_y-, and n_z-axes, respectively) are the orientation parameters relating the molecular orientation to the laboratory axes. The angles between the n_z axis and the TDM vector (φ_L) and the C_2 axis (φ_C) are obtained after the vector transformation. c A simulation box consisting of the substrate and target molecules. About 50 target molecules were located above the substrate with 5.0 nm of inter-planar space and dropped individually at 300 K. The distance unit in the figure is angstrom (Å).
Results
Modeling of emitting dipole orientation. The simulation method for obtaining the EDO of an emitter in a vacuum-deposited layer is schematically illustrated in Fig. 1a. First, the transition dipole moment (TDM) vector in the molecular frame (m_x-, m_y-, and m_z-axes) was determined by quantum mechanical calculations after optimization of the molecular geometry. For iridium-based phosphors, spin-orbit-coupled time-dependent density functional theory (SOC-TDDFT) was employed to calculate the triplet TDM vectors for phosphorescence. Second, vacuum deposition of the emitting molecules on organic surfaces was simulated using MD. Finally, the TDM vectors in the molecular axes in each frame of the MD trajectory were transformed to vectors in the laboratory axes (n_x-, n_y-, and n_z-axes) by the rotation matrix method (Fig. 1b). We define φ_C as the angle between the m_z and n_z axes, and φ_L as the angle between the TDM vector of the emitter and the n_z axis, representing the molecular orientation and the EDO with respect to the vertical direction in the laboratory frame, respectively. The ratio of the horizontal (TDM_H) to the vertical (TDM_V) transition dipole moment then follows the trigonometric relationship

TDM_H : TDM_V = (μ_0 sin φ_L)² : (μ_0 cos φ_L)²,   (1)

where μ_0 is the magnitude of the dipole moment and the squares of the components indicate the intensity of the transition (emission intensity). The EDO describes the average fractions of the horizontal and vertical dipole moments over all emitters embedded in the emissive layer. An ensemble average of the horizontal dipole moment gives the fraction of the horizontal emitting dipole moment in the emissive layer (Θ) as a parameter of the EDO by

Θ = ⟨TDM_H⟩ / ⟨TDM_H + TDM_V⟩ = ⟨sin²φ_L⟩.   (2)

Details about the rotation matrix and the vector transformation are given in the Methods section.

Fig. 2 (caption, in part) The color legend is identical to that for the Ir complexes. Optimization of the molecular structures was performed using the B3LYP method with the LACVP** basis set for the phosphors and the 6-31G(d)** basis set for the host materials, respectively. SOC-TDDFT calculations of the phosphors were carried out using the B3LYP method and the DYALL-2ZCVP_ZORA-J-PT-GEN basis set.
The deposition simulation was performed by dropping a target molecule onto organic substrates under vacuum, followed by thermal equilibration at 300 K, as shown in Fig. 1c. The simulations were performed using the Materials Science Suite (Version 2.2) released by Schrödinger Inc. 25. The OPLS_2005 force field 26 and periodic boundary conditions were used for the MD simulations. Organic substrates were prepared by packing 256 host molecules at 300 K and 1 atm. The simulated substrates have random molecular orientations and densities similar to the experimental values. Detailed steps of the preparation of the substrates are given in Supplementary Fig. 1 and Supplementary Note 1. One of the challenges of a single-trajectory-based MD analysis of orientation during deposition is that the time scale needed to observe the entirety of the lateral degrees of freedom of a single molecule is much longer than that of a typical MD simulation. As such, we introduced 50 independent deposition events per dopant instead of relying on a single MD trajectory for each. About 50 target dopant molecules were distributed in the vacuum slab of the periodic substrate model at non-overlapping locations with different orientations for the deposition simulation. Each target molecule was individually dropped onto the substrate under vacuum at 300 K. Translational motion of the host molecules at the bottom of the substrate was restrained in order to avoid drift of the system. The deposition simulation used an NVT ensemble for a duration of 6000 ps with a time step of 2 fs, and configurations of the system were recorded every 6 ps. One example of the process is shown in Supplementary Movies 1 and 2. Finally, the EDOs of the phosphors and the molecular angles (φ_C) were analyzed from the configurations using Eq. (1). The analysis is based on the assumption that the characteristic time that determines the orientation of a dopant is on the same scale as that over which the intermolecular interaction energy converges after the deposition of the dopant.
Materials. Chemical structures of the materials used in this study are depicted in Fig. 2a, b. Three heteroleptic iridium complexes, Ir(ppy)2tmd, Ir(3′,5′,4-mppy)2tmd 18, and Ir(dmppy-ph)2tmd 19, possessing high Θ values were adopted to investigate the effect of the phosphor molecular structure. The molecular C2 symmetry axis toward the center of the ancillary ligand from the origin located at the Ir atom was set as m_z, the vector orthogonal to m_z and normal to the molecular Ir-O-O plane was set as m_x, and m_y was determined as the cross product of m_z and m_x in the dopants. The triplet TDM vectors of the three Ir complexes align along the direction from the iridium atom to the pyridine rings by 3MLCT, as displayed in Fig. 2a. A comparison of the coordinates of the TDM vectors of Ir(ppy)2tmd, Ir(3′,5′,4-mppy)2tmd, and Ir(dmppy-ph)2tmd indicates that the substituents at the 4-position of the pyridine of the main ligands do not change the direction of the triplet TDM vectors much.
Diphenyl-4-triphenylsilylphenyl-phosphine oxide (TSPO1), 1,4-bis(triphenylsilyl)benzene (UGH-2), and 4,4′-bis(N-carbazolyl)-1,1′-biphenyl (CBP) were selected as host materials to investigate the effect of the ground-state dipole and the conjugation length of the host on the EDO. TSPO1 has a large permanent dipole moment due to the polar phosphine oxide group and its asymmetric structure, while UGH-2 and CBP have small ground-state dipole moments compared to TSPO1 due to their symmetric structures and less polar groups. On the other hand, CBP has a longer conjugation length than UGH-2 and TSPO1, indicating that CBP has a larger polarizability than UGH-2.
Simulation results. The simulated variations of the orientation of the TDM_H and the C2 axis of the dopants with time on the different hosts are displayed in Supplementary Fig. 3 for the 50 depositions of each system. The orientation of the phosphors stabilized after a certain time for some molecules, but fluctuated continuously for others. Figure 3a exhibits the histograms of the EDO resulting from the deposition simulation. The blue lines represent the probability density of TDM_H (sin²φ_L, derivation in the Methods section) of an arbitrary vector. The green lines exhibit the deviations of the population from the random distribution. The simulated Θ values of Ir(ppy)2tmd were 0.63, 0.72, and 0.74 on the UGH-2, CBP, and TSPO1 substrates, respectively. Ir(3′,5′,4-mppy)2tmd and Ir(dmppy-ph)2tmd on TSPO1 substrates have Θ values of 0.76 and 0.82, respectively. In addition, the simulation was performed for Ir(ppy)3, a homoleptic complex exhibiting isotropic EDO when doped in CBP, as a reference 14,23. The distribution of the emitting dipole moment of Ir(ppy)3 was close to the random distribution, with a simulated Θ value of 0.67 and random orientation of the C3 symmetry axis of the molecule (Supplementary Fig. 4; Supplementary Note 3). The simulated EDOs match the experimental results well, as compared in Table 1, verifying that the MD simulation describes the vacuum deposition adequately. The results show that Ir(ppy)2tmd in UGH-2 has a larger molecular population with vertical TDM at the expense of a reduced population with horizontal TDM compared to the random distribution (Θ = 0.67). Higher Θ values are obtained when the population of molecules possessing high TDM_H grows at the expense of the population with low TDM_H.
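The random-orientation benchmark quoted here (Θ = 0.67) is easy to reproduce. The following minimal Python sketch (our own illustration, not the authors' code) applies uniformly random rotations to a fixed molecular-frame TDM vector, as in the Fig. 1 workflow, and recovers both Θ ≈ 2/3 and the arbitrary-vector probability density of TDM_H referred to above (the blue lines in Fig. 3a; the closed form is derived in the Methods section).

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Uniformly random orientations (note: uniform Euler angles would NOT be uniform)
R = Rotation.random(200000, random_state=0).as_matrix()    # (N, 3, 3)
mu_mol = np.array([0.0, 0.0, 1.0])                         # TDM in the molecular frame
mu_lab = R @ mu_mol                                        # TDM in the laboratory frame

tdm_h = 1.0 - mu_lab[:, 2] ** 2                            # sin^2(phi_L) per molecule
print("Theta =", tdm_h.mean())                             # ~0.667, the random value

# Histogram of TDM_H against the arbitrary-vector density 1/(2*sqrt(1-x)):
hist, edges = np.histogram(tdm_h, bins=20, range=(0, 1), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.c_[centers, hist, 1.0 / (2.0 * np.sqrt(1.0 - centers))][:3])
```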
The orientation of the C2 axes of the phosphors on the organic layers is shown in Fig. 3b to find out whether alignment of the aliphatic ancillary ligands has any correlation with the EDO, for instance, whether the horizontal EDO results from vertical alignment of the ancillary ligands with respect to the substrate 20,24. One expects the distribution function to follow sin φ_C (blue line) if the orientation is random. We can extract several interesting results from Fig. 3b. First, the orientation of the ancillary ligand of the Ir complexes has a broad distribution for all the deposited films. Second, the host effect on the EDO is independent of the alignment of the ancillary ligand. The total distribution of the C2 axis of Ir(ppy)2tmd is similar on the UGH-2, CBP, and TSPO1 layers, with an average φ_C of 70°, but the EDO on the UGH-2 host is different from the EDOs on the other two hosts. Third, the orientations of the C2 axes of Ir(3′,5′,4-mppy)2tmd and Ir(dmppy-ph)2tmd on TSPO1 are more random (closer to sin φ_C) even though they possess higher Θ values than Ir(ppy)2tmd. The random distributions are observed even in the region with high horizontal alignment of the emitting dipole moment (green regions in the stacked histogram with 0.95 ≤ TDM_H ≤ 1). Fourth, the dopant molecules with vertical TDM (red regions in the stacked histogram with 0 ≤ TDM_H ≤ 0.3) have φ_C close to 90° for all the systems, indicating that their ancillary ligands align parallel to the surface. All these results show that there is little correlation between the orientation of the TDMs and the alignment of the ancillary ligands.
Discussion
The size of the substrates turns out to be large enough to simulate the vacuum deposition of the phosphorescent dyes adequately, as confirmed by the similar results obtained on a larger substrate consisting of 1024 molecules (Supplementary Fig. 5). Alignment of the aliphatic ligands of heteroleptic Ir complexes toward vacuum (model 3) is not required for preferred horizontal EDO either, as shown in Fig. 3. A much larger portion of the aliphatic ligands (-tmd groups) of Ir(ppy)2tmd molecules align toward the vacuum side (0° < φ_C < 90° in Fig. 3b) than of Ir(3′,5′,4-mppy)2tmd and Ir(dmppy-ph)2tmd molecules. However, the Θ value of Ir(ppy)2tmd is much lower than those of Ir(3′,5′,4-mppy)2tmd and Ir(dmppy-ph)2tmd. These results are opposite to the prediction of that model and therefore clearly demonstrate that alignment of the aliphatic ligands toward the vacuum side is not a necessary condition for the alignment of the EDO in heteroleptic Ir complexes. The reason why it is not required can be understood from the following consideration.
The relationship between the orientation of the molecules and the emitting dipole moment can easily be figured out using the schematic molecular orientations of a heteroleptic Ir complex shown in Fig. 4. The C2 axis points toward the ancillary ligand (dark blue arrows) and the TDM vector (red arrows) lies approximately along the direction from the iridium center to one of the pyridine rings. The alignment of the iridium-pyridines determines the orientation of the TDM for Ir(ppy)2tmd, Ir(3′,5′,4-mppy)2tmd, and Ir(dmppy-ph)2tmd. Figure 4 shows five configurations with different rotation angles of the C2 axis for the horizontal TDM and one configuration for the vertical TDM. Rotation of the C2 axis from the vertical to the horizontal direction can result in a horizontal TDM as long as the TDM lies in the horizontal plane (substrate), with an arbitrary orientation of the ancillary ligand. In other words, horizontal EDO is possible no matter which direction the ancillary ligand aligns toward, either vacuum or film. On the other hand, a vertical TDM is obtained only when the pyridine rings are aligned perpendicular to the substrate, which is accompanied by horizontal alignment of the C2 axis (φ_C ≈ 90°) in that configuration. This consideration is consistent with the simulation results in Fig. 3.
The distributions of the ancillary ligand can be partly explained by analyzing the differences in Hildebrand solubility parameters (δ) of the phosphor and host molecules.
In general, the Hildebrand solubility parameter is defined as δ = (ΔE_v/V_m)^(1/2), where ΔE_v is the internal energy change of vaporization and V_m is the molar volume, respectively. The difference in the solubility parameters (Δδ) between two components of a chemical mixture can be an indicator of the degree of miscibility, with smaller and larger values of Δδ indicating more and less miscible, respectively. In this work, the Δδ values between the hosts and the phosphors are much less than 7 MPa^(1/2), suggesting that all the phosphors are miscible with the hosts 28. However, the Δδ's between Ir(ppy)3 and the hosts are smaller than those between Ir(ppy)2tmd and the hosts. Since the difference comes from the ppy and tmd groups, it follows that the tmd group is less miscible in the host substrates than the main-ligand ppy group, which explains the orientation of the aliphatic ancillary ligand toward the vacuum side for Ir(ppy)2tmd. On the other hand, the difference in the solubility parameters among Ir(ppy)2tmd, Ir(3′,5′,4-mppy)2tmd, and Ir(dmppy-ph)2tmd comes from the difference in main ligands. The reduced solubility of Ir(3′,5′,4-mppy)2tmd and Ir(dmppy-ph)2tmd indicates that both the 3′,5′,4-mppy and dmppy-ph groups are less miscible with the host than the ppy group of Ir(ppy)2tmd and have a weaker preference for attachment to the substrate. Therefore, the orientations of the ancillary ligands of these two phosphors are more randomized during the deposition, consistent with the simulated distributions in Fig. 3b. The result of Ir(ppy)3 doped in the CBP layer is added to the results of the five combinations of host and heteroleptic Ir complexes for reference.
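The miscibility screening described here is straightforward to reproduce. The Python sketch below assumes only the definition δ = (ΔE_v/V_m)^(1/2) and the 7 MPa^(1/2) rule of thumb quoted above; the numerical inputs are placeholders of ours, not values from the paper.

```python
import numpy as np

def hildebrand_delta(dEv_kj_per_mol, Vm_cm3_per_mol):
    """delta = sqrt(dEv / Vm), returned in MPa**0.5.

    dEv: internal energy change of vaporization [kJ/mol]
    Vm:  molar volume [cm^3/mol]
    (kJ/mol)/(cm^3/mol) = kJ/cm^3 = GPa, hence the factor 1000 to get MPa.
    """
    return np.sqrt(1000.0 * dEv_kj_per_mol / Vm_cm3_per_mol)

# Placeholder inputs (NOT the paper's values), just to show the workflow:
delta_host = hildebrand_delta(120.0, 280.0)    # hypothetical host molecule
delta_dopant = hildebrand_delta(150.0, 420.0)  # hypothetical phosphor
d_delta = abs(delta_host - delta_dopant)
print(f"delta_host = {delta_host:.1f}, delta_dopant = {delta_dopant:.1f} MPa^0.5")
print("miscible" if d_delta < 7.0 else "poorly miscible", f"(d_delta = {d_delta:.1f})")
```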
Non-bonded interaction energy is calculated as the sum of van der Waals and Coulomb interaction energies from the MD simulation to investigate whether the intermolecular interaction between the phosphor and neighboring host molecules is responsible for the spontaneous molecular alignment of the phosphors on the surfaces. Figure 5a depicts the correlation between the non-bonded interaction energy and the orientation of the emitting dipole moment of the phosphors in the five different host-dopant systems. Distributions of the non-bonded interaction energy are given in Supplementary Fig. 7. There is a broad energy trap of ~3 kcal/mol in the region of TDM_H = 0.1-0.5 for Ir(ppy)2tmd on the UGH-2 host, and the energy increases with further increase of TDM_H, thereby resulting in a rather vertical EDO compared to random orientation, because the population is expected to be concentrated in the regions of low (i.e., large negative) non-bonded interaction energy. On the other hand, the non-bonded interaction energies of Ir(ppy)2tmd on the CBP and TSPO1 layers, and the energies of Ir(3′,5′,4-mppy)2tmd and Ir(dmppy-ph)2tmd on TSPO1, are lowered as TDM_H increases. As a result, molecular alignment with horizontal TDM is energetically preferred when these molecules are deposited onto the organic semiconducting layers. Furthermore, much lower energies were obtained for Ir(3′,5′,4-mppy)2tmd and Ir(dmppy-ph)2tmd than for Ir(ppy)2tmd on the TSPO1 layer, indicating that the increased EDOs are also related to stabilization by neighboring molecules. The calculated non-bonded interaction energies and the statistical results indicate that the host-dopant interaction plays a pivotal role in orienting heteroleptic Ir complexes and that this force drives the alignment of the iridium-pyridine bonds of the phosphors toward the horizontal direction.
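For readers unfamiliar with this energy decomposition, the sketch below shows schematically how a non-bonded host-dopant interaction energy is assembled from pairwise Lennard-Jones (van der Waals) and Coulomb terms. The functional forms are the generic ones; the actual simulations used the OPLS_2005 force field, whose per-atom parameters and combination rules are not reproduced here.

```python
import numpy as np

COULOMB_K = 332.0636  # kcal*Angstrom/(mol*e^2), conventional MD units

def nonbonded_energy(coords_i, coords_j, q_i, q_j, sigma, epsilon):
    """Sum of 12-6 Lennard-Jones and Coulomb energies between two molecules.

    coords_*: (N, 3) arrays in Angstrom; q_*: partial charges in e.
    sigma [Angstrom] and epsilon [kcal/mol] are taken as uniform for brevity,
    whereas a real force field assigns them per atom pair.
    """
    d = np.linalg.norm(coords_i[:, None, :] - coords_j[None, :, :], axis=-1)
    lj = np.sum(4.0 * epsilon * ((sigma / d) ** 12 - (sigma / d) ** 6))
    coul = np.sum(COULOMB_K * np.outer(q_i, q_j) / d)
    return lj, coul

# Two toy three-atom "molecules", one stacked 4 Angstrom above the other:
a = np.array([[0.0, 0.0, 0.0], [1.4, 0.0, 0.0], [2.8, 0.0, 0.0]])
b = a + np.array([0.0, 0.0, 4.0])
qa = np.array([-0.2, 0.4, -0.2]); qb = -qa
vdw, coul = nonbonded_energy(a, b, qa, qb, sigma=3.5, epsilon=0.07)
print(f"vdW = {vdw:.2f} kcal/mol, Coulomb = {coul:.2f} kcal/mol")
```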
The type and magnitude of the non-bonded interactions are different for different phosphors and hosts, leading to different EDOs. The separated vdW and Coulomb energies as a function of the molecular orientation are depicted in Fig. 5b, c. The vdW energies in the CBP:Ir(ppy)2tmd, TSPO1:Ir(ppy)2tmd, TSPO1:Ir(3′,5′,4-mppy)2tmd, and TSPO1:Ir(dmppy-ph)2tmd systems decrease, whereas the energy in UGH-2:Ir(ppy)2tmd increases, as the ratio of the horizontal transition dipole moment (TDM_H) increases. For the polar host molecule TSPO1, the Coulomb energies of the phosphors are lowered as TDM_H increases. The variation of the vdW energy with molecular orientation was 3-5 kcal/mol, which is larger than the 0-1 kcal/mol variation of the Coulomb energy. The results indicate that vdW interactions (dipole-induced dipole and induced dipole-induced dipole interactions) between the aromatic ligands and the nearest host molecules are the main mechanism contributing to the molecular alignment of the phosphors. The Coulomb interaction further assists the alignment of the phosphors if polar host materials are employed. For instance, Ir(ppy)2tmd and Ir(3′,5′,4-mppy)2tmd have a quadrupole composed of two dipoles pointing from the pyridines (δ+ charge) to the Ir atom (2δ− charge). If there is a dipole in the host molecule (i.e., TSPO1), the dipole-quadrupole interaction [-P=O δ− and δ+ H(pyridine)] anchors one phosphor molecule to two host molecules, leading to a rather horizontal orientation of the iridium-pyridine bonds of the phosphors, which are approximately parallel to the TDM. In contrast, if the host molecule has a positive surface potential (i.e., δ+ (phenyl)3-Si-phenyl-Si-(phenyl)3 δ+ in UGH-2), there is a repulsive force between the pyridines of the phosphors and the host molecules, so that the pyridine rings are pushed toward the vacuum. The dispersion force between the conjugated phenyl substituents of Ir(dmppy-ph)2tmd and its nearest neighbors anchors the pyridines onto the surface as well and lowers the energy with the molecular long axis lying on the surface. Meanwhile, the random EDO of Ir(ppy)3 is attributed to its three equivalent intermolecular interaction sites, resulting in random orientation of the molecule. Figure 6a-c exhibit representative molecular behaviors of phosphors and nearest host molecules during the deposition, out of the 50 cases (Supplementary Fig. 3), for the host-dopant combinations UGH-2:Ir(ppy)2tmd, TSPO1:Ir(ppy)2tmd, and TSPO1:Ir(dmppy-ph)2tmd, respectively. Large vibrations, rotations, and diffusion of Ir(ppy)2tmd on the surface of the UGH-2 layer without lowering of the energy were observed in the trajectory shown in Fig. 6a. The perpendicular alignment of pyridines occasionally formed on the surface resulted in a vertical emitting dipole moment on average. On the other hand, a hydrogen atom at one of the pyridines of Ir(ppy)2tmd faced toward an oxygen atom of TSPO1 with a −P=O··H(pyridine) distance of around 0.4 nm at t = 2472 and 4560 ps, leading to parallel alignment of the Ir-pyridines of Ir(ppy)2tmd to the surface. The larger quadrupole moment of Ir(3′,5′,4-mppy)2tmd than that of Ir(ppy)2tmd increased the strength of the quadrupole-dipole interaction and resulted in an enhanced fraction of the horizontal dipole compared to Ir(ppy)2tmd.
The horizontal EDO of Ir(ppy)2tmd in CBP can be understood through the dipole induced in the carbazole groups of CBP when the positive pole of a pyridine approaches, but the interaction strength for the iridium-pyridine alignment between Ir(ppy)2tmd and CBP is smaller than that between Ir(ppy)2tmd and TSPO1. Compared to the former cases, the picture for Ir(dmppy-ph)2tmd shown in Fig. 6c is rather simple. The phosphor deposited onto the TSPO1 layer was stabilized shortly after deposition into the configuration with horizontal iridium-pyridine-phenyl alignment and maintained that configuration. The much lower (larger in magnitude) non-bonded interaction energy of Ir(dmppy-ph)2tmd in Fig. 5a restrained rotation of the molecule, and the molecular configurations were easily fixed on the surface, so that a much enhanced horizontal EDO was achieved by the substitutions. In summary, the origin of the molecular orientation and EDO of doped heteroleptic iridium complexes in vacuum-deposited organic layers has been investigated using MD simulations and quantum mechanical analyses in direct comparison with experimental observations. Careful analyses of the simulation results revealed that the molecular alignment of the phosphors arises spontaneously from local electrostatic and van der Waals interactions with the nearest host molecules, acting on a scale smaller than a molecule. The orientation of the TDM vector of the phosphors on the organic surfaces follows the direction of the ligand mainly participating in the optical transition (such as the pyridines in ppy) in the molecular alignment, whereas the alignment of the ancillary ligand does not have a direct correlation with the EDO. Attractive interactions between the pyridines of a phosphor and CBP (quadrupole-induced dipole interaction) or TSPO1 (quadrupole-dipole interaction) anchor the phosphor onto host molecules with parallel iridium-pyridine alignment, thereby increasing the horizontal EDO. Ir(3′,5′,4-mppy)2tmd has a larger quadrupole moment than Ir(ppy)2tmd, which results in further molecular alignment toward a horizontal emitting dipole moment. An increase of the dispersion force along the direction of the TDM was also effective in controlling the molecular orientation toward horizontal EDO with lowered non-bonded interaction energy.

Fig. 6 Representative configurations during deposition. Snapshots of local configurations and time-dependent trajectories of the EDO, the angle of the C2 axis, and the non-bonded interaction energy up to 6 ns are depicted together. The ancillary ligand and the pyridine rings of the phosphors at the octahedral sites are colored red and blue, respectively. a Ir(ppy)2tmd deposited onto the UGH-2 layer rotates continuously, and the occasionally observed perpendicular alignment of pyridines with respect to the substrate results in vertical EDO. b Ir(ppy)2tmd anchors on the surface of the TSPO1 layer through the local quadrupole-dipole interaction with the two nearest host molecules located at both sides. The hydrogen atoms at both pyridines of Ir(ppy)2tmd and the oxygen atoms of TSPO1 connected by broken lines were the plausible binding sites. The distances between the two atoms (broken lines) decrease to around 0.4 nm as time increases, and a host-dopant-host pseudo-complex is formed with parallel alignment of the pyridines with respect to the substrate. c Ir(dmppy-ph)2tmd deposited onto the TSPO1 layer is less mobile than Ir(ppy)2tmd, with low non-bonded interaction energy due to the configuration of large dispersion force along the direction of the TDM.
Methods
Quantum mechanical calculations and molecular dynamics simulations. Density functional theory (DFT) was used to obtain the molecular geometries and electrostatic potentials of the hosts and phosphors. The triplet TDMs of the phosphors, from T1 to S0, were calculated via SOC-TDDFT. The DFT and SOC-TDDFT calculations and the follow-up analyses were performed with the Schrodinger Materials Science Suite 25 along with its quantum chemical engine, Jaguar 29. The TDM having the largest oscillator strength among the three degenerate sublevels of the T1 level (Tx, Ty, and Tz) obtained from the density functional calculations was used in this study as the representative TDM. All MD simulations were performed with Desmond 30,31, an MD engine implemented in the Schrodinger Materials Science Suite. Equilibration simulations prior to deposition were performed in NPT ensembles, where temperature and pressure were held constant via a Nose-Hoover chain thermostat and the Martyna-Tobias-Klein barostat, respectively. No explicit constraints were applied to the geometry and/or positions of any of the molecules introduced in the simulation box. The simulations were run on NVIDIA general-purpose GPU cards (K80).
Rotation matrix method. The rotation matrix method was used to transform the TDM vector from the molecular coordinates to the laboratory coordinates. Rotation angles α, β, and γ are defined as the clockwise rotations about the laboratory axes n_x, n_y, and n_z, respectively. The rotation matrices for the α, β, and γ rotations then take the standard form

R_x(α) = ((1, 0, 0), (0, cos α, sin α), (0, −sin α, cos α)),
R_y(β) = ((cos β, 0, −sin β), (0, 1, 0), (sin β, 0, cos β)),
R_z(γ) = ((cos γ, sin γ, 0), (−sin γ, cos γ, 0), (0, 0, 1)).

The sequential α, β, γ rotations of the dopant molecules were extracted in every configuration of the MD simulation. The product of the three rotation matrices gives a matrix representing the orientation of the dopant molecule,

R(α, β, γ) = R_z(γ) R_y(β) R_x(α).

Finally, the TDM vectors in the laboratory coordinates were obtained by

μ_lab = R(α, β, γ) μ_mol.

Calculation of a probability density function of TDM_H. Integration of a probability density function f over an interval [a, b] gives the probability that a variable X lies between X = a and X = b.
To calculate the probability density function of sin²φ for an arbitrarily oriented vector, we define the arcsine function y = arcsin(√x), which is the inverse of x = sin²y. For a monotonic function, the densities of the two variables are related by f_X(x) = f_Y(y) |dy/dx|. Substituting f_Y(y) = sin(y) and dy/dx = 1/(2√(x − x²)) into this relation, the probability density function f_X is obtained as

f_X(x) = sin(arcsin √x) / (2√(x − x²)) = 1/(2√(1 − x)), 0 ≤ x ≤ 1.

Data availability. The authors declare that all data supporting the findings of this study are available in the article and in the Supplementary Information file. Additional information is available from the corresponding author upon request.
"Materials Science",
"Physics"
] |
Nonlinear Processes in Geophysics

Conditional nonlinear optimal perturbations of the double-gyre ocean circulation
In this paper, we study the development of finite amplitude perturbations on linearly stable steady barotropic double-gyre flows in a rectangular basin using the concept of Conditional Nonlinear Optimal Perturbation (CNOP). The CNOPs depend on the time scale of evolution t_e and on an initial perturbation threshold δ. Under symmetric wind forcing, a perfect pitchfork bifurcation occurs in the model. The CNOPs are determined for all linearly stable states and the time evolution of the CNOPs is studied. It is found that the patterns of the CNOPs are similar to those of the non-normal modes for small t_e and approach those of the normal modes for larger t_e. With slightly asymmetric winds, an imperfect pitchfork occurs in the model. Indications are found that the time evolution of the CNOPs is related to the value of the dissipation function of the underlying steady state.
Introduction
The so-called quasi-geostrophic double-gyre flow has been recognized as one of the characteristic problems to study the nonlinear dynamics of the wind-driven ocean circulation (Jiang et al., 1995; Dijkstra, 2005). Usually such a flow is considered in an idealized geometry, such as a rectangular ocean basin, on a midlatitude β-plane. The linear problem, neglecting inertia, is the basis for the Sverdrup-Stommel-Munk theory of the wind-driven ocean circulation. In this case, the Sverdrup balance holds over most of the basin and viscosity only affects the flow in thin boundary layers at the eastern and western boundaries. The Sverdrup transport is compensated only in the western boundary layer and hence the western boundary flow is much stronger than the eastern one.
Correspondence to: A. D. Terwisscha van Scheltinga (arjen.terwisschavanscheltinga@ualberta.ca)

When the flow is forced by a symmetric wind stress (with respect to the mid-axis of the basin), the quasi-geostrophic equations have a reflection symmetry. For the one-layer (barotropic) case, much is known about the stability bounds and bifurcation behavior of the nonlinear flows as the viscosity is decreased. When only lateral friction is considered as a dissipation mechanism, there is basically only one control parameter, the Reynolds number Re. The finite amplitude stability of the linear (Munk) solution was studied using analytical methods (Crisciani and Mosetti, 1990; Crisciani et al., 1994, 1995). The energy stability limit Re_E of the unique antisymmetric solution, existing at high viscosity, was calculated numerically in Dijkstra and De Ruijter (1996) and guarantees monotonic decay of the kinetic energy of any perturbation; this stability limit hence provides sufficient conditions for stability (Joseph, 1976).
The linear stability limits Re_L of this barotropic double-gyre flow in a relatively small ocean basin were presented in Dijkstra and Katsman (1997). The first bifurcation is a symmetry-breaking pitchfork bifurcation where the antisymmetric solution becomes unstable and two asymmetric solutions appear. These asymmetric solutions become unstable at several Hopf bifurcations where periodic orbits appear. Eventually chaotic behavior occurs due to a homoclinic bifurcation, which can be either of Lorenz or Shilnikov type, depending on the parameters of the system (Nadiga and Luce, 2001; Simonnet et al., 2005).
In recent years, tools of generalized stability theory (Farrell and Ioannou, 1996; Moore, 1999; Moore et al., 2002) have also been applied to this problem. With these tools, one is interested in determining the growth of perturbations on a particular reference state due to the non-normality of the Jacobian of that state; the latter state may be either a linearly stable steady state or the time-mean state of a very irregular flow. These tools enable one to determine the response of the flow to stochastic perturbations and hence are interesting with respect to predictability issues.
In Moore (1999), a basin of size 1000×2000 km was considered, for both the asymptotically stable and the unstable regimes, and the effect of stochastic wind forcing on the flow was studied. Focus was on the stochastic forcing patterns that account for the largest fraction of noise-induced variability, the so-called stochastic optimals. The structure of the Ekman pumping velocity of the gravest stochastic optimal on time scales of 2-3 weeks corresponds to a single-gyre basin-wide flow. This particular wind perturbation induces changes in vorticity which project strongly on the fastest linear singular vectors. The noise forcing is most effective in the asymptotically stable regime but is otherwise not sensitive to the chosen norm, basic-state flow and geometry. In Moore et al. (2002), it was shown that the variability is maintained by Rossby waves that interact with the western boundary current. The perturbations that maintain the stochastically induced variance in the linearly stable regime have a large projection on some of the non-normal, least-damped eigenmodes.
In generalized stability theory, it is assumed that the initial perturbation is so small that its evolution can be described by a linearized system (the tangent linear model), and optimal growth is determined through the largest singular value of the forward propagator of the linearized system. A generalization of linear singular vectors is the concept of Conditional Nonlinear Optimal Perturbations (CNOPs), as introduced by Mu et al. (2003). The CNOP is the initial (finite amplitude) perturbation whose nonlinear evolution attains a maximum growth rate at a chosen end time t_e, given an initial bound δ on the norm of the initial condition.
The CNOPs of a steady state determine the dominant time-dependent nonlinear behavior of finite amplitude perturbations. On the one hand, such behavior bridges the gap between the behavior below the energy stability boundary (monotonic decay of all perturbations) and above the linear stability boundary (exponential growth of infinitesimally small perturbations). On the other hand, when compared to the non-normal modes, the CNOP displays how much nonlinearity affects the evolution of finite amplitude perturbations. In the case of linearly stable multiple equilibria, the CNOPs also provide a way to compute finite amplitude stability boundaries of each of the equilibria (Mu et al., 2004). It is thus important to be able to compute CNOPs for flows modeled by systems of partial differential equations.
The computation of CNOPs has so far only been accomplished in models having a small number of degrees of freedom, such as ocean box models (Mu et al., 2004) and relatively simple atmospheric (Mu and Zhang, 2006) and ENSO models (Mu et al., 2003). As far as we know, the CNOPs for the barotropic double-gyre ocean flow problem are calculated here for the first time. We use the implicit 4D-Var methodology (Terwisscha van Scheltinga and Dijkstra, 2005), which is relatively easily extended to compute CNOPs. We determine the CNOPs for the double-gyre problem both under symmetric and under slightly asymmetric wind-stress forcing.
Model and methods
In this section, we briefly recall the model used (Sect. 2.1) and then describe the CNOP methodology (Sect. 2.2).
Model
Consider a flow domain V consisting of a rectangular ocean basin of size L×L having a constant depth D. The basin is situated on a midlatitude β-plane with a central latitude θ₀ = 45° N and Coriolis parameter f₀ = 2Ω sin θ₀, where Ω is the rotation rate of the Earth. The meridional variation of the Coriolis parameter at the latitude θ₀ is indicated by β₀. The density ρ of the water is constant and the flow is forced at the surface through a wind-stress vector T = τ₀ [τ^x(x, y), τ^y(x, y)]. The governing equations are nondimensionalized using a horizontal length scale L, a vertical length scale D, a horizontal velocity scale U, the advective time scale L/U and a characteristic amplitude of the wind-stress vector, τ₀. The effect of deformations of the ocean-atmosphere interface on the flow is neglected.
The dimensionless barotropic quasi-geostrophic model for the vorticity ζ and the geostrophic streamfunction ψ is (Pedlosky, 1987)

∂ζ/∂t + u ∂ζ/∂x + v ∂ζ/∂y + β ∂ψ/∂x = Re⁻¹ ∇²ζ + α_τ (∂τ^y/∂x − ∂τ^x/∂y),   (1a)
ζ = ∇²ψ,   (1b)

where the horizontal velocities are given by u = −∂ψ/∂y and v = ∂ψ/∂x. The parameters in Eq. (1a) are the Reynolds number Re, the planetary vorticity gradient parameter β and the wind-stress forcing strength α_τ. These parameters are defined as

Re = UL/A_H,  β = β₀L²/U,  α_τ = τ₀L/(ρ D U²),

where A_H is the lateral friction coefficient. When the horizontal velocity scale is based on a Sverdrup balance of the flow, i.e., U = τ₀/(ρ D β₀ L), it follows that α_τ = β. Consequently, there are only two free parameters, for example the dimensionless boundary layer thicknesses δ_I and δ_M defined by δ_I² = 1/β and δ_M³ = 1/(βRe), respectively.
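A minimal sketch of the parameter computation is given below. The dimensional input values are illustrative placeholders (they are not the entries of Table 1), and only the relations quoted in the text (Re = UL/A_H, α_τ = β for a Sverdrup-based velocity scale, δ_I² = 1/β, δ_M³ = 1/(βRe)) are used; the form β = β₀L²/U is our assumption consistent with the nondimensionalization above.

```python
# Illustrative (placeholder) dimensional values, not those of Table 1:
L = 1.0e6        # basin size [m]
D = 600.0        # depth [m]
U = 1.0e-2       # horizontal velocity scale [m/s] (Sverdrup-based, assumed)
A_H = 400.0      # lateral friction coefficient [m^2/s]
beta0 = 1.6e-11  # planetary vorticity gradient [1/(m s)]

Re = U * L / A_H                 # Reynolds number
beta = beta0 * L**2 / U          # dimensionless vorticity gradient (assumed form)
alpha_tau = beta                 # Sverdrup balance => alpha_tau = beta

delta_I = (1.0 / beta) ** 0.5                  # inertial boundary-layer thickness
delta_M = (1.0 / (beta * Re)) ** (1.0 / 3.0)   # Munk boundary-layer thickness
print(f"Re={Re:.1f}, beta={beta:.2e}, delta_I={delta_I:.4f}, delta_M={delta_M:.4f}")
```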
We assume no-slip conditions on the east-west boundaries and slip conditions on the north-south boundaries. The wind-stress profile considered contains a dimensionless parameter σ that controls the shape of the zonal wind stress, with τ₀ a typical amplitude. For σ = 1 (σ = 0), the wind stress induces a single-gyre (double-gyre) flow.
The governing equations are discretized on a 60×40 grid with central spatial differences. The resolution in the east-west direction is slightly higher because the flows are westward intensified. An implicit time-integration scheme (Terwisscha van Scheltinga and Dijkstra, 2005) is used with a time step of Δt = 1 day. Standard parameter values of the model are shown in Table 1. After discretization, the state vector x ∈ R^d (of dimension d = 2×60×40 = 4800) consists of the values of ψ and ζ at the grid points.
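The relation ζ = ∇²ψ on such a grid can be illustrated with a standard five-point central-difference stencil; the sketch below is generic (grid spacing and boundary handling are simplified relative to the model's actual no-slip/slip conditions).

```python
import numpy as np

def laplacian(psi, dx, dy):
    """Five-point central-difference approximation of zeta = del^2 psi.

    psi: 2-D array on a regular grid; boundary rows/columns are left at zero
    here, whereas the model imposes no-slip (east-west) and slip (north-south)
    boundary conditions.
    """
    zeta = np.zeros_like(psi)
    zeta[1:-1, 1:-1] = (
        (psi[2:, 1:-1] - 2.0 * psi[1:-1, 1:-1] + psi[:-2, 1:-1]) / dx**2
        + (psi[1:-1, 2:] - 2.0 * psi[1:-1, 1:-1] + psi[1:-1, :-2]) / dy**2
    )
    return zeta

# 60 x 40 grid on the unit square, as in the text:
nx, ny = 60, 40
x = np.linspace(0.0, 1.0, nx); y = np.linspace(0.0, 1.0, ny)
psi = np.sin(np.pi * x)[:, None] * np.sin(2.0 * np.pi * y)[None, :]  # test field
zeta = laplacian(psi, x[1] - x[0], y[1] - y[0])
print(zeta.shape)  # (60, 40); in the interior, zeta ~ -(pi^2 + 4*pi^2) * psi
```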
Calculation of CNOPs
The discretized equations governing the evolution of perturbations x′ on a particular state x̄ can be written as

dx′/dt = F(x̄ + x′) − F(x̄),  x′(0) = x′₀,   (6)

where t is time, x′(t) = (x′₁(t), x′₂(t), ..., x′_n(t)) is the perturbation state vector and F is a nonlinear differentiable operator. Furthermore, x′₀ is the initial perturbation and x̄ is the basic state, which we take here as a linearly stable steady state, with (x′, t) ∈ R^d × [0, t_e] and t_e < +∞. Suppose the initial-value problem (6) is well-posed and the nonlinear propagator M is defined as the evolution operator of Eq. (6), which determines a trajectory from the initial time t = 0 to time t_e. Hence, for fixed t_e > 0, the state x′(t_e) = M(x′₀)(t_e) is the result of the time evolution at t = t_e of the initial perturbation x′₀ at t = 0. For a chosen norm ‖·‖ measuring x′, define the functional

J(x′₀) = ‖M(x′₀)(t_e)‖.   (7)

The perturbation x′₀δ is called the Conditional Nonlinear Optimal Perturbation (CNOP) with constraint condition C(x′₀) = ‖x′₀‖ ≤ δ if and only if

J(x′₀δ) = max over ‖x′₀‖ ≤ δ of J(x′₀).   (8)

The CNOP is the initial perturbation x′₀ whose nonlinear evolution attains the maximal value of the functional J at time t_e under the constraint condition ‖x′₀‖ ≤ δ; in this sense it is called "optimal" (Mu et al., 2003). The CNOP can be regarded as the most (nonlinearly) unstable initial perturbation superposed on the basic state.

Table 1. Standard values of the parameters for the barotropic quasi-geostrophic ocean model in the steady flow regime. For these values of the parameters, the dimensionless parameters have values α_τ = β = 2.8×10³.
To numerically calculate the CNOP for the double-gyre problem, the kinetic energy norm is used. Let L be a linear operator that maps x′₀ to the velocity vector, which is calculated from the values of ψ at four neighbouring grid points. The energy norm is then evaluated numerically as

‖x′₀‖_E = ‖L x′₀‖₂,

where ‖·‖₂ is the L₂-norm. Hence, the energy norm is calculated by multiplying the perturbation x′₀ by the matrix L and computing the norm of the result. Using this numerical approximation and Eq. (7), we find the following implementation J_num of J:

J_num(x′₀) = ‖L M(x′₀)(t_e)‖₂².   (12)

It is easy to derive the gradient,

∇J_num(x′₀) = 2 Mᵀ Lᵀ L M(x′₀)(t_e),

where M is the tangent linear model and the superscript T indicates the transpose. The constraint function is numerically implemented likewise as

C_num(x′₀) = ‖L x′₀‖₂².

The cost function is evaluated using forward integration. Here, the perturbation x′₀ is added to the basic steady state x̄. This state is then propagated forward. At t = t_e the basic steady state is subtracted and the kinetic energy norm of the perturbation is calculated. The gradient is evaluated by backward integration with the adjoint model Mᵀ. The same techniques developed for the implicit data assimilation (Terwisscha van Scheltinga and Dijkstra, 2005) are used here. We use the same tangent linear model, which is stored during the forward integration. For the evaluation of the gradient, the stored tangent linear model is transposed and used as the adjoint.
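A standard sanity check for an adjoint-based gradient such as the one above is comparison against finite differences. The sketch below is ours, with a toy quadratic standing in for J_num; the helper name is hypothetical.

```python
import numpy as np

def fd_gradient_check(J, grad_J, x0, eps=1e-6, n_dirs=5, seed=0):
    """Compare an analytically computed gradient with central finite differences.

    J: callable cost function; grad_J: callable returning its gradient at x0.
    Returns the worst relative error over n_dirs random unit directions.
    """
    g = grad_J(x0)
    rng = np.random.default_rng(seed)
    worst = 0.0
    for _ in range(n_dirs):
        e = rng.normal(size=x0.shape)
        e /= np.linalg.norm(e)
        fd = (J(x0 + eps * e) - J(x0 - eps * e)) / (2.0 * eps)  # directional derivative
        worst = max(worst, abs(fd - g @ e) / max(abs(fd), 1e-12))
    return worst

# Toy quadratic J(x) = ||A x||^2 with exact gradient 2 A^T A x:
A = np.array([[2.0, 1.0], [0.0, 3.0]])
J = lambda x: float(np.sum((A @ x) ** 2))
gJ = lambda x: 2.0 * A.T @ (A @ x)
print(fd_gradient_check(J, gJ, np.array([0.3, -0.7])))  # should be ~1e-8 or smaller
```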
The constrained optimization problem (8) is implemented as

maximize J_num(x′₀) subject to C_num(x′₀) ≤ δ²,

and is solved using the NAG routine E04UCF. This routine uses a method that is essentially identical to the one discussed in Gill et al. (1986). The basic structure uses a Sequential Quadratic Programming method (Gill et al., 1981) to solve a quadratic subproblem along the search direction and uses Lagrange multipliers for the constraints.
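To indicate how such a constrained maximization can be set up with generally available tools, the sketch below poses a CNOP-type problem for a toy two-variable system using scipy.optimize.minimize with the SLSQP method, an SQP algorithm in the same family as the NAG routine used here. The toy dynamics, the norm and all names are ours, not the paper's model.

```python
import numpy as np
from scipy.optimize import minimize

def propagate(x0, te=1.0, dt=1e-3):
    """Nonlinear evolution of a toy perturbation model (stands in for M)."""
    x = np.array(x0, dtype=float)
    for _ in range(int(te / dt)):
        # linearly stable linear part plus a quadratic nonlinearity
        dx = np.array([-0.1 * x[0] + x[1] + x[0] * x[1],
                       -x[0] - 0.1 * x[1] + x[0] ** 2])
        x += dt * dx
    return x

delta = 0.1
neg_J = lambda x0: -np.sum(propagate(x0) ** 2)  # maximize J = ||M(x0)(te)||^2
cons = {"type": "ineq",
        "fun": lambda x0: delta**2 - np.sum(x0**2)}  # ||x0||^2 <= delta^2

best = None
for seed in range(5):  # several starts, since the problem is non-convex
    x_init = np.random.default_rng(seed).normal(scale=delta / 2, size=2)
    res = minimize(neg_J, x_init, method="SLSQP", constraints=[cons])
    if best is None or res.fun < best.fun:
        best = res
print("CNOP (toy):", best.x, " J =", -best.fun)
```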
Results
In the results below, we first consider the case of symmetric wind forcing (σ =0) and subsequently the case of a slightly asymmetric wind forcing (σ =0.05).
Symmetric case
For the case σ = 0, the structure of the steady solutions is shown in the bifurcation diagram in Fig. 1a, where the value of the asymmetry of the streamfunction, defined in Eq. (16), is plotted against Re = UL/A_H. At large values of A_H (small Re), the anti-symmetric double-gyre flow (Fig. 1b) is the unique state. When the lateral friction is decreased, this flow becomes unstable at the pitchfork bifurcation P₁ and two branches of stable asymmetric states appear for smaller values of A_H (larger Re). The solutions on these branches (Fig. 1c, d) have the jet displaced either northward (negative asymmetry) or southward (positive asymmetry) and are exactly symmetrically related for the same value of Re.
We first focus on the case Re = 25, which is in the asymptotically stable regime of the anti-symmetric state. For δ = 0.1, the streamfunction patterns of the CNOPs are shown for four different values of t_e = 7, 14, 21, 28 days in Fig. 2. The pattern of the CNOP in Fig. 2a is similar to the pattern of the most energetic disturbance at a similar value of β (Fig. 7b of Dijkstra and De Ruijter, 1996) as determined through energy stability analysis. As the energy stability boundary Re_E is located near Re_E ≈ 10 (Fig. 5 of Dijkstra and De Ruijter, 1996), this shows that the results of the energy stability analysis are still relevant for short times t_e in the conditional stability regime (Joseph, 1976). When the CNOP is computed for larger times t_e, the pattern keeps the same symmetric three-cell structure, but the cells at the top and bottom increase in size (Fig. 2b-d).
The streamfunction patterns at t_e days, when the CNOPs in Fig. 2 are taken as initial conditions, are shown in Fig. 3. The pattern of Fig. 3a results after 7 days when the steady state at Re = 25 is perturbed with the pattern in Fig. 2a with δ = 0.1. It has not changed much in shape but has become more localized in the western boundary region. The patterns of Fig. 3b-d arise at 14, 21 and 28 days when the steady state is perturbed with the patterns in Fig. 2b-d, respectively, with δ = 0.1. The final pattern in Fig. 3d is recognized as the least stable normal mode, the P-mode in Simonnet and Dijkstra (2002). Hence, the CNOPs for larger times t_e induce a response in the direction of the normal mode, which indeed controls the long-time evolution.
The energy norm of the perturbation at t_e for the steady state at Re = 25 and several values of δ = 0.1, 0.25 and 0.50 is plotted in Fig. 4 as a function of t_e. For δ = 0.1, the values correspond to the amplitudes of the patterns in Fig. 3. For each value of δ, the energy norm flattens for larger times t_e, and the value increases with increasing δ.
We next consider the case Re = 50 on the asymmetric branches, for which two asymmetric steady solutions (the jet-up state and the jet-down state) are linearly stable. For δ = 0.1, the streamfunction pattern of the CNOP of the jet-down state is shown for t_e = 7 days in Fig. 5a. The pattern is no longer symmetric because of the asymmetry of the background state, and it is already quite localized near the western boundary current region. When the steady state is perturbed with the CNOP with δ = 0.1, the deviation from the steady state after 7 days (Fig. 5b) shows a bipolar pattern resembling one phase of a Rossby basin mode. This is an oscillatory normal mode to which the steady state becomes unstable at slightly larger Re (Dijkstra and Katsman, 1997). Figure 5c-d shows the CNOP and its evolution after 7 days for the jet-up steady state at Re = 50. The patterns are simply related to those in Fig. 5a-b by the reflection symmetry. There are very minor differences due to the accuracy settings of the minimization algorithm and the implicit time-stepping scheme. For both asymmetric steady states the curves of the final amplitude of the energy norm versus t_e are the same, due to the reflection symmetry. For Re = 50 the time scale of flow changes in the system is set by the gyre advection, which is a few years. For each δ, the energy norm (not shown) increases monotonically over the range of t_e, contrary to the case Re = 25 (Fig. 4), where saturation occurs over a period of a month.
Asymmetric case
When the wind stress is taken slightly asymmetric, an imperfect pitchfork bifurcation results, as can be seen in the bifurcation diagram for σ = 0.05 in Fig. 6a. The jet-up solution is now continuously connected with the nearly anti-symmetric solution at small values of Re. On the other hand, the jet-down solution becomes an isolated branch. The asymmetric wind-stress forcing gives a preference for the jet-up solution, since the easterlies in the northern part of the domain are slightly weaker than those in the southern part of the domain. The position of the saddle-node bifurcation (at Re ≈ 54 for σ = 0.05; Fig. 6a) shifts to larger values of Re when σ increases. Along the branches for σ = 0.05, the value of the dimensionless viscous dissipation function is plotted in Fig. 6b. As can be seen, the values differ between the jet-up and jet-down solutions at similar values of Re, with the (stable) jet-down steady state having the lower viscous dissipation. For σ = 0.05 and Re = 60, the CNOPs for fixed δ = 0.1 and t_e = 7 days are plotted for both the jet-up and jet-down solutions in Fig. 7. The patterns now differ slightly between the two cases. The pattern for the jet-down state seems less deformed from the symmetric case (compare Fig. 7a with Fig. 5a). The CNOP pattern for the jet-up solution, on the contrary, has deformed substantially (compare Fig. 7c with Fig. 5c). The evolution of both CNOPs eventually leads to anomalies with a pattern resembling a Rossby basin mode, just as in the symmetric case.
For both states, the final amplitude of the energy norm is plotted against t_e for several values of δ in Fig. 8. The solid curves are those for the jet-up steady state, while the dashed ones are those for the jet-down solution. For all values of δ there is a clear difference between the CNOP evolution from the two states. For t_e < 16 days, the final amplitude of the energy norm is largest for the jet-down state, i.e., the state with the lower viscous dissipation; for t_e > 16 days the opposite occurs. In these results, equilibration of the amplitude of the perturbations occurs on a longer (advective) time scale.
Conclusions
In this paper, we have explored the development of finite amplitude perturbations on linearly stable steady states of the double-gyre flow in a barotropic quasi-geostrophic model by determining the Conditional Nonlinear Optimal Perturbations (CNOPs). These are the perturbations to the flow which have an optimal nonlinear evolution at a time t_e (in a chosen norm) under the condition of a bound δ on the norm of the initial perturbation. The i4D-Var methodology as presented in Terwisscha van Scheltinga and Dijkstra (2005) was easily adapted to compute these CNOPs efficiently and hence provides a technique to determine CNOPs for fairly general systems of partial differential equations. By calculating the CNOPs for the symmetric (σ = 0) double-gyre flow, we have added another detail to the picture of the behavior of this flow system as Re is changed. Up to the energy stability boundary Re_E ≈ 10 (as determined in Dijkstra and De Ruijter, 1996) the anti-symmetric flow is monotonically stable, i.e., the kinetic energy of every finite amplitude perturbation decays monotonically to zero. Just above Re_E, there exist perturbation patterns whose kinetic energy grows in time, and the CNOPs are the ones with optimal growth under the conditions of chosen t_e and δ. The patterns of these CNOPs are basin-wide and their spatial structure corresponds to that of the non-normal modes found in Moore et al. (2002). For small δ, the growth of the CNOPs is similar to that of the non-normal modes, but for large δ it may be larger. For Re < Re_L (the first pitchfork bifurcation) these CNOPs evolve in time to patterns resembling the least stable normal modes. Certainly, as soon as Re > Re_L, the anti-symmetric state becomes linearly unstable and the perturbations with the largest growth rates are the normal modes.

Fig. 8. The energy norm of the perturbation at t_e, J(x′₀δ) as defined by Eq. (12), against t_e, for different δ, for the steady state at Re = 60 and σ = 0.05. The solid curves are for the jet-up solution while the dashed curves are for the jet-down solution.
For Re > Re_L, two asymmetric linearly stable steady states exist, which are (by their simultaneous existence) unstable to finite amplitude perturbations. The CNOPs of the two solutions (at the same Re) are related by the symmetry, and these patterns project during their evolution onto the normal mode patterns (associated with the first Hopf bifurcation on the asymmetric branches), which is most clearly seen at large evolution times t_e. For the slightly asymmetric case, we showed that the growth of finite amplitude perturbations is different for the jet-up and jet-down steady states at similar Re. The physics of this difference is likely related to the difference in the value of the viscous dissipation function of each steady state.
The separatrices (of the attraction basins) are very difficult to calculate for the double-gyre flows; for the 60×40 grid used here, a system with 4800 degrees of freedom results. However, the CNOPs may provide information on the finite amplitude stability boundaries in multiple-equilibria regimes. One can vary δ at fixed t_e and determine for which critical δ the time evolution of the CNOP no longer returns to the original steady state. Such finite amplitude stability boundaries were determined in Mu et al. (2004) for a simple box model (with 2 degrees of freedom). As this is not an easy computation for the double-gyre flow, with a very large CPU time needed for the minimization process, it is outside the scope of this paper.
Fig. 1. (a) Bifurcation diagram for the double-gyre case (σ = 0) in a square basin, with the asymmetry of the streamfunction, defined in Eq. (16), plotted against the control parameter Re = UL/A_H. The energy stability boundary Re_E is about Re_E ≈ 10 (Dijkstra and De Ruijter, 1996). (b) Streamfunction ψ of the anti-symmetric steady state for Re = 25, (c) the jet-down steady state for Re = 50 and (d) the jet-up steady state for Re = 50. The contour values are scaled with respect to a maximum of ψ = 2.2 for (b), which represents a transport of 5.5 Sv, and a maximum of ψ = 1.1 for (c, d), which represents a transport of 10.9 Sv. The contour interval is 0.2.
Fig. 2. Patterns of the barotropic streamfunction ψ for the CNOPs of the steady state at Re = 25 for δ = 0.1 and (a) t_e = 7 days, with an absolute maximum of 0.20; (b) t_e = 14 days, with an absolute maximum of 0.17; (c) t_e = 21 days, with an absolute maximum of 0.13; and (d) t_e = 28 days, with an absolute maximum of 0.11.
Fig. 5. For Re = 50, σ = 0.0, δ = 0.1 and t_e = 7 days: (a) CNOP for the jet-down steady state; (b) deviation of the flow from the jet-down steady state at t = t_e; (c) CNOP for the jet-up steady state; (d) deviation of the flow from the jet-up steady state at t = t_e.
Fig. 6. (a) Bifurcation diagram for σ = 0.05, where the asymmetry of the streamfunction ψ is plotted against the control parameter Re. (b) Dimensionless viscous dissipation along the branches in (a).
Fig. 7. For the case Re = 60, σ = 0.05, δ = 0.1 and t_e = 7 days: (a) CNOP for the jet-down steady state; (b) deviation of the flow from the jet-down steady state at t = t_e; (c) CNOP for the jet-up steady state; and (d) deviation of the flow from the jet-up steady state at t = t_e.
"Environmental Science",
"Physics"
] |
Insights into the pore structure characteristics of the Lower Silurian Longmaxi Formation shale in the Jiaoshiba area, Southern Sichuan Basin, China
In this paper, the pore structure characteristics of shales and their controlling factors were analyzed by means of total organic carbon (TOC) analysis, X-ray diffraction (XRD) analysis, field emission scanning electron microscopy (FE-SEM) and low-pressure N2 adsorption (LPNA) analysis. Based on grey relational analysis, the controlling factors of the pore structure parameters were discussed. The results showed that the TOC contents range from 2.98 to 4.97%, and that the main minerals of the shales are quartz and clay minerals, with averages of 41.62 and 30.98%, respectively. Organic matter pores, interparticle pores, intraparticle pores, and micro-fractures are the main pore types determined by the FE-SEM observations. The pore volume of the shales is between 0.0637 and 0.1053 cm³/g, the specific surface area ranges from 16.44 to 37.61 m²/g, and the average pore size is between 11.20 and 15.50 nm. Organic matter and quartz have a positive influence on the specific surface area and the total pore volume, whereas the clay minerals have a negative impact. The shales have a wide range of pore sizes; the mesopores and macropores are the dominant contributors to the total pore volume, while the mesopores contribute most of the specific surface area. The TOC contents and quartz contents have the most significant effect on the total pore volume and the specific surface area, and the average pore size is mainly controlled by the quartz contents.
Introduction
Shale gas is an unconventional natural gas occurring in reservoir rocks dominated by organic-rich shales; it exists in free, adsorbed and dissolved states and is a clean and effective energy resource. Shale gas has become an important energy supply with the increasing shortage of conventional oil and gas resources all over the world (Jarvie et al. 2007; Clarkson et al. 2012; Curtis et al. 2002; Loucks et al. 2009; Jia et al. 2017). In 2015, the Energy Information Administration reported that global shale gas resources are estimated to be approximately 214.6 × 10¹² m³, of which 31.6 × 10¹² m³ are in China, accounting for 14.73% of the global total, indicating that China possesses significant exploration and development potential in the field of shale gas (EIA 2015). At present, numerous shale gas reservoirs have been discovered in China, in the Sichuan Basin, Ordos Basin, Bohai Bay Basin, Songliao Basin and Tarim Basin, among others (Zou et al. 2010, 2018; Guo et al. 2020a, b; Ding et al. 2013; Gao et al. 2018), and new theories of exploration and development have been applied; the production of shale gas in China continues to rise, with a production of about 15.4 × 10⁹ m³ in 2019 (Zhen et al. 2020; Fan et al. 2020). Among the many basins where shale gas reservoirs have been found, major breakthroughs in shale gas exploration and development have been made in the Sichuan Basin, and the commercial exploitation of shale gas was realized for the first time in the Jiaoshiba area of the Sichuan Basin (Ma 2019; Guo et al. 2020a, b).
Shales generally have varied mineral compositions, diverse pore morphologies and wide pore size distributions, indicating strong heterogeneity. The gas occurs mainly in adsorbed and free states in the pore spaces of shales (Jarvie et al. 2002; Ross et al. 2009). Previous studies have indicated that the pore structure of shales is an important factor affecting the occurrence state of shale gas (Hu et al. 2017; Wei et al. 2018). Therefore, research on the pore structure characteristics of shales is of great significance for the exploration and development of shale gas. Pore structure refers to the pore type, the pore morphology and the pore size distribution (Fu et al. 2015; Yan et al. 2018). It is particularly important to understand the pore size distribution and the parameters affecting the pore structure of the rocks. The pore structure is controlled by geological factors such as mineral composition and total organic carbon (TOC) content. Most scholars have used the least squares method to investigate the relationships among pore structure parameters, TOC and mineral compositions (Zhang et al. 2019; Wang et al. 2019). These studies suggest that the pore structure parameters of rocks are related to many factors, such as TOC, quartz and clay minerals. However, there are relatively few reports on the main controlling factors of the pore structure parameters of shales, so it is necessary to introduce mathematical methods to identify them. Grey relational analysis is a method that can provide solutions to multifactor problems and reveal the inner relationships among variables without requiring large sample sizes (Pandya et al. 2020; Jaiprakash et al. 2020). It has been adopted in several parameter correlation studies (Chen et al. 2009; Mondal et al. 2013; Wen et al. 2022), which have shown that grey relational analysis can quantify controlling factors as a mathematical method. Therefore, it is appropriate to use grey relational analysis to quantitatively evaluate the influences of TOC and mineral compositions on the pore structure characteristics of the shales from the Lower Silurian Longmaxi Formation.
The goal of this paper is to investigate the pore structure of shales from the Lower Silurian Longmaxi Formation in the Jiaoshiba area of the southern Sichuan Basin, China, and its controlling factors, using X-ray diffraction (XRD) analysis, total organic carbon (TOC) analysis, field emission scanning electron microscopy (FE-SEM) and low-pressure N2 adsorption (LPNA) analysis. The relationships among the pore volume, the specific surface area and the average pore size were studied. Meanwhile, the pore size distributions of the shales were examined using the Barrett-Joyner-Halenda (BJH) method and the LPNA data. Finally, the controlling factors of the pore structure parameters were studied by grey relational analysis.
Data
In order to ensure that the shale core samples accurately reflect the shale characteristics in the study area, a total of 34 shale samples were collected from several wells (at depths of 2800-3200 m) in the Lower Silurian Longmaxi Formation in the Sichuan Basin, China. The Sichuan Basin can be divided into six tectonic zones and has undergone multiple episodes of tectonic movement. In the basin, the marine shale strata of the Upper Ordovician Wufeng Formation and the Lower Silurian Longmaxi Formation are widely distributed. The Lower Silurian Longmaxi Formation shale, rich in organic matter and silica, was deposited in deep-water shelf environments (Zhao et al. 2017; Guo et al. 2020a, b). The lithology of the Longmaxi Formation shale mainly comprises black shale, gray-black shale and silty mudstone (Chen et al. 2011; Bai et al. 2013). The shale samples were divided into two batches: total organic carbon (TOC) and X-ray diffraction (XRD) analyses were performed on all 34 samples in the first batch, and 20 shale samples were selected for low-pressure nitrogen adsorption analysis in the second batch.
Experimental methods
The thirty-four shale samples were crushed into powder finer than 100 mesh and 100-mesh powder according to the experimental requirements. Prior to the TOC analysis, the powder finer than 100 mesh was treated with hydrochloric acid to remove carbonate, and TOC tests were then performed using a LECO CS230 carbon/sulfur analyzer. The 100-mesh powder was analyzed for XRD with an X'Pert PRO instrument according to the Chinese National Standards GB/T19145-2003 and GB/T18602-2001.
Seven shale samples were cut into 10 × 10 × 3 mm slices for FE-SEM analysis. Before the FE-SEM observation, the samples were argon-ion polished with a LEICA EM TIC 3X triple ion-beam polisher and then coated with gold to increase the surface conductivity. Subsequently, the samples were placed in the chamber of the field-emission environmental scanning electron microscope and evacuated until the required vacuum was reached. The FE-SEM observations were performed with an FEI Quanta 650 FEG, and all images were used for the analysis of pore types and morphology.
Twenty shale samples were crushed to grains of 60-80 mesh for the low-pressure nitrogen adsorption experiments, which were performed with the NOVA200e automatic specific surface area and porosity analyzer following the Chinese National Standards GB/T19587-2004 and GB/T21650.2-2008. Before the experiments, the grains were outgassed at 378 K for 24 h. During the experiments, the N2 adsorption/desorption isotherms of the shale samples were measured at 77 K over relative pressures ranging from 0.010 to 0.995.
Grey relational analysis
In this paper, grey relational analysis, which is commonly used to determine the interrelationships among multiple parameters, was used to explore the relationships between the pore structure parameters and the organic matter content and mineral composition, and to identify the main factors affecting the pore structure parameters. Compared with the least squares method, grey relational analysis has advantages in resolving interrelationships and determining the main controlling factors.
Based on grey relational analysis theory, the pore structure parameters are taken as the reference sequence, expressed by Eq. (1), and the TOC contents, the quartz contents, the feldspar contents, the carbonate mineral contents and the clay mineral contents are taken as the factor sequences, expressed by Eq. (2):

X_0 = {X_0(1), X_0(2), …, X_0(n)}  (1)

X_i = {X_i(1), X_i(2), …, X_i(n)}, i = 1, 2, …, m  (2)

where X_i is the i-th influencing factor, n is the number of sampling points and m is the number of influencing factors.

As the ranges and units of the data differ, the data are normalized to values between 0 and 1 by Eq. (3) or Eq. (4). When there is a positive correlation between the pore structure parameter and the TOC or mineral contents, Eq. (3) is used; otherwise, Eq. (4) is used:

X_i'(k) = (X_i(k) − min_k X_i(k)) / (max_k X_i(k) − min_k X_i(k))  (3)

X_i'(k) = (max_k X_i(k) − X_i(k)) / (max_k X_i(k) − min_k X_i(k))  (4)

The grey relational coefficient is computed from the normalized data by Eq. (5):

ξ_i(k) = (Δ_min + ρ Δ_max) / (Δ_i(k) + ρ Δ_max)  (5)

where Δ_i(k) = |X_0(k) − X_i(k)| is the absolute difference between X_0 and X_i at point k, Δ_min and Δ_max are the minimum and maximum of all Δ_i(k), and ρ is the distinguishing coefficient (conventionally taken as 0.5).

Because the large number of relational coefficients disperses the information, the grey relational grade is calculated by averaging the grey relational coefficients using Eq. (6):

γ_i = (1/n) Σ_{k=1}^{n} ξ_i(k)  (6)
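To make the procedure concrete, the following Python sketch (a minimal illustration, not the authors' processing code) normalizes a reference sequence and a set of factor sequences, computes the grey relational coefficients of Eq. (5) and averages them into grey relational grades as in Eq. (6). The sample values and the distinguishing coefficient of 0.5 are assumptions for illustration rather than data or settings reported in the paper.

```python
import numpy as np

def grey_relational_grade(reference, factors, positive, rho=0.5):
    """Grey relational analysis for one reference sequence.

    reference : (n,) array, e.g. total pore volume of n samples (Eq. 1)
    factors   : (m, n) array, e.g. TOC, quartz, feldspar, carbonate, clay (Eq. 2)
    positive  : (m,) booleans, True if the factor correlates positively with the reference
    rho       : distinguishing coefficient (0.5 is the conventional choice)
    """
    def normalize(x, pos):
        # Eq. (3) for positive correlation, Eq. (4) for negative correlation
        rng = x.max() - x.min()
        return (x - x.min()) / rng if pos else (x.max() - x) / rng

    x0 = normalize(np.asarray(reference, float), True)
    xi = np.array([normalize(np.asarray(f, float), p)
                   for f, p in zip(factors, positive)])

    delta = np.abs(x0 - xi)                                # |X0(k) - Xi(k)|
    d_min, d_max = delta.min(), delta.max()
    coeff = (d_min + rho * d_max) / (delta + rho * d_max)  # Eq. (5)
    return coeff.mean(axis=1)                              # Eq. (6): grey relational grade

# Hypothetical illustration (placeholder values, not measured data):
pore_volume = [0.064, 0.081, 0.095, 0.105]        # cm3/g
factors = [
    [2.1, 3.4, 4.1, 4.9],      # TOC, %
    [35.0, 40.0, 44.0, 48.0],  # quartz, %
    [5.0, 6.0, 4.5, 5.5],      # feldspar, %
    [12.0, 10.0, 9.0, 8.0],    # carbonates, %
    [38.0, 32.0, 28.0, 22.0],  # clay minerals, %
]
positive = [True, True, True, False, False]
grades = grey_relational_grade(pore_volume, factors, positive)
for name, g in zip(["TOC", "Quartz", "Feldspar", "Carbonate", "Clay"], grades):
    print(f"{name}: {g:.3f}")
```

Ranking the resulting grades from largest to smallest gives the ordering of controlling factors of the kind reported later in this paper.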
TOC and mineralogical compositions
The TOC and XRD results of the shale samples are presented in Fig. 1. As shown in Fig. 1, the TOC contents range from 2.08 to 4.97%, with an average of 3.46%, suggesting that the shales from the Longmaxi Formation in the Jiaoshiba area are rich in organic matter. Meanwhile, the mineralogical compositions span a wide range, indicating that the mineralogy of the shales is complex. The shale samples are mainly composed of quartz and clay minerals, followed by carbonates and feldspars. The brittle mineral contents are between 46.46 and 64.32%, with an average of 54.64%, whereas the clay mineral contents range from 21.45 to 39.08%, with an average of 30.07%, showing that the brittle mineral contents of the shales are relatively high while the clay mineral contents are relatively low. The main clay mineral is illite, ranging from 14.82 to 29.27%, with a mean of 22.15%. The pyrite contents range from 2.68 to 5.00%, with an average of 3.61%. These findings are consistent with previous studies on shales from the Longmaxi Formation in the Sichuan Basin of China (Ji et al. 2020).
Pore morphology
As shown in Fig. 2, the shale samples develop various pore types, and the pore sizes span a wide range from the nanoscale to the microscale. In this work, the pore types of the shale samples are categorized into organic matter (OM) pores and inorganic pores, the latter including interparticle (Inter) pores, intraparticle (Intra) pores and micro-fractures.
Previous studies (Hu et al. 2017; Liu et al. 2020) have shown that a certain level of thermal maturation is a necessary condition for the development of OM pores, which means that the development of OM pores is closely related to thermal evolution and the hydrocarbon generation process. OM pores are widely distributed within the organic matter in the shale samples (Fig. 2d, f, g, h) and are mainly spherical. Meanwhile, OM pores provide a large specific surface area, which means there are a large number of gas adsorption sites in the pore structure; this is conducive to the adsorption and storage of shale gas.
The interparticle pores, developed between mineral grains, are mainly influenced by the degree of compaction, cementation and overburden pressure. As shown in Fig. 2a, e and j, the interparticle pores occur mainly as triangles, polygons and irregular slits and are randomly distributed in the shale samples. In addition, interparticle pores commonly have good connectivity and can provide effective seepage channels for shale gas; their development is therefore conducive to the storage and migration of shale gas. Besides interparticle pores, intraparticle pores are observed in Fig. 2e and f. The intraparticle pores are spaces developed inside mineral particles such as feldspars and calcites; their morphology is irregular and their connectivity is poor. As shown in Fig. 2, the intraparticle pores are less well developed than the interparticle pores.
In the development of shale gas, microfractures are very important because they provide necessary seepage channels and greatly improve the seepage capacity of shale gas. The development of microfractures is mainly controlled by tectonic movement, mineral composition, abundance of organic matter, pressure distribution and sedimentary microfacies (Dong et al. 2018). Image observations suggest that the microfractures in the study area are dominated by tectonic microfractures, with widths ranging from dozens of nanometers to several microns. In addition, microfractures are developed at the edges of organic matter and other minerals (Fig. 2d, f, g, h); this kind of microfracture forms through the dehydration and shrinkage of organic matter during organic evolution.

Figure 3 (where V is the adsorption capacity and P/Po is the relative pressure) presents the LPNA results for some samples. According to this figure, the desorption branch does not lag behind the adsorption branch, so there is no hysteresis loop, suggesting that the shale samples have a relatively closed pore structure. According to the classification proposed by the International Union of Pure and Applied Chemistry (IUPAC) (Sing et al. 1985), the pore shapes represented by these classifications are cylindrical pores, ink-bottle pores, wedge-shaped pores (open at one or both ends) and slit-like (crack) pores. The N2 adsorption-desorption isotherms of the shale samples are similar to the H3 type, with some H4-type characteristics, indicating that the shale samples mainly contain wedge-shaped and slit-like pores and have irregular pore structures.
Pore size distributions
The plots of dS/d(logD) vs. D (where S is the specific surface area and D is the pore size) or dV/d(logD) vs. D (where V is the pore volume) can be used to present the pore size distribution (Tian et al. 2013; Xiong et al. 2015). Based on the results of the LPNA experiments, the specific surface area and pore volume distributions of some shale samples, calculated with the BJH method, are presented in Fig. 4. From Fig. 4, the distributions differ in shape from sample to sample, and the pore size distributions of the Longmaxi Formation shales are broad. According to the pore size classification of the IUPAC (Sing et al. 1985), pores are divided into micropores (< 2 nm), mesopores (2-50 nm) and macropores (> 50 nm). Histograms of the pore volume and specific surface area in each pore size class for the Longmaxi Formation shale samples are shown in Fig. 5. From Fig. 5a, the pore volume of the shale samples is mainly composed of mesopores and macropores: mesopores contribute an average of 65.42% and macropores an average of 31.51% of the pore volume, together accounting for about 97% of the total. For the specific surface area, the contribution of mesopores is dominant (Fig. 5b), accounting for 66.56-82.79% with an average of 74.08%, whereas the average contributions of micropores and macropores to the specific surface area are 17.77 and 8.15%, respectively. The results show that the mesopores and macropores of the Longmaxi Formation shales provide most of the total pore volume, while the mesopores contribute most of the specific surface area.

Fig. 4 Specific surface area distributions (a) and pore volume distributions (b) with pore size, reconstructed from the isotherms of some shale samples using the BJH method

Fig. 5 Percentages of the total pore volume (a) and specific surface area (b) under the IUPAC classification
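As a hedged illustration of how such class-by-class percentages can be obtained, the short Python sketch below bins hypothetical BJH output (incremental pore volumes and surface areas per pore size bin) into the IUPAC micropore, mesopore and macropore classes and reports each class's contribution; the numbers are placeholders, not the measured distributions of this study.

```python
import numpy as np

# Hypothetical BJH output: pore diameters (nm) with the incremental pore
# volume (cm3/g) and incremental surface area (m2/g) of each size bin.
diameters = np.array([1.5, 1.8, 3.0, 8.0, 20.0, 45.0, 80.0, 150.0])
dV = np.array([0.002, 0.003, 0.015, 0.020, 0.015, 0.010, 0.012, 0.008])
dS = np.array([4.0, 5.0, 8.0, 5.0, 2.0, 1.0, 0.5, 0.3])

def iupac_shares(d, increments):
    """Percentage contribution of micro-, meso- and macropores (IUPAC classes)."""
    classes = {
        "micropores (<2 nm)": d < 2,
        "mesopores (2-50 nm)": (d >= 2) & (d <= 50),
        "macropores (>50 nm)": d > 50,
    }
    total = increments.sum()
    return {name: 100.0 * increments[mask].sum() / total
            for name, mask in classes.items()}

print("Pore volume shares:", iupac_shares(diameters, dV))
print("Surface area shares:", iupac_shares(diameters, dS))
```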
Pore structure parameters
Based on the LPNA results, the specific surface area was calculated using the BET model, and the N2 adsorption volume at p/p0 ≈ 0.98 was used to estimate the total pore volume. The pore structure parameters of the shale samples are presented in Table 1. As can be seen from this table, the pore volume ranges from 0.06370 to 0.10532 cm3/g, with an average of 0.08143 cm3/g; the specific surface area ranges from 16.44 to 37.61 m2/g, with an average of 25.87 m2/g; and the average pore size ranges from 11.20 to 15.50 nm, with an average of 12.78 nm, which falls in the mesopore range of the IUPAC classification.
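The following Python sketch illustrates, under common textbook assumptions, how a BET specific surface area, a single-point total pore volume at p/p0 ≈ 0.98 and an average pore size of the form 4V/S can be estimated from an N2 isotherm. The isotherm values are invented for illustration, and the BET fit range (p/p0 = 0.05-0.30), the N2 cross-sectional area (0.162 nm2) and the liquid N2 molar volume (34.7 cm3/mol) are conventional choices rather than parameters stated in the paper.

```python
import numpy as np

# Hypothetical N2 isotherm at 77 K: relative pressure and adsorbed volume (cm3 STP/g)
p_rel = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.60, 0.90, 0.98])
v_ads = np.array([6.0, 7.2, 8.1, 8.9, 9.6, 10.3, 14.0, 25.0, 52.0])

# --- BET specific surface area (linear fit over p/p0 = 0.05-0.30) ---
mask = (p_rel >= 0.05) & (p_rel <= 0.30)
x = p_rel[mask]
y = x / (v_ads[mask] * (1.0 - x))        # BET transform: 1/[v((p0/p) - 1)]
slope, intercept = np.polyfit(x, y, 1)   # linear BET plot
v_m = 1.0 / (slope + intercept)          # monolayer capacity, cm3 STP/g

N_A = 6.022e23          # molecules/mol
sigma_N2 = 0.162e-18    # m2 per adsorbed N2 molecule
V_molar = 22414.0       # molar gas volume at STP, cm3/mol
S_BET = v_m * N_A * sigma_N2 / V_molar   # specific surface area, m2/g

# --- Total pore volume from the uptake near saturation (p/p0 ~ 0.98) ---
v_at_098 = v_ads[np.argmin(np.abs(p_rel - 0.98))]
V_pore = v_at_098 * 34.7 / 22414.0       # cm3 of liquid N2 per gram of sample

# --- Average pore size assuming cylindrical pores: d = 4V/S ---
d_avg_nm = 4.0 * (V_pore * 1e-6) / S_BET * 1e9   # cm3/g -> m3/g, then m -> nm

print(f"S_BET = {S_BET:.1f} m2/g, V_pore = {V_pore:.4f} cm3/g, d_avg = {d_avg_nm:.1f} nm")
```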
Relationships between pore structure parameters and TOC and mineral compositions
As shown in Fig. 6, the pore volume and specific surface area of the shale samples are positively correlated, with a correlation coefficient of 0.9795, while the average pore size is negatively correlated with the pore volume and the specific surface area, with correlation coefficients of 0.7412 and 0.8393, respectively, indicating that shales with smaller pores are conducive to the storage of shale gas (Bustin et al. 2008; Ross et al. 2009; Xiong et al. 2015; Liu et al. 2015). The relationships among the TOC contents, the clay mineral contents, the quartz contents and the total pore volume, the specific surface area and the average pore size (shown in Figs. 7, 8 and 9) were established to explore the factors influencing pore development in the Longmaxi Formation shales in the study block. From Figs. 7-9, the TOC contents, the clay mineral contents and the quartz contents are the main factors influencing the total pore volume, the specific surface area and the average pore size of the shales.

Fig. 6 Relationships between (a) pore volume and specific surface area, (b) average pore size and pore volume, and (c) average pore size and specific surface area in the shale samples
From Fig. 7, the total pore volume of the shales is positively correlated with the TOC contents (Fig. 7a) and the quartz contents (Fig. 7b), with correlation coefficients of 0.8060 and 0.7470, respectively, and negatively correlated with the clay mineral contents (Fig. 7c), with a correlation coefficient of 0.6298. Meanwhile, from Fig. 8, the specific surface area is positively correlated with the TOC contents (Fig. 8a) and the quartz contents (Fig. 8b), with correlation coefficients of 0.8143 and 0.7560, respectively, and negatively correlated with the clay mineral contents (Fig. 8c), with a correlation coefficient of 0.6117. Shale samples from the Longmaxi Formation with higher TOC contents have larger total pore volumes and specific surface areas, and the total pore volume and specific surface area increase with increasing quartz content. This may be because the quartz is of biogenic origin (siliceous organisms); with increasing siliceous content in the shales, the number of micropores increases, leading to an increase in the total pore volume and the specific surface area. In contrast, as shown in Fig. 9, higher clay mineral contents can enhance compaction during diagenesis, leading to a denser mineral arrangement and reducing the total pore volume and specific surface area. This agrees in part with the conclusions of previous scholars (the clay minerals had a negative influence on the specific surface area and pore volume, especially the micropore structure parameters) and differs in part (the clay-rich rocks have higher porosity and permeability than the biogenic silica-rich shales or the carbonate-rich shales) (Bustin et al. 2008; Ross et al. 2009; Xiong et al. 2015). Finally, the average pore size is negatively correlated with the total pore volume and the specific surface area, in agreement with previous work on shales from the Lower Cambrian Niutitang Formation (Yang et al. 2014; Liu et al. 2020) and the Upper Ordovician Wufeng Formation.
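The correlation coefficients quoted in this section are of the kind produced by a least-squares linear fit. The brief Python sketch below (with hypothetical values, not the measured data of this study) computes the coefficient of determination R2 for one such pair of variables.

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination of a least-squares linear fit y ~ a*x + b."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a, b = np.polyfit(x, y, 1)               # slope and intercept of the fit
    residuals = y - (a * x + b)
    ss_res = np.sum(residuals ** 2)           # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)      # total sum of squares
    return 1.0 - ss_res / ss_tot

# Hypothetical values for illustration only (wt.% and cm3/g):
toc = [2.1, 2.8, 3.3, 3.9, 4.4, 4.9]
pore_volume = [0.065, 0.072, 0.079, 0.088, 0.096, 0.104]
print(f"R^2 (TOC vs. pore volume) = {r_squared(toc, pore_volume):.4f}")
```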
The controlling factors on the pore structure characteristics
In this work, the grey relational analysis method was introduced to identify the important controlling factors of the pore structure characteristics. The grey relational coefficients and grey relational grades for the different pore structure parameters, including the total pore volume, the specific surface area and the average pore size, are presented in Tables 2, 3 and 4, respectively. The higher the grey relational grade, the stronger the correlation with the pore structure parameter. From Tables 2-4, the order of the grey relational grades influencing the total pore volume is TOC > quartz > clay minerals > feldspars > carbonates, the order for the specific surface area is TOC > quartz > feldspars > clay minerals > carbonates, and the order for the average pore size is quartz > TOC > feldspars > clay minerals > carbonates. Therefore, based on grey relational theory, the TOC contents and the quartz contents have the highest grey relational grades for the total pore volume and the specific surface area, indicating that they have an obvious effect on these two parameters, while the average pore size is mainly influenced by the quartz contents.

Fig. 9 Relationships between the clay mineral contents and (a) total pore volume, (b) specific surface area and (c) average pore size
Conclusions
In this study, the pore structure characteristics and controlling factors of shale samples from the Lower Silurian Longmaxi Formation in the southern Sichuan Basin of China were studied by TOC, XRD, FE-SEM, LPNA and grey relational analysis. The major conclusions are as follows: (1) The shale samples are rich in organic matter, with TOC contents between 2.94 and 4.97% and an average of 3.50%, and are mainly composed of quartz (average 40.60%) and clay minerals (average 30.07%). The OM pores, the interparticle pores, the intraparticle pores and the micro-fractures are the main pore types determined by the FE-SEM observations.
(2) The pore volume of the shale samples is from 0.0637 to 0.1053 cm3/g, the specific surface area ranges from 16.44 to 37.61 m2/g, and the average pore size is between 11.20 and 15.50 nm. The TOC and quartz contents have a positive influence on the specific surface area and total pore volume, while the clay mineral contents have a negative impact. (3) The TOC and quartz contents have the most significant effect on the total pore volume and specific surface area, and the average pore size is mainly controlled by the quartz contents.
Funding This study was funded by the Open Fund of Shale Gas Evaluation and Exploitation Key Laboratory of Sichuan Province (No. YSK2022006) and the National Natural Science Foundation of China. | 5,686.8 | 2022-03-31T00:00:00.000 | [
"Geology",
"Materials Science"
] |