Why does costly signalling evolve? Challenges with testing the handicap hypothesis

Zahavi's handicap hypothesis (Grafen, 1990; Zahavi, 1975; Zahavi & Zahavi, 1997) is a popular explanation for the evolution of honest and costly signalling. The general idea is that individuals honestly signal their quality because signalling is costly and therefore low-quality individuals cannot afford to produce dishonest signals. However, this hypothesis is controversial for several reasons. (1) Zahavi suggested that selection favours the evolution of honest signals because of (and not despite) their costs, and he made the radical suggestion that when it comes to the evolution of signalling, natural selection favours waste rather than efficiency. (2) Zahavi argued that this idea is a general principle, not merely a hypothesis, which explains honest signalling in most or all contexts. (3) There are several versions of the handicap hypothesis, but attempts to provide theoretical support have largely failed. The main exception is a model proposed by Grafen (1990), which has become widely accepted among behavioural ecologists; however, his conclusions have been challenged (Bergstrom, Számadó, & Lachmann, 2002; Getty, 1998, 2006; Hurd, 1995; Lachmann, Számadó, & Bergstrom, 2001; Számadó, 1999, 2000, 2011). (4) There have been many attempts to empirically test the handicap hypothesis, but there is no consensus regarding how it might be tested (Kotiaho, 2001). Despite these difficulties, Polnaszek and Stephens (2014) recently conducted a study with trained blue jays, Cyanocitta cristata, to experimentally test the handicap hypothesis. They concluded that their findings provide the first experimental evidence that signal costs enforce honesty, and they interpreted their results to support the handicap principle. This experiment is unusually clever and insightful, and the findings have important implications for honest signalling and receiver psychology (Guilford & Dawkins, 1991). However, we raise several caveats about the theoretical background, interpretations and conclusions of the study, and we explain why this study and other attempts to test the handicap hypothesis will be problematic as long as there is not a clear theoretical model to test.

THE JAY TRAINING EXPERIMENT

In this experiment, pairs of blue jays occupying adjacent cages were trained to play a communication game in which one bird, the sender, could choose to hop onto one of two perches, which could be used as a signal about the state of the environment, and the receiver responded by selecting a perch on the same or opposite side as the sender, depending upon the signal it perceived (Polnaszek & Stephens, 2014). The sender could choose to send an honest or dishonest signal about the environment, depending on whether one of the two red lights in the signaller's cage (visible only to the signaller) was turned on or off, indicating the state for the given trial as either true or false. The birds were experimentally rewarded depending on their choices, and they were tested under two conditions. In the incentives-aligned treatment, there was mutual interest between signaller and receiver, as both birds were rewarded for choosing a response that corresponded to the state of the environment.
In the incentives-opposed treatment, there was a conflict of interest, as the signaller was interested in selecting the signal state regardless of the state of the environment, whereas the receiver was only rewarded if the response corresponded to the state of the environment. The authors also experimentally manipulated the cost of signalling by forcing the sender to take loops of shuttle flights between a third perch and its current position before it could use the signalling or the nonsignalling perch. The authors showed that when there was no conflict (incentives-aligned treatment), the jays produced honest signals, and increasing the cost of the signals had no effect on honesty. However, when they increased the conflict (incentives-opposed treatment), increasing the signalling costs affected honesty: when the costs of signalling were low, the jays were often dishonest (signals did not correspond to the state of the environment), whereas when the costs of signalling increased, the jays produced more honest signals. The study also showed that the receivers followed or trusted the signals more often when they were reliable. The authors concluded that their study provides the first experimental evidence demonstrating that signal costs stabilize honesty, and they imply that this finding confirms the handicap principle.

ZAHAVI'S HANDICAP PRINCIPLE

Rather than supporting Zahavi's handicap principle (Zahavi & Zahavi, 1997), the findings in this study contradict this proposal. The costs of signalling stabilized honesty, but only when there was a conflict of interest between signaller and receiver. To our knowledge, this study provides the first experimental evidence that signals need not be costly to be honest under shared interests, and that signal cost has no effect on honesty under such conditions. This result is theoretically expected, but it contradicts suggestions that the handicap hypothesis is a general principle that explains honest signalling (with and without conflicts of interest; Zahavi & Zahavi, 1997). Also, Zahavi assumed that honest signals must be perceptibly costly or wasteful, since this is the only way to demonstrate honesty, and yet the birds' shuttle flights (the costs that maintained honesty) could not be seen by the receivers. There are other, restricted versions of the handicap hypothesis, but as we explain next, these models were not supported either.

HANDICAPS AS STRATEGIC COSTS

The jay study was also interpreted to support a version of the handicap hypothesis proposed by Maynard Smith and Harper (1995), which views handicaps as strategic costs of signalling, and Polnaszek and Stephens (2014, p. 2) defined handicaps accordingly, i.e. 'any signal whose reliability is ensured by costs that exceed the minimal cost necessary to make the signal'. All signals have production or efficacy costs, which are necessary for a trait to transmit information or influence the behaviour of conspecifics, and the Maynard Smith and Harper (1995) version crucially predicts that signals have additional strategic costs (the cost component that maintains honesty under conflicts of interest). A cricket's song is costly to produce to reach females from afar (production costs), but the question is whether the males' songs are more costly than they need to be to reach female receivers. Do gazelles jump higher than they need to jump to signal their health to predators when stotting?
No one has proposed how to measure such strategic costs, and the jay experiment did not attempt to distinguish strategic versus efficacy costs of signalling, which is the basis for this definition of handicaps. Polnaszek and Stephens (2014, p. 6) also cited Grafen's (1990) strategic handicap hypothesis as the 'authoritative mathematical statement of the handicap principle'; however, criticisms of his model (Getty, 1998, 2006) and conclusions (Hurd, 1995; Lachmann et al., 2001; Számadó, 1999, 2011) were too lightly brushed off. Grafen's (1990) main results were that (1) signals are honest, (2) signals are costly and (3) signals are costlier for worse signallers, and yet these conditions have all been challenged by later models and empirical results (see Számadó, 2011 for a review). Signals need not be honest, not even on average, to evolve (Számadó, 2000). Honest signals need not be costly even under conflicts of interest (Bergstrom et al., 2002; Hurd, 1995; Lachmann et al., 2001; Számadó, 1999), and honest costly signals need not be costlier for poor-quality signallers (Getty, 1998, 2006).

THE STRATEGIC HANDICAP HYPOTHESIS

It is also unclear how the jay experiment provides evidence for, or a test of, Grafen's strategic handicap model. The versions of the model proposed by Grafen (1990) and Zahavi and Zahavi (1997) assume that the costs of signalling that enforce honesty are a strategic choice (where individuals can choose their level of investment) rather than an unavoidable constraint imposed on the signallers; for example, high-quality signallers could use low-intensity signals but they 'choose' not to, and vice versa. However, in the jay experiment the costs of shuttle flights were artificially forced on the signallers: the birds could not use the signalling perch before paying the full cost of the signal. In addition, an experimental test requires showing that the marginal cost of producing the same signal is greater for low- than high-quality individuals, but this hypothesis was not tested, for two reasons. First, the quality or condition of the birds was not known or examined, and quality was only mimicked by imposing two different conditions ('true' versus 'false') on the jays, which were signalled by red lights. This implementation is irrelevant to the jays' ability to bear the cost of signalling. Second, the model in the jay study is a differential benefit model (like the Sir Philip Sidney game; Maynard Smith, 1991), rather than a differential cost model (Grafen, 1990). The costs imposed on the signallers were the same in the two different conditions, and thus, by definition, there cannot be any difference in the marginal costs.

ACTION-RESPONSE GAME VERSUS HANDICAP MODEL

The authors constructed a simple model to derive the conditions of honesty for the jay experiment, and they cited Grafen's model (1990) as the 'authoritative cost condition' (Polnaszek & Stephens, 2014, p. 3) of honesty. However, the authors' model is an example of an action-response game (Hurd, 1995; Számadó, 1999) rather than a handicap model, and the conditions of honesty that can be derived from these games are different (see Appendix). The results of action-response games show that honest signals need not be costly for high-quality signallers, not even under conflict of interest (Bergstrom et al., 2002; Hurd, 1995; Lachmann et al., 2001; Számadó, 1999), contrary to previous authors' claims (Grafen, 1990; Maynard Smith & Harper, 1995; Zahavi & Zahavi, 1997), assuming that signal costs vary as a function of quality.
The explanation is that it is not the cost paid by 'high-quality' (i.e. true condition) signallers at the equilibrium that maintains honesty, but the potential cost of cheating for 'low-quality' (i.e. false condition) signallers (Hurd, 1995; Számadó, 1999). This potential cost of cheating will be paid at the equilibrium by high-quality signallers only if there is a constraint linking the signal cost paid by low-quality signallers to the cost paid by high-quality signallers. In terms of the jay experiment, if the experimenters impose a cost only on the 'false' condition, the system still remains honest, and individuals under the 'true' condition (i.e. 'high-quality' individuals) do not have to pay a cost at the equilibrium. Consequently, if individuals pay a cost under the 'true' condition, then it is only because the constraint imposed by the experimenters was chosen that way (i.e. they implemented a differential benefit model). Therefore, the results of the experiment cannot be used as evidence in favour of the necessity of such a cost (as assumed by the handicap models), as it only reflects the choice made by the experimenters.

INDICES VERSUS HANDICAPS?

The findings in the jay experiment are more consistent with another explanation for the evolution of honest signalling called the 'index hypothesis' (Maynard Smith & Harper, 1995, 2004). This hypothesis assumes that honesty is enforced by physical, developmental or physiological constraints that cannot be cheated, rather than by additional costs that evolve on top of the (efficacy) costs required to produce a minimal signal. Because the costs of signalling were experimentally manipulated as an unavoidable constraint, the findings are more consistent with the index hypothesis than the handicap hypothesis (Grafen, 1990; Maynard Smith & Harper, 1995). The index hypothesis is not controversial, but it is not considered to be a version of the handicap hypothesis, and classifying it as such would require redefining the handicap hypothesis.

LEARNING EXPERIMENTS?

Finally, we raise additional caveats about using such learning experiments for testing the handicap hypothesis, or any other ideas about the evolution of animal signals, i.e. adaptive behaviours, morphology or other phenotypic features that function to influence the behaviour of receivers (Maynard Smith & Harper, 1995; Searcy & Nowicki, 2005). The experimental design was set up to copy the structure of general action-response games, yet the elements of this game (the state of nature, the action used as a signal, the cost and the benefit) were all artificial (i.e. red light, perch hopping, flying loops and food pellets). It is unclear whether the signal in this study (perch hopping) functions as a signal in jays or other species. Moreover, it is unclear how such learning experiments can directly test hypotheses about the evolution of animal signals. Polnaszek and Stephens (2014, p. 6) acknowledged that their approach was a 'fairly drastic departure' and 'radically different' from the traditions of 'costly signalling' research. To justify their methods, the authors pointed out that there are similarities between learning and evolution and that new approaches are needed to test the handicap hypothesis. We agree with all of these points, but it is still unclear how this learning experiment can be extrapolated to test an evolutionary hypothesis.
The unstated assumption is that if experimentally increasing the costs of signalling results in honest signals when animals are trained to produce a signal, then selection will favour the evolution of such costs as a mechanism to enforce honesty. It remains unresolved how the costs of signalling evolve, and whether any proximate rewards for honesty that might occur in nature will provide enough fitness benefits to overcome the costs of signalling. We agree that such learning studies provide a valuable tool that allows one to experimentally manipulate variables that would otherwise be difficult or impossible to test, but they are more akin to so-called 'proof of concept' studies than empirical tests of the handicap hypothesis.

CONCLUSIONS

We raised these caveats regarding the theoretical background, interpretations and conclusions of the study by Polnaszek and Stephens (2014) to emphasize the problems with the handicap hypothesis and the challenges with testing this idea. Future studies should consider the theoretical objections to the handicap hypothesis, or provide more convincing justifications for why these critiques can be ignored. The critics of the handicap hypothesis do not question the potential role of signal costs in maintaining honesty; on the contrary, they classify their models as part of the 'costly signalling' paradigm. No one has shown how selection can possibly favour costly signals because of their costs (contrary to Zahavi, costly signals can only evolve despite, not because of, their costs), and the jay experiment falls short of providing such evidence. Future efforts to test the handicap hypothesis defined as strategic signalling costs (Maynard Smith & Harper, 1995) should be aware that distinguishing strategic from efficacy costs may not be possible even in principle. For example, if the information being transmitted by a sender and evaluated by a receiver is the cost of the signal, as Zahavi proposed, then all of the signalling costs are strategic. We suggest that the jay study provides evidence that uncheatable constraints can enforce honesty (index signal hypothesis; Maynard Smith & Harper, 2004), but studies are needed to find an explanation for the evolution of such constraints (Biernaskie, Grafen, & Perry, 2014). Despite our concerns, we commend the authors on their clever and innovative approach to studying animal signals. Showing that signal costs can enforce honesty is an important step, and we suggest that similar experiments have great potential to provide insights into the underlying proximate mechanisms that control receiver psychology (Guilford & Dawkins, 1991). Studies are needed to determine how costly signalling evolves, and whether costly signals function to enforce honesty (i.e. do low-quality individuals pay a higher marginal cost or receive more benefits than high-quality signallers?). Finally, it would be especially helpful if future studies would identify inconsistencies with, as well as support for, the handicap hypothesis.

APPENDIX

Table A1 gives the variables of a general action-response game (Hurd, 1995; Számadó, 1999), Table A2 gives the values of these variables according to the model by Polnaszek and Stephens (2014), and Table A3 gives the conditions of honesty (Hurd, 1995; Számadó, 1999) with the values used in the model by Polnaszek and Stephens (2014) substituted into these conditions. One can see that, assuming r = 0, we get a − b < c, the condition derived in Polnaszek and Stephens' (2014) article.
In contrast, Table A4 shows Grafen's conditions (Grafen, 1990; Maynard Smith & Harper, 1995) and the corresponding values according to the current game. Polnaszek and Stephens (2014) provide a different set of conditions, which is not surprising, as Grafen's conditions do not describe the conditions of honesty in action-response games (Hurd, 1995; Számadó, 1999).

[Tables A1-A3, listing the parameters and notations of the action-response game (Hurd, 1995; Számadó, 1999), the values used by Polnaszek and Stephens (2014), and the resulting conditions of honesty, are not reproduced here. Table A4 gives Grafen's condition of honesty, C_h/V_h < C_l/V_l; substituting the values of the current game gives c/1 < c/(a − b), which results in a − b < 1.]
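To make the appendix conditions concrete, the following is a minimal numerical sketch (ours, in Python; it is not from Polnaszek and Stephens, 2014, and all numeric values are illustrative assumptions). It checks the action-response honesty condition a − b < c discussed above, where a and b are the signaller's benefits from the receiver's two responses and c is the signal cost; we additionally assume that r denotes the cost of the non-signalling action, which the appendix sets to 0.

```python
# Minimal sketch (not from the original article): when is honesty stable
# in a simple action-response game, using the appendix notation?
# a: signaller's benefit when the receiver gives the preferred response
# b: signaller's benefit from the alternative response
# c: cost of producing the signal (the shuttle flights in the jay study)
# r: assumed cost of the non-signalling action (the appendix sets r = 0)
# All numeric values below are illustrative assumptions, not the study's data.

def cheating_pays(a: float, b: float, c: float, r: float = 0.0) -> bool:
    """A 'false-state' signaller gains a - b by faking the signal but pays
    the extra signal cost c - r; cheating pays when a - b > c - r."""
    return (a - b) > (c - r)

def honesty_is_stable(a: float, b: float, c: float, r: float = 0.0) -> bool:
    """Honesty is stable when cheating does not pay, i.e. when a - b < c - r
    (with r = 0 this is the condition a - b < c derived in the appendix)."""
    return not cheating_pays(a, b, c, r)

if __name__ == "__main__":
    # Low signal cost: cheating pays, so signals are often dishonest.
    print(honesty_is_stable(a=3.0, b=1.0, c=0.5))  # False
    # High signal cost: cheating no longer pays, honesty is stable.
    print(honesty_is_stable(a=3.0, b=1.0, c=2.5))  # True
```

Note that c enters the condition only as a deterrent against cheating; nothing requires honest signallers to pay it at equilibrium, which is the point made above about the potential cost of cheating.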
Effect of Intravenous Injection of Magnesium Sulphate on Intraoperative End-Tidal CO2 Level and Postoperative Pain in Laparoscopic Cholecystectomy

Background: Pain control and stabilizing hemodynamic indices are serious medical challenges, especially in anesthesia. Laparoscopic surgery is increasing in the world, and cholecystectomy surgery is no exception.

Objectives: This study investigated the effect of intravenous (IV) magnesium sulfate injection on intraoperative end-tidal CO2 (ETCO2) levels and postoperative pain in laparoscopic cholecystectomy.

Methods: This is a clinical trial. The sample size was calculated to be 64 people, who were selected among the patients who were candidates for laparoscopic surgery by convenience sampling. They were randomly assigned to intervention and control groups. The intervention group received magnesium sulfate (50 mg/kg) in normal saline (100 mL) within 1 h. The control group only received normal saline (100 mL). Systolic and diastolic blood pressures, ETCO2 level, heart rate, arterial oxygen saturation, pain level, and narcotic analgesic use in recovery were measured 2, 6, 12, and 24 h after surgery. The data were analyzed using 1-way analysis of variance (ANOVA) and repeated measures analysis.

Results: The mean systolic blood pressure and ETCO2 during recovery in the intervention group were lower than in the control group (P = 0.029 and P = 0.015). In the intervention group, analgesic consumption in recovery and 6 h after surgery was less than in the control group (P < 0.001). The mean pain score in the intervention group in recovery and 2, 6 (P < 0.001), and 12 h (P = 0.038) after surgery was significantly lower than in the control group.

Conclusions: Magnesium sulfate can be a suitable and safe supplement to reduce pain after surgery and reduce the use of narcotics. The current conclusion should be investigated on a larger scale of patients, with extended monitoring for postoperative pain over a longer period of time.

Laparoscopic cholecystectomy is a minimally invasive surgical procedure used to remove a patient's gallbladder. Since the early 1990s, this method has largely replaced open cholecystectomy. Laparoscopic cholecystectomy is currently used for the treatment of acute or chronic cholecystitis, symptomatic gallstones, biliary dyskinesia, acalculous cholecystitis, gallstone pancreatitis, and gallbladder masses or polyps. Postoperative pain is a complex physiological reaction to tissue damage. The main concern of patients undergoing surgery is the postoperative pain they will experience. Postoperative pain causes acute adverse physiological impacts, with manifestations in multiple organ systems, possibly causing significant morbidity. Pain that limits walking after surgery and increases stress-induced coagulation may increase the risk of deep vein thrombosis (DVT). Catecholamines released in response to pain may cause tachycardia and systemic hypertension, causing myocardial ischemia in predisposed patients.
Surgery and postoperative planning aim to reduce the pain level. Proper control of postoperative pain improves postoperative rehabilitation, short- and long-term recovery, and postoperative quality of life (1). Narcotic analgesics are associated with various complications (such as respiratory depression), leading to prescription of insufficient doses of narcotics that do not control pain well. Finding more effective drugs reduces the pain and costs for patients and hospitals while increasing postoperative satisfaction and quality of life (2, 3).

Despite all the advantages of laparoscopic surgeries, postoperative pain remains a basic concern for patients undergoing such surgeries. This medical problem may cause clinical and mental changes and increased complications, mortality rate, and costs, reducing the quality of life (4). Inefficient postoperative pain management may cause DVT, pulmonary embolism, coronary artery stress, atelectasis, pneumonia, poor wound healing, insomnia, and demoralization (5, 6). Carbon dioxide gas is usually blown into the abdomen to create pneumoperitoneum in laparoscopy (7, 8), moving the abdominal contents away from the intended site and providing a better background and visibility for the surgeon. However, systemic absorption of CO2 from the peritoneal cavity causes hypercarbia (9). On the other hand, when pneumoperitoneum is combined with the Trendelenburg position at an angle of 15-20°, it significantly impacts the patient's hemodynamics (10) by suddenly increasing the arterial blood pressure, increasing the peripheral vascular resistance, and reducing the cardiac output (11). Notably, a sudden increase in arterial blood pressure and heart rate may cause multiple injuries to the patient, which can be irreparable in patients with underlying heart disease (12). In the meantime, magnesium sulfate can be a suitable approach to reducing cardiac risks because it prevents catecholamine release from the adrenal gland and peripheral nerve endings (13). The challenge faced by anesthesiologists in laparoscopic surgeries is the effect of CO2 gas on patients during pneumoperitoneum. Magnesium sulfate is increasingly used due to its effect on hemodynamic stability. Generally, the magnesium effect is related to interference with membrane Ca-ATPase and Na-K-ATPase activation, which play a key role in the membrane exchange of ions. Consequently, it can be argued that magnesium sulfate acts as a cell membrane modifier. Moreover, the inhibitory effect of magnesium on calcium causes vasodilation and prevents vasospasm. On the other hand, magnesium reduces catecholamine release by sympathetic stimulation, reducing the response to postoperative stress (13, 14). Magnesium sulfate is an N-methyl-D-aspartate (NMDA) receptor antagonist and calcium channel blocker. NMDA receptors play a vital role in pain transmission in the central and peripheral nervous systems, causing acute pain in the body. By blocking calcium channels, magnesium prevents the transmission of pain nerve impulses (15). A meta-analysis supports the idea that magnesium sulfate can be prescribed to provide stable anesthesia without prescribing opioids.
Since that meta-analysis was conducted on gynecologic surgeries, and the positive effect of magnesium sulfate on the provision of stable anesthesia has been confirmed, this drug can also be safely used in other laparoscopic surgeries, such as gynecologic operations on patients in the Trendelenburg position. The meta-analysis is consistent with the results of the present study on the positive effect of magnesium sulfate injection (16). A study showed that prescribing magnesium increases the effect of local anesthetics (17). Based on electron microscopy, researchers have found that intrathecal administration of magnesium sulfate causes neurodegeneration (18). Magnesium acts as an antagonist of NMDA receptors and can relieve pain. The pain relief effect of magnesium has been confirmed in intra- and postoperative periods (19-21).

Objectives

Since recent studies have emphasized the positive pretreatment effects of magnesium sulfate in managing and controlling surgery-induced pain, this study investigated the effect of intravenous (IV) injection of magnesium sulfate on the intraoperative end-tidal CO2 (ETCO2) level and postoperative pain in laparoscopic cholecystectomy.

Methods

This double-blind, randomized clinical trial was approved by the Ethics Committee of Rafsanjan University of Medical Sciences (IR.RUMS.REC.1399.235) and the Iranian Registry of Clinical Trials (IRCT20210302050549N1). The participants included all patients who were candidates for laparoscopic cholecystectomy who visited Ali Ibn Abitaleb Hospital in Rafsanjan city in 2021. A sample of 64 was calculated using the convenience sampling method and enrolled in the study.

Inclusion criteria were informed consent to participate in the study, class I and II anesthesia, and an age range of 20-60 years. Non-inclusion criteria were a history of drug abuse, neuromuscular diseases, liver and kidney failure, heart disease, previous cholecyst surgery, drug sensitivity to magnesium sulfate, chronic obesity, and an ejection fraction lower than 40%. Exclusion criteria were over 20% reduction in blood pressure or heart rate during anesthesia. The visual analog scale (VAS) was used to measure the pain level. The VAS is a numerical observational scale to express the pain level in patients, ranging from 0 to 10 (0 indicates the lack of pain, and 10 indicates unbearable pain) (22). After describing the study goals and obtaining informed consent, the hospitalized patients were trained on how to use this scale.

The body mass index (BMI) was calculated by measuring the height and weight after entering the operating room. Systolic and diastolic blood pressures were measured by an arm mercury sphygmomanometer cuff (AIPk-II) before and after surgery. The heart rate and the arterial oxygen saturation were measured before and after surgery, respectively, by cardiac monitoring (SAADAT, Alborz-25, Iran) and pulse oximetry monitoring. The ETCO2 level was measured and recorded by an anesthesiology resident using a capnograph. The general anesthesia technique was the same in all operations, and the patients received no prophylaxis. All surgeries were performed by one surgeon and one anesthesiologist. Anesthesia induction started with fentanyl (2.5 µg/kg) and nesdonal (5 mg/kg).
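As a quick arithmetic illustration of the weight-based dosing above (a hypothetical worked example of ours, not part of the study protocol; nesdonal is a brand name of thiopental), the per-patient doses scale linearly with body weight:

```python
# Hypothetical worked example of the weight-based doses described above:
# fentanyl 2.5 ug/kg, nesdonal (thiopental) 5 mg/kg, and the study's
# magnesium sulfate dose of 50 mg/kg diluted in 100 mL of normal saline.

def doses_for(weight_kg: float) -> dict:
    """Return the per-patient doses implied by the protocol's per-kg values."""
    return {
        "fentanyl_ug": 2.5 * weight_kg,
        "thiopental_mg": 5.0 * weight_kg,
        "magnesium_sulfate_mg": 50.0 * weight_kg,
    }

# For an illustrative 70 kg patient:
print(doses_for(70.0))
# -> {'fentanyl_ug': 175.0, 'thiopental_mg': 350.0, 'magnesium_sulfate_mg': 3500.0}
```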
Patients were placed in the intervention or control group by lottery: the first patient was placed in the intervention group and the second patient in the control group, and this process continued until the last person. The intervention group received 50 mg/kg of magnesium sulfate diluted in 100 mL of 0.9% normal saline over 15 min after endotracheal intubation, while the control group received 100 mL of 0.9% normal saline immediately after tracheal intubation (25).

The bleeding rate and the volume of received liquids were recorded during surgery. The postoperative pain intensity was measured and recorded using the VAS after recovery and 2, 6, 12, and 24 h after surgery. Systolic and diastolic blood pressures, heart rate, arterial oxygen saturation, and intraoperative ETCO2 level were recorded at 5-min intervals. The amount of opioid consumed (in the case of a pain score higher than 5) was recorded in the checklist after recovery and 6 and 12 h after anesthesia. The data were analyzed using SPSS version 21, using 2-way analysis of variance (ANOVA) with repeated measures, Tukey's multiple comparisons, chi-square, and independent t-tests. A significance level of 5% was considered. It should be noted that our colleague who measured the hemodynamic indices and the data analysts were unaware of the grouping of subjects.

Results

The intervention group included 12 males (37.5%) and 20 females (62.5%), while the control group consisted of 6 males (18.8%) and 26 females (81.3%). There was no significant difference between the groups regarding gender (P = 0.095). There was also no significant difference between the 2 groups regarding mean age (P = 0.651) and BMI (P = 0.994; Table 1). In recovery, the mean ETCO2 level was significantly lower in the intervention group (30.81 ± 2.58) than in the control group (32.50 ± 2.98). In recovery, the systolic blood pressure of patients was also significantly lower in the intervention group (119.66 ± 15.75) than in the control group (128.12 ± 14.59). There was no significant difference between the 2 groups in recovery regarding diastolic blood pressure, heart rate, arterial oxygen saturation, the volume of received liquids, and surgery duration (P > 0.05; Table 2). In recovery, 40.6% in the intervention group and 93.7% in the control group received narcotic sedatives. Six hours after surgery, 31.2% in the intervention group and 90.6% in the control group received narcotic sedatives. Finally, 24 h after surgery, 87.5% in the intervention group and 81.3% in the control group received narcotic analgesics (Table 3).

The repeated measures analysis was used to evaluate pain level variations in both groups from recovery up to 24 h after surgery. Given the significant impact of time, the pain level decreased in both groups with time from recovery to 24 h after surgery (P < 0.001). The significant impact of intervention indicates a significant difference between the 2 groups regarding the pain level, with the pain level lower in the intervention group than in the control group (P < 0.001). The significant interaction of time and intervention indicates a significant difference in the pain level variations between the two groups (P = 0.009; Table 4).
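For readers who want to reproduce this kind of analysis outside SPSS, the following is a minimal sketch in Python (our illustration on synthetic data with assumed effect sizes; it uses a mixed linear model with a random intercept per patient as a close analogue of the repeated-measures ANOVA reported above, not the paper's exact procedure):

```python
# Minimal sketch (illustrative only; synthetic data, assumed effect sizes):
# a mixed linear model with a random intercept per patient approximates the
# repeated-measures analysis of VAS pain scores described above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
times = ["recovery", "2h", "6h", "12h", "24h"]
rows = []
for subj in range(64):                        # 64 patients, as in the study
    group = "intervention" if subj < 32 else "control"
    base = 4.5 if group == "intervention" else 6.0   # assumed group effect
    for t_idx, t in enumerate(times):
        vas = base - 0.8 * t_idx + rng.normal(0, 1)  # pain falls over time
        rows.append({"subject": subj, "group": group, "time": t,
                     "vas": max(0.0, min(10.0, vas))})
df = pd.DataFrame(rows)

# Random intercept per patient captures the within-subject correlation.
model = smf.mixedlm("vas ~ C(group) * C(time)", data=df, groups=df["subject"])
print(model.fit().summary())
```

The fitted summary reports the group effect, the time effect and the group × time interaction, the three quantities tested in the paper.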
Given the significant impact of the intervention between the 2 studied groups, Tukey's post hoc test was used for paired comparison of the 2 groups. On average, the mean pain in the intervention group was 0.988 lower than in the control group, which was statistically significant (P < 0.0001; Table 5). At all times, the mean pain level was lower in the intervention group than in the control group. The mean pain level in both groups decreased over time. However, the pain level variations were almost identical in both groups and appeared as 2 parallel lines (Figure 1).

Discussion

This study investigated the effect of IV magnesium sulfate injection on the intraoperative ETCO2 level and postoperative pain in laparoscopic cholecystectomy. The results showed a significant difference between the 2 groups regarding postoperative pain. However, this difference was not significant 24 h after surgery. The time variations of pain were significant in the intervention group but insignificant in the control group. Moreover, drug consumption in recovery and 6 h after surgery was lower in the intervention group than in the control group. A meta-analysis examined intraoperative prescription of magnesium and postoperative pain in 25 clinical trials. The results showed a 24.4% reduction in morphine consumption and a reduction in the pain score 24 h after surgery (26). Su et al. showed that the intraoperative use of magnesium sulfate reduced the need for anesthetics (27). Dar et al. found that prescribing 50 mg/kg of magnesium sulfate reduced hemodynamic changes in laparoscopic surgeries (28). Consistent with our results, Mentes et al. reported reduced pain scores and narcotic doses 0, 4, and 12 h after surgery in the magnesium sulfate group of patients who underwent laparoscopic cholecystectomy (29). Radwan et al. concluded that the use of magnesium sulfate is rational and effective in reducing pain, is more physiological, and shortens convalescence after outpatient arthroscopic meniscectomy (30). Asadollah et al. confirmed the effectiveness of magnesium sulfate in postoperative pain control following lower abdominal laparotomy (31). Consistent with our results, Kaur et al. found a positive effect of magnesium sulfate on reducing pain and consumption of analgesics after upper-extremity orthopedic surgery (32).

Magnesium sulfate improves the effect of local anesthesia on peripheral nerves and thus is used as a muscle relaxant for pain relief (33). Magnesium can have a pain-prevention effect before the onset of surgery-induced stimulation. Preventive analgesia, which works by preventing the formation of central sensitization caused by incision, inflammation, or both, can be useful for patients. Many studies have shown that magnesium prescription reduces the need for perioperative analgesics (34, 35). Our results are also consistent with those reported in the literature. Levaux et al. used 50 mg/kg of IV magnesium sulfate to reduce pain after major orthopedic spine surgery (36). Ghaffaripour et al. showed that the infusion of magnesium sulfate during laminectomy had no effect on patients' pain and opioid requirement during the first 24 h after surgery (37). In our study, the significant difference in postoperative pain disappeared after 24 h. Magnesium sulfate's effect wanes because it takes effect immediately and remains in the body's system for at least several hours and up to about 24 h. Consistent with our results, Bhatia et al.
studied the effects of magnesium infusion on analgesia during cholecystectomy and reported no significant decrease in the amount of consumed morphine (38). One potential explanation for this controversy between our findings and previous studies is a difference in the patient population. Pain perception can be influenced by various factors, such as gender, psychology, personality, genetics, and ethnicity. Some ethnicities can tolerate pain better than others. In some cultures, enduring pain is considered a pleasant or acceptable experience (39, 40). In this study, the opioid consumed 24 h after surgery was lower in the magnesium sulphate group than in the control group. Based on the VAS values, the pain intensity scores in the group that received magnesium sulfate were significantly lower than those in the control group. Since abdominal surgeries are associated with a lot of pain, and this pain can lead to immobility and complications (such as constipation, infection, clot formation, and other complications), magnesium sulfate can be safely used to reduce pain after surgery, reduce the consumption of narcotics, and prevent their side effects. The current conclusion needs to be investigated over a wider scale of patients, with extended monitoring for postoperative pain over a longer time frame.

Conclusions

Magnesium sulfate is a suitable and safe supplement to reduce pain after surgery and reduce the use of narcotics. The current conclusion should be investigated on a larger scale of patients, with extended monitoring for postoperative pain over a longer period of time.

Footnotes

Authors' Contribution: A. S. re-evaluated the clinical data, conceived and designed the evaluation, performed the statistical analysis, and drafted and revised the manuscript. M. A. collected the clinical data, re-analyzed the clinical and statistical data, and revised the manuscript. All authors read and approved the final manuscript.

Conflict of Interests: The authors declare no conflict of interest.

Ethical Approval: This study was approved under the ethical approval code IR.RUMS.REC.1399.235.

Funding/Support: This study was supported by Rafsanjan University of Medical Sciences.

Informed Consent: Informed consent was obtained.

Figure 1. Pain level variations in both groups versus time.

Table 1. Demographic indices of patients who underwent laparoscopic cholecystectomy in the intervention and control groups (BMI, body mass index; chi-square test; independent t-test, P < 0.05).

Table 2. Hemodynamic indices of patients who underwent laparoscopic cholecystectomy in the intervention and control groups (independent t-test, P < 0.05).

Table 3. The frequency distribution of pethidine injection in patients who underwent laparoscopic cholecystectomy in the intervention and control groups.

Table 4. The results of repeated measurements for pain level variations in the 2 groups of patients who underwent laparoscopic cholecystectomy.

Table 5. The results of the post hoc test.
U.S. Military Assistance to Europe, NATO's Military Build Up, and the Start of Intra-European Economic Cooperation, 1947-1955

This contribution pinpoints the interconnectedness between NATO's military build-up at the start of the Atlantic Alliance under the umbrella of U.S. military assistance to Europe, the revamping of a few manufacturing sectors that it fostered across Western Europe, and the early steps in the making of intra-European economic integration through cooperation in military productions. After an introduction, section two casts light on the early U.S. military assistance to Europe.

Introduction

Since their combined postwar inception, the history of NATO, that of the European industrial restart and take-off, and that of intra-European and transatlantic trade and payments came to be strictly interconnected with one another. More precisely, the non-military objectives set forth in Article 2 of the Atlantic Treaty, the reorganization and reconstruction of low and average capital-intensive manufacturing and labor-intensive infrastructures across the Western European member states of NATO, and the early steps of the process of intra-European economic cooperation became complementary to each other. The signing of the Atlantic Pact in 1949 followed the restart of bilateral trade and economic relations between different war-torn Western European countries over the previous two years. Since that time, the revamping of European manufactures and means of mass communication and telecommunications revolved around, and was substantially grounded on, the industrial mobilization stimulated by the Atlantic Alliance's defence policies and military build-up. Furthermore, and crucial to this contribution, since the beginning such industrial mobilization took place against the backdrop of increasing intra-European economic cooperation and intensified trade relations between Western Europe and the United States, and became a powerful tool for fuelling the construction of intra-European trade and payments. This contribution aims at outlining such interconnectedness between NATO's military build-up, the revamping of a few manufacturing sectors that it fostered across Western Europe, and the early steps in the making of intra-European economic integration, with specific reference to the shaping of a Continent-wide system of trade and payments in raw materials, instrumental and investment goods, consumer goods and manufacturing-processed military end-items. This was a process of economic integration revolving around the pillar of military productions that set conditions for a full exploitation of, and interchangeability among, each Western European economy's manufacturing capacity, instrumental goods and raw material resources, and manpower.
In so far as the pillars of this industrial and trade cooperation were, on the one side, the labor-intensive construction industry and all of those mature sectors boosted by the NATO infrastructure programs and, on the other, the average capital-intensive manufacturing industry that took advantage of the Atlantic Alliance defence programs set off at the very start of the 1950s, this contribution focuses attention on the NATO infrastructure programs and the offshore procurement programs (OSP) that NATO and the United States placed with low and average capital-intensive European industries, such as the ammunition industry and the mechanical and metalworking industry working on subcontracts for the aircraft industry, a sector that pioneered intra-European industrial cooperation in average capital-intensive production lines. In carrying out this exploration this contribution also aims at pinpointing the two lines of financial contribution to the OSP programs: on the one side, NATO cost-sharing financial allotments to the member states of the Alliance; on the other, the appropriations to finance European industrial mobilisation on behalf of NATO defence programs out of the foreign aid law passed annually by the U.S. Congress. Thereafter, we spotlight the trajectory of such cooperation in manufacturing in the following decade up to the mid-1950s cleavage: at that time, on the one side the creation in Spring 1956 of the Committee on Non-Military Cooperation, the so-called Three Wise Men Committee, and on the other the extensive discussions on disarmament that took place in 1957 between the Atlantic Community and the Soviet Union within the United Nations Sub-Committee on Disarmament, marked a watershed after which the trajectory of NATO rearmament went on to foster new production lines to supply new weapons to its member countries [1]. Therefore, by focusing on the NATO-coordinated production of aircraft and ammunition during this time frame, this contribution intentionally leaves aside the upscaling of coordinated production for air defence from the 1960s through to the following decade, when, from the production of the F-104G aircraft and of advanced anti-aircraft missile systems such as the Hawk or the air-to-air Sidewinder missiles in the early 1960s up to the start of production-sharing for technologically advanced strike aircraft and helicopters during the 1970s, intra-European and transatlantic manufacturing cooperation combined closer trade and industrial exchanges with increasingly capital-intensive military production lines and weapon products that equipped the Atlantic military forces with highly technologically advanced weapon and service equipment.

U.S. Support for European Economic Cooperation and Military Assistance to Europe 1947-1949

The history of post-WWII industrial cooperation in the military sector among war-torn European countries began somewhat earlier than the signing of the Atlantic Pact in the Spring of 1949, and took place within the framework of postwar U.S. bilateral military assistance to Western European countries. Besides, before the birth of NATO it mainly revolved around the U.S. transfer to European countries of military equipment, munitions, spare parts and military end-items against the backdrop of U.S. Congress appropriations to assist friendly nations at the inception of the Cold War confrontation. In essence, this kind of assistance took place from 1947 to 1949.
Since the formulation of the European Recovery Program (ERP), the Truman Administration considered including a military annex to the Marshall Plan legislation, the so-called ECA bill, to provide Europeans with military assistance as part of Marshall Plan aid programs. Though this annex was discarded for fear that it might prevent the Marshall Plan from getting Congressional approval [3], it sheds light on the early U.S. concern for the security of European countries and on the U.S. intuition that, instead of running counter to each other, security policies and economic recovery could be combined with each other. The U.S. fears began as early as 1946, when the British retrenched from their past military commitments in the Mediterranean basin, first and foremost in Greece, and came to the fore at the beginning of 1947, when the U.S. administration asked Congress to approve an economic assistance package for Greece and Turkey [4]. Thereafter, they mounted with the Soviet coup d'etat in Czechoslovakia in February 1948 and the East-West confrontation over the city of Berlin, which, among other events in international affairs, marked the rise of the Cold War with the Soviet-led Berlin blockade of 1948-1949. During this two-year period the United States administration set in motion a series of military assistance plans and brought them before the Congress to get them funded. These military plans at the same time aimed at reorganising the national military armies of countries in Europe and provided bilateral military assistance from the United States to each of them. Concomitantly, during this two-year period the United States laid down the foundation of another initiative that at the turn of the new decade would play out in making the NATO military build-up a powerful tool to promote industrial cooperation and the integration of Western European countries' manufacturing systems: owing to the slow and uncertain progress of European recovery, as early as these years the Truman Administration was aware that without U.S. support for promoting economic cooperation and exchanges among defeated nations it would not be possible to make Europe safe from Soviet influence. Against this backdrop the United States turned its attention to Germany and its crucial economy: before 1949 Washington had not only discarded past projects to scale back the German economy, an approach mostly championed by the Roosevelt administration's Secretary of the Treasury Henry Morgenthau, who suggested deindustrialising the German economy and scaling it back to an agriculture-centred economy [5]; nor did the United States any longer fear the resurgence of German economic might and potential expansionism. Rather, Washington began considering the resurgence of a German national economy and manufacturing industry as a vital step to promote European economic cooperation, in turn essential to strike back any Soviet move to extend its influence westward: U.S. officials recognised that the German economy was vital to the broader process of European reconstruction, and owing to French resistance to integrating defeated Germany in Western Europe the United States identified in a process of close European economic integration a pathway to accomplish European reconstruction and at the same time bind continental economic recovery to the German economy [6].
However, even as the United States started planning full support for a European recovery revolving around the linchpin of closer economic relations and cooperation among European partners, with the German economy as a pivotal driving force, the aforementioned mounting tensions that culminated in the 1948 Berlin blockade brought the Soviets and the Americans to the brink of armed conflict. It was in this context that the American turn to promoting economic cooperation in Europe was paired with Washington's support for a quick reorganization of national military armies in most Western European countries, first and foremost in Greece and Turkey, then in Italy and other war-shattered European nations. Accordingly, before the signing of the Atlantic Pact in April 1949 the United States began transferring U.S. military surpluses. The U.S. Congress also passed laws appropriating financial support to rebuild national armies in key countries that would remain at the center of American military assistance and strategies at least up to the mid-1950s [7]. In essence these appropriations for military assistance were bilateral and boosted American exports to each recipient nation [8], thus paving the way for making the production, supply and demand of military equipment the fly-wheel to resurrect bilateral trade relations between the United States and the former war-wrecked Western European economies. In 1949 this set of bilateral military assistance programs culminated in the U.S. Congress signing into law the Mutual Defence Assistance Act, which provided funds to friendly nations: this program, which was implemented after the signing of the North Atlantic Treaty, was organised in grants and loans for military assistance purposes. It offered non-Communist nations weapons, components for European military productions, spare parts and machine tools, as well as financial support to each of its beneficiary European economies for imports of instrumental goods and consumer goods required in connection with the start of military assistance [9]. During this two-year period, then, two characteristics featured in economic cooperation in the military sector. On the one side, it followed the fundamental dynamics that underpinned the restart of industrial cooperation and economic relations among the Western industrial countries. At the time many European countries began reopening their national economies through bilateral trade and economic relations with other former belligerent countries. This was for instance the case of Italy and France: the two countries restarted bilateral trade in the first few years after the end of World War II [10]. Likewise, in so far as the course of cooperation in the military field began under the auspices of U.S. Congress military appropriations to friendly nations, it came about through U.S. bilateral military assistance to Western European countries. In the second instance, military assistance did not involve the restart of production lines in friendly nations or any effort at technological transfer or the creation of an infant military industry. In the United States the debate revolved around whether or not, and to what extent, to set in motion the reorganisation of national military forces in former belligerent Western European nations.
Both in countries where this eventually occurred, such as Italy, and in those where it did not for multiple reasons, as in the case of Germany, where both German public opinion and some future NATO partners such as France ran counter to an early re-establishment of a German national military [11], before 1949 the Americans debated, and the Congress appropriated, the transfer of military spare parts, instrumental goods and end-item weaponry in pursuit of reorganising national armies, rather than with the aim of stimulating the rebirth of a national military industry. Therefore, prior to the establishment of the Atlantic Alliance the United States did not conceive of military assistance as a fly-wheel to develop autonomous production and defence capacity across friendly nations. For this reason, in order to pinpoint the linkages between the beginning of the military build-up and the rebirth of a Europe-wide military-industrial complex, as well as the establishment or recovery of labor-intensive and average capital-intensive manufacturing firms across the Western European countries and the beginning of intra-European industrial and trade cooperation in those manufacturing sectors for which cooperation in the military sector served as a chance and a fly-wheel, it is worth focusing on the military productions stimulated under the NATO procurements since its foundation at the end of the 1940s.

European Industrial Cooperation, Financial Burden-Sharing and Economic Expansion 1950-1952

The establishment of the North Atlantic Council in early 1951, and within it of its working organisation, the North Atlantic Council Deputies, marked the starting point for a reconstruction aiming at pinpointing the aforementioned interconnectivity between the Atlantic Alliance's defence policies, the full recovery, restart and technical upgrade of European industrial production lines, and the early turnaround in the process of European trade cooperation. The North Atlantic Council brought together representatives from the member states' ministries involved at national level in the defence policies and military build-up of NATO. As such, not only the Foreign Ministers but also representatives of the Economic, Finance, Defence and Treasury Ministries represented their respective governments. Therefore, since its inception the Council approached the issue of rearmament from a broader perspective that encompassed both the Alliance's defence and security targets and the economic implications that these entailed for its member states. However, in early 1951 such attention to the interconnectedness between military build-up targets and their economic feasibility in each national economy was conceived in order to maximise available and unused industrial production lines to meet the Alliance's defence objectives: it was not clearly based on the idea that rearmament should entail economic growth or help accomplish economic reconstruction. The creation in January 1951 within the Alliance of the Defence Production Board was aimed at maximising military production but did not place it within the framework of broader expansionary targets for the national economies broadly conceived: as a matter of fact its purpose was «to achieve the maximum production of military equipment in the most efficient manner at the least cost and in the shortest time to meet the military material requirements of NATO» [12].
However, the parallel creation of the Finance and Economic Board, a sort of advisory body charged with instructing the North Atlantic Council and the individual national governments with respect to the economic and financial aspects of the Community's rearmament programs, marked a step forward in the way the Council approached the economic dimension of rearmament. The Board had to advise the national governments on the economic feasibility, for each member state's economy, of the defence effort set up at NATO level. At the same time, it was designed to serve as a sort of liaison between the Council and the infant Organisation for European Economic Cooperation (OEEC). This two-fold commitment best spotlights two concerns that drove the early steps of the Atlantic Community: on the one side, the aim of setting conditions to strike a balance between the economic impact of the industrial mobilization required by the military build-up at national level and the military and defence objectives of NATO as a defence and security community. In the second instance, it sheds light on the intertwining between such NATO industrial mobilisation and the very beginning of European economic cooperation. However, what pushed the shaping of the Atlantic Alliance's defence efforts decisively in this direction was the Ottawa Council Meeting held in September 1951. At the time, the Military Committee called on the national governments to provide a financial contribution that far exceeded the amount each country was willing to contribute. The combination of this issue with an international macroeconomic framework in the early Fall of 1951 marked by rising raw material prices, peaking international inflation, and balance-of-payments disequilibria across the member countries of NATO prompted the North Atlantic Council to take increasing care in weighing the industrial mobilisation required in each member state and its implications for economic stability and growth in each national economy. The establishment of the Temporary Council Committee (TCC), gathering together clusters of experts from the member states and led by the so-called three wise men (Plowden, Monnet and Harriman), was intended to target exactly this tangle of issues. At the same time the North Atlantic Council instructed its working bodies and the member states to implement Article 2 of the Treaty, where the founding nations stressed the non-military and civilian objectives of the Alliance as a vital component of a defence and security organisation. The TCC, which became operative shortly after its creation, worked on a report filed to the North Atlantic Council at the end of 1951 that marked a step forward in the Alliance's commitment to seek a balance between its defence programs, Europe-wide industrial cooperation in military productions, and the implications for the rate of economic growth and social stability in each domestic economy. In setting benchmarks to determine the degree of industrial effort that each country could bear without impairing its economic stability, the TCC report made a step forward crucial to understanding the relations between defence programs, the restart and expansion of the European military-industrial complex, and domestic economic stability that soon underpinned the military build-up.
According to the TCC, the economic and industrial mobilisation set for rearmament purposes should not only not impair economic development and social stability in each member state, but was also designed to serve as a flywheel for implementing and multiplying economic expansion. This target foreshadowed the long-term non-military objectives of the Atlantic Alliance's rearmament programs and the economic cultures on which they were grounded. In the first instance, NATO viewed the defence effort, and therefore its defence and security programs, as a powerful tool to foster economic growth at the national level and, as this contribution will point out, closer economic and trade bonds among its member states. Reasoning along this line, the military efforts enforced by the early Cold War arms race were designed to have a positive economic spill-over on the rate of growth of the Alliance's member states. Secondly, this approach highlights the influential role of military Keynesianism in the Atlantic Alliance's approach to rearmament-induced industrial mobilisation and the importance placed on NATO's role and objectives in non-military fields. It is against the backdrop of this shifting attention of the Atlantic Alliance to the linkage between defence mobilisation and economic expansion, in connection with the activities and reports of the TCC and the economic downturn that shook the advanced industrial economies in the second half of 1951, that the creation and beginning of the multilateral rearmament programs promoted by the United States and implemented under the umbrella of NATO should be placed. In this respect it is particularly worth exploring the launching of the so-called off-shore procurement programs (OSP) in 1952. Designed to shift U.S. military assistance from bilateral to multilateral programs, based on appropriations by either the Pentagon or the Alliance to place production orders, paid for in U.S. dollars, for military spare parts, raw materials, components, instrumental goods or assembly activities with specific NATO member countries for transfer to other member states of the Alliance, this set of programs best showcases the shift in Washington and within NATO toward increasing attention to the aforementioned intertwining between rearmament and economic expansion, military coordination and industrial cooperation among the member states of the Alliance. Furthermore, it featured one more novelty in the way the Atlantic Alliance conceived the economic impact of military production: insofar as these contracts were placed with member states according to each economy's raw material and manufacturing resources and defence requirements, the OSP were intended to fully exploit each economy's resources and to integrate the differing European economies, so as to maximise the production capacity of each of them and to link them along transnational supply and manufacturing lines. In this respect, the OSP contracts prompted an increase in trade and industrial cooperation, and in technological transfer, among the European economies and between them and the U.S. economy. Furthermore, insofar as the Pentagon and NATO paid for the orders in U.S. dollars, the OSP programs made the economic mobilisation stimulated by the Atlantic Alliance's defence targets a powerful tool for continuing the ERP objective of closing the dollar gap in Europe. It is also worth noting that U.S.
appropriations and NATO budgetary expenses on OSP contracts also had to weigh the implications of each member country's industrial mobilisation for its balance of payments. As the program was promoted and financed by both NATO and the U.S. government, it is worth exploring both lines of action to better understand the relative weight of the U.S. government in levering a multilateral assistance program, and its influence on a program of production and trade integration and financial burden-sharing within NATO. Before analysing these two lines of development, it is worth stressing that although they continued throughout the second half of the decade, as pinpointed in the Conclusions of this contribution, the OSP programs specifically characterised NATO military production and U.S. military assistance during the first half of the 1950s. The starting point for making sense of the transition from U.S. bilateral military assistance to the multilateral structure of NATO-coordinated production programs is the year 1950: although the United States' bilateral military assistance programs were still in operation within the framework of the MDAP appropriations for foreign aid, the recently founded Atlantic Alliance launched a Medium Term Defence Program (MTDP) aimed at coordinating the rearmament of each country according to the defence requirements fixed by the Alliance for each of its member states. This program was intended to combine, in the most efficient way, a widespread call by both NATO and the U.S. government on the European partners to engage in expanded budgetary appropriations for defence spending and in increased industrial output and mobilisation in the wake of the defence effort, with the desperate need in the formerly belligerent European countries for financial stability and balance of payments equilibrium [13]. Since the inception of this program, therefore, the Alliance focused attention on combining defence and security objectives, growth-enhancing expansion in manufacturing capacity, and external equilibrium. As anticipated in this section, insofar as the Alliance was then still far from establishing a linkage between rearmament and economic expansion, the MTDP was merely conceived to coordinate national defence budgets, industrial capacity, and external equilibrium within the framework of a rearmament-induced industrial mobilisation. From 1951 onward, however, several policymakers within both the United States and NATO held that such an effort could not be accomplished without both coordinating and integrating manufacturing production lines and sharing the financial burden of this industrial mobilisation. In the context of a number of procurement contracts placed by NATO with European industry in the second half of 1951, for transfer to other NATO member states and paid for in U.S. dollars by the United States, the Defence Production Board was charged with analysing each European economy to assess its raw material stockpile, industrial output capacity or shortages, and manpower, in order to make the most of each European economy's material resources and inputs. At the same time, some top-ranking ECA officials, such as Bissell, argued that NATO member states should share the financial burden of this infant Europe-wide industrial cooperation as much as possible, without charging the United States with bearing the entire cost of the effort toward collective defence and industrial cooperation and integration [14].
Along the same line of reasoning, the debate within NATO on the establishment of a common fund to finance coordinated production intensified [15]. Therefore, by the second half of 1951, even as the TCC drafted its report and the western economies felt the pinch of economic setbacks, the construction of coordinated industrial production programs among the economies of the Alliance was already in the making, and the two most critical issues around which the subsequent OSP programs would revolve — full utilisation of each national economy's resources and industrial capacity, and the sharing of the economic cost of such coordinated programs — were already well defined at the onset. Setting the stage for the launch of coordinated military production programs, at the ninth session of the North Atlantic Council held in Lisbon in February 1952 the representatives of the member countries passed a collective rearmament program that entailed a total contribution by the member states of 50 divisions, 4,000 aircraft and strong naval forces by the end of 1952 [16]. It was within the framework of these combined and ever more compelling rearmament commitments, pressures from within NATO and the U.S. government to share the burden of NATO rearmament efforts, and the Alliance's push for implementing coordinated production programs based on the full exploitation of industrial capacity and other manufacturing inputs that, from the spring of 1952, the Atlantic Alliance launched a systematic set of off-shore procurements placed with European industry and paid for by both the United States and the European countries according to a burden-sharing formula. Along with the launch of the OSP, the Alliance in fact set up a common budgetary fund, to which each member state was requested to contribute in proportion to its national income. The program was meant to implement a set of production programs across the European economies, paid for in dollars by either NATO or the United States, for transfer from European supplying industries to other European importing economies. This two-fold initiative, which received full support under the Mutual Security Program — the United States law designed to shape total American foreign aid for each fiscal year — had the clear aim of financing coordinated industrial production among member states, the import of raw and strategic materials by the European manufacturing economies, the domestic industrial and capital investments required to foster industrial mobilisation, and the import of manufactured military goods by other European countries [17]. Furthermore, it was intended to prevent such industrial mobilisation and full trade integration and industrial cooperation from straining the balance of payments and monetary stability of the Atlantic Alliance's member countries.
By the end of 1952 the Alliance had placed this coordinated set of procurements against the backdrop of a program for collective rearmament and security that clearly linked the industrial mobilisation of its member economies for rearmament to the objectives of economic expansion, monetary stability and external equilibrium for each of its member states: tellingly, at the Ministerial meeting of the North Atlantic Council held in Paris in December 1952, in acknowledging «the progress being made in the coordination of production of defence equipment» [18], the North Atlantic Council voted a resolution on the application of Article 2 of the Alliance Treaty that prompted NATO to promote both the defence and the economic progress of its member states: more specifically, the Council called on member governments to find «solutions to their problems such as balance of payments, increase of output, internal financial stability and manpower» [19]. Such cooperation in industrial production, closer trade integration and financial burden-sharing, as well as the more widely known objective of filling the European economies' dollar gap, which underpinned the Atlantic Alliance's OSP contracts, had already been largely tested in the field of civilian production under the European Recovery Program and its US-led institution, the Economic Cooperation Administration (ECA). Since the first implementation of Marshall Plan assistance, a portion of ECA procurements consisted of orders placed with European economies for transfer to other economies benefiting from ERP funds. In the fall of 1948, for instance, approximately 12 percent of the procurement authorisations approved by the ECA were for orders and purchases among the Western European countries. In particular, such dollar-financed European purchases in other European countries were mostly of coal from the so-called Bizone to supply Austria, Denmark, France and Italy, among others; ECA dollars were also used to supply material to the Bizone, as in the case of procurements placed in Belgium to construct goods wagons to be shipped to the Bizone, or of non-ferrous metals to be supplied to other ERP countries [20]. Furthermore, as early as 1950 U.S. foreign policymakers envisioned that the idea of transferring goods produced in an ERP country to other European countries benefiting from the Marshall Plan should apply to military production: according to Dean Acheson, the ERP should finance military items manufactured in one country for transfer to other European countries and pay for such transfers by drawing on the so-called counterpart funds administered by the ECA [21]. It is true, therefore, that the financial mechanism of the ERP was rather different from that of NATO's OSP program: unlike the latter, under the ERP the United States financed the program practically in its entirety, whereas the OSP were paid for in U.S.
dollars by either the Pentagon or NATO under the burden-sharing formula. However, this coordinated integration of production and exchange of raw materials and instrumental goods among European countries followed the same line of reasoning as the later NATO OSP programs: the integration of production lines and of resources critical for manufacturing among the European countries benefiting from the ERP, as well as their increasing trade exchanges under the Marshall Plan, stimulated the basic recovery of European industries, the employment of unemployed or underemployed manpower, and the utilisation of unused industrial capacity; the OSP contracts, in turn, prompted the economies of the European member states toward full utilisation of the factors of production and further technological drift, particularly insofar as NATO's procurements involved new, technologically advanced sectors such as electronics and the mechanical industries.

The OSP Programs: Low-Capital Intensive and High-Technological Content Industries 1952-1955

The largest allocation of OSP contracts went to the ammunition, aircraft and related equipment-producing industries, and to the shipbuilding sector. The OSP contracts — financed through the contributions of all NATO member states according to the burden-sharing exercise, or through contracts directly placed by the U.S. Department of Defence with European industry — therefore exerted leverage both on low capital-intensive manufacturing sectors and on the firms with the highest technological content of the time. Both kinds of manufacturing sectors received orders from both NATO and the Pentagon, and as such both the least technologically advanced national economies and the most developed ones could take advantage of their participation in the OSP programs to increase their dollar earnings and improve their balance of payments. Likewise, both the contracts placed by NATO and the orders placed by the Pentagon with low and high capital-intensive firms followed the same pattern: their scale and financial amount rose from fiscal year 1952 to fiscal year 1953 and had their largest and most striking impact on European production lines and industrial expansion from 1953 to 1954, whereas thereafter they either stabilised or began to decline. On the one hand, the low capital-intensive recipients of OSP contracts were in either case the ammunition industry and the large archipelago of industrial and service companies working on contracts placed in connection with NATO's common infrastructure programs. On the other hand, the OSP contracts placed with the high-technological-content European industry revolved at the time largely around the aeronautical industry and the electronics, mechanics and metalworking firms subcontracting on behalf of European and U.S. aeronautical companies. The aircraft industry is a useful case in point to illustrate the difference between the contracts placed with European firms before 1952 and the OSP launched that year. Until the fall of 1951, the off-shore contracts placed in Europe by the Pentagon to supply its forces stationed in Europe, or for transfer to other NATO member states, had only been for manufacturing spare parts and machinery, not for promoting the launch of full production lines across Europe [22].
With the launch of a number of off-shore contracts within the framework of the industrial mobilisation stirred by the Atlantic Alliance's defence requirements for each member state, and the birth of the first systematic set of OSP under the umbrella of NATO, coordinated production among European companies led for the first time to the full assembly of military end items: the aircraft production program approved by the Alliance for fiscal year 1952/1953 led NATO to buy complete aircraft in Western Europe [23]. To begin with the contracts placed by NATO, in the least technically advanced sectors the apex was reached in 1953. In September of that year, a program worth over £300 million for coordinating the ammunition production of NATO's European member countries, formulated by NATO experts, was approved by the North Atlantic Council and recommended for implementation to the member governments of the Atlantic Alliance [24]. The program, aimed at producing almost every type of ammunition from small bullets to heavy-calibre shells, was intended by the Alliance to develop sources of ammunition supply as near as possible to operational areas and to produce ammunition not then manufactured in sufficient quantity by European producers. The burden of financing it was jointly shared by the European members of NATO and the U.S. government; it clearly followed the principle of off-shore orders and was at the same time intended to increase the dollar earnings of some European producers [25]. A substantial contribution to the OSP contracts placed with low capital-intensive European manufacturers and service industries came from the Atlantic Alliance's new Common Infrastructure Program, which boosted a varied archipelago of low capital-intensive European manufacturing sectors and service companies. In late 1951 the North Atlantic Council approved a Common Infrastructure Program intended to establish basic installations, such as airfields and signal networks, to be built in a member country for use by both that country's army and the army of any other NATO member state. As a follow-up, from 1952 onward it required NATO military forces to submit an annual infrastructure program to the North Atlantic Council for implementation [26]. Over the following few years, through the middle of the decade, the program essentially evolved into a set of procurements that stimulated the European construction industry and, to a lesser extent, the communications industry. In fact, by 1953 it had evolved along three main lines of infrastructural projects — airfield construction, jet-fuel pipelines, and telecommunication projects — for a total amount of roughly $1.3 billion. The United States contributed to financing the program under the Mutual Security Act of 1953 [27]. In a way similar to the off-shore contracts placed with manufacturing firms, the contracts distributed within the framework of the Common Infrastructure Program followed the burden-sharing principle strictly: each infrastructure work carried out on the territory of a NATO member country was to be financed jointly by all members of the Alliance, pending approval by the Infrastructure Payments and Progress Committee. As such, the program devised a cost-sharing formula to be approved by each member state [28]. Earlier negotiations on U.S.
bilateral economic assistance for military purposes had taken place since 1949 under the MDAP and its predecessor military aid programs, before the bargaining over military build-up targets and defence appropriations in each NATO member state moved to the multilateral level in connection with the industrial and manpower efforts, as well as the supply of raw materials and services, required by the OSP coordinated production programs. In a similar way, bilateral bargaining between the United States and each NATO partner nation took place in most cases. This feature made the negotiations on the OSP contracts resemble the earlier programs of bilateral military assistance, and this bilateral bargaining presided over the integration of NATO member states' resources, production lines and national markets over the following years [29]. Certainly, in the first phase after the launch of NATO's off-shore procurement programs at the beginning of the new decade, and for a few years in the first half of the 1950s, notwithstanding the new NATO burden-sharing formula, NATO was not left alone in coordinating the American and European national economies in developing coordinated military production programs and allocating funds to European manufacturing and service companies. As a matter of fact, along the lines of the MDAP in operation over the previous few years, even under the OSP a large share of procurements were still placed by the U.S. Department of Defence with European firms, paid for in U.S. dollars, and transferred to the national armies of other NATO member states to help them meet the Atlantic Alliance's defence targets, or used to support the Pentagon's own defence effort. The U.S. legislative framework for the appropriations financing the contracts placed with European firms by the Pentagon and paid for in U.S. dollars was the Mutual Security Program, which since 1951, under the Mutual Security Act, received annual Congressional approval integrating all forms of foreign aid [30]. The U.S. Department of Defence's financial commitment in the first two years after the launch of the OSP program in 1951 demonstrates Washington's role in helping the Atlantic Alliance implement this multilateral procurement program. During the first year of operations, the Department of Defence placed orders overseas for $2.7 billion for the production of military material within the framework of the military assistance program. This first round of OSP contracts both stimulated an expansion on the supply side in Europe and served as leverage to promote production in the most capital-intensive sectors of European manufacturing and the restart of production lines where they were underutilised. Furthermore, it marked continuities in U.S. technological transfer to European national industry compared with the early reorganisation of European industrial capacity from 1949 to 1951 tracked in the previous sections. The first American-financed OSP program also stirred production and supply in low to average capital-intensive European industries. In the wake of the importance given by Washington, prior to the foundation of the Atlantic Alliance, to providing the Europeans with ammunition, the European ammunition industry and related sectors accounted for a substantial share of the first set of OSP contracts placed by the Pentagon with European industry.
Since in 1951 the European ammunition industry was already working at full production capacity to supply European buyers, the contracts placed by the Pentagon with European suppliers under the first OSP program stimulated an expansion in the production capacity of European ammunition firms. This expansion was achieved through the adoption of modern production techniques across Europe and by building entirely new production lines. Therefore, as far as a traditionally low capital-intensive military production sector such as ammunition was concerned, the first OSP program triggered a shift toward increasing the technical content of European industry. This feature of the 1951 off-shore program was all the more distinctive of the other European manufacturing sectors that received OSP contracts from the Department of Defence: insofar as contracts were placed with the most technologically advanced European industries, such as electronics, new investments were made to increase production or to extend existing production facilities [31]. For the United States, therefore, the placing of contracts in Europe under the first OSP program was not only a way of helping NATO in the early stage of its new role as a flywheel for industrial and trade integration among European economies, but also a means of triggering an expansion and a technological drift in production capacity. This critical role of the United States in resurrecting European industry, propping up the expansion of industrial production and the full utilisation of under-utilised production facilities, and promoting technological upgrading as a starting point for letting European production lines and the patent industry supply other NATO member states to meet the Atlantic Alliance's defence requirements marked further continuities with the early steps of military assistance. A straightforward demonstration of this continuity, and of the U.S. contribution to it, was the pivotal role that British models and patents still retained during these early years of cooperation on military production and supplies among NATO member economies. At the time the OSP programs under the umbrella of NATO were launched, British industry played a critical role both as a supplier of models and patents to other European assembling industries and as a national manufacturing industry charged with final assembly. In the aeronautical sector, for instance, although other European mechanical industries assembled non-British models, such as the Marcel Dassault Mystère produced in France or the F-86 all-weather fighters produced in Italy, under the first OSP program British-type jet fighters were produced in Belgium and the Netherlands. Moreover, in 1952 the United States concluded with Great Britain an off-shore procurement contract under which London agreed to build 500 Centurion tanks, to be paid for in U.S. dollars, for transfer to Denmark and Holland. Likewise, within the late Truman administration there was widespread consensus that the termination of the Marshall Plan should pave the way for the end of direct and purely economic assistance to Europe, whereas military assistance under the OSP coordinated production programs should be continued and increased over the following years: this was, for instance, U.S. Secretary of Commerce Sawyer's approach to future foreign aid. Likewise, at the end of 1952 many U.S.
officials suggested that the traditional postwar transfer or sale of military end items to European armies should be substantially cut, whereas appropriations for OSP contracts placed with European firms by the Department of Defence for transfer to other NATO member states should be increased from $1,000 million in fiscal year 1952/1953 to $2,000 million in the following fiscal year [32]. In the wake of the Korean War-related industrial mobilisation, by staggeringly expanding the appropriations financing the OSP contracts placed by the Pentagon with European industry, in 1952 and 1953 the U.S. Congress pushed forward the American strategy of promoting the full utilisation of under-utilised production and manpower capacity in Europe, the expansion of production capacity, and the reorganisation and technical upgrading of the most capital-intensive European manufacturing sectors receiving OSP contracts from the U.S. Department of Defence: the OSP contracts placed in Europe totalled $630 million in fiscal year 1952 and over $1.6 billion in 1953. Yet again, British industry accounted for a large share of the U.S. off-shore contracts placed with the industry of the old continent, and its ammunition industry took the lion's share of it: in June 1953 alone, two OSP contracts worth up to $20 million, for the supply of shells and the production of rockets, were signed by the British Ministry of Supply and the United States government, to be paid for in U.S. dollars and to supply other NATO countries [33]. This staggering increase in the scale of OSP contracts placed with the European economies had an impact of similar magnitude on the trade and external payments position of the European countries. In particular, the external payments position of the NATO European countries and Germany improved remarkably from fiscal year 1952 to fiscal year 1953: their overall deficit in the net gold and dollar balance of payments dropped from $3.9 billion to $600 million [34]. This trend clearly suggests the positive impact that increasing U.S. appropriations for OSP contracts placed with European firms had on the level of trade integration and financial stabilisation of the European member states of NATO required to meet the defence targets set forth by the Atlantic Alliance. The US-financed portion of the OSP contracts placed with European industry began to decrease from fiscal year 1954, against the backdrop of increasingly binding conditions imposed by the U.S. Congress on the allocation of Mutual Security Program funds for military production overseas. These conditions reflected both concerns about the security of production lines in specific European countries, particularly in supposedly Communist-dominated plants in Italy, and fears within the American business community about the rise of competitive European producers: in discussing the Mutual Security Program for fiscal year 1954, for instance, the U.S. House Committee on Foreign Affairs brought forward these concerns and restrictive conditions [35]. That same year, within the framework of the Congressional debate on the annual Foreign Aid Bill — which in 1954 cut appropriations for military and economic assistance intended to defend the free world against attacks — many Congressional voices raised the issue of supposedly ill-functioning OSP contracts.
In particular, the Senate Appropriations Committee complained about the use of military assistance funds by Britain, arguing that London retained aircraft produced under the OSP contracts for its own use instead of distributing them to other NATO partners, and that London was taking advantage of dollar funds to develop its own aircraft industry in competition with U.S. manufacturers [36]. However, if we consider the OSP contracts allotted to European industry that year relative to total U.S. deliveries of military equipment to the European partners, we find that the OSP contracts increased their share: that year, most of the military equipment procured with new funds appropriated by Congress for Europe fell under the category of off-shore procurements. Therefore, from 1953 the OSP contracts placed by the United States with European firms decreased relative to total OSP contracts, but increased relative to the total amount of U.S. payments to industry for deliveries of military equipment funded by the United States [37]. Along the lines of the industrial cooperation among national manufacturing and labour markets germane to the new OSP programs, NATO procurements also marked continuities with the bilateral assistance programs promoted by U.S. diplomacy and funded under the MDAP and the other military assistance programs financed by the United States and appropriated to friendly nations by the U.S. Congress. For instance, consistent with the importance previously attached to providing allied nations with munitions, earlier pursued under the auspices of the Additional Military Production Program, the OSP programs placed procurement contracts with national munition industries specialising in this sector, appropriating financial assistance in U.S. dollars for production to be transferred to other NATO member states. In 1953 the Council of the North Atlantic Treaty Organisation recommended the approval of an ammunition procurement program worth up to £357 million: the program was on the list of the OSP programs for that year and as such was to be funded by the participating supplying and importing countries and by the U.S. Mutual Security Agency, insofar as its budget contributed to financing the OSP program. To better understand the share of this low capital-intensive ammunition program out of total OSP from that year, it is worth stressing that total OSP programs that fiscal year amounted to more than £714 million [38].

Atlantic Industrial Cooperation and Cold War Confrontation: The OSP Programs by the Mid-1950s

By the mid-1950s, combined cooperation in the military and economic fields under the umbrella of NATO's off-shore procurements had involved practically all NATO member states. However, most member states had purchasing, supplying or manufacturing preferences that periodically brought negotiations on the off-shore contracts among NATO partners to the verge of a standstill. Furthermore, some member nations frequently linked their role in and commitment to meeting NATO defence programs to the Atlantic Alliance's efforts to deploy sufficient forces to protect their territory and borders at the apex of the Cold War confrontation.
Other NATO member states bargained over their contribution to the OSP programs by promoting or protecting the supplier role that their national industry had carved out in the years since NATO had launched its procurement programs. It is worth referring to some cases according to their position on the demand or supply side of the off-shore procurement programs. On the demand side of NATO procurements, the case of Denmark and its aeronautics is certainly noteworthy: since the beginning of NATO's coordinated production programs and burden-sharing, Denmark had taken advantage of the leading and outstanding British-manufactured fighters and equipment. In 1954, within the framework of a debate between the United States, NATO and Denmark about equipping the Danish air force with all-weather fighters, the government in Copenhagen showed its reluctance to accept offers from the Atlantic Alliance for supplies from national manufacturing industries other than British firms. By that year the country, which had only one squadron of interim all-weather fighters equipped with British NF Mk 11s, relentlessly refused American and NATO offers of American Sabres or Canadian-built all-weather fighters and insisted on British delta-wing Javelins, which at the time NATO could not supply. This Danish preference for British supplies was explained both by the leading and longstanding experience of the British aircraft and mechanical industry and by a firm Danish preference for British radar equipment. Furthermore, in 1953 Denmark rejected an American offer to establish in the country two fighter wings with 150 aircraft, because it considered the NATO land forces deployed to protect the country's southern flank too weak [39]. Turning to the supply side, this was certainly the case for every national European industry that, from 1950 onward, under the Atlantic Alliance's defence targets and procurement programs, could either revive a longstanding tradition in specific manufacturing sectors — combining the resurrection of production lines with further capital investment and technical upgrading — or supply NATO partner countries with services, such as those of the patent industry, in which a specific national economy had a well-established tradition. As pinpointed in this contribution, the pillar national manufacturing industry — whose production lines, stimulated by NATO procurement programs and paid for under either the NATO budget or the United States Mutual Security Program, were the epicentre of the process of industrial mobilisation for military purposes and of the related intra-European exchanges in raw materials, services and end items — was certainly the British economy. As before the full implementation of the off-shore contracts, from 1951-52 British industry continued to supply NATO, for transfer to other member states, with the largest amount of manufactured goods, spare parts, instrumental goods and services. As seen, since the beginning of NATO's multilateral procurement programs the United States maintained a substantial role, in that the U.S. Mutual Security Program financed production placed with manufacturing member economies for transfers to meet the Alliance's defence targets in other member nations, and was complementary to dollar allotments under NATO burden-sharing. From 1951 to 1952 France took advantage of this procurement mechanism: as a matter of fact, Paris led all the other manufacturing countries as the largest recipient of dollar aid.
From fiscal year 1952-1953, however, Britain took the lead, accounting for nearly half of the total contracts, including both NATO off-shore contracts and contracts placed with European industry to supply the U.S. troops stationed in Europe. If one breaks down these data and examines the following fiscal year, one can see that Britain increased its share of total dollar aid and industrial production, and that its manufacturing industry received increasing funds and orders within the framework of the Alliance's off-shore contracts: total NATO off-shore orders for military equipment for NATO forces in fiscal year 1953/1954 amounted to $395 million. With a substantial increase relative to the previous year, the contracts distributed to Britain rose to $193 million [40]. Considering that by that fiscal year total off-shore contracts exceeded the allotments for supplies, equipment and maintenance costs of the American troops deployed in Europe, which that year were equal to $207 million, we can see the longstanding central role of British manufacturing and service industry in the military procurements of the Atlantic Alliance's defence effort since the late 1940s. Therefore, by the mid-1950s, on the one hand the total amount of OSP contracts outpaced the share of allotments financing the stationing of U.S. troops in Europe within the total value of contracts placed with the military-industrial complex for military purposes; on the other hand, British industry was well established and led all the other manufacturing nations contributing to the OSP programs of NATO and its member states, by then the core industrial procurement program of the Atlantic Alliance. Continuing with the supply side of the OSP programs, while Britain retained and strengthened its pivotal role, other manufacturing economies took advantage of the OSP programs to expand their production levels, restart unused production lines and reduce unemployment in specific manufacturing sectors, particularly in a wide range of labour-intensive or low capital-intensive mechanical industries that worked as subcontractors for the national defence industries or under the Atlantic Alliance's procurements. It is worth looking, in this respect, at the mechanical sectors working on procurements for the aeronautical industry, and in particular at the role of the Italian mechanical and metalworking industry during the Eisenhower administration. It is true that, from the arrival of the new U.S. ambassador to Rome, Luce — as widely stressed in the literature on the subject — bilateral negotiations between Italy and the United States on the number and financial amount of OSP contracts placed with the Italian mechanical industry working on aeronautical production revolved around more than the financial commitment of the Italian governments to increase their defence appropriations. Rather, the amount of orders placed with Italian firms was based on the willingness or recalcitrance of the Italian firms — first and foremost the Turin-based Fiat group — to curb and marginalise any leftward and supposedly Soviet-inspired trade union: in 1954 this kind of American pressure on the Italian business community, which the ambassador to Rome shared with prominent Congressional committees and members such as the Senate Appropriations Committee [41], contributed to the failure of the leftward CGIL to elect its representatives in the Turin polls [42].
Notwithstanding this shift of the OSP negotiations onto political issues, by the middle of the 1950s the Italian mechanical and metalworking industrial complex was still a critical provider of spare parts, munitions, end items and instrumental goods, within the framework of both the OSP programs and the national military build-up targets that the Italian army was bound to meet according to the Atlantic Alliance's defence objectives. At the time, both firms from the national military-industrial complex and mechanical industries from the civilian manufacturing sectors made a critical contribution to the Atlantic Alliance's defence and security targets. Furthermore, the national industries very often took advantage of such involvement in military procurements to increase their trade exchanges with European partner industries and to internationalise their markets, mostly at the European and transatlantic level. Among traditional Italian military industries, a case in point at the time was that of munition firms such as Borletti: founded at the end of the nineteenth century as a firm specialising in precision mechanical products, during the twentieth century this company developed a manufacturing specialisation in timing devices and fuses for artillery shells, rockets, airborne bombs, land and sea mines, and demolition charges. In the first half of the 1950s Borletti was assigned important OSP contracts to supply fuses and timing devices to NATO and SEATO services. More importantly from the research perspective of this contribution, such involvement in the OSP programs both increased Borletti's exports and prompted the firm to increase its capital-intensive investments to keep pace with the technological drift required to produce some of these military components: its facilities developed updated high-precision machinery and automatic assembly machines [43].

Conclusions

This contribution has pinpointed how, from its inception at the very beginning of the 1950s, the military build-up of NATO was conceived by the Alliance in pursuit of defence targets, financial stability and trade integration, and economic growth in each member state. Likewise, it has pinpointed the two lines of financial contribution to this collective effort: from within the Alliance, and from the contracts directly placed with European firms by the U.S. Department of Defence. From the viewpoint of the economic implications of NATO's defence efforts, the main effects were intensified trade integration and cooperation in manufacturing among the European economies, and between them and the United States. In turn, such cooperation prompted each national economy to improve its country-specific production specialisation and to maximise its resources. This process stimulated increased production and supply capacity in the leading low capital-intensive European sectors and led the most technically advanced sectors to confront rising competition by increasing capital investment and developing their technologically advanced production lines, as in the case of the aircraft industry. We have associated these developments across the European economies under the aegis of NATO rearmament, financially supported by both the Pentagon and the NATO budget, with a specific program: the off-shore procurements. Though they featured continuities with earlier U.S.
bilateral military assistance programs carried out under the MDAP, they worked as a proper flywheel for trade integration, industrial cooperation and technological drift across the European economies that were partners of the Atlantic Alliance. Even amid the fierce criticism of the OSP programs that arose within the United States in the mid-1950s, American policymakers recognised their importance in raising the European standard of living, expanding European industrial production and providing critical conditions for strengthening the military security of the old continent. According to a traditionally highly critical U.S. Congressional committee such as the Senate Appropriations Committee, in some cases, such as the aeronautical industry, the spill-over effects of NATO off-shore procurement contracts on the European economies went so far as to spotlight the «excellent» potential production capabilities of certain European manufacturing sectors [44]. From the mid-1950s through to the end of the decade, the economic objective of the Mutual Security Program — and with it production cooperation for military rearmament under the umbrella of Washington and NATO — went well beyond the scope of fostering the integration and technological upgrading of the formerly belligerent European economies. As a matter of fact, although the 1959 law passed by the U.S. Congress stressed once again the objective of strengthening the defence and economic growth of allied friendly nations, military assistance rose to a global stage: the very purpose of promoting autonomous military production was by then extended to developing nations within the framework of shaping regional defence arrangements to confront Soviet aggression across the globe. In this perspective, by the end of the 1950s the U.S. government broadened the role of economic assistance for military purposes to push forward international market integration: by the time the European allied economies had achieved currency convertibility, the Mutual Security Program's commitment to promoting international trade and monetary integration had gone global: «MSP policies will complement other efforts by the U.S. to foster a high level of international trade and investment within the Free World, including: continuing to press strongly for a general reduction of trade barriers within the Free World; encouraging the further extension of convertibility of currencies and the elimination of discriminatory trade and currency restrictions; and, encouraging private enterprise and investment for Free World Development, especially in the less developed nations.» [45].
A systematic comparison of deep learning methods for EEG time series analysis

Analyzing time series data like EEG or MEG is challenging due to noisy, high-dimensional, and patient-specific signals. Deep learning methods have been demonstrated to be superior in analyzing time series data compared to shallow learning methods which utilize handcrafted and often subjective features. Especially, recurrent deep neural networks (RNN) are considered suitable to analyze such continuous data. However, previous studies show that they are computationally expensive and difficult to train. In contrast, feed-forward networks (FFN) have previously mostly been considered in combination with hand-crafted and problem-specific feature extractions, such as short-time Fourier and discrete wavelet transform. Easily applicable methods that efficiently analyze raw data, removing the need for problem-specific adaptations, are therefore sought after. In this work, we systematically compare RNN and FFN topologies as well as advanced architectural concepts on multiple datasets with the same data preprocessing pipeline. We examine the behavior of those approaches to provide an update and guideline for researchers who deal with automated analysis of EEG time series data. To ensure that the results are meaningful, it is important to compare the presented approaches while keeping the same experimental setup, which to our knowledge was never done before. This paper is a first step toward a fairer comparison of different methodologies with EEG time series data. Our results indicate that a recurrent LSTM architecture with attention performs best on less complex tasks, while the temporal convolutional network (TCN) outperforms all the recurrent architectures on the most complex dataset, yielding an 8.61% accuracy improvement. In general, we found the attention mechanism to substantially improve classification results of RNNs. Toward a light-weight and online-learning-ready approach, we found extreme learning machines (ELM) to yield comparable results for the less complex tasks.
KEYWORDS: recurrent neural networks, feed-forward neural networks, time series analysis, attention, transformer networks

1. Introduction

Electroencephalography (EEG) is a non-invasive method for recording and analyzing brain activity. Given the low amplitude of the recorded signal, even an eye blink or unintentional muscle contractions create noise in the recordings, complicating the identification of a patient's mental condition. To overcome this problem, researchers traditionally focused on handcrafted feature extraction based on, e.g., short-time Fourier transform (STFT) (Griffin and Lim, 1984), discrete wavelet transform (DWT) (Shensa, 1992), or tensor decomposition (Naskovska et al., 2020) to remove noise and focus on the relevant signals. Typically, the generated spectrograms are represented as images and then classified by, e.g., feed-forward networks (FFNs) (Montana and Davis, 1989). Automation of such analyses not only requires high accuracy; their embedding into usage scenarios, such as neurofeedback applications (Hammond, 2007) or brain-computer interfaces (BCI) (Schalk et al., 2004) to classify mental states, also requires efficient processing. However, these methods have to be recalibrated manually for image generation when specific parameters, e.g., the sampling frequency, change. This step requires extensive expert knowledge, as otherwise important features might be neglected during preprocessing. Furthermore, these methods can be time-consuming if the number of EEG channels increases, since some of them perform a window- and channel-wise time-frequency analysis (Tabar and Halici, 2016). Hence, previous studies often merely evaluate their methods on low-channel EEG data, i.e., with fewer channels than the clinical routine of 21 (Tabar and Halici, 2016; Ni et al., 2017; Mert and Celik, 2021; Yilmaz and Kose, 2021). In the last decade, gated recurrent neural networks (RNN) like long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) and gated recurrent unit (GRU) (Chung et al., 2014) have been demonstrated to yield superior results when analyzing and classifying time series without the need for complex preprocessing and hand-crafted feature extraction. Thereby, manual configuration effort and the need for expert knowledge in signal analysis can be drastically reduced, while achieving state-of-the-art results. These approaches face a constant evolution aimed at increasing their predictive power, with notable improvements including bidirectional RNN topologies and the attention mechanism, which has stimulated many new network topologies beyond RNNs. More recent studies propose time-convolving neural networks and demonstrate that they can yield high predictive performance on time series like audio signals (Oord et al., 2016; Bai et al., 2018).
More specifically, Bai et al. (2018) propose a network topology based on temporal convolutions which achieves remarkable results on popular datasets, thereby outperforming LSTM and GRU topologies. In contrast to these more complex approaches, methods based on simplified RNNs like echo state networks (ESN) have also achieved good (Bozhkov et al., 2016) or even superior (Sun et al., 2019) results. As an FFN-based counterpart of ESNs, we refer to extreme learning machines (ELM), which were utilized for EEG classification tasks by Tan et al. (2016) and Liang et al. (2006), reaching superior results while further reducing the computational complexity. In this paper, we systematically compare a large variety of RNN and FFN topologies as well as the influence of topological variants, e.g., bidirectional networks and attention mechanisms, for EEG analysis. We do not focus on a specific medical application, but rather aim to compare the performance of each network topology based on benchmark EEG recordings. To the best of our knowledge, recurrent and feed-forward topologies have never before been compared on the same EEG dataset and with the same preprocessing pipeline. We evaluate all approaches on three different EEG datasets: the well-known benchmark DEAP, a seizure detection task, and an in-house frequency entrainment dataset. Thus, we aim to answer the following research questions:
(RQ 1) Recurrent topologies: Which recurrent topology shows advantages for EEG time series classification in a comparison between non-gated, gated, and random high-dimensional mapping approaches?
(RQ 2) Feed-forward topologies: Are feed-forward topologies based on convolution and self-attention suitable for EEG time series classification without further preprocessing methods?
(RQ 3) Advanced architectural concepts: Can extensions for LSTMs, like attention and bidirectionality, improve the performance of these networks for EEG time series classification?
Our results indicate that feed-forward networks yield advantages compared to RNNs without additional concepts. Nonetheless, applying attention to RNNs yielded notable performance increases and even surpasses feed-forward topologies for some of the investigated datasets. The rest of the paper is organized as follows: Section 2 provides a brief summary of use cases and problems related to automated EEG analysis and introduces a step-by-step explanation of the typical workflow from the recording of the raw EEG signal to the final analysis result. Furthermore, the studied network topologies are discussed in detail, and the different topological variations, like bidirectional networks and attention, are explained. In this section, we also explain the used datasets, input representation, and chosen parameters for each of the trained network architectures. In Section 3, we show different classification strategies and approaches reported in various publications, organized by the preprocessing methods and architectures used. Additionally, we discuss the different results for each of the presented topologies. Last, we discuss some limitations of our work, introduce potential future research directions, and conclude on the different methods compared in this paper.

2.1. Applications and problems of EEG analysis

In general, analyzing EEG data is a challenging task with many difficulties (Vallabhaneni et al., 2021). Due to typically low-amplitude signals in the µV range (cp. Figure 1A), small interferences can distort a signal, making it unusable
(cp. Figure 1B, red section, compared to ordinary EEG recordings). We denote as interference any part of a signal that is not directly generated by brain activity, or brain activity that is not directly produced as the result of an experimental stimulus. It is hard to remove interferences from a signal, since they often show characteristics similar to the actual signal. To remove transient interferences before analyzing an EEG signal, various methods have been proposed, e.g., linear regression or blind source separation (Urigüen and Garcia-Zapirain, 2015). Nevertheless, none of them works perfectly, and remaining interferences may cause erroneous analysis results (Hagmann et al., 2006). Another problem can be the placement and number of electrodes that capture brain activity. Not all regions of the brain are equally active during experiments, and some regions are more dominant than others. When fewer electrodes are used, activations could be missed during the recording, resulting in missing features. To avoid such errors, it is advisable to use a higher number of electrodes and to cover all areas of the head. As the number of electrodes increases, however, the time and effort required to preprocess the data increase as well. This can be critical for time-frequency transforms, which typically process signals channel- or window-wise (Li et al., 2016; Tabar and Halici, 2016). In recent years, deep learning neural network approaches have been applied to a wide range of neuroscientific problems like feedback on motor imagery tasks (MI) (Tabar and Halici, 2016), emotion recognition (Ng et al., 2015), seizure detection (Thodoroff et al., 2016), and many other tasks (Gong et al., 2021) (see Table 4). These studies typically apply standard convolutional and recurrent neural networks (Craik et al., 2019). Many studies use handcrafted features as input for deep neural networks. However, extracting features can be time-consuming and often requires expert domain knowledge to obtain features which represent the signal correctly. To avoid loss of information during the preprocessing phase, the aim of neurobiological analysis should be an analysis of raw data: if more information is provided to the neural network, better results can be expected. To the best of our knowledge, no study exists that systematically compares feed-forward and recurrent neural networks in all their flavors for raw-signal EEG data analysis.

2.2. Automated EEG analysis workflow

In this subsection, we discuss the workflow for automated EEG data analysis from the recording of data to the eventual prediction (cp. Figure 2).

2.2.1. Signal acquisition

We focus on EEG recordings as a non-invasive and cost-efficient method to measure brain activity with electrodes placed directly on the scalp (Craik et al., 2019) (cp. Figure 2).

2.2.2. Preprocessing

Preprocessing of data, such as filtering the signal and removing interferences, is an important part of training neural networks in general. Poorly preprocessed data ultimately yield poor network inference performance, which can hardly be compensated by training methodology and network topology (Hagmann et al., 2006). This processing is particularly important for EEG signals which, due to their low amplitude, can be strongly altered by only small influences such as unintended muscle contractions. For this reason, almost all EEG data are bandpass filtered directly after recording to remove noise distorting the signal. An often used frequency range for EEG data analysis is 1-40 Hz.
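As an illustration of this step, such a band-pass filter can be implemented in a few lines; the sketch below uses SciPy's Butterworth design with zero-phase filtering, where the 1-40 Hz band, the 250 Hz sampling rate, and the filter order are placeholder assumptions rather than settings prescribed by the studies discussed here.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_filter(eeg, fs=250.0, low=1.0, high=40.0, order=4):
    """Zero-phase Butterworth band-pass filter applied channel-wise.

    eeg: array of shape (n_channels, n_samples)
    fs:  sampling rate in Hz (assumed; depends on the recording device)
    """
    nyq = 0.5 * fs
    # cut-off frequencies are normalized to the Nyquist frequency
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    # filtfilt runs the filter forward and backward, avoiding phase shifts
    return filtfilt(b, a, eeg, axis=-1)

# toy usage: 32 channels, 10 s of synthetic data at 250 Hz
raw = np.random.randn(32, 2500)
filtered = bandpass_filter(raw)
```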
The filter range might also depend on the experimental setup during the EEG recording. Transient interference removal is another important part of preprocessing. Interferences influence a signal in a significant way and often distort it such that it is nearly impossible to recognize its actual waveform (cp. Figure 1). Different methods such as linear regression or blind source separation were proposed to remove interferences. For heavily distorted signals, as shown in Figure 1, a threshold detection can track and remove the interference. After removing interferences and noise, the preprocessed data can be used as input for deep neural networks.

2.2.3. Window slicing

FIGURE 2. Overview of the workflow for processing EEG data: (1) signal acquisition: EEG data are recorded; (2) preprocessing: recorded data are preprocessed and noise is removed by filters; (3) window slicing: the resulting waveforms are divided into windows of equal size, which may overlap; and (4) model training: on the windowed and preprocessed waveforms.

EEG signals may contain many data points, depending on the sampling rate and duration of a recording. Often, it is not feasible to analyze a complete recording due to prohibitive compute and memory requirements which result from an excessive input length. It is, therefore, common to apply window slicing to generate data frames and to incrementally analyze these smaller snippets of a signal rather than a whole recording at once (Tabar and Halici, 2016; Gao et al., 2019). Thereby, the size of a window and a potential overlap of successive windows are hyper-parameters of the respective analysis and depend on its goal (cp. middle of Figure 2). For example, the detection of slow theta brain waves requires larger windows to capture a full wave within the window, while alpha and beta brain waves can be captured in a smaller window.
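To make the window-slicing step concrete, a minimal sketch is given below (array shape, window length, and step size are assumptions of this example, not the settings used in the study):

```python
# Minimal sketch: slice a (n_channels, n_samples) EEG array into
# possibly overlapping windows; step < win_len yields an overlap.
import numpy as np

def slice_windows(signal, win_len=512, step=256):
    n_channels, n_samples = signal.shape
    starts = range(0, n_samples - win_len + 1, step)
    return np.stack([signal[:, s:s + win_len] for s in starts])

# windows = slice_windows(eeg, win_len=512, step=256)
# -> array of shape (n_windows, n_channels, 512)
```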
2.2.4. Model training

The goal here is to select, parameterize, and train a suitable model architecture. Below, we discuss model topologies applicable for analyzing and specifically classifying EEG time series data (cp. Figure 3), which we then systematically evaluate on different EEG datasets in Section 2.5. Once the initial architectural choice is made, hyper-parameters are varied and optimized to improve prediction performance. In this work, we study a variety of different topologies. These include the basic RNN as well as the most prominent recurrent networks, GRU and LSTM, to investigate the advantages of gated cells. As representatives for feed-forward networks, we use the TCN and the Transformer-Encoder topology, since both of these models have shown superior results for raw time series prediction (Ingolfsson et al., 2020). Lastly, we include the ESN and ELM as reservoir computing models, since these are often overlooked in the literature but have shown promising results in high-dimensional time series prediction (Pandey et al., 2022; Viehweg et al., 2022).

2.3. Recurrent neural networks

Recurrent neural networks (RNN) (Rumelhart et al., 1988) are especially suitable to process sequential data as their topology contains feedback loops that enable the network to build up and maintain a state, sometimes referred to as memory. In contrast, a feed-forward topology (FFN) does not offer this capability and is stateless in between different inputs.

2.3.1. Basic RNN

The key concept of an RNN is the cell state $c^{(t)}$ that is connected via weight matrices in a network topology. For the basic RNN cell, the cell state $c^{(t)}$ is calculated as

$c^{(t)} = \tanh(W_{cc} c^{(t-1)} + W_{cx} x^{(t)} + b), \qquad (1)$

where $x^{(t)}$ is the current input, $W_{cc}$ and $W_{cx}$ are weight matrices, and $b$ is a bias term. By incorporating state $c^{(t-1)}$ in this calculation, the current state is influenced by the previously shown sequence. In theory, a basic RNN cell (cp. Figure 4A) should be capable of classifying long input sequences. However, in practice these cells suffer from vanishing and exploding gradient problems when longer sequences are processed, while long-term relationships within EEG input data are relevant for signal analysis. To mitigate these problems, gated recurrent neural networks, most prominently the long short-term memory (LSTM) and the gated recurrent unit (GRU), have been proposed. These networks are considered among the most effective sequence modeling techniques today. While the basic RNN cell consists of a single layer with tanh activation, LSTM and GRU cells are more complex. Their key concept is different gates added to each of the states (cp. Figures 4B, C). These gates can learn what information is more or less relevant for further processing and regulate the flow of information through the network. A different approach that aims to overcome the problems of gradient-descent-based learning are echo state networks (ESN) that use randomly initialized reservoir weights and merely a non-iterative learning of the output weights.

2.3.2. Long short-term memory

The LSTM cell (Hochreiter and Schmidhuber, 1997) consists of three gates that shall help to overcome the problem of vanishing and exploding gradients (cp. Figure 4B). The first gate within an LSTM cell is a forget gate $f^{(t)}$ computing what information is required in the current cell state:

$f^{(t)} = \sigma(W_{fh} h^{(t-1)} + W_{fx} x^{(t)} + b_f), \qquad (2)$

where $W_{fh}$ and $W_{fx}$ are weight matrices, $b_f$ is the bias, $h^{(t-1)}$ is the previous hidden state, and $x^{(t)}$ is the current input value. The output passes a sigmoid activation function $\sigma$ bounded between 1, i.e., information is fully required, and 0, i.e., information is unnecessary. The second gate is the update gate $i^{(t)}$. It controls how much of the current input is considered when computing the new cell state (with weight matrices and bias analogous to the forget gate):

$i^{(t)} = \sigma(W_{ih} h^{(t-1)} + W_{ix} x^{(t)} + b_i), \quad \tilde{c}^{(t)} = \tanh(W_{ch} h^{(t-1)} + W_{cx} x^{(t)} + b_c), \qquad (3)$

where $\tilde{c}^{(t)}$ refers to the tanh-activated input at time step $t$. Analogous to the forget gate, the gate uses a sigmoid function which determines the importance of the respective information as $i^{(t)}$. The new cell state $c^{(t)}$ then becomes the combination of the information passing through the forget and the input gate, respectively: $c^{(t)} = f^{(t)} \odot c^{(t-1)} + i^{(t)} \odot \tilde{c}^{(t)}$. Finally, the output gate $o^{(t)}$ controls which information of the cell state is incorporated into the cell's current output $y^{(t)}$ and hidden state $h^{(t)}$, respectively:

$o^{(t)} = \sigma(W_{oh} h^{(t-1)} + W_{ox} x^{(t)} + b_o), \quad h^{(t)} = o^{(t)} \odot \tanh(c^{(t)}), \qquad (4)$

with the cell's output $y^{(t)}$ equal to $h^{(t)}$.

2.3.3. Gated recurrent units

The GRU cell (Chung et al., 2014) was introduced in 2014 and is a simplification of the LSTM cell. The idea is to combine forget gate and input gate into a single relevance gate $r^{(t)}$ (cp. Figure 4C). By combining them, one weight matrix can be neglected, the cell state and hidden state are merged together, and the GRU cell is therefore supposed to be faster to train. Analogous to the LSTM cell described above, the state of the relevance gate $r^{(t)}$, the state of the update gate $z^{(t)}$, and the hidden state $h^{(t)}$ are computed as follows:

$r^{(t)} = \sigma(W_{rh} h^{(t-1)} + W_{rx} x^{(t)} + b_r),$
$z^{(t)} = \sigma(W_{zh} h^{(t-1)} + W_{zx} x^{(t)} + b_z),$
$h^{(t)} = (1 - z^{(t)}) \odot h^{(t-1)} + z^{(t)} \odot \tanh(W_{hh}(r^{(t)} \odot h^{(t-1)}) + W_{hx} x^{(t)} + b_h). \qquad (5)$

With the help of gates, GRU and LSTM (cp. Figure 5A) are supposed to be able to analyze longer sequences without being affected by vanishing gradients. Both variations are very popular for analyzing sequential data.
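As a minimal illustration of how such gated cells are assembled into a window classifier (a sketch with assumed layer sizes and input shape, not the exact architecture used in this study), consider the following Keras snippet:

```python
# Minimal sketch: gated recurrent classifier for EEG windows of shape
# (time steps, channels); all sizes here are illustrative assumptions.
from tensorflow.keras import layers, models

def build_gated_classifier(cell="lstm", n_steps=512, n_channels=32, n_classes=2):
    Recurrent = layers.LSTM if cell == "lstm" else layers.GRU
    model = models.Sequential([
        layers.Input(shape=(n_steps, n_channels)),
        Recurrent(64),  # gated recurrent layer summarizing the window
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```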
While GRUs are more cost-efficient due to fewer parameters, the LSTM offers more training capacity but requires more computational power and longer training time.

2.3.4. Echo state networks

An alternative approach to potentially overcome the problems of gradient-descent-based training is the non-iteratively trained echo state network (ESN) (Jaeger, 2001). ESNs are a prominent RNN architecture that realize the reservoir computing paradigm (Verstraeten et al., 2007). An ESN consists of three core layers: the input layer, the reservoir layer, and the output layer. Only the weights of the output layer are trained. All other weights are typically randomly initialized from a uniform distribution, i.e., those of the input layer $W_{hx} \in \mathbb{R}^{N_{res} \times N_{in}}$ and those of the reservoir layer $W_{hh} \in \mathbb{R}^{N_{res} \times N_{res}}$. A reservoir layer can be considered as a simplified RNN cell without most of the trainable parameters (cp. Figure 4A) and is denoted as

$h^{(t)} = (1 - \gamma)\, h^{(t-1)} + \gamma\, f(W_{hh} h^{(t-1)} + W_{hx} x^{(t)}), \qquad (6)$

where $f$ is an activation function, typically tanh, and $\gamma$ is the leakage rate that determines how much of the ESN's previous hidden state is added to compute the new hidden state $h^{(t)}$. During the learning phase, a single training sequence $S_T$ with length $T$ is utilized to compute the respective hidden states $\{h^{(i)}, \ldots, h^{(i+T)}\}$. The learning phase of an ESN is separated into two steps. First, an initialization phase is done whereby the states $\{h^{(0)}, \ldots, h^{(i-1)}\}$ are discarded, but the activation for each respective neuron is initialized (Jaeger, 2001). This process is often referred to as the washout phase (Malik et al., 2016). Second is the training phase, where the previous hidden states are added to the current hidden states, in relation to the leakage rate $\gamma$. The resulting matrix $H \in \mathbb{R}^{N_{res} \times T}$, which is based on the hidden states, is then mapped to the expected outputs $Y \in \mathbb{R}^{N_{out} \times T}$ via a linear regression with $y^{(t)} = W_{yh} \cdot h^{(t)}$ according to

$W_{yh} = Y H^{\top} (H H^{\top} + \beta I_{N_{res}})^{-1}, \qquad (7)$

with $\beta$ as regularization coefficient and $I_{N_{res}}$ as unity matrix. For classification tasks, we train a reservoir for each class $c$ within the dataset. We call this an ensemble of predictors: for evaluation, each sample is processed by each predictor and is assigned to the class whose predictor yields the lowest prediction error (Forney et al., 2015).
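A minimal NumPy sketch of this reservoir computation and ridge-regression readout is given below (reservoir size, leakage rate, washout length, and the spectral-radius scaling are illustrative assumptions):

```python
# Minimal ESN sketch: run a leaky reservoir over one sequence and fit
# the readout by ridge regression; all sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, gamma, beta = 32, 500, 0.5, 1e-4

W_hx = rng.uniform(-0.5, 0.5, (n_res, n_in))
W_hh = rng.uniform(-0.5, 0.5, (n_res, n_res))
W_hh *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_hh)))  # spectral radius < 1

def collect_states(x, washout=100):
    # x: (T, n_in) -> H: (n_res, T - washout)
    h, states = np.zeros(n_res), []
    for t, x_t in enumerate(x):
        h = (1 - gamma) * h + gamma * np.tanh(W_hh @ h + W_hx @ x_t)
        if t >= washout:          # discard the washout states
            states.append(h.copy())
    return np.array(states).T

def fit_readout(H, Y):
    # ridge regression: W_yh = Y H^T (H H^T + beta I)^(-1)
    return Y @ H.T @ np.linalg.inv(H @ H.T + beta * np.eye(n_res))
```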
2.3.5. Bidirectional architecture

In some applications, it can be helpful to process a sequence's previous as well as future information simultaneously. That is the concept of a bidirectional RNN combining two RNN layers, one for processing input data in a forward manner and one for processing input data in a reverse manner (Schuster and Paliwal, 1997) (cp. Figure 5B). The outputs of both layers are concatenated and eventually processed by a fully connected layer. This architectural approach is applicable to any RNN cell and has often been demonstrated to improve network performance when processing complex sequences in general (Huang et al., 2015; Yin et al., 2017) and to analyze EEG data (Ni et al., 2017; Chen et al., 2019). Ogawa et al. (2018) found that a bidirectional architecture improves accuracy by 1.1% in comparison to a basic RNN model for video classification based on user preferences.

2.3.6. Attention

The attention mechanism is an imitation of human behavior. Rather than considering the entire previous input when computing the next output, a network learns which previously computed hidden states are beneficial to compute an output for a given new input. This approach is also applicable to any RNN cell and even to feed-forward networks, as we will discuss in the next subsection. Attention computes the relation between the current input $x^{(t)}$ and previous inputs $\{x^{(1)}, \ldots, x^{(t-1)}\}$, represented as hidden states $\{h^{(1)}, \ldots, h^{(t-1)}\}$, with the help of an attention layer (Bahdanau et al., 2014; Cheng et al., 2016) (cp. Figure 5C). The attention calculation results in a distribution of probabilities over the previous values. With this probability distribution $s_i^t$, an adaptive summary vector can be calculated. Cheng et al. (2016) propose to replace the previous hidden state $h^{(t-1)}$ used in Equations (2)-(4) by a cell and hidden memory tape $c^{(t)}$ and $h^{(t)}$, which contain all the previous cell and hidden states $\{c^{(1)}, \ldots, c^{(t-1)}\}$ and $\{h^{(1)}, \ldots, h^{(t-1)}\}$, respectively. Attention allows the network to give certain previous hidden states more weight in generating the current output than others. Thereby, rather than utilizing a single hidden state $h^{(t-1)}$, the network gains access to all previously processed hidden states and can weigh their importance.

2.4. Feed-forward networks

In contrast to recurrent neural networks, feed-forward networks like multilayer perceptrons (MLPs) and convolutional neural networks (CNNs) do not have any feedback connections between the output of a neuron and its input, i.e., input information $x$ passes a series of operations and only influences the network's current output $y$. Traditional feed-forward networks were therefore not well suited to analyze time series data. Due to their non-recurrent nature, temporal dependencies could not be modeled well, and extending the input size toward longer sequences became prohibitively expensive due to an exponentially growing number of parameters. However, there are more recent architectural concepts to overcome these limitations of FFNs in sequence processing while preserving their benefits over RNNs, i.e., parallelizable training and being less prone to vanishing and exploding gradients. Below, we discuss three fundamental approaches for applying feed-forward architectures to time series data classification.

2.4.1. Transformer

The feed-forward Transformer architecture makes extensive use of the attention concept. It has been demonstrated to achieve superior results especially in the field of natural language processing (NLP) in recent years (Vaswani et al., 2017). Each block of the Transformer consists of an attention layer, a fully connected layer, and a final classification layer. Residual connections are added around the attention and fully connected layer, followed by a layer normalization (cp. Figure 6). The attention mechanism is implemented as a multiplication of the input with three different weight matrices $W_{Qx}$, $W_{Kx}$, $W_{Vx}$ and computed as

$\mathrm{Attention}(Q, K, V) = s\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V, \qquad (8)$

with $Q$, $K$, and $V$ as Query, Key, and Value, respectively. The scaling factor is denoted as $d_k$ and the Softmax function as $s(\cdot)$. For solving NLP problems, such as machine translation, the Transformer typically follows an encoder-decoder structure (Vaswani et al., 2017). For classification problems, only the encoder without the decoder part is used, since only a single output conveying the classification result is required. Therefore, the model will be referred to as Transformer-Encoder in the rest of the paper.
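A minimal encoder-only classifier can be sketched in PyTorch as follows (embedding size, number of heads and layers, and the mean-pooling readout are assumptions of this illustration, not the exact configuration of the study):

```python
# Minimal sketch: Transformer-Encoder classifier over EEG windows of
# shape (batch, time, channels); all sizes are illustrative.
import torch.nn as nn

class EncoderClassifier(nn.Module):
    def __init__(self, n_channels=32, d_model=64, n_heads=4,
                 n_layers=2, n_classes=2):
        super().__init__()
        self.embed = nn.Linear(n_channels, d_model)  # per-time-step embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                  # x: (batch, time, channels)
        z = self.encoder(self.embed(x))    # self-attention over time steps
        return self.head(z.mean(dim=1))    # pool over time, then classify
```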
2.4.2. Temporal convolutional network

An alternative feed-forward architecture for the analysis of sequential data is the temporal 1D convolutional network (TCN), which is based on two key concepts (Bai et al., 2018). First, causal convolutions keep the temporal relationship between inputs, i.e., the input $x_t$ can only be convolved with earlier inputs $x_{t-n}$. Second, since a fully convolutional architecture would grow excessively in depth with an increasing input length, dilated convolutions (Oord et al., 2016; Bai et al., 2018) are proposed, which filter over larger input windows with a defined number of inputs being skipped. Figure 7 illustrates the dilated convolution concept, where the first hidden layer convolves each two successive input values while the second hidden layer convolves two inputs but skips the intermediate one. The dilation rate $\delta_i$ increases exponentially with each hidden layer added to the network, starting with a dilation rate of 1. The number of TCN layers can therefore be derived by calculating the logarithm of the maximum dilation rate, $\log_2(\delta_{max})$. Due to the dilation concept, TCNs are theoretically able to process sequences of any length without facing the problem of vanishing or exploding gradients. The amount of dilation per convolutional layer influences the receptive field $P$ of a network, calculated as

$P = 1 + \chi \sum_i (\lambda - 1)\, \delta_i, \qquad (9)$

where $\chi$ is the number of TCN blocks, $\lambda$ is the filter length, and $\delta_i$ is the dilation rate of the respective hidden layer. The example in Figure 7 consists of one TCN block, the last dilation is denoted as 4, and the filter size was set to 2; using Equation (9), this yields a receptive field of 8. With the same amount of parameters, the TCN has been evaluated against LSTM and GRU on common sequence modeling datasets and demonstrated comparable and often better performance across the various tasks (Bai et al., 2018).

2.4.3. Extreme learning machines

Huang et al. (2004) proposed the extreme learning machine (ELM), in which an input of lower dimensionality is mapped into a high-dimensional state space via a random mapping. The random mapping is defined as $W_{hx} \in \mathbb{R}^{N_{res} \times (N_{in}+1)}$ and $W_{hx} \sim U(-0.5, 0.5)$, with $U$ as uniform distribution and $N_{in}, N_{res} \in \mathbb{N}$ being the dimensionality of the input and the reservoir, respectively. With these mappings, the hidden state $h^{(t)}$ at time $t$ is calculated as

$h^{(t)} = f(W_{hx} \cdot [x^{(t)}; 1]), \qquad (10)$

with $x^{(t)}$ as the input at time step $t$ and $f(\cdot)$ as the activation function (the appended constant 1 accounts for the bias column of $W_{hx}$). These mappings are collected for $T \in \mathbb{N}$ time steps and then mapped to the correct output by calculating the output weights $W_{yh}$. Within the scope of this work, we view the data as a time series to predict. We use the approach of Forney et al. (2015) to learn $W_{yh}^c$ for each class $c$ and to classify samples of the validation dataset by the lowest prediction error.
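Mirroring the ESN sketch above, the ELM's random feature mapping and ridge readout can be illustrated in a few lines of NumPy (sizes and the regularization coefficient are assumptions of this sketch):

```python
# Minimal ELM sketch: random projection per time step plus a ridge
# readout; the extra input column of ones realizes the bias term.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, beta = 32, 500, 1e-4
W_hx = rng.uniform(-0.5, 0.5, (n_res, n_in + 1))

def hidden_states(X):
    # X: (T, n_in) -> H: (n_res, T)
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias column
    return np.tanh(W_hx @ X1.T)

def fit_readout(H, Y):
    # same ridge regression readout as for the ESN
    return Y @ H.T @ np.linalg.inv(H @ H.T + beta * np.eye(n_res))
```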
2.5. Experimental setup

We studied the four RNN topologies introduced above, i.e., the basic RNN, the GRU, the LSTM, and the ESN. Additionally, we studied them in a bidirectional architecture and added the attention concept. We also studied the three FFN topologies introduced above, i.e., the Transformer-Encoder, the TCN, and the ELM. Each of the network topologies is evaluated for intra-subject classification tasks.

2.5.1. Datasets

We utilize three datasets to comparatively evaluate the introduced methods. Two of those are known benchmarks in the field of EEG analysis: the seizure and the DEAP dataset. Furthermore, we added the much larger frequency entrainment dataset, since the feature learning effectiveness of deep neural networks heavily depends on large training sets. We describe the datasets as used in this study, based on the raw data generated from the mentioned measurements. In cases of frequency cut-offs applied during the measurement, we report them, but we do not use any additional statistics to imprint specific features into the dataset that were not found by the neural networks themselves. All datasets are available within the reported frequency ranges and are not preprocessed any further; the filtering is oftentimes done during the measurement procedure and can be part of the recording process.

2.5.1.1. Seizure dataset

The seizure dataset includes five different classes (Tzallas et al., 2009). Each class contains 100 single-channel EEG recordings. Classes Z and O have been recorded from five healthy participants with eyes open and closed, respectively. Classes F and N are measured at different brain regions, with F being recorded at the epileptogenic zone and N being recorded at the hippocampal formation, both without any seizures. Class S contains recordings of actual seizures. We define three classification tasks of increasing complexity for the seizure dataset, i.e., Task 1: S-Z, Task 2: S-N-Z, and Task 3: S-N-O-F-Z, that have been studied before and therefore allow for comparison with previous work (Tzallas et al., 2009).

2.5.1.2. DEAP dataset

The DEAP dataset is a public emotion recognition dataset where 32 participants watched 40 one-minute-long music videos while their neural activity was recorded with a 32-channel EEG cap (Koelstra et al., 2011). The electrodes were placed according to the 10-20 system. After watching a video, each participant was asked to rate the strength of their emotions on a Likert scale from 1 to 9 according to four classes: arousal, dominance, liking, and valence. Analogous to earlier studies, we derive four binary classification problems, one per emotion, distinguishing between a low (< 5) and a high (≥ 5) emotion rating.

2.5.1.3. Frequency entrainment dataset

Due to differing intrinsic brain oscillations of the participants, leading to different resonance and entrainment effects when using a fixed stimulation frequency for all participants, the actual stimulation frequency per participant was chosen relative to her or his individual alpha frequency (α), which was measured before the actual experiment. Each stimulation frequency was shown to a participant a total of 30 times with 40 light flashes. Brain activity was recorded using a 124-channel EEG. The data were recorded at a sampling rate of 1 kHz and then filtered between 2 and 30 Hz using a zero-phase Butterworth filter (Salchow et al., 2016), since the resonance and entrainment phenomena are expected in this frequency range and anything else is considered noise. The task for this dataset is to classify the respective light frequency a participant was exposed to based on the recorded EEG data. The task is especially challenging since a trained classifier needs to distinguish between almost identical frequencies, e.g., 0.50 × α and 0.55 × α. Moreover, the different frequencies stimulate almost the same brain regions. For frequencies above 1.30 × α, Salchow et al. (2016) describe that participants perceive the flashes as a continuous light instead of a flickering light, which makes the stimulation frequencies hard to distinguish.
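For the DEAP ratings just described, the derivation of the four binary label sets can be sketched as follows (the ratings array and its column order are assumptions of this illustration):

```python
# Hypothetical sketch: derive the four binary DEAP label sets from a
# (n_trials, 4) ratings array on a 1-9 scale (assumed column order).
import numpy as np

def binarize_ratings(ratings):
    labels = (ratings >= 5).astype(int)  # 1 = "high", 0 = "low"
    names = ["arousal", "dominance", "liking", "valence"]
    return {name: labels[:, i] for i, name in enumerate(names)}
```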
2.5.2. Preprocessing

Our evaluation differs from previous studies that often used customized and dataset-specific features for classification, such as Chen et al. (2019) and Du et al. (2020). However, in this study we mainly focus on papers that also evaluate their method on windowed signals. We argue that this approach, albeit possibly yielding worse accuracy, reflects a more realistic scenario of analyzing raw time series signals as model input. Therefore, we trained all networks on raw EEG recordings that were solely bandpass filtered to the frequency ranges reported in Table 1 to remove frequencies unrelated to the neural activity of interest (cp. Section 2.2). We removed the distorted channels 42 and 63 from the frequency entrainment dataset by comparing maximum signals across all channels and selecting those that strongly deviated from the average maximum. We assume that the problem arose from an electrode failure and was present for all participants.

2.5.3. Hyper-parameter tuning and training

We used the Kotila (2019) grid-search package to identify the most suitable hyper-parameters per dataset. More specifically, we used the recordings of one participant in a 70:20:10 train, test, validation split to perform this search for the RNNs, the TCN, and the Transformer-Encoder. We do not expect the hyper-parameters to differ substantially when tuning them for another participant, because of a similar data distribution: the standard deviation, mean, maximum, and minimum values across all participants are in similar ranges, and the recording procedure as well as the task do not change across participants. Since the seizure dataset provides only one dataset and is not divided per participant, the hyper-parameter search was done for the 3-class classification problem with the same split mentioned above. We searched for an optimal setting of window length, window step size, batch size, learning rate, momentum, learning rate decay, dropout ratio, network depth, hidden size, dilation rate (TCN only), scaling factor, and number of heads (Transformer-Encoder only). We utilized grid search as hyper-parameter optimization strategy. The upper and lower boundaries for each hyper-parameter are shown in Table 2. Additionally, we optimized the hyper-parameters for the ELM and the ESN based on a set of up to 100 randomly seeded weight matrices. Thereby, we searched for the most suitable parameterization of hidden size $N_r$, leakage rate $\gamma$, regularization coefficient $\beta$, density of the weight matrix $d(W_{hh})$ (ESN only), and spectral radius $\rho$ (ESN only). Table 3 shows the discovered hyper-parameters per dataset. (Notes to the tables: *AdamW is applicable to the Transformer-Encoder. **Dilation rate is applicable to the TCN. ***Scaling factor is applicable to the Transformer-Encoder. ****Hidden size for the Transformer-Encoder corresponds to the feed-forward network. *****Number of heads is only applicable to the Transformer-Encoder. ******Different optima for Arousal and Valence, as well as Seizure 2, 3, and 5, and between ESN and ELM, are given as maximum and minimum found optimal values.)

The TCN and all recurrent networks except the ESN were trained using the Keras framework on Tesla V100 GPUs with SGD or Adam as optimizers. The Transformer-Encoder was implemented with PyTorch. For the Transformer-Encoder, we used the AdamW optimizer, as this kind of network requires a different learning strategy than the other presented networks (Popel and Bojar, 2018).
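The exhaustive search over such a parameter grid can be sketched as follows (a hypothetical stand-in for the actual grid-search package; the grid values are illustrative):

```python
# Hypothetical grid-search sketch: try every combination and keep the
# configuration with the best validation accuracy.
import itertools

GRID = {
    "window_length": [256, 512, 1024],
    "batch_size": [32, 64],
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "hidden_size": [32, 64, 128],
}

def grid_search(train_and_score):
    best_acc, best_cfg = -1.0, None
    for values in itertools.product(*GRID.values()):
        cfg = dict(zip(GRID.keys(), values))
        acc = train_and_score(cfg)  # fit on train split, score on validation
        if acc > best_acc:
            best_acc, best_cfg = acc, cfg
    return best_cfg, best_acc
```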
We noticed that the recurrent architectures suffered from bad network initialization multiple times and did not improve during training. This was especially the case for the DEAP dataset, and thus the training had to be restarted. We did not observe this behavior during the training of the Transformer-Encoder and the TCN. This phenomenon is mentioned by other studies that describe a similar behavior as a characteristic of training RNNs (Sutskever, 2013). That is why we attribute the poor training behavior to the nature of RNNs rather than to the hyper-parameters chosen based on one specific participant. To compare the classification capabilities of each of the presented architectures, we used the accuracy metric. We applied early stopping during each training with a patience of 50 epochs to stop the training if the model does not improve anymore.
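In Keras, the early-stopping setup described above can be expressed as a callback (the monitored quantity and the weight restoration are assumptions of this sketch):

```python
# Sketch: early stopping with a patience of 50 epochs, assuming
# validation accuracy as the monitored quantity.
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor="val_accuracy", patience=50,
                           restore_best_weights=True)
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=1000, callbacks=[early_stop])
```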
3.1. Status quo in EEG classification with deep learning

Automatic EEG time series analysis has gained increasing interest in recent years due to the success of deep learning in a wide range of tasks (Gong et al., 2021). Various studies focused on EEG classification and have proposed interesting approaches to tackle the problem (cp. Table 4). Before transforming the recorded signals, all considered primary studies applied filtering methods to remove noise and restrict the analysis to relevant frequency ranges. The most commonly used filtering technique is the bandpass filter. Various preprocessing methods like discrete wavelet transform (DWT) and differential entropy (DE) have been proposed to extract representations like different frequency bands from raw EEG signals. However, the most common signal representations are time series, followed by selected frequency bands.

Table 4 shows CNNs and LSTMs as the most prominently studied model topologies. Yang et al. (2020) proposed a bidirectional LSTM for EEG classification tasks. They found that bidirectional architectures perform better for EEG analysis than LSTMs without this design. The attention mechanism has also been studied in combination with the LSTM topology to solve such tasks (Du et al., 2020). These publications report that the attention mechanism improves results by about 6-7% compared to LSTM architectures without attention. A popular approach is to combine two topologies, like CNN and LSTM. In this combination, the CNN is used as a feature extractor that delivers the input to the LSTM, which classifies based on these features. Cai et al. (2018), Isuru Niroshana et al. (2019), and Jeong et al. (2019) found that RNNs can benefit when a CNN is applied as feature extractor. But Cai et al.'s results also indicate that the combined architecture reduces the accuracy for some subjects. The most prominent datasets used by the authors include DEAP (Koelstra et al., 2011), BCI competition IV (BCIIV, 2008), PhysioNet (PhysMi, 2009), and SEED (SeedBci, 2013). Other publications evaluate their approaches on proprietary datasets representing, e.g., MI tasks (Lu et al., 2017; Tang et al., 2017; Cai et al., 2018) and emotion recognition (Choi and Kim, 2018; Keelawat et al., 2019). However, it is hard to compare the performance of the proposed methods even for the same dataset due to often varying experimental protocols, like choosing specific EEG channels or reducing the number of classes to distinguish between (Dose et al., 2018). This often leads to better-performing models because channels and classes that are hard to distinguish are removed. The best accuracy on the DEAP dataset was achieved by a TCN architecture with 72.9% for classification with windowed signals.

Many reviews of deep learning methods for EEG time series classification have been published (Craik et al., 2019; Gong et al., 2021; Vallabhaneni et al., 2021). However, none of them compare the reviewed methods with respect to the same experimental setup. We argue that a systematic comparison of the proposed as well as other deep learning methods is required to evaluate their potential for EEG analysis and to yield guidelines for data scientists and researchers in this area.

3.2. Comparative evaluation

Table 5 shows our measured classification performance on the test set of the three studied datasets (rows) for the seven network topologies introduced in Section 2 (columns). We ordered datasets and their tasks with increasing complexity from top to bottom in Table 5; the best-performing architectures for a specific task are marked in bold. Since the frequency entrainment dataset is unbalanced due to the different stimulation frequencies (cp. Section 2.5), we included the F1-score for each model. The following paragraphs discuss our results and observations with regard to the research questions stated in Section 2.5.

We observe widely varying classification accuracies across the different network topologies per dataset and task. In general, we observe a better performance of feed-forward topologies compared to recurrent topologies across most of the studied classification tasks (cp. Table 5). Recurrent as well as feed-forward topologies benefit from more advanced architectural concepts like gates in the LSTM and GRU topologies, attention in the Transformer-Encoder topology, or convolution in the TCN topology. These more advanced topologies achieve superior performance compared to less complex topologies, i.e., the basic RNN, the ESN, and the ELM. Furthermore, the more advanced topologies suffer less from a decreasing performance with growing input dimensionality, i.e., the number of analyzed channels, and problem complexity, i.e., the number of predicted classes. However, the advanced topologies performed better during training and oftentimes achieved training accuracy values of 95% and higher, but could not generalize well on the test set. This behavior indicates that these models overfitted. Reducing the model size and depth mitigated the overfitting problem, but also led to lower validation performance. When comparing the model parameters as shown in Table 6, we notice that larger models performed overall better in comparison to smaller ones. Nonetheless, when searching for the best possible set of hyper-parameters (cp. Table 2), even models larger than the ones reported in this work did not yield better results. Thus, we argue that the number of trainable parameters is not directly related to the overall performance of the model.

3.2.1. RQ 1: Recurrent topologies

A direct comparison of all recurrent networks shows that the basic RNN and the ESN yield the lowest accuracy across the different datasets and tasks. The basic RNN cell does not achieve results comparable to the other presented methods on seizure Task 1. Furthermore, the basic RNN shows a notable performance reduction with an increasing number of classes for the seizure tasks, as well as worse performance on the other, higher-dimensional datasets. Similar to LSTM and GRU, the ESN achieves 100% accuracy on the least complex seizure Task 1. However, we observe substantial performance deficits for all the other tasks, with an overall lower accuracy than the basic RNN.
We expected the ESN to perform better than the basic RNN, since Chattopadhyay et al. (2019) and Vlachas et al. (2020) have shown that the ESN is comparable with the LSTM and GRU on time series prediction. This was not the case for any of the evaluated datasets. We argue that the ESN's non-iterative learning approach is not sufficient to learn the features needed to distinguish between more similar classes.

For the gated recurrent networks, we observe that the GRU and LSTM consistently outperform the basic RNN as well as the ESN across all classification tasks, demonstrating that their advanced control of information flow allows them to better adapt to high-dimensional EEG time series. When comparing GRU and LSTM, we observe a better performance for the GRU across all datasets. For the DEAP as well as the frequency entrainment dataset, we tested whether the differences between both cells are significant by applying a statistical t-test. However, the results are not significantly different when comparing both cells directly. As already stated by some studies, GRU and LSTM perform similarly, and it is more important to find the best working parameter set than to choose between the architectures (Chung et al., 2014). Nevertheless, we consider the GRU superior compared to the LSTM due to the lower number of model parameters (cp. Table 6).

3.2.2. RQ 2: Feed-forward topologies

Overall, feed-forward topologies yield better performance than recurrent topologies. For the ELM, we observe performance trends comparable to the basic RNN and the ESN. The ELM cannot compete with self-attention and convolutional approaches and performs substantially worse on the other investigated tasks. Surprisingly, the ELM achieved the best performance for the DEAP arousal task. However, since the ELM follows a training process comparable to the ESN, we argue that the full-batch learning approach is not suitable for high-dimensional, hard-to-distinguish EEG recordings, as the results for our frequency entrainment dataset show.

The recently proposed Transformer-Encoder is designed to take advantage of large amounts of data, with the dataset used in Vaswani et al. (2017) being distinctly larger than the training datasets used in our study. While the Transformer-Encoder performed well compared to other approaches for Tasks 1 and 2 of the seizure dataset, its accuracy notably drops for Task 3 with five classes to differentiate. We argue that the complexity of the third task, paired with the relatively low amount of training data, resulted in the low accuracy. We observe a similar behavior for the DEAP dataset. However, the results from our frequency entrainment dataset demonstrate the true potential of Transformer-Encoder networks. Though having a higher input dimensionality in terms of analyzed EEG channels, the Transformer-Encoder is capable of outperforming the previously discussed topologies, yielding a 4.18% higher accuracy than the GRU architecture and achieving the second-best result across all topologies. We hypothesize that the results of the Transformer-Encoder can be further improved when sufficient and rich training samples are available.

The TCN, as another feed-forward approach, yields a rather constant performance across all investigated datasets. It achieves the highest accuracy across all studied topologies for most of the tasks. As observed for the other architectures, the accuracy of the TCN decreases with increasing problem complexity.
This behavior is demonstrated by the achieved accuracies for the different seizure tasks. Based on the results for the seizure dataset, we argue that the TCN is capable of extracting features even from a small number of training samples and can overcome the limitation of the Transformer-Encoder topology, which requires a large number of training samples. For the DEAP dataset, we observe that the TCN and Transformer-Encoder had problems distinguishing between high and low emotion classes and stayed almost at guessing level for the DEAP emotion task. We hypothesize that the information about the emotion is present in frequency ranges the TCN may not recognize well. In contrast, one specific property of recurrent architectures is that they usually forget important information lying far in the past. This property makes RNNs sensitive to higher frequency ranges, and one can argue that emotions are recognizable in higher frequency ranges; Zheng and Lu (2015) confirm this finding.

3.2.3. RQ 3: Advanced architectural concepts

We extended the previously trained LSTM with an attention mechanism, used it in a bidirectional setup, and studied both extensions simultaneously. Table 7 reports the results of these experiments. (Notes to Table 7: the colored numbers indicate the difference in comparison to an LSTM cell without the applied mechanisms; a lower-case letter (a) next to a reported value indicates significant differences between the LSTM without mechanisms and the respective variation; the best-performing architectures for a specific task are marked in bold.) For all tested datasets, attention yielded an increase in accuracy, with the largest being a 24.75% increase for seizure Task 3. This is comparable to the TCN for this task, and attention achieves the best results for seizure Task 2. For the DEAP and the frequency entrainment datasets, we also observe significant accuracy improvements compared to the LSTM without attention.

Some previous studies report a slight performance improvement when the LSTM cell is used in a bidirectional setup (Ni et al., 2017). In contrast, we observed a 0.01-2.81% degraded performance across all datasets except seizure Task 2 when applying this architecture. The benefit of the bidirectional setup heavily depends on the task, and we argue that a 'look-ahead' may be highly beneficial for sequence-to-sequence tasks like machine translation but is of less help when predicting a class based on a full sequence. The combination of attention and bidirectional setup yields an improved performance across most of the investigated datasets. However, for all seizure tasks as well as the frequency entrainment dataset, the performance is lower than that observed for the attention mechanism alone. Surprisingly, for the DEAP task, the combination of attention and bidirectional setup yielded an increased performance. We hypothesize that the combination of both attention and bidirectionality can be beneficial for some EEG classification tasks. However, the doubled number of weights due to the bidirectional LSTM does not pay off in terms of model performance and shows only minor improvements compared to the model utilizing only attention.
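For reference, the bidirectional variant discussed above can be constructed in Keras as follows (sizes are assumptions of this sketch; wrapping the cell doubles the recurrent weights, since one copy processes the sequence in reverse):

```python
# Minimal sketch: bidirectional LSTM classifier for EEG windows of
# shape (time steps, channels); all sizes are illustrative.
from tensorflow.keras import layers, models

bi_lstm = models.Sequential([
    layers.Input(shape=(512, 32)),
    layers.Bidirectional(layers.LSTM(64)),  # forward and backward pass
    layers.Dense(2, activation="softmax"),
])
```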
3.3. Limitations

Determining the best-performing model configuration via hyper-parameter tuning is typically an expensive and time-consuming activity. We tuned the hyper-parameters for the RNNs, the TCN, and the Transformer-Encoder as described in Section 2.5, but did not perform an additional optimization for the RNNs with attention and the bidirectional setup. Therefore, it is possible that some of the parameters could still be optimized and improved. However, we do not expect a substantial change in the results, and we note that we primarily compared differences among the topologies rather than absolute accuracies.

Other studies employed handcrafted feature extraction methods for EEG time series analysis. We did not further investigate such time-consuming and subjective methods to extract the best possible features. Therefore, it might be possible to achieve better absolute performance with such specifically tailored feature extraction methods.

Based on Transformers, a multitude of extensions have been proposed in recent years, e.g., Dai et al. (2019) and Zhou et al. (2021), which circumvent problems regarding memory usage and input length. Since both of these approaches are designed for time series prediction and our training sets are comparatively small, we do not expect an improvement from using these Transformer advancements. We reiterate our assumption that the Transformer could achieve better results with more training data.

Given the low amplitude of the EEG signal, the recordings are prone to noise. Depending on the strength of the noise, it could have a negative impact on the topologies. We did not insert additional noise or remove parts of the signals to test the robustness of each model. Lim et al. (2021) show that the accuracy of RNN topologies can drop when strong noise is added to the dataset. Zanghieri et al. (2019) and Zhang and Wu (2019) indicate that FFNs are not as strongly influenced when noise is added to the signal. However, we do not expect that other EEG recordings differ much from the ones presented in our study. None of the investigated datasets were further preprocessed to remove the noise recorded during the experiments.

3.4. Future research

The proposed methods are still among the best-performing topologies for deep learning tasks. However, there are other interesting architectural concepts which are not investigated in this work, especially brain-inspired intelligence approaches such as spiking neural networks (Tavanaei et al., 2019). Lately published studies such as SAM (Yang et al., 2022a), Spike-Based Continual Meta-Learning (Yang et al., 2022c), or ensemble models (Yang et al., 2022b) are promising methods to solve neuroscientific problems. As previously mentioned, EEG time series prediction has many difficulties (Vallabhaneni et al., 2021). Recently published learning and regularization strategies have been shown to improve the learning process of the neural networks presented in this work. Such strategies include Hamilton-Jacobi-Bellman equations (Reddy et al., 2018), Curriculum Learning, or Synaptic Scaling (Hofmann and Mäder, 2021). These learning approaches could help to reduce the overfitting which was observable during our experiments and could be further investigated. Lastly, these models could be compared with respect to other metrics, such as robustness against erroneous EEG signal recordings, which are sometimes overlooked during preprocessing.

4. Conclusions

In this paper, we trained ten different state-of-the-art neural network model topologies and methods and compared their results on the popular seizure dataset, the emotion dataset DEAP, as well as the larger frequency entrainment dataset. More specifically, we compared the models' classification performance on raw, windowed EEG signals. In general, all feed-forward architectures were easier to train.
As described in Section 3, networks with recurrence suffered from bad initialization, which led to no learning progress. This behavior was not observed for feed-forward networks. We argue that our results justify the use of feed-forward topologies like the TCN and the Transformer-Encoder in contrast to previous standard topologies which utilize recurrence or high-dimensional random mappings (RQ1 and RQ2). Furthermore, we investigated the influence of bidirectional and attention mechanisms, as these were previously proposed by individual studies. We found that the attention mechanism increases the LSTM's performance for all studied datasets and achieved even better results than the TCN in some experiments. In contrast, the bidirectional mechanism had a negative impact on our results, and the LSTM cell did not benefit from calculating the sequence forward and backward in time. We also noticed that the combination of both mechanisms does not always improve the model performance but requires more memory, since the model parameters are doubled due to the bidirectionality. Thus, we do not recommend applying the bidirectional mechanism to RNNs for EEG time series classification (RQ3). We evaluated all architectures on raw signals without handcrafted feature extraction for all the datasets. Our results show that it is possible to solve different tasks without major adjustments to the training pipeline. However, for all presented datasets we had to deal with the overfitting problem, and we could not reach the best performance on the DEAP dataset compared to other methods that use handcrafted features for classification.

Data availability statement

The data analyzed in this study is subject to the following licenses/restrictions: The DEAP [1] and the seizure [2] datasets are publicly available from their original authors. The frequency entrainment dataset has been recorded according to a protocol that was not GDPR compliant, and therefore German legal regulations prevent us from publicly sharing this dataset. That is, the recorded data potentially contain identifying or sensitive patient information that has not been authorized by the respective participants for public sharing. However, we support justified validation requests on this dataset, e.g., by executing validation code on our side. Such requests shall be directed to the Vice President for Research of Technical University of Ilmenau (vpf@tu-ilmenau.de) as the responsible person for that dataset.

Author contributions

DW worked on conceptualization, formal analysis, methodology, validation, visualization, and writing. JV contributed to formal analysis, methodology, validation, and writing. JH contributed to data curation, review, and editing. PM contributed to conceptualization, writing, review, and editing. All authors contributed to the article and approved the submitted version.

Funding

This work was funded by the Thuringian Ministry for Economic Affairs, Science and Digital Society (Grant: 5575/10-3) and the Carl Zeiss Stiftung (Grant: P2017-01-005). We acknowledge support for the publication costs by the Open Access Publication Fund of the Technische Universität Ilmenau. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Symmetric representations of distributions over $\mathbb{R}^2$ by distributions with not more than three-point supports

We construct symmetric representations of distributions over the two-dimensional plane with given mean values as convex combinations of distributions with supports containing not more than three points and with the same mean values.

1. Introduction. Setting of problem

We consider the set $P(\mathbb{R}^2)$ of probability distributions $p$ over the plane $\mathbb{R}^2 = \{z = (x, y)\}$ with finite first absolute moments $E_p[|x|], E_p[|y|] < \infty$. We denote by $E_p[x]$ and $E_p[y]$ the mean values of distribution $p$.

We construct symmetric representations of the convex set of distributions with given mean values as a convex hull of its extreme points. This is sufficient to give the representation for the set $\Theta(0, 0)$ of distributions with zero mean values. The extreme points of the set $\Theta(0, 0)$ are the degenerate distribution $\delta_0$ with the single-point support $0 = (0, 0)$, distributions $p^0_{z_1, z_2} \in \Theta(0, 0)$ with two-point supports $(z_1, z_2)$, and distributions $p^0_{z_1, z_2, z_3} \in \Theta(0, 0)$ with three-point supports $(z_1, z_2, z_3)$.

This problem arose from investigating multistage bidding models where two types of risky assets are traded [1]. As the example for imitation, we take the symmetric representation of one-dimensional probability distributions over the integer lattice that was exploited in [2] for the analysis of bidding models with a single-type asset.

Let $p$ be a probability distribution over the set of integers $\mathbb{Z}^1$ with zero mean value. Then

$p = p(0)\,\delta_0 + \sum_{k=1}^{\infty} \sum_{l=1}^{\infty} P_p(p^0_{k,-l})\, p^0_{k,-l}, \qquad (1)$

where $p^0_{k,-l}$ is the probability distribution with the support $\{-l, k\}$ and with zero mean value. Formula (1) can be written as

$p = \sum_{k=0}^{\infty} \sum_{l=0}^{\infty} P_p(p^0_{k,-l})\, p^0_{k,-l} \qquad (2)$

if we put $p^0_{k,0} = p^0_{0,-l} = \delta_0/2$. Observe that the coefficients $P_p(p^0_{k,-l})$ of decomposition (1), which may be treated as probabilities of the corresponding distributions $p^0_{k,-l}$ in the two-step lottery realizing distribution $p$, have the form

$P_p(p^0_{k,-l}) = \alpha(k, l)\, \beta(p)\, p(k)\, p(-l),$

where $\alpha(k, l) = k + l$ and $\beta(p) = 1/\sum_{t=1}^{\infty} t \cdot p(t) = 1/\sum_{t=1}^{\infty} t \cdot p(-t)$, the last equality playing the crucial role. We mean just this form of coefficients when saying that the representation (1) is symmetric. We aim at constructing the representation of two-dimensional probability distributions with analogous characteristics.

2. Key invariants for distributions p

With each $\psi \in [0, 2\pi)$ we associate the set of two-point sets $[\ldots]$. Denote by $\mathrm{Int}\,\Delta^0(\psi)$ and $\partial\Delta^0(\psi)$ the sets of two-point sets $(z_1, z_2)$ such that, for $z \in R_{\psi}$, the set $(z_1, z_2, z)$ belongs to $\mathrm{Int}\,\Delta^0$ and to $\partial\Delta^0$, respectively. We assume that the points $(z_1, z_2)$ are indexed counterclockwise.

Consider the quantity

$\Phi(p, \psi) = [\ldots] \qquad (3)$

Using polar coordinates, this quantity differs from zero only if the measure $p(R_{\psi+\pi})$ is more than zero. In this case

$[\ldots] \qquad (4)$

where $e_{\psi} = (1, \psi)$ and $Hp_{\varphi}$ is the half-plane $[\ldots]$.

The next fact produces the base for constructing symmetric representations of distributions over $\mathbb{R}^2$ with given mean values as convex combinations of distributions with supports containing not more than three points and with the same mean values.

Theorem 1. For any distribution $p \in \Theta(0, 0)$ the quantity $\Phi(p, \psi)$ does not depend on $\psi$, i.e., this is an invariant $\Phi(p)$ of distribution $p \in \Theta(0, 0)$.

Proof. We begin with proving Theorem 1 for distributions $p \in \Theta_f(0, 0)$ with finite supports. Let $\psi_1, \psi_2 \in [0, 2\pi)$, $\psi_1 < \psi_2$, be two values of the argument such that the support of the distribution $p \in \Theta_f(0, 0)$ does not contain points $z$ with $\psi_1 < \arg z < \psi_2$. Set $[\ldots]$. We have $[\ldots]$. Since, for distributions $p \in \Theta(0, 0)$, $[\ldots]$, iterating this argument the relevant number of times we obtain the statement of Theorem 1 for any distribution $p \in \Theta_f(0, 0)$.
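For illustration (a worked example added here; it relies only on the coefficient formula of the one-dimensional decomposition recalled in the introduction), take $p$ uniform on $\{-1, 0, 1\}$. Then $\beta(p) = 1/(1 \cdot 1/3) = 3$, and the nonzero coefficients are

$P_p(p^0_{1,-1}) = \alpha(1,1)\,\beta(p)\,p(1)\,p(-1) = 2 \cdot 3 \cdot \tfrac{1}{3} \cdot \tfrac{1}{3} = \tfrac{2}{3},$
$P_p(p^0_{1,0}) = P_p(p^0_{0,-1}) = 1 \cdot 3 \cdot \tfrac{1}{3} \cdot \tfrac{1}{3} = \tfrac{1}{3}.$

Since $p^0_{1,-1}$ puts mass $1/2$ on each of $\pm 1$ and $p^0_{1,0} = p^0_{0,-1} = \delta_0/2$, decomposition (2) reproduces $p$: mass $\tfrac{2}{3} \cdot \tfrac{1}{2} = \tfrac{1}{3}$ at each of $\pm 1$, and $\tfrac{1}{3} \cdot \tfrac{1}{2} + \tfrac{1}{3} \cdot \tfrac{1}{2} = \tfrac{1}{3}$ at $0$.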
Remark 2. This theorem is a two-dimensional analog of the fact that, for $p \in \Theta(0) \subset P(\mathbb{R}^1)$, the equality $\sum_{t=1}^{\infty} t \cdot p(t) = \sum_{t=1}^{\infty} t \cdot p(-t)$ holds.

3. Decomposition theorem for distributions $p \in \Theta(0, 0)$

The invariance of the quantity $\Phi(p)$ proved in the previous section allows us to formulate the following preliminary variant of a decomposition theorem for two-dimensional distributions. This variant demonstrates a perfect analogy with the decomposition of one-dimensional distributions. $[\ldots]$ of decomposition (5) contains all distributions $p^0_{z_i, z_{i+1}}$ with two-point supports $(z_i, z_{i+1})$, where $z_i \in R_{\psi}$ and $z_{i+1} \in R_{\psi+\pi}$. In order that such a combination of points could appear with nonzero probability, it is necessary that the measure $p(R_{\psi})$ and the measure $p(R_{\psi+\pi})$ are more than zero. This is possible for an at most countable set $\Psi(p)$ of values $\psi$. It follows from (4) that $[\ldots]$. For this distribution, if $\varphi_i < \psi < \varphi_{i-1} + \pi \pmod{2\pi}$, then the support of the measure induced by $p_{\beta, z}$ over the set $\Delta^0(\psi)$ is the set $[\ldots]$. Observe that
On-demand ride service platform with differentiated services

The rapid growth of on-demand ride service platforms has made it increasingly important for these platforms to efficiently match services by understanding driver characteristics and consumer preferences. This paper aims to investigate the pricing strategy by considering the impact of consumer preference heterogeneity and the different service types offered by drivers. The findings of this study reveal the need for the platform to strike a balance between service cost and the benefits of high-quality drivers, which can be referred to as the "cost-performance ratio". If the "cost-performance ratio" that attracts high-quality drivers is high, the platform will attract high-quality drivers or drivers of all types to participate while offering differentiated services. Otherwise, the platform will only provide services through low-quality drivers. Furthermore, the platform will also consider when to offer differentiated services based on network externalities and service quality. When the network externalities of the two types of services are similar, the platform will differentiate them based on service quality differences. Overall, considering consumer preference heterogeneity, drivers of different service types, and network externalities, this paper provides guidance for platforms to make optimal decisions that enhance their service offerings and improve overall customer satisfaction.

Introduction

Platforms have revolutionized traditional business models by offering consumers convenient and efficient means to access a diverse array of products and services. For example, online shopping platforms like Amazon and Alibaba have changed the way people buy goods by offering a vast selection, competitive prices, and fast delivery. Third-party payment platforms like PayPal and Alipay have simplified online transactions and made it easier for people to make payments and transfer money securely. Sharing platforms like Uber and Airbnb have enabled individuals to monetize their assets, such as cars and spare rooms, by connecting them with people who need those services. These platform businesses have created new opportunities for entrepreneurs and individuals to generate income and have empowered consumers with more choices and convenience [1][2][3][4]. The platform sharing economy is still evolving, and its impact on various industries and society as a whole is still being studied. It is crucial for policymakers, businesses, and individuals to understand and adapt to this new economic model to ensure its benefits are maximized while addressing any potential challenges that may arise.

The platform also handles the payment process, ensuring a secure transaction for both parties involved [5,6]. The platform's operation is crucial for the success of the on-demand economy. It facilitates the connection between consumers and service providers, sets prices, ensures trust and safety, and provides the necessary technology for seamless transactions [7]. It typically includes mobile apps or websites that allow consumers to easily request rides and drivers to accept or decline requests. The platform is responsible for matching drivers with consumers based on various factors, such as location and availability [8].
As a type of two-sided market, an on-demand ride service platform acts as an intermediary between drivers and consumers. Its distinguishing feature is the cross-network externality, where the utility of one user group is influenced by the size of the other group [9]. Therefore, the initial goal of an on-demand service platform is to ensure a sufficient number of participants. On the one hand, when there is a high number of participating drivers, consumers are more likely to join the platform due to shorter waiting times. On the other hand, a high number of consumers increases driver participation in the platform due to the availability of numerous orders. Consequently, a mutually agreeable wage or price is established for trading purposes. The platform, acting as an intermediary, receives a percentage of the profits from each order, in accordance with the arrangement agreed by all parties involved.

In previous studies on platform differentiation, some scholars have linked differentiation with network effects, suggesting that in industries with network effects, network size is a more important factor of competition than quality [10]. Later literature has defined service quality in terms of network externalities [11]. With the development of two-sided markets, both sides have heterogeneous preferences and features. Consumers' preferences influence their participation in decision-making processes [12]. First, consumers have varying preferences for service quality. Some are willing to pay a higher price for high-quality service, while others are willing to accept lower service quality at a lower price as long as the waiting time is appropriate. Additionally, drivers themselves have different levels of service quality, including the types of vehicles and the diverse service levels provided by drivers [13]. With the boom of on-demand service platforms, it is necessary for platforms to provide different experiences in order to carve out their own niche. Additionally, platforms need to cater to consumers with different service preferences by offering various types of services. In reality, platforms like Didi provide differentiated service types such as express, private car, and hitch. Each service type corresponds to a different type of driver, and consumers who choose these services generally have similar preferences. By differentiating services according to driver types, the platform can better maximize the utility and benefits for all participants. For example, the platform can match high-quality services with consumers who prefer high quality, thereby implementing price discrimination. It therefore becomes crucial for the platform to identify the different features and requirements of both sides (consumers and drivers) and to find ways to achieve more accurate matching.
Based on consumers' preferences and the varying levels of service provided by drivers, the platform aims to achieve more precise matching between consumers and providers [14]. One approach is to allow all drivers to offer a hybrid service, which can generate a pooling effect that is greater than when they provide separate services. The platform can also make different service decisions based on various conditions. For instance, if the number of consumers is not very large, the platform may consider having only a portion of the service providers offer their services to save costs. On the other hand, if the number of consumers is large, the service will be provided by all drivers, allowing the platform to benefit from high network externalities. To determine whether service differentiation is beneficial for platforms and when to implement it, as well as to identify the optimal type of driver to provide service under different conditions, this paper takes into account the characteristics of network externalities in the context of on-demand service platforms. It also considers the heterogeneity of consumers' service preferences on the demand side and the different service types of drivers on the supply side, aiming to explore the optimal pricing decision for the platform.

The remaining sections of the paper are organized as follows. Section 2 provides a review of the relevant literature. Sections 3 and 4 present the analysis of the models with differentiated and non-differentiated service types, respectively. In Section 5, we present our numerical analysis. Finally, Section 6 concludes the paper.

Literature review

In this section, we provide a summary of the literature on two aspects: (1) research on two-sided markets, and (2) the impacts of service quality on decision making. The first aspect examines the relationship between previous research on two-sided markets and our own work. The second aspect reviews existing studies on service quality, some of which support our key assumptions and model development.

Research on two-sided markets

The popularity of two-sided markets in real life has drawn the attention of scholars to the related issues in this field. Instead of directly providing products or services, platforms act as an "intermediary" connecting both sides of the market. The on-demand service platform, which is the focus of this paper, is just one example of such platforms, with similar forms found in rental markets [15], software development markets, and so on. The primary research areas in this field include the pricing behavior of platform owners [16], the matching mechanisms employed by the platform [17], and the effects of new platforms entering the market on existing ones [18].
One prominent characteristic of two-sided markets is the presence of cross-network externalities: the participation of service providers on the platform affects the utility of demanders and thus influences their decision to participate. In a two-sided market, the benefits of one party are closely tied to the scale of participation by the other party [9]. Users on both sides are valuable internal resources for the platform, and the initial user base is crucial for maintaining a competitive advantage and shaping long-term competition [19]. Users on both sides derive utility or income by transacting on the platform. The impact of network externalities on pricing in two-sided markets has been studied extensively, covering both cross-group and intra-group network externalities [20,21]. Many scholars argue that two-sided platforms exhibit the typical characteristics of cross-network externalities; this paper recognizes this feature and incorporates it into the utility function of service demanders.

The popularity of on-demand ride service platforms has attracted significant attention from scholars studying pricing decisions in this context. Some papers focus on price and wage incentives that effectively coordinate supply and demand on these platforms [22-24]. Guda and Subramanian [25] studied the impact of surge pricing on driver positioning in a two-stage model. They find that drivers may not trust the platform's demand prediction, so price changes can increase the credibility of the prediction; however, raising prices in high-demand locations is not always the most effective strategy, as it can suppress demand growth or even drive drivers away from those areas. Similarly, matching studies have shown how platform incentives for agents can influence the number of agents [26]. In this paper, the platform's optimal decision is determined by considering the service quality of drivers while balancing supply and demand, and the model builds on the assumptions of previous studies.

Service quality on decision making

Current research on platform service quality focuses mostly on the impact of waiting time during travel, since long waits often degrade consumers' service experience [27]. Given consumers' sensitivity or impatience toward waiting time, it is important to develop price and wage strategies that maximize profit under the assumption that consumers have low tolerance for waiting [28]. Others consider how to enhance the matching efficiency of platforms, arguing that platforms can improve service quality in several respects. Zhou et al. [29] investigated the optimal pricing decisions of service enterprises facing two groups of consumers with different valuations and service sensitivities. Ni et al. [30] studied the optimal pricing and service speed of a platform with two types of consumers, finding that under profit maximization it is not always optimal to serve all consumers. Waiting time acts as a determinant of demand, indicating the impact of service quality [11].
In fact, consumers have different preferences and priorities, so differences in service quality must be considered in some problems. Consumers' preferences for platform service quality play a crucial role in determining the platform's optimal decision strategy [31], and network externalities are often treated as part of service quality [32-34]. This paper incorporates the heterogeneity of consumers' service preferences into our model. Similar to the problem addressed by Zhong et al. [35], we study the influence of each parameter on the platform's optimal decision from the perspective of distinguishing versus not distinguishing service quality. In their model, service quality is captured by the consumer's sensitivity to congestion: consumer utility comprises an intrinsic valuation, the price, and the waiting time, with waiting time serving as a proxy for service quality. By comparing outcomes, they identify the conditions under which differentiated or undifferentiated services are applicable and conclude that blindly serving all consumers is not the optimal strategy for the platform. Building on these findings, our paper extends the concept of service quality by incorporating the consumer's experience as a measure in addition to waiting time, and we further account for cross-network externalities. Recognizing the heterogeneity of service quality perceived by consumers, we examine the service provider's role and the platform's optimal decision-making process, treating the service quality provided by drivers as model parameters.

Existing literature on the service quality of on-demand ride platforms mainly explores the impact of service differences on the two sides of the platform, or treats service preference as a fixed parameter. Considering the cross-network externalities characteristic of a two-sided market, this paper instead accounts for the service quality types of drivers and consumers' heterogeneous preferences over driver service quality. Our study investigates the platform's optimal pricing strategy in light of these factors.
Considering all the above research on on-demand service platforms, we fill the following gaps in the literature. (1) We take the service provider's service quality into consideration and combine it with the defining feature of two-sided markets, cross-network externalities. Modeling consumers' heterogeneous preferences over service quality, we explore how consumers integrate all of these factors into their decisions, and we use backward induction to determine the platform's optimal strategy. (2) In our setting, the platform initially enters the market by offering a single type of service without differentiating between providers: all providers pool together to serve consumers, and the platform's decision variable is a single price-wage pair. This paper is the first to compare platform decisions across different parameters and identify the conditions under which the platform should provide a single service or opt for differentiated services. By analyzing the impact of factors such as provider service levels and retention costs, the study determines the optimal strategy for the platform to maximize its profits while satisfying consumer preferences. Overall, by addressing these gaps, our research contributes to a more comprehensive understanding of service quality and pricing strategies on on-demand ride platforms, and provides insights for platform decision-makers seeking to optimize their strategies and enhance overall service quality and customer satisfaction.

Problem description and basic model

Usually, when a platform enters the market, it provides only one type of service without differentiating the service providers. All drivers are mixed together to offer services to consumers, and the platform's decision variables are limited to a price and a wage. As the platform develops, however, it starts to consider the service preferences of different consumers and the varying service levels of drivers. For instance, consumers who prioritize price over service quality are more price-sensitive and pay less attention to quality; in this case the platform monetizes its operations by attracting a large consumer base through low prices. Other consumers prioritize service level over price and are willing to pay more for better service; to cater to them, the platform invests heavily in achieving a high service level, incurring higher costs, and then profits by charging a higher price for these high-quality services. In this context, there is currently limited research on the optimal strategy for launching different types of services.

As the platform develops, it also finds that drivers have varying service levels and retention costs; naturally, drivers who provide high-quality services tend to have higher retention costs. The platform must therefore consider whether to pursue greater profit through service differentiation.
Based on the theory of two-sided markets, the platform has two types of users: service demanders and service providers. We assume there are two types of service providers, low quality and high quality. In the on-demand service industry, these can be represented by different car classes or by drivers' previous customer satisfaction ratings. The platform categorizes service providers into high-type and low-type based on such characteristics, denoted H-type and L-type. The corresponding service qualities are q_H for high-type and q_L for low-type providers, the retention costs are r_H and r_L, and the potential numbers of providers are N_H and N_L, respectively. We assume q_H > q_L, r_H > r_L, and N_H < N_L. The number of potential service providers is the number of providers registered on the platform; it represents the total pool of providers available to users.

After registering on the platform, a service provider (here, a driver) evaluates the wage or income offered by the platform. If the income exceeds the retention cost, the provider decides to serve on the platform; otherwise, the provider does not serve. On the other side, service demanders have heterogeneous preferences over provider service quality. Each demander decides whether to participate in the platform's services based on their own utility, weighing factors such as service quality, price, and convenience.

The event sequence is as follows: (1) service providers register on the platform; (2) service demanders search for services on the platform; (3) the platform analyzes the number of potential providers and the demand for services; (4) based on this analysis, the platform sets a unit price for demanders and a wage for providers; (5) demanders evaluate the unit price against their own utility and participate if doing so is beneficial; (6) providers evaluate the offered wage against their income needs and serve on the platform if the wage exceeds those needs; (7) participating demanders and registered providers are connected on the platform.

We assume the platform has access to information about the service types of all drivers, so it can categorize service providers into the two types H and L.
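As a minimal illustration of steps (5) and (6), the following Python sketch encodes the two participation rules. The function names and all numeric values are illustrative assumptions, and the consumer utility uses the form ξ·q + α − P introduced in the next section.

```python
# Minimal sketch of the participation rules in the event sequence.
# All numbers are illustrative assumptions, not values from the paper.

def driver_participates(wage_income: float, retention_cost: float) -> bool:
    # A driver serves only if the income covers the retention cost.
    return wage_income >= retention_cost

def consumer_utility(xi: float, q: float, alpha: float, price: float) -> float:
    # Utility = quality preference * quality + network externality - price.
    return xi * q + alpha - price

def consumer_participates(xi, q, alpha, price) -> bool:
    return consumer_utility(xi, q, alpha, price) >= 0.0

if __name__ == "__main__":
    print(driver_participates(wage_income=0.3, retention_cost=0.2))    # True
    print(consumer_participates(xi=0.6, q=1.0, alpha=0.2, price=0.7))  # True: 0.6 + 0.2 - 0.7 >= 0
```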
Additionally, the platform can classify service providers based on observable factors such as vehicle model or driver service evaluations. Instead of distinguishing service types, the platform can select both types of service providers and mix them together to serve the demanders. In this scenario the network externality is maximized, demanders can observe and choose only one type of service, and the platform's decision variables reduce to a single pair (P, w); we call this the mixed case, case M. If the service types are distinguished, with the H type representing a high quality level and the L type a low quality level, then demanders can observe and choose between two types of service, and the platform has two pairs of decision variables: (P_H, w_H) for the high-quality service type and (P_L, w_L) for the low-quality service type; we call this the separate case, case S. Throughout, the consumers are the service demanders and the drivers are the service providers; drivers who provide the high-quality service level are H-type drivers, and those who provide the low-quality level are L-type drivers. Following the sequence of events, backward induction is used to solve the problem.

The main symbols and descriptions are shown in Table 1. We first consider case S, which distinguishes the quality of service types.

Differentiated service types

In many studies, the utility functions of the two user groups in a two-sided market are assumed to be identical. Considering the distinct characteristics of users on an on-demand ride service platform, this paper instead accounts for different participation factors on each side: consumers weigh the external effects, prices, and service quality of drivers, whereas drivers prioritize their income and the efficiency of receiving orders.

To model the drivers' utility, this paper draws on the form used by Zhong et al. (2020), in which drivers base their participation decision on income as determined by the platform wage, the efficiency of receiving orders, and their own retention costs. This utility function captures the main factors influencing driver participation.

Classifying drivers as high quality (H-type) or low quality (L-type) based on certain characteristics is common practice on many on-demand service platforms; it allows the platform to differentiate drivers and provide different service levels according to customers' preferences and willingness to pay. The unit wages paid by the platform are w_H and w_L, respectively, and the revenue function of a type-i driver is

R_di = w_i · n_ci / n_di − r_i,  i = L, H.
Here n_ci is the number of consumers choosing type i and n_di is the number of type-i drivers; assuming each consumer creates exactly one service demand, n_ci / n_di is the order volume per driver. A driver participates if R_di ≥ 0. Since drivers of each type are homogeneous, providers either all participate or all abstain, so it is optimal for the platform to offer the wage satisfying R_di = 0, namely w_i · n_ci / N_i = r_i. Therefore, the wages given by the platform are

w_i = r_i N_i / n_ci,  i = L, H.

In addition to consumers' preference for services, this paper also considers the cross-network externalities that influence consumer utility. When consumers observe the prices P_H and P_L offered by the platform for the two service types, they choose the low- or high-quality service based on their utility.

Consumer utility is affected by driver service quality, cross-network externalities, and price. The preference for driver service quality is denoted ξ, with ξ ~ U[0, 1], and α_i represents the external network utility when the number of potential providers is N_i. The more potential providers there are, the greater the network externality enjoyed by demanders; because N_L > N_H, we assume α_L > α_H. The utility of a consumer choosing service type i can then be expressed as

U_i = ξ q_i + α_i − P_i,  i = L, H.

Consumers participate when the utility of choosing either the H-type or the L-type service is non-negative, i.e., ξ q_L + α_L − P_L ≥ 0 or ξ q_H + α_H − P_H ≥ 0. The two critical points for consumer participation and choice are

ξ_L = (P_L − α_L) / q_L,
ξ_LH = [(P_H − P_L) − (α_H − α_L)] / (q_H − q_L).

Consumer choice thus depends on the preference parameter ξ: when ξ falls within [ξ_L, ξ_LH], the consumer chooses the L-type service; when ξ falls within [ξ_LH, 1], the consumer chooses the H-type service. The values of ξ_L and ξ_LH determine the thresholds at which consumers participate and switch from the L-type to the H-type service.

With n_cL = ξ_LH − ξ_L and n_cH = 1 − ξ_LH, the prices can be expressed as

P_L = α_L + (1 − n_cH − n_cL) q_L,
P_H = α_H + (1 − n_cH)(q_H − q_L) + (1 − n_cH − n_cL) q_L.

Assuming the platform can cover all market demand and ignoring operating costs, the optimization problem simplifies to

π = n_cH (P_H − w_H) + n_cL (P_L − w_L).

From the first-order conditions for profit maximization, ∂π/∂n_cH = 0 and ∂π/∂n_cL = 0, we obtain

n_cH* = [(q_H − q_L) + (α_H − α_L)] / [2 (q_H − q_L)],  n_cH* + n_cL* = (q_L + α_L) / (2 q_L),

from which the optimal prices and wages follow. The monotonicity of the decision variables in each parameter can then be obtained.

Proposition 3.1. The driver wage w_H decreases with q_H and α_H and increases with q_L and α_L; w_L decreases with q_L and α_L and increases with q_H and α_H. The consumer price P_H increases with q_H and α_H, and P_L increases with q_L and α_L.

If the service quality of drivers improves or external factors have a positive impact, the number of consumers opting for that service type is likely to increase, with a corresponding decrease in the number choosing the other type. The relationships among price, wage, and the number of consumers adhere to the principles of market demand and price elasticity.
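The closed forms above can be checked numerically. The following Python sketch evaluates the case-S profit from the inverse demand above and compares the first-order-condition optimum with a brute-force grid search; all parameter values are illustrative assumptions, not taken from the paper.

```python
# Numeric sketch of case S (differentiated services): maximize
#   pi = n_cH*(P_H - w_H) + n_cL*(P_L - w_L)
# with the inverse-demand prices above and n_ci * w_i = r_i * N_i.
import numpy as np

q_H, q_L = 3.0, 1.0        # service qualities (assumed)
a_H, a_L = 0.3, 0.5        # network externalities, a_L > a_H since N_L > N_H
rN_H, rN_L = 0.2, 0.3      # total retention costs r_H*N_H and r_L*N_L (assumed)

def profit(n_cH, n_cL):
    P_L = a_L + (1 - n_cH - n_cL) * q_L
    P_H = a_H + (1 - n_cH) * (q_H - q_L) + (1 - n_cH - n_cL) * q_L
    # Since n_ci * w_i = r_i * N_i, wages enter only as fixed costs:
    return n_cH * P_H + n_cL * P_L - rN_H - rN_L

# Closed-form optimum from the first-order conditions:
n_cH_star = ((q_H - q_L) + (a_H - a_L)) / (2 * (q_H - q_L))
n_cL_star = (q_L + a_L) / (2 * q_L) - n_cH_star

# Brute-force check on a grid:
grid = np.linspace(0.01, 0.99, 199)
best = max(((profit(h, l), h, l) for h in grid for l in grid if h + l < 1),
           key=lambda t: t[0])
print("closed form:", n_cH_star, n_cL_star, profit(n_cH_star, n_cL_star))
print("grid search:", best[1], best[2], best[0])
```

For these values the closed form gives n_cH* = 0.45 and n_cL* = 0.30, which the grid search reproduces, and the implied prices are P_H = (q_H + α_H)/2 and P_L = (q_L + α_L)/2.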
Non-differentiated service type

During its development, a platform commonly begins by providing a single type of product or service without considering consumer service preferences. In this context, we consider pooling the high-quality (H) and low-quality (L) drivers together to provide services, in line with the concept of a demand-oriented two-sided platform. Drivers within each type are homogeneous, with high-quality drivers providing service level q_H and low-quality drivers providing service level q_L. On the other side of the platform are consumers, whose preference for service quality is ξ ~ U[0, 1]. Since the platform provides only one service, pooling the two driver types, it has a single pair of decision variables (P, w).

First, consider the drivers' revenue. If drivers within each type are homogeneous, then when a driver's income falls below r_L no driver participates; when the income is at least r_L but below r_H, all L-type drivers participate; and when the income reaches r_H, all drivers of both types participate.

Lemma 3.1. Assuming that the high-quality (H) and low-quality (L) drivers are each homogeneous, the number of participating drivers is 0 if the driver income is below r_L, N_L if the income is at least r_L but below r_H, and N_L + N_H if the income is at least r_H.

By considering these revenue thresholds for driver participation, we can determine the number of participating drivers, the expected service level for consumers, and the unit wage the platform pays drivers; these values are organized in Table 2. In other words, there are two scenarios for driver participation: in case L the platform aims to attract only L-type drivers, while in case M (the mixed case) it aims to attract all drivers of both types. Each scenario is associated with a specific price and wage offered by the platform.

In addition, if the platform does not distinguish driver service types and provides only one type of service for consumers, it can also choose to select exclusively H-type drivers; we refer to this as case H. Here the selected H-type drivers provide service quality q_H, which equals the consumers' expected service quality q_e; the retention cost of an H-type driver is r_H, and the total number of potential H-type drivers is N_H. The platform again has a single price-wage pair as its decision variables, and the H-type drivers receive the single wage w = r_H N_H / n_c determined by the platform.

The utility function of consumers consists of three components: service quality preference, cross-network external utility, and price,

U_j = ξ q_e^j + α_j − P_j,

where j = L, H, M and q_e^j denotes the expected service quality in case j.
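A short Python sketch of Lemma 3.1 makes the threshold structure explicit; the numbers below are illustrative assumptions.

```python
# Sketch of Lemma 3.1: driver participation as a function of per-driver income.

def participating_drivers(income, r_L, r_H, N_L, N_H):
    # Below r_L nobody serves; between r_L and r_H only L-type drivers
    # serve; at or above r_H both types serve.
    if income < r_L:
        return 0.0
    if income < r_H:
        return N_L
    return N_L + N_H

def expected_quality_mixed(q_L, q_H, N_L, N_H):
    # Expected service level seen by a consumer in the pooled case M,
    # where both driver types serve and are matched at random.
    return (q_L * N_L + q_H * N_H) / (N_L + N_H)

print(participating_drivers(0.25, r_L=0.1, r_H=0.3, N_L=2.0, N_H=1.0))  # 2.0
print(expected_quality_mixed(q_L=1.0, q_H=3.0, N_L=2.0, N_H=1.0))       # 5/3
```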
Previously, α_L and α_H denoted the external network utility when the numbers of potential drivers of the L-type and H-type are N_L and N_H, respectively. Here α_L retains its meaning, representing the cross-network effect when the number of potential drivers is N_L, while α_M represents the network effect when the number of potential drivers is N_L + N_H. Since the network effect on demanders grows with the number of potential drivers, α_M > α_L > α_H.

Consumers participate as long as their utility satisfies U_j ≥ 0, so the critical point at which consumers decide to participate is

ξ_j = (P_j − α_j) / q_e^j.

Assuming again that the platform can meet all market demand, the platform's optimization problem is

π_j = n_c (P_j − w),  with n_c = 1 − ξ_j and n_c w = r_j N_j,

where r_j N_j denotes the total retention cost of the participating drivers in case j (r_L N_L in case L, r_H N_H in case H, and r_H (N_L + N_H) in case M). From the first-order condition for profit maximization, ∂π/∂n_c = 0, the optimal number of participating consumers is

n_c* = (q_e^j + α_j) / (2 q_e^j),

and the optimal price is

P_j* = (q_e^j + α_j) / 2.

Comparative analysis

According to the first-order condition of profit maximization, the optimal solutions and optimal profits in the four cases are shown in Table 3. In the table, the expected service quality in case M is q_e = (q_L N_L + q_H N_H) / (N_L + N_H), and the optimal profit of strategy S is the same regardless of the form used.

In this model, consumer utility is influenced by several parameters, including external effects, service quality, and price. While we account for the cross-network effect, we assume that the preference for service quality has a greater impact than the external effects, since the primary focus of this paper is heterogeneity in consumers' service quality preferences. Specifically, we assume α_j ≤ q_e^j, which implies α_j / q_e^j ≤ 1 and hence n_c* ≤ 1.

Next, we compare the platform's optimal profits across the cases to determine the conditions under which it is optimal for a specific type of driver to provide services. Analyzing the profitability of each case yields conclusions about the platform's optimal driver selection.

Proposition 4.1. Let ā denote a certain value and Δq, Δr denote threshold increments in quality and cost. If α_M is below the threshold (α_M ≤ ā), the following conditions apply: (1) when r_H − r_L ≤ Δr, the optimal choice is π_H; (2) when r_H − r_L > Δr and q_H − q_L ≤ Δq, the optimal choice is π_L; (3) when r_H − r_L > Δr and q_H − q_L > Δq, the optimal choice is π_H.

Proof. If α_M, α_L, and α_H are small enough to have a negligible effect on profit, their influence on the profit function can be written as o(1) and disregarded. Comparing the closed-form profits with the externality terms dropped, we obtain π_H > π_S and π_H > π_M, so it remains to compare π_H and π_L. Moreover, since N_L > N_H, when r_H is small (close to r_L) we have π_H > π_L; as r_H increases, π_H > π_L still holds as long as r_H N_H < r_L N_L. However, when r_H becomes sufficiently large relative to r_L and q_H is small (close to q_L), we find π_H < π_L. Since π_H = π_L at this critical point requires q_H / 4 − r_H N_H = q_L / 4 − r_L N_L, the higher r_H is, the higher the value of q_H at which π_H = π_L.
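For concreteness, the single-service optimum can be verified in two lines (a sketch; it assumes, consistent with the derivation above, that the wage bill equals the participating drivers' total retention cost r_j N_j):

```latex
\pi_j = n_c P_j - r_j N_j, \qquad P_j = \alpha_j + (1 - n_c)\, q_e^j
\;\Longrightarrow\;
\pi_j = n_c\big(\alpha_j + (1 - n_c)\, q_e^j\big) - r_j N_j ,

\frac{\partial \pi_j}{\partial n_c} = \alpha_j + q_e^j - 2\, q_e^j\, n_c = 0
\;\Longrightarrow\;
n_c^{*} = \frac{q_e^j + \alpha_j}{2\, q_e^j}, \quad
P_j^{*} = \frac{q_e^j + \alpha_j}{2}, \quad
\pi_j^{*} = \frac{(q_e^j + \alpha_j)^2}{4\, q_e^j} - r_j N_j .
```

The resulting π_j* is the closed form used in the numerical comparison below.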
In this case, Fig 2 illustrates the regions where the optimal profit is located. When the external effects are small, the platform chooses H-type drivers to provide services and achieves the highest profit if the unit retention cost of H-type drivers is relatively low or close to that of L-type drivers. If the unit retention cost of H-type drivers is higher than that of L-type drivers and there is little difference in service quality between the two types, the platform chooses L-type drivers to provide services; but if q_H is much higher than q_L, it is optimal for the platform to choose H-type drivers.

In other words, when the external effects are small, the platform weighs both cost and service quality. If the cost difference between the two driver types is small, the platform prioritizes high-quality providers, since at similar cost the higher quality attracts more consumers and ultimately yields higher profit. When the cost difference is large, the platform must assess how much additional quality the high-quality provider actually delivers: if the quality gap between the high- and low-quality providers is not significant, it may not be worth bearing the high cost, and the platform may choose the low-quality provider; if the quality delivered is significantly higher, the platform has an incentive to incur the higher cost and select the high-quality drivers in order to provide a better experience for consumers. Ultimately, the platform's decision rests on the trade-off between cost and service quality.

When the external effect becomes more significant, a more thorough analysis is needed. Consider first the scenario where the unit retention cost of L-type drivers is relatively low.

Proposition 4.2. Let ā and r̄ denote certain thresholds, and let Δq_1, Δq_2, Δr_1, Δr_2 denote threshold increments in quality and cost. Assume α_H ≥ ā and r_L ≤ r̄. Then: (1) when r_H − r_L ≤ Δr_1, if q_H − q_L ≤ Δq_1 the optimal choice is π_M, whereas if q_H − q_L > Δq_1 the optimal choice is π_S; (2) when Δr_1 < r_H − r_L ≤ Δr_2, the optimal choice is π_S; (3) when r_H − r_L > Δr_2, if q_H − q_L ≤ Δq_2 the optimal choice is π_L, whereas if q_H − q_L > Δq_2 the optimal choice is π_S.

Proof. When the externality effect becomes larger, and since r_L is small, there exists a value of r_L satisfying r_L ≤ r̄ such that π_S > π_H. Furthermore, when r_H is small, consider the extreme case r_L = r_H = 0: then π_S > π_L, which prompts a comparison of π_S and π_M. π_M attains its optimum when the difference between q_H and q_L is small or when α_M is large. Because the cost of case M increases with r_H, π_S eventually becomes the optimal choice. The total consumer quantity n_c is the same in cases L and S; however, once r_H is sufficiently large, the cost of case S surpasses that of case L.
Specifically, in case S, P_H = (q_H + α_H)/2 and P_L = (q_L + α_L)/2, while in case L, P = (q_L + α_L)/2. If the value of q_H is close to that of q_L, the revenue in case S is not substantially higher than in case L, and may in fact be lower, because α_H is smaller than α_L. However, if q_H is significantly larger than q_L, the profit of case S exceeds that of case L.

Fig 3 illustrates the regions where the optimal profit is located in this case. When the mixed external effects are significant, the unit retention cost of high-quality drivers is comparatively low, and consumers' expected service quality is close to q_L, the platform opts to mix the two driver types without differentiating service quality. This observation aligns with our numerical findings.

In this case, as the retention cost of high-quality drivers increases, the cost of mixing the two driver types also rises, making the differentiated service S more optimal. However, once the retention cost of high-quality drivers surpasses a certain threshold, the participation cost for these drivers becomes prohibitively high, leading to a trade-off between service quality and cost. If both driver types offer similar service quality, the platform attracts only low-quality drivers; conversely, if the high quality level significantly surpasses the low one, the platform selects both types of drivers and distinguishes between service types, implementing strategy S to achieve differential pricing.

When the retention cost of L-type drivers is low and the external effect is significant, strategy M is optimal. This occurs when both driver types offer similar service quality, their costs are low, and the externalities are relatively large; the platform can then provide high externalities to consumers at low cost, with minimal variation in service level between the two driver types.

If the retention cost of H-type drivers is high, strategy M may not be optimal for the platform. In that case the platform considers a differentiated service approach: although both driver types bring large external effects, the cost of retaining H-type drivers is relatively high, and the platform can improve profit margins through differential pricing.

When there is little difference in service level between the two driver types, the platform attracts only L-type drivers, because the potential profit margin from differentiating service types is low in this situation. However, if the retention cost r_H of H-type drivers is high while their service quality q_H is also high, the platform again opts for the differentiated service approach.
That is, when the external effect is high and the retention cost of L-type drivers is low, the platform focuses on attracting and retaining L-type drivers whenever the cost of retaining H-type drivers is much higher and the service quality of the two types is similar; in this scenario the platform uses only L-type drivers. In other cases, where the retention cost of H-type drivers is not significantly higher than that of L-type drivers and there is a difference in service quality, the platform considers providing differentiated services: the external effects are large, and the platform can cater to consumers' different service preferences. By offering different service types, such as the H-type service at a higher price for consumers with stronger quality preferences, the platform can achieve optimal profits through differentiated pricing.

Next, we examine the scenario where the external effects increase and the unit retention cost of L-type drivers is relatively high. In this situation, when the external effects are significant and both r_L and r_H are large, it is necessary to compare the profitability of using H-type drivers (π_H) and L-type drivers (π_L). Based on the above analysis, it is also important to discuss the difference in relative value between α_L and α_H.

Proposition 4.3. Let ā denote a certain value, r̄ a specific threshold, and Δα, Δq, Δr threshold increments in external effect, quality, and cost. Assume α_L ≥ ā. If r_L ≥ r̄ and the external effect of H-type drivers is not significantly smaller than that of L-type drivers (α_L − α_H < Δα), the following conditions apply: (1) when r_H − r_L ≤ Δr, the optimal choice is π_H; (2) when r_H − r_L > Δr and q_H − q_L ≤ Δq, the optimal choice is π_L; (3) when r_H − r_L > Δr and q_H − q_L > Δq, the optimal choice is π_H.

Although this result parallels Proposition 4.1, the underlying factors driving the decision differ. When the external effects are minimal, the platform selects H-type drivers primarily for service quality: choosing them enhances the overall service experience and attracts consumers by providing better utility. By contrast, when the network effect is sufficiently large but the differentiation between H-type and L-type drivers is minor and the cost of retaining L-type drivers is high, the choice of H-type drivers is driven mainly by service cost considerations. The platform is willing to invest more in attracting H-type drivers because doing so improves service quality and ultimately increases profitability through higher pricing.

The three propositions above illustrate a common pattern: platforms evaluate both the cost and the quality of service when making decisions. If the quality difference between H-type and L-type drivers is minimal but the cost of retaining H-type drivers is significantly higher, the platform may attract only L-type drivers. If instead the quality difference between the two types is substantial, the platform will involve H-type drivers in providing services, either alone or alongside L-type drivers.
This can be referred to as the "cost-performance ratio" of H-type drivers' participation. When the service quality provided by H-type drivers is low and their retention cost is high, the cost-performance ratio is low, and the platform does not let H-type drivers participate. When H-type drivers can offer higher service quality at an acceptable retention cost, the cost-performance ratio is high, and the platform lets them participate. Other cases are difficult to judge analytically, and specific instances are elucidated below using numerical examples.

Numerical analysis

The theoretical conclusions obtained in the previous section give the optimal decisions under specific conditions on the network externalities. However, it is also important to understand how the optimal decision changes as the degree of network externality varies. To complement the theoretical findings, we analyze numerical examples: by simulating scenarios with different levels of network externality, we illustrate the optimal decisions under the specific parameters and assumptions of the analysis.

The model has the parameters r_i, q_i, N_i, N, α_i, and α_M, where i = L, H. To supplement the conclusions, we set specific values for these parameters and compute the results of the example.

By the model's hypothesis, externalities are positively correlated with the number of service providers. Many studies assume a linear relationship between externalities and the number of providers, following Armstrong (2006) [7]. This paper instead adopts a concave relationship between the externality and the number of providers, assuming α_i = c (N_i)^0.8 and α_M = c N^0.8, where the variable c is the unit network externality, quantifying the change in the external effect when the number of providers increases by one unit. Under this form, the external effect grows with the number of providers but at a decreasing rate, as discussed in Bai (2018) [28].

In setting the numerical parameters, we fix q_L and r_L, with r_L taking two levels, high and low; the other parameters, including N, N_L, N_H, and c, likewise take corresponding high and low levels. For each parameter set, we iterate q_H and r_H over a numerical range and compare the optimal profits of the four cases to determine which is optimal. Specifically, we set N = 3 and q_L = 1, and consider a high and a low value for each of r_L, c, N_L, and N_H [36]. Using the relationship α_i = c (N_i)^0.8 between externalities and the number of providers, we iterate over values of q_H and r_H and compare the optimal profits to determine the optimal case. The detailed parameters are given in Table 4.
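The comparison described above can be scripted directly. The following Python sketch uses the closed forms derived earlier, π_j* = (q_e^j + α_j)² / (4 q_e^j) − C_j with C_H = r_H N_H, C_L = r_L N_L, C_M = r_H (N_L + N_H), plus the case-S profit at its interior optimum. N, q_L, N_L, and N_H follow values stated in the text; r_L and c are illustrative "low level" assumptions.

```python
# Grid comparison of the four cases (H, L, M, S) using the closed forms
# derived above.
import numpy as np

N, q_L = 3.0, 1.0
N_L, N_H = 1.6, 1.4   # one scenario from the text
r_L, c = 0.1, 0.2     # assumed "low" levels

def alpha(n):
    # Concave externality: alpha = c * n^0.8
    return c * n ** 0.8

def pi_single(q_e, a, cost):
    # pi* = (q_e + a)^2 / (4 q_e) - total retention cost
    return (q_e + a) ** 2 / (4.0 * q_e) - cost

def pi_S(q_H, r_H):
    a_H, a_L = alpha(N_H), alpha(N_L)
    n_cH = ((q_H - q_L) + (a_H - a_L)) / (2.0 * (q_H - q_L))
    n_cL = (q_L + a_L) / (2.0 * q_L) - n_cH
    if n_cH <= 0 or n_cL <= 0:
        return -np.inf  # no interior optimum
    return (n_cH * (q_H + a_H) / 2.0 + n_cL * (q_L + a_L) / 2.0
            - r_H * N_H - r_L * N_L)

for q_H in np.arange(2.0, 5.1, 1.0):
    for r_H in np.arange(0.1, 0.51, 0.1):
        q_e_M = (q_L * N_L + q_H * N_H) / (N_L + N_H)
        profits = {
            "H": pi_single(q_H, alpha(N_H), r_H * N_H),
            "L": pi_single(q_L, alpha(N_L), r_L * N_L),
            "M": pi_single(q_e_M, alpha(N_L + N_H), r_H * (N_L + N_H)),
            "S": pi_S(q_H, r_H),
        }
        best = max(profits, key=profits.get)
        print(f"q_H={q_H:.1f}  r_H={r_H:.2f}  ->  optimal case {best}")
```

Sweeping c over its high and low levels in this sketch reproduces the qualitative shifts in the optimal decision regions discussed next.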
For each fixed group of the aforementioned parameters, we set the step size of q_H (the service quality of the high type) to 1 and the step size of r_H (the retention cost of the high type) to 0.01, and iterate over q_H and r_H up to the maximum values Max(q_H) = 5 and Max(r_H) = 0.5.

First, consider the scenario where r_L is at a medium or low level. We examine two cases that depend on the relative external effect implied by different levels of H and L. In the example where the external-effect difference between the two service types is small, with N_L = 1.6 and N_H = 1.4, we observe how the optimal decisions change as c increases. Similar patterns and changes in the optimal decision regions can be observed in the other scenarios.

Figs 4 and 5 demonstrate that when r_L is low, the change trends are similar regardless of whether the difference in external effects between the two service types is high or low. In both figures, fig (a) shows that when the external effects are relatively small and the difference between r_H and r_L is small, the platform prioritizes drivers with high service quality. As the difference r_H − r_L increases, the decision depends on the quality gap: if the difference between q_H and q_L is small, the platform utilizes only drivers with a low service quality level, whereas if the difference is large, it utilizes only drivers with a high service quality level. These patterns highlight the importance of both the external effect and service quality in the platform's optimal decisions: primarily, the platform relies on H-type drivers to enhance service quality and attract user participation.

Fig (b) illustrates that, with r_L at a lower level, the optimal decisions change as r_H and q_H increase from small to large. As the external effect increases from fig (a) to fig (c), the platform's optimal decision shifts from H to S at a higher value of q_H; with a further increase in the external effect, the critical value of q_H at which the optimal decision shifts from H to S moves gradually to the left, eventually leading to the complete disappearance of the optimal decision H, as depicted in fig (c).

In simpler terms, when the external effects of the L-type and H-type services become similar, an increase in external effects causes the platform's optimal decision to shift from H to S, starting from high service quality. This is because when the external-effect difference is small, the platform needs a larger service-level difference to justify differential pricing. As the external effect continues to increase, the optimal differentiation level q_H gradually decreases; once the external effect becomes sufficiently large, the platform weighs the costs and decides whether to implement differentiated pricing or to provide services with L-type drivers only.

Next, when r_L is at a medium or low level and the externalities of the two service types differ significantly, reflected as N_L = 2.5 and N_H = 0.5 in Fig 6, we can again observe changes in the platform's optimal decision as c increases.
A similar case can be observed in Fig 7. The two examples demonstrate that when r_L is at a medium or low level and there is a significant disparity in the external effects of the H-type and L-type services, the platform's optimal decision shifts from strategy H to strategy S as the external effects increase, with the transition beginning where the quality difference between the two types is relatively low. In this case, the relatively large external effects strongly influence the decision-making process: the platform can implement differentiated pricing even when the two service levels are relatively similar. With q_L fixed, the optimal differentiated q_H gradually increases as the external effect grows.

From the preceding discussion, it is evident that the platform weighs both cost and service quality when making decisions. When r_H is close to r_L, the platform can readily attract H-type drivers to participate; but if the quality level q_H is close to q_L, differential pricing may not be worthwhile, and the platform may instead mix the drivers together and provide a single service. When the externalities are significant, the platform has the opportunity to charge a higher price, and if there is a substantial difference between the service quality levels q_H and q_L, it can let the two driver types serve separately under differentiated pricing. As r_H increases, differentiated pricing remains optimal within a certain range; if r_H becomes excessively high, the platform must carefully evaluate the cost-performance ratio of introducing H-type drivers. If that ratio is low, meaning the benefits of introducing H-type drivers are not significant enough, the platform may let only L-type drivers serve; if it is high, meaning the benefits outweigh the costs, the platform still aims to attract H-type drivers and provide differentiated services.

Fig (b) in the previous figures illustrates the transition from fig (a) to fig (c) as the external effect increases; along this transition the optimal decisions change from H/L to M/S/L, marking a shift in the platform's decision process. When r_L is low, the platform's optimal decision changes as the external effect increases: if the external effects of the H-type and L-type services differ little, the shift from H to S begins where the service quality of the H type is high, whereas if they differ greatly, the shift begins where the service quality of the H type is low. The platform can thus adjust its offerings according to the levels of service quality and external effects, catering to the different needs and preferences of its users.

When the value of r_L is high, the platform's optimal decision is to select only H-type or only L-type drivers, regardless of the magnitude of the external effects. This finding confirms and supplements Proposition 4.3. Specific numerical examples are given in Figs 8 and 9.
Figs 8 and 9 illustrate the platform's optimal decisions under the condition of high r_L, for large and small relative differences in external effects. Comparing fig (a) in the two sets of graphs, the same pattern holds even when the absolute value of the external effect is small. Based on the analysis above, it is evident that when the network externality of the low-quality service is low, the platform can attract high-quality drivers to participate only if both the service quality and the cost lie within acceptable ranges. Consequently, under low-quality service with low network externalities, the platform must forgo a portion of the market, and only high-quality drivers serve a certain segment. This is consistent with the finding of Zhong et al. (2020) that it is not always optimal for a platform to fulfill as many consumer orders as possible.

Managerial implication

The findings of this study have several managerial implications for platform companies. First, platforms can weigh the cost and quality of service to determine which type of driver to engage as service providers. If there is little difference in service quality between low- and high-quality drivers but a significant difference in cost, the platform can choose to attract only low-quality drivers to participate. If instead there is a substantial gap in service quality between the two types of drivers, the platform may allow high-quality drivers to participate, analyzing whether they should serve alone or alongside low-quality drivers. We refer to this as the cost-performance ratio of high-quality drivers' participation. Where high-quality drivers incur high costs but deliver little additional service quality, the cost-performance ratio is low, and the platform can decline their participation. Where, despite higher operating costs, high-quality drivers consistently deliver superior service quality, the ratio is favorable, and the platform can let them participate.
Second, the numerical analysis makes clear that the platform can weigh cost, network externality, and service quality in deciding when to implement differentiated services. When the cost of retaining low-quality drivers is moderate or low and the externalities of the two service types are relatively similar, the platform should introduce differentiated pricing when there is a significant disparity between the two service levels; as the external effect grows, optimal differentiation is achieved at progressively lower service quality. When the external effects of the two service types differ significantly, the platform can implement differentiated pricing even when the service levels are similar; as the external effect grows, the service quality needed for optimal differentiation increases. When the cost of retaining low-quality drivers is high, however, the platform's optimal decision is to involve either only high-quality or only low-quality drivers, depending on their respective costs: if the cost of high-quality drivers is only slightly higher than that of low-quality ones, the platform can attract drivers with a high service level, but if it is significantly higher, the platform can attract only low-quality drivers.

Overall, the findings emphasize the importance of understanding the dynamics of network externality and service quality in the platform economy. By recognizing the impact of external effects and differentiating service quality, platforms can improve their competitiveness, and by attracting the right type of drivers they can enhance the overall customer experience. This study highlights the significance of these factors and encourages platforms to implement effective strategies to stay ahead in the market.

Conclusion

In this paper, the inclusion of both cross-network externalities and service quality preferences in the user utility function highlights the importance of these factors in platform decisions, and underscores the need for platforms to weigh the impact of network effects and service quality on user behavior in order to maximize profitability and deliver a superior customer experience. We assume the existence of two types of drivers, high and low. This allows the platform to explore different strategies, such as attracting one type or both types of drivers, and pricing them differently or together within a single service type. By comparing the profits under each decision in different conditions, the paper provides insights into the optimal strategy for the platform.
First, the platform's decision on driver quality depends on the trade-off between cost and service quality, as well as on consumer preferences and willingness to pay for higher quality. When the externality of drivers is relatively small, the platform should consider the cost-performance ratio implied by the retention cost and service quality of the two driver types, regardless of whether the unit retention cost of low-quality drivers is high or low. In other words, if the unit cost difference between the two driver types is not significant, the platform can afford the higher cost and opt for high-quality drivers; as the cost of such drivers increases, the platform will keep attracting only low-quality drivers unless the cost-performance of the high service quality is sufficiently high.

Second, in the case of a large external effect and a high retention cost of low-quality drivers, the overall result may resemble the previous case, but the internal driving factors differ. In the previous case, where the external effect was small, improving service quality was a means of attracting consumers to participate in the service; here, with a larger external effect and a high retention cost for low-quality drivers, the cost factor becomes the platform's main consideration.

Third, when the external effect is significant and the retention cost of low-quality drivers is low, the platform may adopt a strategy in which all drivers serve under price discrimination. In this case, if the service quality of high-quality drivers is relatively high, the platform can implement price discrimination, ensuring that consumers who value and are willing to pay for high-quality service can access it, while more price-sensitive consumers still find lower-cost options. However, if the cost of high-quality drivers is too high and there is minimal difference in service quality between the two types, the platform may abandon high-quality drivers and attract only low-quality drivers to participate.

Finally, the numerical examples yield conclusions that complement the theoretical findings from a dynamic perspective, with the retention cost of low-quality drivers divided into two categories: medium/low and high. When this cost is at a medium/low level and the external impacts of the two service types are relatively similar, the platform should adopt differentiated pricing if there is a significant difference in service levels; as the external effect increases, the optimal differentiation of service quality gradually decreases. When the external effects of the two service types differ markedly, the platform can introduce differentiated pricing even if the service levels are relatively similar; as the external effect increases, the optimal differentiation of service quality also increases. When the retention cost of low-quality drivers is high, regardless of the absolute value of the external effect, the platform's optimal decision is to attract either only high-quality or only low-quality drivers: if high-quality drivers cost little more than low-quality ones, the platform may be inclined to attract drivers with the higher service level; otherwise, it can attract only low-quality drivers to participate.
Dirac right-handed sneutrino dark matter and its signature in the gamma-ray lines

We show that a Dirac right-handed scalar neutrino can be a weakly interacting massive particle in the neutrinophilic Higgs model. When the additional Higgs fields couple only to the leptonic sector through neutrino Yukawa couplings, the correct relic density of dark matter can be obtained from the thermal freeze-out of dark matter annihilation into charged leptons and neutrinos. At present, this annihilation is suppressed by the velocity of the dark matter. However, the one-loop annihilation cross section into γγ can be larger than that of the helicity-suppressed annihilation into fermions, because the relevant coupling constants are different. Hence, the gamma-ray line signal that might have been observed by the Fermi-LAT can also be explained by its annihilation.

I. INTRODUCTION

Various cosmological and astrophysical observations show convincing evidence of nonbaryonic dark matter (DM). However, the identity of the dark matter remains one of the most significant unanswered questions in particle physics and astronomy. One of the most promising and natural candidates is the weakly interacting massive particle (WIMP) arising in physics beyond the standard model [1]. Much experimental effort has been devoted to the direct and indirect detection of WIMPs. The strategy of direct detection is to look for the recoil energy of nuclei scattered by WIMPs, while that of indirect detection is to find an excess over the astrophysical background of cosmic rays produced by dark matter annihilation or decay. These annihilation products include neutrinos, positrons, antiprotons, gamma rays, and so on.

In general, the gamma-ray flux from dark matter annihilation or decay consists of two components: a continuous spectrum and a line spectrum. A gamma-ray line is a clear signature of WIMP annihilation and has been regarded as a "smoking gun" for WIMP dark matter, because there is no known astrophysical source that can emit such a line. However, a tentative indication of a gamma-ray line in the Fermi-LAT data was reported recently in [2,3]. It can be associated with dark matter annihilation [2-6] or with hard photons in the Fermi bubble regions [7,8]; however, Ref. [9] confirms the existence of the spectral feature and finds no correlation with the Fermi bubbles.

In general, the WIMP annihilation cross section into a gamma-ray line is small. Since a dark matter candidate by definition does not couple to the photon directly, such annihilation modes are induced by loop processes and hence suppressed compared to tree-level modes. A few nonsupersymmetric models produce a gamma-ray line, e.g., the singlet scalar DM [10], the loop-induced neutrino mass model [11], and the inert Higgs doublet model [12]. However, it is not easy to produce such a signal in supersymmetric models; in fact, the expected gamma-ray line flux, in particular into γγ, induced by neutralino annihilation in the minimal supersymmetric standard model (MSSM) is much suppressed [13]. Another WIMP dark matter candidate in supersymmetric models is the sneutrino: mixed [14] or right-handed [15-17]. In those models, however, the relevant sneutrino couplings are to the neutral Higgs bosons or a Z′ boson, and therefore it is not easy to produce a large line gamma-ray flux from their annihilation processes.
In this paper, we show that a Dirac right-handed sneutrino as WIMP dark matter could produce a gamma-ray line flux significantly larger than in other supersymmetric models. Right-handed Dirac sneutrino dark matter has been proposed in the MSSM extended by right-handed neutrino superfields [19] and in a further extended model [20], but those candidates are not thermal freeze-out relics. The essential ingredient for our Dirac sneutrino to be a WIMP is an extended Higgs sector including a neutrinophilic Higgs field, i.e., a Higgs field that interacts with other matter only through neutrino Yukawa couplings [21-24]. The neutrinophilic Higgs model is based on the idea that the smallness of neutrino masses might come not from small Yukawa couplings but from a small vacuum expectation value (VEV) of the neutrinophilic Higgs field H_ν. Recently, various aspects of neutrinophilic Higgs models have been studied [25-32]. The consequence of the neutrinophilic Higgs model is that neutrino Yukawa couplings need not be small, because the smallness of neutrino masses is explained by the small H_ν VEV; in fact, the Yukawa couplings can be of order unity. Using this advantage, we will show that a right-handed Dirac sneutrino can have an annihilation cross section large enough to be a WIMP, as well as to produce an observable gamma-ray line flux. The paper is organized as follows. In Sec. II, we briefly describe a supersymmetric neutrinophilic Higgs model in which the VEV of H_ν is small and the neutrino Yukawa couplings can be large. In Sec. III, we examine the Dirac right-handed sneutrino dark matter candidate by estimating its thermal relic density, and we show how large a monochromatic gamma-ray line signal can be produced. We summarize our results in Sec. IV.

II. MODEL

The supersymmetric neutrinophilic Higgs model contains a pair of neutrinophilic Higgs doublets H_ν and H_ν′ in addition to the up- and down-type Higgs doublets H_u and H_d of the MSSM [29]. A discrete Z_2 parity is introduced to discriminate H_u (H_d) from H_ν (H_ν′), with the corresponding charges assigned in Table I. The superpotential contains, besides the MSSM terms, the neutrino Yukawa coupling y_ν L·H_ν N and the Higgs mass terms μ H_u·H_d and μ′ H_ν·H_ν′, where we omit generation indices and the dot denotes the SU(2) antisymmetric product. The Z_2 parity plays the crucial role of suppressing tree-level flavor-changing neutral currents (FCNCs) and is assumed to be softly broken by tiny parameters ρ and ρ′ (≪ μ, μ′). The scalar potential relevant for the Higgs fields and sneutrinos is the sum of the supersymmetry (SUSY) potential and the SUSY-breaking terms. The tiny soft Z_2-breaking parameters ρ, ρ′ generate a large hierarchy between the VEVs of the two Higgs sectors through the stationary conditions; namely, one obtains v_ν ∼ (ρ/μ′) v, so that the hierarchy ρ/μ′ ≪ 1 leads to a small v_ν. The smallness of ρ compared to μ′ is natural in 't Hooft's sense, because ρ is a soft-breaking parameter of the Z_2 parity. It is easy to see from the Dirac neutrino mass relation m_ν = y_ν v_ν that the neutrino Yukawa couplings y_ν can be large for small v_ν; for v_ν ∼ 0.1 eV, one finds y_ν ∼ 1. The doublets H_u,d give rise to the MSSM-like Higgs bosons, including the charged states H±, while H_ν,ν′ constitute two CP-even Higgs bosons H_{2,3}, two CP-odd bosons A_{2,3}, and two charged Higgs bosons H±_{2,3}.
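To make the key scaling concrete, here is a small numerical illustration of the Dirac mass relation m_ν = y_ν v_ν (my own sketch, not code from the paper): a sub-eV neutrinophilic VEV drives the Yukawa coupling to order unity, whereas an electroweak-scale VEV would force it to be tiny.

```python
# Dirac neutrino mass relation: m_nu = y_nu * v_nu  =>  y_nu = m_nu / v_nu
M_NU_EV = 0.1  # assumed Dirac neutrino mass in eV (illustrative)

for v_nu_ev in [1.74e11, 1.0, 0.1]:  # VEV in eV: ~electroweak scale vs sub-eV
    y_nu = M_NU_EV / v_nu_ev
    print(f"v_nu = {v_nu_ev:9.2e} eV  ->  y_nu = {y_nu:.1e}")
# An electroweak-scale VEV (~174 GeV) would require y_nu ~ 6e-13,
# while v_nu ~ 0.1 eV gives y_nu ~ 1, large enough for thermal freeze-out.
```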
The scalar potential including the sneutrinos determines the mixing between the left-handed (LH) and right-handed (RH) sneutrinos, which is small in this model. We find that the RH sneutrino Ñ, the dark matter candidate in our model, has suppressed interactions with the SM-like Higgs boson and the Z boson, since these are proportional to the LH-RH sneutrino mixing, sin θ_ν̃ in Eq. (9).

III. RIGHT-HANDED SCALAR NEUTRINO AS DARK MATTER

In this section we investigate the thermal relic abundance of the right-handed sneutrino dark matter Ñ and its indirect signature in gamma-ray observations.

A. Thermal relic density of dark matter: tree-level annihilation of Ñ

Since the dark matter particle Ñ can have large Yukawa couplings, y_ν ∼ O(1), it can be kept in thermal equilibrium in the early Universe through these interactions. Here we consider the case in which the mass eigenstates H_2 and H_3, originating mostly from H_ν and H_ν′, are much heavier than M_Ñ; the electroweak precision constraints are then easily satisfied, and annihilation into H_ν and H_ν′ states is kinematically forbidden. In this case, the dominant annihilation mode of Ñ in the early Universe is annihilation into a lepton pair, ÑÑ* → f̄_1 f_2, mediated by the heavy H_ν-like Higgsinos, as depicted in Fig. 1. The final states f_1 and f_2 are charged leptons for t-channel exchange of the H_ν-like charged Higgsino (H̃±_ν), and neutrinos for t-channel exchange of the H_ν-like neutral Higgsino (H̃0_ν). The thermally averaged annihilation cross section for this mode takes the usual partial-wave form ⟨σv_rel⟩ ≃ a + b v_rel² [33], where we use ⟨v_rel²⟩ = 6T/M_DM, with v_rel the relative velocity of the annihilating dark matter particles, m_f the mass of the fermion f, and M_H̃ν ≃ μ′ the mass of the H_ν-like Higgsino. For simplicity we assume universal Yukawa couplings for each flavor. Since the s-wave (a) term is helicity suppressed by m_f², the p-wave (b) term is what matters for the dark matter relic density at the freeze-out epoch, T_f ∼ M_Ñ/20. The right relic density of the WIMP is obtained for a thermally averaged annihilation cross section of the canonical thermal value, roughly 3 × 10⁻²⁶ cm³ s⁻¹ (Eq. (11)). Comparing Eqs. (10) and (11), we find that y_ν = O(1) is required, where T_f is the DM freeze-out temperature, usually T_f ≃ M_DM/20, and we account for the number of final-state modes, f = 2 × 3² = 18. In Fig. 2 we show the contour plot of the annihilation cross section in the plane of M_Ñ and M_H̃ν for y_ν = 1.

B. Monochromatic gamma-ray lines from right-handed sneutrino annihilation

Since we consider massive dark matter that is nonrelativistic at present, the tree-level p-wave contribution to the annihilation in Eq. (10) is also suppressed today. Therefore, in this model, the dominant contribution to DM annihilation in the Galaxy at the present epoch comes from loop diagrams. The emission of a vector boson through virtual internal bremsstrahlung can enhance the s-wave contribution, in particular when the mass splitting between the dark matter and the t-channel mediator (H_ν in our case) is small. However, since we consider heavy Higgs bosons, M_Hν ≫ M_Ñ, the bremsstrahlung is suppressed; we therefore do not expect a gamma-ray line spectrum from internal bremsstrahlung, in contrast to the scenario of [2].
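Before turning to the loop-induced lines, the freeze-out normalization quoted above can be cross-checked with the standard analytic estimate (a sketch under stated assumptions, using the textbook relic-abundance formula rather than the paper's Eqs. (10) and (11); the values of g* and x_f are illustrative):

```python
import math

# Standard freeze-out estimate (e.g., Kolb & Turner):
#   Omega h^2 ~ 1.07e9 GeV^-1 * x_f / (sqrt(g*) * M_Pl * (a + 3b/x_f))
# For a purely p-wave cross section (a ~ 0, helicity-suppressed s-wave),
# solve for the p-wave coefficient b that yields Omega h^2 = 0.12.
M_PL = 1.22e19           # Planck mass in GeV
G_STAR = 90.0            # relativistic d.o.f. at freeze-out (assumed)
X_F = 20.0               # M_DM / T_f at freeze-out
OMEGA_H2 = 0.12          # observed relic abundance
GEV2_TO_CM3S = 1.17e-17  # conversion: 1 GeV^-2 = 1.17e-17 cm^3/s

b = 1.07e9 * X_F**2 / (3.0 * math.sqrt(G_STAR) * M_PL * OMEGA_H2)  # GeV^-2
sigmav_fo = b * (6.0 / X_F)  # <sigma v> = b <v_rel^2> at freeze-out

print(f"required p-wave coefficient b ~ {b * GEV2_TO_CM3S:.1e} cm^3/s")
print(f"<sigma v> at freeze-out       ~ {sigmav_fo * GEV2_TO_CM3S:.1e} cm^3/s")
# ~3.6e-26 cm^3/s, consistent with the canonical thermal WIMP value.
```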
The charged components of the H_ν scalar doublets and the charged sleptons generate the triangle and box loop diagrams shown in Figs. 3 and 4, and the two photons are emitted from the internal charged particles. In this case the photons have a line spectrum with energy E_γ = M_Ñ. Since the DM is nonrelativistic, we can ignore its momentum. Assuming M_Hν, M_l̃ ≫ M_Ñ, we obtain the one-loop annihilation cross section into two photons, in the limit M_Hν = M_Hν′ = M_l̃ for simplicity. A gamma-ray line can also be produced by dark matter annihilation into Zγ through one-loop box diagrams; the photons produced in this process have energy E_γ = M_Ñ (1 − M_Z²/4M_Ñ²). The corresponding annihilation cross section is approximately given in terms of the Weinberg mixing angle θ_W, again with M_Hν = M_Hν′ = M_l̃. Recently, a tentative indication of a gamma-ray line in Fermi-LAT data was reported [2,3]. When interpreted in terms of dark matter particles annihilating into a photon pair, the observation implies a dark matter mass of [3] m_χ = 129.8 ± 2.4 (+7/−13) GeV and a corresponding partial annihilation cross section, obtained using the Einasto dark matter profile. This determines the required coupling, where we use M_Ñ = 130 GeV and α_em = 1/127. For this mass of DM, there is another gamma-ray line, from Zγ, at E_γ = 114 GeV, but its flux is half that of the two-photon line at 130 GeV.

C. Another constraint

Here we note the constraint from direct dark matter searches. The Ñ-nucleon scattering is induced at the loop level; in the resulting expression, p is the four-momentum of Ñ in the nonrelativistic limit, M_Z is the Z boson mass, g_2 is the SU(2)_L gauge coupling, θ_W is the Weinberg angle, and M is the mass of the LH sneutrino and the H_ν-like Higgs boson. For TeV-scale masses of the particles inside the loops, we find a spin-independent cross section with a nucleon of σ_SI ≃ 10⁻⁹ pb. It will therefore be possible to explore this model with direct dark matter search experiments in the near future.

IV. CONCLUSION

We have shown that a Dirac right-handed sneutrino, in a model with neutrinophilic Higgs doublet fields, is a weakly interacting massive particle and a viable dark matter candidate. This is because the neutrino Yukawa couplings can be of order unity in neutrinophilic Higgs models, where the smallness of neutrino masses is explained by the small H_ν VEV. The promising signature of this sneutrino comes from the indirect detection of dark matter, especially gamma-ray lines. The one-loop annihilation cross section into γγ can be larger than that of the helicity-suppressed tree-level annihilation into fermions. Hence we can expect a large gamma-ray line signal; for instance, the signal that may have been observed by the Fermi-LAT can be explained by this annihilation.
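As a quick cross-check of the line energies quoted in Sec. III B (my own two-body kinematics sketch, not code from the paper):

```python
# Photon energies of the two monochromatic lines from sneutrino annihilation.
# NN* -> gamma gamma : E_gamma = M_DM (back-to-back photons at rest)
# NN* -> Z gamma     : E_gamma = M_DM * (1 - M_Z**2 / (4 * M_DM**2))
M_DM = 130.0   # sneutrino mass in GeV, matching the Fermi-LAT hint
M_Z = 91.19    # Z boson mass in GeV

e_gg = M_DM
e_zg = M_DM * (1.0 - M_Z**2 / (4.0 * M_DM**2))
print(f"gamma-gamma line: E = {e_gg:.0f} GeV")
print(f"Z-gamma line:     E = {e_zg:.0f} GeV")  # ~114 GeV, as quoted in the text
```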
A clinical approach to tubulopathies in children and young adults

Kidney tubules are responsible for the preservation of fluid, electrolyte and acid-base homeostasis via passive and active mechanisms. These physiological processes can be disrupted by inherited or acquired aetiologies; the net result is a tubulopathy. It is important to make a prompt and accurate diagnosis of tubulopathies in children and young adults. This allows timely and appropriate management, including disease-specific therapies, and avoids complications such as growth failure. Tubulopathies can present with a variety of non-specific clinical features which can be diagnostically challenging. In this review, we build from this common anatomical and physiological understanding to present a tangible appreciation of tubulopathies as they are likely to be clinically encountered among affected children and young adults.

Introduction

Fluid, electrolyte and acid-base homeostasis are imperative for the preservation of life. This is accomplished by adjustments in glomerular filtration and tubular reabsorption of solutes and fluids in response to fluctuations in dietary intake and metabolic processes [1,2]. Because these are complex, closely regulated and interdependent physiological processes, their absolute or relative dysfunction is of critical relevance to health and the development of disease. Kidney tubules are typically divided into four broad segments based on anatomical and functional characteristics (Fig. 1) [2]. The proximal convoluted tubule (PCT) is responsible for the reabsorption of the majority of water and solutes, including amino acids, low-molecular-weight proteins (LMWPs) and glucose. The PCT has a significant energy requirement and is vulnerable to conditions that result in an impaired energy supply, such as inborn errors of metabolism [3-5]. Following the PCT is the thick ascending limb of the loop of Henle (TAL) [2]. The TAL provides the urinary concentrating mechanism, enabling tubular excretion of solutes with minimal water loss [6]. It reabsorbs up to 30% of filtered sodium via the Na-K-2Cl cotransporter (NKCC2) and contributes to calcium and magnesium homeostasis via paracellular mechanisms [7]. The distal tubule is composed of the distal convoluted tubule (DCT) and the collecting duct (CD). Sodium and water reabsorption is highly variable in these segments secondary to the mineralocorticoid (aldosterone)-responsive principal and intercalated cells [8,9]. The principal cells increase sodium, and therefore water, reabsorption via aldosterone-mediated activation of the epithelial sodium channel (ENaC) [10]. In times of potassium excess, the intercalated cells facilitate potassium secretion via the same mechanism: aldosterone is released and ENaC stimulated, resulting in an electrochemical gradient which promotes potassium secretion via the ROMK channel [9,11]. Intercalated cells are also responsible for acid-base homeostasis, with the excretion of hydrogen and reabsorption of filtered bicarbonate [9]. Unique to the DCT, the thiazide-sensitive NaCl cotransporter (NCC) reabsorbs 5-10% of filtered sodium [8]. The DCT also contributes to calcium and magnesium homeostasis via transcellular mechanisms and is responsible for the secretion of potassium via both voltage- and flow-dependent processes [2,8]. Finally, the principal cells within the CD also facilitate water reabsorption via the water channel aquaporin-2 (AQP2), which is stimulated by antidiuretic hormone (ADH) [12].
The dysfunction of any of these tubular mechanisms results in a "tubulopathy".

Clinical presentation

The clinical presentation of tubular dysfunction in children and young adults is as varied as it is non-specific. Prominent features include polyuria, polydipsia, irritability, growth failure, nephrocalcinosis and blood pressure anomalies. It is important to elicit a history of polyuria and/or polydipsia, as these reflect a concentrating defect. This can be compounded by an osmotic diuresis secondary to increased solute delivery to the distal tubule. The body compensates with the release of ADH and aldosterone (if these mechanisms remain intact) and increased thirst. The loss of water and solutes, however, is constant, and despite polydipsia, children are often chronically or intermittently dehydrated, resulting in irritability. Growth failure is a common presenting feature and an ongoing management issue. The magnitude of the growth deficit is multifactorial and dependent on the severity of the underlying tubulopathy. Chronic acidosis results in protein catabolism and growth hormone deficiency/resistance and has a direct effect on the epiphyseal growth plate [13]. Chronic hypokalaemia, a common feature of selected tubulopathies, has also been associated with growth hormone deficiency and resistance [13,14]. Phosphate and calcium wasting, and subsequent rickets secondary to kidney disease, have an obvious impact on growth. Finally, polyuria and polydipsia result in significant fluid intake, impacting the ability of (particularly young) children to meet the caloric requirements that facilitate growth. A differentiating feature of many tubulopathies is hypercalciuria and nephrocalcinosis, which may additionally manifest as nephrolithiasis. Hypercalciuria can occur secondary to increased absorption of calcium from the gut (e.g. in vitamin D toxicity), increased release of calcium from bone (in the setting of acidosis) and reduced reabsorption of calcium in the tubule, as is the case for many tubulopathies [15]. Blood pressure effects vary. In tubulopathies that result in water or salt retention, hypertension is observed and often marked (e.g. Liddle syndrome). In those with salt and water wasting, the net effect is hypo- or normotension (e.g. Bartter syndrome). It is important to note that hypertension can still be present in salt-losing tubulopathies, for example the secondary hypertension noted in adults with Gitelman syndrome [16]. Other potential extra-kidney manifestations, including sensorineural hearing loss, ophthalmologic involvement and developmental delay, can guide the diagnosis in the direction of specific tubulopathies.

Investigation

When faced with a potential tubulopathy, it is important to have a diagnostic tool kit to differentiate these conditions. This includes urine and blood tests, imaging and, more recently, genetic assessment (Table 1). Serum evaluation should include a venous gas to assess acid-base status and a full biochemical profile, including electrolytes, urea and creatinine, calcium, magnesium, phosphate and uric acid, to determine the salt-wasting profile. Renin and aldosterone are important in those who present with potassium abnormalities (hypo-/hyperkalaemia) with or without hypertension. Urine should be analysed with a urine dipstick (for glucosuria), protein:creatinine ratio and calcium:creatinine ratio. Additional investigations for suspected proximal tubulopathy include beta-2 microglobulin (or an equivalent tubular protein) and a urine amino acid profile or metabolic screen.
Beta-2 microglobulin is an LMWP freely filtered by the glomerulus and reabsorbed by the tubules; therefore, abnormally elevated levels in the urine can be suggestive of tubular dysfunction. If initial testing is suggestive of a tubulopathy, paired urine and serum samples should be obtained to enable the calculation of the fractional excretion of sodium (FeNa) and magnesium (FeMg), in addition to the transtubular potassium gradient (TTKG) and the tubular maximum phosphate reabsorption per glomerular filtration rate (TmP/GFR) (Table 2). These fractional excretions can provide insight into the electrolyte handling of the kidneys and evidence of the underlying pathology. A kidney ultrasound will detect nephrocalcinosis and/or nephrolithiasis, hydronephrosis in those with polyuria, and congenital anomalies of the kidney and urinary tract (CAKUT), which can be associated with tubular pathologies such as HNF1B-associated disease. Finally, a genetic assessment should be considered where feasible. There are now over 50 disease genes implicated in tubulopathies, with some disorders having very specific phenotypes while others exhibit crossover or phenocopy phenomena [1]. The diagnostic yield of genetic testing in tubulopathies is much greater than in many other conditions, with up to 50% of paediatric or young adult patients having an identifiable genetic diagnosis [2,17]. Where possible, genetic testing can be utilised for confirmation of diagnosis, guiding disease-specific therapy, providing prognostication, identification of at-risk relatives and antenatal counselling for future pregnancies [1]. Nonetheless, the availability and financial viability of these assessments vary from centre to centre, and they are not imperative for diagnosis, though they can often provide personalised clinical utility.

Biochemical presentations

Given the non-specific clinical presentations of tubulopathies, it is essential to utilise the observed biochemical changes to localise which tubular segment may be implicated. These changes reflect a disruption of normal tubular physiology, and their distinctive biochemical patterns are reflective of the disease process [1]. Such biochemical patterns include hypokalaemic or hyperkalaemic metabolic acidosis and hypokalaemic metabolic alkalosis. Additional factors such as family history and syndromic features can assist in determining the underlying diagnosis.

Hypokalaemic metabolic acidosis

Hypokalaemic metabolic acidosis is a typical feature of both proximal tubular bicarbonate wasting and impaired acid secretion in the distal tubule [3].

Proximal tubular bicarbonate wasting

In the PCT, impaired co-transport of sodium and bicarbonate results in acidosis and volume depletion. This stimulates renin-angiotensin-aldosterone system (RAAS) activation, leading to the exchange of potassium ions for sodium in the distal tubule and ultimately hypokalaemia. Hypokalaemic metabolic acidosis secondary to proximal tubular dysfunction is most commonly associated with the wasting of all solutes reabsorbed in the PCT. This generalised proximal tubulopathy (GPT) may also be referred to as renal Fanconi syndrome [4]. In GPT, sodium, potassium, calcium, phosphate, uric acid, bicarbonate, glucose, amino acid and LMWP wasting occurs. The net result is hypokalaemic metabolic acidosis with hypo-/normotension (due to salt and water wasting), hypophosphataemia with subsequent rickets, hypercalciuria with nephrocalcinosis, glucosuria, aminoaciduria, and LMW proteinuria.
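The indices referenced above (Table 2) reduce to simple bedside arithmetic on paired serum and urine values. Below is a minimal sketch of the standard formulas (my own illustration, not reproduced from the article's Table 2; function and variable names are hypothetical, the 0.7 filterable-fraction factor for magnesium and the simplified TmP/GFR form are the commonly used conventions, and each paired analyte must be in consistent units):

```python
def fe_na(u_na, p_na, u_cr, p_cr):
    """Fractional excretion of sodium (%)."""
    return 100.0 * (u_na * p_cr) / (p_na * u_cr)

def fe_mg(u_mg, p_mg, u_cr, p_cr):
    """Fractional excretion of magnesium (%).
    The 0.7 factor assumes ~70% of serum magnesium is unbound and filterable."""
    return 100.0 * (u_mg * p_cr) / (0.7 * p_mg * u_cr)

def ttkg(u_k, p_k, u_osm, p_osm):
    """Transtubular potassium gradient (valid when U_osm > P_osm)."""
    return (u_k * p_osm) / (p_k * u_osm)

def tmp_gfr(u_p, p_p, u_cr, p_cr):
    """Tubular maximum phosphate reabsorption per GFR (same units as P_P).
    Simplified linear form: TmP/GFR = P_P - (U_P * P_Cr / U_Cr)."""
    return p_p - (u_p * p_cr / u_cr)

# Example: U_Na 60, P_Na 140 (mmol/L); U_Cr 5000, P_Cr 50 (umol/L)
print(f"FeNa = {fe_na(60, 140, 5000, 50):.2f} %")  # ~0.43 %
```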
Calculating the FeNa can be useful in this setting of reduced extracellular volume: the expected FeNa would be < 1%, but in GPT, FeNa is often inappropriately elevated (> 1%) due to salt wasting. It is important to be mindful that FeNa is also affected by salt intake (and is therefore not a diagnostic tool of value in breastfeeding infants without free access to salt) and by GFR, and is therefore not always helpful. The TTKG will be elevated and the TmP/GFR inappropriately low [4]. Clinically, these children generally present in the first year of life with failure to thrive, polyuria/polydipsia, irritability, vomiting and growth failure with evidence of rickets [4]. GPT has an extensive list of genetic and acquired causes. The genetic causes can be categorised according to the mechanism of dysfunction, including accumulation of toxic metabolites (e.g. cystinosis, galactosaemia, Wilson's disease), impaired energy production (mitochondrial cytopathies) or disruption of intracellular messaging (Lowe syndrome, Dent disease) [4]. These multisystem diseases are associated with extra-kidney manifestations such as ophthalmological involvement, developmental delay and/or hepatomegaly. Four genes identified to date result in GPT with a kidney-limited phenotype: three are autosomal dominant (AD) (GATM, EHHADH, HNF4A) and one is autosomal recessive (AR) (SLC34A1). Acquired causes of GPT include medications (aminoglycosides, ifosfamide), toxins (heavy metal poisoning) and kidney injury (acute interstitial nephritis, recovering acute tubular necrosis) [2]. Lowe syndrome (oculocerebrorenal syndrome) and Dent disease (types 1 and 2) present as GPT with unique characteristics. Both are X-linked recessive disorders associated with an incomplete proximal tubulopathy dominated by LMW proteinuria, marked hypercalciuria with significant nephrocalcinosis, and progressive CKD; importantly, they rarely feature glucosuria [18]. Lowe syndrome and Dent-2 disease are both caused by variants in OCRL, which encodes an enzyme important in the PCT endolysosomal pathway [18]. Lowe syndrome is associated with devastating extra-kidney manifestations including cataracts, glaucoma, visual impairment, intellectual impairment, behavioural regression and seizures [19]. Interestingly, extra-kidney manifestations in Dent-2 disease are less common; it is hypothesised that these conditions represent variable phenotypic expression of the OCRL gene product. Dent-1 disease is caused by variants in CLCN5, accounts for 60% of patients with Dent disease and rarely has extra-kidney manifestations [18]. A rare cause of hypokalaemic metabolic acidosis secondary to bicarbonate wasting that is not associated with GPT is isolated proximal renal tubular acidosis (RTA), also referred to as type 2 RTA. This condition is caused by SLC4A4 variants (encoding the basolateral sodium-bicarbonate cotransporter) and is associated with eye abnormalities such as cataracts, glaucoma and band keratopathy [20]. Differentiating features of this condition include the lack of amino acid wasting and the absence of nephrocalcinosis. Nephrocalcinosis is thought to be prevented in this condition by significant citraturia, which prevents calcium precipitation and stone formation [21].

Impaired hydrogen excretion in the distal tubule

Hypokalaemic metabolic acidosis secondary to distal tubular dysfunction is due to failure of the intercalated cells to secrete protons.
To counter this, sodium is reabsorbed in the CD in exchange for either protons or potassium. If protons cannot be excreted, potassium will be preferentially excreted, resulting in hypokalaemia. The chronically acidotic state results in calcium release from bone, which contributes to hypercalciuria and nephrocalcinosis [20]. Distal RTA, also referred to as type 1 RTA, is an example of this. There are five genes associated with this phenotype, three of which are also associated with sensorineural hearing loss (ATP6V1B1, ATP6V0A4, FOXI1) and two which are not (SLC4A1, WDR72). There are also several acquired causes of distal RTA, including CKD, lupus nephritis and medications (e.g. amphotericin B) [20-22]. The clinical presentation of distal RTA again includes irritability and vomiting, poor linear growth secondary to the acidotic state, rickets and sensorineural hearing loss. Biochemically, these patients demonstrate hypokalaemic metabolic acidosis with hypercalciuria and nephrocalcinosis, with no glucosuria, aminoaciduria or LMW proteinuria to suggest proximal tubule involvement (Fig. 2).

Hypokalaemic metabolic alkalosis

Hypokalaemic hypochloraemic metabolic alkalosis is the hallmark of enhanced sodium reabsorption in the CD. Sodium reabsorption here is mediated by aldosterone release in response to intravascular fluid depletion. Aldosterone increases the expression of sodium channels and the sodium-potassium ATPase within the tubular cell, enhancing sodium (and therefore water) reabsorption at the expense of potassium. Potassium depletion is detected and potassium is exchanged for hydrogen (via the hydrogen-potassium ATPase) in the intercalated cells, resulting in an alkalotic state [3,11,23]. Important differentials for this state include pyloric stenosis, congenital chloride diarrhoea, cystic fibrosis and chronic laxative or diuretic use [23].

Hypokalaemic metabolic alkalosis with hypotension/normotension

The two most common kidney causes are Bartter syndrome and Gitelman syndrome. These are salt-wasting disorders secondary to variants in genes encoding the sodium co-transporters within the TAL and DCT, respectively. Other differentials include EAST syndrome, HNF1B-associated tubulopathy and the more recently described variants in CLDN10, RRAGD and mitochondrial DNA [24-27]. Bartter syndrome (BS) is further classified into types I-V based on the underlying genetic diagnosis (Table 3, Fig. 3). All variants affect sodium transport within the TAL, resulting in salt wasting with subsequent compensatory hyperaldosteronism (Fig. 4). Clinically, these patients can present antenatally with polyhydramnios (due to polyuria) or in infancy/early childhood with polyuria, polydipsia, failure to thrive and complications of chronic hypokalaemia, including rhabdomyolysis and cardiac arrhythmias. Interestingly, type V BS presents antenatally and is transient, resolving within the first 3 months of life; the remaining types persist into adult life and require prompt diagnosis in the neonatal period or early childhood due to the high risk of mortality [25]. Gitelman syndrome (GS) is more common than BS, occurring in 1:25,000 births [2]. It results from salt and chloride wasting with secondary hyperaldosteronism due to variants in the gene encoding the NCC in the DCT (SLC12A3). Children with this condition generally present later in life (early childhood or adolescence) with polyuria, polydipsia and hypotension.
Symptomatic hypomagnesaemia with muscle cramps, tetany and chondrocalcinosis is an important differentiating feature of GS [2,28,29]. The mechanism of hypomagnesaemia in GS remains unclear; one theory is decreased expression of the magnesium channel TRPM6 in the DCT (Fig. 5). Hypomagnesaemia can also occur in BS (particularly type III) due to impaired paracellular magnesium reabsorption, but it is much milder than in GS [1,28,30,31]. Another key differentiating feature of BS (versus GS) is hypercalciuria with nephrocalcinosis. The TAL is responsible for 20% of tubular calcium reabsorption. This occurs via paracellular mechanisms that rely on the transtubular electrochemical force generated by the NKCC2 co-transporter and the ROMK channel (implicated in types I and II BS, respectively) [30]. In patients with types I and II BS, where this electrochemical gradient fails, there is a reduction in tubular calcium reabsorption with subsequent hypercalciuria and nephrocalcinosis (see Fig. 3). Hypercalciuria also occurs in types III, IV and V BS, but is less common and milder. In GS, sodium reabsorption in the proximal tubule is enhanced to compensate for the downstream salt wasting; this results in increased paracellular reabsorption of calcium and subsequent hypocalciuria. EAST syndrome is an AR condition secondary to variants in KCNJ10, which encodes an inwardly rectifying potassium channel expressed in the kidney, inner ear and glial cells. It results in a tubulopathy that mimics GS with predominant extra-kidney manifestations, including sensorineural deafness, ataxia, seizures and intellectual deficit [1,3]. HELIX syndrome is a rare condition caused by variants in CLDN10, which encodes proteins important for tight junction formation in the TAL and exocrine glands [24]. The net result is reduced paracellular sodium reabsorption and a salt-losing tubulopathy with differentiating extra-kidney features (Fig. 4) [25]. More recent discoveries include variants in RRAGD, encoding a Rag guanosine triphosphatase, which lead to a hypokalaemic, salt-losing nephropathy associated with hypomagnesaemia and dilated cardiomyopathy [26]. A GS-like syndrome can also result from variants in mitochondrial DNA (MT-TF, MT-TI) that reduce NCC activity [27]. When considering hypokalaemic metabolic alkalosis with normo- or hypotension, the main differentiating features between these tubular conditions are the age of onset, hypomagnesaemia and the presence of hypercalciuria. Extra-kidney manifestations such as neurological involvement and exocrine gland dysfunction will guide the diagnosis of rarer conditions (Table 3, Fig. 3).

Hypokalaemic metabolic alkalosis with hypertension

Hypertension in this setting reflects a state of apparent or true mineralocorticoid excess. In these conditions, sodium and water are retained in lieu of potassium, resulting in a hypokalaemic metabolic alkalosis. Hypercalciuria, with or without nephrocalcinosis, is present in most cases due to a compensatory reduction in proximal salt reabsorption [3]. Primary hyperaldosteronism can be genetic or occur in the setting of adrenal hyperplasia or adenoma/carcinoma [23]. There are four forms of familial hyperaldosteronism (types 1-4) [32]. By the very mechanism of these conditions, serum aldosterone is elevated, renin suppressed, and hypertension significant. Liddle syndrome is an AD condition resulting from a gain-of-function variant in SCNN1A/B, which encode the alpha/beta subunits of the ENaC present in the CD [33].
Constitutive activation of this channel results in sodium and water reabsorption at the expense of potassium. This occurs independently of aldosterone, and therefore both aldosterone and renin are suppressed [32]. Apparent mineralocorticoid excess syndrome mimics primary hyperaldosteronism [23]. Causes include genetic variants leading to constitutive activity of mineralocorticoid receptors, drug toxicity and excessive licorice intake. One genetic form results from variation in HSD11B2, which encodes the 11β-hydroxysteroid dehydrogenase involved in preventing cortisol from binding to the mineralocorticoid receptor [1,32]. These children present with polyuria, polydipsia, failure to thrive and a hypokalaemic metabolic alkalosis with hypertension. This is again independent of aldosterone, and therefore renin and aldosterone are suppressed [32,33]. Another form is Geller syndrome, due to a specific variant in NR3C2 which conveys agonism rather than antagonism of the mineralocorticoid receptor by progesterone and other steroid hormones, thus resulting in early-onset hypertension that is aggravated in pregnancy [34,35]. There are other monogenic forms of hyperaldosteronism due to excess adrenal production of mineralocorticoid, related to variation in CYP11B1 (glucocorticoid-remediable aldosteronism), CLCN2, KCNJ5 and CACNA1H, that phenocopy these intra-kidney forms of apparent mineralocorticoid excess, though they manifest with extra-kidney features of hyperaldosteronism [36-39].

Hyperkalaemic metabolic acidosis

In the CD, sodium is reabsorbed in exchange for potassium and hydrogen ions. Impaired sodium reabsorption results in reduced excretion of both hydrogen and potassium and subsequent hyperkalaemic metabolic acidosis. This state reflects aldosterone deficiency or resistance and is referred to as type 4 RTA [20,22]. Type 4 RTA has a myriad of causes, including intrinsic kidney disease (CKD, obstructive uropathy), adrenal insufficiency (congenital adrenal hyperplasia), autoimmune disorders (lupus nephritis), medications (amiloride, spironolactone, calcineurin inhibitors) and genetic forms referred to as pseudohypoaldosteronism [20]. In paediatrics, type 4 RTA is most commonly observed secondary to urosepsis, resulting in a reversible pseudohypoaldosteronism. Pseudohypoaldosteronism type 1 (PHA1) is due to mineralocorticoid resistance. Children typically present in infancy with failure to thrive, severe hypovolaemia, hyperkalaemia and metabolic acidosis. Autosomal recessive PHA1 results from variants in the genes that encode the subunits of the ENaC channel present in the CD. Loss of function of this channel results in severe salt wasting. ENaC is also expressed in skin and lungs, and its loss can therefore lead to a cystic fibrosis-like phenotype. Autosomal dominant PHA1 results from variants in the NR3C2 gene, which encodes the mineralocorticoid receptor. Those affected by PHA1 (NR3C2) typically present with a milder phenotype, with no extra-kidney manifestations and resolution after early childhood [40]. These are the only kidney salt-wasting conditions that present with hyponatraemia [1-3]. Pseudohypoaldosteronism type 2 (PHA2, Gordon syndrome) is caused by stimulation, or prevention of degradation, of the NCC co-transporter in the DCT, the same channel implicated in GS. The result is unopposed sodium reabsorption with subsequent volume expansion and hypertension.
Hyperkalaemic metabolic acidosis results from suppression of sodium reabsorption in the CD, leading to reduced potassium and hydrogen secretion. Variants in WNK4, WNK1, KLHL3 and CUL3 are implicated in PHA2 [1,3,20].

Hyper- and hyponatraemia

Sodium and water homeostasis are inextricably linked. Sodium anomalies more frequently reflect volume status, and less commonly a depletion of sodium stores or salt toxicity [3]. There are reviews dedicated entirely to this topic; we focus on disorders of impaired tubular water handling, namely nephrogenic diabetes insipidus (NDI) and the syndrome of inappropriate antidiuretic hormone (SIADH). NDI results from a failure of the CD to reabsorb water in response to ADH [1,41]. This leads to the production of dilute urine irrespective of fluid intake. In the setting of restricted access to free water (due to age), this results in volume depletion and hypernatraemia. Paired urine and serum sodium and osmolality in this setting reveal a hyperosmolar serum with inappropriately dilute urine (serum osmolality > urine osmolality) [42]. NDI most commonly results from loss-of-function variation in AVPR2 (X-linked), with the remaining cases related to AQP2 (AD or AR); it requires distinction from central/neurohypophyseal DI due to AVP variants (AD), which is responsive to exogenous ADH (DDAVP) [43-47]. Conversely, SIADH is caused by inappropriate reabsorption of water in the CD in response to ADH; volume expansion and dilutional hyponatraemia follow [1,48]. SIADH is most commonly acquired in the setting of central nervous system or respiratory pathology, an inflammatory state, or post-operatively [48]. Nephrogenic syndrome of inappropriate antidiuresis (NSIAD) is a rare genetic condition that mimics SIADH [1]. It is secondary to gain-of-function variants in AVPR2 (X-linked), leading to inappropriate water reabsorption in the absence of ADH [49,50]. In contrast to NDI, SIADH and NSIAD present with hyponatraemia and hypo-osmolality with inappropriately concentrated urine (serum osmolality < urine osmolality) [48]. As the AVPR2 gene is on the X chromosome, males are affected by NSIAD and by the majority of cases of NDI. Family history is important, as female carriers are often partially affected and may report polydipsia and/or have a history of borderline hyponatraemia. Given that sodium anomalies most commonly reflect volume status, a rigorous assessment of fluid status should always be conducted; paired urine and serum samples assist in determining the kidney handling of salt and water.

Hypomagnesaemia

Magnesium is an abundant cation imperative for neuromuscular stability. Magnesium regulation is mediated by intestinal absorption and kidney handling and is influenced by hormonal control. In the kidney, reabsorption occurs in the proximal tubule, in the TAL via paracellular mechanisms, and in the DCT via transcellular mechanisms [1,3,51]. Hypomagnesaemia is often classified according to the corresponding urinary calcium [1,3,51]. In the TAL, magnesium and calcium are reabsorbed by paracellular mechanisms (Fig. 4); hypomagnesaemic conditions affecting this portion of the tubule therefore also waste calcium, resulting in hypercalciuria and subsequent nephrocalcinosis. Familial hypomagnesaemia with hypercalciuria and nephrocalcinosis is one such condition, resulting from variations in the CLDN16/19 genes, which encode claudin-16 and claudin-19, respectively [52]. These transmembrane proteins facilitate paracellular magnesium and calcium reabsorption.
The driving force for this process is the electrochemical gradient resulting from transtubular sodium, potassium and chloride transport (see Fig. 4). Disease-causing variations result in loss of function with subsequent urinary loss of salt, magnesium and calcium. In the DCT, hypomagnesaemia can exist in the setting of a salt-wasting syndrome or as an isolated defect affecting magnesium alone. In the former, the proximal tubule compensates for volume loss by reabsorption of salt, which is paired with calcium; this leads to hypocalciuria, as occurs in GS, EAST syndrome and HNF1B-associated tubulopathy. Conditions that affect magnesium reabsorption alone (with normocalciuria) include familial hypomagnesaemia with hypocalcaemia, which results from variants in TRPM6, encoding the ion channel responsible for magnesium reabsorption. Hypomagnesaemia in this setting is often marked, leading to reduced PTH release and subsequent hypocalcaemia.

Management

Briefly, management of these patients requires a multidisciplinary approach (ideally in a centre of expertise) involving nephrologists, general paediatricians and dieticians. Considered transition to adult nephrology models of care is encouraged, in discussion with relevant services. The mainstay of therapy is the replacement of water and electrolytes. This can present a challenge for most patients, particularly infants, who have significant fluid requirements that often compromise their ability to consume adequate calories. Early dietetic input in this setting is imperative to facilitate growth, and supplemental feeds via gastrostomy are often required. Replacement of electrolytes often requires seemingly alarming doses of potassium, sodium, bicarbonate and phosphate. Despite this, in many conditions (e.g. BS), normal serum values may not be achieved and may not be realistic therapeutic targets; the focus is instead on optimising growth and avoiding symptoms. Hypercalciuria and nephrocalcinosis are managed with adequate fluid intake and administration of citrate, which binds urinary calcium and prevents crystallisation [15]. Hypertension management is disease-specific and is summarised by Raina et al. [32]; for example, potassium-sparing diuretics are utilised in Liddle syndrome to block ENaC. Other disease-specific therapies are available. In BS, prostaglandin inhibitors such as indomethacin and celecoxib have been utilised, resulting in an improved growth and electrolyte profile; however, their extended use may contribute to CKD and carries a significant risk of gastric ulceration. Consensus statements on the management of BS, GS, RTA and proximal tubulopathies are readily available [14,31,53,54].

Conclusion

In summary, tubulopathies represent a complex array of conditions with non-specific presenting symptoms. The pathognomonic biochemical picture resulting from the underlying tubular defect is often the most revealing finding, and it is important to identify these conditions promptly to facilitate management. The diagnosis of these conditions involves assessment of serum and urine and genetic investigation, in addition to careful clinical assessment. Management requires a multidisciplinary approach and is generally supportive, with fluid and electrolyte replacement; however, it can also be disease-specific. Finally, the discovery of genetic causes for different tubulopathies has led, and will continue to lead, to expedited diagnoses as well as more targeted and personalised therapy and the identification of at-risk relatives.
Key summary points
• Clinically, tubulopathies can present with varied and nondescript features
• The biochemical presentation of tubulopathies is an important diagnostic tool which can guide further investigation and management
• Genetic testing in tubulopathies is important for diagnosis and the establishment of treatment-specific therapy, and can facilitate ongoing counselling
• Management is multidisciplinary and focuses on the replacement of electrolytes and adequate nutrition to facilitate growth

Multiple choice questions (answers can be found after the reference list)

1. The following is not a cause of a generalised proximal tubulopathy:

3. When comparing proximal and distal RTA, which statement is incorrect?
a) Both present with a hypokalaemic metabolic acidosis
b) Both are associated with nephrocalcinosis
c) Proximal RTA is secondary to an inability to reabsorb bicarbonate
d) Distal RTA is secondary to an inability to secrete protons

4. Which statement is incorrect when reviewing tubulopathies that affect magnesium handling of the kidney?
a) When assessing hypomagnesaemia, urinary calcium is an important tool to guide diagnosis
b) Individuals with variants in CLDN10 present with a salt-wasting tubulopathy with hypomagnesaemia
c) Individuals with variants in CLDN16/19 present with a salt-wasting tubulopathy with hypomagnesaemia
d) Familial hypomagnesaemia with hypocalcaemia results from variants in TRPM6

Author contribution: All co-authors conceived, wrote and approved the manuscript.
Funding: Open Access funding enabled and organized by CAUL and its Member Institutions.
Conflict of interest: The authors declare no competing interests.
Application of zebrafish in the study of the gut microbiome

Abstract: Zebrafish (Danio rerio) have attracted much attention over the past decade as a reliable model for gut microbiome research. Owing to their low cost, strong genetic and developmental conservation, efficient preparation of germ-free (GF) larvae, availability for high-throughput chemical screening, and fitness for intravital imaging in vivo, zebrafish have been extensively used to investigate microbiome-host interactions and to evaluate the toxicity of environmental pollutants. In this review, the advantages and disadvantages of zebrafish for studying the gut microbiome, compared with warm-blooded animal models, are first summarized. Then, the roles of the zebrafish gut microbiome in host development, metabolic pathways, the gut-brain axis, and immune disorders and responses are addressed. Furthermore, applications of zebrafish in the toxicological assessment of aquatic environmental pollutants and in exploring the molecular mechanisms of pathogen infections are reviewed. We highlight the great potential of the zebrafish model for developing probiotics for xenobiotic detoxification, resistance against bacterial infection, and disease prevention and cure. Overall, the zebrafish model promises a brighter future for gut microbiome research.

Further advantages include the optical clarity of embryos and larvae and suitability for high-throughput screening in vivo [1-3]. So far, mammalian host models have played a predominant role in evaluating microbial functions and the influence of exterior substances on host health [4]. Using the vertebrate zebrafish model to study the gut microbiome brings many advantages. First of all, the zebrafish genome shares substantial homology with the human genome [5], and the zebrafish intestine is similar to the mammalian intestine in structure and mode of action [6]. Moreover, owing to its transparency, it is feasible to apply in situ real-time imaging technology to the whole organism [7]. Furthermore, given that in zebrafish the innate immune system arises first and adaptive immunity develops only after 2-3 weeks, it is possible to examine the relationship between the innate immune system and the gut microbiome [8,9]. Last but not least, germ-free (GF) zebrafish provide a robust system for dissecting or manipulating microbial signals, owing to their cost-effectiveness and the convenience of the techniques for constructing sterile zebrafish [10,11]. Accordingly, it is possible to directly determine causality between the gut microbiome and disease-associated alterations in functional and mechanistic studies [12]. Studies of the gut microbiome using the zebrafish model have become a pioneering and vital field of research in recent years. This review focuses on the application of a series of GF zebrafish-derived models that unveil how the gut microbiome affects host development, metabolism, and immunity. Moreover, the roles of the gut microbiome in microbiota homeostasis and in vertebrate microbiome-host interactions relevant to human health are elucidated, providing a theoretical foundation and support for further application in disease treatment. A flowchart of our review is shown in Figure 1.

| COMPARISON BETWEEN ZEBRAFISH AND STANDARD WARM-BLOODED ANIMAL MODELS (MICE AND RATS)

The advantages and limitations of the zebrafish model for host homeostasis and gut microbiome studies, compared with murine models, are comprehensively summarized in Table 1. Given these unique attributes, the vertebrate zebrafish has become an ideal model for studying the gut microbiome.
Though the gut microbiome structure of zebrafish may differ significantly from that of humans, its complexity and diversity can still provide valuable information and reference points in comparative studies of the gut microbiome [13].

Figure 1: Currently available applications and techniques for research on gut microbiome-host interactions with zebrafish models. Most larval organs interact with the microbiota during hatching.

| Digestive system

The zebrafish gut microbiome has been found to aggregate into distinct communities during development, and these communities gradually diverge from the external environment and from each other [21]. The first comparison of gene expression between the digestive tracts of GF and conventional zebrafish was conducted by Rawls et al. in 2004. Two hundred genes were found to be regulated by the gut microbiome, among which the expression of 59 was conserved in the mouse intestine. The expression levels of these microbiota-related genes were mainly correlated with epithelial cell turnover, nutrient uptake, xenobiotic metabolism, and immune response [22]. It has been established that the spatial distribution of the gut microbiome is related to both its host and itself, impacting the overall growth kinetics [23]. Impaired growth and increased mortality have been observed in a bacteria-dysbiosis zebrafish model induced by antibiotics [1]. Furthermore, the proliferation, differentiation, morphology, and related functions of zebrafish intestinal cells are reportedly affected by the lack or variation of the gut microbiome [22,24]. Hill et al. found that, during early development, the growth and division of pancreatic β cells require the participation of the gut microbiome: certain bacteria secrete the β-cell expansion factor A (BefA) protein, which induces the proliferation of β cells [25]. In addition, next-generation sequencing showed that the hypoglycemic effect of BefA was highly correlated with an increase in beneficial bacteria (such as Oscillospira, Lactobacillus, and Bifidobacterium) and a decrease in opportunistic pathogens (Acinetobacter) [26].

| The gut-brain axis

The gut microbiome has been recognized to profoundly affect the neurochemistry and central nervous system of zebrafish. Importantly, microbial colonization is required for the normal development and physiological function of the nervous system in zebrafish. In this regard, it has been found that sterile or antibiotic-treated zebrafish exhibit increased locomotor behavior or hyperactivity, and that colonization with different strains of Vibrio cholerae or Aeromonas veronii can suppress this locomotor hyperactivity; however, treatment with heat-killed bacteria or microbiome-associated molecular patterns could not inhibit this abnormal phenotype in GF larvae [27]. Besides, treatment with a Lactobacillus plantarum strain alleviated anxiety and depressive-like behavior and mitigated the stress response in zebrafish with an intestinal disorder [28]. Manipulating the gut microbiome composition of zebrafish, including potential pathogens (such as Plesiomonas and Vibrio), may also affect the nervous system. In this respect, zebrafish's social and explorative behavior could be significantly altered, and the expression levels of endogenous neuroactive molecules, brain-derived neurotrophic factor, and serotonin were modulated to a certain extent by feeding with probiotic L. rhamnosus [30].
A GF zebrafish study revealed a potential mode of action whereby melatonin could regulate caffeine-induced disorders of neurotransmitter secretion via the gut-microbiome-brain axis [31]. Additionally, Cuomo et al. documented that administration of L. rhamnosus to larvae led to remodeling of the DNA methylation code of the Tph1A and BDNF gene promoters in the gut and brain of zebrafish. Accordingly, alterations in the gut microbiome may influence the host epigenetic landscape, resulting in long-term consequences for specific gene regions [32]. Moreover, the zebrafish model has revealed roles of the gut microbiome in the neuroendocrine response. The intestine, vital for controlling food intake and maintaining energy balance, represents one of the most important endocrine systems in vivo [33]. The gut microbiome is capable of promoting enteroendocrine cells (EECs) to secrete gut hormones (e.g., peptide YY, cholecystokinin, oxyntomodulin, and glucagon-like peptide-1), which act on the host to regulate food intake and energy balance.

| Bone health

The gut microbiome has been established as a primary regulator of zebrafish bone metabolism. The relationship between the microbiota (or probiotics) and bone homeostasis and development has been explored in recent years, and direct evidence of how the gut microbiome communicates with its host to regulate bone mineral density has been obtained [35]. The effects of the microbiome on zebrafish bone metabolism have also been studied: supplementation of L. rhamnosus to the conventional zebrafish microbiome led to faster backbone calcification, which correlated with stimulation of the insulin-like growth factor system [36]. Moreover, L. rhamnosus feeding could regulate genes involved in osteocyte formation and suppress bone formation inhibitors in zebrafish [37]. Importantly, serum amyloid A (SAA) could reduce the inflammatory response and bacterial killing ability while improving the capacity of neutrophils to migrate to wounds; intestinal SAA could also restore neutrophils to normal levels in GF zebrafish [50]. Intestinal microbial metabolites also play a vital role in determining neutrophil levels: Cholan et al. discovered that butyrate produced by the gut microbiome in adult zebrafish could significantly reduce the number of neutrophils recruited after trauma in embryos [51]. In addition, GF zebrafish transplanted with hybrid sturgeon gut microbiota and treated with a para-probiotic and postbiotic supplemented diet showed significant upregulation of TGFβ and of nonspecific immune-associated genes (lysozyme, Defbl-1, C3a), whereas expression of the proinflammatory gene IL-1β significantly decreased [52]. These findings highlight the need to maintain the stability and homeostasis of the intestinal microecology to protect host health and prevent chronic inflammation. The intestinal microbiome is also a central factor in inflammatory bowel disease (IBD), in which the intestinal barrier is dysfunctional or loses its integrity.

| Immune dysfunction

Using zebrafish and mouse models, Kaya et al. substantiated that the expression of gut G-protein-coupled receptor 35 is dependent on the gut microbiome and increases when inflammation is triggered [55]. The gut microbiome can metabolize amino acids and is, conversely, influenced by them [51,70,71].
By experimentally observing the evolution of Aeromonas in gnotobiotic zebrafish, researchers found that Aeromonas could sense host-derived amino acid signals to modulate its motility via a process called chemokinesis, and that these bacteria subsequently enter the intestine [72]. Wang et al. found that the abundance of Hyphomicrobium, Paracoccus, and Plesiomonas was significantly correlated with leucine metabolism in zebrafish after treatment with 300 μg/L sodium p-perfluorous nonenoxybenzene sulfonate [73]. Another study demonstrated that the gut microbiome was significantly altered in adult zebrafish with type 2 diabetes mellitus (T2DM), with downregulation of the metabolic pathways of arginine, proline, and phenylalanine, suggesting that the gut microbiome of T2DM zebrafish may adversely affect host health by inhibiting the metabolism of these amino acids [62]. The gut microbial community of zebrafish supplemented with a gluten-formulated diet displayed activated KEGG pathways related to threonine, serine, and glycine metabolism [74]. An increasing body of evidence suggests that upregulation of these metabolic pathways is associated with oncogenesis [75] (Table 2). Bacterial transplantation or probiotic treatment has been applied to counteract pollutant-induced dysbiosis. Studies showed that perfluorobutane sulfonate (PFBS) exposure caused dysregulation of the gut microbial community, and that maternal transfer of PFBS to offspring increased the risks to aquatic populations [83,84]. However, L. rhamnosus administration inhibited the disorders caused by PFBS and indirectly regulated the metabolic activities of the host: β-oxidation and fatty acid synthesis were increased, and blood cholesterol levels were reduced [85]. Furthermore, probiotic feeding can prevent PFBS-induced intestinal disturbances and ferroptosis [86].

| Pathogenic infections

Many pathogens have been investigated using the zebrafish model in recent years, including Aeromonas, Salmonella, Mycobacterium, Vibrio, etc. (Table 3). As a natural host model, zebrafish offer a powerful system for exploring the molecular mechanisms of these infections.

Conflict of interest: The authors declare that they have no conflicts of interest.
Determinants of top personal income tax rates in 19 OECD countries, 1981-2018

This article aims to map the political economy of top personal income tax rate setting. A much-discussed driving factor of top rate setting is the corporate tax rate: governments may prefer to limit the differential between both rates in order to prevent tax-friendly saving of labour incomes inside corporations. Recent studies have highlighted several other driving factors, including budgetary pressure, partisan politics, and societal fairness norms. I compare these and other potential determinants in the long run (1981-2018) by studying tax reforms of 226 cabinets in 19 advanced Organisation for Economic Co-operation and Development (OECD) countries using regression models. I find little evidence for the effects of economic, political, and institutional factors; instead, the main determinant of the top rate is the corporate tax rate. As corporate tax rates are still declining under competitive pressure, the recently set minimum rate of 15% will not stop tax competition from constraining progressive income taxation.

In the existing literature, the CIT is understood to serve as a backstop to the PIT (Gordon and MacKie-Mason 1995). CIT reductions erode this backstop function, and they may spill over to PIT rates, as governments may prefer to limit the differential between both rates in order to preserve tax system integrity (Ganghof 2006). On the other hand, there should be several domestic drivers of the top PIT rate, such as partisan politics and budgetary pressure. Additionally, increases in top rates in the aftermath of the 2008 financial crisis have been linked to fairness considerations among electorates (Limberg 2019). Moreover, the erosion of the CIT's backstop function, as described above, may have been mitigated by a recent increase in shareholder-level dividend tax rates, facilitated by the successful combat of illegal capital flight (Ahrens et al. 2020). The purpose of this article is to investigate the relative importance of these and other drivers of top PIT rate setting in the long run. To this end, I study tax reforms implemented by 226 cabinets in 19 OECD countries using linear regressions. 1

This article makes several contributions to the existing literature. First, it re-stresses the importance of statutory CIT rates in backstopping top PIT rates. Convincing evidence of this relation follows from Ganghof's (2006) case studies of seven countries' tax reforms between the 1980s and early 2000s. The present study makes a substantial contribution by including a larger number of countries and by extending the period of analysis further into the twenty-first century, covering the recent decline of CIT rates to a record low. It also makes a methodological contribution by quantifying the effect of the CIT in the long run. This overcomes a pitfall of qualitative policy analysis, namely the difficulty in discerning the primary driving factors behind tax reforms, especially when government statements list multiple reasons for a reform's implementation. Additionally, a regression approach allows me to quantify cross-country variation in potentially relevant institutional factors, such as redistribution preferences and labour market corporatism. This paper also contributes to a broader literature on the political economy of tax systems. First, its integrated review of the determinants of top rate setting is of added value, because most of those factors have been tested only as determinants of average labour tax rates, labour/capital tax ratios, or electoral redistribution preferences.
Second, many existing studies fail to control for CIT rate setting. This paper, instead, puts the recent, crisis-related PIT increases in a long-term perspective, arguing that governments will have little room to manoeuvre in their top rate setting when CIT rates keep falling towards the recently set minimum level of 15%.

I proceed as follows. After reporting the trends in OECD countries' top rates during the last decades, I discuss the potential determinants of top rate setting, starting with domestic factors, then moving to transnational policy diffusion, and finally addressing the relation between the CIT and PIT. Next, I describe my dataset and empirical strategy, and I discuss the results, followed by several robustness checks. The final section concludes.

Top rates over time

Until the early 1980s, high top PIT rates were the norm in advanced OECD countries. Figure 1 shows that the average top rate in this study's data panel was 67% in 1981, and it documents a relatively small cross-national variability (measured by the coefficient of variation). Most OECD countries sharply reduced their top rates in the late 1980s or early 1990s. Cross-country differences in the timing of these reforms explain the divergence that is visible during this period. Rates continued to decline during the following decades, but were increased in several countries after the 2008 financial crisis. By 2018, the average top rate stood at 49%. The small cross-national variability, comparable to the level in the early 1980s, suggests that rates have moved from a high-level equilibrium towards a new, lower-level equilibrium (cf. Swank 2016, p. 573).
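For readers unfamiliar with the variability measure used in Figure 1, the following minimal Python sketch shows how a coefficient of variation is computed; the rates below are invented for illustration, not the study's data.

```python
import numpy as np

def coefficient_of_variation(rates):
    """Cross-national variability of top PIT rates in a given year:
    the standard deviation scaled by the mean, so the measure remains
    comparable between the high-rate 1980s and the lower-rate 2010s."""
    rates = np.asarray(rates, dtype=float)
    return rates.std(ddof=1) / rates.mean()

# Hypothetical rates for one year (not the panel's actual values)
print(coefficient_of_variation([70, 65, 72, 60, 68]))
```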
Domestic politics, institutions, and economics

The literature has identified several political, institutional, and economic factors that may influence the top rate. Many of those are related to national policy-making, where the PIT is a relatively politicised tax. In partisan politics, the left generally places more emphasis on income equality than the right, and left-wing voters are more supportive of tax progressivity (Roosma et al. 2016). One would therefore expect left-wing governments to raise the top rate. If the right, instead, prioritises economic efficiency, it might lower the top rate to reduce its work disincentive. There is some supportive evidence that left-wing governments (Cusack and Beramendi 2006) or Christian democratic governments (Swank and Steinmo 2002) tend to increase the average effective labour tax rate.

While government ideology may reflect short-term fluctuations in the electorate's redistributive preferences, countries also vary in their long-term electoral demand for income redistribution, as some cultures are more egalitarian than others. Egalitarian norms among the electorate affect capital tax rates by increasing the political costs of rate reductions (Basinger and Hallerberg 2004; Plümper et al. 2009). It is plausible that those norms affect top labour tax rates in a similar way. Recent studies have identified two main explanatory factors behind the electorate's redistributive preferences (see generally: Berens and Gelepithis 2021). On the one hand, taxpayers are economically self-interested and aim to maximise their lifetime income. Support for tax progressivity, therefore, decreases with income and increases with the risk of income loss (Barnes 2015). Relatedly, support for tax progressivity is higher in welfare states that emphasise insurance against income loss, as opposed to generic transfers to the poor (Berens and Gelepithis 2019).

On the other hand, tax preferences are also shaped by taxpayers' normative ideas about whether people deserve their incomes in the light of their efforts or contributions to society. For instance, Limberg (2019) argues that demand for higher top rates increased in the aftermath of the 2008 financial crisis because the rich were perceived as unfair beneficiaries of the poorly regulated financial markets. Furthermore, taxpayers' willingness to fund the welfare state depends positively on the perceived work effort and reciprocal contributions of welfare beneficiaries and on taxpayers and beneficiaries sharing a cultural background (Van Oorschot 2006; Rueda 2018). Taxpayers tend to stereotype elderly, sick, and disabled people as the most deserving groups, the unemployed as less deserving, and immigrants as the least deserving (Van Oorschot 2006). Thus, population composition and welfare spending may affect voters' tax preferences.

Zooming out from preferences and ideas, it has been argued that political institutions affect their translation into actual tax policy, with majoritarian and consensus democracies producing divergent outcomes. Consensus democracies are characterised by proportional representation, fragmented political landscapes, and coalition governments. They generally allow for better representation of pro-welfare groups than majoritarian systems, which may produce right-wing landslide victories (Döring and Manow 2017) or be dominated by big-tent parties that appeal to the median voter (Cusack and Beramendi 2006). Additionally, Hays (2003) argues that coalition governments lead to higher labour/capital tax ratios, because they often include at least one right-wing party that aims to limit the capital tax burden. In sum, it is plausible that consensus democratic institutions have an upward effect on the top PIT rate.

Cross-country heterogeneity also exists in the institutional structure of economic systems, as measured by the degree of labour market corporatism (Hall and Soskice 2001). The corporatist deal between labour and capital tends to involve job security, egalitarian wage setting, and generous welfare provisions in return for wage moderation and relatively low capital taxes. This implies an expensive welfare state, financed primarily with high labour taxes (Cusack and Beramendi 2006). Both labour and capital have an interest in maintaining their corporatist compromise, which makes tax reductions less likely (Swank 2016). Thus, corporatism should positively affect the top PIT rate.

Finally, it is necessary to account for budgetary and economic factors. Not only a short-term budget deficit, but also a longer-term demographic burden (i.e. the share of citizens dependent on social security transfers, such as the elderly) may limit a government's ability to cut taxes or may induce tax increases (Cusack and Beramendi 2006). Temporary budget surpluses, instead, appear to be used predominantly to cut taxes (Haffert and Mehrtens 2015). What matters here is the top bracket's relevance in the income tax system: the height of its income threshold determines the number of affected taxpayers and hence the budgetary consequences of a reform (Ganghof 2006). Relatedly, top rate setting may be influenced by macroeconomic conditions. When economic growth is low, governments might cut marginal tax rates as a growth-friendly policy.
On the other hand, as mentioned, severe economic crises may increase popular demand for tax progressivity, as they decrease people's perceptions of high incomes as deserved and fair (Limberg 2019).

Transnational policy diffusion

While PIT policy should depend on domestic factors, it may also diffuse across borders. One possible mechanism is tax competition. Tax competition has been studied mainly in the context of CIT reductions, which are driven by the tax-sensitivity of investments and paper profits (Genschel and Schwarz 2011). Although countries compete for corporate capital simultaneously, there is also evidence suggesting that countries react to the first move of a so-called Stackelberg leader, where the USA plays that role (Swank 2016; Altshuler and Goodspeed 2015). In any case, competition is conditioned by several domestic factors. One is country size: smaller nations set lower rates, because they have less domestic tax revenues to lose (Kanbur and Keen 1993). Countries' competitive policies are also mediated by fairness norms and budgetary pressure (Plümper et al. 2009). Furthermore, consensus democracy and corporatism inhibit the implementation of neoliberal (low-rate and broad-base) tax reforms in response to USA corporate tax cuts (Swank 2016).

It is not entirely clear whether PIT competition exists and works analogously. One precondition would be that rich taxpayers' location decisions are sensitive to PIT rates. Most of the existing evidence comes from within-country migration being affected by unequal regional tax rates (e.g. Agrawal and Foremny 2019), but arguably, regional migration should be less burdensome than emigration. Empirical research on international mobility is scant because of the lack of detailed data combining migration patterns with income levels and tax rate schedules (Kleven et al. 2020), an issue that also affects the present study. Akcigit et al. (2016) do find a positive migration response of top inventors to the 1986 top rate reduction in the USA, and Egger and Radulescu (2009) find that country-specific migration between 49 nations is higher when income tax progressivity is lower in the host country than in the home country. Kleven et al. (2020), however, find no negative relation between top marginal tax rates and the share of rich foreigners in the populations of 26 countries.

Even if people's location decisions are indeed tax-sensitive, it does not necessarily follow that governments will compete via PIT rates. For one thing, it is unlikely that revenue and human capital inflows would offset domestic revenue losses, as domestic taxpayers generally outnumber migrants and expatriates. Furthermore, competition through targeted tax policies probably overshadows top-rate competition. Many developed nations have implemented preferential income tax regimes for high-skilled foreigners (Kleven et al. 2020), and those strongly affect those individuals' location decisions (e.g. Kleven et al. 2014; Akcigit et al. 2016). Arguably, the existence of those special regimes actually reduces countries' incentives to compete via general top rate setting because the relatively elastic incomes of foreigners drop out of the general tax base. Second, many nations have fully or partially dualised their income taxes, removing personal capital income from their progressive labour tax rate schedules and taxing it at preferential rates instead. It is plausible that rich people's location decisions are guided more by those capital tax regimes than by the PIT; Kleven et al.
(2020) provide anecdotal evidence. Wealth and gift taxes, as well as non-tax factors, may also play a larger role, but little is known about their effects.

A second important mechanism of transnational diffusion is the copying of tax policies adopted by other governments or promoted by policy experts. In particular, it has been argued that countries copied a neoliberal (low-rate and broad-base) tax reform model in the 1980s and 1990s, after they had become dissatisfied with the multitude of deductions and tax expenditures that made their existing systems unnecessarily complex and prone to tax avoidance (Steinmo 2003). This neoliberal diffusion was fostered by the spread of efficiency ideas among tax policy-makers in the OECD (Swank 1998), but it is also plausible that countries directly responded to the 1986 USA tax reform (Tanzi 1987; Steinmo 2003). In most nations, neoliberal ideas probably affected corporate and personal taxes alike, as both suffered from inefficiencies. Moreover, their rate-setting should be connected, because both share income as a tax base.

The backstop argument

The connection between the CIT and PIT is generally described as a backstop function: the former supports the progressivity of the latter. When the CIT rate is set far below the top PIT rate, this incentivises high-income business owners and independent professionals to earn their income through corporations and reduce their tax burdens (Gordon and MacKie-Mason 1995; Slemrod 2004; Ganghof 2006). To illustrate, suppose that a country has a 20% CIT rate and a 60% PIT rate. It then needs an effective 50% tax rate on distributed corporate profits, in order to tax owner-managers on par with wage earners and sole proprietors: 1 - (1 - 0.2) x (1 - 0.5) = 0.6.

This situation creates two problems. First, shareholder taxes have economic and political costs. Capital gains taxes and dividend taxes may discourage the distribution of profits and hence distort the reinvestment of capital (Auerbach 1991; Zodrow 1991). And inheritance taxes on the bequest of shares can reduce the profitability and longevity of family firms (Tsoutsoura 2015). Unsurprisingly, most countries tax owner-managers of corporations more leniently than wage earners and sole proprietors. In 2016, top marginal rates in the OECD, taking into account both corporate- and individual-level taxation, averaged 42.5% on wages (OECD 2020b; own calculations), 40.4% on dividends, and 35.4% on capital gains (Harding and Marten 2018). Furthermore, many countries treat the transfer of shares in family firms preferentially. If at least one shareholder tax is more lenient than the PIT, incorporation is an attractive tax avoidance strategy.

Second, even aligning top marginal wage and shareholder tax rates does not prevent owner-managers from retaining profits inside their corporations and avoiding the payment of shareholder taxes altogether. When an owner-manager needs their earnings for private consumption, one strategy to minimise taxable dividends is to borrow money from the firm. Alternatively, they can consume business-related goods within the firm (such as mobile phones, computers, meals, cars, or business trips). In systems with a large PIT-CIT differential, the only way to effectively prevent profit retention in corporations is to tax imputed dividends, regardless of actual dividend distributions. Fairness issues aside, such a tax requires complex calculations.
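The worked 20%/60% example above generalises: setting the combined burden on distributed profits equal to the top PIT rate pins down the required shareholder-level rate. A minimal Python sketch of that arithmetic (function and variable names are mine, not the article's):

```python
def required_shareholder_rate(top_pit, cit):
    """Effective tax rate on distributed profits needed to tax owner-managers
    on par with wage earners:
    1 - (1 - cit) * (1 - s) = top_pit  =>  s = (top_pit - cit) / (1 - cit)."""
    return (top_pit - cit) / (1 - cit)

# The example from the text: a 20% CIT rate and a 60% top PIT rate
s = required_shareholder_rate(0.60, 0.20)
print(f"required shareholder-level rate: {s:.0%}")          # 50%
print(f"combined burden check: {1 - (1 - 0.20) * (1 - s):.0%}")  # 60%
```

Note how the required shareholder rate rises as the CIT falls, which is exactly the pressure on shareholder taxation that the two problems discussed above make costly to sustain.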
And it works only for closely held corporations with active owners, because the government cannot observe all corporate profits behind citizens' world-wide passive shareholdings. As a result, shareholders may use avoidance strategies to remain below the legal threshold of active ownership and escape the imputed dividend tax. This was a reason why Norway abolished such a tax in 2006 (Alstadsaeter 2007). In sum, to preserve tax system integrity, the only option is to limit the PIT-CIT differential.

Governments have good reasons to do so: the abovementioned avoidance strategies reduce horizontal equity, because wage earners, independent professionals, and business owners are not equally able to incorporate; and they reduce vertical equity, because income shifting is primarily beneficial for people in the top tax bracket. These equity issues may cause voter resentment. Barring income shifting, voters may also disapprove of a tax rate differential in and of itself because it signals unequal treatment. On the other hand, governments may also deliberately maintain a tax rate differential. Taxes on various forms of capital income (e.g. interest, returns to owner-occupied housing, and corporate profits) have different political or economic costs, but setting unequal rates generally induces substantial tax arbitrage; governments may therefore choose to set a low and uniform capital and CIT rate, whereas equity and budget considerations may call for a higher rate on labour income (Ganghof 2006). In some instances, governments may even prefer income shifting towards the corporate form. When income shifters have high labour supply elasticities, they should be more productive under a lenient CIT regime, and this may increase social welfare (Selin and Simula 2020). And as the remaining labour tax base will then be less elastic, the revenue-maximising top PIT rate should increase (Kotakorpi and Matikka 2017). A government's eventual tax rate setting will depend on its weighing of these considerations, such that every country has its own optimal tax rate differential in a closed-economy setting.

However, a real-life open economy faces corporate tax competition. In this study's data panel, the average CIT rate has declined from 48.1% in 1981 to 25.6% in 2018, meaning that the recent international agreement on a minimum rate of 15% will not immediately halt the continuous downward trend. Ganghof (2006) describes the resulting policy dilemma faced by governments as they cut their CIT rates: keeping the top PIT rate constant implies that the rate differential increases, which hurts tax system integrity; and maintaining the existing differential implies that the top rate must be reduced, which hurts tax system progressivity. Ganghof presents case-study evidence of seven countries' responses to this dilemma between the mid-1980s and early 2000s. He shows that many top rate reductions were indeed related to CIT rate setting. To ensure optimal tax system integrity, New Zealand even aligned both rates at 33% (see also Christensen 2012). Other countries implemented less radical reforms, generally because of budgetary and political constraints. As their CIT rates declined due to international competitive pressure, their tax rate differentials increased. Figure 2 documents the average tax rate differential in this study's data panel through time. Starting at 19.3 percentage points in 1981, it was sharply reduced to 9.6 percentage points in 1989 as countries cut their top rates.
Since then, it has risen steadily to 23.5 percentage points in 2018, with cross-country variation sharply declining. The latter may indicate that countries' PIT rates similarly experience the downward pull of the CIT on the one hand, and upward budgetary and political pressure on the other. Unsurprisingly, recent empirical evidence shows that owner-managers both exploit differences between taxes on dividends, capital gains and wages, and retain profits inside corporations (Alstadsaeter and Jacob 2012, p. 58). De Mooij and Nicodème (2008) estimate that these strategies account for 12 to 21% of total CIT revenues in a panel of 17 European countries. Profit retention appears to be the main strategy (Le Maire and Schjerning 2013; Bettendorf et al. 2017; Miller et al. 2021). It follows that the differential between PIT and CIT rates, rather than shareholder tax rates, should be at the heart of the issue, because shareholder taxes become less relevant when corporate profits are retained (especially when owner-managers consume goods within the firm). Illustratively, while the alignment of top marginal dividend and labour tax rates in Norway should prevent income shifting to the corporate form, Alstadsaeter et al. (2014) find evidence of substantial tax-free saving and consumption in holding corporations, precisely because the CIT rate is relatively low.

Anecdotal evidence illustrates that PIT avoidance remains a contributing factor in tax policy-making. For instance, the Norwegian government addressed it in its 2015 tax reform proposal, arguing that employers' social security contributions made earning wages less attractive than earning shareholder income. The government was reluctant to increase the marginal tax rate on dividends, fearing tax avoidance and even emigration (Solberg government 2015, p. 13). Since 2016, it has implemented some moderate cuts in the PIT rate of around 0.2 percentage points each year. In Sweden, where the top PIT rate was already reduced by 5 percentage points in 2020, an advisory committee attached to the Ministry of Finance proposed another reduction of the same magnitude, which would tackle income shifting towards corporations (Eklund 2020, p. 17). The Dutch government implemented a stepwise top-rate reduction of 2.45 percentage points between 2018 and 2021. The importance of aligning marginal rates on wage and shareholder incomes had been stressed by an independent advisory committee, and it was explicitly mentioned in the reform's explanatory memorandum (Rutte III government 2018, para. 5.9). This rate alignment had been a main goal of the current system (Kok II government 1999, para. 5), and tax scholars stressed that it had been undermined by several CIT reductions (e.g. Caminada and Stevens 2017). However, the government publicly sold the reform with arguments relating to "making work pay" (e.g. Rutte III government 2018, para. 5.1). The latter example illustrates the difficulty in distinguishing a reform's primary goals from its coincidental advantages as listed in explanatory memorandums and government statements, and thus in obtaining convincing qualitative evidence about the drivers of PIT rate setting using case studies. This difficulty highlights the complementary added value of this study's regression approach: it can quantify the effects of these drivers relatively easily.

Hypotheses

To this end, I translate the discussed theory and evidence on the backstop argument into six hypotheses.
As highlighted by Ganghof (2006), governments that cut the CIT rate generally choose a policy option on a continuum between two extremes: keeping the PIT rate constant, or keeping the tax rate differential constant and hence reducing both rates in parallel. When several subsequent governments choose the former option, the differential increases. A large differential should eventually incentivise the government to cut the PIT rate in order to improve tax system integrity. Thus, my first two hypotheses are as follows.

Hypothesis 1: The size of the tax rate differential encountered by a government at its investiture is negatively related to the change in the top PIT rate during this government's incumbency.

Hypothesis 2: Government-specific changes in CIT and PIT rates are positively related.

These hypotheses are premised on the exogeneity of CIT rate setting. In reality, however, the CIT-PIT relation may be bidirectional, or it may be influenced by third factors, causing endogeneity problems. For instance, governments may find it politically costly to reduce one tax without cutting the other, or they may be favourably disposed towards cutting taxes in general (recall the spread of neoliberal policy ideas). Furthermore, governments could reduce the tax rate differential by raising the CIT rate instead of cutting the PIT rate. In that case, large differentials could have a dampening effect on CIT competition. Finally, small businesses could theoretically avoid corporate taxes by switching to the personal tax base, which would stimulate governments to cut the CIT rate. I apply a twofold strategy to mitigate these endogeneity concerns. First, I exploit the time dimension of the data to test the unidirectionality of the CIT-PIT relation.

Hypothesis 3: The size of the tax rate differential encountered by a government at its investiture has no effect on this government's subsequent CIT rate setting.

Evidence in favour of both hypotheses 1 and 3 would suggest that past CIT rate setting, through the size of the tax rate differential, Granger-causes PIT rate setting, but not vice versa. 2 Such evidence would leave intact the possibility that a third factor influences the relation between government-specific changes in both rates. However, the latter scenario is less likely when that relation is absent in a large sub-group of countries. In particular, I expect the relation to hold only in countries that aim to tax corporate and personal income comprehensively at an (approximately) equal rate. Those countries must adjust both rates in parallel to maintain their (approximate) tax rate alignment. In that case, their top rate reductions should not be caused by the rate differential but by the change in the CIT rate. In countries with a large tax rate differential, instead, there should be less urgency to adjust both rates simultaneously; any cuts in the PIT rate are likely to result from the differential having increased over time (as formulated in hypothesis 1). I translate these assumptions into the following hypotheses.

Hypothesis 4: A smaller tax rate differential at the beginning of a government's term causes a stronger positive relation between this government's changes in CIT and PIT rates.

Hypothesis 5: In countries with large tax rate differentials, government-specific changes in CIT and PIT rates are unrelated.

Any evidence in favour of the latter hypothesis further mitigates the abovementioned endogeneity concerns.
Finally, I expect that changes in the top rate are not driven by top marginal rates on shareholder income. Admittedly, in the examples above, Norway and the Netherlands seem to be interested primarily in aligning their top labour and dividend tax rates, rather than their labour and corporate rates. But as explained, this is a surrogate solution for tax system integrity, as profit retention will remain a problem. The real reason behind PIT avoidance, and as a result, behind top rate reductions, should be the low CIT rate. The lack of detailed, internationally comparable data on capital gains taxes confines the focus of my hypothesis to dividend taxation.

Hypothesis 6: The differential between the top rate on dividend income (including corporate taxes) and the top PIT rate, encountered by a government at its investiture, does not affect this government's top PIT rate setting.

Method and data

Dependent variable and model structure 3

I estimate the effects of the theoretical determinants of top PIT rate setting in 19 OECD countries between 1981 and 2018. Following Schmitt (2016), Garritzmann and Seng (2016) and Ahrens et al. (2020), I estimate regression models with cabinet periodisation. These should conform better to political reality than conventional country-year models, because governments generally draft one tax plan, instead of deciding on new reforms each year (Ahrens et al. 2020). Country-year data incorrectly record reforms that involve stepwise tax cuts over several years (Jensen and Lindstädt 2012). They do capture reforms with a one-year time frame, but in these cases, the other cabinet years will add redundant observations with no variance in several variables. This may bias the coefficients (Garritzmann and Seng 2016). Moreover, governments tend to schedule reforms at dissimilar time points during their terms; they may delay reform implementations (Jensen and Lindstädt 2012); and their reaction speed to changed economic circumstances may differ. Country-year models need a complex and error-prone lag structure to capture such differences for all countries and variables, whereas the timing of reforms is less important in cabinet-based models (Schmitt 2016).

Still, cabinet-based models would fail to capture long-term reforms over multiple governments. This would be especially problematic when the speed of PIT and CIT reforms differs. For instance, the Granger causality between CIT cuts and subsequent PIT cuts would be false when PIT cuts were decided upon much earlier. 4 Therefore, I briefly examine whether tax cut announcements and implementations substantially deviate on an aggregate level. 5 As reported online, almost all PIT and CIT rate changes occur in the first two years following a tax cut announcement, and their average reforms are completed in 1.71 and 1.57 years, respectively. This finding reduces concerns about biases resulting from implementation lags, especially because I exclude cabinets that have served for less than one year; those are often caretaker governments which are unlikely to implement major reforms (Schmitt 2016).

The dependent variable measures the change in the top PIT rate during a government's incumbency. I use the statutory rate faced by individuals in the highest income tax bracket, at the combined central and sub-central government level, including uncapped surcharges, social premiums, and payroll taxes. 6

Footnote 4: I thank an anonymous reviewer for raising this point.
Footnote 5: Announcement dates are retrieved from the Tax Policy Reform Database (Amaglobeli et al., 2018) and from additional sources which I report online. I measure a reform's implementation by looking at the actual yearly tax rate changes after its announcement. As tax cuts and tax increases offset each other, I look at tax cuts only.
Data are retrieved from the OECD's Tax Database (OECD 2020b); a special OECD project collecting pre-2000 tax data (Johansson et al. 2008); Piketty et al. (2014); and government statistics agencies or tax administrations.

The abovementioned sources provide conventional country-year panel data. To obtain cabinet-periodised data, I combine them with cabinet composition data (Armingeon et al. 2020b). In parliamentary systems, a cabinet is defined as a government with constant parliamentary support and portfolio division among coalition partners. For example, Tony Blair's ten-year Labour government in the UK consisted of three cabinets, because two elections changed the government's initial parliamentary majority. I follow the same definition for semi-presidential systems in which the government depends on the approval of the legislature. In full presidential systems, I classify presidencies as single-party cabinets. I assign country-year data to cabinets, based on their years of incumbency: a cabinet taking office in 2014 and leaving office in 2016 would have the country-years 2014, 2015, and 2016 assigned to it, and the dependent variable would measure the change in the top rate between 2014 and 2016 (see Ahrens et al. 2020, p. 573). As this merging process retains the information on countries and investiture years, the resulting panel data allow for the use of country and year dummies when appropriate.
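The cabinet periodisation just described can be sketched in a few lines of pandas. All column names and values below are invented for illustration; the actual datasets cited in the text use their own variable names.

```python
import pandas as pd

# Country-year rates and cabinet terms (hypothetical values)
rates = pd.DataFrame({
    "country": ["NL"] * 5,
    "year": [2014, 2015, 2016, 2017, 2018],
    "top_pit": [52.0, 52.0, 52.0, 51.95, 51.75],
})
cabinets = pd.DataFrame({
    "country": ["NL", "NL"],
    "cabinet_id": ["cab_A", "cab_B"],
    "start": [2014, 2017],
    "end": [2016, 2018],
})

rows = []
for _, cab in cabinets.iterrows():
    served = rates[(rates.country == cab.country)
                   & rates.year.between(cab.start, cab.end)]
    rows.append({
        "country": cab.country,
        "cabinet_id": cab.cabinet_id,
        "invest_year": cab.start,  # retained, so year dummies remain possible
        # dependent variable: change in the top rate over the cabinet's term
        "d_top_pit": served.top_pit.iloc[-1] - served.top_pit.iloc[0],
        # start-of-term value, used for the differential and convergence controls
        "top_pit_start": served.top_pit.iloc[0],
    })
panel = pd.DataFrame(rows)
print(panel)
```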
Tax system variables

Two main explanatory variables measure the effect of the CIT rate. One denotes the positive differential between the PIT and CIT rate in a cabinet's first year of incumbency (to test hypothesis 1). 7 The other denotes the change in the CIT rate during this cabinet's term (hypothesis 2). To test whether this CIT rate change has a stronger effect on the dependent variable in countries with a smaller tax rate differential (hypotheses 4 and 5), I will add an interaction term of both explanatory variables in a separate estimation. In another set of models, I estimate these relations in the opposite direction to test hypothesis 3: I regress the change in the CIT rate on the PIT-CIT rate differential and on the change in the PIT rate. CIT rate data are retrieved from the OECD (2020b), Johansson et al. (2008) and Devereux et al. (2002), and reflect the combined central and sub-central rate.

To measure the effect of dividend tax rate setting (hypothesis 6), I include the positive differential between the top PIT rate and the top statutory rate on dividend income in a cabinet's first year of incumbency. This measure includes taxes at both the corporate and the shareholder level and takes into account relief systems that prevent double taxation. Furthermore, I add the change in the shareholder-level dividend tax rate during a cabinet's incumbency. Data are retrieved from the OECD (2020b) and Johansson et al. (2008). Note that these are two separate time series: the 1981-1999 data from Johansson et al. have not been verified by the OECD, which provides data for the years 2000-2018. Although the time series align very well, the results should thus be treated with some caution. I will test the effects of the dividend tax rate in separate regressions. 8

I include two control variables that describe other relevant aspects of the tax system. First, to control for the top bracket's relevance, I include the income share of taxpayers that it affects in the government's year of investiture. This income share depends not only on the top bracket's income threshold relative to the average income, but also on the shape of the income distribution. Following Akgun et al. (2017), I use a virtual income distribution that is similar in all countries; this proxy avoids endogeneity, as real-life income distributions are affected by the top rate through labour supply decisions. The income share of taxpayers who are affected by the top rate equals:

\text{Income share} = \frac{\alpha}{\alpha - 1} \, x_0^{\alpha} \left( \frac{\text{top bracket income threshold}}{\text{GDP per capita}} \right)^{1-\alpha}

This equation follows a Pareto law; the Pareto coefficient \alpha equals 2 (following Ruiz and Woloszko 2016, as cited in Akgun et al. 2017), and the term x_0 is a scaling parameter, set to 0.1. The Pareto law describes the right end of an income distribution: a downward, convex curve with an asymptotic tail. Hence, for the number of affected taxpayers, it makes a much larger difference whether the top bracket starts at 2 versus 3 times the average income than whether it starts at 12 versus 13 times the average income. This makes the income share equation a more realistic control variable than the tax bracket threshold in and of itself, which would imply a linear effect. And because of the Pareto parameter that describes the income distribution, the equation is also more realistic than the ratio of GDP per capita over the top bracket's income threshold. Threshold data are retrieved from the same sources as PIT data. GDP per capita in current local currency units is retrieved from the World Bank (2020).

Footnote 7: This denotes the extent to which the CIT rate is below the PIT rate. When it is above the PIT rate, I code the differential as 0. The reason is that the CIT is unlikely to 'pull up' PIT rates, given the scant evidence of CIT avoidance through the labour tax base. Re-estimations of the models presented below using a variable that captures both positive and negative differentials do not produce substantially different results.
Footnote 8: The same caution applies for personal and corporate tax rates retrieved from Johansson et al. (2008), but those data could largely be verified with other sources, unlike the dividend tax rates.
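The equation above is my reconstruction of a formula that was garbled in extraction; assuming it is right, the control variable and its convex-tail property can be checked with a short Python sketch (the function name and threshold values are mine):

```python
def top_bracket_income_share(threshold_ratio, alpha=2.0, x0=0.1):
    """Virtual income share of taxpayers affected by the top rate, following
    the Pareto-law formula above; threshold_ratio is the top bracket's income
    threshold divided by GDP per capita."""
    return alpha / (alpha - 1.0) * x0 ** alpha * threshold_ratio ** (1.0 - alpha)

# Convex tail: moving the threshold from 2x to 3x average income changes the
# share far more than moving it from 12x to 13x, as the text notes.
print(top_bracket_income_share(2) - top_bracket_income_share(3))    # ~0.0033
print(top_bracket_income_share(12) - top_bracket_income_share(13))  # ~0.00013
```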
Second, it is necessary to control for convergence of the dependent variable, as high taxes are more likely to be reduced than low taxes. I include the PIT rate in a cabinet's first year of incumbency, transformed into a standardised index based on yearly averages. 9 Given the evolution of tax policy ideas (Steinmo 2003) and the possibility of PIT competition, its relative height should be more important than its absolute height in a context of convergence. Furthermore, including its absolute height would cause a multicollinearity problem with the PIT-CIT rate differential. When the CIT rate is low, a high PIT rate necessarily implies a large rate differential, such that one variable may absorb the effect of the other. 10 It turns out that the standardised index of the PIT rate is less correlated with the tax rate differential. 11 While the top rate's standardised start value controls for countries' simultaneous convergence towards the sample mean, potentially as a result of competition or policy emulation, I will also test the role of the USA as a Stackelberg leader in separate models (Swank 2016). To this end, I include the positive differential between a country's top rate and the US top rate in a government's first year, and the preceding yearly change in the US top rate.

Socio-economic variables

Given the budgetary consequences of top rate adjustments, I include the government's budget balance (Armingeon et al. 2020a). To control for longer-term demographic pressure on the budget, I follow Cusack and Beramendi (2006) by adding a measure of a country's demographic burden, i.e. the added shares of elderly people and unemployed people (Armingeon et al. 2020a). I relegate the testing of alternative measures of population composition and welfare spending to the online appendix, in order to prevent overspecification (see below). To capture short-term and long-term economic performance, respectively, I include the GDP growth rate and the natural log of GDP per capita (World Bank 2020). As competitive tax policy may depend on country size (Kanbur and Keen 1993), I include the natural log of the total population (Armingeon et al. 2020a).

Political and institutional variables

A final set of control variables accounts for political and institutional factors. Assuming that the aggregate amount of income redistribution reflects the structural redistributive preferences of an electorate, I follow Plümper et al. (2009) by including the absolute difference between the Gini coefficients before and after taxes and transfers (Solt 2020). To account for the effects of government ideology, I include the share of left-wing cabinet positions (Armingeon et al. 2020b). I also add an index of consensus democracy by Armingeon et al. (2020a). 12 I measure corporatism by constructing an index that closely resembles the one used by Van Vliet et al. (2012). The index adds standardised scores of four variables, denoting: country-wide wage coordination; routine involvement of unions and employers in social and economic policy-making; an index denoting the centralised power of the main union confederation; and the union density rate. These data are retrieved from Visser (2019).

Cabinet duration, time trends, and endogeneity

Assuming that a longer period of incumbency increases a government's ability to implement tax reforms, I include the natural log of its days in power (Ahrens et al. 2020). I control for time trends by adding period dummies that each cover three years. 13 To prevent endogeneity and reverse causation, the control variables refer to values in a cabinet's first year.

Footnote 9: That is, for each data year in the initial country-year panel, I convert the 19 countries' PIT rates into z-scores. To denote the standardised PIT rate in a cabinet's first year, I use a country's z-score of that year.
Footnote 10: The correlation coefficient of the tax rate differential and the top PIT rate is 0.49. But for cabinets with a CIT rate of 35% or lower in their first year, it is 0.81.
Footnote 11: In regression model 1 outlined below, the variance inflation factor (VIF) of the standardised start value of the top rate is 3.99. Replacing it with the non-standardised start value, the VIF becomes 7.13.
Footnote 12: This index is a time-variant proxy for the first (executives-parties) dimension of Lijphart's consensus democracy index. It is variable lfirstp in the Comparative Political Data Set (Armingeon et al., 2020a).
Footnote 13: This time frame ensures that enough cabinets are included under each dummy, and it approximates the average cabinet duration. The last dummy covers four years: 2014-2017.
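The per-year standardisation described in footnote 9 is a simple groupwise z-score. A minimal pandas sketch, with invented column names and values:

```python
import pandas as pd

def yearly_zscores(df, value_col="top_pit"):
    """Within each data year, convert countries' top PIT rates into z-scores,
    so a cabinet's start value measures its rate relative to the yearly mean."""
    def z(s):
        return (s - s.mean()) / s.std(ddof=1)
    return df.groupby("year")[value_col].transform(z)

panel = pd.DataFrame({
    "year":    [1981, 1981, 1981, 2018, 2018, 2018],
    "country": ["A", "B", "C", "A", "B", "C"],
    "top_pit": [70.0, 65.0, 60.0, 50.0, 49.0, 45.0],
})
panel["top_pit_std"] = yearly_zscores(panel)
print(panel)
```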
Results

As a prelude to the regression results, Table 1 reports the directions in which the panel's 226 governments have changed their PIT and CIT rates. It shows that tax reform is rather common, with over 77% of governments changing one or both rates. The distribution of reform patterns tentatively points in the direction of the hypotheses. The most common pattern is a simultaneous reduction of both rates (by 46 governments), in line with hypothesis 2. Additionally, 32 + 30 = 62 governments cut one of the two rates only, as follows from hypothesis 1. Those three patterns comprise 48% of the sample. The cells marked with an asterisk (*) denote patterns that negate the hypotheses, and comprise 25% of the sample. 14

Table 2 reports the regression results of changes in top PIT rates. Model 1 shows that the CIT is a strong determinant of top rate setting, as predicted in hypotheses 1 and 2. First, an existing differential between the two rates is related to a remarkably large decline of the PIT rate: the average differential of 16.6 percentage points would result in a 2.4 percentage-point reduction. Second, every percentage-point change in the CIT rate is related to a simultaneous 0.34 percentage-point change in the PIT rate in the same direction. 15

(Table 2 notes: Period dummies included. Eicker-Huber-White standard errors in parentheses. *** p < 0.01, ** p < 0.05, * p < 0.1.)

Model 2 adds an interaction term between these variables to test hypotheses 4 and 5. Its marginal effects are plotted in Figure 3 and conform to the expectations. In countries with a tax rate alignment (at the left end of the x-axis), a 1 percentage-point decline in the CIT rate is related to a substantial PIT rate reduction of 0.56 percentage points. In countries with a larger tax rate differential, the relation is weaker and pales into insignificance. Thus, as predicted, it is less urgent in systems with a large tax rate differential to adjust both rates simultaneously. Of course, this does not negate the backstop argument: when one government reduces only the CIT rate, a subsequent government may cut the PIT rate because it finds the differential too large. Notably, the effect of the differential remains significant in model 2.

(Figure 3. Effect of Δ CIT rate on Δ PIT rate, conditional on the tax rate differential.)

In both models, the significant and negative coefficient of the top rate's start value shows that higher rates are reduced more than lower rates. Due to a lack of migration data, it is difficult to test whether this convergence is driven by competition or policy emulation, though the discussed literature suggests the latter. The insignificant and wrongly-signed coefficient of the population variable also negates the positive relation between tax rates and country size that would be indicative of tax competition (Kanbur and Keen 1993). To additionally test US Stackelberg leadership, model 3 adds the domestic-US PIT rate differential and the change in the US PIT rate. 16 The coefficients are insignificant, but this may result from multicollinearity, as three variables in this model depend on the height of the top rate. 17

Footnote 14: Their rate increases are often small: when ignoring changes below 0.5 percentage points, which may be caused by fluctuations in sub-central rates, these patterns only comprise 18% of the sample.
Footnote 15: Multicollinearity does not appear to be a big problem: corporatism has the largest VIF, of 5.53. The results do not change substantially when excluding corporatism.
Footnote 16: Again, I use positive differentials and code negative differentials as 0 (see note 7). In the abovementioned context of neoliberal tax policy diffusion, I do not expect the US PIT rate to pull up other countries' PIT rates if it happens to be higher, i.e. if the differential is negative. As a robustness check, I use a variable that allows for both positive and negative differentials instead; the results are similar.
Footnote 17: In model 3, the respective VIFs of the domestic-US top rate differential, the PIT-CIT rate differential, and the top rate's start value are 5.69, 4.44, and 6.36. In model 4, the latter two are 4.44 and 4.06, respectively.
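The conditional effect plotted in Figure 3 is linear in the differential: the coefficient of the CIT change plus the interaction coefficient times the start-of-term differential. A short sketch of that arithmetic; only the 0.56 effect at full alignment is reported in the text, so the interaction coefficient below is invented, while the final line checks the zero-crossing reported for the reversed models discussed next.

```python
def marginal_effect(beta_dcit, beta_interaction, differential):
    """Effect of a 1-point CIT change on the PIT change, conditional on the
    start-of-term PIT-CIT differential (the quantity plotted in Figure 3)."""
    return beta_dcit + beta_interaction * differential

b_dcit, b_inter = 0.56, -0.023  # illustrative values, not the Table 2 estimates
for d in (0, 10, 20):
    print(f"differential {d:>2}: effect {marginal_effect(b_dcit, b_inter, d):.2f}")

# Zero-crossing check for the reversed (CIT) models below:
# 0.7178 - 0.0295 * differential = 0  =>  differential of about 24.33 points
print(0.7178 / 0.0295)
```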
Models 4 and 5 therefore exclude the domestic-US PIT rate differential and only include the change in the US PIT rate, which now turns weakly significant. The effect seems to depend on variable specification: replacing the 1-year change with a 2-year change, or the change during a government's term, yields insignificant coefficients (not reported). I thus regard the evidence for US Stackelberg leadership as weak. 18

The coefficients of most control variables point in the expected directions. Redistributive preferences have a significant and upward effect on top rate adjustments, and richer countries set higher rates. Moreover, it is plausible that government budget deficits and populations with large shares of elderly or unemployed people also have some upward effect, as their coefficients are close to the 10% significance level in most models. These findings partially confirm the financial and political constraints that governments face in reducing PIT rates, as identified by Ganghof (2006). The government ideology variable has no significant effect, which is notable, because partisan effects should be particularly pronounced in cabinet-based models (Schmitt 2016).

Table 3 presents the Granger causality tests, which reverse the original model, estimating the effect of the tax rate differential and the government-specific change in the PIT rate on CIT rate setting (hypothesis 3). In these models, I control for levels and changes of the US CIT rate, because there is strong evidence of US Stackelberg leadership in CIT policy (e.g. Swank 2016). Model 6 shows that CIT rate changes are not driven by the prior tax rate differential. They do coevolve with changes in the PIT rate. However, as the interaction term in model 7 shows, this effect is partially driven by countries with a small tax rate differential changing both rates together: in a country with a 24.33 percentage-point differential, the estimated effect of PIT on CIT rate changes is zero (0.7178 - 0.0295 x 24.33). Following the approach of the original models 4 and 5, models 8 and 9 drop the domestic-US CIT rate differential to prevent multicollinearity. The coefficient of the PIT-CIT differential turns weakly significant in model 8 (p-value 0.100), but it is highly insignificant in model 9 with the interaction term (p-value 0.532). 19 Model 10 drops all non-tax and non-domestic control variables, to check whether these substantially affect the results; they do not. In sum, I find very weak evidence of CIT rates being influenced by the tax rate differential. This finding tentatively suggests that the differential has an exogenous effect in the original models.

Models 11-14 in Table 4 test the effect of dividend tax regimes (hypothesis 6). 20 Model 11 includes the differential between the PIT rate and the combined corporate- and shareholder-level dividend tax rate in a cabinet's first year. Model 12 additionally controls for the cabinet-specific change in the shareholder-level dividend tax rate. The reason for estimating these models separately is that PIT and dividend tax rates are likely to coevolve during a cabinet's term, especially in synthetic systems that include labour and dividend income in one tax base (Ahrens et al. 2020); this causes endogeneity problems which may affect the results of model 12. In both models, the PIT-dividend tax differential has no significant effect on PIT rate setting, while the coefficients of the CIT variables remain highly significant. 21 In models 13 and 14, I exclude the CIT variables to prevent any multicollinearity; the dividend tax differential remains insignificant. In sum, dividend tax regimes are unable to mitigate the downward effect of CIT competition on PIT rates.

Robustness checks

Model 15 includes an indicator of systemic banking crises (Laeven and Valencia 2018), and interacts it with the change in the debt-to-GDP ratio (Armingeon et al. 2020a). Model 16 includes the interest rate on 10-year government bonds, which was found by Lierse and Seelkopf (2016) to be a driver of tax reforms, as higher bond yields constrain debt-financed government spending. Model 17 controls for the effect of trade openness, which may stimulate the government to increase taxation and redistribution in order to protect citizens from the risks of globalisation (Rodrik 1998). Trade openness is measured by the sum of imports and exports as a percentage of GDP (Armingeon et al. 2020a). Following Swank and Steinmo (2002), model 18 measures the strength of Christian democratic parties instead of left-wing parties. 22 The remaining models control for determinants of redistribution preferences, as discussed in the theoretical section. Model 19 includes the share of expenditures directed to welfare schemes that mainly benefit the poor (Berens and Gelepithis 2019), retrieved from the OECD's Social Expenditure Database (OECD 2020a). 23 Model 20 includes an index of ethnic fractionalisation as a proxy for shared cultural backgrounds (Drazanova 2020), 24 and model 21 splits the demographic burden into two variables measuring the respective shares of unemployed and elderly people (Armingeon et al. 2020a). Building on model 21, model 22 replaces the level of unemployment with its yearly change, to better capture fiscal stress. None of the additional variables have significant coefficients and the other results do not change substantially.

Next, I run alternative model specifications. In models 23 and 24, I check whether the coefficients of the tax variables in models 1 and 2 change when excluding the non-tax control variables; they do not, except for the interaction term in model 2 dropping slightly below the 10% significance level. To further check the exogeneity of the tax rate differential, I run a 2SLS instrumental variables regression in model 25, using as an instrument the differential between a country's top PIT rate and an unweighted spatial lag of other countries' CIT rates. 25 Its coefficient remains negative and highly significant. As a Wooldridge test detects country-specific autocorrelation in the original models, 26 I replace the Eicker-Huber-White standard errors with country-clustered standard errors in model 26. I have opted against this in the original models because of the relatively small number of countries. The results remain unchanged. As an alternative, in model 27, I use country dummies to perform a fixed-effects regression with heteroskedasticity-robust standard errors. The coefficients of the CIT variables remain highly significant. Model 28 accounts for slightly unbalanced panel data, due to missing values of the consensus democracy index for Greece, Portugal, and Spain in the early 1980s. The results do not change substantially when excluding these countries. In model 29, the non-tax control variables, except for the level of redistribution, refer to values in the first half of a cabinet's term instead of its first year, following Schmitt (2016) and Ahrens et al. (2020). 27 This model specification is less effective in preventing endogeneity, but it allows for more relevant variation. The results are similar. Models 30 and 31 use country-year data instead of cabinet-periodised data, with 1-year lags for all control variables. 28 The results hold.

Footnote 18: Also, it does not depend on domestic circumstances. Following Swank (2016), who finds substantial Stackelberg leadership of the US in corporate tax policy, with a mediating effect of domestic institutions, I subsequently interact the change in the US PIT rate with all socio-economic, political and institutional control variables (not reported). None of these interaction terms are significant.
Footnote 19: Dropping the change in the US CIT rate as well, while retaining the control variables, yields similar results.
Footnote 20: The estimations of these models have slightly fewer observations, because of missing data on dividend tax rates for Finland and Greece in the 1980s and 1990s. To check whether this influences the results of the other variables, I re-estimate models 1 and 2, omitting the same governments. The results are unaffected.
Footnote 21: Incidentally, the effect of the dividend tax variable, which captures both corporate- and shareholder-level taxation, may partially run through the CIT rate anyway. When controlling for the height of the CIT rate, the PIT-dividend tax differential indeed turns insignificant, while the PIT-CIT differential remains significant (not reported).
Footnote 22: I code parties as Christian democratic when they are categorised as both 'centrist' and 'religious' by Armingeon et al. (2020b). As I lack data on Christian democratic cabinet positions, I approximate their relative strength in government using the ratio of their parliamentary seat share over governing parties' total seat share.
Footnote 23: Following Berens and Gelepithis (2019), I classify family and housing benefits, active labour market policies and miscellaneous spending as pro-poor, as opposed to unemployment, incapacity, old age, health, and survivors' benefits, which mainly fulfil insurance functions. A small number of missing data years are linearly interpolated.
Footnote 24: Due to data availability, this reduces the period of analysis to 1981-2013.
Footnote 25: Its correlation coefficient with the domestic PIT-CIT differential is 0.75 and its first-stage F-statistic is 9.74. Due to multicollinearity of this instrument with the top rate's standardised start value (correlation coefficient 0.85), I use the top rate's unstandardised start value in this model (correlation coefficient 0.65). Note that no proper instrument exists for the cabinet-specific change in the CIT rate: a spatial lag of other countries' changes in CIT rates would not suffice (correlation coefficient 0.25; first-stage F-statistic 0.02), because countries reform their CITs at very dissimilar points in time.
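The 2SLS check in model 25 could be sketched as follows with the linearmodels package, which is an assumption: the article does not name its software, and all variable names and data below are synthetic stand-ins, not the study's panel.

```python
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS

rng = np.random.default_rng(0)
n = 226
# Instrument: differential between the domestic top PIT rate and an
# unweighted spatial lag of other countries' CIT rates (synthetic values)
diff_spatial = rng.normal(15, 5, n)
# Endogenous regressor: the domestic PIT-CIT differential
diff = 0.8 * diff_spatial + rng.normal(0, 2, n)
d_cit = rng.normal(-1, 2, n)
d_pit = -0.15 * diff + 0.34 * d_cit + rng.normal(0, 2, n)
panel = pd.DataFrame(dict(d_pit=d_pit, d_cit=d_cit,
                          diff=diff, diff_spatial=diff_spatial))

res = IV2SLS.from_formula(
    "d_pit ~ 1 + d_cit + [diff ~ diff_spatial]", panel
).fit(cov_type="robust")
print(res.first_stage)  # instrument relevance, cf. the F-statistic in note 25
print(res.params)
```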
Conclusion

Countries' differentials between their PIT and CIT rates increase as a result of corporate tax competition. Recent evidence shows that those differentials induce substantial PIT avoidance through the corporate form. Governments should have multiple political and economic reasons to prevent such tax avoidance and limit the PIT-CIT differential. Thus, CIT competition should put downward pressure on PIT rates. Building on the evidence presented by Ganghof (2006), I have shown that CIT rate setting is indeed the main driving factor of top PIT rate setting by 226 cabinets in 19 OECD countries between 1981 and 2018. It has a more pronounced and more significant effect than several political, economic, and institutional control variables. It also overshadows the effects of dividend tax rates. The latter finding implies that the erosion of the CIT's backstop function is unlikely to be mitigated by the recent dividend tax increases seen in several countries. Together with increasing tax rate differentials and converging top PIT rates in advanced OECD countries, these results suggest that governments have little room to manoeuvre in their top rate setting.

While this study's regression approach complements the existing qualitative evidence on top rate setting, it also has some limitations. First, regression models cannot assess all relevant details of the political process, including the influence of veto players, which are difficult to capture in a single index. This calls for additional qualitative case studies of recent tax reforms. Second, while this study has shown that countries' PIT rates converge, it could not fully disentangle the underlying processes of international policy diffusion, mainly due to a lack of migration data. In particular, identifying the channels of international competition for rich individuals is a fruitful area of future research. Another limitation is the measurement of shareholder taxes, which was confined to dividend taxation for reasons of data availability. However, shareholders may exploit lenient capital gains tax regimes instead of distributing dividends. Furthermore, statutory shareholder tax rates do not capture the presence of legal provisions preventing profit retention inside corporations. Thus, opportunities for future research include elucidating the effectiveness of such legal provisions in supporting PIT progressivity and compiling internationally comparable data on capital gains tax rates.
2023-03-03T16:11:32.775Z
2023-03-01T00:00:00.000
{ "year": 2023, "sha1": "99d2099169e78d54a3c752d35e80f68f2ebeabcc", "oa_license": "CCBY", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/7423D1E9E4594739B24569BA5E034C7D/S0143814X23000028a.pdf/div-class-title-determinants-of-top-personal-income-tax-rates-in-19-oecd-countries-1981-2018-div.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "bacd539690df425e45b39165586264c7dabf7e87", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [] }
58615491
pes2o/s2orc
v3-fos-license
Prevalence and predictors of surgical-site infection after caesarean section at a rural district hospital in Rwanda

Abstract Background There are few prospective studies of outcomes following surgery in rural district hospitals in sub-Saharan Africa. This study aimed to estimate the prevalence and predictors of surgical-site infection (SSI) following caesarean section at Kirehe District Hospital in rural Rwanda. Methods Adult women who underwent caesarean section between March and October 2017 were given a voucher to return to the hospital on postoperative day (POD) 10 (±3 days). At the visit, a physician evaluated the patient for an SSI. A multivariable logistic regression model was used to identify risk factors for SSI, built using backward stepwise selection. Results Of 729 women who had a caesarean section, 620 were eligible for follow-up, of whom 550 (88·7 per cent) returned for assessment. The prevalence of SSI on POD 10 was 10·9 per cent (60 women). In the multivariable analysis, the following factors were significantly associated with SSI: bodyweight more than 75 kg (odds ratio (OR) 5·98, 1·56 to 22·96; P = 0·009); spending more than €1·1 on travel to the health centre (OR 2·42, 1·31 to 4·49; P = 0·005); being a housewife compared with a farmer (OR 2·93, 1·08 to 7·97; P = 0·035); and skin preparation with a single antiseptic compared with a combination of two antiseptics (OR 4·42, 1·05 to 18·57; P = 0·043). Receiving either preoperative or postoperative antibiotics was not associated with SSI. Conclusion The prevalence of SSI after caesarean section is consistent with rates reported at tertiary facilities in sub-Saharan Africa. Combining antiseptic solutions for skin preparation could reduce the risk of SSI.

Introduction

Surgical-site infections (SSIs) are an important global public health problem, disproportionately affecting low- and middle-income countries (LMICs), where the burden is 75 per cent higher than in developed countries 1. Globally, SSIs lead to longer hospital stays and more health complications, increasing mortality risks and costs for patients, families and healthcare facilities 1-5. Lower (uterine) segment caesarean section (LSCS) is the most commonly performed surgical procedure in the world 6. The rates of LSCS have increased over the past three decades, up to an average of 19⋅1 per cent of deliveries worldwide 7. Although sub-Saharan Africa has the lowest rate of LSCS, at 7⋅3 per cent of all deliveries, increased access to LSCS has contributed to a decline in maternal mortality in the region 7,8. However, the increased access to LSCS has also led to an increased number of SSIs 9. In sub-Saharan Africa, the prevalence of SSI after LSCS ranges from 7 to 48 per cent 10-14, and is associated with younger age, obesity, hyperthermia on admission, difficult delivery, premature rupture of the membranes, neonatal death, prolonged labour and long duration of LSCS 13-17. The majority of these studies were from urban and/or tertiary facilities, and there is limited information on SSI prevalence and risks for women delivering in district hospitals serving rural areas, where 72⋅4 per cent of the sub-Saharan African population resides 18. In Rwanda, 13 per cent of all women and 11 per cent of women residing in rural areas deliver their baby via LSCS 19; LSCS is the most commonly performed surgical procedure in Rwandan district hospitals 20.
This study estimated the prevalence of SSI on postoperative day (POD) 10 and identified risk factors for these infections for women who underwent LSCS at Kirehe District Hospital (KDH) in rural Rwanda. The aim was to characterize the burden of SSI to patients and district hospitals, and to identify which patients are most at risk, in order to target future interventions.

Methods

All women provided informed consent before study enrolment. Study data were entered directly into a research electronic data capture database (REDCap; https://www.project-redcap.org/) 21. This prospective cohort study included women who underwent LSCS between 22 March and 18 October 2017 at KDH. This hospital, located in the rural Eastern Province of Rwanda, is managed by the Rwandan Ministry of Health with technical and financial support from PIH/IMB (Partners In Health/Inshuti Mu Buzima), a US-based non-governmental organization. The KDH catchment area includes 16 health centres located in the district and two located in the Mahama Refugee Camp, and the hospital serves a population of 360 565 22. The hospital has 233 beds and is staffed by 136 employees: 15 general practitioners (GPs), 77 nurses, 11 midwives, 18 paramedical staff, four administrators and 11 support staff. During a portion of the study period (March to July 2017), there was also a visiting PIH/IMB-sponsored obstetrician/gynaecologist working part-time at the hospital, performing complex obstetric and gynaecological procedures and providing professional development training to hospital GPs. An estimated 7⋅8 per cent of deliveries in Kirehe District were by LSCS 19, with a mean of 136 LSCS done at KDH each month. About 80 per cent of Rwandans have medical insurance; 97 per cent of these are covered by the community-based health insurance, which pays for 90 per cent of total medical costs 19. In Kirehe District, as in other parts of rural Rwanda, a woman presents first to her nearest health centre for assessment and management. If a nurse or midwife there identifies an urgent problem, the woman is referred to the district hospital. A limited number of women identified as having a high-risk pregnancy present directly to the district hospital to be attended by a GP. Once at the district hospital, a midwife monitors the woman's progress and, if needed, calls a GP, who may recommend an LSCS for delivery.

Surgical technique

In accordance with hospital protocols, the woman's skin is prepared with aqueous-based 10 per cent chlorhexidine gluconate followed by 10 per cent povidone-iodine solution before incision. The 2016 WHO guidelines 23 for the prevention of SSIs recommend administration of preoperative antibiotics within 120 min before skin incision and no postoperative antibiotics, except in the case of an infection. The previous 2015 WHO recommendations 24, which were still the guidelines of the Rwandan gynaecology and obstetrics clinics, were to administer preoperative antibiotics 30-60 min before incision with no postoperative antibiotics. Although women undergoing LSCS at KDH receive preoperative antibiotic prophylaxis, typically a single dose of 1 g ceftriaxone within 1 h before incision, almost all receive postoperative antibiotics 25. Braided absorbable sutures, usually Vicryl® (Ethicon, Somerville, New Jersey, USA), are used to suture subcutaneous tissue and for skin closure. A gauze soaked with povidone-iodine is used for dressing and is replaced on POD 3. After LSCS, the woman is admitted to the postpartum ward for at least 3 days for monitoring and postoperative care.
A GP attends daily to assess the woman's healing and decides when she is fit to be discharged. A GP then fills out a discharge form with a brief note on the patient's follow-up plan and, in some instances, prescribes medications (mostly pain medication and/or antibiotics). A midwife gives additional instructions about wound care, neonatal care, medications and follow-up. A follow-up date is then scheduled for the patient's wound dressing change at her nearest health centre.

Study population and data collection

All women 18 years and older who underwent LSCS at KDH between 22 March and 18 October 2017 were eligible for the study. Patients from Mahama Refugee Camp were excluded because guidelines regulating refugees' movements hinder their ability to be followed after discharge. After LSCS and during hospitalization, study team members consented and enrolled eligible women to participate in the study. At the time of enrolment, a trained study data collector interviewed patients, collecting basic information on demographics and clinical history. After discharge, data collectors extracted clinical details from patients' files. Patients were screened on POD 10 (±3 days) by a GP for the presence of an SSI. This window was selected because the majority of SSIs develop between POD 5 and 10 26. Furthermore, timely identification of SSI is crucial for minimizing morbidity and mortality. Two study clinics were held each week. Patients were assigned to the first study clinic that fell within the POD 7-13 screening window. Patients still in hospital on the scheduled clinic date were assessed for SSI at the bedside. A woman discharged before her scheduled clinic date was given a transport voucher to be redeemed when she returned for screening. She was called a day before her clinic day as a reminder. If she missed her scheduled clinic day, a study team member attempted to call her and reschedule an appointment for the next study clinic, also within the POD 7-13 screening window. Women who missed two appointments were considered lost to follow-up and excluded from the analysis. The GP first administered a ten-question screening protocol assessing: increased pain since discharge, fever since discharge, erythema, oedema, induration, dehiscence, drainage from the wound, drainage with discolouration, drainage with a foul odour and drainage of pus (purulent discharge). The GP then conducted a physical examination. The diagnosis of SSI was based on the physical examination.

Evaluation of risk factors

Based on previous literature and the researchers' knowledge of the Rwandan healthcare system, demographic and clinical variables were identified that could be potential predictors of SSI. One set of variables of interest, both for predelivery and postdischarge risks, related to the costs and time required for travel: travel time from home to health centre, travel time from health centre to hospital, cost of transport, and total time spent getting to the hospital. For women with missing travel data, the researchers imputed values based on data from participants from the same village. If there were no other participants from the same village, the imputed value was the mean of values from participants from the same cell, that is the next administrative level, typically a cluster of between four and 19 villages.
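A minimal sketch of this two-level imputation rule, assuming hypothetical column names for the travel variables and the village and cell identifiers:

import pandas as pd

def impute_travel(df: pd.DataFrame, col: str) -> pd.Series:
    # Group means skip missing values automatically.
    village_mean = df.groupby("village")[col].transform("mean")
    cell_mean = df.groupby("cell")[col].transform("mean")
    # Prefer the observed value, then the village mean, then the cell mean.
    return df[col].fillna(village_mean).fillna(cell_mean)

df = pd.read_csv("participants.csv")  # hypothetical analysis table
for col in ["travel_time_home_hc", "travel_time_hc_hosp", "transport_cost"]:
    df[col] = impute_travel(df, col)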
Cost of transport and monthly income were analysed based on a value of up to or greater than €1⋅1 per day (US $1⋅3), which is the poverty line cut-off in purchasing power parity per day 27. The time taken and cost of transport were self-reported by the patient. The cost of transport was calculated using an exchange rate of 1 euro to 887⋅7 Rwandan francs, the mean exchange rate during the study interval according to the Rwandan national central bank.

Statistical analysis

Fisher's exact test (categorical variables) or the Wilcoxon rank-sum test (continuous variables) was used to assess the relationship between co-variables and the presence of an SSI. Variables that were significant at α = 0⋅2 in the bivariable analyses were considered for the multivariable logistic regression model. For variables with more than 10 per cent missing data, an explicit missing category was created for the multivariable modelling. A reduced multivariable logistic regression model was built using backward stepwise selection, stopping when all remaining co-variables were significant at the α = 0⋅05 level. Odds ratios, 95 per cent confidence intervals and P values were reported for the multivariable analysis. All analyses were completed in Stata® version 13 (StataCorp, College Station, Texas, USA).
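A minimal sketch of the selection procedure just described, assuming a cleaned analysis table with hypothetical column names. The original analysis was run in Stata; statsmodels is used here as a stand-in, and categorical predictors are assumed to be pre-encoded as dummy variables.

import statsmodels.api as sm

def backward_select(df, outcome, candidates, alpha=0.05):
    """Backward stepwise logistic regression: drop the least significant
    co-variable and refit until all remaining p-values are below alpha."""
    kept = list(candidates)  # variables significant at alpha = 0.2 bivariably
    while kept:
        model = sm.Logit(df[outcome], sm.add_constant(df[kept])).fit(disp=0)
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= alpha:
            return model  # all remaining co-variables are significant
        kept.remove(worst)
    return None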
Results

Of the 729 women who had an LSCS at KDH during the study, 620 (85⋅0 per cent) were eligible for follow-up. Of these, 550 (88⋅7 per cent) were screened by a GP for SSI: 16 (2⋅9 per cent) were still in hospital on POD 10 and 534 (97⋅1 per cent) returned for follow-up. The majority (316, 57⋅5 per cent) were between 22 and 30 years old, were married (237, 43⋅1 per cent), had primary education (382, 69⋅5 per cent) and were farmers (478, 86⋅9 per cent) (Table 1). Most women had community-based health insurance (523, 95⋅1 per cent) and a monthly household income of less than €33⋅8 (508, 92⋅4 per cent). Of the 390 women (70⋅9 per cent) with bodyweight documented, 359 (92⋅1 per cent) weighed between 50 and 75 kg. Most women (282, 76⋅8 per cent) used public transport to reach a health centre and an ambulance (256, 69⋅8 per cent) to move from there to the hospital. The median time taken to travel from home to the health centre was 30 (i.q.r. 20-50) min and that from the health centre to the hospital was 45 (20-60) min. The median total time from home to hospital, including time receiving care at the health centre, waiting for transport to the hospital and waiting to be admitted to the hospital, was 6⋅3 (2⋅5-18⋅0) h. The median total amount spent to reach KDH was €3⋅4 (i.q.r. 1⋅9-4⋅6), 10 per cent of the mean monthly household income. This included a median of €1⋅3 (0⋅6-2⋅3) spent to reach the health centre and €1⋅9 (0⋅6-2⋅7) to travel from there to the hospital. The prevalence of SSI on POD 10 was 10⋅9 per cent (60 women) (Table 2; Table S2, supporting information); of these, only two (3 per cent) were identified before discharge from hospital, and 75 per cent were superficial SSIs.

In the bivariable analysis, the following factors were associated with SSI at the α = 0⋅2 level: living with a partner or separated/widowed (P = 0⋅177), occupation housewife rather than farmer (P = 0⋅127), monthly income less than €33⋅8 per month (P = 0⋅184), weighing more than 75 kg (P = 0⋅011) and long travel time. In the multivariable regression analysis, weighing more than 75 kg, spending more than €1⋅1 travelling to the health centre, occupation housewife rather than farmer, and use of a single antiseptic were independently associated with SSI (Table 3).

Discussion

This study estimated the prevalence and risk factors for SSI among women who had LSCS in a rural district hospital in the region. The characteristics of this study population, which was poor, had low levels of education and travelled long distances to reach a health centre or hospital, reflect those of Kirehe District and much of rural East Africa. The SSI rate was 10⋅9 per cent, which is consistent with reports from other countries in sub-Saharan Africa, although notably these earlier estimates were based largely on studies in tertiary and/or urban facilities 10-13,16,17,28. The prevalence was higher than the average of 7⋅1 per cent in developed countries 16,29,30. Some individual risks for SSIs were identified. First, consistent with the literature 15,31, women who weighed more were more likely to develop an SSI. Second, it was found that women whose skin was prepared with two antiseptic solutions (10 per cent chlorhexidine followed by 10 per cent povidone-iodine, in accordance with hospital protocol) were less likely to develop an SSI. The rare instances (2⋅4 per cent) where a single antiseptic solution was used were likely due to lack of stock or variation in GP practice. Two interesting findings of this study were the increased risk of SSI in women who spent more money to access the health centre, and for housewives compared with farmers. Postoperative follow-up, including wound dressing changes, occurs mostly at health centres. This care is a burden for the majority of this study population, as the median time to travel to the health centre is 30 min and the median cost of transport to the health centre is 4-10 per cent of monthly income. Women with high transport costs and housewives may have the least disposable income, and may not be able to return to the health centre for regular dressing changes. Other studies 19,32,33 support this hypothesis and have shown that transportation costs are common barriers to surgical care in LMICs; vouchers covering transport costs increased access to maternal health services 34. It is possible that financial barriers may also be associated with loss to follow-up, and so there may be a higher SSI rate among women who did not attend follow-up in this study. This study did not show any association between SSI development and administration of preoperative or postoperative antibiotic therapy. The recent WHO guidelines (2016) 35 recommend administration of antibiotic prophylaxis within 120 min before surgical incision and no administration of postoperative antibiotics. In the present study, 66⋅7 per cent of women received preoperative prophylaxis, and nearly all had an extended course of postoperative antibiotics.
Overuse of antibiotics risks antimicrobial resistance in this population, particularly of Gram-negative strains 36. A systematic review 37 showed high levels of antimicrobial resistance across a variety of populations in sub-Saharan Africa. Previous studies 16,29,30,38,39 have discussed other factors that could be linked to an increased risk of SSI development, such as poor operating room infrastructure, poor adherence to operating room guidelines including hand-washing techniques, inadequate hygiene and sanitation at the hospital and at home, and inadequate quality and quantity of staffing. The present study did not have such data for this population. In addition, water quality, sanitation and hygiene conditions, both in the patient's home and at health facilities, could affect infection control. Another limitation of this study was missing data, particularly those extracted from clinical charts. Because height was often not recorded, the researchers were unable to calculate BMI. Instead, they used weight categories as a proxy for overweight versus normal weight. The study also missed patients who returned to health facilities before their scheduled follow-up. Finally, KDH receives significant support from PIH/IMB, which may limit the generalizability of these findings. This included the presence of an obstetrician/gynaecologist for 5 months of the study; however, secondary analysis found no significant difference in SSIs or other complications during and after the time the surgeon was present in the hospital, indicating minimal confounding.
2019-01-22T22:28:42.716Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "3883185ab222af4146b15750b86b80bec1824f9f", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/bjs/article-pdf/106/2/e121/36116711/bjs11060.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "3883185ab222af4146b15750b86b80bec1824f9f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235359104
pes2o/s2orc
v3-fos-license
IPS300+: a Challenging Multimodal Dataset for Intersection Perception System

Due to high complexity and occlusion, insufficient perception in crowded urban intersections can be a serious safety risk for both human drivers and autonomous algorithms, whereas CVIS (Cooperative Vehicle Infrastructure System) is a proposed solution for full-participants perception in this scenario. However, research on roadside multimodal perception is still in its infancy, and there is no open-source dataset for such a scenario. Accordingly, this paper fills the gap. Through an IPS (Intersection Perception System) installed at the diagonal of the intersection, this paper proposes a high-quality multimodal dataset for the intersection perception task. The center of the experimental intersection covers an area of 3000 m², and the extended sensing distance reaches 300 m, which is typical for CVIS. The first batch of open-source data includes 14198 frames, and each frame has an average of 319.84 labels, 9.6 times that of the most crowded dataset to date (the H3D dataset, 2019). In order to facilitate further study, this dataset keeps the label documents consistent with the KITTI dataset, and a standardized benchmark is created for algorithm evaluation. Our dataset is available at: http://www.openmpd.com/column/other_datasets.

I. INTRODUCTION

Robust perception of the surrounding environment has always been one of the most crucial factors in autonomous driving. Lately, perception through on-board sensing units has been widely studied as a hot spot. A large number of datasets such as KITTI [1], nuScenes [2] and Waymo [3] have greatly promoted related research. However, when dealing with extremely complex and severely obstructed scenes such as crowded urban intersections, even experienced human drivers cannot ensure driving safety based solely on the information visible from inside the car, let alone autonomous algorithms. The ERSO (European Road Safety Observatory) 2018 public traffic accident statistics report [4] shows that, in 2016, of the 9693 urban traffic accidents that occurred in Europe, 3839 occurred at intersections, accounting for 39.6% of the total urban accidents. Therefore, driving safely across urban intersections remains a challenging open question for autonomous driving. Recently, CVIS has attracted broad attention in both academia and industry. As a solution to insufficient perception in large-scale urban scenes, CVIS bridges the gap between smart transportation and smart vehicles, and could become an important infrastructure for the future smart city. In CVIS, RSUs (Road-Side Units) are installed to obtain reliable perception of the entire intersection from a top-down perspective, and the perception results are sent to passing vehicles through V2I. Among the pilot studies in various countries, the Ko-FAS project [5][6][7] sponsored by the EU established an IPS at an intersection in Aschaffenburg. RSUs and OBUs (On-Board Units) enable vehicles to get a whole view of the intersection, and the ADAS (Advanced Driver Assistance Systems) can warn drivers of potential collisions at blind spots. The CICAS-SSA project [8] sponsored by the US DOT focused on warnings for incoming vehicles on rural roads, telling drivers the correct time to enter the intersection. To achieve this, 16 Radars and 8 Lidars were installed as RSUs, and the whole IPS cost 180,890 dollars in 2010. However, it is difficult to promote this approach due to the unaffordable price.
In the J-Safety project [9], the UTMS of Japan used the perception information of several intersections to adjust vehicle dynamics and promote fuel economy. However, these projects do not take advantage of deep learning technology in roadside perception tasks, and there is no open-source multimodal dataset for large-scale urban intersections yet. The data-driven character of deep learning methods makes data all the more important for multimodal roadside perception research. In this paper, we propose the largest objects-per-frame multimodal dataset, IPS300+, for roadside perception in urban intersections, aiming to promote research on 3D target detection by roadside units in CVIS. The IPS300+ data is published under the CC BY-NC-SA 4.0 license. The main contributions of this paper are as follows:

• The first multimodal dataset (including point clouds and images) available for roadside perception tasks in a large-scale urban intersection scene. The point clouds remain usable within 300 m.
• The most challenging dataset with the highest label density. The proposed dataset includes 14198 frames of data, and every single frame has an average of 319.84 labels, which is 65.3 times that of KITTI [1].
• 3D bounding boxes labeled at 5 Hz, providing dense ground truth for the 3D target detection task and the upcoming tracking task.
• A feasible and affordable solution for IPS construction, with a wireless approach for time synchronization and spatial calibration. The open questions for algorithms are also mentioned in this paper.

II. RELATED WORKS

Since there is hardly any dataset that focuses on roadside perception in urban intersections, the related works are separated into three sections which cover all related datasets from autonomous driving to traffic monitoring.

A. On-board Perception Datasets for Autonomous Driving

The datasets for on-board perception tasks are relatively abundant compared with roadside perception, and datasets with both point clouds and images have been enriched in recent years. The KITTI dataset released by A. Geiger et al. [1] has been widely used as the benchmark for algorithm evaluation. Besides, H. Caesar et al. [2] released the nuScenes dataset containing 40k labeled data; Google Waymo released their dataset [3] containing 12M labeled data; OEMs (Original Equipment Manufacturers) have released their own datasets as well, such as Audi's A2D2 dataset [10] and Honda's H3D dataset [11]. Among all these datasets, Lidars and cameras are installed at different locations on the car, with labeling frequencies ranging from 2 Hz to 10 Hz. All these precious labeled data have greatly promoted research on onboard perception for autonomous driving, and the algorithms driven by these data have achieved remarkable results.

B. Roadside Traffic Monitoring Datasets

Most of the existing datasets collected by roadside units are based on images, and 2D bounding boxes are provided for the targets. Among them, NGSIM [12,13], released in 2007, is one of the most widely used datasets in this research area. This dataset contains the traffic data of US Highway 101 and the Interstate 80 freeway collected by the FHWA (US DOT). The sensing cameras were installed on the top of a building tens of meters above the ground. Besides NGSIM, Aachen University of Technology released the HighD dataset [14] in 2018, which covers German highway traffic scenes. The data were collected by a camera mounted on a drone.
BUPT released their roadside re-identification dataset VeRi [15] in the same year, which was captured by 20 road surveillance cameras. However, they only provide cropped images of the targets instead of the whole camera image.

C. Roadside Target Perception Datasets

The only reachable multimodal dataset for the intersection scene is from the Ko-FAS project [5][6][7] in 2014. It contains 6 min 28 s (4850 frames) of data from an intersection in the town of Aschaffenburg. The widths of the branches are 20 m and 15 m. The sensors, including 14 8-layer Lidars and 8 cameras (only 2 available), are installed 5-7 m above the ground to get a full view of the road. The sensor and data details are shown in Table I. Fig. 2 shows a single frame of this dataset. Due to the limited points of the Lidars, a bus 50 m from the center of the intersection has only 11 points, which is hardly recognizable for either humans or algorithms.

A. Data Collection Platform

While cameras, Lidars, and Radars are widely used for onboard perception tasks, it is still an open question what kind of roadside sensors can meet the requirements of CVIS. There is no typical dataset or unified evaluation standard for this task, either. This paper provides a possible solution for an IPS by learning from onboard sensors and making tentative adjustments for roadside perception tasks. This paper takes the IPU (Intersection Perception Unit) as the smallest unit of the task, whose composition is shown in Fig. 3.

B. Location and Sensors Setup

The intersection of Chengfu Road and Zhongguancun East Road in Haidian District, Beijing, is chosen for data collection. The central area of the intersection reaches 60 m × 50 m and suffers heavy occlusion problems because of the large traffic volume. Since there are many universities nearby, the numbers of pedestrians, cyclists and tricycles are relatively larger than at other locations. All these factors make this intersection a challenging driving scene, which is also a typical scenario for CVIS. After weighing all the factors, including maximizing the sensing range, minimizing traffic impact, economy, and compatibility with existing facilities, two IPUs were installed at the diagonal of the intersection, 5.5 m above the ground. The installation locations of the two IPUs are shown in Fig. 1.

C. Sensors Calibration

Time synchronization: this part includes the synchronization of each sensor inside one IPU and synchronization between different IPUs. GPS-PPS signals are used to get a unified time. Each sensor on the same IPU is connected to the GPS-PPS output port of the GPS through a wire. After the PPS trigger is sent out, the Lidar rotates to 0°, which is perpendicular to the crossbar of the IPU, and the camera triggers its shutter. Due to the constraint of the crossbar, it is thereby guaranteed that the Lidar sweep and the camera exposure are aligned in time. The key to maintaining time synchronization between two IPUs is to unify the time trigger. Since the upper limit of the timing error of GPS is 1 μs, the cumulative error between two IPUs is less than 2 μs, which is acceptable compared with the frequencies of the sensors. Spatial calibration: similarly, the spatial calibration includes the same two parts. Within the IPU, the method proposed in [16] is adopted to get the distortion and internal matrix of each camera. The calibration of the binocular cameras uses the method proposed in [17]. The external parameters between camera and Lidar are calculated using the method proposed in [18,19]. Since these processes are basically the same as the calibration processes used on autonomous vehicles, we do not elaborate on them.
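To illustrate how the calibration products are typically consumed, the following is a minimal pinhole-model sketch of projecting Lidar points into a camera image; K, R and t stand for the camera intrinsic matrix and the Lidar-to-camera extrinsics, and the code is a generic illustration rather than part of the IPS300+ toolchain.

import numpy as np

def project_lidar_to_image(points, K, R, t):
    """points: (N, 3) Lidar-frame XYZ; returns (M, 2) pixel coordinates."""
    cam = points @ R.T + t          # rotate/translate into the camera frame
    cam = cam[cam[:, 2] > 0]        # keep points in front of the camera
    uv = cam @ K.T                  # apply the intrinsic matrix
    return uv[:, :2] / uv[:, 2:3]   # perspective division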
Between IPUs, the spatial calibration problem can be treated as a registration problem between the point clouds from different IPUs. However, the distance between the two sensing devices reaches 70 m, and the point cloud features are relatively sparse. ICP-based methods and NDT cannot converge if the raw point clouds are used directly. In order to minimize the registration error, manual screening of usable features is needed before registration. Fig. 4 shows the pipeline of the point cloud registration process in this paper. Since the two Lidars are far apart, the point cloud features representing the same location may differ (as shown in Fig. 5). In order to ensure the consistency of the features from the point clouds, only the buildings are selected for ICP iterations. Specifically, after conventional operations such as the statistical outlier filter and voxel grid filter, the method proposed in [20] is used to initially divide the point cloud into ground and targets. Then the ground points are fitted to a plane by RANSAC [21] and rotated into the XY plane of the intersection coordinate system. The transformation matrix is recorded as Rotation_Matrix1. After Rotation_Matrix1, the height of an object is coupled only to the z axis. Then, the barrier in the middle of the road is manually selected and rotated to the x axis by Rotation_Matrix2, so that the two road branches are aligned with the x and y axes separately. Then, a pass-through filter is used to select the buildings, which are sent to the ICP algorithm. The output of ICP is used as the Trans_matrix from IPU1 to IPU2.
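A rough sketch of this pipeline using Open3D as a stand-in for the authors' implementation; the crop bounds and thresholds are placeholders, and the manual selection steps are approximated here by a fixed pass-through region.

import numpy as np
import open3d as o3d

def register_ipu(src, tgt, init=np.eye(4)):
    def preprocess(pcd):
        # Statistical outlier filter and voxel grid filter.
        pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
        pcd = pcd.voxel_down_sample(voxel_size=0.2)
        # RANSAC plane fit to drop the ground points.
        _, ground = pcd.segment_plane(distance_threshold=0.2,
                                      ransac_n=3, num_iterations=1000)
        pcd = pcd.select_by_index(ground, invert=True)
        # Pass-through crop: keep only the (placeholder) building region.
        box = o3d.geometry.AxisAlignedBoundingBox([-150, -150, 2], [150, 150, 60])
        return pcd.crop(box)

    result = o3d.pipelines.registration.registration_icp(
        preprocess(src), preprocess(tgt),
        max_correspondence_distance=1.0, init=init,
        estimation_method=o3d.pipelines.registration
            .TransformationEstimationPointToPoint())
    return result.transformation  # the Trans_matrix from IPU1 to IPU2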
The raw data are available in the IPU1 and IPU2 data folders. Each IPU folder includes the point cloud data and binocular camera data; related sensor parameters can be found in calib_file.txt. In order to make data processing flexible, this paper also provides the registered, filtered point clouds in the PCD_Sync_Anno folder (the point cloud coordinate system is consistent with the IPU1 point clouds, and the 3D labels are in the same coordinate system). This data structure provides the greatest possible convenience for IPS algorithm research. It is worth mentioning that besides the problem of multimodal fusion within the same IPU, the multi-spatial fusion of different IPUs is also a novel but significant problem for CVIS. All these problems can be studied on our IPS300+ dataset.

E. Ground Truth Labels (Statistics)

The first batch of open-source labeled data includes 623 frames from the evening rush hour, which has the most intensive traffic flow and the highest risk of accidents. The remaining data are published as unlabeled data, and the labels will be opened to the public upon the release of this paper. The labeled data include 7 categories: pedestrian, cyclist, tricycle, car, bus, truck and engineering car. In order to facilitate data processing, this paper provides label documents consistent with KITTI, with some adjustments made according to the special scenario of our dataset. The details can be found on our website. The labeling task uses the Lidar point clouds and the related images at the same time. The 3D bounding box of each target is selected in the point clouds with the help of reference images, and the bounding box is projected to each camera plane through the external parameters.

Fig. 6 shows some statistical results for the labeled targets. Fig. 6(a) shows that in the intersection of the IPS300+ dataset, the main traffic participants are cars, cyclists and pedestrians. Figs 6(b) and 6(c) show that the numbers of pedestrians and vehicles are about 40-60 and 250-270 per frame, respectively. An on-board perception system cannot guide autonomous vehicles under such numbers of targets due to its limited perspective and computing power, which makes CVIS indispensable. Fig. 6(d) shows that the orientations of vehicles are mostly parallel to the two branches of the intersection, which can be important prior knowledge for the design of intersection perception algorithms. Table II shows the comparison between IPS300+ and other open-source datasets. The average numbers of pedestrians and vehicles in each frame of our dataset reach 56.8 and 263.0, exceeding those of the largest existing dataset by factors of 3.4 and 9.9, respectively. The severe occlusion problem in onboard perception makes this intersection a typical scenario for CVIS study.

IV. EXPERIMENTS

Target detection is one of the most challenging tasks in the research of autonomous perception. Two main tasks are provided in this paper: Lidar-based detection and camera-based detection (tracking tasks will also be available after ID checking). For the 3D target detection task, this paper uses the AP metrics (AP3D and APBEV), consistent with KITTI [1,22], to evaluate the performance of detection algorithms. The IoU threshold is set to 0.25 for pedestrians, 0.5 for cyclists, and 0.7 for the other five types of vehicles.

A. 3D Detection by Lidar

After sufficient research on existing methods of Lidar-based target detection, this paper uses PointPillars [23] as a baseline because of its high efficiency and accuracy. In order to quantitatively analyze the coverage distance and accuracy of the IPS (both hardware and algorithms, as a perception system) in the actual intersection scenario, this paper uses 50 m as an interval to count the target detection results at different distances from the center of the intersection, i.e. the midpoint between the two IPUs. A map of the accuracy of vehicle detection over distance and a typical failure case are shown in Fig. 7. The quantitative results are summarized in Table III. It can be seen that for the vehicle detection task, the accuracy of target detection decreases with distance. The AP3D at a distance of 150 meters stays at 66.81%, i.e. the range of vehicle perception covers about 15 vehicles beyond the stop lines of the intersection. This top-down view of the intersection will provide rich information for related function designs such as intersection collision warning, vehicle trajectory planning and decision-making in autonomous driving, and traffic flow control and traffic light timing systems in ITS.
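The AP metrics above are defined on box IoU. As a simplified illustration (axis-aligned boxes only; the actual benchmark, like KITTI, evaluates rotated boxes), a bird's-eye-view IoU can be computed as follows:

def bev_iou(a, b):
    """a, b: (x_min, y_min, x_max, y_max) footprints in the ground plane."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two overlapping car footprints: IoU is about 0.65, just below the 0.7
# threshold used for vehicles, so this pair would not count as a match.
print(bev_iou((0, 0, 4.5, 1.8), (0.5, 0.2, 5.0, 2.0)))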
B. 3D Detection by Camera

The SOTA (state-of-the-art) algorithms for monocular 3D target detection can be divided into two categories. Stage-to-stage methods all assume that the XY plane of the camera coordinate system is parallel to the ground and that the bottom of the bounding box sits on the ground, such as [24][25][26][27][28]. Under this assumption, the output of the network is (h, w, l, x, y, z, θ), where θ is the yaw angle, and the roll and pitch angles are fixed to 0. However, on IPUs, the camera is tilted to get a top-down view of the whole intersection, so the XY plane of the camera coordinate system is not parallel to the ground. None of the roll, yaw and pitch angles of an object are 0, and these angles change with distance. Therefore, these methods require further modification for IPU tasks. Another SOTA approach, such as [29][30][31], generates pseudo point clouds from images, and this process requires dense depth maps for supervised training of the pseudo point cloud generation net. However, the acquisition of dense depth maps is extremely difficult for IPUs, since existing methods such as [32] are mainly based on the integration of adjacent frames. This works only when the Lidar is equipped on-board and moves over time, which is invalid for the Lidar stationed on an IPU. So, as far as we know, monocular 3D detection on IPUs is still an open question.

C. 2D Detection by Camera

This paper uses CenterNet [26] with ResNet-34 as the backbone for the 2D detection task, and the detection results are shown in Table IV. The APs for pedestrian, car and cyclist are similar, since the samples are sufficient for all categories in intersection scenarios. In actual IPS scenarios, the pedestrians on the opposite side of the intersection can be 70 m or more away from the IPU; a pedestrian occupies only about 50 pixels even when not occluded, and pedestrians in the intersection are usually heavily occluded and move in crowds. All these problems lead to the poor performance of 2D detection by camera, and there is still large room for optimization in both hardware and algorithms.

V. CONCLUSION AND FUTURE WORK

In this paper, we present the first multimodal dataset for roadside perception in large-scale urban intersections, and provide high-quality open-source data for the 3D detection task in CVIS. This dataset provides a top-down view of the crowded intersection and addresses the full-participants perception task through an IPS. The perception results can be sent to cars entering the intersection for beyond-visual-range perception and the corresponding decision-making and planning processes. Compared with existing datasets in autonomous driving, the scenario in IPS300+ is extremely complex and has severe occlusion problems, and our dataset has the highest label density so far. The current website of our dataset includes 3D object detection tasks based on Lidar and cameras, and provides a ranking list of the corresponding algorithms. In the future, we will continue releasing data from different times. After the ID checks for targets are completed, ID documents for 5 Hz, 20 s fragments of the published data will be released for the 3D multi-target tracking task. The unlabeled data of a car entering the intersection at the same time will also be opened to the public upon the release of this paper. The on-board sensors include an RS-Ruby 128-layer Lidar and a Basler 2MP camera. The label documents for those data will also be published soon.
2021-06-08T01:16:31.807Z
2021-06-05T00:00:00.000
{ "year": 2021, "sha1": "ebe5bb169c539cf84c2a38e9f01863c9984a1e35", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "ebe5bb169c539cf84c2a38e9f01863c9984a1e35", "s2fieldsofstudy": [ "Engineering", "Computer Science", "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
18870376
pes2o/s2orc
v3-fos-license
Chronic Heat Stress Weakened the Innate Immunity and Increased the Virulence of Highly Pathogenic Avian Influenza Virus H5N1 in Mice

Chronic heat stress (CHS) can negatively affect the immune response in animals. In this study we assessed the effects of CHS on host innate immunity and avian influenza virus H5N1 infection in mice. Mice were divided into two groups: CHS and thermally neutral (TN). The CHS treatment group exhibited reduced local immunity in the respiratory tract, including a reduced number of pulmonary alveolar macrophages, as well as lesions in the nasal mucosa, trachea, and lungs. Meanwhile, CHS retarded dendritic cell (DC) maturation and reduced the mRNA levels of IL-6 and IFN-β significantly (P < .05). After the CHS treatment, mice were infected with H5N1 virus. The mortality rate and viral load in the lungs of the CHS group were higher than those of the TN group. The results suggest that the CHS treatment significantly suppressed local immunity in the respiratory tract and innate host immunity in mice, and moderately increased the virulence in H5N1-infected mice.

Introduction

Heat stress can negatively affect an animal's growth performance and its immune competence against some bacterial or viral infections [1][2][3][4][5][6]. It has been reported that heat stress results in decreased weights of both primary and secondary lymphoid organs, altered profiles of circulating leukocytes and T cells in the blood, and reduced antibody responses to sheep red blood cells (SRBCs) or against Newcastle disease [7][8][9][10]. Our previous studies have demonstrated that chronic heat stress conditions negatively affect both humoral and cellular responses against foot and mouth disease virus (FMDV) in mice [2]. However, there have been few detailed studies addressing the effects of CHS on the innate immune response as the most immediate defense against viral infection. The H5N1 subtype of highly pathogenic avian influenza viruses (HPAIVs) causes infections in domestic poultry and humans. It is epidemiologically characterized by wide dissemination and rapid spread, and is a threat to public health. So far, human cases of H5N1 infection worldwide have increased to 522, including 309 deaths (http://www.who.int/csr/disease/avian_influenza/country/cases_table_2011_02_25/en/index.html). The data from the WHO demonstrate that most human cases of H5N1 infection occurred in tropical and subtropical countries, such as Egypt, Vietnam, and Indonesia, rather than in high-latitude countries. In mainland China, outbreaks of H5N1 virus in poultry and human cases mainly occurred south of the Yangtze, in areas with high humidity and temperature, including Guangdong, Guangxi, and Fujian provinces [11]. One may speculate that high temperature is associated with a higher susceptibility of humans or poultry to H5N1 virus. It has also been reported that large temperature changes frequently occurred one week before avian influenza outbreaks in China [12]. Even in winter, heat stress often occurs in poultry under conditions of crowding, heating, and poor ventilation. When such flocks are exposed to H5N1 virus, outbreaks of avian influenza may occur. In this study, we measured the effects of CHS on innate host immunity by examining local immunity in the respiratory tract, maturation or activation of DCs, and cytokine levels in the spleens of mice under CHS or TN conditions.
Then we examined the mortality rate, histopathology, and viral loads in the lungs of CHS mice challenged with H5N1 virus. We demonstrated that heat stress could increase the susceptibility of animals to the highly pathogenic avian influenza virus (HPAIV) H5N1.

Materials and Methods

2.1. Virus. The H5N1 influenza virus (A/Chicken/Henan/1/04) used in this study was isolated from infected chicken flocks. This isolate was highly pathogenic in poultry, mice, and Madin-Darby canine kidney (MDCK) cells. The virus was adapted in MDCK cells for convenience and propagated in cell culture at 37°C for 48 hours. The viral supernatant was harvested, aliquoted, and stored at −80°C. The viral titers were determined by plaque assay as described previously [13].

Animals. Female BALB/c mice 8-10 weeks old were obtained from Vital River Laboratories (Beijing, China), while the original breeding pairs were purchased from Charles River (Beijing, China). Mice were raised in independently ventilated cages and received pathogen-free food and water. Experimentation with animals was governed by the Regulations of Experimental Animals of Beijing Authority and approved by the Animal Ethics Committee of the China Agriculture University.

Chronic Heat Stress Model. Mice were randomly divided into two groups: CHS and TN. Mice in the CHS group were placed in a biological oxygen demand (BOD) incubator for 21 days and subjected to chronic heat exposure for 4 h per day at a temperature of 38 ± 1°C, simulating high summer temperatures [14]. Mice in the TN and control groups were kept in an incubator at 24 ± 1°C to simulate room temperature. Mice were sacrificed at various time-points and the lung and spleen from each mouse were collected.

Viral Challenge and Sample Collection. The virus stocks were diluted in phosphate-buffered saline (PBS). Mice were anesthetized intramuscularly with Zoletil (Virbac, France) at a dose of 15 mg/kg body weight and were then infected intranasally with 50 plaque-forming units (PFUs) of H5N1 virus. Mice were sacrificed at various time-points and the lung and spleen from each mouse were collected and stored in liquid nitrogen until required.

Histopathological and Immunohistochemical Analysis. Tissues were removed and fixed with 4% neutral formalin at room temperature for 48 h. Serial tissue sections were cut to 5 μm thickness after embedding in paraffin. Each slide was stained with hematoxylin and eosin (H&E) and then examined by light microscopy (Olympus CX31). Examination of influenza viral antigen in the tissue samples was performed by immunohistochemical analysis. Sections were incubated in 10% normal goat serum in PBS for 30 minutes to block non-specific binding sites before being reacted with the anti-influenza nucleoprotein mAb (AA5H, Abcam) at a 1:1000 dilution in PBS for 2 h. The slides were further incubated with goat anti-mouse IgG conjugated with avidin (Chemicon, USA) for 1 h, followed by incubation with biotinylated peroxidase (Victoria, BC, Canada) for an additional 1 h. Staining was visualized by the addition of 3,3′-diaminobenzidine (DAB; Sigma, St Louis, MO, USA) for 15 min, counterstained with haematoxylin and mounted with neutral balsam.

Quantitative PCR (qPCR). Total RNA was prepared from 10 mg lung tissue homogenized in Trizol (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's instructions. The DNaseI-treated RNA (0.2 μg) was reverse transcribed into cDNA.
Quantification of expression of the hemagglutinin (HA) gene of H5N1 influenza virus was conducted using a Power SYBR Green PCR Master Mix kit (ABI). The following primers were used in the qPCR: forward primer, 5′-CGC AGT ATT CAG AAG AAG CAA GAC-3′; and reverse primer, 5′-TCC ATA AGG ATA GAC CAG CTA CCA-3′. The reaction was run on an ABI 7500 thermal cycler with an initial denaturation step at 95°C for 10 minutes, followed by 40 cycles of 95°C for 15 seconds, 56°C for 30 seconds, and 72°C for 40 seconds. Data analysis was performed using 7500 software v2.0 (ABI). The copy number of the HA gene was calculated using an HA-containing plasmid of known concentration as a standard. Quantification by qPCR was also performed for five other genes (IL-6, IL-10, IFN-β, TNF-α, and HSP70). These reactions were carried out on an ABI 7500 with initial denaturation at 95°C for 10 min, then 40 cycles of denaturation at 95°C for 15 s, annealing at 50°C for 30 s, and extension at 72°C for 40 s, using a Power SYBR Green PCR Master Mix kit. Gene expression was normalized to that of the TN group using the 2^(-ΔΔCT) method with β-actin as an internal standard [15].

Bronchoalveolar Lavage (BAL) Collection and Pulmonary Alveolar Macrophage (PAM) Counting. At 21 days after exposure to 38 ± 1°C or 24 ± 1°C, the mice were euthanized with a lethal dose of pentobarbital. BAL fluid was collected twice using 0.8 mL saline from the main stem bronchus. The BAL collected from one mouse was pooled and centrifuged, and the cells were resuspended in 100 μL saline containing 0.1% BSA. Cells were stained with Giemsa, and cell types were identified by morphological criteria. Two hundred cells were examined per slide for the pulmonary alveolar macrophage (PAM) count.

Statistical Analysis. Statistical analysis was performed using one-way ANOVAs with SPSS 12.0 (SPSS Taiwan Corp., Taiwan), and P values less than .05 were considered statistically significant.
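For readers reimplementing the relative-expression analysis, the following is a minimal sketch of the 2^(-ΔΔCT) rule described in the qPCR paragraph above; the Ct values used in the example are illustrative only, not data from this study.

def fold_change(ct_target_chs, ct_actin_chs, ct_target_tn, ct_actin_tn):
    d_ct_chs = ct_target_chs - ct_actin_chs  # normalise to beta-actin
    d_ct_tn = ct_target_tn - ct_actin_tn
    dd_ct = d_ct_chs - d_ct_tn               # calibrate to the TN group
    return 2 ** (-dd_ct)

# A gene whose Ct rises by one cycle relative to the calibrator is halved:
print(fold_change(26.0, 18.0, 25.0, 18.0))  # 0.5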
Results

Chronic Heat Stress Reduced Local Immunity in the Respiratory Tract of Mice. To determine the effect of CHS on local immunity in the respiratory tract of mice, the histopathological changes in the nasal mucosa, trachea, and lungs of the mice in each group after exposure to 38 ± 1°C or 24 ± 1°C for 21 days were examined (Figure 1(A)). The nasal mucosa lesions of mice in the CHS group exhibited sebaceous gland hyperplasia (Figure 1(A)(b)). The tracheal lesions in the CHS mice were characterized by a reduction in tracheal epithelial cells, and many erythrocytes had infiltrated the tube cavity of the trachea (Figure 1(A)(d)). Changes in lung pathology included a reduction and necrosis of mucous epithelium cells in the bronchioles and alveolar expansion (Figure 1(A)(f)). In comparison, the nasal mucosa, trachea, and lungs from the TN group exhibited no apparent histological changes (Figures 1(A)(a), (c), and (e)). In a previous study, heat stress was characterized by the production of HSPs, especially HSP70 [16]. To determine the mRNA expression of HSP70, we used qPCR analysis. HSP70 expression in the lungs of mice was induced at one day after exposure to 38 ± 1°C, peaking at day 5 and then returning to normal levels at day 21 (Figure 1(B)). The BAL from the mice following exposure to 38 ± 1°C or 24 ± 1°C for 21 days was collected for PAM counting. The number of PAMs in the CHS group was significantly lower than that in the TN group (Figure 1(C)). These results suggest that CHS can reduce local immunity in the respiratory tract of mice and also inhibit expression of HSP70.

CHS Weakened the Innate Immunity in Mice. The maturation or activation of DCs is very important for innate immunity and subsequent adaptive immunity. The expression of the costimulatory markers CD40, CD86, and CD80 at the DC surface correlates with their capacity to induce or suppress immune responses, and MHC-II expression is important in the initiation of an immune response [17]. DC maturation in mice exposed to 38 ± 1°C or 24 ± 1°C for 21 days was determined by FACS (Figures 2(A) and 2(B)). The MHC-II, CD40, CD80 and CD86 expression levels in the CHS group were lower than those of the TN group, especially CD86 and CD40 (P < .05). Previous studies have shown that IL-6, IL-10, and IFN-β play critical roles in innate immune responses [18][19][20]. To determine the effect of CHS on the innate immunity associated with cytokine expression in mice, the mRNA levels of IL-6, IL-10, and IFN-β in the spleen after exposure to 38 ± 1°C or 24 ± 1°C were examined. The IL-6 and IFN-β mRNA levels of the CHS group were downregulated significantly compared with the TN group, while IL-10 was significantly upregulated (P < .05) (Figure 2(C)). The mRNA level of TNF-α was also measured, but there was no significant difference between the two groups (data not shown). These data suggest that CHS could adversely affect the maturation of DCs and innate immunity in mice.

CHS Increased the Virulence of HPAIV in Infected Mice. To determine whether CHS could impact upon HPAIV H5N1 infection, the mice exposed to 38 ± 1°C or 24 ± 1°C for 21 days were challenged with 50 PFU of H5N1 virus. The lung tissues of five mice per group were collected at day 3 postinfection for real-time PCR, plaque assay, and histopathological analysis. Six mice per group were observed for mortality for 14 days. Results showed that the mice in the CHS group had a higher mortality rate (67%) than those in the TN group (33%) (Figure 3(a)), although the difference was not statistically significant by log-rank analysis, possibly due to the small sample size. To examine the viral load in the lungs of the infected mice, qPCR and plaque assays were performed. The pulmonary viral load in the mice of the CHS group was significantly higher than that of the TN group at day 3 postinfection (Figures 3(b) and 3(c)). The H5N1 viral antigen (NP) was detected extensively in the exfoliated alveolar cells and mucosal epithelium cells in the lung (Figure 3(e)). Compared with the TN group, more H5N1 viral antigen-positive cells were found in the lungs of mice after CHS treatment, consistent with the results of the qPCR and plaque assays. The multiple functions of inflammation induced by H5N1 infection potentially affect virus clearance from the host. To evaluate the effect of CHS on inflammation in infected mice, expression of the cytokines IL-6, IFN-β, and IL-10 in the lung was determined by qPCR at day 3 postinfection. The mRNA levels of IL-6 and IFN-β in the mice of the CHS group were significantly lower than those in the TN group (Figure 3(d)). The soluble IL-6 and IFN-β levels in serum were also measured by ELISA, and the results showed a similar trend to the mRNA levels (data not shown). The expression of HSP70 at day 3 postinfection was also determined by qPCR; the results did not show significant differences between the two groups (Figure 3(d)). These data indicated that CHS could moderately increase the mortality rate and viral load in the lungs of H5N1-infected mice by decreasing IL-6 and IFN-β expression.
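To see why a 4/6 versus 2/6 mortality difference fails to reach significance at this sample size, a Fisher's exact test on final mortality can serve as a simple stand-in for the log-rank analysis reported above:

from scipy.stats import fisher_exact

# rows: CHS, TN; columns: died, survived
odds_ratio, p = fisher_exact([[4, 2], [2, 4]])
print(odds_ratio, p)  # p is about 0.57: far from significance with n = 6 per group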
To determine the degree of lung lesions in the infected mice, histopathological changes in the lungs of mice in each group were examined (Figure 4). The lungs of the CHS group mice showed severe alveolitis, peribronchiolitis, and bronchopneumonia, characterized by interstitial edema and inflammatory cellular infiltration around small blood vessels, reduction and necrosis of the mucous epithelium of bronchioles, thickening of alveolar walls, and alveolar lumens flooded with edema fluid mixed with exfoliated alveolar epithelial cells, erythrocytes, and inflammatory cells (Figure 4(c)). The tracheas of mice in the CHS group exhibited lesions characterized by exfoliated tracheal mucous epithelium cells and a number of erythrocytes and inflammatory cells infiltrating the tube cavity of the trachea (Figure 4(d)). In mice of the TN group, the lung and trachea lesions were dramatically milder than those observed in the CHS group. The lung lesions in the TN group mice showed peribronchiolitis with interstitial edema and inflammatory cell infiltration around small blood vessels, along with edema and reduction of mucous epithelial cells of the bronchioles (Figure 4(a)). The trachea lesions of mice in the TN group mainly demonstrated erythrocyte infiltration (Figure 4(b)). These results suggested that exposure to 38 ± 1°C for 21 days aggravated lung and trachea lesions in H5N1-infected mice.

Discussion

In a previous study, we reported that CHS could retard the adaptive immune response by suppressing both humoral and cellular responses [2]. In this study we have demonstrated that CHS can significantly suppress host innate immunity in mice and moderately increase the mortality of H5N1-infected mice. CHS significantly retarded systemic and local innate immune responses, reducing the number of PAMs, DC maturation, and the mRNA levels of IL-6, IFN-β, and HSP70, and caused respiratory system lesions. When mice were infected with H5N1 virus after CHS conditioning, the mortality rate and viral load in the lungs increased. Lung histopathology results of infected mice further confirmed that CHS could moderately increase virulence in H5N1-infected mice. The respiratory mucosa is the first barrier encountered by invading microorganisms. The function and immunity of the respiratory system are closely related to the mucosal architecture, PAMs, and other local immune cells or proteins. In the present study, we showed that CHS could reduce local immunity in the respiratory tract of mice by destroying the structural integrity of the nasal mucosa, trachea, and lungs and reducing the number of PAMs. The results suggested that CHS could retard local immunity in the respiratory system of mice and render them susceptible to secondary bacterial or viral infections. Heat stress is often characterized by the production of HSPs. The HSPs, in particular Hsp27, Hsp32, Hsp60, and Hsp70, have an important cytoprotective role during lung inflammation and injury [16], as they can activate the innate immune system, linking innate and adaptive immune responses [21,22]. Previous studies also indicated that HSPs play an immunoregulatory role in chronic inflammation [23,24]. The expression of HSP70 under chronic heat stress in this study increased on day 1, peaked on day 5, decreased from day 10, and returned to the normal level on day 21 after exposure to 38 ± 1°C, suggesting that the lack of HSP70 may be associated with the suppression of innate immunity by CHS.
DCs are the most potent antigen-presenting cells (APCs), and activation of DCs is a crucial and early event of immune regulation [25]. The expression of the costimulatory molecules (CD40, CD86, and CD80) at the DC surface correlates with their capacity to induce or suppress immune responses, and the MHC-II molecules expressed on the surface of APCs are important in the initiation of an immune response [17]. In this study, we observed lower expression levels of MHC-II, CD40, CD80, and CD86 in CHS mice compared to TN mice, indicating that CHS weakened antigen processing not only in APCs but also in the costimulatory pathway, which in turn leads to weaker immune responses. IL-6 and IFN-β play critical roles in the development and maintenance of innate immune responses [18,19,26]. IL-10 is considered a suppressive cytokine known to inhibit Th1 cytokine expression [20,27,28]. We have shown that CHS downregulates IL-6 and IFN-β expression and upregulates the levels of IL-10. These data further demonstrate that CHS dampened innate immunity. The H5N1 influenza virus can cause severe respiratory symptoms in humans, with clinical manifestations that include fever, diarrhea, viral pneumonia, encephalitis, and acute respiratory distress syndrome. High levels of proinflammatory cytokines, including TNF-α, IL-6, and CC chemokine ligand 2 (CCL2), have been detected in human cells and mice infected with highly pathogenic H5N1 influenza virus [29,30]. A virus-induced cytokine storm has been widely hypothesized to be the main cause of pathology and death during H5N1 infection. There are numerous reports on the use of therapeutic agents to control inflammatory responses in influenza infection [31]; however, it is not clear whether they are effective in controlling virus replication. Without antiviral agents, knockout mice deficient in TNF-α, TNF-α receptor 1, TNF-α receptor 2, IL-6, chemokine (C-C motif) ligand 2, MIP-1α, or IL-1R, as well as steroid-treated wild-type mice, did not have a survival advantage over wild-type mice following influenza virus challenge [32,33]. On the other hand, a type I interferon response has been demonstrated to contribute to the control of H5N1 virus replication [34,35]. The multiple functions of inflammation and cytokines are essential for virus clearance from the host. The antiviral immune response induced by H5N1 virus infection is complex, involving multiple classes of pattern recognition receptors (PRRs) and different signaling pathways. Although inflammation is widely suggested to be the main cause of death from H5N1 infection, the results from knockout mice deficient in a single cytokine are insufficient to clarify this complex process. In this study, our results suggest that CHS treatment retarded innate immunity, which may be one of the underlying causes of the increased death rate from H5N1 virus after CHS treatment. In summary, the data presented here demonstrate that CHS treatment significantly suppressed innate host immunity in mice and increased mortality following H5N1 infection. This study is the first to show that CHS affects systemic and local innate immune responses as well as the survival of H5N1-infected mice. Our results suggest that after prolonged exposure to elevated temperatures, the balance of the immune system is disrupted and H5N1 virulence is increased.
TOWARDS A DIFFERENT WORLD – ON THE POTENTIAL OF THE INTERNET OF EVERYTHING. Internet-based technologies are moving faster and faster into many spheres of our lives and are at the same time a key component of the ongoing technological revolution, which is why there are many ongoing scientific projects aimed at their development. The article presents a discussion on the development of Internet-based technologies known as the Internet of Everything (IoE). The paper presents the areas in which these technologies are most often used. A multi-layered reference model and a procedure for subsequent actions in designing innovative solutions in this area are presented. Introduction The history of humanity has been repeatedly shaped by the achievements of science and progress in the development of new technologies. Civilisation would not be at its current level of advancement if it were not for a series of innovative solutions from different areas of science gradually adopted over subsequent centuries. The invention in the past of things we consider basic today, such as the wheel, writing, paper or print, was undoubtedly an important point in our history. The discovery of paper and printing led to real revolutions in the area of communication and transfer of information between people. The discoveries of the 19th and 20th centuries were of great importance in the field of human communication. The invention of the telephone, car, radio or television revolutionised people's lives and, above all, made them easier. Today, these technologies are constantly evolving, introducing new solutions and services, which would not be possible without further inventions, such as computers and Internet technologies. The first computer in history is considered to be the ENIAC. As a curiosity, it is worth mentioning that it occupied an area of 167 m², consisted of 42 cabinets, reached over 2.6 meters in height and measured 24 meters in length. In turn, its total weight exceeded 27 tonnes [7,8]. By comparison, the smallest computer today, developed by Michigan scientists, is only 0.3 millimetres long [24]. This example shows how fast and effective the development of teleinformatic technologies can be today. Nowadays, the complexity of the infrastructure and the requirements placed on specialised systems force their designers to introduce interdisciplinary solutions from various fields of science, combining issues from computer science, telecommunication, electronics, electrical engineering and other areas [14,15,16]. The Internet of Things (IoT) concept is currently one of the fastest growing ICT technologies, with a significant impact on, and benefits for, science and the economy. These solutions are based on the idea of linking everyday objects into a computer network, mainly for the exchange, processing and analysis of data. Figure 1 presents a simple diagram visualizing the idea of IoT technology. The Internet of Things has been functioning in global solutions for many years, although not directly under its current name. So far, it has included the notions of telemetry, intelligent cities and buildings, sensor networks and other solutions based on technological applications of computer networks. The lack of detailed legal regulations and target standardisation in this area translates into the emergence of many separate terms for this concept used by scientific institutions and manufacturers of hardware or software.
This means that due to its scale, innovation and constant dynamic development, a clear definition of the IoT is not a simple task. In the world literature, the IEEE describes the concept of IoT as: "A network of items, each embedded with sensors, which are connected to the Internet" [10]. In the ITU recommendation [18] the Internet of Things is defined as: "A global infrastructure for the information society, enabling advanced services by interconnecting (physical and virtual) things based on existing and evolving interoperable information and communication technologies". Another noteworthy definition of IoT technology was proposed by the OASIS association: "System where the Internet is connected to the physical world via ubiquitous sensors" [4]. One step further in defining IoT was taken by the world leader in ICT, Cisco, which dubbed the discussed solutions the Internet of Everything (IoE). According to [3], the Internet of Everything is "bringing together people, process, data and things to make networked connections more relevant and valuable than ever before, turning information into actions that create new capabilities, richer experiences and unprecedented economic opportunity for businesses, individuals and countries". The Internet of Everything is a natural successor to the concept of the Internet of Things, successfully entering today into many spheres of our lives, and at the same time being a key component of the technological revolution known as Industry 4.0 [23], incorporating into the web everything that has not yet been connected. Connecting the unconnected The Internet of Everything is a network created with the use and adoption of existing solutions from, among others, the fields of computer science, ICT, sensorics, automation, electronics and data analytics. Everything that has not yet been incorporated into the global network infrastructure may soon become a part of it. This is largely due to the constant development of Internet protocols, which provide broad perspectives for the use of modules compatible with the Ethernet, TCP/IP, Wi-Fi or LoRaWAN standards. The application potential of IoT/IoE solutions is practically unlimited. It includes, for example: smart homes/buildings/cities, smart health solutions, smart businesses and industry, smart energy systems and grids, distributed metering systems and threat monitoring systems. A very important and at the same time popular issue is the application of the discussed technologies in environmental protection measures [17], related, for example, to monitoring air, soil or water pollution. The positive impact on the environment is also reflected in the low energy consumption of equipment implemented in IoT/IoE technologies, which makes the discussed solutions extremely important from the point of view of global problems. Other popular trends in the development of the described solutions are real-time object localization systems implemented in many branches of industry and business [1,11,12,13]. Table 1 shows the areas where IoT/IoE projects have the widest applications [20]. The presented data show that the trend associated with the concepts of Smart City and Connected Industry is decidedly dominant. The use of these technologies in Connected Building, Connected Car and Smart Energy is at a slightly lower level. Other industries use modern Internet-based technologies to a lesser extent.
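To make "connecting the unconnected" concrete, the sketch below shows a sensor node publishing a reading over TCP/IP using MQTT, one lightweight messaging protocol commonly used in IoT stacks. The broker address and topic are hypothetical assumptions; the text itself does not prescribe any particular protocol:

```python
# Minimal sketch of an IoT sensor node publishing a reading via MQTT.
# Broker host and topic names are illustrative placeholders.

import json
import time

import paho.mqtt.client as mqtt  # pip install paho-mqtt

client = mqtt.Client()  # paho-mqtt 1.x style; 2.x requires a CallbackAPIVersion argument
client.connect("broker.example.org", 1883)  # hypothetical MQTT broker

reading = {"sensor_id": "pm25-01", "pm25_ugm3": 12.4, "ts": time.time()}
client.publish("city/air-quality/pm25", json.dumps(reading), qos=1)
client.disconnect()
```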
A large set of Internet of Everything devices, combined in a single global network infrastructure, creates a powerful tool that can be adapted to different needs. However, this requires the integration of interdisciplinary solutions from the fields of, among others, sensors, electronics, wireless communication, automation, information distribution networks, data analytics and machine learning. The combination of all or some of these elements enables the discovery of new knowledge. Gathering the knowledge One of the intentions of the implemented IoE solutions is to acquire large amounts of data, which after acquisition and initial processing are sent through the web to data centres for further processing and practical use. In the era of today's technological solutions, especially omnipresent video monitoring [9], these are usually very large data sets, to whose analysis conventional statistical methods are not applicable. In addition, the literature indicates that the amount of data sent over the web will increase significantly in the coming years. According to the report Cisco Global Cloud Index: Forecast and Methodology [5], global data centers in 2021 will process around 20 zettabytes of data per year, and this number will increase annually by an average of 20-25%. These data may arrive unexpectedly and in various forms; they may be, for example, unstructured and/or non-relational, and in addition they are not always reliable. Therefore, an important challenge for data analytics is to extract useful knowledge from the generated stream of diversified and difficult-to-interpret information sets. For this reason, the concepts of the Internet of Everything are usually based on innovative issues related to Big Data analytics. Big Data involves working with data that is compatible with the 4V model (Volume, Velocity, Veracity, Value) [19]. In the context of Big Data analytics methods, cloud computing (SaaS, PaaS), distributed file systems (Hadoop), parallel processing platforms (MapReduce), non-relational databases (noSQL), machine learning and artificial intelligence are also important. The way in which we obtain, send, store, secure and interpret information is the primary task of Big Data. In addition, Big Data methods have changed the approach to the Analytics Lifecycle, as presented in Figure 2. Fig. 2. Classic and Big Data schemes for the Analytics Lifecycle From the traditional Save → Analyse → Notify → Act scheme we move to the much more challenging scheme: Analyse → Act → Notify → Save [2]. The difference in the approach to Big Data issues relative to conventional methods is apparent, among other things, in the preliminary analysis of data performed already at the acquisition stage. Only data relevant to the examined problem are saved. All other data can be filtered out using network edge elements, e.g. single-board computers (SBCs) such as the Raspberry Pi, as sketched below. This is the responsibility of the Edge Computing layer of the multilayer reference model developed for Internet of Everything technologies, shown in Figure 3 [21]. The Physical Devices and Controllers layer is a collection of all "things" connected to the IoE infrastructure. These include, for example: precise real-time object location systems, video systems and various sensor networks.
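A minimal sketch of the Analyse → Act → Notify → Save scheme on an edge device follows; the threshold, the sample stream and the helper names are illustrative assumptions rather than part of the reference model:

```python
# Edge filtering: readings are analysed as they arrive, and only those
# relevant to the examined problem are kept and forwarded upstream.

ALERT_THRESHOLD = 50.0  # hypothetical pollution limit

def handle(reading, store, notify):
    if reading["value"] <= ALERT_THRESHOLD:
        return                      # Analyse: irrelevant data are filtered out
    notify(f"limit exceeded at {reading['sensor_id']}")  # Act / Notify
    store.append(reading)           # Save: only relevant data reach the cloud

saved = []
stream = [{"sensor_id": "s1", "value": 12.0}, {"sensor_id": "s2", "value": 71.5}]
for r in stream:
    handle(r, saved, print)
print(saved)  # only the out-of-limit reading is retained
```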
Thanks to the functionality of the second layer, these elements can communicate with each other using dedicated network solutions, cooperating and generating large amounts of data that are pre-filtered at the level of the third layer. The filtered data are stored in the Data Accumulation layer and prepared for further applications and analysis in the Data Abstraction layer. Next, they are passed in a structured format to the Application layer, where they should be properly interpreted according to their specific applications. The top layer covers the different areas of application of the received data in the implemented solutions [21]. Towards another (better?) world A Sustainability Summit was held in September 2015 in New York City, where the international community adopted a new world development plan until 2030 in the form of the Sustainable Development Agenda (Agenda 2030) [22]. The new Agenda contains 17 sustainable development objectives and 169 tasks to be achieved. Many of the Agenda's guidelines pose new challenges to Internet of Everything technology. Figure 4 presents a workflow showing the successive sequences of activities aimed at using modern technologies from the IoE area in designing solutions for the needs of today's civilization. Fig. 4. IoE system design process [6] Analysing Figure 4, we can notice that the whole process of designing innovative solutions begins with inspiration drawn from real needs and with stirring the empathy of the audience. This means that one of the most important elements is a suitably attractive, needs-oriented idea. Then, using the latest achievements in the field of modelling, prototyping, implementation and testing of ICT systems and Internet of Things technologies, we should strive to achieve the intended objectives. It is important to note that current technical solutions in the field of sensor electronics (nanoelectronics), localization and communication technologies (e.g.: RFID-2gen, iBeacon, Cisco Hyperlocation, UWB, NB-IoT, LoRaWAN), microcontrollers and single-board computers (e.g.: Arduino, SBC Raspberry Pi, ESP32, Rock Pi, Banana Pi and others), secure IPv6 network technologies (VPN, ASA, Cisco ISR, Meraki AP), as well as the availability of platforms and programming libraries used in the implementation of machine learning algorithms (e.g.: TensorFlow, PyTorch, Keras, scikit-learn on GitHub-type public platforms) provide the possibility to create useful, competitive, flexible, scalable, secure and transparent IoE networks. In reference to the concepts presented above, and in order to meet human needs, visions of strictly pro-social projects are already being proposed. One example is the PRO-HOMINIS System (PROgressive Health-Oriented Motivation System), proposed by us as a novel, innovative IoE solution. This system is a concept of an intelligent tool activating its users to greater physical activity, which is also meant to contribute to the prevention of the civilisation diseases of our times. As the proposed PRO-HOMINIS name suggests, the system would motivate people to be more active by monitoring their behaviour, in a way forcing them to visit specific places on a digital map of the area. Another concept is TARGET ("Tracking Adopted, Routing Guide application for Effective Transport"). The task of this system would be to adapt the route strategy based on the technology of distributed sensor networks; a sketch of the kind of optimisation routine it could build on follows below. The system would represent a product innovation relative to existing GPS-based transport monitoring systems.
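As an illustration of the swarm-intelligence metaheuristics mentioned next, here is a compact particle swarm optimisation (PSO) routine. The cost function is a stand-in assumption, since neither TARGET's route encoding nor its objective is specified in the text:

```python
# A compact particle swarm optimisation (PSO) routine. A route-planning system
# could encode candidate routes as vectors and score them from sensor data.

import random

def pso(cost, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-10, 10) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # best position seen by each particle
    pbest_cost = [cost(p) for p in pos]
    g_i = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g_i][:], pbest_cost[g_i]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Stand-in objective: minimise the squared distance from the origin.
best, best_cost = pso(lambda x: sum(v * v for v in x), dim=3)
print(best, best_cost)
```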
Thanks to IoE solutions and metaheuristic optimization algorithms, including the latest developments in Swarm Intelligence, it would be possible to create a product enabling a reduction of the direct and indirect costs of passenger and freight transport, as well as of the energy consumption of currently operating transport fleets. The above-mentioned examples of projects utilising the Internet of Everything are clearly in line with the sustainable development objectives set out in the above-mentioned Agenda [22]. The current literature also describes many other interesting examples which can be successfully implemented in the near future and which will be targeted at certain groups or even entire societies. It should be remembered that science and technology, broadly understood, should serve primarily the common good, contribute to solving various social problems and achieve the intended objectives of specific social groups and individuals. At the same time, they should provide a wide range of ideas, concepts, solutions and innovations. Analysing the fate of humanity, one can easily point to many examples where science and technology were used in a way that threatened people. However, it is worth trusting that the Internet of Everything will bring spectacular benefits in solving both local and global socioeconomic problems. Conclusion Generally speaking, the idea of implementing IoE systems involves the acquisition of sensory information (data) by means of distributed networks; transformed by successive layers of the reference model, these data constitute knowledge that brings significant benefits to the beneficiaries of these systems. It should also be noted that the Internet of Everything is first and foremost about the "Internet of Ideas", because the measure of these benefits depends on properly set goals and proper implementation of the IoE architecture. A great advantage of the devices used in the IoE is their low energy consumption, which in itself is a measurable benefit for both the beneficiary and our environment. Currently, the market of services related to IoT/IoE in Poland is developing very dynamically, with the number of customers increasing every year. This changes the quality of life in society, but at the same time introduces new threats: in many cases there are problems with protecting the processed data. Therefore, in parallel with the development of IoT/IoE, a lot of cybersecurity research is being carried out. There are as yet no ideal solutions that could fully exploit the potential of Internet of Everything technologies. It is possible that such solutions will appear with the arrival of the 5G network, because the potential for IoE applications is immense. In December 1959, the renowned physicist Richard Feynman said the famous words "There's Plenty of Room at the Bottom", outlining the application potential of nanotechnology. Similarly, one could say: "There is plenty of room in the Internet of Everything".
Synthesis and Reactions of Furo[2,3-b]pyrroles Methyl 6H-furo[2,3-b]pyrrole-5-carboxylate (2a) was prepared by thermolysis of the corresponding methyl 2-azido-3-(3-furyl)propenoate (1). The 6-methyl (2b) and 6-benzyl (2c) derivatives were obtained under phase-transfer catalysis (PTC) conditions. Formylation of 2a-2c gave the 2-formylated compounds (3a-3c). Compounds 4b and 4c were prepared by reactions of the corresponding esters 2b and 2c with hydrazine in refluxing ethanol. By reaction of 3a-3c with hydroxylammonium chloride in acetic anhydride in the presence of pyridine, methyl 2-cyano-6-R1-furo[2,3-b]pyrrole-5-carboxylates (5a-5c) were obtained. The reaction of these compounds with sodium azide and ammonium chloride in dimethylformamide led to methyl 2-(5'-tetrazolyl)-6-R1-furo[2,3-b]pyrrole-5-carboxylates (6a-6c). A series of 5-methoxycarbonyl-6-R1-furo[2,3-b]pyrrole-2-carbaldehyde N,N-dimethylhydrazones (7a-7c) was prepared from methyl 2-formyl-6-R1-furo[2,3-b]pyrrole-5-carboxylates (3a-3c) and unsym-dimethylhydrazine. The correlation of the 13C and 15N chemical shifts with the calculated (AM1) net atomic charges is discussed. Introduction Furo[2,3-b]pyrroles (2a-2c) and their positional isomers furo[3,2-b]pyrroles (8a-8c) belong to the A,B-diheteropentalenes, which possess differing degrees of aromaticity based upon chemical behaviour such as their ability to undergo substitution reactions with electrophilic reagents. A,B-diheteropentalenes rank among the electron-rich heterocycles, but a quantitative measurement of their aromaticities is less easily determined [1]. The wide range of potential criteria available for this purpose has been surveyed [1,2]. Most of the available criteria point to an order of decreasing aromaticity of the 1,4 > 1,6 ring system, which is influenced by the heteroatom in the order S>Se≥N>O. Substituents attached to the A,B-heteropentalene structures can strongly influence the aromaticity. Until recently, only 1,6-diheteropentalenes containing S and Se heteroatoms had been studied [1,2] and only a few derivatives of the furo[2,3-b]pyrrole system had been prepared [3]. The parent furo[2,3-b]pyrrole has not been reported. In the past we were interested in syntheses and studies of the reactions of furo[3,2-b]pyrroles and their benzo or dibenzo derivatives [4][5][6][7][8][9]. In continuation of our programme aimed at developing efficient syntheses of fused oxygen-nitrogen-containing heterocycles, we report here the synthesis of methyl furo[2,3-b]pyrrole-5-carboxylate (2a) and its utilization in synthesis. Our main interest is a comparison of the behaviour of the 1,6-O,N-diheteropentalene system (2) with the isomeric 1,4-system (8).
Results and Discussion Reaction of 3-furancarbaldehyde with methyl azidoacetate in the presence of sodium methoxide proceeded smoothly to give the azide 1; thermolysis of 1 in boiling toluene led to compound 2a. Phase-transfer catalysis was found to be successful for the methylation and benzylation of 2a, giving the derivatives 2b and 2c (Scheme 1). Under Vilsmeier conditions, the compounds 2a-2c gave the 2-formylated products 3a-3c. By refluxing the compounds 2b and 2c with hydrazine in ethanol, the corresponding hydrazides 4b and 4c were formed. Our experiments to synthesize 6H-furo[2,3-b]pyrrole-5-carboxyhydrazide (4a) under the conditions used for the preparation of 4b and 4c were unsuccessful. The reaction of 3a-3c with hydroxylammonium chloride in acetic anhydride in the presence of pyridine at 90°C gave the corresponding cyano-substituted compounds 5a-5c. The reaction of the compounds 5a-5c with sodium azide and ammonium chloride in dimethylformamide led to the tetrazoles 6a-6c. The N,N-dimethylhydrazones 7a-7c were prepared from the aldehydes 3a-3c and unsym-dimethylhydrazine in refluxing toluene, using a catalytic amount of 4-methylbenzenesulfonic acid. During the synthesis and reaction studies of both systems we found that the 1,4 system (8) is more stable than its 1,6 positional isomer (2). This empirical conclusion is in agreement with the results of AM1 semiempirical MO calculations that we have carried out for the parent furopyrroles and the ester derivatives 2a and 8a. Figure 1 shows the calculated properties for the methyl esters 2a and 8a, including their heats of formation (ΔHf). The 1,4 system (8a) is calculated to be thermodynamically more stable than the 1,6-isomer (2a). The 1,4-system (8a) is also calculated to have a significantly larger dipole moment (μ) (Figure 1), which may result in greater solvent stabilisation. Comparable results were obtained for the unsubstituted heterocycles (Figure 2). Calculated net atomic charges and molecular geometries are given in Tables 4-7. The calculated ionisation potentials (using Koopmans' theorem) (Figure 1) are consistent with the classification of these heterocycles as electron-rich.
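For reference, Koopmans' theorem approximates the first (vertical) ionisation potential by the negative of the energy of the highest occupied molecular orbital obtained from the SCF calculation:

```latex
\mathrm{IP} \approx -\varepsilon_{\mathrm{HOMO}}
```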
The 13C chemical shifts of 2a-2c and 8a-8c are reported in Table 1; the 1H and 13C NMR data of the other compounds are given in the experimental part. The differences between the 13C chemical shift values of the corresponding carbon positions of compounds 2a and 8a and the carbons of furan [13] and methyl 2-pyrrolecarboxylate [14] (Table 2) show that in the 1,4-isomer 8a the differences are greater than in 2a. In 8a, carbon C-2 shows a downfield shift (Δδ = 5.09 ppm) and C-3 an upfield shift (Δδ = -11.51 ppm), as does C-6 (Δδ = -18.17 ppm). This demonstrates that the electron density of both compared systems changes due to the annelated ring interaction, but that the effect of the annelated ring is greater in the case of the 1,4 system. An analogous upfield shift was observed in 1H,4H-pyrrolo[3,2-b]pyrrole [15]. In order to make a direct comparison of both types of furopyrroles, we carried out a correlation of the 13C and 15N chemical shifts (Tables 1 and 3) with the net atomic charges calculated using the AM1 method (Tables 4 and 6). In compounds 2a-2c the signals of C-2 and C-5 appear at higher magnetic field, and those of C-3 and C-4 at lower field, in comparison with the corresponding carbons in 8a-8c (Table 1). The relative values of the calculated net atomic charges for the parent systems (Table 4) and the esters 2a and 8a (Table 6) are in good agreement with these experimental data. The comparison of the 13C chemical shifts of the substituted furo[2,3-b]pyrroles shows that the greatest effect of substituents in the 2-position was observed at C-2 and C-3, analogous to the 2-substituted furans [13] and the 1,4-O,N-system [16]. For compounds 2a and 8a, the 3J(15N,H) coupling constants were obtained directly from the spectra. Selective excitation was applied to prove that the 3J(15N,H) coupling constants are due to the proton on the pyrrole ring; 60 and 100 ms evolution times were used for the other compounds, and the spectral patterns measured were compared with simulated ones using the SIMEPT programme [17]. It was assumed, taking the data for compounds 2a and 8a into account, that the greater coupling constants were due to interaction with the proton on the pyrrole ring. The slightly larger negative values of the 15N chemical shifts in 2a-2c compared to 8a-8c agree with the relative values of the calculated (AM1) negative charges on nitrogen (Tables 4 and 6). The configurational assignment of the substituents on the double bond of the hydrazone 7a was determined from the 15N NMR spectra using the stereospecific coupling constant 2J(15N,H-7). The orientation of the lone pair of the nitrogen relative to the corresponding proton has a marked effect on the value of the respective coupling constant. The comparison of the coupling constant 2J(15N,H-7) = 6.5 Hz with those of model compounds in refs. [18,19] confirms the E-isomer of 7a. The same configuration was determined for some hydrazones in our previous paper [20]. 13C chemical shifts were referred to internal TMS (δ = 0.00). 15N NMR spectra were measured using non-refocused INEPT [21]. The evolution time used was 2.6 ms for compounds 2a and 8a and 60 and 100 ms for the other compounds. 15N chemical shifts were referred to external nitromethane (δ = 0.0) placed in a coaxial capillary.
Negative values of chemical shifts denote upfield shifts with respect to the standards. Melting points were determined on a Kofler hot plate apparatus and are uncorrected. UV spectra were measured on an M-40 (Carl Zeiss, Jena) spectrophotometer in methanol [λmax (log ε); λmax in nm, ε in m² mol⁻¹]. The IR spectra were taken on an FTIR PU 9802/25 (Philips) spectrophotometer using the KBr technique (0.5 mg in 300 mg KBr, ν in cm⁻¹). Molecular orbital calculations were carried out using the AM1 semiempirical method [22]. The geometry of each molecule studied was found by minimising the energy with respect to all geometrical variables. Table 2. Chemical shift differences (Δδ). A positive sign denotes a downfield shift from furan and methyl 2-pyrrolecarboxylate, respectively.
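The shift-charge correlation discussed above is, in essence, a linear regression of measured chemical shifts against calculated net atomic charges. A minimal sketch follows; the arrays are hypothetical placeholders, not values from Tables 1-6:

```python
# Linear regression of chemical shifts against AM1 net atomic charges.
# Both arrays below are illustrative placeholders only.

from scipy.stats import linregress  # pip install scipy

charges = [-0.21, -0.08, 0.05, 0.14]    # AM1 net atomic charges (placeholder)
shifts = [100.2, 112.5, 125.1, 133.8]   # delta(13C) in ppm (placeholder)

fit = linregress(charges, shifts)
print(f"slope = {fit.slope:.1f} ppm per unit charge, r = {fit.rvalue:.3f}")
```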
MicroRNA‐146b, a Sensitive Indicator of Mesenchymal Stem Cell Repair of Acute Renal Injury The role of mesenchymal stem cells (MSCs) in kidney injury repair has been studied widely. However, the underlying molecular mechanism remains unclear. We profiled the altered microRNAs in renal tissues from cisplatin‐induced acute kidney injury (AKI) rats treated with or without rat bone marrow MSCs (rMSCs). We observed that microRNA‐146b (miR‐146b) expression was considerably upregulated in renal tissues from AKI rats compared with that in healthy rats, and the expression decreased following MSC treatment after cisplatin administration. At the early stage of AKI, serum miR‐146b levels exhibited a rapid increase that was even faster than that of two conventional renal function indexes: serum creatinine and blood urea nitrogen levels. Furthermore, the serum miR‐146b levels in AKI patients were higher than those in healthy people. In vitro exposure to cisplatin also increased miR‐146b expression in renal tubular epithelial cells (TECs). miR‐146b knockdown protected renal TECs from cisplatin‐induced apoptosis and promoted their proliferation. Moreover, ErbB4 was identified as a direct target of miR‐146b, and miR‐146b inhibition induced ErbB4 expression, resulting in enhanced proliferation of injured renal TECs. In addition, restoration by rMSCs could be controlled through ErbB4 downregulation. In conclusion, elevated miR‐146b expression contributes to cisplatin‐induced AKI, partly through ErbB4 downregulation. miR‐146b might be an early biomarker for AKI, and miR‐146b inhibition could be a novel strategy for AKI treatment. INTRODUCTION Acute kidney injury (AKI) is a clinical syndrome characterized by a rapid decrease in renal function that can be assessed through changes in serum creatinine (Cr) and blood urea nitrogen (BUN) levels [1]. Mesenchymal stem cells (MSCs) from various sources can promote tissue repair in disorders such as AKI [2][3][4][5]. We previously demonstrated that rat bone marrow MSCs (rMSCs) can participate in the restoration of acute kidney failure in rats [6]. Moreover, human umbilical cord MSCs (hUC-MSCs) can attenuate ischemia/reperfusion-induced acute renal failure and cisplatin-induced AKI or chronic kidney injury [7,8]. Furthermore, hUC-MSCs modified through hepatocyte growth factor overexpression can improve the amelioration efficiency compared with unmodified hUC-MSCs [9]. We previously suggested that a paracrine mechanism might be mainly responsible for the therapeutic effect of MSCs; however, the involvement of microRNA (miRNA) in this process was not completely characterized. miRNAs, small noncoding RNAs 18-25 nucleotides long, regulate gene expression and have crucial functions in physiological and pathological conditions through targeting numerous genes [10,11]. miRNAs are essential for maintaining the development and stability of kidneys [12]. Moreover, many miRNAs participate in the pathogenesis of kidney diseases. Wang et al. reported that miR-200a prevents renal fibrogenesis by repressing transforming growth factor-β2 expression in diabetic nephropathy [13]. Godwin et al. determined the miRNA expression profile of renal ischemia/reperfusion injury and revealed that miR-21 might have a role in preventing the death of tubular epithelial cells (TECs) [14]. However, few studies have determined the alteration of miRNA expression after kidney injury, including cisplatin-induced AKI, and its restoration to normal using MSCs.
In our study, we identified that miR-146b expression is upregulated after cisplatin administration and returns to normal after rMSC treatment in vivo and in vitro. miR-146b is strongly expressed in tumor tissues, such as papillary thyroid carcinoma, prostate cancer, liver cancer, and renal cell carcinoma [15][16][17][18]. Previous miRNA analysis results suggested that miR-146b expression is upregulated in patients undergoing hemodialysis [19]. miR-146b expression is associated with the prognosis of papillary thyroid carcinoma and the malignancy of hepatocellular carcinoma [15,17]. miR-146b-5p can inhibit the migration and invasion of cancerous tissue by acting on MMP16 and epidermal growth factor receptor (EGFR) in glioma [20,21]. In addition to cancer, miR-146b has a crucial role in epithelial cells and inflammation. miR-146b alleviates colitis in mice by improving epithelial barrier function and activating the nuclear factor-κB pathway [22]. Cheng et al. demonstrated that miR-146 inhibits endothelial inflammatory activation by negatively regulating proinflammatory pathways [23]. Because inflammation is a major part of AKI, miR-146b might function as a critical factor in AKI development. In the present study, we identified miR-146b as a crucial target in AKI treatment using rMSCs both in vivo and in vitro. We observed that the expression of miR-146b was strongly upregulated in both animals and human patients with AKI. miR-146b knockdown ameliorated cisplatin-induced apoptosis and promoted TEC proliferation through upregulation of ErbB4 expression. Our findings facilitate further understanding of the role of miRNA in MSC-mediated AKI repair and provide a novel biomarker for AKI diagnosis and therapy. Isolation and Characterization of rMSCs One-month-old Sprague-Dawley rats were immersed in 75% ethanol for 5 minutes after anesthetization and decapitated to obtain the bilateral lower-limb femurs without the attached muscles or adipose tissues. All procedures were performed under aseptic conditions. After rinsing with phosphate-buffered saline (PBS), the medulla ossium was collected and centrifuged at 800 rpm. The sediment was suspended in 4 ml of low-glucose Dulbecco's modified Eagle's medium (LG-DMEM) containing 10% fetal bovine serum, penicillin, and streptomycin and cultured at 37°C with 5% CO2. The expression of typical surface markers in passage 3 rMSCs was analyzed through flow cytometry. The osteogenic and adipogenic differentiation abilities of the rMSCs were detected as described previously [24]. Rat AKI Model A rat AKI model was established as described previously [24]. The rats were divided into three groups (n = 6 in each group): (a) a normal group (no treatment); (b) a PBS group (PBS injected through the caudal vein 24 hours after administration of 6 mg/kg cisplatin); and (c) an rMSC group (rMSCs injected through the caudal vein 24 hours after administration of 6 mg/kg cisplatin). Serum and kidney samples were collected daily and stored at −70°C. In a low-dose cisplatin-induced AKI model, 3 mg/kg cisplatin was administered. In addition, the kidneys were immersed in 4% paraformaldehyde before use. The institutional animal care committee of Jiangsu University approved all experimental protocols. Microarray Analysis Total RNA was isolated from the boundary area of the cortex and medulla of the kidneys of the three groups. miRNA profiling was performed using the Agilent 2100 system (Shanghai Biotechnology Corporation, Shanghai, China, http://www.shbiochip.bioon.com.cn).
The array results were deposited in the Gene Expression Omnibus (accession no. GSE66761). RNA Isolation and Quantification of miRNA Expression Total RNA was extracted using Trizol Reagent (Thermo Fisher Scientific Life Sciences, Waltham, MA, http://www.thermofisher.com). Real-time reverse transcription polymerase chain reaction (RT-PCR) was performed using the miScript II RT Kit (Qiagen, Hilden, Germany, http://www.qiagen.com) and the miScript SYBR Green PCR Kit (Qiagen). For human serum samples, we used the TaqMan MicroRNA Reverse Transcription Kit, TaqMan MicroRNA Assays, and TaqMan Universal Master Mix II, no uracil-N-glycosylase (Thermo Fisher), to quantify the expression of miRNAs. The relative expression levels of miRNA were normalized to U6. In Vitro Experiments The rat TEC line NRK52E was cultured and maintained in high-glucose Dulbecco's modified Eagle's medium (HG-DMEM) containing 5% fetal bovine serum, penicillin, and streptomycin at 37°C with 5% CO2. The NRK52E cell line was purchased from the Cell Bank of the Chinese Academy of Sciences (Shanghai, China, http://www.cas.ac.cn). The cells were divided into three groups: normal (no treatment), cisplatin (7.5 μM cisplatin treatment for 6 hours), and rMSC (coculture with rMSCs in a Transwell plate after 7.5 μM cisplatin treatment). In the coculture experiment, rMSCs were plated with 1.6 ml of LG-DMEM in the upper chamber of a Transwell plate 1 day before coculture with NRK52E cells. The NRK52E cells were plated with 2.5 ml of HG-DMEM in the lower chamber of another Transwell plate. When the NRK52E cells had been exposed to cisplatin for 6 hours, the rMSCs in the upper chamber were transferred to the plate that contained the conditioned NRK52E cells, and all culture media were replaced with HG-DMEM containing 10% fetal bovine serum. After 48 hours, the cells were fixed in 4% paraformaldehyde for histological staining and collected for RNA and protein analysis. For the low-dose treatment experiment, 6 μM cisplatin was used. The 293T cells were cultured in HG-DMEM containing 10% bovine serum, penicillin, and streptomycin. Histology and Immunohistochemical Staining Kidneys embedded in paraffin were cut into 4-μm-thick slices and stained using the standard hematoxylin and eosin staining protocol. Through immunohistochemistry, we observed proliferating cell nuclear antigen (PCNA) expression using a specific rabbit polyclonal antibody (BioWorld, New York, NY, http://www.bioworld.com) and visualized it using 3,3′-diaminobenzidine, as described previously [24]. Cell Counting and Colony Formation Cells were transfected with the miRNA control or inhibitor for 6 hours and transferred to complete HG-DMEM. After 48 hours, the cells were collected and counted. In total, 5 × 10³ cells were replated in 24-well plates (Corning, Corning, NY, http://www.corning.com) and 3.5-cm cell culture dishes (Corning). The cell numbers in the 24-well plates were counted in triplicate for each group every 24 hours from 48 to 192 hours after plating. Cell counting was performed using procedures described previously [25]. The cells in the 3.5-cm cell culture dishes were cultured until day 7, fixed in 4% paraformaldehyde, and stained with crystal violet. Transfection reagents were used for NRK52E cells and Lipofectamine 2000 (Thermo Fisher) for 293T cells. The cells were transfected with the miRNA inhibitor and its NC at 200 nM and with the mimic and its NC at 25 nM. Luciferase activity was measured using the Dual-Glo luciferase assay system (GloMax 20/20; Promega).
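As a worked illustration of how dual-luciferase readings are usually reduced to a single number, assuming a Renilla co-reporter as the transfection control (typical for Dual-Glo assays, but not spelled out in the excerpt), normalisation can be sketched as follows; all readings are hypothetical:

```python
# Dual-luciferase normalisation: firefly signal (reporter with the 3'-UTR)
# divided by Renilla signal (transfection control), then expressed relative
# to the negative control. All values are illustrative placeholders.

def normalized_activity(firefly, renilla):
    return [f / r for f, r in zip(firefly, renilla)]

nc = normalized_activity([8200, 7900, 8400], [1000, 950, 1010])    # NC wells
mimic = normalized_activity([4100, 4300, 3900], [990, 1020, 980])  # mimic wells

rel = (sum(mimic) / len(mimic)) / (sum(nc) / len(nc))
print(f"relative luciferase activity (mimic vs NC): {rel:.2f}")
```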
Furthermore, 100 nM ErbB4 siRNA and its NC were applied in the knockdown experiment. Statistical Analysis Data are expressed as the mean ± SD. Statistically significant differences between groups were assessed by analysis of variance (ANOVA) with two-way classification, ANOVA with the Student-Newman-Keuls multicomparison test, or the unpaired t test. p < .05 was considered statistically significant. rMSCs Restore Injury From Cisplatin-Induced AKI We previously established a cisplatin-induced AKI model [8,24]. To verify the efficiency of rMSCs in repairing AKI, we first isolated rMSCs from Sprague-Dawley rats and cultured them for three passages (supplemental online Fig. 1A). We then characterized the immunotype of the rMSCs used through flow cytometry. The results of fluorescence-activated cell sorting showed that the rMSCs were positive for CD29, CD44, and CD90 but negative for CD45 (supplemental online Fig. 1B). Next, we confirmed that the rMSCs could be induced to differentiate into osteogenic and adipogenic lineages (supplemental online Fig. 1C-1F). We tracked the homing of the injected rMSCs using live animal imaging and observed that CM-DiI-labeled rMSCs could localize at the injury site after injection into the AKI rats (supplemental online Fig. 2A, 2B). Serum Cr and BUN levels increased 2 days after cisplatin treatment and remained at higher levels until 5 days after treatment. However, transplantation with rMSCs notably reduced the serum Cr and BUN levels in the AKI rats (supplemental online Fig. 2C, 2D). Western blotting (supplemental online Fig. 2E) and hematoxylin and eosin staining showed that rMSC administration alleviated the inflammatory reaction in the renal tissues (supplemental online Fig. 2F). The results of the TUNEL assay and immunohistochemical staining (supplemental online Fig. 2F) showed that rMSC treatment enhanced PCNA expression and effectively ameliorated cisplatin-induced apoptosis. Identification of miR-146b in Cisplatin-Induced AKI Rats Using Microarray Analysis We collected renal tissues from the AKI rats treated with and without rMSCs for 5 days and performed a gene microarray analysis to profile the altered miRNAs. As shown in Figure 1A, rMSC treatment caused significant changes in 44 miRNAs (p < .05). A pie chart clarified the results of the gene microarray: 36 miRNAs were upregulated and 8 were downregulated (Fig. 1B). Detailed data are shown in supplemental online Table 1. On the basis of many reported studies, we focused on four miRNAs and selected miR-146b for its stable changes in different batches (supplemental online Fig. 3). Through quantitative RT-PCR analysis, we further verified the increase of miR-146b in the kidneys of AKI rats by comparing it with that of the sham control rats. We also confirmed the inhibition of miR-146b expression in the kidneys of rMSC-treated rats (Fig. 1C). A low-dose cisplatin experiment in vitro showed that miR-146b expression in NRK52E cells increased 24 hours after treatment and was maintained until 42 hours (Fig. 1D). Moreover, we collected serum from patients who had experienced blood loss leading to AKI, patients with chronic kidney disease (CKD) with acute exacerbation, and healthy controls and detected the expression of hs-miR-146b-5p, the human homolog of rno-miR-146b, using TaqMan-based real-time RT-PCR analysis. We observed that the hs-miR-146b-5p levels in the patients with renal disease were remarkably higher than those in the healthy controls (Fig. 1E).
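The group comparisons described under "Statistical Analysis" above can be sketched as follows: a one-way ANOVA across the three groups, followed by an unpaired t test for a two-group comparison. The measurements are hypothetical placeholders, and the Student-Newman-Keuls post-hoc step is omitted here:

```python
# One-way ANOVA across three groups plus an unpaired t test.
# All measurements below are illustrative placeholders, not study data.

from scipy.stats import f_oneway, ttest_ind  # pip install scipy

normal = [0.9, 1.1, 1.0, 1.0, 0.9, 1.1]
pbs = [3.1, 2.8, 3.4, 3.0, 2.9, 3.2]      # cisplatin + PBS group
rmsc = [1.6, 1.8, 1.5, 1.7, 1.9, 1.6]     # cisplatin + rMSC group

f_stat, p_anova = f_oneway(normal, pbs, rmsc)
t_stat, p_t = ttest_ind(pbs, rmsc)         # unpaired t test
print(f"ANOVA p = {p_anova:.4g}; PBS vs rMSC t-test p = {p_t:.4g}")
```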
Because miR-146b inhibition was apparent after rMSC treatment, we selected it as the target for our next study. We identified miR-146b as a candidate microRNA whose expression altered in response to the development of cisplatin-induced AKI. miR-146b Is a Sensitive Indicator of AKI Considering the outcome of the clinical specimens, we next investigated the time course of miR-146b expression in the serum and kidney tissues of AKI rats. To investigate the sensitivity, we used a dose of cisplatin lower than that used in the model but one that could still induce evident apoptosis in the rat kidney (Fig. 2A, 2B). We observed an increase in serum Cr and BUN levels in AKI rats 3-4 days after cisplatin treatment (Fig. 2C, 2D). By contrast, the serum miR-146b levels in the AKI rats increased 1 day after cisplatin treatment (Fig. 2E). miR-146b expression peaked on the third day after cisplatin treatment and remained high until 5 days after the treatment. The miR-146b levels in the renal tissue of AKI rats began increasing 4 days after treatment and gradually decreased 5 days after treatment (Fig. 2F). Moreover, in vitro experiments showed that the miR-146b levels increased rapidly in both NRK52E and HK-2 cells cultured in media without serum (supplemental online Fig. 4). In summary, these data suggest that miR-146b is a sensitive indicator of renal injury. (Figure 3 legend: Western blot analysis showed a lower PCNA level and a higher Bax to Bcl-2 ratio in the PBS group than in the rMSC group, indicating that rMSCs promoted the proliferation of injured NRK52E cells and reversed cisplatin-induced apoptosis. (E) miR-146b expression in NRK52E cells from the different groups; cells in the rMSC group were cocultured with rMSCs in Transwell plates after 6 hours of cisplatin pretreatment, with normal NRK52E cells as controls. ANOVA with the Student-Newman-Keuls multicomparison test; **p < .01, ***p < .001. Abbreviations: Bax, Bcl-2-associated X; GAPDH, glyceraldehyde-3-phosphate dehydrogenase; PCNA, proliferating cell nuclear antigen; TUNEL, terminal deoxynucleotidyl transferase dUTP nick-end labeling.) miR-146b Is Downregulated After Coculturing rMSCs With TECs To determine the effect of rMSCs in vitro, we exposed NRK52E cells to cisplatin. Cisplatin treatment led to a decrease in the number of PCNA-positive cells. Nevertheless, rMSC treatment increased the number of PCNA-positive cells (Fig. 3A). Moreover, the TUNEL assay revealed that the cisplatin group had more apoptotic cells than the control group, and rMSC treatment rescued NRK52E cells from cisplatin-induced apoptosis (Fig. 3B). PCNA- and TUNEL-positive cells were counted in 10 consecutive fields (Fig. 3C). The expression of PCNA, a proliferation-related protein, decreased in the cisplatin group but was restored in the rMSC group. In contrast, the ratio of Bax to Bcl-2 increased in the cisplatin group but decreased in the rMSC group (Fig. 3D). miR-146b expression in NRK52E cells was consistent with that in vivo: it increased after cisplatin administration and decreased after coculture with rMSCs (Fig. 3E). We also demonstrated that coculture of rMSCs with injured NRK52E cells led to miR-146b upregulation in the rMSCs (supplemental online Fig. 5).
In summary, cisplatin can induce miR-146b upregulation, which is reversed by rMSC intervention. miR-146b Regulates the Survival and Proliferation of NRK52E Cells To explore the potential role of miR-146b in cisplatin-induced AKI, we synthesized a specific inhibitor for knocking down endogenous miR-146b in NRK52E cells. We verified the efficiency of the inhibitor in NRK52E cells using real-time RT-PCR, with a scrambled fragment (NC) as the control (Fig. 4A). The transfected cells were collected 48 hours after transfection, replated, and cultured continually. The cells in the inhibitor group grew faster than those in the control group (Fig. 4B), which was consistent with the results of the colony formation assay (Fig. 4C). In addition, transfection with the miR-146b inhibitor increased the PCNA positivity of the NRK52E cells at 48 hours (Fig. 4D). The number of apoptotic cells in the inhibitor group was considerably lower than that in the NC group, as suggested by TUNEL-FITC staining (Fig. 4E). PCNA- and TUNEL-FITC-positive cells were counted in 10 consecutive fields (Fig. 4F). In brief, miR-146b downregulation promoted cell proliferation and ameliorated apoptosis in cisplatin-injured NRK52E cells. ErbB4 Is a Potential Target of miR-146b During AKI We identified the potential targets of the miRNA using the TargetScan program (Whitehead Institute for Biomedical Research, Cambridge, MA, http://www.wi.mit.edu). On the basis of recent research, we studied ErbB4 because of the critical role of the ErbB4 pathway in renal epithelial cell proliferation (Fig. 5A). To demonstrate the direct regulation of ErbB4 by miR-146b, the 3′-UTR of ErbB4 mRNA was cloned into the luciferase reporter vector. The results of dual-luciferase reporter assays indicated that miR-146b overexpression reduced luciferase activity in 293T cells transfected with a wild-type (WT) reporter vector. In contrast, this reduction was not evident in cells with a mutant (MU) reporter vector. Furthermore, miR-146b knockdown increased luciferase activity in the WT group but had no significant influence on the MU group (Fig. 5B). miR-146b knockdown restored ErbB4 protein expression in NRK52E cells (Fig. 5C). Moreover, we detected the expression of the downstream proteins of the ErbB4 pathway, ERK1/2 and Raf, in miR-146b-inhibitor- or NC-transfected NRK52E cells. miR-146b downregulation increased PCNA expression and reduced the Bax to Bcl-2 ratio. We then synthesized two ErbB4-specific siRNAs and verified their efficiency through real-time RT-PCR (Fig. 5D). siRNA2 showed obvious ErbB4 downregulation. Similarly, the levels of the downstream proteins of the ErbB4 pathway, STAT5 and MEK/ERK/c-Myc, decreased with siRNA2 treatment compared with NC treatment (Fig. 5E). To summarize the results, we created a schematic diagram of miR-146b participation in cisplatin-induced AKI (Fig. 5F). The results indicated that miR-146b might target ErbB4 to decrease the survival of cisplatin-injured NRK52E cells. (Figure 5 legend: Western blot analyses of ErbB4 and its typical downstream proteins (p-ERK1/2, t-ERK1/2, p-Raf, t-Raf) after NC or inhibitor transfection; ErbB4 and the downstream proteins increased after inhibitor transfection, as did PCNA and Bcl-2, while Bax showed no significant change. (D) Real-time RT-PCR of ErbB4 mRNA after transfection with NC, siRNA1, or siRNA2. (E) Western blot analysis of the downstream proteins after siRNA transfection; p-MEK, p-ERK1/2, c-Myc, and p-STAT5 decreased after siRNA2 transfection, accompanied by decreased ErbB4. (F) Brief circuit diagram of miR-146b participation in acute kidney injury. Abbreviations: GAPDH, glyceraldehyde-3-phosphate dehydrogenase; MU, mutant 3′-UTR; NC, negative control; p, phosphorylated; t, total; WT, wild-type 3′-UTR.) Downregulation of ErbB4 Inhibits Cell Survival We knocked down ErbB4 in NRK52E cells after cisplatin-induced injury and cocultured these cells with rMSCs. The injured cells transfected with the NC exhibited no restoration. The number of PCNA-positive cells was considerably higher in the rMSC-cocultured NRK52E cells transfected with NC than in the other cells (Fig. 6A). In addition, the TUNEL assay showed that ErbB4 downregulation restrained the restoration of injured NRK52E cells treated with rMSCs (Fig. 6B). PCNA- and TUNEL-positive cells were counted in 10 consecutive fields (Fig. 6C). Furthermore, immunohistochemical staining indicated that ErbB4 knockdown resulted in reduced expression of ERK and c-Myc, which could otherwise promote cell survival (Fig. 6D). DISCUSSION The potential of MSCs in renal injury repair has been investigated widely [26,27]. Recent studies have demonstrated that miRNAs are critically involved in the development of kidney diseases [13][14][15][16]. We determined the miRNA profile in the renal tissues of AKI rats treated with rMSCs and observed that the expression of 44 miRNAs significantly changed in these renal tissues after rMSC treatment. We first focused on miR-92b, miR-146b, miR-150*, and miR-455 (supplemental online Fig. 3). We also verified the stable alteration of miR-146b in the AKI rat model and the cell culture model through real-time RT-PCR analyses and observed that miR-146b expression gradually increased in the serum and renal tissues of AKI rats but decreased after rMSC administration. For the first time, we report that miR-146b-regulated suppression of ErbB4 is a potential causal mechanism in cisplatin-induced AKI and that the inhibition of miR-146b is a potential mechanism by which bone marrow MSCs alleviate cisplatin-induced AKI. Our findings have demonstrated that miR-146b is a potential noninvasive biomarker for AKI. Previous studies have indicated that miRNA expression correlates with the pathophysiological changes of AKI [28]. Moreover, miR-146b has been proposed as an indicator for papillary thyroid carcinoma and prostate cancer [15,17]. In addition, miR-146b expression correlates with the degree of malignancy of lung cancer and is upregulated in CKD [29]. Furthermore, a miRNA highly homologous to miR-146b is associated with the development of chronic renal inflammation [19]. Consistent with previous findings, miR-146b expression is also elevated in AKI with prevalent tubular injuries, including tubular disorganization and renal tubular necrosis. Moreover, we observed that, compared with traditional indexes of renal function such as serum Cr and BUN levels, serum miR-146b levels changed rapidly even after low-dose cisplatin treatment and remained stable at a high level.
Thus, miR-146b responded rapidly to the injuries in AKI rats. To demonstrate the clinical significance of miR-146b in the diagnosis of kidney injuries, we examined the expression of hs-miR-146b-5p, the human homolog of miR-146b, in the serum of patients (mainly those with AKI and CKD) and healthy controls. The results of the TaqMan-based real-time PCR analyses showed that the serum miR-146b levels were significantly higher in the patients than those in the healthy controls, suggesting that miR-146b has great potential to be developed as a reliable indicator of kidney injury. rMSCs ameliorated cisplatin-induced AKI in vivo and in vitro. In addition, miR-146b expression was decreased in AKI kidneys after rMSC administration, consistent with the findings in NRK52E cells. We further demonstrated that miR-146b participated in the regulation of cell apoptosis and proliferation using a loss-of-function strategy. We observed that miR-146b knockdown could rescue cells from cisplatin-induced apoptosis and promote the proliferation of the surviving cells. Moreover, miR-146b expression in rMSCs cocultured with the injured NRK52E cells was increased (supplemental online Fig. 5); however, the mechanism of this increase remains unclear. Future studies are warranted to identify the molecules in rMSCs that inhibit miR-146b expression. Using TargetScan (Whitehead Institute for Biomedical Research), a microRNA target prediction program, ErbB4, Siah2, IRAK1, and other molecules were identified as candidates. As reported previously, miR-146a was verified to target ErbB4 directly [30,31]. ErbB4, also known as human epidermal growth factor receptor 4 (HER4), is a member of the EGFR family [32]. ErbB4 has an important role in kidney development. In vivo, ErbB4 is expressed in the developing tubules of nephrons [33]. Veikkolainen et al. demonstrated that ErbB4 participates in the proliferation and polarization of renal epithelial cells and in the formation of ducts during kidney development [34]. In addition, a recent study reported that ErbB4 knockout accelerated the progression of polycystic kidney disease [35]. We have confirmed the direct regulation of ErbB4 by miR-146b through dual-luciferase reporter assays. Western blot analysis showed that miR-146b inhibition induced ErbB4 upregulation. Downstream adaptors of ErbB4 transduce signals to ERK1/2 (p44/p42 MAPK), JNK, AKT, and STAT5 and activate transcription factors that promote proliferation and migration [36][37][38]. We synthesized two siRNAs for ErbB4, and siRNA2 reduced ErbB4 expression efficiently. Immunoblotting indicated that siRNA2 inhibited the STAT5 and Raf/MEK/ERK/c-Myc pathways. Therefore, miR-146b induction during AKI accelerated cell apoptosis and restrained proliferation by targeting ErbB4. In contrast, rMSC treatment reduced miR-146b expression and restored ErbB4 expression, thus repairing the injured renal tissues. CONCLUSION The results of the present investigation suggest that MSCs could reduce miR-146b expression in cisplatin-induced AKI. miR-146b inhibits cell proliferation and promotes cell apoptosis through ErbB4 suppression in cisplatin-induced AKI. Our findings not only provide insights into the involvement of miRNA in MSC-mediated AKI repair, but also indicate a novel biomarker for AKI diagnosis and therapy.
2018-04-03T03:09:44.680Z
2016-07-08T00:00:00.000
{ "year": 2016, "sha1": "a310e063504a7c3fef9b31064b3b6f2e1682a7e9", "oa_license": "CCBYNC", "oa_url": "https://stemcellsjournals.onlinelibrary.wiley.com/doi/pdfdirect/10.5966/sctm.2015-0355", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "cf167a1fb7052f7130474c39637876506cdfd234", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
236781256
pes2o/s2orc
v3-fos-license
Quantitative myocardial perfusion SPECT/CT for the assessment of myocardial tracer uptake in patients with three-vessel coronary artery disease: Initial experiences and results

Background: To evaluate quantitative myocardial perfusion SPECT/CT datasets for routine clinical reporting and for the assessment of myocardial tracer uptake in patients with severe three-vessel coronary artery disease (TVCAD).

Methods: MPS scans were reconstructed as quantitative SPECT datasets using CTs from internal (SPECT/CT, Q_INT) and external (PET/CT, Q_EXT) sources for attenuation correction. The total perfusion deficit (TPD) was calculated and compared to the TPD from non-quantitative SPECT datasets of the same patients. SUVmax, SUVpeak, and SUVmean were compared between Q_INT and Q_EXT SPECT datasets. Global SUVmax and SUVpeak were compared between patients with and without TVCAD.

Results: Quantitative reconstruction was feasible. TPD showed an excellent correlation between quantitative and non-quantitative SPECT datasets. SUVmax, SUVpeak, and SUVmean showed an excellent correlation between Q_INT and Q_EXT SPECT datasets, though mean SUVmean differed significantly between the two groups. Global SUVmax and SUVpeak were significantly reduced in patients with TVCAD.

Conclusions: Absolute quantification of myocardial tracer uptake is feasible. The method seems to be robust and principally suitable for routine clinical reporting. Quantitative SPECT might become a valuable tool for the assessment of severe coronary artery disease in a setting of balanced ischemia, where potentially life-threatening conditions might otherwise go undetected.

Supplementary Information: The online version contains supplementary material available at 10.1007/s12350-021-02735-2.

Myocardial perfusion scintigraphy (MPS) with 99mTc-tetrofosmin (1,2-bis[bis(2-ethoxyethyl)phosphino]ethane) has been an important mainstay in the diagnosis and risk stratification of coronary artery disease, stress-induced ischemia, and heart failure for several decades.1,2 Combined with the evaluation of myocardial metabolism by means of 18F-fluorodeoxyglucose positron emission tomography (FDG PET), MPS is a powerful tool to determine the presence and amount of hibernating myocardium in heart failure patients, which might serve as a predictor of the response to myocardial revascularization therapy.3

MPS has traditionally made use of relative quantification, with normalization of the tracer uptake to the maximum uptake in the left ventricle. Concerns arose that three-vessel coronary artery disease (TVCAD) might be misdiagnosed due to uniformly reduced tracer uptake in the supply territories of all three main coronary arteries, rendering the myocardial perfusion scintigram unremarkable.4 In addition to attenuation correction, the use of more sensitive detectors, or gated MPS acquisitions, one possible way to ameliorate this situation is the use of absolute quantification in myocardial single-photon emission computed tomography (SPECT) and, subsequently, the calculation of standardized uptake values (SUVs).4 With the advent of SPECT/CT imaging and advanced software reconstruction algorithms, such as SUV SPECT® for Hermes Hybrid Recon™ (Hermes Medical Solutions, Stockholm, Sweden), absolute quantification of SPECT datasets has become feasible and reproducible in a manner that seems to allow its application in routine clinical use.5 The use of quantitative SPECT datasets to calculate SUVs and even absolute regional blood flow has been proposed and partially tested for a wide variety of clinical applications, for example, imaging of cardiac amyloidosis,6 breast cancer imaging,7 bone scintigraphy,8 and myocardial perfusion imaging.9
To our knowledge, no studies have been published that systematically evaluate the use of quantitative myocardial perfusion SPECT datasets for routine clinical imaging. Furthermore, no studies have employed SUVs to evaluate myocardial perfusion in patients with TVCAD. In the present imaging study, we aimed to address the following four major aspects:

1. We evaluated the feasibility of creating equivalent quantitative myocardial perfusion SPECT datasets using commercial software and CTs from internal (SPECT/CT) and external (PET/CT) sources for attenuation correction (AC).
2. We investigated whether the TPD determined from the quantitative datasets was equal to the TPD determined from the standard non-quantitative datasets used in routine clinical MPS.
3. We compared SUVmax, SUVpeak, and SUVmean as derived from quantitative datasets with internal and external AC to determine their reproducibility.
4. We compared global SUVs from patients with and without TVCAD to determine differences in global tracer uptake that might signify perfusion deficits, which might otherwise go undetected in conventional, semiquantitative analysis due to balanced ischemia.

Patients from both groups were referred to our ward for myocardial viability imaging between June 2010 and December 2016. The standard protocol at our institution usually comprises a rest myocardial perfusion SPECT as well as an 18F-FDG PET/CT scan. Routine MPS was performed under resting conditions using 99mTc-tetrofosmin on an integrated SPECT/CT scanner (Symbia, Siemens Medical Systems, Erlangen, Germany). After MPS, an 18F-FDG PET/CT scan was performed on a dedicated PET/CT system (Biograph 64, Siemens Medical Systems, Erlangen, Germany). Patients from Cohort 2 received prior heart catheterization, so that information about the state of the coronary arteries (single- or multi-vessel disease) was available. TVCAD was assumed when all three coronary arteries (left anterior descending artery, left circumflex artery, right coronary artery) were affected by at least one stenosis of ≥50% of the vessel diameter as assessed by invasive coronary angiography. The study was conducted in accordance with the requirements of the local ethics committee (Ethikkommission der LMU München).

SPECT/CT imaging

Patients from both groups received MPS at rest. 99mTc-tetrofosmin was administered intravenously. SPECT/CT scans were started 30-45 minutes after tracer injection. A dual-head hybrid SPECT/CT camera (Symbia, Siemens Medical Systems, Erlangen, Germany) was employed with a low-energy, high-resolution parallel-hole collimator. A symmetrical 20% energy window was centered at an energy level of 140 keV, and the two detector heads were positioned at an angle of 90°. A 180° arc was covered by the two detector heads in 64 rotational steps, with each rotational projection lasting 23 seconds. The SPECT scan was followed by a low-dose spiral CT during free breathing and without ECG gating for attenuation correction (130 keV, 20 mAs, CTDI 2.2 mGy, DLP 40 mGy*cm, 512 × 512 pixel matrix at a slice thickness of 5 mm).

SPECT/CT scanner calibration

Scanner calibration was performed using a uniform Jaszczak phantom (Data Spectrum Corporation, Durham, NC, USA). No inserts were used. The phantom was filled with water and 313 MBq of 99mTc-pertechnetate. A conversion factor (.101 kBq/cps) was calculated by dividing the known activity by the reconstructed counts within a volume of interest defined inside the phantom.10
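The calibration arithmetic described above is simple enough to sketch in code: the known phantom activity divided by the reconstructed count rate yields the conversion factor, which then turns reconstructed voxel count rates into activity concentration. Everything in the snippet except the 313 MBq fill activity and the reported factor of about .101 kBq/cps is an illustrative assumption.

```python
# Minimal sketch of the phantom-based calibration described above.
PHANTOM_ACTIVITY_KBQ = 313_000.0  # 313 MBq of 99mTc-pertechnetate

def conversion_factor(known_activity_kbq: float, reconstructed_cps: float) -> float:
    """kBq per reconstructed count-per-second inside the phantom VOI."""
    return known_activity_kbq / reconstructed_cps

def activity_concentration(voxel_cps: float, cf_kbq_per_cps: float,
                           voxel_volume_ml: float) -> float:
    """Convert a voxel's reconstructed count rate into kBq/mL."""
    return voxel_cps * cf_kbq_per_cps / voxel_volume_ml

# Example: an assumed count rate that reproduces the reported ~.101 kBq/cps
cf = conversion_factor(PHANTOM_ACTIVITY_KBQ, reconstructed_cps=3.1e6)
print(f"conversion factor: {cf:.3f} kBq/cps")

# Isotropic 2.2 mm voxels (as in the reconstructions below) -> ~0.0106 mL
print(activity_concentration(voxel_cps=12.0, cf_kbq_per_cps=cf,
                             voxel_volume_ml=2.2**3 / 1000.0))
```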
PET/CT imaging

For the current study, only the CT component of the PET/CT scans was used for CT-based attenuation correction of the MPS datasets. The parameters of the CT scan were as follows: 120 keV, 11 mAs, CTDI .74 mGy, DLP 22 mGy*cm, and a 512 × 512 pixel matrix at a slice thickness of 2 mm.

Image reconstruction

Iterative reconstruction was performed using the Hybrid Recon Cardiology software (Hermes Medical Solutions, Stockholm, Sweden). The SPECT images were reconstructed in a standard, non-quantitative manner using the CT of the SPECT/CT scan (AC) and the following parameters: matrix size 64 × 64 pixels, OSEM, resolution recovery, CT-based attenuation correction, Monte Carlo-based scatter correction, 3 iterations, 16 subsets, and a FWHM post-reconstruction filter (.9 cm). SPECT and CT images were superimposed and registered in the software. Attenuation map registration was done using rigid-body translation; no rotations were permitted. Proper alignment of the datasets was evaluated visually and corrected manually, if necessary, to avoid the introduction of imaging artifacts due to misregistration. This resulted in SPECT datasets with a slice thickness of 2.2 mm and no overlap (scaling factor approximately 2.2 mm/pixel, slice thickness 1 pixel, center-center separation 1 pixel).

For reconstruction of the quantitative datasets, the SUV SPECT® plugin for Hermes Hybrid Recon™ (Hermes Medical Solutions, Stockholm, Sweden) was used in conjunction with CT datasets from either the SPECT/CT scan (Q_INT) or the PET/CT scan (Q_EXT) and the following parameters: matrix size 64 × 64 pixels, OSEM, resolution recovery, CT-based attenuation correction, Monte Carlo-based scatter correction, 4 iterations, 16 subsets, and a Gaussian post-filter (1.10 cm). Registration of the SPECT and CT datasets was carried out as described above. Again, this resulted in SPECT datasets with a slice thickness of 2.2 mm and no overlap (scaling factor approximately 2.2 mm/pixel, slice thickness 1 pixel, center-center separation 1 pixel). As has been described before, the injected dose (corrected for decay) as well as patient-specific parameters such as height and weight were entered, and the counts per voxel were transformed into activity per volume and subsequently displayed as SUV.11

Image analysis

The newly reconstructed quantitative datasets were visually compared to the standard non-quantitative datasets. We evaluated whether any obvious new artifacts from attenuation correction or image reconstruction had been introduced, such as defects or extracardiac activity. Furthermore, we visually assessed whether gross image quality was comparable. As described by Beanlands et al., the resulting SPECT images were analyzed using the commercial software QPS with the QPET plugin (Cedars-Sinai, Los Angeles) to calculate the extent of the TPD12: after the creation of polar maps, the perfusion tracer uptake is quantified relative to the maximum tracer uptake in the polar map. The patient's polar map is then compared to the average polar map of an integrated normative database, and the TPD is calculated by the software as a combination of the extent of the perfusion deficit (percentage of the left ventricular surface area) and the severity of the perfusion deficit (reduction of the perfusion in standard deviations below the normal threshold).13 SUVs were analyzed using Hermes Hybrid Viewer (Hermes Medical Solutions, Stockholm, Sweden).
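The SUV display step amounts to a standard normalization: tissue activity concentration divided by the decay-corrected injected dose per body weight. The sketch below illustrates that arithmetic generically, using the 6.01-h physical half-life of 99mTc; it is not the SUV SPECT® implementation, and the patient values are invented.

```python
import math

TC99M_HALF_LIFE_H = 6.01  # physical half-life of 99mTc in hours

def decay_corrected_dose(injected_dose_mbq: float, elapsed_h: float) -> float:
    """Injected activity remaining at scan time (simple exponential decay)."""
    return injected_dose_mbq * math.exp(-math.log(2) * elapsed_h / TC99M_HALF_LIFE_H)

def suv(activity_conc_kbq_per_ml: float, injected_dose_mbq: float,
        elapsed_h: float, body_weight_kg: float) -> float:
    """Body-weight SUV: tissue concentration / (decay-corrected dose / weight).

    With an assumed tissue density of 1 g/mL, kBq/mL divided by kBq/g
    yields a dimensionless SUV.
    """
    dose_kbq = decay_corrected_dose(injected_dose_mbq, elapsed_h) * 1000.0
    dose_per_gram = dose_kbq / (body_weight_kg * 1000.0)
    return activity_conc_kbq_per_ml / dose_per_gram

# Illustrative patient: 500 MBq injected, imaged 45 minutes later, 80 kg
print(round(suv(30.0, 500.0, 0.75, 80.0), 2))
```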
To determine global SUVmax, SUVmean, and SUVpeak, an approximate VOI was first drawn around the left ventricle to exclude extracardiac activity; the lower threshold was then set to 35% to optimally delineate the left ventricle. Delineation was controlled visually and adjusted manually, if necessary. SUVmax was defined as the SUV of the hottest voxel within the VOI, SUVmean was defined as the average SUV of all the voxels within the VOI, and SUVpeak was defined as the average SUV in a cubic 1 cm³ VOI around the area of maximum tracer uptake within the main VOI.10

Statistical analysis

All variables are reported as mean ± standard deviation (SD). The Shapiro-Wilk test was used to test for normal distribution. A two-sided one-sample Student's t-test was used to assess whether a mean was different from 0. To compare qualitative variables between two groups, the Chi-square test was used. To compare quantitative variables that were not normally distributed between two groups, the Mann-Whitney U-test was used. For quantitative variables that were normally distributed, the Student's t-test (for dependent or independent samples) was used to compare two groups. ANOVA adjusted for multiple comparisons with the Šidák correction was used when more than two groups were compared. Pearson's r was calculated as a measure of linear correlation between two datasets; scatter diagrams and Bland-Altman plots were used for visualization. Furthermore, coefficients of variation were calculated to compare the variability of datasets, as well as intraclass correlation coefficients to assess repeatability. An ROC (receiver operating characteristic) analysis was performed to estimate cut-off values for SUVpeak and SUVmax to differentiate between patients with and without TVCAD, based on Youden's J statistic to optimize for sensitivity and specificity. P values < .05 were considered statistically significant.

RESULTS

Feasibility of quantitative reconstruction and comparison of TPD

Quantitative SPECT reconstructions were feasible in all cases. Visual review of the resulting quantitative and the standard non-quantitative SPECT images revealed no discernible differences with regard to image quality and image artifacts. The mean TPD showed no significant differences between the groups (TPD_AC vs TPD_Q_INT, 27 ± 17% vs 26 ± 17%; TPD_AC vs TPD_Q_EXT, 27 ± 17% vs 27 ± 18%; TPD_Q_INT vs TPD_Q_EXT, 26 ± 17% vs 27 ± 18%; P = ns in all groups). The coefficient of variation was similar for all three methods (TPD_AC: .63; TPD_Q_INT: .65; TPD_Q_EXT: .67). Mean paired differences were -.77 ± 5.0% (TPD_Q_INT - TPD_AC), .07 ± 5.7% (TPD_Q_EXT - TPD_AC), and .83 ± 2.7% (TPD_Q_EXT - TPD_Q_INT). The mean paired differences were not significantly different from each other or from 0 (P = ns in all). The global TPDs showed an excellent correlation as well as good agreement in the Bland-Altman plots; however, some variability was present (Figure 1). The intraclass correlation coefficient for TPD_AC and TPD_Q_INT was .957, and the intraclass correlation coefficient for TPD_AC and TPD_Q_EXT was .949, indicating excellent repeatability in both cases.14,15

Comparison of global SUVmax, SUVpeak, and SUVmean

As shown in Table 1, mean SUVmax and SUVpeak showed no significant differences between the Q_INT and Q_EXT groups. Mean SUVmean, however, differed significantly between the two groups.
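To make the image-analysis and cut-off steps described above concrete, the following sketch computes SUVmax/SUVmean/SUVpeak inside a 35%-of-maximum thresholded VOI and then picks a Youden-optimal cut-off. It is a generic NumPy illustration on synthetic data, not the Hermes or QPS implementation.

```python
import numpy as np

def suv_metrics(suv_volume: np.ndarray, voxel_mm: float = 2.2,
                threshold_frac: float = 0.35):
    """SUVmax / SUVmean / SUVpeak inside a 35%-of-max thresholded VOI."""
    voi = suv_volume >= threshold_frac * suv_volume.max()
    suv_max = float(suv_volume[voi].max())
    suv_mean = float(suv_volume[voi].mean())
    # SUVpeak: mean over a ~1 cm^3 cube centred on the hottest voxel
    half = max(1, int(round(5.0 / voxel_mm)))  # 10 mm cube -> 5 mm half-width
    z, y, x = np.unravel_index(np.argmax(suv_volume), suv_volume.shape)
    cube = suv_volume[max(z - half, 0):z + half + 1,
                      max(y - half, 0):y + half + 1,
                      max(x - half, 0):x + half + 1]
    return suv_max, suv_mean, float(cube.mean())

def youden_cutoff(values: np.ndarray, has_disease: np.ndarray):
    """Pick the cut-off maximizing Youden's J = sensitivity + specificity - 1.

    Lower values are treated as test-positive (reduced uptake in TVCAD).
    """
    best_j, best_cut = -1.0, None
    for cut in np.unique(values):
        pred = values <= cut
        sens = (pred & has_disease).sum() / has_disease.sum()
        spec = (~pred & ~has_disease).sum() / (~has_disease).sum()
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_cut = j, float(cut)
    return best_cut, best_j

# Synthetic demo data only; the group means loosely mimic the reported SUVpeak values
rng = np.random.default_rng(0)
print(suv_metrics(rng.gamma(2.0, 1.0, size=(32, 32, 32))))
suv_peak = np.concatenate([rng.normal(4.4, 1.4, 20), rng.normal(5.8, 1.2, 20)])
labels = np.array([True] * 20 + [False] * 20)  # True = TVCAD
print(youden_cutoff(suv_peak, labels))
```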
The coefficients of variation of the parameters were similar for both methods (SUVmax Q_INT: .31 and Q_EXT: .30; SUVpeak Q_INT: .30 and Q_EXT: .29; SUVmean Q_INT: .37 and Q_EXT: .31). Mean paired differences were -.089 ± .91 (SUVmax Q_INT - Q_EXT), -.027 ± .78 (SUVpeak Q_INT - Q_EXT), and -.16 ± .35 (SUVmean Q_INT - Q_EXT). The mean paired differences for SUVmax and SUVpeak were not significantly different from 0; the mean paired difference for SUVmean was significantly different from 0 (P = .019). All SUVs showed an excellent correlation as well as good agreement in the Bland-Altman plots; however, some variability was present (Figure 2). The intraclass correlation coefficient for SUVmax Q_INT and Q_EXT was .87, for SUVpeak Q_INT and Q_EXT .87, and for SUVmean Q_INT and Q_EXT .92, indicating good to excellent repeatability.14,15

Comparison of global SUVmax and SUVpeak in patients with and without TVCAD

Patients with TVCAD showed a significantly reduced SUVmax as compared to patients without TVCAD (TVCAD: SUVmax 4.96 ± 1.54 vs no TVCAD: SUVmax 6.39 ± 1.34, P = .004). Likewise, patients with TVCAD showed a significantly reduced SUVpeak as compared to patients without TVCAD (TVCAD: SUVpeak 4.44 ± 1.40 vs no TVCAD: SUVpeak 5.81 ± 1.24, P = .003) (Figure 3). ROC analysis (Figure 4) revealed a satisfactory discrimination of SUVpeak and SUVmax between patients with and without TVCAD (AUC = .75 for SUVpeak, AUC = .73 for SUVmax). The optimized cut-off values were 4.62 for SUVpeak (sensitivity 90%, specificity 55%) and 5.04 for SUVmax (sensitivity 90%, specificity 55%).

DISCUSSION

Reconstruction of quantitative SPECT datasets was feasible in all cases without any trade-off in image quality. The TPD, a central parameter for the routine assessment of the extent of myocardial ischemia, scarring, and hibernation, could be determined from quantitative SPECT datasets (irrespective of internal or external AC) and showed good agreement with the TPD calculated from standard non-quantitative datasets. Hence, we infer that quantitative datasets are principally suitable for routine semiquantitative image analysis. Furthermore, we could demonstrate that SUVmax, SUVpeak, and SUVmean showed an excellent correlation and agreement between datasets reconstructed with internal and external CT sources, even though there was a statistically significant difference in mean SUVmean between the groups. Patients with TVCAD showed significantly reduced global SUVs in the resting state as compared to patients without TVCAD, possibly reflecting chronically impaired perfusion in high-risk disease. Furthermore, the ROC analysis suggests that SUVpeak and SUVmax show a satisfactory performance in differentiating between patients with and without TVCAD with a high sensitivity.

Visual comparison with standard non-quantitatively reconstructed datasets of the same patients did not reveal any increase in image artifacts. Image quality of the quantitative datasets was satisfactory and on par with their non-quantitative counterparts. This is in line with several other studies that evaluated quantitative SPECT in different settings and did not report any issues with the reconstructed quantitative SPECT datasets.8,11,16

The TPD is a central parameter in the routine analysis of MPS images with the QPS software.17
It represents the extent and severity of a perfusion defect and is highly correlated with the visual assessment of perfusion defects as well as with summed rest, stress, and difference scores.18 As such, the assessment of this well-established parameter must not be impaired by new or alternative image reconstruction methods. In our study, we could demonstrate that the calculation of TPD was feasible in all cases and that the obtained values correlated well with values determined from standard non-quantitative image datasets, irrespective of the CT source for AC (internal or external). Bland-Altman plots, however, revealed a certain amount of variability between the different methods, especially when quantitative datasets are compared to their non-quantitative counterparts. As such, while the calculation of TPD values is certainly possible and the quantitative datasets are principally suitable for its routine evaluation, at the present time we would suggest adhering to one method of reconstruction (quantitative or non-quantitative), especially for therapy control or follow-up examinations on a patient-by-patient basis, until further research into this topic is conducted.

In a next step, we were able to demonstrate that the calculated SUVmax, SUVpeak, and SUVmean showed an excellent correlation between quantitative datasets reconstructed either with the internal CT from the SPECT/CT scan (Q_INT) or with the external CT from the PET/CT scan (Q_EXT). This is especially important, since it raises confidence in the stability of the method and offers additional options for institutions with SPECT-only scanners when external CT datasets are available. These results are in line with observations previously published by our own working group demonstrating that attenuation correction based on internal and external CT scans for SPECT datasets was equivalent with regard to the quantification of perfusion deficits, scars, and hibernating myocardium.2 Interestingly, mean SUVmean showed a small (Q_INT 2.47 ± .90 vs Q_EXT 2.63 ± .81), albeit significant, difference (P = .031). In contrast to the other SUV definitions, which rely on the hottest voxels, SUVmean depends on the VOI size and thus on the threshold used to delineate the heart (with additional manual corrections, if required). This might lead to variations in VOI size and placement, resulting in the observed significant difference despite excellent correlation curves and good Bland-Altman plots.19 Bland-Altman plots revealed a certain amount of variability between the methods (Q_INT vs Q_EXT) used to determine SUVs. Consequently, as with TPD, at the present time it seems prudent to adhere to one method for repeated or follow-up examinations. Based on these results, in the subsequent parts of our study we focused on SUVmax and SUVpeak for further evaluation of tracer uptake in the patient cohorts with and without TVCAD.

Absolute quantification of myocardial tracer uptake is especially interesting in patients with TVCAD, since uniformly reduced myocardial perfusion in the territories of all three main coronary vessels, referred to as balanced ischemia, might give myocardial perfusion scintigrams normalized to the maximum tracer uptake in the myocardium an unremarkable appearance. Consequently, a severe, potentially life-threatening condition might be misdiagnosed, with possibly devastating consequences. In theory, absolute quantification of myocardial tracer uptake has the potential to overcome this situation.4
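A toy numerical example makes the balanced-ischemia argument concrete: a polar map normalized to its own maximum looks identical for a healthy ventricle and one with uniformly reduced uptake, whereas absolute values separate the two. The numbers below are invented purely for illustration.

```python
import numpy as np

segments = 17  # standard 17-segment left-ventricle model
healthy = np.full(segments, 6.0)            # illustrative segmental SUVs
balanced_ischemia = np.full(segments, 4.0)  # uniformly reduced uptake

for name, uptake in [("healthy", healthy),
                     ("balanced ischemia", balanced_ischemia)]:
    relative = 100.0 * uptake / uptake.max()  # conventional relative normalization
    print(f"{name}: relative map = {relative[0]:.0f}% everywhere, "
          f"global SUVmax = {uptake.max():.1f}")

# Both relative maps read 100% (an unremarkable scintigram), but the
# absolute SUVmax distinguishes the globally hypoperfused ventricle.
```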
Comparing global SUVpeak and SUVmax from rest myocardial perfusion scintigrams in patients with and without TVCAD, we could demonstrate that, even under resting conditions, global absolute myocardial tracer uptake is significantly reduced in patients with severe coronary artery disease, most likely reflecting chronically impaired myocardial perfusion. This is in line with PET-perfusion-based studies demonstrating that impaired myocardium (stunned and hibernating myocardium as well as scar) shows reduced myocardial blood flow under resting conditions as compared to remote myocardium.20 We consider this a particularly important finding, in line with the assumption that impaired myocardial perfusion in severe coronary artery disease might be misdiagnosed in semiquantitative assessment but revealed using SUV-based quantification, possibly leading to a higher sensitivity of the examination and the detection of potentially life-threatening conditions.4 ROC analysis suggests that SUVpeak and SUVmax might be useful markers to differentiate between patients with and without TVCAD with a high sensitivity, which is warranted in the setting of a life-threatening disease. The specificity of 55%, however, would have to be greatly improved; further research is certainly warranted in this regard. Furthermore, these findings might pave the way for future implementations of quantitative SPECT to assess myocardial blood flow and coronary flow reserve, bringing SPECT closer to PET, which can be considered the gold standard in this respect.21

Figure 4. ROC analysis for SUVpeak and SUVmax to differentiate between patients with and without TVCAD; AUC = .75 for SUVpeak and AUC = .73 for SUVmax. Cut-off values optimized for sensitivity and specificity are 4.62 for SUVpeak (sensitivity 90%, specificity 55%) and 5.04 for SUVmax (sensitivity 90%, specificity 55%).

NEW KNOWLEDGE GAINED

In our study, we could demonstrate the feasibility of absolute quantification of myocardial tracer uptake in myocardial perfusion scintigraphy. We could substantiate that the method is stable and principally suitable for routine clinical reporting from quantitative datasets, even though caution should be exerted for therapy monitoring or follow-up studies. Finally, we could show that quantitative SPECT might have the potential to become a valuable tool for the assessment of severe coronary artery disease in a setting where potentially life-threatening conditions might otherwise go undetected.

LIMITATIONS

Our study suffers from several limitations. The number of patients was limited, the study population was restricted to the indication of viability testing, and the patient population was predominantly male. Due to the retrospective nature of the study, its clinical impact is limited and needs to be further elucidated by prospective investigations. While results proved to be relatively consistent between the Q_INT and Q_EXT datasets, inconsistencies arose in the evaluation of SUVmean. These inconsistencies are most likely attributable to somewhat differing VOI sizes, inherent in our methodological approach. Even though the absolute difference in SUVmean between the two methods is so small that its clinical relevance is questionable, this indicates that the method has to be further investigated and most likely refined before routine clinical application can be considered.
Additionally, further research should be conducted with regard to the properties of AC maps derived from different CT scans and their influence on SUV calculation. When we compared TVCAD and non-TVCAD patients with regard to SUVpeak, the non-TVCAD group contained only one patient with native coronary arteries; 9 patients had single-vessel disease and 10 patients had double-vessel disease. This was due to the nature of patient recruitment for the study: all patients were referred to our ward for viability imaging and as such usually had an extended history of CAD. The chance of finding completely unaffected coronary vessels was low, and these patients might not pose the optimal reference cohort. The differences in SUVpeak might be even more pronounced if TVCAD patients were compared to a cohort of non-TVCAD patients with completely native coronary vessels. Finally, within the scope of this study, only rest perfusion scans were analyzed.

CONCLUSION

We could demonstrate that quantitative myocardial perfusion SPECT is able to detect reduced absolute myocardial tracer uptake in the presence of TVCAD as compared to the expected uptake in a reference cohort without TVCAD. This reduced tracer uptake might go undetected in semiquantitative assessment. At the same time, the ability to assess established routine parameters is preserved in the newly reconstructed quantitative datasets. Thus, our study might pave the way for future prospective clinical investigations.
2021-08-03T13:49:14.698Z
2021-08-02T00:00:00.000
{ "year": 2021, "sha1": "ccf11c5100defeafae078071972d654bfcf56559", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s12350-021-02735-2.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "ccf11c5100defeafae078071972d654bfcf56559", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
204435200
pes2o/s2orc
v3-fos-license
Agricultural Sustainability: A Review of Concepts and Methods

This paper presents a methodological framework for the systematic literature review of agricultural sustainability studies. The framework synthesizes all the available literature review criteria and introduces a two-level analysis facilitating systematization, data mining, and methodology analysis. The framework was implemented for the systematic literature review of 38 crop agricultural sustainability assessment studies at farm level for the last decade. The investigation of the methodologies used is of particular importance since there are no standards or norms for the sustainability assessment of farming practices. The chronological analysis revealed that the scientific community's interest in agricultural sustainability has increased in the last three years. The most used methods include indicator-based tools, frameworks, and indexes, followed by multicriteria methods. In the reviewed studies, stakeholder participation proved crucial in the determination of the level of sustainability. It should also be mentioned that combinational use of methodologies is often observed; thus, a clear distinction of methodologies is not always possible.

Introduction

The world's population is rapidly increasing and, according to the most recent projections, it is expected to reach 9.8 billion in 2050 and 11.2 billion in 2100 [1]. To that end, the planet should be ready to cope with the expected rapid population growth. Producing and delivering adequate, high-quality food will be one of the most important challenges for humanity in the next century [2]. The evolution of technology has led to the intensification of agricultural production, increasing productivity and (in most cases) the quality of agri-products as well. However, this intensification has significantly increased the environmental footprint of agriculture, leading to a number of environmental impacts associated with the extensive use of fertilizers, pesticides, water, changes in land use, etc. [3]. The environmental issues related to agriculture have drawn the attention of the scientific community, which is now turning towards exploring the definition of agricultural sustainability without having yet reached consensus [4,5]. Undoubtedly, defining agricultural sustainability, as with every other sustainability concept, is a challenging task. Nevertheless, it is commonly agreed that agricultural sustainability should at least address the three basic pillars of sustainable development by simultaneously appraising the environmental, economic, and social issues related to agricultural practices [6]. However, the sustainability assessment of agricultural practices, in general, can be a very challenging task since it involves many case-specific variables to be taken into consideration. Figure 1 presents various processes, inputs, and outputs involved in agricultural production, demonstrating the difficulty and complexity of generalizing the sustainability assessment process. There are general cultivation guidelines and corresponding operation stages for almost all crops (e.g., seeding, irrigation, and harvesting). However, the agronomic practice, the machinery types, the technology level, as well as the quantities and types of materials used may vary, depending on the type of crop, the implementation practice, the country (even the region of the cultivation), and the prevailing climatic conditions.
All of the aforementioned parameters affect the cultivation process and the respective inflows and outflows. It is obvious that the standardization of agricultural sustainability assessment is a challenging task. Considering the growing interest in assessing the sustainability issues related to agriculture, several tools and methodologies have been developed [7,8]. Among those tools, some have gained greater acceptance and are widely used by the majority of practitioners worldwide, such as life cycle assessment (LCA), which is standardized by ISO in ISO 14040:2006 and ISO 14044:2006 [9]. In addition, many indicator-based methods have been developed for the sustainability assessment of agricultural practices that use different approaches with regard to the overall objective, the intended users, and the definition of agricultural sustainability they employ [4].

Considering what was mentioned above and that there is not yet an established standardized methodology, it is very important for anyone attempting to assess agricultural sustainability to have an overview of the available and most commonly used methodologies and tools for that scope. As a result, there is a need for a methodological framework that will help practitioners to evaluate the existing available tools and methods in order to select the appropriate one for each specific task. To that end, the present paper has a two-fold objective:

• To determine the evaluation criteria to systematically review agricultural sustainability assessment studies. To that end, several review papers were selected based on specific selection criteria and examined to determine the goal as well as the individual evaluation criteria adopted in each review.
The ultimate goal is to critically synthesize a methodological framework for the systematic recording and evaluation of available agricultural sustainability assessment studies. Such systematic documentation can facilitate the comparison among the available studies as well as the development of a standard methodological framework for the sustainability assessment of agriculture.

• To implement the proposed methodology by investigating the available and most commonly used methodologies to assess the sustainability of crop cultivations at the farm level.
The methodological framework is applied to 38 agricultural sustainability studies published in peer-reviewed journals in the last decade (2009-2018).

Research Design

The evaluation process implemented to assess and select the criteria needed for the methodological framework of the systematic review on agricultural sustainability studies is presented in Figure 2. Initially, scientific literature published in Science Direct and Scopus was searched using specific keywords and Boolean operators (AND/OR). The keywords were selected with respect to the integrated concept of "sustainability assessment", as well as the individual processes it consists of, namely "environmental assessment", "economic assessment", and "societal assessment" (or "social assessment"), combined with the keywords agriculture/farming using the Boolean operator AND to exclude results that are not relevant to the field under examination. It should be added that the concept of "agricultural sustainability" was also included in the search. The first sample of scientific papers that resulted from the initial search included 55 papers from peer-reviewed scientific journals. These papers were put through a screening process considering the specific exclusion criteria presented in Figure 2. Specifically, studies that were not related to agriculture, and especially those focused on alternative agricultural processes, were excluded. As a result, papers exclusively focused on aquaculture or organic farming studies, biofuels and biorefineries, as well as reviews of studies comparing agronomic protocols, were excluded from the present assessment. Additionally, review studies regarding soil quality, land management, and food processing systems, as well as discussions that did not specifically define the methods of the review conducted, were excluded. At this point, it should also be stated that, in the context of agricultural sustainability studies, livestock farming was included in the search.

The final paper collection comprises 16 review papers or studies that assess agricultural sustainability. It should be noted that, compared with other scientific fields (for example, the secondary production of goods), the literature on studies that consider all three dimensions of sustainability is relatively scarce. To that end, the sample includes studies considering the environmental aspect of agricultural sustainability, which is the one most often studied. The sample was then assessed in two ways, systematically and critically [10]. The systematic way concerns the listing of the papers based on specifically defined criteria [11].
The initial listing criteria in the case of the presented framework include the title and author of the paper, the year of publication, the spatial coverage of the study (global or regional), and the type of review (critical or systematic). Critical reviews are thorough literature works that attempt to evaluate and assess the basic aspects or inputs and document the differences in methodology and implementation of scientific studies in a specific field [11]. In this case, the critical evaluation of the sample concerns the individual analysis of the selected studies with the purpose of extracting the individual evaluation criteria used in each study. The individual criteria with similar context were aggregated in a general table of criteria. Then, each paper was systematically reviewed as to whether each criterion was included in the review. The resulting table is a comprehensive overview of the issues most frequently examined in a review study. The criteria that were used the most are the criteria that should be integrated in the methodological framework for the systematic review of agricultural sustainability studies. The rule followed in the present paper was to exclude criteria that were used in fewer than four papers. What follows is the presentation of the sample and the criteria frequency table, along with a critical assessment of the sample used for the evaluation.

Systematic Approach

The 16 review papers that were extracted by the implementation of the first steps of the methodology presented in the previous section are listed in Table 1, along with their classification with respect to type and spatial coverage. As presented in Figure 3, during 2016-2017 the number of review papers increased, indicating a boosted interest in the sustainability of agricultural practices. Payraudeau et al. (2005) first analyzed and systematically reviewed six (6) agricultural sustainability methods employed in eleven (11) case studies, indicating the variety of objectives, target groups, and methodologies used [20]. Bockstaller et al. (2008) followed by presenting a typology of indicators and the evolution of the methods used for their advancement [19] and, in 2009, critically evaluated four (4) comparative studies to analyze the methods of the comparison, highlighting their main results [23]. Also focusing on indicators, Binder et al. (2010) presented an evaluation review framework that was used to review agricultural sustainability methods [4]. The framework assessed the normative, systematic, and procedural aspects of the methods under evaluation.

Regarding the types of review papers and their classification as systematic or critical according to the definitions presented in the previous section [11], it is observed that, in principle, both categories are equally preferred by researchers. However, in some cases the distinction is not clear, or a systematic and a critical review are performed at the same time. One such example is the work of De Luca et al. (2017), where the authors performed a critical and systematic review to determine, among other issues, which Multi Criteria Decision Analysis (MCDA) and participatory methods have been used along with LCA tools and the type of integration used in each case [10]. Also, Baldini et al. (2017) critically reviewed forty-four (44) LCA studies on milk production and systematically compared their methods and results to highlight issues requiring further discussion and investigation [15].
Considering the selected sample, it can be stated that in most cases systematic reviews are used in order to compare methodologies and results regarding a specific field of agricultural application. One such work conducted a chronological review of LCA studies in pig production, attempting to demonstrate how LCA has captured technological advancements in the field as well as the methodological issues observed [17]. On the contrary, the majority of the reviews that were characterized as critical deal with the evaluation of indicator-based methods or the classification of agricultural sustainability indicators, such as the work of Acosta-Alba et al. (2011), who reviewed eight (8) agricultural sustainability frameworks that use reference values for their indicators, analyzed the methods for the establishment of the reference values, and investigated ways for their improvement [14]. Latruffe et al. (2016) provided a review of the available agricultural sustainability indicators, highlighting the relatively high increase in environmental indicators as compared with the smaller interest in economic and social indicators [18]. Finally, Lebacq et al. (2013) reviewed the types of sustainability indicators and proposed indicative ground rules for the selection of agricultural sustainability indicators [22].

With respect to the spatial coverage of the reviews (Figure 4), the majority deal with studies from all around the world. Nevertheless, there are reviews assessing studies in specific countries or regions. For example, Roy et al. (2012), based on a systematic review and synthesis, present a set of indicators that could be used to assess agricultural sustainability in Bangladesh, highlighting the need for integrated approaches and participatory processes during agricultural sustainability assessment [13].
Additionally, Morais et al. (2016) systematically reviewed twenty-two (22) agri-food-dedicated LCA studies in Portugal, revealing issues regarding the challenges faced and the lack of a systematic regional approach in the country that could safeguard the accuracy and comparability of the results [16]. Lastly, Yan et al. (2011) reviewed thirteen (13) LCA studies on European milk production, indicating that direct comparison is challenging due to inconsistency regarding the used methodologies [9].

Critical Approach

The selected sample, which was thoroughly described in the previous section, was screened to extract the individual evaluation criteria used during each review. As some criteria had the same objective or were of the same context, they were categorized accordingly. Also, some studies further analyzed the criteria, including various subcriteria, but this is out of the scope of this paper since it is an issue related to the scrutiny of the review each author aims to achieve and the corresponding scope. Figure 5 presents the criteria identified during the screening process and the frequency of their occurrence. A total of forty-four (44) different criteria were used in the sixteen (16) studies reviewed. The review criteria frequency table is presented in detail in Appendix A (Table A1).
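The screening logic described above (tally how many reviews use each criterion and keep those at or above the exclusion threshold) can be expressed in a few lines. The sketch below is illustrative only; the criterion names are a small invented subset of the forty-four actually identified.

```python
from collections import Counter

# Hypothetical mapping: paper -> criteria it uses (a tiny illustrative subset)
papers = {
    "review_01": ["method description", "field of application", "target user"],
    "review_02": ["method description", "country", "indicators"],
    "review_03": ["method description", "field of application", "indicators"],
    "review_04": ["method description", "field of application", "country",
                  "indicators"],
    "review_05": ["method description", "country", "target user", "indicators"],
}

MIN_PAPERS = 4  # exclude criteria used in fewer than four papers

frequency = Counter(c for criteria in papers.values() for c in criteria)
retained = [c for c, n in frequency.most_common() if n >= MIN_PAPERS]
print(frequency)   # full frequency table (cf. Appendix A, Table A1)
print(retained)    # criteria that enter the methodological framework
```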
The first six criteria (beginning from the top of Figure 5) were common in most of the reviews examined and include the name and description of the assessment method or tool, the field of application, the country of application, and the year of issuing. The literature typology concerns the type of document reviewed. For example, De Luca et al. (2017) classified the selected publications into three categories (journal article, book chapter, and conference proceedings paper) [10]. Baldini et al. (2017), on the other hand, refer to publication types, classifying the sample according to whether the literature is an original article, a review, a research direction, or a scenario analysis [15]. Regarding system boundaries, some works [7,15] followed a cradle-to-gate or cradle-to-market approach, whereas de Vries et al. (2015) reviewed studies covering at least cradle-to-farm-gate [21]. Lastly, Peter et al. (2017) examined both the level of assessment (global, regional, etc.) and the system boundaries (farm-gate or farm-gate-to-grave) of the studies they review [12].

The issue of the intended user of a method or tool is considered in several of the studies reviewed. Binder et al. (2010) identified the target group of the examined methodologies [4], whereas De Luca et al. (2017) referred to this criterion as the actors involved in the assessment process (i.e., local experts, scientists, workers, etc.) [10]. Bockstaller et al. (2008) classified the reviewed works according to the target user of the method reviewed, i.e., decision-maker, researcher, technician, or farmer [19]. Considering the type and accessibility of data criteria, Baldini et al. (2017) distinguish between experimental and model data [15]. The accessibility of data (or availability, as expressed by Roy et al. 2012 [13]) is examined by Bockstaller et al. (2008) for three user groups: farmers, advisors, and administration [19]. With reference to the name and type of the indicators reviewed, many approaches were identified during the screening process, such as that of Lebacq et al. [10].

Based on the rule set in the methodology section, the red line in Figure 5 presents the criteria exclusion threshold: only criteria identified at least four times in the sample reviewed are included in the methodological approach for the systematic review of agricultural sustainability studies. A total of eighteen (18) criteria surpassed the exclusion threshold. These criteria are classified into groups with respect to their context and are presented in the subsequent section.

Methodological Framework Presentation

Following the criteria determination process described in the previous sections, Figure 6 presents the critical synthesis to systematically review agricultural sustainability related studies. The proposed methodological framework is based on a series of criteria divided into five (5) underlying categories. The first two categories refer to the initial screening stage. During this preliminary stage, the studies are assessed to determine whether each study will be included in the sample on the basis of the case-specific exclusion criteria determined with regard to the scope of the review. The initial screening stage includes two categories of criteria (i.e., "method identification" and "general information") with respect to the basic description of each study.
The general information of a study concerns the year of publication, the type of literature (journal article, conference proceedings paper, book chapter, technical report, etc.), and the country in which the study was conducted. The method identification category includes criteria that deal with the assessment method developed or employed. The criterion description of the assessment tool characterizes the method or tool presented according to whether it is a presentation of a new methodology, the application of an existing method or tool, or a combination, namely a new methodology implemented with an application example. The last criterion is the level of the assessment performed, i.e., global, national, regional, or farm level, according to the approach introduced by Gomez-Limon et al. (2010) [24]. After the initial assessment phase is completed and the sample to be reviewed is finalized, the in-depth review stage follows. For this stage, three (3) categories of criteria have been defined.

The first category of criteria assesses the scope of the studies reviewed. The first criterion is the identification of the goal (or objective) of each assessment, so that it is feasible to perform comparative reviews among studies with the same objective. For that purpose, following the definition of Gaviglio et al. (2017), the papers are classified according to whether a method is "goal prescribing" or "system describing" [25]. Other criteria proposed concern the determination of the target user, as well as the functional unit and the time dimension of the assessment. The second category refers to the identification of impacts, starting with the definition of the sustainability dimension examined in each study and continuing with the documentation of the impacts considered during the assessment, expressed in indicators (name and type). The last category concerns the data and the calculation methods used for the assessment. The criterion type of data examines whether the data used are model or experimental data. Furthermore, to examine the accessibility of data, the present study refers to the definition of Angevin et al. (2017) [26]. Therefore, depending on the data used, the assessment can be characterized as ex ante (indicating expectation and uncertainty) when focusing on assessing a new scenario, or as ex post (indicating the processing of actual field data) when examining a current situation [26]. Additionally, for each study reviewed, the validation and aggregation methods should be examined too.

The proposed methodological framework aims at facilitating the comparison among studies in order to capture the research advancements and current practices in the field under examination. This is an issue of particular importance since the assessment of agricultural sustainability is not a standardized process and entails a plethora of different methods, tools, and frameworks that assess a large number of different indicators representing an analogously large number of different impacts. Prior to designing any assessment model, an exhaustive review is mandatory to safeguard consistency and relevance with other works. Also, the systematic documentation of the advancements in the field is the only way to begin constructing a unified, commonly accepted methodology for agricultural sustainability assessment.
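To make the five-category structure tangible, the framework can be encoded as a simple data structure against which each reviewed paper is documented. The mapping below is a condensed, illustrative rendering of the categories and criteria described above, not an exhaustive transcription of Figure 6.

```python
# Illustrative encoding of the review framework's five criteria categories.
FRAMEWORK = {
    # Initial screening stage
    "general information": ["year of publication", "literature typology",
                            "country"],
    "method identification": ["description of the assessment tool",
                              "level of assessment"],
    # In-depth review stage
    "scope": ["goal (goal prescribing / system describing)", "target user",
              "functional unit", "time dimension"],
    "identification of impacts": ["sustainability dimension",
                                  "indicators (name and type)"],
    "data and calculation": ["type of data (model / experimental)",
                             "accessibility (ex ante / ex post)",
                             "validation method", "aggregation method"],
}

def document_study(study: dict) -> dict:
    """Record a reviewed paper against every criterion (None = not reported)."""
    return {cat: {crit: study.get(crit) for crit in crits}
            for cat, crits in FRAMEWORK.items()}

example = document_study({"year of publication": 2017,
                          "level of assessment": "farm",
                          "type of data (model / experimental)": "experimental"})
print(example["method identification"])
```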
Search Scheme
The methodology presented above was used to investigate the available and most widely used methodologies for assessing the sustainability of crop cultivations at the farm level. The review begins with the collection of the initial sample of papers by searching within the most acknowledged databases, specifically Scopus and Science Direct. The search scheme is based on specific keywords and their combinations, as presented in Table 2, and the use of Boolean operators (OR and AND) to increase the efficiency of the search. The initial search resulted in 959 papers containing the keywords searched. The initial sample was then screened based on the inclusion/exclusion criteria of Table 2. This secondary assessment resulted in 387 papers, which were then reviewed against the initial screening criteria (Figure 6).
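For reproducibility, the Boolean search scheme can be expressed programmatically. The sketch below assembles an advanced-search query string of the kind accepted by Scopus; the keyword groups are illustrative stand-ins for the actual entries of Table 2.

```python
# Hypothetical keyword groups standing in for Table 2: terms within a group
# are combined with OR, and the groups are joined with AND, as described above.
keyword_groups = [
    ["agricultural sustainability", "sustainable agriculture"],
    ["assessment", "indicator", "tool", "framework"],
    ["farm level", "crop"],
]

def build_query(groups):
    """Build a Scopus-style TITLE-ABS-KEY query from keyword groups."""
    clauses = ["(" + " OR ".join(f'"{kw}"' for kw in group) + ")"
               for group in groups]
    return "TITLE-ABS-KEY(" + " AND ".join(clauses) + ")"

print(build_query(keyword_groups))
```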
As the purpose of this review is to examine studies assessing crop agricultural sustainability at the farm level, the 387-paper sample was filtered to select the peer-reviewed journal articles that fulfilled the following criteria: (a) examine all three pillars of sustainability (environmental, economic, and social).
Initial Screening
As presented in the previous section, 387 papers were reviewed in the initial screening stage. The filtering of the reviewed sample according to the scope of the review under study resulted in 38 peer-reviewed journal articles. This section presents the initial systematic review of the 38-paper sample with the use of descriptive statistics to gain further insight into the general information derived from the reviewed sample. With respect to the general information, the largest share of papers (21%) was issued in 2017, whereas only two papers (5%) fitting the review criteria were published in 2012, 2011, and 2010 [27]. However, it is worth noting that 45% of the examined papers were issued during the last three years (2016-2018), indicating a boost in the scientific community's interest regarding integrated sustainability assessment (Figure 7). Regarding the geographical origin, as presented in Figure 7, half of the assessments were performed in Europe (50%), whereas 16% were performed in Asia. Additionally, only three out of 38 assessments were performed in North America. With respect to the literature typology of the studies reviewed, as mentioned before, only peer-reviewed journal articles were included in the reviewed sample.
Regarding the method identification category, Table 3 presents all the methods and tools that were identified during the review process (the nomenclature is presented in Appendix A). All of the relevant methods will be presented in detail later. In the majority of the papers examined (66%), the methods or tools presented are also tested in practice through relevant examples (case studies). In 18% of the papers, an already existing methodology was applied and presented, while 16% of the papers presented a methodology without testing it in practice. Continuing with the level of assessment, in 79% of the works examined, the assessment was performed exclusively at the farm level, whereas for 21% of the works, the level of assessment was broadened beyond the farm level by examining local, regional, or national sustainability. The most frequently examined crops are maize and wheat (examined in five cases each), followed by olive, spinach, and rice (examined in two case studies each). The other crops examined in the papers reviewed included legumes, lettuce, scallions, red radish, banana, soybean, grapes, cranberry, potato, and coffee. Additionally, different agronomic practices are examined, for example organic farms [28], greenhouse cultivations [29], and school gardens [30].
In-Depth Review
This section presents the systematic review results against the in-depth review criteria, beginning with the scope criteria category (Tables A2 and A3 of Appendix A). Regarding the goal of the assessment, 61% of the examined studies are system describing, whereas the remaining 39% attempt to identify and evaluate policies and techniques that could be used to improve agricultural sustainability performance. Regarding the target users of the methodologies proposed, the majority of the examined works are aimed at decision-makers, farmers, and researchers. More specifically, 40% of the studies identify decision-makers as their target users, whereas 26% aim at farmers and 21% aim at researchers. Continuing, only three (3) studies explicitly report a functional unit; two of these preferred functional units related to the weight of the final product ("kg of un-/packed fresh product at the point of sale (POS)" and "1 tn fresh weight standardized to 86% dry matter", respectively) [36,43]. Concerning the criterion of the time dimension, in several studies the assessment was performed for a single-year period [25,28,29,33,36,41,51,60]. However, there are also studies, such as Snapp et al., that perform the assessment over a range of years [43].
Regarding the impact identification category, as described above, the research scope contains only studies that attempt to examine all three dimensions of sustainability, namely the environmental, economic, and social pillars, contributing towards an integrated sustainability assessment. During the extensive review, all of the individual impacts (expressed as indicators) that were examined within the reviewed studies were extracted and documented. However, a further thorough classification of, and commentary on, the individual indicators used goes beyond the limits of this analysis and has already been investigated in several review studies in the past [4,13,18,19,22]. With respect to the data and calculation method category of criteria, for 82% of the papers examined a validation process is not mentioned; only 18% of the papers describe a validation process for the proposed methodologies. On the other hand, 74% of the studies mention the use of an aggregation technique or methodology aiming at the simplification and generalization of the results. Regarding the type of data used for the assessments performed (Figure 8), the majority use experimental data (68%), whereas a small percentage of works (18.4%) employ only model data for the sustainability assessment. Accordingly, 58% are ex post assessments attempting to evaluate current practices, whereas in 31.6% of the papers the evaluation of prediction scenarios is attempted.
Agricultural Sustainability Methods and Tools
In the previous sections, a descriptive qualitative analysis of the review criteria was presented. The aim was to examine the research trend of crop agricultural sustainability and, specifically, the trends in the criteria concerning the scope and the calculation methods used. In this section, the methodologies and tools extracted as a result of the review conducted are presented. Figure 9 demonstrates the methods and tools identified and the corresponding frequency of occurrence. These methods and tools were classified into five major categories based on the main scope of the assessment (as expressed by the authors), underlining the fact that the categories selected may overlap as part of the overall concept. A distinctive example is MCDA, which is used to facilitate the assessment of multivariate problems that are expressed with indicators. Nevertheless, the scope of studies employing MCDA methods focuses on the aggregation of the results, while methods proposing indicator sets and indexes focus on determining the criteria of the assessment. Another example is the carbon footprint (CF), an indicator that is often encountered in indicator sets and frameworks; nevertheless, it is also a very commonly used standalone methodology for environmental impact assessment. In this classification, LCA methods relate to the life cycle of the examined element; environmental methods relate to the quantification of the environmental impact of the examined element; economic methods refer to the use of financial methods in the impact assessment; multicriteria methods employ multicriteria assessment for the evaluation of agricultural sustainability; and indicator methods include indicator sets and frameworks for the assessment of agricultural sustainability.
With respect to the individual methodologies that were identified, the term "indicators" refers to all those methodologies that were not given a specific name by their developers. In one of the studies reviewed, the authors combined a series of tools to evaluate the three pillars of sustainability, namely LCA for the environmental pillar, LCC for the economic pillar, and SLCA for the societal pillar, and integrated their results by employing the AHP method for multicriteria analysis [31].
From the economic methods category, Van Passel et al. (2009) proposed a methodological framework based on the sustainable value approach (SVA) to assess sustainability at the farm production level [58]. Van Passel et al. employed the SVA method attempting to relate farm performance to resource consumption. The work represents a benchmarking approach, since it does not focus on the evaluation of sustainability in absolute terms but assesses performance against standards [58]. Van Passel et al. (2011) stated that, to perform multilevel and multi-user assessments, a combination of methodologies can offer more advantages than integrated methodologies [53]. To that end, the SVA method was combined with the MOTIFS indicator tool. According to Van Passel et al. (2011), MOTIFS is a visual monitoring tool used for the aggregation of indicators of various themes, which creates benchmarks for the rescaling of the indicator values [53].
Multicriteria Assessment Methods and Tools
Within the multicriteria assessment methods that are used for assessing agricultural sustainability, the works examined can be classified into groups that employ and develop the same methodological framework. Such groups are the studies that use the MASC decision model developed by Sadok et al. (2009), which was built as part of the decision support system DEXi [59]. The MASC model is a hierarchical multiattribute decision support model designed for the ex ante assessment of cropping systems to address the need for in-field alternative scenario evaluation. Such models allow for the simplification of the decision problem by downscaling it to smaller and less complex problems expressed by designated variables [59]. The DEX methodology performs aggregation of qualitative attributes and utility functions using "IF-THEN" aggregation rules [59]. Colomb et al., among others, built on this approach [26]. Vasileiadis et al. (2017) examined IPM-based systems designed and tested in nine (9) locations in Europe [38]. They compared the sustainability of the examined systems, discussing the benefits and drawbacks of the IPM systems, and also adopted methodologies from the environmental and economic categories: economic data were collected from participants with the use of a template to perform a cost-benefit analysis (CBA), and an environmental risk assessment was performed by implementing the SYNOPS-WEB tool [38]. Lastly, Chopin et al. (2017) adapted the MASC model in order to perform an ex ante assessment of the sustainability of local banana farming systems [37]. Multicriteria methods facilitate decision making while considering multiple variables, and such methods use weighting techniques in order to produce composite indices [24]. Among the studies examined, the most frequently used methods include principal component analysis (PCA) [40,46].
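To illustrate the kind of qualitative "IF-THEN" aggregation performed by DEX-style models such as MASC, the following Python sketch aggregates two qualitative attributes into one parent attribute. The rule table and attribute names are made-up examples, not MASC's actual rules.

```python
# A made-up DEX-style rule table: qualitative child attributes are mapped
# to a qualitative parent attribute via exhaustive IF-THEN rules.
SCALE = ("low", "medium", "high")

rules = {
    ("low", "low"): "low",       ("low", "medium"): "low",
    ("low", "high"): "medium",   ("medium", "low"): "low",
    ("medium", "medium"): "medium", ("medium", "high"): "high",
    ("high", "low"): "medium",   ("high", "medium"): "high",
    ("high", "high"): "high",
}

def aggregate(soil_quality: str, water_management: str) -> str:
    """IF soil_quality is X AND water_management is Y THEN parent is rules[X, Y]."""
    assert soil_quality in SCALE and water_management in SCALE
    return rules[(soil_quality, water_management)]

print(aggregate("medium", "high"))  # -> "high"
```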
Concluding with the multicriteria method category, Siciliano et al. (2009) used the social multicriteria evaluation (SMCE) framework, which was implemented through the NAIADE (novel approach to imprecise assessment and decision environments) software, to assess the sustainability of farming practices in a small rural area in Italy [60]. Egea et al. (2016) employed the analytic hierarchy process (AHP) in order to investigate the combination of protected designation of origin oil production systems that leads to optimal sustainability [39]. Bockstaller et al. (2017) introduced the CONTRA tool, an innovative aggregation method that leads to the creation of decision trees using fuzzy sets [34]. Peano et al. (2014) proposed a multicriteria methodology to evaluate the effectiveness of the Slow Food Presidia, which are organized structures aiming at the preservation of quality production at risk of extinction by following specific guidelines and protocols for each product category [48].
Indicator Sets, Indexes, and Frameworks
This category of methods and tools contains indicator sets, indexes, and frameworks that were used in the reviewed works to assess agricultural sustainability at the farm level. Walter et al. (2009a; 2009b) propose a new indicator-based method to assess the unsustainability of a system rather than its sustainability [62]. Their method borrows elements of the LCA methodology and was implemented in two stages: the first stage includes the creation of an issue inventory and its contextualization, while the second stage includes the standardization and sustainability valuation process [61,62]. Rodriguez et al. (2010) proposed the APOIA-NovoRural framework, which comprises a collection of basic and composite indicators covering five dimensions of sustainability: landscape ecology, environmental quality, sociocultural values, economic values, and management and administration [56]. Sharma et al. (2011) introduced a methodology based on questionnaires and surveys and composed an agricultural sustainability index (ASI) targeted at Bihar province (India), also calculating the sustainability parameters for a 60-year period [55]. Sami et al. (2013) selected six indicators that were considered appropriate to assess sustainability in a regional context; additionally, in order to evaluate some of these indicators, they used a selection of fuzzy submodels [52]. Van Asselt et al. (2014) proposed a protocol for the collection and evaluation of indicators for the sustainability assessment of agri-food production systems [49]. Their proposed list covers a wide range of indicators related to the three pillars of sustainability, aiming at supporting policy makers in decision making by choosing the most relevant indicators. Yegbemey et al. (2014) proposed an innovative participatory approach that resulted in seventeen (17) indicators. All relevant data were collected through a household survey. Sustainability was evaluated with relative scores, while the total sustainability level was based on the average scores of the individual indicators [47]. Peano et al. (2015) proposed the SAEMETH monitoring tool based on a set of qualitative indicators. The selection of the indicators was based on the criteria introduced by Meul et al. (2008) [63] and, for their evaluation, minimum and maximum thresholds were set based on reference values derived from best practices or through surveys [45].
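Several of the indicator-based tools above share a common computational core: indicator values are normalized against minimum and maximum reference thresholds and then aggregated into a composite score. The sketch below is a generic, hypothetical illustration of that pattern (the values, thresholds, and weights are placeholders), not a reimplementation of SAEMETH, RISE, or any other specific tool.

```python
def normalize(value: float, vmin: float, vmax: float) -> float:
    """Min-max normalize an indicator value onto [0, 1], clipping outliers."""
    score = (value - vmin) / (vmax - vmin)
    return max(0.0, min(1.0, score))

def composite_index(values, thresholds, weights=None):
    """Weighted average of normalized indicator scores (equal weights by default)."""
    scores = [normalize(v, lo, hi) for v, (lo, hi) in zip(values, thresholds)]
    if weights is None:
        weights = [1.0 / len(scores)] * len(scores)
    return sum(w * s for w, s in zip(weights, scores))

# Placeholder indicator values and (min, max) reference thresholds.
values = [35.0, 0.8, 120.0]               # e.g., yield, input efficiency, labor hours
thresholds = [(0, 50), (0, 1), (200, 0)]  # inverted range rewards *lower* labor hours

print(round(composite_index(values, thresholds), 3))
```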
Santiago-Brown et al. (2015) presented the process for selecting indicators to assess viticulture production sustainability. For the selection of the indicators, the adapted nominal group technique was used. The selected indicators were reduced according to their relevance, resulting in seventy-six (76) indicators hierarchized based on their importance [44]. Allahyari et al. (2016) selected five hundred and eighty-eight (588) indicators through an extensive literature review. After removing duplicates and prioritizing, the sample was reduced to 62 indicators, which were used in an extensive survey among experts. The indicators were assessed based on their importance, while the resulting data were assessed with the Minkowski fuzzy screening method [42]. Sajjad et al. (2016) examined agricultural sustainability at the farm and regional scales using the sustainable livelihood security index (SLSI) [41]. Yang et al. (2016) assessed the sustainability of greenhouse vegetables using indicators; more specifically, to examine the greenhouse vegetable farming practices and the economic and social management conditions, they used rapid and participatory rural appraisal (RRA/PRA) tools combined with data derived from in-field measurements and parallel surveys [29]. In 2016, de Olde et al. employed the sustainability assessment tool named response-inducing sustainability evaluation (RISE), which was implemented for the evaluation of organic farms in Denmark. The tool contains indicators for a total of 10 themes and 51 subthemes; the indicators were normalized and aggregated, and each theme was evaluated based on the average score of the relevant subthemes [28]. Goswami et al. (2017) integrated the sustainable livelihood (SL) and the drivers-pressures-state-impact-response (DPSIR) frameworks, proposing a small farm sustainability index (SFSI) that could address the complexity of smallholder family farms under a participatory approach [35]. The proposed framework assesses sustainability at multiple levels, assigning the relevant weights and resulting in the creation of an aggregated index for the entire system. They indicate that the introduction of ICT technologies in agriculture (web-based platforms, wireless sensors, etc.) can facilitate data sharing among stakeholders and provide the basis for assessing the sustainability of farming systems. Recanati et al. (2017) proposed an indicator-based framework for the assessment of the sustainability of small-scale farming systems in water-limited regions; they implemented the framework by modeling an "average" farm based on a survey among 30 farmers [33]. Gaviglio et al. (2017), attempting to integrate various analytical techniques, introduced the 4AGRO tool, an online self-assessment tool based on indicators. It consists of 42 subindicators that are divided into 15 complex indicators, five for each pillar of sustainability [25]. The tool was demonstrated in an agricultural park in Italy. Finally, Snapp et al. (2018) proposed a methodology based on indicators derived through a participatory approach involving a steering committee with multidisciplinary participants from eight (8) institutions [32]. The indicators were normalized based on maximum possible values.
Conclusions
To meet the ever-increasing interest in agricultural sustainability, many methodologies and tools have emerged, introducing integrated and holistic assessment approaches. However, there is still no consensus on the standardization of agricultural sustainability assessment as part of a unified concept of sustainable development.
Newly introduced frameworks propose mostly case-specific tools that focus on resource use and its impact on the sustainability of farming practices. Combined use of methodologies is observed in many cases; thus, a clear distinction between methodologies is not always possible. Contributing towards the indexing of the available methodologies, the present paper presented a methodological framework for the systematic literature review of agricultural sustainability studies. The framework synthesizes all the available literature review criteria and introduces a two-level analysis facilitating systematization, data mining, and methodology extraction. The framework was implemented for the systematic literature review of crop agricultural sustainability assessment studies at the farm level over the last decade. The investigation of the methodologies used is of particular importance, since there are no standards or norms for the sustainability assessment of farming practices. The chronological analysis revealed that the scientific community's interest in agricultural sustainability has been increasing during the last three (3) years, indicating a tendency to gradually progress from the theory of economic growth to the more comprehensive and inclusive concept of sustainable development. Nevertheless, the critical evaluation of the effectiveness and the implications of the methods presented is outside the scope of the present work and is the subject of thorough future research. The most used methods include indicator-based tools, frameworks, and indexes, followed by multicriteria methods. In the reviewed studies, stakeholder participation has proved crucial in the determination of the level of sustainability. However, a systematic assessment of the contribution of agricultural machinery and operations management to overall sustainability was not detected in the examined studies; the effect of resource use and input management is the most commonly examined issue in the reviewed studies.
Comparison between dedicated MRI and symphyseal fluoroscopic guided contrast agent injection in the diagnosis of cleft sign in athletic groin pain and association with pelvic ring instability
Objective: To compare dedicated MRI with targeted fluoroscopy-guided symphyseal contrast agent injection regarding the assessment of symphyseal cleft signs in men with athletic groin pain, and to assess radiographic pelvic ring instability.
Methods: Sixty-six athletic men were prospectively included after an initial clinical examination by an experienced surgeon using a standardized procedure. Diagnostic fluoroscopic symphyseal injection of a contrast agent was performed. Additionally, standing single-leg stance radiography and a dedicated 3-Tesla MRI protocol were employed. The presence of cleft injuries (superior, secondary, combined, atypical) and osteitis pubis was recorded.
Results: Symphyseal bone marrow edema (BME) was present in 50 patients, bilaterally in 41 patients and in 28 with an asymmetrical distribution. Comparison of MRI and symphysography was as follows: no clefts, 14 cases (MRI) vs. 24 cases (symphysography); isolated superior cleft sign, 13 vs. 10; isolated secondary cleft sign, 15 vs. 21; and combined injuries, 18 vs. 11. In 7 cases a combined cleft sign was observed in MRI but only an isolated secondary cleft sign was visible in symphysography. Anterior pelvic ring instability was observed in 25 patients and was linked to a cleft sign in 23 cases (7 superior cleft signs, 8 secondary cleft signs, 6 combined clefts, 2 atypical cleft injuries). Additional BME could be diagnosed in 18 of those 23.
Conclusion: Dedicated 3-Tesla MRI outmatches symphysography for purely diagnostic purposes regarding cleft injuries. Microtearing at the prepubic aponeurotic complex and the presence of BME appear to be prerequisites for the development of anterior pelvic ring instability.
Clinical relevance statement: For the diagnosis of symphyseal cleft injuries, dedicated 3-T MRI protocols outmatch fluoroscopic symphysography. Prior specific clinical examination is highly beneficial, and additional flamingo-view X-rays are recommended for the assessment of pelvic ring instability in these patients.
Key Points
• Assessment of symphyseal cleft injuries is more accurate by use of dedicated MRI as compared to fluoroscopic symphysography.
• Additional fluoroscopy may be important for therapeutic injections.
• The presence of cleft injury might be a prerequisite for the development of pelvic ring instability.
Abbreviations
BME: Bone marrow edema
cor: Coronal
fs: Fat saturation
NSAIDs: Non-steroidal anti-inflammatory drugs
paratra: Paratransversal
PLAC: Pyramidalis-anterior pubic ligament-adductor longus complex
tirm: Turbo inversion recovery magnitude
tse: Turbo spin echo
Introduction
Athletic groin pain represents a wide spectrum of possible underlying pathologies that often occur in response to chronic repetitive stress applied to healthy bone. Due to the anatomical complexity of the groin area, a variety of causes of groin pain might influence and delay the exact diagnosis, possibly leading to delayed targeted therapy. Among those conditions causing groin pain in athletes, the incidence of osteitis pubis (OP), a noninfectious inflammation of the pubic bone, has been reported as high as 10-18% of injuries per year in soccer players [1], possibly causing a prolonged absence from sports. Accompanying injury patterns such as secondary and superior clefts might occur and are well recognized in the pathogenesis of groin pain [2,3]. Furthermore, the presence of symphyseal cleft injuries was found to be associated with a delayed return to play [4]. Mechanistically, an increased sporting load exerts considerable stress on the pubic symphysis [5]. In particular, the inherently high mechanical demands of multidirectional sports (e.g., soccer) on the pubic symphysis and its supporting musculoskeletal structures may increase the probability of overuse injuries such as OP.
The diagnostic approach to OP and associated pathologies involves clinical examination, clinical history, sports anamnesis, and imaging techniques. The latter favor the use of magnetic resonance imaging (MRI), which has been shown to reliably depict the pattern of injuries around the pubic symphysis involving the rectus abdominis and the adductor tendon origin [6]. Especially in younger patients, MRI is generally preferred over computed tomography (CT) due to the lack of ionizing radiation and superior imaging of surrounding soft tissue and possible bone inflammation. However, contrast agent injection guided by radiography, fluoroscopy, or CT imaging might be an alternative approach for the diagnosis of symphyseal cleft injuries. In that regard, symphyseal cleft injections might add to the diagnostic yield gained from MRI by possibly identifying the source from which the pain derives [7]. In the case of OP, there is an accumulating body of evidence that clinical symptoms improve after injection of corticosteroids and local anesthetics, which might add important additional information to the diagnostic process [8]. However, given the fact that groin pain is a multifaceted pathology, the diagnostic approach has to consider multiple possible underlying pathologies. Brennan et al [9] compared MRI and radiography with additional contrast agent injection in the diagnosis of the secondary cleft sign and found 100% sensitivity and specificity for both modalities. McArthur et al [10] retrospectively compared symphyseal CT arthrography and MRI and reported CT arthrography to be advantageous for the detection of secondary clefts and tendon tears at the adductor origin as compared to MRI. Well in line with these findings are reports by Hopp et al [11], who found symphysography to be superior to MRI in the detection of symphyseal cleft injuries. Additionally, Murphy et al [3] employed symphysography as the gold standard for the detection of symphyseal cleft injuries. Other studies highlight the role of MRI not only in the diagnostics of groin pain [1] but also in evaluating the prognosis [12].
Therefore, the purpose of this study was to compare a dedicated 3-T MRI protocol with targeted fluoroscopy-guided symphyseal contrast agent injection regarding the assessment of symphyseal cleft injuries in men with athletic groin pain. We further assessed pelvic ring instability by use of radiography ("flamingo view").
Patient selection and demographic characteristics
Sixty-six male sport-active patients were prospectively examined. For each patient, all examinations were done within one session on the same day. All athletes were referred to our clinic by a highly specialized groin surgeon in private practice after an initial clinical examination using a standardized procedure. All referred patients presented with characteristic groin pain and, after clinical examination, were suspected of secondary or superior cleft injuries. The level of activity differed between patients, ranging from professional to recreational athletes (Table 1). Inclusion criteria were (1) male athletes with a history of groin pain, (2) suspected cleft injury after standardized clinical examination, and (3) no prior surgical treatment of cleft or adductor injuries.
Our local Ethical Committee approved the present protocol, and informed consent was obtained from all patients. The study was approved by the Ethical Committee of Rostock University (approval No. A 2020-0040).
Magnetic resonance imaging: acquisition and analysis
MRI was performed using a 3-Tesla whole-body system (Magnetom Skyra Fit, Siemens Healthineers) and an 18-channel body matrix coil strapped over the pelvic area. Table 2 lists the specifics of the acquired sequences of the dedicated symphyseal MRI protocol. The orientation angle for paratransversal sequences is shown in Fig. 1.
Diagnostic criteria of MRI, X-ray, and fluoroscopic findings
For MRI examinations, the diagnostic imaging criteria for superior and secondary cleft signs were adopted as previously described by Byrne et al [2]. Accordingly, the secondary cleft sign is characterized by a linear signal hyperintensity paralleling the inferior margin of the inferior pubic ramus, and the superior cleft sign by a linear signal hyperintensity paralleling the inferior margin of the superior pubic ramus. Both cleft signs had to be in continuity with the physiological primary cleft, and both could be present uni- or bilaterally. Additionally, we differentiated between isolated cleft signs and combined (complex) injury patterns, the latter being defined as the simultaneous presence of superior and secondary cleft signs. Lastly, injury patterns defined as an atypical cleft sign were characterized by a hyperintensity seen in MRI involving the prepubic aponeurotic complex/PLAC (pyramidalis-anterior pubic ligament-adductor longus complex) without meeting the criteria for superior or secondary cleft injuries (Fig. 4). For fluoroscopic imaging, all cleft signs were defined by a characteristic distribution of the contrast agent in accordance with the aforementioned characteristic patterns seen in MRI examinations (Fig. 2). MRI scans and fluoroscopic examinations were reviewed by two radiologists with 5 and 22 years of experience; in case of discrepancies between the interpretations, a consensus was reached.
Diagnosis of osteitis pubis (OP) required the presence of bone marrow edema (BME) of the pubic body in MR imaging. The area of the affected bone was assessed by visual inspection of coronal STIR images and manual bordering of the maximal area of BME on each side. The slice showing the largest visible area of increased signal intensity in coronal STIR images was selected. Values for the maximal area are expressed in mm² (Fig. 1). A cut-off value of 10% difference (chosen arbitrarily) between sides was defined to determine the dominant side of maximal BME (labeled as BME10). Below this cut-off value, the extent of BME was considered to be almost equally distributed between sides.
X-rays in single-legged stance ("flamingo view") of both sides were carried out in order to assess symphyseal stability. Patients were diagnosed with anterior pelvic ring instability when the vertical shift of the pubic body between sides exceeded 2 mm or a widening of the symphyseal gap greater than 7 mm occurred [1,13] (Fig. 1).
Symphyseal injection technique
All injections were performed under fluoroscopic guidance (symphysography) and under sterile conditions. All patients received a subcutaneous injection of a local anesthetic (0.5 ml bupivacaine) and subsequently 1 ml of a nonionic contrast agent (iomeprol, 300 mg of iodine/ml, Imeron® 300, Bracco Imaging) into the fibrocartilaginous disc of the symphyseal cleft using a 22G lumbar puncture needle (Spinocan®, B. Braun Melsungen AG). Needle position was confirmed by fluoroscopic imaging and the presence of the contrasted primary cleft after injection of the contrast agent (compare Fig. 2).
Analysis and statistics
This study was designed as a prospective observational study. The results should be regarded as descriptive statistics; hence, no p values are reported. All data were initially compiled on a Microsoft Excel 2016 spreadsheet. All analyses were performed using JMP software (JMP Student, version 16.2.0, SAS Institute Inc.). Data were presented as counts and percentages. We described categorical variables (e.g., type and level of sports) as proportions and, where appropriate, as percent values and absolute numbers; for quantitative data, the results are expressed as mean ± standard deviation.
Demographic factors
A total of 66 male patients were recruited and met the inclusion criteria of the study (Table 1). Patients were most commonly injured while playing soccer (N = 56; 85%), with other sports accounting for 15% (n = 10) of the injuries. The level of sports varied between patients, with 13 patients competing on a professional level (12 × soccer, 1 × ice hockey), 8 on a semi-professional (amateur) level (all soccer), and the other patients competing as recreational athletes. On average, the patients completed 4.2 (± 1.6) training sessions per week.
The mean duration of groin pain was 12 ± 12.5 months, with a range of 1 to 72 months. Treatment for groin pain in advance of the study had been administered to 64 patients, most commonly comprising the use of NSAIDs and physical therapy. None of the patients had been treated with a surgical procedure on cleft injuries prior to the study.
Bone marrow edema and osteitis pubis
The presence of OP, as indicated by BME in MRI, was seen in 50 patients, bilaterally in 41 patients. Isolated BME without any concomitant injuries (diagnosed in MRI or fluoroscopy) was observed in 3 patients (5%). In 68% (n = 28) of patients with bilateral edema, we noted a rather asymmetric distribution, with a difference of more than 10% in the total area between sides (BME10). Of those 28 patients, 71% (n = 20) reported lateralized symptoms on the ipsilateral side of the more pronounced area of BME (Table 3).
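The side-dominance cut-off and the flamingo-view instability thresholds described above lend themselves to a simple decision rule. The following Python sketch is an illustrative restatement of those published criteria (the example measurements are invented, and the exact denominator of the 10% area difference is an assumption), not software used in the study.

```python
def bme_dominant_side(area_left_mm2: float, area_right_mm2: float,
                      cutoff: float = 0.10) -> str:
    """Return the dominant side of BME if the area difference exceeds 10%.

    Assumption: the difference is taken relative to the larger side.
    """
    larger = max(area_left_mm2, area_right_mm2)
    smaller = min(area_left_mm2, area_right_mm2)
    if larger == 0 or (larger - smaller) / larger <= cutoff:
        return "balanced"
    return "left" if area_left_mm2 > area_right_mm2 else "right"

def anterior_ring_unstable(vertical_shift_mm: float, symphyseal_gap_mm: float) -> bool:
    """Flamingo-view criteria: >2 mm vertical shift or >7 mm symphyseal gap."""
    return vertical_shift_mm > 2.0 or symphyseal_gap_mm > 7.0

# Invented example measurements for illustration only.
print(bme_dominant_side(240.0, 180.0))   # -> "left"
print(anterior_ring_unstable(2.5, 5.0))  # -> True
```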
Cleft injuries: MRI vs. fluoroscopy
As one of the main focus points of this study, we compared the presence of cleft signs in MRI and fluoroscopy after injection of iomeprol (300 mg of iodine/ml) as a contrast agent. In 14 patients we did not find a cleft sign in MRI, and none of these patients had a cleft sign in fluoroscopy either. In contrast, of the 24 patients without a cleft sign in fluoroscopy, 10 patients had a cleft sign in MRI, distributed as follows: 2 isolated superior clefts, 2 isolated secondary clefts, and 6 atypical cleft signs.
In 13 patients we diagnosed an isolated superior cleft sign in MRI, which could also be evidenced in fluoroscopy in 10 of these patients; in one patient, fluoroscopy instead showed an isolated secondary cleft sign. Of the 15 patients with an isolated secondary cleft sign in MRI, 13 also showed that injury in fluoroscopy, and just 2 did not show any cleft sign in fluoroscopy at all. In comparison, of the 21 patients showing an isolated secondary cleft sign in fluoroscopy, MRI examinations revealed 13 isolated secondary cleft signs, 7 combined cleft signs, and one isolated superior cleft sign (the same patient as mentioned above).
A total of 18 patients were diagnosed with a combined cleft sign in MRI, and of these patients, 11 were also diagnosed with a combined cleft in fluoroscopy. However, the remaining 7 of these 18 patients only showed an isolated secondary cleft sign in fluoroscopy. Of the 11 patients with a combined cleft sign in fluoroscopy, all had a combined cleft sign in MRI as well. All 6 cases with an atypical cleft sign could only be depicted by MRI, and none were visible in fluoroscopy (Fig. 3).
Presence of cleft injury, clinical presentation, and anterior pelvic ring instability
In only 4 cases did we find neither a cleft injury in MRI nor BME. In the majority of cases with a unilateral cleft sign in MRI (n = 34; regardless of the type of injury pattern), the side of symptoms matched the side of MRI findings, with 76% (n = 26) of these patients showing injury patterns in MRI according to their reported side of pain. However, of these 34 patients, 15 patients (44%) also reported their symptoms ipsilateral to the more pronounced side of BME, and in all 15 cases, the unilateral cleft injury was on the same side.
In the cases with bilateral MRI cleft signs (n = 17), the reported side of pain was more inhomogeneously distributed, with 29% (n = 5) of patients reporting lateralized pain to the left or bilaterally, respectively, and 35% (n = 6) to the right. In one case, central pain above the symphysis was reported.
Discussion
In our study we could demonstrate that (1) MRI was superior for the diagnosis of cleft injuries when compared to symphysography, (2) the side of injury mostly matched the radiological findings, and (3) atypical clefts were not detected in symphysography.
Methodologically, we combined a standardized initial clinical examination by an experienced groin surgeon and subsequent dedicated imaging together with an intrasymphyseal injection for diagnostic reasons. All patients had a history of groin pain typical for cleft injuries. Only in a small number of patients could we not demonstrate any signs of OP or cleft injuries, supporting the importance of prior clinical examination as a reference for diagnostic imaging. Our prospective study design allowed a specific definition of the patients' clinical characteristics regarding symphyseal cleft syndrome. Furthermore, the use of standardized clinical and radiological methods permitted a targeted interpretation.
MRI was superior for the diagnosis of cleft injuries
Using a standardized dedicated MRI protocol, one of the major findings of this study was that MRI proved to be superior in the diagnosis of cleft injuries as compared to fluoroscopy-guided contrast agent injection. These findings are in accordance with other studies [1,14] and further emphasize the role of MRI in the diagnostic process of groin pain. However, our results are in contrast with findings of superior imaging of CT-arthrography for the diagnosis of athletic pubalgia [10] and of symphysography in the detection of cleft injuries [3,11]. In that regard, there are notable methodological differences between this study and the previously mentioned studies. McArthur et al [10] used a retrospective study design with a rather small sample size (12 cases). Furthermore, CT-arthrography and MRI were separated by up to 2 months, including the possibility of a healing process in that time span. Additionally, further distribution of the contrast agent within the tissue due to the injection technique might have influenced the results (compared to an unenhanced MRI protocol). In the study by Hopp et al [11], a retrospective design was also employed, and no specifics on MRI or on the timing of symphysography relative to prior MRI were disclosed. In comparison, in this study all patients underwent prior specific clinical examinations for cleft injuries, thereby limiting the possible aetiologic spectrum of athletic groin injuries in the first place. Furthermore, all imaging was done within hours using a designated MRI protocol, and, as compared to McArthur et al [10], we used a wider spectrum of investigated symphyseal cleft injuries. Nevertheless, it has to be highlighted that in this study we identified more isolated secondary cleft injuries in symphysography as compared to MRI. Individual case analysis revealed that this effect could be explained by missed diagnoses of combined cleft injuries in symphysography in those patients, thus resulting in a more frequent diagnosis of combined clefts in MRI due to superior imaging. This view is further supported by our findings of more missed cleft injuries in fluoroscopy as compared to MRI. Only 1 patient had conflicting diagnoses in MRI (isolated superior cleft sign) and fluoroscopy (isolated secondary cleft) that could not be shown in the respective other imaging modality.
The side of injury mostly matched radiological findings
For the diagnosis and radiological grading of OP, different radiological methods for the evaluation of the involved bone are described in the literature. Verrall et al [15] graded patients (among other variables) by the size of the MRI signal change with a threshold of 2 cm. Branci et al [16] determined the grade of BME according to the extent of involved bone along the long axis of the superior or inferior pubic ramus. Gaudino et al [12] assessed the extension of BME in cancellous versus cortical bone. Although in the study presented here we did not quantitatively rate the severity of BME or OP, our easy-to-use method of assessing the affected bone area allowed us to discriminate between sides. Using this approach, our findings are well in line with previous reports [3,17,18] showing rather asymmetric changes and that the leading side of symptoms mostly matched the imaging findings. In detail, in the majority of cases we found either asymmetric patterns of BME in patients with bilateral involvement, ipsilateral cleft injuries according to the leading side of reported symptoms, or a combination of BME and cleft injuries. Isolated BME without any concomitant injuries was a rather rare condition. These findings support those of Cunningham et al [19] and Mosler et al [20], showing that microtearing at the prepubic aponeurotic complex and the presence of BME are rather frequent causes of groin pain in this kind of population. However, adding to these assumptions, in those cases with bilateral MRI cleft signs we found quite an inhomogeneous clinical presentation.
OP itself might reflect a reaction to repetitive mechanical stress but can also be detected in asymptomatic athletes [16]. However, as suggested by Garvey et al [21], OP and tearing of the adductor muscles might be part of the pathogenesis of acquired pelvic instability. By employing single-legged standing radiography of the pelvis (flamingo view) in this study, we assessed pelvic micro-instability. Of those patients with pelvic instability, the vast majority were diagnosed with a cleft injury and, to a lesser extent, with BME consistent with OP. We therefore conclude that microtearing at the prepubic aponeurotic complex and the presence of BME are prerequisites for the development of anterior pelvic ring instability. This view is in agreement with the notion that structural deficits (along with functional aspects that were not addressed in this study) play an important role in the pathogenesis of symphyseal instability [21].
Atypical clefts were not detected in symphysography
Basic definitions of superior and secondary cleft injuries were adopted from Byrne et al [2]. In this study, symphysography failed to identify any of those atypical clefts seen in MRI that were not consistent with the definitions of superior or secondary clefts. According to the anatomical concept of the pyramidalis-anterior pubic ligament-adductor longus complex (PLAC) [22], those atypical clefts (Fig. 5) might reflect PLAC injuries [23]. Consequently, considering the possible spectrum of pathologies in athletic groin pain, MR imaging allowed these diagnoses, while symphysography did not.
Limitations of the study
Some limitations of the study should be taken into account. The study population involved only male patients, and there is a lack of an asymptomatic control group. Due to the study design, all patients were pre-screened, constituting a selection bias. We did not perform an additional CT arthrography after symphysography for radiation protection reasons.
Conclusions
This observational study indicates that dedicated imaging specific to the different pathological substrates of athletic groin pain might be beneficial in addition to specialized clinical examination in patients with suspected symphyseal cleft injuries. MRI proved to be superior to symphysography for diagnostic purposes but, as widely accepted, additional therapeutic injections might be beneficial for patients. Additionally, microtearing at the prepubic aponeurotic complex and the presence of bone marrow edema, indicative of osteitis pubis, are considered prerequisites for the development of anterior pelvic ring instability, and additional flamingo views should be performed in these patients.
Fig. 1 Example of orientation angle for paratransversal sequences (yellow line in A) and corresponding paratransversal image (B). Measurement of anterior pelvic ring instability (C) and maximal area of bone marrow edema (D)
Fig. 2 Overview of cleft signs. Upper row: schematic drawing of cleft signs; second row: fluoroscopic imaging of cleft injuries; third row: MRI examinations represented by a coronal STIR image. Column A: isolated secondary cleft; column B: isolated superior cleft; column C: combined cleft injuries. Arrows highlight cleft signs in MRI examinations. The numbers reflect the physiological (primary) cleft (1) and either the secondary cleft (2) or superior cleft (3). Atypical clefts were diagnosed when the diagnostic criteria of isolated superior, secondary, or combined cleft injuries were not met (see text)
Fig. 3 Overview of the distribution of cleft signs in comparison of MRI and fluoroscopic examinations
Table 1 Demographics and patient characteristics
Table 2 3-T dedicated symphyseal MRI protocol and sequence specifics. Abbreviations: fs = fat saturation, tse = turbo spin echo, tirm = turbo inversion recovery magnitude
Table 3 Overview of imaging findings and analysis of side distribution. BME = bone marrow edema; BME10 = asymmetric BME with side difference in total area > 10%
On the Performance of Thompson Sampling on Logistic Bandits
We study the logistic bandit, in which rewards are binary with success probability $\exp(\beta a^\top \theta) / (1 + \exp(\beta a^\top \theta))$ and actions $a$ and coefficients $\theta$ are within the $d$-dimensional unit ball. While prior regret bounds for algorithms that address the logistic bandit exhibit exponential dependence on the slope parameter $\beta$, we establish a regret bound for Thompson sampling that is independent of $\beta$. Specifically, we establish that, when the set of feasible actions is identical to the set of possible coefficient vectors, the Bayesian regret of Thompson sampling is $\tilde{O}(d\sqrt{T})$. We also establish a $\tilde{O}(\sqrt{d\eta T}/\lambda)$ bound that applies more broadly, where $\lambda$ is the worst-case optimal log-odds and $\eta$ is the "fragility dimension," a new statistic we define to capture the degree to which an optimal action for one model fails to satisfice for others. We demonstrate that the fragility dimension plays an essential role by showing that, for any $\epsilon>0$, no algorithm can achieve $\mathrm{poly}(d, 1/\lambda)\cdot T^{1-\epsilon}$ regret.
Introduction
In the logistic bandit an agent observes a binary reward after each action, with outcome probabilities governed by a logistic function:
$$\mathbb{P}(\text{reward} = 1 \mid \text{action} = a) = \frac{e^{\beta a^\top \theta}}{1 + e^{\beta a^\top \theta}}.$$
Each action $a$ and parameter vector $\theta$ is a vector within the $d$-dimensional unit ball. The agent initially knows the scale parameter $\beta$ but is uncertain about the coefficient vector $\theta$. The problem of learning to improve action selection over repeated interactions is sometimes referred to as the logistic bandit problem or online logistic regression.
The logistic bandit serves as a model for a wide range of applications. One example is the problem of personalized recommendation, in which a service provider successively recommends content, receiving only binary responses from users, indicating "like" or "dislike."
A growing literature treats the design and analysis of action selection algorithms for the logistic bandit. Upper-confidence-bound (UCB) algorithms have been analyzed in Filippi et al. (2010); Li et al. (2017); Russo and Van Roy (2013), while Thompson sampling (Thompson (1933)) was treated in Russo and Van Roy (2014b) and Abeille and Lazaric (2017).
Table 1: Comparison of various results on logistic bandits. Thompson Sampling (this work): $O\!\left(\lambda^{-1} \cdot \left(d(\eta \vee d)\right)^{1/2} \cdot T^{1/2} \log^{1/2} T\right)$, a Bayesian bound in which $\lambda$ and $\eta$ are independent of $\beta$ (defined in Section 3). The upper bound in this work depends on $\beta$-independent parameters $\lambda$ and $\eta$, defined in Assumption 1 and Definition 2, respectively. We use the notation $a \vee b = \max\{a, b\}$.
Each of these algorithms has been shown to converge on the optimal action with time dependence $\tilde{O}(1/\sqrt{T})$, where $\tilde{O}$ ignores poly-logarithmic factors. However, previous analyses leave open the possibility that the convergence time increases exponentially with the parameter $\beta$, which seems counterintuitive. In particular, as $\beta$ increases, distinctions between good and bad actions become more definitive, which should make them easier to learn. To shed light on this issue, we build on an information-theoretic line of analysis, which was first proposed in Russo and Van Roy (2016) and further developed in Bubeck and Eldan (2016) and Dong and Van Roy (2018). A critical device here is the information ratio, which quantifies the one-stage trade-off between exploration and exploitation.
The information ratio has also motivated the design of efficient bandit algorithms, as in Russo and Van Roy (2014a), Russo and Van Roy (2018) and Liu et al. (2018). While prior bounds on the information ratio pertain only to independent or linear bandits, in this work we develop a new technique for bounding the information ratio of a logistic bandit. This leads to a stronger regret bound and insight into the role of $\beta$.
Our Contributions. Let $A$ and $\Theta$ be the set of feasible actions and the support of $\theta$, respectively. Under an assumption that $A = \Theta$, we establish a $\tilde{O}(d\sqrt{T})$ bound on Bayesian regret. This bound scales with the dimension $d$, but notably exhibits no dependence on $\beta$ or the number of feasible actions. We then generalize this bound, relaxing the assumption that $A = \Theta$ while introducing dependence on two statistics of these sets: the worst-case optimal log-odds $\lambda = \min_{\theta \in \Theta} \max_{a \in A} a^\top \theta$ and the fragility dimension $\eta$, which is the largest number of possible models such that the optimal action for each yields success probability no greater than 50% for any other. Assuming $\lambda > 0$, we establish a $\tilde{O}(\sqrt{d\eta T}/\lambda)$ bound on Bayesian regret. We also demonstrate that the fragility dimension plays an essential role, as for any function $f$, polynomial $p$, and $\epsilon > 0$, no algorithm for the logistic bandit can achieve Bayesian regret uniformly bounded by $f(\lambda)\,p(d)\,T^{1-\epsilon}$. We believe that, although $\eta$ can grow exponentially with $d$, in most relevant contexts $\eta$ should scale at most linearly with $d$.
The assumption that the worst-case optimal log-odds are positive may be restrictive. It is equivalent to assuming that, for each possible model, the optimal action yields more than a 50% probability of success. However, this assumption is essential, since it ensures that the fragility dimension is well-defined. When the worst-case optimal log-odds are negative, the geometry of the action and parameter sets plays a less significant role than the parameter $\beta$; we therefore conjecture that the exponential dependence on $\beta$ is inevitable in that regime. This could be an interesting direction for future research.
Notations. Throughout this article, for an integer $n$ we will use $[n]$ to denote the set $\{1, \ldots, n\}$. We will also use $B^d$ and $S^{d-1}$ to denote the unit ball and the unit sphere in $\mathbb{R}^d$, respectively.
Problem Settings
We consider Bayesian generalized linear bandits, defined as a tuple $L = (A, \Theta, R, \phi, \rho)$, where $A$ and $\Theta$ are the action and parameter set, respectively, $R$ is a stochastic process representing the reward of playing each action, $\phi$ is the link function, and $\rho$ is the prior distribution over $\Theta$, which represents our prior belief about the ground-truth parameter $\theta^*$. Throughout this article, to avoid measure-theoretic subtleties, we assume that both $A$ and $\Theta$ are finite subsets of $B^d$. For simplicity, we assume that there exists a one-to-one mapping between each parameter and the corresponding optimal action. Specifically, let $A = \{a_1, \ldots, a_N\}$ and $\Theta = \{\theta_1, \ldots, \theta_N\}$. To specify the one-to-one mapping, for each $\theta \in \Theta$ we define $\alpha(\theta)$ to be the unique action that maximizes $\mathbb{E}[R(a) \mid \theta^* = \theta]$. Letting $A^*$ be the optimal action, which is a random variable under our Bayesian setting, we naturally have $A^* = \alpha(\theta^*)$. The reward $R$ is related to the inner product between the action and the parameter by the link function $\phi$, as
$$\mathbb{E}[R(a) \mid \theta^* = \theta] = \phi(a^\top \theta), \quad \forall a \in A,\ \theta \in \Theta.$$
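As a concrete illustration of this reward model under the logistic link already shown in the introduction (and specialized formally below), the following Python sketch draws binary rewards given a parameter vector; the instance is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(x: float, beta: float) -> float:
    """Logistic link: phi_beta(x) = exp(beta * x) / (1 + exp(beta * x))."""
    return 1.0 / (1.0 + np.exp(-beta * x))

def sample_reward(a: np.ndarray, theta: np.ndarray, beta: float) -> int:
    """Draw a Bernoulli reward with success probability phi_beta(a^T theta)."""
    return int(rng.random() < phi(a @ theta, beta))

# Illustrative instance: action and parameter in the unit ball of R^3.
a = np.array([0.6, 0.0, 0.5])
theta = np.array([0.8, 0.1, 0.3])
print(sample_reward(a, theta, beta=2.0))
```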
Specifically, in logistic bandits, the reward $R$ is the binary process $R^B$ and the link function is given by

$$\phi_\beta(x) = \frac{e^{\beta x}}{1 + e^{\beta x}},$$

where $\beta > 0$ is a parameter that characterizes the "separability" of the model. Equivalently, conditioned on $\theta^* = \theta$, $R^B(a)$ is a Bernoulli random variable with mean $\phi_\beta(a^\top \theta)$. In the following, we will use $L_\beta$ to denote the logistic bandit problem instance with parameter $\beta$.

At stage $t$ the agent plays action $A_t$ and observes reward $R_t = R(A_t)$. Let $H_t = \sigma(A_1, R_1, \ldots, A_t, R_t)$ be the $\sigma$-algebra generated by the past actions and observations (rewards). A (randomized) policy $\pi = (\pi_1, \pi_2, \ldots)$ is a sequence of functions such that for each $t$, $\pi_t(H_{t-1})$ is a probability distribution on the action set. The performance of policy $\pi$ on problem instance $L = (\mathcal{A}, \Theta, R, \phi, \rho)$ is evaluated by the Bayesian regret, defined as

$$\mathrm{BayesRegret}(L, \pi, T) = \mathbb{E}_{\pi, \rho}\left[\sum_{t=1}^{T} \big(R^* - R(A_t)\big)\right],$$

where $R^* := R(\alpha(\theta^*))$, and the subscripts $\pi, \rho$ denote that $A_t$ is drawn from $\pi_t(H_{t-1})$ for $t \geq 1$ and $\theta^*$ is drawn from the prior $\rho$. In this work, we are interested in the Thompson sampling policy $\pi^{\mathrm{TS}}$, characterized as

$$\pi_t^{\mathrm{TS}}(H_{t-1}) = \mathbb{P}(A^* \in \cdot \mid H_{t-1}),$$

i.e., the action played in each stage is drawn from the posterior of the optimal action. Since there is a one-to-one mapping between each parameter and the corresponding optimal action, the Thompson sampling policy can be equivalently carried out by sampling from the posterior of the true parameter $\theta^*$ at each stage, and acting greedily with respect to the sampled parameter.

Main Results

We start off the section with a regret bound that only depends on the dimension $d$ and the number of time steps $T$, for the particular setting where the action set $\mathcal{A}$ is the same as the parameter set $\Theta$.

Theorem 1 For any $\beta > 0$, if $\mathcal{A} = \Theta$, then the Bayesian regret of Thompson sampling satisfies $\mathrm{BayesRegret}(L_\beta, \pi^{\mathrm{TS}}, T) = \tilde{O}(d\sqrt{T})$.

Despite the nonlinearity of the link function, Theorem 1 matches the $\tilde{O}(d\sqrt{T})$ bound for linear bandits. It is worth noting that this bound has no dependence on $\beta$ or the number of arms, and also matches the $\Omega(d\sqrt{T})$ minimax lower bound for linear bandits in Dani et al. (2008), ignoring a $\sqrt{\log T}$ factor. This result shows that if there exists an action that aligns perfectly with each potential parameter, the performance of Thompson sampling only depends on the problem dimension $d$, and the dependence is at most linear. However, as our next result shows, if the parameters do not align perfectly with their corresponding optimal actions, we have to introduce the fragility dimension to characterize the difficulty of the problem. For our general result, we assume that the following assumption holds.

Assumption 1 There exists a constant $\lambda > 0$ such that the worst-case optimal log-odds satisfy $\min_{\theta \in \Theta} \max_{a \in \mathcal{A}} a^\top \theta \geq \lambda$.

For a given logistic bandit problem instance $L_\beta = (\mathcal{A}, \Theta, R, \phi_\beta, \rho)$ that satisfies Assumption 1, we show that the Bayesian regret of Thompson sampling on $L_\beta$ is closely related to its "fragility dimension," a notion that we introduce below.

Definition 2 For any given pair of (possibly infinite) subsets $(\mathcal{X}, \mathcal{Y})$ of $B^d$, the fragility dimension, denoted by $\eta(\mathcal{X}, \mathcal{Y})$, is defined as the largest integer $M$ such that there exists $\{y_1, \ldots, y_M\} \subseteq \mathcal{Y}$ with $f^*(y_i)^\top y_j < 0$ for all $i \neq j$, where $f^*(y) := \mathrm{argmax}_{x \in \mathcal{X}} x^\top y$. The fragility dimension of a problem instance $L_0 = (\mathcal{A}_0, \Theta_0, R_0, \phi_0, \rho_0)$ is defined as the fragility dimension of $(\mathcal{A}_0, \Theta_0)$, and is denoted by $\eta(L_0)$.

Example 1 If the action set and the parameter set of $L$ are identical subsets of $S^{d-1}$, then for each $\theta \in \Theta$, there is $\alpha(\theta) = \theta$. We will show in Appendix D.1 that in $S^{d-1}$ there exist at most $d + 1$ vectors with pairwise negative inner products. Therefore, the fragility dimension is bounded by $d + 1$.

Remark 3 Obviously the fragility dimension cannot exceed the cardinality of the action (parameter) set.
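Definition 2 can be checked by brute force for small finite sets. The following sketch enumerates candidate subsets directly (exponential in |Y|, so it is only for toy instances); the strict inequality follows the pairwise-negative reading used in Example 1 and Appendix D.1:

```python
import numpy as np
from itertools import combinations

def f_star(X, y):
    """f*(y): the element of X maximizing x @ y."""
    X = np.asarray(X, dtype=float)
    return X[np.argmax(X @ np.asarray(y, dtype=float))]

def fragility_dimension(X, Y):
    """Largest M with some {y_1..y_M} in Y such that f*(y_i) @ y_j < 0 for i != j."""
    Y = np.asarray(Y, dtype=float)
    best = np.stack([f_star(X, y) for y in Y])   # best[i] = f*(y_i)
    cross = best @ Y.T                           # cross[i, j] = f*(y_i) @ y_j
    n = len(Y)
    for m in range(n, 1, -1):
        for idx in combinations(range(n), m):
            sub = cross[np.ix_(idx, idx)]
            if (sub[~np.eye(m, dtype=bool)] < 0).all():
                return m
    return 1
```

Running this on A = Theta = the d + 1 vertices of a regular simplex centered at the origin (pairwise inner products -1/d) returns d + 1, matching the bound in Example 1.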
We will show in Appendix D that we can upper bound the worst-case fragility dimension by the dimensionality $d$ and the constant $\lambda$ in Assumption 1. Roughly speaking:

• If $L$ is such that $\lambda = 1$, then $\eta(L) \leq d + 1$ (cf. Example 1);
• For any fixed $\lambda \in (0, 1)$, if we only consider problem instances such that Assumption 1 holds with constant $\lambda$, then the worst-case fragility dimension grows exponentially with $d$;
• For any $d \geq 3$, we can find a problem instance $L$ such that Assumption 1 holds with constant $\lambda = 0$, whose fragility dimension is arbitrarily large.

Remark 4 For given finite action and parameter sets $\mathcal{A}$ and $\Theta$, we can think of each parameter as a vertex in a graph $G$. Two vertices $i$ and $j$ of $G$ are connected by an edge if and only if $\alpha(\theta_i)^\top \theta_j < 0$ and $\alpha(\theta_j)^\top \theta_i < 0$. Thus determining the fragility dimension of $(\mathcal{A}, \Theta)$ is equivalent to finding the maximum clique in $G$. This is a widely studied NP-complete problem and there exist a number of efficient heuristics; see Tarjan and Trojanowski (1977), Tomita and Kameda (2007) and references therein.

The following general result for the performance of Thompson sampling gives a $\tilde{O}(\sqrt{d\eta T}/\lambda)$ regret bound.

Theorem 5 For any $\beta > 0$, if $L_\beta$ is such that Assumption 1 holds with $\lambda \in (0, 1]$, then

$$\mathrm{BayesRegret}(L_\beta, \pi^{\mathrm{TS}}, T) \leq O\!\left(\lambda^{-1} \cdot \sqrt{d(\eta \vee d) \cdot T \log T}\right), \tag{3}$$

where $a \vee b = \max\{a, b\}$. It is worth noting that the fragility dimension only depends on the action and parameter sets of the problem instance, hence the right-hand side of (3) has no dependence on $\beta$.

Remark 6 Considering Example 1, and noting that when $\mathcal{A} = \Theta$, Assumption 1 holds with $\lambda = 1$, we immediately arrive at Theorem 1.

Remark 7 Interestingly, the fragility dimension is not monotonic with respect to the inclusion of sets, i.e., there exist sets $\mathcal{X}_1, \mathcal{X}_2, \mathcal{Y}$ such that $\mathcal{X}_1 \subset \mathcal{X}_2$ but $\eta(\mathcal{X}_1, \mathcal{Y}) > \eta(\mathcal{X}_2, \mathcal{Y})$. As we show in Appendix D.4, this fact means that by reducing the size of the action set, we could arrive at a more difficult problem. This is a somewhat surprising result that is worth noting.

We also show that the $\eta$ term in (3) is critical, since for any fixed $\lambda < 1$, there cannot exist an $\eta$-independent upper bound that is polynomial in $d$ and sublinear in $T$.

Theorem 8 For any fixed $\lambda \in [0, 1)$, let $f(\cdot)$ be any real function, $p(\cdot)$ be any polynomial and $\epsilon > 0$ be any constant. There exist a logistic bandit problem instance $L_\beta$ and an integer $T_0$ such that $L_\beta$ satisfies Assumption 1 with constant $\lambda$ and

$$\mathrm{BayesRegret}(L_\beta, \pi, T_0) > f(\lambda)\, p(d)\, T_0^{1-\epsilon}$$

for any policy $\pi$.

Main Devices in the Proof of Theorem 5

In this section we discuss the two main devices in the proof of Theorem 5. In Section 4.1, we introduce the notion of the information ratio, and present the result that relates the information ratio to Bayesian regret. In Section 4.2, we highlight the role of the fragility dimension. The full proof of Theorem 5 is given in Appendix B.

Information Ratio

To quantify the exploration-exploitation trade-off at stage $t$, for problem instance $L$ and policy $\pi$ we define the (random variable) information ratio as the square of the one-stage expected regret divided by the amount of information that the agent gains from playing an action and observing the reward, i.e.,

$$\Gamma_t := \frac{\big(\mathbb{E}_{t-1}[R^* - R_t]\big)^2}{I_{t-1}\big(A^*; (A_t, R_t)\big)}, \tag{5}$$

where the subscript $t - 1$ on the right-hand side denotes evaluation under the base measure $\mathbb{P}(\cdot \mid H_{t-1})$. If the information ratio is small at stage $t$, the agent executing the policy $\pi$ will only incur a large regret if she is about to acquire a large amount of information about the optimal action. Past results have shown that, as long as the information ratio of Thompson sampling can be uniformly bounded, we immediately obtain a bound on the Bayesian regret of Thompson sampling.
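Remark 4's reduction can be written down directly. A sketch using networkx, where the both-directions-negative edge rule is our reading of the reduction, chosen so that cliques of G match the subsets in Definition 2:

```python
import numpy as np
import networkx as nx

def fragility_via_max_clique(A, Theta):
    """eta(A, Theta) computed as the maximum clique of the graph G of Remark 4."""
    A, Theta = np.asarray(A, dtype=float), np.asarray(Theta, dtype=float)
    alpha = A[np.argmax(Theta @ A.T, axis=1)]    # alpha(theta_i) for each i
    n = len(Theta)
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if alpha[i] @ Theta[j] < 0 and alpha[j] @ Theta[i] < 0:
                G.add_edge(i, j)
    return max(len(c) for c in nx.find_cliques(G))
```

nx.find_cliques enumerates maximal cliques, so this is exact but worst-case exponential; for large instances one would substitute one of the clique heuristics cited above.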
Fragility Dimension

The one-stage expected regret can be written as

$$\mathbb{E}_{t-1}[R^* - R_t] = \mathbb{E}_{t-1}\big[\phi_\beta((A^*)^\top \theta^*)\big] - \mathbb{E}_{t-1}\big[\phi_\beta(A_t^\top \theta^*)\big]. \tag{6}$$

It is worth noting that $A^* = \alpha(\theta^*)$ and, by the definition of Thompson sampling, $A^*$ and $A_t$ are independent and identically distributed. Let us first consider the simple case where $\beta = \infty$, which motivates our analysis. When $\beta = \infty$, we have that $\phi_\beta(x) = 1$ for all $x \geq 0$ and $\phi_\beta(x) = 0$ for all $x < 0$. By Assumption 1, we have $\mathbb{E}_{t-1}[\phi_\beta((A^*)^\top \theta^*)] = 1$. There is also $\mathbb{E}_{t-1}[\phi_\beta(A_t^\top \theta^*)] = \mathbb{P}_{t-1}(A_t^\top \theta^* \geq 0)$. Therefore, to upper bound the right-hand side of (6), we need to lower bound $\mathbb{P}_{t-1}(A_t^\top \theta^* \geq 0)$. The proposition below shows that this term is critically connected with the fragility dimension of $(\mathcal{A}, \Theta)$. The proof is given in Appendix A.

Proposition 10 For any stage $t$,

$$\mathbb{P}_{t-1}\big(A_t^\top \theta^* \geq 0\big) \geq \frac{1}{2\,\eta(\mathcal{A}, \Theta)}.$$

Proof Sketch of Theorem 8

Recall that we can obtain regret bounds for linear bandits that depend only on the dimensionality of the problem $d$ rather than the number of actions (such as the one in Russo and Van Roy (2016)). The reason behind such bounds is that when the link function $\phi$ is linear, the difference between the mean rewards of two actions that are close to each other is always small. However, in logistic bandit problems, when the parameter $\beta$ is large, we can run into cases where two close actions yield diametrically different rewards, as is illustrated in Figure 1. Specifically, suppose that our action and parameter sets are such that

$$a_i^\top \theta_i \geq \lambda \quad \text{for all } i \in [N], \tag{10}$$

and

$$a_i^\top \theta_j < 0 \quad \text{for all } i \neq j, \tag{11}$$

that is, $\eta(\mathcal{A}, \Theta) = |\mathcal{A}| = |\Theta|$. Then, when $\beta$ is large, conditioned on each parameter being the true parameter, there is exactly one action with mean reward close to 1, while the mean rewards of all other actions are close to 0. The following proposition shows that in this problem the optimal action is inherently hard to learn, in the sense that the regret of any algorithm grows linearly in the first $|\mathcal{A}|/2 - 1$ stages. The proof can be found in Appendix C.

Proposition 11 Suppose that $\beta$ is sufficiently large, that (10) and (11) hold, and that the prior over $\Theta$ is uniform. Then for any policy $\pi$, the per-stage Bayesian regret is bounded below by a positive constant for every $t \leq |\mathcal{A}|/2 - 1$.

We can also show (as in Appendix D) that, for any fixed $\lambda \in (0, 1)$, there exists $\gamma > 1$ such that for any $d \geq 2$ we can find a pair of action and parameter sets $(\mathcal{A}_d, \Theta_d)$ of cardinality at least $\gamma^d$ satisfying (10), (11) and Assumption 1 with constant $\lambda$. For any real function $f(\cdot)$, polynomial $p(\cdot)$ and constant $\epsilon \in (0, 1)$, we choose $d$ large enough that $\gamma^d > 16 f(\lambda) p(d)$, and $\beta_d$ large enough that the Bayesian regret of any policy $\pi$ at a suitable horizon $T_0$ exceeds $f(\lambda) p(d) T_0^{1-\epsilon}$.

Appendix A. Proof of the Proposition in Section 4.2

From Definition 2, there exists no $(\eta + 1)$-clique in $G$. Let $p$ be any probability measure on $\mathcal{V}$. We use $p_i$ to denote the probability mass associated with $v_i$. Thus $p_i \geq 0$ and $\sum_{i=1}^{n} p_i = 1$. For fixed $\mathcal{V}$, let $J(p) = \mathbb{P}_p(\hat{U}^\top V < 0)$, where the subscript $p$ indicates that the distribution of $V$ is $p$. Letting $M(p) := \sum_{(i,j) \in E} p_i p_j$, the quantity $J(p)$ is controlled by $M(p)$. We first argue that there exists a probability measure $p^*$ such that $M(p^*) = \max_p M(p)$ and, for any $(i, j) \notin E$ with $i \neq j$, either $p^*_i = 0$ or $p^*_j = 0$. In fact, let $p$ and $(i, j) \notin E$ be arbitrary. Without loss of generality, assume that $\sum_{k: (i,k) \in E} p_k \geq \sum_{k: (j,k) \in E} p_k$. We define a new measure $\bar{p}$ as follows: $\bar{p}_i = p_i + p_j$, $\bar{p}_j = 0$ and $\bar{p}_\ell = p_\ell$ for $\ell \neq i, j$. Then $M(\bar{p}) \geq M(p)$. Therefore, by moving all the probability mass from $j$ to $i$, the value $M$ does not decrease. Thus we can always find a probability measure $p^*$ which attains the maximum of $M$, and at the same time satisfies $p^*_i p^*_j = 0$ whenever $(i, j) \notin E$ and $i \neq j$. Next we show that there can be at most $\eta$ non-zero elements among $\{p^*_1, \ldots, p^*_n\}$. In fact, since there exists no $(\eta + 1)$-clique in $G$, for any subset $\{i_1, \ldots, i_{\eta+1}\}$ of $\mathcal{V}$ there must exist $(i_s, i_t) \notin E$ with $i_s \neq i_t$. This leads to $p^*_{i_s} p^*_{i_t} = 0$. Hence $p^*$ must be supported on at most $\eta$ elements of $\mathcal{V}$. Without loss of generality, let $p^*_1, \ldots, p^*_\eta \geq 0$ and $p^*_{\eta+1} = \cdots = p^*_n = 0$.
Then

$$M(p^*) = \sum_{(i,j) \in E} p^*_i p^*_j \leq \frac{1}{2}\left(1 - \sum_{k=1}^{\eta} (p^*_k)^2\right) \leq \frac{1}{2}\left(1 - \frac{1}{\eta}\right),$$

where the last inequality comes from $\sum_{k=1}^{\eta} (p^*_k)^2 \geq \frac{1}{\eta}\big(\sum_{k=1}^{\eta} p^*_k\big)^2 = \frac{1}{\eta}$. Hence $J(p) \leq 1 - \frac{1}{2\eta}$, which is the result we desire.

Remark 12 If $\mathcal{U} = \mathcal{V}$ and $f^*$ is the identity function, we can get rid of the additional $1/2$ factor and show that $J(p) \leq 1 - \frac{1}{\eta}$. In fact, if $V$ is uniformly distributed on $\mathcal{V}$, we can recover the celebrated Turán's theorem in graph theory:

Theorem 13 (Turán (1941)) If a graph with $n$ vertices does not contain any $(k+1)$-clique, then its number of edges cannot exceed $\left(1 - \frac{1}{k}\right) \cdot \frac{n^2}{2}$.

By restricting the random vector $V$ to a subset of $\mathbb{R}^d$, we have the following corollary.

Corollary 14 Let $\mathcal{U}, \mathcal{V}$ be finite subsets of $B^d$, and suppose that there exists a bijection $f^*: \mathcal{V} \to \mathcal{U}$ with $f^*(v)^\top v \geq 0$ for each $v \in \mathcal{V}$. Let $V$ be any random variable supported on $\mathcal{V}$, $U = f^*(V)$ and $\hat{U}$ be an iid copy of $U$. Then for any $S \subseteq \mathcal{V}$,

$$\mathbb{P}\big(\hat{U}^\top V \geq 0 \mid V \in S\big) \geq \frac{1}{2\,\eta(\mathcal{U}, \mathcal{V})}.$$

Appendix B. Proof of Theorem 5

Considering Proposition 9, and the fact that a uniform bound on the information ratio immediately yields a bound on Bayesian regret, we only have to show (18). We will present two separate proofs of (18), for $\beta \leq 2$ and $\beta > 2$, respectively. For $\beta \leq 2$, we resort to the previous Lipschitz-type analysis; for $\beta > 2$, we adopt a new line of analysis that is connected to our definition of the fragility dimension. We fix the stage index $t$ in this section. To simplify notation, we let $Y$ be a random variable with the same distribution as $\theta^*$ conditioned on $H_{t-1}$. We also define $X = \alpha(Y)$, and let $\hat{X}$ be an iid copy of $X$ and $\hat{Y}$ an iid copy of $Y$. Thus $X$, $Y$, $\hat{X}$ and $\hat{Y}$ can be interpreted as aliases for $A^*$, $\theta^*$, $A_t$ and $\theta_t$, respectively. As a shorthand we use $\eta$ in place of $\eta(L_\beta)$. We will omit the "almost surely" qualifications whenever ambiguities do not arise. Before moving on, we introduce a result adapted from Russo and Van Roy (2016), which gives a primitive bound on the information ratio.

Proposition 15 For any generalized linear bandit problem $L = (\mathcal{A}, \Theta, R, \phi, \rho)$, the information ratio admits the primitive bound (20).

Proof First notice that, since $\hat{X}$ is independent of $Y$ and $\hat{Y}$ is independent of $X$, comparing (5) and (20), we only have to show (21). In fact, (21) follows from a chain of identities and inequalities in which (c) and (e) use the fact that $\alpha$ is a bijection, (d) holds because of the independence between $Y$ and $\hat{X}$, (f) applies Pinsker's inequality upon noticing that $R \in \{0, 1\}$, and the final step (g) follows from the fact that $\mathbb{E}[R(y') \mid Y = y] = \phi(\alpha(y')^\top y)$, where we use $R(y')$ to denote $R(\alpha(y'))$ for $y' \in \Theta$. Thus we have (21).

B.1. Proof of (18) for Small β

We first point to a useful lemma.

Lemma 16 Let $U, V$ be random vectors in $\mathbb{R}^d$, and let $R, S$ be independent random variables with distributions equal to the marginals of $U, V$, respectively. Then expectations involving $U^\top V$ can be bounded in terms of the corresponding expectations of $R^\top S$; this comparison is applied as step (k) below.

Proposition 17 Let $L = (\mathcal{A}, \Theta, R, \phi, \rho)$ be any generalized linear bandit problem instance where $\phi$ is such that there exist constants $0 < L_1 \leq L_2$ with $L_1 \leq \phi'(x) \leq L_2$ for all $x \in [-1, 1]$. Then the information ratio of Thompson sampling satisfies $\Gamma_t = O(d \cdot L_2^2 / L_1^2)$. Specifically, for the logistic bandit problem $L_\beta$, the ratio $L_2 / L_1$ is bounded by a constant whenever $\beta \leq 2$.

Proof From Proposition 15, we start from the primitive bound (20). Letting $\tilde{Y}$ be another iid copy of $Y$, we obtain (24); on the other hand, we also obtain (25), where step (k) follows from Lemma 16. Comparing (24) and (25), we arrive at the desired result.

Plugging $L_\beta$ into Proposition 17, and noticing that $\phi_\beta'$ is bounded above and below by constants on $[-1, 1]$ when $\beta \leq 2$, we obtain $\Gamma_t = O(d)$ for $\beta \leq 2$, which gives (18) in this regime.

B.2. Proof of (18) for Large β

In this section we show (18) for $\beta > 2$. Throughout we assume that Assumption 1 holds with constant $\lambda \in (0, 1)$. For any $x \in \mathcal{A}$, let $\sigma(x) = x^\top \alpha^{-1}(x)$.
For $\zeta \in \mathbb{R}$, we define an auxiliary function $\gamma_{\beta,\lambda}(\zeta)$, and we further let $z_{\beta,\lambda} = \mathrm{argmax}_{\zeta \in [0, 1+\lambda]} \gamma_{\beta,\lambda}(\zeta)/\zeta$ and $w_{\beta,\lambda} = (\lambda + z_{\beta,\lambda})/2$, with $\nu_\beta$ defined accordingly. Under the above notation, (19) can be rewritten as (27). We also partition the action set $\mathcal{A}$ into two subsets, according to whether $\sigma(x)$ is at least $w_{\beta,\lambda}$. Suppose that we can find constants $C_1, C_2$ bounding the contributions of the two subsets. Then, from the Cauchy-Schwarz inequality, we can bound the right-hand side of (27) in terms of $C_1$ and $C_2$. To determine $C_1$, we first introduce a lemma.

Lemma 18 Let $f: \mathbb{R}_+ \to \mathbb{R}_+$ be such that $f(0) = 0$ and $f(\zeta)/\zeta$ is non-decreasing over $\zeta \geq 0$ ($f(0)/0$ is interpreted as the limit as $\zeta \downarrow 0$). Then for any non-negative random variable $U$,

$$\mathbb{E}[f(U)] \geq \mathbb{E}[U] \cdot \mathbb{E}[f(U)/U].$$

Proof Let $g(\zeta) = f(\zeta)/\zeta$ with $g(0) = \lim_{\zeta \downarrow 0} f(\zeta)/\zeta$. By our assumption, $g(\zeta)$ is also non-negative and non-decreasing. Let $V$ be an iid copy of $U$; expanding $\mathbb{E}[(U - V)(g(U) - g(V))] \geq 0$ yields $\mathbb{E}[U g(U)] \geq \mathbb{E}[U]\,\mathbb{E}[g(U)]$, where the final inequality results from the monotonicity of $g$. Therefore we have shown the claim.

Thus we obtain a chain of bounds in which (l) comes from the Cauchy-Schwarz inequality and (m) is a consequence of (32). Finally, the last inequality in the chain is implied by (33), and hence the proof is complete.

We define the function $\bar{\gamma}_{\beta,\lambda}(\zeta)$ as shown in Figure 2. We thus obtain a sequence of inequalities in which: in (n), we apply the fact that for any random variable $W$ with $\mathbb{E}[W^2] < \infty$ and constant $a$, there is $\mathbb{E}[(W - a)^2] \geq \mathrm{Var}(W)$; in (o), we use the result in Lemma 18; in (p), we use the definition of $\bar{\gamma}_{\beta,\lambda}$; step (q) follows from $\bar{\gamma}_{\beta,\lambda}(\sigma(X)) \geq \gamma_{\beta,\lambda}(\sigma(X))$; and the final step follows trivially from $\sigma(X) = X^\top \alpha^{-1}(X) = X^\top Y$. Hence we can set $C_1 = d\chi^2$. Next we turn to the constant $C_2$. We have an analogous chain of bounds, in which step (s) comes from Corollary 14. Thus we can set $C_2 = 2\eta\,\xi^2$. Finally, when $\beta \geq 2$, we have that $\chi > \xi > 0.1\lambda$; the values of these constants are plotted in Figure 3. By combining (38) with (26), we arrive at (18).

Appendix C. Proof of Proposition 11

Suppose that for each $a \in \mathcal{A}$, the conditional mean reward satisfies $\mathbb{E}[R(a) \mid \theta^* = \theta] \leq \delta$ whenever $a \neq \alpha(\theta)$. Let $(\hat{a}_1, \ldots, \hat{a}_t)$ be any deterministic action sequence up to stage $t$. Then, conditioned on $A_1 = \hat{a}_1, \ldots, A_t = \hat{a}_t$, we have that $R_1, \ldots, R_t$ are mutually independent. Hence we obtain (39), where in the final step we use the fact that the prior of $A^*$ is uniform. Let $E_t$ be the event $\{R_1 = \cdots = R_t = 0\}$. Since (39) holds for every action sequence, the probability of $E_t$ under any policy $\pi$ is bounded below accordingly. Letting $\delta = 1/N$, we have that for $t \leq N/2 - 1$ this probability remains bounded away from zero, so the per-stage regret of any policy remains of constant order.

Appendix D. Worst-Case Bounds on the Fragility Dimension

In this section we give worst-case bounds on the fragility dimension with respect to the problem dimension $d$. Let $\mathcal{X}$ and $\mathcal{Y}$ be two subsets of $B^d$, and let $f^*: \mathcal{Y} \to \mathcal{X}$ be such that $f^*(y)^\top y = \max_{x \in \mathcal{X}} x^\top y$ for all $y \in \mathcal{Y}$. Further, we define $\iota = \inf_{y \in \mathcal{Y}} f^*(y)^\top y$. Here $\iota$ can be interpreted as the constant $\lambda$ in Assumption 1. We will show that the worst-case bounds vary across the three regimes $\iota = 1$, $\iota \in (0, 1)$ and $\iota = 0$.

D.1. The Regime ι = 1

When $\iota = 1$, since we are constraining $\mathcal{X}$ and $\mathcal{Y}$ to be contained in the unit ball, it must be that $f^*(y) = y$ for each $y \in \mathcal{Y}$. Therefore $\eta(\mathcal{X}, \mathcal{Y})$ is equal to the maximum integer $M$ such that there exists $\{y_1, \ldots, y_M\} \subseteq \mathcal{Y}$ with $y_i^\top y_j < 0$ for all $i \neq j$. The following lemma immediately implies that in this case $\eta(\mathcal{X}, \mathcal{Y}) \leq d + 1$.

Lemma 19 In the $d$-dimensional Euclidean space, there exist at most $d + 1$ different vectors such that the inner product between any pair of different vectors is negative.

Proof Suppose that there exists a set $X$ which consists of $d + 2$ different vectors $x_1, \ldots, x_{d+2}$, such that $x_i^\top x_j < 0$ for any $1 \leq i < j \leq d + 2$. Let $U = (x_1, \ldots, x_{d+2}) \in \mathbb{R}^{d \times (d+2)}$. Then the nullspace of $U$ has dimension at least 2. Therefore we can find $z \in \mathrm{null}(U) \subset \mathbb{R}^{d+2}$, such that $z$ has at least one positive entry and one negative entry. Without loss of generality, we have that $z_1, \ldots, z_k > 0$ and $z_\ell, \ldots, z_{d+2} < 0$ for some $1 \leq k < \ell \leq d + 2$, with the remaining entries equal to zero.
However, this gives

$$0 \leq \Big\| \sum_{i=1}^{k} z_i x_i \Big\|^2 = \Big( \sum_{i=1}^{k} z_i x_i \Big)^\top \Big( \sum_{j=\ell}^{d+2} (-z_j) x_j \Big) = \sum_{i=1}^{k} \sum_{j=\ell}^{d+2} z_i (-z_j)\, x_i^\top x_j < 0,$$

which is a contradiction.

D.2. The Regime ι = 0

We show by an example for $d = 3$ that when $\iota = 0$, the fragility dimension can be arbitrarily large. Let $h, r \in (0, 1)$ be constants to be determined later. Consider $\mathcal{X} = \{x_1, \ldots, x_N\}$ and $\mathcal{Y} = \{y_1, \ldots, y_N\}$, where

$$x_i = \Big( r \cos\frac{2\pi i}{N},\ r \sin\frac{2\pi i}{N},\ \sqrt{1 - r^2} \Big), \quad y_i = \Big( h \cos\frac{2\pi i}{N},\ h \sin\frac{2\pi i}{N},\ -\sqrt{1 - h^2} \Big), \quad i = 1, \ldots, N,$$

as is shown in Figure 4. We have that $f^*(y_i) = x_i$ and

$$x_k^\top y_\ell = hr \cdot \cos\Big( \frac{2\pi}{N} (k - \ell) \Big) - \sqrt{(1 - h^2)(1 - r^2)}.$$

To satisfy $x_k^\top y_\ell < 0$ for all $k \neq \ell$, we only have to choose $h$ and $r$ such that $hr \cos(2\pi/N) < \sqrt{(1 - h^2)(1 - r^2)} \leq hr$. This can be done by arbitrarily choosing $h$ and letting $r = \sqrt{1 - \gamma h^2}$ with

$$\frac{\cos^2 \frac{2\pi}{N}}{1 - h^2 \sin^2 \frac{2\pi}{N}} < \gamma < 1.$$

Notice that $N$ can be arbitrarily large since $\iota = 0$. Thus $\eta(\mathcal{X}, \mathcal{Y})$ is unbounded.

D.3. The Regime ι ∈ (0, 1)

In this section we show that when $\iota \in (0, 1)$, the worst-case fragility dimension grows exponentially with $d$. We first introduce the following result; we point readers to Böröczky Jr et al. (2004) for a detailed discussion.

Fact 20 For any $\epsilon \in (0, 1)$, there exists $\gamma > 1$, such that for all integers $d \geq 3$, there exist $\gamma^d$ vectors in $S^{d-1}$ such that the inner product of any two different vectors is at most $\epsilon$.

For any fixed $d$, let $u, v \in (0, \frac{\pi}{2})$ and $\epsilon > 0$ be constants to be determined later. Let $z_1, \ldots, z_N \in S^{d-2}$ be such that $z_j^\top z_k < \epsilon$ for all $j, k \in [N]$, $j \neq k$. Consider the pair of sets $\mathcal{X}, \mathcal{Y} \subset S^{d-1}$ defined by

$$\mathcal{X} := \{x_i\}_{i=1}^{N}, \quad x_i = (\cos u,\ \sin u \cdot z_i),$$

and

$$\mathcal{Y} := \{y_i\}_{i=1}^{N}, \quad y_i = (-\cos v,\ \sin v \cdot z_i).$$

Thus we have $x_i^\top y_i = -\cos u \cos v + \sin u \sin v = -\cos(u + v)$, and

$$x_j^\top y_k = -\cos u \cos v + z_j^\top z_k \sin u \sin v < -\cos(u + v) - (1 - \epsilon) \sin u \sin v, \quad j, k \in [N],\ j \neq k.$$

There is obviously $f^*(y_i) = x_i$. In order to satisfy $\inf_{y \in \mathcal{Y}} f^*(y)^\top y = \iota$, we only have to choose $u, v$ such that $\cos(u + v) \leq -\iota$, and $\cos(u + v) + (1 - \epsilon) \sin u \sin v \geq 0$.

D.4. Removing Actions Could Make the Problem Harder

Let $\mathcal{X}$ and $\mathcal{Y}$ be the two sets given in the example in Appendix D.2. Let the parameter set be $\Theta = \mathcal{Y}$ and consider the action sets $\mathcal{A}_1 = \mathcal{X} \cup \mathcal{Y}$ and $\mathcal{A}_2 = \mathcal{X}$. Obviously $\mathcal{A}_2 \subset \mathcal{A}_1$. However, we argue that the problem $L_1$ with action and parameter sets $(\mathcal{A}_1, \Theta)$ is easier than the problem $L_2$ with sets $(\mathcal{A}_2, \Theta)$. In fact, from Lemma 19, we have that $\eta(\mathcal{A}_1, \Theta) \leq 4$. However, the argument in Appendix D.2 shows that $\eta(\mathcal{A}_2, \Theta) = N$, where $N$ is the size of the parameter set. Therefore the regret of Thompson sampling on $L_1$ can be bounded by the result in Theorem 1, which is independent of $\beta$. However, to learn $L_2$ for a large $\beta$, we almost have to try every action to find the optimal one. Therefore, somewhat surprisingly, reducing the size of the action set can actually make the problem harder.
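The cone construction of Appendix D.2, which also drives the D.4 example, is easy to verify numerically. A sketch with hand-picked h and r satisfying the stated constraint for N = 12; pushing h and r toward the boundary of the constraint admits arbitrarily large N:

```python
import numpy as np

N, h, r = 12, 0.95, 0.33   # chosen so hr*cos(2*pi/N) < sqrt((1-h^2)(1-r^2)) <= hr
ang = 2 * np.pi * np.arange(N) / N
X = np.stack([r * np.cos(ang), r * np.sin(ang), np.full(N, np.sqrt(1 - r**2))], axis=1)
Y = np.stack([h * np.cos(ang), h * np.sin(ang), -np.full(N, np.sqrt(1 - h**2))], axis=1)

G = X @ Y.T                                           # G[k, l] = x_k @ y_l
assert (np.argmax(G, axis=0) == np.arange(N)).all()   # f*(y_l) = x_l
off_diag = G[~np.eye(N, dtype=bool)]
assert off_diag.max() < 0                             # all cross inner products negative
print(G.diagonal().min(), off_diag.max())             # roughly 0.019 and -0.023
```

Here all N parameters witness the fragility condition simultaneously, so eta(X, Y) = N even though the ambient dimension is only 3.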
2019-05-12T06:10:22.000Z
2019-05-12T00:00:00.000
{ "year": 2019, "sha1": "a4f55005312c4a4726a034740b86b05d1c206c9f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "de05380df1b450f7d2cea5e1106cd95ddd0dd1c0", "s2fieldsofstudy": [ "Mathematics", "Computer Science" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
232080685
pes2o/s2orc
v3-fos-license
Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) (Coronaviridae)

A novel coronavirus, SARS-CoV-2, emerged in 2019, causing a pandemic posing the greatest threat to global health in a century. The virus is classified in the subgenus Sarbecovirus, together with the closely related SARS-CoV-1, which caused SARS in 2003, and other bat coronaviruses found in Rhinolophus bats. SARS-CoV-2 is efficiently spread by the respiratory route. Most infections are asymptomatic or mild, especially in children or young adults, but disease severity progressively increases with age and the presence of co-morbidities, manifesting as a severe viral pneumonia progressing to acute respiratory distress syndrome. A number of therapeutic interventions and vaccines have been developed and are being evaluated in randomized clinical trials.

Glossary

Lymphopenia: Reduction in the lymphocytes in the circulating blood below the normal range for age.
Reproduction number: The number of secondary infections, on average, generated by one infected person.
RNA: Ribonucleic acid.
Non-pharmaceutical interventions: Public health measures other than drugs or vaccines that are used to contain an infectious disease outbreak.
Acute Respiratory Distress Syndrome (ARDS): Occurs when fluid builds up in the tiny, elastic air sacs (alveoli) in the lungs, leading to reduced oxygenation of the blood.
CD4 T cells: T cells bearing the cluster differentiation marker 4. These T cells have a helper function in both cell-mediated and antibody-mediated immune responses.
CD8 T cells: T cells bearing the cluster differentiation marker 8. These T cells have a cytotoxic function and recognize and kill other cells that express "foreign" antigens, which may be of viral or malignant origin.
JAK inhibitors: Inhibitors of the Janus kinase (JAK)/signal transducer and activator of transcription (STAT) pathway, which is linked to various cytokines and is involved in a variety of immune-mediated and inflammatory diseases.

An unusual cluster of severe pneumonia was noted in Wuhan, China, in December 2019. The etiological agent was identified to be a novel coronavirus closely related, but not identical to, the virus that caused SARS in 2003. The virus was named SARS coronavirus 2 (SARS-CoV-2) and the disease coronavirus disease 2019 (COVID-19). Within months, the virus spread to cause a pandemic which has had catastrophic impacts on global health, economy and society. SARS-CoV-2, SARS-CoV-1 (which caused the SARS epidemic in 2003) and related viruses found in Rhinolophid bats are classified within the subgenus Sarbecovirus, genus Betacoronavirus, family Coronaviridae. Closely related viruses have been found in bats of the genus Rhinolophus, notably RaTG13 and RmYN02, which, respectively, share 96.3% and 93% nucleotide identity with SARS-CoV-2. RmYN02 and SARS-CoV-2 both have furin cleavage sites within the spike protein, while RaTG13 does not. The natural reservoir from which SARS-CoV-2 emerged is likely to be bats of the Rhinolophus genus. What remains unclear is whether there were intermediate hosts that facilitated transfer and adaptation of the precursor virus to humans. The receptor used by SARS-CoV-2 to gain entry to cells is angiotensin-converting enzyme 2 (ACE-2). The median incubation period of SARS-CoV-2 infection is around 5 days (range 2-14 days) and the reproduction number (R0) is estimated to be 2.5. Infected persons may transmit infection from 1 to 2 days prior to onset of symptoms to around 7-10 days after symptom onset.
However, severely ill patients and immunocompromised individuals may be infectious for longer periods of time. Pre-symptomatic as well as asymptomatic infections may lead to transmission. The virus is transmitted via large respiratory droplets or respiratory aerosols, predominantly over close range (a few meters), although there are occasional instances of transmission over greater distances. Crowded indoor environments are more conducive to transmission, and singing or loud speaking by infected individuals increases the risk of transmission. The virus remains viable for many hours on smooth surfaces (stainless steel, glass, plastic), but survival is much shorter on porous surfaces such as cloth or paper. Therefore, indirect transmission from contaminated surfaces via hands to eyes, nose or mouth may potentially contribute to transmission. While the virus RNA can be detected in feces for prolonged periods, infectious virus has infrequently been detected and the degree of infectiousness of feces remains unclear. Super-spreading events are prominent drivers of transmission. In the early stages of the pandemic, non-pharmaceutical interventions including case detection, isolation, contact tracing, quarantine, physical distancing, reduction of mobility and travel-related measures were successfully used to reduce transmission. Symptoms include fever or chills, cough, shortness of breath or difficulty in breathing, fatigue, muscle or body aches, headache, sore throat, congestion or runny nose, nausea, vomiting or diarrhea. Loss of smell or a changed sense of taste is frequently reported and is associated with infection and damage of olfactory neurones in the nasopharynx. Progression of clinical disease may lead to hypoxia and acute respiratory distress syndrome (ARDS). Radiological changes include bilateral ground-glass opacities and alveolar exudation. Lymphopenia with increased serum transaminase, C-reactive protein and d-dimer levels is commonly seen in severe cases. Progression of disease is associated with difficulty in breathing, leading to ARDS and sometimes to a fatal outcome. The overall infection fatality risk increases progressively with age; those aged 15-44, 65-74 and ≥75 years have infection fatality risks of 0.03%, 3.1% and 11.6%, respectively, with males having roughly twice the risk of females across the age spectrum. COVID-19 infection in children and young adults is often mild or asymptomatic. The presence of co-morbidities including heart, respiratory, renal and liver diseases, cancer, diabetes and obesity increases the risk of severe infection and fatal outcome. Molecular detection of SARS-CoV-2 RNA is the mainstay of diagnosis. Detection of viral protein (usually nucleoprotein) by rapid antigen detection tests gives more rapid results, and such tests are sensitive in detecting specimens with high viral load from individuals with the highest transmissibility of infection. Antibody responses to multiple viral proteins (spike, nucleoprotein, ORF8) and virus-neutralizing antibodies become progressively detectable towards the end of the first week after onset of symptoms, and are detectable in most patients by the end of the third week of infection. Neutralizing antibodies target the spike protein and are protective. CD4 and CD8 T cell responses are also elicited following infection, but their role in protection remains to be elucidated.
Direct viral damage as well as immunopathology contribute to pathogenesis, with a hyper-inflammatory state being observed in severely ill patients. An intravascular coagulopathy also contributes to pathogenesis, often involving the microvasculature but sometimes leading to thrombosis of large blood vessels with poor prognosis. Supportive care in the management of patients includes provision of supplemental oxygen or mechanical ventilation as and when required. Randomized clinical trials are beginning to identify specific therapies with proven clinical efficacy, and this is a fast-moving area of knowledge. There is emerging consensus for the beneficial use of corticosteroids in those patients who require supplemental oxygen or mechanical ventilation. The antiviral drug remdesivir improves time to recovery but does not appear to provide survival benefit when used by itself. However, a combination of remdesivir with immunomodulators (e.g., JAK inhibitors such as baricitinib) may provide improved benefit. There has been rapid progress in developing and evaluating COVID-19 vaccines. These have included protein subunit, viral vectored (e.g., adenoviral vectors) and RNA vaccines targeting the viral spike protein, and inactivated whole-virus vaccines which elicit immune responses against the structural proteins of the virus. By the end of the year 2020, phase 3 trial data showed acceptable levels of efficacy and safety with RNA and adenoviral vectored vaccines targeting the virus spike protein, providing evidence that the viral spike is a protective antigen. In December 2020, some countries started to vaccinate their populations. The duration of vaccine-induced protection remains unknown. Most clinical trials evaluate protection from virologically confirmed symptomatic clinical disease, and it is unclear whether there will be a comparable impact in reducing transmission, a question of key relevance in disease control and population immunity.
2021-01-29T05:36:04.799Z
2021-03-01T00:00:00.000
{ "year": 2021, "sha1": "7622f0f475108491f38a8201278d17de150a34f1", "oa_license": null, "oa_url": null, "oa_status": "CLOSED", "pdf_src": "PubMedCentral", "pdf_hash": "76cde086614ca75c4979f0581665205760673d5f", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [] }
252063754
pes2o/s2orc
v3-fos-license
Luxatio erecta of the humerus: the spectrum of injury of inferior shoulder dislocation and analysis of injury mechanisms

Erecta dislocation/inferior dislocation of the shoulder is considered an uncommon injury, and present knowledge stems from case reports or compilations of cases. We believe that the injury is much more prevalent than previously stated. In this review, we discuss the mechanism of injury and, based on the number of patients with unusual injury patterns at our hospitals and in the literature, describe the anatomical features of different variants of inferior dislocation. Only a few patients present with their arm still locked in abduction, and most patients with an initial inferior dislocation are diagnosed with other types of dislocation or injury. Irreducible dislocation, with tissue blocking the glenoid, appears to be a consequence typical of an initial inferior dislocation. Nerve and vascular injuries are overrepresented, as are humeral avulsion of the glenohumeral ligament (HAGL) injuries. The description of shoulder dislocations should ideally include the dislocation path and not only the final position of the humeral head.

Luxatio erecta of the humerus (LEH) is said to be an uncommon lesion, considered to represent less than 1% of all anterior glenohumeral dislocations, and the term is used synonymously with "inferior dislocation" (ID). Through the years numerous case reports have been published, including a long-term follow-up of 16 cases by Groh et al. 8 At our hospitals we encounter only a few cases of LEH per year, that is, patients who present with their arm locked in elevation. However, we believe that ID is much more prevalent than reported and that true erecta dislocations represent only a subset of ID, since the arm may escape from its erect position or fail to lock in the first place. For the arm to stay locked in abduction, the surrounding tissues must have enough strength to withstand the downward pull of the arm from the moment of injury to the presentation at the hospital. It may be assumed that the locked, abducted state is seen in a minority of patients with ID, and that other patients may present either with a reduced dislocation or with a dislocation of what appears to be a more common type, anterior or posterior. However, after an ID, the shoulder is likely to have suffered a spectrum of injuries, of which several are typical for the mechanism involved in ID. In this review, we will discuss the mechanism of injury, including the typical radiographic signs and clinical features (Tables I and II), of inferior dislocation of the shoulder. For illustration of the injury panorama, a collection of some recent cases has been summarized in Table III.

Mechanism of injury

The typical glenohumeral dislocation, whether anterior or posterior, is a dislocation within the envelope of the cuff. Depending on the nature of the force, the capsule may be peeled off the glenoid (anterior labral periosteal sleeve avulsion, ALPSA lesion), with or without the labrum or its bony attachment. The humeral head ends up resting in a pocket between the glenoid neck and the posterior or anterior cuff/capsule. The close contact between the humeral head and the glenoid rim frequently results in a Hill-Sachs lesion and/or a glenoid rim fracture. In severe cases, the Hill-Sachs indentation may have an extension, occasionally causing a fracture of the tuberosities and/or the anatomical neck.
These fractures are sometimes first noticed after a failed reduction attempt that leaves the humeral head dissociated from the shaft. 23 As long as the humeral head resides within the cuff, the surrounding soft tissues are relatively protected from direct impact by the humeral head, and the position of the head is only moderately medialized. ID typically occurs when the patient tries to break a fall with the arm outstretched and elevated. The head of the elevated/abducted humerus is forced downward, between the inferior edges of the anterior and posterior rotator cuff, into the soft axillary pouch. In this position, the humeral head is incompletely covered by the rotator cuff (Fig. 1, A) and little additional force is required for the head to exit the joint. In the typical situation of an ID locked in abduction, the lateral aspect of the humeral head is locked against the inferior glenoid rim, with potential associated injuries to the supraspinatus insertion or depression fractures of the superior aspect of the greater tuberosity (Fig. 1, B). If the dislocation progresses, the humeral head may completely escape from the cuff and advance further in the medial direction, causing considerable damage to surrounding tissue by traction or direct impact.

Staging of LEH

Stage 1a (locked erecta dislocation)

Arm in elevation/flexion with the humeral head locked against the inferior glenoid rim. Superior fractures of the greater tuberosity are likely (60% 18), as are tears of the inferior portions of the rotator cuff and glenohumeral ligaments (Fig. 1, B).

Case 3

A 55-year-old woman fell on her outstretched arm. Radiographs revealed a luxatio erecta with a distally depressed fracture of the greater tuberosity (GT). The shoulder was reduced with longitudinal traction. At surgery within 2 weeks the GT/supraspinatus was fixed; no additional cuff injuries were observed (Fig. 2, A and B).

Stage 1b (reduced dislocation, superior Hill-Sachs lesion, depressed GT)

Transient inferior dislocation with spontaneous reduction of the humeral head, thus avoiding the locked erecta position. Injuries to the shoulder (inferior glenoid rim fragment, superior Hill-Sachs lesion, depressed GT fragment, partial cuff rupture, typically of the inferior part of the subscapularis or infraspinatus) may go undiagnosed and the nature of the injury not fully clarified.

Observed fractures (greater tuberosity (GT) or ¾-part valgus fractures), cuff injuries (supraspinatus (ssp), infraspinatus (isp), subscapularis (ssc)), nerve injuries (radial (rad), ulnar (uln), median (med), axillary (ax)) and performed surgery (OP) (reverse shoulder arthroplasty (rTSA)) are listed. Due to medical reasons, some patients did not have surgery despite their cuff injuries.

Case 7

A 59-year-old male fell forward while skiing. Initial radiographs showed a small depressed GT fracture but a congruent joint. An additional computed tomography (CT) scan demonstrated a small posterior avulsion of the GT. Active shoulder motion appeared restricted and the injury was suggestive of an ID. At surgery five days later, the humeral head was found completely denuded. Although still intact in its components, the entire cuff was completely avulsed from its insertion as one unit (Fig. 3).

Stage 1c (hypothetical)

Similar to Stage 1b, but the head has recoiled from inferiorly and assumed the position of an anterior or posterior dislocation, inside the cuff. In such cases, ID will be difficult to verify, but the mechanism is nevertheless possible.
Indeed, some authors have suggested a two-stage technique to relocate the humeral head in a fixed erecta dislocation, the first step of which is to shift the humeral head from the subglenoid position to that of an anterior dislocation. 19 If this relocation can be performed manually, a similar process may occur spontaneously. Various degrees of cuff tears could accompany this stage.

Stage 2a (head anterior to the subscapularis, rotator cuff not completely detached, closed reduction possible)

When the arm is abducted, the dislocated humeral head will be partly exposed below the inferior edge of the subscapularis. From this position, especially if the subscapularis is partly ruptured, the head may displace anteriorly under the inferior edge of the subscapularis. The humeral head will now be outside the confinement of the cuff and, unrestrained by the subscapularis, it is often displaced more medially than seen with an intracapsular dislocation and may come to rest close to the brachial plexus. The interposition of the subscapularis between the humeral head and the glenoid can be visualized on the plain X-ray lateral view or Velpeau view as a distance between the glenoid and the humeral head. In this injury, reduction may still be possible if the humeral head is first brought under the inferior edge of the subscapularis and then back into the joint (Fig. 1, C). A similar injury, with the head dislocated posteriorly under the edge of the infraspinatus and the infraspinatus/supraspinatus interposed into the joint, is also possible. 12

Case 12

An 89-year-old male fell outside his grocery store on an outstretched arm. He presented with shoulder pain, and radiographs showed what was perceived as an anterior dislocation. Kocher reduction was attempted both in the emergency department and under anesthesia but failed. Scrutiny of the radiographic Y-view showed a distance between the anterior glenoid edge and the humeral head, indicating soft tissue interposition. A CT scan confirmed the findings. The shoulder could then be easily reduced by longitudinal traction of the humerus followed by manipulation of the humeral head in under the inferior edge of the subscapularis (Fig. 4, A-C).

Stage 2b (complete avulsion of the rotator cuff, closed reduction not possible)

When the entire rotator cuff is completely detached, with or without tuberosity fragments, but not ruptured between its components, emptied of the humeral head, it will collapse as a sleeve across the glenoid. Closed reduction of the humeral head is not possible, since manipulation of the arm is unable to open the cuff enough to allow reintroduction of the head. After a relocation attempt, the head may appear to be reduced into the joint; however, due to the interposed rotator cuff, the glenohumeral joint is not perfectly congruent, and the head may even appear in a position lateral and superior to the glenoid (Fig. 1, D).

Case 16

A 74-year-old woman presented after a fall at home on her outstretched arm. X-ray examination showed an anterior dislocation, with the humeral head medial to the coracoid. Reduction was successful but "indistinct", although the humeral head appeared to be in the joint. CT the next day showed subscapularis interposition, and another reduction was tried but failed. Magnetic resonance (MR) examination demonstrated gross rotator cuff interposition with the humeral head completely outside the cuff. She later received a reverse shoulder arthroplasty (Fig. 5, A-D).
Stage 3 (valgus fracture and head anterior to the subscapularis with complete avulsion of the rotator cuff, with or without tuberosity fragments)

This scenario has been described by Robinson et al. 23 They describe a valgus impacted proximal humerus fracture combined with an anteroinferior dislocation through the axillary fold, where the head comes to rest anterior to the subscapularis (Robinson stage 3b). In Robinson stage 3c, the head has become separated from the shaft, either by the trauma itself or iatrogenically by a failed reduction attempt. Such dislocations are facilitated by the head being displaced into a valgus position, offering less resistance to inferior dislocation (Fig. 1, E and F).

Case 18

A 60-year-old teacher tripped on a cord in the classroom and tried to break her fall with her outstretched arm. She presented with an anterior dislocation; the head was impacted in valgus and the tuberosities were visible lateral to the glenoid. Reduction was attempted but unsuccessful. CT demonstrated the humeral head outside the rotator cuff, with tuberosities and cuff blocking the glenoid. Reverse shoulder arthroplasty was performed a few days later (Fig. 6, A-E).

Discussion

LEH was first reported with the description of two cases by Middledorpf 17 in 1859, later followed by cadaver experimental work performed by his assistant Scharm. 25 In 1921, Lynn had collected 18 cases from the literature and added another 3 of his own. 13 Already at this time, it was known that, given the force required to produce an LEH, concomitant injuries such as tears of the rotator cuff or fracture of the GT, and occasionally injuries to nerves and vessels, were common. The condition is rare, and bilateral dislocations even more so. Nambiar et al (2018) 18 in their review identified 199 published cases, of which 29 were bilateral. From this time on, the literature on LEH consists predominantly of case reports or reports on collections of a limited number of cases. In recent years, two compilations of cases from the literature have been published. 8,18 All the reported patients had undergone plain X-ray examination, and the few patients who had a subsequent CT, MR imaging, or CT arthrogram may not be representative of all LEHs. Associated soft tissue injuries have neither been systematically assessed with modern imaging techniques nor described. The average age of LEH patients, 44 years, 18 is not clearly different from the age of patients with anterior dislocation, 47.6 years. 24 Almost 50% of all dislocations are dislocations of the glenohumeral joint. 2 These dislocations are usually divided into anterior, posterior, or inferior dislocations according to the position of the humeral head at examination. In traumatic shoulder dislocations, the normal balancing muscle forces across the joint must be overcome by an external force, as when trying to break a fall by the parachute reflex. The direction of the initial dislocation is determined by the position of the humerus relative to the glenoid at the time of impact and the forces across the joint, but the resting position of the humeral head depends on the nature of the associated soft tissue injury and possible concomitant fractures. An anterior or posterior final position of the humeral head does not preclude an initial inferior dislocation mechanism. However, the most reasonable explanation for the interposition of the rotator cuff, with or without fragments of the tuberosities, is an initial inferior dislocation, as shown in Figure 1.
The subglenoid position is also highly suggestive of an ID.

Fractures

In the review of 199 patients with LEH, 39% of the patients had a concomitant proximal humeral fracture, 18 and 75% of these fractures were fractures of the GT. These numbers are similar to those reported by Mallon et al. 16 In our case series (Table I), 6 of 7 patients with locked inferior dislocation also had a fracture of the greater tuberosity, apparently from the contact with the inferior glenoid. In the review by Nambiar et al, 18 scapular fractures were noted in 8% of the patients, but only one patient had an acromial fracture, which probably is the same patient as previously reported in the series by Mallon. 16 If the dislocation mechanism involves levering against the acromion, it is surprising that acromial fractures are this rare, and the mechanism of inferior dislocation may not regularly involve levering.

Cuff and capsular injuries

All our 10 patients with ID stage 2 (rotator cuff avulsion) (Table I) appeared to have an anterior dislocation, but none had a fracture of the tuberosities. The fact that the head was anterior to the subscapularis, as visible on the lateral view, was usually overlooked; reduction was typically more difficult than expected, and in one case the humerus was left dislocated. Robinson et al reported that 10% of patients with acute anterior dislocation had also sustained a cuff injury. 24 In the reports on soft tissue injury associated with anterior dislocation, 10,24,32 it is reasonable to believe that several of the included injuries are IDs (Stage 1c or 2). The incidence of cuff tears in combination with ID is not known but is probably considerably higher. In anterior dislocations, the ligament injury is usually on the glenoid side (ALPSA lesion), and humeral-sided lesions (humeral avulsion of the glenohumeral ligaments, HAGL) are seen in less than 10% of patients with recurrent instability. 4 In a retrospective study of 1000 consecutive MR investigations performed for shoulder pain, only 23 (2.1%) of the 743 patients who later underwent surgery were found to have a HAGL lesion. 14 Bokor et al found that of patients undergoing surgery for recurrent instability, 7% had a HAGL injury. 3 Of these, 43% also had injury to the cuff, in contrast to the 1.9% with a cuff injury in the non-HAGL group. The suggested mechanism of HAGL injuries is hyperabduction, 20 similar to the mechanism of ID. This seems to be in line with the observation that the glenohumeral ligament injury in IDs always appears to be on the humeral side, the ligaments being avulsed with the rotator cuff. In our case series, the inferior part of the subscapularis is more often injured than the superior parts, which can be understood from how the subscapularis is stretched during forceful abduction; the injury corresponds to a humeral avulsion of the inferior glenohumeral ligament.

Nerve injuries

Nerve injuries associated with shoulder dislocation are believed to occur mainly by traction, when the nerve is stretched during dislocation. When the humeral head is outside the protective rotator cuff, as in inferior dislocation, the injury mechanism may also involve direct impact. In a review of 3633 patients with acute anterior dislocation, Robinson et al reported that 13.5% of the patients had a neurologic deficit, usually transient and commonly of the axillary nerve. 24 Of the patients with neurological deficit, 57% also had a GT fracture, and 30% of the patients with GT fractures had a neurological deficit.
It is not clear what proportion of these patients had a possible ID, stages 1c-2. In a review of inferior dislocations, the proportion of nerve injuries was higher: 59% of the 80 cases had some degree of nerve injury, 16 with the axillary nerve most commonly affected. In a multicenter study on nerve lesions after shoulder dislocations, Tiefenboeck et al found that the direction of dislocation, anterior, posterior, or inferior, did not appear to influence the rate of nerve injuries. 31 Interestingly though, in their case series of patients with nerve injuries after shoulder dislocation, inferior dislocation was much more prevalent (17%) than would normally be expected. 31 In our series of 27 patients, 11 had clinical signs of nerve injury, which was found to be transient in all cases (Table III). It is not unlikely that a large proportion of the nerve injuries reported after anterior dislocation could be explained by an ID mechanism, and that the incidence of concomitant nerve injuries in pure anterior dislocations is lower than reported. In an illustrative case by Frank et al, 7 the patient had an irreducible inferior dislocation. During exploration, in addition to rotator cuff injuries, the axillary nerve was found anterior to the humeral neck. The only explanation for this unusual situation is that the humeral head has moved like a crochet hook, from an inferiorly displaced position, up posteriorly to catch the nerve. A similar mechanism could explain how the musculocutaneous nerve was trapped behind the humeral head in another case of irreducible anterior dislocation. 9

Vascular injuries

Vascular injuries caused by closed shoulder dislocations are rare, reported to represent less than 1% of vascular injuries around the shoulder when a fracture is not present, 30 and predominantly involve the axillary artery. The injury is believed to be caused by stretch or shear of the artery and is more common in the elderly with less compliant vessels. In cases of inferior dislocation, additional modes of arterial injury are possible, and vascular damage is probably more common than in anterior dislocations. A few reported cases are illustrative. A patient described by Shah et al 28 had sustained an avulsion of the humeral circumflex artery after an "anterior dislocation." However, on the prereduction radiograph it is apparent that soft tissue is interposed between the humeral head and the glenoid, which makes inferior dislocation with subscapularis avulsion a more plausible mechanism of injury. Similarly, Magister et al described a 50-year-old patient with an "anterior dislocation" and axillary artery injury. 15 On the prereduction radiograph in this case as well, soft tissue appears interposed between the humeral head and the glenoid. An unusually young patient was described by Chehata et al, who had an "anterior dislocation." 5 The 17-year-old boy had, in addition to an injury of the axillary artery, also a complete avulsion of the cuff, and the authors presumed that these injuries were the consequences of an initially inferior dislocation.

Irreducible dislocation

There are several reports on patients with "irreducible" shoulder dislocations appearing to have either anterior or posterior dislocations. In a few cases, the shoulder appears subluxated and the joint space widened, with the humeral head slightly lateral and cranial. 1,6,11,12,21,22,26,27,29,33 In many of these reports the deformity is called "posterior dislocation", but the humeral head has not been engaged against the glenoid.
The common obstacle to reduction in these shoulders is the interposition of the rotator cuff, with or without tuberosity fragments. Although the mechanism of injury has not been clarified in these papers, the majority involve high-energy trauma, typically motorcycle or bicycle accidents. We believe that many of these injuries were the result of an initial ID, but that the humeral head, more or less denuded, later assumed the position seen on the X-ray films.

Conclusion

The common description of shoulder dislocation is based on the position of the humeral head at examination but does not reflect the injury mechanism or the position of exit of the humeral head. The incidence of ID is therefore underestimated, and since only few patients present with an actual locked erecta dislocation, the majority are diagnosed with other forms of dislocation: anterior, posterior, and lateral. Radiographs should always be examined for signs of tissue interposition, as this indicates an ID and reduction will be difficult or impossible. The likelihood of accompanying injuries to the rotator cuff, nerves, and vessels is high. When injured, the glenohumeral ligaments are typically avulsed from the humerus (HAGL injury). The available literature does not allow further epidemiology of these injuries, since most publications are case reports. However, the injury may occur in patients of any age, from children to the elderly. The possibility of an ID should always be evaluated when treating shoulder dislocations, since the injury could be extensive.

Disclaimers: Funding: Grants or other economic support have not been used to support the preparation of this manuscript. Conflicts of interest: The authors, their immediate families, and any research foundations with which they are affiliated did not receive any financial payments or other benefits from any commercial entity related to the subject of this article.
2022-09-04T15:15:51.050Z
2022-09-01T00:00:00.000
{ "year": 2022, "sha1": "0f7da203f58e468506341dedb468904b3b1428bd", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.xrrt.2022.08.004", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bf64d9f947395229368748284b9fafe6a44991ae", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
252696715
pes2o/s2orc
v3-fos-license
Floral organ transcriptome in Camellia sasanqua provided insight into stamen petaloid

Background: The cultivated Camellia sasanqua forms a divergent double flower pattern, and the stamen petaloid is a vital factor in this phenomenon. However, the regulation mechanism remains largely unclear.

Results: Here, a comprehensive comparative transcriptome analysis of the wild type, "semi-double", "peony double", and "rose double" was performed. Cluster analysis of global gene expression levels showed that petals and stamens were difficult to separate in double flowers. The crucial pathways and genes related to double flower pattern regulation were identified by pairwise comparisons and weighted gene coexpression network analysis (WGCNA). Divergently expressed genes, such as AUX1 and AHP, are involved in plant hormone signaling, and photosynthesis and secondary metabolites play an important role. Notably, a petal-specific module exhibits a molecular signature similar to that of the stamen, containing extensin protein and PSBO1, supporting the stamen petaloid interpretation. Moreover, the expansion of class A gene activity influenced double flower formation, suggesting that the key function of these genes was probably disrupted.

Conclusions: Overall, this work confirmed the ABCE model and provided new insights for elucidating the molecular signature of double flower formation.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12870-022-03860-x.

Background

Camellia (family Theaceae) contains about 250 species [1], including C. japonica (ornamental), C. sinensis (beverage), and C. oleifera (oil). Camellia sasanqua belongs to section Oleifera, distributed mainly in tropical and sub-tropical zones. Nevertheless, research on C. sasanqua has mainly focused on flower pigmentation [2,3], and little is known about its flower pattern, especially the development of the stamen petaloid. There are four whorls in its flower: some sepals in the first, five petals in the second, numerous stamens in the third, and one carpel in the fourth. Interestingly, some flower patterns are mutated in cultivated cultivars, including semi-double, peony double, rose double, and anemone double (according to the criteria set out by the International Camellia Society). In these heavily petaled flowers, the stamens become, to varying degrees, petal-like organs, representing mitigated stamen growth in camellia [4]. Despite extensive knowledge of the molecular mechanisms regulating flower pattern change in model plants, it remains unknown how the floral pattern in C. sasanqua cultivars is achieved.

Most flowers contain four organ types (petal, stamen, sepal, and carpel), and their development is influenced by conserved molecular mechanisms [5]. In model plants, the ABCE model relates to flower development [6,7]. Class-A (APETALA1, APETALA2, LIPLESS1, and LIPLESS2) and class-E genes (SEPALLATA) control the development of sepals. Class-A, class-B (APETALA3, PISTILLATA, DEFICIENS, and GLOBOSA), and class-E genes regulate the characteristics of petals. Class-B, class-C (AGAMOUS, PLENA, and FARINELLI), and class-E genes determine the stamen phenotype [8]. A previous study showed that tetramers comprising one class-A, two class-B, and one class-E protein regulate the formation of petals [9].
In addition, phytohormones also play a primary role in flower pattern change [10], photosynthesis provides nutrition for reproductive development [11,12], and floral diversification promotes reproductive success through interaction with pollinators [6]. Overall, a complicated network of genetic pathways controls flower architecture. Although the tenets are conserved in angiosperms, different families show different characteristics, such as the hundreds of independent carpels arranged on the receptacle in Fragaria × ananassa [13] and the stamen petaloid in Alcea rosea [14]. Abundant information is required for understanding the variation of double flowers. The present study generated comparative floral organ transcriptome data of wild-type and three double-flower cultivated C. sasanqua by taking advantage of the Illumina platform. As a result, transcriptional changes related to double-flower formation were captured, and tissue-specific gene modules were identified by WGCNA. Most ABCE homeotic genes were expressed in the expected floral organs. Together, the gene expression profile described here provides the foundation for molecular signature exploration of the C. sasanqua flower pattern.

Phenotype divergence among four kinds of flower pattern

The composition of floral organs influences flower patterns, and further influences ornamental value and reproductive capacity. In general, the flower of C. sasanqua contains carpels, stamens, petals, and sepals (Fig. 1A). With an increasing degree of stamen petaloid, the number of stamens decreased and the number of petals increased, forming many double-flower variants (Fig. 1B), such as semi-double (XMG), peony double (ZHZR), and rose double (FSZF). By analyzing the transcriptome divergence among flower tissues, we can further reveal the molecular characteristics of the stamen petaloid in C. sasanqua.

General description of transcriptome data

The quality metrics of the 36 RNA sequencing libraries prepared from the sepal, stamen, and petal of C. sasanqua with different flower patterns are listed in Supplemental Table S1. The number of clean reads per library ranged from 22 to 41 million, and the average CleanQ30 was > 93%. The mapping rate to the reference genome [15] ranged from 75.25% to 82.63%, and more than 75.7% of the reads were mapped to exon regions (Supplemental Table S1). These high-quality data were used for further analysis. A total of 42,463 genes were identified and quantified based on Fragments Per Kilobase Million (FPKM) values. Correlation analysis showed similar expression patterns for all biological replicates (Supplemental Fig. S1A). Cluster analysis of the organs' global expression levels showed that the 36 samples were divided into two clusters: petal and stamen formed one group, and sepal formed a distinct section (Supplemental Fig. S1B). In CS, each floral organ exhibits distinct morphology and is easily separable. However, petals and stamens are difficult to separate in double flowers, providing evidence of the stamen petaloid at the transcriptional level.

Pairwise differential expression observation of floral tissues

To investigate the transcriptional divergence that formed different flower patterns, strict screening criteria (|log2FC| ≥ 1 and FDR < 0.05) were used. In comparative analyses of homologous organs, the petal, stamen, and sepal comparisons shared 2471, 2169, and 1842 DEGs, respectively (Fig. 2A). The maximum number of DEGs (2892) was specific to the FSZF vs. CS comparison. Due to the interference of color in XMG and ZHZR, the FSZF vs.
CS comparison was not influenced by color. We therefore focused on the overlapping DEGs: 1194 DEGs were shared by the XMG vs. CS and FSZF vs. CS comparisons, and 1231 DEGs were shared by the ZHZR vs. CS and FSZF vs. CS comparisons. Accordingly, gene ontology (GO) enrichment analyses of the overlapping DEGs were performed and combined into a matrix keeping the significant GO terms (Fig. 2B and Supplemental Table S2). Significant enrichment was observed in GO terms related to "kinase activity", "meristem development", "protein phosphorylation", "cell wall", and "response to brassinosteroid". In addition, the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment results revealed enrichment of genes involved in the biosynthesis of secondary metabolites, plant hormone signal transduction, photosynthesis, and tryptophan metabolism (Fig. 2C and Supplemental Table S3). Identification of tissue-specific coexpression modules A total of 4,247 genes in the top 10% of expression variance were used for the weighted gene coexpression network analysis (WGCNA). A power of 12, with a scale-free topological fit index of 0.9, was chosen, and 17 different modules were obtained (shown in different colors). The module eigengene is the first principal component of a given module and can be considered representative of the module's gene expression profile. Twelve of these modules correlated with a specific tissue; for example, the blue (r = 0.68, p = 4e-6) and purple (r = 0.83, p = 5e-10) modules identified sepal-specific genes of CS and ZHZR, respectively (Fig. 3A). Interestingly, both petal and stamen of ZHZR were correlated with the grey60 module (Fig. 3B), indicating that the molecular similarities between petal and stamen in ZHZR may reflect genes regulating the stamen petaloid. The development genes photosystem II oxygen-evolving enhancer protein 1 (Cao1_scaffold_10-gene-1860.33), extensin family protein (Cao1_scaffold_10-gene-2143.12), and gamma tonoplast intrinsic protein (Cao1_scaffold_13-gene-1017.19) (Supplemental Table S4) were observed in the grey60 module. In addition, a high-weight network was constructed by calculating the connectivity between gene modules and is shown in Fig. 3C. Focusing on the grey60 module, GO and KEGG enrichment analyses were performed. The results showed significant enrichment in GO terms related to photosynthesis and the chloroplast (Fig. 3D and Supplemental Table S5), including "photosystem II oxygen evolving complex", "photoinhibition", and "chloroplast stromal thylakoid". Moreover, "plant-type cell wall loosening" was also enriched, indicating that cell wall processes played an important role in double flower development. The top 10 KEGG pathways involved photosynthesis, flavonoid biosynthesis, and metabolic pathways (Fig. 3E and Supplemental Table S6). Phytohormone signaling pathways involved in double flower development The above pathway analysis of overlapping DEGs revealed that plant hormones participate in double flower formation. Fifty-one DEGs were identified as regulators of plant hormone signaling (Fig. 4). Most genes were involved in auxin biosynthesis and signaling: five AUX1, four IAA, three GH3, and ten SAUR coding genes. AUX1, IAA, and GH3 coding genes were upregulated in the double flower, particularly in petal and stamen. Interestingly, the SAUR coding genes had a high expression level in sepal.
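As a concrete illustration of the DEG screen used throughout these comparisons (|log2FC| >= 1, FDR < 0.05, then overlapping the pairwise comparisons), the sketch below filters hypothetical DESeq-style result tables. The file names and column labels are assumptions, not the authors' actual pipeline output.

```python
# Hypothetical sketch of the DEG screen and overlap step; the input files
# and column names ("gene_id", "log2FC", "FDR") are illustrative only.
import pandas as pd

def screen_degs(path, lfc=1.0, fdr=0.05):
    """Return rows passing the |log2FC| and FDR cutoffs stated above."""
    df = pd.read_csv(path)
    return df[(df["log2FC"].abs() >= lfc) & (df["FDR"] < fdr)]

deg_xmg = screen_degs("XMG_vs_CS.csv")
deg_fszf = screen_degs("FSZF_vs_CS.csv")

# Overlapping DEGs between two comparisons, as used for GO/KEGG enrichment:
shared = set(deg_xmg["gene_id"]) & set(deg_fszf["gene_id"])
print(len(shared), "shared DEGs")
```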
ABCE homologous genes in C. sasanqua It is well known that floral structural variation is usually determined by homeotic genes in the ABCE model. To gain insight into transcriptional changes during double flower development, we identified members of the MADS-box gene family regulating flower patterns. Combining the results of the hmmsearch and BLAST+ methods, a total of 65 sequences containing MADS-box and K-box domains were identified. These candidate genes were aligned with the MADS-box proteins of Arabidopsis thaliana to construct a phylogenetic tree (Fig. 5A). Finally, 11 homologous genes of the A, B, C, and E classes were identified in our database. Expression analysis showed that E-class genes (Cao1_scaffold_7-gene-5.0, Cao1_scaffold_15-gene-134.31) were upregulated in double flower cultivars (Fig. 5B). One C-class gene (Cao1_scaffold_4-gene-1043.69) mainly accumulated in stamen and was downregulated in the double flower. The activity of two B-class genes (Cao1_scaffold_10-gene-1650.15, Cao1_scaffold_10-gene-1651.6) expanded in the petal of the double flower. Interestingly, the A-class genes had different expression trends. One A-class gene (Cao1_scaffold_7-gene-1063.2) was upregulated in the petal and stamen of the double flower, indicating that it may be related to the development of the stamen petaloid. We selected five ABCE class genes to validate the transcriptome data by RT-qPCR. Primers were designed with Primer Premier 5 software (Supplemental Table S7). The results were highly consistent with the RNA-seq data (Fig. 6), indicating the reliability of our data. In this study, crucial pathways and candidate genes were identified to provide targets for flower breeding. In addition, the molecular similarity between the transcriptomes of petal and stamen in ZHZR supported the conclusion of stamen petalization. The divergence of stamen petaloid influenced double flower architecture Under the influence of human demand, many double flower cultivars of C. sasanqua have been derived, mainly through the stamen-to-petal transition [11]. This phenomenon has been studied at the molecular level; for example, genes regulating the stamen petaloid in Lagerstroemia speciosa were identified through transcriptome analysis [16]. Transcriptome variation mirrors genetic variation [17]; we found significant divergence between wild and double flowers and limited divergence between petal and stamen within double flower cultivars. This result is consistent with a previous study: each flower organ is easily separable in wild-type camellia, while petals and stamens cluster together in double flower cultivars [4]. Over 2000 DEGs were shared by petal and stamen in the single versus double flower comparisons, respectively. These genes probably contribute significantly to the variation from single to double flowers. Phytohormone responses in double flower development A previous study revealed that plant hormones are related to the stamen petaloid [8,18]. In particular, the biosynthesis and transport of auxin affect the arrangement of the floral whorls [19]. In our results, CsAUX1 and CsIAA in the auxin pathway were upregulated in the petal and stamen of the double flower. The petal primordium is formed by AUX1-promoted auxin accumulation, which PIN-FORMED1 (PIN1) transports [20]. In Arabidopsis thaliana, the IAA1 mutant inhibits the interaction with TIR1, resulting in petal loss [21]. Interestingly, CsSAUR had a high expression level in sepal, while AtSAUR responds to auxin and regulates cell elongation [22], indicating that sepal development probably also affected double flower variation. Moreover, CsBSK in the brassinosteroid pathway and CsARR-B in the cytokinin pathway probably regulate cell expansion and abnormal flower development, respectively [23,24].
Genes involved in the gibberellin, ethylene, and abscisic acid pathways also played an important role, suggesting that double flower development is regulated by a complex hormone network. Expansin proteins probably participate in the stamen petaloid Among the coexpression modules of C. sasanqua, we noted that one module displayed similarity between petal and stamen in the double flower, and that cell wall loosening was enriched in it. Examination of the high-weight network identified several expansin proteins that were upregulated in the petal and stamen of the double flower. Petal growth mainly depends on cell expansion [25], and expansin genes may assist the wall modification related to petal development [26]. The GA-regulated gladiolus expansin gene (GgEXPA1) is expressed prominently during stamen, petal, and tepal expansion [27]. The α-expansin proteins of Mirabilis jalapa also change abundantly during the rapid expansion of its ephemeral flowers [28]. Further functional validation is required to elucidate an expansin-mediated mechanism. The ABCE model is conserved in double flower development In general, the ABCE model defines four regulatory gene functions. A-, B-, and C-class genes work in a combinatorial fashion to confer organ attributes in each whorl [6], and E-class genes ensure that all functions are performed normally. We identified homeotic ABCE genes of the MADS family. The A-class genes were upregulated in the stamen and petal of the double flower, indicating that double flowers potentially released the constraints on gene expression required for whorl development. The petal number was increased by heterologous overexpression of CjAPL2 genes [29]. In contrast, we noted that C-class genes were downregulated in the stamen of the double flower. This may be caused by the mutual antagonism between the A-class and C-class functions, such that class-C activity expands in class-A mutant plants [30]. B-class genes in C. sasanqua had similar expression trends in wild and double flower types, in agreement with a previous study [4]. Floral organ differentiation requires conserved ABCE gene functions, but the double flower displayed blurred expression crossing the borders of organ types. Conclusion In short, we found that the designated expression pattern of the ABCE genes was deconstructed. In particular, class-A gene activity expands into the stamen in the double flower. In addition, genes involved in plant hormone signaling, photosynthesis, and extensin proteins were considered candidate regulators of the double flower, but further investigation is needed to elucidate the complete picture. Our transcriptome database presented here will serve as a useful genetic resource for clarifying double flower domestication. Plant materials and RNA extraction The annual rainfall at the study site was 1,500 mm, the soil at the test site was sandy loam, and the pH was 5.5-6.5. Sepals, petals, and stamens from wild-type and cultivated camellias were collected, frozen immediately in liquid nitrogen, and stored at -80 °C. Three biological replicates were obtained from three individuals. Total RNA of all samples was extracted using the DP441 plant kit (TIANGEN, Beijing, China) following the manufacturer's instructions and stored frozen before use. Standard-compliant RNA (RIN > 8.0 and concentration > 100 ng/µl) was screened using NanoDrop 1000 (Thermo Fisher Scientific, Wilmington, DE) and Agilent 2100 instruments (Agilent Technologies, Palo Alto, CA, USA).
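As a trivial illustration of this quality gate (RIN > 8.0 and concentration > 100 ng/µl), the snippet below screens a small, invented sample table; the records are not from the study.

```python
# Invented sample records; thresholds follow the criteria stated above.
samples = [
    {"id": "CS_petal_1", "rin": 8.6, "conc_ng_ul": 152.0},
    {"id": "CS_petal_2", "rin": 7.4, "conc_ng_ul": 210.0},   # fails RIN
    {"id": "XMG_stamen_1", "rin": 9.1, "conc_ng_ul": 88.0},  # fails conc.
]

passed = [s for s in samples if s["rin"] > 8.0 and s["conc_ng_ul"] > 100.0]
print([s["id"] for s in passed])   # only CS_petal_1 passes
```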
Transcriptome sequencing and data processing According to the manufacturer's instructions, five micrograms of total RNA from each sample were used to construct the NGS library with the mRNA-Seq Sample Prep kit (Illumina Inc., San Diego, CA). Oligo(dT) beads were used to enrich the mRNA, and a fragmentation buffer was used to produce short fragments. The short fragments were synthesized into cDNA using DNA polymerase I and RNase H. Polymerase chain reaction (PCR) enrichment was performed to obtain the cDNA library [31]. The libraries were sequenced on an Illumina HiSeq 2000 sequencer. The high-quality clean data were mapped to the assembled C. oleifera genome [15]. An index of the reference genome was built using Bowtie v2.2.3, and paired-end clean reads were aligned to the reference genome using TopHat v2.0.12. New genes were predicted via the EMBOSS package (http://emboss.open-bio.org/). DEG identification and functional enrichment analysis The expression levels of the transcripts were quantified based on the read counts mapped to the genome and were calculated using the Fragments Per Kilobase of transcript per Million mapped reads method. DESeq [32] was used for the differential expression analyses between control and experimental groups. The DEG screening conditions were |log2FC| ≥ 1 and false discovery rate < 0.05. The data were compared with the Gene Ontology (GO) database [33], the Kyoto Encyclopedia of Genes and Genomes (KEGG) [34], and the NR (non-redundant) protein sequence database [35] (https://ncbi.nlm.nih.gov/blast/db/FASTA/). P-value correction was performed using the Benjamini-Hochberg (BH) method, and terms with corrected P < 0.05 were identified as significantly enriched. Moreover, the top GO terms were consolidated into a matrix. The R package pheatmap was used for visualization. WGCNA and phylogenetic analysis The R package WGCNA [36] was used to perform the coexpression network analysis and to determine the correlations between tissues and modules. A positive correlation indicated that the genes of a module had higher expression in that tissue relative to all other samples. Finally, Cytoscape (3.0.0) was used to visualize the network. First, the hidden Markov models of the MADS and K domains were downloaded from Pfam [37]. Genes similar to CsMADS-box genes were searched for with the hmm method, and these candidates were then verified by BLAST against the gene sequences of Arabidopsis thaliana. Finally, the results of both approaches were combined. Sequence alignments were performed using MAFFT [38], and the alignment results were used to build phylogenetic trees with MEGA5. Quantitative real-time PCR validation Primer Premier v5.0 was used to design the gene-specific primers (Supplementary Table S7). Quantitative real-time PCR was performed on an ABI StepOnePlus Real-Time PCR System (Thermo, USA) according to the TB Green Fast qPCR Mix (Takara) instructions. The 18S rRNA gene was used as the internal reference, and relative expression levels were quantified by the 2^-ΔΔCT method [39]. Statistical analysis All data were analyzed with three biological replicates. Statistical analysis was conducted using R software. The data are presented as mean ± standard deviation of three biological replicates. Availability of data and materials The datasets generated and/or analyzed during this study are included in this article and its supplementary information files, or in the NCBI repository under accession number PRJNA837723 (https://www.ncbi.nlm.nih.gov/bioproject/837723).
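As a worked example of the 2^-ΔΔCT quantification cited above, the function below normalizes a target gene to the 18S rRNA reference and then to a calibrator sample. The Ct values are invented, and the choice of the wild type as calibrator is an assumption.

```python
# Worked example of the 2^-ddCt method [39]; all Ct values are invented.
def rel_expr(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Relative expression = 2^-(ddCt); reference gene = 18S rRNA."""
    d_ct_sample = ct_target - ct_ref        # normalize within the sample
    d_ct_cal = ct_target_cal - ct_ref_cal   # normalize within calibrator
    return 2 ** (-(d_ct_sample - d_ct_cal))

# Hypothetical Ct values for a gene in a double-flower stamen versus the
# wild-type stamen (calibrator):
print(rel_expr(24.1, 12.0, 26.3, 12.1))     # ~4.3-fold up-regulation
```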
Declarations Ethics approval and consent to participate We obtained the permission of the Research Institute of Subtropical Forestry to collect C. sasanqua. The collection and use of plant specimens in the current study complied with relevant institutional, national, and international guidelines and legislation. Ethics approval was not applicable for this study. All samples are preserved in the Camellia Germplasm Resource Center of the Research Institute of Subtropical Forestry (30°05′92′′N, 119°95′94′′E). The deposition numbers of these samples are as follows: CS: sasanqua, XMG: Shishigashira, FSZF: Fuji-no-mine, ZHZR: Shōwa-no-sakae. The formal identification of these C. sasanqua cultivars was completed by Xinlei Li, Zhonglang Wang, and Jiyin Gao of the International Camellia Association.
Anthrax Postexposure Prophylaxis in Postal Workers, Connecticut, 2001 After inhalational anthrax was diagnosed in a Connecticut woman on November 20, 2001, postexposure prophylaxis was recommended for postal workers at the regional mail facility serving the patient's area. Although environmental testing at the facility yielded negative results, subsequent testing confirmed the presence of Bacillus anthracis. We distributed questionnaires to 100 randomly selected postal workers within 20 days of initial prophylaxis. Ninety-four workers obtained antibiotics, 68 of whom started postexposure prophylaxis and 21 discontinued. Postal workers who stopped or never started taking prophylaxis cited as reasons disbelief regarding anthrax exposure, problems with adverse events, and initial reports of negative cultures. Postal workers with adverse events reported predominant symptoms of gastrointestinal distress and headache. The influence of these concerns on adherence suggests that communication about risks of acquiring anthrax, education about adverse events, and careful management of adverse events are essential elements in increasing adherence. On November 20, 2001, Bacillus anthracis was confirmed in blood cultures from a 94-year-old woman in rural Oxford, Connecticut, who was diagnosed with inhalational anthrax and died 1 day later (1,2). No obvious source of exposure to B. anthracis was identified. She was the 22nd patient diagnosed with anthrax in the United States in 2001 (3). Before this case, all patients diagnosed with inhalational anthrax had had contact with intentionally contaminated mail delivered through the postal system, with the exception of a patient in New York City (where an investigation was under way). Since the source of transmission was identified as the mail for all but one anthrax case, investigation of area postal facilities began immediately. The mail was considered a likely source of contamination for the patient in Connecticut, and postexposure antimicrobial prophylaxis was recommended for postal workers employed in the regional distribution center and local post office serving the patient's area. At the regional postal distribution center, which operates 24 h a day and employs 1,122 workers, employees work one of three 8-h shifts and process approximately 3 million pieces of mail daily. The regional processing center contains 29 high-speed sorting machines. In contrast, the local post office, a two-room structure with 48 employees, has no high-speed sorting machines. All mail collected in the local post office is sent to the regional processing center. The post office serves two zip code areas; mail requiring sorting for the two zip codes is hand-sorted at the local level by carrier route. The Connecticut Department of Public Health (CDPH), in consultation with the Centers for Disease Control and Prevention (CDC), recommended postexposure prophylaxis as a precaution to protect the health of the postal workers in these facilities (4). As part of a national distribution center sampling protocol, an independent contractor working for the United States Postal Service (USPS) had taken environmental samples on November 11, and anthrax spores had not been isolated in the regional distribution center. The decision was made to offer prophylaxis to postal workers pending the results of additional, more focused testing. The first of many postexposure prophylaxis clinics was held on November 21, 2001.
Postal workers were given an initial 10-day course of ciprofloxacin unless contraindicated (5-7). Nasal swabs were collected from the postal workers at the first clinics to determine whether contamination was present in the facilities, rather than to diagnose or define individual exposure (8). B. anthracis was not isolated from any of 485 nasal swabs taken from postal workers. On November 21, 25, and 28 and December 2, increasingly focused environmental sampling was performed at both the regional distribution center and the local post office to determine whether any contaminated mail had passed through the facilities (9). Samples obtained on November 21 and 25 were negative; samples taken on November 28 and December 2 from four high-speed sorting machines in the regional distribution center were positive. No contamination was identified in the local post office. Based on the positive results, the CDPH recommended that prophylaxis be extended to a full course of 60 days for all postal workers in the regional facility. Facility management conducted a progressive series of town hall meetings to notify postal employees of the test results at the various facilities, as well as the results of postal worker nasal swabs. Although contaminated sorting machines were shut down for machine-specific decontamination, the regional distribution facility remained open. Antimicrobial testing of the Connecticut patient's isolates confirmed the sensitivity of this B. anthracis strain to both doxycycline and ciprofloxacin. For the continuation phase of prophylaxis, doxycycline was offered as the primary antibiotic unless contraindications existed or the workers specifically requested to continue on ciprofloxacin. On December 10, 2001, we conducted a survey to evaluate postal workers' adherence to postexposure prophylaxis and to identify factors influencing their degree of adherence. This article describes the findings of the study. Methods Of the 1,122 postal workers at the regional distribution center, we randomly selected 100 from the night and day shifts. Five workers declined; five additional workers were randomly selected and agreed to participate (refusal rate 5%). CDC health officials interviewed the group of postal workers using a standardized questionnaire to collect information on demographics, adherence, side effects, and attitudes regarding postexposure prophylaxis and exposure risk. Several characteristics were examined as potential determinants of starting prophylaxis, including sex, race, and age, as well as whether the postal worker worked on high-speed machinery or had obtained an influenza vaccine. For comparison, age was divided into quartiles. The lowest quartile (age <37 years) was compared with the top three quartiles, and the highest quartile (age >52) was compared with the bottom three quartiles. Serious side effects were defined as those causing death, hospitalization, persistent or substantial disability, or birth defects, or requiring intervention to avoid these outcomes (10). We conducted our analysis using SAS software, version 8.2 (SAS Institute, Inc., Cary, NC). Results Ninety-four of the 100 workers surveyed acquired antibiotics from postexposure prophylaxis clinics sponsored by the USPS; 6 workers did not attend the clinics.
Of the 94 workers who acquired prophylaxis, only 68 started the antibiotics to prevent anthrax; therefore, of those surveyed, 32 postal workers did not initiate prophylaxis. Postal workers were given ciprofloxacin at initial prophylaxis clinics unless they reported contraindications. Of the 68 postal workers starting antibiotics, 54 persons started ciprofloxacin, 12 doxycycline, and 2 other antibiotics. Characteristics of the persons who started prophylaxis versus those who did not are presented in Table 1. Male postal workers were 1.5 times more likely to start prophylaxis than female postal workers (relative risk [RR] 1.52; 95% confidence interval [CI] 1.1 to 2.2; p<0.01). Persons who reported obtaining an influenza vaccine were more likely to start postexposure prophylaxis (RR 1.26; 95% CI 1.0 to 1.6; p=0.07), although this observation did not reach statistical significance. Working on high-speed sorting machines, race, and age were not predictors of starting prophylaxis. We asked the 32 postal workers who never started postexposure prophylaxis to identify all reasons for declining prophylaxis ( Table 2) and to indicate the single most important reason. Nineteen (59%) workers stated that they did not feel they were at personal risk for anthrax. Equal proportions of postal workers (47%) cited negative nasal swabs of workers and concerns about side effects as reasons for not starting prophylaxis. Additional reasons included apprehension about antibiotic resistance, waiting to see if personal exposure had occurred, initial negative environmental samples, and fears that prophylaxis would weaken immune systems. When postal workers were asked to identify the single most important reason for not starting postexposure prophylaxis, 25% of workers reported not personally believing they were at risk for anthrax. An additional 13% cited concerns about side effects as the most important reason for not starting the regimen. Adherence to Postexposure Prophylaxis Adherence to the prophylaxis regimen was examined in the 68 workers who started the prophylaxis. We grouped adherence by an average of how many days the worker reported being able to take antibiotic exactly as prescribed. Thirty-one (46%) postal workers reported taking the prophylaxis regimen correctly every day; 23 (34%) took antibiotics correctly 5-6 days per week; and 10 (15%) of workers took antibiotics correctly <4 days per week. Adherence information was not available for four postal workers. Of those starting postexposure prophylaxis, 37 (54%) persons reported missing doses. The top two reasons workers cited for missing a dose were forgetting to take the antibiotic (32%) and side effects (15%). Reasons for Stopping Postexposure Prophylaxis Twenty-one (31%) of 68 postal workers had discontinued the prophylaxis regimen at the time of the survey. We asked these workers to identify all reasons for discontinuation (Table 3) and to indicate the single most important reason why they stopped. Over half (52%) of all who discontinued believed they were not at personal risk or did not believe they had been exposed to B. anthracis. Nine (43%) cited side effects as a reason for stopping. Additional concerns were the initial negative environmental findings and the negative nasal swabs. 
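To make effect measures like "RR 1.52; 95% CI 1.1 to 2.2" concrete, the snippet below computes a relative risk with a Katz log-based confidence interval from a 2x2 table. The counts are hypothetical: the article reports only the summary statistics, so the numbers here were chosen merely so the output lands near the published figures.

```python
import math

def relative_risk(a, b, c, d, z=1.96):
    """a,b = exposed with/without outcome; c,d = unexposed with/without."""
    rr = (a / (a + b)) / (c / (c + d))
    se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))  # Katz log method
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: 46/60 men vs. 20/40 women starting prophylaxis.
rr, lo, hi = relative_risk(46, 14, 20, 20)
print(f"RR {rr:.2f}; 95% CI {lo:.1f} to {hi:.1f}")  # RR 1.53; CI 1.1 to 2.2
```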
When postal workers were asked to identify the single most important reason for discontinuing prophylaxis, 33% of postal workers reported experiencing side effects; 19% cited initial negative environmental samples from the facility; and 19% did not feel personally exposed. Side Effects After susceptibility testing of isolates was confirmed, postal workers were switched to doxycycline by USPS physicians, unless that switch was contraindicated; of 47 workers continuing antibiotics, 43 (91%) were switched to doxycycline during the second round of prophylaxis clinics. Six (13%) workers were switched because of side effects. At the time of the survey, postal workers had taken each medication for approximately the same number of days. Equal numbers of postal workers surveyed took at least some ciprofloxacin (n=55) and some doxycycline (n=56). Twenty-three (42%) postal workers experienced side effects while taking ciprofloxacin, with 22% reporting multiple symptoms. Twenty-one (38%) postal workers experienced side effects while taking doxycycline, with 21% reporting multiple symptoms. Overall, 35 (51%) of those who began postexposure prophylaxis experienced symptoms while on antibiotics. Of side effects most frequently reported by postal workers for both antibiotics, the most common were gastrointestinal complaints (Table 4). Diarrhea and abdominal pain were reported by 22% of workers on ciprofloxacin and 13% of workers on doxycycline. Nausea and vomiting were reported by 15% of the postal workers taking ciprofloxacin and 18% taking doxycycline. Fatigue was cited by 9% of the postal workers taking either drug. No significant differences between the proportions of postal workers reporting side effects while taking either medication were reported. No serious side effects were noted. Only four persons missed work secondary to side effects of the prophylaxis (mean=1 day); only two physician visits for side effects occurred. No hospitalizations were reported. Discussion The findings of this study extend the data on adherence with postexposure prophylaxis and substantiate other similar surveys (11). Despite concerns about the safety of postal workers with potential exposures to B. anthracis, our survey demonstrates that many workers did not take adequate prophylaxis. Adherence in this population was apparently affected by a low perceived risk for anthrax and a concern about side effects. Concern about side effects was present even before postal workers started taking antibiotics; 47% of the 32 workers who never started prophylaxis cited concern about side effects as a reason. Although many workers did experience side effects, the side effects they reported were not severe. In addition, many postal workers had difficulty taking their medications as prescribed, and they missed doses of prophylaxis. Two factors may have contributed to the low perceived risk of inhalational anthrax among postal workers. First, results from the first three efforts to collect samples at the postal facilities and the nasal swabs taken at the onset of the investigation were negative for anthrax spores. Second, postal, medical, and union leaders providing information on environmental sampling results and their interpretation at USPS town meetings tried to put the risk in the perspective as explained to them by the Department of Public Health. Overall, the data suggested a possible, but not high, risk for inhalational anthrax. Spores were likely introduced in mid-October before the New Jersey and Washington D.C. 
regional distribution centers that handled the contaminated Daschle and Leahy letters closed down. Use of compressed air to clean sorting machines, which might have caused aerosolization of spores, had ceased by October 23, when a general USPS advisory against it was circulated. Maximum risk of exposure to aerosolized spores likely occurred during that time. By the time the postexposure prophylaxis clinics began, 30-40 days had passed since the maximum risk period without the occurrence of any cases of inhalational anthrax in regional facility workers.

Table 2 (excerpt). Reasons for never starting postexposure prophylaxis, Connecticut, 2001 (n=32)
Waiting to see if exposed: 12 (38%)
Negative environmental samples: 12 (38%)
Concerned about weakening immune system: 10 (31%)

Table 3. Reasons for discontinuing postexposure prophylaxis regimen, Connecticut, 2001
Response: No. of postal workers (n=21), %
Not at risk for anthrax: 11, 52
Not exposed: 11, 52
Had side effects from the antibiotic: 9, 43
Nasal swabs were negative: 7, 33
Negative environmental samples: 6, 29

In addition, the initial samples taken on November 11 and 21, with methods that readily identified spores in New Jersey and Washington, D.C., had failed to identify any spores. These factors were discussed during town meetings in an effort to reassure postal workers, while still emphasizing that a period did occur when spores were in the air, especially around the sorting machines. In this setting, the number of postal workers who accepted antibiotics could not be used as a measure of the number of postal workers who actually took prophylaxis. Anecdotally, many postal workers reported obtaining the antibiotics to "have on hand" in the event "I start to feel sick." The postexposure prophylaxis survey was critical in determining the level of adherence and identifying issues affecting adherence in this population. The circumstances of this prophylaxis campaign, along with the small sample size and potential for recall bias associated with this survey, limit the inferences that may be drawn. For example, some misclassification of side effects as doxycycline- or ciprofloxacin-related may have accompanied the switch in medications. In addition, the study size limits any speculation on reasons why our study found an association between men and starting prophylaxis. Larger postexposure prophylaxis surveys may identify the reason for this and other associations that were not significant in our analysis. Nonetheless, the survey provided important information on adherence to prophylaxis and reasons for nonadherence. In the event of another bioterrorism attack, public health officials must communicate, early and effectively, the need for potentially exposed persons to initiate and continue postexposure prophylaxis. Specifically, officials should clearly communicate to at-risk persons the explanation that epidemiologic tools such as nasal swabs are poor indicators of past personal exposure and are, at best, indicators only of recent exposure. While important, reassurance must be balanced with clear explanations of risk. Of note in our study is the fact that the one group deemed to be at higher risk, those working on high-speed mail sorting machines, was found no more likely to begin or continue on prophylaxis than persons working elsewhere in the facility. Potentially exposed persons need to be aware that side effects are to be expected, but that the vast majority of side effects will be mild. Education should center on how to recognize and minimize minor side effects while describing which side effects require immediate medical assistance.
Amelioration of side effects is essential if persons are to stay on their regimens, especially if the time period is lengthy. In addition, antibiotic reminder programs such as signs in common areas or buddy systems may improve adherence to postexposure prophylaxis. In conclusion, if public health officials deem initiating prophylaxis programs necessary, conducting frequent follow-up surveys to measure adherence and identify obstacles to prophylaxis in a specific population will be important in identifying perception problems and maximizing the benefits of preventive therapy.
Multiple origin but single domestication led to domesticated Asian rice The domestication scenario that led to Asian rice (Oryza sativa) is a contentious topic. Here, we have reanalyzed a previously published large-scale wild and domesticated rice dataset that had been analyzed by two earlier studies with contrasting domestication conclusions. Our results indicate that Asian rice originated from multiple wild progenitor subpopulations; however, domestication occurred only once, and the domestication alleles were transferred between rice subpopulations through introgression. Elucidating the origins of Asian rice (Oryza sativa) domestication has been a contentious field (Gross and Zhao 2014). With whole genome data, it is becoming apparent that each Asian rice variety group/subspecies (aus, indica, and japonica) had distinct subpopulations of wild rice (O. nivara or O. rufipogon) as its progenitor (Huang et al. 2012). However, whether rice was domesticated once and subsequent varieties were formed by introgression with different wild progenitors, or whether each variety was domesticated independently in different parts of Asia, is debatable. The debate mainly arose from two studies analyzing the same data but surprisingly arriving at two different domestication scenarios: Huang et al. (2012) supported the single domestication with introgression model, whereas Civáň et al. (2015) supported the multiple domestication model. Both studies used a reduction in polymorphism levels as a metric to detect local genomic regions associated with domestication, and the evolutionary history of those regions was interpreted as the domestication history of Asian rice. However, even population genetic model-based methods of detecting selective sweeps are prone to false positives, and under the right conditions any evolutionary scenario can be read into a false-positive selective sweep region (Pavlidis et al. 2012). Given that each Asian rice variety group had a separate wild progenitor population of origin, any false-positive selective sweep region is likely to be concordant with the underlying species phylogeny and thus to spuriously support the multiple domestication model. In addition, both studies used genotype calls made from low-coverage (1-2X) resequencing data (Huang et al. 2012). Uncertainty associated with genotype calls made from low-coverage data (Nielsen et al. 2011) could be another source of the difference in results between the two studies. Thus, we revisited the domestication scenarios proposed by the two studies and reanalyzed the Huang et al. data using a complete probabilistic framework that takes uncertainty into consideration through SNP and genotype likelihoods (Fumagalli et al. 2014; Korneliussen et al. 2014). We then carefully compared our results against the two domestication models and contrasted them with the results of both the Huang et al. (2012) and Civáň et al. (2015) studies. In both the Huang et al. (2012) and Civáň et al. (2015) studies, phylogenies based on genome-wide data and on putative domestication region sequences were compared to determine which domestication scenario is best supported by the data. We reconstructed the genome-wide phylogeny by estimating genetic distances between domesticated and wild rice using genotype probabilities (Vieira et al. 2016).
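For readers unfamiliar with the tree-building step that follows, the fragment below builds a neighbor-joining tree from a small matrix of pairwise genetic distances using Biopython. The distances are invented purely to mimic the qualitative pattern reported later (japonica closest to Or-III); the study's actual trees were built from genotype-probability-based distances across whole chromosomes.

```python
from Bio.Phylo.TreeConstruction import DistanceMatrix, DistanceTreeConstructor

# Invented pairwise distances (lower triangle with zero diagonal), chosen
# so that japonica pairs with OrIII, mimicking the reported topology.
names = ["japonica", "indica", "aus", "OrIII", "OrII"]
dm = DistanceMatrix(names, [
    [0],
    [0.30, 0],
    [0.28, 0.15, 0],
    [0.05, 0.29, 0.27, 0],
    [0.33, 0.32, 0.31, 0.32, 0],
])
tree = DistanceTreeConstructor().nj(dm)
print(tree)
```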
Three different parameter settings were used to estimate genotype probabilities, which were subsequently used to estimate genetic distances and build neighbor-joining trees for each chromosome (Supplemental Fig. 1). We then scanned for local genomic regions associated with domestication-related selective sweeps to infer the domestication history of Asian rice. Sweeps were identified using sliding windows estimating the ratio of wild to domesticated polymorphism (πw/πd). To identify putative selective sweep regions, we followed the approach of Civáň et al. (2015) and identified sweep regions separately for each rice subpopulation. If rice had a single domestication origin, all three rice subpopulations would have identical sweep regions with shared haplotypes; otherwise, the single domestication model cannot be supported. Co-located low-diversity genomic regions (CLDGRs; Civáň et al. 2015) were identified using a 20 kbp sliding window. To identify significant CLDGRs, we chose a stringent cutoff to conservatively identify candidate regions (see Materials and Methods for details) and identified a total of 39 CLDGRs (Supplemental Table 1). Neighbor-joining trees were then reconstructed for each of the 39 CLDGRs (Supplemental Figure 2). The majority of CLDGRs showed monophyletic relationships among the domesticated rice subpopulations, where japonica, indica, and aus samples clustered across rather than within subpopulation types. A few windows (e.g. 2:11,660,000-11,680,000) showed phylogenetic relationships in which domesticated samples clustered within the same subpopulation type. This initially suggested that the evolutionary history of CLDGRs was most consistent with the single domestication origin model. We then examined larger window sizes of 100 kbp, 500 kbp, and 1000 kbp for candidate CLDGRs (Supplemental Table 1) and reconstructed phylogenies for those regions (Supplemental Figs 3, 4, and 5). Larger window sizes yield fewer windows for analysis, hence fewer CLDGRs were identified (Supplemental Table 2). Nonetheless, with increasing window size the CLDGR phylogenies became more congruent with the genome-wide phylogenies, consistent with the multiple domestication origins model. CLDGRs, however, are only candidate regions for domestication, and false-positive CLDGRs may represent regions affected by domestication-related bottlenecks. As population bottlenecking can decrease effective population sizes, false-positive CLDGRs may represent regions of the genome with increased lineage sorting that are more concordant with the underlying species phylogeny (Pamilo and Nei 1988). Hence, it is crucial that a CLDGR have additional evidence associating it with selection and differentiating its evolutionary history from the underlying species phylogeny. To do so, we searched for CLDGRs that overlapped genes with functional genetic evidence related to domestication. We found three known domestication genes: the long and barbed awn gene LABA1 (chr4:25,959,963,504), the prostrate growth gene PROG1 (chr7:2,839,194-2,840,089), and the shattering locus sh4 (chr4:34,231,186-34,233,221) (Li et al. 2006; Tan et al. 2008; Hua et al. 2015). Interestingly, sh4 was the only gene detected across multiple sliding window sizes, excluding the largest 1000 kbp window (Supplemental Table 1). Phylogenetic trees were then reconstructed for the three domestication loci, including 20 kbp upstream and downstream of their coding sequences.
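A minimal sketch of the diversity-ratio scan described above is shown below. The per-window π values are simulated; in the actual study they came from ANGSD/ngsTools output, and the 99th-percentile cutoff is an illustrative stand-in for the empirical significance threshold described in the Materials and Methods.

```python
import numpy as np

def sweep_candidates(pi_wild, pi_dom, quantile=0.99, eps=1e-9):
    """Flag windows whose pi_w/pi_d lies in the upper empirical tail."""
    ratio = pi_wild / (pi_dom + eps)        # high ratio = diversity loss
    cutoff = np.quantile(ratio, quantile)   # empirical cutoff (assumed 99%)
    return np.flatnonzero(ratio >= cutoff), ratio

rng = np.random.default_rng(0)
pi_w = rng.gamma(2.0, 2e-3, size=5000)          # fake 20 kbp windows
pi_d = pi_w * rng.uniform(0.2, 1.0, size=5000)
pi_d[1200:1210] *= 0.02                         # plant a "swept" region
idx, _ = sweep_candidates(pi_w, pi_d)
print(np.intersect1d(idx, np.arange(1200, 1210)))  # planted sweep recovered
```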
We note that for all three genes the causal variant producing the domestication phenotype is located in the protein-coding sequence (Li et al. 2006; Jin et al. 2008; Hua et al. 2015). For all three genomic regions, the phylogenies clustered the different subpopulation types of domesticated rice together (Figure 1), consistent with the single domestication scenario. Further, in all three regions the most closely related wild rice corresponded to the Or-III subpopulation, supporting the hypothesis that the domestication alleles were introgressed from japonica into indica and aus (Huang et al. 2012; Choi et al. 2017). Interestingly, sh4 was identified as a candidate gene with evidence of a selective sweep in this study and in both Huang et al. (2012) and Civáň et al. (2015); in the latter study, however, the phylogeny was reconstructed from a 240 kbp region surrounding sh4. When we reconstructed phylogenies for 40 kbp windows surrounding the sh4 region, the downstream region of sh4 had phylogenies in which the domesticated rice clustered within the same subpopulation types (Supplemental Fig 6). We then reconstructed the phylogeny for larger genomic regions surrounding each of the three domestication loci and found that, with each increase in window size, the phylogeny of the region increasingly corroborated the genome-wide phylogeny, clustering samples within the same subpopulation type (Supplemental Fig 7). Thus, the domestication-related evolutionary history of sh4 is limited to the gene and its upstream region. Consequently, including large flanking regions can lead to phylogenies that are concordant with the genome-wide species phylogeny, spuriously supporting the multiple domestication origin model. In the end, our evolutionary analysis of the domestication loci LABA1, PROG1, and sh4 is consistent with both Sanger and next-generation sequencing results (Li et al. 2006; Tan et al. 2008; Xu et al. 2011; Huang et al. 2012; Hua et al. 2015). Our results are also consistent with the archaeological and genomic evidence (Fuller et al. 2010; Choi et al. 2017). We therefore propose that Asian rice evolved from multiple origins but that de novo domestication occurred only once (Figure 2). Specifically, our model hypothesizes that each domesticated rice subpopulation had a distinct wild rice subpopulation as its immediate progenitor, but that domestication occurred only once, in japonica, involving the genes LABA1, PROG1, and sh4. The domestication alleles for these genes were subsequently introgressed into the wild progenitors of aus and indica by gene flow and ultimately led to their domestication. Materials and Methods Raw paired-end FASTQ data from the Huang et al. study were downloaded from the National Center for Biotechnology Information website under bioproject ID numbers PRJEB2052, PRJEB2578, and PRJEB2829. We excluded the aromatic rice group from the analysis, as its sample size was too small, and we also excluded the few samples with unusually high coverage. In the end a total of 1477 samples were selected for analysis (Supplemental Table 3). Raw reads were trimmed for adapter contamination and low-quality bases using trimmomatic ver. 0.36 (Bolger et al. 2014). Using the processed alignment files, genotype probabilities were calculated with the program ANGSD ver. 0.913 (Korneliussen et al. 2014). The genotype probabilities were then used by the program ngsTools (Fumagalli et al. 2014) to conduct the population genetic analysis. To estimate theta (θ), ngsTools uses the site frequency spectrum as a prior to calculate allele frequency probabilities.
Usually, the site frequency spectrum requires an appropriate outgroup sequence to infer the ancestral state of each site. However, for calculating Watterson's and Tajima's θ it is not necessary to know whether each polymorphic site is a high- or low-frequency variant (Korneliussen et al. 2013). Hence, we used the japonica reference genome as the outgroup, strictly for purposes of calculating θ. Per-site allele frequency likelihoods were calculated using ANGSD. For each window, θ per site was estimated by dividing Tajima's theta (θπ) by the total number of sites with data in the window. Windows with less than 25% of sites with data were discarded from downstream analysis. This resulted in a minimum of 90% of the windows being analyzed (Supplemental Table 2). To calculate πw/πd values, πw was computed from the Or-II subpopulation, since Or-II is the most distantly related to all three domesticated rice subpopulations (Supplemental Fig 1). πw/πd values were calculated separately for each domesticated rice subpopulation. Windows with large πw/πd values were designated candidate domestication selective sweep regions, and significance was determined using the empirical distribution of πw/πd values.
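The window-level normalization and coverage filter just described reduce to a few lines; the sketch below assumes a 20 kbp window and applies the 25% floor on sites with data exactly as stated, with illustrative input values.

```python
def window_theta(theta_pi, n_sites_with_data, window_size=20_000):
    """Per-site theta for one window; None if <25% of sites have data."""
    if n_sites_with_data < 0.25 * window_size:
        return None                        # window discarded
    return theta_pi / n_sites_with_data

print(window_theta(12.4, 15_000))          # kept: ~8.3e-04 per site
print(window_theta(3.1, 4_000))            # dropped -> None
```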
Teaching Games for Understanding (TGfU) Learning Model on Improving Learning Outcomes of Volleyball Material: The purpose of the study was to determine (1) the effect of the TGfU learning model on improving volleyball learning outcomes and (2) the difference in volleyball learning outcomes between the experimental group and the control group. The method uses an experiment with a "pre-test post-test control group design". The sampling technique is simple random sampling, yielding 33 students as an experimental class with TGfU learning model treatment and 34 students as a control class. The instruments used were cognitive tests and psychomotor tests. Data were analyzed using t tests at the 5% significance level. The results of the study: (1) There is a significant effect of the TGfU learning model on improving volleyball learning outcomes; the t value is 12.158 > t table 1.693, and the significance is 0.000 < 0.05. (2) There is a significant difference in volleyball learning outcomes between the experimental group and the control group; the t value is 9.617 > t table 1.668, and the significance is 0.000 < 0.05. Teachers' pedagogical ability, which is reflected in their approaches, methods, and ways of teaching, is in this case still not optimal. Teaching approaches and methods that are quite varied have not been fully studied by teachers to support their pedagogical abilities. Teachers' educational backgrounds and experiences vary greatly, resulting in differences in how the teaching and learning process is implemented. One of the learning models that can be applied is Teaching Games for Understanding (TGfU). PE learning with the TGfU approach can be used as one of the efforts to make students enthusiastic and active participants in PE learning. TGfU has a great impact on cognitive learning, seeking to train students who are competent and able to make decisions and solve tactical problems (García-Castejón et al., 2021; Cocca et al., 2020). Applying TGfU actively supports teaching and motivates students towards learning (Alcalá & Garijo, 2017), and it increases the time spent in moderate and vigorous physical activity (Wang & Wang, 2018). The study by Gil-Arias et al. (2017) found that after implementing the TGfU program for 16 sessions, an increase in motivation and intention to be physically active was observed in students. There is evidence of TGfU's contribution to adolescent health, as responsibility, basic psychological needs, and self-determined motivation predict intentions to be physically active and to lead a healthy lifestyle. Scientific evidence demonstrates TGfU's ability to enhance motor, cognitive, and affective learning (Bracco et al., 2019). A study (Gaspar et al., 2021) showed that boys and girls taught through a TGfU unit with questioning reported higher post-intervention scores on all variables, relative to pre-intervention, than boys and girls taught through a TGfU unit without questioning. The TGfU unit without questioning group only showed significant differences on the intention to be physically active variable after the implementation of the intervention program.
METHODS This research is a quasi-experimental study. The design used is a "pre-test post-test control group design". In this design there are two randomly selected groups, which are given a pretest to determine the initial state and whether there is a difference between the experimental group and the control group. The population in this study was grade XI students. Sampling was done by simple random sampling. There were 33 students of class XIA as the experimental class, with TGfU learning model treatment, and 34 students of class XIB as the control class. The instruments used in this study were cognitive tests and psychomotor tests. Hypothesis testing used the t-test with the help of the SPSS 23 program. RESULTS The research was conducted over 4 meetings. A pretest was conducted before the application of the learning model, and a posttest afterward. Descriptive statistics of pretest and posttest volleyball learning outcomes for the experimental group and control group are presented in Table 1. Based on Table 4, it can be seen that t count is 12.158 and t table (df 32) is 1.693, with a significance value (p) of 0.000. Because t count 12.158 > t table 1.693, and the significance value 0.000 < 0.05, these results indicate a significant difference. Thus the alternative hypothesis, which reads "There is a significant effect of the TGfU learning model on improving volleyball learning outcomes", is accepted. The second hypothesis reads "There is a significant difference in volleyball learning outcomes between the experimental group and the control group". The research conclusion is declared significant if the t value > t table and the sig value is smaller than 0.05 (Sig < 0.05). The results of the hypothesis test are presented in Table 5. Because t count 9.617 > t table 1.668, and the significance value 0.000 < 0.05, these results indicate that there is a significant difference. Thus the alternative hypothesis (Ha), which reads "There is a significant difference in volleyball learning outcomes between the experimental group and the control group", is accepted. This means that the experimental group with the TGfU learning model treatment performed better than the control group in improving volleyball learning outcomes, with an average difference of 18.21.
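The same comparison can be reproduced outside SPSS; the sketch below runs an independent-samples t-test on simulated scores whose means match Table 1 (58.99 vs. 40.78). The standard deviations are assumed, since the article does not report them.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
experimental = rng.normal(58.99, 6.0, size=33)  # assumed SD of 6.0
control = rng.normal(40.78, 6.0, size=34)

t, p = stats.ttest_ind(experimental, control)   # df = 33 + 34 - 2 = 65
print(f"t(65) = {t:.3f}, p = {p:.4f}")
```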
DISCUSSION Based on hypothesis testing, it is known that there is a significant effect of the TGfU learning model on the learning outcomes of PE volleyball material. The TGfU learning model group performed better than the control group. Empirical studies on hybrid longitudinal programs of the SE and TGfU models, developing different content such as football, tennis, badminton, softball, and volleyball, have shown a significant increase in intention to be physically active, creating good sports adherence to improve future healthy habits (Gil-Arias et al., 2017) and improving students' affective, cognitive, and physical domains. A study (Stephanou & Karamountzos, 2020) reported that the TGfU group of students, compared to the technical teaching group, reported higher metacognition in perceptual knowledge, information management, conditional knowledge, problem-solving strategies, and evaluation, and performed better in the game. Problem solving in a changing game environment is critical to the TGfU pedagogical model (Harvey & Jarrett, 2014), and therefore one of its objectives is to direct students towards analyzing different game situations. The TGfU model encourages the simultaneous development of physical, cognitive, and emotional skills, and it promotes social, physical, and cognitive learning alongside tactics in contextual situations using the pedagogical principles of sampling, modification (representation and exaggeration), and tactical complexity (Dyson et al., 2004). Unlike technique-oriented approaches, TGfU contributes to improving students' tactical awareness and performance (Dania et al., 2017), along with feelings of autonomy, competence, and self-efficacy in small-sided games. TGfU is an instructional learning model for discovering how children understand sports through the essential ideas of the game. TGfU does not overemphasize the strategies of playing sports, so that learning is clearer and suited to the child's stage of development. TGfU learning emphasizes playing in all situations in the game, expanding creativity in play, speed in making choices in the game, and focusing on different varieties of games. This methodology encourages a shift in the direction of learning towards the true nature of practice, so that the true purpose of schooling, encompassing the cognitive, affective, and psychomotor spheres, can be achieved and run properly. This kind of hybridization can be useful to help teachers access a multi-model approach in their classrooms that adapts to the current educational framework (Casey & MacPhail, 2018; Wanner & Palmer, 2018). The existence of PE in schools is not only to improve health and physical fitness for all students, but also to provide experiences in the cognitive, affective, and psychomotor fields. Here the teacher is required to determine the appropriate learning model for the students, because teachers must face students who have different characteristics. For this reason, teachers must be highly creative in packaging learning material so that students enjoy and participate actively in every lesson. CONCLUSIONS The results showed that (1) there is a significant effect of the TGfU learning model on improving volleyball learning outcomes, t value 12.158 > t table 1.693, and significance 0.000 < 0.05; and (2) there is a significant difference in volleyball learning outcomes between the experimental group and the control group, t value 9.617 > t table 1.668, and significance 0.000 < 0.05. We acknowledge that this study still has shortcomings, particularly due to its limitations: improving volleyball learning outcomes depends not only on the learning model applied, as many other supporting factors exist. Research building on this study could be conducted using different tools, methods, and samples at different levels. Table 1.
Results of Descriptive Analysis of Pretest and Posttest Statistics between Experimental and Control Groups. Based on Table 1 above, the pretest mean of the experimental group's volleyball learning outcomes was 40.00 and its posttest mean was 58.99, while the control group's pretest mean was 40.59 and its posttest mean was 40.78. The data normality test used the Shapiro-Wilk method and was analyzed using SPSS version 23.0 for Windows at a significance level of 5% (0.05). The results are in Table 2. Table 2. Normality Test Analysis Results. Based on the Shapiro-Wilk normality test, all pretest and posttest data had significance values (p) > 0.05, which means the data are normally distributed. The homogeneity test is carried out to test whether several samples are homogeneous, that is, to test the similarity of variance between pretest and posttest, with the help of SPSS 23; the results are in Table 3. Based on the analysis results in Table 3, the pretest-posttest data obtained sig. p > 0.05, so the data are homogeneous. The first hypothesis reads "There is a significant effect of the TGfU learning model on improving volleyball learning outcomes". The research conclusion is declared significant if the t value > t table and the sig value is smaller than 0.05 (Sig < 0.05). The hypothesis test results are presented in Table 4. Table 5. T-test Results of Experimental Group and Control Group (columns: Groups, Mean, t count, t table, sig). A study (2020) emphasizes the need for pedagogical models such as TGfU that aim to increase students' capacity to evaluate game situations and develop tactical thinking. Barba-Martín et al. (2020) state that TGfU is based on four pedagogical principles. These principles are: (1) transfer, which is achieved through the use of global games, finding tactical aspects common to different sports; (2) modification-representation, consisting of adapting games to the age or skill level of the student body while maintaining tactical structure; (3) modification-overload, which raises the possibility of incorporating new rules or modifying them to help assimilate key tactical content; and (4) tactical complexity, where the proposed tasks should be based on a progression in tactical difficulty. Learning outcomes are the basis for measuring and reporting student academic achievement, and are key in developing more effective subsequent learning designs that have alignment between what students will learn and how they will be assessed (Retnawati et al., 2018). As the end product of the learning process, learning outcomes are considered to show what students know and develop
Process Design of Automobile Seat Rail Lower Parts using Ultra-High Strength, DP980 Steel
The purpose of this study is to develop a process for forming a 980 MPa ultra-high strength steel sheet to reduce weight and improve product strength. To do this, we performed the initial process design based on empirical formulas in a handbook and the experience of skilled engineers, and investigated the effects of major process variables on spring back by analyzing the forming analysis and experimental results. This paper suggests an optimal process design for the seat rail lower parts, using a 980 MPa ultra-high strength steel sheet, that satisfies the dimensional accuracy and strength requirements for the product.

Introduction
Recently, as regulations on the environment and fuel efficiency have been strengthened, weight reduction of automobiles has become a major issue. On the other hand, the weight of automobiles is continuously increasing due to high performance and various convenience devices. When the steel sheet has a high strength, the rigidity of the vehicle body is increased and safety is improved. The strength of the vehicle body can be maintained even if the thickness of the steel sheet is reduced, and the weight of the vehicle can thus be reduced. In general, when the strength of the steel sheet is increased, the elongation rate is lowered; consequently, the workability is lowered. The methods for forming an ultra-high strength steel sheet include press forming and roll forming [1]. An ultra-high strength steel sheet has a further lowered elongation and is less formable than a general steel sheet [2]. The biggest problem in the sheet metal forming process is spring back. This problem can be solved by proper die design and control of process parameters. In particular, spring back prediction is the most important requirement for obtaining the desired product shape in the sheet forming process. Spring back is caused by partial shape deformation due to elastic recovery during press forming, and is affected by process parameters such as the geometrical shape of the formed parts and the material. Process parameters are affected by the shape of the product, bending strength, yield strength, modulus of elasticity, and material thickness, and it is difficult to predict spring back accurately [3~5]. Several studies have been conducted on spring back prediction in Korea. Bang et al. conducted a stamping process design for a center pillar component formed from a 780 MPa high strength steel sheet [6]. In the case of ultra-high strength steel sheet, which is being applied to reduce vehicle weight and improve fuel economy, process design technology is important because of the large spring back compared with mild steel. Accordingly, many studies are under way [7~16]. The purpose of this study is to develop a process for forming an ultra-high strength steel sheet with a thickness of 1.6 mm and a strength of 980 MPa to reduce the weight of the car body and to improve the strength of the product. To do this, one can proceed with an initial process design based on empirical formulas from handbooks and the experience of seasoned engineers, and analyze the effects of major process parameters on spring back by comparing the forming analysis and experimental results.
Through the analysis of the process design, one can set the process parameters and propose the optimal forming process for the seat rail lower parts using 980 MPa ultra-high strength steel sheet material, which satisfies the dimensional accuracy and strength of the product, and confirm its feasibility through experiment.

Tensile strength test
The material used for the tensile test was an ultra-high strength steel sheet; specimens were wire-cut from a thin plate with a thickness of 1.6 mm. The tensile test was carried out to investigate the mechanical characteristics of the specimens. Tensile test specimens were collected at 0, 45, and 90 degrees to the rolling direction. The tensile test was carried out in a universal material tester by holding the crosshead at a constant speed and pulling until fracture. Fig. 1 shows the specimens after the tensile test, and Fig. 2 shows the stress-strain curve obtained from the tensile test results.

FLD test
The forming limit diagram (FLD) is used to identify the material flow during the die try-out process and the amount of deformation at the deformation center, in order to facilitate die modification; that is, the forming limit diagram is an index indicating how much deformation can occur in the region where the sheet material is likely to fracture. For the forming test to obtain the FLD curve, a universal sheet metal forming tester is used, as shown in Fig. 3. Fig. 4 shows the Nakajima specimens after the FLD test, and Fig. 5 shows the FLD curve of the 980 MPa ultra-high strength steel sheet material. The FLD shows the maximum limit to which a material can be deformed without necking or cracking. FLD0, the point with a minor strain of 0 in the FLD, corresponds to a major strain of 0.13 for the 980 MPa ultra-high strength steel sheet, as shown in Fig. 5.

Forming analysis model
The physical properties of the blank material for the sheet metal forming analysis were obtained from the tensile test. Since the blank and the die, the blank and the punch, and the blank and the blank holder are formed in contact with each other, the roughness and friction coefficient of the contact surfaces are important. Generally, a friction coefficient lower than 0.1 would be set for lubricated forming; in this study, a friction coefficient of 0.12 was applied for the non-lubricated condition. Table 1 shows the forming analysis conditions for the seat rail lower die using an ultra-high strength steel sheet. Fig. 6 shows the blank of the seat rail lower die. The shape of the blank was decided after several trials and errors to produce an optimal product shape, and the blank was used after laser cutting. Fig. 7 shows the die model for the forming analysis of the seat rail lower product, and Fig. 8 shows the seat rail lower product.

Forming analysis results
Although the ultra-high strength steel sheet material is superior in strength to general steel sheet, it has low elongation and low formability, and its elastic recovery is high, resulting in large spring back. The spring back prevention technology of this study is a technique to improve the shape fixability of the ultra-high strength steel sheet and to develop the optimal process design of the seat rail lower die.
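With FLD0 = 0.13 measured for this sheet, strain states from a forming analysis can be screened against the limit curve programmatically. The sketch below is illustrative only: the V-shaped limit curve around FLD0 and the example strain points are assumptions, not the measured FLD of the DP980 sheet.

```python
# Minimal FLD screening sketch. FLD0 = 0.13 follows the measurement quoted
# in the text; the slopes of the assumed V-shaped limit curve and the strain
# points below are illustrative placeholders.
FLD0 = 0.13   # major strain limit at zero minor strain (Fig. 5)

def forming_limit(minor):
    """Assumed V-shaped forming limit curve anchored at FLD0."""
    if minor < 0.0:
        return FLD0 - 1.0 * minor      # draw side: limit rises for minor < 0
    return FLD0 + 0.5 * minor          # stretch side: gentler rise

def severity(major, minor):
    """Ratio of computed strain to the limit; values > 1 predict necking."""
    return major / forming_limit(minor)

# Hypothetical element strains from a forming simulation.
for eps_major, eps_minor in [(0.05, 0.01), (0.11, -0.02), (0.16, 0.00)]:
    s = severity(eps_major, eps_minor)
    flag = "RISK" if s > 1.0 else "safe"
    print(f"major={eps_major:.2f} minor={eps_minor:+.2f} severity={s:.2f} {flag}")
```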
Fig. 9 shows the process design of the seat rail lower die. The process for the seat rail lower die comprises several stages, and process balancing is important for smooth flow production between stages to form the final product; that is, in order to prevent spring back of the seat rail lower die with the ultra-high strength steel sheet, the die process design was performed with a total of nine stages. In order to produce smooth flow between processes, a process-balancing technique providing moderate change rather than rapid deformation was used, and a restriking process to shape the final product was added. The physical property data of the 980 MPa ultra-high strength steel sheet were entered to proceed with the forming analysis using a commercial finite element program, Simufact Forming. For the punch and die, the alloy tool steel STD11 was used, and the material thickness was 1.6 mm. Fig. 10(a) shows the forming analysis results for the 2nd stage bending, and Fig. 10(b) shows the forming analysis results for the 3rd stage bending. According to the forming analysis results of the 2nd stage bending, the initial design value of 45° came out as 47° after the forming analysis. In the 2nd stage, spring back occurred due to the elastic recovery of the 45° bend, as shown in Fig. 11. When the analysis results of the 2nd stage bending are enlarged, the bending occurs at the end of the top face and spring-go appears to be generated, but actually spring back occurred. In the 3rd stage bending, the initial design value and the analysis result coincided. In the 4th stage, the initial design value of 125° became 132.8° after analysis, and the spring back was somewhat higher at 7.8°. The reason for this would be the normal spring back of ultra-high strength steel sheet caused by the high elastic recovery associated with the obtuse angle. In the remaining bending processes, spring back and spring-go appeared alternately. This could be due to the process design, which considered process balancing so as not to impose a concentrated load on any specific stage during forming.

Experimental method
The forming process was designed in nine stages in order to prevent spring back of the seat rail lower die using the ultra-high strength steel sheet. The reason for designing a total of nine stages was that balancing among stages was considered, so that the load was distributed without imposing a concentrated load on a specific stage. Furthermore, the design distributed the concentrated load that hindered flatness in a specific stage, in order to control the flatness of the ultra-high strength steel sheet. Fig. 13 shows the seat rail lower die in each process stage. The workpiece was placed on the lower die and the press ram was operated to the bottom dead center to mate the upper and lower dies. The mating was carried out by coating red lead on the workpiece, and the formability was checked by adjusting the die height per process stage according to the condition of the red lead on the contact face after forming.

Results and Discussion
The die try-out was carried out for the seat rail lower die using a mechanical press with a capacity of 500 tons. The workpiece material for forming was a 980 MPa ultra-high strength steel sheet, used as a blank after laser cutting from coil material. The experiment was performed without lubrication. Table 2 shows the spring back experimental results for the seat rail lower die.
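The try-out sequence described next amounts to the standard mirror-compensation idea: overbend the tool by the springback observed in the previous trial. A minimal sketch of one compensation step, using the stage angles quoted in this paper, is shown below; it illustrates the heuristic only and is not the authors' actual die-correction procedure.

```python
# One step of mirror compensation: overbend the tool by the springback seen
# in the previous trial. Numbers follow the stages quoted in the text; the
# helper itself is an illustrative sketch, not the authors' procedure.
def compensate(design_angle: float, formed_angle: float) -> float:
    """Return a corrected tool angle: design minus observed springback."""
    springback = formed_angle - design_angle
    return design_angle - springback

# 2nd stage: design 45 deg, analysis/trial gave 47 deg -> overbend to 43 deg.
# (The first experimental try-out in Table 2, 42.9 deg, is close to this.)
print(compensate(45.0, 47.0))    # 43.0
# 4th stage: design 125 deg, analysis gave 132.8 deg -> overbend to 117.2 deg.
print(compensate(125.0, 132.8))  # 117.2
```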
In the 2nd stage bending die, with a design value of 45°, 42.9° was obtained in the 1st die try-out, 44° in the 2nd try-out, and 45° in the final 3rd try-out. The mating of the upper and lower dies was carried out by placing the workpiece on the lower die and operating the press ram to the bottom dead center during the die try-out. During mating, red lead was coated on the workpiece, and the contact points on the workpiece were identified so that the die could be modified and the position of the bottom dead center adjusted. Fig. 14 shows the red lead samples of the seat rail lower die. In the 9th stage of cam bending, the behavior appeared similar to the spring-go of Fig. 11; however, it was spring back caused by bending. In the 4th stage bending die, the analysis result was 132.8° against the design value of 125°, while the experiment gave 125°, a difference of 7.8° from the analysis result. In addition, in the 7th stage bending die, spring-go was indicated in the analysis result, while spring back occurred in the experiment. The reason for the difference between the experiment and the analysis is that the input conditions of the analysis differ from the experiment, and working conditions such as the worker, press, and die have subtle effects. Fig. 15 shows the workpieces in each stage for the seat rail lower die, and Fig. 16 shows the seat rail lower die installed on the press during forming. It was found that the optimal forming process for the seat rail lower die with 980 MPa ultra-high strength steel sheet can improve the shape fixability by preventing spring back. Furthermore, it is possible to secure the dimensional accuracy of the seat rail lower parts and confirm the formability of the ultra-high strength steel sheet.

Conclusions
This study was carried out to develop lightweight seat rail lower parts using 980 MPa ultra-high strength steel sheet and to secure spring back prevention technology through optimal process design. The results obtained are as follows:
1. In the tensile test of the ultra-high strength steel sheet, the stress-strain response was confirmed, and the FLD was obtained in order to understand the forming limit.
2. The effective strain of each process stage was confirmed through the forming analysis of the seat rail lower parts using the ultra-high strength steel sheet material, and the optimal product was manufactured through three repeated experiments on the seat rail lower die.
3. It was confirmed that spring back prevention is possible through the optimal process design of the seat rail lower die using the 980 MPa ultra-high strength steel sheet material. Formability was also confirmed by securing dimensional accuracy for the seat rail lower parts.
4. The developed seat rail lower parts with the ultra-high strength steel sheet not only reduced the weight, but also reduced the cost and improved the productivity.
Concentration-of-measure theory for structures and fluctuations of waves
The emergence of nonequilibrium phenomena in individual complex wave systems has long been of fundamental interest, and its analytic study remains notoriously difficult. Using the mathematical tool of the concentration of measure (CM), we develop a theory for structures and fluctuations of waves in individual disordered media. We find that, for both diffusive and localized waves, fluctuations associated with the change in incoming waves ("wave-to-wave" fluctuations) exhibit a new kind of universality, which does not exist in conventional mesoscopic fluctuations associated with the change in disorder realizations ("sample-to-sample" fluctuations), and originate from the coherence between the natural channels of waves, the transmission eigenchannels. Using the results obtained for wave-to-wave fluctuations, we find the criterion for almost all stationary scattering states to exhibit the same spatial structure, such as the diffusive steady state. We further show that the expectations of observables at stationary scattering states are independent of incoming waves and given by their averages with respect to eigenchannels. This suggests the possibility of extending the studies of thermalization of closed systems to open systems, which provides new perspectives for the emergence of nonequilibrium statistical phenomena.
Recent studies on the foundations of equilibrium statistical mechanics [1][2][3][4][5][6][7] have shed new light on the longstanding problem of nonequilibrium phenomena in individual (quantum) wave systems, where neither fictitious ensembles nor reservoirs exist [8]. Many of them [3,4] rely on a conjecture of Berry, i.e., that in closed systems random scattering can render waves structureless on large spatial scales [9]. This wave property gives rise to a basic feature of thermal equilibrium phenomena in individual closed systems, i.e., spatial homogeneity. Yet a major topic of nonequilibrium statistical mechanics is concerned with various spatial structures in open systems [10][11][12][13][14]. This sharp contrast motivates exploring in depth the spatial structures of waves, i.e., scattering states, in individual open systems, which may also pave a way for extending the studies of the relations between spatial and entanglement structures in an ensemble of open disordered systems, a new aspect of the fundamentals of nonequilibrium statistical mechanics [15], to an individual member.
In fact, spatial structures and fluctuations of waves in open disordered media are central topics of mesoscopic physics [16][17][18][19][20]. However, most theoretical efforts have been focused on disorder ensembles; the significance of waves in individual disordered media has been emphasized only recently [21,22]. The common wisdom of using self-averaging or the ergodic hypothesis to connect certain properties of individual disordered media to their disorder averages [23,24] essentially requires the thermodynamic limit, and cannot be applied to study fluctuations at mesoscopic scales. It remains a challenge to construct a theory for wave statistics in individual mesoscopic systems, where rich fluctuation phenomena of wave origin can be driven, e.g., by changing the incoming wave. Such wave-to-wave fluctuations differ from the well-understood sample-to-sample fluctuations [16][17][18][19][20][23][24][25][26][27][28][29]. Their in-depth study is of both fundamental and practical importance. Indeed, in individual mesoscopic systems fluctuations and irreversibility have been known to be closely related [30]. In addition, the strength of wave-to-wave fluctuations determines whether a generic wave can represent the behaviors of most waves, i.e., is typical: in recent years typicality has emerged as a central topic in statistical mechanics [1,2,8], but has been restricted to closed systems so far. On the other hand, wave statistics in individual open disordered media has found many optical applications [31][32][33][34].
Recently, the CM [35][36][37], "one of the great ideas of analysis in our time" [38], has been adopted to study statistical phenomena in individual closed classical [39] and quantum [1,40,41] systems. The CM is rooted in high-dimensional geometry. The idea can be illustrated by the unit sphere, for which the area of the sphere becomes more and more concentrated around the equator as the dimension increases. Eventually, in high dimensions the entire area almost concentrates around the equator. This property can then be visualized by real-valued functions over the sphere with nice continuity properties, through their concentration around some constant value. When the sphere is replaced by a general high-dimensional geometric body (e.g., the Euclidean space) and the area measure by others (e.g., the Gaussian measure), similar results follow. This idea opens new perspectives on probability theory [36][37][38]. It allows us not only to study variables with complicated dependence on random variables, instead of being their sum, but also to obtain results which are nonasymptotic, i.e., do not require the limit of a large number of variables. A detailed introduction to CM is given in section S0 of the supplemental materials (SM) [42].
In this work we employ CM to explore universal statistical phenomena of waves in individual open disordered media. We launch a classical wave of circular frequency Ω and carrying unit energy flux into a finite medium with N (= (Ω/π) × the width) channels and length L [43,44] (Fig. 1). Keeping the disorder realization fixed, but allowing the incoming wave to vary, gives rise to various wave-to-wave fluctuations. Below, most attention is paid to the fluctuations of the spatial structure of the scattering state corresponding to the incoming current amplitude c, i.e., the depth (x) profile I_x(c) of the energy density integrated over the cross section. We develop a CM theory of wave-to-wave fluctuations. Physically, it provides information on a single stationary scattering state in a single disordered medium and on the differences of behaviors between a disorder ensemble and an individual member; technically, instead of traditional impurity diagrams [16,17,20] and field theories [18,19,28], its key components are various concentration inequalities [37] of observables [e.g., I_x(c)]. Armed with the developed theory we achieve the following results:
• We find that compared to conventional sample-to-sample fluctuations, wave-to-wave fluctuations exhibit a number of "anomalies". In particular, irrespective of the regime of wave propagation (diffusive, localized, etc.), the distribution of I_x(c) is always sub-Gaussian, i.e., has an (upper) tail decaying at least as fast as a Gaussian tail [Eq. (6)]. Contrary to this, for sample-to-sample fluctuations of observables such as the total transmission, as waves become more and more localized the distribution tail decays slower and slower, and the shape of the tail changes dramatically [18,45].
• Furthermore, we find that the wave-to-wave fluctuations of I_x(c) are governed by an x-dependent curve ‖I_x‖_Lip, which arises from the phase coherence between distinct eigenchannels, the natural channels for wave propagation in disordered media [46,47]. In contrast, the sample-to-sample fluctuations of I_x(c) (c fixed) are governed by the conductance [48], known to equal the number of open eigenchannels [49]. For diffusive waves we find that the curve is universal with respect to disorder realizations ω at large N (cf. Fig. 2).
• We find the criterion (8) for almost all stationary scattering states to exhibit the same spatial structure, i.e., a nonequilibrium steady state, and show that it can be readily satisfied for diffusive waves.
• We show that the expectations of generic observables at stationary scattering states are independent of incoming waves and given by their averages with respect to eigenchannels [Eq. (15)], and find the corresponding criterion.
The results summarized above are attributed to general wave properties, and thus apply to both classical and quantum waves. (This is similar to Anderson localization applying to both classical and quantum waves [17,18].) As we are not aware of any studies of the applications of CM to wave propagation and scattering in disordered media, we begin with a general discussion on how high-dimensional geometry emerges from the present setting (Fig. 1) and provides a basis for applying CM. For simplicity we consider a two-dimensional (2D) medium. A disordered dielectric configuration δ(x, y) is embedded into the air background, so the wave field E(x, y) satisfies the Helmholtz equation [44],
    {∇² + Ω²[1 + δ(x, y)]} E(x, y) = 0.  (1)
Given N channel bases, an incoming current amplitude is a projection represented by N complex coefficients, (c_1, c_2, ..., c_N) ≡ c. As the incoming wave carries a unit energy flux, we have Σ_{n=1}^N |c_n|² = 1. So all c constitute a high-dimensional geometric body, the unit sphere S^{2N-1}. Next, we discretize the medium into a lattice of M points. The M values {ω_(x,y) ≡ -Ω² δ(x, y)} then constitute the coordinates of another high-dimensional geometric body, the Euclidean space R^M, a point in which corresponds to a disorder realization ω ≡ {ω_(x,y)}. As observables depending on c (respectively ω) define real-valued functions over S^{2N-1} (respectively R^M), we can apply CM to them and probe the properties of the wave-to-wave (respectively sample-to-sample) fluctuations by corresponding observables.
Construction of the theory. We divide the construction into five steps so that the readers may keep track of it. In each step, we outline the derivations and present the key results; motivations and (or) physical implications are also discussed. Technical details and expanded discussions are relegated to the self-contained SM.
Step 1 - formulation of the problem. Below we choose the eigenchannels as the basis. Thus we first introduce the eigenchannel briefly, and this is done in three steps as follows [47]. (i) Consider the transmission matrix t ≡ {t_ab}, which transforms an incoming current amplitude c into a transmitted current amplitude given by tc. Both current amplitudes are vectors in the space spanned by the ideal waveguide modes ϕ_a(y), with a the mode index. The matrix element t_ab = -i (ṽ_a ṽ_b)^{1/2} ⟨x = ∞, a|G|x = -∞, b⟩ [50], where G is the retarded Green's function and ṽ_a is the group velocity of mode a. By the singular value decomposition,
    t = Σ_{n=1}^N u_n √τ_n v_n†,
we obtain a transmission eigenvalue spectrum {τ_n} (τ_n decreases with n) and two sets of mutually orthogonal unit vectors {u_n} and {v_n}.
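The decomposition in (i) can be carried out numerically with any linear algebra library. The sketch below applies it to a stand-in matrix: a complex Gaussian matrix replaces the physical transmission matrix t (an assumption made purely for illustration), so the τ_n obtained here only demonstrate the mechanics of extracting {τ_n, u_n, v_n}.

```python
# Eigenchannel decomposition of a stand-in transmission matrix via SVD.
# A complex Gaussian matrix replaces the physical t for illustration.
import numpy as np

rng = np.random.default_rng(1)
N = 8
t = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2 * N)

# t = sum_n u_n sqrt(tau_n) v_n^dagger ; numpy returns singular values sorted
# in decreasing order, matching the convention that tau_n decreases with n.
u, s, vh = np.linalg.svd(t)
tau = s**2                      # transmission eigenvalues tau_n = s_n^2
v = vh.conj().T                 # columns are the input vectors v_n

# Check: exciting the medium with v_n transmits a fraction tau_n of the flux.
out = t @ v[:, 0]
print(np.allclose(np.vdot(out, out).real, tau[0]))   # True
print("tau:", np.round(tau, 3))
```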
(ii) Replacing x = ∞ in t_ab above by arbitrary x ∈ [0, L], we make the extension t → t(x) ≡ {t_ab(x)}. This gives the vector field inside the medium, E_n(x) = t(x)v_n, excited by v_n. (iii) The triple (τ_n, v_n, E_n(x) ≡ {E_na(x)}) defines an eigenchannel, which is completely fixed by ω and Ω [51]. Each channel transmits waves with transmission coefficient τ_n, and has a specific 2D spatial structure, namely, the energy density profile |E_n(x, y)|². Integrating out y we reduce |E_n(x, y)|² to a one-dimensional (1D) structure, W_{τ_n}(x) = E_n†(x) • E_n(x), where • refers to a scalar product.
To proceed we introduce the precise definition of I_x(c) [52]:
    I_x(c) = ∫ dy |v̂_x^{-1/2} E(x, y)|²,  (2)
where v̂_x is a scalar (not vector) operator accounting for the absolute value of the group velocity in waveguide modes. Treating ω_(x,y) as a scattering potential, we apply the scattering theory of waves [53] to Eq. (1) and find
    I_x(c) = Σ_{n,n′=1}^N c*_n c_{n′} E_n†(x) • E_{n′}(x).  (3)
This defines a family of real-valued functions over S^{2N-1}, and x labels these functions. Note that at x < L the vectors E_n(x) are not orthogonal.
Then the problem is: For fixed ω, does I_x(c) exhibit universal behaviors when c varies? A natural idea is to calculate all the cumulants of I_x(c) and find the distribution. But one then needs to calculate an infinite number of products of E_n† • E_{n′}, and sum up their contributions, which is a formidable task, especially for small N. The CM allows a different route, which we follow below.
Step 2 - Lipschitz continuity: a building block of CM. This is the concept that formalizes the "nice continuity properties" of real-valued functions mentioned in the introductory part. Let a generic space C be equipped with the Euclidean metric ‖•‖. For f : C → R, if
    ‖f‖_Lip ≡ sup_{z≠z′} |f(z) − f(z′)| / ‖z − z′‖ < ∞,  (4)
where 'sup' stands for the least upper bound, then f(z) is said to have the Lipschitz continuity or be Lipschitz, and ‖f‖_Lip is called the Lipschitz constant. As we will see below, even though f may have a very complicated dependence on c, its wave-to-wave fluctuations are controlled by the single parameter ‖f‖_Lip.
Step 3 - concentration inequality for I_x(c) and results for general waves. With the preparations above we are now ready to introduce the following result of CM:
Lévy's lemma [35,40]. Let µ be the uniform probability measure over S^{2N-1}, and f : S^{2N-1} → R be Lipschitz. Then the probability for the deviation between f and its mean ∫f dµ to exceed ε is
    Pr(|f − ∫f dµ| > ε) ≤ 2 e^{−δNε²/‖f‖²_Lip},
where δ is some positive absolute constant.
This means that f concentrates around ∫f dµ with a rate increasing rapidly with N/‖f‖²_Lip. We stress that ‖f‖_Lip depends on N generally. By the lemma the distribution of f is sub-Gaussian. But, unlike the central limit theorem, the lemma does not require the large N limit, i.e., it is nonasymptotic; instead, N can be very small (cf. Fig. 4).
To apply Lévy's lemma to I_x(c), in SM (S1) we use Eq. (3) to derive an analytic expression of the Lipschitz constant ‖I_x‖_Lip of I_x(c). The result reads
    ‖I_x‖_Lip = sup_c L_x(c; ω),  L_x(c; ω) ≡ (π/2) ‖∇_c I_x‖,  (5)
where L_x depends on c, ω in general. According to Eq. (5), I_x(c) is Lipschitz. Combined with Lévy's lemma this gives the following concentration inequality,
    Pr(|I_x(c) − W(x; ω)| > ε) ≤ 2 e^{−δNε²/‖I_x‖²_Lip}.  (6)
Here W(x; ω) ≡ ∫I_x(c) dµ and "Pr" stands for probability. After simple algebra we reduce W(x; ω) to
    W(x; ω) = (1/N) Σ_{n=1}^N W_{τ_n}(x).  (7)
The results (5)-(7) hold for general N, ω, regardless of the regime of wave propagation (diffusive, localized, etc.).
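The content of Lévy's lemma for observables of the form (3) can be checked by direct sampling. In the sketch below, a fixed Hermitian matrix M stands in for the overlap matrix E_n†(x) • E_{n′}(x) (an assumed placeholder, not data from a wave simulation); drawing c uniformly from S^{2N-1} shows the fluctuations of c†Mc shrinking like 1/√N, as the lemma predicts for a Lipschitz constant of order unity.

```python
# Monte Carlo check of Levy-type concentration for a quadratic observable
# I(c) = c^dagger M c over the uniform measure on S^(2N-1). The Hermitian
# matrix M is an assumed placeholder for the eigenchannel overlap matrix
# E_n^dagger(x) . E_n'(x); it is NOT obtained from a wave simulation.
import numpy as np

rng = np.random.default_rng(2)

def uniform_sphere(N, samples):
    """Uniform points on S^(2N-1): normalized complex Gaussian vectors."""
    z = rng.normal(size=(samples, N)) + 1j * rng.normal(size=(samples, N))
    return z / np.linalg.norm(z, axis=1, keepdims=True)

for N in (8, 32, 128, 512):
    a = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    M = (a + a.conj().T) / 2
    M /= np.linalg.norm(M, 2)        # unit spectral norm: O(1) Lipschitz constant
    c = uniform_sphere(N, 5000)
    I = np.sum((c.conj() @ M) * c, axis=1).real   # I(c) = c^dagger M c
    print(f"N={N:4d}   std(I)*sqrt(N)={I.std() * np.sqrt(N):.3f}")
```

The printed product std(I)·√N stays of order unity as N grows, i.e., the wave-to-wave spread of the observable itself decays as 1/√N.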
[FIG. 2: Using Eqs. (5) and (12), we calculate L_x(c; ω) at N = 800 for 4 randomly chosen c at fixed ω and for 4 randomly chosen ω at fixed c, respectively, and calculate ‖I_x‖_Lip for 3 large N. All profiles collapse into a single curve. L = 50.]
From the inequality (6) we see that provided
    N ≫ ‖I_x‖²_Lip / W²(x; ω),  (8)
the wave-to-wave fluctuations of I_x(c) are negligible and almost all incoming waves behave in essentially the same way: their energies are stored in distinct channels with an equal weight of 1/N, and a nonequilibrium steady state universal with respect to c, namely W(x; ω), results, i.e., I_x(c) ≈ W(x; ω). The state does not carry information on the phase coherence between eigenchannels. The phase coherence, as shown by the sub-Gaussian tail in (6) and by Eq. (5), enters into ‖I_x‖_Lip and strongly influences wave-to-wave fluctuations (see Step 5 for further discussions).
From the inequality (6) we also see that the distribution tail of I_x(c) decays at least as fast as a Gaussian tail. In contrast, in sample-to-sample fluctuations (c fixed) the distribution tail decays much more slowly, known (for x = L) to be exponential for diffusive waves [48] and log-normal for deeply localized waves [18,29].
Step 4 - diffusive steady state and its fluctuations. Below we use the general results (5)-(7) to explore diffusive waves in depth. Numerical studies have shown that in quasi 1D media [54] the disorder average of Eq. (7) gives a diffusive steady state [47], but it is difficult to (dis)prove analytically that, without the averaging, this remains true for general geometry. Indeed, for large N (L and Ω fixed) the medium is a slab [54] and thus high-dimensional, but in high dimension the explicit form of W_{τ_n}(x), even its disorder average, is unknown; for small N, i.e., a short quasi 1D medium [54], the impacts of the sample-to-sample fluctuations on W_{τ_n}(x) have not yet been studied.
To study Eq. (7) we start from large N and establish a concentration inequality for W(x; ω). To this end we show in SM (S2) that, even for a single disordered slab, distinct eigenchannel structures W_{τ_n}(x) are described by a single formula [47] that depends smoothly on the eigenvalue τ_n and was derived originally for an ensemble of quasi 1D disordered media. Using this fact, we show in SM (S3) that W(x; ω), which is a real-valued function over R^M, satisfies
    |W(x; ω) − W(x; ω′)| ≤ (c(x)/√N) ‖ω − ω′‖.  (9)
Here c(x) = O(1); its explicit form is unimportant and given in the SM. The property (9) allows us to use Pisier's theorem in CM [35] to show [SM (S4)] the following: If ω = {ω_(x,y)} is drawn randomly from an ensemble of disorder realizations with ω_(x,y) being independent Gaussian variables of zero mean and variance σ², then the probability for the deviation between W(x; ω) and its disorder mean E[W(x; ω)] to exceed ε satisfies
    Pr(|W(x; ω) − E[W(x; ω)]| > ε) ≤ 2 e^{−δNε²/(σ²c²(x))}.  (10)
Thus the concentration is strong for large N, i.e.,
    W(x; ω) ≈ E[W(x; ω)] for almost all ω.  (11)
It is important to remark that the factor N in the sub-Gaussian bound of the concentration inequality (10) comes from the Lipschitz constant of W(x; ω), i.e., the coefficient on the right-hand side of the inequality (9).
To proceed to small N we observe that the detailed structures of {W_{τ_n}(x)} enter into the inequality (10) only through the unimportant factor c(x). Thus we conjecture that the inequality applies in this case also. While its rigorous proof is beyond this work, we confirmed the conjecture numerically for N as small as 20 [SM (S4)]. So Eq. (11) holds for both large and small N.
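The dimension-free character of the Gaussian concentration behind the inequality (10) is easy to see numerically with the simplest 1-Lipschitz observable, the Euclidean norm of a standard Gaussian vector: its mean grows like √k while its standard deviation stays of order unity at every dimension k. The sketch below (sample sizes arbitrary) checks this.

```python
# Dimension-free Gaussian concentration: the Euclidean norm f(z) = |z| is
# 1-Lipschitz on R^k (triangle inequality), so by Pisier-type bounds its
# standard deviation stays O(1) at every dimension k, even though its mean
# grows like sqrt(k).
import numpy as np

rng = np.random.default_rng(4)
for k in (2, 10, 100, 1000):
    z = rng.normal(size=(20000, k))      # 20000 draws from gamma on R^k
    f = np.linalg.norm(z, axis=1)
    print(f"k={k:5d}   E[f]={f.mean():8.3f}   std[f]={f.std():.4f}")
```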
Due to Eq. (11) and the fact that E[W(x; ω)] is known to be the solution to the diffusion equation [16,17,20], W(x; ω) is a diffusive steady state (for almost all ω), which decreases linearly in x. This result renders the transport mean free path ℓ well defined for a single ω, because for a diffusive steady state the total transmission W(L; ω) = ℓ/L [16], and identical to that defined for a disorder ensemble.
Step 5 - wave-to-wave fluctuations in the diffusive regime. According to the criterion (8), weak wave-to-wave fluctuations ensure the emergence of a steady state universal with respect to c. To study these fluctuations we need to better understand ‖I_x‖_Lip. Let us start from large N. In SM (S1) we calculate Eq. (5) and obtain for this case
    ‖I_x‖_Lip = L_x(c; ω) = O(1).  (12)
The profiles of ‖I_x‖_Lip and L_x are shown in Fig. 2. Surprisingly, we find that the profiles of ‖I_x‖_Lip at distinct N collapse into a single curve; moreover, the profile of L_x is universal with respect to c and ω, and the universal curve is identical to that of ‖I_x‖_Lip. This universality of L_x, together with Eq. (12), implies the universality of ‖I_x‖_Lip with respect to ω. As shown in SM (S1), it even leads to an explicit expression for ‖I_x‖²_Lip:
    ‖I_x‖²_Lip = (π²/N) [ Σ_{n=1}^N (E_n†(x) • E_n(x))² + Σ_{n≠n′} |E_n†(x) • E_{n′}(x)|² ].  (13)
By using Eq. (13) we find in SM (S5) that
    ‖I_x‖²_Lip = π² N var(I_x),  (14)
and thus the variance includes both incoherent and coherent contributions of eigenchannels, corresponding respectively to the first and second terms in Eq. (13). The incoherent (coherent) contribution dominates the back (front) part of the medium. At the output end, i.e., x = L, the second term in Eq. (13) vanishes and the fluctuations include the incoherent contribution only.
For small N the universality above is violated. But numerical calculations show that ‖I_x‖_Lip = O(1) still holds (see the symbols corresponding to N = 20 in the upper panel of Fig. 3). Due to this and W(x; ω) = O(1), for diffusive waves the criterion (8) can be readily satisfied.
Numerical confirmations. We put the theory to numerical tests. The methods of the numerical experiments are described in SM (S7). First, we simulate wave propagation in a single slab for 10⁴ randomly chosen c. Simulations confirm that the profiles I_x(c) concentrate around a linear decrease for both large and small N (Fig. 3, lower panels). This also gives ℓ = 13 for a single ω [43]. Moreover, from the distribution p(I_x) of wave-to-wave fluctuations we compute var(I_x), and find that the relation (14) holds for large N but is violated for small N (Fig. 3, upper panel), as expected from our theory. Secondly, we simulate the propagation of diffusive and localized waves in quasi 1D media. We perform the statistics of wave-to-wave and sample-to-sample fluctuations of I_x: for the former we fix the disorder realization ω and randomly choose 10⁵ incoming current amplitudes c, and for the latter we do the opposite. As shown in Fig. 4, for both diffusive and localized waves the distribution of wave-to-wave fluctuations displays a Gaussian tail, in agreement with the inequality (6). In contrast, the distribution of sample-to-sample fluctuations is much broader, being exponential for diffusive waves and stretched-exponential for localized waves [57].
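The two sampling protocols behind Fig. 4 are easy to mimic in a toy model. The sketch below compares wave-to-wave and sample-to-sample fluctuations of the total transmission T = ‖tc‖², with complex Gaussian matrices assumed as stand-ins for physical transmission matrices. In this unstructured toy the two histograms remain close; the pronounced tail differences of Fig. 4 develop only with the correlations of real disordered media, so only the "fix ω, vary c" versus "fix c, vary ω" logic is illustrated here.

```python
# Toy comparison of the two sampling protocols: "fix omega, vary c"
# (wave-to-wave) versus "fix c, vary omega" (sample-to-sample), applied to
# the total transmission T = |t c|^2. Gaussian matrices are assumed stand-ins
# for physical transmission matrices; the tails of Fig. 4 are NOT reproduced.
import numpy as np

rng = np.random.default_rng(3)
N = 64

def random_c(k):
    z = rng.normal(size=(k, N)) + 1j * rng.normal(size=(k, N))
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def random_t():
    return (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2 * N)

t_fixed = random_t()                     # one "disorder realization"
T_w2w = np.linalg.norm(random_c(20000) @ t_fixed.T, axis=1) ** 2

c_fixed = random_c(1)[0]                 # one incoming amplitude
T_s2s = np.array([np.linalg.norm(random_t() @ c_fixed) ** 2
                  for _ in range(2000)])

for name, T in (("wave-to-wave", T_w2w), ("sample-to-sample", T_s2s)):
    print(f"{name:16s}: mean={T.mean():.3f}  std={T.std():.3f}")
```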
Our theory provides new perspectives on the longstanding problem of the emergence of irreversibility in individual systems. In particular, in SM (S6) we consider a generic hermitian operator Ô (hermiticity is not essential) and study its expectation value at the stationary scattering state E(x, y) (determined by the incoming wave c),
    O(c) ≡ ⟨E|Ô|E⟩ = Σ_{n,n′=1}^N c*_n c_{n′} ⟨E_n|Ô|E_{n′}⟩.
Recall that E_n is the wave field E_n(x, y) of the nth eigenchannel. Repeating the analysis above we find that, provided N ≫ ‖O‖²_Lip/Ō² [cf. Eq. (S103)],
    O(c) ≈ Ō ≡ (1/N) Σ_{n=1}^N ⟨E_n|Ô|E_n⟩  (15)
for almost all c. This incoming wave-independence of observables resembles thermalization in closed systems [1][2][3][4][5][6][7][8]. But, as the systems here are open, conceptual differences exist. Notably, bound states and equilibrium thermal ensembles in closed systems are replaced respectively by stationary scattering states and (1/N) Σ_n |E_n⟩⟨E_n|, which may be called the eigenchannel ensemble. In the future it would be interesting to generalize Eq. (15) to many-body systems, which would enable us to explore the relations between diffusive steady states and entanglement structures, but, unlike Ref. [15], requiring neither reservoirs nor disorder ensembles.

Supplemental Materials
by Ping Fang, Liyi Zhao, and Chushun Tian*
This appendix is written in a self-contained manner. The CM is introduced in great detail. The technical details of the quantitative (mathematical and numerical) analysis are presented, and extensive discussions of the results are made. The appendix is organized as follows:
• Sec. S0: It serves as an introduction to CM. We discuss some basics of CM, and introduce the mathematical results used in this work. Particular emphasis is put on their backgrounds and implications.
• Sec. S1: We present a detailed analysis of the Lipschitz constant ‖I_x‖_Lip, which leads to the general result of Eq. (5). Then we calculate this general result to obtain Eq. (12). We further show Eq. (13).
• Sec. S2: We study the 2D and 1D eigenchannel structures in a slab. Comparisons with eigenchannel structures in quasi 1D media are made.
• Sec. S3: We study in detail the Lipschitz continuity of W(x; ω) as a function of the disorder realization ω, which leads to the inequality (9). Moreover, we derive analytically the expression of c(x).
S0.1 Concentration of uniform measure over the sphere In this part we discuss in details a canonical example of CM.Let us take a unit sphere S 2N −1 in the Euclidean space R 2N and normalize its area to unity.Consider a strip S around and symmetric with respect to an equator (Fig. S1).S has a range of the latitude ϕ between − 2 and 2 .The north and south poles correspond the latitude of − π 2 and π 2 , respectively.Using the parametrization (S111)-(S113) introduced in Sec.S8, we find that the area of the strip is From this we find the value of corresponding to µ(S) = 0.9 so that S covers 90% of the entire area of the sphere. The result is shown in Table I.From the table we see that as N increases the entire area of the sphere concentrates more and more around the equator.Eventually for large N the entire area almost concentrates on a very narrow strip around the equator.As most areas concentrate around the equator, a spherical cap slightly larger than a hemisphere, which has an area of exactly one half, covers almost the entire area of the sphere.This implies that as a spherical cap becomes bigger and bigger, the cap area undergoes a sharp transition once the cap boundary reaches an equator.This geometric phenomenon can be extended to a general set with an area ≥ 1 2 , and has a more general and rigorous statement.Specifically, let A be a subset of S 2N −1 .Its ε-enlargement, denoted as A ε , is defined as a set including all points which are at a geodesic distance ≤ ε from some point in A. (In words, any point in A ε is close to some point in A.) Then we have the following [2]: Theorem 0.1.Let A be a subset in S 2N −1 , whose area µ(A) ≥ 1 2 , and N be an arbitrary natural number.Then Remarks.(i) According to this theorem, the entire area of the sphere concentrates around a strip around the equator with a width ∼ 1 √ N for large N .(ii) The theorem is the formalization of the concentration of the uniform measure µ over the sphere.(iii) To prove the theorem one needs to use the isoperimetric inequality discovered by Lévy [2].We will not give its rigorous statement here.Instead, we explain the inequality in words.That is, in a sphere the circle enclosing a spherical cap is the curve of given length enclosing the largest area. Next, we would like to probe the concentration of the measure µ.Consider a real-valued function f :S 2N −1 →R which has the Lipschitz continuity, with the Lipschitz constant Recall that • is the Euclidean metric in R 2N .Furthermore, we define the median or Lévy's mean of f , denoted as M [f ], as the number such that both 2 are satisfied.Then by using Theorem 0.1 one can readily prove [2] the following: Remarks.(i) This corollary is similar to Lévy's lemma introduced in the paper.It implies the concentration of function f around the median of M [f ], and thus is another version of Lévy's lemma.We see again that Lipschitz functions serve as nice observables to probe the concentration of measure µ through the concentration of functions around the constant value M [f ]. (ii) An important concept follows from the corollary.That is, the sub-Gaussian tail of the distribution of f inherits from Theorem 0.1 or more precisely the isoperimetric inequality, and thus the sub-Gaussian nature of the distribution is of purely geometric origin and not related to the central limit theorem.(iii) f (c) can have a very complicated dependence on c. 
(iv) However, M[f] differs from the mean ∫f dµ in general. Due to this, the corollary is not useful to us in practice. A natural question is: how can we obtain Lévy's lemma used in the paper, which gives the concentration of f around the mean ∫f dµ? To answer this question we make the following observation. We parametrize the N complex coefficients c_n (n = 1, 2, …, N), which satisfy Σ_{n=1}^N |c_n|² = 1 and thus constitute the coordinates of S^{2N-1}, as
    c_n = (a_n + i b_n) / (Σ_{m=1}^N (a_m² + b_m²))^{1/2}.  (S5)
Then, it is well known [2] that if the 2N real variables a_n, b_n are independent standard normal random variables, then c is distributed uniformly over S^{2N-1}. This motivates us to investigate the concentration of Gaussian measures γ over Euclidean spaces of general dimension. This is the purpose of the next part. In doing so we will sketch how to obtain Lévy's lemma used in the paper.

S0.2 Concentration of Gaussian measure over the Euclidean space
The discussions above are built upon very few notions, namely, the geometric body S^{2N-1} equipped with the geodesic metric and the uniform measure µ. It is possible to extend the discussions to a general geometric body equipped with an appropriate metric and measure. We refer the general discussions to standard mathematical literature [2,3]. Here we study in detail the Euclidean space R^k of general dimension k, equipped with the Euclidean metric ‖•‖ and the standard Gaussian probability measure
    γ(dz) = (2π)^{−k/2} e^{−‖z‖²/2} dz,
which we used in this work. For this measure, there exists a concentration phenomenon similar to what occurs for the uniform measure over the sphere. That is, a set slightly larger than a half space, which has a Gaussian measure of exactly one half, covers most of the entire measure of the space, despite the fact that the latter is noncompact. With the introduction of the ε-enlargement A_ε (in the same way as before, except that the geodesic distance is replaced by the Euclidean one), this phenomenon can be formalized as the following [3]:
Theorem 0.3. Let A be a subset of R^k, whose Gaussian measure γ(A) ≥ 1/2, and k be an arbitrary natural number. Then
    γ(A_ε) ≥ 1 − e^{−ε²/2}.  (S7)
Remark. Compared with Theorem 0.1 we find that this CM phenomenon has a striking feature: it is dimension free, i.e., the right-hand side of the inequality (S7) has no k-dependence.
As before, the concentration of the measure γ can be probed by nice observables. To be specific, we take any real-valued function f : R^k → R which has the Lipschitz continuity, with the Lipschitz constant
    ‖f‖_Lip = sup_{z≠z′} |f(z) − f(z′)| / ‖z − z′‖.
Then we have the following [2,3]:
(Pisier) Theorem 0.4. Let f : R^k → R be Lipschitz and R^k be equipped with the standard Gaussian measure γ. Then for any ε > 0,
    Pr(|f − E[f]| > ε) ≤ 2 e^{−δε²/‖f‖²_Lip},  (S9)
where the expectation value E[f] = ∫f dγ and δ is some positive absolute constant.
Remarks. (i) Unlike Corollary 0.2, this theorem refers to the concentration around the mean, rather than the median. (ii) Inheriting from Theorem 0.3, the concentration of f is dimension free also, i.e., the right-hand side of the concentration inequality (S9) has no k-dependence. This also shows that the sub-Gaussian nature of the distribution of f(z) is purely geometric and not related to the central limit theorem. (iii) f(z) can have a very complicated dependence on z.
Now consider k = 2N and z ≡ (a_1, b_1, …, a_N, b_N), where a_n, b_n are independent Gaussian variables parametrizing c_n in the way shown by Eq. (S5). Then, by using Theorem 0.4 one can prove Lévy's lemma used in the paper [2]:
(Lévy) Lemma 0.5. Let µ be the uniform probability measure over S^{2N-1}, and f : S^{2N-1} → R be Lipschitz. Then
    Pr(|f − ∫f dµ| > ε) ≤ 2 e^{−δε²N/‖f‖²_Lip},
where δ is some positive absolute constant.
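Before moving on, the geometric statement of Sec. S0.1 can be made concrete: the strip measure µ(S) in Eq. (S1) is a one-dimensional quadrature, so Table I can be regenerated numerically. The sketch below evaluates the cos^{2N−2}ϕ density and bisects for the ε giving µ(S) = 0.9; the grid size and tolerance are arbitrary choices.

```python
# Reproduce the Table I calculation: the angle eps such that the equatorial
# strip |phi| <= eps/2 carries 90% of the area of S^(2N-1). Plain quadrature
# of the cos^(2N-2) surface-measure density of Eq. (S1).
import numpy as np

def strip_measure(eps, N, grid=20001):
    """mu(S) from Eq. (S1) for the strip of latitude |phi| <= eps/2."""
    phi_all = np.linspace(-np.pi / 2, np.pi / 2, grid)
    phi_strip = np.linspace(-eps / 2, eps / 2, grid)
    density = lambda phi: np.cos(phi) ** (2 * N - 2)
    return np.trapz(density(phi_strip), phi_strip) / np.trapz(density(phi_all), phi_all)

def eps_for_mass(N, target=0.9, tol=1e-6):
    lo, hi = 0.0, np.pi
    while hi - lo > tol:                 # bisection: mu(S) grows with eps
        mid = 0.5 * (lo + hi)
        if strip_measure(mid, N) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for N in (2, 10, 100, 1000):
    print(f"N={N:5d}   eps(mu=0.9)={eps_for_mass(N):.4f}")
```

The printed ε shrinks roughly as 1/√N, in line with Remark (i) after Theorem 0.1.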
The studies of wave-to-wave fluctuations are based on Lévy's lemma. As shown above, this lemma is a nontrivial application of Pisier's theorem. Later on, we will see that Pisier's theorem is very useful also in the studies of the sample-to-sample fluctuations of W(x; ω) and L_x(c; ω).

S1 Calculations and properties of ‖I_x‖_Lip
This section is divided into five subsections. In Sec. S1.1 we show that if I_x(c) is continuously differentiable, i.e.,
    sup_c ‖∇_c I_x‖ < ∞,  (S11)
where ∇_c is the orthographic projection of ∇ in R^{2N} onto S^{2N-1}, then I_x(c) is Lipschitz. In other words, continuous differentiability is stronger than Lipschitz continuity. In doing so, we connect the Lipschitz constants with respect to distinct metrics, namely, the geodesic metric and the Euclidean metric (in R^{2N}). In Sec. S1.2, by deriving Eq. (5), we confirm that continuous differentiability, namely, the condition (S11), is indeed satisfied. In Sec. S1.3 we perform numerical analysis and establish the universality of ‖I_x‖_Lip (with respect to the Euclidean metric) at large N. In Sec. S1.4 we explain this universality by CM. In Sec. S1.5 we use this universality to derive Eq. (13).

S1.1 Results of continuous differentiability
We start by showing that continuous differentiability implies Lipschitz continuity:
Lemma 1.1. If I_x(c) is continuously differentiable, i.e., the condition (S11) holds, then I_x(c) is Lipschitz.
To see this, note that for any c, c′ connected by the geodesic γ(c, c′),
    |I_x(c) − I_x(c′)| ≤ sup_c ‖∇_c I_x‖ ∫_{γ(c,c′)} ds,  (S13)
where ∫_{γ(c,c′)} ds is the geodesic distance of γ. It satisfies
    ∫_{γ(c,c′)} ds ≥ ‖c − c′‖,  (S14)
    ∫_{γ(c,c′)} ds ≤ (π/2) ‖c − c′‖.  (S15)
Combining the inequalities (S13), (S14) and (S15), we prove the lemma. □
Remarks. (i) That continuous differentiability implies Lipschitz continuity is actually very general. In particular, one may replace S^{2N-1} by an arbitrary space C (equipped with a certain metric). (ii) The lemma shows that
    ‖I_x‖_Lip ≤ (π/2) sup_c ‖∇_c I_x‖.  (S16)
On the other hand, because of
    lim_{c′→c} |I_x(c) − I_x(c′)| / ‖c − c′‖ = ‖∇_c I_x‖,
we have
    ‖I_x‖_Lip ≥ sup_c ‖∇_c I_x‖.  (S18)
The inequalities (S16) and (S18) imply that
    ‖I_x‖_Lip = a sup_c ‖∇_c I_x‖,
where a is some numerical constant. Because a does not grow with N and is close to unity, it does not affect any results obtained from Lévy's lemma, except to slightly modify the absolute constant δ in the sub-Gaussian tail. Thus we do not pay attention to its exact value any more and set a = π/2, i.e.,
    ‖I_x‖_Lip = (π/2) sup_c ‖∇_c I_x‖.  (S20)
This provides an analytic formula for ‖I_x‖_Lip, which we will use in the subsequent mathematical and numerical analysis.

S1.3 Results of Eq. (5)
It is difficult to calculate Eq. (5) explicitly and analytically, especially in view of the fact that ω is fixed. However, calculating Eq. (5) numerically is relatively easy. The point is that, owing to the analytic expression of L_x(c; ω), we do not need to perform numerical differentiation, which is technically demanding. To be specific, given ω we can simulate Eq. (1) by using the method described in Sec. S7 and obtain the set {E_n(x)}. Upon substituting the result into Eq. (5) we obtain L_x(c; ω) and ‖I_x‖_Lip. In this part we put this numerical scheme into practice and calculate Eq. (5) explicitly. Surprisingly, we will show that for large N, ‖I_x‖_Lip displays universalities, and even stronger results hold for L_x(c; ω).
Given N we draw c randomly from a uniform distribution over S^{2N-1} by using the algorithm described in Sec. S7. Then we simulate the wave propagation in a fixed disorder realization ω (the details are described in Sec. S7), and calculate ‖∇_c I_x‖, or equivalently L_x(c; ω), by using the method described above. This gives a depth profile of ‖∇_c I_x‖. We repeat the numerical experiment for 10⁴ randomly chosen c, and thus have 10⁴ profiles in total. We have checked that the number 10⁴ is large enough to ensure that the numerical results converge. Finally, the procedure is repeated for several distinct N.
To obtain the depth profile of ‖I_x‖_Lip, we perform the numerical analysis of the profiles of ‖∇_c I_x‖ in the following way. (i) We average the profiles corresponding to the same N. The average profile, ∫‖∇_c I_x‖ dµ, is shown in the main panel of Fig. S2 for three different N. We see that these three profiles collapse into a single curve which is continuous in x, and the value of ∫‖∇_c I_x‖ dµ is of order unity at every x. (ii) We calculate the maximal deviation δ_I(x) from the average value ∫‖∇_c I_x‖ dµ for distinct x. As shown in the inset of Fig. S2, δ_I(x) decays with N. This implies that for large N the two profiles, ‖∇_c I_x‖ and ∫‖∇_c I_x‖ dµ, converge, giving
    ‖∇_c I_x‖ = ∫‖∇_c I_x‖ dµ,  (S28)
i.e., ‖∇_c I_x‖ is independent of c. By using Eq. (5), namely Eq. (S20), and Eq. (S28) we find that ‖I_x‖_Lip = (π/2) ∫‖∇_c I_x‖ dµ. Since, as shown in (i), the right-hand side has no N-dependence and is of order unity, Eq. (12) follows. The curves in Fig. S2, when multiplied by π/2, give the three ‖I_x‖_Lip curves in Fig. 2.
The finding of Eq. (S28) motivates us to perform a careful study of the universalities of ‖∇_c I_x‖ = (2/π) L_x(c; ω). We perform simulations for large N (= 800). Surprisingly, we find that L_x(c; ω) is independent not only of c, but also of ω, as shown in Fig. 2. In other words, L_x(c; ω) gives a universal curve parametrized by x for very large N.

S1.4 Universality of L_x(c; ω) from CM
In this part we employ CM to explain the universality of L_x(c; ω) with respect to c and ω. We stress that this is not a rigorous mathematical proof, which is far beyond the present work.
First of all, we fix ω. From Eq. (5), or Eq. (S26), we see that L_x(c; ω) is continuously differentiable and thus Lipschitz. Let the Lipschitz constant of L_x(c; ω) (ω fixed) be ‖L_x(ω)‖_Lip. By making use of Lemma 0.5 we obtain
    Pr(|L_x(c; ω) − ∫L_x dµ| > ε) ≤ 2 e^{−δε²N/‖L_x(ω)‖²_Lip}.
From this we see that, provided ‖L_x(ω)‖²_Lip ≪ N, strong concentration around the mean ∫L_x dµ results. This gives the universality with respect to c.
Next, we fix c. From Eq. (5), or Eq. (S26), we may expect that L_x(c; ω) is continuously differentiable with respect to ω ≡ {ω_(x,y)}, when the medium is discretized into M lattice points. Thus we have
    |L_x(c; ω) − L_x(c; ω′)| ≤ ‖L_x(c)‖_Lip ‖ω − ω′‖,
where ‖L_x(c)‖_Lip is the corresponding Lipschitz constant. The ensemble of disorder realizations ω, by definition, is described by a Gaussian probability measure on R^M of zero mean and variance σ². Let z in (Pisier's) Theorem 0.4 be ω, f(z) be L_x(c; ω), and k = M. After rescaling we obtain from this theorem
    Pr(|L_x(c; ω) − E[L_x(c; ω)]| > ε) ≤ 2 e^{−δε²/(σ²‖L_x(c)‖²_Lip)}.  (S34)
It is important to note that for this concentration inequality the exponent of the sub-Gaussian tail bound has no explicit N-dependence. Now, if we assume that ‖L_x(c)‖_Lip decays with N, then for sufficiently large N we have strong concentration of L_x(c; ω) around the disorder average E[L_x(c; ω)]. This explains the universality of L_x(c; ω) with respect to ω.
Summarizing, the non-rigorous discussions above indicate that the universalities of L_x(c; ω) may have deep connections to CM.
S1.5 Derivations of Eq. (13)
In this part we study the consequences of the universalities of L_x(c; ω). By Jensen's inequality we have
    ‖I_x‖²_Lip ≥ ∫ L_x² dµ
in general. With the universality of L_x(c; ω) with respect to c taken into account, a stronger result follows, i.e.,
    ‖I_x‖²_Lip = ∫ L_x² dµ.
Now we show that this gives Eq. (13). Substituting Eq. (5) into it we obtain
    ‖I_x‖²_Lip = (π²/4) ∫ ‖∇_c I_x‖² dµ.  (S37)
According to the integral formula (S104) derived in Sec. S8, for the integral ∫c*_m c_n dµ not to vanish it is necessary that m = n. For the integral ∫c*_n c_{n′} c*_m c_{m′} dµ not to vanish, it is necessary that one of the following conditions holds: n = n′ and m = m′, or n = m′ and m = n′. Using the integral formulae (S105) and (S106) derived in Sec. S8, we evaluate the corresponding moments. Using this result we reduce Eq. (S37) to a double sum over eigenchannel overlaps; from this we obtain, after simple algebra,
    ‖I_x‖²_Lip = (π²/N) Σ_{n,n′=1}^N |E_n†(x) • E_{n′}(x)|².  (S41)
Recall that E_n(x) ≡ {E_na(x)}, where a labels the ideal waveguide modes. We find that Eq. (S41) is equivalent to Eq. (13).

[FIG. S3: Top: Simulations show that the 2D structure of the transparent eigenchannel (τ = 1) in a slab (left, width-to-length ratio = 48) is very different from that in quasi 1D (upper right, ratio = 0.1): the former exhibits localization in the transverse (y) direction while the latter does not. Surprisingly, despite this difference, upon integrating over y we find that the 1D structure W_τ(x) of a single slab, without ensemble averaging, is well described by Eq. (S45) for the ensemble of quasi 1D disordered media (lower right). Bottom: the same as top, but for τ = 0.5.]

S2 Eigenchannel structures in a slab
As the impacts of high dimension on the eigenchannel structures {W_{τ_n}(x)} remain largely unexplored, in this section we study the explicit forms of W_{τ_n}(x) in a single disordered slab and their dependence on N, ω. Specifically, we use the numerical method described in Sec. S7 to simulate the profiles {|E_n(x, y)|²} and {W_{τ_n}(x)}, respectively, and compare the results for slabs with those for quasi 1D media.
The simulation results are shown in Fig. S3. We see that an eigenchannel in a slab and one in a quasi 1D medium, even when they correspond to the same τ, have totally different 2D structures: in a slab localization structures appear in the transverse (y) direction, while in quasi 1D they do not. This notwithstanding, surprisingly, when we integrate out y we find that the ensuing 1D structure W_{τ_n}(x), despite being for a single slab, is well described by a universal analytic formula W_τ(x) originally derived for an ensemble of quasi 1D disordered media [4]. For the convenience of what follows we quote the formula here. Let the transmission eigenvalue be parametrized as
    τ = 1/cosh²φ.  (S43)
Then the analytic expression of W_τ(x) is given by Eqs. (S44) and (S45), which involve the transport mean free path ℓ, the rescaled depth x′ = x/L, and a function h(x′) that increases monotonically from h(1) = 1 as x′ decreases from 1. For diffusive waves both W_{τ=1}(x) and h(x′) are independent of N.
In a single disordered slab the transparent eigenchannel structure is in agreement with Eq. (S44); by fitting the former with the latter, we obtain ℓ numerically for a single disorder realization.
S3 Lipschitz continuity of W(x; ω)
In this section we study the Lipschitz continuity of W(x; ω) (as a function of ω). Specifically, we show the inequality (9) and derive an analytic expression of c(x). Below, to keep the proof simple, we use the result shown numerically in Sec. S2, i.e., that the analytic expression of W_τ(x), derived originally for quasi 1D disordered media [4], applies also to a single disordered slab. However, as we discuss at the end of this section, the key result actually does not depend on the details of the expression, but only on its general properties regarding continuity.
The proof includes the following three steps.

S3.1 Step I
We introduce a family of real-valued functions (labeled by x), defined as
    g_z(x) ≡ W_τ(x)|_{τ=1/cosh²z}.
In this step we study the Lipschitz continuity of this family of functions. As discussed in Sec. S1.1, if a function is continuously differentiable, then it is Lipschitz. Thus we consider the derivative of g_z,
    dg_z(x)/dz.  (S47)
Substituting Eqs. (S43)-(S45) into Eq. (S47), we obtain
    dg_z(x)/dz = F(φ)|_{φ=z},  (S48)
where F(φ) is an explicit function determined by Eqs. (S44) and (S45). For fixed x′ ∈ [0, 1] the function F(φ) is continuous and finite at φ = 0. On the other hand, it is well known [5,6] that φ has an upper bound φ_m which is independent of N. Thus the maximal value of |F(φ)| is independent of N. Taking this into account, we find from Eq. (S48) that the derivative of g_z(x), with respect to z, has a least upper bound which is independent of N. This bound gives the Lipschitz constant of the function g_z(x). As a result, |g_z(x) − g_{z′}(x)| is bounded by an N-independent constant times |z − z′|. We thus achieve the result of the first step.

S3.2 Step II
To proceed we introduce the Hilbert-Schmidt (HS) norm of a matrix A = {A_ij ∈ C}, defined as
    ‖A‖_HS = (Σ_ij |A_ij|²)^{1/2}.
In this step we bound |W(x; ω) − W(x; ω′)| by ‖t(ω) − t(ω′)‖_HS, by using the Cauchy-Schwarz inequality together with the result of Step I.

S5 Derivations of the relation (14)
To derive the relation (14) we decompose I_x(c) into two parts, I_x(c) = I_inc,x(c) + I_coh,x(c), with
    I_inc,x(c) = Σ_{n=1}^N |c_n|² E_n†(x) • E_n(x),  (S92)
    I_coh,x(c) = Σ_{n≠n′} c*_n c_{n′} E_n†(x) • E_{n′}(x).  (S93)
The first part, I_inc,x(c), accounts for the incoherent contributions of eigenchannels, while the second part, I_coh,x(c), accounts for the contributions arising from the coherence between distinct eigenchannels. By using the integral (S104) in Sec. S8, we find
    var(I_x) = var(I_inc,x) + var(I_coh,x).
This shows that the variance includes the incoherent and coherent contributions, corresponding to the first and second terms, respectively. Substituting Eqs. (S92) and (S93) into these two terms, and using the integrals (S104)-(S106), we obtain Eqs. (S95) and (S96). Adding Eqs. (S95) and (S96) together gives the full variance; comparing it with Eq. (13) we justify Eq. (14). We have provided in Fig. 3 simulation results for distinct N and x, which confirm the relation (14).
In the second part of this section we perform further numerical studies of var(I_x) and ‖I_x‖²_Lip. To be specific, we fix N to be 400, and use the method described in Sec. S7 to simulate wave propagation in a single disordered slab (L = 50) for 10⁴ randomly chosen incoming waves. The simulated depth profile of π²N var(I_x) is shown in Fig. S6. Furthermore, for the same disordered medium we use the analytic formula (13) to calculate ‖I_x‖²_Lip, and its incoherent and coherent parts, respectively. The corresponding profiles are also shown in Fig. S6. We see that the profiles of π²N var(I_x) and ‖I_x‖²_Lip are identical, and that the incoherent (coherent) part of ‖I_x‖²_Lip dominates in the back (front) part of the medium.

S6 Derivations of Eq. (15)
In most of this work we have focused on the energy density I_x(c) integrated over the transverse coordinate y. In this section we consider a generic observable, namely, an operator Ô. Importantly, this observable need not be an integral over y. For simplicity we consider Ô which is hermitian; note that this condition is not essential. For a non-hermitian operator the expectation value at a stationary scattering state has an imaginary part in general. In this case we only need to discuss the real and imaginary parts separately, and repeat the analysis below.
Treating ω as a scattering potential, we apply the scattering theory of waves [8] to Eq. ( 1) and find the concentration effect is strong and the wave-to-wave fluctuations are negligible.Thus we have Eq.(15).Note that the condition (S103) is the generalization of the criterion (8). S7 Method of numerical simulations We simulate the wave propagation by using Eq. ( 1), where δ (x, y) is drawn from a uniform distribution over an interval [−δ 0 , δ 0 ].Here δ 0 ∈ (0, 1) governs the disorder strength and is set to 0.97 in simulations.The values of δ (x, y) at distinct spatial points are chosen in the same way and independently.For simulations, Eq. ( 1) is discretized on a square grid, with the grid spacing being the inverse wave number in the ideal waveguide, and L is defined as the number of discrete points in x-direction.We solve this equation by using the recursive Green's function method [9].We first calculate x = L, y|G|x = 0, y , from which we obtain the transmission matrix t.By performing the singular value decomposition of t, we obtain {v n } and {τ n }.Next, we calculate x, y|G|x = 0, y , from which we obtain the matrix t(x).Then the wave field in the interior of the medium is given by E(x, y) = N a=1 (t(x)c) a ϕ * a (y).For a single disorder realization, small oscillations in x are often superposed on non-oscillatory backgrounds.These oscillations occur in the wavelength scale and are unimportant.They are removed by performing the local (in x) average over a window with a width of wavelength. To draw a point c ≡ (c 1 , c 2 , • • • , c N ) from a uniform distribution over S 2N −1 , we use the standard method [2,10] S8 Derivations of three integrals In the discussions above we have used the following FIG. 1 : FIG.1:A wave is launched into a disordered medium of N channels.In this setting there are two geometric bodies that provide the basis for applying CM, namely, the sphere S 2N −1 constituted by distinct incoming current amplitudes c and the Euclidean space R M by distinct disorder realizations ω. v 1 2 x E(x, y) = N a=1 (t(x)c) a ϕ * a (y).Here c = N n=1 c n v n is the incoming current amplitude in the eigenchannel representation.Thus FIG. 3 : FIG.3: Simulations show that in a single slab ω the profiles Ix(c) for distinct c concentrate around W (x; ω) (lower panels), and the data: ( Ix Lip, π 2 N var(Ix)) (symbols) collapse into a straight line of unit slope for distinct large N while deviate from this line for small N (upper panel).L = 50 FIG. 4 : FIG. 4: Quasi 1D simulations show that, for both diffusive (a) and localized (b) waves, the distribution of wave-to-wave fluctuations of Ix(c) (black histograms) displays a tail well fit by a Gaussian distribution (pink dashed lines), while that of sample-to-sample fluctuations (blue histograms) is exponential (a) and stretched-exponential with a stretching exponent of 0.4 (b), respectively (red dashed lines).In (b) the main panel and inset differ in the horizontal axis.The localization length N [55, 56] is 130 in (a) and 65 in (b).Īx= Ixp(Ix)dIx. C.T. is grateful to J.-C.Garreau, F.-L. Lin, and Z. Q. Zhang, especially to A. Z. Genack and I. Guarneri for inspiring discussions and comments on the manuscript.This work is supported by the National Natural Science Foundation of China (No. 11535011 and No. 11747601). FIG. S1: A strip S around an equator in the sphere (left) and its projection onto a 2D plane (right).The latitude of the strip ϕ ∈ [− 2 , 2 ]. 2 N=400 FIG. 
S6: The depth profiles of 2 N var(Ix) obtained from simulating 10 4 randomly chosen incoming waves and Ix 2 Lip obtained from the analytic formula (13) collapse into a single curve.The blue (red) curve is the (in)coherent part of Ix 2Lip . to calculate I x 2 Lip , and its incoherent and coherent parts, respectively.Corresponding profiles are shown in Fig. S6 also.We see that the profiles of π 2 N var(I x ) and I x 2 Lip are identical, and the incoherent (coherent) part of I x 2 Lip dominates in the back (front) part of the medium.S6 Derivations of Eq.(15) v 1 2 x 1 2 1 2 xE E(x, y) = N a=1 (t(x)c) a ϕ * a (y).current amplitude in the eigenchannel representation, and vx ≡ (1 − ∂ 2Ωy ) is a scalar (not vector) operator accounting for the group velocity in the waveguide modes.Using Eqs.(S98) and (S99), we find that at the stationary scattering state E(x, y) corresponding to c, the expectation value of Ô isO(c) ≡ E| Ô|E = N n,n =1 c * n c n E n | Ô|E n .(S100)Recall that E n (x, y) is the 2D wave field of the nth eigenchannel.Strictly speaking, like the definition of I x (c), in the last equality above there are two scalar operators v− sandwiching Ô.However, because these two operators account only for an irrelevant overall factor, we omit them hereafter.Naturally, the problem now is: For fixed ω, does O(c) exhibit universal behaviors when c varies?Below we use Lévy's lemma to study this problem.Equation (S100) defines a real-valued function over S 2N −1 .If the coefficients E n | Ô|E n do not diverge, (this condition can be achieved readily in physical systems.)then O(c) is continuously differentiable and thus Lipschitz, according to the discussions in Sec.S1.1.With the help of (Lévy) Lemma 0.5, we obtain the following concentration inequality:Pr O(c) − Ō > ε ≤ 2e − δε 2 N O 2 Lip , (S101)where O Lip is the Lipschitz constant of O.This means that O(c) concentrates around its mean: Ō ≡ Odµ.The latter can be readily calculated by using the definition (S100).As a result, n | Ô|E n .(S102)Thewave-to-wave fluctuations of O is governed by O Lip .According to the inequality (S102) (cf.Sec.S0.1).First, we generate 2N independent standard normal random variables, (a n , b n ) with n = 1, 2, • • • , N .Secondly, we normalize each a n (b n ) by N n=1 ((a n ) 2 + (b n ) 2 ).Define the normalized a n (b n ) as the real (imaginary) part of c n [cf.Eq. (S5)].We generate a desired random point c.
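As a concrete illustration of the two geometric ingredients above, the sketch below draws a uniform point on $S^{2N-1}$ exactly as just described and decomposes it in the eigenchannel basis obtained from an SVD of a transmission matrix. This is a minimal sketch: the Gaussian random stand-in for $t$ and the small $N$ are illustrative assumptions only; in the actual simulations $t$ comes from the recursive Green's function method of Sec. S7.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_c(N):
    """Uniform point on S^(2N-1): generate 2N independent standard
    normals (a_n, b_n), normalize by sqrt(sum(a_n^2 + b_n^2)), and use
    them as the real/imaginary parts of c_n, as described above."""
    a = rng.standard_normal(N)
    b = rng.standard_normal(N)
    norm = np.sqrt(np.sum(a**2 + b**2))
    return (a + 1j * b) / norm

N = 8
# Stand-in transmission matrix (NOT from the Green's function method).
t = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)

# The SVD yields the eigenchannel input vectors v_n (columns of V) and
# the transmission eigenvalues tau_n = s_n^2.
U, s, Vh = np.linalg.svd(t)
tau = s**2

c = sample_c(N)        # random incoming amplitude with |c| = 1
c_eig = Vh @ c         # coefficients c_n in c = sum_n c_n v_n
assert np.isclose(np.linalg.norm(c), 1.0)
assert np.allclose(Vh.conj().T @ c_eig, c)
```

Averaging an observable $O(c) = \sum_{n,n'} c_n^* c_{n'} M_{nn'}$ over many such draws reproduces $\bar O = \mathrm{tr}\,M / N$, in line with Eq. (S102).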
2018-07-03T03:15:32.000Z
2017-11-28T00:00:00.000
{ "year": 2018, "sha1": "4c65044a2158b40c4b435cca154098d0cdbd917a", "oa_license": null, "oa_url": "https://arxiv.org/pdf/1807.00961", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "4c65044a2158b40c4b435cca154098d0cdbd917a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics", "Medicine" ] }
270065095
pes2o/s2orc
v3-fos-license
Multidisciplinary Management of Skull Metastatic Follicular Thyroid Cancer in a Resource-Limited Setting Abstract A 60-year-old woman presented to the Department of Surgery with an anterior neck mass and a mass on her left forehead. She was diagnosed with follicular thyroid cancer with metastasis to the skull, a rare presentation of follicular thyroid cancer that is associated with a poor prognosis. A multidisciplinary team evaluated the patient and devised a 3-staged surgical management plan: total thyroidectomy with central lymph node dissection, cranial metastasectomy, and cranioplasty with autologous split rib graft. This case illustrates how innovative multidisciplinary surgical management can be applied in a low-resource setting involving 3 surgical sub-specialties for the best possible outcome in a patient with metastatic follicular thyroid cancer. Introduction Skull metastasis from thyroid cancer is extremely rare, accounting for only 2.5% of all bone metastasis [1].The preferred management strategy in most patients with bone metastatic thyroid cancer of follicular cell origin is surgical resection of all loco regional disease, if possible, followed by 131I therapy for radioactive iodine (RAI)-responsive disease, external beam radiation therapy or other directed treatment modalities such as thermal ablation, and then thyrotropin (TSH)-suppressive thyroid hormone therapy for patients with stable or slowly progressive asymptomatic disease.Systemic therapy with kinase inhibitors (preferably by use of FDA-approved drugs or participation in clinical trials) in patients with progressive disease that is RAI refractory [2]. Case Presentation A 60-year-old woman presented with a 2-month history of a left anterior neck mass.It was associated with voice change and a left scalp mass with no associated headache, seizures, change in behavior, weakness, or speech difficulty.On head and neck physical examination, there was a pulsatile, firm, and mobile 5.7-cm mass on the left frontal skull.Additionally, the thyroid was enlarged on the left side and the mass was firm, with no attachment to the skin or underlying pre-tracheal fascia.There were also cervical lymphadenopathies with asymmetric cortex. Diagnostic Assessment She had a neck ultrasound that revealed a 4.3-cm left lobe hypo echoic mass with irregular borders and increased vascularity.Additionally, a hyperechoic left upper lobe mass was also seen.Scalp ultrasound showed a 3 × 3.5 cm ill-defined, hyper vascular left frontal lesion that extended into the skull bone with mass effect on the underlying dura.Fine needle aspiration cytology revealed follicular neoplasm both from the thyroid and the skull lesion.Thyroid function tests were within the normal range.Brain magnetic resonance imaging showed a large, spherical, well-circumscribed mass lesion measuring 5.7 × 5.9 × 7 cm in size in the left frontal skull having T1 and T2 predominantly homogenous iso-intense signal to the adjacent muscles.The mass had both an intra-and extra-skeletal component, with associated compression of the underlying left frontal lobe parenchymal but no obvious parenchymal infiltration or adjacent parenchymal signal changes seen.The mass showed diffuse contrast enhancement on post-contrast study, but no significant restriction seen on diffusion weighted imaging (DWI) and apparent diffusion coefficient (ADC) mapping.There was compression and elevation of the overlying scalp tissue (Fig. 
1).Other available metastatic workups, such as chest x-ray and abdominal and pelvic ultrasound studies, were negative. Treatment After having multidisciplinary discussion (Endocrine Surgery, Neurosurgery, and Plastic and Reconstructive Surgery), it was decided to do the thyroid surgery first, followed by the skull.Thus, a total thyroidectomy with central neck dissection was performed, and the patient was discharged without complications and put on levothyroxine 200 mcg orally daily. Two months later, she had the skull lesion resected by neurosurgeons.The intraoperative finding was a 10 × 10 cm left frontal soft, hemispheric mass with erosion of the bone, which was resected with a 1.5 cm margin.The involved dura was also excised and a duraplasty was performed to repair the defect in the dura (Fig. 2).She had an uneventful postoperative course from her craniectomy. Three months after the second surgery, she was operated on by the plastic and reconstructive surgeons.The sixth and seventh ribs were harvested from the right side of her chest wall and wired with adjacent cranial bone (Fig. 3) after splitting it into 4 segments to reconstruct the defect.She had an uneventful postoperative course from this surgical procedure as well. Outcome and Follow-Up Currently, the patient is on her twentieth month after the last surgery and doing well on levothyroxine, adhering to the planned active postoperative surveillance.Her wound has healed, and the depressed part of the skull achieved ideal contour at 1 month following the final surgery.(Figure 4).No signs of recurrence in the thyroid bed, neck, skull, or other body parts were observed during her recent surveillance visit, both in imaging and upon physical examination. The patient did not receive RAI ablation therapy because it is not available in the country; serum thyroglobulin and antibodies are also not available. 
Discussion Follicular thyroid cancer is the second most common thyroid cancer after papillary thyroid cancer.Follicular thyroid cancer is common in areas where iodine deficiency is prevalent and in patients with long-standing goiter [2].It occurs more frequently in women, and usually presents in the fifth and sixth decades of life.In 1% to 9% of patients with follicular thyroid cancer, metastasis to the bone, liver, and lung are present at diagnosis [3].In patients where metastatic disease was diagnosed at initial presentation, the predominant sites were the bones (spine, pelvis, hip, and scapula) (42%), followed by lungs (33%), brain (17%), and lymph nodes (8%) [4].A solitary bony metastasis is a rare initial presentation of follicular thyroid cancer, with a skull lesion being particularly rare, accounting for 2.5% of all bone metastasis [1,5] Calvarial metastasis from thyroid cancer can occur at any age, with a higher rate of occurrence in female patients.Mean duration from initial diagnosis of thyroid cancer to calvarial metastasis is variable, ranging from 4 to 52 years.The most common findings of the metastasis were soft, hemispheric mass on the skull that is highly vascular and causing destruction/osteolysis of the skull bone [6] which was also evident in our patient's case.Due to the incredibly small number of cases, there are no guidelines for the treatment of bony metastasis to the skull from a thyroid primary.The gold standard for the management of metastatic thyroid cancer involves total thyroidectomy.A decision is then made with regard to whether resection of the bony metastasis is in the best interests of the patient, with most patients having radioactive iodine therapy and radiotherapy [6].Disease-specific survival rates at 5 years have been reported to be between 26% and 39% for patients with metastatic disease.The mean survival time for these patients was 4.5 years [6].Poor outcome in bone metastasis may be due to lack of effective RAI.In our resource-limited setup, whereby the patient cannot get RAI or access external beam radiation, resection of the bone metastasis was considered, since it had a possibility of improving survival.Therefore, 3-staged surgery with local control of the follicular thyroid cancer, distant metastasectomy, and cranioplasty was done.Cranioplasty after craniectomy is done to reconstruct a protective physical barrier, create a natural convex contour of the calvarium, and prevent sinking skin flap syndrome [7].There are also several reports in the literature that describe several advantages of cranioplasty, including, but not limited to, enhancement of cerebral glucose metabolism, improvement of cerebrovascular reserve capacity, postural blood flow regulation, and cerebrospinal fluid circulation [8]. The optimal timing of cranioplasty should aim to avoid one of the most serious cranioplasty complications-postoperative infection-which can also be affected by factors such as long operation time, early cranioplasty, older age, and female gender [7].Currently, early cranioplasty is recommended, since it has shown improved clinical outcomes with regard to neurocognitive improvement [9][10][11].In our case, the reconstruction was done at 3 months following the craniectomy, and the patient was discharged without any complications. Our case highlights that a multidisciplinary team approach for advanced follicular thyroid cancer, using an innovative approach for reconstruction in our limited-resource setting, can result in a good outcome. 
Learning Points
• Solitary skull metastasis is an extremely rare initial presentation of follicular thyroid carcinoma.
• Metastatic lesions to the skull from follicular carcinoma of the thyroid are usually highly vascularized and cause osteolytic lesions with significant local destruction of the bone.
• Even in a resource-limited setup, a multidisciplinary approach to metastatic follicular thyroid carcinomas offers the best possible outcome for patients.

Figure 1. Axial MRI image showing a well-circumscribed left frontal bone mass with intra- and extra-cranial components and mass effect on the left frontal lobe.
Figure 2. Intraoperative left frontal bone defect after excision of the bone metastasis (left panel) and postoperative skull defect after cranial metastasectomy (right panel).
Figure 3. Harvested sixth and seventh ribs before and after placement on the cranial defect.
2024-05-29T05:06:08.119Z
2024-05-27T00:00:00.000
{ "year": 2024, "sha1": "86301cc893138de80f22ec547fd8ad4aa1438e88", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "86301cc893138de80f22ec547fd8ad4aa1438e88", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235369279
pes2o/s2orc
v3-fos-license
The Diagnostic Yield of Excisional Biopsy in Cervical Lymphadenopathy: A Retrospective Analysis of 158 Biopsies in Adults Objectives: Cervical lymph nodes are the most common site of peripheral lymphadenopathy. The underlying etiologies are usually benign and self-limiting but may include malignancies or other severe life-threatening diseases. The aim of the current study was to investigate the various underlying pathologies of cervical lymphadenopathy as assessed by the diagnostic yield of excisional lymph node biopsies of the neck in a tertiary adult practice. The evaluation was performed in light of previous literature and regional epidemiological patterns. Methods: Retrospective analysis of hospital charts of 158 adult patients who underwent an excisional biopsy for suspected cervical lymphadenopathy at a tertiary referral head and neck service between January 2017 and December 2019. Results: The most common underlying pathology was unspecific and/or reactive lymphadenitis in 44.5% of specimens, followed by malignant disease in 38.6% of cases. An age above 40 years was significantly correlated with an increased likelihood of malignant disease. Lower jugular and posterior triangle lymph nodes showed higher malignancy rates than other groups (100% and 66.7%, respectively). The overall surgical complication rate was 2.5%. Conclusions: The results of the current study serve as an indicator of the variety of etiologies causing cervical lymphadenopathy. In particular, given the increasing incidence of malignant diseases in recent decades, the findings should alert physicians to the importance of lymph node biopsy for excluding malignancy in persistent cervical lymphadenopathy especially in older adults. The findings emphasize the value of excisional lymph node biopsy of the neck as a useful diagnostic tool in adult patients with peripheral lymphadenopathy. Introduction Peripheral lymphadenopathy refers to the abnormal size or configuration of peripheral lymph nodes.According to population-based studies in the primary care setting, the incidence of peripheral lymphadenopathy is estimated to be approximately 0.5% of the population. 1,2Cervical lymph nodes represent the most common localization of peripheral lymphadenopathy. 1,3While cervical lymphadenopathy is mostly benign in origin, [1][2][3] a structured diagnostic workup is recommended to exclude underlying malignancies or other severe life-threatening diseases.If lymphadenopathy does not regress spontaneously within a few weeks, the diagnostic workup usually includes imaging studies, serological examination, and, if necessary, a biopsy.The available biopsy techniques range from minimally invasive fine needle aspiration [4][5][6] and core needle biopsy 7,8 to lymph node excision.4][15][16] In contrast, excision biopsies have the highest sensitivity and diagnostic yield because they produce sufficient material and simultaneously allow the analysis of the lymph node architecture.These advantages explain why excision biopsies remain the gold standard in the diagnosis of malignant lymphoma.However, excisional lymph node biopsies have the disadvantage of being more invasive and often require general anesthesia in addition to the associated surgical complications. 
In this study, we report on 158 consecutive excisional lymph node biopsies in adults presenting to a tertiary head and neck surgery clinic. The diagnostic yield is described and distributed according to age and sex. Postoperative complications are listed, and implications for the diagnostic approach to cervical lymphadenopathy are discussed with respect to historical and regional epidemiological patterns.

Patients and Methods

Patient hospital records and imaging data sets of 163 patients who underwent a surgical excisional biopsy for suspected nonresolving cervical lymphadenopathy at the Charité University Hospital Campus Mitte between January 2017 and December 2019 were reviewed. The study was approved by the ethical committee of Charité Medical University (approval number EA1/094/20). Electronic hospital charts, pathology reports, and imaging data sets of all patients were analyzed. The workup protocol included initial empiric antibiotic treatment for 10 days. If the lymphadenopathy did not resolve after 6 weeks, laboratory investigations were performed, including lactate dehydrogenase, infectious disease serology, and a complete blood count with white blood cell differential. Of all 163 patients who underwent a biopsy, 5 biopsies eventually revealed other nonlymphoid masses (schwannoma, neurofibroma, and other benign salivary gland tumors) and were thus excluded from further analysis. In all remaining 158 patients, a neck ultrasound was performed preoperatively. The indication for biopsy was determined if the lymph node was resistant to empiric medical therapy, could not be explained by the laboratory investigations, and showed at least one suspicious sonographic feature (short-axis diameter of more than 1 cm, intranodal necrosis, round configuration, Solbiati index < 2). Fine needle aspiration was generally not performed preoperatively in our center. Flow cytometric analysis was not performed in this cohort. In 135 patients, cross-sectional imaging modalities were additionally employed preoperatively (computed tomography [CT], magnetic resonance imaging, and/or positron emission tomography-computed tomography [PET/CT]). Statistical analysis was performed using IBM SPSS Statistics software (version 26). Correlation tests were performed using the Pearson chi-squared test. P values of <.05 were considered significant.

Patient Characteristics

Of the 158 patients who met the inclusion criteria in the study period, 84 patients were female (53.2%) and 74 patients were male (46.8%). The minimum and maximum ages (at the time of biopsy) were 17 and 81 years, respectively, with a mean age of 47.9 years (±16.1 years). The age distribution of patients is described in Table 1.
Underlying Pathologies

The average diameter of the harvested lymph nodes was 2.7 cm (±0.9 cm). The frequency of underlying pathologies is described in Table 2. The most common underlying pathology was nonspecific changes or reactive lymphadenitis in 71 patients (44.5%), followed by malignant disease in 61 patients (38.6%). Most patients with malignancies showed metastatic disease (30 patients, or 19.0% of total cases), followed by Hodgkin lymphoma in 16 patients (10.1%) and non-Hodgkin lymphoma in 15 patients (9.5%). Of the 15 patients with non-Hodgkin lymphomas, 12 patients had B-cell lymphomas, while 3 patients had T-cell lymphomas. The frequency of the encountered subtypes of non-Hodgkin lymphoma is detailed in Table 3. The metastatic disease group was dominated by metastases of unknown primary (11 patients), followed by regional metastases from head and neck squamous cell carcinoma (HNSCC) in patients with a history of HNSCC and newly suspected isolated nodal recurrence (4 patients). To avoid tumor spillage in the neck, an open surgical biopsy in the neck

Relationship Between Age and Underlying Pathology

Between the ages of 21 and 40, nonspecific reactive lymphadenitis was the most common underlying pathology (24/48 patients, or 50%). From age 41 and above, malignant disease was the most common pathology (49/104 patients, or 47.1%). The distribution of underlying pathologies according to age is detailed in Table 4. An age above 40 years was significantly correlated with a higher likelihood of malignant disease (Pearson chi-squared test, P < .01).

Lymph Node Localization and Surgical Complications

The excised lymph nodes were classified according to their localization in one of the 5 major nodal groups of the neck, following the 2002 recommendations of the American Academy of Otolaryngology-Head and Neck Surgery (excluding level VI) [18]. Of the 158 specimens, the localizations of 20 were unknown and/or could not be clearly attributed to one of the 5 groups. The remaining 138 specimens were distributed according to their localization and the underlying pathology, specifically with respect to malignancy (Table 5). Specimens were most commonly harvested from the upper jugular group (level II), with a total of 56 specimens, and least commonly from the lower jugular group (level IV), with a total of 3 specimens. Twenty samples were large enough to encompass multiple regions. Lower jugular and posterior triangle lymph nodes (levels IV and V) were the most likely to be malignant (malignancy rates of 100% and 66.7%, respectively). Upper jugular and submandibular lymph nodes (levels II and I) were the least likely to be malignant (malignancy rates of 21.4% and 23.8%, respectively). We refrained from performing statistical analyses of the association between lymph node localization and the likelihood of malignancy, due to the low number of samples in the benign and malignant groups of each individual level and to the nonrandom selection of lymph nodes for biopsy. This nonrandom distribution reflects the fact that the most accessible and/or most suspicious lymph nodes were typically selected for biopsy, regardless of localization. Surgical complications were encountered in 4 patients (2.5%). Two patients suffered from cervical hematoma, which resolved with conservative treatment. One patient developed transient weakness of the marginal mandibular nerve, and 1 patient developed a submandibular abscess postoperatively, which required surgical drainage.
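The age-malignancy association above can be checked back-of-the-envelope from the reported aggregates: 158 patients in total, 104 of them above age 40; 61 malignant cases overall, 49 of them in the over-40 group (so 12 in the younger group of 54). The original analysis used the Pearson chi-squared test in SPSS; the sketch below uses SciPy instead, whose 2×2 routine applies Yates' continuity correction by default, so the statistic may differ slightly, but the conclusion P < .01 is reproduced.

```python
from scipy.stats import chi2_contingency

# Rows: age <= 40 (54 patients), age > 40 (104 patients)
# Columns: malignant, non-malignant (counts derived from the totals above)
table = [[12, 42],
         [49, 55]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")  # p < .01
```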
Discussion The epidemiology of peripheral lymphadenopathy varies widely across countries and ethnicities.0][21][22] In contrast, European studies tend to show much lower rates of tuberculosis with results closer to our data.One British study found the frequency of tuberculosis in 550 patients with peripheral lymphadenopathy to be 4.5%, 23 compared with a frequency of 1.9% in our study.This discrepancy may reflect the slightly higher prevalence of tuberculosis in the United Kingdom than in Germany. 24In addition, Kikuchi disease is known to be more common in Asian populations. 25,26In our series, 1 patient showed evidence of Kikuchi disease, adding to the few published case studies reporting its incidence in the German population. 27,28ven focusing on the German population alone, epidemiological shifts can be found across different time periods.For instance, a 1967 study by Matzker 29 analyzed a large series of 1553 cervical lymph node biopsies and found a frequency of malignancy amounting to 20.5%, which is considerably lower than the frequency of malignant disease in the current study (38.6%).The higher malignancy rate in our study could potentially be explained by the higher life expectancy in our current era as well as increased industrialization and exposure to oncogenic risk factors.However, such conclusions should be made with caution because the underlying pathologies in the limited number of cases in our study should not be readily extrapolated to the general population but rather used as a general indicator.Furthermore, the amount of additional information provided by modern diagnostic modalities (imaging, molecular, laboratory investigations) represents an inherent selection bias in the patient population being referred for biopsy nowadays compared with the time of the 1967 Matzker's study.In addition, many of the underlying pathologies were not yet recognized entities in that era, such as HIV or IgG4-associated disease.Nevertheless, this finding was consistent with the wellestablished trend of an increasing incidence of cancer within young and older adults in recent decades. 16][15] Because excision biopsies remain the gold standard for the reliable diagnosis and classification of lymphomas, we argue that this rising lymphoma epidemic will render excision biopsies an essential tool in the diagnosis of cervical lymphadenopathy in coming years.Considering their low complication rate (2.5% in our study), we argue that excision biopsies represent a useful alternative to less invasive biopsy techniques for appropriately selected patients with nonresolving lymphadenopathy, particularly in those with a reasonable suspicion of malignancy.This is especially true in older adults, due to the higher risk of malignancy.Based on our results, level IV and level V cervical lymph nodes may be viewed with more suspicion than those of the other levels of the neck, encouraging the decision to perform an excisional biopsy to exclude malignant disease. 
It should be noted that nonspecific reactive lymphadenitis was underrepresented in our study because the majority of cases show a benign self-limiting course and resolve spontaneously.All patients in our series had persistent nonresolving lymphadenopathy for at least 6 weeks, which could not be otherwise explained by a benign infection according to the history, examination, or serologic investigation.For example, no case of HIV was included in our analysis, as patients with serologic evidence of a new HIV infection were not referred for surgical biopsy.In contrast, malignancy was considerably overrepresented in our results, reaching 38.6% of all cases.This finding contrasted sharply with the much lower frequency of malignancy in patients presenting to their primary care physician, [1][2][3] which was reported to be between 1% and 2% of all cases. 2 Thus, due to the tertiary nature of our practice, the revealed pathologies are not representative of the etiologies of cervical lymphadenopathy in the general adult population because of the inherent selection bias.Along with the retrospective design, this represents the main limitation of our study. Conclusions The current study highlights the role of surgical excisional biopsy in the diagnosis of cervical lymphadenopathy.The main value of the present study is that it serves as an indicator of the variety of etiologies causing peripheral lymphadenopathy, including some very rare pathologies included in our results.Additionally, our data add support to the established relationship between increasing age and the risk of malignant disease and are consistent with the increasing incidence of malignancy in the current era.The findings should alert otolaryngologists to the value of lymph node excisional biopsy to rule out malignancy in persistent cervical lymphadenopathy, especially in older adults. Table 1 . The Distribution of Patients According to Age Group. Table 2 . The Distribution of Specimens According to the Underlying Pathology. Table 4 . The Distribution of Specimens According to the Underlying Pathology and Age Group. Table 5 . The Distribution of Specimens With Regard to Malignancy Risk According to Anatomical Localization. Table 3 . The Distribution of the Subtypes of Non-Hodgkin Lymphoma Specimens.
2021-06-09T06:18:29.061Z
2021-06-07T00:00:00.000
{ "year": 2021, "sha1": "1ec379810a74c90e084ac5e753bdadb3b6f81f27", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/01455613211023009", "oa_status": "GOLD", "pdf_src": "Sage", "pdf_hash": "d8ffa600d598e1e840b649bf2bc1418cad641f12", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
5878589
pes2o/s2orc
v3-fos-license
A Genetic Approach to Spanish Populations of the Threatened Austropotamobius italicus Located at Three Different Scenarios Spanish freshwater ecosystems are suffering great modification and some macroinvertebrates like Austropotamobius italicus, the white-clawed crayfish, are threatened. This species was once widely distributed in Spain, but its populations have shown a very strong decline over the last thirty years, due to different factors. Three Spanish populations of this crayfish—from different scenarios—were analysed with nuclear (microsatellites) and mitochondrial markers (COI and 16S rDNA). Data analyses reveal the existence of four haplotypes at mitochondrial level and polymorphism for four microsatellite loci. Despite this genetic variability, bottlenecks were detected in the two natural Spanish populations tested. In addition, the distribution of the mitochondrial haplotypes and SSR alleles show a similar geographic pattern and the genetic differentiation between these samples is mainly due to genetic drift. Given the current risk status of the species across its range, this diversity offers some hope for the species from a management point of view. Introduction Spain is the country with the greatest biodiversity of Europe, with around 80,000 catalogued taxa. The maritime barrier of the Mediterranean, the land barrier of the Pyrenees in the North, and the country's orographic and climate peculiarities, invest it with unique biogeographic characteristics. Therefore, the country's large number of endemic-specially freshwater-species makes it a biodiversity hot spot [1]. At present, Spanish freshwater ecosystems are suffering great modification at the hands of climate change, environmental degradation, habitat fragmentation, the rise in human demand for water, and a range of human activities. Together, these factors have contributed to a notable increase in the size of Spain's arid and semiarid regions, and to changes in its biodiversity [2]. In 2008 more than 80% of Spanish endemisms were reported to suffer some level of threat in the IUCN Red List. At present, the assessments for some of these species have got worse. Among the macroinvertebrates, the crayfish Austropotamobius italicus was listed as vulnerable in 2008, but in 2010 it has been categorized as endangered [3]. Austropotamobius italicus was once a cornerstone of Iberian freshwater ecosystems with large populations widely distributed throughout most of the country's limestone basins. Indeed, it was absent in the more western areas, the highest mountain ranges, and the subdesert areas of the southeast and River Ebro valley. The dramatic decline in its numbers all over its Spanish range is the result of a combination of the factors mentioned above, as well as of the introduction of exotic crayfish species and the related spread of crayfish plague (caused by the fungus Aphanomyces astaci). As a consequence, only around 1000 small populations now remain in Spain (Alonso, pers. com.) occupying marginal areas or short stretches of watercourses usually isolated from the main river systems [4]. At present, restoration programs are based mainly on translocation of individuals from other natural or farmed populations and are limited by the low number and abundance of existing populations. Besides, it should be taken into account that availability of individuals for restocking purposes needs to be substantially increased by either traditional hatcheries or extensive ponds [5]. 
These action plans consider several factors such as the risk of transmission of crayfish plague, the risk of survival when establishing new populations, the characteristics of water bodies to be restored, or the distribution of exotic species in those areas [5,6]. Notwithstanding, a major goal of such programs should also preserve genetic variability-the basis for viability and future evolution of populations [7,8]. Indeed, knowledge of the levels and patterns of distribution of the genetic diversity is critical when making conservation management decisions. In other words, effective long-term conservation planning must incorporate genetic information [9] because the loss of genetic variation and inbreeding depression put wildlife populations at an increased risk [10]. In this context, our group is conducting a comprehensive study on the genetic variation and its distribution in Austropotamobius italicus populations from Spain. Our previous survey, by random amplified polymorphic DNA (RAPD) fingerprinting, detected a certain degree of polymorphism in some of the populations tested [11]. Three molecular markers were used in the present study, two of them mitochondrial and the other one, nuclear. This approach combines the advantages of both methods. It is clear that the mitochondrial genome of animals is an excellent target for genetic analysis because of its lack of introns, its limited exposure to recombination, and its haploid mode of inheritance [12]. Mitochondrial DNA (mtDNA) has proved to be powerful for genealogical and evolutionary studies of animal populations. Otherwise, microsatellite loci are highly polymorphic markers, distributed throughout the nuclear genome and generally not linked to loci under strong selection [13]. These codominant markers have revealed substantial variation in species with low variability in other nuclear markers [14] and have been used to study genetic differentiation among closely related populations [15]. Taking into account all above, our aim was to study the genetic variability of three Spanish populations of whiteclawed crayfish belonging to three different scenarios-protected area, crashed population, and hatchery. Samples. A total of 45 individuals of Austropotamobius italicus were collected from three different populations ( Table 1). One of them, located in a protected area: NAV, a native population. A second population, RIL, from a crayfish hatchery, maintained with a high effective number. Thirdly, GRA also a native population whose number crashed during the 1990s due to several pathologies. For each population ten individuals were studied employing two mitochondrial markers: cytochrome oxidase subunit I (COI) and 16S rDNA gene. For SSR analysis fifteen individuals from each of the three populations sampled were used. A fragment from the mtDNA COI gene was amplified in a final volume of 50 μL with 25 ng of total DNA, 1x reaction buffer, 2 mM MgSO 4 , 200 μM of each dNTP, 15 μg of BSA, 1 μM of each primer, and 1 U of Vent DNA polymerase (New England Biolabs, Ipswich, MA, USA). The primers used were COI Scylla [17] and LCO [18]. The optimal PCR programme include an initial denaturation step of 94 • C for 5 min followed by 44 cycles of 94 • C for 45 s, 53 • C for 1 min, and 72 • C for 1 min 30 s, and a final extension step of 72 • C for 10 min. 
The selective amplification of a segment of the mtDNA 16S rDNA was performed with the primers 1472 and Tor12sc [19], applying the following PCR conditions: an initial Double-stranded amplified products for both mitochondrial markers were purified with the High Pure PCR Product Purification Kit (Boehringer-Manheim) and used as templates for sequencing reactions. These reactions were carried out with the "BIG Dye Terminator Cycle Sequencing Ready Reaction Kit" (Applied Biosystems, Inc., USA) on a 3730 DNA Analyzer (Applied Biosystems, Inc., USA), using the primers employed for the amplification step, at the Genomic Unit of The Complutense University of Madrid. The SSR study included five loci. All primers used were developed by Gouin et al.: Ap1, Ap2, Ap3, Ap5 reverse, Ap6 [20], and Ap5 forward [21]. SSR loci were amplified using a forward labelled primer with one of the Applied Biosystems fluorochromes 6-FAM, PET or VIC. QIAGEN Multiplex PCR kit (Qiagen, Hilden, Germany) was used to amplify the Ap1, Ap2, Ap3, and Ap6 SSR loci. Reaction with a final volume of 6. PCR products were run with the internal size standard GeneScan 500 LIZ (Applied Biosystems, USA) on a 3730 DNA Analyser (Applied Biosystems, USA). Allele size was determined through Peak Scanner Software v1.0 (Applied Biosystems, USA). mtDNA Alignment and Sequence Analysis. The nucleotide sequences of the mitochondrial DNA were aligned using CLUSTAL W software [22] and edited with BioEdit v 7.0.9.0 [23]. After alignment and edition, amplified fragments, 16S (1317 bp) and COI (1184 bp), were also used together to obtain a single sequence of 2501 bp length. The genetic diversity estimates (haplotype diversity, H; nucleotide diversity, π; number of segregating sites, S) were calculated using DnaSP v 5.10.01 programme [24]. F ST pairwise genetic distances [25], which quantify how genetic diversity is partitioned within and between populations, and gene flow (Nm), were estimated through DnaSP v 5.10.01 software package [24]. Principal component analysis (PCA) [26] was performing using NTSYSpc v2.10q software package [27] to visualize the grouping populations. Finally, haplotype frequencies for each mitochondrial gene were geographically depicted for each population using PhyloGeoViz v 2.4.4 [28]. Microsatellite Analysis. Genetic diversity was quantified as the mean number of observed alleles per locus (A), effective number of alleles per locus (n e ), the observed heterozygosity (H o ) and the Hardy-Weinberg expected heterozygosity (H e ) [29] through Popgene software [30]. Genepop v4 [31] was employed to estimate deviations from Hardy-Weinberg equilibrium across populations and across loci using the Markov chain method (10000 iterations). The software also tested the linkage disequilibrium across all populations and estimated the inbreeding coefficient F IS within each population by the Weir and Cockerham [32] method. To assess the effects of genetic drift and mutation in the structure of these populations, two statistics of genetic differentiation, F ST [32] and Rho ST [33], were calculated (Genepop software). R-statistics were expected to be larger than F-statistics when stepwise-like mutations have contributed to population differentiation [34]. Otherwise, if both statistics are similar, genetic drift is considered the main force for genetic differentiation. Wilcoxon's signed ranked test was performed to assess differences between F ST and Rho ST estimates. 
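For readers without access to DnaSP, Popgene, or Genepop, the standard estimators named above are simple enough to reimplement. The sketch below computes Nei's haplotype diversity and the nucleotide diversity for aligned mtDNA sequences, plus observed and unbiased expected heterozygosity for SSR genotypes. It is a toy version under simplifying assumptions: alignment gaps are ignored, and the F_IS shown is the simple 1 − Ho/He form, not the full Weir and Cockerham variance-components estimator used in the paper.

```python
from collections import Counter
from itertools import combinations

def haplotype_diversity(seqs):
    """Nei's gene (haplotype) diversity: Hd = n/(n-1) * (1 - sum p_i^2)."""
    n = len(seqs)
    p = [c / n for c in Counter(seqs).values()]
    return n / (n - 1) * (1 - sum(x * x for x in p))

def nucleotide_diversity(seqs):
    """pi: mean pairwise differences per site in an alignment."""
    n, length = len(seqs), len(seqs[0])
    pairs = n * (n - 1) / 2
    diffs = sum(sum(a != b for a, b in zip(s1, s2))
                for s1, s2 in combinations(seqs, 2))
    return diffs / pairs / length

def ssr_locus_stats(genotypes):
    """genotypes: list of (allele1, allele2) tuples at one SSR locus.
    Returns Ho, Nei's unbiased He, and the simple Fis = 1 - Ho/He."""
    n = len(genotypes)
    ho = sum(a != b for a, b in genotypes) / n
    alleles = Counter(a for g in genotypes for a in g)
    p2 = sum((c / (2 * n)) ** 2 for c in alleles.values())
    he = (2 * n / (2 * n - 1)) * (1 - p2)
    return ho, he, (1 - ho / he if he > 0 else float("nan"))
```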
The genetic structure of the populations was inferred using the model-based clustering algorithms implemented in STRUCTURE v2.0 [35]. For parameter estimation, the admixture model with correlated allele frequencies was used (with a burn-in length of 100,000 MCMC iterations followed by 100,000 further repetitions). In order to analyse the homogeneity of samples, a correspondence analysis (CA) on the matrix of allele counts per sample, both at the population and the individual level, was conducted using Genetix v4.05.2 software [36]. This technique is especially useful when the number of available loci is limited. mtDNA Analysis. A 2501 bp fragment (a 1184-nucleotide sequence from the COI gene and 1317 bp from the 16S rDNA gene) was obtained from 30 individuals. Sequence analysis revealed four single nucleotide polymorphisms (SNPs), three of them informative under parsimony (Table 2). Four haplotypes were identified, three at high or intermediate frequencies (Haplotypes 1-3) and the remaining one (Hap 4) at low frequency. Likewise, Haplotypes 1 and 2, which differ by a transition at position 1536, account for 80% of individuals (Table 2). Regarding the populations, NAV and RIL show genetic diversity at the mtDNA level, since two different haplotypes were detected in each sample (Hap 1 and Hap 3 in the RIL population, and Hap 2 and Hap 4 in the NAV sample). The highest haplotype and nucleotide diversity were found in the farmed population (RIL) (Table 1). Relationships among these three populations were visualized by principal component analysis (Figure 1). The first PCA axis explains 84.86% of the variance and reveals two well-separated groups: NAV (mainly Hap 2), and GRA and RIL (mostly Hap 1). The second PCA axis explains 15.14% and separates the populations carrying Hap 1 into two different groups: GRA (Hap 1) and RIL (Haplotypes 1 and 3). As shown in Figure 2, the haplotypes found were not evenly distributed across samples. At the COI level, GRA and RIL shared one of the haplotypes found, although GRA was monomorphic whereas RIL also presented a private haplotype. In addition, the NAV population held two different and exclusive haplotypes. Nonetheless, only two different groups were observed at the 16S rDNA gene, since a single mutation separated the two haplotypes found; thus, GRA and RIL shared the same haplotype while the NAV sample held the other one. Microsatellite Analysis. A total of 45 individuals were analysed through five SSR loci, four of which were polymorphic (Figure 3). Parameters of genetic diversity are displayed in Table 3. The GRA population had the highest values for the mean observed allelic diversity per locus (A) and the effective number of alleles (n_e), while the NAV sample showed the lowest ones. In all populations, the average observed heterozygosity (H_o) was lower than the average expected heterozygosity (H_e), mainly in the NAV sample, where a large homozygote excess was observed. The F_IS values, as expected, confirm these results (Table 3). Significant deviations from Hardy-Weinberg expectations were found for Ap2 and Ap3 in all populations, as well as for the Ap6 locus, except in the RIL sample. No significant linkage disequilibrium was detected between pairs of loci in these populations. Clustering analysis by STRUCTURE (Figure 4) revealed that the three geographic groups (GRA, NAV, and RIL) indeed represented four genetically distinct populations. The correspondence analysis (CA, Figure 5) agreed with the STRUCTURE analysis.
The first axe (70% of the inertia) clearly separated NAV sample from the remaining two studied populations, as the mtDNA markers did. The second axis (around 30% of the inertia) disjoined farmed population (RIL) from GRA, although a narrow area exists where individuals belonging to these two populations overlap. According to Wilcoxon's signed ranks test, Rho ST and F ST values were similar (P > 0.05). The highest F ST genetic distance was found between GRA and NAV samples (F ST = 0.1030) and the lowest between GRA and RIL populations (F ST = 0.0518). Discussion The main goal of the present work was the study of the genetic variability of three Spanish samples (Table 1) of both historical and recent evolutionary events, a double approach was used to perform this task. On one hand, most phylogeographic studies of animals have relied on the analysis of mtDNA sequence variation due to its unique attributes and the different mutation rates compared to most nuclear genes. Its analysis has proven useful in defining major phylogeographic assemblages within species, including the European freshwater species-complex of Austropotamobius [37][38][39]. On the other hand, SSR nuclear loci have high mutation rates and tend to recover genetic variability quickly after the action of processes that affect it negatively. Thus, the molecular footprints on these SSR loci should be less long standing than in mitochondrial genome [40]. In this way, the SSR markers have been useful for addressing questions relating to current population structure of many freshwater species [41], including crayfish [42]. With respect to both analysed mtDNA sequences, COI gene resulted more sensitive for detecting genetic variability than 16S rDNA. COI is a powerful marker for the study of the genetic variation at the intraspecific level in crayfish [43,44] and other crustaceans [45][46][47] because its rate of molecular evolution is about threefold greater than that of 16S rDNA gene [48]. Notwithstanding, given that the entire nonrecombining mitochondrial genome can be considered as a single locus from a genealogical perspective [49], the two mitochondrial markers-COI and 16S-are discussed together in the present work. Results indicate that the species exhibits in Spain a certain degree of genetic diversity at both mtDNA (16S rDNA and COI gene) and nuclear (SSR loci) level, despite the limited number of samples-from different scenarios-analysed in this study. Four different mitochondrial haplotypes (Table 2) have been found, while for years the lack of genetic variability at mitochondrial level in Spanish populations of A. italicus was an accepted hypothesis [38,50]. The degree of genetic diversity (Hd, π, Table 1) is higher than the previously reported for Iberian crayfish [37,39] and similar, or even higher, to values obtained for others European populations with mitochondrial markers [40,51]. Concerning microsatellite loci, four out of five markers tested were polymorphic, though most alleles showed low frequencies. The heterozygosity was relatively low albeit the mean number of alleles per locus detected was higher compared with other European populations [42,52,53] (Table 3, Figure 3). The existence of genetic variability in Spanish populations of this crayfish corroborates our former surveys, also at nuclear level, trough RAPD and ISSR markers [2,11]. As shown in Figure 1, evidence for genetic differentiation among these three populations at mitochondrial level occurs. 
The distribution of mitochondrial haplotypes shows a clear geographic pattern (Figure 2) where Northern Spanish population do not share the haplotypes present in the Southern one, according to the reported by other authors [37,39,54]. The existence of fixed mtDNA differences between the populations analysed is consistent with severe restrictions on population size and geographic isolation. Given that mtDNA contains about one-fourth of the genetic variation included in the nuclear genome, large portions of the haplotypes can be wiped out during bottleneck events [55,56]. Data from SSR analysis seem to bear out the above pattern. Although a certain degree of genetic variability was found, the four polymorphic SSR markers show a single allele nearly fixed (Figure 3) as expected for recently bottlenecked populations. Comparisons between Rho ST and F ST values indicate that genetic differentiation among these samples can be attributed to genetic drift. As shown in the STRUCTURE and correspondence analyses (Figures 4 and 5), the geographic labels matched very closely to the genetic clustering. The Northern population (NAV) is clearly separated from the other two. These analyses also highlight that some specimens from different populations are genetically similar at nuclear level, since some individuals from RIL sample are included in GRA cluster (Figure 4). The close relationship between GRA and RIL (hatchery) specimens could be explained by human translocationsa common practice in Spain since the 19th century-from GRA area to the source populations that gave rise to the current farmed sample or by the existence of a common ancestor population with a wider distribution in the far past. Focusing on the Southern population, GRA shows a unique mitochondrial haplotype and a single allele per SSR locus at high or very high frequency, whereas the other ones remain at low frequencies ( Table 3). The effective number of SSR alleles was also lower than expected, indicating a homozygote excess in the sample. The single haplotype found at mtDNA level suggest a strong bottleneck, proposed by some authors for this species [49,50] around the last glaciation. In addition, SSR outcomes reveal a decline in population size recently. As a matter of fact, in the last two decades this population had suffered a drastic regression in number due to high mortalities caused by Saprolegnia spp., severe climatic droughts during the 1990s and a big flood at 1997 [57]. The genetic drift caused by bottlenecks was intensified, thus some haplotypes/alleles have been eventually lost while others became fixed according to its frequency [58]. Notwithstanding the presence of private SSR alleles is indicative of a recent increasing in GRA population size. The Andalusian Regional Government started at 2002 The Scientific World Journal 7 a conservation and management program of the whiteclawed crayfish [59] that contemplates the development of emergency plans for drought and disease among other measures. Though the GRA sample is not directly situated in a protected area, possibly this population is benefiting from the strategies adopted in surrounding areas, including control/eradication of alien species. The Northern population (NAV), although located in a protected area since 1996, showed the lowest variability at number of alleles per locus and exclusive alleles (Figure 3). 
These results indicate a recent and strong bottleneck since allelic diversity seems to be one of the most sensitive methods for detecting relatively recent demographic bottlenecks [13]. It is because rare alleles, which contribute little to the average heterozygosity, are easily lost during population size constrictions [58,60]. In addition, the rate of inbreeding, unavoidable in small populations, is not negligible (F IS = 0.8303, Table 3). Inbreeding reduces fitness [61] and usually enhances susceptibility to infectious diseases [62]. It is known that late in 20th century, high mortalities occurred in this region mainly due to the crayfish plague caused by Aphanomyces astaci [63]. Although two haplotypes were found, the mitochondrial analysis also supports the existence of a more ancient bottleneck because the 90% of crayfish shared the same mtDNA haplotype (Tables 1 and 2). NAV sample comes from a waterbody where Pacifastacus leniusculus also inhabits [64,65] so, competition for space and diseases such as aphanomycosis could be some of the causes of the low genetic variability found in this population. Thus, genetic studies are necessary to ensure this species' future even in protected areas and to guarantee the existence of suitable levels of genetic variability in crayfish populations. The hatchery population (RIL) exhibits the highest genetic diversity at mitochondrial level and moderate values for all the SSR variability parameters (Tables 1 and 3). The variability found may be explained by the fact that it was established in the 1980s with crayfishes from distinct basins and since then, it was kept under favourable conditions which have allowed to maintain a high population density [11,66]. At present, Spanish populations of white-clawed crayfish are in regression due to environmental changes among other factors [2,5,67]. The current levels of genetic variability of A. italicus in Spain are affected by successive and drastic bottlenecks and consequently, by the action of the genetic drift, enhanced in these small and fragmented populations. However, ancient historical events such as population fragmentation, recolonizations from refugia during the ice ages [37,38,50,68], or the formation of fluvial basins, must also have influenced their present structure, as well as it has been demonstrated in other species [69][70][71]. Our results underscore the usefulness of employing both mitochondrial and nuclear markers to assess current levels of genetic variability of the populations analysed as well as their genetic structure. Genetic information of the present study should be taken into account for future conservation plans. Though Spanish populations are in decline, a certain degree of genetic diversity has been detected. Given the pattern of the genetic variability found, it would be advisable an increase of within-population heterozygosity without eroding the differentiations that characterize the genetic structure of these Spanish samples. In this way, future works with more samples are needed in order to confirm these results and to provide guidelines about restocking purposes in each area. Hence, the crayfish hatchery analysed in this study (RIL) could be suitable for restocking the Southern population (GRA) but it is not fitted for the Northern one (NAV) that has to be considered as a different management unit [72], given its particular genetic characteristics.
2018-04-03T04:13:06.120Z
2012-05-03T00:00:00.000
{ "year": 2012, "sha1": "1fa76b6cb81502e8cb570f552188d87950d0c7f5", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/tswj/2012/975930.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f2818113bd009c53bf24c37323acf4c5e5617798", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
248093785
pes2o/s2orc
v3-fos-license
Phase Unwrapping and Frequency Points Subdivision of the Frequency Sweeping Interferometry Based Absolute Ranging System Frequency sweeping interferometry (FSI) based absolute distance ranging has high precision and no ranging blind area. It can be used to realize large-scale measurement of non-cooperative targets. However, the nonlinear frequency modulation of the laser seriously affects the ranging accuracy. In this manuscript, a measurement method assisted by the Hilbert Transform (HT) and the Chirp-z Transform (CZT) is proposed, which realizes phase unwrapping of the beat signal, a reduction in the length of the delay fiber of the auxiliary optical path, and an improvement in frequency resolution. The narrow-band condition under which HT is applicable is further studied. In the experiment, the ranging resolution is 70 μm and the standard deviation is 12.6 μm over a distance of 4005 mm. Introduction With the advantages of no ranging blind area, applicability to non-cooperative targets, and a high signal-to-noise ratio, FSI is coming into prominence in the field of engineering applications, such as optical coherence tomography [1], the measurement of three-dimensional coordinates [2] and of rotor parameters [3], important equipment manufacturing [4], and satellite formation [5]. FSI is based on the Michelson interferometer, and the distance is calculated from the beat frequency produced by the frequency sweep and the phase difference between the two arms [6,7]. Hence, the precision of FSI in absolute distance measurement relies strongly on the tunability of its laser source [8]. However, the hysteretic response of the piezoelectric transducer (PZT) causes nonlinear frequency tuning of the external cavity diode laser (ECDL), which introduces phase errors into the interferometric signal [9-12]. Nonlinear frequency sweeping is suppressed either by upgrading hardware or by using software algorithms [13-15]. Photoelectric crystals [16], phase-locked loops (PLL) [17], and other technologies are used for active frequency stabilization. Iiyama et al. [17] obtained the frequency difference between the reference signal and the heterodyne interference signal with a lock-in amplifier; after PLL modulation, the improvement in linearity was estimated to be about 10 dB. Greiner et al. [16] added an intracavity electro-optic crystal to the resonator for frequency stabilization; the current-induced frequency modulation noise was reduced by two orders of magnitude. Kakuma [18] marked the sweeping frequency of the laser with the characteristic absorption frequencies of a Rb cell; the gradient of the interference fringes was accurately determined by the linear least-squares fitting method, and the theoretical ranging accuracy was better than one quarter of the wavelength. These methods are based on hardware upgrades, which make the ranging system more complex. Furthermore, several algorithms are used to correct nonlinear frequency sweeping [19,20]. Shi et al. [21] resampled the beat signal in the measurement optical path with the peak-valley resampling method, which linearized the frequency ramp; the distance spectrum of the resampled signal was obtained by the fast Fourier transform (FFT), and the measurement precision was enhanced to 50 μm within 8.7 m. Liu et al. [22] tracked the frequency of the interference signal with an extended Kalman filter; the relative phase extraction error in the fractional part is <1.5% and the standard deviation of absolute distance measurement is <2.4 μm.
In order to satisfy equal-frequency-interval sampling, sampled points are often selected at the peak and valley positions of the auxiliary-path interference signal. To satisfy the Nyquist sampling theorem, the optical path difference of the auxiliary optical path then needs to be at least twice that of the measurement optical path, which puts a strict demand on the length of the delay fiber of the auxiliary optical path. In addition, the all-phase FFT is used to calculate the frequency information of the beat signal. Since the accuracy of the FFT is related to the number of points of the signal, the more points involved in the FFT, the higher its accuracy; the number of points is usually set to a power of two (2^N). This reduces the calculation speed and wastes a large number of information points. Moreover, an optical frequency comb can calibrate the sweeping frequency of a continuous wave (CW) laser [23]. Jia et al. [24] corrected the nonlinearity by sampling the ranging signals at equal frequency intervals with a microresonator soliton comb. However, this measurement method is not suitable for industrial environments because of its complex structure and considerably high cost [25]. In this manuscript, we expand the phase of the beat signal of the auxiliary optical path by HT. The number of sampled points is increased with an equal frequency interval in each period, and the peak frequency of the resampled signal is refined by CZT. Compared with the FFT, the advantage of CZT is that only M points near the peak frequency need to be calculated, which greatly reduces the number of computed points and improves the calculation speed.

Methodology

Figure 1 shows a schematic diagram of the FSI ranging system. The triangular wave signal produced by the signal generator is transmitted to the PZT, which modulates the laser frequency. The laser emitted by the ECDL is split by the fiber coupler (FC1): 80% of the optical power passes through the measurement optical path and 20% through the auxiliary optical path. The optical power of the measurement optical path is divided into two beams by the fiber coupler (FC2); 90% of the optical power is incident on the retroreflector (RR), and the reflected light is combined with the local oscillator power (10%) at the 50:50 fiber coupler (FC4). The sweeping frequency (f), the light intensity of the beat signal (I_mea), and the relationship between the time delay (τ_0) and the measurement distance (L) can be expressed as Equations (1)-(3) [26], where a is the frequency sweep rate of the laser, f_0 is the initial frequency, n is the refractive index, c is the speed of light, t is the time, and I_0 is the normalized light intensity. Since the modulating signal is a triangular wave, the frequency of the beat signal is a fixed value equal to 2aτ_0. Thus, Equation (3) can be rewritten as Equation (4), where f_beat is the frequency of the beat signal. Similarly, in the auxiliary optical path, the optical power is divided into two parts by the 50:50 fiber coupler (FC3), which are merged at the 50:50 fiber coupler (FC5). The time delay (τ_r) in the auxiliary optical path is set by the length of the delay fiber; the beat signal of the auxiliary optical path (I_ref) can be expressed as Equation (5).
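Since the displayed forms of Equations (1)-(5) did not survive extraction, the following minimal Python sketch illustrates the signal model described above under stated assumptions: a ramp sweep of rate a with a small quadratic nonlinearity, and delays τ_0 and τ_r treated as constants. All parameter values, and the convention for the factor of two in the beat frequency (which depends on how τ_0 is defined), are illustrative rather than taken from the paper.

```python
import numpy as np

c = 3e8             # speed of light, m/s
a = 1.5e12          # nominal sweep rate, Hz/s (illustrative)
eps = 5e10          # quadratic sweep nonlinearity, Hz/s^2 (illustrative)
tau0 = 2 * 4.0 / c  # round-trip delay of a 4 m measurement arm in air, s
tau_r = 1e-8        # delay of the auxiliary fiber (illustrative), s

t = np.linspace(0.0, 1e-3, 1_000_000)

def sweep_phase(t):
    """Optical phase / (2*pi) of the swept laser: integral of f(t) = a*t + eps*t**2/2."""
    return a * t**2 / 2 + eps * t**3 / 6

def beat(t, tau):
    """Beat signal of an interferometer arm with delay tau."""
    return np.cos(2 * np.pi * (sweep_phase(t) - sweep_phase(t - tau)))

I_mea = beat(t, tau0)   # measurement-path beat signal, cf. Equation (2)
I_ref = beat(t, tau_r)  # auxiliary-path beat signal, cf. Equation (5)
# With eps != 0 the instantaneous beat frequency drifts in time, which is
# exactly what broadens the distance spectrum shown later in Figure 2b.
```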
The higher-order expansion of the actual frequency sweep rate of the laser can be expressed as a = a_0 + Σ_{i=1}^{R} a_i t^i, where a_i is the nonlinear frequency modulation coefficient, R is the order of the highest nonlinear term, and a_0 is the slope of the frequency modulation. Equation (2) can then be rewritten as Equation (6). As shown in Figure 2a, the blue dotted curve represents an ideal beat signal with a fixed period, and the blue solid curve is the actual beat signal with an unstable period. Figure 2b shows the distance spectrum without resampling. The full width at half maximum (FWHM) represents the ranging resolution, which is severely widened; thus, an effective distance cannot be obtained from Figure 2b. In order to suppress the nonlinear frequency modulation of the laser, we built an auxiliary optical path in the FSI system, namely an auxiliary interferometer with a fixed delay fiber. The beat signal of the auxiliary optical path is used to resample the beat signal of the measurement optical path, and the resampled signal (I_sam) is given by Equation (7), where N indexes the resampled points at the equal phase interval of π; the resampling frequency (F_s) can be expressed as Equation (8). The peak frequency (F_sp) of the spectrum after resampling is given in Equation (9), where k is the peak frequency point. It can be seen from Equation (7) that the nonlinearity of the frequency modulation is eliminated. In order to increase the number of resampled points, HT is used to expand the phase of the beat signal of the auxiliary optical path. However, HT is only applicable to narrow-band signals. Figure 3 shows the orthogonal transformation of the HT signal. When the nonlinear frequency variation of the laser is within the first order, there is only one oscillatory mode in the orthogonally transformed signal, as shown in Figure 3a. Figure 3b shows the inconsistent group delay of the Hilbert-transformed signal when the nonlinear frequency variation of the laser is of second order, which leads to a change in the envelope of the orthogonally transformed (OT) signal. Furthermore, Figure 3c demonstrates that the orthogonally transformed signal has multiple oscillatory modes when the nonlinear frequency variation of the laser extends beyond the third order [27]. HT is therefore not suitable when the nonlinear frequency variation of the laser extends beyond the second order. As shown in Figure 4a, the beat signal I_r(t) of the auxiliary optical path is phase-shifted by π/2 via HT, and the dotted line represents the imaginary part H{I_r(t)} of the analytic signal [28]. The phase of the beat signal of the auxiliary optical path can then be expressed as Equation (10), φ(t) = arctan[H{I_r(t)}/I_r(t)]. The instantaneous phase of the beat signal of the auxiliary optical path with 12 sampled points in each period is shown in Figure 4b. The corresponding resampled points of the beat signal of the measurement optical path are shown as red dots in Figure 4c. The resampled signal can be represented as Equation (11), where N_h is the number of resampled points in each period. As long as 2τ_0/N_h < τ_r, the sampling theorem is satisfied and aliasing of the resampled signal will not occur. The spectral information of the beat signal is usually obtained by the FFT. However, the FFT is suited to periodic stationary signals, since it is based on the global information of the signal. To improve the frequency resolution, it is necessary to increase the number of global sampled points of the beat signal, which increases the number of useless computed points away from the main frequency.
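As a concrete illustration of the HT subdivision resampling just described, here is a short Python sketch using scipy.signal.hilbert and numpy.unwrap. It assumes the arrays I_mea and I_ref from the previous sketch; ht_resample is a hypothetical helper name, not from the paper.

```python
import numpy as np
from scipy.signal import hilbert

def ht_resample(I_mea, I_ref, n_h):
    """Resample I_mea at equal phase increments of the auxiliary beat signal.

    The analytic signal of I_ref gives its instantaneous phase (Equation (10)),
    which is unwrapped and then sampled every 2*pi/n_h, i.e. n_h points per
    auxiliary period, cf. Equation (11).
    """
    analytic = hilbert(I_ref)                 # I_ref + j * H{I_ref}
    phase = np.unwrap(np.angle(analytic))     # monotonic auxiliary phase
    grid = np.arange(phase[0], phase[-1], 2 * np.pi / n_h)
    # fractional sample indices where the auxiliary phase crosses the grid
    idx = np.interp(grid, phase, np.arange(len(phase)))
    return np.interp(idx, np.arange(len(I_mea)), I_mea)

I_sam = ht_resample(I_mea, I_ref, n_h=4)  # now uniform in optical frequency
```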
CZT solves this problem with helical sampling on the unit circle in the complex frequency domain, which makes it suitable for focusing on the local characteristics of the signal [29]. The frequency range to be analyzed by CZT is f_w ∈ (f_min, f_max), which is refined with M sampling points. The f_min and f_max can be expressed as Equation (12), where f_s is the sampling frequency, M represents the number of refined points in the frequency domain (0 < M < L_res), L_res is the length of the resampled signal, and θ_0 and φ_0 represent the angle between adjacent sampling points and the phase of the starting position, respectively. The frequency resolution of CZT is then defined by Equation (13). As long as M is large enough, the spectral resolution of CZT can be made significantly finer. We can select appropriate f_min, f_max, and M to analyze the actual beat signal.

Experiment and Results

To verify the feasibility of this method, we built the FSI ranging system shown in Figure 5. The light source of the system is an ECDL (New Focus TLB-6700) with a linewidth of 200 kHz. The frequency sweeping range is from 1542.5 nm to 1557.5 nm, and the sweeping speed is 15 nm/s. The experiments are carried out in a constant temperature and humidity environment. The ranging system is divided into an auxiliary optical path and a measurement optical path. The change in fiber length caused by temperature variation was very small, so its influence on ranging accuracy can be ignored. The beat signals of the measurement optical path and auxiliary optical path are detected by PD1 and PD2 (Thorlabs PDA10CS2), and both are sampled and recorded by an oscilloscope (Tektronix MSO 70000). In order to study the time-to-frequency-domain conversion efficiency of the FFT and CZT, the same resampled beat signal is processed by both, as shown in Figure 6. To facilitate the comparison of the spectral information, we normalized the amplitude. The resampled beat signal has 222,706 points. Figure 6a displays the result of a 222,706-point FFT of the resampled beat signal. Figure 6b shows the result of the FFT with zero-padding; the number of FFT points is 262,144. Figure 6c demonstrates the result of the CZT of the resampled beat signal; the number of CZT points (M) is 2000. Due to the spectral leakage of the FFT, its frequency resolution is limited. The CZT, by contrast, is much more flexible and offers high resolution, and the number of CZT points is much smaller than that of the FFT. The resolution can be tailored by adjusting the start and stop frequencies (f_min, f_max) and the number of points M. Hence, CZT is used to process the resampled beat signal in this work. Figure 7a shows the distance spectrum obtained with the HT subdivision resampling method and CZT. The optical path difference of the auxiliary optical path is about 3105 mm. Hence, four points are sampled (N_h = 4) at an equal frequency interval in each period, and the phase interval between points is π/2. The ranging resolution (FWHM) in Figure 7a is 70 µm, which is greatly improved compared with the result in Figure 2b. In order to provide a quantitative estimate of the resolution enhancement achieved with the proposed technique, a spatial resolution experiment was performed. The pyramid prism was moved unidirectionally by 70 µm on a displacement table (Thorlabs LPS710, which can achieve 800 µm of displacement with a step accuracy of 6 nm in closed-loop operation).
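For the spectral refinement step, SciPy (version 1.8 or later) exposes the chirp z-transform through scipy.signal.czt and the convenience wrapper scipy.signal.zoom_fft, which evaluates the spectrum on a narrow band only. The sketch below, built around the hypothetical helper refine_peak, zooms a band around the coarse FFT peak of the resampled signal; the band width and M are illustrative choices.

```python
import numpy as np
from scipy.signal import zoom_fft

def refine_peak(I_sam, fs, m=2000, band=0.02):
    """Refine the beat-frequency estimate with a CZT zoomed onto the FFT peak.

    fs   : sample rate of the resampled signal
    m    : number of CZT points across the zoom band (M in the text)
    band : half-width of the zoom band as a fraction of fs/2
    """
    spectrum = np.abs(np.fft.rfft(I_sam))
    freqs = np.fft.rfftfreq(len(I_sam), d=1.0 / fs)
    f0 = freqs[np.argmax(spectrum[1:]) + 1]          # coarse peak, skip DC
    f1 = max(f0 - band * fs / 2, 0.0)
    f2 = f0 + band * fs / 2
    zoomed = np.abs(zoom_fft(I_sam, [f1, f2], m=m, fs=fs, endpoint=True))
    # the grid step (f2 - f1)/(m - 1) plays the role of Equation (13)
    return f1 + (f2 - f1) * np.argmax(zoomed) / (m - 1)
```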
The result of the displacement measurement is shown in Figure 7b, where the red curve represents the distance spectrum after displacement, verifying the ranging resolution of 70 µm. Since environmental factors (turbulence, vibration, etc.) affect the result of a single measurement, averaging multiple measurements improves the reliability of the ranging results. After measuring 13 times, the standard deviation (σ) is 4.6 µm. To compare with the ranging accuracy of the traditional peak-valley resampling method at different distances, we moved the pyramid prism over a range of 1.5-4 m. Six different distances were arbitrarily selected and 13 sets of data were taken at each distance. With the HT subdivision resampling method, N_h varies with the measurement distance. The length of the delay fiber in the auxiliary optical path is fixed, and the corresponding N_h is set to 4, 6, 7, 8, 10, and 12, respectively, to meet the Nyquist sampling theorem. With the peak-valley resampling method, by contrast, the length of the delay fiber in the auxiliary optical path needs to be more than twice that of the measurement optical path. Figure 8a,b shows the standard deviation (σ) and resolution of the HT subdivision resampling and the peak-valley resampling methods, respectively. The red error bars represent the standard error and the blue dots the resolution. The resolution of the two methods is the same at the different measurement distances. However, σ and the standard error under the HT subdivision resampling method are much smaller than those of the peak-valley resampling method, which shows that the robustness of the HT subdivision resampling method is higher. In other words, environmental vibration has a greater impact on variations in the optical path length of long fibers. For the peak-valley resampling method, the required delay fiber in the auxiliary optical path grows with the measurement distance; hence σ gradually increases.

Discussion

The HT resampling method can refine the equal optical-frequency intervals within a period. Compared with the peak-valley resampling method used by Pan et al. [30], it significantly reduces the required length of the optical fiber and improves the measurement repeatability. By analyzing the results of HT resampling, a relationship between the measurement error and the number of resampled points is found. As shown in Figure 8a, σ reaches its minimum value (for example, at distances of 1500 mm or 3000 mm) when the number of resampled points just meets the Nyquist sampling theorem. When the number of resampled points exceeds this criterion, σ increases. This agrees well with the observation that more resampled points in each period means larger accumulated sampling errors. Moreover, the number of points sampled by the oscilloscope in each period is limited: the number of resampled points per period cannot keep increasing, or the conditions on the optical frequency intervals of the resampled points can no longer be met. This leads to the accumulation of sampling errors and affects the repeatability of the distance measurement. Hence, the length of the delay fiber cannot be reduced without limit. This restriction could be lifted by linearly interpolating the auxiliary and measurement beat signals; however, whether linear interpolation distorts the beat signal requires further study.
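The interplay between delay-fiber length and the number of resampled points can be packaged into a one-line check of the condition 2τ_0/N_h < τ_r quoted above. In the sketch below, the delays are assumed to be proportional to the respective optical path differences, with a round trip in air for the measurement arm; it returns only the minimal admissible N_h, whereas the paper's choices include extra margin.

```python
import math

def min_resample_points(distance_mm, opd_aux_mm=3105.0):
    """Smallest N_h with 2*tau0/N_h < tau_r, assuming tau ~ optical path difference."""
    opd_measure = 2 * distance_mm          # round trip in air (assumption)
    return math.floor(2 * opd_measure / opd_aux_mm) + 1

print(min_resample_points(4005))  # -> 6 at ~4 m; the paper uses N_h up to 12
```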
To deal with high-order nonlinear frequency modulation of the ECDL, the Hilbert-Huang transform could be exploited. Its empirical mode decomposition can be used to decompose the interference signal into intrinsic mode functions and a residual, and to reconstruct the interferometer signal after filtering out the high- and low-frequency components. This is worth future research.

Conclusions

In this manuscript, we propose an FSI ranging method based on the Hilbert transform and the Chirp-z transform, which realizes phase subdivision, an increase in the number of resampled points, and a refinement of the frequency points. The method relaxes the limit on the length of the delay fiber of the auxiliary optical path in an FSI ranging system and shows great advantages in large-scale measurement. The ranging resolution reaches 70 µm and the standard deviation is 12.6 µm over a range of 4005 mm. Compared with traditional peak-valley resampling methods, it has higher robustness. The short delay fiber reduces the impact of vibration on the FSI measurement system, making the FSI system more practical for absolute distance measurement. At the same time, benefiting from the HT algorithm, our method can be easily implemented in all-fiber optical systems, and the system can be configured as a Mach-Zehnder or Michelson interferometer without additional devices. Furthermore, this method contributes to the development of FSI laser ranging toward miniaturization and integration.
Living on the walls of super-QCD

We study BPS domain walls in four-dimensional $\mathcal{N}=1$ massive SQCD with gauge group $SU(N)$ and $F<N$ flavors. We propose a class of three-dimensional Chern-Simons-matter theories to describe the effective dynamics on the walls. Our proposal passes several checks, including the exact matching between its vacua and the solutions to the four-dimensional BPS domain wall equations, which we solve in the small mass regime. As the flavor mass is varied, domain walls undergo a second-order phase transition, where multiple vacua coalesce into a single one. For special values of the parameters, the phase transition exhibits supersymmetry enhancement. Our proposal includes and extends previous results in the literature, providing a complete picture of BPS domain walls for $F<N$ massive SQCD. A similar picture holds also for SQCD with gauge group $Sp(N)$ and $F<N+1$ flavors.

Introduction and summary of results

Interesting progress has recently been made in understanding the dynamics of quantum field theories (QFTs) in three space-time dimensions. This progress has also led to new insights (and surprises) on the relation between three-dimensional and four-dimensional QFTs. One concrete situation in which such a connection becomes manifest is when domain walls-codimension-one solitonic states that a QFT contains whenever there exist multiple vacua separated by a potential barrier-are present. Notable examples of 4d theories with domain walls are Yang-Mills (YM) theory and QCD, at the special value θ = π of the topological theta term. In a nice series of papers [1-3] (see also [4]), a rather complete picture of the vacuum dynamics of YM and QCD and their domain walls has been proposed. While it is believed that CP is an exact quantum symmetry when θ = 0, the authors gave arguments supporting the claim that for θ = π CP is spontaneously broken in two degenerate gapped vacua. Hence, domain walls exist connecting these two vacua. The effective dynamics on YM domain walls is gapped and captured by a Chern-Simons (CS) topological quantum field theory (TQFT) [2]. On the contrary, QCD domain walls behave rather differently depending on the quark masses [3]. For quark masses large compared to the QCD scale Λ_QCD their low-energy dynamics is as in YM, while for small masses there are massless excitations on the domain walls-Goldstone bosons of broken symmetries-described by a non-linear sigma model (NLSM) with target CP^{F−1}, where F is the number of flavors. This implies that at some value m*_{4d} of the quark masses, a phase transition on the domain walls should occur. This picture has later been confirmed within a purely holographic context [5]. In this paper we discuss another class of theories which admits a rich variety of domain walls, namely 4d N = 1 massive super-QCD (SQCD). For generic values of the continuous parameters-flavor masses and θ angle-the theory develops multiple isolated supersymmetric gapped vacua, where the gaugino bilinear condenses and confinement occurs. The number of vacua equals the dual Coxeter number h of the gauge group G. For any pair of vacua, one can construct field configurations in which the theory sits in two different vacua on the left and right half-spaces, respectively. In such configurations, a domain wall must necessarily separate the two spatial regions. This gives rise to the aforementioned rich variety of domain walls.
When the gauge group is simply connected, the degenerate vacua arise from the spontaneous breaking of a discrete R-symmetry: they arrange as the h-th roots of unity, and are cyclically rotated into each other by the broken R-symmetry. This implies that the vacua are physically equivalent, and the properties of domain walls only depend on how many vacua we jump by. We call k-wall a domain wall connecting the j-th vacuum to the (j+k)-th vacuum. In SU(N) SQCD, we have 0 < k < N. Even for a fixed topological sector k, there can be multiple physically-inequivalent degenerate domain walls that connect the very same two vacua. This of course is an effect of supersymmetry. Indeed, the 4d N = 1 supersymmetry algebra admits a two-brane charge [20] and so the tension of domain walls enjoys a BPS bound. One can argue that SQCD walls saturate the bound-they are 1/2 BPS-so they preserve two supercharges, corresponding to N = 1 supersymmetry in three dimensions. In SU(N) SQCD there is a qualitative difference between F < N and F ≥ N, where F is the number of flavors [21-26]. The basic reason is that for F ≥ N baryons, besides mesons, also parametrize the moduli space. This suggests a somewhat different structure of the domain wall spectrum. In this paper we focus on the case F < N, leaving the case F ≥ N to future work [27]. The problem of understanding and classifying the BPS domain walls of SQCD is not new, and there exists an extensive literature on the subject. However, the improved understanding that we now have of the dynamics of N = 1 three-dimensional CS-matter theories (see e.g. [12,16-18,49-52]), together with a few more facts which were not fully appreciated previously, lets us reconsider this problem and provide a more complete and satisfactory picture in the regime F < N. For instance, in Table 2 we list all BPS domain walls of SU(N) SQCD for N ≤ 5. Our findings include and extend previous results, solving also a few puzzles that have been raised. Our strategy is to provide a 3d worldvolume description of the low-energy effective dynamics on k-walls in 4d N = 1 SU(N) SQCD with F < N flavors, valid as the flavor mass m_{4d} is varied, and capable of capturing 3d phase transitions. Our proposal, theory (1.1), is a three-dimensional N = 1 CS theory coupled to F matter superfields X transforming in the fundamental representation of U(k). The theory has a real superpotential W(X, X†), which includes a mass term m Tr X†X and two quartic terms. The vacuum structure of the 3d theory (1.1) depends on the sign of m. For m < 0, which corresponds to a small 4d mass m_{4d} compared to the SQCD scale Λ, there are multiple vacua in which the low-energy effective theory is the product of a TQFT and a supersymmetric non-linear sigma model. Each vacuum corresponds to a different domain wall in the same soliton sector. For m > 0, which corresponds to large m_{4d}, there is a single gapped vacuum hosting a TQFT. The effective theory in this vacuum agrees with the theory that Acharya and Vafa (AV) discovered to govern the dynamics on domain walls in SU(N) SYM [42]. This is an important test of our proposal. At m = 0, corresponding to a 4d mass m*_{4d} whose precise value we cannot determine but which is of order Λ, there is a second-order phase transition separating the two phases, in which the multiple vacua of the m < 0 regime coalesce into one.
Note that when such a phase transition occurs, nothing special happens in the bulk-exactly the same phenomenon observed in [3] for QCD. The phase transition is described by a 3d N = 1 SCFT. However, for the special values F = 1, k = 1 or k = N − 1, we conjecture that 3d supersymmetry is enhanced to N = 2 at low energy on the domain wall. In the very special case of SU(2) SQCD with 1 flavor, we conjecture that the SCFT on the 1-wall has enhanced N = 4 supersymmetry. Figure 1 contains a qualitative picture of the low-energy behavior of theory (1.1). Our proposal passes several non-trivial checks. As already mentioned, in the limit m_{4d} ≫ Λ it reproduces the theory of Acharya-Vafa [42]. Moreover, since the quartic interactions dominate over the mass term, the Witten index remains constant through the phase transition and equal to $\binom{N}{k}$. This is required from the 4d point of view because, as long as we keep the flavor mass positive, there cannot be any leakage of states to infinity in field space. Note that the constancy of the Witten index is realized in a rather non-trivial way: at small m_{4d} one has to sum over the inequivalent degenerate walls. Even more strikingly, in the small m_{4d} regime we are able to explicitly construct the BPS domain walls as solitons of 4d SQCD in an almost semi-classical way. The new idea is to construct "hybrid" walls, combining standard domain walls of the Wess-Zumino type on the mesonic space with sharp transitions in an unbroken SYM sector that is present on the mesonic space. This construction exactly matches the intricate vacuum structure displayed by theory (1.1) at m < 0. We repeat this whole analysis for 4d N = 1 Sp(N) SQCD with F < N+1 flavors, finding a similar phase diagram. As a special case, we obtain the extension of the AV theory to symplectic groups: such a theory describes domain walls in Sp(N) SYM. The rest of the paper is organized as follows. In Section 2 we review some basic properties of domain walls in pure SYM. In Section 3 we recall the vacuum structure of SU(N) SQCD for F < N, and summarize some properties that BPS domain walls should satisfy. Section 4 contains our proposal for the effective three-dimensional theory describing these domain walls and a thorough analysis of its vacuum structure. This analysis already encodes several non-trivial checks. In Section 5 we focus on the small 4d mass regime and explicitly construct, by a 4d analysis, the domain walls interpolating between SQCD vacua. The results exactly match the 3d analysis. Finally, in Section 6 we discuss domain walls in Sp(N) massive SQCD.

Domain walls of SYM: a review

Let us consider four-dimensional super-Yang-Mills (SYM) theory with N = 1 supersymmetry and gauge group G. For simplicity, we restrict to the case of simply-connected gauge groups (footnote 1) with simple algebra g. The classical U(1) R-symmetry is anomalous, and in the quantum theory it is reduced to a Z_{2h} subgroup, where h = c_2(g) is the dual Coxeter number of g (footnote 2). The non-perturbative dynamics gives rise to a gaugino condensate that spontaneously breaks Z_{2h} to Z_2 and provides h gapped vacua rotated by the action of Z_h = Z_{2h}/Z_2:

$$\langle \lambda\lambda \rangle_k = \Lambda^3\, \omega^k , \qquad (2.1)$$

where Λ is the dynamically-generated scale, ω = e^{2πi/h} is the basic h-th root of unity, and k = 0, ..., h − 1 labels the vacua. In other words, in different vacua the gaugino condensate differs by a phase.

(Footnote 1: When the group is the quotient of a simply-connected G by a subgroup of its center, the number of vacua is the same as for G, but their physical properties are different [53]. In particular, the R-symmetry is a subgroup of the Z_{2h} discussed below, or completely absent.)

(Footnote 2: Recall that c_2(su(N)) = N, c_2(so(N)) = N − 2 for N ≥ 5, c_2(sp(N)) = N + 1, c_2(e_6) = 12, and c_2(e_7) = 18.)
We can describe the various condensates through an effective superpotential, whose value in the k-th vacuum is

$$W_{\rm SYM} = h\, \Lambda^3\, \omega^k . \qquad (2.2)$$

This should be thought of as the generating function of gaugino bilinears, in the sense that λλ = ∂W_SYM/∂ log Λ^{3h} [54]. Since the vacua are gapped, there must exist domain walls-i.e. finite-tension codimension-one solitonic objects-connecting them. More precisely, one can consider phases in which the theory sits in different vacua in different spatial regions: those regions must be separated by dynamical domain walls. The 4d N = 1 supersymmetry algebra admits a two-brane charge [20,55], and as a consequence there can exist half-BPS saturated domain walls, whose tension is minimal within their soliton sector [56,28]. Their "central charge" is twice the total excursion of the superpotential from one vacuum to the other [56,57],

$$Z = 2\,\Delta W = |Z|\, e^{i\gamma} , \qquad (2.3)$$

with γ the phase of Z, and the tension of a BPS domain wall is fixed by the supersymmetry algebra in terms of the superpotential as

$$T = |Z| . \qquad (2.4)$$

For SYM, the tension of BPS walls connecting the j-th vacuum to the (j+k)-th vacuum is

$$T_k = 2\, h\, \Lambda^3\, |\omega^k - 1| = 4\, h\, \Lambda^3 \sin\!\frac{\pi k}{h} . \qquad (2.5)$$

This is an exact non-perturbative result. Acting with the generator of the Z_{2h} R-symmetry, the phase of the gaugino is shifted by e^{πi/h}, and thus the phase of the gaugino condensate by the h-th root of unity e^{2πi/h}. (Footnote 3: Notice that, because of the anomaly of the continuous U(1)_R, R-symmetry rotations are accompanied by a shift of the theta angle from θ to θ + 2π, which is a symmetry of the quantum theory.) Employing R-symmetry rotations, we can restrict, without loss of generality, to the case where the vacuum on the left side of the wall is the 0-th one. We then call k-wall, with 0 < k < h, a wall that connects the 0-th vacuum to the k-th vacuum. Formula (2.5) shows that a system of separated parallel BPS domain walls is unstable towards forming a unique domain wall in which the phase of the gaugino condensate jumps by the total amount, because the tension of a k-wall is lower than k times the tension of a 1-wall. Equivalently, parallel BPS domain walls have central charges with different phases and thus are not mutually BPS. Another useful property is that the 3d physics on an (h−k)-wall is the parity reversal of that on a k-wall. Indeed, we can perform a rotation by π in a plane formed by the direction orthogonal to the wall and a direction along the wall. The resulting configuration connects the k-th vacuum on the left to the 0-th vacuum on the right, which is equivalent to an (h−k)-wall, with one direction along the wall inverted. One is interested in studying the existence, degeneracy and other features of the BPS domain walls of SYM. This question has been analyzed in great detail by Acharya and Vafa [42]. Specifically, in the case G = SU(N), AV employed a brane construction to provide a 3d worldvolume theory that describes the domain wall dynamics. One can realize 4d N = 1 SU(N) SYM using a G_2-holonomy geometry in M-theory [58]. Such a seven-dimensional manifold is a Z_N quotient of the spin bundle on S³, topologically (S³ × R⁴)/Z_N.
The Z_N acts differently in the UV and in the IR, in a way which is continuous in the quantum theory and which provides an M-theory version of the geometric transition [59]. In the IR, it acts freely on S³, producing the spin bundle on the lens space S³/Z_N. By reducing to type IIA along the Hopf fiber of the lens space, one obtains a resolved conifold geometry with N units of RR F_2 flux through the blown-up S². Domain walls are realized by M5-branes wrapping the S³/Z_N in M-theory. In particular, k M5-branes shift the vacuum by k units and realize a k-wall. In type IIA they reduce to k D4-branes wrapping the two-sphere. Taking into account the Wess-Zumino coupling to the RR background, the domain wall worldvolume theory is the 3d N = 2 U(k) gauge theory, with an N = 1 Chern-Simons interaction that reduces the supersymmetry. Using N = 1 notation, the theory is the

3d N = 1 U(k)_N gauge theory with a (singlet + adjoint) scalar multiplet.     (2.6)

(Footnote 4: To avoid confusion, with "singlet" and "adjoint" we refer to the two irreducible representations. Together, they form a reducible representation that is usually called the adjoint representation of U(k).) The singlet is decoupled and free at low energies. It is the Goldstone mode associated to broken translations (and fermionic partners) and it describes the center-of-mass motion of the domain wall perpendicular to its worldvolume. It can only have derivative couplings with the rest of the theory, and those are suppressed at low energy. The adjoint can describe the breaking of the k-wall into k 1-walls. It has vanishing bare mass, producing a classical moduli space along which it has diagonal vacuum expectation values (VEVs): each entry represents the position, relative to the center of mass, of one of the 1-walls the k-wall breaks into. As previously noticed, though, it follows from (2.5) that quantum corrections lift the classical moduli space. If one is interested in the low-energy behavior, the adjoint scalar multiplet can be integrated out. A careful analysis [16,60,61] shows that the effective mass is negative. (Footnote 5: What we mean is that the scalar components have positive squared mass, while the fermion components have negative mass.) We can thus use the alternative low-energy description, the

3d N = 1 U(k)_{N−k/2, N} gauge theory.     (2.7)

(Here and in the following we will neglect the decoupled and free center of mass.) This theory has a single supersymmetric gapped vacuum [62] in which the gaugino has negative mass. Integrating the gaugino out as well, at low energy we are left with a gapped vacuum hosting the topological (spin-)Chern-Simons theory

U(k)_{N−k, N} .     (2.8)

As it should be, one can check that the worldvolume theories on a k-wall and on an (N−k)-wall are related by parity reversal. This follows from the 3d IR duality (2.9), which in turn reduces to the level-rank duality of CS theories U(k)_{N−k,N} ↔ U(N−k)_{−k,−N} [6,7]. Notice that in the extremal case of an N-wall, the proposal (2.7) gives N = 1 U(N)_{N/2, N}, which has a trivially gapped vacuum. This is consistent with the fact that an N-wall decays to the 4d vacuum. One of the implications of the string theory construction is that on flat Minkowski space, in each soliton sector there is a single BPS k-wall. This corresponds to the fact that the worldvolume theory (2.7) has a single gapped vacuum on the spatial manifold R². On the other hand, in the presence of topological sectors, the vacuum degeneracy can change as we change the spatial topology.
The net number of vacuum states-weighted by the fermion number (−1)^F-on T² with periodic boundary conditions for fermions is captured by the Witten index. For the theory (2.7) on a k-wall the Witten index is

$${\rm WI} = \binom{N}{k} . \qquad (2.10)$$

This corresponds to the number of fermionic lines of the spin-TQFT (2.8). The Witten index matches the net number of domain walls one observes in the system dimensionally reduced on T² down to two dimensions [42].

Interface operators

According to (2.8), the k = 1 domain wall is described at low energy by a U(1)_N Chern-Simons TQFT, which is level/rank dual to an SU(N)_{−1} TQFT. Keeping N = 1 supersymmetry manifest, the latter is an N = 1 SU(N)_{−1−N/2} CS theory. We would like to show that its action can be reproduced in the IR by a different procedure: by inserting an interface operator that interpolates between θ and θ + 2π as we move along one spatial direction, say x³. We stress that the interface operator is not a dynamical excitation of the system; it corresponds instead to an explicit deformation of the theory [2] (for instance, it does not lead to Goldstone modes). Let us then consider the SYM action with a space-dependent θ angle, interpolating between a value θ at x³ → −∞ and θ + 2π at x³ → +∞. Eventually, we will take an IR limit in which θ(x³) becomes a step function localized at x³ = 0. The space dependence has two effects. First, the SYM action is no longer supersymmetric. It is possible to preserve half of the supercharges by adding an extra term; therefore the interface operator is 1/2 BPS, like BPS domain walls. Second, as we take the IR limit, the interface operator induces a bare N = 1 Chern-Simons term at level −1, including the correct gaugino mass term, along the 3d surface x³ = 0. This is precisely the bare action of the N = 1 SU(N)_{−1−N/2} CS theory (the contribution −N/2 to the CS term comes from the 1-loop regularization of the gaugini). We stress that the computation that follows is not limited to gauge group SU(N): it can be repeated verbatim for any gauge group G, including exceptional and product groups. Let us start by considering the action of SYM. Neglecting the auxiliary fields D^A, which vanish on-shell, it reads (2.11). We use four-component spinor notation, and follow the conventions of appendix A of [63]. The supersymmetry variations are given in (2.12). If the θ angle is constant on space-time, the action (2.11) is invariant. If, instead, we take it to be a function of the spatial coordinate x³, the variation no longer vanishes, see (2.13). We can make the SYM action with varying θ angle invariant under half of the supersymmetries by adding the term (2.14) and imposing the constraint (2.15) on the supersymmetry parameter ε. We are interested in configurations in which the θ angle varies by 2π from x³ = −∞ to x³ = +∞. Let us now consider the IR limit in which ∂₃θ(x³) = 2π δ(x³). Integrating the θ-term in (2.11) by parts and neglecting boundary terms at infinity, we obtain (2.16). Here M₄ is the 4d space-time manifold, M₃ is the 3d location of the interface operator at x³ = 0, and we used standard conventions to rewrite the θ-term using differential forms, as in (2.17). We see that, with a suitable choice of the induced orientation, the interface operator has a 3d worldvolume action that includes a bare SU(N) CS term at level −1. In the same IR limit, (2.14) can also be recast as a genuine 3d term, specifically as a 3d gaugino mass term. The 4d gaugino λ^A is a Majorana spinor and, using the conjugation matrix diag(iσ², −iσ²), it can be written as in (2.18), where ξ is a two-component spinor.
Defining now the 3d spinor χ = ½(ξ + σ¹ξ*), we can write (2.14) as the genuine 3d gaugino mass term of Equation (2.19).

Domain walls of SU(N) SQCD

Let us now move to the theory of interest, namely 4d N = 1 SU(N) SQCD with F flavors, described by F chiral superfields Q and Q̃ in the fundamental and antifundamental representation, respectively. This theory exhibits very different low-energy physics depending on N and F [21-25] (see also the review [26]). In this paper we study the case F < N. (Footnote 7: SQCD with F ≥ N has a quantum exact moduli space which includes both mesonic and baryonic VEVs, and which requires a somewhat different treatment. Domain walls in SQCD with F ≥ N will be discussed elsewhere [27].) If quarks are massless, the theory has runaway behaviour and no stable vacua [23]. We thus study the theory with massive quarks. We choose a diagonal superpotential mass term that preserves a diagonal SU(F) subgroup of the original SU(F)_L × SU(F)_R chiral flavor symmetry. Besides, the theory has a baryonic U(1)_B symmetry (that will play no rôle) as well as a Z_{2N} R-symmetry under which the flavor superfields have charge 1. (Footnote 8: In the special case of SU(2) massive SQCD, the symmetry SU(F) × U(1)_B is enhanced to Sp(F).) The vacua of the theory are determined by considering the effective superpotential W on the space of VEVs of the gauge-invariant meson superfield M = QQ̃, which is an F × F matrix transforming in the adjoint representation of the SU(F) flavor symmetry. The effective superpotential gets contributions from the bare mass term and from the non-perturbatively generated Affleck-Dine-Seiberg (ADS) superpotential [23]:

$$W = m_{4d}\, {\rm Tr}\, M + (N - F) \left( \frac{\Lambda^{3N-F}}{\det M} \right)^{\frac{1}{N-F}} . \qquad (3.1)$$

This gives N gapped vacua, with

$$\langle M \rangle = \big( \Lambda^{3N-F}\, m_{4d}^{\,F-N} \big)^{1/N}\, e^{2\pi i k/N}\, \mathbb{1}_F , \qquad k = 0, \dots, N-1 , \qquad (3.2)$$

corresponding to the spontaneous breaking Z_{2N} → Z_2. The gaugino condensate can be obtained by integrating in the glueball superfield, or directly by differentiating W with respect to log Λ^{3N−F}: λλ = m_{4d} M. Gapped vacua can be separated by domain walls, possibly half-BPS. We are interested in determining the low-energy worldvolume theory on these domain walls, from which their properties can be inferred. For large values of the quark mass m_{4d} (compared to Λ), flavors can be integrated out, leaving SU(N) SYM at low energy. In this regime, the domain walls must be described by the worldvolume theory (2.7). When the mass m_{4d} becomes much smaller than Λ, instead, one could expect the dynamics to be different. As we will discuss in Section 5, in this limit the SQCD vacua fly to large expectation values of M, where a Higgsed description is appropriate, and domain walls connecting the vacua can be reliably described semi-classically. In this regime their dynamics and vacuum structure look in fact much different from those in pure SYM. In particular, we will see that there exist multiple degenerate walls connecting the same vacua. We thus expect interesting phase transitions connecting the large and small mass regimes. This resembles what happens in massive QCD, as recently discussed in [2,3] (and in [5] within a holographic context). The three-dimensional worldvolume theory we are after should reproduce all such features. Note that, as long as the quark masses are non-vanishing, there are no flat directions in field space and the Witten index of the domain wall worldvolume theory cannot jump. Hence, the Witten index must be $\binom{N}{k}$, as in SYM. There exists an extensive literature on BPS domain walls in SQCD. We notice that the existing lists cannot be complete because, in general, they do not reproduce the Witten index (2.10).
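Since the displayed equations in this passage did not survive extraction, the following standard computation (essentially that of [23]) sketches how (3.2) follows from (3.1), restricting to the flavor-symmetric ansatz M = X·1_F; numerical normalizations are scheme-dependent.

$$
W(X) = m_{4d}\, F\, X + (N-F)\left(\frac{\Lambda^{3N-F}}{X^{F}}\right)^{\frac{1}{N-F}},
\qquad
\frac{\partial W}{\partial X} = F\left[\, m_{4d} - \left(\frac{\Lambda^{3N-F}}{X^{N}}\right)^{\frac{1}{N-F}}\right] = 0
\;\Longrightarrow\;
X^{N} = \frac{\Lambda^{3N-F}}{m_{4d}^{\,N-F}} .
$$

The N-th roots of the right-hand side reproduce the N vacua of (3.2), and multiplying by m_{4d} gives the gaugino condensate λλ = m_{4d} M = (Λ^{3N−F} m_{4d}^F)^{1/N} e^{2πik/N}, matching the h = N vacua of Section 2 with the effective low-energy scale Λ_SYM³ = (Λ^{3N−F} m_{4d}^F)^{1/N}.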
Our proposal will fill this gap.

Three-dimensional worldvolume theory

We cannot rigorously derive the worldvolume theories on domain walls, but we can get some intuition about what those theories should look like by extending the Acharya-Vafa brane construction from SYM to SQCD. In the type IIA string theory setting, flavors can be added by introducing F D6-branes extending in the four space-time directions supporting the gauge theory, and wrapping a non-compact special Lagrangian three-cycle of the resolved conifold (after the geometric transition) [64]. Such a three-cycle is an R² bundle over the equatorial S¹ inside the blown-up S². Together with the N color D6-branes-which, through the geometric transition, get replaced by N units of RR F_2 flux on S²-the branes realize at low energy 4d N = 1 SU(N) SQCD with F flavors and a quartic superpotential. Flavor masses correspond to the D6-branes reaching a minimal radial position r_0 ∼ m_{4d}. This is not quite the theory we are interested in, since SQCD with quartic superpotential has a different number of vacua from the theory without it, but we can still use it to get some intuition about the domain wall theories. Domain walls correspond to D4-branes wrapped on the blown-up S² at the tip of the conifold, as in Section 2. However, the presence of the flavor D6-branes gives rise to a new open string sector at the intersection. This suggests that the 3d N = 1 domain wall theory should contain F scalar multiplets in the fundamental representation. Moreover, there should be no bare superpotential couplings involving the (singlet + adjoint) scalar multiplet Φ, as in (2.6), because the singlet becomes at low energy the free and decoupled center-of-mass superfield, while the diagonal components of the adjoint describe the breaking of a k-wall into 1-walls and should be flat directions at large VEVs. We will not push the similarity any further, since we are interested in SQCD without quartic superpotential, and instead propose that the effective theory on k-walls be the

3d N = 1 U(k) gauge theory with a (singlet + adjoint) scalar multiplet Φ and F fundamental scalar multiplets X,     (4.1)

with no bare superpotential involving Φ. We expect the bare 3d mass of X to be proportional to the 4d mass m_{4d}. As in Section 2, the singlet is the Goldstone mode associated to broken translations (and fermionic partners) and will be neglected in the following. The adjoint classically gives rise to flat directions, which however are lifted by quantum effects. The one-loop computation of [60,61,16,18] is still valid for Φ, since the latter has no bare superpotential couplings, and it leads to a negative mass around Φ = 0, as expected from the four-dimensional brane charge. Integrating out the adjoint, we obtain the simpler low-energy description (4.2): the 3d N = 1 U(k) CS gauge theory with F fundamental scalar multiplets X. Renormalization effects change the three-dimensional mass and produce quartic superpotential terms (which are classically marginal), collected in the real superpotential of Equation (4.3). Notice that there are two independent quartic gauge-invariant combinations. The overall scale has been arbitrarily fixed for convenience. The relative signs (with respect to the sign of the Chern-Simons term) are instead physical, and have been fixed in order to reproduce the expected behavior at large and small (compared with Λ) 4d mass m_{4d}. Consistency also requires that α > −min(k, F)^{−1}. The 3d parameter m is an effective IR mass: as we will see, at m = 0 there is a second-order phase transition.
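The explicit form of (4.3) was lost in extraction; a minimal reconstruction consistent with everything stated above (a mass term, the two independent quartics, and the constraint α > −min(k,F)^{−1}) is, up to the inessential rescalings just mentioned,

$$
\mathcal{W}(X, X^\dagger) \;=\; {\rm Tr}\Big[\, m\, X^\dagger X + \tfrac{1}{2}\,(X^\dagger X)^2 \Big] \;+\; \tfrac{\alpha}{2}\, \big({\rm Tr}\, X^\dagger X\big)^2 .
$$

Its F-term condition X(m + X†X + α Tr X†X) = 0 reduces, on the eigenvalues λ_j of X†X, to λ_j(m + λ_j + α Σ_i λ_i) = 0; a solution with J equal non-vanishing eigenvalues then has λ = −m/(1+αJ), which is non-negative precisely for m ≤ 0, since α > −min(k,F)^{−1} guarantees 1+αJ > 0 for all admissible J. This is the vacuum structure analyzed in the next subsection; the true coefficients in (4.3) may differ by overall normalizations.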
Although we do not know the precise relation between m_{4d} and m, we will see that large values of m_{4d} correspond to m > 0 and small values of m_{4d} correspond to m < 0. We will indicate by m*_{4d} the value that corresponds to m = 0. Higher-order terms in W are expected to be irrelevant at the point m = 0. In the remainder of this section, we will study the dynamics of the theory (4.2)-(4.3) on its own. Later, in Section 5, we will confront it with the actual massive SQCD domain wall dynamics. Let us note, from the outset, that our proposal already satisfies an important consistency check. The theory (4.2) enjoys the N = 1 infrared duality (4.4), with quartic N = 1 superpotentials on both sides. This duality was discovered in [18] in a very similar context. The authors consider the theory (4.2) with a quadratic but not quartic UV bare superpotential, argue that there exists a value of the bare mass for which the theory flows to an N = 1 fixed point, and conjecture that the two theories in (4.4) lead to the very same fixed point. Our claim is that the effective theory at the fixed point has superpotential (4.3) with m = 0. Such an effective description will allow us to use a semi-classical analysis to understand the relevant deformation triggered by the mass term. For instance, following [18] we will show that the massive vacua match. The duality (4.4) is a strong consistency check of our proposal, since it relates k-walls to time-reversed (N − k)-walls. As already emphasized, this is an expected feature of k-walls in SQCD. Let us notice another interesting fact. For k = 1, the proposed domain wall theory enjoys another N = 1 duality [18], Equation (4.5), with quartic superpotential on both sides (we propose in Section 4.2 that the theory on the left has emergent N = 2 supersymmetry in the IR, and hence the same should happen to the theory on the right). Intriguingly, the gauge group is the same as that of the four-dimensional theory. This suggests that the theory on the right might be reproduced by an interface operator, as discussed in Section 2.1 for pure SYM (and in [2,3] without supersymmetry): the contribution −1 to the CS level could come from an x-dependent theta angle, −N/2 from the regularized 1-loop determinant of the gaugini, and F/2 from the flavors. It would be interesting to make this idea concrete.

Analysis of vacua

Let us study the vacua of the three-dimensional theory (4.2) with superpotential (4.3), where we assume α > −min(k, F)^{−1}. The scalar superfields are k × F matrices X_{ai}, with a and i being gauge and flavor indices, respectively. The F-term equation is Equation (4.6). By gauge and flavor rotations, we can bring X to a rectangular diagonal form. In this basis, both XX† and X†X are diagonal with real non-negative entries: they have min(k, F) eigenvalues in common, which we indicate by λ_j ≥ 0, while the remaining eigenvalues of the larger of the two matrices vanish. The eigenvalues have to satisfy the equations (4.7). For m ≠ 0, up to permutations these equations have min(k, F) + 1 solutions, which we parametrize by J = 0, ..., min(k, F). Each solution has only J non-vanishing (and identical) eigenvalues:

• m > 0. Only the vacuum with J = 0, in which X = 0, is acceptable.
In such a vacuum, quarks have positive mass and can be integrated out, leaving the pure N = 1 CS theory (4.9), cf. (2.7). Since k < N, this theory has a single supersymmetric gapped vacuum that hosts the TQFT U(k)_{N−k, N}, and its Witten index is

$${\rm WI} = \binom{N}{k} . \qquad (4.10)$$

This is the expected result for the behavior of SQCD domain walls in the large 4d mass regime, m_{4d} ≫ Λ, as discussed in Section 3.

• m < 0. All min(k, F) + 1 vacua, labelled by J, are acceptable. The quark field X gets a VEV, which can be brought to a diagonal rectangular form with J non-vanishing identical entries (for J = 0 the VEV is zero). This breaks the flavor symmetry as

$$U(F) \to U(J) \times U(F-J) , \qquad (4.11)$$

leading to a supersymmetric NLSM in the IR with target space U(F)/(U(J) × U(F−J)) (for J = 0 and J = F the symmetry is not broken). All other fields become massive, either because of the potential or because of the Higgs mechanism. Indeed, the VEV also breaks the gauge symmetry as U(k) → U(k − J). Fermions charged under the unbroken gauge group, coming from the quark superfields and from the broken components of the gaugino, mix and become massive as well. In particular, F eigenmodes get a negative mass and J get a positive mass. (Footnote 12: The components of the F flavors charged under the unbroken gauge group are not coupled to the scalars getting a VEV, thus they have a mass term −m from the superpotential. However, there are also mixed mass terms with the J components of the gaugino along "block-off-diagonal" broken generators, from the Yukawa couplings imposed by supersymmetry. Finally, there is a gaugino mass term from the supersymmetrization of the CS term. The analysis is similar to the one in [17].) All such modes transform in the fundamental representation of U(k − J). As a result, the bare CS level of the unbroken gauge group is shifted by −F. The two factors are decoupled in the IR; thus the low-energy theory around a vacuum labelled by J is the product (4.12) of the topological sector U(k−J)_{N−F−k+J, N−F} and the Grassmannian NLSM. The supersymmetric NLSM has a Wess-Zumino term, which is conveniently specified by describing the NLSM as a U(J) gauge theory coupled to F fundamental scalar multiplets getting a VEV. Notice in passing that the NLSM target is a Kähler manifold-the complex Grassmannian-and thus, if we truncate the effective Lagrangian at the two-derivative level, the NLSM has emergent 3d N = 2 supersymmetry [45]. The gauge theory on the left of (4.12) has Witten index WI = $\binom{N-F}{k-J}$, which vanishes for N − F − k + J < 0 (recall that N > F and k ≥ J). Indeed, the theory breaks supersymmetry in that regime. This is a non-perturbative effect that lifts some of the would-be vacua labelled by J. Eventually, supersymmetric vacua correspond to

$$\max(0,\, F + k - N) \;\le\; J \;\le\; \min(k, F) . \qquad (4.13)$$

In each supersymmetric vacuum, the Witten index of the low-energy theory is

$${\rm WI}_J = \binom{F}{J}\binom{N-F}{k-J} , \qquad (4.14)$$

which is positive. Using the binomial (Vandermonde) identity

$$\sum_{J=0}^{\min(k,F)} \binom{F}{J}\binom{N-F}{k-J} = \binom{N}{k} , \qquad (4.15)$$

we see that the total Witten index at m < 0 agrees with the one at m > 0. At m = 0 there is a phase transition in which the multiple vacua at m < 0 simultaneously coalesce into the single vacuum at m > 0. Such a phase transition-essentially because of supersymmetry-is necessarily second order, and thus it is described by a 3d N = 1 SCFT. To understand this point, already stressed in [16,18], notice the following facts. First, in our range of parameters, the number min(k, F) − max(0, F + k − N) + 1 of vacua at m < 0 is always greater than one, while at m > 0 there is a single vacuum. Second, each of those vacua has positive Witten index, (4.14) or (4.10). Vacua with non-vanishing Witten index must necessarily have zero energy: they cannot change their vacuum energy in isolation; the only way is to pair with other vacuum states so that the total Witten index is zero. Therefore the multiple vacua at m < 0 must coalesce at the phase transition, which cannot be first order and must be second (or higher) order. Third, solving the F-term equations derived from (4.3), we found that all vacua at m < 0 coalesce simultaneously. This conclusion is not modified if we arbitrarily perturb (4.3) with higher-order terms, and thus it remains true to all orders in perturbation theory. The N = 1 SCFT at m = 0 is the one that enjoys the IR duality (4.4). Let us stress once more that, while we have not determined the precise relation between the 3d and 4d masses, the value m = 0 corresponds to some value m*_{4d} of the 4d mass, of order the dynamically generated SQCD scale Λ. We expect m*_{4d} to depend on N, F, k. It is useful to organize the different vacua-as we vary J-of the various 3d domain wall theories-as we vary k-for a fixed 4d theory (namely for fixed N, F) into a table. Let us indicate the N = 1 NLSM with target the complex Grassmannian by Gr(J, F). In Table 1 we put the NLSMs Gr(J, F) with 0 ≤ J ≤ F on the horizontal axis, and the topological sectors on the vertical axis. The list of vacua for one theory with given k is read diagonally (along a line from bottom-left to upper-right), and the corresponding values of J are read in the last row. In the table we have also specified the level-rank duality of spin-CS theories [7], expressed in N = 1 notation by (2.9). We have already observed that, employing the IR duality (4.4), the worldvolume theory on k-walls is the parity reversal of the theory on (N − k)-walls. Here we can consistently check, as already done in [18], that their vacua also have the same property. In particular, a vacuum of a k-wall labelled by J is the parity reversal of a vacuum of an (N − k)-wall labelled by F − J. As manifest in Table 1, some vacua are special. The ones in the first and last column do not break the flavor symmetry and thus are fully gapped, without massless Goldstone fields. We call them "symmetry preserving walls". The ones in the first and last row, instead, do not host a topological sector. In Section 5 we will construct domain walls as BPS codimension-one solitons interpolating between the N vacua of four-dimensional massive SQCD in the regime of small m_{4d}, finding perfect agreement with the 3d dynamics discussed above.
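As a quick numerical sanity check of the index matching just described, the following Python snippet verifies that the sum of the vacuum indices (4.14) over the window (4.13) reproduces (4.10) for all F < N and 0 < k < N; this is Vandermonde's identity (4.15), with the restricted range dropping only vanishing terms.

```python
from math import comb

def wi_large_mass(N, k):
    """Witten index of the single m > 0 vacuum: binom(N, k), Eq. (4.10)."""
    return comb(N, k)

def wi_small_mass(N, F, k):
    """Sum of the vacuum indices binom(F, J) * binom(N-F, k-J), Eq. (4.14),
    over the supersymmetric window max(0, F+k-N) <= J <= min(k, F)."""
    lo, hi = max(0, F + k - N), min(k, F)
    return sum(comb(F, J) * comb(N - F, k - J) for J in range(lo, hi + 1))

# the two phases agree for every F < N and 0 < k < N:
assert all(
    wi_large_mass(N, k) == wi_small_mass(N, F, k)
    for N in range(2, 12) for F in range(1, N) for k in range(1, N)
)
```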
Vacua with non-vanishing Witten index must necessarily have zero energy: they cannot change their vacuum energy in isolation, the only way is to pair with other vacuum states so that the total Witten index is zero. Therefore the multiple vacua at m < 0 must coalesce at the phase transition, which cannot be first order and must be second (or higher) order. Third, solving the F-term equations derived from (4.3), we found that all vacua at m < 0 coalesce simultaneously. This conclusion is not modified if we arbitrarily perturb (4.3) with higher-order terms, and thus remains true to all orders in perturbation theory. The N = 1 SCFT at m = 0 is the one that enjoys the IR duality (4.4). Let us stress once more that while we have not determined the precise relation between 3d and 4d masses, the value m = 0 corresponds to some value m * 4d of the 4d mass, of order the dynamically generated SQCD scale Λ. We expect m * 4d to depend on N, F, k. It is useful to organize the different vacua-as we vary J-of the various 3d domain wall theories-as we vary k-for fixed 4d theory (namely for fixed N, F ) into a table. Let us indicate the N = 1 NLSM with target the complex Grassmannian by In Table 1 we put the NLSMs Gr(J, F ) with 0 ≤ J ≤ F on the horizontal axis, and the topological sectors The list of vacua for one theory with given k are read diagonally (along a line from bottom-left to upper-right) and the corresponding values of J are read in the last row. In the table we have also specified the level-rank duality of spin-CS theories [7], expressed in N = 1 notation by (2.9). We have already observed that, employing the IR duality (4.4), the worldvolume theory on k-walls is the parity reversal of the theory on (N − k)-walls. Here we can consistently check, as already done in [18], that also their vacua have the same property. In particular, a vacuum of k-wall labelled by J is the parity reversal of a vacuum of (N − k)-wall labelled by F − J. As manifest in Table 1, some vacua are special. The ones in the first and last column do not break the flavor symmetry and thus are fully gapped without massless Goldstone fields. We call them "symmetry preserving walls". The ones in the first and last row, instead, do not host a topological sector. In Section 5 we will construct domain walls as BPS codimension-one solitons interpolating between the N vacua of four-dimensional massive SQCD in the regime of small m 4d , finding perfect agreement with the 3d dynamics discussed above. Supersymmetry enhancement Let us end this section with the following interesting observation. For special values of N , F and k, the domain wall theories at the value m * 4d of the 4d flavor mass that corresponds to the 3d second-order phase transition, can exhibit IR enhancement of the 3d superconformal symmetry from N = 1 to N = 2 or N = 4. 14 More precisely, only for the "reduced" worldvolume theory (4.2)-(4.3) we conjecture supersymmetry enhancement, while the Goldstone boson for broken translations and a massless Majorana fermion still combine into an N = 1 real scalar multiplet. When k = 1 or F = 1, the two quartic superpotential terms in (4.3) are equal: there exists only one quartic term one can write compatible with all symmetries. The coefficient of that term is not fixed by N = 1 supersymmetry, and since that coupling is classically marginal, it runs under RG flow. If the coupling coefficient is appropriately tuned, though, the massless theory has N = 2 supersymmetry. 
15 This can be seen by starting with the N = 2 YM-CS gauge theory with F chiral multiplets in the fundamental representation. There is no N = 2 (holomorphic) superpotential we can write. Using N = 1 notation, though, there is a superpotential where Ψ is the adjoint N = 1 scalar superfield in the N = 2 vector multiplet, g YM is the Yang-Mills coupling and h is the CS level. At energies below g 2 YM , the adjoint is non-dynamical and can be integrated out. This generates the quartic superpotential For k = 1 or F = 1, this is the same as which is the only quartic term we can write. Thus, when the quartic term has coupling 1/2h, the massless theory has N = 2 supersymmetry. For other values of the coupling, the supersymmetry is only N = 1, however one might suspect that the RG flow still drives the coupling to the N = 2 point in 14 In [45], as we mention after (4.12), it was observed that the effective theories in the vacua at m < 0, i.e. for m 4d < m * 4d below the phase transition point, have enhanced N = 2 supersymmetry at low energy for all values of N, F, k. This is because the NLSM has Kähler target space, and the CS gauge theory is gapped. Our conjecture is much stronger: we claim supersymmetry enhancement at the interacting CFT point. Such a conjecture only applies to k = 1, k = N − 1 and F = 1. 15 In fact, adding a mass term to the N = 1 superpotential corresponds to turning on the real mass associated to the topological symmetry in N = 2 notation. Therefore, also the massive theory has N = 2 supersymmetry, at least at energies below g 2 YM such that we are entitled to integrate the adjoint out. the IR, at least within a basin of attraction. Indeed, it has been shown in [66,67,18] that, at large CS level h, the N = 2 point is attractive. 16 It is very plausible, and we conjecture, that this is true even at small values of the CS level. For values of the CS level such that the claim is true, the duality of fixed points (4.4) is really the N = 2 duality of "minimally chiral" theories found in [65]: (4.19) (valid for F < N ) where here we take k = 1 and/or F = 1. The N = 2 superpotential vanishes on both sides. The duality implies that also for k = N − 1 the N = 2 fixed point is attractive. As noticed also in [18], there is no reason to expect supersymmetry enhancement at the fixed point in the other cases. The two quartic N = 1 superpotential terms are independent, giving rise to a two-dimensional RG flow. From (4.18), supersymmetry is enhanced to N = 2 when the coefficient of the single-trace term is 1/2h and that of the double-trace term is zero. One might suspect that, in this higher-dimensional RG space, stability of the N = 2 point gets lost. Indeed, it has been shown in [67] that, at large CS level, the N = 2 point has a repulsive direction in the two-dimensional space of quartic superpotential couplings, that ends up to an N = 1 point which is instead attractive. The N = 2 duality is still useful, though: it turns out that all N = 1 dualities (4.4) (in their range 1 < F < N and 1 < k < N − 1) follow from an N = 1 quartic superpotential deformation of the N = 2 dualities (4.19). On the other hand, once we move outside the critical point of the phase transition, the low-energy theory is the product of a topological sector and a NLSM with Kähler target space: at two-derivative truncation this is an N = 2 theory for all values of N , F and k. 
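To make the step from (4.16) to (4.18) explicit, here is a sketch of the computation, with the normalization of the Ψ terms chosen so that the quartic coupling comes out as 1/2h, as stated above; signs and overall factors are convention-dependent.

$$
\mathcal{W}_{(4.16)} \;\supset\; {\rm Tr}\Big[ \tfrac{h}{2}\, \Psi^2 + \Psi\, X X^\dagger \Big]
\quad\xrightarrow{\;\partial_\Psi \mathcal{W} = 0 \;\Rightarrow\; \Psi = -\tfrac{1}{h}\, X X^\dagger\;}\quad
-\frac{1}{2h}\, {\rm Tr}\big( X X^\dagger X X^\dagger \big) .
$$

For k = 1 the matrix XX† is a number, and for F = 1 it has rank one, so in both cases Tr(XX†XX†) = (Tr X†X)² and the single-trace and double-trace quartics coincide, as claimed.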
When k = 1 (or k = N − 1, up to a parity transformation) we can obtain yet a different description using the N = 2 dualities of [68], namely (4.20), with no N = 2 superpotential. Together, (4.19) at k = 1 and (4.20) form a triality. As long as our conjecture about supersymmetry enhancement is correct, this triality is in fact the same as (4.4)-(4.5). Since we are considering F < N, the N = 2 SCFTs in (4.20) have trivial chiral ring and trivial moduli space of vacua (no BPS monopoles on the left, no BPS baryons on the right, and no mesons on either side). The case of SU(2) SQCD with one flavor is doubly special because k = N − k = 1: in this case there are four dual domain wall theories. The critical point of the phase transition exhibits both enhanced supersymmetry and emergent time-reversal invariance. In N = 2 language the above dualities were recently studied in [18,69,70]. In [71] it was argued that N = 2 U(1)_{±3/2} with one chiral flavor has infrared supersymmetry enhancement to N = 4. The conclusion is that the BPS domain wall of 4d N = 1 SU(2) SQCD with 1 flavor is described by a 3d N = 4 SCFT.

Four-dimensional constructions

The three-dimensional worldvolume theory (4.2) passes two non-trivial checks as a candidate for the effective theory describing massive SQCD domain walls. For large values of the 4d mass, m_4d ≫ Λ, corresponding to positive 3d mass m in our conventions, the theory reduces to (2.7) or, equivalently, (2.8), which describes domain walls in pure SYM. This is what one expects since, for large quark mass, SQCD reduces to pure SYM at low energy, and so should the corresponding domain walls. A related check regards the Witten index, which remains constant across the phase transition at m = 0, see eqn. (4.15) and the discussion thereafter. Our task in this section is to understand the regime of small 4d mass, m_4d ≪ Λ. We will explicitly construct 1/2 BPS domain wall solutions in such a regime, and show that they precisely match the structure of multiple vacua of the three-dimensional worldvolume theory (4.2) with negative mass. One of the key points which make our analysis possible is that for m_4d ≪ Λ the N supersymmetric vacua of massive SQCD, eqns. (3.2), lie at large distance in the mesonic space, which is a Higgsed weakly-coupled region. Hence, the domain walls that interpolate between those vacua can be reliably constructed with a semi-classical analysis (up to an important caveat that we will discuss in the following). In a weakly-coupled Wess-Zumino (WZ) model, domain walls can be constructed as finite-tension codimension-one solitonic configurations in which fields depend on one spatial coordinate, say x, and interpolate between the values in the two vacua at x = ±∞. For a standard WZ theory of chiral superfields Φ^a with two-derivative Lagrangian, described by a Kähler potential K(Φ, Φ̄) and a single-valued superpotential W(Φ), the domain wall equations are [72,56]

K_{a\bar{b}} \, \partial_x \Phi^a = e^{i\gamma} \, \partial_{\bar{b}} \overline{W} ,    (5.1)

where e^{iγ} is the phase of the central charge. It follows that

dW/dx = \partial_a W \, \partial_x \Phi^a = e^{i\gamma} \, K^{a\bar{b}} \, \partial_a W \, \overline{\partial_b W} \equiv e^{i\gamma} \, \lVert \partial W \rVert^2 ,

where, in the last expression, we have introduced a natural norm. Since the right-hand side has constant phase, the image of W(Φ(x)) is a straight line in the complex W-plane (and e^{iγ} is its direction). The construction generalizes to cases where the superpotential W(Φ) is not a single-valued holomorphic function, but its derivatives are. The central charge is again the total excursion of the superpotential along x, eqn. (2.3). When the WZ model includes a single chiral superfield, one can easily determine the existence of BPS domain walls.
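The straight-line property is easy to verify numerically. The sketch below integrates the BPS flow for a single chiral field with canonical Kähler potential; the cubic superpotential is an illustrative toy choice on our part, not one appearing in the text.

```python
# Minimal sketch: integrate the BPS flow d(phi)/dx = e^{i gamma} conj(W'(phi))
# for a single chiral field with canonical Kahler potential, and check that
# Im(e^{-i gamma} W) stays constant, i.e. W(phi(x)) traces a straight line.
# Toy superpotential (an assumption, not from the text): W = phi - phi^3/3,
# with vacua at phi = +/- 1.
import numpy as np
from scipy.integrate import solve_ivp

W  = lambda p: p - p**3 / 3
dW = lambda p: 1 - p**2

Wm, Wp = W(-1), W(+1)
gamma = np.angle(Wp - Wm)            # phase of the central charge

def rhs(x, y):
    p = y[0] + 1j * y[1]
    v = np.exp(1j * gamma) * np.conj(dW(p))
    return [v.real, v.imag]

# start slightly off the phi = -1 vacuum so the flow is non-trivial
sol = solve_ivp(rhs, [0, 40], [-1 + 1e-6, 0.0], rtol=1e-10, atol=1e-12)
phi = sol.y[0] + 1j * sol.y[1]
Wx = W(phi)
# deviation of Im(e^{-i gamma} W) from its vacuum value: numerically ~ 0
print(np.max(np.abs(np.imag(np.exp(-1j * gamma) * (Wx - Wm)))))
print(phi[-1])                        # approaches the phi = +1 vacuum
```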
Let W_{±∞} = W(Φ(x = ±∞)) be the values of the superpotential in the two vacua. One can invert W(Φ) and construct the pre-image of a straight line from W_{−∞} to W_{+∞}. Such a pre-image will be made of one or more curves in the Φ-plane (since W is not an injective function, in general). Each curve that connects Φ(x = −∞) to Φ(x = +∞) identifies a BPS domain wall. On the other hand, we might not find any such curve. Note that this procedure only determines the orbit of Φ(x) in the complex Φ-plane, not the precise profile of the field as a function of x. The latter depends on the Kähler potential K. However, as long as we are only interested in counting domain walls and determining their symmetry-breaking properties, this procedure suffices. By contrast, in models with multiple chiral superfields Φ^a one should really solve the ODEs (5.1) in order to determine what types of domain walls exist and what their orbits are in field space. This can be done numerically, using shooting techniques. In our case, the chiral superfields of the effective WZ model will be nothing but the components of the meson field. The meson matrix M is proportional to the identity in supersymmetric vacua, see eqn. (3.2). If its evolution through the wall remains so, namely if its eigenvalues remain equal to one another, then we can reduce to one domain wall equation for a single chiral superfield M, and the image of M(x) can be determined algebraically. If, instead, the eigenvalues split, and the meson matrix is not proportional to the identity along the wall, we have to resort to numerical analysis. In this case the domain wall breaks the SU(F) flavor symmetry and thus its worldvolume theory includes Goldstone fields. Before discussing these two classes of domain walls in more detail, let us address the caveat we have alluded to before.

Domain walls in SYM. The N vacua of SU(N) SYM and their gaugino condensate can be conveniently described using the Veneziano-Yankielowicz superpotential [73]

W_SYM(S) = S [ log(Λ^{3N}/S^N) + N ] .

Here S ∝ Tr W_α W^α is the gaugino superfield. The critical points and the value of the superpotential therein are

⟨S⟩_k = e^{2πik/N} Λ³ ,    W_k = N e^{2πik/N} Λ³ ,

with k = 0, . . . , N − 1. One might then be tempted to use W_SYM(S) as a standard WZ superpotential to construct domain walls interpolating between the N vacua. However, this cannot be done, for several related reasons. First, W_SYM(S) is not the superpotential of a Wilsonian effective action for SYM, because S does not describe the lightest particle. As a result, the superpotential is not a single-valued function of S: it is ambiguous by 2πi S Z, meaning that even its derivative is ambiguous by 2πi Z. Second, if S winds once around the origin, W_SYM shifts by 2πi N S, which is not the minimal ambiguity. This means that the ambiguity is not resolved by going to a connected cover. The full domain of W_SYM is made of N disconnected components, each hosting one of the vacua. Thus, it is not possible to draw a continuous path from one vacuum to another. Third, S is a constrained superfield, because the imaginary part of its top component is the instanton density, whose integral is quantized. Thus, one should be careful in eliminating the auxiliary fields [74] and deriving the vacuum and domain wall equations. As a result, paths can effectively "jump" from one sheet to another, and this is not described within the semiclassical WZ theory. Papers dealing with these problems include [29,43].
Very similar problems arise when studying solitons in the 2d N = (2, 2) CP^{N−1} model, using the effective theory on the Coulomb branch [75]. We will treat the domain walls of SYM as strongly-coupled BPS objects, with thickness of order 1/Λ and central charge given by the exact formula (2.3). As reviewed in Section 2, a k-wall across which the vacuum jumps as S → e^{2πik/N} S hosts a topological sector described by an N = 1 U(k)_{N−k/2, N} theory or, equivalently, a U(k)_{N−k, N} CS theory.

Domain walls in SQCD. Contrary to the case of SYM, in SU(N) SQCD with F flavors there exists a weakly-coupled limit in which domain walls can be reliably constructed, i.e. the small mass regime m_4d ≪ Λ. In this regime we can write a low-energy effective action for the mesons with superpotential (3.1) and Kähler potential induced from the canonical one for the quark superfields Q, Q̃. The superpotential is in general multi-valued, but we can make it single-valued by working on a (connected) covering space of order N − F. We can then use such a WZ-like description to construct the domain walls. As long as the trajectories remain far away from the origin in field space, the WZ description is reliable. For F = N − 1 this is the whole story. The superpotential is a single-valued function of M, the WZ model on the mesonic space is the Wilsonian low-energy effective action, and all BPS domain walls are visible within such a description. The domain wall theory is either trivially gapped (besides the free decoupled center of mass), or it contains the Goldstone fields of a broken symmetry. For F < N − 1, instead, at generic points on the mesonic space there is a residual SYM theory with gauge group SU(N − F). Indeed, we can understand the non-perturbative ADS superpotential [23] as coming from gaugino condensation in the unbroken group; by scale matching we get Λ_{unbroken}^{3(N−F)} = Λ^{3N−F}/det M. Walls along which the vacuum of the unbroken SU(N − F) does not change are essentially WZ walls, in which the SU(N − F) gauge theory is a spectator. It follows that the worldvolume theory (besides the free and decoupled center of mass) is either trivially gapped, or it contains, again, the Goldstone fields of a broken symmetry. A more subtle class of walls, that we call "hybrid", is obtained by combining a continuous evolution on the mesonic space with a shift of vacuum in the unbroken SU(N − F). Such a shift implies that we transit from one sheet of the function W(M) to another, according to the phase shift of the gaugino condensate in the unbroken gauge theory. Let us estimate the widths of the SQCD wall and of the transition in the unbroken SYM. The thickness of the SQCD wall can be estimated from the effective WZ model using the Kähler potential (5.5); interestingly, in the m_4d → 0 limit the domain wall size does not depend on the gauge dynamics. The thickness of the SYM wall instead scales as 1/Λ_unbroken. Using scale matching and the size of M, we find that in the m_4d → 0 limit the thickness of the SYM transition is parametrically smaller than the size of the full domain wall. We conclude that, in that limit, the SYM transition can be treated as sharp, or "instantaneous". Thus, we can construct domain walls in which we abruptly jump from one sheet to another at points along the path. The worldvolume theory on one such domain wall (besides the center of mass) consists of the AV topological sector associated to the jump, times possible Goldstone fields for broken symmetries.
More specifically, for each one of such jumps the worldvolume theory acquires a CS topological sector

U(∆)_{N−F−∆, N−F}    (5.9)

(using N = 0 notation), whenever e^{2πi∆/(N−F)} is the phase shift in the SU(N − F) sector. When we jump from one sheet to another, the value of W changes (at fixed M). Each smooth portion of the profile, satisfying the differential equation (5.1), must map to a straight line with direction e^{iγ} on the complex W-plane, where γ is the phase of the central charge (2.3); similarly, each jump due to a SYM wall must point in the same direction e^{iγ}, because the preserved supercharges are constant throughout the wall. This implies that each smooth portion is in the pre-image of a segment along the straight line connecting W_{−∞} to W_{+∞}. If we draw on the M-plane all pre-images of the straight line, a domain wall will be given by a continuous, piecewise C^∞ path along those pre-images, from one vacuum to another. This procedure will become clearer in the examples we will discuss next. To sum up, we can divide the various domain walls at m_4d ≪ Λ into two groups. The first group consists of symmetry preserving walls, that can be studied algebraically. The associated three-dimensional vacuum is gapped, either trivially (for standard WZ walls) or hosting a topological sector (for hybrid walls, whenever the path on the mesonic space undergoes one or more jumps in the unbroken SU(N − F) SYM). The second group consists of symmetry breaking walls, and it requires the solution of ODEs. The three-dimensional vacuum accommodates a supersymmetric NLSM of Goldstone fields. This can be accompanied, again, by a non-trivial topological theory (for hybrid walls) if a jump in the underlying SU(N − F) SYM occurs. In the following, we will discuss symmetry preserving and symmetry breaking domain walls in turn. In Table 2 we list all BPS domain walls of SU(N) SQCD with F < N, up to rank N = 5, in the regime of small mass m_4d ≪ Λ, as predicted by the worldvolume analysis of Section 4 and already packaged in Table 1. We only indicate k-walls with 1 ≤ k ≤ N/2, since the remaining ones with N/2 < k < N are obtained by applying a parity transformation to (N − k)-walls. In each soliton sector k, the table lists the worldvolume theories on different domain walls (for topological sectors we use here the N = 0 notation); trivially gapped vacua are indicated by "gap". Our goal is to reproduce all such domain walls by the aforementioned 4d analysis. Let us stress that, for fixed soliton sector k, namely for fixed 4d vacua on the left and on the right, we find in general more than one BPS wall. In the three-dimensional worldvolume description, they correspond to different vacua labelled by J. Such walls are physically inequivalent, not related by any symmetry, and yet they are exactly degenerate in tension. This, of course, is an effect of bulk supersymmetry, which fixes the tension in terms of |∆W|.

Symmetry preserving walls

In this section we restrict to domain walls that do not break the SU(F) mesonic symmetry; in other words we take M = M 1_F all along the domain wall trajectory. Notice that this is automatically the case when F = 1. For a WZ theory of a single chiral superfield, the domain wall equation is an ODE of a single variable. If we are only interested in the orbit of the field, i.e.
on the image of the field in the complex M-plane, and not in the precise profile as a function of x, then the problem becomes algebraic: we only need to invert the function W(M). This is equivalent to the fact that Im(e^{−iγ} W) is constant through the wall. Applying the algebraic method we will be able to determine all domain walls, and be sure we are not missing any. It is convenient to express M in units of (Λ^{3N−F}/m_{4d}^{N−F})^{1/N} to make it dimensionless and so that the vacua lie on the unit circle. This operation rescales the Kähler potential as well. In these units the superpotential (3.1) becomes (5.10) and the vacua (3.2) sit on the unit circle. Setting to one the remaining dimensionful constant, the restriction of the superpotential (5.10) to the symmetry-preserving slice is a multi-valued function of M with N − F sheets above each point (corresponding to the different vacua of the unbroken SU(N − F) SYM theory). It turns out that the sheets arrange into d = gcd(N, F) disconnected components; these components only touch at the origin, which however is a singular point and should be excised. We stress that the covering space splits into disconnected components only after restricting to the symmetry-preserving slice. For convenience, we can introduce a covering variable X on which the superpotential is single-valued. We connect the corresponding points on the W-plane by a straight line (with direction e^{iγ}), and compute its pre-image (consisting of N parts) on the full domain. If there exists a continuous curve on the covering space X connecting the two vacua, this is a standard WZ wall. Its worldvolume theory is trivially gapped (besides the free center of mass) because no continuous symmetry gets broken. On the contrary, there can exist curves that are continuous on the M-plane but include jumps on the covering spaces X^{(a)}, either within the same domain or from one domain to another. These are walls that combine the WZ evolution with sharp (in the m_4d → 0 limit) AV walls in the unbroken SU(N − F) gauge theory, as previously discussed. For each jump ∆, the worldvolume theory acquires a topological sector U(∆)_{N−F−∆, N−F}. As we will see in the examples below, we observe that walls involve at most one jump.

Examples

Consider first SU(2) SQCD with F = 1. The theory has two vacua, and so there is only one possible soliton sector, k = 1. Since F = 1, the meson field has only one component, all walls can be found algebraically, and none of them can break the flavor symmetry. Moreover, since F = N − 1, there is no unbroken gauge sector on the mesonic space, the superpotential is single-valued, all walls are visible in the WZ description, and their worldvolume theory cannot host any topological sector. As shown in Figure 2 (left), the pre-image of the straight line on the W-plane gives two domain walls (blue and yellow in the figure) whose worldvolume theory is trivially gapped. This agrees with Table 2. Consider now SU(3) SQCD. The theory has three vacua, so there are two soliton sectors, k = 1, 2. However, the sector k = 2 is the parity reversal of the sector k = 1 and thus we only study the latter. For F = 1 (Figure 2, center) all domain walls can be found algebraically. We find a WZ wall (blue in the figure) whose worldvolume theory is trivially gapped. We also find a wall that involves the jump from one sheet of W to the other (yellow followed by green in the figure). This corresponds to a jump of vacuum (indicated as ∆ = 1) in the unbroken SU(2) gauge theory, giving rise to the topological theory U(1)_2.
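The algebraic method just used lends itself to a compact numerical implementation. The sketch below assumes a rescaling-equivalent form of the slice superpotential, W(M) = F M + (N − F) M^{−F/(N−F)} (normalizations may differ from (5.10)-(5.11) by inessential constants); with the covering variable X = M^{1/(N−F)}, the condition W = w becomes polynomial, so all pre-images follow from a root finder.

```python
# Hedged sketch of the algebraic method for symmetry-preserving walls.
# Assumed conventions: on the slice M = M*1_F the dimensionless
# superpotential is W(M) = F*M + (N-F)*M**(-F/(N-F)); with the covering
# variable X = M**(1/(N-F)), W(X) = w becomes the polynomial equation
#   F*X**N - w*X**F + (N-F) = 0,
# whose N roots are the pre-image of the point w.
import numpy as np

N, F, k = 3, 1, 1                      # example: SU(3) SQCD with 1 flavor
Wvac = lambda j: N * np.exp(-2j * np.pi * j * F / N)   # W at vacuum j

def preimage(w):
    c = np.zeros(N + 1, dtype=complex)
    c[0], c[N - F], c[N] = F, -w, N - F    # coeffs of X^N, X^F, X^0
    return np.roots(c)

# follow the N root branches along the straight line from W_0 to W_k;
# greedy nearest-neighbour matching keeps each branch continuous
ts = np.linspace(0, 1, 2000)
branches = preimage(Wvac(0))[None, :].repeat(len(ts), axis=0)
for i, t in enumerate(ts[1:], start=1):
    roots = preimage((1 - t) * Wvac(0) + t * Wvac(k))
    prev, used = branches[i - 1], set()
    for b in range(N):
        j = min((j for j in range(N) if j not in used),
                key=lambda j: abs(roots[j] - prev[b]))
        used.add(j)
        branches[i, b] = roots[j]

x0, xk = 1.0, np.exp(2j * np.pi * k / N)   # vacua on the X covering space
for b in range(N):
    if abs(branches[0, b] - x0) < 1e-6 and abs(branches[-1, b] - xk) < 1e-3:
        print(f"branch {b}: candidate WZ wall orbit (no jump)")
```

By construction this detects only continuous (standard WZ) orbits; hybrid walls, which jump between branches, must be assembled by gluing pre-image segments, as described in the text.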
For F = 2 (Figure 2, right) all domain walls are of WZ type. Restricting to symmetry preserving walls, we find one (blue in the figure) whose worldvolume theory is trivially gapped. This matches, again, with Table 2, as far as symmetry preserving walls are concerned. Consider then SU(4) SQCD. This theory has four vacua and we study the soliton sectors k = 1, 2 (the sector k = 3 is the parity reversal of the sector k = 1). For F = 1 (Figure 3, left) all domain walls can be found algebraically. In the sector k = 1 we find a trivially gapped wall (blue) and a wall with U(1)_3 topological sector (yellow followed by green) from the ∆ = 1 jump in the unbroken SU(3). In the sector k = 2 we find a wall with topological sector U(2)_{1,3} ≅ U(1)_{−3} (blue followed by green) from a ∆ = 2 jump in the unbroken SU(3), and a wall with topological sector U(1)_3 (yellow followed by red). For F = 2 (Figure 3, center) the domain of the restriction of the superpotential to symmetry preserving configurations has two disconnected components. In the sector k = 1 the two vacua live on disconnected domains, and symmetry preserving walls must necessarily involve a jump from one sheet to the other. Indeed, we find one such wall (yellow followed by green) hosting a topological sector U(1)_2. In the sector k = 2 the two vacua live on the same domain, and we find two WZ walls (blue and yellow) with trivially gapped worldvolume theory. Finally, for F = 3 (Figure 3, right) all domain walls are of WZ type. Restricting to the symmetry preserving ones, we find one (blue) in the sector k = 1, and none in the sector k = 2. All these results match with Table 2. As a last example, consider SU(5) SQCD. The independent soliton sectors are k = 1, 2, while k = 3, 4 are their parity reversals. We report our results in some selected cases only. For F = 1, in the sector k = 1 (Figure 4, left) we find a WZ wall (blue) with trivially gapped vacuum, and a wall (yellow followed by green) with topological sector U(1)_4. In the sector k = 2 (Figure 4, center) we find a wall (blue followed by green) with topological sector U(2)_{2,4} and a wall (yellow followed by red) with topological sector U(1)_4. For F = 2, in the sector k = 2 (Figure 4, right) we find a WZ wall (blue) with trivially gapped vacuum, and a wall (yellow followed by green) with topological sector U(2)_{1,3} ≅ U(1)_{−3}. We find again full agreement with Table 2.

Symmetry breaking walls

Let us now discuss the more general type of domain walls, those through which the meson field M is not proportional to the identity, despite being proportional to the identity in the vacua on the two sides. With more than one independent component, we have no option but to directly solve the differential equations (5.1). Let us assume that, at least at one point along the domain wall profile, the meson matrix is diagonalizable. We can use an SU(F) flavor rotation to bring M to a diagonal form at that point. Then, the ODEs (5.1) imply that M(x) remains diagonal for all values of the spatial coordinate x. Indeed, the Kähler metric that follows from the Kähler potential (5.5), evaluated at points where M_{ij} = λ_j δ_{ij} is a diagonal matrix and where M = M†, takes a diagonal form. Since at these points the gradient of the superpotential is also a diagonal matrix, it follows from eqn. (5.1) that the spatial derivative of M(x) is diagonal as well. We can thus restrict to ODEs for the diagonal components λ_j(x).
As before, it is convenient to rescale the meson field M and the spatial coordinate x (as well as to possibly shift the phase of the central charge) in order to obtain dimensionless differential equations. We decompose the eigenvalues λ_j into radial and polar parts, λ_j = ρ_j e^{iφ_j}. This gives a system of first-order differential equations for ρ_j(x) and φ_j(x), eqns. (5.19). This system is of Hamiltonian type: ∂_x ρ_j = ∂H/∂φ_j and ∂_x φ_j = −∂H/∂ρ_j, with Hamiltonian H = Im(e^{−iγ} W). Consistently, Im(e^{−iγ} W) is a "constant of motion" along the domain wall profile. Let us recall that the effective superpotential on the mesonic space is multi-valued, due to the unbroken SU(N − F) SYM theory at generic points. We can work in a connected covering space, with covering order N − F. On the covering space, the N vacua (5.22) are labelled by k = 0, . . . , N − 1. Domain walls of the standard WZ type correspond to solutions to eqns. (5.19) that are continuous on the covering space. The worldvolume theory on such domain walls does not include any topological sector. For more general hybrid domain walls, at certain spatial locations x_* the profile jumps from one sheet of the covering to another (the λ_j's remain continuous). This corresponds to a shift by the phase e^{2πi∆/(N−F)}, i.e. a shift of vacuum in the unbroken SU(N − F) SYM, and the worldvolume theory includes a topological sector U(∆)_{N−F−∆, N−F}. A non-trivial prediction of the 3d analysis is that, even for symmetry breaking walls, solutions can accommodate at most one jump, as we found for symmetry preserving walls. When the meson field M is not proportional to the identity matrix along the profile, namely when the eigenvalues λ_j are not all equal, the domain wall spontaneously breaks the flavor symmetry SU(F). Another non-trivial prediction of the analysis of Section 4 is that the eigenvalues split at most into two groups. Calling n_± the number of eigenvalues in the first and second group, respectively, with n_+ + n_− = F, the worldvolume theory on the domain wall hence includes an NLSM of Goldstone fields with Grassmannian target Gr(n_+, F) (equivalently, Gr(n_−, F)). It would be nice to understand analytically why there cannot be solutions to (5.19) in which the eigenvalues organize into three or more distinct groups, or which undergo two or more jumps on the covering space. In the figures that follow, we draw the (smooth) orbits in the complex plane of solutions to the differential equations (5.19), in which n_+ eigenvalues are equal to λ_+ and n_− are equal to λ_− (n_+ + n_− = F).

Examples

In Section 5.1.1 we were able to determine algebraically the full set of symmetry preserving domain wall solutions for the cases considered. For symmetry breaking walls we need to solve ODEs. This can be done numerically using a shooting technique. We will be able to explicitly construct all domain wall solutions predicted by the 3d analysis of Section 4 and summarized in Table 2 for low ranks. However, we will not be able to prove that no other domain wall solutions can exist. Without loss of generality, a k-wall connects the vacuum at {λ_j = 1} to the vacuum at {λ_j = e^{2πik/N}}. To construct numerical solutions it is convenient to set the origin x = 0 in the middle of the wall. We divide the eigenvalues into two groups λ_± of n_± elements, respectively. By reflection symmetry with respect to the origin, at x = 0 we set the eigenvalue phases so that λ_±(0) ∝ ±e^{iπk/N}. The known value of the constant of motion H enforces a relation between ρ_+(0) and ρ_−(0). This leaves us with a shooting problem with one initial condition at x = 0, to be found such that the eigenvalue profiles hit the vacua at x = ±∞.
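As an illustration, here is a minimal shooting sketch for the first example treated below, SU(3) SQCD with F = 2, where the superpotential is single-valued so no jumps arise. Two simplifications are assumptions on our part: a flat Kähler metric is used in place of the one induced by (5.5) (this changes the profile but not the shooting logic), and the ansatz λ_±(0) ∝ ±e^{iπk/N} is our reading of the reflection-symmetry boundary condition stated above.

```python
# Hedged shooting sketch for symmetry breaking walls: SU(3), F = 2, k = 1.
# Assumptions: flat Kahler metric, so d(l_j)/dx = e^{i gamma} conj(dW/dl_j),
# with dimensionless W(l1, l2) = l1 + l2 + 1/(l1*l2) (single-valued here).
# Grid, brackets, and integration time may need tuning; near-misses run
# away since the vacua are only reached asymptotically.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

N, F, k = 3, 2, 1
om = np.exp(2j * np.pi * k / N)          # target vacuum: l1 = l2 = om
dW = lambda l1, l2: np.array([1 - 1/(l1**2 * l2), 1 - 1/(l1 * l2**2)])
gamma = np.angle(3 * om - 3)             # W = 3*lambda in the vacua

def rhs(x, y):
    l1, l2 = y[0] + 1j*y[1], y[2] + 1j*y[3]
    v = np.exp(1j * gamma) * np.conj(dW(l1, l2))
    return [v[0].real, v[0].imag, v[1].real, v[1].imag]

def endpoints(rp):
    # conservation of Im(e^{-i gamma} W) fixes rho_-(0) given rho_+(0);
    # for this N, F, k it reduces to rp - rm + 1/(rp*rm) = N cos(pi k/N)
    rm = brentq(lambda r: rp - r + 1/(rp*r) - N*np.cos(np.pi*k/N), 1e-3, 1e3)
    ph = np.exp(1j * np.pi * k / N)
    y0 = [(rp*ph).real, (rp*ph).imag, (-rm*ph).real, (-rm*ph).imag]
    ends = []
    for T in (+40, -40):                 # shoot towards x = +inf and -inf
        y = solve_ivp(rhs, [0, T], y0, rtol=1e-9, atol=1e-12).y[:, -1]
        ends.append(np.array([y[0] + 1j*y[1], y[2] + 1j*y[3]]))
    return ends

def miss(rp):
    try:
        fwd, bwd = endpoints(rp)
        return np.linalg.norm(fwd - om) + np.linalg.norm(bwd - 1.0)
    except Exception:
        return np.inf

grid = np.linspace(0.5, 2.5, 101)
best = min(grid, key=miss)
print("rho_+(0) ~", best, "  miss =", miss(best))
```

The conserved quantity Im(e^{−iγ} W) is used exactly as in the text: it fixes ρ_−(0) in terms of ρ_+(0), leaving a one-parameter shooting problem.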
For domain walls with no jump, symmetry guarantees that a solution that hits the vacuum at x = −∞ also hits the vacuum at x = +∞. For domain walls with a jump at x = 0, instead, we solve the shooting problem on the half-line x > 0; the jump must then be such that the solution automatically hits the other vacuum at x = −∞. Consider first SU(3) SQCD with F = 2. Table 2 predicts a symmetry breaking domain wall with n_+ = n_− = 1 in the k = 1 soliton sector. We draw the corresponding numerical solution in Figure 5 (left), in which the orbits of the two eigenvalues λ_± on the complex plane are in blue and yellow, respectively. (Figure 6 shows symmetry breaking k-walls for N = 4, 5 and F = 2 flavors; the central and right panels contain a continuous but not smooth profile, which involves a jump from one sheet to another, with the value of ∆ indicated. Solid curves represent the orbits followed by the eigenvalues λ_±, while dashed curves are their smooth continuation as solutions to (5.19).) Consider now SU(4) SQCD. For F = 3, the superpotential is a single-valued function and domain walls do not involve jumps. As predicted by Table 2, in the k = 1 soliton sector we find one domain wall with n_+ = 2, n_− = 1 (Figure 5, center). In the k = 2 soliton sector we find two domain walls with n_+ = 2, n_− = 1: one (Figure 5, right) is the complex conjugate of the other. For F = 2 there is an unbroken SU(2) gauge theory on the mesonic space, the superpotential is double-valued, and jumps are possible. In the k = 1 soliton sector we find a continuous domain wall with n_+ = n_− = 1 (Figure 6, left). In the k = 2 soliton sector, instead, we find a domain wall with n_+ = n_− = 1 that involves a ∆ = 1 jump (Figure 6, center): one eigenvalue draws the blue followed by green orbit, while the other one draws the yellow followed by red orbit. The worldvolume theory is thus a P¹ NLSM times a U(1)_2 topological sector, as predicted again in Table 2. Symmetry breaking domain walls of SQCD with higher gauge rank can be studied similarly, finding perfect agreement with Table 2. As a selected example, SU(5) SQCD with F = 2 in the k = 2 soliton sector has a domain wall with n_+ = n_− = 1 and a ∆ = 1 jump in the unbroken SU(3) gauge theory (Figure 6, right). Its worldvolume theory is thus U(1)_3 × P¹.

Domain walls of Sp(N) SQCD

In this section we extend the previous discussions to SQCD with symplectic gauge group. As we will see, the story is very similar to the SU(N) case. Very much like SU(N) SQCD with F < N flavors, in Sp(N) SQCD a non-perturbative runaway effective superpotential on the mesonic space is generated if F < N + 1 [23,76], where M_{ij} = Ω_{αβ} Q^α_i Q^β_j is the anti-symmetric 2F × 2F mesonic matrix and Pf stands for the Pfaffian. (In our conventions Sp(1) ≡ USp(2) ≅ SU(2); the Pfaffian of a 2F × 2F antisymmetric matrix M is Pf M = (1/(2^F F!)) ε^{i_1···i_{2F}} M_{i_1 i_2} ··· M_{i_{2F−1} i_{2F}}, so that (Pf M)² = det M.) As before, we turn on a diagonal mass term for the flavors, where Ω_{ij} is the symplectic form of Sp(F) with i, j = 1, . . . , 2F (in the following, we will indicate all symplectic forms as Ω, irrespective of their dimension, and will not distinguish between upper and lower indices). The mass term stabilizes the runaway directions. It also explicitly breaks the SU(2F) flavor symmetry to Sp(F), while leaving a discrete Z_{2(N+1)} R-symmetry unbroken. The mesons transform in the rank-two antisymmetric representation of Sp(F). With the full effective superpotential on the mesonic space, eqn. (6.3), the theory develops gaugino condensation giving rise to N + 1 gapped vacua, corresponding to the spontaneous R-symmetry breaking Z_{2(N+1)} → Z_2.
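Since the Pfaffian enters the runaway superpotential through Pf M, a quick numerical sanity check of the definition just quoted is easy to set up; the recursive expansion below is adequate for the small matrices relevant here, though not an efficient general-purpose implementation.

```python
# Sketch: Pfaffian of a 2n x 2n antisymmetric matrix via recursive
# expansion along the first row, plus the check (Pf M)^2 = det M.
import numpy as np

def pfaffian(M):
    n = M.shape[0]
    if n == 0:
        return 1.0
    if n % 2 == 1:
        return 0.0          # odd antisymmetric matrices have Pf = 0
    total = 0.0
    for j in range(1, n):
        # remove rows/columns 0 and j, keeping track of the sign
        idx = [i for i in range(n) if i not in (0, j)]
        total += (-1) ** (j - 1) * M[0, j] * pfaffian(M[np.ix_(idx, idx)])
    return total

A = np.random.randn(6, 6)
M = A - A.T                  # random real antisymmetric matrix
print(np.isclose(pfaffian(M) ** 2, np.linalg.det(M)))   # True
```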
The N + 1 vacua are rotated into each other by the broken generators; we want to study domain walls interpolating between these vacua.

Domain wall trajectories. The mathematical problem of studying domain wall solutions for Sp(N) SQCD with F < N + 1 flavors is equivalent to the one for SU(N + 1) SQCD with F < N + 1 flavors. Indeed, upon diagonalizing the 2F × 2F antisymmetric mesonic matrix and rescaling to dimensionless quantities, eqn. (6.3) takes the same form as the effective superpotential we discussed in Section 5, upon shifting N → N + 1 everywhere there. In other words, the ODEs that determine the domain wall trajectories are the same for SU(N + 1) and Sp(N) gauge groups. Hence, for small values of the flavor masses, m_4d ≪ Λ, we obtain the same structure of multiple vacua corresponding to different classes of domain walls, preserving or partially breaking the Sp(F) flavor symmetry, and with or without a topological sector. For large flavor masses, instead, the domain wall theory should reduce to that of pure Sp(N) SYM. Finally, at some value m*_4d of the 4d mass (that could depend on N, F, k, and that corresponds to m = 0 in the three-dimensional field theory description), a single second-order phase transition should occur, where multiple vacua coalesce. In the following, we present our proposal for the 3d theory living on k-walls of Sp(N) SQCD with F < N + 1 flavors, and show that the domain walls we find are in one-to-one correspondence with those of SU(N + 1) SQCD with F < N + 1 flavors. The difference is that the TQFTs are CS theories with Sp(k) gauge group instead of U(k), and the supersymmetric NLSMs have target spaces given by quaternionic Grassmannians instead of Gr(J, F).

Three-dimensional worldvolume theory

The 3d theory we propose to describe the k-walls of Sp(N) SQCD with F flavors (for 0 < k < N + 1 and F < N + 1) is an Sp(k) gauge theory, eqn. (6.7), with a rank-2 antisymmetric scalar multiplet Φ and F fundamental scalar multiplets X, and no bare superpotential involving Φ. We indicate the fundamentals by the matrix X_{ai}, where a = 1, . . . , 2k is the gauge index and i = 1, . . . , F is the flavor index. As usual when dealing with pseudo-real representations, it is convenient to double the number of fundamentals: we introduce X_{aI}, taking I = 1, . . . , 2F, and then impose the reality condition X_{aI} = Ω_{ab} Ω_{IJ} X*_{bJ}. This makes manifest the Sp(F) flavor symmetry that acts on the F fundamentals. Gauge invariants are constructed in terms of the combinations (X²_Ω)_{IJ} = X_{aI} Ω_{ab} X_{bJ}, which are antisymmetric in IJ. The representation of Φ breaks into two irreducible representations: a singlet (proportional to Ω), which is the Goldstone mode associated to broken translations, and the Ω-traceless antisymmetric representation, which classically gives rise to flat directions. Quantum corrections lift those flat directions, generating a negative mass around Φ = 0. Integrating out the traceless antisymmetric (whose quadratic Casimir is k − 1) we obtain the simpler low-energy description (6.9). Notice that −(m/4) Tr X²_Ω = (m/2) Σ_{ai} X_{ai} X*_{ai}. We assume α > −min(k, F)^{−1}. Before discussing its vacuum structure, let us notice that our proposal already passes a non-trivial check. As we show below, this theory enjoys a single gapped vacuum for m > 0 and multiple vacua for m < 0, with a second-order phase transition at m = 0, very much like in SU(N) SQCD.
Due to the broken R-symmetry, k-walls are the parity reversal of (N + 1 − k)-walls. Hence, according to (6.9), this should imply the following 3d N = 1 duality, eqn. (6.11), to hold at the phase transition:

Sp(k) CS theory with F flavors and quartic W ←→ Sp(N + 1 − k) CS theory with F flavors and quartic W .

The sign of the quartic couplings is equal to the sign of the CS level. This is indeed one of the 3d dualities recently proposed in [18] and expected to be valid precisely in the regime of interest, i.e. 0 < k < N + 1. Notice that for k = 1, the dual description is a three-dimensional N = 1 CS theory Sp(N)_{−1−(N+1)/2+F/2} with F flavors: intriguingly, the gauge group is the same as the 4d one, suggesting a possible connection with an interface operator. In the following we will provide further checks of this duality, showing that as the mass parameter m is turned on and varied from positive to negative values, the vacuum structure of the theory on the left-hand side of (6.11) is the same as that of the theory on the right. Notice that a mass term −Tr X²_Ω on the left is mapped to a term Tr Y²_Ω on the right. We will sometimes call theory A the theory on the left-hand side of (6.11) and theory B the one on the right. Since most of the logic is the same as in Section 4, in what follows we will skip all unnecessary details. Let us now discuss the vacuum structure of the theory.

• m > 0. It is not difficult to see that, in this regime, for both theories A and B there exists a unique vacuum, and the duality (6.11) reduces to the 3d N = 1 duality (6.12). This duality is known to hold since, upon integrating out the massive gaugini, it boils down to the level/rank duality [8] valid for 0 ≤ k ≤ N + 1. This provides a simple check of the duality (6.11).

Domain walls of Sp(N) SYM. As a consequence of our proposal, we find that the N = 1 theory on the left-hand side of (6.12) describes a k-wall of 4d N = 1 pure Sp(N) SYM. The duality (6.12) represents the fact that a k-wall is the parity reversal of an (N + 1 − k)-wall. Reinstating the massive scalar multiplet Φ, which describes the center-of-mass motion as well as the breaking of a k-wall into k 1-walls, we have

3d N = 1 Sp(k)_{N+1} gauge theory with a (rank-2 antisymmetric) scalar multiplet Φ .    (6.14)

These are the natural generalizations of the Acharya-Vafa domain wall theories to the case of four-dimensional N = 1 Sp(N) SYM.

• m < 0. In this regime we get vacua where J flavors take a VEV, with J ≤ min(k, F). In order to avoid confusion, for theory B we parameterize the vacua with the integer H ≤ min(N − k + 1, F). On a J-vacuum, (F − J) flavors become massive and the CS level gets shifted accordingly. In each vacuum the low energy theory is the product of an N = 1 topological sector and a NLSM; in theory A we find the vacua (6.15). Requiring the effective CS sector to preserve supersymmetry, we find that supersymmetric vacua correspond to J ≥ F + k − N − 1. Therefore, the full set of vacua is parameterized by J in the interval

max(0, F + k − N − 1) ≤ J ≤ min(k, F) .

It is easy to check, using the level/rank duality (6.12), that these vacua exactly match with the supersymmetric vacua of theory B, upon the identification H = F − J. We can collect into a table all vacua (6.15) that we found in the theories (6.9) at m < 0, as we vary k with 0 < k < N + 1. They describe all k-walls of Sp(N) SQCD with F < N + 1 flavors, in the regime m_4d ≪ Λ.
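The interval just derived makes the structure of the table easy to generate and test. The sketch below enumerates the conjectured vacua of a k-wall of Sp(N) SQCD (the CS levels of the Sp(k − J) topological sectors, fixed by (6.15), are omitted, and "HGr(J, F)" is a schematic label for the quaternionic Grassmannian NLSM) and verifies the parity property that k-walls and (N + 1 − k)-walls have vacua exchanged by J ↔ F − J.

```python
# Hedged sketch: enumerate the conjectured k-wall vacua of Sp(N) SQCD with
# F flavors using the interval max(0, F+k-N-1) <= J <= min(k, F), and
# verify the parity exchange J <-> F - J between k- and (N+1-k)-walls.

def sp_wall_vacua(N, F, k):
    assert 0 < F < N + 1 and 0 < k < N + 1
    return [(J, f"HGr({J},{F})", f"Sp({k - J})")
            for J in range(max(0, F + k - N - 1), min(k, F) + 1)]

def parity_ok(N, F):
    for k in range(1, N + 1):
        J1 = {J for J, _, _ in sp_wall_vacua(N, F, k)}
        J2 = {F - J for J, _, _ in sp_wall_vacua(N, F, N + 1 - k)}
        if J1 != J2:
            return False
    return True

print(sp_wall_vacua(4, 2, 2))
print(all(parity_ok(N, F) for N in range(2, 8) for F in range(1, N + 1)))
```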
Since the gauge factor in (6.15) only depends on k − J, while the NLSM only depends on J, we can set up a table, analogous to Table 1 for SU(N) SQCD, where J runs from 0 to F horizontally, while k − J runs from 0 to N − F + 1 vertically. The result is Table 3 (listing the domain walls of massive 4d N = 1 Sp(N) SQCD with F < N + 1 flavors, i.e. the behavior of the conjectured 3d dynamics for m < 0, as k and J are varied), which is (N − F + 2) × (F + 1), and it is the same as the table of domain walls of 4d N = 1 SU(N + 1) SQCD with F flavors, provided one replaces the U(n) gauge theories with Sp(n) ones, and the Grassmannians with quaternionic Grassmannians. Notice that taking k = 0 or k = N + 1, the theory (6.9) has a single, trivially gapped vacuum for both m ≷ 0, and there is no phase transition at m = 0. This corresponds to the fact that, formally, for k = 0 or k = N + 1 there is no domain wall at all. These two cases correspond to the two empty cells in Table 3. Performing a similar analysis as was done in Section 5 for SU(N) SQCD, one can show that all vacua of the 3d theory (6.9) precisely match those obtained by solving the BPS domain wall equations of Sp(N) SQCD in the small mass regime.

Supersymmetry enhancement. We conjecture that the theories (6.9) have enhanced N = 2 supersymmetry at the CFT point for F = 1. The special case N = F = 1, corresponding to the domain wall of 4d SU(2) SQCD with 1 flavor, was already discussed around (4.21) and conjectured to have enhanced N = 4 supersymmetry. To understand the supersymmetry enhancement, we need to list the operators invariant under the symmetries that we can construct with X. There is only one quadratic operator invariant under Sp(k) × U(F):

O_(2) ≡ X_{ai} X*_{ai} .    (6.19)

It turns out that this is automatically invariant under Sp(k) × Sp(F). Indeed, using the extended notation, we have O_(2) = −(1/2) Tr X²_Ω, eqn. (6.20). Next, there are three quartic operators that are invariant under Sp(k) × U(F):

O²_(2) = X_{ai} X*_{ai} X_{bj} X*_{bj} ,  O_(4A) ≡ X_{ai} X*_{aj} X_{bj} X*_{bi} ,  O_(4B) ≡ X_{ai} Ω_{ab} X_{bj} X*_{ci} Ω_{cd} X*_{dj} .    (6.21)

The first one is the "double trace" operator, and it preserves Sp(k) × Sp(F). One combination of the other two is the "single trace" operator that preserves Sp(k) × Sp(F): Tr X²_Ω X²_Ω. For small values of k or F, some of these operators coincide. For k = 1, the combination X_{aI} Ω_{IJ} X_{bJ} is a 2 × 2 antisymmetric matrix and must be proportional to Ω_{ab}. Therefore, there are two quartic operators that preserve at least Sp(1) × U(F), but one linear combination preserves Sp(1) × Sp(F). On the other hand, for F = 1, directly from (6.21) we see that O_(4A) reduces to O²_(2), while O_(4B) vanishes identically. Therefore, there is only one quartic operator invariant under at least Sp(k) × U(1), and it is automatically invariant also under Sp(k) × Sp(1). In other words, insisting on Sp(k) gauge invariance, quartic operators cannot break Sp(F) to U(F) when F = 1. Using again the extended notation we have

X_{aI} X_{bJ} Ω_{ab} = O_(2) Ω_{IJ} ,    (6.26)

which gives Tr X²_Ω X²_Ω = 2 O²_(2), compatible with the relations above. Now, consider a 3d N = 2 Sp(k) CS gauge theory with F flavors X in the fundamental representation. In the absence of a holomorphic superpotential, the theory has U(F) flavor symmetry, unless F = 1. Indeed, in N = 1 notation there is a bare real superpotential coupling the matter to the adjoint real multiplet Ψ, where h is the CS level. The real multiplet Ψ is in the adjoint representation of Sp(k):

Ψ = Ψ† = Ω Ψ^T Ω .    (6.28)

We proceed with integrating Ψ out, but we should be careful about the constraint (6.28): it implies a projection on the quartic operators that are generated. The N = 1 theories (6.9) with F = 1 have a single quartic superpotential term and preserve Sp(1) flavor symmetry.
It is very plausible, and we conjecture, that the coefficient of that term flows in the IR to the N = 2 point. The case of Sp(k) CS gauge theory with 1 flavor was studied in [67] at large CS level, and it was shown that the N = 2 point is indeed attractive. For F > 1, the N = 1 theories (6.9) have Sp(F) global symmetry and thus the RG flow cannot reach the N = 2 point, which has only U(F) global symmetry. In other words, the RG flow does not generate the Sp(F)-breaking term O_(4B), which is instead present at the N = 2 point. From this point of view, the theories with k = 1 are not special and do not enjoy supersymmetry enhancement.
Deep Insights Into the Plastome Evolution and Phylogenetic Relationships of the Tribe Urticeae (Family Urticaceae)

Urticeae s.l., a tribe of Urticaceae well-known for their stinging trichomes, consists of more than 10 genera and approximately 220 species. Relationships within this tribe remain poorly known due to the limited molecular and taxonomic sampling in previous studies, and chloroplast genome (CP genome/plastome) evolution is still largely unaddressed. To address these concerns, we used genome skimming data (CP genome and nuclear ribosomal DNA, 18S-ITS1-5.8S-ITS2-26S; 106 accessions) for the very first time to attempt resolving the recalcitrant relationships and to explore chloroplast structural evolution across the group. Furthermore, we assembled a taxon-rich two-locus dataset of trnL-F spacer and ITS sequences across 291 accessions to complement our genome skimming dataset. We found that Urticeae plastomes exhibit the tetrad structure typical of angiosperms, with sizes ranging from 145 to 161 kb and encoding a set of 110-112 unique genes. The studied plastomes have also undergone several structural variations, including inverted repeat (IR) expansions and contractions, inversion of the trnN-GUU gene, losses of the rps19 gene and the rpl2 intron, and the proliferation of multiple repeat types; 11 hypervariable regions were also identified. Our phylogenomic analyses largely resolved major relationships across tribe Urticeae, supporting the monophyly of the tribe and most of its genera except for Laportea, Urera, and Urtica, which were recovered as polyphyletic with strong support. Our analyses also resolved with strong support several previously contentious branches: (1) Girardinia as a sister to the Dendrocnide-Discocnide-Laportea-Nanocnide-Zhengyia-Urtica-Hesperocnide clade and (2) Poikilospermum as sister to the recently recircumscribed Urera sensu stricto. Analyses of the taxon-rich, two-locus dataset showed lower support but were largely congruent with results from the CP genome and nuclear ribosomal DNA dataset. Collectively, our study highlights the power of genome skimming data to ameliorate phylogenetic resolution and provides new insights into phylogenetic relationships and chloroplast structural evolution in Urticeae.

Although our understanding of evolutionary relationships of the tribe Urticeae has improved in recent years, some important nodes remain unresolved. For example, the phylogenetic position of Laportea remains contentious in previous studies. Wu et al. (2013), using seven combined markers from the mitochondrial, nuclear, and chloroplast genomes, recovered Laportea as sister to a clade comprising Obetia-Urera-Touchardia and Poikilospermum, though with weak support (Figure 1A). Subsequent studies, however, have supported alternative, conflicting resolutions of Laportea (Figures 1B-D; Kim et al., 2015; Wu et al., 2018; Huang et al., 2019), probably due to the limited sampling. The placement of Poikilospermum also remains uncertain; although it has consistently been placed sister to Urera, support for this was either lacking (Figures 1A-C; Wu et al., 2013, 2018; Kim et al., 2015; Wells et al., 2021) or low (Figure 1D; Huang et al., 2019). The genus Hesperocnide, although supported as monophyletic in earlier studies, was recently recovered as polyphyletic by Huang et al. (2019), suggesting that further investigation of this genus may be required.
Conflict concerning the placement of Girardinia further compounds taxonomic problems within Urticeae; several studies support its relationship with Dendrocnide-Discocnide, but without support (Figures 1A,B; Wu et al., 2013; Kim et al., 2015), while others (Wu et al., 2018; Huang et al., 2019) have recovered Girardinia as sister to a clade comprising Dendrocnide-Discocnide-Laportea-Nanocnide-Zhengyia-Urtica-Hesperocnide, albeit also with low support (Figures 1C,D). These uncertainties around phylogenetic relationships within Urticeae are likely due to limited taxon or genic sampling in previous studies. Therefore, a broadly sampled phylogenomic study should offer a useful framework for resolving these outstanding problems and guiding revised taxonomic treatments of the tribe. Chloroplasts are ubiquitous organelles in plants with tractable attributes that make them highly suitable for use in phylogenetic and phylogeographic studies (Demenou et al., 2020; Silverio et al., 2021; Simmonds et al., 2021; Wang et al., 2021). In Urticaceae, whole chloroplast genomes have proven to be indispensable for sequence variation exploration (Wang et al., 2020b; Li et al., 2021). More broadly, studies of chloroplast genomes have been useful for understanding molecular evolutionary patterns of gene duplication, loss, rearrangement, and transfer across angiosperms (Yan et al., 2018; Do et al., 2020; Liu et al., 2020a; Oyebanji et al., 2020), though discordant relationships may be caused by plastid capture and other evolutionary processes. For the present study, we sequenced and examined chloroplast genomes (CP genome/plastome) of the tribe Urticeae in order to explore plastome structural evolution in the tribe and to reconstruct the first-ever full plastome phylogeny for the tribe. Furthermore, we generated a robustly sampled dataset of Urticeae (comprising 291 accessions) aimed at reconstructing a more taxonomically rich phylogeny for the tribe. Specifically, we aimed to (1) characterize structural changes in Urticeae plastomes, (2) resolve deep relationships in the tribe using different data partitioning strategies, and (3) evaluate and update existing classifications for Urticeae in the light of our phylogenetic results based on both plastome and nuclear data.

Taxon Sampling

In this study, we sampled a total of 106 accessions, comprising 90 ingroup accessions (58 spp. in 12 genera) from the tribe Urticeae, plus 12 accessions (12 spp. in 11 genera) from other Urticaceae tribes and four (3 spp. in 3 genera) from outside the family as outgroups. These represent the genome skimming (CP genome and nuclear ribosomal DNA, 18S-ITS1-5.8S-ITS2-26S) dataset for the phylogenetic analyses (Supplementary Table 1). Of the 106 accessions, 57 representative accessions (each a different taxon) were selected for CP genome structural analyses. To produce a more comprehensive phylogenetic framework for the tribe Urticeae, we also generated a new two-locus dataset of 291 accessions (145 spp. in 26 genera) based on ITS and the trnL-F intergenic spacer. The ITS and trnL-F intergenic spacer dataset was sampled based on maximum taxon data availability in the NCBI database. Of the 291 accessions included, 187 sequences were obtained from NCBI GenBank while the remaining were newly sequenced for this study. Information on the plant material (collection localities and voucher specimen numbers) and the associated GenBank accessions are listed in Supplementary Table 1.
DNA Extraction and Sequencing

A modified cetyl trimethyl ammonium bromide (CTAB) protocol (Doyle and Doyle, 1987) was used to extract total DNA from both silica gel-dried leaves and herbarium samples. Genomic DNA from each sample was then assessed for quality and quantity using both a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific, United States) and agarose gel electrophoresis before library preparation. The library was built using the NEBNext Ultra II DNA Library Prep Kit for Illumina (New England BioLabs) according to the manufacturer's instructions. Sequencing was then done using the Illumina HiSeq X Ten platform, yielding 150 bp paired-end reads. For each individual, 2-4 Gb of clean data was generated.

Assembly and Annotation

SPAdes (Bankevich et al., 2012) was used for de novo assembly of all sequences, using k-mer lengths of 85-111 bp. For the CP genome, we visualized and filtered the newly assembled contigs to generate a complete circular genome in both Bandage v. 0.80 (Wick et al., 2015) and Geneious v. 8.1 (Kearse et al., 2012). The newly assembled sequences were annotated against the reference genome Debregeasia longifolia_MBD01 (MN18994) in the Plastid Genome Annotator (PGA) platform (Qu et al., 2019), followed by manual curation of genes in Geneious to verify that the start and stop codons were correct. Furthermore, for CP genomes, tRNAscan-SE v. 1.21 (Schattner et al., 2005) was used with default settings to further verify the tRNA genes. We used Chloroplot (Zheng et al., 2020) to generate the physical maps of the CP genomes.

Patterns of Inverted Repeat Boundary Shifts and Inversion

We characterized the genomic features of the 57 unique plastomes, including their size, structure (SC and IR regions), protein-coding (PCG) and other (tRNA and rRNA) genes, and GC content. The junctions between the IR and single copy (SC) regions were then compared and analyzed using Geneious v. 8.1 (Kearse et al., 2012). ProgressiveMAUVE (Darling et al., 2010) was used to detect gene rearrangements and inversions among Urticeae taxa, with Elatostema parvum as the reference genome. Default settings were used in ProgressiveMAUVE to automatically calculate the seed weight (15) and to calculate Locally Collinear Blocks (LCBs) with a minimum LCB score of 30,000.

Repeat Sequence Analyses

We searched for the occurrence and distribution of three types of repeats within the studied plastomes of the tribe Urticeae. First, the program REPuter (Kurtz et al., 2001) was used to identify dispersed repeat sequences (forward, reverse, complement, and palindromic) using the following constraint values: a Hamming distance of 3, a minimum repeat size of 30 bp, and a maximum computed repeat number of 100. Second, tandem repeats were identified using the online program Tandem Repeats Finder (Benson, 1999) with the alignment parameters match, mismatch, and indels set to 2, 7, and 7, respectively. For this analysis, the maximum period size and TR array size were limited to 500 and 2,000,000 bp, respectively, and the minimum alignment score for reporting repeats was set at 50. Third, we used a Perl-based microsatellite identification tool (MISA; Thiel et al., 2003) to search for simple sequence repeats (SSRs) (i.e., mono-, di-, tri-, tetra-, penta-, and hexanucleotide repeats) within Urticeae plastomes. The threshold values for this analysis were set at 10, 6, 5, 5, 5, and 5 for mono-, di-, tri-, tetra-, penta-, and hexanucleotides, respectively.
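For readers who want to reproduce the SSR screen, the following minimal sketch mirrors the MISA thresholds listed above; it reports simple SSRs only (compound and interrupted SSRs, which MISA also handles, are ignored).

```python
# Minimal SSR (microsatellite) scanner using the MISA-style thresholds
# quoted above: minimum repeat numbers 10, 6, 5, 5, 5, 5 for mono- to
# hexanucleotide motifs.
import re

MIN_REPEATS = {1: 10, 2: 6, 3: 5, 4: 5, 5: 5, 6: 5}

def find_ssrs(seq):
    seq = seq.upper()
    hits = []
    for motif_len, min_rep in MIN_REPEATS.items():
        # (.{k}) captures a motif; \1{n,} requires >= n further copies
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (motif_len, min_rep - 1))
        for m in pattern.finditer(seq):
            motif = m.group(1)
            if len(set(motif)) == 1 and motif_len > 1:
                continue  # e.g. 'AA' runs are already counted as mono
            hits.append((m.start() + 1, m.end(), motif,
                         (m.end() - m.start()) // motif_len))
    return hits  # (1-based start, end, motif, repeat count)

print(find_ssrs("ATATATATATATGGGGGGGGGGGCTAGCTAGCTAGCTAGCTAG"))
```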
Sequence Divergence Analyses

To illustrate interspecific sequence variation and gene organization of the entire plastomes across the 57 examined species, we used mVISTA with the shuffle-LAGAN mode (Frazer et al., 2004) and E. parvum as the reference genome. For the assessment of sequence divergence and exploration of highly variable chloroplast markers, a sliding window analysis was performed in DnaSP v. 6 (Rozas et al., 2017) to compute the nucleotide diversity (π) for all protein-coding (CDS) and non-coding (nCDS, i.e., intron and intergenic spacer) regions. The step size was set to 300 bp, with a window length of 1,000 bp. The region recovered to have the highest nucleotide diversity was then used to build a phylogenetic tree to test the resolution of the identified barcode for our species.

Phylogenetic Inference

Phylogenetic analyses were conducted using different partitioning schemes from two datasets: the genome skimming [CP genome and the 18S-ITS1-5.8S-ITS2-26S (nrDNA) sequences] and two-locus (ITS and the trnL-F intergenic spacer) datasets. We extracted the coding (CDS) and non-coding (nCDS) regions from the CP genome to elucidate the phylogenetic utility of the different regions. This partitioning is important as both CDS and nCDS regions have been shown to exhibit distinct rates of nucleotide substitution (Wolfe et al., 1987; Jansen and Ruhlman, 2012). In total, six molecular data matrices were generated to explore the phylogenetic relationships of the tribe Urticeae, of which five were from the genome skimming dataset: (1) whole chloroplast (CP) genomes, (2) CP coding regions (CDS), (3) CP non-coding regions (nCDS), (4) nuclear ribosomal DNA (nrDNA), and (5) combined whole CP genomes and nuclear ribosomal DNA (CP + nrDNA). The final matrix (6) sampled the two-locus dataset of trnL-F intergenic spacer and ITS sequences (trnL-F + ITS) across an expanded taxonomic sampling of 291 accessions. Phylogenetic analyses were conducted using maximum likelihood (ML) and Bayesian inference (BI) methods in RAxML v. 8.2.11 (Stamatakis, 2014) and MrBayes v. 3.2 (Ronquist et al., 2012), respectively. Substitution models for all the datasets were first determined based on the Akaike information criterion (AIC; Akaike, 1973) in jModelTest (Supplementary Table 2). Maximum likelihood analyses were done in RAxML using the bootstrap option with 1,000 replicates. For BI analyses, we performed two independent runs, each consisting of four Markov Chain Monte Carlo (MCMC) chains, sampling one tree every 1,000 generations for 1 million (CP, nCDS, and CP + nrDNA), 3 million (CDS), and 20 million (trnL-F + ITS and nrDNA only) generations. Convergence of the MCMC chains of each run was determined when the average standard deviation of split frequencies (ASDSF) reached ≤ 0.01, and adequate mixing was based on Effective Sample Size (ESS) values ≥ 200. Stationarity was assessed by checking whether the plot of log-likelihood scores had plateaued in Tracer v. 1.7.1 (Rambaut et al., 2018). The first 25% of the sampled trees acquired from all the runs were discarded as burn-in, and consensus trees were constructed from the remaining trees to estimate posterior probabilities.

Inverted Repeat Expansion and Contraction

Comparison of the IR boundaries among the 57 plastomes from tribe Urticeae revealed varying expansion and contraction of the IRs (Figure 3A). Herein, we report only the functional genes located at the IR-SC boundaries. The LSC/IRb border was embedded in the rps19 gene (with 50-131 bp located within the IRb) in 43 taxa.
The remaining 14 species showed: an expansion of the IR in three species (rpl22 in the LSC, rps19 in the IRb); a contraction of the IR in three species (rps19 in the LSC, rpl2 in the IRb); and the loss of the rps19 gene in eight species (rpl22 in the LSC, rpl2 in the IRb), causing variations in the boundary (Figure 3B). The IRb/SSC boundary generally fell within the ndhF gene (with 50-131 bp located within the IRb), except in six species where the boundary was detected in the intergenic region of trnN-GUU-ndhF (Figure 3B). We observed that the IRa/LSC boundary of most species lay within either the intergenic rpl2-trnH-GUG or non-coding trnH-GUG regions, except for four species (Hesperocnide tenella_W61, Urtica chamaedryoides_W162, Urtica magellanica_U33, and Urtica morifolia_U200) in which the boundary was located within the intergenic region trnH-GUG-psbA (Figure 3B). The most conserved boundary across species was that of the SSC/IRa, which was always positioned within the ycf1 coding gene, with a length of 195-3,054 bp overlapping into the IRa region (Figure 3B).

Phylogenetic Relationships

The sequence characteristics, tree diagnostic values, and the best-fit model determined by jModelTest for all datasets are given in Supplementary Table 2. The phylogenetic results presented here are based on both ML and BI analyses. The ML and BI analyses generally produced nearly identical topologies, with few differences at the shallow nodes. Factors driving discrepancies between ML and BI topologies have been previously reported (Huelsenbeck, 1995; Sullivan and Joyce, 2005; Som, 2014). Of those, the optimality criterion and the specific hypotheses in the modeling of sequence evolution are the most parsimonious explanations for the few discrepancies between the ML and BI topologies inferred from the same data matrix in our study. In most cases, the phylogenetic relationships inferred from ML were discussed because they have the most supporting lines of evidence from the morphological affinities between the known species within the tribe Urticeae. The phylogenetic relationships recovered from each data matrix are reported below.

Chloroplast Data Analyses

The CDS, nCDS, and whole CP phylogenetic trees were largely identical in their topologies, with only a few exceptions concerning the relationships of two clades, 3F3I and 3F3II (Supplementary Figures 3A-CI). In the CDS data, these were sister to one another and hence formed a monophyletic clade 3F3 (Supplementary Figure 3A). However, in the whole CP dataset, 3F3I was sister to both 3F3II and 3F4, while in the nCDS dataset, 3F3II was sister to both 3F3I and 3F4 (Supplementary Figures 3B,CI). Nevertheless, it should be noted that the whole CP dataset generally had better support compared to both the CDS and nCDS datasets.

nrDNA Data Analysis

Regarding relationships between major clades in Urticeae, the results from the nrDNA dataset (Supplementary Figure 3CII) recovered almost congruent relationships with those of the whole CP dataset (Supplementary Figure 3CI), other than a few discrepancies in particular major clades and the phylogenetic placement of some species. For instance, in the nrDNA phylogeny, clade 3D (Girardinia) was recovered as sister to clade 3C (Supplementary Figure 3CII), whereas in the whole CP phylogeny, clade 3D was recovered as sister to a clade comprising subclades 3C, 3B, and 3A (Supplementary Figure 3CI). The sister relationships of clade 3G, and those within clade 3E-F, also changed depending on the dataset examined.
Moreover, we found slight differences in some shallower relationships between the whole CP and nrDNA phylogenies (e.g., the contradicting phylogenetic positions of Dendrocnide urentissima, Girardinia suborbiculata subsp. suborbiculata, etc.; Supplementary Figure 3C). These differences were, however, mostly restricted to areas of poor support, and the whole CP phylogeny was generally better supported than the nrDNA one.

Combined Whole Chloroplast Genome and nrDNA (CP + nrDNA) Analysis

Phylogenetic resolution and node support values were markedly improved by the combination of whole CP genome and nrDNA data (Figure 5). The phylogeny inferred from the combined data matrix was the best resolved and best supported phylogenetic tree among all the data matrices, and was more similar in topology to the three chloroplast data matrices (whole CP, CDS, and nCDS regions) than to the nrDNA one (Figure 5 and Supplementary Figures 3A-C). The monophyly of Urticeae was strongly supported (BS/PP = 100/1), with Elatostemeae as its sister tribe (Figure 5). Generally, the phylogeny was well resolved, with most nodes strongly supported by both ML and BI analyses, except the placement of Zhengyia shennongensis (BS = 100, PP = "-"), the relationship between Urtica domingensis and Hesperocnide tenella (BS = "-", PP = 1), and the relationship between Laportea aestuans and Laportea ovalifolia (BS = "-", PP = 1) (Figure 5). Nine genera within Urticeae were recovered as monophyletic (Dendrocnide, Discocnide, Girardinia, Hesperocnide, Obetia, Nanocnide, Poikilospermum, Touchardia, and Zhengyia) and three as polyphyletic (Urtica, Laportea, and Urera), all with strong support. For ease of discussion, we sectioned Urticeae into six major clades, each with full bootstrap support; the names follow the clade naming system of Wu et al. (2013). They include Clade 3A (Urtica, Hesperocnide, and Zhengyia), Clade 3B (Nanocnide and Laportea cuspidata), Clade 3C (Dendrocnide, Discocnide, and Laportea decumana), Clade 3D (Girardinia), and Clade 3G (Laportea). Clade 3E-F was recovered as sister to the rest of the Urticeae tribe with maximum support, and comprised Poikilospermum, Urera, Obetia, and Laportea. Within it, Poikilospermum (sub-clade 3F4) was recovered for the first time as a sister clade to Urera (sub-clade 3F3), with full support (Figure 5). Urera comprised three separate subclades within Clade 3E-F, each with strong support. Moreover, in this study Laportea was split into five different clades. Clade 3D (Girardinia) was also recovered for the first time as sister to a clade comprising 3A, 3B, and 3C, with full support.

Combined Analysis of trnL-F + ITS

The tree topology from the analysis of the trnL-F and ITS dataset was largely congruent with previously published phylogenies inferred from a small number of loci. Eight genera were strongly supported as monophyletic (i.e., Dendrocnide, Discocnide, Girardinia, Obetia, Nanocnide, Poikilospermum, Touchardia, and Zhengyia), while four genera were recovered as polyphyletic (i.e., Hesperocnide, Urtica, Laportea, and Urera). Hesperocnide was recovered here as polyphyletic (BS/PP > 90/0.90 and BS/PP < 90/0.90; Figure 6), in contrast to the combined whole (CP + nrDNA) analysis, in which it was retrieved as monophyletic with full bootstrap support (Figure 5). Moreover, most of the shallow nodes of the trnL-F and ITS tree received lower bootstrap support (Figure 6) than the combined whole (CP + nrDNA) tree, in which nearly all nodes were fully supported.
Plastome Structural Evolution

All 57 Urticeae CP genomes examined are quadripartite but varied in size. The observed range was consistent with chloroplast genome sizes of angiosperms (Zhang et al., 2021) and the few existing sequenced plastomes of Urticaceae (Wang et al., 2020b; Li et al., 2021), which range between 120 and 180 kb. Of the plastomes in our study, Laportea grossa had the largest genome, while Nanocnide lobata had the smallest, implying that CP genomes in Urticaceae are structurally different. Also, the number of PCGs in the Urticeae plastomes in our study (76-78) was comparable with the typical range for angiosperm plastomes (70-88 genes) (Wicke et al., 2011). Likewise, we found congruence with the range of GC content previously reported in other plastomes of Urticaceae, e.g., Pilea mollis (36.72%; Li et al., 2021), Elatostema dissectum (36.2%; Fu et al., 2019), Droguetia iners (36.9%), and Debregeasia elliptica (36.4%) (Wang et al., 2020b). Generally, the GC content had no significant phylogenetic implication in our study. Moreover, consistent with previous studies (Li et al., 2020, 2021; Dong et al., 2021), the GC content was higher in the IR than in the SC. This GC inequality may also be a significant factor in the conservatism of the IR region compared with the SC regions (Li et al., 2020). Among the genes present in our Urticeae plastomes, rpl2 was noteworthy, considering that 18 of the examined species had no introns for this gene. Intron loss has been widely documented in angiosperm plastomes: e.g., Avena sativa (rpoC1 intron loss; Liu et al., 2020b), Cicer arietinum (rps12 and clpP intron losses; Jansen et al., 2008), Lagerstroemia (rpl2 intron loss; Gu et al., 2016), and Asteropeiaceae + Physenaceae (rpl2 intron loss; Yao et al., 2019). Another notable structural change found here was an inversion of the trnN-GUU gene, which is a synapomorphy of clade 3C, except for the clade's basal species Discocnide mexicana (Figure 2B). Gene inversions have also been detected in many angiosperm plastomes, including those of Poaceae (Guisinger et al., 2010), Styracaceae (Yan et al., 2018), Orchidaceae (Uncifera acuminata; Liu et al., 2020a), and Adoxaceae (Wang et al., 2020a). The latter, involving the inversion of the ndhF gene in Adoxaceae, is relevant to our study since it involves only one gene that also borders the inverted gene in our study (trnN-GUU). Typically, plastome inversions are deemed highly valuable in phylogenetics owing to their relative rarity, easily determined homology, and easily inferred state polarity (Cosner et al., 1997; Dugas et al., 2015; Schwarz et al., 2015). Despite significant research efforts regarding intramolecular recombination between dispersed short inverted/direct repeats and tRNA genes (Cosner et al., 1997; Haberle et al., 2008; Sloan et al., 2014), the cause of inversions in plant genomes remains unclear. Our analyses showed that IR expansion and contraction vary across Urticeae and lack taxonomic utility at a broader scale. Mostly, the SC/IR borders are relatively conserved among angiosperm plastomes and are usually located within the rps19 or ycf1 gene (Downie and Jansen, 2015), even though it is assumed that IR expansion or contraction is accompanied by a shift of the genes located at the IR/SC boundary (Zhu et al., 2016). Similar IR/SC changes are also evident in other Urticaceae plastomes (Wang et al., 2020b; Li et al., 2021).
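Since the paragraph above compares GC content between the IR and SC regions, the minimal Python sketch below shows one way such per-region values can be computed from a sequence plus region coordinates. The coordinates and function names here are illustrative assumptions, not values from this study.

```python
def gc_percent(seq):
    """GC content of a nucleotide string, in percent (gaps/Ns ignored)."""
    seq = seq.upper()
    acgt = sum(seq.count(b) for b in "ACGT")
    return 100.0 * (seq.count("G") + seq.count("C")) / acgt if acgt else 0.0

def gc_by_region(plastome, regions):
    """GC% for named regions given as (start, end) 0-based half-open coordinates."""
    return {name: gc_percent(plastome[s:e]) for name, (s, e) in regions.items()}

# toy usage: sequence and coordinates are made up for illustration only
plastome = "ATGCGC" * 5000
regions = {"LSC": (0, 17000), "IRb": (17000, 21000),
           "SSC": (21000, 26000), "IRa": (26000, 30000)}
print(gc_by_region(plastome, regions))
```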
Changes in the IR/SC junctions have been considered one of the main drivers of the size diversity in the CP genomes of higher plants (Ma et al., 2013; Yang et al., 2016; Yan et al., 2018; Xue et al., 2019). Notably, we found the loss of the rps19 gene to be the most parsimonious explanation for the diversification of the genes bordering the IR/LSC in the eight plastomes examined from the genus Urtica (U. ardens_GLGE152058, U. dioica subsp. xijiangensis_U41, U. domingensis_W145, U. hyperborea_J5455, U. mairei_J1664, U. membranifolia_S13031, U. morifolia_U200, and U. thunbergiana_J2498; Figure 3A). We detected several repeat types within the sampled plastomes of tribe Urticeae, among which SSRs were the most frequent, accounting for 46.53% of the repeats (Figure 4A). The most abundant SSRs were mononucleotide homopolymers, particularly poly-A and poly-T motifs (Figure 4D and Supplementary Table 5). This abundance of A/T motifs has also been reported in Pilea (Li et al., 2021) and Debregeasia (Wang et al., 2020b) species, and might occur because A/T motifs are more frequently dynamic than G/C motifs (Li et al., 2020). Generally, it is presumed that repeat sequences are closely connected with a large number of indels; therefore, the more abundant they are, the greater the nucleotide diversity (McDonald et al., 2011). Hence, chloroplast repeat sequences could be potential sources of variation for evolutionary studies and population genetics (Xue et al., 2012). We also found higher nucleotide diversity in the nCDS than in the CDS regions, consistent with findings from other taxa (Jansen and Ruhlman, 2012; Huang et al., 2014). Although the nucleotide content of chloroplast genomes is usually relatively stable, with a highly conserved gene structure (Jansen et al., 2005; Ravi et al., 2008; Wicke et al., 2011), mutation hotspots still exist within them (Zhang et al., 2021). We detected a total of 11 hypervariable loci across the CDS and nCDS regions (Supplementary Figure 2) that could potentially be used as DNA barcodes in future studies of this group. Among them was the locus ycf1, which was also reported in previous Urticaceae studies (Wang et al., 2020b; Li et al., 2021) as a highly variable locus with great taxonomic utility. Moreover, a study by Dong et al. (2015) reinforces this view and recommends ycf1 as a suitable plastid barcode for land plants. Indeed, our ycf1 phylogenetic tree (Supplementary Figure 2C) is consistent with the above studies, especially with regard to its high resolution and support levels. Therefore, we suggest that ycf1 represents a highly useful molecular marker, not just for tribe Urticeae, but likely for the entire family. Presently, DNA barcodes are widely used in species identification, resource management, and studies of phylogeny and evolution (Gregory, 2005; Liu et al., 2019).

Phylogenetic Relationships of Urticeae

Phylogenetic Relationships Based on Genome Skimming (CP Genome + nrDNA) Data

The combined matrix (CP genome + nrDNA) yielded a well-supported phylogeny and resolved many relationships of the tribe Urticeae despite the topological differences in clades 3D, 3G, and 3E-F between the two separate datasets (Supplementary Figure 3C). The resolution shown by the combined matrix may be ascribed to the greater number of phylogenetically informative plastid sites (Supplementary Table 2). Moreover, it could be due to a weak phylogenetic signal in the nrDNA that agrees with and complements the signal of the CP matrix.
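As an illustration of the SSR screening summarized above, the short Python sketch below finds mononucleotide homopolymer runs (e.g., poly-A/T) above a minimum length threshold. The 10 bp threshold and function name are illustrative assumptions rather than the detection settings used in this study.

```python
import re

def find_homopolymers(seq, min_len=10):
    """Return (start, base, length) for every mononucleotide run >= min_len."""
    pattern = re.compile(r"(A{%d,}|C{%d,}|G{%d,}|T{%d,})" % ((min_len,) * 4))
    return [(m.start(), m.group()[0], len(m.group()))
            for m in pattern.finditer(seq.upper())]

# toy usage
seq = "CGT" + "A" * 12 + "GGC" + "T" * 11 + "ACGT"
for start, base, length in find_homopolymers(seq):
    print(f"poly-{base} run of {length} bp at position {start}")
```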
However, beyond some major conflicts, the individual CP and nrDNA trees are generally in agreement, with most conflicting relationships pertaining to poorly supported areas of the phylogeny, although we did not perform follow-up analyses to identify what this means for different parts of the tree. Cases of topological dissimilarity are often reported in phylogenetic studies (Wendel and Doyle, 1998; reviewed by Degnan and Rosenberg, 2009). This phenomenon can be explained by a number of factors, including differences in taxon sampling, incomplete lineage sorting, hybridization/introgression, paralogy, gene duplication and/or loss, and horizontal gene transfer (Degnan and Rosenberg, 2006; Naciri and Linder, 2015; Lin et al., 2019; Nicola et al., 2019). Hence, as more samples become available, future studies should investigate the factors responsible for the observed conflicting relationships within the Urticeae. Our study represents the first phylogeny of the tribe Urticeae based on a broad sampling of both CP genomes and nrDNA sequences. Importantly, we clarify which of the Urticeae genera are strongly supported as monophyletic or polyphyletic (Figure 5). Compared with previous studies based on a limited number of genes (Hadiah et al., 2008; Deng et al., 2013; Wu et al., 2013, 2018; Kim et al., 2015; Grosse-Veldmann et al., 2016; Huang et al., 2019; Wells et al., 2021), we exploited the utility of whole CP genomes for resolving phylogenetic relationships in Urticeae, and also revealed the most informative sites and regions across the plastome. Our results proved to be largely consistent with most of the recently established phylogenetic relationships of Urticeae based on 3-7 selected marker regions (Wu et al., 2013, 2018; Kim et al., 2015; Huang et al., 2019; Wells et al., 2021). In general, however, our data improved resolution throughout Urticeae compared with previous studies, with almost all nodes being fully supported, especially those previously known to be problematic. Four of the most important new phylogenetic insights generated by the current study are discussed below. First, the sister relationship of Girardinia has been contentious. Girardinia had been resolved as sister to Dendrocnide-Discocnide based on chloroplast, mitochondrial, and nuclear data (Wu et al., 2013), and using ITS, rbcL, and trnL-F regions (Kim et al., 2015), but without support in either case. Subsequently, using expanded taxon sampling and five markers from both the nuclear and CP genomes, the sister relationship of Girardinia to Dendrocnide-Discocnide-Laportea-Nanocnide-Zhengyia-Urtica-Hesperocnide was resolved, but with limited support (Wu et al., 2018; Huang et al., 2019). Our results support this latter relationship, but with maximum support (BS/PP = 100/1) for the first time. Second, our molecular phylogeny of the "Urera alliance clade" (clade 3E-F in this study) corroborated the generic delimitation and subdivisions of the "Urera clade" of Wells et al. (2021), and showed two clades of Laportea (which they did not examine) to be members as well (Figure 5). Their division of the paraphyletic Urera into three genera was strongly supported here: these were Urera s.s. (our Clade 3F3), Scepocarpus (entirely African; our clade 3F1, which also includes Laportea grossa), and an expanded Touchardia (part of clade 3E, which includes Urera glabra from Hawaii and three species of Laportea as per our study).
Our data suggest that the two Laportea clades should hence be fully examined, with consideration given to whether to subsume them within the resurrected Scepocarpus and the expanded Touchardia. Third, previous studies (Kim et al., 2015; Wu et al., 2018; Huang et al., 2019) have typically resolved Laportea into three clades, for instance Kim et al. (2015), following Wang and Chen (1995). Our analysis, however, resolved Laportea into five major clades. Moreover, we found that L. aestuans was polyphyletic: one subgroup was sister to L. mooreana with full support, and the other was sister to L. ovalifolia with support of BS/PP = -/1. The latter relationship was detected by Wu et al. (2018), but without support. However, other studies found different relationships: L. aestuans as sister to L. interrupta and L. ruderalis with full support according to Kim et al. (2015), or sister to L. ruderalis and L. peduncularis with support of MP/PP = 96/1 according to Huang et al. (2019). These discrepancies likely reflect differences in taxon and molecular sampling; with a wider sampling of populations, L. aestuans might comprise more than two unrelated clades. While additional study of Laportea is clearly needed, the current study provides one of the most comprehensive phylogenetic perspectives on this little-studied genus. Future investigations should, however, employ more extensive molecular data across the entire phylogenetic spectrum of Laportea to further clarify its relationships and the number of lineages. Finally, our analysis resolved the sister relationship between Poikilospermum and Urera previously obtained by Huang et al. (2019), replacing their modest support (BS/PP = 65/0.89) with full support (BS/PP = 100/1) for the first time.

Comparison Between Genome Skimming (CP Genome + nrDNA) and Two-Locus (trnL-F + ITS) Phylogeny

In our study, the trees inferred from both the CP genome + nrDNA and the two-locus (trnL-F + ITS) datasets provided full support for the monophyly of Urticeae. However, the CP genome + nrDNA tree presented a higher percentage of fully supported nodes than the two-locus tree (Figures 5, 6). This underscores the importance of genome-scale datasets for resolving major recalcitrant relationships. The most notable finding from our two-locus phylogenetic analysis was the reconstruction of Hesperocnide as polyphyletic, consistent with Huang et al. (2019). Our current CP genome + nrDNA analysis and prior molecular studies, however, recovered Hesperocnide as monophyletic (Kim et al., 2015), with a close relationship to Urtica (Sytsma et al., 2002; Hadiah et al., 2008; Deng et al., 2013; Wu et al., 2013; Kim et al., 2015). The polyphyletic result from the two-locus tree can be ascribed to the sampling of members of the second species that were absent in the plastome analysis. Consequently, Wu et al. (2013) suggested that Hesperocnide be subsumed in the genus Urtica, since these two genera show some morphological similarities. However, owing to this equivocality about the phylogeny of Hesperocnide, we suggest a more rigorous examination of this genus to fully validate its status.

CONCLUSION AND FUTURE DIRECTIONS

Our study provides important novel insights into Urticeae phylogeny and plastome evolution. The detailed comparative analyses show that Urticeae plastomes exhibit striking differences in genome size, gene number, inversions, intron loss, sequence repeats, and IR/SC boundaries.
These kinds of variation will be useful for studies on molecular marker discovery, population genetics, and phylogeny. Resolving the enigmatic relationships within tribe Urticeae has, to date, been a daunting task due to the paucity of genomic resources for the clade. Our study is the first to report phylogenetic relationships in Urticeae based on a broad sampling of whole plastome sequences. This dataset allowed for the resolution of several recalcitrant branches (e.g., the relationship of Poikilospermum to Urera, the sister relationship of Girardinia, etc.) that were ambiguous in previous studies. Although our taxon sampling was sufficient to resolve relationships among the major clades in the tribe, additional sampling of particular genera (e.g., Laportea) and species (e.g., Laportea aestuans and Hesperocnide sandwicensis) would further refine our understanding of phylogenetic relationships in Urticeae. Building on the solid framework established here, future studies with even greater taxonomic and genomic sampling could contribute to a better understanding of the diversification patterns in Urticeae in relation to climatic, biogeographic, and ecological factors.

DATA AVAILABILITY STATEMENT

The datasets presented in this study can be accessed at NCBI GenBank; the list of accessions can be found in Supplementary Table 1.

SUPPLEMENTARY MATERIAL

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpls.2022.870949/full#supplementary-material

[Residue of the Supplementary Figure 3 legend: panels show phylogenetic relationships of tribe Urticeae inferred from ML and BI analyses of the CP coding (CDS), CP non-coding (nCDS), and combined CP genome + nrDNA datasets; support values above branches are ML bootstrap (ML_BS)/Bayesian posterior probability (BI_PP); branches without values have ML_BS ≥ 90 and BI_PP = 1.00; "*" marks incongruence between the ML and BI trees; major clades of Urticeae s.l. are indicated on the right.]
2022-05-20T13:26:34.252Z
2022-05-20T00:00:00.000
{ "year": 2022, "sha1": "2733deb43b7c0d22e3f74f75966d34c165825340", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "2733deb43b7c0d22e3f74f75966d34c165825340", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
95268501
pes2o/s2orc
v3-fos-license
Numerical Study of Effect of Cooling Rate on Double-Diffusive Convection and Macrosegregation in Iron–Carbon System

The present study is aimed at understanding the effect of the rate of heat extraction on macrosegregation during solidification of binary Fe-1wt%C alloy. For a constant superheat and geometry, the effect of cooling rate is studied by imposing constant heat fluxes, along the vertical wall of a rectangular cavity, in the range of 5 kW/m² to 6 000 kW/m². The effect of variation of heat flux on various transport fields such as double-diffusive convection, thermal and solutal fields and the resultant solutal inhomogeneity is analyzed in detail. Its effect on final macrosegregation is discussed in terms of the global extent of segregation (GES) and the overall macrosegregation pattern in the casting. GES initially decreases with an increase in heat flux up to 10 kW/m². Between heat fluxes of 10 and 100 kW/m², GES goes through a maximum. Beyond 100 kW/m², GES decreases with an increase in heat flux. The variation in GES with heat flux is explained in terms of thermosolutal convection, mush structure and solidification time.

Introduction

For many engineering applications, uniform properties of cast material are highly desirable. One of the major sources of non-uniformity is macrosegregation of solute, which evolves during solidification of an alloy. Unlike microsegregation, macrosegregation results in long-range solutal inhomogeneity and, therefore, is difficult to eliminate by thermal processing. Some forms of macrosegregation, such as freckles, make the product unsuitable in critical applications. Also, with increasing emphasis on near-net-shape casting, there shall be very few downstream processing steps of large plastic deformation and hence, little scope to alter/modify the macrostructure of these semi-finished and finished products. Clearly, the need to control macrosegregation during solidification processing through better understanding of the underlying mechanism is extremely important. The cause of macrosegregation, despite several decades of intensive research, is understood mostly in qualitative terms. For example, it is known that macrosegregation is caused by long-range transport of solute during the progress of solidification. 1) Thermo-solutal convection, shrinkage and other convection-generating forces induce fluid flow. Similarly, it is known that high superheat of the melt and large dimensions of the casting lead to an increase in segregation of solute. 2) For controlling macrosegregation, however, it is important to estimate the influence of various process and design parameters on macrosegregation with the help of some quantitative tool. Researchers in the past have studied the role of fluid flow in macrosegregation extensively using mathematical models, which have gradually become more and more sophisticated. Prescott and Incropera provide an excellent review on the subject. 3) One of the important conclusions of these studies is that double-diffusive convection plays an important role in the evolution of macrosegregation. Segregation is an important defect in steel casting and there have been a few macrosegregation studies on iron-carbon and iron-carbon based steel. 4-8) Amberg 4) reported a numerical study on the Fe-C system using a continuum-formulation-based model. Singh and Basu 5) carried out simulations to study the role of double-diffusive convection in macrosegregation during solidification of binary Fe-1wt%C alloy.
The effects of thermo-solutal convection on the extent of segregation and the segregation profile were discussed. Lesoult, Combeau and co-workers 6,7) have reported significant effects of permeability and carbon partition coefficient on axial segregation during solidification of multi-component steel. A significant conclusion of this work is that the axial segregation increases with an increase in permeability or a decrease in the carbon partition coefficient. Schneider and Beckermann, 8) using a fully coupled multicomponent model, have shown that segregation profiles of carbon in multi-component steel show the same trend as in binary Fe-C, owing to the dominant role of carbon in solutal buoyancy and thermodynamic equilibria. It is well known that heat flux has a significant effect on macrosegregation 2) and steel is cast with a wide range of heat fluxes. The typical heat flux in the primary cooling zone of a continuous caster is of the order of 2.5 MW/m², whereas during some static casting, the heat flux is as low as 10 kW/m². To the authors' knowledge, however, there has been very little systematic effort to study the effect of cooling rate on macrosegregation during solidification of iron-carbon binary alloy or multi-component steel. The present study aims to understand the effect of cooling rate on macrosegregation during casting of steel with the help of a previously developed macrosegregation model 5) based on the mixture mass model of Voller et al. 9) The model used in the study is briefly outlined below.

Numerical Model

The numerical methodology adopted for the present study is described in detail in an earlier work 5) and, therefore, the model is described below only briefly. The following equations are considered to represent the solidification process in a rectangular cavity. The values of viscosity, conductivity, specific heat and diffusivity in the above equations are obtained through averaging as follows: K in the momentum equations is the permeability of the mushy region. In this exercise, the value of permeability is calculated based on West's correlation 10); the expression is suitably modified for simulation of the iron-carbon system. 11) The expressions used for permeability are presented as follows: The dependence of the local liquid fraction on the thermal and solutal fields is mainly governed by the solidification conditions. There are several equations to represent this relationship. In the case of slow solidification and/or an interstitial solute, the process can be considered to be close to equilibrium. In such cases, the temperature and the solute concentration in the mushy region are related to the local liquid fraction through the phase diagram. The resultant Lever rule is shown as follows. For the purpose of the present study, solidification is considered in a rectangular cavity of length L (0.1 m) and height H (0.1 m); the geometry of the cavity as well as the initial conditions used in the present study are shown in Fig. 1. The boundary conditions used in the present study are as follows: no-slip conditions are applied on the horizontal and left vertical walls of the cavity. The left vertical wall is the line of symmetry and the gradient of the u-velocity along the X-axis is zero. There is no solute flux through any of the four bounds of the cavity. As for the thermal boundary conditions, the horizontal walls of the cavity are adiabatic and there is no heat flux at the left wall. Solidification is initiated by imposing a heat flux along the right vertical wall of the cavity.
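The lever-rule expression referenced above did not survive extraction. For a binary alloy with a linear phase diagram and partition coefficient k_p, the standard equilibrium (lever-rule) relation between local phase fractions, temperature and liquid composition can be written as below; this is the textbook form and is offered only as a plausible reconstruction, not as the exact equation set of the original paper.

```latex
% Standard binary lever rule, assuming a linear liquidus T = T_f + m C_l and
% equilibrium partitioning C_s = k_p C_l, where T_f is the melting point of
% the pure solvent and T_L the liquidus temperature of the nominal composition C:
f_s \;=\; \frac{1}{1-k_p}\,\frac{T_L - T}{T_f - T},
\qquad
C_l \;=\; \frac{C}{k_p + (1-k_p)\,f_l},
\qquad
f_l \;=\; 1 - f_s .
```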
The material chosen for the present study is Fe-1wt%C. T_init for the present study is taken to be 1 463°C. Thus, in all the studies, a superheat of 5°C is considered. Various thermophysical data pertaining to this system are provided in Table 1. 5) The main focus of the present study is to examine the effect of variation of heat flux on double-diffusive convection and, in turn, on macrosegregation. In the numerical implementation, the heat flux is varied by changing the value of q along the right wall (chill face) of the cavity. All the simulation studies are carried out on a non-uniform grid of 30×30 nodal points; the choice of nodal points is based on a grid independency test. 5) A variable time step of 0.5-25 s is employed to simulate the transient.

Results and Discussion

The main focus of this study is to understand the effect of variation of heat flux on double-diffusive convection and other transport variables and, finally, on the resultant macrosegregation during solidification of binary iron-carbon alloy. Simulations were carried out with heat fluxes varying in the range of 5 to 6 000 kW/m². Since the time taken for complete solidification differs between cases, the overall solid fraction is chosen as the basis for comparing the results for various heat fluxes. The results presented below are at three instances of overall solidification, namely, at 20, 50 and 80%. Beyond 80% solidification, the flow field is restricted near the hot wall and is very weak in magnitude except for the higher heat fluxes. In addition to these, the macrosegregation profiles for various heat fluxes are also compared at the end of solidification. Figures 2 through 4 show the streamlines, isotherms, mush profiles and macrosegregation profiles (at 20, 50 and 80% solidification) for heat fluxes of q = 10, 60 and 360 kW/m², respectively. In these figures, the right vertical wall of the cavity represents the chill face, whereas the left vertical wall, which is at the line of symmetry, represents the hot face. The maximum and minimum values of composition, temperature, stream function and fraction solid are listed in Table 2. Figure 2 shows the results for a heat flux of 10 kW/m². At 20% overall solidification, Fig. 2(a), the flow pattern is very complex and the presence of multiple vortices is readily seen. The isotherms, shown in Fig. 2(b), are affected by the flow to a large extent; they are parallel to the horizontal axis, showing the dominance of convection over conduction. Although the overall solidification is only 20%, the mush is spread over a large part of the cavity, and both the mush profiles and macrosegregation patterns, shown in Figs. 2(c) and 2(d), respectively, are very complex. Since the mush covers a large part of the cavity, the resistance to the flow is high, leading to a reduction in the strength of the thermal buoyant flow. Isotherms are almost parallel to the horizontal wall near the hot wall of the cavity; the fluid near the bottom corner is thermally stable and a solutal-buoyancy-driven cell is readily seen. With the progress of solidification, the strength of the flow diminishes and the isotherms become vertical near the cold wall. The macrosegregation profile is highly evolved by this time. An important feature of the mush profile at this juncture is that, even though the overall solidification is around 50%, the pure solid regime is yet to start. As solidification progresses further, the flow strength becomes negligible. At 80% the flow is almost absent and the isotherms are mostly vertical in nature.
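To make the flux-driven transient setup concrete, here is a deliberately simplified, self-contained sketch: one-dimensional explicit heat conduction in a slab with an imposed extraction flux at the right face and an adiabatic left face. It is a toy analogue of the boundary treatment described above, not the authors' coupled solidification model; the property values, grid, and step count are illustrative assumptions, and latent heat is ignored.

```python
import numpy as np

# illustrative properties (roughly steel-like, for demonstration only)
k, rho, cp = 30.0, 7000.0, 700.0     # W/m-K, kg/m^3, J/kg-K
alpha = k / (rho * cp)               # thermal diffusivity, m^2/s
L, n = 0.1, 30                       # slab length (m), number of nodes
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha             # explicit stability limit (Fourier no. <= 0.5)

q_chill = 60e3                       # imposed extraction flux at right face, W/m^2
T = np.full(n, 1463.0)               # initial melt temperature, deg C

for step in range(2000):
    Tn = T.copy()
    # interior nodes: explicit FTCS update
    T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2*Tn[1:-1] + Tn[:-2])
    # left face adiabatic (zero gradient): mirrored half-cell balance
    T[0] = Tn[0] + alpha * dt / dx**2 * 2 * (Tn[1] - Tn[0])
    # right face: constant heat-extraction flux q_chill on a half cell
    T[-1] = (Tn[-1] + alpha * dt / dx**2 * 2 * (Tn[-2] - Tn[-1])
             - 2 * dt * q_chill / (rho * cp * dx))
    if step % 500 == 0:
        print(f"t = {step*dt:8.1f} s, chill-face T = {T[-1]:7.1f} C")
```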
There is very little difference between the macrosegregation profiles at this time and those at 50% solidification. Figure 3 shows the results for a heat flux of 60 kW/m². At 20% solidification, the strength of the flow is higher than in the case of 10 kW/m². However, the isotherms show conduction dominance near the cold wall due to the large f_s. The macrosegregation patterns are less complex in this case compared with those for q = 10 kW/m². At 50% solidification, the flow strength is considerably diminished, and the mush profiles and macrosegregation profiles are now well evolved. As solidification progresses further, the flow is restricted to a narrow zone near the hot wall. At 80% solidification, the isotherms are vertical in most of the cavity. Comparing the macrosegregation patterns at 50 and 80% solidification, it is noted that, though the evolution of the macrosegregation patterns is largely complete at 50% for q = 60 kW/m², the differences in patterns between 50 and 80% are greater than those for q = 10 kW/m². Figure 4 shows the results for a heat flux of 360 kW/m². At 20% solidification, the strength of the flow is higher than in the other two cases. In addition to the major vortex (in the pure liquid region), a minor vortex is clearly visible in this case (in the mushy region). The mushy region is very narrow in this case, and the isotherms in the mushy region are largely conduction-dominated due to the high resistance to flow. There is virtually no temperature gradient in the pure liquid region, Fig. 4(a), due to the high strength of the flow in the pure liquid zone. The macrosegregation pattern is much simpler in this case and the extent of segregation is less than in the other two cases. As solidification progresses, the strength of the major and minor vortices goes down. The mushy region continues to be narrow and all three regions, namely, the liquid, solid and mushy regions, are clearly seen. As solidification progresses further, the flow is restricted to a narrow zone near the hot wall. At 80% solidification, the isotherms are vertical in most of the cavity. Comparing the macrosegregation patterns at 50 and 80% solidification, it is noted that, though the evolution of the macrosegregation patterns is largely complete at 50% for q = 360 kW/m², the differences in patterns between 50 and 80% are greater than in the other two cases. Macrosegregation patterns at the end of complete solidification are shown in Figs. 5(a)-5(c). It is readily noted that the overall nature of the macrosegregation patterns undergoes drastic changes with heat flux. The severity of segregation (C_max - C_min) at q = 60 kW/m² is higher than at 10 kW/m² and 360 kW/m². For a quantitative comparison of macrosegregation, a parameter called the global extent of segregation (GES) is used, which is defined as the root mean square of the deviation from the nominal composition over all the nodal points:

$\mathrm{GES} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(C_i - C_0\right)^2}$ ... (9)

where C_i is the composition at nodal point i, C_0 is the nominal composition, and N is the total number of nodal points. The global extent of segregation, GES, as a function of heat flux is shown in Fig. 6. It is seen from the graph that there is a drop in GES with an increase in heat flux in the beginning. However, GES starts to rise at around q = 10 kW/m² and goes through a peak at q = 60 kW/m². Beyond this point, there is a steady fall in GES with an increase in heat flux. Thus it is seen that there are three regimes in the GES curve. The first regime corresponds to q < 10 kW/m², where GES falls monotonically with an increase in heat flux. This observation is in line with the observations of Tewari et al.
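As a small illustration of the GES definition in Eq. (9), the sketch below computes the root-mean-square deviation from the nominal composition over a grid of nodal values. The array contents are made-up numbers, and C0 = 1.0 wt% simply mirrors the Fe-1wt%C nominal composition.

```python
import numpy as np

def global_extent_of_segregation(c_field, c_nominal):
    """RMS deviation of nodal composition from the nominal value (Eq. 9)."""
    c = np.asarray(c_field, dtype=float)
    return float(np.sqrt(np.mean((c - c_nominal) ** 2)))

# toy 30x30 field of carbon concentration (wt%), perturbed around nominal
rng = np.random.default_rng(0)
c0 = 1.0
field = c0 + 0.05 * rng.standard_normal((30, 30))
print(f"GES = {global_extent_of_segregation(field, c0):.4f} wt%")
```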
12) who studied vertical solidification of Pb-Sn alloys. The main observation was that a decrease in the rate of solidification led to increases in the strength of the flow and in macrosegregation. Although the present study is on horizontal solidification, at the lower heat fluxes solutal buoyancy plays a significant role in the mushy region. The drop in GES can be attributed to the net drop in the strength of solutal buoyancy. Between q = 10 kW/m² and 100 kW/m², GES first rises and then starts to decrease. This peculiar behavior is due to the opposing nature of thermo-solutal convection, which becomes important at the lower heat fluxes. In this zone, thermal buoyancy completely overcomes solutal buoyancy and causes a rise in macrosegregation. However, this rise is arrested beyond q = 60 kW/m², as the higher heat fluxes also cause a higher rate of solidification, which reduces the time available for the evolution of macrosegregation. Prescott and Incropera, 13) in their numerical study on Pb-Sn alloy, observed similar phenomena. The main reasons for the fall in the macrosegregation level at very high heat fluxes are the shortening of the solidification time and the narrow mushy region, which allows very little solute transport to the pure liquid region. Thus it is clearly seen that, due to the opposing nature of thermo-solutal convection, the GES curve shows a hump between two monotonically decreasing portions. The above results clearly show a complex variation of GES with heat flux.

Conclusions

The aim of the present work was to study the effect of cooling rate on double-diffusive convection and its role in the evolution of macrosegregation during solidification of Fe-1wt%C alloy. A comprehensive model was used to simulate the effect of heat flux on thermo-solutal convection and, in turn, on macrosegregation. The heat flux was varied in the range of 5 to 6 000 kW/m². Some of the important findings of the present study are highlighted below.
• GES goes down monotonically up to q = 10 kW/m². Solutal buoyancy plays a crucial role in this regime: the lower the rate of solidification, the higher the GES.
• Between heat fluxes of 10 and 100 kW/m², GES goes through a maximum. This is due to the opposing nature of thermo-solutal convection. In this regime, thermal buoyancy plays an important role in the evolution of macrosegregation.
• For higher heat fluxes (>100 kW/m²), the GES curve goes down monotonically with an increase in heat flux. The main reasons for the decrease in GES are the shortening of the solidification time and the narrow mushy zone, which does not allow solute transport out of the mushy region.
2019-04-05T03:38:58.388Z
2001-12-15T00:00:00.000
{ "year": 2001, "sha1": "5c5a6b4f6a22c7273eb1f969e924ec3eba18c749", "oa_license": null, "oa_url": "https://www.jstage.jst.go.jp/article/isijinternational1989/41/12/41_12_1481/_pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "c370a458d06342baa6805be6346482f85201afc1", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Chemistry" ] }
236947853
pes2o/s2orc
v3-fos-license
Human beta defensin levels and vaginal microbiome composition in post-menopausal women diagnosed with lichen sclerosus

Human beta defensins (hBDs) may play an important role in the progression of lichen sclerosus (LS), due to their ability to induce excessive stimulation of extracellular matrix synthesis and fibroblast activation. The genetic ability of the individual to produce defensins, the presence of microbes influencing defensin production, and the sensitivity of microbes to defensins together regulate the formation of an ever-changing balance between defensin levels and microbiome composition. We investigated the potential differences in postmenopausal vaginal microbiome composition and vaginal hBD levels in LS patients compared to non-LS controls. LS patients exhibited significantly lower levels of hBD1 (p = 0.0003), and significantly higher levels of hBD2 (p = 0.0359) and hBD3 (p = 0.0002), compared to the control group. The microbiome of the LS patients was dominated by possibly harmful bacteria, including Lactobacillus iners, Streptococcus anginosus or Gardnerella vaginalis, known to initiate direct or indirect damage by increasing defensin production. Our observations highlight that correcting the composition of the microbiome may be applicable in supplementary LS therapy by targeting the restoration of the beneficial flora that does not increase hBD2-3 production.

Human β defensins (hBDs) belong to the group of cysteine-rich short-chain natural antibacterial peptides. β-defensins are subdivided into further subgroups: hBD1 is produced constitutively in the kidney, in the epithelial cells of the respiratory tract and in the female genital tract, while hBD2 and hBD3 are inducible and expressed in inflammatory diseases of the skin. In addition to their known antibacterial activity, they contribute to immunomodulatory and chemotactic effects in inflammatory processes, infections and wound healing 8. Lower relative mRNA expression of hBD1, but significantly higher hBD2 and hBD3 mRNA expression levels, have been observed in LS patients compared to healthy controls 9. Higher amounts of different hBDs in LS may change the appearance of the skin, resembling pathological scarring, due to excessive stimulation of matrix synthesis and fibroblast activation. Pathogens of all sorts of infections induce production of β defensins 10-12. In turn, the increased levels of these peptides affect the composition of the surrounding bacterial flora due to their selective antimicrobial activity 13,14. In the study of Gliniewicz et al. 15, two-thirds of postmenopausal healthy patients had a Streptococcus-dominated microbiome, one-fifth of individuals had a Gardnerella-dominated microbiome, while the others belonged to L. crispatus- or L. iners-dominated clusters. Although the vaginal microbiome with the most optimal composition is dominated by L. crispatus, in many patients without clinical issues the proportion of other bacteria is higher. Therefore, if a pathological condition emerges, in a complex environment like the genital tract not only the bacterial composition has to be assessed, but a number of other factors as well. The genetic ability of an individual to produce human defensins, the presence of microorganisms influencing defensin production in the surrounding environment of producer cells, and the sensitivity of microbes to defensins together regulate the formation of an ever-changing balance between defensin levels and microbiome composition.
Menopause may be a time of reduced genital tract health, reflecting changes in the vaginal microbiome and mucosal environment. In our current study, we aimed to investigate the postmenopausal vaginal microbiome and associated defensin levels in LS and control patients.

Results

Participants in both the LS (15) and control (8) groups were postmenopausal. All LS patients had a histologically confirmed illness for at least 9 years, and they all had subjective symptoms and objective signs at the time of the study (Suppl. Tables 1-3). The assessment of symptom severity is summarized in Table 1. In LS patients, hBD1 levels were significantly lower (median: 297 ng/mL) than in the CTL group (median: 975 ng/mL) (p = 0.0003), while hBD2 (LS median: 1110 pg/mL and CTL median: 614 pg/mL) (p = 0.0359) and hBD3 levels (LS median: 2998 ng/mL and CTL median: 994.5 ng/mL) (p = 0.0002) were significantly elevated in the LS group, measured in 10 mL of cervicovaginal lavage. Based on subjective evaluation, the most severe symptoms were in patient LS7, and the mildest in patients LS6, LS10, LS11 and LS14. Patient LS1 had the highest global objective score and patient LS7 had the lowest. The Günthert 16 severity score was highest in patients LS1 and LS9, and lowest in LS2, LS7, LS11 and LS12. Although there are discrepancies between the subjective and objective severity assessments, none of the scores show a relationship between severity and the microbiome-determining dominant bacterial genus. No significant correlations, or any trends, were found between symptom severity score values and hBD levels in any given LS patient. A total of 9.8 million valid sequences were obtained, resulting in 5.6 million high-quality reads; the median number of reads within one sample was 241,678 (IQR: 36,119). No statistically significant differences were found in microbial alpha diversity between the samples of LS and CTL patients by any of the metrics used (Fig. 1a: Simpson, 1b: Chao1, 1c: Shannon alpha diversity analysis) with Wilcoxon rank sum testing at species level. Regardless of whether the patients belonged to the LS or the control group, they were equally distributed among the Lactobacillus-dominated or polymicrobial, mainly Streptococcus- or Gardnerella-Atopobium-dominated, clusters. At genus level, one-third of the patients had a Lactobacillus-dominated microbiome in both the LS (5/15) and control (3/8) groups (Fig. 2a). There was no significant difference in the genus dominance of the groups using the chi-square test (p = 0.842). Aggregated by cohort at genus level, the microbiome composition of the LS cohort consisted of 35% Lactobacillus and 16% Streptococcus, while the control cohort contained 36% Lactobacillus and 12% Streptococcus (Fig. 2b). There were no significant differences in Streptococcus (p = 0.757) or Lactobacillus (p = 0.957) abundance between the LS and control groups at genus level. Moving on to the species-level analyses, a more striking difference was observed: among the lactobacilli, the species L. iners was present in an exceptionally high proportion in the LS group compared with the control group (p = 0.027) (Fig. 2c,d) (Suppl. Table 4). There was no significant difference between the abundance of S. anginosus in the LS and control groups (p = 0.832). Figure 3a shows, in a heatmap with dendrogram annotation, how the samples at genus level separated into two clusters, regardless of whether they belonged to the control or the LS group.
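For readers who want to reproduce the alpha-diversity comparison conceptually, the sketch below computes Shannon and Simpson indices from a vector of per-taxon read counts using the standard formulas. It is a minimal stand-in for the CosmosID pipeline used in the study, and the example counts are invented.

```python
import math

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over nonzero taxa."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def simpson(counts):
    """Simpson diversity 1 - sum(p_i^2)."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

# toy per-taxon read counts for one sample
sample = [5200, 1300, 800, 90, 10]
print(f"Shannon = {shannon(sample):.3f}, Simpson = {simpson(sample):.3f}")
```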
Figure 3b (Bray-Curtis Principal Coordinate Analysis, PCoA) also showed that the samples separated into two clusters, both of which contained LS and control samples. Cluster 1 contained the samples characterized by a polymicrobial bacterial population, while Cluster 2 samples were dominated by Lactobacillus. According to PERMANOVA analysis, no significant differences between the LS and control groups were observed at species level. When the LS and control groups were divided into additional cohorts based on Lactobacillus dominance or polymicrobial character, only the β diversity of the Lactobacillus-dominant control and LS cohorts differed significantly by PERMANOVA analysis. For a complete analysis, please consult Table 2. Figure 4 shows a heatmap visualization of the 35 most abundant taxa at species level among LS patients. In patients where L. iners was the most common species, with a relative abundance between 68-96%, other notable species were not detected in the vaginal microbiome, with the exception of Lactobacillus u.s. and Pediococcus acidilactici. ... associated with L. iners. S. anginosus was frequently co-existing almost exclusively with other Streptococcus sp. or Corynebacterium u.s. In the polymicrobial group, the most abundant species were Gardnerella u.s., Bifidobacterium u.s. and Atopobium vaginae, frequently co-existing. LS patients were divided into 3 distinct groups based on the levels of hBD2 and hBD3 relative to the median LS defensin values. The first cohort includes patients whose hBD2 and hBD3 levels were lower than the median values (LS5, LS10, LS11, LS14). The second cohort contains LS patients whose hBD2 or hBD3 levels were higher than the median values (LS1, LS2, LS3, LS15), and patients in the third cohort had higher levels of both inducible hBDs than the median values in the LS group (LS4, LS6, LS7, LS8, LS9, LS12, LS13). Figure 5 shows that the amount of L. iners in the samples increased in parallel with hBD2 and hBD3 levels. However, due to the high SD values and low sample size, significant differences were observed only between the lowest and highest hBD groups (first cohort ↔ second cohort: p = 0.387; second cohort ↔ third cohort: p = 0.592; first cohort ↔ third cohort: p = 0.046). Of note, the incidence of Streptococcus anginosus changed in the opposite direction to hBD2-3 levels, but the differences are not significant due to high SD (first cohort ↔ second cohort: p = 0.331; second cohort ↔ third cohort: p = 0.385; first cohort ↔ third cohort: p = 0.109).

Discussion

All patients in the LS group had a positive diagnosis of lichen sclerosus for at least 9 years, but at the time of sampling they presented themselves for examination because their symptoms had worsened. The vaginal microbiome of healthy women during menopause can differ 15 from the ideal microbiome dominated by L. crispatus, without any symptoms or disease. With increasing age, a number of individual factors and hormonal changes shape the microbiome that develops during menopause. LS is probably a multicausal disease in which individual genetic factors, microorganisms, and autoimmunity play roles in its formation and progression. There were patients in the healthy control group whose microbiome was Streptococcus-dominated or polymicrobial but who did not have any pathological symptoms. Our working hypothesis is that potentially disease-causing bacteria that form the microbiome of LS patients are involved in the progression of this multicausal disease.
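The Bray-Curtis dissimilarity underlying the PCoA above is simple to compute; the sketch below shows the standard formula applied to two relative-abundance profiles. It mirrors the metric itself, not the CosmosID implementation, and the profiles are invented.

```python
def bray_curtis(u, v):
    """Bray-Curtis dissimilarity: sum|u_i - v_i| / sum(u_i + v_i)."""
    num = sum(abs(a - b) for a, b in zip(u, v))
    den = sum(a + b for a, b in zip(u, v))
    return num / den if den else 0.0

# toy relative-abundance profiles over the same five taxa
sample_ls  = [0.70, 0.10, 0.10, 0.05, 0.05]   # e.g. an L. iners-dominated sample
sample_ctl = [0.10, 0.50, 0.20, 0.10, 0.10]   # e.g. a polymicrobial sample
print(f"Bray-Curtis = {bray_curtis(sample_ls, sample_ctl):.3f}")
```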
Each of the bacteria predominantly present in LS patients has virulence factors that can worsen the prognosis of the disease. Not all vaginal Lactobacillus species are equally beneficial to the host. L. crispatus is the optimal species associated with vaginal health, whereas L. iners may be associated with the development of pathological conditions 17. The most important virulence factors of L. iners are inerolysin and the AB-1 adhesin 18. Inerolysin is a pore-forming, cholesterol-dependent cytolysin toxin, which interacts with the CD59 human cell surface receptor and, at the end of a multi-step process, induces perforation of the cell membrane and ultimately cell death 19. The AB-1 adhesin attaches to human fibronectin 18. Fibronectin is one of the extracellular matrix components whose expression and distribution are altered in lichen sclerosus 20. Further altered components are tenascin, fibrinogen, biglycan, versican and ECM-1 21. The alteration in these extracellular matrix components may be relevant to the initiation of scarring in LS and to the associated increased skin fragility 20. S. anginosus is a pathogenic species, the predominant microorganism in patients with aerobic vaginitis. Successful binding of these bacteria to extracellular matrix proteins, like fibronectin, fibrinogen and laminin, plays an important role in their pathogenesis 22. The sag haemolysin of S. anginosus has been described to initiate vaginal epithelial cell lysis 23. Gardnerella vaginalis and Atopobium vaginae are thought to be etiologic agents of bacterial vaginosis (BV). G. vaginalis is able to effectively displace lactobacilli and adhere to vaginal epithelial cells 24, and has an increased propensity for biofilm formation 25. Enzymes produced by G. vaginalis (vaginal sialidase or vaginolysin) promote the breakdown of the mucous layer and the vaginal epithelium 26. Mature biofilm facilitates the adhesion of second colonizers, including A. vaginae 27. A. vaginae induces a broad range of pro-inflammatory cytokines, chemokines, and antimicrobial peptides, including IL-1β, IL-6, IL-8, MIP-3α, hBD-2 and TNFα 17. Some of the bacteria in the vaginal microbiome are known to play a role in enhancing antimicrobial peptide production: hBD2 levels are most strongly elevated in the presence of A. vaginae, P. bivia and L. iners, without any effect on hBD1 production 17. Gambichler 9 and co-workers measured significantly lower hBD1 mRNA expression, and higher hBD2 and hBD3 mRNA expression, in LS patients than in controls. In our study, LS patients had significantly lower levels of hBD1 (p = 0.0003), and significantly higher levels of hBD2 (p = 0.0359) and hBD3 (p = 0.0002), compared to the control group. Psoriasin, LL-37 and RNAse 7 were also analysed in the above-mentioned study, which measured a higher level of constitutively expressed psoriasin in LS patients but no differences in the levels of inducible LL-37 and RNAse 7 between LS patients and controls. Further studies are needed to characterize the factors influencing the prevalence of bacterial species in a complex environment such as the vagina. Increased hBD2 and hBD3 levels were correlated with higher amounts of Lactobacillus sp. in the vaginal microbiome 28. During our study, the detected concentration of defensins overall was about 2 µg/mL in the 10 mL of lavage fluid; however, this may reflect a considerably higher concentration directly on the mucosal surface.
It would be reasonable to speculate that the survival of different Lactobacillus species and other bacteria is largely affected by these amounts of different defensins. Antimicrobial peptide (AMP) susceptibility and the capability of different bacteria to induce the production of AMPs may explain the difference in the levels of defensins in LS patients and controls, and can affect the composition of the corresponding microbiomes. In both the control and LS patient groups, the presence of L. iners in the microbiome was only observed at low hBD1 levels. Further studies are needed to investigate whether low hBD1 levels are a prerequisite for L. iners to exist in the vaginal microbiota. The low level of hBD1 in LS patients may explain the differences in the Lactobacillus species present in patients compared with controls. Based on our results, it appears that in LS patients, characterized by low hBD1 levels, a series of bacterial species are present, as opposed to the healthy flora dominated by only L. crispatus. Consequently, as hBD2 and hBD3 levels increased, the total amount of S. anginosus decreased and the presence of L. iners increased. Limitations of this study are the small number of patients, the exclusive use of the 16S rRNA sequencing method, which provides species-level identification in only a few cases, and the lack of proteomic analysis. The latter would highlight the importance and relationship of additional antibacterial peptides and bacterial products in patients diagnosed with LS. In summary, we observed differences in both defensin levels and the microbial composition in the samples obtained from LS patients compared to the samples from non-LS patients. Although the differences were clearly observable, additional studies are warranted to explore the cause-and-effect relationship between defensin levels and the presence/absence of various microorganisms (e.g., L. iners). Consideration should be given to supplementing LS therapy with Lactobacillus-containing probiotics, or to restoring the beneficial flora that does not induce an increase in hBD2-3 production, in order to improve the quality of life in patients affected by LS. It would be worthwhile to investigate whether higher levels of hBD1 are required for the colonization of beneficial lactoflora.

Methods

The participants included women in an LS and a control (CTL) group. The LS group included n = 15 women diagnosed with LS based on histological findings. Members of the LS group suffered from different active symptoms or refused them. The CTL group had n = 8 individuals, who were patients of the Department with other dermatological diseases (melanoma or basal cell carcinoma) and who voluntarily agreed to have their vaginal secretions examined. Only individuals (patients and controls) who were not taking antibiotics or immunosuppressive medications for any reason in the 3 months prior to sample collection were included in our study. Exclusion criteria for both groups were: positive history of sexually transmitted or recent genital infections, use of a lactobacillus-containing suppository, or gynecological intervention in the last 3 months. In all cases, the physical examination was preceded by the completion of a questionnaire on previous illnesses, their treatment and current complaints. The LS score classification was based on a subjective scoring of relevant symptoms, an objective score and the Günthert classification 16.
Subjective scores for pruritus, burning sensation and dyspareunia were quantified by interview, using a visual analogue scale (VAS, which included a numeric rating scale of 0-10). A global subjective score (GSS) was obtained by summing the scores of each symptom parameter (highest GSS = 30). The following objective parameters were scored to evaluate the clinical features of the patients: (1) leukoderma, (2) sclerosis, (3) atrophy, (4) fine wrinkling, (5) lichenification, (6) hyperkeratosis, (7) erosion, (8) oedema, (9) erythema, (10) purpuric lesions, (11) itching-related excoriations, (12) unilateral labial adhesion, (13) bilateral labial adhesion. Each sign was scored using the following 4-point scale: 0 = absence, 1 = mild, 2 = moderate, 3 = severe. A global objective score (GOS) was obtained by summing the scores of each clinical parameter (highest GOS = 39). The Günthert score was calculated by scoring (1) erosion, (2) hyperkeratosis, (3) fissures, (4) agglutination, (5) stenosis and (6) atrophy (0 = absence, 1 = mild, 2 = severe; global score maximum: 12). The characteristics of the study participants are presented in Table 1.

hBD ELISA. The following ELISA kits were used for quantitative measurement of human β defensins, according to the manufacturer's instructions: SEB373Hu for hBD1, SEA072Hu for hBD2 and SEE132Hu for hBD3 (Cloud-Clone Corp., Houston, USA). All diluted standards, samples and blank wells were measured in duplicate.

Statistical analysis. The levels of statistical significance for the differences in vaginal defensin levels and bacterial taxa abundances (variables that were not normally distributed) between the LS and CTL groups were calculated by the Mann-Whitney U test. The difference in the incidence of taxa was assessed by the chi-square test. Statistical significance between cohorts was assessed using Wilcoxon rank sum testing for microbiome alpha diversity (Chao1, Simpson, Shannon indices) and PERMANOVA analysis for Bray-Curtis PCoA beta diversity, using the statistical analysis support application of CosmosID (CosmosID Metagenomics Cloud, app.cosmosid.com, CosmosID Inc., www.cosmosid.com).

Data availability

The datasets generated during the current study are available in the Short Read Archive (SRA) of the National Center for Biotechnology Information under accession number PRJNA693292, http://www.ncbi.nlm.nih.gov/bioproject/693292.
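The core comparisons described above can be reproduced with standard SciPy calls; the sketch below applies the Mann-Whitney U test to two defensin-level samples and a chi-square test to a 2x2 dominance table. The numbers are fabricated placeholders, not the study data.

```python
import numpy as np
from scipy import stats

# hypothetical hBD1 measurements (ng/mL) for the two groups
hbd1_ls  = np.array([210, 297, 150, 340, 280, 310, 190, 260])
hbd1_ctl = np.array([890, 975, 1020, 800, 1100, 950])

u_stat, p_mwu = stats.mannwhitneyu(hbd1_ls, hbd1_ctl, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_mwu:.4f}")

# hypothetical 2x2 table: Lactobacillus-dominated vs polymicrobial, LS vs CTL
table = np.array([[5, 10],
                  [3, 5]])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f} (dof = {dof}), p = {p_chi:.3f}")
```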
Artemisia brevifolia Wall. ex DC. Enhances Cefixime Susceptibility by Reforming Antimicrobial Resistance

(1) Background: A possible solution to antimicrobial resistance (AMR) is synergism with plants like Artemisia brevifolia Wall. ex DC. (2) Methods: Phytochemical quantification of extracts (n-hexane (NH), ethyl acetate (EA), methanol (M), and aqueous (Aq)) was performed using RP-HPLC and chromogenic assays. Extracts were screened against resistant clinical isolates via disc diffusion, broth dilution, the checkerboard method, time–kill, and protein quantification assays. (3) Results: The M extract had the maximum phenolic (15.98 ± 0.1 μg GAE/mgE) and flavonoid contents (9.93 ± 0.5 μg QE/mgE). RP-HPLC displayed the maximum polyphenols in the M extract. Secondary metabolite determination showed the M extract to have the highest levels of glycosides, alkaloids, and tannins. Preliminary resistance profiling indicated that the selected isolates were resistant to cefixime (MIC 20–40 µg/mL). Extracts showed moderate antibacterial activity (MIC 60–100 µg/mL). The checkerboard method revealed total synergy between the EA extract and cefixime, with 10-fold reductions in the cefixime dose against resistant P. aeruginosa and MRSA. Moreover, A. brevifolia extracts potentiated the antibacterial effect of cefixime after 6 and 9 h. The synergistic combination was non- to slightly hemolytic and could inhibit bacterial protein while cefixime disrupted the cell wall, thus making it difficult for bacteria to survive. (4) Conclusion: A. brevifolia in combination with cefixime has the potential to inhibit AMR.

Introduction
Antibiotic treatment is the most significant therapy for bacterial infections and has marvelously enhanced human health. At present, there are more than 14 classes of antibiotics on the market to treat several infections [1]. In the past few decades, antibiotic resistance has emerged, leading to treatment failures. Unraveling the mechanisms of resistance is a priority in order to support work to devise efficacious therapies against life-threatening resistant infections. Resistance can be due to many factors such as antibiotic inactivation by bacteria or reduced bacterial membrane penetration. Furthermore, previously susceptible bacteria can develop resistance through genetic mutations or via generating biofilms to protect bacterial colonies from exogenous damage [2]. This stems from the overuse of antibiotics, promoting evolutionary resistance in bacteria via natural selection. Healthcare professionals have stressed the need for new drugs or therapies to combat emerging resistance among pathogens to antibiotics already available [3]. Therefore, there is ongoing research using both herbal and synthetic compounds in an attempt to find effective therapies against resistant bacterial isolates. Medicinal plants with proven antibacterial activity are good options, considering their wide availability and safety profiles [4].
The traditional medicinal system has established its role as a therapeutic alternative to the allopathic system to treat wounds, abrasions, and infections [5] and has been used and developed through the centuries since ancient times [6]. This natural-compound-based medicine has contributed effective antimalarial (artemisinin and quinine) and anticancer drugs (vinblastine and taxols) to treat the respective diseases [7]. Accordingly, scientists are inclined to investigate medicinal plants as an alternative source of antibiotics. Plants are enriched with several secondary metabolites, which exhibit antimicrobial properties. For example, terpenoids and essential oils have shown antibacterial effects by affecting cell membrane permeability [8]. Essential oils, besides numerous other applications [9], can be used in synergy with antibiotics to alter bacterial membrane permeability [10]. Furthermore, tannins bind to proteins, alkaloids intercalate into DNA, and flavonoids bind in an adhesion complex with the cell and also inhibit the enzymatic activity of bacterial cells [11]. Tannins also demonstrate dose-dependent antibacterial effects by potentially inhibiting extracellular enzymes, modulating bacterial cell metabolism via oxidative phosphorylation inhibition, limiting the substrate required for bacterial growth, and targeting proteolytic enzymes that inhibit protein synthesis [12].

Artemisia L. is a common and diverse genus of the Asteraceae family consisting of more than 500 species with significant therapeutic and economic importance [14]. Currently, 38 species of Artemisia have been identified and botanically reported in Pakistan, mainly in the dry and semidry areas of Khyber Pakhtunkhwa, Northern Punjab, Baluchistan, and Kashmir [14]. Artemisia brevifolia Wall. ex DC. is locally known as "Tarkha" or "Mori". It is one of the most commonly found species in cold desert areas, the Himalayas, Ladakh, Kashmir, and Afghanistan [15]. It is broadly spread in many areas of Pakistan over 2500 m in altitude, including Chitral, Gilgit, Swat, Baltistan, Khaghan, Astor, and the Deosai plains [16]. Traditionally, A. brevifolia has been used as an antiseptic, anthelmintic, antidiabetic, carminative, blood purifier, stress reliever, diuretic, pain reliever, antitussive, stomachic, and febrifuge, and as an antidote for scorpion stings [15].

Alkaloids, terpenoids, essential oils, and flavonoids have proven antibacterial activity. Previous studies on A. brevifolia validated the presence of alkaloids, flavonoids, terpenoids, glycosides, essential oils, and vitamins. It is hypothesized that A. brevifolia possesses significant antibacterial proficiency [16-18] owing to the presence of these previously reported chemical constituents.

Therefore, this research unveils a novel dimension in the scientific exploration of a plant species belonging to the commercially significant genus Artemisia, known for its well-established ethno-medicinal applications. Remarkably, despite its established reputation, the comprehensive exploration of the antimicrobial potential of this particular plant species remains conspicuously underrepresented in the existing scientific literature. An innovative approach to countering the formidable challenge of drug resistance is emerging: the synergistic interplay of phytochemicals with contemporary antibiotics. In this pioneering study, we delve into uncharted territory by thoroughly assessing the hitherto unexplored synergistic efficacy of A. brevifolia.
This marks the very first instance in which such an evaluation has been undertaken, thereby adding a novel dimension to the field of antimicrobial research. Furthermore, in an unprecedented stride, our research embarks upon a biocompatibility study of A. brevifolia, a facet that has hitherto remained unreported in scientific investigations. This endeavor underscores our commitment to shedding new light on the properties and applications of this plant species, further emphasizing the unique and pioneering nature of our research.

In sum, this research aimed to study the antibacterial effect of A. brevifolia against resistant clinical isolates as a single therapy and as a combination therapy with an antibiotic. Here, we demonstrate the synergistic activity of A. brevifolia extracts and cefixime in reducing the growth of cefixime-resistant clinical isolates. Our results provide evidence of the antibacterial activity of A. brevifolia and present it as a source for isolating antibacterial compounds.

Results
Percentage Yield
A. brevifolia crude extracts were prepared in four solvents spanning the polar to nonpolar range. The highest percent extract recovery was obtained with the aqueous (Aq) extract (6.1% w/w), and recovery gradually declined with decreasing solvent polarity. The M, EA, and NH solvents extracted the phytoconstituents from A. brevifolia at 4.65, 2.95, and 1.65%, respectively, of the total weight of dry plant (12 kg) used for extraction.

Total Flavonoid Content Estimation
The total flavonoid content (TFC) in A. brevifolia extracts is presented in Table 1. It was calculated with the calibration curve y = 0.0649x − 0.043, R² = 0.9927. The results show that the highest TFC was present in the M extract, followed by the Aq extract and then the EA extract. The lowest flavonoid content was present in the NH extract.

Total Phenolic Content Estimation
Total phenolic content was expressed as µg GAE/mgE and calculated with the calibration curve y = 0.0915x − 0.098, R² = 0.9939 (Table 1). The highest phenolic content was found in the M extract, followed by the Aq extract and then the EA extract, while the NH extract had the lowest phenolic content.

Secondary Metabolite Estimation
A. brevifolia extracts were qualitatively evaluated for the presence of phytochemicals (Table 2). The results showed that cardiac glycosides, alkaloids, and terpenoids were present in the NH extract. The EA extract had cardiac glycosides, anthraquinone glycosides, and terpenoids. In contrast, the M and Aq extracts exhibited glycosides (cardiac, anthraquinone, and coumarin), alkaloids, and tannins. Saponin content was observed only in the Aq extract.

RP-HPLC Analysis: Detection of Polyphenolic Content
The quantification of various polyphenols was performed using the RP-HPLC-DAD method by comparing the UV spectra and retention times of the standards with those of the test extracts (Table 4 and Figures 1 and 2). Polyphenols in the four extracts of A. brevifolia were quantified using 14 standards.
The NH extract showed the maximum concentration of catechin (0.82 ± 0.09 µg/mgE), while all other polyphenols were present in minute quantities. The EA extract was found to have the maximum concentrations of apigenin (8.65 ± 0.012 µg/mgE) and syringic acid (2.54 ± 0.04 µg/mgE). Apigenin (3.13 ± 0.015 µg/mgE), rutin (2.85 ± 0.025 µg/mgE), and catechin (2.43 ± 0.08 µg/mgE) were detected at the highest concentrations in the M extract among all the extracts, whereas in the Aq extract, rutin (1.11 ± 0.04 µg/mgE) and catechin (1.35 ± 0.08 µg/mgE) were the polyphenols present in the maximum concentrations, though in much lower amounts than in the other extracts. As depicted in the results, more polyphenols were quantified in the M extract than in the EA extract, suggesting it was the best candidate for bioactivity evaluation.

Clinical Bacterial Isolates Were Resistant to Cefixime
The susceptibility of selected clinical isolates of Gram-positive (MRSA and S. hemolyticus) and Gram-negative (E. coli and P. aeruginosa) bacteria to antibiotics was assessed using the disc diffusion method (Table 3). According to the Clinical and Laboratory Standards Institute (CLSI) guidelines, isolates with a ZOI ≤ 14 mm are considered resistant to an antibiotic at the standard CLSI-set dose [19]. The results showed that the growth of the selected clinical isolates was inhibited by ciprofloxacin, doxycycline, clarithromycin, and lincomycin. The ZOI ranged from 17 to 30 mm, 16 to 35 mm, 24 to 37 mm, and 20 to 35 mm for E. coli, P. aeruginosa, S. hemolyticus, and MRSA, respectively. Interestingly, all four clinical isolates were resistant to cefixime (10 µg), with no measurable ZOI. This corresponds to the CLSI guidelines, which set the resistance ZOI value for cefixime at ≤15 mm at 5 µg/disc [19]. Hence, cefixime was used in combination with A. brevifolia extracts for synergy studies against cefixime-resistant clinical bacterial isolates.

A. brevifolia Extracts Possess Mild to Moderate Antibacterial Activity
Next, the antibacterial activity of A. brevifolia extracts (100 µg/disc) was established using the disc diffusion method. All extracts showed mild to moderate growth inhibition of the selected clinical isolates (Table 3) as compared to ciprofloxacin (10 µg/disc). The A. brevifolia M extract showed a ZOI of 11 mm against cefixime-resistant E. coli, P. aeruginosa, and MRSA. Likewise, the A. brevifolia EA extract exhibited a maximum ZOI of 12 mm against cefixime-resistant E. coli and MRSA, with noteworthy growth inhibition observed for cefixime-resistant E. coli (ZOI 12 ± 0.7 mm).

Subsequent evaluation of MIC (Table 5) using the broth dilution method corroborated the initial antibacterial activity results. The A. brevifolia Aq extract was the least active, with the highest MIC value of 100.2 µg/mL against cefixime-resistant P. aeruginosa and MRSA. The EA extract demonstrated MIC values of 66.3 µg/mL against cefixime-resistant P. aeruginosa and 67.9 µg/mL against E. coli. Similar MIC values (79 and 82 µg/mL) were obtained for S. hemolyticus and MRSA when treated with the A. brevifolia M extract. Further evaluation of the MIC of cefixime validated its resistance profile against the selected clinical isolates: cefixime exhibited an MIC of 20-40 µg/mL, higher than the CLSI-set value of 0.25 µg/mL [19].

A. brevifolia Ethyl Acetate Extract Showed Total Synergism with Cefixime
The checkerboard method was used to determine the antibacterial efficacy of A. brevifolia extracts in combination with cefixime.
For each sample, two-fold serial dilutions starting from the MICs were used, where cefixime was diluted vertically while the extracts were diluted horizontally in a 96-well plate. Treatment of all clinical isolates with the combination of cefixime and A. brevifolia EA extract showed a three- to five-fold reduction (Table 6) in the MIC values of the extract. Interestingly, the MIC of cefixime declined four- to ten-fold in the presence of the A. brevifolia EA extract. This was supported by the fractional inhibitory concentration index (FICI) values, which were ≤0.5, indicating total synergism between the two samples. Similarly, the A. brevifolia M extract enhanced the susceptibility of E. coli to cefixime by two-fold, with a FICI value of 0.66, demonstrating partial synergism between the extract and the antibiotic. The remaining extracts also demonstrated partial synergism with cefixime, except for the NH extract, which showed no synergistic effect against E. coli. The aqueous extract appeared to be the least effective in mitigating resistance to cefixime, with no synergistic activity (FICI = 1) against S. hemolyticus.

The Effect of the Extracts Alone and in Combination Is Time Dependent
Time-kill kinetic studies were performed to assess whether the effect of A. brevifolia extracts, alone and in combination with cefixime, was time-dependent or concentration-dependent. All clinical isolates were tested at MIC, 2MIC, FICI, and 2FICI values. Overall, clinical isolates treated with the combination of cefixime and extracts at FICI and 2FICI values demonstrated significant growth inhibition throughout the treatment duration as compared to the individual treatments. The results were comparable with those of ciprofloxacin (positive control), to which the clinical isolates were susceptible. Bacterial growth in samples treated with extracts alone was much lower than with cefixime or DMSO (negative control).

Treatment of resistant E. coli with A. brevifolia EA extract (Figure 3A) at FICI values demonstrated maximum growth inhibition of 91.8% and 81.7% at 3 h and 9 h, respectively, as compared to 50% and 40.3% inhibition at the same time points with cefixime alone. Likewise, the Aq, EA, and NH extracts at 2FICI values showed growth inhibition of 93%, 83.5%, and 67.3% after 9 h of treatment. Treatment of cefixime-resistant P. aeruginosa (Figure 3B) with the extracts alone (MIC and 2MIC) or cefixime showed inhibition of bacterial growth until 6 h of treatment; later, there was an exponential increase in bacterial growth, as depicted by the increased absorbance of the samples. The combination of cefixime with A. brevifolia EA extract at 2FICI demonstrated 100% and 98.7% inhibition of the clinical isolates, as compared to 66% and 44.9% inhibition with cefixime alone at 6 h and 9 h of treatment, respectively. Similarly, 2FICI dosing of the NH, M, and Aq extracts also showed 100%, 89.5%, and 100% inhibition, respectively, at 6 h of treatment. Although this declined to 73.4%, 74.7%, and 72.4% inhibition at 9 h for the NH, M, and Aq extracts, the values were still higher than for cefixime alone (44.9%), indicating continued synergism between the samples.

The succeeding analysis of S. hemolyticus (Figure 4B) demonstrated a similar pattern of growth to that of E. coli but over different periods. The 2FICI dosing of all extracts was most effective in inhibiting the growth of cefixime-resistant S. hemolyticus, with inhibition peaking after 6 h of treatment.
There was 67.3%, 80.3%, 84.2%, and 76.3% growth inhibition at 6 h with 2FICI of the NH, EA, M, and Aq extracts, respectively, as compared to 54.7% inhibition with cefixime alone.

There was a drastic reduction in MRSA resistance (Figure 4A) to cefixime with FICI and 2FICI dosing. The growth of the MRSA clinical isolates was reduced by >10-fold through the synergistic action of cefixime and A. brevifolia extracts. As for all other samples, the effect of FICI and 2FICI peaked at 6 h, with percent inhibitions of 79.6%, 100%, 87.2%, and 100% at 2FICI for the NH, EA, M, and Aq extracts, respectively, much higher than the 6.7% growth inhibition by cefixime alone at 6 h. In short, A. brevifolia extracts potentiated the antibacterial effect of cefixime at FICI and 2FICI values, holding the isolates in a stationary growth phase after treatment, irrespective of time duration.

A. brevifolia Extracts Reduce Bacterial Protein Content
Disintegration of the cell envelope can be quantified using the leakage of cellular protein as a function of cell death. The protein content in the extracellular medium of treated and untreated bacterial strains was analyzed (Table 6) to understand the underlying cause of the antibacterial effect. Bovine serum albumin was used as a positive control. There was little reduction (5.4%) in protein content after treatment of resistant S. hemolyticus with cefixime alone. This increased to 79.8%, 68.4%, 78.4%, and 62.7% reductions in protein content when resistant S. hemolyticus was treated with the combination of cefixime and the NH, EA, M, and Aq extracts, respectively. This indicates that the extract-antibiotic synergy can degrade bacterial protein, making survival difficult. Furthermore, there was 73.9%, 74.4%, 82.5%, and 75% inhibition of protein content in cefixime-resistant MRSA isolates when treated with the combination of cefixime and the NH, EA, M, and Aq extracts, respectively, as compared to cefixime alone (35% reduction). Similarly, a percent protein reduction of 58.8-80.2% was observed in the resistant P. aeruginosa and E. coli isolates due to the synergistic activity of cefixime and A. brevifolia extracts. It is postulated that the extracts inhibit bacterial protein, which, together with the cell wall synthesis inhibitor cefixime, makes it difficult for the bacteria to survive; this seems to be the case considering the results of the protein content estimation.
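To make the percent-inhibition figures reported in the time-kill experiments concrete, the sketch below shows one way such values can be derived from OD600 readings against the negative (DMSO) control. The optical densities and the simple inhibition formula are assumptions for illustration, not the study's raw data or exact calculation.

```python
# Rough sketch: percent growth inhibition from OD600 readings over a
# time-kill experiment. All optical densities are invented placeholders.
import numpy as np

timepoints_h = [0, 3, 6, 9]  # sampling times used in the assay

od_control = np.array([0.05, 0.35, 0.80, 1.20])  # untreated (DMSO) control
od_treated = np.array([0.05, 0.10, 0.00, 0.15])  # e.g., cefixime + EA at 2FICI

def percent_inhibition(treated: np.ndarray, control: np.ndarray) -> np.ndarray:
    """Growth inhibition (%) relative to the negative control at each time point."""
    with np.errstate(divide="ignore", invalid="ignore"):
        inhibition = 100.0 * (1.0 - treated / control)
    # Treat undefined ratios (e.g., zero control OD) as 0% inhibition
    return np.nan_to_num(inhibition, nan=0.0)

for t, pi in zip(timepoints_h, percent_inhibition(od_treated, od_control)):
    print(f"{t} h: {pi:5.1f}% inhibition")
```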
Hemolytic Analysis
Hemolytic analysis was performed to check whether the drug or compound was toxic to red blood cells, causing hemolysis. According to the ASTM F756-00 protocols for the assessment of the hemolytic properties of samples, substances with hemolysis percentages of >5%, <5%, and <2% are considered hemolytic, slightly hemolytic, and non-hemolytic, respectively [22]. To our surprise, the extracts were hemolytic, with >5% hemolysis when used alone. However, their hemolytic potential declined when given in combination with cefixime. All combinations except NH/cefixime (6.45% hemolysis) had values ranging between 1.34% and 5.13% for FICI and 0% and 3.78% for 2FICI. This indicated that the majority of combinations were safe to use, with either a slightly hemolytic or non-hemolytic character. These results were significantly (p < 0.05) lower than for the positive control, Triton X-100 (100% hemolysis).

Discussion
Antimicrobial resistance has multiple causes, and it is a great concern in the health sciences as it causes treatment failures and poor prognoses of infectious diseases. Researchers are investigating multiple compounds from natural and synthetic sources that can work either alone or in combination with standard antibiotics to eliminate resistant infections [23]. Phytoconstituents of various medicinal plants have proven antibacterial activity against both antibiotic-susceptible and antibiotic-resistant bacteria [24-27]. Plants act as the greatest apothecary and a potential source of treatments for multiple diseases like arthritis, cancer, diabetes mellitus, and oxidative stress disorders [28,29]. Considering the beneficial aspects of medicinal plants, in the current work we established the antibacterial activity of Artemisia brevifolia extracts against resistant clinical bacterial isolates. We also verified the synergistic interaction between cefixime and A. brevifolia extracts that potentiates the antibacterial activity against cefixime-resistant clinical isolates.

A. brevifolia extracts were prepared in different solvents to yield variable phytoconstituents based on polarity, as follows:
• Extraction can be a potentially rate-limiting step when preparing samples for screening bioactive compounds of interest. The efficiency of this step is affected by many factors such as solvent polarity, extraction method, physical characteristics of sample particles (such as size), and period of extraction. These contingency factors were addressed, first, by selecting four solvents for extraction (n-hexane, methanol, ethyl acetate, and water) of variable polarity.
• Second, samples were macerated for 72 h, ensuring sufficient time for the solvents to penetrate the fine particles of powdered plant.
• Third, maceration was combined with periodic sonication, aiding the diffusion of the solvent and the extraction of phytoconstituents from the powdered plant material.

A significant extraction yield (6.1%) was obtained with the Aq extract, which indicated the presence of more polar content. Oligosaccharides, sugars, and resins often solubilize better in distilled water compared to other solvents, contributing to the overall extraction yield [30]. Although extraction yield depends on the polarity of the phytoconstituents and the solvent used for extraction, the maximum yield does not dictate the medicinal value, since that is directed by the chemical composition and inherent nature of the phytochemicals [31].
Preliminary phytochemical analysis of A. brevifolia extracts showed the presence of alkaloids, glycosides, tannins, and terpenoids in the different extracts. Alkaloids can inhibit bacterial growth by altering membrane permeability, inhibiting nucleic acid synthesis, and disrupting cell division [32]. In addition, research has shown that glycosides such as glycyrrhizin have promising antibacterial effects due to the inhibition of RNA synthesis in bacteria [33]. Furthermore, tannins exhibit antidiarrheal, antibacterial, antiviral, antitussive, antitumor, and wound-healing activities [34]. Previously reported studies showed that saponins have detergent-like activity and an antibacterial effect through increasing bacterial cell wall permeability [21]. The presence of these phytoconstituents in A. brevifolia extracts can be responsible for the subsequent antibacterial activity of the plant.

Bacteria have started to develop resistance against commonly used antibiotics. Resistant species of S. aureus, S. hemolyticus, E. coli, etc., have been recognized in various clinical settings as causing frequent infections and prolonged durations of infectious disease [35-37]. Medicinal plants are being investigated to optimize therapy for resistant infections. In the current study, the susceptibility of the bacterial clinical isolates to selected antibiotics and A. brevifolia extracts was assessed to determine the resistance profile and antibacterial capacity of the samples. The disc diffusion method creates zones of inhibition around the sample-impregnated discs, signifying antibacterial activity: the greater the size of the ZOI, the higher the susceptibility of the microorganism to the test sample. The results revealed that all the extracts demonstrated mild to moderate antibacterial activity, whereby the EA extract was more active against MRSA and R. S. hemolyticus. On the contrary, the NH and M extracts were active against R. E. coli and R. P. aeruginosa, respectively.

Hydroxylated phenols and phenolic compounds are found to be toxic to many microorganisms. The toxicity of phenols to microorganisms depends on the level of hydroxylation, with higher hydroxylation levels being more toxic to microorganisms [11]. The genus Artemisia is rich in several essential metabolites such as glucosinolates, saponins, cyanogenic glycosides, tannins, unsaturated lactones, phenols, and flavonoids. These phytochemicals are used to treat multiple ailments such as malaria, bacterial infection, cancer, and inflammation [38]. In this study, the antibacterial activity of the extracts might have been due to the hydroxylated phenols present in this plant, which were quantified through HPLC (emodin, luteolin, vanillic acid, syringic acid, gallic acid, coumarins, flavonoids, and flavones) [39].

Furthermore, the susceptibility testing revealed that all clinical isolates were resistant to cefixime. Hence, the clinical isolates were treated with cefixime in combination with A. brevifolia extracts to observe the possibility of synergism.
The "one drug, one target, one disease" paradigm has been the orthodox pharmaceutical strategy, but the emergence of resistance to even previously potent antibiotics has prompted a rethink. Currently, a multi-drug-target approach is utilized to augment the efficacy of antibacterial therapy. This paradigm shift is dictated by the limited effectiveness, resistance, and side effects of monotherapy [40]. The prime advantage of plant-based drugs is that they can be safe, easily affordable, have minimal or no side effects, and act on multiple biological targets. A combination of standard antibiotics with plant-based drugs can provide better synergism with the fewest side effects, particularly against resistant infections. Synergism can decrease the MIC of many marketed antibiotics in the presence of plant extracts. Research shows that polyphenols decrease beta-lactam resistance, while flavonoids, diterpenes, and triterpenes have resistance-modulating abilities against many contemporary antibiotics [28,41-43].

Researchers believe that, generally, the mechanisms that can cause this interaction are inhibition of sequential biochemical pathways, the use of membranotropic agents to enhance the diffusion of other antibacterial drugs, inhibition of the enzymes that protect microorganisms, and the use of a membrane-active agent in combination [44].

In the current study, the checkerboard method was used to determine the synergistic interaction between cefixime and A. brevifolia extracts. Previous research outlined that if the MICs of the extract and the antibiotic both decrease four-fold, the combination is termed synergistic, while if the MIC of the first test sample decreases four-fold and that of the other two-fold, the interaction is termed partially synergistic [45,46]. Treatment of all clinical isolates with the combination of cefixime and A. brevifolia EA extract showed a four- to ten-fold reduction in the MIC values of cefixime. This was reinforced by the FICI values, which were ≤0.5, indicating total synergism between the two samples. Likewise, the A. brevifolia M extract enhanced the susceptibility of E. coli to cefixime by two-fold, with a FICI value of 0.66, demonstrating partial synergism. A limitation of the checkerboard method is that more resources are used to test antibacterial combinations and that more than one antimicrobial combination cannot be checked at a time [45]. However, it provided evidence that the extract-cefixime synergism successfully inhibited the growth of both the Gram-positive and Gram-negative clinical isolates used in this study, showing a broad spectrum of activity. It is possible that A. brevifolia extracts inhibited sequential biochemical pathways, enhanced the diffusion of cefixime, inhibited the protein synthesis of the bacteria, or inhibited the degradation of the antibiotic [44]. Future work is planned to assess the mechanism of the synergistic interaction observed in this study.

Next, we determined, using time-kill kinetic studies, whether the interaction between A. brevifolia extracts and cefixime was bactericidal or bacteriostatic. Jacqueline et al. described how time-kill kinetic studies are used to determine a bactericidal effect, which may be dependent on time rather than concentration [47]. It was observed that there was significant inhibition of E. coli growth when treated with the combination of A. brevifolia extracts and cefixime (FICI, 2FICI) as compared to A. brevifolia extracts alone (MIC, 2MIC).
DMSO (negative control) did not interfere with the results, with constant exponential growth for all isolates. Cefixime-resistant clinical isolates showed initial growth from 0 to 3 h after treatment with MIC and 2MIC values of the extracts; growth then gradually started to decline after 3 h of treatment. On the other hand, the growth of E. coli started to decline after 6 h of treatment with the A. brevifolia NH extract. Samples at their FICI and 2FICI values held the clinical isolates in the stationary phase of the growth curve throughout the treatment duration.

The trend of resistant S. hemolyticus growth was log phase, partial death phase, and again log phase for the durations of 0-3, 3-6, and 6-9 h, respectively, when treated at MIC and 2MIC values. On the contrary, clinical isolates treated at 2FICI more or less remained in the stationary phase, with no significant overall growth. Although isolates treated at FICI also displayed greater growth inhibition of cefixime-resistant S. hemolyticus than cefixime or the extracts alone, an increase in bacterial growth was observed after 9 h of treatment. Exposure of cefixime-resistant P. aeruginosa to MIC and 2MIC of the extracts or cefixime alone displayed inhibition of the clinical isolates until 6 h of treatment; later, there was an exponential increase in bacterial growth, as represented by the increased absorbance of the samples. On the contrary, growth inhibition after treatment with the combination of cefixime and extracts (FICI, 2FICI) was more pronounced than with the lone treatments. Moreover, the A. brevifolia M extract imparted better synergism against cefixime-resistant P. aeruginosa than the other extracts by keeping the clinical isolates in a stationary phase.

The MRSA clinical isolate used in the present study showed noteworthy resistance against cefixime, with exponential growth. In contrast, there was exponential bacterial growth in the first 3 h of treatment with ciprofloxacin (positive control), but it drastically declined after 6 h of treatment. Likewise, all A. brevifolia extracts (MIC; 2MIC) significantly halted MRSA growth. The competition between survival and growth inhibition of the bacteria generated slight variability in percent inhibition values at different time points. However, overall, it could be seen that the combination of extract and cefixime augmented the activity of cefixime against the clinical isolates.

In the current study, the combination of cefixime with plant extract against all bacterial isolates showed synergistic interaction when calculating the FICI index in the checkerboard method, while the time-kill kinetic curves showed additive interaction. The same pattern of interaction was reported previously for Helichrysum pedunculatum methanolic extracts when given in synergy with antibiotics against Staphylococcus aureus [41]. Likewise, the interactions of antibiotic drugs with acetone extract of the seeds of Garcinia kola [48] and with Thymus vulgaris were also reported as synergistic/additive when time-kill kinetic assays were performed [49].

Antibacterial drugs inhibit bacterial growth or kill bacteria by targeting proteins, cell walls, cell membranes, and nucleic acid synthesis [50]. We assessed the A. brevifolia-cefixime synergy by analyzing the protein content of the medium of the clinical isolates. Cefixime disrupts cell wall integrity [51], and its combination with A. brevifolia extract significantly (p < 0.05) reduced viable proteins.
Cefixime's impact on protein content is limited due to its role as a cell wall inhibitor. However, the combined treatment likely curbed protein synthesis or induced apoptosis, lowering the protein content, which suggests an influence of A. brevifolia on bacterial protein synthesis, potentially through inhibition of tRNA release, peptide bond synthesis, or initiation complex formation [50]. Similar effects were observed for Cymbopogon khasianus on resistant clinical isolates [3].

HPLC quantification was performed using 14 polyphenol standards, and it confirmed the flavonoids and polyphenols in A. brevifolia. Some of the phytochemicals present in A. brevifolia were vanillic acid, gallic acid, caffeic acid, and syringic acid [39]. A literature review showed that vanillic acid ruptures the cell membrane and inhibits cell growth [52], while gallic acid inhibits biofilm production and disrupts the bacterial cell membrane [53]. Caffeic acid acts by inhibiting the RNA polymerase enzyme, and syringic acid is an ATP synthesis inhibitor [11]. All these phytochemicals have been quantified in the subject plant. Therefore, it can be hypothesized that when these polyphenols were combined with cefixime, the result was an additive mechanism of action in which, as suggested in the literature, the phytochemicals may have acted upon cell wall integrity [54]. It has also been reported that some plant chemical compounds inhibit bacterial growth or improve the effect of antibacterial drugs by acting on the same site as peptidoglycan [55].

In the present study, we performed a toxicity analysis as part of the efficacy study. For this purpose, a hemolytic assay was performed, which gives information regarding the cytotoxicity of samples to blood. This model is frequently used because of the easy availability of red blood cells and their simple isolation protocols. Moreover, the membrane physiology of red blood cells is similar to the membranes of other cells present in the body [56]. A. brevifolia extracts in combination with cefixime displayed a slightly hemolytic or non-hemolytic character; this was different from the A. brevifolia NH and M extracts alone, which presented a hemolytic character. The EA extracts were found to be safer to use in humans as a component of antibacterial therapy. Yet, in vivo toxicity studies must be conducted to determine detailed toxicity versus efficacy profiles.

The strengths of this study are evident in the significant findings that highlight the potential of A. brevifolia as a potent antibacterial agent. The identification of substantial minimum inhibitory concentrations in both the ethyl acetate and methanolic extracts against resistant bacterial strains, including Staphylococcus haemolyticus, methicillin-resistant Staphylococcus aureus, Escherichia coli, and Pseudomonas aeruginosa, underscores the broad-spectrum antibacterial activity of these extracts. Moreover, the observation of total synergistic effects when the ethyl acetate extract was used in combination against all bacterial strains is particularly promising. The study's assessment of safety, as indicated by negligible hemolysis of red blood cells, adds to its credibility.

However, it is important to acknowledge the limitations of this research. While the in vitro findings are encouraging, the transition to in vivo studies is crucial to determine the real efficacy and potential toxicity effects when these extracts are used in living organisms.
Clinical controlled trials are necessary to validate the therapeutic potential of these extracts in treating infectious diseases, and further research should aim to increase the variety of drugs, expand the number of clinical isolates, and identify the specific compounds within the plant extracts responsible for their antibacterial properties. Additionally, a deeper exploration of the mechanism of action and the formulation of pharmacological agents based on these extracts would enhance the practical application of this promising research.

Materials and Methods
Preparation of Extracts
A. brevifolia was collected in August 2018 from Hunza Valley, Baltistan by Dr. Ihsan ul Haq, Associate Professor, Department of Pharmacy, Quaid-i-Azam University, Islamabad. It was identified by Dr. Sher Wali Khan, Karakoram International University, Gilgit Baltistan. The specimen was submitted (Herbarium No. PHM 512) to the Herbarium of Medicinal Plants, Quaid-i-Azam University, Islamabad, Pakistan. The collection and investigation of A. brevifolia were supported in part by the Indigenous Fellowship of HEC provided to the first author (520-142973-2MD6-130 (50093185)). About 12 kg of plant material was washed, shade dried (3 weeks), pulverized to a coarse powder, and stored in an airtight container. Successive maceration aided by ultra-sonication was used to extract the plant material, as previously reported [31]. The dry powder was macerated with four analytical-grade solvents (non-polar to polar), namely NH, EA, M, and distilled water (Aq), at a ratio of 1:4 (powder:solvent) for 72 h at 25 °C, with 10 min of sonication each day. After 3 days, the extracts were filtered and concentrated using a reduced-pressure rotary evaporator (Ribby, UK) at 45 °C. The plant extracts were collected in labeled containers and stored at −80 °C for further testing. The dried extracts were weighed to calculate the percentage extract recovery using the formula:

%age Extract Recovery = (Total weight of extract after drying / Total weight of plant powder) × 100

Total Phenolic Content
The protocol used to determine the total phenolic content was given by [31] and was subject to a few modifications. In our work, 20 µL of each test sample (4 mg/mL) was poured into a 96-well plate, followed by the addition of 90 µL of Folin-Ciocalteu reagent. The plate was incubated for 5 min, and then sodium carbonate was added to the reaction mixture. After that, the absorbance of the plate was read at 630 nm using a microplate reader (BioTek; Shoreline, USA). DMSO was used as the negative control while gallic acid was used as the positive control. The assay was carried out in triplicate and the results were given as mg gallic acid equivalent per gram dry weight.

Glycoside, Alkaloidal, Tannin, Saponin, and Terpenoid Contents
Three types of glycosides (cardiac, anthraquinone, and coumarin) were determined using the method given by Shaikh et al. [58] with minor modifications. Cardiac glycosides were confirmed through the Keller-Kiliani, Salkowski, and Baljet tests. Anthraquinone glycosides were estimated via borax and modified Borntrager's tests. The sodium-hydroxide-mediated fluorescence method indicated the presence of coumarin glycosides.

The alkaloidal content of the extracts was determined using Wagner's, tannic acid, and Dragendorff's reagents, as given by [59].

Tannins in A. brevifolia extracts were detected using the protocol given by [59]; ferric chloride and gelatin solution were used for this purpose.

Saponin content in A. brevifolia extracts was evaluated via foam formation with or without olive oil, as previously described by [60].
Phytochemical analysis for the presence of terpenoids was performed using the method given by [61], which uses chloroform and sulfuric acid to precipitate reddish-brown terpenoids.

Antimicrobial Evaluation: Preliminary Resistance Profiling of Antibiotics
Initially, antibiotics were tested against the clinical isolates using the disc diffusion method. Stock solutions (4 mg/mL) of the antibiotics (cefixime, ciprofloxacin, doxycycline, lincomycin, and clarithromycin) were prepared in DMSO. Agar medium was poured onto plates, and sterile discs loaded with 5 µL of antibiotic were placed on the plates and incubated for 24 h at 37 °C. The zone of inhibition (ZOI) around each disc was measured using a vernier caliper. The assay was carried out in triplicate. The antibiotic with little or no ZOI was selected for further studies. Minimum inhibitory concentrations [21] of the antibiotics were determined using the micro broth dilution protocol reported by [62] with little modification.

RP-HPLC Analysis
LOD and LOQ Determination
The LOD and LOQ for the HPLC analysis were determined. LOD stands for the limit of detection, which is the lowest concentration of a sample that can be detected by HPLC, while LOQ stands for the limit of quantification, which is the minimum concentration of a sample that can be quantified using HPLC. These two parameters are determined using the following formulae:

LOD = 3.3 × SD/S and LOQ = 10 × SD/S,

where SD = standard deviation of the regression and S = slope of the calibration curve.

Analysis of Polyphenols
To identify and measure the polyphenols present in A. brevifolia crude extracts, RP-HPLC was utilized according to the standard protocol with minor modifications [20,39]. A Zorbax C8 analytical column (5 µm; 4.6 × 250 mm) connected to a diode array detector (DAD) was used with an HPLC system (Shimadzu; Kyoto, Japan). A binary gradient system with mobile phase A (methanol:water:acetic acid:acetonitrile in 10:85:1:5) and mobile phase B (acetonitrile:methanol:acetic acid in 40:60:1) was used to accomplish the polyphenol detection. The column was injected with 50 µL of sample solution prepared in methanol, and the flow rate was set at 1.2 mL/min. The gradient proportion of mobile phase B was changed from 0% to 75% over the first 0-30 min, from 75% to 100% over the next 30-31 min, held at 100% from 31 to 35 min and, lastly, returned to 0% over the final 36 to 40 min. The column was reconditioned before injecting a new sample. The concentration of each standard was 50 µg/mL in methanol, while the extracts were prepared at 10 mg/mL in methanol. The quantification of polyphenols was performed by comparing the UV-Vis spectra and retention times of the chromatographic peaks to the reference standards; polyphenols were identified at 257 nm for vanillic acid and rutin, 279 nm for apocynin, coumaric acid, catechin, syringic acid, and gallic acid, 325 nm for apigenin, caffeic acid, ferulic acid, gentisic acid, and luteolin, and 368 nm for quercetin and kaempferol. The results were quantified in terms of µg/mg extract (µg/mgE).

Antibacterial Assay
The antibacterial potential of A. brevifolia extracts was determined using the disc diffusion method [62], as described above.
A bacterial culture at a seeding density of 1 × 10⁶ CFU/mL was used to make bacterial spreads on nutrient agar plates. About 5 µL of each test sample (cefixime/ciprofloxacin, 20 µg/disc; extracts, 100 µg/disc) was poured onto sterile filter discs, which were then placed on a nutrient agar plate and incubated (24 h at 37 °C). Ciprofloxacin was used as a positive control, whereas DMSO served as a negative control. After 24 h, the ZOI was measured using a vernier caliper. The assay was run in triplicate.

Next, samples that showed a ZOI ≥ 12 mm were further tested to determine the MIC using the micro broth dilution method [62]. A bacterial inoculum was prepared by adjusting the seeding density to 5 × 10⁴ CFU/mL. Three-fold serial dilutions of the extracts (100, 33.3, 11.1, and 3.34 µg/mL) and antibiotics (10, 3.33, 1.11, and 0.334 µg/mL) were prepared in nutrient broth. About 5 µL of the test sample and 195 µL of inoculum were mixed in each well of a 96-well plate. The plate was then incubated at 37 °C for 24 h, and the absorbance was measured at 600 nm after 30 min (0 h reading) and after 24 h of incubation in a microplate reader (BioTek; Shoreline, USA).

The checkerboard method was used to determine the potential synergistic interaction between the antibiotic and A. brevifolia extracts [63]. Two-fold serial dilutions of the samples were prepared such that the antibiotic was diluted vertically while the extracts were diluted horizontally in a 96-well plate. An aliquot of 5 µL of sample (2.5 µL of extract and 2.5 µL of antibiotic) was poured into each well, followed by 195 µL of inoculum (density 4 × 10⁴ colony-forming units (CFU)/mL). The plates were then incubated at 37 °C for 24 h, absorbance was taken at 0 h and after 24 h, and FICI values were determined using the following calculation:

FICI = MICI A/B / MIC A + MICI B/A / MIC B,

where MICI A/B = MIC of compound A in combination with compound B, MIC A = MIC of compound A, MICI B/A = MIC of compound B in combination with compound A, and MIC B = MIC of compound B. The interaction between antibiotic and extract was considered "total synergism" or "partial synergism" at FICI ≤ 0.5 and 0.5 < FICI ≤ 0.75, respectively. If the FICI was between 0.75 and 1 or between 1 and 4, the interaction was termed "indifference" or "no effect", respectively. If the FICI value was more than 4, it was "antagonism" [63].

Time-kill kinetics was performed using the protocol described previously [3,64] with a few modifications. Resistant bacterial strains were grown to the mid-logarithmic phase. Bacterial cells were diluted to 10⁴ CFU/mL. This bacterial suspension was then incubated with MIC, 2MIC, FICI, and 2FICI concentrations of the extracts alone and in combination with the selected antibiotic. Readings were taken at time intervals of 0, 3, 6, and 9 h after incubation. Results were measured using a UV spectrophotometer (Sigma Aldrich; Darmstadt, Germany) at 600 nm.
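The FICI arithmetic and the interpretation thresholds quoted above [63] can be summarised in a few lines of code. The sketch below is a minimal illustration; the MIC values fed into it are assumed for the example, not measured data.

```python
# Minimal sketch of the FICI calculation and its interpretation, following
# the checkerboard thresholds cited in the text [63]. MICs are illustrative.

def fici(mic_a_combo: float, mic_a_alone: float,
         mic_b_combo: float, mic_b_alone: float) -> float:
    """Fractional inhibitory concentration index for a two-agent combination."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret(index: float) -> str:
    if index <= 0.5:
        return "total synergism"
    if index <= 0.75:
        return "partial synergism"
    if index <= 1.0:
        return "indifference"
    if index <= 4.0:
        return "no effect"
    return "antagonism"

# Example: cefixime MIC falls from 40 to 4 µg/mL in combination, while the
# EA extract MIC falls from 66.3 to 16.6 µg/mL (assumed values)
index = fici(4, 40, 16.6, 66.3)
print(f"FICI = {index:.2f} -> {interpret(index)}")  # FICI = 0.35 -> total synergism
```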
Protein Content Estimation
Bacteria were grown to the mid-logarithmic phase, as described previously, and were treated with MIC, 2MIC, FICI, and 2FICI of extract alone and in combination with the selected antibiotic [65]. Protein content estimation of the clinical isolates was conducted using the Bradford reagent to probe the possible mechanism of bacterial growth inhibition. After incubation of the clinical isolate samples for 24 h, 5 µL of the reaction mixture was mixed with 195 µL of Bradford reagent and incubated for 5 min at room temperature with constant sonication, in triplicate. Then, the absorbance was measured at 595 nm and the protein content of the samples was calculated from the absorbance readings. Phosphate buffer was used as a diluent in this assay. The negative control, positive control, and blank were constituted by the clinical isolate inoculum, bovine serum albumin (0-50 µg/mL), and distilled water, respectively.

Hemolytic Assay
The hemolytic evaluation was performed using freshly drawn human blood, following the guidelines given by the ethical committee of Quaid-i-Azam University, Islamabad, Pakistan. Bioethical approval was given by the Ethical Committee of the university (BEC-FBS-QAU2021-261, dated 2 March 2021). Informed consent was also obtained from the volunteer before drawing blood samples. Blood was collected and centrifuged at 13,000 rpm to separate the red blood cells (RBCs). After that, the RBCs were washed thrice with normal saline and re-suspended in phosphate buffer to form a 5% solution. The RBC suspension was incubated with the samples at 37 °C for 30 min in 2 mL Eppendorf tubes. Afterward, it was centrifuged at 2000 rpm for 20 min and the supernatant was separated. Triton X-100 (0.1%) was used as the positive control, whereas phosphate buffer served as the negative control [3]. The absorbance of the supernatant was measured at 360 nm in a microplate reader (BioTek; Shoreline, USA). Hemolysis was calculated using the following formula:

%age Hemolysis = ((Absorbance of sample − Absorbance of negative) / (Absorbance of positive − Absorbance of negative)) × 100 (6)

Statistical Evaluation
Statistical analysis of the experimental results was conducted utilizing GraphPad Prism 5 (version 5.00 for Windows, San Diego, CA, USA). The presented data include mean values accompanied by either the standard error of the mean (SEM) or the standard deviations (SDs) of individual replicates. Additionally, for each experiment, the analysis software employed and the number of observations are specified.

Conclusions
In this research, all the A. brevifolia extracts that were examined exhibited noteworthy antibacterial activity. In particular, the ethyl acetate extract demonstrated significant activity against resistant strains of Staphylococcus hemolyticus and methicillin-resistant Staphylococcus aureus. Additionally, the methanol extract displayed notable growth inhibition of resistant strains of Escherichia coli and Pseudomonas aeruginosa.

Furthermore, the ethyl acetate extract exhibited complete synergism when combined with cefixime against all cefixime-resistant clinical isolates. This synergistic effect was observed to be time-dependent, with the maximum bactericidal activity occurring at 2FICI concentrations after 6 and 9 h of treatment, varying among the different bacterial strains.

Notably, our study revealed a significant reduction in the protein content of the bacterial samples, suggesting potential mechanisms of action. This reduction could be attributed to the combined effects of protein synthesis inhibition or protein degradation induced by the A. brevifolia/cefixime combination, along with the disruption of cell wall integrity caused by cefixime.
In light of these findings, our research strongly advocates further comprehensive investigations into the application of A. brevifolia extracts in synergy with cefixime as a potential strategy for combating bacterial infections that have developed resistance. These promising results warrant in-depth exploration and consideration for the development of novel treatments for drug-resistant bacterial infections.

Figure 3. Time-kill kinetics curves of Artemisia brevifolia, cefixime, and their combination against cefixime-resistant Gram-negative bacterial strains: (A) E. coli and (B) R. P. aeruginosa. The count of dead cells was monitored at 0, 3, 6, and 9 h. The color of a line indicates the treatment used in the experiment: red, untreated control; black, positive control; blue, 1X MIC of cefixime; green, 1X MIC; pink, 2X MIC; purple, 1X FICI; and yellow, 2X FICI.

Figure 4. Time-kill kinetics curves of Artemisia brevifolia, cefixime, and their combination against cefixime-resistant Gram-positive bacterial strains: (A) R. S. hemolyticus and (B) MRSA. The count of dead cells was monitored at 0, 3, 6, and 9 h. The color of a line indicates the treatment used in the experiment: red, untreated control; black, positive control; blue, 1X MIC of cefixime; green, 1X MIC; pink, 2X MIC; purple, 1X FICI; and yellow, 2X FICI.

Table 1 notes: NH: n-hexane, EA: ethyl acetate, M: methanol, Aq: aqueous extract, TFC: total flavonoid content, TPC: total phenolic content, µg QE/mgE: microgram quercetin equivalent per milligram of extract, µg GAE/mgE: microgram gallic acid equivalent per milligram of extract. Means with different superscript (a-d) letters in a column are significantly (p < 0.05) different from one another.

Table 2. Secondary metabolite screening of A. brevifolia.

Table 3. Antibacterial susceptibility testing of antibiotics from major antibiotic classes.

Table 4. RP-HPLC-DAD analysis of A. brevifolia extracts for their polyphenolic composition. Columns: Polyphenols (µg/mg of sample), NH, EA, M, Aq. Notes: NH: n-hexane, EA: ethyl acetate, M: methanol, Aq: aqueous extract, RT: retention time, λ: wavelength, µg/mg of sample: micrograms of polyphenols per milligram of sample. Means with different superscript letters in a column are significantly (p < 0.05) different from one another; -: not detected.

Table 5. Minimum inhibitory concentration of A. brevifolia extracts and cefixime.
Strategic voices of care and compassion. Describing the mad, their afflictions and situations in Amsterdam and Utrecht in the seventeenth and eighteenth centuries

Painting a picture of the lives of the early modern mad outside institutions has not yet been done in the Netherlands. However, by looking at notarial documents and admission requests, we can learn more about how the mad were cared for outside the institutions, and the impact their behaviour had on the people close to them. Investigating these sources for both Amsterdam and Utrecht in the seventeenth and eighteenth centuries has unravelled a story of community care in which families played a key role and used their options strategically. Furthermore, it has also revealed a complicated story about the way communities dealt with the behaviour of the mad, involving great personal struggles, breaking points and compassion.

prevented him from doing his work. This abuse and obstruction affected his earning abilities, causing great harm to the whole family (SAA, AN: invnr. 6473, minact. 1461, 2 May 1701). This testimony reveals the voices of the caregivers who dealt with and cared for Jannetje during her episode of madness. But it also shows that the family had drawn up the document in order both to address the problems her behaviour caused and to explain how these problems affected their situation. In doing so, they expressed their own sentiments, using words such as 'grief' and 'sadness'. These voices of caregivers can therefore help us understand both how the mad were cared for outside of the institutions, and the impact their behaviour had on the people close to them.

In the historiography of madness, families and social networks have come to play an increasingly important role. Historians of psychiatry have consequently been searching for their voices and actions in the archives. For instance, Guarnieri (2005), Suzuki (2006) and Vijselaar (2005) have shown that the nineteenth-century family was closely involved when it came to taking care of their mentally-disturbed family members. The importance of family care in the early modern period has also been described, in Suzuki's (1998) research on household and extramural care in England in the eighteenth century and, more recently, in Walker Mellyn's (2014) work on mad Tuscans and their families. Apart from these two studies, there are few sources, so we have not yet been able to grasp the true significance and magnitude of informal care in this period. Nonetheless, studies of early modern social networks have broadened our view on the functioning of urban communities at that time, and have drawn our attention to a much wider variety of informal social care (e.g. Bartlett and Wright, 1999; Horden and Smith, 1998). In the Netherlands, no specific historical research has been conducted to examine the impact of social care in dealing with madness during this period. For a long time it was believed that there were no archival sources that could shed light on this rather private phenomenon. However, by consulting an underused source, namely notarial documents, and combining these with admission requests, I will show how the voices of these informal caregivers and a structure of community care can be uncovered for the Dutch cities of Amsterdam and Utrecht.
In fact, in 1992 the historian Herman Roodenburg had already called upon other Dutch historians to use the notarial archives for their historical research because of their potential to provide insight into the personal lives of individuals (Roodenburg, 1992). So far, not many medical historians have used these archives in their research, mainly because most of them are not indexed in a way that makes them searchable by theme or keyword, and they are too extensive to investigate without a proper index. For Amsterdam and Utrecht, however, the notarial archives have been made partially accessible. Examination of notarial testimonies, healing contracts, procurations and testaments made by family, friends and neighbours enables us to uncover the daily reality of living together and caring for a person with mental problems. This form of private care within the household was described by Edward Shorter in his 1997 book, A History of Psychiatry, as: 'Home care in the world we have lost was a horror story' (Shorter, 1997: 2). He also emphasized the wide array of horrible measures and actions used by families to restrain and handle their mentally-disturbed family members in the early modern period, varying from confinement in barns and attics to tying them down and physical abuse. However, Shorter's very one-dimensional portrayal of family care can be contradicted and nuanced by the sources from Amsterdam and Utrecht, which recount stories of the personal struggles of the informal caregivers and much more complicated paths of care and compassion.

This article will focus on caregivers, their coping skills, emotional involvement and breaking points. Dealing not only with their practical motivations, but also with their sentiments, offers a unique collaboration between the fields of medical history and the history of emotions. Since the turn of the millennium, medical historians have become more interested in this particular field (e.g. Bound Alberti, 2006; Carrera, 2013). This is not surprising considering the fact that dealing with sickness, and especially madness, was, and still is, a highly emotional process for both the afflicted and their caregivers, which makes combining the fields of the history of medicine and emotions a promising venture.

In striving to provide new insights into the importance of social care for the mad in the early modern cities of Amsterdam and Utrecht, this article will first establish whose voices can be extracted. The use of a variety of sources will uncover the most important players involved and a network of community care. Moreover, the reasons people had for documenting their voices will become clear. Furthermore, by looking into what these voices are saying, we will gain insight into the reality of daily life while dealing with a mad person, and the challenges this brought within an early modern society. Finally, I will elaborate on the emotional expressions in the sources and speculate on the meaning of such explicit statements. Reflecting on these themes will give some insight into how Dutch urban society functioned and how madness was dealt with in an urban social community.

Whose voices?
The people who took the initiative for having notarial documents or admission requests drawn up, and those who testified in them, are the dominant voices in the sources. Almost all these documents were made at the request of family, neighbours or friends of the mentally-disturbed person.
7 These were also the most common groups to be asked to testify, together with employees or tenants who lived in the same house as the afflicted. These groups were, in most cases, the people directly affected by the behaviour, and they thus functioned as initiators, witnesses and actors in the sources. Parents and spouses were mostly involved in the day-to-day care of the mad, and their voices are therefore the most prominent. They were also the ones who most often acted as initiators for having a document drawn up, thus revealing stories about their private lives, struggles and home care. It has proved difficult to obtain information about the social and economic position of the people initiating and testifying in the sources. Because both notarial documents and admission requests reveal only limited information, we have to be creative, 'read between the lines' and look for the details these sources do impart. For example, we can gauge the social and economic position of individual men and women by looking at information about their employment, their places of residence in the city, the amounts mentioned in wills, medical contracts and admission covenants, the labelling of documents as Pro Deo or (in some cases) specific statements about someone's financial status. 8 Although not all indicators are present in the sources, it is still possible to make some general comments. It is notable that notarial documents and admission requests were predominantly drawn up by the lower and middle classes of the urban population. The upper strata of urban society were represented in the notarial documents, but practically absent from the admission requests, because their financial means made it possible to deal with the issues of madness in a more private way. 9 Also, the fact that drawing up a notarial document or official request was rather inexpensive explains why this medium was available to a large group in society. 10 Having determined whose voices these sources reveal and that they represented a large part of urban society, we can conclude that madness was mostly dealt with in a family setting. This can be explained because the family was the primary social unit for people to fall back on in times of need; also, as economic, social and emotional units, families were an essential part of the social composition of the early modern city (e.g. Kooijmans, 1999; Schmidt, 2001; Spierenburg, 1997). Their important caretaking task becomes even clearer in the extensive corpus of notarial wills, which include specific stipulations to make sure that family members afflicted with madness or mental disability would remain cared for. Jannetje Jacobsdr's testament from 1603 is a case in point. She appointed two testamentary executors who had to ensure that all her beneficiaries, her children, received 92 guilders each and that all her other possessions would be sold and invested in annuities in order to support her mentally-retarded son, Jacob, until his death (SAA, AN: invnr. 4, minact. 479, 4 Sep. 1603). Instead of bequeathing their inheritance, some families also arranged the future care in detail. Johanna Jacoba Ploos van Amstel, for example, made sure that her inheritance was used to provide her mad sister, Isabella, with the proper care for a lady of her standing. She also explicitly stated that her sister could not be confined in one of the houses commonly used in such cases, which implies that she wanted to make sure that her sister was cared for in a private setting (UA, NU: invnr.
U242a003, akte nr. 75, 2 Mar. 1764). Heijndrick Evertsz even arranged that his 'innocent' daughter, Geertgen Heijndricks, would be cared for by her cousin for a certain sum, which they still had to agree on (SAA, AN: invnr. 16, minact. 101, 28 Mar. 1620). However, families not only expressed their wishes in notarial wills, but also in other notarial documents and admission requests, in which they took the initiative to have the person admitted or to appoint a guardian, and explained why this should be done. Friends and neighbours were also frequently present in these sources as initiators and witnesses, which confirms the existence of a larger social network within these cities. This social network functioned both as a system of social support and as one of social control. These groups not only assisted in a variety of situations but also dictated certain social and cultural conventions, thereby deciding what types of behaviour were and were not acceptable. This dual role is present in the testimony made on the initiative of the neighbours of the Amsterdam surgeon Joannes Rentmeester in 1704. They declared that since the preceding winter Joannes had suffered from a 'sad accident' in his brain, which affected his intellect and had caused him to lose his mind and the ability to conduct his own affairs. After the accident, he caused major disturbance to his neighbours at night because he raged and yelled like a mad person, keeping the whole neighbourhood awake. He was also a threat to himself and had attempted suicide. It became apparent that he had become completely incapable of taking care of himself when his neighbours found him lying in his bed covered in his own urine and faeces on multiple occasions and had to help him clean up and put him in dry clothes (SAA, AN: invnr. 7207, minact. 997, 17 June 1704). This example shows the neighbours not only as caregivers of Joannes but also as the ones who wanted to stop the nightly disturbances he caused. The prominent presence of neighbours in these sources can, to a large extent, be explained by their personal interest, whether practical or emotional, in handling the situation, and the same applies to family members and friends. Employees or tenants living with or in the same house as a mad person were closely involved with the situation, and therefore regularly acted as witnesses and shared their voices in the sources. Unlike family members, neighbours and friends, this group never took the initiative to have a document drawn up, probably because they were dependent on the family. However, the importance of their voices becomes clear when looking at two testimonies about Rica de Souza Britto, one given by a tenant and two household employees, and one given by two wet-nurses. These two testimonies about Rica's behaviour in the house were made at the request of Rica's husband, the merchant Isaac Rodriguez. All witnesses stated they were aware of Rica's situation because they lived or worked in her house. Rica was afflicted with a violent form of 'evil madness' and her anger was mainly directed at her own husband and children. She had, for instance, sworn to slit her husband's throat, hit him over the head with a stick, threatened to burn down the house and had delusions about the devil, who had taken possession of her body. The real breaking point came in the middle of the night when Rica came out of her bedroom and banged on the door of her husband's room, screaming that she wanted to kill him.
When he did not open his door, she left and in her rage went to the room shared by the maid, the wet nurse and Rica's children, and threatened to do the same to them if they did not open their door. When the maid opened the door, Rica called out to her: 'give me a butcher's knife, I will knock down his door and slit his throat with it' (SAA, AN: invnr. 6053, minact. 601, 15 July 1710). After the maid gave her the knife, she again tried to enter her husband's chamber, but when this proved unsuccessful she eventually calmed down. Rica's condition thus resulted in a very unsafe situation, in which everyone in the household feared for their lives. The staff and the tenant therefore stated that it was no longer possible to live under the same roof as Rica. The witnesses emphasized this even more by stating that precautions needed to be taken in order to prevent great disaster (SAA, AN: invnr. 6053, minact. 565, 14 July 1710 and minact. 601, 15 July 1710). This example shows us how these declarations of employees and tenants assisted the family in changing an unsustainable situation. In this case, Rica's husband took the initiative but needed the testimonies in order to undertake action. With both the notarial documents and the admission requests, it is important to bear in mind that they were usually drawn up for a specific purpose. In an admission request, this is made explicit at the end of the document, when the applicant(s) requests an admission into an institution. 11 In the notarial healing contracts, procurations and testaments, the intentions were usually made explicit, but in the testimonies the goals of the initiator(s) were not always clear, so in some cases we can only speculate on their meaning. Because notarial documents had a certain legal authority, and because these testimonies were sometimes added to the admission requests in order to elaborate on someone's situation and emphasize the need for an admission, we can at least establish that all these documents had significant meaning (Gehlen, 1987: 13). An analysis of over 250 of these notarial documents has shown that there were four main goals for drawing them up. The main reason was to send the mad person to an institution. This is made explicit in multiple testimonies by emphasizing the significant chance of escalation if someone was not locked up. A second reason for drawing up a document was to limit the legal power of a mad family member and gain control over decisions about this person, their money and their goods. Thirdly, and this applies in particular to notarial wills, they were drawn up to arrange and secure the future care of a mad family member. Lastly, the documents were used to make the distinct claim that the behaviour of this person was a result of madness, in order to prevent legal problems and reputation damage. To sum up, the voices that can be distinguished were those of people who lived in close proximity to the afflicted people about whom they spoke. Families, neighbours, friends, employees and tenants were therefore all part of a community of care in which they played their own part. In comparison with other groups, families are often overrepresented in these sources, which indicates that they were the main caregivers and were held responsible for the care of a mad family member during their life and even after their death.
But the rest of the community of caregivers also addressed their behaviour and took the proper measures in order to deal with this private yet social problem. In order to develop a clear understanding of the types of behaviour they addressed, let us now focus on what these voices said.

What did the voices say?

Some of the examples discussed so far have given us a glimpse of what the voices of caregivers were saying. Looking into their stories will help us to reconstruct how the mad were dealt with and viewed in the early modern cities of the Netherlands. Because people had different goals in mind when they drew up or testified in notarial documents or admission requests, we must take these into consideration when analysing the stories. In many of the documents we can discern an escalated situation in which madness became a serious social problem and the caregivers were no longer able to handle it within a domestic setting. Since the documents were drawn up for a specific purpose, families in which home care was successful did not appear in these sources, or only in a very limited way. Consequently, the voices usually share stories of precarious situations and breaking points. To explain how this happened, historian of psychiatry Joost Vijselaar has looked into what type of behaviour instigated the process of institutionalization in the nineteenth century (Vijselaar, 2005; 2010: 120-2). He identified four main reasons: causing social disturbance, being a danger to others, being a danger to oneself, and being in need of care and treatment (Vijselaar, 2005: 282). Yet a breaking point only occurred when the balance between the behaviour of the mad and the available coping skills of the caregivers was disturbed: either the condition of the mad worsened or the coping strategies of the family and the community of care deteriorated. Within the domestic situation the mentally disturbed were usually kept at home, and the family tried to fit them into their 'normal' daily routine for as long as possible. It was the cheapest, socially accepted and culturally expected option. Therefore, most of the documents tell us something about the duration of the condition and its development. This is shown in the case of Jan van Bemmel. In a notarial testimony made at the request of his brother-in-law, 14 people - neighbours, friends, employees and eyewitnesses - testified and declared that Jan was completely out of his mind. Doctor Anthonij van Thiel even declared him untreatable, and the other witnesses stated that Jan caused great disturbance at night when he screamed, raged and threw bricks from his window onto the street, keeping everybody in his neighbourhood awake. While living with his sister and her husband, Jan had also thrown a brick at his sister's head, and she went into early labour as a result. He also suffered from delusions and was extremely aggressive both at home and on the streets. Everybody in his household feared for their lives, and nobody in his proximity could relax (SAA, AN: invnr. 6016B, minact. 849, 4 Mar. 1702). This story reveals a serious escalation that had unfolded over a year and a half. An analysis of the sources makes clear that the point at which such a situation led to an escalation varied from a few days to a few years. This variation can be explained by Vijselaar's model and probably depended on the delicate balance between the type of behaviour and the caregivers' coping skills.
An interesting point in the example of Jan van Bemmel is that his family did try to provide him with medical treatment from Anthonij van Thiel, who was a doctor medicinae. This shows that medical treatment was certainly an option for people in early modern cities; as well as doctors, the sources mention surgeons and even self-proclaimed healers of the mad. 12 For instance, in Amsterdam there are several treatment contracts between Doctor Joseph Celle and relatives of mad people for whom he would provide care (SAA, AN: invnr. 10700, minact. 210, 23 Dec. 1739; invnr. 10702, minact. 228, 22 June 1740; minact. 314, 19 Aug. 1740; minact. 372, 19 Sep. 1740 and invnr. 10703, minact. 463, 17 Nov. 1740). A couple of points stand out in these contracts. They always state that Doctor Celle would treat, care for and keep the patient at his home during the period of treatment. This was common practice in this period, and it made the doctor responsible not only for providing medical care and cure but also for safeguarding the patient. An analysis of the contracts also shows that the payments varied: from 5 guilders paid on a weekly basis to 300 guilders for a whole treatment. If a set amount was agreed upon, the first half was paid on entering into the contract and the second half after the patient was cured. This shows that Celle's clientele consisted of the financially well-off classes, because the lower classes would not have been able to afford such amounts. The most striking part of these contracts was the agreement that if the patient had been cured but suffered a relapse at any point, Doctor Celle was obliged to treat the patient again without receiving any additional payment. These contracts show that madness was also seen as a medical problem in which doctors and surgeons had a role as healers and experts. Moreover, the fact that families paid for professional help nuances Shorter's vision of home care as a story of abuse and neglect, because it shows care for and investment in someone's well-being. Many of the previous examples have revealed important aspects of dealing with the mad outside the walls of the institutions. Several of the cases, for instance, have shown that causing a disturbance within the neighbourhood received particular emphasis in the sources; screaming and yelling during the day, but especially at night, was considered a major issue. These disturbances were often accompanied by fits of rage, which could evolve into extremely aggressive and dangerous situations where people had to flee their own houses in order to stay safe. Typically, the sources reveal people dreading a disaster if the situation was not dealt with. Examples can be found in the testimony about Hermannus van den Bosch. In 1729, at the request of his wife, several neighbours and tenants testified that her husband had been out of his senses for quite a while. This manifested itself in severe aggression, and he had even threatened to beat his wife's brains out with a pair of pliers (UA, NU: invnr. U129a009, akte nr. 142, 1 April 1729). However, they also declared that he was regularly mocked while walking on the streets, and even followed and harassed by bystanders. They ended their testimony by stating that his frenzied state of mind caused great danger to both his wife and home because of the likelihood of him starting a fire (UA, NU: invnr. U129a009, akte nr. 142, 1 April 1729).
This testimony and the concerns expressed give us a better understanding of how these early modern urban communities functioned. On the one hand, the neighbours and tenants emphasized that Hermannus was mocked and that the public setting in which this happened would have been extremely harmful, not only for his own reputation, but also for that of his wife, family and the neighbourhood (van der Heijden, 2014: 53). On the other hand, the fear of causing a fire was a much more practical issue. Because of the layout of the cities and the materials used to build houses during this period, what started as a small fire could easily spread and burn down part of the neighbourhood, causing a great financial disaster. So fear that the afflicted would damage the social and financial status of families and the social network could prompt them to reveal details of their lives, making themselves vulnerable to criticism and stigma. The sources from Amsterdam and Utrecht show that being a danger to oneself was also a trigger for the community of caregivers to undertake action. This could happen when, in a fit of frenzy, the mad threw away their life savings by buying unnecessary objects, houses and animals for ridiculous sums of money, thereby endangering not only their own but especially their families' financial position. 13 Action was also undertaken in cases where people had harmed themselves, as in the admission request made by the mother of Gerrit Weggelte. In 1768 she declared that, to her utmost grief and sadness, her only child had 'become affected in his senses' six months before. She had hoped that his miserable condition would improve, but it only deteriorated; he became melancholic and made several suicide attempts. After trying to slit his own throat and to drown himself in a rain barrel, he was now confined to his bed, bound by his hands and feet. His mother stated that she did this because she feared for his life if nothing was done (SAA, AG: invnr. 955, admission request for Gerrit Weggelte, August 1768). Dealing with such suicide threats must have been a difficult dilemma for families, because suicide was still taboo, although in the eighteenth century the disapproval and legal prosecution of suicide caused by madness relaxed (e.g. Bosman, 2004; MacDonald, 1986: 83-7). The voices in the sources provide insight into daily life with a mad person and reveal stories of escalation over a longer period of time. When taking into consideration the fear people had for their own lives, property, livelihood and the life of the afflicted, it becomes clear that they undertook action in order to try to save and restore their own lives, as well as the lives of the mad. To obtain help or regain power over the situation, those who took action used several arguments to persuade others of their needs, for instance by underlining the severity of the situation and the need for action to prevent great disasters. In addition to this type of practical reasoning, they also expressed emotions to underline the grievousness of the circumstances.

Emotions expressed

Since the sources reveal the intimate reality of dealing with madness, it is not surprising that they also reveal emotions. The authors of the sources describe feelings of fear, shame and compassion for their own situation and the situation of the person afflicted.
Analysing these emotional expressions is difficult because the terminology for emotions, emotional etiquettes, explanations for emotions and thoughts about emotions in the pre-modern period differed from our current interpretation. For instance, Steven Mullaney, a professor of English, observed that the word 'emotion' did not become a term for feeling until about 1660. The words that were used instead were 'passion' and 'affection', although these could in fact have multiple meanings depending on the context in which they were used (Kern Paster and Rowe, 2004: 2). Furthermore, ideas about emotions were totally different and highly influenced by the humoral theory, which tied someone's humoral constitution to certain emotional and personal characteristics. In addition to the problem of terminology, working with sources in their textual form poses a challenge because these words are not a direct reflection of emotions, but representations of emotions. As historian Jean Starobinski puts it, 'the history of emotions, then, cannot be anything other than the history of those words in which the emotion is expressed' (Matt, 2014: 43). Nevertheless, these expressed emotions do have a certain meaning and reflect social and cultural sentiments. Expressions of fear and shame are seen most frequently in these sources and can be relatively easily explained. Fear, as we have seen above, was usually expressed in recounting extreme levels of violence used by a mad person, and in speculating on the person causing a possible disaster, such as a fire, or trying to take their own life. The more subtle and indirect expression of shame can be explained within the context of reputational damage. By documenting and labelling the behaviour of these people as the result of madness, the community of care tried to limit this damage. By having a document drawn up, the family or social network in a way distanced themselves from the behaviour of the mad. By explicitly labelling someone as afflicted with a brain illness, his or her behaviour could be seen as being caused by the illness, guaranteeing that the family, the social network or the person afflicted could not be held accountable. In early modern society, a person's greatest asset was their reputation, so people did everything in their power to be regarded as honourable citizens and avoid any form of public shame (van de Pol, 1996: 67-84). Since the mad lost their sense of cultural and social conventions, their socially unacceptable actions could damage their reputation. Causing hindrance and commotion in the neighbourhood, and especially displaying a naked body or engaging in promiscuous behaviour during fits of madness, could evoke this shame and loss of reputation that affected everyone close to the afflicted person (van der Heijden, 2014: 67-8). Another, more surprising, emotion expressed in these sources is compassion. Even though it occurs less frequently than fear or shame, the use of terms indicating compassion increased during the eighteenth century. Among the specific emotionally charged words used in the sources, the most important were: sadness, unfortunate, wretched, commiserate, grief and sorrow. For this study, over 750 notarial testimonies and admission requests were examined, and only a small percentage (about 14%) contain these expressions of compassion.
Families and the social network directly incorporated these expressions of compassion in the testimonies when they made statements about the 'great sadness' they felt for the person afflicted and their situation. Identifying this kind of compassion resonates with the ideas of philosopher Martha Nussbaum, who studied compassion extensively in her 2001 book, concluding that compassion is only present when a situation is thought of as severe, as one that has befallen someone, and when the person in question is deserving. Therefore, in order to deserve compassion, one needed to be innocent of one's fate (Nussbaum, 2001: 304-27). This tells us something about the thoughts people had about the origin of madness. By showing compassion to those afflicted with madness, people recognized that it was a condition that appeared without any guilt on the part of the afflicted person. Consequently, we can conclude that madness was thought of, in these cases, as a sickness and not as a sort of punishment for sinful or immoral behaviour. However, testimonies were frequently made without any emotional expression and may even seem quite formal. The difference between similar documents, one without and one with emotional expressions, is shown in two admission requests by parents of mentally-disturbed sons. In 1772, Jan Schouten declared that his son Jan Schouten junior had gone out of his mind several weeks before and that this state of mind was accompanied by frenzy and even malice. The situation had escalated to such an extent that, in order to prevent disaster, he needed to be guarded by at least two men. His father therefore concluded that it was highly necessary to incarcerate his son and asked permission to confine him in the asylum (UA, AK: invnr. 2635-2, admission request for Jan Schouten, March 1772). This permission was granted and his son was confined in the asylum of Utrecht. In the request of Arij Kleij and his wife Annetje van der Maen, drawn up in 1761, they started by stating that, 'with intense grievance of their soul, it had pleased the almighty God to deprive their youngest son Jan, who lived with them, of his senses'. Jan Kleij had been in this state for over a year and, because of their own old age and because there was no one else to guard and take care of Jan, apart from a daughter who also lived with them, they requested authorization to confine their son until God had relieved him of his ailment (UA, AK: invnr. 2635-1, admission request for Jan Kleij, 23 May 1761). Their request was approved, and Jan was confined in the workhouse of Gouda in 1761 but was transferred to the Utrecht asylum in 1767, where he became a patient because of his persistent madness. These two cases differ markedly in the way the parents talk about the conditions of their sons. Whereas Jan Schouten is quite straightforward in his testimony, Arij and Annetje express grief, make a reference to God's will and elaborate on their own situation. Looking at both cases raises questions about why we see the expression of emotion in one case but not in the other, especially because both families were given the same authority to confine their sons. It is remarkable that compassion was expressed in 92 eighteenth-century notarial documents and admission requests, compared with only 13 seventeenth-century cases (together, these 105 cases make up the roughly 14% of the sources noted above). One possible explanation for the increase could be that there was more source material, especially in Amsterdam, from the eighteenth century, and this could be distorting the image.
14 But the increase could also be seen in the context of changing mentalities and customs in the eighteenth century. With the birth of Enlightenment culture and the secularization of the world, the way people thought about social problems in their society changed and brought different types of rhetoric and reasoning into fashion. This fits with Nussbaum's idea of compassion being shown to indicate that madness was something that could happen to anybody. It is interesting to note that the same reasoning and emotional expression used in the notarial documents and admission requests for mad people is also present in the admission requests to get people admitted to houses of correction (e.g. Spierenburg, 1991: 238-48; SAA, Archieven van Schout en Schepenen: invnr. 1259-1285). 15 This could indicate that this formulation of emotion was a type of rhetorical strategy that had come into fashion in this period. Finally, the reason for the increase in emotional expressions could also be related to the increase in the number of government institutions available for the mentally disturbed. This availability, combined with the growing role of urban government as a problem-solver for these types of social issues, could have influenced people's willingness to request help. It thus appears that, by framing and formulating the testimonies in a certain way, emotional expressions were employed as a rhetorical strategy. Families and the social network may have used these emotions in order to obtain outside help and gain understanding for their situation. They benefited from the effect this had on the social discourse, which dictated to what extent the family and the mad were held accountable for their actions, were entitled to receive help, and to what extent other people felt compassion for them.

Conclusion

An analysis of the notarial documents and admission requests from Amsterdam and Utrecht has shown that the horror story of home care portrayed by Shorter does not correspond with the stories told in the sources. They reveal a much more complicated story of how early modern families dealt with madness, involving great personal struggles, breaking points and also compassion. Amsterdam and Utrecht had a system of community care in which the family was the primary caregiver, usually dealing privately with the behaviour of the afflicted family member for an extended period of time. Families also tried to improve the situation by providing (medical) care to the best of their abilities and finances. But these families certainly did not stand alone in their caring activities: neighbours, friends, tenants and household staff acted as a network of social support and control in order to deal with the social problem of madness. Afflicted individuals who displayed extreme levels of violence, caused major disturbance, or were a threat to themselves or their environment created social, financial and reputational problems for all parties involved, and these problems were therefore addressed collaboratively. The sources also reveal expressions of fear, shame and compassion which not only enlighten us about the impact of madness on the lives of the people who had to deal with it, but also tell us something about the arguments used to obtain outside help and gain understanding for the situation.
Pleading for compassion became a more popular rhetorical strategy in the course of the eighteenth century, possibly indicating a change in mentality towards the mad: as victims of a lamentable religious or medical fate, they and their families were entitled to compassion and support from (wider) society. In the historiography, medical historians of madness have been reluctant to explore these emotional expressions, although several authors have attempted to combine the fields of the history of medicine and emotion (e.g. Hodgkin, 2007; Porter, 1987). For the pre-modern period in particular, this has proved to be a problematic endeavour, mainly because of difficulties in interpreting emotions and the limitations of the sources available to us. However, as this article has attempted to show, expressed emotions can be useful indicators of social and cultural sentiments, and therefore form an important part of the story of caring for and dealing with madness.

Funding

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Notes

1. Abbreviations used in archive citations: those for names of archives are given in reference list (a); also, invnr. = inventory number; minact. = minuutacten.
2. Using the term madness in this article has been a deliberate choice. First, it was the term used daily by both medical men and common folk through the ages. Using a term such as mental illness for the seventeenth and eighteenth centuries therefore seems anachronistic. Also, I agree with the line of reasoning for using the term madness that Andrew Scull explicates in his book Madness in Civilization (Scull, 2015: 11-15).
3. For a long time the focus has been mainly on institutions and the medical profession, especially for the early modern period, because of the sources available to us. This, however, created a Whiggish type of historiography, with little or no focus on the care given outside institutions or the voices of the afflicted themselves. However, from the 1980s onwards, increasing calls for change, for example by Roy Porter and Michael MacDonald, instigated research into these unexplored fields of the history of madness.
4. Notarial testimonies were declarations made by people about a mad person; healing contracts were made between a doctor and the family to cure the mad; procurations were made to appoint a guardian with power to make decisions about the mad; in testaments, arrangements were made (mostly by parents) for the care of a mad child after their deaths.
5. The notarial archive of Amsterdam has been made accessible through an intricate card index system for the period 1701-1710, which allows the user to search for keywords. It was, however, also possible to collect some documents from the seventeenth and second half of the eighteenth centuries. The notarial archive of Utrecht has been made accessible online for the period 1560-1811, also allowing us to search this digital archive through keywords; see: http://www.hetutrechtsarchief.nl/collectie/archiefbank/indexen/akten
6. This does not mean I will argue that home care was never a 'horror story', but I want to stress that this was not the prominent story told in these sources, which reveal, most of all, a story of personal struggle.
7. Family and friends are terms with a different connotation in the early modern period and at the present time; historical debate continues about what the terms meant in the early modern period.
Naomi Tadmor's book Family and Friends in Eighteenth-century England (2001) deals with these issues of interpretation of the concepts of household, family and kinship, and shows the importance of both categories for someone's social, economic and political networks. I will use the terms as they are used in the sources.
8. Labelling a document as Pro Deo meant that the person(s) requesting and the person in need of admission into an institution had no money or means to pay for the costs of confinement themselves. Therefore, the document also requested that the city government either pay for the care of the afflicted, or order the care to be paid for by the institution or a secondary organization such as a diaconate or an Armenkamer (literally a poor chamber, an institution which gave alms to the poor who could not make use of the charities provided by the different church organizations).
9. We encounter the upper class more sporadically in the notarial archives, identifying them by their names, the enormous amounts of money they offer for medical treatment, private care or confinement, and certain rules or restrictions they apply to care.
10. I have tried to find out the cost of drawing up a notarial document or admission request by looking at the administration of a notary and at the government costs for the seals needed to make the documents legal. This cost has not been easy to establish, and I am only able to make some estimates. For example, the cost of drawing up a notarial will was about 5 guilders, and of a notarial testimony usually 16 stivers, though it was higher for a large testimony with many witnesses. An admission request, on the other hand, cost between 5 and 9 stivers, depending on the number of supplicants. Economic historian Jan Luiten van Zanden (1991: 137) established in his book Arbeid tijdens het handelskapitalisme that the average wage of a day labourer in the period 1644-1780 fluctuated between 10 and 12 stivers, while de Vries and van der Woude (1995: 202) estimate the average day wage at 12-14 stivers. Both estimates indicate that the prices mentioned above would have been manageable for a large group in society.
11. These admission requests were addressed to the burgomasters and town council, court officials (Schout and Schepenen) or to the board of a specific institution. The institutions that admitted people with mental disabilities varied for both cities, but most requests were made for admission into the city asylum (Dolhuis) or private confinement in a house of correction (Beterhuis).
12. These private healers are extremely difficult to track down, and the sources used for this article only mention this option very occasionally. In my PhD thesis, entitled 'Madness and the city', on which I am currently working at the University of Amsterdam, I will try to elaborate on this fascinating professional private care system.
13. In these cases, family members usually asked permission to place the afflicted under wardship and to appoint a custodian to administer this person, their money and goods.
14. In Amsterdam, the notarial documents are accessible for 1700-10 and only a couple of documents are available for the seventeenth and later eighteenth centuries. The admission requests for both Amsterdam and Utrecht were also more elaborate in the eighteenth century.
15. These institutions provided confinement for both the mentally ill and people who had strayed from the right path, squandered the family fortune, given in to sins of the flesh or abused alcohol.
In their requests for admission, the families of the men and women behaving in this unacceptable way mostly reflected on the situation and stated that, to their utmost grief, the behaviour was uncontrollable and the only solution was confinement.
Bogoliubov sound speed in periodically modulated Bose-Einstein condensates

We study the Bogoliubov excitations of a Bose-condensed gas in an optical lattice. Of primary interest is the long wavelength phonon dispersion for both current-free and current-carrying condensates. We obtain the dispersion relation by carrying out a systematic expansion of the Bogoliubov equations in powers of the phonon wave vector. Our result for the current-carrying case agrees with the one recently obtained by means of a hydrodynamic theory.

I. INTRODUCTION

The possibility of creating optical lattices in trapped Bose-condensed gases has provided an opportunity to study superfluids in novel situations. The presence of the lattice leads to a variety of solid state effects associated with the coherent motion of the atoms in a periodic potential. For example, the oscillation frequency of the centre of mass motion of the condensate is reduced [1] as a result of the enhanced effective mass of the atoms tunnelling between potential wells. Furthermore, when subjected to a uniform force, as provided by gravity [2] or alternatively by accelerating the optical lattice itself [3], Bloch oscillations of the condensate have been observed. Reducing the amplitude of the lattice potential leads to a breakdown of these oscillations as a result of Landau-Zener tunnelling between bands [3]. All of these observations are essentially a manifestation of the superfluidity of the Bose condensate in an optical lattice. Another aspect of equal interest is the breakdown of superfluidity as recently observed [4] in a study of the centre of mass motion of a trapped condensate moving through an optical lattice. When the amplitude of the oscillation exceeded a critical value, dissipation was seen to set in.

In this paper we study a Bose-condensed gas subjected to a uniform optical lattice in a regime where the dynamics of the condensate wave function is well described by the time-dependent Gross-Pitaevskii (GP) equation. In particular, we are concerned with small amplitude collective modes which at long wavelength are phonon-like excitations. The relevant physical parameters determining the properties of the excitation are the optical potential amplitude, $V_0$, the lattice constant, $d$, the mean density, $\bar n$, of the gas, and the magnitude of the supercurrent. The problem has been addressed theoretically in a number of papers using a variety of techniques and approximations. Berg-Sørensen and Mølmer [5] were the first to investigate phonon excitations within an optical lattice. They solved the Bogoliubov equations numerically for a one-dimensional model and established that the long wavelength excitations are phonon-like, having an energy dispersion that is linear in the wave vector of the mode. They also obtained an analytic expression for the sound speed, $s$, which is based on a combined weak potential and slowly varying approximation. These calculations were extended by Wu and Niu [6] to the case in which the condensate carries a current. This work is noteworthy for having pointed out that the modes exhibit both energetic and dynamic instabilities for sufficiently large currents. The former instability is associated with the Landau criterion for the breakdown of superfluidity, while the latter is related to the onset of dissipation as observed in [4]. The recent paper by Machholm et al. [7] explores these instabilities further. Similar results were obtained by Bronski et al.
[8] by considering a special form of the lattice potential, while Konotop and Salerno [9] used a different approach to establish that the dynamic (or modulational) instability leads to the generation of solitons. When the potential wells are sufficiently deep, the condensate is well-localized on each lattice site and a tight-binding description becomes useful. Javanainen [10] used this picture within a many-body formulation to derive the phonon dispersion throughout the Brillouin zone for a one-dimensional lattice. This calculation is in fact equivalent to one based on the Bogoliubov equations [11] or the discrete version of the time-dependent GP equation [12]. The virtue of these methods is that they provide analytical expressions for the dispersion relation, although an accurate a priori determination of the tight-binding parameters involves further numerical calculation. Smerzi et al. [12] extended the results of Javanainen [10] by deriving the phonon dispersion for a current-carrying state and found a dynamic instability that is responsible for a so-called 'superfluid-insulator' transition.

The phonon dispersion at long wavelengths was addressed in [13] by deriving an energy functional involving density and phase fluctuations which vary slowly in space. The approach is closely allied to the effective mass approximation used in solid state physics [14] and recently applied to Bose gases in optical lattices [15], and to multiple-scale analysis [9,16]. The phonon sound speed is found to be [17]

$$ s = \sqrt{\frac{\bar n}{m^*}\frac{\partial\mu}{\partial\bar n}}, $$

where $m^*$ is an effective mass and $\mu$ is the chemical potential. The precise definition of the effective mass as $(m^*)^{-1} = \hbar^{-2}\,\partial^2\bar\epsilon/\partial k^2$, where $\bar\epsilon$ is the energy per particle of the $N$-particle system, seems first to have appeared later [7,18]. This is an important point since there are other plausible candidates for the effective mass. In fact, we shall see that a somewhat different, but equivalent, definition can be given. In this regard, we note that the effective mass appearing in both the effective mass [15] and multiple-scale [9] theories is that corresponding to the bare optical potential. In other words, the effect of the interactions on this parameter is not included, and therefore the use of the dynamical equations obtained in these theories will not in general give the correct Bogoliubov sound speed.

Our purpose in this paper is to obtain the long wavelength phonon dispersion directly from the Bogoliubov equations defining the collective modes. This is achieved by developing a systematic expansion of these equations in powers of the phonon wave vector $q$. We do this first for the current-free state (Sec. III), confirming the result for the sound speed given above. We then consider a current-carrying state (Sec. IV) and obtain the analogous phonon dispersion in this case, reproducing the result obtained in [7] by means of a hydrodynamic analysis. Our expansion technique can be viewed as a justification of the assumptions on which the hydrodynamic approach is based. Furthermore, it provides explicit perturbative expressions for the various physical quantities that appear in the theory (for example, the effective mass). In Sec. II we present the theoretical background required for the calculation of small amplitude collective excitations in an optical lattice. For the most part we consider a three-dimensional optical potential with cubic symmetry, although we also touch on systems with one-dimensional modulation as well as radially confined condensates.
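To make these two definitions concrete, here is a minimal numerical sketch (ours, not part of the original analysis; the function names and toy inputs are purely illustrative) that evaluates $m^*$ from samples of $\bar\epsilon(k)$ by finite differences and then forms $s = \sqrt{(\bar n/m^*)\,\partial\mu/\partial\bar n}$. In a real calculation, $\bar\epsilon(k)$ and $\mu(\bar n)$ would be supplied by a GP solver; the uniform gas, for which $m^* = m$ and $s = \sqrt{g\bar n/m}$, serves as a sanity check.

```python
import numpy as np

def effective_mass(k, eps, hbar=1.0):
    """(m*)^(-1) = hbar^(-2) d^2 eps/dk^2 evaluated at k = 0, from samples
    of the energy per particle eps(k) on a uniform grid (a crude estimate)."""
    d2 = np.gradient(np.gradient(eps, k), k)   # numerical second derivative
    i0 = np.argmin(np.abs(k))                  # grid point closest to k = 0
    return hbar**2 / d2[i0]

def sound_speed(nbar, mu_of_n, m_eff, dn=1e-6):
    """s = sqrt((nbar/m*) dmu/dnbar), with the density derivative of the
    chemical potential taken by a central finite difference."""
    dmu = (mu_of_n(nbar + dn) - mu_of_n(nbar - dn)) / (2 * dn)
    return np.sqrt(nbar * dmu / m_eff)

# Sanity check with a uniform gas (hbar = m = 1): eps(k) = k^2/2 + g*n/2
# and mu = g*n, so m* = m and s = sqrt(g*n/m), the standard Bogoliubov value.
g, nbar = 0.1, 1.0
k = np.linspace(-0.5, 0.5, 201)
eps = k**2 / 2 + g * nbar / 2
print(effective_mass(k, eps))                  # ~ 1.0
print(sound_speed(nbar, lambda n: g * n, 1.0)) # ~ 0.316 = sqrt(0.1)
```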
The underlying periodicity of the optical potential implies that the Bogoliubov equations admit solutions having a Bloch function form. This aspect accounts for the use of a Bloch function basis in solving these equations in both the current-free (Sec. III) and current-carrying (Sec. IV) states. However, different calculational methods are used in the two cases and these are therefore presented separately. We also examine various physical limits (Thomas-Fermi, weak potential, weak coupling and tight-binding) in order to make contact with previous work. As stated previously, our main result for the phonon dispersion affirms the result which follows from the insightful use of hydrodynamic equations to describe the dynamics of long wavelength fluctuations [7,13].

II. BASIC THEORY

We consider an extended 3D BEC subjected to standing wave light fields that give rise to a periodic external potential having the property $V_{\rm opt}(\mathbf r + \mathbf R) = V_{\rm opt}(\mathbf r)$, where $\mathbf R$ is a Bravais lattice vector. For the most part we restrict ourselves to a cubic lattice for which $\mathbf R = d(n_1\hat{\mathbf x} + n_2\hat{\mathbf y} + n_3\hat{\mathbf z})$, with $n_i$ an integer. We base our analysis on the time-dependent Gross-Pitaevskii (GP) equation for the condensate wave function, $\Psi(\mathbf r, t)$,

$$ i\hbar\frac{\partial\Psi}{\partial t} = \left[-\frac{\hbar^2}{2m}\nabla^2 + V_{\rm opt}(\mathbf r) + g|\Psi|^2\right]\Psi. \tag{1} $$

This equation admits stationary solutions of the form

$$ \Psi(\mathbf r, t) = \Phi(\mathbf r)\,e^{-i\mu t/\hbar}, $$

where $\Phi(\mathbf r)$ satisfies the time-independent GP equation

$$ \left[-\frac{\hbar^2}{2m}\nabla^2 + V_{\rm opt}(\mathbf r) + g|\Phi|^2\right]\Phi = \mu\Phi, $$

with the normalization $\int_V d\mathbf r\,|\Phi|^2 = N$, where $N$ is the total number of particles in the volume $V$. Also of interest is the total energy of the system given by

$$ E_{\rm tot} = \int d\mathbf r\left[\frac{\hbar^2}{2m}|\nabla\Phi|^2 + V_{\rm opt}(\mathbf r)|\Phi|^2 + \frac{g}{2}|\Phi|^4\right]. \tag{3} $$

The energy parameter $\mu$ is the chemical potential and is related to $E_{\rm tot}$ by $\mu = \partial E_{\rm tot}/\partial N$. Often the ground state solution of the GP equation is of interest but we shall also consider states which have a superfluid flow. These states have a Bloch function form

$$ \Phi_{n\mathbf k}(\mathbf r) = \sqrt{\bar n}\,e^{i\mathbf k\cdot\mathbf r}\,w_{n\mathbf k}(\mathbf r), $$

where $n$ is a band index and $\mathbf k$ is a wave vector restricted to the first Brillouin zone. The factor $\sqrt{\bar n}$, where $\bar n$ is the mean density, is introduced in the definition of $w_{n\mathbf k}$ so as to give the normalization

$$ \frac{1}{\Omega}\int_\Omega d\mathbf r\,|w_{n\mathbf k}|^2 = 1, $$

where $\Omega$ is the Wigner-Seitz volume. The condensate density is then $n_c(\mathbf r) = |\Phi_{n\mathbf k}(\mathbf r)|^2 = \bar n\,|w_{n\mathbf k}(\mathbf r)|^2$. The Bloch function $w_{n\mathbf k}(\mathbf r)$ is in general complex and is the self-consistent solution of

$$ \left[-\frac{\hbar^2}{2m}(\nabla + i\mathbf k)^2 + V_{\rm opt}(\mathbf r) + g\bar n|w_{n\mathbf k}|^2\right]w_{n\mathbf k} = \mu_{n\mathbf k}\,w_{n\mathbf k}. \tag{5} $$

We assume that $w_{n\mathbf k}$ has the periodicity of the lattice, $w_{n\mathbf k}(\mathbf r + \mathbf R) = w_{n\mathbf k}(\mathbf r)$, although it should be noted that period-doubled states also exist [19]. The chemical potential, $\mu_{n\mathbf k}(\bar n)$, is implicitly a function of the mean density and depends on the particular Bloch state being considered. The superfluid current density in this state is

$$ \mathbf j_s(\mathbf r) = \frac{\hbar}{2mi}\left(\Phi_{n\mathbf k}^*\nabla\Phi_{n\mathbf k} - \Phi_{n\mathbf k}\nabla\Phi_{n\mathbf k}^*\right) $$

and has the property $\nabla\cdot\mathbf j_s(\mathbf r) = 0$. Introducing the superfluid velocity according to the relation $\mathbf j_s(\mathbf r) = n_c(\mathbf r)\,\mathbf v_s(\mathbf r)$, we have

$$ \mathbf v_s(\mathbf r) = \frac{\hbar}{m}\left[\mathbf k + \nabla\theta_{n\mathbf k}(\mathbf r)\right], \tag{7} $$

where $\theta_{n\mathbf k}(\mathbf r)$ is the phase of the Bloch function $w_{n\mathbf k}$. The spatially-averaged superfluid velocity is

$$ \bar{\mathbf v}_s = \frac{1}{\Omega}\int_\Omega d\mathbf r\,\mathbf v_s(\mathbf r). $$

In one dimension, the periodicity of $w_{nk}(x)$ implies $\overline{d\theta_{nk}/dx} = 2\pi l/d$, where $l$ is an integer. By continuity of the phase with $k$, we expect $l$ to have a fixed value for a given band, and $\bar v_s = (\hbar/m)(k + G)$ where $G$ is some reciprocal lattice vector. For the lowest band, we show in Appendix A that $G = 0$. Thus, we arrive at the somewhat surprising conclusion that $\bar v_s = \hbar k/m$. We suspect that similar results apply in higher dimensions but have not been able to show this explicitly. The average superfluid velocity should be distinguished from the velocity determining the average current density $\bar{\mathbf j} = \bar n\,\mathbf v_{n\mathbf k}$. This velocity is given by [20]

$$ \mathbf v_{n\mathbf k} = \frac{1}{\hbar}\nabla_{\mathbf k}\,\bar\epsilon(\bar n,\mathbf k), $$

where $\bar\epsilon(\bar n,\mathbf k) = E_{\rm tot}/N$ is the energy per particle in the state characterized by the mean density $\bar n$ and quasimomentum $\mathbf k$.
In one dimension, the average current density vanishes at the zone boundary $k = \pi/d$ if the mean density is below a critical value $\bar n_c$ [20]. On the other hand, the average superfluid velocity is $\bar v_s = \hbar\pi/md$. This is not a contradiction since the local superfluid velocity in (7) is averaged differently when calculating the average current density.

Dynamical states of the condensate are determined by the time-dependent GP equation (1). For small amplitude excitations, the condensate wave function is expressed as

$$ \Psi(\mathbf r, t) = \left[\Phi_{n\mathbf k}(\mathbf r) + \delta\Phi(\mathbf r, t)\right]e^{-i\mu_{n\mathbf k}t/\hbar} \tag{11} $$

and the GP equation is expanded to first order in the deviation $\delta\Phi(\mathbf r, t)$. By writing

$$ \delta\Phi(\mathbf r, t) = u_i(\mathbf r)\,e^{-iE_it/\hbar} - v_i^*(\mathbf r)\,e^{iE_i^*t/\hbar}, $$

where $E_i$ is allowed to be complex, one obtains the following Bogoliubov equations for the quasiparticle amplitudes $u_i$ and $v_i$,

$$ \hat L\,u_i - g\Phi_{n\mathbf k}^2\,v_i = E_i\,u_i, \qquad \hat L\,v_i - g\Phi_{n\mathbf k}^{*2}\,u_i = -E_i\,v_i, $$

where the operator $\hat L$ is defined as

$$ \hat L = -\frac{\hbar^2}{2m}\nabla^2 + V_{\rm opt}(\mathbf r) + 2g|\Phi_{n\mathbf k}|^2 - \mu_{n\mathbf k}. $$

Each distinct solution labelled by the index $i$ corresponds to a collective excitation of the condensate and $E_i$ represents the excitation energy of the mode. The orthonormality of the quasiparticle amplitudes is specified by

$$ \int d\mathbf r\left[u_i^*u_j - v_i^*v_j\right] = \delta_{ij}. $$

Since the operator $\hat L$ has the translational symmetry of the lattice, the Bogoliubov equations admit solutions of the form

$$ u_i(\mathbf r) = e^{i(\mathbf q + \mathbf k)\cdot\mathbf r}\,\bar u_i(\mathbf r), \qquad v_i(\mathbf r) = e^{i(\mathbf q - \mathbf k)\cdot\mathbf r}\,\bar v_i(\mathbf r), $$

where $\bar u_i(\mathbf r)$ and $\bar v_i(\mathbf r)$ have the periodicity of the lattice. These functions satisfy

$$ \hat{\mathcal L}(\mathbf q)\,\bar u_i - g\bar n\,w_{n\mathbf k}^2\,\bar v_i = E_i\,\bar u_i, \qquad \hat{\mathcal L}^*(-\mathbf q)\,\bar v_i - g\bar n\,w_{n\mathbf k}^{*2}\,\bar u_i = -E_i\,\bar v_i, \tag{16} $$

with

$$ \hat{\mathcal L}(\mathbf q) = -\frac{\hbar^2}{2m}(\nabla + i\mathbf q + i\mathbf k)^2 + V_{\rm opt} + 2g\bar n|w_{n\mathbf k}|^2 - \mu_{n\mathbf k}. $$

Our notation emphasizes that $\mathbf q$ and $\mathbf k$ play distinct roles in the Bogoliubov equations: the former characterizes the Bloch-like character of the quasiparticle amplitudes while the latter corresponds to the quasimomentum of the condensate wave function. In the following, we shall also make use of the Hamiltonian

$$ \hat h_{\mathbf k}(\mathbf q) = -\frac{\hbar^2}{2m}(\nabla + i\mathbf q + i\mathbf k)^2 + V_{\rm opt} + g\bar n|w_{n\mathbf k}|^2 - \mu_{n\mathbf k}, \tag{18} $$

which for $\mathbf q = 0$ is just the Hamiltonian determining the time-independent condensate wave function $w_{n\mathbf k}$.

III. PHONON DISPERSION FOR A STATIONARY CONDENSATE

We begin by considering the simpler situation in which there is no superfluid flow ($\mathbf k = 0$). In this case, the ground state solution of the time-independent GP equation can be taken to be real, and $\bar n\,w_{n=0,\mathbf k=0}^2(\mathbf r) = \bar n\,w_{n=0,\mathbf k=0}^{*2}(\mathbf r) = n_c(\mathbf r)$. It is then convenient to introduce the functions $\psi_i^\pm = \bar u_i \pm \bar v_i$ and to combine the Bogoliubov equations into a single equation for $\psi_i^+$ [21],

$$ \hat h_0(\mathbf q)\left[\hat h_0(\mathbf q) + 2g\,n_c(\mathbf r)\right]\psi_i^+ = E_i^2\,\psi_i^+, \tag{19} $$

where $\hat h_0$ is the Hamiltonian in which the mean field, $g\bar n|w_{0,0}|^2$, of the current-free condensate appears. To solve (19), we introduce a complete set of Bloch states which are solutions of the equation

$$ \hat h_0(\mathbf q)\,w_{n\mathbf q} = \varepsilon_n(q)\,w_{n\mathbf q}. \tag{21} $$

This is a linear Schrödinger equation but the solution for $\mathbf q = 0$ and $n = 0$ coincides with the self-consistent GP solution $w_{0,0}$. By definition, the band energies, $\varepsilon_n(q)$, are referred to $\mu_{0,0}$, so that $\varepsilon_0(0) = 0$. The functions $w_{n\mathbf q}$ satisfy the periodicity property $w_{n\mathbf q}(\mathbf r + \mathbf R) = w_{n\mathbf q}(\mathbf r)$ and the orthonormality relation

$$ \frac{1}{\Omega}\int_\Omega d\mathbf r\,w_{n\mathbf q}^*\,w_{n'\mathbf q} = \delta_{nn'}. \tag{22} $$

In addition, at $\mathbf q = 0$ they are chosen to be real. Since $\psi_i^+$ is itself a Bloch function, it can be expanded as

$$ \psi_i^+ = \sum_n c_n(q)\,w_{n\mathbf q}. $$

The label $i$ represents a band index $m$ and the Bloch wave vector $\mathbf q$. However in the following, we will only be interested in the lowest excitation band and will therefore drop the label for convenience. Substituting this expansion into (19) we obtain

$$ \varepsilon_n(q)\Big[\varepsilon_n(q)\,c_n(q) + 2g\sum_{n'}M_{nn'}(q)\,c_{n'}(q)\Big] = E^2(q)\,c_n(q), \tag{24} $$

where

$$ M_{nn'}(q) = \frac{\bar n}{\Omega}\int_\Omega d\mathbf r\,w_{n\mathbf q}^*\,w_{0,0}^2\,w_{n'\mathbf q}. \tag{25} $$

We have displayed explicitly the $q$-dependence of all the variables. For a cubic lattice, we anticipate a particular eigenvalue $E(q)$ which has a linear dispersion of the form $E(q) = \hbar s q$. Our objective is to derive an explicit expression for the Bogoliubov sound speed $s$. From Eq. (19) it is clear that in the $q \to 0$ limit, the eigenvector corresponding to this particular eigenvalue will be $c_n(0) = \delta_{n0}$, where $n = 0$ labels the lowest Bloch band, since only this band has a vanishing energy ($\varepsilon_0(0) = 0$).
As a function of $q$, the lowest band energy behaves as

$$ \varepsilon_0(q) \simeq \frac{\hbar^2q^2}{2m_0}, $$

which defines the effective mass $m_0$ of this band. We emphasize that this band mass is determined by the linear Schrödinger equation (21). More will be said about this later. The phonon eigenvector is a continuous function of $q$ and, as we shall see, behaves as $c_n(q) = \delta_{n0} + O(q^2)$ for small $q$. To obtain an expression for $s$ we separate the $n = 0$ equation in (24) from those for $n \neq 0$:

$$ \varepsilon_0(q)\Big[\varepsilon_0(q)\,c_0 + 2g\sum_{n'}M_{0n'}c_{n'}\Big] = E^2c_0, $$

$$ \varepsilon_n(q)\Big[\varepsilon_n(q)\,c_n + 2g\sum_{n'}M_{nn'}c_{n'}\Big] = E^2c_n, \qquad n \neq 0. $$

Since $\varepsilon_0(q)$ and $E^2$ are both proportional to $q^2$, the latter equation shows that $c_n(q) \propto q^2$ for $n \neq 0$, as claimed. Thus, to order $q^2$, these equations can be replaced by

$$ 2g\,\varepsilon_0(q)\Big[M_{00}c_0 + \sum_{n'}{}'M_{0n'}c_{n'}\Big] = E^2c_0, $$

$$ \varepsilon_n(0)\,c_n + 2g\sum_{n'}{}'\tilde M_{nn'}c_{n'} = -2g\,M_{n0}c_0, \qquad n \neq 0. $$

Solving for $E^2$, we obtain

$$ E^2 = 2g\,\varepsilon_0(q)\Big[M_{00} - 2g\sum_{nn'}{}'M_{0n}\big[(\varepsilon + 2g\tilde M)^{-1}\big]_{nn'}M_{n'0}\Big], \tag{31} $$

where all quantities within the square brackets are understood to be the $q = 0$ values and $\varepsilon$ denotes the diagonal matrix of band energies $\varepsilon_n(0)$. The prime on the summation indicates that the terms $n = 0$ and $n' = 0$ are excluded from the sum. The matrix $\tilde M_{nn'}$ is the matrix obtained by deleting the first row and first column of $M_{nn'}$. We note that this combination of matrix elements can in fact be written as a single element of an inverse matrix,

$$ 2g\Big[M_{00} - 2g\sum_{nn'}{}'M_{0n}\big[(\varepsilon + 2g\tilde M)^{-1}\big]_{nn'}M_{n'0}\Big] = \Big\{\big[(\varepsilon + 2gM)^{-1}\big]_{00}\Big\}^{-1}. $$

Thus we find that the square of the sound speed is given by

$$ s^2 = \frac{g}{m_0}\Big[M_{00} - 2g\sum_{nn'}{}'M_{0n}\big[(\varepsilon + 2g\tilde M)^{-1}\big]_{nn'}M_{n'0}\Big]. \tag{32} $$

We next relate the sound speed to variations of the chemical potential with mean density. Writing for simplicity $w_0 \equiv w_{0,0}$ and $\mu_0 \equiv \mu_{0,0}$, we have

$$ \hat h_0\,w_0 \equiv \Big[-\frac{\hbar^2}{2m}\nabla^2 + V_{\rm opt} + g\bar n\,w_0^2 - \mu_0\Big]w_0 = 0. \tag{33} $$

The derivative of this equation with respect to $\bar n$ is

$$ \hat h_0\,w_{0,\bar n} + \big(g\,w_0^2 + 2g\bar n\,w_0\,w_{0,\bar n} - \mu_{0,\bar n}\big)w_0 = 0, \tag{34} $$

where we use the notation $(\cdots)_{,\bar n}$ to denote a derivative with respect to $\bar n$. Taking the inner product of (34) with $w_0$ and noting that $\hat h_0 w_0 = 0$, we find

$$ \mu_{0,\bar n} = \frac{g}{\Omega}\int_\Omega d\mathbf r\,w_0^4 + \frac{2g\bar n}{\Omega}\int_\Omega d\mathbf r\,w_0^3\,w_{0,\bar n}. \tag{35} $$

To solve (34) for $w_{0,\bar n}$, we note that the normalization condition in (22) implies

$$ \frac{1}{\Omega}\int_\Omega d\mathbf r\,w_0\,w_{0,\bar n} = 0. $$

Thus, $w_{0,\bar n}$ is orthogonal to $w_0$ and has the expansion

$$ w_{0,\bar n} = \sum_{n\neq0} d_n\,w_n \tag{37} $$

in terms of the (real) $\mathbf q = 0$ Bloch functions $w_n \equiv w_{n,\mathbf q=0}$. Substituting this expansion in (34) yields

$$ d_n = -\frac{g}{\bar n}\sum_{n'}{}'\big[(\varepsilon + 2g\tilde M)^{-1}\big]_{nn'}M_{n'0}. \tag{38} $$

Using the expansion (37) for $w_{0,\bar n}$ in (35) with the expansion coefficients defined by (38), we find

$$ \mu_{0,\bar n} = \frac{g}{\bar n}\Big[M_{00} - 2g\sum_{nn'}{}'M_{0n}\big[(\varepsilon + 2g\tilde M)^{-1}\big]_{nn'}M_{n'0}\Big]. \tag{39} $$

Comparing this with (31), we see that (32) is equivalent to

$$ s = \sqrt{\frac{\bar n}{m_0}\frac{\partial\mu_{0,0}}{\partial\bar n}}. \tag{40} $$

This result for an optical lattice was first given by Menotti et al. [17] on the basis of general dynamical considerations. We see here that it follows directly from the Bogoliubov equations and also applies in the case of a 3D lattice with cubic symmetry. The small-$q$ expansion can be viewed as a systematic way of implementing the slowly varying ansatz used by Krämer et al. [13]. The expression for $s$ has the same form as for a homogeneous gas, with $m_0$ replacing the bare mass $m$ and the density derivative of the chemical potential, $\mu_{0,\bar n}$, replacing the interaction parameter $g$. In other words, at long wavelengths the condensate behaves as a gas of particles of mass $m_0$ with a compressibility, $\kappa$, given by $\kappa^{-1} = \bar n(\partial\mu_{0,0}/\partial\bar n)$.

A. Thomas-Fermi Limit

The Thomas-Fermi (TF) approximation is valid when the density varies in space on a length scale much larger than the local coherence length $\xi = \sqrt{\hbar^2/2mg\bar n}$. In this situation, the density is well-approximated by $n_0(\mathbf r) = [\mu_0 - V_{\rm opt}(\mathbf r)]/g$, provided $\mu_0 > [V_{\rm opt}]_{\rm max}$, and we then have

$$ \mu_0 = g\bar n + \bar V_{\rm opt}, $$

where $\bar V_{\rm opt}$ is the mean value of the optical potential in the unit cell. Thus, $\mu_{0,\bar n} = g$ as for a homogeneous gas. Since the effective potential, $V_{\rm opt} + gn_0$, in the GP equation is a constant in the TF limit, we would expect the band mass, $m_0$, to be close to the free particle mass, $m$. One can in fact show that the deviation of $m_0$ from $m$ is proportional to $V_0^2(\xi/d)^4$, where $V_0$ is the amplitude of the potential modulation. Since we are assuming that $\xi/d \ll 1$, $m_0 \simeq m$ and the TF sound velocity is $s \simeq \sqrt{g\bar n/m}$, as for a uniform gas. It should be noted that this result is valid even when the amplitude of the density modulation is of order the mean density $\bar n$, provided only that the inequality $\xi/d \ll 1$ is everywhere satisfied.
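The band mass $m_0$ in (40) is defined by the curvature of the lowest band of (21), which in general contains the self-consistent mean field $g\bar n w_{0,0}^2$. As a rough illustration (a sketch of ours, assuming the $g \to 0$ limit in which the mean-field term is dropped), the following fragment diagonalizes the Bloch Hamiltonian for a bare potential $V_0\cos(Gx)$ in a plane-wave basis and extracts $m_0$ by finite differences; it reproduces $m_0 = m$ for $V_0 = 0$ and shows $m_0$ growing with lattice depth, the trend responsible for the eventual suppression of $s$.

```python
import numpy as np

# Lowest Bloch band of h0(q) for V(x) = V0*cos(G*x) in a plane-wave basis,
# in units with hbar = m = 1 and G = 2. The mean-field term g*nbar*|w0|^2
# of the interacting problem is dropped here (g -> 0 limit), so this is
# only a qualitative illustration of the band mass m0.
def lowest_band(q, V0, G=2.0, nmax=10):
    Gs = G * np.arange(-nmax, nmax + 1)        # reciprocal lattice vectors
    H = np.diag(0.5 * (q + Gs) ** 2)           # kinetic energy, diagonal
    off = 0.5 * V0 * np.ones(2 * nmax)         # cos couples plane waves +-G
    H += np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[0]            # lowest eigenvalue

def band_mass(V0, dq=1e-3):
    """m0 = hbar^2 / (d^2 eps0/dq^2), from the band curvature at q = 0."""
    e = [lowest_band(q, V0) for q in (-dq, 0.0, dq)]
    return dq**2 / (e[0] - 2 * e[1] + e[2])

for V0 in (0.0, 1.0, 4.0):
    print(V0, band_mass(V0))   # m0 = 1 for V0 = 0 and grows with lattice depth
```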
If $\mu_0 < [V_{\rm opt}]_{\rm max}$, the Thomas-Fermi density develops 'holes' in regions where $V_{\rm opt} > \mu_0$. For a one-dimensional modulation the density is disjoint, as is the case in two or three dimensions for sufficiently small density. In this situation long wavelength propagating phonon excitations cannot exist within the TF approximation since the necessary fluctuations in the number of atoms from one lattice cell to the next cannot occur. In reality, the GP density in regions where $V_{\rm opt} > \mu_0$ is small but finite and phonon-like excitations continue to exist. However, increasing the localization of the density in the potential minima leads to larger effective masses and eventually the sound speed $s$ tends to zero. This behaviour cannot be described within the TF approximation.

B. Weak-Coupling Limit

The $g$-dependence of $\mu_{0,\bar n}$ appears explicitly in (35) and implicitly through the wave function $w_0$ which satisfies (33). To extract the dependence in the limit $g \to 0$, we expand the wave function as $w_0 = w_0^{(0)} + g(\partial w_0/\partial g)_{g=0} + \cdots$. Since $w_0$ depends on $g$ through the combination $g\bar n$, we see that $g(\partial w_0/\partial g) = \bar n(\partial w_0/\partial\bar n)$. Thus, the combination $\bar n w_{0,\bar n}$ appearing in (35) is proportional to $g$ in the small-$g$ limit and the second term on the right hand side of (35) is of order $g^2$. We then have

$$ \mu_{0,\bar n} \simeq \frac{g}{\Omega}\int_\Omega d\mathbf r\,\big[w_0^{(0)}\big]^4 - \frac{2g^2}{\bar n}\sum_{nn'}{}'M_{0n}\big[\varepsilon^{-1}\big]_{nn'}M_{n'0}. \tag{45} $$

As discussed in Ref. [18], the first term accounts for the effect of the lattice on the compressibility, $\kappa$, which decreases with increasing localization of the wave function $w_0$. The second term shows that $\mu_{0,\bar n}$ deviates from a linear dependence on $g$ as the strength of the interaction is increased. An explicit expression for this quadratic correction can be obtained from the equivalent expression for $\mu_{0,\bar n}$ in (39), in which the two terms respectively correspond to the two integrals in (35). From the definition of the $M_{nn'}$ matrix in (25), we see that

$$ M_{0n} = \frac{\bar n}{\Omega}\int_\Omega d\mathbf r\,w_0^3\,w_n = M_{n0}. $$

Thus,

$$ \mu_{0,\bar n} \simeq \frac{g}{\Omega}\int_\Omega d\mathbf r\,\big[w_0^{(0)}\big]^4 - \frac{2g^2}{\bar n}\sum_n{}'\frac{|M_{0n}|^2}{\varepsilon_n}. $$

Since the excitation energies are positive, we see that $\mu_{0,\bar n}$ has a negative curvature, which agrees with the numerical results in Ref. [18]. The interatomic interaction has the effect of increasing the width of the wave function, which counteracts the localizing effect of the lattice potential.

C. Weak Potential Limit

It is also of interest to obtain an expression for the sound speed in the case of a weak optical potential where perturbation theory applies. For simplicity we consider a weak one-dimensional periodic potential $V_{\rm opt} = V_0\cos(Gz)$, where $G = 2\pi/d$, applied to an otherwise three-dimensional system. The relevant GP equation is now one-dimensional,

$$ \Big[-\frac{\hbar^2}{2m}\frac{d^2}{dz^2} + V_0\cos(Gz) + g\bar n\,w_0^2 - \mu_0\Big]w_0 = 0. \tag{46} $$

In treating the optical potential as a perturbation, we expand the wave function as $w_0 = w_0^{(0)} + w_0^{(1)} + \cdots$, and the chemical potential as $\mu_0 = \mu_0^{(0)} + \mu_0^{(1)} + \mu_0^{(2)} + \cdots$, where the superscript here denotes the order in $V_0$. The properly normalized wave function in the absence of the potential is $w_0^{(0)} = 1$, and to first order one finds

$$ w_0^{(1)} = -\frac{V_0\cos(Gz)}{\varepsilon_G^{(0)} + 2g\bar n}, $$

where $\varepsilon_G^{(0)} = \hbar^2G^2/2m$. In calculating the second order contribution, $\mu_0^{(2)}$, to the chemical potential, the second order wave function, $w_0^{(2)}$, need not be determined, but the normalization condition $\overline{w_0^{(2)}} = -\tfrac12\,\overline{\big[w_0^{(1)}\big]^2}$ is required. Thus, the chemical potential correct to second order in $V_0$ is found to be

$$ \mu_0 = g\bar n - \frac{\varepsilon_G^{(0)}\,V_0^2}{2\big(\varepsilon_G^{(0)} + 2g\bar n\big)^2}. \tag{49} $$

Taking the derivative with respect to $\bar n$, we have

$$ \mu_{0,\bar n} = g\Big[1 + \frac{2\varepsilon_G^{(0)}\,V_0^2}{\big(\varepsilon_G^{(0)} + 2g\bar n\big)^3}\Big]. $$

The weak-coupling limit of this result to lowest order in $g$ is $\mu_{0,\bar n} = g\big(1 + 2V_0^2/\varepsilon_G^{(0)2}\big)$. This can be shown to agree with the expansion of the first term in (45) to second order in $V_0$. To complete the calculation of the sound speed, we require an equivalent expression for the effective mass. This can be obtained by solving

$$ \Big[-\frac{\hbar^2}{2m}\Big(\frac{d}{dz} + iq\Big)^2 + V_0\cos(Gz) + g\bar n\,w_0^2 - \mu_0\Big]w_{0q} = \varepsilon_0(q)\,w_{0q}, \tag{50} $$

where $w_0$ is the solution of (46).
Since the correction to the effective mass is second order in V₀, it is sufficient to consider the mean-field potential (gn̄w₀² − µ₀) to first order in V₀. We must then solve (50), in which the potential modulation appears at the screened amplitude V₀′ = V₀ ε_G⁽⁰⁾/(ε_G⁽⁰⁾ + 2gn̄). Again treating V₀′ perturbatively, the effective mass for the lowest band is found to be (51). Inserting (51) and (49) into (40), and discarding the quartic term in V₀, we arrive at the following expression for the sound speed, Eq. (52). When ε_G⁽⁰⁾ ≪ gn̄ (or ξ/d ≪ 1/2π), this expression reduces to (53), in agreement with the approximation to the sound speed obtained by Berg-Sørensen and Mølmer [5]. To make contact with Ref. [18], we introduce the recoil energy E_R = ℏ²π²/2md² = ε_G⁽⁰⁾/4 and define 2V₀ = σE_R (the parameter σ is called 's' in Ref. [18]). Eq. (52) can then be rewritten as (54), where s₀ = √(gn̄/m) is the sound speed for the homogeneous gas and γ is the ratio gn̄/E_R = (d/πξ)². This shows that s decreases quadratically with the strength of the optical potential, which is consistent with the numerical results in Ref. [18]. This expression is valid if 2V₀ ≪ ε_G⁽⁰⁾, or σ ≪ 4; however, this constitutes a rather limited range of the values of σ of physical interest.

D. Radially Confined Condensates

As a final application of the results derived in this section, we consider a condensate that is confined in the radial direction. To be specific, we assume a potential of the form V(r) = V_⊥(ρ) + V_opt(z), where V_⊥(ρ) = mω_⊥²ρ²/2, that is, harmonic confinement in the radial (ρ) direction. The optical potential is periodic in the axial direction with periodicity d. This potential approximates the situation of a long cigar-shaped trap with an axial standing light wave. Although the geometry is quite different from that considered earlier, the previous analysis can be carried over with minor modification. The ground state GP solution has the property Φ₀(ρ, z + d) = Φ₀(ρ, z) and a normalization which defines the mean linear density λ̄ along the length of the condensate. As before, it is convenient to define Φ₀(r) ≡ √λ̄ w₀(r). The Bogoliubov excitations in the present situation have a Bloch wave character along the axis and are obtained from (19) with the appropriately modified Hamiltonian. The eigenstates of ĥ₀(q) are now labelled by the set of quantum numbers {q, n, m, ν}, where n is a one-dimensional band index, m is the azimuthal quantum number associated with the z-component of angular momentum, and ν labels the different radial excitations. Of interest here are axially symmetric solutions (m = 0), since these have the character of the phonon mode of interest. The analysis after (19) is followed step by step, the only change being the integration volume used in the normalization of the states. We thus find that the sound speed is given by

s² = (λ̄/m₀)(∂µ₀/∂λ̄), (58)

where m₀ is the effective mass of the lowest band (n = 0 and ν = 0). Eq. (58) is of course valid in the absence of the optical potential. Treating the condensate in the TF approximation, we find λ̄ = πµ₀²/gmω_⊥² and ∂µ₀/∂λ̄ = gmω_⊥²/2πµ₀. This gives a sound speed

s = √(gn₀(0)/2m), (59)

where n₀(0) = µ₀/g is the density at the centre of the trap. This result was first obtained in Ref. [22] using a different method. In the weak coupling limit, (45) applies. In this case, the condensate wave function is a gaussian, and one again obtains the result in (59) for the sound speed, as found previously [23]. When the optical potential is strong the condensate becomes localized on each site. In this situation, the tight-binding approximation is a useful method for dealing with the system [10,11,24].
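The Thomas-Fermi results for the cigar geometry can be checked with a few lines of symbolic algebra. The sketch below (Python with SymPy assumed) inverts the quoted relation λ̄ = πµ₀²/gmω_⊥² and confirms that (58) then gives s² = µ₀/2m = gn₀(0)/2m, i.e. Eq. (59); it is an illustration of the algebra only, not part of the paper's numerics.

```python
import sympy as sp

g, m, w, mu = sp.symbols('g m omega_perp mu0', positive=True)

lam = sp.pi * mu**2 / (g * m * w**2)   # TF linear density lambda(mu0)
dmu_dlam = 1 / sp.diff(lam, mu)        # d(mu0)/d(lambda) by inversion
s2 = (lam / m) * dmu_dlam              # Eq. (58), with m0 ~ m in the TF limit

print(sp.simplify(s2))                 # -> mu0/(2*m), i.e. g*n0(0)/(2*m)
```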
Approximating the condensate wave function as Φ(r) = Σᵢ cᵢ fᵢ(r), where fᵢ(r) is a function localized on the i-th site and normalized to unity, the energy of the system is given approximately by an expression in which ε₀ is an on-site energy, t is a hopping matrix element connecting the amplitudes on nearest-neighbour sites and g̃ is an effective interaction strength. For the ground state, |cᵢ|² = ν, where ν is the number of atoms per site. We then have E_tot = N[(ε₀ − 2t) + g̃ν/2]. Assuming that the parameters ε₀, t and g̃ are density independent in the extreme tight-binding limit, we have µ₀ = ∂E_tot/∂N = (ε₀ − 2t) + g̃ν and ∂µ₀/∂λ̄ = g̃d. Within the same approximation the band energy is ε(q) = ε₀ + g̃ν − 2t cos(qd), which gives an effective mass m₀ = ℏ²/2td². Thus the Bogoliubov sound speed from (58) is s_tb ≃ √(2g̃νtd²/ℏ²), which is the result obtained by Javanainen [10].

IV. PHONON DISPERSION FOR A MOVING CONDENSATE

We turn next to the derivation of the phonon dispersion relation for the case where the condensate is "flowing" through the optical lattice. We address this problem by directly solving the Bogoliubov equations in (16) for a condensate wave function w_{nk}, in the limit of small wave vectors q. The structure of these equations is quite different when k ≠ 0 and the method used in the previous section to determine the dispersion relation can no longer be applied. In fact, the analysis is much more intricate, as will soon become apparent. The results we obtain confirm the more intuitive hydrodynamic approach presented recently [7], which describes the dynamics of the system in terms of slowly varying hydrodynamic variables (density and momentum). By including small length scale variations, our approach in a sense provides a "microscopic" derivation of the hydrodynamic equations that one would expect to be valid in the long wavelength limit. Since the solution in the small-q limit is required, we rewrite (16) so as to display the q-dependent terms explicitly in (60), where p = (ℏ/i)∇ is the momentum operator. The GP Hamiltonian here is ĥ_k(q = 0) as defined in (18). In these equations, we have adopted the index n = 0 for the condensate wave function. We will usually think of this state as the lowest Bloch state solution of the GP equation, although it could in principle correspond to an arbitrary excited band. For simplicity we have dropped the index on the quasiparticle amplitudes ū and v̄ and the excitation energy E, as we will only be considering the phonon-like excitation. It is clear that for q = 0, (60) admits a solution with ū ∝ w_{0k}, v̄ ∝ w*_{0k} and E = 0. For finite q we seek solutions in the form of an expansion in eigenfunctions of ĥ_{±k}, namely

ū = Σₙ aₙ(q) w_{nk}, v̄ = Σₙ bₙ(q) w_{n,−k}, (61)

where

ĥ_k w_{nk} = ε_{nk} w_{nk}. (62)

According to this definition, ε_{0k} ≡ 0. The functions w_{nk} are an orthonormal set with normalization given by (22). Although we use the same notation, it should be noted that these functions are distinct from those defined in (21). In addition, we may assume w_{n,−k}(r) = w*_{nk}(r) and ε_{n,−k} = ε_{nk}. Substituting these expansions into (60), we obtain the matrix equations (63) and (64), whose kinetic terms take the form (ε_{nk} + ℏ²q²/2m)aₙ + Σₙ′ (ℏ/m) q·P_{nn′} aₙ′, and where we have defined the matrices A_{nn′}, B_{nn′} and P_{nn′} in (65)–(67). Due to the inversion symmetry of the lattice, the Bloch states at the zone centre (k = 0) can be chosen to be simultaneous eigenstates of parity.
Although parity is not a good quantum number for states with nonzero k, each band can nevertheless be assigned a parity index η_n = ±1 such that

w_{n,−k}(−r) = η_n w_{nk}(r). (68)

This property is proved explicitly in one dimension in Appendix A and can also be shown to follow to lowest order in k by means of k·p perturbation theory. Together with the conjugation (time-reversal) property w*_{nk}(r) = w_{n,−k}(r), we have

w*_{nk}(r) = η_n w_{nk}(−r). (69)

This important property is used throughout the following discussion. For example, it can be used to show that the matrices in (65)–(67) satisfy the relations collected in (70). Note that A and P are hermitian while B is not. In addition, we see that A_{nn′}(0) and B_{nn′}(0) are real and nonzero only for pairs of states having the same parity index, while P_{nn′}(0) is purely imaginary and only couples states with opposite parity. In solving (63) and (64), it is convenient to define the following linear combinations: c_n = ½(a_n + η_n b_n) and d_n = ½(a_n − η_n b_n). Introducing these variables into (63) and (64), and making use of the relations in (70), we obtain the equations (71) and (72), where we have defined the hermitian matrix B̃_{nn′} = B_{nn′} η_{n′}. As in our earlier analysis, we anticipate that E(q) will depend linearly on q for q → 0, but due to the quasimomentum k of the condensate, it no longer depends simply on the magnitude q. To extract this dependence we systematically expand the coefficients c_n(q) and d_n(q) as a series in powers of q. Specifically, we write c_n(q) = h(q)[c_n⁽⁰⁾ + c_n⁽¹⁾ + ···] and d_n(q) = h(q)[d_n⁽⁰⁾ + d_n⁽¹⁾ + ···], where the superscript indicates the order of q in the respective terms (here, order signifies similar powers of the vector magnitude q [25]). The factor h(q) contains the nonanalytic behaviour of the coefficients which is required in order to satisfy the normalization condition in (15). For a homogeneous system, h(q) ∝ q^{−1/2}, and we expect a similar dependence in the case of a lattice. In the following, it is sufficient to note that this factor is the same for both coefficients and can therefore be ignored in developing a systematic q-expansion. We noted earlier that ū ∝ w_{0k} and v̄ ∝ w_{0,−k} for q → 0 which, according to (61), implies that a_n(q) ∝ δ_{n0} and b_n(q) ∝ δ_{n0} in this limit. If the state w_{0k} is an even parity state (η₀ = +1), as we assume in the following, we must then have c_n⁽⁰⁾ ∝ δ_{n0} and d_n⁽⁰⁾ = 0. To first order in q, (71) then gives (75); similarly, (72) gives (76), where we have defined the hermitian matrix N. Eq. (75) is homogeneous and indicates that c_n⁽¹⁾ ≡ 0. On the other hand, (76) can be solved for d_n⁽¹⁾ in terms of the unknown excitation energy E. To determine the latter, we must consider (71) to O(q²), obtaining (78). Setting n = 0 in this equation, and noting that ε_{0k} = 0 and that A_{0n} = B̃_{0n}, we find (79). Since d_n⁽¹⁾ is itself linear in E according to (76), we see that (79) is implicitly a quadratic equation for E which can be solved to determine the excitation energy to lowest order in q. However, to do so directly would not reveal the interesting dependences on various physical parameters that in fact emerge. As seen in the k = 0 analysis, the excitation energy could be related to the variation of the chemical potential with mean density n̄. This remains a quantity of interest in the present case, but we must also consider variations of the chemical potential with k. We thus turn next to the determination of µ_{0k,n̄} ≡ ∂µ_{0k}/∂n̄ and µ_{0k,i} ≡ ∂µ_{0k}/∂k_i.

A. Determination of µ_{0k,n̄}

Taking the derivative of ĥ_k w_{0k} = 0 with respect to n̄ gives (80). As at k = 0, the normalization condition implies that w_{0k,n̄} is orthogonal to w_{0k}, so that it has the expansion w_{0k,n̄} = Σ′_n α_n w_{nk} (81), where as before the prime on the summation indicates that the n = 0 term is excluded from the sum.
Inserting this expansion into (80), and taking the inner product with w_{nk}, we obtain (83), where we have noted that α*_n = η_n α_n as a result of the symmetry property (68). Setting n = 0 in (83), we find (84). The set of equations for n ≠ 0 has the solution (85), where Ñ is the reduced matrix obtained by deleting the first row and first column from N. Thus we find the expression (86) for µ_{0k,n̄}. This is analogous to (39) and reduces to it in the k = 0 limit, since the M and N matrices are then the same (note that the w_{n0} are defined to be real).

B. Determination of µ_{0k,i}

We follow a similar method to obtain µ_{0k,i}. Taking the derivative of ĥ_k w_{0k} = 0 with respect to k_i, we have (87), where w_{0k,i} = ∂w_{0k}/∂k_i. Noting the orthogonality of each vector component w_{0k,i} with w_{0k}, we have the expansion (88), where the expansion coefficients, β_{in}, define the Cartesian components of a vector β_n. Inserting (88) into (87), and taking the inner product with w_{nk}, we obtain (89), where, as before, we have used β*_{in} = η_n β_{in}. An expression for µ_{0k,i} can be found by setting n = 0 in (89), giving (90). The set of equations for n ≠ 0 yields the solution vector (91), and thus the expression (92) for µ_{0k,i}.

C. Excitation Energy

These results will now be used to obtain an expression for the excitation energy E from (76) and (79). Setting n = 0 in (76) we have (93), while for n ≠ 0 we see that (94). The quantities on the right hand side of this equation are in fact related to the expansion coefficients α_n in (85) and β_{in} in (91). We find the simple relation (95). Using this result in (93), we have (96), which with (84) and (90) can be written as (97). We see that d₀⁽¹⁾ and E are now related to each other through physically meaningful and calculable parameters. We now substitute (95) into (79) to obtain (98). With (85), the sum on the right hand side becomes (99). In going from the first to the second line, we have used the fact that all the matrices have the transposition property M_{n′n} = η_n η_{n′} M_{nn′}. With the expression for µ_{0k,i} in (90), (98) thus becomes (100). Eliminating d₀⁽¹⁾ from (97) and (100), we finally obtain (101). The sign of the square root is chosen to be positive to give a positive excitation energy in the k → 0 limit. The final quantity to interpret is the summation within the square root.

D. Effective Mass Tensor

The square bracket in (101) involves the tensor defined in (102). This expression is similar to the usual effective mass tensor defined on the basis of k·p perturbation theory, although the structure of the summation is different. To make contact with the k·p expression we consider the N_{nn′} matrix in the k → 0 limit. Quite generally, this matrix has the block structure (103), where the blocks are defined according to the parity index of the various states. For example, the block in the upper-left corner contains matrix elements between states with a positive parity index, η = +1. The diagonal matrix D contains the energy eigenvalues ε_{nk} on its diagonal. In the limit k → 0, we have B(k = 0) = A(k = 0) and A_{nn′} = 0 if η_n ≠ η_{n′}. Thus, in this limit N(k = 0) is block-diagonal, which of course is also true of its inverse. Since P_i only connects states of opposite parity in the k = 0 limit, we thus see that the tensor reduces to the form (105). This is precisely the effective mass tensor obtained by means of k·p perturbation theory [27] as applied to the Hamiltonian ĥ₀(q) in (20). The tensor defined in (102) is a generalized effective mass tensor in that it depends on the presence of a superfluid flow (k ≠ 0). Also because of this, it is no longer diagonal despite the cubic symmetry of the optical lattice.
To complete the identification of (m⁻¹)_{ij}, we consider variations of the GP equation with respect to the condensate wave vector k. The second derivative of ĥ_k w_{0k} = 0 yields the equation

ĥ_{k,ij} w_{0k} + ĥ_{k,i} w_{0k,j} + ĥ_{k,j} w_{0k,i} + ĥ_k w_{0k,ij} = 0. (106)

Here, ĥ_{k,i} and ĥ_{k,ij} denote the first and second derivatives of ĥ_k with respect to the components of k, given in (107) and (108). The inner product of (106) with w_{0k} gives

(1/Ω) ∫_Ω w*_{0k} [ĥ_{k,ij} w_{0k} + ĥ_{k,i} w_{0k,j} + ĥ_{k,j} w_{0k,i}] d³r = 0. (109)

The first integral is given in (110), while the integral of the next two terms gives the result (111). We thus find (112). With this result we see that the effective mass tensor defined in (102) can be expressed as (113).

E. Relation to Energy Density

This last result can be related to the total energy (3) in the state Φ_{0k}. Defining the energy per particle as E_tot ≡ Nϵ(n̄, k), we have (114). Comparing this with µ_{0k}, we see that µ_{0k} = ∂(n̄ϵ)/∂n̄ (115), which is the expression in brackets in (113). Thus the effective mass tensor is given by the second derivatives of the energy per particle with respect to k, Eq. (116). For a cubic lattice, µ_{0k} has an expansion of the form µ_{0k} = µ₀₀ + ℏ²k²/2m_µ + ···. Similarly, ϵ(n̄, k) = ϵ(n̄, 0) + ℏ²k²/2m_ϵ + ···. However, as proved by (105), the parameter m_ϵ is in fact the band mass m₀ defined by the Hamiltonian (20). In other words, the correct effective mass parameter can be extracted without solving the GP equation for w_{0k} (with k ≠ 0) self-consistently. We note in passing that direct differentiation of (115) establishes the relation j_s = ∇_k(n̄ϵ)/ℏ [20]. With these results, the phonon energy (101) can be written in a compact form. Defining the mean energy density as e ≡ n̄ϵ, the Bogoliubov excitation energy at long wavelengths is given by (117), where we use a repeated summation convention on the Cartesian indices i and j. This is precisely the expression given by Machholm et al. [7], who argued that the dynamics of the system at long wavelengths could be based on a hydrodynamic analysis. Since their approach arrives at (117) in a more economical fashion, it is useful to summarize the essential assumptions on which it is based. The central assumption is the existence of an average phase fluctuation, ⟨θ(r, t)⟩, that varies slowly in space and time. Expanding this average phase as ⟨θ(r + Δr, t + Δt)⟩ ≃ ⟨θ(r, t)⟩ + ∇⟨θ(r, t)⟩·Δr + (∂⟨θ(r, t)⟩/∂t)Δt, one identifies ∇⟨θ⟩ with the local wave vector, ⟨k⟩, and ∂⟨θ⟩/∂t with −⟨µ⟩/ℏ, where ⟨µ⟩ is the local chemical potential. The equation of motion for the local wave vector is thus (118). The second hydrodynamic equation is the continuity equation (119), where ⟨j_s⟩ is the local current density. The current density and chemical potential are then assumed to be given by the usual expressions for a uniform system, namely (120), where e(⟨n⟩, ⟨k⟩) is the average energy density for a uniform optical lattice, viewed as a function of the local density ⟨n⟩ and wave vector ⟨k⟩ [26]. By expanding the variables as ⟨n⟩ = n̄ + δn and ⟨k⟩ = k + δk, one obtains a pair of equations for the fluctuations which admits wavelike solutions with frequency ω = E/ℏ and wave vector q. The dispersion relation found is identical to (117). It is clear that the assumptions made in the hydrodynamic approach are completely justified. The average energy density e is the fundamental quantity determining the excitation energy at long wavelengths, as confirmed by our systematic q-expansion. The additional information provided by the expansion technique are the perturbative expressions for ∂µ_{0k}/∂n̄, ∇_k µ_{0k} and (1/m)_{ij} as given by (86), (92) and (102), respectively.

F. Discussion

For small k, e(n̄, k) ≃ e(n̄, 0) + n̄ℏ²k²/2m₀ + ···, and the Bogoliubov excitation energy is

E ≃ ℏ²q·k/m_µ ± ℏsq, (121)

where s is the sound speed for the condensate at rest.
This result was given previously by Krämer et al. [18]. The energy first becomes negative when the superfluid flow satisfies k > m_µ s/ℏ, where 1/m_µ = ∂(n̄/m₀)/∂n̄, which defines the Landau criterion for energetic instability at long wavelengths in an optical lattice. The region of energetic instability was mapped out for arbitrary q by Wu and Niu [6,28] and Machholm et al. [7]. In this region the energy of the superfluid state is no longer a local minimum. As a result, transitions to lower energy states can occur spontaneously provided a means of conserving energy and quasimomentum is available. The excitation energy given by (117) becomes imaginary when the argument of the square root is negative. This signals a dynamic instability whereby the amplitude of the condensate fluctuation grows (or decays) in time. Of the two factors in the square root, e_{,ij}, or equivalently the effective mass tensor, (1/m)_{ij}, is the most physically relevant. It is instructive to examine the latter in the weak potential limit. We consider for simplicity the one-dimensional situation discussed in Sec. III C. Repeating the perturbative analysis in Sec. III C for the case k ≠ 0, we find (123). We note that this expression becomes singular at a wave vector k_c satisfying k_c² = (G/2)² + k₀², where k₀ = ms₀/ℏ (Eq. (124)). The singularity indicates the breakdown of nondegenerate perturbation theory, but provided k is not too close to k_c, we can use (123) to evaluate the effective mass. To second order in V₀ we have (125). At k = 0 we recover the result (51) found in Sec. III C. For m₀⁻¹ to go to zero, k must be close to k_c. With Δk = k_c − k, we find that Δk → 0 as V₀ → 0; that is, the wave vector k at which m₀⁻¹ vanishes approaches k_c as V₀ → 0. We note that at k = k_c − Δk, the perturbative correction to the energy in (123) is still small, so that the perturbation theory estimate of where m₀⁻¹ goes to zero is reasonable. We thus expect a dynamic instability to set in when k ≃ k_c in the weak potential limit. This condition for the dynamical instability is the q = 0 limit of the result given in Refs. [6] and [7]. The Bogoliubov excitations of wave vector q in a homogeneous gas with current j_s = n̄ℏk/m have the energies

E_±(q) = ℏ²q·k/m ± √(ε_q⁰(ε_q⁰ + 2gn̄)), ε_q⁰ = ℏ²q²/2m. (127)

We follow Wu and Niu [6] in referring to the modes with the plus (minus) sign as phonons (anti-phonons). The former correspond to physical excitations in that their normalization is given by (15). The effect of an optical potential is to couple an anti-phonon mode with wave vector q to a phonon mode with wave vector q − G. The condition that E_−(q) = E_+(q − G) implies that the two modes are resonantly coupled and gives the critical wave vector (128). For q = 0, this gives the critical wave vector in (124). The expression in (128) was shown in [6] to account for the boundary of the dynamically unstable region in the weak potential limit. In fact, it can be shown by means of degenerate perturbation theory (Appendix B) that imposing a weak optical potential indeed gives rise to complex Bogoliubov eigenvalues. Alternatively, the condition E_−(q) = E_+(q − G) can be written as E_+(−q) + E_+(q − G) = 0. This was interpreted by Machholm et al. as a Landau criterion for the emission of two phonon excitations with zero total energy. Although this physical interpretation is appealing, it is not clear how it can be used to actually determine the rate at which the excitations are being produced, short of performing the perturbation analysis carried out in Appendix B in terms of phonon and anti-phonon modes.
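The resonance condition just described is easy to verify numerically from the homogeneous-gas energies (127). The sketch below (Python/NumPy) checks that at k_c = √((G/2)² + k₀²) the anti-phonon at q → 0 is degenerate with the phonon at q − G; all parameter values are illustrative.

```python
import numpy as np

hbar = m = 1.0
g, nbar, d = 1.0, 1.0, 1.0
G = 2.0 * np.pi / d

def E(q, k, sign):
    """E±(q) = hbar^2 q k/m ± sqrt(eps_q (eps_q + 2 g nbar)), Eq. (127)."""
    eps = hbar**2 * q**2 / (2.0 * m)
    return hbar**2 * q * k / m + sign * np.sqrt(eps * (eps + 2.0 * g * nbar))

k0 = m * np.sqrt(g * nbar / m) / hbar     # k0 = m s0 / hbar
kc = np.sqrt((G / 2.0)**2 + k0**2)        # critical wave vector at q -> 0
print(E(0.0, kc, -1.0), E(-G, kc, +1.0))  # both ~0: E-(0) = E+(-G)
```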
We thus see that the phonon–anti-phonon resonance condition, or alternatively the two-phonon Landau criterion, is consistent with the effective mass condition for a dynamical instability in the q → 0 limit. A similar statement can be made in the weak coupling limit (g → 0). Wu and Niu [6] noted from their numerical analysis that one boundary of the dynamically unstable region is given by the condition ε₀(q + k) − ε₀(k) = ε₀(k) − ε₀(k − q), where ε₀(k) is the band energy for the optical potential by itself. In the small-q limit, this condition becomes ∂²ε₀(k)/∂k² = 0. Thus the onset of dynamical instability at q = 0 in the weak coupling limit is again given by the point at which the inverse effective mass goes to zero.

V. CONCLUSIONS

We have studied the long wavelength phonon excitations in a three-dimensional optical lattice. By making use of a systematic expansion of the Bogoliubov equations in terms of the phonon wave vector q, we obtain the phonon dispersion in the long wavelength limit. Our result (40) for the current-free state defines the sound speed in terms of the effective mass m₀ and variations of the chemical potential with n̄, and agrees with the result given by Menotti et al. [17]. The effective mass is defined quite generally in terms of the energy per particle, ϵ(n̄, k), but can also be calculated using the current-free GP Hamiltonian in the k → 0 limit. We present analytic expressions for the sound speed in the Thomas-Fermi, weak potential, weak coupling and tight-binding limits. For the current-carrying case, we rederive the dispersion relation obtained by means of a hydrodynamic analysis [7] (see also [13]). Our approach confirms that the dynamics at long wavelengths is defined by the local energy density e(⟨n⟩, ⟨k⟩) viewed as a function of the slowly varying local density, ⟨n(r)⟩, and local condensate wave vector, ⟨k(r)⟩. At long wavelengths, dynamical instabilities arise at the point where the generalized effective mass tensor has a vanishing eigenvalue.

APPENDIX A

In this Appendix we give a proof of the symmetry property (68) used throughout our analysis. We do this for the one-dimensional case, for which the wave function is a solution of

−(ℏ²/2m) ψ″(x) + V(x)ψ(x) = Eψ(x), (A1)

where the potential is periodic, V(x + d) = V(x), and is assumed to have inversion symmetry, V(−x) = V(x). In the context of the GP equation, V(x) = V_opt(x) + gn_c(x) and the inversion property is valid if the condensate density also satisfies n_c(−x) = n_c(x). This is ensured if the wave function has the property we wish to prove. We seek solutions of the Bloch form, ψ(x + d) = e^{ikd} ψ(x). Due to the inversion symmetry, the linearly independent solutions of (A1) can be chosen to be even (ψ_e) or odd (ψ_o) functions of x, and ψ(x) can be expressed as the linear combination

ψ(x) = a ψ_e(x; E) + b ψ_o(x; E). (A2)

The periodic part of the Bloch function is then w_k(x) = e^{−ikx} ψ(x). The two independent solutions at energy E are chosen to have a fixed normalization. Imposing the Bloch condition, we obtain the relation (A5), where all the functions are evaluated at x = d/2. Since ψ_e and ψ_o are functions of the energy, E, this equation determines the band energy E_k. Clearly E_{−k} = E_k. If E₀ is the band energy at the zone centre (k = 0), we must have either ψ′_e(d/2; E₀) = 0 or ψ_o(d/2; E₀) = 0. The former defines what we shall refer to as an even-parity band, while the latter defines an odd-parity band. The small-k behaviour of E_k is thus readily obtained from these properties. For example, for an odd-parity band we have an expansion of the form E_k = E₀ + O(k²), whose coefficient of k² defines the effective mass of the band.
A similar result applies in the case of the even-parity bands. Once the energy eigenvalue for a given k is known, the coefficients a and b are related by the boundary conditions, Eq. (A7). For a given band, n, the ratio b/a is a continuous function of k. At k = 0 we choose w_{k=0}(x) to be real and assume that it is a parity eigenstate. In this situation, we must have either b(k = 0) = 0 (even-parity bands) or a(k = 0) = 0 (odd-parity bands). The normalization of w_k leads to the expressions a(k) = 1/√(1 + λ²) and b(k) = iλ/√(1 + λ²), with λ(k) real. For an even-parity band λ → 0 as k → 0, so that a(k) → 1 and b(k) → 0. In this case,

w_k(x) = e^{−ikx} [ψ_e(x) + iλ ψ_o(x)]/√(1 + λ²).

Since λ(−k) = −λ(k), we see that w_{−k}(−x) = w_k(x). On the other hand, for an odd-parity band, λ(k) → ∞ as k → 0, and b(k) → 1. As a result, the roles of ψ_e and ψ_o are interchanged, which implies w_{−k}(−x) = −w_k(x). We have thus shown that the Bloch functions have the property

w_{−k}(−x) = ±w_k(x),

where the positive (negative) sign corresponds to the even (odd) parity bands. Together with the conjugation property w*_k(x) = w_{−k}(x), we have w*_k(−x) = ±w_k(x). For an even-parity band, the real and imaginary parts of w_k are

ℜw_k(x) = [cos(kx) ψ_e(x) + λ sin(kx) ψ_o(x)]/√(1 + λ²),
ℑw_k(x) = [−sin(kx) ψ_e(x) + λ cos(kx) ψ_o(x)]/√(1 + λ²).

Thus the real part is an even function of x with the property dℜw_k/dx|_{±d/2} = 0, while the imaginary part is odd and ℑw_k(±d/2) = 0. The opposite is true of an odd-parity band. One can show for an arbitrary k in the lowest band that there is no net change in the phase θ_k(x) = tan⁻¹(ℑw_k(x)/ℜw_k(x)) as x varies between −d/2 and d/2. We make use of this result in Sec. II. The method described above cannot be used in three dimensions, but perturbation theory allows one to infer the same symmetry property. We write the Schrödinger equation for the Bloch function w_k(r) as

(ĥ₀ + δV) w_k(r) = E_k w_k(r), (A14)

where ĥ₀ = −(ℏ²/2m)∇² + ℏ²k²/2m + V(r) and δV = (ℏ/m) k·p, with p = (ℏ/i)∇. The eigenfunctions of ĥ₀, w_n(r), with eigenenergies E_n, are chosen to be parity eigenstates. The state w_k to first order in δV is

w_k = w_n + (ℏ/m) Σ′_{n′} [k·P_{n′n}/(E_n − E_{n′})] w_{n′}, (A15)

where P_{n′n} is the k = 0 momentum matrix element defined in (70). Since this matrix element only couples states with opposite parity, we see that w_{−k}(−r) = ±w_k(r), depending on the parity of the state n. Thus, the Bloch state exhibits the symmetry property to lowest order in k. It is evident that this argument can be extended to higher orders in perturbation theory.

APPENDIX B

In this Appendix we examine the coupling of the phonon and anti-phonon modes of the moving condensate by a weak optical potential. We note that in this case the anti-phonon mode has normalization a_−² − b_−² = −1, in contrast to a_+² − b_+² = +1 for the phonon mode. The degeneracy of the phonon and anti-phonon modes (E_−(q) = E_+(q − G) ≡ E₀) suggests that we seek a solution of the Bogoliubov equations in the form of a superposition of the two modes. Expanding the operator B̂ to first order in the optical potential, we have B̂ = B̂₀ + B̂₁. Here we have written the condensate wave function as Φ_k(x) = √n̄ e^{ikx}(1 + w₁ + ···), where the first order correction is w₁(x) = α_+ e^{iGx} + α_− e^{−iGx}, with the amplitudes α_± given by (B8). Taking the inner product of (B5) with (u*_+, v*_+) and (u*_−, v*_−), and noting the different normalizations of the two modes, we obtain the matrix equation (B9), where the real coupling parameter Δ is given by (B10). A nontrivial solution to the matrix equation is obtained if its determinant vanishes, which yields complex eigenvalues E = E₀ ± i|Δ|. Thus, the line in the k–q plane defined by (128) lies within the region of dynamical instability when V₀ is finite. As emphasized by Wu and Niu [6], the dynamical instability in the weak potential limit arises from a resonant coupling between phonon and anti-phonon modes.
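The statement that the determinant condition yields complex eigenvalues can be seen from the schematic 2×2 structure of the coupled phonon–anti-phonon problem: the opposite normalizations of the two modes put opposite signs on the off-diagonal coupling Δ. The toy calculation below (Python/NumPy) is only a sketch of this sign structure with illustrative numbers, not the full matrix equation (B9).

```python
import numpy as np

E0, Delta = 1.0, 0.05            # degenerate energy, weak-lattice coupling
M = np.array([[E0,     Delta],
              [-Delta, E0   ]])  # indefinite norm flips the sign of one row
print(np.linalg.eigvals(M))      # E0 ± i*Delta -> dynamical instability
```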
2019-04-14T02:03:12.211Z
2003-08-11T00:00:00.000
{ "year": 2003, "sha1": "456cde7889dacdf5e9e176e26f81be5e4b55136f", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/0308194", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "456cde7889dacdf5e9e176e26f81be5e4b55136f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
258588421
pes2o/s2orc
v3-fos-license
Enhancing Gappy Speech Audio Signals with Generative Adversarial Networks

Gaps, dropouts and short clips of corrupted audio are a common problem and particularly annoying when they occur in speech. This paper uses machine learning to regenerate gaps of up to 320ms in an audio speech signal. Audio regeneration is translated into image regeneration by transforming audio into a Mel-spectrogram and using image in-painting to regenerate the gaps. The full Mel-spectrogram is then transferred back to audio using the Parallel-WaveGAN vocoder and integrated into the audio stream. Using a sample of 1,300 spoken audio clips of between 1 and 10 seconds taken from the publicly-available LJSpeech dataset, our results show regeneration of audio gaps in close to real time using GANs with a GPU-equipped system. As expected, the smaller the gap in the audio, the better the quality of the filled gaps. On a gap of 240ms the average mean opinion score (MOS) for the best performing models was 3.737, on a scale of 1 (worst) to 5 (best), which is sufficient for a human to perceive as close to uninterrupted human speech.

I. INTRODUCTION

Spoken audio can suffer from dropouts, gaps and short clips of corrupted data when transmitted over networks, including cellular networks. This paper examines how generative adversarial networks (GANs), a form of machine learning, can enhance the quality of spoken audio by filling such gaps in real time. While there are classical machine learning approaches to enhance the quality of speech audio based on Principal Component Analysis, or others that can clean an audio signal, there is no good approach for real-time gap-filling. Our approach is to transfer audio regeneration into image in-painting by converting gappy audio into Mel-spectrograms, similar to work presented in [22]. We examine data transmission packet loss conditions that produce gaps in audio varying from 40ms to 320ms, simulating a sequence of network packet losses of up to 8 packets. The next section reviews relevant research covering GAN applications and variant architectures, and speech enhancement in noisy domains. Following that we present our experimental setup and then our results, followed by conclusions. This work was partly supported by Science Foundation Ireland (SFI) under Grant Number SFI/12/RC/2289 P2.

A. Speech Enhancement in Noisy Audio

Speech enhancement is an improvement task to the perceptual and aesthetic aspects of a speech signal which has been degraded by noise. This task is performed in mobile communications, hearing aids and robust speech recognition [12], [16], [17]. Even if the minimum required quality to understand what a person is saying is met, speech enhancement is still desirable as it can reduce listener fatigue. The aesthetic enjoyment of listening to speech can be taken away by low fidelity of the speech in audio. Simply increasing the fidelity of the speech signal may also boost the performance of speech-to-text algorithms [14]. Noise in a speech signal may come from a noisy communication channel, or the speech signal may originate in a noisy location. In cases of voice-over-IP transmission, network packet loss is an issue that causes gaps in transmission and reduces the perceptual and aesthetic features of a speech signal. Today, there is still no good solution available to the issue of regenerating gaps in audio signals in real-time communications.
There are several approaches to speech enhancement, including principal component analysis, statistical model-based algorithms, spectral subtraction and Wiener filtering. Recently speech enhancement has been addressed using GANs [12], [16], [17], though those works train on either 462,880 utterances or 224,000 sentences, much greater than what is done here. GANs that work with audio and/or speech enhancement typically use Mel-spectrograms, an image representation of an audio signal as shown later in Figure 1. A Mel-spectrogram captures how humans perceive sound better on lower frequencies compared to higher, and the spectrogram is a visualisation of the frequency composition of a signal over time. Features of this can then be adjusted in order to improve the aesthetic or quality of the regenerated speech audio. The existence of generative deep learning architectures, such as GANs, allows us to address the problem of gappy speech. A GAN's capability to generate from any complex data distribution suggests that a GAN may be trained to regenerate missing audio in real time. The data required to train such a model in a real setting may be collected from a speaker's previous speech, and a model trained to regenerate gappy audio signals for that speaker. As part of a protocol among speakers, speech models could be exchanged that would be used to enhance an incoming speech signal by resolving gaps in communication due to packet loss.

B. Generative Adversarial Networks (GANs)

Generative Adversarial Networks (GANs) are an approach to generative modelling using deep learning first introduced by Goodfellow et al. in 2014 [7]. Generative models allow learning a distribution of data without the need for extensively annotated training data. Based on training data, GANs allow generating new data similar to their training set. GAN architecture is based on game theory, where backpropagation signals are derived through a competitive process. Two neural networks, a Generator (G) and a Discriminator (D), compete with each other. G learns to model distributions of data by trying to deceive D into recognising the generated samples as real [8]. What is particularly useful is that GAN models can be trained to mimic any distribution of data, so there are many practical applications yet to be discovered [1]. The application of GANs was initially limited to image enhancement tasks like producing high-quality images, until about 2017 when the first GAN capable of facial image generation was created. GANs attracted attention and now we see GANs used where synthetic data generation is required, including natural language processing [3], computer vision [2] and audio generation [4]. For some applications, it is difficult to train a GAN using the original GAN architecture, as some generators do not learn the distribution of training data well enough, and so the Deep Convolutional GAN (DCGAN) was proposed in 2015 [18]. In this architecture, instead of fully connected multi-layer perceptron NNs, CNNs were used. The authors in [18] identified a sub-set of CNNs that were suitable for use in the GAN framework. To stabilise the training process, the generator used the ReLU activation function across the layers, except in the final layer, where the Tanh function was used. Some specific constraints on the model identified during the development of the DCGAN laid the foundation of many further GAN architectures based on DCGAN.
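For readers unfamiliar with the adversarial setup described above, a minimal alternating update looks like the following (TensorFlow assumed, since the authors state their implementation uses it; the generator/discriminator models and the non-saturating loss choice are illustrative, not the paper's exact configuration).

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

@tf.function
def train_step(G, D, g_opt, d_opt, real, z):
    """One adversarial update: D learns real-vs-fake, G learns to fool D."""
    with tf.GradientTape() as gt, tf.GradientTape() as dt:
        fake = G(z, training=True)
        d_real = D(real, training=True)
        d_fake = D(fake, training=True)
        d_loss = bce(tf.ones_like(d_real), d_real) + \
                 bce(tf.zeros_like(d_fake), d_fake)
        g_loss = bce(tf.ones_like(d_fake), d_fake)   # non-saturating G loss
    g_opt.apply_gradients(zip(gt.gradient(g_loss, G.trainable_variables),
                              G.trainable_variables))
    d_opt.apply_gradients(zip(dt.gradient(d_loss, D.trainable_variables),
                              D.trainable_variables))
    return g_loss, d_loss
```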
These include the Conditional Generative Adversarial Network (cGAN) [15], which can include labelling; WaveGAN, which is used for audio synthesis [4]; and Parallel WaveGAN [20], also used in audio, which uses auxiliary input features in the form of the Mel-spectrogram. For speech enhancement, the Speech Enhancement GAN (SEGAN) architecture was introduced in [16]. The generator network is used to perform enhancement of the signal, its input being the noisy signal and a latent representation, and its output the enhanced signal. The generator is structured in the same way as an auto-encoder. Encoding involves a number of strided convolutional layers followed by parametric rectified linear units (PReLUs), where the filter is convolved with the input at every N-th step. The discriminator plays the role of expert classifier and conveys whether the distribution is real or fake, and the generator adjusts its weights towards the realistic distribution. As GANs are well developed in the areas of image-to-image translation and image in-painting [9], [11], [19], [22], which are similar to gap regeneration tasks, we propose to transform an audio signal into a Mel-spectrogram and use an image in-painter to fill the image gap. Mel-spectrograms can then be in-painted and transferred back to audio via a neural vocoder, such as Parallel-WaveGAN [20]. We propose to train a model to regenerate the gap at a fixed position at the end of the Mel-spectrogram, which makes the problem simpler for a GAN to tackle, as it would always know where to in-paint the image.

A. Dataset

The public domain LJSpeech dataset [10] is used, which consists of 13,100 single-speaker short audio clips where the speaker reads passages from 7 non-fiction books in English. The entire duration of the clips, which range in length from 1 to 10 seconds, is approximately 24 hours, and the dataset consists of 13,821 distinct words. For our experiments a random subset of 1,300 clips was used, with 1,000 used for training and 300 for testing. Our reason for using a sub-set is to more closely represent a real-world use case where less training data is available for a given voice requiring gap-filling in audio telephony and video conferencing. As mentioned earlier, related work such as [12], [16], [17] trains on either 462,880 utterances or 224,000 sentences.

B. Data Pre-Processing

Before the original 22kHz audio clips were converted via short-time Fourier transform (STFT) into Mel-spectrograms [5], the signal was trimmed at the start and end to remove silence. Thereafter STFT was performed on the audio with a frame length of 1024 points (corresponding to 46ms) and a hop size of 256 points (11ms). STFT magnitudes were raised to the second power to highlight voice pitch and then transformed to Mel scale using 80-channel characteristics. The Mel filterbank frequency range was set to include audio from 80Hz to 7.6kHz. Mel-spectrograms were scaled to have an approximately constant energy per channel, followed by log10 dynamic range compression. They were normalised by subtracting the global mean (µ) of the dataset then dividing by the standard deviation (σ). As a final step, values were normalised to the range [-1,1]. In order to perform normalisation, statistics from the overall dataset were collected. The length of the clips was standardised to 256 frames in the time domain (corresponding to 2.8s).
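A rough reconstruction of this pre-processing pipeline, assuming the librosa library, is sketched below. The trimming threshold, the floor value in the log compression and the per-sample [-1,1] rescaling are assumptions; the paper computes its normalisation statistics over the whole dataset.

```python
import numpy as np
import librosa

def to_melspec(path, sr=22050, n_fft=1024, hop=256, n_mels=80,
               fmin=80, fmax=7600, frames=256):
    """Trim silence, STFT (46 ms frames, 11 ms hop), squared magnitudes,
    80-channel Mel projection, log10 compression, crop to 256 frames."""
    y, _ = librosa.load(path, sr=sr)
    y, _ = librosa.effects.trim(y)
    S = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop)) ** 2
    mel = librosa.feature.melspectrogram(S=S, sr=sr, n_mels=n_mels,
                                         fmin=fmin, fmax=fmax)
    logmel = np.log10(np.maximum(mel, 1e-10))  # floor value is an assumption
    return logmel[:, :frames]                  # ~2.8 s of audio

def normalise(m, mu, sigma):
    """Dataset-level standardisation followed by scaling to [-1, 1]."""
    z = (m - mu) / sigma
    return 2.0 * (z - z.min()) / (z.max() - z.min()) - 1.0
```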
To mimic faulty communications typical of packet-based IP, Mel-spectrograms with audio gaps from 40ms to 320ms were created at the end of the Mel-spectrogram, as the real-time nature of audio communication requires regeneration to be applied as quickly as possible. Thus a trailing audio signal following the gap is not available. The 40ms to 320ms gaps allow mimicking of packet loss of up to 8 packets in a row, with the assumption that audio compression captures 40ms of audio in one packet. Gaps longer than 320ms introduce a risk of generating words that were not said, because the typical word rate for fast speech is up to 160 words per minute [21] (375ms each), so this sets the upper target for our gap-filling. The complete dataset is formed from Mel-spectrogram pairs of source (Mel-spectrogram with a gap) and target (ground truth) images. An example of a training pair is shown in Figure 1. The input to the model is the masked image and the model tries to generate a complete image similar to the ground truth.

C. Model and Loss Function

The starting point for our in-painting was Pix2Pix GAN [9]. This was previously used on multiple image-to-image transition tasks and also used in similar work [22]. In that related work the authors studied the creation of a joint feature space based on synchronised audio and video, where the video consisted of spectrograms from the audio. That work focused on in-painting of the spectrogram to re-generate noisy or corrupted audio, though their experiments were on music audio rather than on speech, which is our focus here. To form a baseline for this work, a standard U-Net-based 5-layer generator presented in Figure 2 was used with L1 pixel-wise loss and input dimensions of 256x256. As part of the Pix2Pix architecture, the Patch-GAN discriminator was used for adversarial loss; it produces a scalar adversarial loss and a mean squared error (MSE) comparison of small patches of an image that form a grid, producing scores from 0 to 1, where each patch is classified as real or fake. Alterations to the standard U-Net architecture were performed. Stride configuration in the CNN layers was adjusted to allow different input dimensions of 125x128 and 256x80 pixels respectively, to closely match our dataset profile. We also used different variants of the loss functions by introducing more advanced loss criteria as proposed in more recent image in-painting work [11], [19]. We replaced the L1 pixel-wise loss with VGG19 feature match loss, using the VGG19 CNN's feature extraction layers to compare generated and ground truth images, and updated gradients in the network based on the comparative MSE error. VGG19 was pre-trained on Imagenet and the VGG19 feature match loss was added to the in-painted segment. The Generative Multi-column Convolutional Neural Network (GMCNN) [19] was used in the same setting as the U-Net-based generator. As shown in Figure 2, GMCNN uses 3 networks in parallel and their output is concatenated at the final layer. We used the Patch-GAN discriminator for adversarial loss and VGG19 feature match loss for the GMCNN generator. Modifications to the GMCNN were performed by introducing batch normalisation layers, as the network was susceptible to the exploding gradient problem, as was discovered during our experiments. The final stage of our pipeline was the Parallel-WaveGAN vocoder to convert Mel-spectrograms into waveforms.
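The combined generator objective described in this section might be assembled as follows (TensorFlow/Keras assumed; the VGG19 layer choice, the LSGAN-style patch loss and the weights w_adv, w_feat and w_gap are illustrative assumptions, not values reported by the authors).

```python
import tensorflow as tf

vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")
feat = tf.keras.Model(vgg.input, vgg.get_layer("block4_conv4").output)

def vgg_features(mel):
    # Mel-spectrograms are [batch, H, W, 1] in [-1, 1]; map to 3-channel 0..255.
    x = tf.image.grayscale_to_rgb((mel + 1.0) * 127.5)
    return feat(tf.keras.applications.vgg19.preprocess_input(x))

def generator_loss(disc_fake, gen, target, mask,
                   w_adv=1.0, w_feat=10.0, w_gap=10.0):
    """Adversarial + whole-image VGG19 feature match + in-painted-area loss."""
    adv = tf.reduce_mean(tf.square(disc_fake - 1.0))      # push patch scores to 1
    feat_mse = tf.reduce_mean(tf.square(vgg_features(gen) -
                                        vgg_features(target)))
    gap_mse = tf.reduce_mean(tf.square((gen - target) * mask))  # mask=1 on gap
    return w_adv * adv + w_feat * feat_mse + w_gap * gap_mse
```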
Typically the Parallel-WaveGAN vocoder is used in the Tacotron 2 text-to-speech (TTS) pipeline [20], where text is converted to Mel-spectrograms and the vocoder generates audio; the vocoder may be used for any Mel-spectrogram-to-audio conversion. The method used to train Parallel WaveGAN does not use any distillation process, thus making the resulting model small and the overall processing fast. A Parallel-WaveGAN model pre-trained on the LJSpeech data was used here to avoid a costly training process. Our implementation was based on TensorFlow and trained on an NVIDIA GTX 1660 Super GPU. Networks were trained using the Adam optimiser with a learning rate set to 1e-4 and batch size set to 1. All data pre-processing, conversion to Mel-spectrograms and dataset matrix multiplications were computed via the TensorFlow API. Model performance during training was recorded with TensorBoard. The implementation of the Parallel-WaveGAN vocoder was based on PyTorch, and weights were fetched from the public git repository at https://github.com/kan-bayashi/ParallelWaveGAN. Initial experimental models were trained for 40 epochs on the subset of 1,300 exemplars, with a fixed learning rate of 1e-4 and beta of 0.5 set in the Adam optimiser. The default gap size was set to 240ms, corresponding to 6 network packets. The gap size was fixed in order to objectively assess different model performances in the same setting, though later we present experiments with variable gap sizes using the best performing model.

D. Evaluation Metrics

We approach evaluation from the image aspect of the Mel-spectrograms and from the audio aspect of the reconstructed WAV audio. Three evaluation metrics are used. As a first measure, we compute the mean squared error (MSE) of the pixels of the reconstructed image vs. the target image. As a second metric, we measure the MSE of the VGG19 CNN feature extraction layers of the Mel-spectrograms and compare ground truth and generated data structures. We favour the VGG19 feature MSE metric over the L1 loss metric as it is more descriptive visually, except in Table IV, which presents results in full. Because an image comparison metric does not clearly indicate how realistic the generated audio actually sounds, a third metric measures the quality of generated audio using the Perceptual Evaluation of Speech Quality (PESQ) [13], which is calculated for each test model. PESQ is a widely used standard for automated assessment of speech in telecommunication systems. It takes 2 audio samples as input and produces a Mean Opinion Score (MOS) from 1 (worst) to 5 (best).

IV. EXPERIMENTAL RESULTS

The first results reported are related to data normalisation. Our first test runs on the U-Net architecture indicate that without normalisation of the Mel-spectrograms, the model fails to learn valid patterns and fails to produce meaningful results. The normalisation techniques described in Section III-B were therefore applied. Evaluation results for the models are summarised in Table I. The baseline approach of in-painting with normalised data shows the algorithm is capable of learning the structure of the Mel-spectrogram and in-painting missing pieces with an MOS of 2.348. However, its performance does not give the required result for a real-life application.
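PESQ scoring of the reconstructed audio can be reproduced with the open-source 'pesq' package, as sketched below; the file names are placeholders, and since PESQ supports only 8 kHz and 16 kHz input, the paper's 22.05 kHz audio is assumed to have been resampled first.

```python
import soundfile as sf
from pesq import pesq   # ITU-T P.862 implementation ('pesq' PyPI package)

ref, sr = sf.read("ground_truth_16k.wav")  # reference clip (placeholder name)
deg, _ = sf.read("regenerated_16k.wav")    # gap-filled clip (placeholder name)

mos = pesq(16000, ref, deg, 'wb')          # wideband mode at 16 kHz
print(f"PESQ MOS: {mos:.3f}")              # scale 1 (worst) to 5 (best)
```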
We tried to match Mel-spectrogram dimensions to be closer to 256x80 pixels by changing the U-Net stride to 1 in the encoder-decoder connecting layers; however, the rapid shrinking of feature maps in the earlier layers caused a performance drop, with MOS falling to 2.138. We identified that, with minimal structural alteration, an input size of 256x128 gave in-painting performance in line with the original 256x256. Thus all subsequent U-Net models had an input size of 256x128. Following recent in-painting approaches such as [11], [19], we identified that newer approaches use more sophisticated loss functions and that loss function alterations may boost performance. Enhancements to our loss function, specifically VGG19 feature match loss for the whole image in addition to L1 loss, were then applied. The in-painted image became closer to the real data distribution: our VGG19 feature match error decreased substantially from 6.056 to 2.896 and MOS increased from 2.348 up to 3.657. We also implemented an idea from [11], adding the loss of the in-painted area to the overall error in order to concentrate the attention of the algorithm more specifically on the in-painted region. That decreased VGG19 loss further to 2.721 and increased MOS to 3.737. To understand whether the length of the Mel-spectrogram plays a role in predicting the masked segment, input size was reduced from 256px (2.8s) to 125px (1.4s) by cutting Mel-spectrograms in half, thus reducing the complexity of the problem as well as the computational cost. Training and testing found that the reduction in data input significantly reduced the performance of the model. The VGG19 feature match score degraded from 6.056 (the baseline) to 9.962, indicating that the baseline algorithm used information from the whole of the Mel-spectrogram. Increasing the dimensionality of the data was also considered, but as that would have added additional computational cost, it was out of scope. In addition to the U-Net generator, we conducted experiments with the GMCNN architecture. The performance of GMCNN after our standard 40 training epochs was disappointing, with a VGG19 feature match loss of 3.402 and a significant drop in MOS to 2.465. In addition, GMCNN has increased computational cost, as the architecture includes 3 networks running in parallel, as shown earlier in Figure 2. To investigate the significance of the masked gap size, we conducted experiments based on the assumption that the algorithm would need to regenerate gaps from 40ms up to 320ms. Thus models were trained for different gap sizes. Results showed that reducing the gap size required less training time to achieve good performance, as seen in Figure 3. An interesting finding was that when Mel-spectrograms were generated with both the 320ms gap model and the 160ms gap model, the error over the first 160ms of the 320ms model's output was the same as that of the 160ms model. This tells us that models perform the same over the same gap window. Our subsequent experiments were carried out on segments with gaps of 320ms. Results also showed that performance degrades linearly: the model regenerates Mel-spectrograms with good confidence at the start, but the further into the time domain, the lower the accuracy, as shown in Table II. We also experimented with training models on variable gap sizes. In the data processing pipeline, random gap selection was performed in the range 40ms to 320ms.
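The random trailing-gap masking described here can be sketched as follows (NumPy assumed); the fill value used for the masked region is an assumption, as the paper does not state it.

```python
import numpy as np

HOP_MS = 256 / 22050 * 1000                    # ~11.6 ms per Mel frame

def mask_trailing_gap(mel, gap_ms=None, rng=None):
    """Mask a trailing gap of 40-320 ms (1-8 lost 40 ms packets) at the
    end of a Mel-spectrogram, returning the masked input and the mask."""
    rng = rng or np.random.default_rng()
    if gap_ms is None:
        gap_ms = 40 * int(rng.integers(1, 9))  # random 40..320 ms
    n = max(1, round(gap_ms / HOP_MS))         # number of frames to mask
    masked = mel.copy()
    masked[:, -n:] = mel.min()                 # fill value is an assumption
    mask = np.zeros_like(mel)
    mask[:, -n:] = 1.0
    return masked, mask

mel = np.random.rand(80, 256)                  # stand-in Mel-spectrogram
masked, mask = mask_trailing_gap(mel, gap_ms=240)
```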
After training the model for the default 40 epochs, the variable-gap models performed significantly worse than the fixed gap models, with performance dropping by 70% compared to the fixed-size models. We then trained the model on a full dataset of 13,000 samples (increasing the step count from 40 × 1,000 to 40 × 13,000). The model still did not perform as well as those with fixed gap sizes: the VGG19 feature loss was 6.785, significantly higher than the 2.721 produced by the fixed gap model, even though trained on a substantially larger dataset. A series of tests of the inference speed of both the Parallel-WaveGAN and the U-Net based generator were performed to identify whether the model is usable in real time. The U-Net based model generates an in-painted Mel-spectrogram in approximately 50ms on a GPU, in line with results presented in [6]. Parallel-WaveGAN converts a Mel-spectrogram to audio in 5ms on a GPU, in line with results presented in [20]. Finally, we examined the worst performing and best performing in-painted Mel-spectrograms, identified by VGG19 feature loss and MOS. A summary of the results is shown in Table III and Figure 4 shows some representative examples. A sample model output comparison may be seen in Table IV, along with the ground truth and the Mel-spectrogram used as its input.

V. CONCLUSIONS

This paper presented a technique that improves a speech signal degraded by the introduction of variable length gaps, which arise frequently in real-time audio telephony and video conferencing. Our key findings are that U-Net-based GANs with a loss function based on VGG19 feature match [19] for Mel-spectrograms from the audio are capable of in-painting gaps in those Mel-spectrograms in near real-time. After transforming an in-painted Mel-spectrogram back to audio via the Parallel-WaveGAN vocoder [20], and following the use of an enhanced U-Net generator with a more advanced loss function similar to those in [11], [19], we generated audio fragments that are structurally similar to the real distribution, with a MOS from 3.214 for gaps of 320ms up to a MOS of 4.514 for gaps of 40ms. The total time taken to regenerate a gap is approximately 105ms on a GPU, an acceptable performance for real-time communications. For larger regenerated gaps our model is capable of almost exactly regenerating the missing area in the Mel-spectrogram. The model uses information from all of the Mel-spectrogram, as reducing the size of the input Mel-spectrogram leads to a large drop in performance. We found that fixed gap size models are capable of learning distributions from smaller datasets, as the complexity of the problem is reduced, and that the most efficient way to address variable gap sizes is to train a model capable of filling large gaps and use it for all gap sizes. The performance of such an approach is similar to that of models trained on smaller gap sizes. We conclude that it is possible to use our in-painter-vocoder pipeline to regenerate audio gaps in real time on systems equipped with a GPU, and that the result can be perceived by humans as good quality. Further work should identify whether there are reduced-size models, similar to SD-UNET [6], that could perform well enough on CPU-only systems.
2023-05-11T01:16:23.971Z
2023-05-09T00:00:00.000
{ "year": 2023, "sha1": "3292bb0f5d1fd94bb9bc4313eb32fcc33c3c4896", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "3292bb0f5d1fd94bb9bc4313eb32fcc33c3c4896", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Engineering" ] }
221239058
pes2o/s2orc
v3-fos-license
DNA Footprints: Using Parasites to Detect Elusive Animals, Proof of Principle in Hedgehogs

Simple Summary
Nocturnal and elusive animals are notoriously difficult to count; hedgehogs are a prime example. Therefore, any reliable way to demonstrate the presence of a particular animal, within a given area, would be a valuable addition to many ecologists' tool kits. The proposed method is based upon the idea that you can find a parasite specific to a vertebrate animal of interest that has a life stage within an invertebrate host. Molecular detection of these parasites is then carried out in the more abundant and easily collected invertebrate intermediate host. The key to this proposed method is the specificity of the parasite to the vertebrate animal and its detection in the invertebrate intermediate hosts. Crenosoma striatum is specific to hedgehogs and was chosen as the parasite with which to develop the molecular survey tool for hedgehogs, an elusive nocturnal species of considerable interest at present. Results revealed the presence of the nematode only at a site known to be inhabited by hedgehogs, confirming the potential of this method to improve the accuracy of recording hedgehog populations.

Abstract
The Western European Hedgehog (Erinaceus europaeus) is a nocturnal animal that is in decline in much of Europe, but the monitoring of this species is subjective, prone to error, and an inadequate basis for estimating population trends. Here, we report the use of Crenosoma striatum, a parasitic nematode specific to hedgehogs as definitive hosts, to detect hedgehog presence in the natural environment. This is achieved through collecting and sampling the parasites within their intermediate hosts, gastropods, a group much simpler to locate and sample in both urban and rural habitats. C. striatum and Crenosoma vulpis were collected post-mortem from the lungs of hedgehogs and foxes, respectively. Slugs were collected in two sessions, during spring and autumn, from Skomer Island (n = 21), which is known to be free of hedgehogs (and foxes), and Pennard, Swansea (n = 42), known to have a healthy hedgehog population. The second internal transcribed spacer of parasite ribosomal DNA was used to develop a highly specific, novel, PCR based multiplex assay. Crenosoma striatum was found only at the site known to be inhabited by hedgehogs, at an average prevalence in gastropods of 10% in spring and autumn. The molecular test was highly specific: one mollusc was positive for both C. striatum and C. vulpis, and differentiation between the two nematode species was clear. This study demonstrates proof of principle for using detection of specific parasite DNA in easily sampled intermediate hosts to confirm the presence of an elusive nocturnal definitive host species. The approach has great potential as an adaptable, objective tool to supplement and support existing ecological survey methods.

Introduction
Objective methods for monitoring wild animals are needed to support management efforts, but are rarely straightforward, especially for elusive and nocturnal species. A complete census is usually impossible, and surveys more often rely on observations of individuals and indirect evidence of their presence, such as faecal counts or tracks [1]. With regards to elusive nocturnal animals specifically, even detection can be difficult, as exemplified by carnivore species that are widely dispersed, solitary and nocturnal [1][2][3].
Locating even the largest of terrestrial mammals, for example, the African forest elephant, can be a difficult task fraught with contestable results [4]. Western European Hedgehogs (Erinaceus europaeus Linnaeus, 1758) are classified as a species of least concern [5]; however, there is strong evidence of a recent decline in numbers across mainland Europe and in the UK [6][7][8][9][10]. Estimates suggest a reduction in UK populations within the range of 5-7% in the last 50 years [11], with one study suggesting a potential 25% reduction over the last decade [12]. Current survey methods rely on physical sightings and subjective evidence, such as scats (faecal deposits), tracks and carcases from road deaths, to determine the presence of hedgehogs [13][14][15][16]. Given the difficulties in sighting and correctly monitoring nocturnal animals, such as hedgehogs, there is a need to develop a wider panel of objective, evidence-based survey methods to supplement and confirm the findings of those currently used [17]. The use of parasites to monitor host populations has long been employed in the aquatic environment for fish populations [18][19][20][21], and more recently to quantify the presence of the elusive diamondback terrapin [22]. The use of parasites and their DNA as biological markers, however, remains underdeveloped in terrestrial environments. The parasitic nematode Crenosoma striatum is a lungworm highly specific to hedgehogs [23][24][25][26][27][28], and common in most populations. In a study of 74 dissected hedgehogs in the UK, 71% were found to be infected with C. striatum [29]. While hedgehogs are the sole definitive hosts for C. striatum, the available intermediate host range is much wider. Experimental infections comprising species from several gastropod (slug and snail) families of the orders Stylommatophora and Hygrophila [26,30] suggest a large number of potential hosts in hedgehog environments. Terrestrial molluscs are an integral part of many ecosystems and can be found across a diverse range of habitats throughout the British Isles [31][32][33]. It is here proposed that a polymerase chain reaction (PCR) based test could be used to rapidly and effectively determine the presence of C. striatum in local slug and snail populations, thereby indicating the presence or absence of hedgehogs within a given geographical area. If effective, this test would greatly facilitate monitoring of hedgehog distribution, and could potentially be adapted and developed for use in the monitoring of other species of interest. In the present study, this approach is evaluated by first devising a PCR assay specific for C. striatum, and then comparing results from areas of known hedgehog presence.

Isolation of DNA from Nematodes for Molecular Test Development

Adult worms of C. striatum were collected from the lungs of hedgehogs post mortem, and identified morphologically [29]. Crenosoma vulpis, a closely related species, collected from the lungs of red foxes (Vulpes vulpes) post mortem, was also used. DNA was extracted using the DNeasy Blood and Tissue Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions, except that adult worms were initially ground in ATL buffer using a microfuge pestle. DNA was eluted in 100 µL and stored at −20 °C prior to analysis.
Primer Design and Multiplex Assay Development The second internal transcribed spacer (ITS-2) of ribosomal DNA (rDNA) was chosen as the primary region of interest for primer design, due to its successful use in distinguishing between closely related nematodes in numerous previous studies [34-40]. To obtain sequence information for primer design, primer sequences NC1 and NC2 (Table 1, from Gasser et al. [41]) were used to amplify the ITS-2 region of selected parasite DNA for sequencing. PCR conditions were optimised to achieve a single band of the expected size on an agarose gel. Specific products were purified by mini-column (Qiagen) and sequenced in both directions (Eurofins). Sequences obtained were aligned using the ClustalW function in BioEdit software [42], and a consensus sequence established for each species. Sequences from C. striatum (n = 2) and C. vulpis (n = 2) were compared with each other and with sequences from Angiostrongylus vasorum (a metastrongylid nematode using gastropod intermediate hosts and common in the study area [43]) and Aelurostrongylus abstrusus (a metastrongylid feline lungworm also using gastropod intermediate hosts). This was done to find suitable regions for the design of primers that would allow species differentiation by sequence and PCR product size (as illustrated in Supplementary Figure S1). The ITS-2 sequences of Crenosoma spp. were submitted to GenBank with accession numbers MT808322 to MT808325. Primers were designed using Oligo6 (Molecular Biology Insights, Colorado Springs, CO, USA) to uniquely amplify a 157 bp region of C. striatum ITS-2 (C.St) and a 207 bp region of C. vulpis ITS-2 (C.Vu) (Table 1). Primers were checked with the NCBI basic local alignment search tool (BLAST) for species specificity. An independent pair of primers for the amplification of a 710 bp fragment of the invertebrate mitochondrial cytochrome c oxidase subunit I gene (COX1) was selected [44] (henceforth termed COI) as a control to verify that DNA could be amplified from each sample. PCR conditions were optimised for both individual and multiplexed PCRs. PCRs were performed in a volume of 15 µL including 2 µL of template DNA, 2.5 mM MgCl2, 0.2 mM dNTPs (Thermo Fisher, Loughborough, UK), 0.025 U/µL GoTaq® Flexi polymerase with 1× buffer (Promega, Southampton, UK), and 1× primer mix. The 10× primer mixes were: COI, 10 mM each primer; optimised multiplex, 5 mM each C.St primer and 3 mM each C.Vu primer. The PCRs were carried out on a Biorad T100 Thermal Cycler using a touchdown profile, consisting of an initial denaturation at 95 °C for 3 min, followed by nine cycles of 94 °C for 30 s, 65 °C (decreasing by 1 °C per cycle) for 20 s and 72 °C extension, then 33 cycles of 94 °C for 30 s, 55 °C for 20 s and 72 °C extension. Extension at 72 °C was for 30 s for the multiplex PCR and 1 min for the COI PCR. The final extension was 10 min at 72 °C. PCR products were examined on 1% agarose gels stained with GelRed™ (Biotium Inc., Fremont, CA, USA). The multiplex PCR was initially checked for analytical specificity by testing against a species panel of DNA isolated from morphologically identified adult lungworms, and confirmed to be diagnostic for C.St and C.Vu (see Supplementary Figure S2). For PCR testing of slug DNA, an initial control COI PCR was performed prior to the test C.St-C.Vu multiplex; this was negative for some samples, mostly from Arion ater slugs, whose DNA extracts appeared tinged with a dark colour.
For these, 2 µL of genomic DNA was examined on an agarose gel, and the presence of high molecular weight DNA in the extraction was confirmed. Attempts were made to re-purify the DNA to negate the effects of inhibitors. For most samples, PCR was successful with the addition of PCRboost® (Biomatrica, San Diego, CA, USA). Multiplex PCRs were carried out under the same conditions for these samples. PCRs were repeated twice to verify an amplification (test positivity). Test-negative PCRs were scored only for samples with a positive result in the control invertebrate COI PCR. The results of the C.St-C.Vu multiplex on COI-positive samples were analysed using an exact binomial test. Slug Samples In order to demonstrate the correlation between C. striatum incidence and the presence of hedgehogs, slugs were collected in autumn from Skomer Island, covering an area of approximately 160 ha, and in both spring and autumn in Pennard, covering an area of 0.36 ha; both areas are in south-west Wales, UK. There are no known reports of hedgehogs (or foxes) on Skomer Island (personal communication with Mark Hodgson, Wildlife Trust South West Wales), whereas Pennard is an area with an abundant local hedgehog population; more than 180 individuals from this particular region were admitted to Gower Bird Hospital wildlife rehabilitation centre between 2001 and 2017. The slugs collected were identified morphologically by BR and then stored at −20 °C before processing. The posterior foot section of each slug was removed and macerated prior to tissue lysis. Gastropod DNA Extraction Genomic DNA from 80 slugs was extracted from slug tissue using DNeasy Blood and Tissue Kits (Qiagen, Hilden, Germany) employing a Maxwell® 16 MDx Research System (Promega, Madison, WI, USA) as recommended by the manufacturers. Any undigested tissue and pigment from the larger Arion ater specimens were removed by centrifugation before spin column purification. DNA was eluted in 100 µL and stored at −20 °C prior to further analysis. Sample Size Calculator The number of slugs required to be sampled to provide a reliable indicator of the absence of C. striatum infection, and hence the absence of hedgehogs, was simulated using the binomial distribution. Thus, the required sample size was defined as that yielding a <0.05 probability of zero successes (=detected infections) at a given above-zero true prevalence (p. 64, [45]). This is the sample size needed to avoid a type II error, i.e., falsely declaring the absence of C. striatum when it is actually present, at p = 0.05. Results Out of the 80 slugs, 17 were excluded (Table 2) due to a negative COI result. Slug samples from Pennard collected in spring (n = 20) and autumn (n = 22) represented nine species. Overall, the prevalence of C. striatum in this sample set was 10% (95% exact binomial confidence bounds 3-23%). Species infected with C. striatum were Arion subfuscus (spring; n = 1), Arion ater agg. (autumn; n = 1) and Tandonia sowerbyi (autumn; n = 2). Additionally, the A. ater agg. individual was concurrently infected with C. vulpis, confirming the specificity of the assay, with no cross-species amplification. The Skomer slug samples collected in autumn (n = 21) comprised two species: A. ater and Lehmannia marginata. Neither C. striatum nor C. vulpis was detected in any of these samples. Results of the sample size simulation are presented in Figure 1.
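The closed form behind this simulation is immediate: the probability of detecting no infected slugs among n sampled at true prevalence p is (1 − p)^n, so the required sample size is the smallest n with (1 − p)^n < 0.05. A minimal Python sketch (the function name is ours, for illustration) reproduces the values quoted below and plotted in Figure 1:

from math import ceil, log

def required_sample_size(prevalence: float, alpha: float = 0.05) -> int:
    """Smallest n such that P(zero detected infections) = (1 - prevalence)^n < alpha."""
    return ceil(log(alpha) / log(1.0 - prevalence))

for p in (0.01, 0.10, 0.25, 0.50, 0.75):
    print(f"prevalence {p:.0%}: n = {required_sample_size(p)}")
# prevalence 1%: n = 299
# prevalence 10%: n = 29
# prevalence 25%: n = 11
# prevalence 50%: n = 5
# prevalence 75%: n = 3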
At the 10% prevalence observed in this study, a sample of 29 slugs would be needed to reasonably (at p = 0.05) avoid a false negative, i.e., erroneously concluding that infection is absent. The number of slugs needed would rise at lower prevalence, and fall at higher prevalence. Figure 1. The sample size (=number of slugs) required to detect at least one infected slug, given true prevalence from 1% (n = 299) to 25% (n = 11). Higher prevalence omitted for clarity: n declines further to 5 (at 50% prevalence) and 3 (at 75%). Discussion This study demonstrates the use of a multiplex test for Crenosoma species, which can accurately identify and discriminate between the closely related species C. striatum and C. vulpis from slug tissues. The fact that no C. striatum was detected in the Skomer sample set indicates the potential of C. striatum as an indicator species for the presence of elusive hedgehogs in any given locale. Furthermore, the sensitivity of the assay suggests that other parasites highly specific to host species of interest could be used in this way for monitoring and surveillance, for instance as part of management programmes for endangered or invasive species [4,46]. Direct detection of environmental DNA also has potential for the monitoring of elusive species [47,48]. Detection of host-specific parasites within intermediate hosts, as proposed here, has the advantages of focused sampling and potentially longer persistence of DNA in the form of living immature parasite stages. The methodology described here may need refinement in terms of sample preparation. Some parasites have a preferred site within their host; for instance, Angiostrongylus vasorum occupies the right ventricle and pulmonary arteries in its vertebrate hosts [49], whilst C. striatum prefers the bronchioles and bronchi of the lungs [26]. The affinity of these parasites to particular sites within the host may extend to the intermediate host, such that sub-sampling of tissue could bias results and affect method sensitivity. Further research needs to be carried out to determine whether C. striatum has a predilection site in slugs, to increase the efficacy of detection in slug tissue. To increase the chances of detecting a parasitised slug, species that have been active the longest, and have therefore had the greatest opportunity to acquire parasite infections, should in principle be targeted for sampling. For example, A. subfuscus activity has been seen to peak between May and June with little between-year deviation [50], making it an ideal candidate for spring and summer sampling. The present study found A. subfuscus to be the only species with a positive C. striatum result in spring sampling. Similarly, A. ater and T. sowerbyi would be of major interest in autumn and winter sampling, with their peak activity being in January and between August and October, respectively [50]. Arion ater may be of particular interest in future research, as it was the only species that presented simultaneous infection with both C. striatum and C. vulpis. Additionally, the detection of C. striatum in A. ater, A. subfuscus and T. sowerbyi appears to be the first confirmed report of infection in these species [30]. This suggests that the potential intermediate host range of C. striatum could be much greater than previously thought.
Extensions to the present study could further develop the test for hedgehog monitoring through targeting particular slug species and anatomical sites, and by matching the target sample size to the expected prevalence and required precision.
The number and cost of PCR assays performed per geographical site could also be reduced by pooling samples from different slugs. These refinements require validation and could establish whether parasite abundance in slugs is related to hedgehog population density, which, if found to be the case, would enhance the method's utility as a monitoring tool. Regardless of this relationship, however, the results here suggest that the presence or absence of C. striatum correlates, as expected, with that of its hedgehog definitive host, and can therefore be used as a robust indirect indicator of hedgehog presence. The number of slugs that must be sampled in order to reasonably exclude the presence of C. striatum depends on the underlying prevalence, which is unlikely to be known in a newly surveyed site. Further information on the range of prevalence of C. striatum infection in gastropods in areas inhabited by hedgehogs would, therefore, be useful to evaluate the feasibility and efficiency of the present approach across the species range. The approach presented here could be extended to other systems where highly host-specific parasites are present at reasonably high prevalence, distinguishable from closely related species, and accessible, for example in easily sampled intermediate hosts. The most fundamental of these factors is host specificity. Host specificity is often under- or over-estimated for parasitic species [51], and parasite-host interactions are rarely well understood in wild animals [52]. Most parasites can infect multiple host species [53-55], albeit to a highly varied extent [56], rendering most unsuitable for host population studies. Helminths, however, often demonstrate high host-specificity, with nearly 50% of those reported in one study of primates inhabiting a single host species [54]. The sensitivity of the assay presented herein demonstrates that quick and accurate delineation between closely related parasite species can be achieved. It is entirely possible that this methodology could be adapted to other vertebrate species of conservation concern, wherever a suitable parasite species can be identified. To date, only a small number of parasites with singular definitive hosts have been described; Table 3 provides examples of such species. It may be the case that host-specific helminths occur commonly; however, further research is needed in order to clarify this. Furthermore, taxonomic revision frequently leads to a reassessment of host specificity: for example, many nematodes found in amphibia had previously been identified as Rhabdias rana; molecular analysis later demonstrated historical misidentification [57], and new species were described as a result. Therefore, it is quite possible that many parasitic species identified before the modern molecular biology era may have been incorrectly described, increasing the possibility of detecting species-specific and molecularly distinct parasites with potential as indicators of host presence. In addition to taxonomy, ecological factors determine the realisation of potential host range, and are changing in many systems [58]. Shifts in prevalence and host range might have to be taken into account during parasite-based monitoring programmes, and at the same time can provide additional information on host ecology and infection patterns. Further improvements could be made through development as a loop-mediated isothermal amplification (LAMP-PCR) assay, using similar methodology to that previously described [59,60].
This has potential for a test which could be used in a field setting: Feng et al. [59] found that the LAMP-PCR method had lower, but adequate, sensitivity for the specific detection of cestode DNA as compared to multiplex PCR, while Abbasi et al. [60] demonstrated 10-fold increased sensitivity over PCR for the detection of Schistosoma spp. in infected snails. (In Table 3, * denotes parasite species for which the definitive host is geographically isolated from other host species.) Conclusions We conclude that proof of principle has been demonstrated for using terrestrial parasite DNA to confirm the presence of hedgehogs in a given locale. PCR tests can be used to effectively detect and delineate isolates of C. striatum and C. vulpis from gastropod samples. A critical assessment of different slug tissue and nematode extraction methods, and of epidemiological factors, is necessary for the improvement and development of the method described here. This method could provide significant support for monitoring and conservation efforts in hedgehogs, and could pave the way for similar methods to be employed for the monitoring of other terrestrial species whose conservation is of concern. Supplementary Materials: The following are available online at http://www.mdpi.com/2076-2615/10/8/1420/s1, Figure S1: Sequence alignment of ITS-2 sequences showing positions of discriminatory primers resulting in specific PCR products differing in length by 50 bp. Figure S2: PCR of extracted nematode DNA with CS/CV multiplexed primer set illustrating specific amplification.
Chemicals in Personal Care Products Tied to Early Puberty in Girls AIDA PETCA1,2, MIHAELA BOT1,2, RAZVAN COSMIN PETCA1,3*, CLAUDIA MEHEDINTU1,4*, RAMONA ILEANA BARAC1, MADALINA ILIESCU2, NICOLETA MARU1, BOGDAN MASTALIER1,5 1Carol Davila University of Medicine and Pharmacy, 8 Eroii Sanitari Blvd., 050474, Bucharest, Romania 2Elias Emergency Hospital, 17 Marasti Blvd., 011461, Bucharest, Romania 3Prof. Th. Burghele Clinical Hospital, 20 Panduri Str., 050653, Bucharest, Romania 4Malaxa Clinical Hospital, 12 Vergului Str., 022441, Bucharest, Romania 5Colentina Clinical Hospital, 19-21 Stefan cel Mare Blvd., 020125, Bucharest, Romania Puberty is defined as a complex developmental process that makes the transition from childhood to adolescence. It consists of the appearance of secondary sexual characteristics, behavioral changes, accelerated growth and, ultimately, reproductive capacity [1,2]. Puberty is marked by maturation of the hypothalamic-pituitary-gonadal axis, which is responsible for the increased levels of hypothalamic gonadotropin-releasing hormone (GnRH). GnRH leads to a rise in pulsatile secretion of luteinizing hormone (LH) and follicle-stimulating hormone (FSH) from the anterior pituitary gland. Gonadotropin release is a trigger for the gonads, the final result being the onset of ovulatory menstrual cycles [1,3,4]. The available studies show a progressive decrease in the age of reaching puberty, specifically the onset of menarche and breast development [5,6]. Precocious puberty consists of the onset of menarche before 9 years of age, or the appearance of secondary sex characteristics before 8 years of age [2]. The prevalence of precocious puberty is 1 in 5000 children, and girls are 10 times more affected than boys [2]. An earlier onset of menarche has been associated with adverse health and social outcomes, such as shorter adult stature, an increased risk of type 2 diabetes, adult-onset asthma and cardiovascular disease, and an increased risk of breast cancer and reproductive tract cancers [7-10]. Precocious puberty is also associated with many psychosocial disturbances, such as an increased incidence of depression, withdrawal and internalizing disorders [2,11]. A study comparing a group of girls with early onset of menarche with those who had menarche after 11 years of age showed major differences at ages 13 and 15, reporting many more episodes of rule-breaking at home, at school and during leisure time among the early maturing girls. The group of girls with precocious puberty also showed more school discipline problems, school fatigue and an earlier sexual debut, with a greater incidence of abortions by the age of 16 years [11]. The two main hypotheses for the current trend towards earlier menarche are the increasing prevalence of childhood obesity and increasing environmental exposure to endocrine disruptor chemicals (EDCs) in household and personal care products [7,8,12]. Endocrine disruptor chemicals (EDCs) EDCs are synthetic or natural environmental chemicals which are introduced into the human body through foodstuffs, water and air. They can also be transferred from the mother to the baby via breast milk and to the fetus via the placenta [6,7,13]. These chemicals are widespread in personal care products, household products and household cleaners, leading to high exposures in the population through everyday behaviors and daily activities [7,14]. The mechanisms of action of endocrine disruptors can be explained by their hormone-like characteristics.
The endocrine function and development are affected by these chemicals in an agonist- or antagonist-specific manner [6,7]. They affect puberty through their androgenic, anti-androgenic, estrogenic or anti-estrogenic effects. Endocrine disruptors also have direct effects on gonadotropin-releasing hormone (GnRH). The estrogenic effects may be exerted either directly, by binding to estrogen receptors, leading to an increase of aromatase activity and finally increasing estrogen sensitivity, or indirectly, raising endogenous estrogen production by influencing GnRH. The final result of all of these mechanisms is precocious puberty [6]. Several examples of chemicals which are known to disrupt estrogen receptor signaling in vitro and in animal studies are parabens, triclosan, dichlorophenols and certain benzophenones. These compounds exert their action by modulating the downstream signaling processes, or by binding directly to the receptor itself [7]. The androgenic and anti-estrogenic effects of the endocrine disruptors are exerted through inhibition of steroidogenic enzyme production and aromatase enzyme activity. The anti-androgenic effects of these chemicals may be explained by the suppression of testicular steroidogenesis and androgen-receptor blockade. Because of these multiple mechanisms of action, endocrine disruptors not only lead to precocious puberty, but also interfere with delayed puberty and with many sexual differentiation disorders [6]. Phthalates and bisphenol A are examples of compounds which have a demonstrated role in disrupting androgen-dependent processes. Additionally, bisphenol A is involved in both estrogenic and anti-androgenic responses [7]. Phenols Phenols are endocrine disruptors commonly found in many personal care products. An important phenol is triclosan, which is still used in toothpaste, although it was banned from antibacterial soap in the US [15]. By measuring the urinary concentrations of triclosan and 2,4-dichlorophenol in pregnant women, it was demonstrated that in-utero exposure to these compounds is associated with earlier menarche in girls [16]. Also, a study measuring the urinary concentration of triclosan in pregnant women showed that for every doubling in concentration, the menarche of their daughters occurred one month earlier [15]. In addition, peripubertal exposure to triclosan is linked not only to earlier menarche, but also to earlier breast development in girls [8,16,17]. Bisphenol A (BPA) is another phenol associated with precocious puberty in girls, which is often used to make household products [4,19]. Animal studies have demonstrated estrogenic properties for BPA, this compound being associated with earlier onset of puberty in female rats, leading to precocious vaginal opening [12,20,21]. Human studies found a correlation between BPA and earlier age at menarche and thelarche in girls [12,22-24]. Some studies describe contradictory results regarding the effect of phenols on pubertal timing in girls: for example, benzophenone-3, which was linked to later thelarche and menarche, and enterolactone, a phenol which leads to later menarche [7,16-18]. Phthalates Phthalates are also used in personal care products, as softeners [25-27]. Three of the most used phthalates are di-n-butyl phthalate (DnBP) and di-iso-butyl phthalate (DiBP), which are more frequently found in cosmetics and nail polish, and diethyl phthalate (DEP), which is used in perfumes, shampoo, deodorants and soaps [28].
Human studies have shown a relationship between phthalates, allergic response and behavior changes in children. Meanwhile, in vitro and animal studies have demonstrated the estrogenic and anti-androgenic effects of phthalates [29]. Phthalate plasticizers lead to early puberty in female rats, affecting the female reproductive system through a weak estrogenic effect [22]. A recent study in female rats showed a correlation between an earlier onset of puberty and neonatal and prepubertal exposure to dibutyl phthalate [30]. Another animal study found earlier ovarian development and estrous in female rats which had been exposed in utero to di(2-ethylhexyl) phthalate (DEHP) [12,31,32]. Human studies describe a link between phthalate exposure and early onset of puberty in girls. For example, a study of Puerto Rican girls showed a correlation between premature breast development and phthalate exposure, the most prevalent phthalate being di-2-ethylhexyl phthalate (DEHP) [33]. There is also evidence of high blood levels of DEHP in girls with precocious puberty [12]. Another study assessed the impact of phthalate exposure during in utero development and the peripubertal period on serum concentrations of sex hormones and on the timing of sexual maturation. It was demonstrated that in utero exposure to some phthalates (DEHP and butylbenzyl phthalate) can lead to premature onset of puberty and adrenarche. Urinary phthalate metabolites were measured among mothers during their third trimester of pregnancy and among their girls at 8-13 years of age. It was found that in utero exposure to DEHP leads to increased concentrations of dehydroepiandrosterone sulfate (DHEA-S), which is an important precursor to pubarche [30]. In addition, there is evidence that each doubling of the urinary concentration of a phthalate indicator in pregnant women leads to a 1.3-month earlier onset of pubarche in their daughters [15]. In contrast, another study found that urinary concentrations of high-molecular-weight phthalate (high-MWP) metabolites, including di(2-ethylhexyl) phthalate (DEHP), are linked to later pubarche [34]. A correlation has been shown between monoethyl phthalate (MEP) in pregnant women and an earlier onset of pubarche in their daughters [14]. Studying the effect of peripubertal exposure to MEP in overweight or obese girls, it was demonstrated that this compound leads to earlier menarche [8,14]. On the other hand, exposure to another compound, diethyl phthalate (DEP), was linked to earlier onset of pubic hair and breast development [35]. In a case-control study comparing girls with thelarche with controls, detectable serum levels of phthalates were observed in two-thirds of the cases but only in 14% of the controls [36,37]. Parabens Parabens are esters of p-hydroxybenzoic acid, often found in cosmetics, foods and pharmaceuticals as antimicrobial preservatives [38]. Types of exposure to parabens include dermal contact, inhalation and ingestion [38]. Although a great many compounds have been described, two of the most widely used parabens are methyl paraben (MP) and propyl paraben (PP) [38]. Parabens have been linked to endocrine disruption and reproductive toxicity, but concentrations up to 0.8% in mixtures, or up to 0.4% if used alone, are considered safe as cosmetic ingredients [38]. It is known that parabens have estrogenic effects in vitro but also in vivo, due to their uterotrophic effects, the estrogenicity increasing with side chain length [39].
Parabens may enhance estrogen effects and have been associated with breast cancer etiology [25,40]. This is possible because of the elevation of free estradiol levels through inhibition of sulfotransferase enzymes (SULTs), a finding which explains how parabens can inhibit the sulfation of estrogens [39]. Parabens can be excreted in the urine as intact esters, but also as a conjugated form of p-hydroxybenzoic acid, a nonspecific metabolite of all parabens. The concentrations of all (both free and conjugated) urinary forms of the parent parabens are considered valid human exposure biomarkers [38]. It has been shown that peripubertal exposure to methyl paraben is associated with an earlier onset of pubarche, menarche and thelarche, while propyl paraben is only associated with earlier pubarche [16]. Meanwhile, other studies have shown no association between peripubertal exposure to parabens and earlier onset of puberty [7,16-18]. For example, a study including female participants 12-16 years of age showed no relationship between total paraben exposure and the age of menarche [7]. Other compounds A correlation has been demonstrated between exposure to polybrominated biphenyls (PBBs) in pregnant women and earlier menarche in their daughters, whereas no association with thelarche was found [41]. In addition, another study found no correlation between exposure to PBB and breast development, but an earlier onset of pubarche and menarche was observed in girls exposed to high levels of this compound in utero or through breastfeeding [42,43]. It has been shown that prenatal exposure to diethylstilbestrol (DES), a synthetic estrogen, elevates the risk of early menarche [41,44]. Conclusions The increasing prevalence of precocious puberty can be explained by exposure to different chemicals frequently used in personal care products, such as parabens, phenols and phthalates. Phenols have differing effects on pubertal timing in girls: in-utero exposure to triclosan and 2,4-dichlorophenol leads to earlier menarche in the offspring, while peripubertal exposure to triclosan has been linked with earlier thelarche. In contrast, some phenols are linked to later menarche and thelarche, such as benzophenone-3, or only to a later age at menarche, such as enterolactone. Phthalates also play an important role in precocious puberty in girls. Exposure of pregnant women to monoethyl phthalate, 2-ethylhexyl phthalate and butylbenzyl phthalate leads to earlier menarche and pubarche in their daughters, and there is some evidence of later pubarche linked to high-molecular-weight phthalate metabolites such as di(2-ethylhexyl) phthalate. There are contradictory results regarding the role of parabens in precocious puberty in girls. Propyl paraben has been associated with earlier pubarche, while methyl paraben has been linked not only with earlier development of pubic hair, but also with earlier age at menarche and precocious breast development. Prenatal exposure to other chemicals such as diethylstilbestrol is also associated with earlier menarche in girls, while polybrominated biphenyls can lead not only to precocious onset of menarche, but also of pubarche, in the case of in-utero exposure or transfer through breastfeeding. The effects of these chemicals are modulated by genetic predisposition and by the timing of exposure.
Although these compounds are widespread in many products, consumers, and especially pregnant women, should avoid them in order to prevent adverse outcomes on pubertal timing.
`Project&Excite' Modules for Segmentation of Volumetric Medical Scans Fully Convolutional Neural Networks (F-CNNs) achieve state-of-the-art performance for image segmentation in medical imaging. Recently, squeeze and excitation (SE) modules and variations thereof have been introduced to recalibrate feature maps channel- and spatial-wise, which can boost performance while only minimally increasing model complexity. So far, the development of SE has focused on 2D images. In this paper, we propose `Project&Excite' (PE) modules that build upon the ideas of SE and extend them to operate on 3D volumetric images. `Project&Excite' does not perform global average pooling, but squeezes feature maps along different slices of a tensor separately to retain more spatial information that is subsequently used in the excitation step. We demonstrate that PE modules can be easily integrated into 3D U-Net, boosting performance by 5% Dice points, while only increasing the model complexity by 2%. We evaluate the PE module on two challenging tasks, whole-brain segmentation of MRI scans and whole-body segmentation of CT scans. Code: https://github.com/ai-med/squeeze_and_excitation Introduction Fully convolutional neural networks (F-CNNs) have been widely adopted for semantic image segmentation in computer vision [4] and medical imaging [5]. As computer vision tasks mainly deal with 2D natural images, most of the architectural innovations have focused on 2D CNNs. These innovations are often not applicable for processing volumetric medical scans like CT, MRI and PET. For segmentation, 2D F-CNNs have been used to segment 3D medical scans slice-wise. In such an approach, the contextual information from adjacent slices remains unexplored, which might lead to imperfect segmentations, especially if the target class is small. Hence, the natural choice for segmenting 3D scans would be to use 3D F-CNN architectures. However, there exist some practical challenges in using 3D F-CNNs: (i) 3D F-CNNs require a large amount of GPU RAM for training, and (ii) the number of weight parameters is much higher than for their 2D counterparts, which can make the models prone to over-fitting with limited training data. Although the first issue can be effectively addressed using recent GPU clusters, the second issue still remains. This problem is prominent in medical applications, where training data is commonly very limited. Most datasets often contain only about 15-20 annotated training scans. To overcome the problem of over-fitting, 3D F-CNNs are carefully engineered for a task to minimize the model complexity, by reducing the number of convolutional layers or by decreasing the number of channels per convolutional layer. Although this might aid training models with limited data, the exploratory capacity of the 3D F-CNN becomes limited. In such a scenario, it is necessary to ensure that the learnable parameters within the F-CNN are maximally utilized to solve the task at hand. Recently, a computational module termed the 'Squeeze and Excite' (SE) block [2] has been introduced to recalibrate CNN feature maps, which boosts performance while increasing model complexity only marginally. This is performed by modeling the interdependencies between the channels of feature maps, and learning to provide attention on specific channels depending on the task.
This idea was also extended to medical image segmentation [7], where it was demonstrated that such light-weight blocks can be a better architectural choice than extra convolutional layers. Although SE blocks were customarily designed for 2D architectures, they have recently been extended to 3D F-CNNs to aid volumetric segmentation [10]. In this paper, we propose the 'Project & Excite' (PE) module, a new computational block custom-made to recalibrate 3D F-CNNs. Zhu et al. [10] directly extended the concept of SE to 3D by averaging the 4D tensor over all spatial dimensions to generate a channel descriptor for recalibration. We hypothesize that removing all spatial information leads to a loss of relevant information, particularly for segmentation, where we need to exactly localize anatomical structures. In contrast, we aim at preserving the spatial information without any excess model complexity or FLOP operations, which is relevant for fine-grained volumetric segmentation. We draw our inspiration from traditional tensor slicing techniques, averaging along the three principal axes of the tensor as indicated in Fig. 1. We term this operation the 'Projection' operation. By this, we obtain three projection-vectors indicating the relevance of the slices along the three axes. A spatial location is important if all the corresponding slices associated with it provide high estimates. So, instead of learning the dependencies of the scalar values across the channels as in [10], we learn the dependencies of these projection-vectors across the channels for excitation. Also, PE blocks provide a global receptive field to the network at every stage. Our contributions are: (i) we propose a new computational block termed 'Project & Excite' for recalibration of 3D F-CNNs, (ii) we demonstrate that our proposed PE blocks can easily be integrated into any F-CNN, boosting the segmentation performance, especially for small target classes, (iii) we demonstrate that PE blocks minimally increase the model complexity in contrast to using more convolutional layers, while providing much higher segmentation accuracy, substantiating their effectiveness in recalibration. Methods 'Squeeze & Excite' (SE) blocks $F_{se}(\cdot)$ take a feature map $\mathbf{U}$ as input and recalibrate it to $\hat{\mathbf{U}} = F_{se}(\mathbf{U})$, with $\mathbf{U}, \hat{\mathbf{U}} \in \mathbb{R}^{H \times W \times D \times C}$, where $H$ is the height, $W$ the width, $D$ the depth, and $C$ the number of channels. Commonly, SE blocks are placed after every encoder and decoder block of an F-CNN. In this section, we detail the extension of SE to 3D F-CNNs and our proposed 'Project & Excite' blocks. 3D 'Squeeze & Excite' Module: This 3D SE block [10], which can be termed the channel SE (cSE) module, is a direct extension of the 2D SE blocks proposed in [2] to a 3D version. The transformation $F_{se}(\cdot)$ is divided into the squeeze operation $F_{sq}(\cdot)$ and the excite operation $F_{ex}(\cdot)$. The squeeze operation $F_{sq}(\cdot)$ performs a global average pooling operation that squeezes the spatial content of the input $\mathbf{U}$ into a scalar value per channel, $\mathbf{z} \in \mathbb{R}^C$. The excitation operation $F_{ex}(\cdot)$ takes in $\mathbf{z}$ and adaptively learns the inter-channel dependencies using two fully-connected layers. The operations are defined as $z_c = F_{sq}(\mathbf{U}_c) = \frac{1}{H \cdot W \cdot D} \sum_{i=1}^{H} \sum_{j=1}^{W} \sum_{k=1}^{D} u_c(i,j,k)$ and $\hat{\mathbf{z}} = F_{ex}(\mathbf{z}) = \sigma(\mathbf{W}_2 \, \delta(\mathbf{W}_1 \mathbf{z}))$, with $\delta$ denoting the ReLU nonlinearity, $\sigma$ the sigmoid function, $\mathbf{W}_1 \in \mathbb{R}^{\frac{C}{r} \times C}$ and $\mathbf{W}_2 \in \mathbb{R}^{C \times \frac{C}{r}}$ the weights of the fully-connected layers, and $r$ the channel reduction factor, similar to [2]. The output of the 3D cSE module is defined by a channel-wise multiplication of $\mathbf{U}$ with $\hat{\mathbf{z}}$; the $c$-th channel of $\hat{\mathbf{U}}$ is given by $\hat{\mathbf{U}}_c = \hat{z}_c \cdot \mathbf{U}_c$.
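To make the cSE recalibration concrete, the following is a minimal PyTorch sketch of the 3D cSE block as defined above (a reference implementation is linked in the abstract; the class name, argument names and the default reduction factor here are ours, chosen for illustration):

import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelSE3D(nn.Module):
    """3D channel 'Squeeze & Excite' (cSE): global average pooling over the
    volume, followed by two fully-connected layers (W_1, W_2) and a sigmoid."""
    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)  # W_1
        self.fc2 = nn.Linear(channels // reduction, channels)  # W_2

    def forward(self, u):                                # u: (B, C, H, W, D)
        z = u.mean(dim=(2, 3, 4))                        # squeeze -> (B, C)
        z_hat = torch.sigmoid(self.fc2(F.relu(self.fc1(z))))
        return u * z_hat.view(*z_hat.shape, 1, 1, 1)     # channel-wise rescale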
The 3D cSE module squeezes spatial information of a volumetric feature map into one scalar value per channel. Especially in the first/last layers of a typical architecture, these feature maps have a high spatial extent. Our hypothesis is that a volumetric input of large size holds relevant spatial information which might not be properly captured by a global pooling operation. Hence, we introduce the 'Project & Excite' module, which retains more of the valuable spatial information through our proposed projection operation instead of the spatial squeeze operation. This is followed by the excite operation, which learns inter-dependencies between the projections across the different channels. Thus, it combines spatial and channel context for recalibration. The architectural details of the 'PE' block are illustrated in Fig. 2. The projection operation $F_{pr}(\cdot)$ is separated into three projection operations ($F_{pr_H}(\cdot)$, $F_{pr_W}(\cdot)$, $F_{pr_D}(\cdot)$) along the spatial dimensions, with outputs $\mathbf{z}_h \in \mathbb{R}^{C \times H}$, $\mathbf{z}_w \in \mathbb{R}^{C \times W}$ and $\mathbf{z}_d \in \mathbb{R}^{C \times D}$. The projection operations are done by average pooling, defined as $z_h(c, h) = F_{pr_H}(\mathbf{U})(c, h) = \frac{1}{W \cdot D} \sum_{j=1}^{W} \sum_{k=1}^{D} u_c(h, j, k)$, and analogously for $\mathbf{z}_w$ and $\mathbf{z}_d$ along the other two axes. The outputs are tiled to the shape $H \times W \times D \times C$ and added to obtain $\mathbf{Z}$, which is then fed to the excitation operation $F_{ex}(\cdot)$, defined by two convolutional layers followed by a ReLU and a sigmoid activation, respectively. The convolutional layers have kernel size $1 \times 1 \times 1$ to aid the modelling of channel dependencies. The first layer reduces the number of channels by $r$, and the second layer brings the channel dimension back to the original size. The excite operation is defined as $\hat{\mathbf{Z}} = F_{ex}(\mathbf{Z}) = \sigma(\mathbf{V}_2 * \delta(\mathbf{V}_1 * \mathbf{Z}))$, where $*$ denotes the convolution operation, $\mathbf{V}_1 \in \mathbb{R}^{1 \times 1 \times 1 \times \frac{C}{r}}$ and $\mathbf{V}_2 \in \mathbb{R}^{1 \times 1 \times 1 \times C}$ the convolution weights, $\sigma$ the sigmoid and $\delta$ the ReLU activation function. The final output of the PE block, $\hat{\mathbf{U}} = \mathbf{U} \odot \hat{\mathbf{Z}}$, is obtained by an element-wise multiplication ($\odot$) of the feature map $\mathbf{U}$ and $\hat{\mathbf{Z}}$.
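The PE block can be sketched in the same style; summing the three projections over broadcast singleton axes implements the 'tile and add' step implicitly (again a minimal sketch of ours rather than the reference implementation):

import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectExcite(nn.Module):
    """'Project & Excite': average along each pair of spatial axes, tile and
    add the three projections, then excite with two 1x1x1 convolutions."""
    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels // reduction, kernel_size=1)  # V_1
        self.conv2 = nn.Conv3d(channels // reduction, channels, kernel_size=1)  # V_2

    def forward(self, u):                          # u: (B, C, H, W, D)
        z_h = u.mean(dim=(3, 4), keepdim=True)     # projection along H: (B, C, H, 1, 1)
        z_w = u.mean(dim=(2, 4), keepdim=True)     # projection along W: (B, C, 1, W, 1)
        z_d = u.mean(dim=(2, 3), keepdim=True)     # projection along D: (B, C, 1, 1, D)
        z = z_h + z_w + z_d                        # broadcasting tiles to (B, C, H, W, D)
        z_hat = torch.sigmoid(self.conv2(F.relu(self.conv1(z))))
        return u * z_hat                           # element-wise recalibration

pe = ProjectExcite(channels=16)
x = torch.randn(1, 16, 32, 32, 32)
assert pe(x).shape == x.shape                      # recalibration preserves the shape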
Experimental Setup Datasets: For evaluation, we choose two challenging 3D segmentation tasks. (i) Whole-brain segmentation of MRI T1 scans: For this task, we use the Multi-Atlas Labelling Challenge (MALC) dataset [3]. It consists of 30 T1 MRI volumes of the brain. We segment the brain volumes into 32 cortical and subcortical structures. 15 scans were used for training, 3 scans for validation and the remaining 12 scans for testing. Manual segmentations for MALC were provided by Neuromorphometrics, Inc. (ii) Whole-body segmentation of contrast-enhanced CT scans: For this task, we use the Visceral dataset [8]. The gold corpus of the dataset has 20 annotated scans. We perform 5-fold cross-validation. One scan from the test fold was kept as a validation set. We segment 14 organs from the thorax and abdomen. Both datasets share common challenges with respect to the limited number of training scans and severe class imbalance across the target classes. Training Setup: We choose the 3D U-Net [1] architecture for our experiments. Instead of using 3D sub-volumes, we train with whole 3D scans, for which we slightly modified the 3D U-Net architecture to ensure proper trainability. Our design consists of 3 encoder and 3 decoder blocks, with only the first two encoders performing downsampling, and the last two decoders performing upsampling. Each encoder/decoder consists of 2 convolutional layers with kernel size 3×3×3. Further, the number of output channels at every encoder/decoder block was reduced to half of the original size used in 3D U-Net to keep the model complexity low. For example, the two convolutions in encoder 1 have {16, 32} channels instead of {32, 64}, and so on. We performed preliminary experiments to conclude that this architecture was the best for our application. Training Parameters: Due to the large and variable dimensions of the input volumes, we chose a batch size of 1 for training. This configuration fully occupied the 2 × 12 GB RAM of the TITAN Xp GPUs. As low batch sizes make training unstable with batch normalization layers, we use instance normalization [9] instead, which is agnostic to batch size. Optimization was done using SGD with a momentum of 0.9. The learning rate was initially set to 0.1 and was reduced by a factor of 10 when the validation loss plateaued. On-the-fly data augmentation using elastic deformations and random rotations was performed on the training set. We used a combined Cross Entropy and Dice loss, with the Cross Entropy loss weighted using median frequency balancing to tackle the high class imbalance, similar to [6]. Experimental Results and Discussions Position of 'PE' blocks: In this section, we investigate the positions at which our proposed 'Project & Excite' (PE) blocks should be placed within the 3D U-Net architecture. We explored 6 possibilities by placing them after every encoder block (P1), after every decoder block (P2), after the bottleneck block (P3), after both encoder and decoder blocks (P4), after each encoder block and the bottleneck (P5), and finally after all the blocks (P6). We evaluated all of these placements. We observed that placing the blocks after every encoder, decoder and bottleneck block provided the best accuracy, boosting Dice by 4% points. Also, we observed that placing them after encoder and bottleneck blocks improves the Dice score by 2% points, whereas placing them after decoder blocks does not affect the performance. We conclude that 'PE' blocks are most effective in encoder and bottleneck positions of an F-CNN. In the following experiments, we place the 'PE' blocks after every encoder, decoder and bottleneck block. Model Complexity: Here we investigate the increase in model complexity due to the addition of 'PE' blocks within the 3D U-Net architecture. We compare the PE blocks with 3D cSE blocks complexity-wise and report the results in Tab. 2. We present results on the MALC dataset. We observe that both PE blocks and cSE blocks cause the same 1.97% increase in model complexity, whereas PE blocks provide a 2% higher boost in performance at the same expense. One might think that this boost in performance is due to the added complexity, which might also be gained by adding more convolutional layers. We investigated this matter by conducting two more experiments. First, we added an extra encoder and decoder block within the architecture. This immensely increased the model complexity by almost 40%, and we observed a drop in Dice performance. One possible reason might be over-fitting, given the limited data samples and the sudden increase in model complexity. So, next we only added two additional convolutional layers at the second encoder and second decoder, to make sure that the increase in model complexity is only marginal (∼ 4%), not risking over-fitting. Here, we did observe a boost in performance similar to cSE, with double the increase in parameters, but it still failed to match the performance of our PE blocks. Thus, we can conclude that PE blocks are in fact more effective than simply adding convolutional layers.
Segmentation Results: We present the results of whole-brain and whole-body segmentation in Tab. 3. We compared 'PE' blocks to the 3D channel SE (cSE) blocks [10] and the baseline 3D U-Net. The placement of the cSE blocks in the architecture was kept identical to ours. For brain segmentation, we observe that the overall mean Dice score increases by 2% points when using 3D cSE, whereas our proposed 'PE' blocks lead to an increase of 4% points, substantiating their efficacy. For whole-body segmentation, the mean Dice score even decreases by 1% when using 3D cSE, while it increases by 3.5% when using PE blocks. Further, we explored the impact of PE blocks on selected structures. First, we selected larger structures: white and grey matter for brain segmentation, and liver and right lung for whole-body segmentation. The boost in Dice score for white and grey matter was marginal, within 1% points, using either cSE or PE blocks. For liver and right lung, the performance using cSE or PE blocks is comparable to the baseline 3D U-Net. Next, we analyzed some smaller structures, namely the inferior lateral ventricles, amygdala and accumbens for brain segmentation, and the right kidney, trachea and sternum for whole-body segmentation, which are difficult to segment. We observed an immense boost in performance using PE blocks on these structures, ranging between 3-36% Dice points, while using cSE blocks even decreased performance for the trachea and sternum. In Fig. 3, we present visualizations of the segmentation performance of PE models in comparison to the baseline 3D U-Net and 3D cSE models. In the top row, white arrows indicate the region of the left inferior lateral ventricle, which was missed by both the 3D U-Net and 3D cSE models. Our proposed PE model, however, was able to segment this very small structure. In the bottom row, white arrows point to the bifurcation of the trachea, where the 3D U-Net over-segments the right lung and the 3D cSE model misses the trachea completely. In conclusion, we observed similar trends in both whole-brain and whole-body segmentation, demonstrating the efficacy of PE blocks for segmentation of small structures in 3D scans. Conclusion We propose 'Project & Excite', a light-weight recalibration module that can easily be integrated into any 3D F-CNN architecture and boosts segmentation performance while increasing model complexity by only a small fraction. We demonstrated that PE blocks can be an attractive alternative to adding more convolutional layers in 3D F-CNNs, especially in situations where training data and GPU resources are limited. We exhibited the effectiveness of 'PE' blocks by conducting experiments on two challenging tasks: whole-brain and whole-body segmentation.
Counting Solutions of a Polynomial System Locally and Exactly We propose a symbolic-numeric algorithm to count the number of solutions of a polynomial system within a local region. More specifically, given a zero-dimensional system $f_1=\cdots=f_n=0$, with $f_i\in\mathbb{C}[x_1,\ldots,x_n]$, and a polydisc $\mathbf{\Delta}\subset\mathbb{C}^n$, our method aims to certify the existence of $k$ solutions (counted with multiplicity) within the polydisc. In case of success, it yields the correct result under guarantee. Otherwise, no information is given. However, we show that our algorithm always succeeds if $\mathbf{\Delta}$ is sufficiently small and well-isolating for a $k$-fold solution $\mathbf{z}$ of the system. Our analysis of the algorithm further yields a bound on the size of the polydisc for which our algorithm succeeds under guarantee. This bound depends on local parameters such as the size and multiplicity of $\mathbf{z}$ as well as the distances between $\mathbf{z}$ and all other solutions. Efficiency of our method stems from the fact that we reduce the problem of counting the roots in $\mathbf{\Delta}$ of the original system to the problem of solving a truncated system of degree $k$. In particular, if the multiplicity $k$ of $\mathbf{z}$ is small compared to the total degrees of the polynomials $f_i$, our method considerably improves upon known complete and certified methods. For the special case of a bivariate system, we report on an implementation of our algorithm, and show experimentally that our algorithm leads to a significant improvement, when integrated as inclusion predicate into an elimination method. Introduction In this paper, we propose a randomized but certified (i.e. Las-Vegas type) algorithm, denoted #PolySol, to count the number of solutions of a zero-dimensional polynomial system $F$ within a given polydisc $\mathbf{\Delta} \subset \mathbb{C}^n$. Let $F : f_i(x_1, \ldots, x_n) = \sum_{\alpha} c_{i,\alpha} \cdot x^{\alpha} \in \mathbb{C}[x_1, \ldots, x_n]$, for all $i = 1, \ldots, n$, (1) be a zero-dimensional polynomial system. We further assume that each of the coefficients $c_{i,\alpha}$ of the polynomials $f_i$ can be approximated to an arbitrary precision. Given a polydisc $\mathbf{\Delta} = \mathbf{\Delta}_r(m) = \{z \in \mathbb{C}^n : \|z - m\|_\infty < r\}$ of radius $r$ centered at $m$, we aim to compute the number of solutions of $F = 0$ in $\mathbf{\Delta}$. Here, solutions are counted with multiplicity. As input, our algorithm #PolySol receives (arbitrarily good approximations of) the coefficients of $F$, the polydisc $\mathbf{\Delta}$, and an integer $K \in \{0, 1, \ldots, d_F\}$, where $d_F := \max_i d_i$ is defined as the maximum of the degrees $d_i$ of the polynomials $f_i$. As output, it returns an integer $k \in \mathbb{N} \cup \{-1\}$. If $k = -1$, nothing can be said; that is, the algorithm fails to provide an answer to our request. Otherwise, $k$ equals the number of solutions of $F = 0$ in $\mathbf{\Delta}$. In this case, we say that the method succeeds. We further show that our method always succeeds if (1) $r$ is small enough, (2) $K \geq k$, and (3) the smaller polydisc $\mathbf{\Delta}' := \mathbf{\Delta}_{r'}(m)$, with $r' := \frac{r}{64n(K+1)^n}$, contains a $k$-fold solution of $F$. We also derive a bound on the size of $r$ that guarantees success of our method if the other two requirements are fulfilled. The given bound is adaptive in the sense that it does not only depend on global parameters such as the degree and the size of the coefficients of the polynomials $f_i$, but also on solution-specific parameters, that is, the multiplicity and the size of $\mathbf{z}$ as well as the distances between $\mathbf{z}$ and the other solutions of $F$. Here, we state our main result for the special case where $F$ is defined over the integers. For a more general statement, see Theorem 8. Theorem 1.
Suppose that $\mathbf{z}$ is a $k$-fold solution of a polynomial system $F$ as in (1) with polynomials $f_i \in \mathbb{Z}[\mathbf{x}]$ of total degree $d_i$ and with integer coefficients $c_{i,\alpha}$ of bit-size less than $\tau_F$. Then, for any $K \geq k$, there exists an $L^* \in \mathbb{N}$ such that, with probability at least 1/2, the algorithm #PolySol$(F, \mathbf{\Delta}, K)$ returns $k$ for any disc $\mathbf{\Delta} = \mathbf{\Delta}_r(m)$ with $r \leq 2^{-L^*}$ and $\|m - \mathbf{z}\|_\infty < r$. Here, we use the definition $\log(x) := \max(1, \log \max(1, \|x\|_\infty))$, and $z_1, \ldots, z_N$ denote the distinct solutions of $F$, with $\mu(z_i, F)$ the multiplicity of $z_i$. Notice that our method never yields the exact multiplicity of a solution, even in the case where there is a well-separated $k$-fold solution $\mathbf{z}$ in $\mathbf{\Delta}$. Instead, we only obtain the sum of the multiplicities of all solutions contained in $\mathbf{\Delta}$. However, in the considered computational model, where only approximations of the coefficients of the input polynomials are known, it is simply not possible to achieve a stronger result. This is due to the fact that arbitrarily small perturbations of the input already destroy the multiplicity structure of non-simple roots. We see a series of applications of our method. For instance, our method can be used to verify correctness of the result provided by a numerical (non-certified) method such as homotopy (e.g. [Ver99;BHS+13]) or subdivision methods (e.g. [MP09;BCG+08]). Corresponding implementations of such methods (e.g. Bertini, PHCpack, axel) are available and have proven to be efficient and reliable in practice. Suppose that such a method returns an approximation $\zeta$ of a $k$-fold solution $\mathbf{z}$ such that $\|\zeta - \mathbf{z}\|_\infty < 2^{-L}$, however without any guarantee on the correctness of the result. Now, in order to show correctness, we may run the algorithm #PolySol with input $F$, $K = k$, and $\mathbf{\Delta} = \mathbf{\Delta}_{64n(k+1)^n \cdot 2^{-L}}(\zeta)$. According to the above theorem, the method returns $k$ if the claimed result is actually correct and $L$ is large enough. Hence, we eventually succeed if the numerical solver provides a sufficiently good approximation of $\mathbf{z}$ together with the correct multiplicity. Again, we remark that the method does not provide a proof that there is exactly one root of multiplicity $k$, but only a proof that there are $k$ roots, counted with multiplicity, in $\mathbf{\Delta}$. For polynomial systems that are defined over the integers, there exist complete and certified methods (e.g. [Rou99;Laz09;BS16]) to compute isolating regions for all solutions together with the corresponding multiplicities; however, their application in practice is limited. In particular, if the polynomials $f_i$ are of large degree, the running time for the necessary symbolic computations (e.g. of a Gröbner basis or resultants) becomes prohibitive. Combining our method with a numerical solver may instead yield a certified result on the existence of solutions in a certain region. In Section 5, we report on a preliminary implementation of our method for the special case of a bivariate system. That is, we integrated an implementation of our method into Bisolve [BEK+13; KS15], a highly efficient algorithm for isolating the solutions of bivariate polynomial systems with integer coefficients. There, it serves as an inclusion predicate to verify the existence of a $k$-fold solution of the system. Compared to the original approach in Bisolve, we observe a considerable improvement with respect to running time and precision demand. Overview of the Algorithm.
There exists a simple method, based on Pellet's Theorem, to count the number of roots of a univariate polynomial f ∈ C[x] in a disc D_r(m) = {x ∈ C : |x − m| ≤ r} of radius r centered at a point m ∈ C. The method works as follows: We first compute the Taylor expansion f[m](x) = f(x + m) = Σ_i c_i · x^i at m and then check whether |c_k| · r^k > Σ_{i≠k} |c_i| · r^i for some k. Notice that the latter inequality implies that the part c_k · x^k of f[m] of degree k dominates the remaining parts on the boundary of the disc D_r(0). If this is the case, then D_r(m) contains exactly k roots of f, which follows directly from Rouché's Theorem applied to f[m] and its degree-k part c_k · x^k. In [BSS+15], we give sufficient conditions on r and the locations of the roots with respect to m such that the above inequality is fulfilled. In particular, for m being a k-fold root of f, we give a bound r_0 in terms of the degree of f and the separation of m such that Pellet's Theorem applies for any r < r_0; see Lemma 9 for details.

Our algorithm #PolySol can be considered as an extension of Pellet's Theorem to polynomial systems. Similar to the one-dimensional case, we make crucial use of the fact that, for a sufficiently small neighborhood ∆ of a k-fold solution z of F, the system F[z] : f_1(x + z) = ··· = f_n(x + z) = 0 obtained by shifting each of the polynomials f_i by z is dominated by terms of degree k or less. Hence, in order to study the local behavior of F at z, it should suffice to consider the truncation F[z]_{≤k} of F[z], where we only consider the part f_i[z]_{≤k} = Σ_{α:|α|≤k} c_{i,α} · x^α of each f_i[z] = f_i(x + z) = Σ_α c_{i,α} · x^α that is of degree k or less. In fact, in Corollary 3, we prove that, for any K ≥ k, the system F[z]_{≤K} has a k-fold solution at the origin, and we give a bound on its separation in terms of the separation of z as a solution of the original system F. In Theorem 7, we even show that if K ≥ k, and if ‖m − z‖_∞ < 2^{−L} for a sufficiently large L, then we can work with F[m]_{≤K} instead of F[z]. Namely, in this case, F[m]_{≤K} has k solutions of norm less than 4 · 2^{−L}, whereas all remaining solutions have considerably larger norm, that is, larger than some value that does not depend on L.

We now provide an overview of our approach. For the sake of simplicity, we omit technical details and only give the main ideas. Also, we do not treat any special cases, which considerably simplifies the approach when compared to the actual algorithm as given in Section 3. We first define L := ⌈log(32n(K+1)^n / r)⌉ such that r / (64n(K+1)^n) ≤ 2^{−L} ≤ r / (32n(K+1)^n) = r'. Obviously, we cannot check in advance whether the above requirements on m and L are fulfilled; however, we can check whether F[m]_{≤K} has a cluster of solutions near the origin. For this, we use a complete and certified algorithm to compute isolating regions of all solutions of F[m]_{≤K} that are contained in the polydisc ∆' = ∆_{r'}(0). Notice that if K is small compared to the degrees of the polynomials f_i, then the cost for computing the solutions of F[m]_{≤K} is much lower than solving the original system directly; in particular, for K = 1, the truncated system is a linear system. In order to apply Rouché's Theorem to F[m] and its truncation, we need an upper bound UB on the error |F[m](x) − F[m]_{≤K}(x)| on the boundary of ∆' as well as a lower bound LB for the truncated system there. While the computation of UB is straightforward (see (11) in Section 3), the computation of LB is more involved. Namely, we first compute the hidden-variable resultant R_ℓ := Res(F[m]_{≤K}, x_ℓ) ∈ Q[x_ℓ] with respect to each of the variables x_ℓ; see Section 2 for details on the hidden-variable approach.
The roots of R_ℓ are the projections of the solutions of F[m]_{≤K} onto the x_ℓ-axis, and R_ℓ is contained in the ideal given by the polynomials f_i[m]_{≤K}, that is, there exist polynomials g_{ℓ,1}, ..., g_{ℓ,n} with R_ℓ = Σ_{j=1}^n g_{ℓ,j} · f_j[m]_{≤K}. Using a recent result [DKS13] on the arithmetic Nullstellensatz, we derive upper bounds on the absolute values of the coefficients of the polynomials g_{ℓ,j}; see Corollary 2 and (13) in Section 3. In addition, we use our results on Pellet's Theorem from [BSS+15] to derive a lower bound for |R_ℓ| on the boundary of the disc D_{r'}(0) ⊂ C, which is the projection of the polydisc ∆' into one-dimensional space; see Lemma 9. Combining the latter two bounds then yields LB. Finally, we check whether LB > UB, in which case we conclude from Rouché's Theorem that F[m] has the same number of solutions in ∆' as the truncated system F[m]_{≤K}. If LB ≤ UB, we return −1. In the analysis of our algorithm, we show that if ‖m − z‖_∞ < r / (64n(K+1)^n) for a sufficiently small r, then LB approximately scales like C · r^k for some constant C, whereas UB scales like C' · 2^{−(K+1)·L}, and thus like C' · r^{K+1}, for some constant C'. Thus, in this case, our algorithm eventually succeeds if K ≥ k.

As already mentioned, we omitted many details in the above description. In particular, for completeness, we need to address certain special cases. This comprises the case where F[m]_{≤K} has distinct solutions whose projections onto one of the coordinate axes are (almost) equal, or solutions at infinity that yield roots of the hidden-variable resultant. We show how to handle such situations by means of a random rotation of the coordinate system without harming the claimed complexity bounds.

Implementation for the Bivariate Case. For the special case of a polynomial system F : f_1(x_1, x_2) = f_2(x_1, x_2) = 0, we implemented our algorithm in Sage. As an oracle for computing an arbitrarily good approximation of a solution z of F, we used a subroutine of the so-called Bisolve algorithm from [BEK+13; KS15], which currently constitutes one of the fastest exact and complete algorithms for solving bivariate systems. Bisolve is a classical elimination approach that, in a first step, projects the solutions of the system onto each of the two coordinate axes by means of resultant computation and root isolation. This yields a set of points on a two-dimensional grid that are all possible candidates for the solutions of the system. Also, the candidates can be approximated to an arbitrary precision using root refinement for univariate polynomials. Then, in a second step, in order to check whether a certain candidate is a solution or not, Bisolve combines interval arithmetic and an inclusion test based on bounds on the cofactors g_{ℓ,1} and g_{ℓ,2} in the representation R_ℓ = g_{ℓ,1} · f_1 + g_{ℓ,2} · f_2 of the resultant polynomials R_ℓ as elements in the ideal ⟨f_1, f_2⟩. This inclusion test is similar to our approach proposed in this paper; however, no truncation of the original system is considered. Also, it is tailored to the bivariate case and does not yield the multiplicity of a solution. In our experiments, we replaced the original inclusion test in the Bisolve algorithm by #PolySol and compared the precision demand and the running time to that of the original variant. We observed that, for a multiplicity k of z that is small in comparison to the degrees of the input polynomials, our novel approach outperforms the original variant. At least for the considered instances, we observed a sublinear dependency of the needed precision on the degrees of the input polynomials.
Notice that this is not in line with the derived bounds on the precision demand, which suggest at least a quadratic dependency. However, we remark that the given bounds are just worst-case bounds. In addition, our experiments can only be considered as preliminary at the current time; nevertheless, we are confident that future work on this topic will support our first impressions.

Related Work. The literature on solving zero-dimensional polynomial systems is vast, and we can only give an incomplete overview. A historical summary and an overview of known techniques can be found in [Laz09] and [DE06], respectively. There are roughly two different classes of methods: numeric and symbolic methods. To the best of our knowledge, all existing complete and certified algorithms are based on elimination techniques. Using Gröbner bases [Buc06; Fau02] or resultants, they reduce the problem of solving a multivariate system to the problem of computing the roots of a univariate polynomial. Such methods further allow us to compute the coordinates of all solutions in terms of rational functions in the roots of a univariate polynomial (also called a Rational Univariate Representation). A corresponding implementation [Rou99] has proven to be quite efficient for systems of moderate size. Also, these methods are well understood in theory, and corresponding complexity bounds are available [BS16]. The major drawback of these methods is that the cost for the considered symbolic operations becomes prohibitive for larger systems.

In contrast, numerical methods, e.g. based on subdivision techniques or homotopy continuation, often allow us to compute good approximations of the solutions. Unfortunately, they typically fail to give guarantees on the correctness of the computed results. One classical numeric approach is Newton's method; see [Rum10, Section 13] for a general description and an approach that uses Newton's iteration with interval arithmetic. Shub and Smale introduced α-theory [Blu98], where they provide conditions on a simple solution such that Newton iteration is guaranteed to yield quadratic convergence. Recent work [HL17] uses Newton iteration and α-theory to verify the existence of simple solutions of systems of polynomial-exponential equations; however, the approach does not extend to multiple solutions. In [Zhi17], an extension of α-theory is introduced that allows us to also certify multiple solutions of a polynomial system in a "numerical fashion" as studied in this paper. Another very popular numeric approach is homotopy continuation. There has also been quite some implementation effort; see PHCpack [Ver99] and Bertini [BHS+13]. In particular, we want to mention the work by Verschelde and Haegemans [VH94]. From a high-level point of view, their approach is similar to ours as it is also based on Rouché's theorem. Their method relies on finding a sparse part of the polynomial system that dominates the rest of the system on the border of a considered region and can be used as a better starting system for homotopy-based techniques. The main differences to our approach are the following: First, we use our technique to directly certify the existence of a zero, not only in order to construct a starting system for a numerical method. Moreover, the system that we use in order to approximate the input system is of lower degree; more precisely, our "dominating part" is always of degree k if k is the multiplicity of the zero in the given region.
In contrast to their result, we also show that the precision that is needed in order to do so directly depends on the arrangement of the zeros of the system. Van der Hoeven [Hoe11] describes methods for tracking homotopy paths in a certified manner, using an analytic variant of the geometric resolution method [GHM+95]. Subdivision methods [MP09; BCG+08] are usually incomplete in the sense that they only provide exclusion predicates and lack inclusion predicates. Thus, they can be used in order to compute regions that are guaranteed to be free of solutions of the system, but they cannot ultimately guarantee that a region contains a zero. We want to stress that our work now provides an inclusion predicate that could be integrated into these approaches in order to turn them into complete methods.

Notation and Definitions

We start by introducing frequently used notation and important definitions.

1. For a point x = (x_1, ..., x_n) ∈ C^n, we define the norm ‖x‖ of x to be the ∞-norm by default, that is, ‖x‖ := ‖x‖_∞ = max_i |x_i|. In addition, we define M(x) := max(1, ‖x‖) and log(x) := M(log(M(x))).

2. For a polynomial f = Σ_α c_α · x^α ∈ C[x_1, ..., x_n], we define ‖f‖ to be the maximum of the absolute values of its coefficients. We further define τ_f := log(‖f‖).

3. For a polynomial system F = (f_1, ..., f_n) as in (1), we define d_F := max_i d_i, τ_F := max_i τ_{f_i}, and D_F := Π_{i=1}^n d_i.

4. D_F is also called the Bézout bound in the literature. It constitutes an upper bound on the total number of solutions (counted with multiplicities) of a zero-dimensional system F. For a system F with generic coefficients, it actually equals the number of solutions. We further say that a polynomial f is of magnitude (d, τ) if its total degree is at most d and ‖f‖ ≤ 2^τ.

5. For a polynomial f and a positive integer κ, we say that f̃ = Σ_α c̃_α x^α is an (absolute) κ-bit approximation of f if each c̃_α is a dyadic number of the form m · 2^{−κ}, with m ∈ Z, and |c̃_α − c_α| ≤ 2^{−κ}. In other words, each c̃_α approximates c_α to κ bits after the binary point.

6. For z ∈ C^n and a polynomial f ∈ C[x], we define f[z](x) := f(x + z) to be the shift of f to z. For k ∈ [d], we denote by f_{≤k} := Σ_{α:|α|≤k} c_α x^α the truncation of f of degree k. For a system F = (f_1, ..., f_n), we define the truncation of F of degree k as F_{≤k} = (f_{1,≤k}, ..., f_{n,≤k}).

Error Bounds for Shifting, Truncation, and Rotation

We first collect some bounds on the size of |f(z)| and ‖f[z]‖ depending on the modulus of some point z ∈ C^n and the norm ‖f‖ of some polynomial f ∈ C[x]. We also give bounds on the error that occurs when computing f(z) or f[z] not exactly at z but at a nearby point ζ.

Lemma 1. Let f ∈ C[x_1, ..., x_n] be of total degree d and with ‖f‖ ≤ 2^τ. Moreover, let k ∈ [d] = {1, ..., d}, let z ∈ C^n, and let ζ be an approximation of z with ‖ζ − z‖ < 2^{−L}; then the bounds (a)–(d) hold.

Proof. Parts (a) and (b) follow immediately from the fact that f_{≤k} has at most (n+k choose k) coefficients and each occurring term c_α · z^α has absolute value bounded by 2^τ · ‖z‖^{|α|}. Part (c) is a direct consequence of [MOS11, Theorem 12], which provides general bounds on the error when evaluating a multivariate polynomial using floating-point computation. For the last claim, notice that f[z](x) = Σ_α (∂^α f(z) / α!) · x^α, where α = (α_1, ..., α_n), α! := α_1! ··· α_n!, and ∂^α f := ∂^{α_1}_{x_1} ··· ∂^{α_n}_{x_n} f. The polynomials ∂^α f / α! have total degree bounded by d, and their norm is upper bounded by 2^τ · d^n = 2^{τ + n log d}. Hence, Part (a) implies the first part of (d). The second part follows from Part (c), applied to each of the polynomials ∂^α f / α!.

We further provide the following lemma that investigates the influence of considering only an approximation of a polynomial f when looking at shift and truncation.

Lemma 2. Let f ∈ C[x_1, ..., x_n] be a polynomial of total degree d with norm ‖f‖ ≤ 2^τ, and let z ∈ C^n and ζ be such that ‖ζ − z‖ < 2^{−L}.
Proof. We first observe that, using the triangle inequality, simple bounds on the number of monomials of lower (≤ k) and higher (≥ k+1) degree, and the fact that ‖x‖ ≤ 1, the claimed error splits into two summands. Then, applying Lemma 1, Part (d), to the left summand and the condition on the approximation to the right summand yields the claim, where the second-to-last inequality follows from ‖x‖ ≥ 2^{−L}.

In our algorithm, we will consider a transformation of the coordinate system induced by a rotation x → S · x, where S ∈ SO(n) is a rotation matrix with rational entries. The following lemma quantifies the impact of such a rotation on the bit-size of the coefficients of a given polynomial f.

Lemma 3. Let f ∈ C[x_1, ..., x_n] be a polynomial of total degree d, and let S ∈ SO(n) be a rotation matrix. Then, for f* := f ∘ S, it holds that ‖f*‖ ≤ 2^{τ_f} · (n+d choose d)^2.

Proof. Notice that each of the entries a_{r,s} of the rotation matrix S = (a_{r,s})_{r,s} has absolute value at most 1. Thus,

f*(x) = f(S · x) = Σ_{α: c_α ≠ 0} c_α · [(a_{11} x_1 + ··· + a_{1,n} x_n)^{α_1} ··· (a_{n1} x_1 + ··· + a_{n,n} x_n)^{α_n}]

has coefficients of absolute value bounded by 2^{τ_f} · (n+d choose d)^2 since, when expanding the product (a_{11} x_1 + ··· + a_{1,n} x_n)^{α_1} ··· (a_{n1} x_1 + ··· + a_{n,n} x_n)^{α_n} for a fixed α, there can be at most (n+d choose d) terms contributing to a specific monomial x^α.

The Hidden-Variable Approach

Let us assume that an arbitrary zero-dimensional system F = (f_1, ..., f_n) as in (1) is given. That is, f_i has total degree d_i, ‖f_i‖ < 2^{τ_i} for all i, and it is assumed that the total number of solutions of F = 0, also at "infinity" (see the considerations below for an explanation), is finite. We now briefly describe the so-called hidden-variable approach, which allows us to project the zeros of the system onto an arbitrary coordinate axis. For more details, we recommend the excellent textbook [CLO05] by Cox, Little, and O'Shea.

In a first step, we consider a homogenization of the system, that is, we introduce an additional (homogenizing) variable x_{n+1} and multiply each occurring term in each f_i with a suitable power of x_{n+1} such that the so-obtained polynomials f^h_i ∈ C[x_1, ..., x_{n+1}] are homogeneous and of total degree d_i, respectively; see also the example below. Notice that each solution (x_1, ..., x_n) ∈ C^n of F = 0 yields a solution (x_1, ..., x_n, 1) of the homogenized system F^h : f^h_1 = ··· = f^h_n = 0. In addition, if (x_1, ..., x_{n+1}) ∈ C^{n+1} is a solution of F^h = 0, then (t · x_1, ..., t · x_{n+1}) is a zero of F^h for all t ∈ C. In particular, if x_{n+1} ≠ 0, we can set t = 1/x_{n+1}, which yields the solution (x_1/x_{n+1}, ..., x_n/x_{n+1}) of F = 0. It is thus preferable to consider the set S of solutions of the above homogenized system as a set of points in the n-dimensional projective space P^n. The set S then decomposes into the set S_{<∞} = {(x_1 : ... : x_{n+1}) ∈ S : x_{n+1} = 1} of so-called affine solutions, for which x_{n+1} = 1, and the set S_∞ = {(x_1 : ... : x_{n+1}) ∈ S : x_{n+1} = 0} of solutions at infinity, for which x_{n+1} = 0. Notice that there is a one-to-one correspondence between the affine solutions of the homogenized system and the solutions of the original system (1).

As mentioned above, we aim to compute the projections of the solutions of F = 0 onto one of the coordinate axes, say, w.l.o.g., x = x_1. For this, suppose that we fix some value ξ for x_1. Plugging x_1 = ξ into the initial system then yields the specialized system F^{[ξ]} : f^{[ξ]}_1(x_2, ..., x_n) = ··· = f^{[ξ]}_n(x_2, ..., x_n) = 0 and the corresponding homogenized system (4). Notice that homogenization and specialization do not commute in general, that is, we cannot deduce the system in (4) from plugging ξ into the homogenized system in (3).
The reason is that the total degree of f^{[ξ]}_i may become smaller for certain values of ξ, and thus homogenization does not commute with specialization. Notice that (4) is a polynomial system consisting of n homogeneous polynomials in n variables. If the initial homogenized system has a solution with x_1 = ξ, then this yields a solution of (4) and vice versa. In other words, ξ would be the projection of a solution of the initial system. The following important result now gives a necessary and sufficient criterion to check whether this is actually the case.

Theorem 2 ([CLO05]). For a system H of homogeneous polynomials consisting of one polynomial more than the number of its variables minus one (i.e. n+1 homogeneous polynomials in C[x_1, ..., x_{n+1}]), there exists a polynomial Res(H), the resultant, in the coefficients of the polynomials in H, such that Res(H) = 0 if and only if H = 0 has a solution in P^n.

For an arbitrary polynomial system G consisting of n+1 (not necessarily homogeneous) polynomials in C[x_1, ..., x_n], we simply define Res(G) := Res(G^h). Since G has the same coefficients as G^h, it still holds that Res(G) is a polynomial in the coefficients of G. In addition, since there is a one-to-one correspondence between the solutions of G and the affine solutions of G^h, it follows that Res(G) = 0 if and only if G^h = 0 has a solution in P^n.

Now, in order to compute all values ξ such that there exists a solution (x_1, ..., x_n) of our initial system F = 0 with x_1 = ξ, we aim to apply the above theorem to the system as defined in (4); however, we now consider ξ as an indeterminate (the so-called hidden variable) rather than a fixed value. There are some subtleties with this approach. In particular, the degrees of the polynomials f^{[ξ]}_i may be different for certain values of ξ, which is crucial as the definition of the resultant polynomial Res strongly depends on the degrees of the given polynomials. However, we can avoid such critical situations if we assume that the given polynomials f_i fulfill some mild prerequisites.

Lemma 4. Suppose that each polynomial f_i contains a term of total degree d_i that does not depend on x_1, and let F_i be the homogenization of f^{[x_1]}_i (with respect to the variables x_2, ..., x_n); then it holds:

(a) For all ξ ∈ C, we have (f^{[ξ]}_i)^h = F_i|_{x_1=ξ}, and (f^{[ξ]}_i)^h has total degree d_i. We remark that Res only depends on the actual degrees of the polynomials.

(b) Each root x_1 = ξ ∈ C of R(x_1) := Res(F_1, ..., F_n) yields a solution (x_1, ..., x_n) ∈ C^n of F = 0 with x_1 = ξ, and vice versa.

Proof. Part (a) follows directly from the fact that the total degree of f^{[ξ]}_i is equal to d_i for all ξ, as there exists a term of degree d_i that does not depend on ξ. For (b), we first remark that the resultant of the polynomials F_i is a polynomial in the coefficients of the F_i, and thus a polynomial in x_1. Since the degree of each f^{[ξ]}_i does not depend on the choice of x_1 = ξ, we also have R(ξ) = Res(F_1|_{x_1=ξ}, ..., F_n|_{x_1=ξ}). Now, let x_1 = ξ be a complex root of R; then, according to Theorem 2, there must exist a solution (ξ_2 : ... : ξ_{n+1}) ∈ P^{n−1} of the system F_1|_{x_1=ξ} = ··· = F_n|_{x_1=ξ} = 0. In order to prove that this solution is an affine solution (i.e. a solution of F), we assume for contradiction that ξ_{n+1} = 0. Plugging x_{n+1} = 0 into the polynomials F_i yields the sums of the terms of highest degree; hence, each of the terms c_{i,α}(x_1) occurring in these sums is a constant that does not depend on x_1. Since (ξ_2 : ... : ξ_n : 0) would then be a common solution for any value of x_1, this contradicts our assumption that F has only finitely many solutions. It follows that (ξ, ξ_2/ξ_{n+1}, ..., ξ_n/ξ_{n+1}) is a solution of F = 0 with x_1 = ξ. For the other direction, let (ξ_1, ..., ξ_n) be a solution of F = 0; then (ξ_1 : ... : ξ_n : 1) is an affine solution of the corresponding homogenized system, and thus (ξ_2 : ...
: ξ_n : 1) is a solution of the system (4).

Obviously, the above considerations apply for any coordinate (hidden variable) x_k onto which we aim to project the solutions. The corresponding resultant polynomial Res(F, x_k) ∈ C[x_k] is called the hidden-variable resultant with respect to x_k. The following theorem [BS16] bounds the cost for computing the hidden-variable resultant in the special case where the polynomials f_i have integer coefficients. The technique is based on a method due to Emiris and Pan [EP05] and an asymptotically fast algorithm for determinant computation due to Storjohann [Sto05].

Theorem 3 ([BS16]). Let F = (f_i)_{i=1,...,n} be a polynomial system with integer polynomials f_i ∈ Z[x_1, ..., x_n] of magnitude (d, τ). There is a Las-Vegas algorithm to compute Res(F, x_k) in an expected number of bit operations; we refer to [BS16] for the precise bound.

We further remark that a root ξ of Res(F, x_k) might originate from several solutions z = (z_1, ..., z_n) of F = 0 sharing the same x_k-coordinate x_k = ξ. Under the requirements from Lemma 4, it holds that the multiplicity of ξ as a root of Res(F, x_k) equals the sum of the multiplicities of all these solutions z. Also, the roots of Res(F, x_k) are exactly the projections of the finite solutions onto the x_k-coordinate, and vice versa. Furthermore, if there is no solution at infinity, then Res(F, x_k) has degree D_F, as the system has exactly D_F solutions (counted with multiplicity), which are all finite, and the roots of Res(F, x_k) are exactly the projections of these solutions onto the x_k-coordinate.

Lemma 5. Write f_i = f_{i,d_i} + f_{i,<d_i} for the decomposition of each f_i into a sum of terms of degree d_i and a sum of terms of degree less than d_i. Then, for any k ∈ [n], it holds that the leading coefficient LC(Res(F, x_k)) does not depend on the coefficients of the polynomials f_{i,<d_i}.

Proof. Let x_{n+1} be a homogenizing variable and f^h_i the corresponding homogenization of f_i. For a generic choice of the coefficients c_{i,α} with |α| = d_i, the system is zero-dimensional and has no solution at infinity. Namely, for x_{n+1} = 0, the system writes as f^h_i(x_1, ..., x_n, 0) = f_{i,d_i}, and a generic system of n homogeneous polynomials in n variables has no non-trivial solution. Thus, there exists no solution at infinity, which also rules out the possibility of the system being non-zero-dimensional. Now, suppose that the coefficients are generically chosen such that all solutions are finite. Then, the total number of solutions equals the Bézout number D_F, and Res(F, x_k) has degree D_F. If LC(Res(F, x_k)) depended on some coefficient c_{i,α} with |α| < d_i, then, for a generic choice of all other coefficients, we could choose such a c_{i,α} in a way such that the leading coefficient becomes zero, and thus deg Res(F, x_k) < D_F, a contradiction. This shows that, for a generic choice of the coefficients c_{i,α}, the leading coefficient LC(Res(F, x_k)) does not depend on the coefficients of the polynomials f_{i,<d_i}. From this, we conclude that LC(Res(F, x_k)) does not depend on the coefficients of the polynomials f_{i,<d_i} in general.

Corollary 1. Let F = (f_i)_{i=1,...,n} be an arbitrary polynomial system as in Lemma 5 with d_i = d for all i, and let F̃ be the system obtained by adding polynomials of the form Σ_{α:|α|=d+1} c̃_{i,α} · x^α to each f_i. If F̃ does not have any solution at infinity (which is the case for a generic choice of the coefficients c̃_{i,α}), then LC(Res(F̃, x_k)) only depends on the coefficients c̃_{i,α}.

Proof. If F̃ has no solution at infinity, then F̃ is zero-dimensional and, in addition, Res(F̃, x_k) has degree D_{F̃} = (d+1)^n. From Lemma 5, we further conclude that LC(Res(F̃, x_k)) only depends on the coefficients c̃_{i,α} of the degree-(d+1) parts f̃_{i,d+1} of the polynomials f̃_i. Hence, for integer coefficients c̃_{i,α}, we have LC(Res(F̃, x_k)) ∈ Z_{≠0}.
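To illustrate the projection step, consider the bivariate case, where the hidden-variable resultant specializes to the ordinary univariate resultant with respect to the other variable. The following minimal Python sketch (our own illustration using sympy, not part of the described algorithm; the example system is arbitrary) computes the x-projections of the solutions:

    import sympy as sp

    x, y = sp.symbols('x y')

    # Toy system: the unit circle intersected with the diagonal.
    f1 = x**2 + y**2 - 1
    f2 = x - y

    # Hidden-variable resultant with respect to x: eliminate y.
    # Its roots are exactly the x-projections of the finite solutions.
    R = sp.resultant(f1, f2, y)
    print(sp.expand(R))   # prints 2*x**2 - 1, i.e. x = +-1/sqrt(2)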
Now, consider a system F̃ obtained from polynomials f_i of total degree at most d by adding the terms Σ_{j=1}^n a_{i,j} · x_j^{d+1} to each f_i. Then, it holds that F̃ has no solution at infinity whenever det(a_{i,j}) ≠ 0. Namely, if det(a_{i,j}) ≠ 0, then F̃ has no solution at infinity, as each such solution would yield a non-trivial solution of the linear system Σ_{j=1}^n a_{i,j} · X_j = 0. Thus, F̃ is zero-dimensional in this case, and Res(F̃, x_k) has degree D_{F̃} = (d+1)^n. From Lemma 5, we further conclude that LC(Res(F̃, x_k)) only depends on the coefficients a_{i,j} of the degree-(d+1) parts f̃_{i,d+1} of the polynomials f̃_i. Hence, we have LC(Res(F̃, x_k)) = LC(Res(f̃_{1,=d+1}, ..., f̃_{n,=d+1}, x_k)), and using Theorem 2.3 and Theorem 3.5 in [CLO05] further shows how this leading coefficient can be expressed in terms of the entries a_{i,j}.

It is also well known (e.g. this follows from Theorem 4 below) that Res(F, x_k) is contained in the ideal I := ⟨f_1, ..., f_n⟩ defined by the polynomials f_1, ..., f_n. In particular, for polynomials f_i ∈ Z[x_1, ..., x_n] with integer coefficients, this guarantees the existence of an integer λ ≠ 0 and polynomials g_1, ..., g_n with

λ · Res(F, x_k) = Σ_{i=1}^n g_i · f_i.   (5)

Recent work [DKS13] allows us to bound the magnitude of the polynomials g_i as well as the size of λ. For this, we first write f_i = Σ_α c_{i,α}(x_k) · x^α_{≠k}, where x_{≠k} denotes all but the k-th variable. We further introduce a variable u_{i,α} for every coefficient polynomial c_{i,α}. Let u_i = (u_{i,α})_α be the variables corresponding to the polynomial f_i, and let u = (u_1, ..., u_n) denote the variables for all polynomials. Then, F can be considered as a system consisting of n polynomials in the n−1 variables x_{≠k} with coefficients u. Thus, its resultant Res(F) is a polynomial in Q[u], which is further contained in the ideal ⟨f_1, ..., f_n⟩ ⊂ Q[u, x_{≠k}]. The following theorem, which is a consequence of Theorem 4.28 in [DKS13] (see also [DKS13, pp. 6]), gives bounds on the degree and height of the polynomials in the cofactor representation of Res(F) in this ideal.

Theorem 4 ([DKS13], consequence of Theorem 4.28). Given a polynomial system Φ = (ϕ_1, ..., ϕ_n), there exists a cofactor representation of Res(Φ) in the ideal ⟨ϕ_1, ..., ϕ_n⟩ whose cofactors have explicitly bounded degrees and heights; we refer to [DKS13] for the precise bounds.

We can now derive bounds on the degree and the bit-sizes of the polynomials g_i as well as on the bit-size of λ in (5) from the above theorem:

Corollary 2. For a system F as in (1) with integer coefficients, we can explicitly compute (see (6) and (7)) positive integers A_F and B_F bounding the degree and the bit-size, respectively, of the polynomials g_i in (5), as well as the bit-size of λ. If all polynomials f_i have only integer coefficients, then we may further assume that the polynomials g_i have only integer coefficients as well.

Proof. We consider each f_i = Σ_α c_{i,α}(x_k) · x^α_{≠k} as a polynomial in the variables x_{≠k} and with coefficients u_{i,α} ∈ C[x_k]. Theorem 4 now guarantees the existence of a positive integer λ and polynomials g_i as in (5). Notice that, since u only depends on x_k, we may consider each g_i as an element of C[x]. In addition, we have a bound in terms of the number of distinct coefficients u_{i,α}. Further notice that, for each β, u^β is a product of at most D_F univariate polynomials in C[x_k], each of degree at most d_F and of norm bounded by 2^{τ_F}. Hence, it can be written as a sum of at most (d_F + 1)^{D_F} terms, each of absolute value at most 2^{τ_F · D_F / d_i}. We conclude that the norm of g_i is bounded by 2^{B_F}, where B_F is defined as in (7). The final claim follows from the fact that corresponding bounds hold for all j.

Generic Position via Rotation

In the previous subsection, we have outlined how to project the solutions of a polynomial system onto one of the coordinate axes. One subtlety of the approach was that certain mild conditions on the input polynomials need to be fulfilled in order to guarantee that the roots of the hidden-variable resultant are exactly the projections of the (finite) solutions of the initial system; see Lemma 4.
Another drawback of the approach is that distinct solutions might be projected onto the same point or onto two very nearby points on the coordinate axis, that is, the actual distance between distinct solutions is no longer preserved after the projection. We will show how to address these issues by using a random rotation of the coordinate system. We first start with the special case of dimension 2.

We note that the function h(t) = ((1−t²)/(1+t²), 2t/(1+t²)) describes the trace of a point on the quarter-circle; for k = 0, ..., 2^L, we consider the angles φ_k with (cos φ_k, sin φ_k) = h(k/2^L). Since |ḣ(t)| = 2/(1+t²) is a decreasing function in t, it follows that the difference between two consecutive angles φ_{k+1} and φ_k is decreasing in k. We thus conclude that all differences are lower bounded by 2^{−L}. Now, let L_k ⊂ R² be the line passing through the origin and the point (cos φ_k, sin φ_k), and let L⊥_k ⊂ R² be the line that passes through the origin and is orthogonal to L_k. In addition, for each point p_ℓ = (ℜ(x_ℓ) + i·ℑ(x_ℓ), ℜ(y_ℓ) + i·ℑ(y_ℓ)), we define a corresponding point p̄_ℓ built from these real and imaginary parts. Then, p̄_ℓ is a point in R² with ‖p̄_ℓ‖₂ ≥ ‖p_ℓ‖/√2. Let ∆_ℓ ⊂ R² be the disc centered at p̄_ℓ of radius r_ℓ = 2^{−L−2} · ‖p_ℓ‖. Let q, r ∈ ∆_ℓ be any two points in ∆_ℓ, and let α be the angle at the origin of the triangle given by the origin and the points q and r; then α < 2^{−L}. Since the angle between any two distinct lines L_k and L_{k'} is lower bounded by 2^{−L}, it thus follows that there can be at most one k such that L_k or L⊥_k intersects ∆_ℓ. Hence, if we pick a k ∈ {1, ..., 2^L} uniformly at random and choose L_k and L⊥_k as the axes of the coordinate system obtained by rotating the initial system by φ_k, then, with probability at least 1 − N/2^L, the new coordinates (x̄_ℓ, ȳ_ℓ) of each point p̄_ℓ will meet the condition that min(|x̄_ℓ|, |ȳ_ℓ|) > 2^{−L−2} · ‖p_ℓ‖. Hence, the same holds true for the points S_k(L) · p_ℓ.

We now turn to the general n-dimensional case. For integers k and L and distinct indices i, j ∈ {1, ..., n}, we define S^{[ij]}_k(L) to be the rotation matrix that operates on the i-th and j-th coordinates only, rotating by the angle φ_k as above. We further define the set of rotation matrices S_N as in (9).

Lemma 7. Let N be a positive integer and let p_1, ..., p_{N'} ∈ C^n, with N' ≤ N, be points such that p_ℓ ≠ 0 for all ℓ = 1, ..., N'. Let S_N and L be defined as in (9). Then, it holds:

(a) Choosing integers k_{ij} ∈ [2^L] for every pair i, j uniformly at random yields, with probability at least 3/4, a rotation matrix S ∈ S_N such that, for each point p'_ℓ := S(L) · p_ℓ, it holds that min_i |p'_{ℓ,i}| ≥ (2n²N)^{−16n} · ‖p_ℓ‖.

(b) There is an integer λ of bit-size Õ(n² log N) such that the entries of λS and λS^{−1} are integer numbers of bit-size Õ(n² log N) as well.

Proof. The proof follows almost immediately from Lemma 6. Namely, with probability at least 1 − N/2^L, both entries p'_{ℓ,i} and p'_{ℓ,j} of each point p'_ℓ := S^{[ij]}_{k_{ij}}(L) · p_ℓ will have absolute value at least 2^{−(L+2)} · max(|p_{ℓ,i}|, |p_{ℓ,j}|). Since at least one of the coordinates of p_ℓ has absolute value ‖p_ℓ‖, we conclude that, with probability (1 − N/2^L)^{(n choose 2)} > (1 − N/2^L)^{n²/2} > 1 − (n²/2) · N/2^L > 3/4, each coordinate of each point p'_ℓ = S(L) · p_ℓ has absolute value at least the claimed bound. It remains to show the existence of an integer λ of bit-size Õ(n² log N) such that the entries of λS and λS^{−1} are of that bit-size as well. Each entry of a matrix S^{[ij]}_{k_{ij}}(L) is a rational number with denominator 2^{2L} + k²_{ij}. The matrix S is a product of O(n²) many such matrices; thus, for λ = Π_{i,j ∈ [n]², i<j} (2^{2L} + k²_{ij}) ≤ (2^{2L+1})^{n²} = 2^{Õ(n² log N)}, it holds that λS is integer. Notice that S is contained in SO(n), which implies that its entries have absolute value at most 1.
It thus follows that the integer entries of λS are of bit-size Õ(n² log N) as well. In addition, the inverse of S^{[ij]}_{k_{ij}}(L) is given by S^{[ij]}_{−k_{ij}}(L), which yields comparable bounds for the entries of S^{−1} as for S.

We will later make use of the above result when considering the set of non-zero solutions of a polynomial system F = 0. In general, some of these solutions might project (via resultant computation with respect to some variable x_k) onto zero or onto values close to zero. However, in our algorithm, we are aiming for projections that are of comparable size as the size of the corresponding solutions. In order to achieve this, we first consider a random rotation of the system given by some rotation matrix S from the set S_N, with N := D_F the Bézout bound on the total number of solutions. This yields the "rotated system" F* := F ∘ S^{−1}, whose solutions are exactly the rotations of the initial solutions by means of the rotation matrix S. Then, with high probability, each of the coordinates of the solutions of F* = 0 is of absolute value comparable to the norm of the solutions of F = 0. In addition, it is also likely that the rotated system fulfills the condition from Lemma 4 for each coordinate. Namely, for a polynomial f, we have

f ∘ S^{−1}(x) = Σ_{α: c_α ≠ 0} c_α · (a_{11}(k) · x_1 + ··· + a_{1n}(k) · x_n)^{α_1} ··· (a_{n1}(k) · x_1 + ··· + a_{nn}(k) · x_n)^{α_n},

and the coefficient C(k) of the monomial x_1^d is thus given by C(k) = f̃(a_{11}(k), ..., a_{n1}(k)), where f̃ := Σ_{α:|α|=d} c_α · x_1^{α_1} ··· x_n^{α_n} is the corresponding homogeneous polynomial of degree d. We first argue that C(k) does not vanish identically. Assume that C(k) = 0 for all k; then this implies that f̃ vanishes on each point in T. Since the vanishing set of any non-zero homogeneous polynomial in n variables has dimension at most n − 2, we conclude that f̃ is the zero polynomial, and thus c_α = 0 for all coefficients of f. This contradicts our assumption on f. Hence, it follows that C(k) is a non-zero rational function in k. In addition, each term c_α · a_{11}(k)^{α_1} ··· a_{n1}(k)^{α_n} has a numerator of total degree at most 2n²d in k and a denominator of the form Π_{i,j} (2^{2L} + k²_{i,j})^{e_{i,j}}, with e_{i,j} ∈ N, of degree at most 2n²d in k. This shows that C(k) can be written as a rational function in k of total degree 2n²d + n⁴d ≤ 2n⁴d, as Π^n_{i,j=1: i<j} (2^{2L} + k²_{i,j})^{n²d} constitutes a common denominator of all terms. According to the Schwartz-Zippel lemma, we thus conclude that choosing k_{i,j} uniformly at random from {1, ..., 2^{4 log(2n²D_F)}} guarantees, with probability at least ρ := 1 − 2n⁴d · 2^{−4 log(2n²D_F)}, that C(k) ≠ 0. In the case where f = f_i is one of the polynomials from F, we thus obtain a probability of at least ρ that f_i ∘ S^{−1} contains a term of the form c · x_1^{d_i} with a non-zero constant c. Since the same argument applies to any variable x_k and to any of the n polynomials f_i, the claim follows.

From the above lemma, we conclude that, by choosing a suitably random rotation matrix from the set S_{D_F}, we can ensure with high probability that there is a one-to-one correspondence between the (finite) solutions of F and the roots of the resultant polynomial Res(F, x_k), which are the projections of the solutions onto the x_k-axis. In addition, the absolute value of each projection compares well to the absolute value of the corresponding solution. In what follows, we will use the following definition of the set of admissible rotation matrices with respect to a given system F, i.e., matrices S ∈ S_{D_F} such that statements (a) and (b) from the above Lemma 8 hold.
Definition 1 (Admissible Matrices). For a given polynomial system F, we say that a rotation matrix S ∈ S_{D_F} is admissible with respect to F if statements (a) and (b) from Lemma 8 hold. We further denote by S_F := {S ∈ S_{D_F} : S is admissible with respect to F} ⊂ S_{D_F} the set of admissible matrices with respect to F.

Notice that, even though it is difficult (probably as difficult as computing all solutions of F) to determine whether a certain matrix in S_{D_F} is admissible with respect to F, the previous lemma shows that at least half of the matrices in S_{D_F} are admissible.

The Algorithm

We first sketch our algorithm #PolySol and then prove its correctness. We refer the reader to the pseudo-code in Algorithm 1 for details regarding #PolySol. The algorithm can be roughly split into three main steps:

Step 1: Shifting and Truncation. Given a polynomial system F := (f_1, ..., f_n), a polydisc ∆ = ∆_r(m), and an integer K ∈ {0, ..., d_F}, we define a "precision" L := ⌈log(32n(K+1)^n / r)⌉. We then compute an approximation Φ = (φ_1, ..., φ_n) of the shifted and truncated system F[m]_{≤K} and, in addition, a degree-(K+1) perturbation of it, that is, the system Φ̃ obtained by adding the term x_i^{K+1} of degree K+1 to the polynomial φ_i; as mentioned below, this perturbation step seems to be necessary only in theory. This step may seem odd at first sight; however, it ensures certain properties of Φ̃. In particular, Φ̃ is guaranteed to have no zeros at infinity (and thus to be zero-dimensional as well) according to Corollary 1 and our considerations in the corresponding example. This further implies that Φ̃ = 0 has exactly D_{Φ̃} = (K+1)^n finite solutions counted with multiplicity. Also, our choice of Φ̃ allows us to bound the leading coefficient of Res(Φ̃, x_ℓ) for all ℓ = 1, ..., n, which turns out to be useful in the analysis of our approach.

Remark. In practice, the latter step does not seem to be necessary in most cases, and thus we recommend to simply proceed with Φ̃ := Φ and to check Φ for being zero-dimensional. Also, when implementing our algorithms, we observed that proceeding with Φ instead of Φ̃ only improves the overall performance.

Step 2: Solving Φ̃. We will later prove that, under the assumption that L is sufficiently large (or, equivalently, that ∆ is sufficiently small), and that z is a k-fold solution of the initial system with ‖m − z‖ < 2^{−L}, the system Φ̃ (as well as Φ for a generic choice of its coefficients) yields a cluster of k (not necessarily distinct) solutions with norm less than 4 · 2^{−L}, whereas all other solutions have norm larger than δ_0 ≫ 2^{−L}. Here, δ_0 is a constant that depends on the polynomial system but not on L; see Theorem 7 for the exact definition of δ_0 and further details. We first check whether there exists a cluster of solutions of Φ̃ near the origin that is well separated from all other solutions of Φ̃. For this, we use a certified method (e.g. [BS16]) to compute all solutions of Φ̃. Here, by computing all solutions, it is meant to compute a set of disjoint discs, each of size less than 2^{−L}, together with the number of solutions contained in each disc, such that the union of all discs contains all complex solutions. For the more involved problem of computing isolating regions of comparable size, the following theorem applies.

Theorem 5 ([BS16, Thm. 9, 10]). There is a Las-Vegas algorithm to compute isolating regions of size less than 2^{−ρ} for all complex solutions of a zero-dimensional polynomial system F = (f_1, ..., f_n); we refer to [BS16] for the bound on the expected number of bit operations.
Remark. We remark that computing the solutions of Φ̃ is typically much more affordable than computing the solutions of the initial system F directly, in particular in the case where n is small and K ≪ d. Notice that, for n of constant size, the cost for solving the initial system directly scales like d^{(ω+2)n−ω−1} · τ_F, whereas the cost for solving the truncated system scales like (KL + τ_F + d · log(m)) · (K+1)^{(ω+2)n−ω−1}. Hence, for L and m of moderate size, the running times might differ by a factor of size ≈ (d/(K+1))^{(ω+2)n−ω−1}.

Step 3: Passing from Φ̃ to F. In the final step, we aim to certify that F[m] has the same number of zeros (i.e. k counted with multiplicity) in ∆' := ∆_{r'}(0) as Φ̃. In order to do so, we aim to apply the following generalization of Rouché's Theorem to Φ̃ and F[m]; see [VH94, Thm. 2.1] or [Llo75, Thm. 1] for a proof.

Theorem 6 (Multidimensional Rouché). Let F = (f_1, ..., f_n) and G = (g_1, ..., g_n), with f_i, g_i ∈ C[x] for all i, define polynomial mappings from C^n to C^n. If, for a given bounded domain D ⊂ C^n, we have ‖F(x) − G(x)‖ < ‖F(x)‖ for all x ∈ ∂D, where ∂D is the boundary of D, then F and G have finitely many zeros in D, and the number of zeros (counted with multiplicities) of F and G in D is the same.

In order to apply the above theorem to F := Φ̃ and G := F[m], we derive an upper bound UB(m, r) on the absolute error when passing from Φ̃ to F[m], as well as a lower bound LB(m, r) on the norm of Φ̃(x) on the boundary of the polydisc ∆'. The construction of UB(m, r) is rather straightforward using Lemma 2; that is, we may choose UB(m, r) as in (11). In contrast, the construction of LB(m, r) is more involved: We already mentioned that if L is large enough, then there are k zeros z_1, ..., z_k of Φ̃ that have norm less than 4 · 2^{−L}, whereas all other zeros have norm larger than δ_0 ≫ 2^{−L}. Hence, under this assumption, picking a random rotation matrix S from S_{D_Φ̃} = S_{(K+1)^n} and considering a corresponding rotation of the coordinate system guarantees (see Lemma 8), with probability larger than 1/2, that the projection of any zero of the "rotated system" Φ* = (φ*_1, ..., φ*_n) := Φ̃ ∘ S^{−1} onto any coordinate axis, except for the k solutions S · z_i, yields a value that is large compared to r. Hence, in this case, the hidden-variable resultant R*_ℓ := Res(Φ*, x_ℓ) of Φ* has k roots of absolute value less than r, whereas all other roots of R*_ℓ have absolute value considerably larger than r. Notice that each φ*_j ∈ Q[x] is a polynomial of degree K+1 with rational coefficients, and according to Lemma 1 and Lemma 3, we obtain a corresponding bound on its norm. Lemma 7 further yields the existence of an integer λ of absolute value 2^{Õ(n³ log K)} with λ · S^{−1} ∈ Z^{n×n}. Hence, we conclude that each term of degree K+1 of λ^{K+1} · φ*_j has integer coefficients, and [CLO05, Theorem 3.1] further yields a bound on the leading coefficient of R*_ℓ. Using Lemma 1, Part (a), this further yields a corresponding upper bound γ on the size of the cofactors in the representation (12) of R*_ℓ.

Remark. The reader might wonder why we do not compute the above cofactor representation (12) directly and then derive bounds on the size of max_{i,j} sup_{x: ‖x‖=1} |g_{ℓ,j}(x)| using interval arithmetic, but instead use Corollary 2. The simple reason is that, at least in practice, computing the polynomials g_{ℓ,j} turns out to be considerably more costly than computing the resultant polynomials R*_ℓ(x) = Res(Φ*, x_ℓ) only. In contrast, our approach of computing the bound γ does not require computing the polynomials g_{ℓ,j}, and thus comes at almost no additional cost. We further remark at this point that we will use the bounds from Corollary 2 in our complexity analysis of the algorithm.
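At its core, the T_k-test used in the next step is the coefficient-dominance test from the overview: after a Taylor shift to the center of the disc, one checks whether a single coefficient dominates the sum of all others on the boundary. The following minimal Python sketch illustrates this dominance test (our own illustrative code, not the certified implementation from [BSS+15]; the function name is hypothetical, and we ignore the careful error bounding that a certified variant requires):

    def dominance_count(coeffs, r):
        """coeffs[i] is the coefficient c_i of f[m](x) = sum_i c_i * x**i,
        i.e. f is assumed to be already Taylor-shifted to the center m.
        Returns k if |c_k| * r**k > sum_{i != k} |c_i| * r**i for some k,
        in which case D_r(m) contains exactly k roots of f (by Rouche);
        returns -1 if no coefficient dominates."""
        terms = [abs(c) * r**i for i, c in enumerate(coeffs)]
        total = sum(terms)
        for k, t in enumerate(terms):
            if t > total - t:
                return k
        return -1

    # Example: f(x) = x**2 * (x - 10) has a double root at 0; the disc
    # D_1(0) contains it but not the root at 10.
    print(dominance_count([0, 0, -10, 1], 1.0))   # prints 2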
In the next step, we compute lower bounds LB−_ℓ and LB+_ℓ for |R*_ℓ(x)| on the boundary of the two discs D− := ∆_{r/√n}(0) ⊂ C and D+ := ∆_{r·√n}(0) ⊂ C, respectively. For this, we use the so-called T_k-test, an approach that has recently been proposed in an algorithm for complex root isolation [BSS+15].

Lemma 9 ([BSS+15]). Let f ∈ C[x] be a univariate polynomial of degree d, and let ∆ := ∆_r(0) ⊂ C be the disc with radius r centered at 0. The so-called T_k-test returns a pair consisting of a Boolean value b and a lower bound for |f| on the boundary of ∆. If b = True, we say that T_k(∆, f) succeeds. If T_k(∆, f) succeeds, ∆ contains exactly k roots counted with multiplicity. In addition, if ∆_{r/(16d)}(0) as well as ∆_{16d⁴·r}(0) contain exactly k roots, then T_k(∆, f) succeeds.

Combining these lower bounds with the bound γ on the cofactors then yields the lower bound LB(m, r) as computed in Line 11 of Algorithm 1. Since the maximum and the minimum of a holomorphic function (in several variables) on a bounded domain are taken at its boundary, we further conclude that the corresponding inequality holds for any x with r/√n ≤ ‖x‖ ≤ √n · r. Notice that the rotation of the system by means of the rotation matrix maintains the 2-norm ‖·‖₂ of any point. Thus, the norm of any point x differs from the norm of the rotated point S · x by a factor that is lower and upper bounded by 1/√n and √n, respectively. Hence, from the above bound on Φ*, we conclude a corresponding lower bound for ‖Φ̃‖ on the boundary of ∆'.

Now, in order to apply Rouché's Theorem to Φ̃ and F[m], it suffices to check whether LB(m, r) > UB(m, r), in which case we have shown that Φ̃ and F[m] have the same number of roots in ∆_{r'}(0). Hence, we return the computed count k in this case; otherwise, the algorithm returns −1. In the next section, we will show that, if m is a sufficiently good approximation (i.e. for large enough L) of a k-fold solution of F, our algorithm succeeds. Here, we only give an informal argument: Notice that, for large L, the bound UB(m, r) scales like C · r^{K+1} for some constant C. The bound γ does not depend on L; hence LB(m, r) scales like min_ℓ min(LB−_ℓ, LB+_ℓ) for large enough L. However, in this situation, each R*_ℓ has a cluster of k roots near the origin that is well separated from all of its remaining roots, and thus min(LB−_ℓ, LB+_ℓ) scales like [|R*_ℓ^{(k)}(0)| / (√n^k · k!)] · r^k. Hence, we conclude that LB(m, r) scales like C' · r^k for some constant C', which implies that UB(m, r) must be smaller than LB(m, r) for large enough L. We remark that the precise argument is slightly more involved, as many subtleties need to be addressed. In particular, we need to show that |R*_ℓ^{(k)}(0)| / (√n^k · k!) does not depend on r if r is small enough, even though the definition of R*_ℓ strongly depends on the choice of m, r, and the rotation matrix S. We will give details in the next section.

Analysis

We start by introducing some further notation. For a zero-dimensional polynomial system F = (f_1, ..., f_n) in n variables, let z_1, ..., z_N denote its zeros. We define σ(z_i, F) and ∂(z_i, F) to be the separation of z_i with respect to F and the geometric derivative of F at z_i, respectively. We remark that these terms are derived from the interpretation of these quantities in the univariate case, where the separation of a root z_0 of a polynomial f ∈ C[x] is defined in exactly the same way, and the first non-vanishing derivative of f at z_0 can be expressed as a product involving the leading coefficient of f and the distances between z_0 and the other roots. We first provide some bounds on ‖z_i‖, σ(z_i, F), and ∂(z_i, F) for the special case where each f_i has only integer coefficients.
For similar bounds that are also adaptive with respect to the sparseness of the given system, we refer to [EMT10].

Lemma 10. Let F = (f_i)_{i=1,...,n} be a zero-dimensional system with integer polynomials f_i, and let z_1, ..., z_N denote the zeros of F. Then the following bounds hold.

Proof. From Corollary 2, we conclude that Res(F, x_ℓ) is an integer polynomial of magnitude (D_F, B_F) for all ℓ = 1, ..., n. Since the ℓ-th coordinate z_{i,ℓ} of each solution of F = 0 is a root of multiplicity at least µ(z_i, F) of Res(F, x_ℓ), and since the Mahler measure of an integer polynomial of magnitude (D_F, B_F) is bounded accordingly, the first claim follows. For the second claim, notice that σ(z_i, F) ≥ σ(z_{i,ℓ}, Res(F, x_ℓ)) for at least one ℓ (as two distinct solutions must differ in at least one coordinate), and that the separation of an integer polynomial of magnitude (D_F, B_F) is lower bounded by 2^{−Õ(D_F · B_F)}; e.g. see [MSW15] for a proof. For the bound on ∂(z_i, F), notice that ∂(z_i, F) can be related to the quantities ∂(z_{i,ℓ}, Res(F, x_ℓ)). According to the proof of [MSW15, Thm. 5], it holds that ∂(z_0, f) = 2^{−Õ(dL)} for any root z_0 of a polynomial f ∈ Z[x] of magnitude (d, L). This shows that ∂(z_{i,ℓ}, Res(F, x_ℓ)) = 2^{−Õ(D_F · B_F)} for all ℓ. It remains to derive an upper bound on the denominator in the corresponding fraction. For this, we define R := Res(F, x_ℓ)[z_{i,ℓ}]. According to Lemma 1, R is a polynomial of magnitude (D_F, Õ(B_F + D_F · log(z_{i,ℓ}))), and, in addition, it has the same leading coefficient as Res(F, x_ℓ). In particular, its leading coefficient is a non-zero integer, and thus of absolute value larger than or equal to 1. From this, we obtain that log(∂(z_i, F)^{−1}) = Õ(n D_F (B_F + log(z_i))). For the last claim, notice that σ(z_i, F) appears as one of the factors in the definition of ∂(z_i, F). Since the product of all remaining factors is upper bounded accordingly, the claim follows directly from the bound on Σ^N_{i=1} µ(z_i, F) · log(z_i) and on log(∂(z_i, F)^{−1}).

We are now ready to derive one of our main results in this paper. More specifically, the following theorem shows that, in a sufficiently small neighborhood (which we also quantify) of a k-fold solution z = z_i of F = 0, ‖F(x)‖ scales like c · ‖x‖^k with c a constant. We further argue that this implies that a sufficiently good approximation Φ of the shifted and truncated system F[z]_{≤K}, with arbitrary K ≥ k, has a cluster of k solutions near the origin, whereas all remaining solutions are well separated from this cluster. We also give bounds on the approximation error that involve the quantities σ(z, F) and ∂(z, F), which are intrinsic to the hardness of the given polynomial system.

Theorem 7. Let F be a zero-dimensional system, z a zero of F of multiplicity k, and m an approximation of z with ‖m − z‖ < 2^{−L}. Let K ≥ k, let Φ = (φ_i)_{i=1,...,n} be a (K+1)·L-bit approximation of F[m]_{≤K} with polynomials φ_i of degree at most K, and let a_1, ..., a_n ∈ C be arbitrary complex values of magnitude 0 ≤ |a_i| ≤ 1 for all i. Then, the polynomial system Φ̃ := (φ_1 + a_1 · x_1^{K+1}, ..., φ_n + a_n · x_n^{K+1}) is zero-dimensional, and there exists an L_0 ∈ N such that, for any L ≥ L_0, Φ̃ has exactly k zeros (counted with multiplicity) of norm smaller than 4 · 2^{−L}, whereas all other zeros have norm larger than δ_0 := σ(z, F) / (2n²D_F)^{32n}. In the special case where each polynomial in F has only integer coefficients, an explicit bound on L_0 can be derived; see the end of the proof.

Proof. We denote by z_1, ..., z_N, with z = z_i, the zeros of F. Let S ∈ S_{D_F} be an admissible rotation matrix with respect to F as well as with respect to the shifted system F[z].
Notice that such a matrix exists, as more than half of the matrices in S_{D_F} are admissible with respect to F, and more than half of the matrices are admissible with respect to F[z]. Let F* := F ∘ S^{−1} be the corresponding "rotation" of F, and let z*_1, ..., z*_N be the zeros of F* such that z*_j = (z*_{j,1}, ..., z*_{j,n}) = S · z_j. Since S is admissible with respect to F[z], Lemma 8 yields that |z*_{j,ℓ} − z*_{i,ℓ}| is lower bounded in terms of σ(z, F) for all ℓ and all j ≠ i. In addition, since S is also admissible with respect to F, Lemma 4 and Lemma 8 guarantee that each root of the resultant polynomial Res(F*, x_ℓ) is the projection of a finite zero of F* onto the x_ℓ-coordinate. Thus, Res(F*, x_ℓ) has a k-fold root at z*_{i,ℓ}, whereas all other roots z*_{j,ℓ} of Res(F*, x_ℓ) have distance at least σ(z, F) / (2n²D_F)^{16n} to z*_{i,ℓ}. Now, we apply Lemma 9 to a disc with center z*_{i,ℓ} and arbitrary radius smaller than this separation bound. Denoting LC := LC(Res(F*, x_ℓ)), this yields a lower bound for |Res(F*, x_ℓ)| on the boundary of such a disc in terms of LC and the distances between z*_{i,ℓ} and the remaining roots. Furthermore, Res(F*, x_ℓ) is contained in the ideal spanned by the polynomials F* = (f*_1, ..., f*_n), that is, there exist polynomials g_{ℓ,j} ∈ C[x] with Res(F*, x_ℓ) = Σ^n_{j=1} g_{ℓ,j} · f*_j. According to Corollary 2, we may assume corresponding bounds on the norms of the g_{ℓ,j} for all ℓ, j, where the last inequality follows from Lemma 3; using Lemma 1, this implies a bound on |g_{ℓ,j}(x)|. Now, combining (18) and (19) yields the desired lower bound on ‖F*(x)‖.

So what can we conclude about our initial (non-rotated) system? Since a rotation maintains the Euclidean distance, and since the max-norm differs from the Euclidean norm by a factor of at most √n, it follows that a point x of max-norm ‖x‖ is rotated via S (or S^{−1}) to a point of max-norm in between ‖x‖/√n and √n · ‖x‖. Now, suppose that L ≥ log(8/r_0), and thus 2^{−L} < r_0/8. Since m is an approximation of z with ‖z − m‖ < 2^{−L}, F[m] has exactly one solution (namely, ẑ := z − m) of multiplicity k in ∆_{r_0/2}(0), as, for such x, it holds that ‖x‖/2 < ‖x + m − z‖ < r_0. Applying Lemma 2 to each φ_i and using the fact that M(m) ≤ 2M(z) then shows that a corresponding bound holds for all x with 2^{−L+1} < ‖x‖ < r_0/2. Notice that, due to the construction of Φ̃ and Corollary 1, Φ̃ is zero-dimensional. Hence, Rouché's Theorem applied to F[m] and Φ̃ shows that the polydisc ∆_ρ(0) contains the same number of solutions of Φ̃ and F[m] if 2^{−L+1} < ρ < r_0/2 and if, in addition, ρ fulfills a corresponding inequality. Hence, each polydisc ∆_ρ(0), with arbitrary radius ρ ∈ (4 · 2^{−L}, r_0/4), contains exactly k zeros of Φ̃. Since δ_0 < r_0/4, this proves the first part of the theorem.

It remains to prove the claimed bound on L_0 for the special case where F is a polynomial system defined over the integers. For this, we need to estimate the size of the leading coefficient of Res(F*, x_ℓ). Notice that there exists an integer λ of size 2^{Õ(n³ log d_F)} with λ · S^{−1} ∈ Z^{n×n}, and thus F' := (f'_1, ..., f'_n) = (λ^{deg f*_1} · f*_1, ..., λ^{deg f*_n} · f*_n) is a polynomial system with integer coefficients, which shows that |LC(Res(F', x_ℓ))| ≥ 1. Using [CLO05, Thm. 2.3 and 3.5] then yields a corresponding lower bound on |LC(Res(F*, x_ℓ))|. Hence, the bound follows from (21) and the bound for log(σ(z, F)^{−1}) from Lemma 10.

From the previous theorem, we now immediately obtain the following result by setting m := z and Φ := F[z]_{≤K} for an arbitrary K ≥ k.

Corollary 3. Let z be a k-fold zero of a zero-dimensional system F, and let K ≥ k. Then, F[z]_{≤K} has a k-fold zero at the origin, and all other zeros have norm larger than δ_0 := σ(z, F) / (2n²D_F)^{32n}.
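To make the object of Theorem 7 and Corollary 3 concrete, the following Python sketch (our own illustration using sympy; it is not the Sage implementation discussed in Section 5, and the helper name is hypothetical) computes the shifted and truncated polynomial f[m]_{≤K} for a bivariate f:

    import sympy as sp

    x, y = sp.symbols('x y')

    def shift_truncate(f, m, K):
        """Return f[m]_{<=K}, i.e. f(x + m1, y + m2) with all terms of
        total degree > K discarded."""
        shifted = sp.expand(f.subs({x: x + m[0], y: y + m[1]},
                                   simultaneous=True))
        poly = sp.Poly(shifted, x, y)
        return sp.Add(*[c * x**i * y**j
                        for (i, j), c in poly.terms() if i + j <= K])

    # f(x+1, y+2) truncated at total degree 2; the cubic term is dropped:
    # prints -3*x**2 - 3*x + y**2 + 4*y + 3
    print(shift_truncate(y**2 - x**3, (1, 2), 2))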
We can now show that Algorithm 1 terminates and yields a correct result, assuming that L is large enough and that the oracle, which provides an approximation m of the solution z, returns a correct answer.

Theorem 8. If #PolySol(F, ∆, K) returns an integer k ≥ 0, then the polydisc ∆ = ∆_r(m) contains exactly k solutions of F counted with multiplicity. Vice versa, suppose that z is a solution of F of multiplicity k and K ≥ k; then there exists a positive integer L* of size Õ(L_0 + (K+1)^n · log(1/δ_0) + d_F · log(z)), with L_0 and δ_0 as in Theorem 7, such that #PolySol(F, ∆, K) returns k with probability at least 1/2 if r ≤ 2^{−L*} and ‖z − m‖ < r / (64n(K+1)^n). If F has only integer coefficients, an explicit bound on L* follows from Lemma 10.

Proof. For the first part, we proceed similarly as in the proof of Theorem 7; however, we work with the system Φ̃ instead of the initial system F. From Line 7 in the algorithm, we already know that Φ̃ has exactly k solutions with (‖·‖-) norm less than r/n, whereas all other solutions have norm at least n·r. Now, when considering a random rotation matrix S ∈ S_{D_Φ̃}, the corresponding rotated system Φ* = Φ̃ ∘ S^{−1} has exactly k solutions with norm less than r/√n, whereas all other solutions have norm at least √n · r. We may now further write Res(Φ*, x_ℓ) = Σ_j γ_{ℓ,j} · φ*_j with polynomials γ_{ℓ,j} ∈ C[x]. From Part (d) of Lemma 1 and Corollary 2, we conclude a bound on |γ_{ℓ,j}(x)| for all ℓ, j. In addition, since the T_k-tests for the discs ∆_{r/√n}(0) and ∆_{√n·r}(0) succeed, using (22), this shows that the corresponding lower bound holds on the boundaries of these discs. Since a holomorphic mapping cannot take its minimum or maximum in the interior of some domain, it thus follows that the above inequality even holds for any x with r/√n ≤ ‖x‖ ≤ √n·r. According to (11), UB(m, r) constitutes an upper bound on the error ‖F[m](x) − Φ̃(x)‖ for any x with ‖x‖ ≤ 1; hence, in particular, it also bounds the error on the boundary of ∆_{r'}(0). Using Rouché's Theorem, we conclude that F[m] and Φ̃ have the same number of solutions in the polydisc ∆_{r'}(0).

It remains to prove the second claim. For this, suppose that F has a k-fold solution at z with ‖m − z‖ < r / (64n(K+1)^n) and that L := ⌈log(32n(K+1)^n / r)⌉ > L_1 := max(L_0, ⌈log(1/δ_0)⌉), with δ_0 = σ(z, F) / (2n²D_F)^{32n} as defined in Theorem 7. Let Φ be an approximation of F[m]_{≤K} as defined in Theorem 7. Then, Φ̃ has k solutions z_1, ..., z_k of norm ‖z_i‖ < 4 · 2^{−L} < r/(2n), whereas all remaining solutions, denoted by z̃_1, ..., z̃_m, have norm ‖z̃_j‖ > δ_0 > 2nr. We thus conclude that the if-condition in Line 7 is satisfied. Now, when choosing a random rotation matrix S ∈ S_{D_Φ̃}, the solutions z_i near the origin are mapped to solutions z*_i of Φ* of norm less than r/(2√n), whereas the remaining solutions are mapped to solutions z̃*_j of Φ* with norm ‖z̃*_j‖ > δ_0/√n > 2√n·r. In addition, with probability more than 1/2, the coordinates of the z̃*_j remain of comparable absolute value for all j and all ℓ ∈ [n]. This implies that each of the resultant polynomials Res(Φ*, x_ℓ) has k roots of absolute value less than r / (16(K+1)^n · √n), whereas all remaining roots are of absolute value larger than 16 · √n · (K+1)^{4n} · r. Notice that each polynomial Res(Φ*, x_ℓ) has degree (K+1)^n, and thus Lemma 9 guarantees success in Line 10 of the algorithm. Now, recall the lower bounds LB−_ℓ and LB+_ℓ for |Res(Φ*, x_ℓ)| on the boundary of ∆_{r/√n}(0) and ∆_{√n·r}(0), respectively, as computed in Line 11.
For arbitrary x ∈ C with |x| = r/√n, we obtain a corresponding lower bound for |Res(Φ*, x_ℓ)(x)|, and, for all x with |x| = 2√n·r, an analogous computation shows that LB+_ℓ fulfills the same bound. From Lemma 1 and our construction of Φ*, the leading coefficient of each polynomial Res(Φ*, x_ℓ) is a non-zero integer; hence, we obtain a corresponding bound on LB(m, r) = min_{ℓ=1,...,n} min(LB−_ℓ, LB+_ℓ). Notice that, for small r, LB(m, r) scales like C · r^k, with a constant C that does not depend on r. The upper bound UB(m, r) scales like C' · r^{K+1}, and thus our algorithm succeeds if r fulfills the condition in (23) (i.e. ⌈log(32n(K+1)^n / r)⌉ ≥ L_1) and r^{K−k+1} < C/C'. Both conditions are fulfilled if log(1/r) ≥ L* := max(L_1, ⌈log(C'/C)⌉) = Õ(L_0 + (K+1)^n · log(1/δ_0) + d_F · log(z)). The claimed bound on L* for the special case where F is defined over the integers follows directly from the corresponding bound on L_0 from Theorem 7 and from our bounds on log(z), log(σ(z, F)^{−1}), and log(∂(z, F)^{−1}) from Lemma 10.

Application: Computing the Zeros of a Bivariate System

In this section, we report on an application of our technique in the context of elimination methods for the bivariate case. More precisely, we incorporate the algorithm #PolySol as an inclusion predicate in the Bisolve algorithm [BEK+13; KS15]. Comparing Sage implementations of the original Bisolve algorithm and of its modified variant, we empirically show that the idea of truncating the original system with respect to the multiplicity of the solution yields a considerable performance improvement.

Bisolve is a classical elimination method for computing the real [BEK+13] or complex [KS15] zeros within a given polydisc ∆ = ∆_1 × ∆_2 ⊂ C² of a bivariate system F : f_1(x_1, x_2) = f_2(x_1, x_2) = 0 with integer coefficients. It achieves the best complexity bound (i.e. Õ(d_F^6 + d_F^5 · τ_F) bit operations for computing all complex solutions) that is currently known for this problem, and its implementation shows superior performance when compared to other complete and certified methods. As we aim to modify the Bisolve algorithm at some crucial steps, we start with a brief description of the original version.

Bisolve in a Nutshell. In an initial projection phase, Bisolve computes a set C of candidate regions using resultant computation and univariate root finding. More specifically, we first compute the hidden-variable resultants R_ℓ(x_ℓ) := Res(F, x_ℓ) for ℓ = 1, 2. Then, for each root z_{ℓ,i} in ∆_ℓ, we compute an isolating disc ∆_{ℓ,i} such that T(∆_{ℓ,i}, R_ℓ) = (True, k_{ℓ,i}, LB_{ℓ,i}). That is, the T-test succeeds and yields the multiplicity of z_{ℓ,i} as a root of R_ℓ as well as a lower bound for |R_ℓ| on the boundary of ∆_{ℓ,i}. By taking the pairwise product of any two discs ∆_{1,i} and ∆_{2,j}, we obtain a set C of polydiscs ∆_{i,j} := ∆_{1,i} × ∆_{2,j} in C². Notice that each solution z in ∆ of F must be one of the candidate solutions z_{i,j} := (z_{1,i}, z_{2,j}), as each coordinate of z is a root of the corresponding polynomial R_ℓ. Hence, each solution must be contained in one of the candidate regions, even though most candidate regions do not contain any solution. In addition, each candidate region ∆_{i,j} contains at most one solution, which must be z_{i,j}.

In the validation phase, the algorithm checks for every candidate region ∆_{i,j} whether it contains a solution or not. In other words, we check whether the corresponding candidate solution z = z_{i,j} is actually a solution or not. The approach used in Bisolve shares many similarities with the algorithm #PolySol proposed in this paper.
Concretely, as in #PolySol, we write R_ℓ = g_{ℓ,1}·f_1 + g_{ℓ,2}·f_2 with g_{ℓ,1}, g_{ℓ,2} ∈ Z[x, y] for ℓ = 1, 2, and compute an upper bound UB for |g_{ℓ,i}(x)| for ℓ = 1, 2, i = 1, 2, and arbitrary x ∈ ∆_{i,j}. Similar as in #PolySol, this is achieved without actually computing the polynomials g_{ℓ,1} and g_{ℓ,2}, but by exploiting the fact that these polynomials can be written as determinants of "Sylvester-like" matrices; see [KS15] for details. Together with the lower bounds LB_{1,i} and LB_{2,j} as computed above, this yields a lower bound

  LB* = min(LB_{1,i}, LB_{2,j}) / (2·UB)

for ‖F‖_∞ = max(|f_1|, |f_2|) on the boundary of ∆_{i,j}. Now, in order to discard or certify z as a solution, Bisolve proceeds in rounds, where a 2^m-bit approximation ζ of z is computed at the beginning of the m-th round. As an exclusion predicate, interval arithmetic is used in order to compute a superset □f_ℓ(∆_{2^{-m}}(ζ)) of f_ℓ(∆_{2^{-m}}(ζ)) for ℓ = 1, 2. If we can show that either f_1 or f_2 does not vanish, the candidate is discarded. As an inclusion predicate, the above lower bound LB* on the boundary of ∆_{i,j} is compared to the values that f_1 and f_2 take at the approximation ζ of the candidate z. More specifically, if max(|f_1(ζ)|, |f_2(ζ)|) < LB*, then ∆_{i,j} contains a solution; see Theorem 4 in [BEK+13]. If neither the exclusion nor the inclusion predicate applies, we proceed with the next round.

The BisolvePlus routine. Notice that, even though Bisolve computes the set of all solutions of F within ∆, it does not reveal the multiplicity k of a specific solution z = z_{i,j} = (z_{1,i}, z_{2,j}) ∈ Z. However, due to the properties of the resultant polynomials, it holds that k = µ(z_{1,i}, R_1) if the following two conditions are both fulfilled:

(24) there is no solution of F at infinity above any point z ∈ C, and
(25) z is the only finite (complex) solution of F with first coordinate z_{1,i}.

The first condition guarantees that there is no solution of F at infinity above any z ∈ C, whereas the second condition guarantees that there is no other finite (complex) solution of F that shares the first coordinate with z. We remark that it is easy to check the first condition; however, checking the second condition is more difficult. This is due to the fact that z might be the only solution in ∆ of F with x_1 = z_{1,i}, but there is another solution of F with x_1 = z_{1,i} that is not contained within ∆.

We aim to address this problem by the following approach (see also Algorithm 2): Let ρ ∈ N be a fixed non-negative integer. In a first step, we check whether (24) is fulfilled. If this is not the case, we return False; otherwise, we proceed. Now, for each solution z_{i,j} ∈ Z and each ℓ ∈ {1, 2}, we use a complex root finder to compute a set of pairwise disjoint discs D_{ℓ,j} of radius less than 2^{-ρ} such that each disc contains at least one root and the union of all discs D_{ℓ,j} contains all complex roots of f_ℓ(z_{1,i}, x_2) ∈ C[x_2]. Then, we determine all discs D_{1,j_1}, …, D_{1,j_s} that have a non-empty intersection with one of the discs D_{2,j}. It follows that each common root of f_1(z_{1,i}, x_2) and f_2(z_{1,i}, x_2) must be contained in one of the discs D_{1,j_s}. Hence, if each of these discs is contained in ∆_{2,j}, then x_2 = z_{2,j} is the unique solution of f_1(z_{1,i}, x_2) = f_2(z_{1,i}, x_2) = 0, and thus (25) is fulfilled. In this case, we may conclude that µ(z_{1,i}, R_1) equals the multiplicity of z. If we succeed in computing the multiplicities for all solutions in Z, we return the solutions together with their corresponding multiplicities. Otherwise, we return False. Obviously, the above approach cannot succeed if one of the above conditions is not fulfilled.
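The disc-intersection test at the heart of this routine is easy to prototype. The sketch below is illustrative only: numpy.roots is not a certified root finder, and the toy fiber polynomials, the radius (playing the role of 2^{-ρ}), and the isolating-disc radius are all placeholders.

import numpy as np

def roots_at_fiber(coeffs_in_x2):
    # Roots of f_ell(z1, x2) as a univariate polynomial in x2,
    # given by its coefficient list (highest degree first).
    return np.roots(coeffs_in_x2)

def intersecting_discs(roots1, roots2, radius):
    # Discs of the given radius around the roots of f1(z1, .) that
    # intersect a same-radius disc around some root of f2(z1, .).
    return [r1 for r1 in roots1
            if any(abs(r1 - r2) < 2 * radius for r2 in roots2)]

# Toy fiber: f1(z1, x2) = (x2 - 1)^2 (x2 + 2) = x2^3 - 3 x2 + 2,
#            f2(z1, x2) = (x2 - 1)(x2 - 3)   = x2^2 - 4 x2 + 3.
roots1 = roots_at_fiber([1, 0, -3, 2])
roots2 = roots_at_fiber([1, -4, 3])

radius = 1e-6                  # plays the role of 2**(-rho)
hits = intersecting_discs(roots1, roots2, radius)
# Condition (25) holds for this fiber if every hit lies inside the
# isolating disc of the candidate's second coordinate, here z2 = 1:
ok = all(abs(h - 1.0) < 1e-3 for h in hits)
print(hits, ok)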
However, even if both conditions are fulfilled, it may still fail due to the fact that ρ has not been chosen large enough.

Lemma 11. Suppose that both conditions (24) and (25) are fulfilled. Then, there exists a ρ_0 ∈ N such that Algorithm 2 succeeds for all ρ > ρ_0.

Algorithm 2 takes as input, among other data, the multiplicity k_{ℓ,i} = µ(z_{ℓ,i}, R_ℓ) of z_{ℓ,i} as a root of R_ℓ, and proceeds as follows:

  for each solution z = z_{i,j} ∈ Z do
    for ℓ = 1, 2 do
      compute pairwise disjoint discs D_{ℓ,1}, …, D_{ℓ,s_ℓ} of radius less than 2^{-ρ}
        whose union covers all complex roots of f_ℓ(z_{1,i}, x_2)
    determine the set D* of all discs D_{1,j} intersecting some disc D_{2,j′}
    if for all D_{1,s} ∈ D* it holds that D_{1,s} ⊂ ∆_{2,j} then set b_{i,j} = True
  if b_{i,j} holds for all i, j then
    return {(S^{-1} · z_{i,j}, k_{1,i}) : z_{i,j} ∈ Z and z_{i,j} ∈ ∆_r(m)}

Proof. Let ε be a lower bound on the distance between any distinct roots of f_1(z_{1,i}, x_2) and f_2(z_{1,i}, x_2). Now, if 2^{-ρ} < ε/4, then two discs D_{1,j} and D_{2,j′} can only intersect if they contain a common root of f_1(z_{1,i}, x_2) and f_2(z_{1,i}, x_2). Since z_{2,j} is the only common root, we thus conclude that each of the discs D_{1,j_s} must contain z_{2,j}. Hence, if ρ is large enough, then ∆_{2,j} contains each D_{1,j_s}.

The problem with this approach is that we neither know in advance whether condition (25) is fulfilled nor whether ρ has been chosen sufficiently large. In order to overcome this issue, we consider a rotation of the system by means of a rotation matrix S ∈ S_{D_F}. Then, with probability at least 1/2, both conditions (24) and (25) are fulfilled for the rotated system F* := F ∘ S^{-1}. We now proceed in rounds (numbered by m), where, in each round, we choose a matrix S ∈ S_{D_F} at random and run Algorithm 2 with input F* = F ∘ S^{-1} and ρ := 2^m. Since there are only finitely many different choices for S and since ρ is doubled from round to round, the procedure succeeds after finitely many rounds with probability 1.

Setting. We performed experiments on a compute server with 48 Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz cores and a total of 256 GB RAM running Debian GNU/Linux 8. All code was implemented in SageMath version 7.6, release date 2017-03-25.

Instance Generation. The instances on which we compared the implementations are generated as follows. Given a trivariate polynomial P ∈ Z[x, y, z], there are several different ways of obtaining two bivariate polynomials f, g from P that have solutions of higher multiplicity. The different ways are encoded by the strings 0xx, 0xy, 0yy, x0y, y0x in the file names. The following table summarizes the meaning of these abbreviations, where we denote p_v = ∂_v p for any polynomial p ∈ R[v] for some ring R and write h := Res(P, P_z, z):

  0xx:  f = h        g = h_x · h_x
  0xy:  f = h        g = h_x · h_y
  0yy:  f = h        g = h_y · h_y
  x0y:  f = h · h_x  g = h_y
  y0x:  f = h · h_y  g = h_x

From the resulting system f, g, we construct the sheared system f, g ← f(ax + by, cx + dy), g(ax + by, cx + dy) with integers a, b, c, d drawn uniformly at random from [−2, 2]. This is done in order to make degenerate situations where multiple solutions share the same x- or y-value less likely. We create an even larger set of instances by renaming the variables of P from x, y, z to x, z, y or y, z, x (or, equivalently, considering P_x and P_y instead of P_z). We abbreviate this choice with xyz, xzy, and yzx. Now, let z be a solution of such a system f, g of multiplicity k. We pick random polynomials p, q of increasing degrees and consider the systems f·p, g·q. This results in systems f_d, g_d of increasing degrees d that have the same solution z of multiplicity k. For each degree d, we create three such systems f_d, g_d by multiplying f, g with different random polynomials.
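The construction just described is easy to reproduce; the following SymPy sketch implements the 0xy rule, the random shear, and the degree inflation. The trivariate polynomial P and the sparse factors p, q are placeholders, not the gallery surfaces or the random polynomials actually used.

import random
import sympy as sp

x, y, z = sp.symbols('x y z')
P = x**2 + y**3 + z**2 - 1          # placeholder for a gallery surface

# 0xy rule: h = Res(P, P_z, z), f = h, g = h_x * h_y.
h = sp.resultant(P, sp.diff(P, z), z)
f = h
g = sp.expand(sp.diff(h, x) * sp.diff(h, y))

# Random shear to make shared x- or y-coordinates unlikely
# (a degenerate draw such as a = b = 0 would be rejected in practice).
a, b, c, d = (random.randint(-2, 2) for _ in range(4))
shear = {x: a*x + b*y, y: c*x + d*y}
f_sh = sp.expand(f.subs(shear, simultaneous=True))
g_sh = sp.expand(g.subs(shear, simultaneous=True))

# Degree inflation: multiply by sparse factors to obtain f_d, g_d with
# the same solution of the same multiplicity.
p = x**5 + 3*y**5 + 1               # placeholder sparse factors
q = 2*x**5 - y**5 + 1
f_d, g_d = sp.expand(f_sh * p), sp.expand(g_sh * q)
print(sp.degree(f_d, x), sp.degree(g_d, x))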
There are two different classes of instances that we consider, depending on how the initial trivariate polynomial P is chosen. In the first class, called herwig_hauser, we pick the polynomial P from the set of polynomials given as three-dimensional surfaces in the Herwig Hauser Classics gallery [Hau]. In the second class, called random, we pick P randomly. In the first class, we let d = 10, 12, …, 40, whereas in the second class, we let d = 16, 32, …, 4096. We note that in the latter case we pick the random polynomials with which we multiply f, g in order to get f_d, g_d as sparse polynomials, as otherwise evaluating f, g already becomes non-trivial. The generated instances can be found on the project page. A folder corresponding to a candidate contains one file called orig.cnd, which refers to the polynomials f, g. The remaining files correspond to the polynomials f_d, g_d as described above. Every file contains four lines: the first two contain the system, while the third and fourth contain the boundaries x − r, x + r and y − r, y + r such that the solution is contained within this range.

(Figure: herwig_hauser instances on the left, random instances on the right.)

Experiments and Evaluation Results. In the first experiment, we compare the running time as well as the precision demand of the two respective validation methods, called standard for the method included in the original Bisolve routine and truncate for the method using the new inclusion predicate, on the instance class herwig_hauser. In Figure 1, we can see the evaluation for validating k-fold roots for k = 1, 2, 4, 8. The measurements are repeated three times for each method and system. This results in 9 measurements (3 different random polynomials, 3 different runs) per degree per method. On the left, the running times are on the vertical logarithmic axis, whereas the degree of the systems is on the linear horizontal axis. On the right, the precision demand is on the vertical logarithmic axis, whereas the degree of the systems is on the linear horizontal axis. The error bars indicate 95%-confidence intervals. We can see a clear advantage for our new method truncate. On average over all instances of degree 40, we obtain an improvement by a factor of 43.6, 37.9, 29.8, and 25.2 for k = 1, 2, 4, 8, respectively, in the precision demand.

In Figure 2 on the left, we can see the precision demand for the herwig_hauser instances for different k = 1, 2, 4, 8 for the truncate method. We can see that the precision demand increases with k by a comparable amount as the theoretical worst-case bounds predict; namely, we can roughly see a quadratic dependence between the precision demand and the multiplicity k in Figure 2 on the left. In Figure 2 on the right, we can see results for the same experiment for the random instances. In this experiment, we only include the truncate method, as the original method does not scale well enough for solving instances of that degree. Here both axes are logarithmic and the degree goes up to 4096. Fitting a linear model to the data points leads to estimates for the exponent of 0.99 ± 0.05, 0.94 ± 0.06, and 0.75 ± 0.13 for k = 1, 2, 4. The coefficients of determination lie above 0.94 in all three cases; that is, roughly 94% of the variance of the data can be explained by the fitted power model. Thus, we may conjecture that the precision demand depends at most linearly on d.
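The power-model fit reported above amounts to an ordinary least-squares regression in log-log space; the sketch below shows the computation on fabricated placeholder data (the actual measurements are part of the statistical data on the project page).

import numpy as np

# Placeholder data: degrees and measured precision demands (in bits).
degrees   = np.array([16, 32, 64, 128, 256, 512, 1024, 2048, 4096])
precision = np.array([40, 75, 160, 300, 610, 1250, 2400, 5100, 9800])

# Fit precision ~ C * d^e  by regressing log(precision) on log(d).
logd, logp = np.log(degrees), np.log(precision)
slope, intercept = np.polyfit(logd, logp, 1)   # slope = exponent e

# Coefficient of determination of the linear fit in log-log space.
pred = slope * logd + intercept
ss_res = np.sum((logp - pred) ** 2)
ss_tot = np.sum((logp - logp.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

print(f"fitted exponent ~ {slope:.2f}, R^2 = {r2:.3f}")

An exponent close to 1 with a high coefficient of determination is what supports the at-most-linear conjecture stated above.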
We remark that the plot suggests that the impact of the degree d dominates over the impact of k for very large d, as we cannot see a difference between the curves for different values of k for large d. Moreover, the impact of k for small d explains the smaller exponent in the fitted linear model for k = 4 compared to k = 1, 2. The source code, the statistical data underlying the plots, the instances, and the script used for benchmarking are available for download on the project page.
Diagnostic accuracy of the loop-mediated isothermal amplification assay for extrapulmonary tuberculosis: A meta-analysis

Background
Loop-mediated isothermal amplification (LAMP) is used to detect pulmonary tuberculosis (PTB); however, the diagnostic accuracy of the LAMP assay for extrapulmonary tuberculosis (EPTB) is unclear. We performed a meta-analysis to evaluate the performance of LAMP in the detection of EPTB.

Methods
We searched PubMed, EMBASE, the Cochrane Library, China National Knowledge Infrastructure (CNKI), and the Wanfang database for studies published before Sep 16, 2017. We reviewed studies and compared the performance of LAMP with that of a composite reference standard (CRS) and culture for clinically suspected EPTB. We used a bivariate random-effects model to perform meta-analyses and used meta-regression and subgroup analysis to analyze sources of heterogeneity.

Results
Fourteen articles including 24 independent studies (16 compared LAMP to CRS, 8 to culture) of EPTB were identified. LAMP showed a pooled sensitivity of 77% (95% confidence interval (CI) 68–85), specificity of 99% (95% CI 96–100), and area under the SROC curve (AUC) of 0.96 (95% CI 0.94–0.97) against CRS. It showed a pooled sensitivity of 93% (95% CI 88–96), specificity of 77% (95% CI 64–86), and AUC of 0.94 (95% CI 0.92–0.96) against culture. The pooled sensitivity, specificity, and AUC of MPB64 LAMP were 86% (95% CI 86–86), 100% (95% CI 100–100), and 0.97 (95% CI 0.95–0.98), respectively, and those of IS6110 LAMP were 75% (95% CI 64–84), 99% (95% CI 90–100), and 0.91 (95% CI 0.88–0.93), respectively, compared with CRS.

Conclusions
These results suggest good diagnostic efficacy of LAMP in the detection of EPTB. Additionally, the diagnostic efficacy of MPB64 LAMP was superior to that of IS6110 LAMP.

Introduction
Tuberculosis (TB) is an infectious disease caused by Mycobacterium tuberculosis (MTB) and is one of the most serious challenges to public health [1]. The most common site of tuberculosis infection is the lung, but bacteria can also spread to extrapulmonary sites, causing extrapulmonary tuberculosis (EPTB). EPTB accounts for approximately 22% of total TB cases [2]. Diagnosis of EPTB is very challenging, because specimens of EPTB are not as easy to obtain by noninvasive methods as are sputum samples. Invasive procedures requiring special expertise are often required to obtain specimens such as cerebrospinal fluid and pleural effusion. Additionally, culture of EPTB specimens has low sensitivity. Biopsy along with histopathological examination and culture is required to diagnose EPTB.

There are many methods for diagnosis of tuberculosis. PCR tests require an expensive thermal cycler to amplify DNA fragments in multiple temperature-dependent steps. Therefore, some PCR assays, such as Xpert MTB/RIF, are very costly, which is an obstacle to application in low-income areas. Loop-mediated isothermal amplification (LAMP) is an isothermal DNA amplification method that relies on two or three sets of primers to amplify minute quantities of DNA within a shorter period of time. Compared with other nucleic acid amplification tests, LAMP is very economical. It is a new assay with high accuracy for pulmonary TB detection [3], but there are no systematic studies assessing its diagnostic accuracy for EPTB.
For this purpose, we performed a meta-analysis to determine the diagnostic accuracy of the LAMP assay for EPTB, using data from previous studies that compared LAMP with a composite reference standard (CRS) or a culture reference in the detection of EPTB. We analyzed the pooled sensitivity and specificity of this assay against the different references. Moreover, the diagnostic efficiency of the test according to different target genes, types of samples, incubation times, condition of samples, and types of LAMP was evaluated by subgroup analysis.

Data sources and search strategy
We searched PubMed, EMBASE, the Cochrane Library, China National Knowledge Infrastructure (CNKI), and the Wanfang database for studies evaluating LAMP accuracy in TB published before Sep 16, 2017. The search formula (("Loop-Mediated Isothermal Amplification" OR LAMP) AND ("Tuberculosis"[Mesh] OR "Tuberculoses" OR "Kochs Disease" OR "Disease, Kochs" OR "Koch's Disease" OR "Disease, Koch's" OR "Koch Disease" OR TB)) was used for PubMed without any language restrictions. The search formulas for EMBASE, the Cochrane Library, CNKI, and the Wanfang database were similar to the PubMed search formula. The search strategies for each database are shown in the S1 File. References of included articles and published reviews were also reviewed for possible candidate studies. We extracted data including author, year, country, true positive (TP), false positive (FP), false negative (FN), and true negative (TN) values for the assay, reference standard, target gene, and specimen type, as well as other parameters.

Inclusion criteria
We included full-text original studies assessing the diagnostic accuracy of the LAMP assay for EPTB using extrapulmonary site specimens. Reference standards were defined in the studies and were appropriate. Articles directly provided TP, FP, FN, and TN values for the assay, or included the data necessary to calculate these measures. Case reports, studies of fewer than 10 samples, abstracts, and conference reports without full articles were excluded.

Reference standard
A composite reference standard (CRS) or mycobacterial culture was defined as the reference standard in the studies. Clinical manifestation, biochemical testing results, smears, histopathology, other nucleic acid amplification tests (NAATs), culture, or a response to anti-tuberculosis treatment constituted the reference standards in the CRS.

Literature screening and selection
Two investigators independently assessed candidate articles by reviewing titles and abstracts, then full texts, for inclusion. Discrepancies between the two decisions were resolved by discussion with a third investigator.

Data extraction
The same two investigators independently extracted the necessary information from each of the included articles. We then cross-checked the information obtained by the two investigators. Discrepancies between the two data sets were settled by discussion with a third investigator, just as in the literature selection phase. Data from studies against two different reference standards or target genes were treated separately.

Assessment of study quality
According to the two reference standards (CRS and culture), the two investigators independently divided studies into two groups and used a revised tool for Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) to assess study quality separately [7]. Publication bias was not assessed, because these methods are not applicable for studies of diagnostic accuracy [8].
Data synthesis and statistical analysis
We first obtained the numbers of TP, FP, FN, and TN in each included study, and then calculated the estimated pooled sensitivity and specificity of LAMP with the associated 95% CI against CRS or culture using bivariate random-effects models. Forest plots of the sensitivity and specificity as well as summary receiver operating characteristic (SROC) curves were generated for each study. The area under the SROC curve (AUC) was likewise calculated. The I² statistics were calculated to assess the heterogeneity between studies compared with a standard reference. A value of 0% indicated no observed heterogeneity, and values greater than 50% were considered substantially heterogeneous [9,10]. We explored target genes, types of samples, incubation time, condition of samples, and types of LAMP as potential sources of heterogeneity using subgroup and meta-regression analyses. At least four available studies were needed to carry out the meta-analysis for a predefined variable type. Data from studies against CRS and culture were analyzed separately. Stata version 14.0 (Stata Corp, College Station, TX, USA) with the MIDAS command packages was used to analyze the results.

Imperfect reference standard
Imperfect reference standards may lead to misclassification of samples in diagnostic validity studies [11,12]. Owing to the paucibacillary nature of EPTB, culture is an imperfect reference standard and leads to an underestimation of the true specificity of LAMP. A CRS is a composite standard that comprises the results of several tests; however, a CRS itself may have reduced specificity that could result in apparent FN LAMP results, also leading to an underestimation of the true sensitivity of LAMP [13,14]. Therefore, a study comparing LAMP with both culture and CRS might provide a more credible range for sensitivity and specificity.

Identification of studies and study characteristics
Eight hundred candidate articles were identified by searching the relevant databases using our search strategy, and 14 qualified articles were included according to the inclusion criteria (Fig 1, S2 File) [15-28]. The number of specimens evaluated in each article ranged from 27 to 315 with a median of 118. Twelve articles were written in English, and 2 in Chinese. All studies were conducted in countries with high tuberculosis burdens (India and China). We excluded two studies [29,30] that had the same data as other included studies [15,25], and one study [31] whose data were part of another study [28]. When an article reported the use of two different standards or target genes in the same study, we considered the article to include two independent studies. In accordance with this principle, 24 independent studies were included: 16 compared LAMP with CRS and 8 with culture (Table 1, S1 Table). The characteristics of the LAMP test used in the 14 articles are also shown in Table 1. The most common target genes were IS6110 and MPB64, used in 11 and 5 studies, respectively. Two studies [16,28] did not define the target gene. The incubation temperatures of all experiments were similar, at approximately 65°C. The specimens included cerebrospinal fluid (CSF), pleural effusion, synovial fluid, pus, fine needle aspiration (FNA) of lymph glands, and others. Eight articles used only one type of specimen (e.g., only CSF) [20-22,24-28]. The other studies used multiple types of specimens [15-19,23].
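As a concrete illustration of the data-synthesis step described above, the following minimal Python sketch pools per-study sensitivities on the logit scale with a DerSimonian-Laird random-effects model and computes Cochran's Q and I². This is a univariate simplification of the bivariate random-effects model actually fitted with Stata's MIDAS package, and the (TP, FN) counts are placeholders, not the extracted study data.

import numpy as np

tp = np.array([20, 35, 15, 42])   # placeholder true positives
fn = np.array([ 6,  5,  7,  9])   # placeholder false negatives

# Logit sensitivity and its approximate within-study variance
# (0.5 continuity correction to avoid zero cells).
p  = (tp + 0.5) / (tp + fn + 1.0)
yi = np.log(p / (1 - p))
vi = 1.0 / (tp + 0.5) + 1.0 / (fn + 0.5)

# Cochran's Q, I^2, and the DerSimonian-Laird between-study variance.
w  = 1.0 / vi
mu_fixed = np.sum(w * yi) / np.sum(w)
Q  = np.sum(w * (yi - mu_fixed) ** 2)
k  = len(yi)
I2 = max(0.0, (Q - (k - 1)) / Q) * 100
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects pooled sensitivity, back-transformed from the logit.
w_re = 1.0 / (vi + tau2)
mu   = np.sum(w_re * yi) / np.sum(w_re)
print(f"pooled sensitivity = {1 / (1 + np.exp(-mu)):.2f}, I^2 = {I2:.0f}%")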
Only one study provided HIV infection status [26]. Only two studies provided median or mean age. The CRS criteria used in the articles included the results of culture.

Study quality
The overall methodological quality of the included studies using a CRS and culture is summarized in Fig 2. Only four studies used a case-control design [15,21,26,27].

Diagnostic accuracy of the LAMP assay for EPTB detection
When compared to a CRS using 2001 samples in 16 studies, the combined sensitivity and specificity of the LAMP assay for EPTB were 77% (95% CI 68-85) and 99% (95% CI 96-100), respectively (Fig 3A). The I² statistical values were 95% for sensitivity and 85% for specificity, suggesting significant heterogeneity in diagnostic validity between the studies. When compared to a culture reference standard (8 studies, 1515 samples), the combined sensitivity of LAMP was 93% (95% CI 88-96) with I² = 49%, and the specificity was 77% (95% CI 64-86) with I² = 96% (Fig 3B). The heterogeneity of the sensitivity was acceptable; however, the heterogeneity of the specificity was significant. The AUC of the SROC was 0.96 (95% CI 0.94-0.97) and 0.94 (95% CI 0.92-0.96) vs CRS and culture, respectively, suggesting very good overall diagnostic validity. We explored the heterogeneity between the studies using hierarchical analysis on predefined subgroups of target genes, sample types, incubation time, condition of samples, and types of LAMP used in the assay.

The pooled sensitivity and specificity of the MPB64 LAMP assay (613 samples) vs CRS were 86% (95% CI 86-86) with I² = 3.33% and 100% (95% CI 100-100) with I² = 0, respectively (Fig 4A). There was no heterogeneity in diagnostic validity between studies of MPB64 LAMP. The AUC of the SROC was 0.97 (95% CI 0.95-0.98) for MPB64 LAMP vs CRS, suggesting very high efficiency. One study used MPB64 as the target gene in the LAMP assay compared with culture, but further analysis could not be carried out. When using IS6110 as the target gene, the pooled sensitivity and specificity of IS6110 LAMP compared with CRS were 75% (95% CI 64-84) and 99% (95% CI 90-100), respectively (Fig 4B). The I² statistical values were 95% and 88% for sensitivity and specificity, respectively, of IS6110 LAMP. The P values of meta-regression for sensitivity and specificity of the IS6110 LAMP assay against a non-IS6110 LAMP assay in comparison to CRS were 0.16 and 0.25, respectively, suggesting that this target gene was not a source of heterogeneity in the LAMP assay. Therefore, combining different studies to assess the diagnostic performance of the LAMP assay did not significantly skew the results. Compared with culture, the pooled sensitivity of IS6110 LAMP was 89% (95% CI 81-94), and the specificity was 79% (95% CI 62-90). The I² statistical values of IS6110 LAMP were 42% and 97% for sensitivity and specificity, respectively. Heterogeneity of sensitivity among the studies was moderate. The pooled sensitivity of MPB64 LAMP was significantly higher than that of IS6110 LAMP (P<0.05); however, the difference between the specificities was not statistically significant (P>0.05), and the AUC of MPB64 LAMP was higher than that of IS6110 LAMP when assessed against CRS. Data for other target genes were too limited to analyze. Four studies assessed LAMP in CSF samples in comparison to a CRS. Pooled sensitivity was 76% (95% CI 56-89, I² = 97%), and pooled specificity was 99% (95% CI 77-100, I² = 89%) (Fig 5).
The P-values of meta-regression for sensitivity and specificity were 0.33 and 0.29, respectively. The AUC of the SROC was 0.99 (95% CI 0.77-1.00) for CSF samples vs CRS. Sensitivity and specificity of LAMP for pleural effusion against CRS ranged from 25% to 75.8% and 83.3% to 100%, respectively. For fine needle aspiration of lymph nodes, synovial fluid, and pus, the sensitivity of this assay was 80%, 85.3%, and 83.3%, respectively. Sensitivity was 87.7% for IS6110 and MPB64 vs CRS, and specificity was consistent at 100%. However, data were too limited to perform meta-analysis.

Discussion
Timely and accurate diagnosis of tuberculosis is very important for effective management of the disease and prevention of infection in the community, particularly in areas with high burdens of tuberculosis. Conventional diagnostic methods, such as smears and culture, are time-consuming and not very sensitive. LAMP is an innovative point-of-care diagnostic technique with increased specificity, speed, and low cost [32]. It can provide results within 1 or 1.5 hours. Several studies have evaluated the diagnostic validity of the test for pulmonary TB [33,34]. A systematic review and meta-analysis reported by Nagai et al. showed summary estimates of sensitivity at 89.6% (95% CI 85.6-92.6%) and specificity at 94.0% (95% CI 91.0-96.1%) and a diagnostic odds ratio (DOR) of 145 (95% CI 93-226) [35]. However, there has been no reported systematic review and meta-analysis evaluating the diagnostic accuracy of LAMP for EPTB. Ours is the first study for this purpose. In this meta-analysis, we reviewed the diagnostic efficiency of the LAMP assay for EPTB compared with that of a CRS or culture reference. Based on AUC, the diagnostic performance of the LAMP assay was very good for EPTB, regardless of the reference standard used. However, this test was less effective than PCR assays such as Xpert MTB/RIF [36,37]. We found that LAMP had very high pooled specificity (99%, 95% CI 96-100) but more moderate pooled sensitivity (77%, 95% CI 68-85) for the diagnosis of EPTB vs CRS. As expected, when culture was used as the reference standard, the pooled sensitivity for the diagnosis of EPTB improved to 93% (95% CI 88-96), and the pooled specificity decreased to 77% (95% CI 64-86). However, there was obvious heterogeneity among the studies, and the results should be interpreted carefully. For the detection of the MTB genome, several factors play important roles in standardizing a sensitive and specific LAMP assay. The target gene is an important factor, and a variety of target genes can be used in NAATs. In LAMP, the commonly used target genes are IS6110, MPB64, and IS1081, among others. Through this meta-analysis, we found that diagnostic efficacy differed when using different target genes. The IS6110 gene has been the favored target gene in studies using the LAMP assay, as multiple copies are present in the MTB genome [38,39]. However, in this study, it was not the most efficient target gene for diagnosis of EPTB in areas with high burdens of tuberculosis. We observed that the pooled sensitivity and specificity of MPB64 LAMP were significantly higher than those of non-MPB64 LAMP (P = 0.03 and P = 0.00, respectively) when compared to CRS. Heterogeneity between the studies using MPB64 LAMP was not significant. The pooled sensitivity and specificity of IS6110 LAMP compared with those of non-IS6110 LAMP were not significantly different vs CRS. However, heterogeneity between IS6110 LAMP studies was very significant.
The pooled sensitivity and AUC of MPB64 LAMP were higher than those of IS6110 LAMP against CRS. This result was consistent with those of studies using other NAATs [40,41]. Included studies using culture as the reference standard were limited, and the difference in pooled sensitivity and specificity for the two target genes could not be analyzed. We considered that LAMP accuracy for tuberculosis detection in EPTB specimens might vary widely according to specimen type, as it did in another systematic review and meta-analysis of EPTB diagnosis using the Xpert MTB/RIF assay [42]. However, our meta-analysis could not reach a conclusion, partially due to the limited number of studies using the same sample types. Only four studies used CSF to analyze the diagnostic accuracy of LAMP, and there were not enough separate studies using other sample types to carry out meta-analysis. Additionally, these results must be treated with caution, as the heterogeneity between the studies was very significant; this may lead to bias in the results. Further studies using different types of specimens are needed to assess the diagnostic accuracy of LAMP for individual samples. We observed that incubation time and sample condition in LAMP assays did not affect test results. Different assay types might affect results; e.g., an in-house LAMP assay might be better than the Loopamp MTBC assay. As the heterogeneity between the studies was very significant, further studies using different types of the LAMP assay are needed to assess its sensitivity and specificity.

PCR tests are considered the most effective means of diagnosis [43]. However, these assays, such as Xpert MTB/RIF, are very costly, which is an obstacle to their application in low-income areas. LAMP is gradually being accepted as an alternative test in resource-limited areas due to its relatively small financial burden [35]. In the current meta-analysis, all studies were conducted in low-income countries where medical resources are limited. We observed that the effectiveness of LAMP in EPTB diagnosis was similar to that of Xpert MTB/RIF, which was consistent with a previous study [44]. However, compared with Xpert MTB/RIF, LAMP has shortcomings, such as its inability to determine rifampicin resistance. For low-income areas with a low prevalence of drug-resistant tuberculosis, LAMP might be a useful alternative to Xpert MTB/RIF.

Several limitations existed in our review. First, the meta-analysis was limited by the number of studies using different target genes and sample types, particularly those comparing LAMP against culture. Only two target genes and one sample type (CSF) could be analyzed through meta-analysis; the diagnostic validity of the LAMP assay for other target genes and sample types could not be assessed. Some included studies used multiple sample types, which may have led to some bias in the results. Second, the quality of some studies in this analysis was relatively poor. The heterogeneity between the studies was remarkable, and the meta-analysis results should be interpreted with caution.

Conclusions
In this meta-analysis, we observed that the pooled sensitivity and specificity of LAMP for the detection of EPTB were 77% and 99%, respectively, when compared with a CRS, and 93% and 77%, respectively, when compared with culture. Based on the assessment of AUC, LAMP showed good diagnostic efficacy.
We also found that the diagnostic efficacy of LAMP tests varied according to different target genes; the diagnostic efficacy of MPB64 LAMP was better than that of IS6110 LAMP. The diagnostic accuracy of LAMP for different samples could not be effectively assessed, as the number of studies using different sample types was limited. Additionally, an in-house LAMP assay might be superior to the Loopamp MTBC assay. Because of its low cost, LAMP could be useful in the diagnosis of EPTB, particularly in areas where financial resources are limited and drug-resistant MTB is not prevalent.
Correction of Breech Presentation with Moxibustion and Acupuncture: A Systematic Review and Meta-Analysis

Acupuncture-type interventions (such as moxibustion and acupuncture) at Bladder 67 (BL67, Zhiyin point) have been proposed to have positive effects on breech presentation. The aim of this systematic review and meta-analysis was to evaluate the effectiveness and safety of moxibustion and acupuncture in correcting breech presentation. We searched PubMed, MEDLINE, Embase, the Cochrane Central Register of Controlled Trials (CENTRAL), the Chinese Electronic Periodical Services (CEPS), and databases at ClinicalTrials.gov to identify relevant randomized controlled trials (RCTs). In this study, sixteen RCTs involving 2555 participants were included. Compared to control, moxibustion significantly increased cephalic presentation at birth (RR = 1.39; 95% CI = 1.21–1.58). Moxibustion also seemed to elicit better clinical outcomes in the Asian population (RR = 1.42; 95% CI = 1.21–1.67) than in the non-Asian population (RR = 1.20; 95% CI = 1.01–1.43). The effects of acupuncture on correcting breech presentation after sensitivity analysis were inconsistent relative to control. The effect of moxibustion plus acupuncture was synergistic for correcting breech presentation (RR = 1.53; 95% CI = 1.26–1.86) in one RCT. Our findings suggest that moxibustion therapy has positive effects on correcting breech presentation, especially in the Asian population.

Introduction
Breech presentation is a common malposition in the third trimester of pregnancy. The frequency of breech presentation in term pregnancies is 3%–4% in America and approximately 2% in China [1,2]. Risk factors for breech presentation include preterm labor, uterine anomaly, multiparity, placenta previa, and polyhydramnios [3]. Serious complications, such as traumatic injuries or asphyxia, can occur during vaginal delivery [4]. Therefore, a planned Caesarean section is recommended for pregnant women with breech presentation at childbirth [3]. However, Caesarean section is not free from complications, including wound infection, adhesions, hemorrhage, or scar rupture during subsequent labor [5]. Some non-invasive therapies are available, including knee-chest position management and external cephalic version (ECV). However, there is insufficient evidence to support knee-chest position management, and ECV is a painful procedure for pregnant women [6,7].

Moxibustion and acupuncture have a long history in the treatment of various problems, including fetal malposition. The interventions are similar because they both stimulate acupoints to achieve a therapeutic effect. Moxibustion is a traditional Chinese procedure that utilizes the heat generated from a burning moxa stick (made from herbal preparations containing Artemisia vulgaris) to stimulate acupuncture points [8,9]. Several clinical trials have shown that moxibustion at Bladder 67 (BL67), also known as the Zhiyin point, elicits positive effects on breech presentation without serious adverse events [10,11]. However, systematic reviews and meta-analyses have reported conflicting results regarding the effects of moxibustion on breech presentation. For example, Vas et al. [12] and Li et al. [13] reported that moxibustion has positive effects on non-vertex presentation. However, Coyle et al. [14] suggested that moxibustion treatment may not improve non-cephalic presentations at birth relative to no treatment.
To determine the efficacy of moxibustion on breech presentation, additional clinical trials since 2012 have investigated the effects of moxibustion [15,16]. Acupuncture has also been reported to correct fetal malposition, although evidence from systematic reviews and meta-analyses is lacking [17]. To fill this research gap, we conducted an updated systematic review and meta-analysis to evaluate the effects and safety of these acupuncture-type interventions in correcting breech presentation.

Materials and Methods
This systematic review and meta-analysis is reported in accordance with the statement of preferred reporting items for systematic reviews and meta-analyses (PRISMA). The protocol was registered on PROSPERO under registration number CRD42020192572.

Search Strategy
In this systematic review, we included all RCTs on the use of acupuncture-type interventions (i.e., moxibustion and acupuncture) in the management of breech presentation, regardless of whether the RCTs were blinded. We performed literature searches in PubMed, MEDLINE, Embase, the Cochrane Central Register of Controlled Trials (CENTRAL), the Chinese Electronic Periodical Services (CEPS), and databases at ClinicalTrials.gov from the inception of the source to 31 January 2021. Keywords for the literature search included "breech," "labor presentation," "acupuncture," "electroacupuncture," "acupressure," and "moxibustion." We expanded our literature search with MeSH headings without restrictions of language, publication type, or date. We applied a filter to narrow the number of articles to those fitting the specific study type (e.g., RCTs) and study question (e.g., intervention). The details of the search strategy are presented in the Supplementary Materials (Table S1). We also searched the reference lists of included studies and related articles in PubMed and clinical trial databases to identify relevant RCTs.

Study Selection and Data Extraction
We selected eligible studies based on the following inclusion criteria: (1) the study was an RCT; (2) pregnant women in the 28th–35th week of gestation with a normal pregnancy and an ultrasound diagnosis of breech (non-vertex) presentation were included; (3) the interventions consisted of moxibustion alone, traditional acupuncture or electro-acupuncture alone, or moxibustion and acupuncture; (4) comparisons between interventions and control measures (e.g., observation, usual care, or knee-chest position) were conducted; and (5) outcome measures (e.g., fetal presentation at birth and adverse events) were reported. Studies were excluded based on the following criteria: (1) the study was non-randomized, quasi-experimental, observational, qualitative, or did not involve human subjects; (2) had wrong or no comparators; or (3) had incomplete outcome data. Two authors independently selected articles according to the inclusion and exclusion criteria by screening the titles, abstracts, and full texts of included studies. We extracted the following information from studies that met the inclusion criteria: study characteristics (e.g., author, publication year, study design and settings, inclusion/exclusion criteria, methods of randomization), participant characteristics (age, gender, co-morbidities), interventions (types, duration), comparisons (types of control groups), and outcomes (types of outcome measures, adverse events). We retrieved data from individual studies according to the intention-to-treat principle. Any disagreements about whether to include a study were resolved by a third reviewer.
Assessment of the Risk of Bias in Included Studies
Two authors independently assessed the methodological quality of each included clinical trial according to the Cochrane risk of bias tool for randomized controlled trials (RoB 2.0) [18]. RoB 2.0 is composed of five domains, covering bias arising from the randomization process (allocation), bias due to deviations from the intended interventions (performance), bias due to missing outcome data (follow-up), bias in the outcome measurement (measurement), and bias in the selection of the reported results (reporting). The authors rated each domain as either low risk, some concerns (uncertain risk of bias), or high risk. Discrepancies were resolved by the third reviewer.

Data Synthesis and Statistical Analysis
We compared moxibustion with control, acupuncture with control, and moxibustion plus acupuncture with control. The primary outcome was the fetal presentation at birth, and the secondary outcome was adverse events. Data were analyzed using Review Manager Software (version 5.3.5). Dichotomous outcomes were extracted from each study to compute the RR with a 95% CI. The pooled RR and the associated 95% CI were estimated by the Mantel-Haenszel method. Numbers needed to treat (NNT) were calculated from the formula NNT = 1/absolute risk reduction. We assessed clinical heterogeneity by comparing the methodologies and study designs of the included studies. Statistical heterogeneity of effect sizes between studies was assessed using the I² statistic and the Q statistic with a χ² test. We defined statistical heterogeneity as p ≤ 0.1 for the χ² test or I² ≥ 50%. In the meta-analysis, a fixed-effect model was used when there was no significant heterogeneity, and a random-effects model was used when the heterogeneity was significant. A funnel plot was produced to detect possible publication bias. Sensitivity analysis was performed to test the robustness of results by excluding trials that used low-quality methodologies. To assess between-group differences and explain heterogeneity, we carried out a subgroup analysis. Because regional differences may exist, we reported treatment effects on breech presentation separately.

Study Selection and Characteristics
We identified 198 studies using our search strategy and included 16 studies based on our inclusion and exclusion criteria. We summarize the process of study identification and selection in Figure 1, and present the characteristics of each of the included studies in Table 1. All included studies were randomized controlled trials. The size of the study populations ranged from 20 to 406 persons. The 16 studies included a total of 2555 participants; eight studies included participants from China [10,11,19–24], two studies included participants from Italy [25,26], and the others included participants from France [27], Australia [28], Switzerland [29], Croatia [17], Denmark [16], and Spain [15]. Most studies were published in English (56.3%); others were published in Chinese (37.5%) and French (6.2%).

Methodological Quality of Included Studies
The risk of bias for the included studies is shown in Figures 2 and 3. All studies were assessed as having low or uncertain levels of risk of bias, except in the domains of allocation and follow-up. We present the details of the risk of bias assessment in the Supplementary Materials (Table S2).
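Before turning to the quality of the included studies, the following minimal Python sketch illustrates the Mantel-Haenszel pooling of risk ratios and the NNT formula described in the data-synthesis subsection above; the 2×2 counts are placeholders, not data from the included trials.

import numpy as np

# Per-study counts: events/total in the moxibustion and control arms.
a  = np.array([75, 98, 48])    # cephalic at birth, moxibustion arm
n1 = np.array([105, 130, 68])  # total, moxibustion arm
c  = np.array([47, 62, 33])    # cephalic at birth, control arm
n2 = np.array([105, 130, 66])  # total, control arm

N = n1 + n2
# Mantel-Haenszel pooled risk ratio across the studies.
rr_mh = np.sum(a * n2 / N) / np.sum(c * n1 / N)

# NNT = 1 / absolute risk difference; pooled naively over arms here
# (each trial's NNT can be computed the same way from its own risks).
risk_t, risk_c = a.sum() / n1.sum(), c.sum() / n2.sum()
nnt = 1.0 / (risk_t - risk_c)
print(f"RR_MH = {rr_mh:.2f}, NNT = {nnt:.1f}")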
In general, the quality was moderate in all included studies, except for four studies [20,22,23,25] that were assessed as having a high risk of bias in the domain of either allocation or follow-up.

Adverse Events
Information on adverse events was presented in four trials. Because of the clinical heterogeneity between the included studies, we did not perform a meta-analysis of adverse events. Cardini et al. in 2005 reported adverse events (41.5%) related to moxibustion [25]. Patients had abdominal pain, throat problems, and unpleasant odor with or without nausea. Cardini et al. in 1998 and Vas et al. reported that no adverse events occurred in the moxibustion or control groups [10,15]. Neri et al. observed no adverse effects on participants who received moxibustion plus acupuncture or usual care [26].

Publication Bias
We used Review Manager Software (Version 5.3.5) to evaluate the publication bias. The sample size of most studies was >100 participants with two comparison arms, except for Do 2011 [28], Li 1996 [22], and Millereau 2009 [27]. Funnel plots are typically symmetrical for studies with large sample sizes (Figure 7). However, for studies with small sample sizes, no study reported a negative result, which suggests that publication bias is probable in the literature reporting correction of breech presentation with moxibustion and acupuncture.
Discussion
Our study found that acupuncture-type interventions (including moxibustion, acupuncture, and moxibustion plus acupuncture) at BL67 increase the frequency of cephalic presentation at birth. Moxibustion seemed to be more effective in correcting non-vertex presentation in the Asian population than in the non-Asian population. Previously, Vas et al. found that moxibustion had positive effects on correcting non-vertex presentation, although they noted that there was considerable heterogeneity among studies [12]. Li et al. demonstrated that moxibustion was effective in correcting breech presentation, but non-randomized controlled trials were included in this study [13]. The results of these two studies differed from those of Coyle et al. [14], who found that moxibustion did not reduce the frequency of non-cephalic presentation relative to no treatment. This discrepancy could be attributed to emerging clinical trials in recent years. In addition, Coyle et al. did not include all relevant trials, such as Chen, 2007 [21], Do, 2011 [28], Li, 1996 [22], Millereau, 2009 [27], and Yang, 2008 [19]. Our study included only RCTs that were eligible and up-to-date. To minimize the impact of potential bias, a sensitivity analysis was performed; such an analysis was not reported as being conducted in most previous studies. After comparing the net effects of different acupuncture-type interventions before and after sensitivity analysis, a positive effect on correcting breech presentation, particularly with moxibustion alone or in combination with acupuncture, is consistent. Our findings provide robust support for the effectiveness of moxibustion on correcting breech presentation.

The mechanism of moxibustion is not fully understood. Moxibustion at BL67 is thought to stimulate the production of prostaglandin and estrogen, which increases uterus contractions that lead to fetal movements [30,31]. Traditional Chinese medicine (TCM) theory teaches that disharmony of qi and blood may cause fetal malposition. It is thought that moxibustion at BL67 tonifies Yang qi and dredges channels to correct fetal position [19,21]. Some studies suggest that the effects of treatment might be related to ethnicity [32,33]. We performed a subgroup analysis to assess differences between ethnic groups and found that moxibustion seemed to be more effective in correcting non-vertex presentation in Asians than in non-Asian populations. To the authors' best knowledge, this is the first article that investigates the effect of moxibustion on breech presentation in different races. However, the mechanism of this phenomenon is unclear. During pregnancy, acupuncture has been hypothesized to have beneficial effects on pelvic pain or labor pain [34,35]. In TCM theory, moxibustion or acupuncture applied at BL67 is thought to activate blood circulation and dredge channels to correct fetal malposition [20]. However, there have been few studies on the use of acupuncture to treat breech presentation, and there has been no systematic review or meta-analysis in the literature to date. In our study, only two clinical trials were retrieved and included in the meta-analysis, but the risk of bias in one of those trials [22] was rated as "high." The result of the subsequent sensitivity analysis revealed that the effect of acupuncture was inconsistent.
Therefore, reports on the effects of acupuncture should be interpreted with caution. According to Coyle et al., there was a positive effect on breech presentation using moxibustion combined with acupuncture [14]. Nevertheless, only one trial was included in that meta-analysis. Our study included a new trial [20], and the result was similar. The pooled RR of moxibustion versus moxibustion plus acupuncture was 1.39 vs. 1.53 without sensitivity analysis and 1.34 vs. 1.42 with sensitivity analysis. The combination of moxibustion and acupuncture appears to exert a synergistic effect on correcting breech presentation. Previous systematic reviews included RCTs with different controls, including knee-chest position or observation [12,13]. However, one systematic review by Hofmeyr et al. found that there was no difference in cephalic presentation between knee-chest position and observation [7]. Therefore, we included clinical trials with no-effect controls, including knee-chest position and observation. Moxibustion and acupuncture are generally safe when administered by experienced clinicians, and both are less expensive than Caesarean section in general practice. In a study by Ineke et al., moxibustion reduced the number of Caesarean sections performed in pregnant women with breech presentation and was cost-effective when compared to expectant management [36]. A previous study pointed out that there were no significant differences in the comparison of moxibustion with usual care with respect to premature births or premature rupture of the membranes [12]. We performed meta-analyses of these two outcomes and obtained similar results (see Supplementary Materials Figure S1). Because the use of TCM theories is increasing in many countries, moxibustion might become more widely regarded as beneficial in obstetric patients.

This study was limited in several aspects. First, there might be publication bias in the meta-analysis. Second, the sample sizes of some included studies were too small for an RCT design. Finally, the application time of treatment (15–20 min) and the treatment duration (7–14 days) differed between studies.

Conclusions
Our updated systematic review and meta-analysis suggested that moxibustion has a positive effect on correcting breech presentation. However, more randomized, controlled clinical trials are needed to evaluate whether our estimate of the magnitude of the effect of moxibustion remains constant.
Tripterygium wilfordii glycosides ameliorates collagen-induced arthritis and aberrant lipid metabolism in rats

Rheumatoid arthritis (RA) is a chronic inflammatory autoimmune disease, and the dysregulation of lipid metabolism has been found to play an important role in the pathogenesis of RA and is related to the severity and prognosis of patients. Tripterygium wilfordii glycosides (TWG) is extracted from the roots of Tripterygium wilfordii Hook F. with anti-inflammatory and immunosuppressive effects, and numerous clinical trials have supported its efficacy in the treatment of RA. Some evidence suggested that TWG can modulate the formation of lipid mediators in various innate immune cells; however, whether it can improve RA-related lipid disorders has not been systematically studied. In the study, the type Ⅱ collagen-induced arthritis (CIA) model was used to investigate the efficacy of TWG in the treatment of RA and its effect on lipid metabolism. Paw volume, arthritis score, pathological changes of the ankle joint, serum autoantibodies, and inflammatory cytokines were detected to assess the therapeutic effect on arthritis in CIA rats. Then, shotgun lipidomics based on a multi-dimensional mass spectrometry platform was performed to explore the alterations in the serum lipidome caused by TWG. The study showed that TWG could effectively ameliorate arthritis in CIA rats, such as reducing paw volume and arthritis score, alleviating the pathological damages of the joint, and preventing the production of anti-CII autoantibodies and IL-1β cytokine. A significant increase in ceramide and decrease in lysophosphatidylcholine were observed in CIA rats, and these were highly correlated with arthritis score and IL-1β level. After TWG treatment, these lipid abnormalities could be corrected to a great extent. These data demonstrate that TWG exerts a beneficial therapeutic effect on aberrant lipid metabolism, which may provide new insights for further exploring the role and mechanism of TWG in the treatment of RA.
Introduction

Rheumatoid arthritis (RA) is a chronic autoimmune disease affecting about 0.5-1% of the population worldwide (Symmons et al., 2002; Humphreys et al., 2013), with a high prevalence in women and a considerable disease and social burden, including joint pain, disability, a high incidence of comorbidities and long-term financial costs (Dougados et al., 2014). The etiology of RA is complex, involving multiple factors such as pathogen infection, genetics and immunity, and remains to be completely elucidated (McInnes and Schett, 2011; Croia et al., 2019; Jung et al., 2019; Karami et al., 2019). At present, the primary goal of RA treatment is to relieve symptoms and slow the progress of the disease. A better understanding of disease mechanisms could lead to the development of effective preventive and therapeutic approaches to RA.

Numerous studies have demonstrated that lipids play an important role in the pathogenesis of RA. Some lipid species, such as eicosanoids, sphingolipids and lipoxins, are considered crucial for the development of arthritic diseases by tightly regulating inflammatory processes (Gerritsen et al., 1998; Serhan et al., 2008; McInnes and Schett, 2011). Peroxidation of membrane phospholipids produces biologically active aldehydes, such as malondialdehyde (MDA) and 4-hydroxynonenal (HNE), which damage the fluidity and permeability of the plasma membrane, eventually leading to destruction of cell structure and function (Phaniendra et al., 2015; Quiñonez-Flores et al., 2016). Studies have found that the MDA level in RA patients is significantly elevated (Aryaeian et al., 2011; Hassan et al., 2011; Mishra et al., 2012) and positively associated with RA activity (Datta et al., 2014). In addition, alterations in a number of phosphatidylcholine (PC), lysophosphatidylcholine (LysoPC), phosphatidylethanolamine (PE), and sphingomyelin (SM) species have been recognized to correlate with disease activity in RA patients and to reflect the therapeutic response to anti-rheumatic drugs (Kosinska et al., 2014; Koh et al., 2022). There are changes in the lipoprotein profiles of RA patients that may lead to increased morbidity and mortality (Toms et al., 2010). About 55-65% of RA patients develop dyslipidemia at an early stage (Curtis et al., 2012; Bag-Ozbek and Giles, 2015; Nowak et al., 2016; Phull et al., 2018), which may account for the higher risk of comorbidities such as cardiovascular disease (CVD) in these patients (Myasoedova et al., 2010; Bag-Ozbek and Giles, 2015; Charles-Schoeman et al., 2015; Łuczaj et al., 2016).

Tripterygium wilfordii Hook F. (TWHF), a traditional herbal medicine, has been reported to be effective in the treatment of RA and other immune diseases (Tao and Lipsky, 2000; Jiang et al., 2015). Tripterygium wilfordii glycosides (TWG) is extracted from the roots of TWHF and exhibits anti-inflammatory and immunosuppressive effects (Ma et al., 2007).
Research in synovial fibroblasts from arthritis patients suggested that TWG decreases the activity of nuclear factor κB (NF-κB), inhibits gene expression of cyclooxygenase (COX)-2 and inducible nitric oxide synthase (iNOS), reduces the production of prostaglandin E2 (PGE2) and NO, and promotes caspase-3 expression (Yang et al., 2020). The dysregulation of lipid metabolism has been suggested to be involved in the pathogenesis of RA and is related to the severity and prognosis of patients. Improving lipid metabolism can help restore the metabolic homeostasis of RA patients, thereby alleviating the disease and reducing complications. Although TWG has been shown to exert a beneficial therapeutic effect in RA, whether it can improve RA-related lipid disorders has not been studied. In the present study, the type II collagen-induced arthritis (CIA) rat model, which better resembles human RA owing to its chronic disease course (Holmdahl et al., 1992), was used to investigate the efficacy of TWG in the treatment of RA and its effect on lipid metabolism. Changes in paw volume, arthritis score, joint pathology, serum autoantibodies and pro-inflammatory cytokines were measured to assess the efficacy of TWG against collagen-induced arthritis. A multi-dimensional mass spectrometry-based shotgun lipidomics (MDMS-SL) platform was employed to analyze the alterations of the serum lipidome induced by TWG. Our results may provide further evidence of the role and mechanism of TWG in the treatment of RA.

2.2 Animal modeling and drug treatment

Female Wistar rats, 7 weeks old and weighing 160 ± 20 g, were provided by the Laboratory Animal Services Center of Zhejiang Chinese Medical University (Hangzhou, China). The animal experiment was approved by the Animal Ethics Committee of Zhejiang Chinese Medical University. After adaptive feeding for 1 week, the rats were used to establish the collagen-induced arthritis model according to a literature method (Rosloniec et al., 2010). Briefly, six rats were selected as the control group, and the others were primarily immunized with CII emulsified in FCA. About 200 μL of bovine CII emulsion (1.0 mg/ml) was injected intradermally at the base of the tail. One week after the primary immunization, a booster immunization was given with 150 μL of CII emulsion in FIA (1.0 mg/ml). The control group was injected with an equal volume of normal saline. Twenty-one days after primary immunization, the rats with induced arthritis (arthritis score > 6) were randomly divided into a model group (CIA group, n = 6) and a CIA + TWG group (n = 6). The CIA + TWG group was administered 6 mg/kg of TWG per day, equivalent to the regular human dose of 1 mg/kg per day. The TWG suspension was prepared from tablet powder dissolved in distilled water. The CIA group and control group were given the same volume of distilled water orally. The rats were anesthetized with chloral hydrate (10%, w/v) and blood was collected from the abdominal aorta after 21 days of intervention. The serum was separated by centrifugation for 10 min at 1,200 g and all samples were stored at -80°C. The detailed experimental process and grouping information are shown in Figure 1.
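The stated 6 mg/kg rat dose is described as equivalent to the regular 1 mg/kg human dose; this is consistent with standard body-surface-area scaling. A minimal sketch, assuming the commonly used rat-to-human conversion factor of about 6.2 (an assumption; the paper does not say how the equivalence was derived):

```python
def human_equivalent_dose(rat_dose_mg_per_kg: float, factor: float = 6.2) -> float:
    """Body-surface-area scaling: divide the rat dose (mg/kg) by ~6.2
    to obtain the human-equivalent dose (HED)."""
    return rat_dose_mg_per_kg / factor

print(round(human_equivalent_dose(6.0), 2))  # 0.97 mg/kg, i.e. roughly the 1 mg/kg human dose
```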
Evaluation of arthritis severity

After primary immunization, arthritis scores were measured every 7 days. Arthritis severity of each limb was graded on a 0-4 scale according to a modified literature method (Wang et al., 2021): no swelling (0 points); mild swelling of the little toe joints (1 point); swelling of the toe joints and foot plantar (2 points); swelling of the foot below the ankle (3 points); swelling of the entire foot, including the ankle (4 points). The arthritis score of each rat was obtained by summing the scores of the four limbs; a score of 6 points or greater was considered successful modeling, with a maximum possible score of 16 points (4 × 4). In addition, at 0, 7, 14 and 21 days after drug treatment, the paw volume of the right hind limb was measured with a toe volume meter as the paw swelling index. At the end of the animal experiment, the ankle joints of the rats were taken out. X-ray images (CARESTREAM Image Station System, Carestream Health, Inc., United States) were taken to observe the morphological changes of the joints. Furthermore, the ankle joints were flushed with PBS and fixed in 4% paraformaldehyde for 48 h. After decalcification, paraffin sections were made and stained with hematoxylin and eosin (H&E) to evaluate the histological changes of the ankle joints.

Detection of IL-1β and anti-CII antibody levels

An aliquot (100 µL) of serum from rats in the different groups was collected to measure the levels of IL-1β and anti-CII antibodies by ELISA kits according to the manufacturers' instructions.

Lipid extraction, analysis and data preprocessing

Serum lipids were extracted by the modified Bligh-Dyer protocol (Bligh and Dyer, 1959) in the presence of internal standards, as described in the reference (Yang et al., 2009). The chloroform phase containing lipids was collected. The extraction process was repeated twice, and the lipid extracts were combined and evaporated under a nitrogen stream. The dried lipid extracts were redissolved in 2 μL of chloroform/methanol (1:1, v/v), sealed under nitrogen, and stored at -20°C until analysis. The analysis of serum lipids was carried out on a triple-quadrupole mass spectrometer (TSQ Quantiva, Thermo Scientific) connected to an automated nanospray ion source (NanoMate, Advion Bioscience) according to the reference (Han et al., 2005). Before lipid analysis, each lipid extract was further diluted with chloroform/methanol/isopropyl alcohol (1:2:4, v/v/v). Various species of lipids were characterized and quantified by MDMS-SL according to the reference (Yang et al., 2009). All mass data were acquired through different sequence subroutines run by Xcalibur software. Data preprocessing, including baseline calibration, de-isotoping, peak intensity calculation, etc., was carried out according to the published reference (Han et al., 2005).

Statistical analysis

Principal component analysis (PCA) based on the phospholipid profiles was carried out in SIMCA-P 14.1 (Umetrics AB, Umeå, Sweden) to observe the overall distribution of samples from the control, CIA and CIA + TWG groups after mean centering. Furthermore, orthogonal partial least squares discriminant analysis (OPLS-DA) was employed to distinguish groups and screen the discriminant serum lipids. A permutation test was used to verify whether the model was over-fitted. Lipids with a variable importance in projection (VIP) value of the OPLS-DA model greater than 1.0 were considered to play an important role in the classification of the different groups.
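The paper runs the mean-centered PCA in SIMCA-P 14.1; a rough equivalent in Python with scikit-learn is sketched below. The random data stand in for the quantified lipid matrix and are an illustrative assumption, not part of the original analysis:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(18, 100))        # stand-in for a (samples x quantified lipids) matrix

pca = PCA(n_components=2)             # sklearn's PCA mean-centers internally,
scores = pca.fit_transform(X)         # matching the paper's mean-centering step
print(pca.explained_variance_ratio_)  # the paper reports PC1/PC2 at 53.7% and 25.5%
```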
Both p values and VIP values were taken as criteria for screening potential differential lipids, with VIP > 1.0 and p < 0.05 used as cutoffs. To investigate the statistical significance of differences in paw volume, arthritis score, IL-1β, anti-CII and lipid levels between the control, CIA and CIA + TWG groups, ANOVA followed by a Bonferroni post hoc test for pairwise comparisons was performed using SPSS 18.0 (International Business Machines Corp., Armonk, United States), and p < 0.05 was considered statistically significant. In addition, Pearson's correlations between the arthritis score, IL-1β, anti-CII and the differential lipids were analyzed.

Role of the funding source

No funding source had any role in study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the paper for publication. The corresponding author had full access to all the data in the study and had final responsibility for the decision to submit for publication.

Results

3.1 Exploring the changes in paw volume, arthritis score, IL-1β and anti-CII after Tripterygium wilfordii glycosides treatment

During the experiment, we continuously observed the general condition of the rats. The rats in the control group were in good condition, with shiny fur and free movement. Five to seven days after the booster immunization with collagen, the CIA rats gradually developed polyarthritis and showed fatigue, weight loss, swelling of the foot joints, lameness and mobility impairment. After 21 days of treatment, the CIA + TWG group had smoother fur, significantly increased body weight (p < 0.05, Figure 2A), and improved mobility compared with the CIA group. The severity of arthritis was measured by paw volume and arthritis score, with greater paw volume and a higher arthritis score indicating more severe disease. After modeling, the paw volume of the right hind foot and the arthritis score in the CIA group were significantly higher than those in the control group (p < 0.01, p < 0.01); after 14 days of treatment, compared with the CIA group, the paw volume and arthritis score of the CIA + TWG group began to decrease significantly (p < 0.05, p < 0.01); after 21 days of treatment, the paw volume and arthritis score in the CIA + TWG group continued to decrease (p < 0.01, p < 0.01), suggesting that joint swelling was alleviated by TWG (Figures 2B,C). Then, we investigated the change in pro-inflammatory cytokines in CIA rats. The serum level of IL-1β in the CIA group was significantly increased; 21 days after TWG treatment, the IL-1β level was remarkably decreased compared with the CIA group (p < 0.01, Figure 2D), indicating that TWG could inhibit the production of IL-1β to suppress the inflammatory response. In addition, we examined the levels of anti-CII antibodies in the CIA and CIA + TWG groups after 21 days of treatment. Compared with the CIA group, the level of anti-CII antibodies was relatively lower after TWG treatment (p < 0.01, Figure 2E), indicating that TWG could prevent the antibody response mediated by collagen. X-ray images showed that the CIA group developed severe joint swelling, joint deformity, joint space narrowing and other arthritis-related joint characteristics; after 21 days of treatment, the CIA + TWG group showed only mild joint space narrowing, with relief of joint swelling and joint deformity (Figures 3A-C).
The H&E stained histological of ankle joint showed that there were narrowed joint space, significant proliferation of fibrous connective tissue and infiltration of inflammatory cells, and severe erosion of articular cartilage and bone in the CIA group. TWG could alleviate the pathological damages of joint Alterations of total amounts of each lipid species after Tripterygium wilfordii glycosides treatment Lipids in rat serum were analyzed by MDMS-SL. After data preprocessing such as baseline correction and peak intensity calculation, nearly 100 kinds of lipid molecules belonging to 6 lipid species with high response in mass spectrometry were quantified, including SM, PC, LysoPC, ceramide (Cer), phosphatidylinositol (PI) and phosphatidylglycerol (PG). The total amounts of each lipid species in different groups were calculated respectively. It was found that the most abundant phospholipids are PC and LysoPC among the 6 phospholipid species (Figure 4). As compared to the control group, the total SM level and Cer level was significantly up-regulated in the CIA group (p < 0.05), while the total LysoPC level was significantly down-regulated (p < 0.05). 21 days after TWG treatment, the total LysoPC level was increased to that of the control group. In addition, TWG treatment reduced the total level of Cer ( Figure 4). Visualization of the difference of lipid profile after Tripterygium wilfordii glycosides treatment In order to display the overall distribution and clustering of the samples from the control group, the CIA group and CIA + TWG group, the lipid data were subjected to PCA after mean centering. Principal components 1 and 2 explained 53.7% and 25.5% of the variance, respectively. The PCA score plot is shown in Figure 5A, it was observed that there is a clear separation in the lipid profiles between the CIA group and the control group, reflecting the significant changes in serum lipid metabolism in rats after the injection of collagen. The lipid profile of CIA + TWG group tends to be closer to that of the control group, but it is still different from that of the control group. FIGURE 2 Comparison of percentage of weight change (A), paw volume (B), arthritis score (C), IL-1β level (D), anti-CⅡ level (E) in rats of the control, CIA and CIA + TWG groups. The percent change in weight was calculated as (weight after drug treatment-weight before immunization)/weight before immunization. p represents p < 0.05 between the control and CIA groups, # represents p < 0.05 between the control and CIA + TWG groups, and $ represents p < 0.05 between the CIA and CIA + TWG groups. Frontiers in Pharmacology frontiersin.org Alterations of serum lipids associated with collagen immunization and/or Tripterygium wilfordii glycosides treatment To reveal the lipid alterations caused by collagen immunization and TWG treatment, OPLS-DA was performed between the control group, CIA group and CIA + TWG groups. Permutation tests indicated good fitness of the OPLS-DA models in revealing the alterations in serum lipids. OPLS-DA score plots showed both collagen induction and TWG treatment led to changes in lipid profile ( Figures 5B-D). Differential lipids were screened out based on the VIP values of OPLS-DA models (VIP>1) and p values of significance tests (p < 0.05). 
3.6 Significant associations of arthritis score, IL-1β and anti-CII antibodies with the levels of the differential lipids

In the study, we found a significant positive correlation between IL-1β and the arthritis score, which reflects the severity of joint swelling. Among the screened lipids, the levels of Cer (N24:0, N23:0 and N22:0) and SM (N22:1 and N20:0) were positively correlated with the IL-1β level and arthritis score, as were the total amounts of SM and Cer species, while LysoPC (20:4 and 18:0) and the total amount of LysoPC species were negatively correlated with IL-1β and arthritis score. After collagen immunization, an antibody response against CII was induced, and TWG treatment down-regulated the level of anti-CII antibodies. An association between LysoPC, SM, Cer and anti-CII antibodies also existed in CIA rats (p < 0.05, Figure 7).

[FIGURE 6 Significantly altered serum lipids related to collagen immunization and TWG treatment. Volcano plots showing the lipids with significant differences between the control and CIA groups (A), between the control and CIA + TWG groups (B), and between the CIA and CIA + TWG groups (C), and a heatmap showing the expression pattern of each differential lipid (D). Red indicates a high concentration of lipid and blue indicates a low concentration. p represents p < 0.05 between the control and CIA groups, # represents p < 0.05 between the control and CIA + TWG groups, and $ represents p < 0.05 between the CIA and CIA + TWG groups.]

Discussion

CIA is an extensively used animal model of autoimmune arthritis. The pathological manifestations of CIA models are progressive synovitis and synovial hyperplasia, inflammatory cell infiltration, and cartilage destruction, finally leading to joint injury and stiffness; these characteristics are more similar to those of clinical RA (Holmdahl et al., 1992). The CIA model is established by immunizing genetically susceptible strains of mice/rats with CII. CII activates innate and adaptive immune responses, which play a primary role in the initiation and pathogenesis of RA in the CIA model. Some studies have found that anti-CII antibodies are present in the serum and synovial fluid of RA patients and precede the onset of joint symptoms (Mullazehi et al., 2007; Whittingham et al., 2017). Patients positive for anti-CII antibodies exhibited higher disease activity and more severe symptoms (Mullazehi et al., 2007). In this study, the CIA rat model was used to investigate the efficacy of TWG. Anti-CII antibodies were detected in CIA rats, implying that an immune response was induced by CII immunization. IL-1β is an initiating factor of inflammation and regulates a variety of cytokines, cell adhesion molecules and inflammatory mediators. Previous studies have shown that the level of IL-1 in the circulation of RA patients is higher than in other chronic inflammatory joint diseases (Kay and Calabrese, 2004) and is associated with bone erosion and cartilage destruction in RA (Guo et al., 2018). Our results demonstrated that TWG could alleviate the severity of the disease, including reducing joint swelling, repairing joint injury, and decreasing the generation of serum autoantibodies (anti-CII) and the secretion of pro-inflammatory cytokines (IL-1β).
As energy sources, structural constituents and signaling molecules, lipids participate in the regulation of many important biological processes, such as cell growth, proliferation, differentiation and death (Wymann and Schneiter, 2008; Han, 2016). Disorders of lipid metabolism may lead to abnormalities in signaling, inflammation and autoimmune responses (Wymann and Schneiter, 2008). In this study, shotgun lipidomics revealed that the total amounts of Cer and SM species were increased in CIA rats and were positively correlated with the pro-inflammatory cytokine IL-1β. SM is an important component of cell membranes and plasma lipoproteins, and plays a pro-inflammatory role by enhancing the expression of COX-2 and of genes encoding inflammatory cytokines (Miltenberger-Miltenyi et al., 2020). SM can be hydrolyzed into Cer, which is involved in TNFα-mediated activation of NF-κB and RANKL-mediated osteoclast differentiation to promote the development of RA (Qu et al., 2018). Recent studies have found that the levels of Cer and SM species in the synovial fluid of patients with RA and osteoarthritis were increased compared with healthy controls, consistent with the role of Cer and SM in inflammation (Aletaha et al., 2010). Our study showed that TWG treatment could totally reverse the elevation of the Cer level and greatly reduce the levels of some SM (N20:0 and N22:1) molecules, which may facilitate the amelioration of inflammation and joint swelling in CIA rats. Ceramidase is essential for converting Cer to sphingosine, and some evidence suggests that TWG can interact with ceramidase to regulate the level of Cer (Qian et al., 2022).

[FIGURE 7 Pearson correlation analyses between arthritis score, IL-1β, anti-CII antibodies and the significantly changed lipids in different groups of rats.]

LysoPC is generated mainly through hydrolysis of PC by phospholipase A2 (PLA2) and plays a chemotactic role at the inflammatory site, thus boosting the inflammatory response. However, it has been reported that low levels of LysoPC are observed in active RA patients, which might be related to decreased PLA2 activity (Lourida et al., 2007; Koh et al., 2022). Our study also showed that the total amount of LysoPC was significantly decreased in CIA rats, and accordingly, the levels of most PC molecules were elevated to some extent. LysoPC is a major component of oxidized low-density lipoprotein (oxLDL), which has been proposed as a critical pathogenic factor in atherosclerosis (Law et al., 2019). Evidence suggests an inverse correlation between LysoPC and the risk of CVD (Lee et al., 2013; Stegemann et al., 2014), which has a high incidence in RA patients (Aviña-Zubieta et al., 2008). Intriguingly, the reduced level of LysoPC in CIA rats was corrected after TWG treatment. Network pharmacology research has found that some absorbed components of TWG, such as hypoglaulide, triptotriterpenic acid A and wilforlide A, can target PLA2G10, PLA2G2A and PLA2G1B, thereby interfering with glycerophospholipid metabolism and ether lipid metabolism, which in turn leads to changes in the levels of LysoPC and PC (Qian et al., 2022). Taken together, the observed lipid profiles suggest an ameliorative effect of TWG on lipid disorders associated with RA, but do not provide a mechanistic explanation for the finding. Whether this is a cause or a consequence of joint inflammation remains to be investigated, which is one of the limitations of this study.
Conclusion

The present study showed that TWG could effectively relieve joint swelling, repair joint injury, and prevent the production of anti-CII autoantibodies and the secretion of the IL-1β cytokine in CIA rats. Moreover, TWG could improve the aberrant lipid metabolism caused by collagen immunization, including down-regulating the Cer level and up-regulating the LysoPC level. These results suggest that TWG exerts a beneficial therapeutic effect on lipid metabolism disorders, and further research is needed to better explain the biological mechanisms underlying these findings.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding authors.

Ethics statement

The animal study was reviewed and approved by the Experimental Animal Health Ethics Committee of Zhejiang Chinese Medical University.

Author contributions

Experiment design (JZ and CW); animal experiment (LZ and LC); lipidomics study (LZ and CH); ELISA assay (YZ and DW); histological analysis (XZ); data analysis (JZ and DW); manuscript writing (YZ and JZ); and critical revisions (JZ and CW). All authors have read and approved the final manuscript.

Funding

This study was supported by the National Key R&D Program of China (No. 2018YFC1705500) and the National Natural Science Foundation of China (No. 81403269).
Racial Disparity in Duration of Patient Visits to the Emergency Department: Teaching Versus Non-teaching Hospitals

Introduction: The sources of racial disparity in duration of patients' visits to emergency departments (EDs) have not been documented well enough for policymakers to distinguish patient-related factors from hospital- or area-related factors. This study explores the racial disparity in duration of routine visits to EDs at teaching and non-teaching hospitals.

Methods: We performed retrospective data analyses and multivariate regression analyses to investigate the racial disparity in duration of routine ED visits at teaching and non-teaching hospitals. The Healthcare Cost and Utilization Project (HCUP) State Emergency Department Databases (SEDD) were used in the analyses. The data include 4.3 million routine ED visits encountered in Arizona, Massachusetts, and Utah during 2008. We computed duration for each visit by taking the difference between admission and discharge times.

Results: The mean duration for a routine ED visit was 238 minutes at teaching hospitals and 175 minutes at non-teaching hospitals. There were significant variations in duration of routine ED visits across race groups at teaching and non-teaching hospitals. The risk-adjusted results show that the mean duration of routine ED visits for Black/African American and Asian patients, when compared to visits for white patients, was shorter by 10.0 and 3.4%, respectively, at teaching hospitals, and longer by 3.6 and 13.8%, respectively, at non-teaching hospitals. Hispanic patients, on average, experienced 8.7% longer ED stays when compared to white patients at non-teaching hospitals.

Conclusion: There is significant racial disparity in the duration of routine ED visits, especially in non-teaching hospitals, where non-white patients experience longer ED stays compared to white patients. The variation in duration of routine ED visits across race groups was smaller at teaching hospitals than at non-teaching hospitals.

critical treatments and test results. 19,20 Several studies have documented that extended LOS is usually due to evaluation time by a physician for critical testing, treatment, and bed placement. 21 Variation in ED LOS provides a good opportunity to study racial disparity because it is affected by a number of complex public health and healthcare facility-related issues. One study showed that disparities in waiting times exist in emergency care and that black patients wait longer to see emergency physicians than white patients. 22 A few studies used the National Hospital Ambulatory Medical Care Survey to document racial disparity in ED LOS for admitted patients. [23][24][25][26] Several studies in the literature found no evidence of racial or ethnic disparity in use of emergency care or in ED LOS. [27][28][29] The objective of this study was to determine whether racial disparities in duration of ED visits exist at teaching and non-teaching hospitals. ED visits for this study are limited to routine visits in which the patients are discharged for home or self care. This study contributes to the existing literature in the following important ways: First, existing studies examining racial disparity in ED LOS and general resource use employ data drawn from a sample of ED visits obtained from a survey or tracked as part of a before-and-after intervention study. 30 One of the largest of these data files is a nationally representative sample of 138,569 ED visits over a 5-year period. 20
In contrast, our data file includes 4.3 million ED visits in a single year. Healthcare policies designed to provide solutions to increased ED LOS, ED crowding, and related issues may produce better outcomes when they are based on such large data sets. Such large databases may also shed light on the wide variations in use patterns of ED services and the significant differences in patient-related and area-specific factors. 31 Second, our findings may inform public and private policymakers on a broad range of issues, including, but not limited to, the variation in duration of routine ED visits by patient race group, age, gender, insurance coverage, and disease category; by hospital bed size, location, system membership, trauma center classification, and ownership status; and by geographic income distribution. Third, our study is also the first, to our knowledge, addressing racial disparity in ED LOS by hospital type. We compare the duration of routine ED visits across race groups in teaching and non-teaching hospitals, as the former generally treat more severe or clinically complex patients compared to the latter. Finally, this study further contributes to the existing literature by addressing several important factors affecting ED LOS, [32][33][34][35][36][37][38][39][40][41] i.e., hospital ED visit volume and ED admission day of the week.

Study Design and Population

We performed retrospective data analyses and multivariate regression analyses to investigate the racial disparity in the duration of routine ED visits that were discharged for home or self care, using the Healthcare Cost and Utilization Project (HCUP) State Emergency Department Databases (SEDD) for 2008. HCUP is maintained by the Agency for Healthcare Research and Quality (AHRQ). The SEDD employed in this study include data on 4.3 million routine ED visits in 3 states: Arizona, Massachusetts, and Utah. In general, the SEDD provide detailed diagnoses, procedures, total charges, and patient demographics. Demographics include gender, age, race, and insurance coverage (i.e., Medicare, Medicaid, private insurance, other insurance, and uninsured). However, the SEDD from these 3 states also provide admission and discharge times for each visit, from which duration may be calculated. We obtained information about hospital characteristics (i.e., urban versus rural, ownership status, teaching status, bed size, and system membership) from the 2008 American Hospital Association Annual Survey Database and linked those data to the SEDD files using hospital identifiers. In addition, we obtained information about the trauma level of each hospital from the Trauma Information Exchange Program database, collected by the American Trauma Society and the Johns Hopkins Center for Injury Research and Policy. Finally, we used the 2008 Area Resource File to obtain county-level income information. A value for ED LOS is not readily available in our data. We computed the duration for each visit by taking the difference between admission and discharge times, which is the time patients waited in ED rooms plus their treatment time. 32 Our measure of duration does not include boarding time because our ED data file includes information for only treat-and-release patients, not admitted ones. Therefore, we do not believe that the lack of separable ED LOS measures (i.e., waiting room time and treatment time) compromises our results, because we use the same measure of ED LOS for all race groups within each hospital.
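A minimal pandas sketch of the duration computation described above; the column names and timestamps are illustrative, not the actual SEDD field names:

```python
import pandas as pd

visits = pd.DataFrame({
    "admit":     pd.to_datetime(["2008-03-01 22:15", "2008-03-02 09:40"]),
    "discharge": pd.to_datetime(["2008-03-02 01:05", "2008-03-02 12:10"]),
})
# Duration = discharge time - admission time, in minutes: waiting-room time
# plus treatment time (no boarding time, since the file contains only
# treat-and-release visits).
visits["duration_min"] = (visits["discharge"] - visits["admit"]).dt.total_seconds() / 60
print(visits["duration_min"].tolist())  # [170.0, 150.0]
```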
More specifically, we assume that ED treatment times for patients with the same clinical conditions, age, and gender are similar regardless of race group. However, there might be some variation in ED waiting room time within and across hospitals. Our analytic approach addresses this issue. Hence, if we document that there is racial disparity in the duration of ED visits as defined above, we can attribute that disparity mostly to ED waiting room time, because our multivariate analysis of ED duration, explained in detail below, controls for the severity of clinical cases and other socio-economic characteristics that are typically responsible for any significant variation in actual treatment times.

Statistical Analyses

We started with extensive secondary data analyses by patient, hospital, and area characteristics to explore racial disparity in the duration of routine ED visits at teaching and non-teaching hospitals separately. ED duration is expressed in minutes, measured as the difference between admission time and discharge time. (Ideally, ED waiting time would be deconstructed into the waiting room, treatment, and boarding times experienced by ED patients; in our data, however, we can observe only the total length of stay in the ED for each visit.) The mean (median) duration for a specific admission hour was measured as the mean (median) value of the durations of all routine ED visits at that hour during 2008. We applied a similar approach when reporting the mean duration of ED visits across patient demographics and hospital characteristics. For example, the mean duration of ED visits for female patients was measured as the total duration of routine ED visits by all female patients divided by the total number of routine ED visits by female patients during 2008. We analyzed the data with SAS 9.02 and Stata 12. Severity of illness is an important factor that can affect the mean duration of ED visits. To further explore the potential relationship between the mean duration of visits and various disease groups, we grouped ED visits into major disease categories based on Clinical Classification Software, a diagnosis and procedure categorization scheme based on the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM). While the HCUP SEDD provide all diagnosis codes for every visit, they do not always clearly differentiate between the primary diagnosis code and other diagnosis codes. Therefore, we used all diagnosis codes reported for each visit when developing our major disease categories. Next, we developed a flexible functional linear model that controls for patient demographics and hospital and area characteristics to assess racial disparity in the duration of routine ED visits at teaching and non-teaching hospitals. More specifically, we estimated several regression models using the natural log value of the duration as the dependent variable (the logarithm was used because the distribution of ED durations was skewed) to examine factors associated with the duration of patients' routine ED visits. We estimated a linear regression model that controls for: 1) patient characteristics, including race group, age, gender, insurance coverage, and major disease categories; 2) hospital characteristics, including bed size, location, membership in a large hospital system, trauma center classification, and ownership status; 3) geographic income distribution, measured by median household income in the patient's residence ZIP code; and 4) admission day of the week and the average volume at the ED 1 hour before the admission hour.
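A hedged statsmodels sketch of the log-duration model just described, fit on synthetic stand-in data; the real specification additionally includes insurance coverage, disease categories, and the hospital and area controls listed above, and all variable names here are illustrative assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "duration_min": rng.lognormal(mean=5.2, sigma=0.5, size=n),  # skewed, like ED durations
    "race": rng.choice(["White", "Black", "Hispanic", "Asian"], size=n),
    "age": rng.integers(1, 90, size=n),
    "female": rng.integers(0, 2, size=n),
    "admit_day": rng.choice(["Monday", "OtherWeekday", "Weekend"], size=n),
    "prior_hour_volume": rng.poisson(8, size=n),
})
df["log_duration"] = np.log(df["duration_min"])  # log transform for the skewed outcome

res = smf.ols(
    "log_duration ~ C(race) + age + female + C(admit_day) + prior_hour_volume",
    data=df,
).fit(cov_type="HC1")  # Huber-White sandwich (robust) standard errors
print(res.params)
# A robust-regression check via iteratively reweighted least squares could use
# statsmodels' sm.RLM with a Huber norm, mirroring the paper's sensitivity analysis.
```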
We tested our flexible linear models for the undesirable presence of multicollinearity (i.e., a linear relationship among predictor variables) and heteroskedasticity (i.e., variance of the error terms correlated with 1 or more explanatory variables). We saw no evidence of multicollinearity in the correlation coefficients of the predictor variables, and we corrected the heteroskedasticity we identified using Huber-White sandwich estimators to obtain robust standard errors and variance estimates. 42 We further estimated our linear model using robust regression methods to assess the validity of our results based on the linear model. [43][44][45][46] More specifically, we ran robust regressions using iteratively reweighted least squares; that is, we assigned a weight to each observation, with higher weights given to better-behaved observations.

Descriptive Results

Patient Characteristics. We began our analysis with a descriptive comparison of the duration of routine ED visits across race groups to profile the differences by age, gender, insurance coverage, and disease category. We analyzed patient demographics to explore potential explanations for the racial disparity we observed. Table 1 displays the total number of routine ED visits and the mean and median duration of visits for various patient characteristics at both teaching and non-teaching hospitals. (We focus mainly on the mean value of duration in our analysis; however, we provide both mean and median values for each measure throughout all tables and figures to set the stage for further research and to provide additional detail to key policymakers and interested researchers.) As shown in Table 1, the mean duration of visits ranged from 223 to 245 minutes at teaching hospitals and 173 to 189 minutes at non-teaching hospitals across race groups. The mean duration increased with the age of the patient regardless of the teaching status of the hospital. We also observed a longer mean duration of routine ED visits for female patients when compared to male patients across race groups within each hospital setting. Next, we analyzed the mean duration of routine ED visits by insurance coverage for each race group at both teaching and non-teaching hospitals. We found that Medicare patients' visits had the longest mean duration (278 minutes at teaching hospitals and 213 minutes at non-teaching hospitals), which could be due to higher severity of illness and the presence of multiple diseases among these patients. Table 1 shows that the mean duration of routine ED visits by white, Black/African American, Hispanic, and Asian Medicare patients visiting teaching hospitals (non-teaching hospitals) was 280, 268, 280, and 300 (209, 241, 242, and 255) minutes, respectively. Table 1 also shows that the mean duration of routine ED visits by white, Black/African American, Hispanic, and Asian patients with Medicaid coverage at teaching hospitals (non-teaching hospitals) was 228, 201, 214, and 205 (159, 170, 164, and 185) minutes, respectively. For those without insurance coverage, the mean duration of routine ED visits by white, Black/African American, Hispanic, and Asian patients at teaching hospitals (non-teaching hospitals) was 241, 213, 251, and 243 (163, 187, 187, and 191) minutes, respectively.
These results suggest that there is no sizable difference in mean duration of ED visits between patients with any insurance coverage and uninsured patients. As described above, we grouped ED visits into major disease categories based on Clinical Classification Software, using all diagnosis codes reported for each visit. As presented in Table 1, routine ED visits for neoplasms; endocrine, nutritional, and metabolic diseases and immunity disorders; diseases of the blood and blood-forming organs; and mental disorders were associated with a longer mean duration across all race groups regardless of hospital teaching status, while routine ED visits for diseases of the skin and subcutaneous tissue, certain conditions originating in the perinatal period, and injury and poisoning were generally associated with a shorter duration of ED visits at both teaching and non-teaching hospitals.

Hospital and Area Characteristics. Next, we analyzed hospital and area characteristics to explore other potential factors associated with longer ED visits for each race group. Figure 1 shows that the mean duration of ED visits at teaching hospitals was consistently longer when compared to non-teaching hospitals. Table 2 further shows that hospitals with large bed size were associated with the longest duration of visits (279 minutes at teaching and 191 minutes at non-teaching hospitals) when compared to hospitals with small bed size (207 minutes at teaching and 161 minutes at non-teaching hospitals) or medium bed size (173 minutes at teaching and 161 minutes at non-teaching hospitals). (Further details about hospital bed sizes are available at: http://www.hcup-us.ahrq.gov/db/vars/hosp_bedsize/nisnote.jsp.) White patients had longer ED stays when compared to Black/African American, Hispanic, and Asian patients at teaching hospitals regardless of hospital bed size. In contrast to the pattern at teaching hospitals, white patients generally experienced shorter ED stays at non-teaching hospitals regardless of bed size. Table 2 also shows that the mean duration of routine ED visits at urban teaching hospitals was 67 minutes longer than at their urban non-teaching counterparts. The mean duration of routine ED visits encountered by white patients was longer by 22, 10, and 10 minutes (shorter by 13, 14, and 16 minutes), respectively, when compared to the mean duration of routine ED visits encountered by Black/African American, Hispanic, and Asian patients at urban teaching hospitals (urban non-teaching hospitals). We found that non-teaching hospitals generally serve rural areas, and the mean ED duration of all routine visits at these hospitals was 164 minutes, with some variation across race groups. Recognizing the differences in income levels across geographic regions, we compared the mean duration based on income distribution. In general, we did not find significant differences in mean duration of routine ED visits between relatively richer or poorer counties.
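Mechanically, these descriptive comparisons are means and medians of visit duration within race-by-hospital-type cells; a small pandas sketch with toy values (the real computation runs over the 4.3 million visit records):

```python
import pandas as pd

visits = pd.DataFrame({
    "teaching":     [True, True, True, False, False, False],
    "race":         ["White", "Black", "Asian", "White", "Hispanic", "Asian"],
    "duration_min": [245, 223, 235, 173, 186, 189],  # toy values echoing Tables 1-2
})
summary = visits.groupby(["teaching", "race"])["duration_min"].agg(["mean", "median"])
print(summary)
```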
Akin to previous results, white patients generally had longer ED stays at teaching hospitals and slightly shorter ED stays at non-teaching hospitals when compared to other race groups, regardless of geographic income distribution. We also observed that the mean duration of routine ED visits at teaching hospitals (non-teaching hospitals) that were members of a hospital system was 40 (25) minutes longer when compared to non-system-member teaching hospitals (non-teaching hospitals). Similarly, the mean duration of visits at Level 1 trauma centers was 269 minutes, substantially longer than at Level 2 or Level 3 trauma centers or at non-trauma centers among teaching hospitals. We also found that the mean duration of routine ED visits at Level 1 trauma centers, when compared to Level 2 and Level 3 trauma centers or to non-trauma centers at non-teaching hospitals, was longer by 81, 85, and 38 minutes, respectively. (Trauma level designation was based on American College of Surgeons or state-specific designation. We found most Level 1 trauma centers within teaching hospitals; less than 2 percent were located within non-teaching hospitals. While we expected all of them to be located within teaching hospitals, we still report the results allowing for the possibility of Level 1 trauma centers located within non-teaching hospitals.) Finally, we found that the mean duration of visits at public, non-profit, and for-profit teaching hospitals was 194, 248, and 217 minutes, respectively, showing significant differences between for-profit and non-profit hospitals. We observed similar but smaller variation between public, non-profit, and for-profit non-teaching hospitals, possibly due to differing financial incentives.

Risk-adjusted Results. Table 3 presents the regression coefficients of our linear model estimated separately for teaching and non-teaching hospitals. (We present the empirical results of the linear regression model here; the estimates obtained from the robust linear regression model were parallel to those of the linear regression model.) The empirical estimates show that the mean duration of routine ED visits encountered by Black/African American and Asian patients at teaching hospitals (non-teaching hospitals) was, respectively, 10.0 and 3.4% lower (3.6 and 13.8% higher) than the mean duration of routine ED visits encountered by white patients. The difference in mean duration of routine ED visits at teaching hospitals between Hispanic and white patients was not statistically significant. However, Hispanic patients, on average, experienced an 8.7% longer duration of ED visits when compared to white patients at non-teaching hospitals. These risk-adjusted results parallel our descriptive results, indicating that white patients, when compared to non-white patients, generally have longer ED stays at teaching hospitals (Figure 2) but shorter ED stays at non-teaching hospitals (Figure 3). Our results also support the findings of a previously published study 19 that found longer ED LOS for Black/African American non-Hispanic patients (10.6% longer) and Hispanic patients (13.9% longer) when compared to non-Hispanic white patients. We also obtained valuable information associated with patients' ED stays in general. The regression results show that the mean duration of routine ED visits for female patients was 5.4 and 4.9% longer than for male patients at teaching and non-teaching hospitals, respectively.
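One way the reported percentage differences could be recovered from a log-duration model is exp(β) − 1; the paper does not state which transformation it used, so the coefficient below is a hypothetical value chosen only to illustrate how a reported figure like "10.0% lower" maps back to a log-scale estimate:

```python
import math

beta = -0.105  # hypothetical log-scale coefficient (e.g., Black/African American, teaching)
print(f"{(math.exp(beta) - 1) * 100:.1f}%")  # -10.0%
```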
The regression results also show that the mean duration of ED visits increases with patient age at both teaching and non-teaching hospitals. Our risk-adjusted results also suggest that uninsured patients generally have shorter ED stays when compared to Medicare enrollees. We also found significant variation in mean duration of routine ED visits across disease categories. Our risk-adjusted results were mostly parallel to our descriptive results (Table 1), indicating that patients diagnosed with neoplasms; endocrine, nutritional, and metabolic diseases and immunity disorders; diseases of the blood and blood-forming organs; and mental disorders generally experienced longer ED stays when compared to patients diagnosed with other conditions at both teaching and non-teaching hospitals. Table 3 also presents the estimated effects of hospital characteristics on the mean duration of routine ED visits. The results suggest that the mean duration of routine ED visits was higher at for-profit teaching hospitals and lower at for-profit non-teaching hospitals when compared to their respective cohorts of teaching and non-teaching public hospitals. Patients at teaching hospitals with large bed size have ED stays about twice as long as patients at teaching hospitals with small bed size. The mean duration of ED stays at non-teaching hospitals with large bed size was only 7.5% longer than the mean duration of ED stays at non-teaching hospitals with small bed size. We also found that the mean duration of routine ED visits at Level 1 trauma centers was significantly longer than at non-trauma centers regardless of hospital teaching status. Additionally, we obtained crucial information regarding how admission day of the week and hospital volume affect the mean duration of routine ED visits. Our risk-adjusted results in Table 3 show that the mean duration of ED visits at teaching hospitals (non-teaching hospitals) was 4.1% (3.7%) longer on Mondays and 5.9% (4.1%) shorter on other weekdays when compared to the mean duration of ED visits on weekends. We also found a positive correlation between longer ED stays and the number of patients present at the ED prior to admission time at both teaching and non-teaching hospitals.

DISCUSSION

This analysis, based on a very large data set, reveals considerable variation in the duration of routine ED visits across race groups at teaching and non-teaching hospitals. We computed the duration of each visit by taking the difference between admission and discharge times, which is the total time patients waited in ED rooms plus their treatment time. We documented racial disparity in the duration of ED stays at both teaching and non-teaching hospitals. We found that white patients generally have longer ED stays at teaching hospitals and shorter ED stays at non-teaching hospitals when compared to non-white patients. These findings provide robust evidence of racial disparity, especially in non-teaching hospitals, that may be used by decision makers in both public and private healthcare arenas to improve the timeliness of the care provided in the ED and to understand the factors causing the racial disparity. Some of our results are consistent with the characterization in the literature of care provided in the ED and are expected. 22,23 Level 1 trauma centers, for example, have comprehensive resources and are able to care for the most severely injured patients.
One plausible explanation for longer ED stays is that Level 1 trauma centers provide the highest level of surgical care to seriously injured patients, who may use more resources and whose treatments last longer. It is also plausible to assume that most Level 1 trauma centers provide leadership in education and research. We found that the mean duration of routine ED visits at non-trauma centers was longer than at Level 2 or Level 3 trauma centers, but it is not clear why. Another important finding of our study pertains to uninsured patients. We found that the duration of routine ED visits encountered by uninsured patients was about the same as the mean duration of routine ED visits by all patients. More precisely, the mean duration of all routine ED visits was 238 and 175 minutes, respectively, at teaching and non-teaching hospitals, whereas the mean duration of ED stays encountered by uninsured patients was 239 and 171 minutes at the respective hospitals. We further found that the difference in mean duration of ED visits between uninsured patients and others was not sizable across race groups at either teaching or non-teaching hospitals. It is plausible to assume that both uninsured and insured patients receive a similar quality of care once they are admitted to the ED and that both cohorts could face similar barriers to healthcare access at hospital EDs. Some of these findings are worthy of further exploration. For example, we believe that since elderly patients frequently present to the ED with multiple complications, they require more ED resources during their visits, which causes them to have a longer duration of visits. Similarly, we found that race is correlated with an increase in the duration of ED visits. When this race correlation is associated with a lower socio-economic status, some policymakers may choose to use interpreters, or perhaps social workers, to work with patients to increase their access to primary care and thereby decrease their use of the ED for non-emergency complaints.

LIMITATIONS

We computed the duration of each visit by taking the difference between admission and discharge times, which yields the total time patients waited in ED rooms plus their treatment time. Our measure of duration, unfortunately, does not separate waiting time and treatment time. The data in the HCUP SEDD are based on ED encounters as the unit of analysis. Therefore, a given patient may have many visits represented in the data. As a result, the summary information reported under patient characteristics might overestimate or underestimate demographics for individual patients. This study also does not address the impact of financial incentives and other confounding factors across hospital types on the duration of ED visits. Our analysis is confined to the routine ED visits presented in the HCUP SEDD. ED encounters that result in subsequent admission to the same hospital are not included in the analysis. Therefore, the relative number of patients admitted at individual EDs may compromise this analysis, as their presence may limit the resources available to patients on routine ED visits.

CONCLUSION

Our results show that the mean duration for a routine ED visit was 238 minutes at teaching hospitals and 173 minutes at non-teaching hospitals. When documenting the mean duration, we uncovered a significant racial disparity in the mean duration of ED visits at non-teaching hospitals.
Based on patient demographics and hospital characteristics, we identified several important factors that are associated with increased ED stays. We identified a direct relationship between increased duration of ED visits and patient race, age, gender, and severity of illness, and hospital location and ownership status. We observed substantial variation in mean duration of ED visits by race group between teaching and non-teaching hospitals. The mean duration of ED visits at teaching hospitals (non-teaching hospitals) for White, Black/African American, Hispanic, and Asian patients was 245, 223, 234, and 235 (173, 187, 186, and 189) minutes, respectively. Our risk-adjusted findings show that the mean duration of ED visits for Black/African American and Asian patients was 10.0 and 3.4% lower (3.6 and 13.8% higher), respectively, than the mean duration of routine ED visits encountered by white patients at teaching hospitals (non-teaching hospitals). The mean duration of ED visits for Hispanic patients was 8.7% longer at non-teaching hospitals when compared to white patients. We did not find any disparity in duration of ED visits at teaching hospitals between white and Hispanic patients. We also found that female patients generally experienced longer ED stays than male patients. Elderly patients and patients diagnosed with neoplasms; endocrine, nutritional, and metabolic diseases and immunity disorders; diseases of the blood and blood-forming organs; and mental disorders generally experienced longer ED stays than did other patients. Consistent with the existing literature, our results suggest that, in the aggregate, lack of health insurance did not have a significant direct association with a longer mean duration of ED visits. The mean duration of ED visits was substantially longer at non-profit hospitals when compared to for-profit hospitals, and at Level 1 trauma centers when compared to other trauma centers or non-trauma centers. Our findings may also inform public and private policymakers on a broad range of issues including, but not limited to, admission day of the week, hospital volume, and the impact of hospital bed size on the mean duration of ED visits.

[Table 3 notes: Data include all hospital ED routine visits discharged for home or self care during 2008 in Arizona, Massachusetts, and Utah. The dependent variable is the log of the duration, measured in minutes as the difference between admission and discharge time for each visit. Less than 2 percent of Level 1 trauma centers were located within non-teaching hospitals. Disease categories are based on Clinical Classification Software (CCS), a diagnosis and procedure categorization scheme based on ICD-9-CM; because the HCUP SEDD do not differentiate between primary and other diagnosis codes, all diagnosis codes reported for each visit were used when creating broader CCS disease categories (details: http://www.hcup-us.ahrq.gov/toolssoftware/ccs/ccs.jsp#info). Income is a quartile classification of the estimated median household income of residents in the patient's ZIP code. "c" marks the control group in the regression analysis. *** P<0.01; ** P<0.05; * P<0.10.]
2018-04-03T04:13:06.120Z
2013-09-01T00:00:00.000
{ "year": 2013, "sha1": "4eb0bfe461dda9af6d59dabd6c017b4eeed9e112", "oa_license": "CCBYNC", "oa_url": "https://cloudfront.escholarship.org/dist/prd/content/qt9zj3t838/qt9zj3t838.pdf?t=ozfcml", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8cbf9098f37af65119dcc19f5fc7894832bbd544", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
11340142
pes2o/s2orc
v3-fos-license
Secretory Phosphatases Deficient Mutant of Mycobacterium tuberculosis Imparts Protection at the Primary Site of Infection in Guinea Pigs

Background
The failure of Mycobacterium bovis Bacille Calmette-Guérin to impart satisfactory protection against adult pulmonary tuberculosis has necessitated the development of more effective TB vaccines. The assumption that the vaccine strain should be antigenically as similar as possible to the disease-causing pathogen has led to the evaluation of M.tuberculosis mutants as candidate tuberculosis vaccines.

Methods/Principal Findings
In this study, we have generated a mutant of M.tuberculosis (Mtb∆mms) by disrupting 3 virulence genes encoding a mycobacterial secretory acid phosphatase (sapM) and two phosphotyrosine protein phosphatases (mptpA and mptpB) and have evaluated its protective efficacy in guinea pigs. We observed that Mtb∆mms was highly attenuated in THP-1 macrophages. Moreover, no bacilli were recovered from the lungs and spleens of guinea pigs at 10 weeks after Mtb∆mms inoculation, although, initially, the mutant exhibited some growth in the spleens. Subsequently, when Mtb∆mms was evaluated for its protective efficacy, we observed that, similar to BCG vaccination, Mtb∆mms exhibited a significantly reduced CFU in the lungs of guinea pigs when compared with the unvaccinated animals at 4 weeks after challenge. In addition, our observations at 12 weeks post challenge demonstrated that Mtb∆mms exhibited a more sustainable and superior protection in lungs as compared to BCG. However, the mutant failed to control the hematogenous spread, as the splenic bacillary load between Mtb∆mms-vaccinated and sham-immunized animals was not significantly different. The gross pathological and histopathological observations corroborated the bacterial findings. In spite of the disruption of the phosphatase genes in MtbΔmms, the lipid profiles of M.tuberculosis and MtbΔmms were identical, indicating that the phenotype of the mutant was ascribed to the loss of the phosphatase genes and not to any alteration in the lipid composition.

Conclusions/Significance
This study highlights the importance of M.tuberculosis mutants in imparting protection against pulmonary TB.

Introduction
Tuberculosis (TB) continues to intimidate the human race and remains a major cause of morbidity and mortality throughout the world [1,2]. Every week, more than 150,000 individuals develop TB and ~30,000 human lives are lost globally due to this dreaded disease. The lethal liaison between TB and HIV infections and the emergence of various forms of drug-resistant M.tuberculosis strains have made the situation even more precarious [3,4]. Although the current vaccine, Mycobacterium bovis Bacille Calmette-Guérin (BCG), does provide protection against childhood TB, especially TB meningitis, it is ineffective in providing consistent protection against the disease in adults and older people [5]. Under the best of circumstances, it has provided 80% protection, but the figure has generally been in the range of 40-60% on average. Therefore, the need to develop a TB vaccine superior to BCG cannot be over-emphasized. The purpose of an effective live vaccine would be best served if the vaccine strain is antigenically as similar as possible to the disease-causing pathogen, in order for it to generate host immune responses that mimic natural infection [6].
Comparative genomic studies have revealed that BCG, in comparison to M.tuberculosis, lacks 16 defined regions (RD1-16) comprising ~150 genes, some of which are known to encode potential antigenic determinants that could increase the immunogenicity of a vaccine [7,8]. This makes the use of attenuated M.tuberculosis strains rather than BCG, for the generation of appropriate immune responses, an attractive idea [5,9,10]. Several M.tuberculosis mutants have been evaluated in animal models and have resulted in varying degrees of success in imparting protection against TB when compared with BCG [11][12][13][14][15]. Immunization of mice with the ∆RD1∆panCD mutant of M.tuberculosis (an attenuated M.tuberculosis RD1 knockout and pantothenate auxotroph) resulted in 1-2 log10 CFU lower bacillary loads in the spleens, lungs and liver when compared with BCG. However, in bull calves, no histopathological differences were observed in the lungs and lymph nodes of ∆RD1∆panCD vaccinees when compared with the unvaccinated controls [14,15]. Similarly, mice vaccinated with the ∆secA2 mutant (a secA2 deletion mutant of M.tuberculosis) exhibited significantly lower pulmonary and splenic CFU when compared with the BCG-vaccinated group; however, the same vaccine performed as well as BCG in guinea pigs [11]. In contrast to these observations, Martin et al. demonstrated a similar level of protection exhibited by the SO2 strain (a phoP deletion mutant of M.tuberculosis) in mice, although guinea pigs vaccinated with SO2 exhibited a significantly increased survival time when compared with BCG [12]. The variable results shown by the candidate vaccines, and the fact that none of the current candidates has successfully made it through clinical trials, reinforce the importance of keeping the pipeline full with new candidates [16]. Among the secretory proteins of M.tuberculosis, three phosphatases, namely a mycobacterial secretory acid phosphatase (SapM) and two phosphotyrosine protein phosphatases (MptpA and MptpB), have been shown to contribute to its pathogenicity [17][18][19][20]. SapM dephosphorylates phosphatidylinositol 3-phosphate (PI3P), a membrane-trafficking regulatory lipid, resulting in the arrest of phagosome maturation [19]. In addition, in a study by Festjens et al., disruption of the sapM locus in BCG improved its protective efficacy as a vaccine against TB [21]. The increased efficacy of the vaccine was credited to the efficient activation and recruitment of dendritic cells to the draining lymph nodes in the absence of SapM, thus allowing successful antigen presentation and activation of adaptive immunity by dendritic cells [21]. A recent study showed that the fbpA/sapM double mutant of M.tuberculosis was attenuated for growth and more immunogenic in macrophages as compared to M.tuberculosis [22]. MptpA has been demonstrated to block phagosome-lysosome fusion by inhibiting V-ATPase trafficking to the mycobacterial phagosome [23][24][25]. It has been reported that an mptpA mutant of M.tuberculosis was impaired for survival/growth in THP-1 macrophages and that phagosomes harboring the mutant strain exhibited increased phagosome-lysosome fusion [23]. It has been previously reported that M.tuberculosis devoid of MptpB activity was impaired for survival in IFN-γ activated macrophages and in guinea pigs [26]. In another study, it was shown that MptpB inhibits the ERK 1/2 and p38 signaling pathways and caspase 3 activity, thus subverting the host immune response to infection [27].
The importance of MptpB in the intracellular survival of M.tuberculosis was also demonstrated in a study in which specific inhibitors against MptpB were shown to inhibit mycobacterial survival within murine macrophages [17,27]. In this study, by deleting the function of three virulence genes, namely mptpA (Rv2234), mptpB (Rv0153c) and sapM (Rv3310), we have developed the mutant Mtb∆mms and evaluated its protective efficacy in the guinea pig model of experimental tuberculosis.

Bacterial strains and growth conditions
The bacterial strains and plasmids used in this study are listed in Table 1. M. bovis BCG (Danish strain) was obtained from BCG Laboratories, Chennai, India. M.tuberculosis H37Rv (ATCC No. 25618), used for challenge, was procured from Dr. J. S. Tyagi, AIIMS, New Delhi, India. Mycobacterial strains were grown to mid-log phase in MB7H9 medium supplemented with 1X albumin-dextrose-catalase (ADC), 0.5% glycerol and 0.05% Tween 80. PBS stocks were prepared and stored at -80°C until further use. The CFU of the stocks was enumerated by plating appropriate dilutions in duplicate on MB7H11 agar supplemented with 1X oleic acid-albumin-dextrose-catalase (OADC) and 0.5% glycerol. E.coli strains XL-1 Blue (Stratagene) and HB101 (Life Technologies) were used for cloning purposes. Kanamycin and Chloramphenicol were used at 25 μg/ml and 30 μg/ml, respectively. Hygromycin was used at 50 μg/ml for M.tuberculosis and at 150 μg/ml for E.coli.

Generation of the Mtb∆mms Mutant of M.tuberculosis
To generate the Mtb∆mms mutant of M.tuberculosis, a portion of mptpA and of sapM was deleted in the genome of Mtb∆mptpB [26] and replaced with a Kanamycin resistance cassette and a Chloramphenicol resistance cassette, respectively. For the generation of the MtbΔmptpBΔmptpA double gene mutant, primers were designed to amplify (i) 156 bp of the 5′ proximal end of mptpA along with 1135 bp of the immediate upstream region of mptpA (amplicon I) and (ii) 167 bp of the 3′ distal end of mptpA along with 1240 bp of the immediate downstream region of mptpA (amplicon II). Amplicons I and II were PCR amplified and cloned into the vector pLitmus-38 (New England Biolabs) to generate the vector pLITΔA (with a deletion of 169 bp from the central region of the mptpA ORF). The Kanamycin resistance gene was excised from pSD5 as an NheI-BstEII fragment, end-repaired and cloned into NdeI-digested, end-repaired pLITΔA to generate pLIT38ΔAK. The vector pLIT38ΔAK was pretreated with alkali [28].

Confirmation of deletion of mptpA and sapM by Southern hybridization
To confirm the deletion of mptpA in MtbΔmptpBΔmptpA, genomic DNA was isolated from the parental strain (MtbΔmptpB) and the mutant strain (MtbΔmptpBΔmptpA), followed by digestion of 2 µg of DNA with PvuII. The deletion of sapM in MtbΔmms was confirmed by isolating genomic DNA from MtbΔmptpBΔmptpA and MtbΔmms, followed by digestion of 2 μg of DNA with PstI. DNA was electrophoresed through a 1% agarose gel, followed by depurination, denaturation and neutralization of the DNA within the agarose gel. The DNA was then transferred onto a positively charged nylon membrane by capillary transfer overnight and immobilized by UV radiation. A 200 bp region at the 5′ terminus of mptpA and of sapM was amplified for the generation of probes. Probe labeling and the subsequent pre-hybridization, hybridization and detection were performed as described in the DIG High Prime DNA Labeling and Detection Starter Kit II (Roche Applied Science, IN, USA).

Lipid profile analysis
Isolation of mycolic acids.
Mycolic acids were extracted from M.tuberculosis as well as MtbΔmms as described previously [29]. Briefly, mycobacterial strains were grown in 50 ml of MB7H9 supplemented with 1X ADC to an A600 of 1.0. The culture was harvested, heat-killed (95°C for 1 hr) and then saponified with 6 ml of 20% tetrabutylammonium hydroxide at 100°C overnight to hydrolyze the mycolic acids from the cell wall. The free mycolic acids so generated were methylated by adding 1:1 dichloromethane (methylene chloride) and 300 μl of methyl iodide to form mycolic acid methyl esters. Upon phase separation, the lower organic layer was collected, dried and resuspended in diethyl ether (3 ml). This lipid suspension was centrifuged at 2500 rpm for 2-3 min and the supernatant was collected and dried. The crystals thus formed were suspended in 900 μl of a mixture of toluene and acetonitrile (2:1). The solution was transferred to a microcentrifuge tube, followed by the addition of 600 μl of acetonitrile to the suspension. The suspension was then frozen at -20°C overnight. The solution was centrifuged at 12000 rpm at 4°C for 15 min. Finally, the pellet was suspended in 500 μl of diethyl ether, transferred to a small glass tube and evaporated with liquid nitrogen. Equivalent amounts of mycolic acids extracted from M.tuberculosis as well as MtbΔmms, suspended in diethyl ether, were spotted on a thin layer chromatography (TLC) plate (Merck, TLC aluminium sheets silica gel 60), chromatographed in hexane:ethyl acetate (95:5, v/v) seven times and visualized by staining with 20% sulphuric acid in ethanol followed by charring.

Extraction of polar and apolar lipids.
Mycobacterial lipids were extracted as described previously [30]. Briefly, 50 ml of mycobacterial cultures, grown in MB7H9 supplemented with 1X ADC, were harvested at an A600nm of 1.0 and heat-killed (95°C for 1 hr). Apolar lipids were extracted by adding 2 ml of a methanolic solution of 0.3% sodium chloride and 1 ml of petroleum ether (60-80°C) to the cell pellet. The cell suspension was mixed end-over-end for 30 min, followed by centrifugation at 2500 rpm for 10 min. The upper layer, consisting of apolar lipids, was collected in a separate vial, and 1 ml of petroleum ether was added to the lower layer, vortexed and mixed end-over-end for 15 min. The cell suspension was again centrifuged to recollect the upper layer. The upper layers comprising the apolar lipids were pooled and dried at 60°C. Next, the polar lipids were extracted by adding 2.3 ml of chloroform:methanol:0.3% sodium chloride (90:100:30, v/v/v) to the bottom layer. The cell suspension was mixed end-over-end for 60 min, followed by centrifugation at 2500 rpm for 10 min. The polar lipids, present in the supernatant fraction, were collected and the pellet was further treated twice with 750 μl of chloroform:methanol:0.3% sodium chloride (50:100:40, v/v/v) to obtain all polar lipids. The supernatants from these three extractions were pooled and further extracted with 1.3 ml of chloroform and 1.3 ml of 0.3% sodium chloride. The lower layer comprising the polar lipids was collected into a fresh glass tube and dried at 60°C. Equivalent amounts of polar and apolar lipids from both the M.tuberculosis and MtbΔmms strains, suspended in chloroform:methanol (2:1, v/v), were then spotted on TLC plates and analysed for different lipid fractions using the different solvent systems described in Table S1.
TLC plates were developed by dipping in 10% phosphomolybdate or spraying with 2% orcinol in 10% sulphuric acid (for solvent C), followed by charring. For the detection of trehalose monomycolate (TMM), trehalose dimycolate (TDM) and sulfolipids (SL), 5 μCi of 14C-acetate was added to 10 ml of log-phase culture of the M.tuberculosis and MtbΔmms strains, separately. Cultures were harvested after 18 hrs of the radioactive pulse and apolar lipids were extracted with a methanolic solution of 0.3% sodium chloride and petroleum ether as described above. The organic phase was suspended in chloroform:methanol (2:1, v/v). Approximately 25,000 counts from the samples belonging to each strain were spotted on the TLC plate, followed by chromatography in the appropriate solvents (Table S1). The lipids were visualized with a Typhoon FLA 700 Phosphorimager.

Comparison of the growth of MtbΔmms and the parental strain in human macrophages
Human monocytic THP-1 cells were cultured in complete RPMI-GlutaMAX™ medium [containing 10% heat-inactivated FBS and 1% antibiotic-antimycotic mix] (GIBCO, Grand Island, NY, USA) and were differentiated to macrophages by the addition of 30 nM phorbol 12-myristate 13-acetate (PMA, Sigma) for 16 hrs at 37°C, 5% CO2. Cells were washed with complete RPMI medium and rested for 2 hrs in fresh medium without antibiotic-antimycotic mix before infection. For infection, 5 × 10^5 macrophages were infected with 5 × 10^5 mycobacteria, to achieve an MOI of 1:1, in 24-well plates for 4 hrs in triplicate [31]. Following infection, the extracellular bacteria were removed by overlaying the cells with RPMI medium containing 200 μg/ml amikacin for 2 hrs. At designated time points, day 0 (4 hrs), 2, 4 and 6, macrophages were lysed by the addition of 0.025% SDS and the intracellular bacteria were enumerated by plating appropriate dilutions on MB7H11 agar. Colonies were counted after 4 weeks of incubation at 37°C and the data were expressed as CFU/ml.

Experimental animals
Pathogen-free outbred female guinea pigs (200-300 g) of the Duncan-Hartley strain were procured from the Disease Free Small Animal House Facility, Lala Lajpat Rai University, Hissar, India. The animals were housed in individually ventilated cages and were provided with food and water ad libitum in a BSL-III facility at the University of Delhi South Campus (UDSC), New Delhi, India.

Ethics statement
The guinea pig experiments included in this manuscript were reviewed and approved by the Institutional Animal Ethics Committee of University of Delhi South Campus, New Delhi, India (Ref. No. IAEC/AKT/Biochem/UDSC/24.08.2010). All animals were routinely cared for according to the guidelines of the CPCSEA (Committee for the Purpose of Control and Supervision of Experiments on Animals), India. Guinea pigs were vaccinated intradermally with mycobacterial strains by injecting not more than 100 μl and were euthanized, whenever required, by CO2 asphyxiation, and all efforts were made to ameliorate animal suffering.

Influence of deletion of phosphatase genes on the pathogenicity of M.tuberculosis
To evaluate whether the Mtb∆mms mutant was sufficiently attenuated for its use as a vaccine, animals (n=6) were inoculated intradermally (i.d.) with 5 × 10^5 bacilli of either M.tuberculosis, Mtb∆mms or BCG in 100 μl of saline. Animals were euthanized at 4 weeks and 10 weeks post inoculation by CO2 asphyxiation.
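For reference, the colony counts described here and in the following paragraphs are converted to CFU estimates by standard dilution-plating arithmetic; a minimal sketch follows, in which the counts and the 0.1 ml plated volume are our own assumptions rather than values stated in the paper:

```python
def cfu_per_ml(colonies, dilution_factor, plated_volume_ml=0.1):
    # CFU/ml = colonies / plated volume, scaled back by the dilution.
    return colonies / plated_volume_ml * dilution_factor

# Duplicate plates are averaged, as in the Methods; numbers are invented.
counts = [47, 53]
estimate = sum(cfu_per_ml(c, 1e3) for c in counts) / len(counts)
print(f"{estimate:.2e} CFU/ml")  # 5.00e+05 CFU/ml for this example
```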
Lungs, liver and spleen were scored for gross pathological damage, such as tissue involvement, areas of inflammation, extent of necrosis and the number/size of tubercles due to infection. The scores given to these organs were graded from 1-4 and were based on the modified Mitchison scoring system [32]. For histopathological evaluation, the right lung and a portion of the left dorsal lobe of the liver were removed and fixed in 10% buffered formalin. 5 µm thick sections of the formalin-fixed, paraffin-embedded lung tissues were stained with haematoxylin and eosin (H&E). The tissues were coded and the coded samples were evaluated by a certified pathologist having no knowledge of the experimental groups. The left caudal lung lobe and the caudal portion of the spleen were aseptically removed for measurement of the bacillary load. The specific portions of lungs and spleen were weighed and homogenized separately in 5 ml saline using a Polytron homogenizer. Appropriate dilutions of the homogenates were plated onto MB7H11 agar plates in duplicate and incubated at 37°C for 3-4 weeks. The number of colonies was counted and expressed as mean log10 CFU/organ.

Evaluation of protective efficacy of MtbΔmms against M.tuberculosis infection
Guinea pigs were divided into 3 groups (n=8) and the animals were immunized intradermally with 5 × 10^5 CFU of either (i) BCG or (ii) Mtb∆mms in 100 μl of saline. In the control group, guinea pigs were injected with 100 μl of saline. Twelve weeks post immunization, guinea pigs were infected with a low dose of virulent M.tuberculosis via the respiratory route in an aerosol chamber (Inhalation Exposure System, Glascol Inc.), pre-calibrated to deliver 10-30 bacilli to the lungs of each animal. Guinea pigs were euthanized at 4 weeks and 12 weeks after challenge and evaluated for bacterial load and for gross pathological and histopathological changes in various organs as described in the previous section. A significant reduction in these parameters in vaccinated animals was considered a protective effect of the vaccine.

Statistical analyses
For comparisons between the groups, the non-parametric Kruskal-Wallis test followed by the Mann-Whitney U-test, one-way analysis of variance (ANOVA) with Tukey post-test, two-way ANOVA with Bonferroni multiple comparison test and Student's t-test were employed, wherever appropriate. Differences were considered significant when p<0.05. For statistical analyses and the generation of graphs, Prism 5 software (Version 5.01; GraphPad Software Inc., CA, USA) was used.

Functional disruption of mptpA and sapM in Mtb∆mptpB and characterization of the multigene mutant
To generate the triple gene mutant of M.tuberculosis, we first disrupted mptpA in Mtb∆mptpB (published from our laboratory previously [26]) to generate Mtb∆mptpB∆mptpA (Figure 1A). Deletion of mptpA was confirmed by three approaches. (1) PCR using mptpA gene-specific primers (Table 1, Figure 1B): in the case of Mtb∆mptpB, a 0.5 kb amplicon representing the complete mptpA gene was amplified as expected, while in the case of Mtb∆mptpB∆mptpA an amplicon of 2.0 kb was observed, indicating the disruption of mptpA by the Kanamycin resistance cassette. (2) Southern hybridization: in the case of the MtbΔmptpB strain, the probe hybridized to a 0.5 kb (lane 1) PvuII fragment, whereas disruption of the mptpA gene by the Kanamycin resistance cassette resulted in a signal at 2.4 kb (lane 2) in the MtbΔmptpBΔmptpA strain (Figure 1C). (3) Nucleotide sequencing.
Both the 0.5 kb and 2.0 kb amplification products were DNA sequenced, which further confirmed the disruption of mptpA in MtbΔmptpBΔmptpA. Further, the sapM gene was deleted in MtbΔmptpBΔmptpA by employing linear AES to generate MtbΔmms (Figure 1D). The triple gene mutant was confirmed as follows. (1) PCR employing sapM gene-specific primers (Table 1, Figure 1E): the primers yielded an amplicon of 0.9 kb in Mtb∆mptpB∆mptpA, whereas the deletion of the sapM gene resulted in a PCR amplicon of 1.5 kb in Mtb∆mms (Figure 1E). (2) Southern hybridization: the probe in MtbΔmptpBΔmptpA hybridized to a 3.0 kb (lane 1) PstI fragment, whereas disruption of the sapM gene by the Chloramphenicol resistance cassette resulted in a signal at 1.5 kb (lane 2) in the MtbΔmms strain (Figure 1F). (3) Nucleotide sequencing: both the 0.9 kb and 1.5 kb amplification products were DNA sequenced, which further confirmed the disruption of sapM in MtbΔmms. Deletion of mptpA and sapM was further confirmed by immunoblot analysis using polyclonal antibodies raised against MptpA and SapM. As shown in Figure 1G and Figure 1H, we did not observe any expression of MptpA or SapM in MtbΔmms.

Disruption of Phosphatases Does Not Alter the Lipid Profile of MtbΔmms
To ascertain whether the disruption of the phosphatase genes had any influence on the lipid composition of the mutant, we performed a total lipid analysis of the parental as well as the mutant strain by TLC. M.tuberculosis and MtbΔmms were analysed for the well-known characteristic lipids of the tubercle bacillus. The apolar and polar lipid fractions were extracted and assayed for phthiocerol dimycocerosate (PDIM), triacylglycerol (TAG), mycolic acids, free fatty acids, diacylglycerol (DAG), diacyltrehalose (DAT), trehalose monomycolate (TMM), trehalose dimycolate (TDM), glucose monomycolate (GMM), sulfolipids (SL), phosphatidylinositol (PI), phosphatidylinositol mannosides (PIMs) and phospholipids (P) by TLC. Equivalent amounts of apolar as well as polar lipids from both M.tuberculosis and MtbΔmms were spotted on TLC plates and analysed for the different lipid fractions (Table S1). TLC analysis of the lipids of M.tuberculosis and MtbΔmms exhibited a similar and usual lipid profile with respect to the mycobacterial lipid components. M.tuberculosis produces three classes of mycolic acids: alpha-, keto- and methoxy-mycolic acids [33]. Analysis of the total mycolic acids extracted from both M.tuberculosis and MtbΔmms by single-dimension TLC showed that there was no significant difference in total or alternate types of mycolic acids (Figure 2A). In addition, we observed similar accumulation of structural variants of DIM and TAGs, as described by Giovannini et al. [34] (Figure 2B). Two-dimensional TLC indicated the equivalent presence of both apolar (DAG, TMM, TDM, SL, GMM, DAT) and polar lipids (PIMs, PI and P) in both M.tuberculosis and MtbΔmms, as described previously by Bhatt et al. [35] (Figure 2C, Figure 2D and Figure 2E). Hence, our observations demonstrated that the lipid profiles of M.tuberculosis and MtbΔmms were similar, with no notable differences.

Mtb∆mms exhibits a severe growth defect in human THP-1 macrophages
Next, we compared the growth characteristics of MtbΔmms and the parental strain in MB7H9 medium and in THP-1 cells. As shown in Figure 3A, we did not observe any difference in the growth characteristics of MtbΔmms and the parental strain in MB7H9; however, a significant difference was observed in the growth kinetics between these two strains in THP-1 macrophages.
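The group-wise CFU comparisons reported below rely on the tests named under Statistical analyses; the following minimal sketch reproduces that workflow (Kruskal-Wallis followed by pairwise Mann-Whitney U) on invented log10 CFU values, purely for illustration:

```python
from scipy import stats

# Invented log10 CFU/organ values for three hypothetical groups.
mtb = [6.4, 6.2, 6.3, 6.1, 6.5]
bcg = [4.5, 4.3, 4.6, 4.2, 4.4]
mms = [3.7, 3.5, 3.6, 3.4, 3.8]

# Global non-parametric test across the groups.
h, p = stats.kruskal(mtb, bcg, mms)
print(f"Kruskal-Wallis: H={h:.2f}, p={p:.4f}")

# Pairwise follow-up comparisons against the first group.
for name, grp in [("BCG", bcg), ("MtbDmms", mms)]:
    u, pu = stats.mannwhitneyu(mtb, grp, alternative="two-sided")
    print(f"M.tuberculosis vs {name}: U={u:.1f}, p={pu:.4f}")
```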
We observed that Mtb∆mms displayed a significantly reduced ability (~2.89-fold difference) to infect macrophages in comparison to the parental strain. Moreover, while M.tuberculosis continued to grow normally for 6 days, Mtb∆mms exhibited no sign of growth during this time period, demonstrating that the deletion of the 3 phosphatases rendered the mutant completely incapable of growing in the macrophages (***p<0.001) (Figure 3B). These results demonstrate the importance of mptpA, mptpB and sapM in the growth and survival of M.tuberculosis in human macrophages.

Deletion of phosphatase genes leads to the attenuation of M.tuberculosis
To evaluate whether the deletion of the three phosphatases had rendered the Mtb∆mms mutant sufficiently attenuated for its use as a vaccine, animals were inoculated with either M.tuberculosis, BCG or the Mtb∆mms strain (Figure 4A). At 4 weeks post inoculation, we observed the maximum bacillary load of 3.96 log10 CFU in the lungs of M.tuberculosis-infected animals, as compared to a negligible bacillary load of 0.28 log10 CFU in the lungs of BCG-treated animals. No bacilli were detectable in the lungs of Mtb∆mms-inoculated animals (Figure 4B). However, when the splenic bacillary counts were analyzed, we observed 5.68 log10 CFU, 0.36 log10 CFU and 3.99 log10 CFU in the animals inoculated with M.tuberculosis, BCG and Mtb∆mms, respectively (Figure 4C). Hence, during this initial phase, Mtb∆mms showed some growth in the spleens of animals, although it was ~70-fold less as compared to the parental strain. The pulmonary bacillary loads at 10 weeks post inoculation are shown in Figure 4D. In the spleens, we observed a bacillary count of 5.45 log10 CFU in M.tuberculosis-infected animals; however, no bacilli were recovered from the spleens of animals inoculated with either Mtb∆mms or BCG (Figure 4E). Although Mtb∆mms exhibited some growth in the spleens of the inoculated animals, the bacilli were recovered only during the initial phase (4 weeks post inoculation) and the bacillary load was only 1.4% of that observed in the M.tuberculosis-infected animals (70-fold fewer bacilli in Mtb∆mms-inoculated animals). Further, on extending the time post inoculation, no Mtb∆mms bacilli were recovered from the spleens or the lungs. Thus, based on these observations, it appeared that, as a result of the deletion of the phosphatase genes, Mtb∆mms was sufficiently attenuated for growth in the host tissues and could be safely used as a vaccine candidate.

Deletion of phosphatase genes renders M.tuberculosis incapable of causing pathology in guinea pigs at 10 weeks post inoculation
The gross pathological changes observed in the organs of the animals at 4 weeks and 10 weeks post inoculation with Mtb∆mms were commensurate with the bacillary load observed. At 4 weeks post inoculation, the extent of damage observed in the M.tuberculosis-inoculated animals was the maximum amongst all the groups, with numerous small-sized tubercles along with scattered areas of necrosis in all the organs (score: 2 in lungs and liver and 3 in spleen), indicating progressive pulmonary and extra-pulmonary disease (Figure 5A). However, in the case of BCG inoculation, no pathology was observed in the lungs, liver or spleen, as expected (score: 1 in all the organs). In the case of inoculation with Mtb∆mms, most of the animals displayed negligible lung and hepatic pathology (score: 1), with predominantly scanty and extremely small necrotic lesions.
However, in the case of the spleen, the Mtb∆mms-inoculated animals were allotted an intermediate score (score: 2) in comparison to the other two groups. This indicated that Mtb∆mms inoculation resulted in some pathological damage to the spleens, although the damage was considerably less in comparison to the M.tuberculosis-infected animals (Figure 5A). When the animals were evaluated at 10 weeks post inoculation, in the case of M.tuberculosis-infected animals, as expected, the extent of damage was more than that observed at 4 weeks (score: 3 in lungs, 4 in liver and 3-4 in spleen), with extensive involvement and numerous large-sized tubercles effacing the entire organs (Figure 6A). However, the animals inoculated with either BCG or Mtb∆mms displayed a normal lung, liver and spleen phenotype (score: 1), with no pathological damage (Figure 6A). To evaluate the histopathological changes in the lungs and liver of guinea pigs inoculated with M.tuberculosis, BCG or Mtb∆mms, the tissue sections were stained with haematoxylin and eosin. At 4 weeks post inoculation, the lungs of M.tuberculosis-infected animals exhibited granulomatous infiltration, with caseating necrotic granulomas effacing the pulmonary parenchyma (Figure 5B). Inoculation with either BCG or Mtb∆mms resulted in a negligible granulomatous infiltration.

Mtb∆mms vaccination limits M.tuberculosis multiplication in the lungs of guinea pigs
As the Mtb∆mms mutant appeared to be safe for use as a vaccine candidate, we next evaluated its protective efficacy against M.tuberculosis challenge. For this, guinea pigs were vaccinated with either BCG or Mtb∆mms and were infected with 10-30 M.tuberculosis bacilli by the aerosol route at 12 weeks post vaccination. As a control, one group of guinea pigs was sham-immunized. At 4 and 12 weeks after challenge, animals were euthanized and the bacillary load in the lungs and spleens was determined (Figure 7A). At 4 weeks after challenge, the sham-immunized animals exhibited 6.30 log10 CFU in the lungs. The BCG-vaccinated animals exhibited a significantly reduced CFU in the lungs (4.43 log10 CFU), indicating a 1.87 log10 CFU reduction (*p<0.05) as compared to the sham-immunized animals (Figure 7B). Mtb∆mms-vaccinated animals exhibited a bacillary load of only 3.60 log10 CFU in the lungs, indicating that the mutant also significantly reduced the pulmonary load, by 2.70 log10 CFU (***p<0.001), in comparison to the sham-immunized animals (Figure 7B). Further, the sham-immunized animals exhibited a splenic bacillary load of 5.37 log10 CFU (Figure 7C), while the BCG-vaccinated animals exhibited a splenic bacillary load of only 1.62 log10 CFU. This significant reduction in splenic bacillary load, by 3.75 log10 CFU (***p<0.001), demonstrated a tight control of the hematogenous spread of bacilli by BCG. The splenic bacillary load in the Mtb∆mms-vaccinated animals (4.73 log10 CFU) was 0.64 log10 CFU less when compared with the sham-immunized animals, but the difference was not significant (Figure 7C). On extending the time period between challenge and euthanasia to 12 weeks, the sham-immunized animals exhibited a bacillary load of 6.15 log10 CFU in the lungs (Figure 7D). Immunization with BCG resulted in 4.57 log10 CFU in the lungs; however, the difference in pulmonary bacillary load between the BCG and sham-immunized animals was statistically not significant, and with the extension of time the ability of BCG to impede bacillary multiplication declined considerably.
In contrast, the Mtb∆mms-vaccinated animals exhibited only 3.16 log10 CFU in the lungs, indicating a significantly reduced bacillary load, by 2.99 log10 CFU (*p<0.05), in comparison to the sham-immunized animals. These observations demonstrated that Mtb∆mms exhibited a more sustainable and superior protection as compared to BCG. The splenic bacillary load in the sham-immunized animals was 4.92 log10 CFU (Figure 7E). Although the splenic bacillary loads in the cases of BCG vaccination and Mtb∆mms vaccination were 3.30 log10 CFU and 4.21 log10 CFU, respectively, these were not significantly different from the splenic bacillary load observed in the sham-immunized animals. Thus, our observations indicated that Mtb∆mms was not able to exert significant control over the hematogenous spread at either 4 weeks or 12 weeks after challenge.

Mtb∆mms vaccination imparts protection from pathological damage in lungs
At 4 weeks after challenge, the sham-immunized animals exhibited severe pathology in the lungs, characterized by the presence of numerous large and small-sized tubercles (score: 4 in lungs); however, the hepatic and splenic tissues exhibited moderate involvement (score: 2 in liver and 2-3 in spleen) (Figure 8A). In contrast, the BCG-vaccinated animals displayed significantly reduced gross lesions in the organs when compared with the unvaccinated animals (score: 2 in lungs and 1 in liver and spleen). In the case of immunization with Mtb∆mms, the animals exhibited moderately inflamed lungs and spleen (score: 2 in both organs) with minimal hepatic tissue destruction (score: 1) (Figure 8A). On extending the period between challenge and euthanasia to 12 weeks, we observed an overall increase in the gross pathological damage to the organs of the sham-immunized animals, as characterized by extensive involvement of tissue with numerous large tubercles and scattered areas of necrosis in both lungs and liver (score: 3-4). In addition, a marked discoloration of the spleen, with numerous large and small-sized tubercles and occasional attrition of the capsular structure, was also observed in most of the sham-immunized animals (score: 4) (Figure 9A). BCG-immunized guinea pigs exhibited moderate involvement of the lung and splenic tissues, with small-sized tubercles effacing the entire tissues (score: 2-3 in lungs and 1-2 in spleen); however, the liver of these animals exhibited normal architecture (score: 1). In the case of immunization with Mtb∆mms, the animals exhibited a lung phenotype similar to the BCG-immunized animals (score: 2-3). However, in the case of the spleen and liver, vaccination with Mtb∆mms resulted in an enhanced pathology when compared with BCG vaccination (score: 2-3 in spleen and 3-4 in liver) (Figure 9A). Histopathological analyses of lung and liver sections further substantiated the gross pathological observations. At 4 weeks after challenge, the sham-immunized animals exhibited several discrete necrotic tubercles occupying 30-40% of the lung sections (Figure 8B). Vaccination with BCG or Mtb∆mms prevented pulmonary damage, as was evident from the presence of only a moderate granulomatous infiltration and well-preserved alveolar spaces when compared with the sham-immunized animals. The sham-immunized animals displayed inflammation of the hepatic tissues with scattered areas of cellular infiltration. BCG-vaccinated animals exhibited minimal involvement of the hepatic tissues; however, Mtb∆mms-immunized guinea pigs exhibited moderate involvement with granulomatous infiltration (Figure 8B).
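For readers converting the log-scale differences quoted above into linear fold changes, the arithmetic (ours, not the paper's) is simply:

$$ \text{fold reduction} = 10^{\Delta \log_{10}\mathrm{CFU}}, \qquad \text{e.g. } \Delta = 6.30 - 4.43 = 1.87 \;\Rightarrow\; 10^{1.87} \approx 74\text{-fold}. $$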
Further, at 12 weeks after challenge, as expected, the lungs of the sham-immunized animals exhibited extensive granulomatous infiltration, with multi-focal coalescing granulomas along with prominent central coagulative necrosis (Figure 9B). BCG-immunized animals exhibited scattered areas of inflammation in the lungs, with discrete granulomas with necrotic centres. Mtb∆mms-vaccinated animals exhibited moderate inflammation in the lungs, similar to the BCG-vaccinated animals. On comparing the pathological changes in the liver (Figure 9B), the sham-immunized animals exhibited effacement of a large proportion of the hepatic parenchyma due to multiple coalescing foci of necrotic granulomas. Immunization with BCG resulted in a significant reduction in hepatic damage, with only a negligible granulomatous infiltration. However, in the case of Mtb∆mms immunization, the hepatic lobules displayed an extensive granulomatous infiltration (Figure 9B).

Discussion
The development and widespread administration of the BCG vaccine since the early 1920s was originally hailed as a major breakthrough, with the promise of eradicating the scourge of TB from the world. However, the early promise was not realized, and with the growing incidence of TB cases and the inconsistent protective efficacy of BCG, it became evident that the BCG vaccine, in its existing form, is of limited use in controlling the disease, particularly in the elderly [36]. The availability of the complete M.tuberculosis genome sequence and an increased understanding of the genes involved in M.tuberculosis virulence and immune responses have led to a renewed optimism that it should be possible to develop more efficient TB vaccines than the existing BCG [37,38]. In this study, we have developed a multigene mutant of M.tuberculosis having deletions in three genes, namely mptpA, mptpB and sapM, that are involved in host-pathogen interaction and signal transduction. M.tuberculosis Erdman and M.tuberculosis H37Rv have commonly been used as the basis for generating attenuated strains of M.tuberculosis [39][40][41][42][43][44]. However, to ensure that their virulence was not diminished on account of repeated in vitro subculturing, the bacilli recovered from the organs of infected animals were subcultured only once before being used for generating mutants or for challenging the animals. We have evaluated the vaccine efficacy of the resulting mutant against M.tuberculosis challenge in guinea pigs. To ensure that the mutated genes were essential only for the growth of the pathogen in the host and not in broth culture, we selected genes implicated in the host-pathogen interaction. As expected, MtbΔmms grew in broth culture similarly to the parental strain; however, it displayed a significantly reduced ability to infect and grow inside human THP-1 macrophages, emphasizing that the phosphatases are vital for the growth of the pathogen in macrophages. In addition, our studies in guinea pigs also provide evidence that MtbΔmms is highly attenuated in its growth and in its ability to cause pathology in the host. On inoculation of guinea pigs with Mtb∆mms, no bacilli were recovered from the lungs of the animals at any time point of the study. From the spleens of these animals, bacilli were recovered during the early phase after inoculation (4 weeks), but no bacilli were recovered after this initial period.
This demonstrated that MtbΔmms could survive only for a short while; therefore, at 10 weeks post inoculation, the organs of MtbΔmms- as well as BCG-inoculated animals appeared similar, with no apparent damage, indicating that the mutant was safe to be used as a vaccine candidate. For the evaluation of protective efficacy, we employed the guinea pig model of experimental tuberculosis. The guinea pig model of low-dose aerogenic infection with virulent M.tuberculosis has been preferentially used to elucidate the events in the pathogenesis of pulmonary tuberculosis [45]. Guinea pigs are more susceptible to tuberculosis infection and have the advantage over mice that the pathology of the disease in this model is closer to human tuberculosis; thus, it serves as an effective model to evaluate vaccine efficacy. When guinea pigs are infected with fewer than 10 CFU of virulent M.tuberculosis, it has been observed that the pathogen disseminates from the lungs to the pulmonary lymph nodes via hematogenous spread and then appears in the spleens within ~3 weeks post infection [46,47]. Bacilli reseed the lung by ~4 weeks to form secondary granulomas. The protective efficacy of Mtb∆mms was evaluated based on its ability to reduce the bacillary load in the lungs and spleens of guinea pigs after M.tuberculosis infection, as well as to control the pathological damage. Our study shows that Mtb∆mms vaccination was able to restrict bacterial multiplication at the primary site of infection, leading to a reduction in the pulmonary bacillary load, and this bacillary load reduction by Mtb∆mms as well as by BCG was comparable during the early phase (4 weeks) after infection. Unlike BCG vaccination, however, Mtb∆mms was not able to control the hematogenous spread. A number of studies have reported examples of vaccines that fail to provide consistent protection in all organs uniformly [12,48,49]. For example, it has been reported that a recombinant BCG expressing ESAT-6 provided significant protection in both mice and guinea pigs against dissemination at extra-pulmonary sites but failed to protect against the pulmonary form of the disease [49]. On the other hand, vaccination of guinea pigs with DNA encoding the mycobacterial antigen MPB83 influenced the pulmonary pathology but not the hematogenous spread following aerogenic infection with Mycobacterium bovis [48]. Also, in the case of vaccination of guinea pigs with the ∆phoP mutant, a significant reduction in the bacillary load in the lungs, but not in the spleens, was observed as compared to the unvaccinated animals [12]. On extending the period between the M.tuberculosis challenge and euthanasia to 12 weeks, although BCG appeared to lose control of bacillary multiplication in the pulmonary tissue, Mtb∆mms was still very effective in controlling the lung infection. At this time point, however, neither BCG nor Mtb∆mms exhibited any significant control over the hematogenous spread. The pathological damage in the animals from the various groups corroborated the CFU data. From this, we could infer that Mtb∆mms imparted as much or better control of the disease than BCG at the pulmonary site. However, immunization with the phosphatase mutant did not show any superior control over bacillary multiplication in the spleens when compared with the sham-immunized animals.
As phosphatases play an important role in lipid metabolism, we evaluated the lipid profiles of M.tuberculosis and MtbΔmms to ascertain whether the disruption of the phosphatase genes might result in an altered lipid profile in MtbΔmms. However, our observations demonstrated that the lipid profiles of both strains were identical in spite of the disruption of the phosphatase genes in MtbΔmms, indicating that the phenotype of the mutant was ascribed to the loss of the phosphatase genes and was not related to any alteration in the lipid composition. To summarize, we demonstrate that mutation of the genes encoding the signal-transduction-associated phosphatases of M.tuberculosis provides optimism for the generation of novel potential vaccine candidates against tuberculosis. Mtb∆mms was not only significantly attenuated for growth in macrophages and guinea pigs, it also imparted enhanced protection against pulmonary TB. However, further modifications would be required for Mtb∆mms to elicit more appropriate immune responses imparting superior protection, including control of the hematogenous spread. Moreover, due to increasing concern about the emergence of antibiotic resistance in human pathogens, the use of antibiotic-resistance genes in recombinant vaccines meant for use in humans is not permissible [50]. Hence, the antibiotic resistance genes would have to be removed from MtbΔmms before any possibility of its use in human clinical trials. Our future efforts will focus on addressing these issues.

Table S1. Solvent systems employed for lipid analyses.
2018-04-03T00:21:05.620Z
2013-10-18T00:00:00.000
{ "year": 2013, "sha1": "295fce58379f406e3b31be7b8740fc77ccc4a935", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0077930&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "295fce58379f406e3b31be7b8740fc77ccc4a935", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
256043310
pes2o/s2orc
v3-fos-license
TT̄-deformed nonlinear Schrödinger

The TT̄-deformed classical Lagrangian of a 2D Lorentz invariant theory can be derived from the original one, perturbed only at first order by the bare TT̄ composite field, through a field-dependent change of coordinates. Considering, as an example, the nonlinear Schrödinger (NLS) model with generic potential, we apply this idea to non-relativistic models. The form of the deformed Lagrangian contains a square root and is similar to, but different from, that for relativistic bosons. We study the deformed bright, grey and Peregrine's soliton solutions. Contrary to naive expectations, the TT̄-perturbation of the NLS model with quartic potential does not trivially emerge from a standard non-relativistic limit of the deformed sinh-Gordon field theory. The c → ∞ outcome corresponds to a different type of irrelevant deformation. We derive the corresponding Poisson bracket structure and the equations of motion, and discuss various interesting aspects of this alternative type of perturbation, including links with the recent literature.

Introduction
The presence of an irrelevant operator in a quantum field theory is usually not good news, as far as understanding the high-energy physics of the model is concerned. Trying to reverse the renormalisation group flow requires the introduction of an infinite number of counterterms in the Lagrangian. Therefore, perturbing a field theory with irrelevant operators can drastically affect the ultraviolet properties of the model and introduce new fundamental degrees of freedom at high energy. In two space-time dimensions, the TT̄ composite operator [1] is an exception to this rule, since this irrelevant field is well defined also at the quantum level. The TT̄ perturbation is solvable [2,3], in the sense that physical observables of interest, such as the S-matrix and the finite-volume spectrum, can be found in terms of the corresponding undeformed quantities. For the TT̄ operator, we can reverse the renormalisation group trajectory and gain exact information about ultraviolet physics. The outcome is stunning: while the low-energy physics resembles that of a conventional local quantum field theory, at high energy the density of states on a cylinder shows Hagedorn growth similar to that of a string theory [4][5][6].
A widely studied Lorentz-breaking perturbation is the JT̄ model [7]; other integrable deformations that explicitly break this symmetry were introduced and partially studied in [8]. A framework where TT̄-type perturbations may potentially lead to concrete applications in fluid dynamics, nonlinear optics and condensed matter physics is the domain of non-relativistic nonlinear wave equations. One of the most-studied models with direct relevance to cold atom experiments is the nonlinear Schrödinger (NLS) equation. The primary purpose of this paper is to derive the explicit form of the TT̄-perturbed Lagrangian for a family of NLS equations with arbitrary interacting potential. The exact expression of the Lagrangian is surprisingly similar to that of TT̄-deformed relativistic bosons [3,9,10]. A second type of deformation of the NLS model is obtained by performing the non-relativistic limit of the deformed sinh-Gordon theory. Refs. [11,12] appeared as we were working on this project; these authors discuss various aspects of deformed 1D non-relativistic quantum particle models, with results complementary to those presented here.

Non-relativistic scalar theory: generalities
In this paper we shall consider a class of non-relativistic classical field theories of a complex scalar field ψ(x), with x = (t, x), described by a hermitian Lagrangian density (2.1), where ψ* denotes the complex conjugate field, ψ_{,t} := ∂_t ψ and ψ_{,x} := ∂_x ψ are the derivatives w.r.t. time and space, and the potential V is a generic function of |ψ|² = ψψ*. Setting V = V_NLS := g|ψ|⁴, for some real constant g, (2.1) becomes the nonlinear Schrödinger (NLS) Lagrangian density, i.e. L = L_NLS. Let us recall some well-known facts about these non-relativistic Lagrangian models. The dynamics of the system is described by the Euler-Lagrange equations, where V′ := ∂V/∂|ψ|². The invariance under space and time translations implies the existence of a Noether current, the stress-energy tensor, whose components are computed from L as in (2.3) and, explicitly, (2.4). Let us define the currents H^μ = T^μ_t and P^μ = T^μ_x, which fulfil the continuity equations (2.5), where the quantities H := H^t and P := P^t represent the total energy and momentum densities. The invariance under a global phase rotation implies the existence of a Noether current J whose components are likewise computed from L. The current J fulfils the continuity equation (2.8), and the quantity M := J^t defines the total mass density. In the Hamiltonian formalism, we introduce the Hamiltonian density, where π and π* are the conjugated momenta defined by the Legendre transformation. Clearly, it is not possible to express the time derivatives in terms of the conjugated momenta as the usual Legendre procedure would require. In fact, the last equations reveal the presence of redundant variables, which is a typical feature of constrained Hamiltonian systems. As is well known, the Dirac-Bergmann algorithm allows one to elegantly overcome this issue (see, for example, [13]). However, due to the mixing between space and time coordinates, the situation appears to be much more complicated in the TT̄-deformed model. In the following, we shall mainly ignore this problem and postpone a rigorous study of the deformed Hamiltonian and Poisson structure to the future.

The deformed Lagrangian
The aim of this section is to derive the Lagrangian density L(x, τ) of the TT̄-deformed theory.
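For orientation, in standard conventions the TT̄ flow equation referred to below takes the determinant form sketched here; normalisations and signs may differ from the paper's equation (3.1):

$$ \frac{\partial \mathcal{L}(x,\tau)}{\partial \tau} = \det T(x,\tau) = T^{t}{}_{t}\,T^{x}{}_{x} - T^{t}{}_{x}\,T^{x}{}_{t}\,. $$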
By definition, the latter fulfils the flow equation (3.1), where T(x, τ) is the deformed stress-energy tensor that descends from formula (2.3) on setting L = L(x, τ). In principle, equation (3.1) can be solved for L(x, τ) by means of a perturbative expansion around τ = 0 [3]. However, the form of the original stress-energy tensor (2.4) discourages the application of this approach. It is natural to try to obtain the deformed Lagrangian using the same change of variables found in [14] in the relativistic context, and then check whether the flow equation is satisfied. Following the logic of [8,14] and [17], the starting point is the identity (3.2), where y = (t′, x′) is another set of coordinates related to x = (t, x) via a coordinate transformation y(x) whose Jacobian J satisfies (3.3). Performing the coordinate transformation in (3.2) leads to (3.4), which allows one to reconstruct L(x, τ) from the original Lagrangian and the knowledge of the coordinate transformation.

Remark. The Hessian matrix associated to this change of variables is symmetric on-shell, as follows from the continuity equations (2.5).

Let us sketch the computation of L(x, τ) starting from formula (3.4). The first step consists in writing the derivatives ψ_{,t′}, ψ_{,x′} and their c.c. (complex conjugates) in terms of ψ_{,t}, ψ_{,x} and their c.c. by inverting the corresponding algebraic systems. A straightforward computation gives the transformation (3.7), where we also defined the auxiliary quantities (3.8). Next, we write the numerator and denominator of (3.4) in a convenient form. Implementing the transformation (3.7) in these expressions, after some algebraic manipulations one gets (3.10). Finally, plugging (3.8) into (3.11) and using the expressions (3.10) in (3.4), one finds the deformed Lagrangian (3.12). The form of the Lagrangian (3.12) is similar to its relativistic counterpart (see [3,9,10]), and it would be nice to have some interpretation in terms of topological gravity [18]. As stated at the beginning of the section, we can easily check that (3.12) fulfils (3.1) using the explicit expressions of the deformed stress-energy tensor. This proves that (3.12) is indeed the TT̄-deformed non-relativistic Lagrangian density.

Deformed soliton solutions
The knowledge of the coordinate transformation provides a useful tool to obtain classical solutions to the deformed theory without explicitly solving its equations of motion. In this section, we concentrate on the NLS theory and derive the TT̄-deformation of some particular soliton solutions. Recall the NLS Lagrangian density (4.1) and the Euler-Lagrange equations (4.2) associated to it. Starting from a given solution ψ0(x) to (4.2), we can obtain the corresponding TT̄-deformed solution ψ0(x, τ) by means of the coordinate transformation, using the strategy described in [14]. Let us summarise here the main idea of the method. We start from the definition of the inverse Jacobian (3.3) and plug the solution ψ0(y), together with its c.c., into the explicit expressions of the components of T(y). In this way, we end up with two systems of partial differential equations for the unknown functions t(y) and x(y). Integrating these systems, we first recover the map x(y) = (t(y), x(y)) and then invert it to arrive at y(x) = (t′(x), x′(x)). Finally, plugging y(x) into the explicit expression of ψ0(y), we obtain the desired deformed solution.

The bright soliton
Bright solitons are solutions localized in space that emerge in the regime g < 0. Therefore, throughout this section we shall fix g = −k, with k ∈ R⁺.
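Before quoting the deformed profile, a quick numerical sanity check of the undeformed bright soliton is instructive. The script below assumes the standard sech profile ψ0 = η sech(√(2mk) η (x − vt)) e^{i(κx−ωt)} and the conventionally normalised NLS equation iψ_t + ψ_xx/(2m) + 2k|ψ|²ψ = 0; both are our assumptions for the explicit forms of (4.5) and (4.2), chosen to be consistent with the dispersion relation quoted just below:

```python
import numpy as np

# Parameters (arbitrary choices); g = -k with k > 0, i.e. the focusing regime.
m, k, eta, kappa = 1.0, 1.0, 0.8, 0.5
omega = kappa**2 / (2 * m) - k * eta**2   # dispersion relation
v = kappa / m                             # soliton velocity

def psi(x, t):
    # Assumed standard bright-soliton profile.
    envelope = eta / np.cosh(np.sqrt(2 * m * k) * eta * (x - v * t))
    return envelope * np.exp(1j * (kappa * x - omega * t))

# Finite-difference residual of i psi_t + psi_xx/(2m) + 2k|psi|^2 psi.
x = np.linspace(-10.0, 10.0, 2001)
dx, dt = x[1] - x[0], 1e-5
p0 = psi(x, 0.0)
psi_t = (psi(x, dt) - psi(x, -dt)) / (2 * dt)
psi_xx = (np.roll(p0, -1) - 2 * p0 + np.roll(p0, 1)) / dx**2
res = 1j * psi_t + psi_xx / (2 * m) + 2 * k * np.abs(p0) ** 2 * p0
print(np.max(np.abs(res[1:-1])))  # vanishes up to discretisation error
```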
The bright soliton solution has the analytic expression (4.5), where η ∈ R⁺ is the amplitude, κ ∈ R is the wave number, ω = κ²/(2m) − kη² is the dispersion relation and v = ∂ω/∂κ = κ/m is the velocity of the soliton. It can be easily checked that (4.5), together with its c.c., solves (4.2). Plugging (4.5) and its c.c. into (4.3), we arrive at a pair of systems of differential equations, which can be integrated for x(y) = (t(y), x(y)) as in (4.7), where the constants of integration have been chosen in accordance with the initial condition at τ = 0. To obtain the inverse relation y(x) = (t′(x), x′(x)), we first observe the identity (4.8). Plugging (4.8) into (4.7) and inverting, one concludes that |ψ0(x, τ)| is defined through the implicit relation (4.11). Driven by the analogy with the relativistic case, we expect the deformation to cause the emergence of shock-wave singularities in the solution for specific critical values of τ, in correspondence with which the solution becomes multi-valued. For these values of τ the coordinate transformation is no longer invertible; hence they are defined by the locus (4.12), i.e. det J⁻¹(y) = 0. The determinant of J⁻¹(y) evaluated on the solution ψ0(y) can be computed explicitly using (4.8). Since 0 ≤ |ψ0(y)| ≤ η, the set of values of τ that fulfil (4.12) is given by the image of the real-valued function F obtained by solving the equation det J⁻¹(y) = 0 w.r.t. τ. Recall that the parameter ω is defined as ω = κ²/(2m) − kη², with m, k, η ∈ R⁺ and κ ∈ R. An elementary analysis of the function F reveals its domain and image; therefore, the shock-wave phenomenon occurs only for positive τ when ω > 0, and for both positive and negative signs of τ when ω < 0. The latter situation is similar to the sine-Gordon breather [14]. It is worth noticing that there is a strong resemblance between figure 1 and the plots in [19], where the shape and time evolution of vortex filaments are associated with soliton solutions of NLS. This observation suggests that the correct framework for the interpretation of the TT̄-deformed NLS is probably through an embedding in three space dimensions, as explained in [19,20]. We expect that the only effect of the TT̄ deformation is a deformation of the shape of the filament, without changing the "solitonic surface". This corresponds to the non-relativistic analogue of what was previously observed in [3] and [14] for deformed massless free bosons and the sine-Gordon model, respectively.

The grey soliton
Another typical solution of the NLS equation is the grey soliton (see, for example, [21]), which exists in the regime g > 0. Its analytical expression involves, in the cold-atom framework, the velocity v ∈ R of the soliton, the ground-state density n0 ∈ R⁺ of condensed atoms, the chemical potential μ = 2gn0 ∈ R⁺, the velocity of first sound v1 = √(μ/m) ∈ R⁺ and the healing length ξ = 1/√(2mμ) ∈ R⁺. We shall follow the same steps as in the previous section. Plugging the solution ψ0(y) together with its c.c. into (4.3), we obtain the systems (4.17).
The systems (4.17) can be integrated for x(y) = (t(y), x(y)); proceeding as in the bright-soliton case, we get (4.20). In conclusion, the inverse relation y(x) = (t'(x), x'(x)) follows, so that also in this case the deformed solution is defined through an implicit relation. Following the same logic adopted in the previous section for the bright soliton, it is possible to find the critical values of τ at which the solution becomes multi-valued. However, the computation and its explicit outcome are quite involved and not particularly enlightening; thus, we omit them.

4.3 The Peregrine soliton

As in the case of the bright soliton, we shall set g = −k with k ∈ R+. The Peregrine soliton has the analytical expression (4.23). Plugging the solution and its c.c. into (4.3) and integrating the resulting systems for x(y) = (t(y), x(y)), we get (4.26). Unfortunately, it is not possible to invert the coordinate transformation in closed form; therefore, we resort to numerical integration. Figure 2 shows the deformation of the Peregrine soliton for different values of τ. As for the bright and grey solitons, described in sections 4.1 and 4.2, we see the appearance of "wave-breaking" phenomena, resembling those observed in relativistic models [10,14,22].

5 Non-relativistic limit of TT̄-deformed sinh-Gordon

As extensively discussed in [23-26], the non-relativistic (NR) limit of the φ⁴ and sinh-Gordon (sh-G) models corresponds to the NLS theory with quartic potential. This limit can be consistently performed not only at the level of the classical action and equations of motion, but also for various quantum objects of physical interest, such as the Thermodynamic Bethe Ansatz and the form factors. Therefore, the naive expectation is that the TT̄-deformed NLS theory should be easily obtainable from the NR limit of the deformed sinh-Gordon model. We consider the sinh-Gordon theory with background metric η = diag(c², −1), whose action and Lagrangian density (5.2) fulfil a flow equation in which a c² factor comes from |det η|. In analogy with [25], we shall consider a double scaling limit such that c → ∞, ḡ → 0 with β = ḡc = const. Therefore, the sinh-Gordon potential admits an expansion around ḡ = 0 in which the powers of φ higher than φ⁴ are suppressed. Following [23-25], we parametrize the field φ as in (5.7), where ψ describes only the non-relativistic degrees of freedom. Using (5.7), the kinetic and potential terms of the sinh-Gordon Lagrangian become (5.8) and (5.9), where the operators On(x) and their conjugates collect products of powers of ψ and ψ*, while LK(x) is the kinetic part of the non-relativistic Lagrangian density L(x), as per (2.1). Notice that terms involving exponential factors e^{inmc²t} oscillate so fast as c → ∞ that they average to zero when integrated over any small but finite time interval. We shall drop these terms by taking a suitable time average, denoted by a bracket symbol. Plugging (5.8) and (5.9) into (5.2) and taking the time average, we obtain the result of [25], in which L_NLS(x) is the nonlinear Schrödinger Lagrangian density (4.1) with coupling constant g = β²/16, written explicitly in (5.12). Such a non-relativistic limit of the sinh-Gordon model is uniquely defined. However, the same procedure appears to be ambiguous when applied to the TT̄-deformed case.

5.1 Mean field approach

In this section we shall discuss one among the many possible ways to perform the NR limit in the deformed sinh-Gordon model.
Before we begin, notice that the factor c² in (5.5) is problematic when taking the NR limit. Therefore, we reabsorb it by rescaling τ as τ/c² in (5.4). Given this, the idea is to apply a Mean Field (MF) approach, which consists in taking the average of the potential and kinetic terms appearing in (5.4) as in (5.13), and then taking the limit c → ∞. This procedure is justified, a posteriori, by the simplicity of the final outcome. Let us consider separately the various terms in expression (5.13); in the resulting pieces, L_NLS(x) is as per (5.12). Combining (5.14) and (5.15), the terms proportional to c² cancel in the sum and lead to the Lagrangian L_NR of (5.16). This is an unexpected result, since L_NR is very different from the TT̄-perturbed Lagrangian (3.12). Moreover, the expansion of (5.16) around τ = 0 shows that there is no agreement with (3.12) already at leading order in τ; the expansion of (3.12) is instead of a different form. Although the question is still open, L_NR probably does not correspond to an integrable deformation of NLS.

The flow equation fulfilled by (5.16), the associated Hamiltonian density, involving |ψ|⁴, and the conjugate momenta can all be written down explicitly. It is easy to show that all components of the stress-energy tensor rescale in the same way in terms of T_NLS(x), defined as per (2.4) with L = L_NLS. As is customary in the integrable-model framework, it is convenient to continue working with the field pair (ψ, ψ*). The equal-time Poisson bracket of the deformed theory is given in (5.21), where x and y denote two different spatial points at fixed time and δ(x) is the Dirac delta. In fact, using this definition it is possible to verify that formula (5.18), together with its c.c., yields the deformed EoMs. Let us briefly sketch the computation. From the definition (5.18) and using the Leibniz rule, we obtain (5.23). Next, we manipulate separately the Poisson brackets in the second line of the latter expression, obtaining (5.24) and (5.26), where we used the identity (5.27), which follows immediately from (5.21). Finally, plugging (5.24) and (5.26) into (5.23) we get (5.28). Integrating by parts the integral in (5.28), one arrives at an equivalent form, the deformed equation of motion (5.31). The same EoM can be derived directly from the Lagrangian (5.16); however, knowledge of the Poisson structure (5.21) should help in the exploration of the hidden integrability structure and in the direct quantisation of this model. Summing (5.31) with its complex conjugate, we can derive the deformed continuity equation (5.32), from which we read off the components of the deformed conserved U(1) current. Splitting the complex function ψ into its modulus and phase, we can recast equation (5.32) in the form (5.35).

While we were working on this project, the paper [11] appeared on arXiv. Following a quite different logic, using the ideas of a change of the metric and generalized hydrodynamics, the authors of [11] also arrived at (5.35). At this point, it is clear that the outcome of the non-relativistic limit depends on the stage at which the "small-time-interval" average procedure is performed. We have not yet identified a guiding principle to discern between the various options.

Let us end this section with a concluding remark. If we parametrize the field φ as per (5.7), employ the notion of time average as described in the previous section, and finally take the c → ∞ limit, we obtain a relation in which Pµ and Jµ are the conserved currents associated with the nonlinear Schrödinger Lagrangian density (5.12).
Thus, upon rescaling τ as τ/c², the first-order truncation of (5.4) follows. It is interesting to observe that a suitable NR limit maps det[T_sh-G] of the sinh-Gordon theory into the bilinear combination −ε^µν Jµ Pν of the NLS model, which is different from det[T_NLS]. In fact, −ε^µν Jµ Pν is the perturbing operator associated with the so-called hard-rod deformation, recently studied in [27] and defined by the flow equation (5.38). Let us stress that the computation presented here holds at the first perturbative order in τ. It would be important to go further in perturbation theory and to understand whether the whole flow equation (5.38) for the NLS model can be recovered from the TT̄-deformed sinh-Gordon model through some specific non-relativistic limit procedure.

6 Conclusions

The nonlinear Schrödinger equation plays a significant role in various branches of physics, ranging from classical hydrodynamics and superfluidity to nonlinear optics. In this paper we have identified the TT̄-deformed NLS Lagrangian, with generic interacting potential, and studied particular solutions of the corresponding equations of motion. Compared to the unperturbed case, the deformed soliton solutions exhibit bifurcation or wave-breaking phenomena. Several aspects of this model deserve further study. First of all, we would like to fully develop the Hamiltonian approach, which is complicated by the presence, already in the undeformed theory, of a second-class Hamiltonian constraint. From the fact that the finite volume/temperature spectrum of TT̄-like perturbed models fulfils Burgers-type equations [2,3,7,28], we know that there must be a way to overcome the technical problems caused by the highly non-trivial evolution of the Poisson bracket structure under the TT̄ perturbation. Various quantum aspects of this deformation are discussed in the nice recent work [12]. The second type of deformation, described in section 5, is also interesting. In many respects, it leads to a simpler system compared to the "standard" TT̄ perturbation of section 3. For both perturbations, it would be essential to investigate the connection with the theory of vortex filaments, as discussed in [19] (see also [20]). Further work in this direction may shed some light on the physical interpretation of these systems and their possible interpretation as non-relativistic variants of the Nambu-Goto model. Finally, it is necessary to stress that, under specific conditions, the NLS equation represents a model for solitons and rogue waves in hydrodynamics and nonlinear optics [29,30]. Interestingly, in these laboratory setups one can exchange the roles of the space and time coordinates and even describe stationary optical beams where both t and x correspond to physical space coordinates. This is, for example, achieved in planar glass wave-guides with Kerr non-linearity [30,31]. The possibility of building these types of devices gives us some hope for future realisations of "TT̄-optical systems" related, for example, to the simple EoM (5.31). Concerning the Poisson structure associated with the TT̄ perturbation described in section 3, a preliminary investigation reveals the appearance of non-ultralocal terms. This fact could make the quantisation procedure of the model problematic (see, for example, [32] for a recent discussion of this issue in the closely related sigma-model context). The Poisson structure associated with the "mean-field" model of section 5.1 is relatively simple.
However, the question of a probable loss of integrability remains open; such a loss would reduce the possibility of comparison with exact results, such as the S-matrix and the Thermodynamic Bethe Ansatz equations. We shall leave a more extensive discussion of these compelling questions for the future. Furthermore, it would be important to explore possible connections between the results presented here and in [12] and the corresponding deformations of the two-dimensional Yang-Mills-Higgs model [33], as recently suggested by [34] as a natural generalization of the TT̄ and q-deformed Yang-Mills setups of [10,34-36].

Note added 1: while we were already at the writing stage of this paper, we became aware of the work [27] by Dennis Hansen, Yunfeng Jiang and Jiuci Xu, which has some overlap with ours, in particular on the dynamical change of coordinates and the TT̄-deformed Lagrangian described in section 3.

Note added 2: we thank Sergey Frolov for informing us that, in collaboration with Chantelle Esper, he obtained the one-soliton solution of the TT̄-deformed NLS equation [51] using the light-cone gauge approach of [15].
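As a practical companion to the soliton analysis of section 4, the sketch below is a minimal split-step Fourier integrator for the undeformed NLS. The sign and normalization of the equation, i ψ_t = −(1/2m) ψ_xx + g|ψ|²ψ, are an assumption and may need adjusting to match the conventions of (4.1)-(4.2); the deformed dynamics itself is not implemented here.

```python
# Minimal split-step Fourier integrator for the UNDEFORMED NLS,
# i psi_t = -(1/(2m)) psi_xx + g |psi|^2 psi (convention assumed).
import numpy as np

def split_step_nls(psi0, L=40.0, m=1.0, g=-1.0, dt=1e-3, steps=2000):
    """Evolve psi0, sampled on a periodic grid of total length L, in time."""
    n = psi0.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    kin = np.exp(-1j * k**2 / (2.0 * m) * dt)      # exact kinetic propagator
    half_nl = lambda p: p * np.exp(-1j * g * np.abs(p) ** 2 * dt / 2)
    psi = psi0.astype(complex)
    for _ in range(steps):
        psi = half_nl(psi)                          # half nonlinear step
        psi = np.fft.ifft(kin * np.fft.fft(psi))    # full kinetic step
        psi = half_nl(psi)                          # half nonlinear step
    return psi

# Example: a sech pulse in the focusing regime (g < 0) stays a soliton
x = np.linspace(-20.0, 20.0, 512, endpoint=False)
psi_T = split_step_nls(1.0 / np.cosh(x))
print(f"peak amplitude after evolution: {np.abs(psi_T).max():.3f}")  # ~1.000
```

In these conventions the sech pulse is an exact one-soliton, so the conserved peak amplitude provides a quick correctness check of the integrator before experimenting with other initial profiles.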
2023-01-21T15:07:16.657Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "c8fedcb04e6700aa27b26e334e4cea5a359eb51d", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP04(2021)121.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "c8fedcb04e6700aa27b26e334e4cea5a359eb51d", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
272039488
pes2o/s2orc
v3-fos-license
Fiqh Kankilo and the Purification System of the Butonese People: A Socio-Legal Historical Perspective of Islamic Law and Legal Pluralism

This study examines the dialectic between Islam and local culture in the purification system of the Butonese people. Data were obtained by means of interview, literature review, and observation. The study used the approaches of the social history of Islamic law and legal pluralism. The data were then analyzed using the Miles and Huberman analysis model. The findings of the study reveal that kankilo is a scientific concept that contains knowledge about the ways and purposes of purification, produced by the dialectic process between Islam and local traditions. For the Butonese, kankilo is taharah (purification), and taharah is kankilo. Hence, the concept and knowledge of taharah in the context of Buton culture can be referred to as Fiqh Kankilo. Fiqh Kankilo is a product of fiqh (Islamic jurisprudence) thought designed on the local characteristics of Buton society. The locality that is clearly visible in the content of Fiqh Kankilo is inevitable, considering that the individuals and cultures in which a religious law develops are not a blank slate or a cultural vacuum. Thus, religion (i.e., fiqh) and culture are ultimately two aspects that are certain to be in partnership, to build synergy, and to greet each other. This inevitability occurs because the understanding of religion cannot avoid the locality of culture, which is relative and particular. The legitimacy of the existence of Fiqh Kankilo can at least be referred to the history of the development of Islamic legal thought, which provides room for the accommodation of traditions on the paradigmatic basis of "adat al-muhakkamat" (customary law).

Introduction

One of the living traditions that continues to be upheld in the cultural space of the Butonese people is the tradition of kankilo. Kankilo literally means "purity". 1 By description, kankilo is a tradition related to purification rituals or performed for the purpose of purifying oneself. The scope of purification in kankilo is not merely related to bodily purity (i.e., hadath and najas); it also represents a ritual act of purifying oneself from possible evil deeds towards oneself, others, and the surrounding environment. In the latter sense, kankilo for the Butonese transforms the performer into a figure who adorns oneself with praiseworthy character traits. 2 Thus, the essence of the kankilo ritual practice that has grown in the historical cultural space of the Butonese is to transform the ritual performer into a figure who adorns oneself with praiseworthy character traits, which is achieved through the stages of purifying the body from hadath (a state of impurity) and najas (external impurity).
In this study's preliminary analysis of written documents containing the concept of the kankilo tradition, specifically the patanguana kankilo texts, the descriptions of bodily purification from hadath and najas, as outlined in fiqh taharah, are evident. The scope of kankilo bodily purification includes istinja (cleansing oneself after defecation or urination), wudhu (ablution), the janabah (ritual) bath, and the shahada (profession of faith). However, in practice, beyond referring to sharia norms (i.e., the Qur'an and Hadith), kankilo contains local elements in its technicalities and substantive orientation. The reality of local content in kankilo ritual practices has piqued our interest in conducting an in-depth study of kankilo as a concept of purification and of its practices in Buton society. Furthermore, this study also elaborates on the form of the dialectic between religion and adat (custom) in the kankilo concept, and on the reason kankilo has continued to be embodied in the worldview of the Butonese people until the present.

One of the most popular studies describing the relationship between Islam and local cultures in the Nusantara is Clifford Geertz's research in Mojokuto, The Religion of Java. Geertz's research produced a famous concept known as the tripartite theory (i.e., abangan, santri, and priyayi). Geertz depicts Javanese people as having their own local religions, heavily influenced by beliefs in the supernatural, as well as various ritual traditions identified with the beliefs of the abangan, centered in rural areas. 3 Following Geertz's pattern, Niels Mulder reaches the same conclusion. He states that the religion found in Southeast Asia is a religion that has undergone a process of indigenization (i.e., localization). Mulder further describes that, in the dialectical process that occurs between religion and local traditions, the foreign religion absorbs local traditions, and not vice versa. Similarly, in its relation to local cultures, Islam absorbs the local cultures. 4 Geertz's trichotomy, using the terms abangan, santri, and priyayi, is however considered inaccurate by Hasyra Bachtiar. Bachtiar argues that these three groups, as classified by Geertz, do not originate from the same classification system. Abangan and santri, according to Geertz, are categorized based on their level of adherence to Islam, while priyayi is a social classification. Moreover, Geertz views these categories as absolute, which Bachtiar contends is not the case. 5 Mark R. Woodward also rejects Geertz's view. Woodward argues that Geertz's assertion that Islam in Javanese courts has been heavily influenced by mystical traditions that blend Hindu and Islamic elements (syncretism) is incorrect. Instead, Woodward suggests that it is a compatible relationship between Islam and local culture. Furthermore, Woodward contends that Javanese Islam is not animistic, but rather a contextual Islam that undergoes continuous acculturation. He rejects Geertz's view of slametan offerings as animistic, arguing that these ritual practices are based on traditions linked to the hadiths of the Prophet Muhammad. 6
In addition to several Western academic writings explaining the relationship between Islam and local cultures, there are also a number of writings by Indonesian intellectuals related to Islam and Javanese culture, as well as writings about Islam and other local cultures, such as Erni Budiwanti's "Islam Sasak: Wetu Telu Versus Wetu Lima" (2000). Budiwanti's study concludes that the process of interaction between Islam and the local culture of the Sasak people is characterized by acculturation. 7 In addition, a number of researchers have also carried out studies on the relationship between Islam and culture focused on Buton society. One of the works widely referenced by Buton-based researchers is Abdul Mulku Zahari's "Adat Fi Darul Butuni". 8 J. W. Scoorl, a Dutch anthropologist, has also conducted research titled "Masyarakat, Sejarah dan Budaya Buton" [Buton Society, History, and Culture], which focused more on the culture of the society, while Yunus's study focused on the aspect of Sufism in relation to the power system. Yunus explains that the elements of Sufi teachings and the elements of power are interconnected, forming and strengthening the components of power within the sultanate. 12 However, Alifuddin's study found that the long historical process experienced by Buton society, which eventually led it to choose Islam as its official "ideology", did not automatically eliminate all the previous local elements. Thus, Islam and local elements coexist to formulate a relationship adequate for the needs of the Butonese people. This necessity has continued as elements of modernity penetrate this region. 13 Among the many writings about Buton, none specifically discusses kankilo, as examined in this present study. A study touching on kankilo can be found in Hamirudin Udu's research, "Kangkilo Oral Tradition: Reflection of Sufism and Political Powers in Buton Community". 14 Udu mentions that in the oral tradition of kankilo there is knowledge and understanding of Sufism embodied in the cultural space of the Butonese. The values of order contained in the oral tradition of kankilo, both theocentric and anthropocentric, are used to create social harmony in the life of the Buton community. In addition, they are intended to realize the relationship between God and humans, as servants, in the form of worship, as well as the relationships between humans and other humans, and between humans and the universe. 15 Although Udu's study was concerned with kankilo, its focus was more on the aspects of Sufism. Therefore, it differs from this study, which examined kankilo from the aspect of purification rituals (i.e., taharah).

The dialectic of religion and custom is a dynamic relationship between two cultural elements, naturally resulting in a tug-of-war or a mutual influence between the two, and existing between two extremes: conflict and integration. Between conflict and integration, there is the assumption of compromise as a middle ground to avoid cultural clashes. Compromise can take the form of adaptation, accommodation, assimilation, or even syncretism. 16
Integration, a result of the stages of adaptation, accommodation and assimilation, will, according to Ralph Linton, never be fully realized. Linton states that not all elements within a culture can perfectly adapt to each other, and therefore every culture undergoes constant change, either through invention or through diffusion. This implies that no culture has ever been perfectly integrated at a particular point in history. 17

The dialectic phenomenon between religion and custom is essentially a universal occurrence and is not limited to a specific region. This phenomenon has existed since the very beginning of Islam's development, and it is in this context that the term "al-adat al-muhakkamah" (customary law) emerged. In further developments, Islamic law placed the issue of custom within a specific framework of rules to be applied, known as 'urf (local custom). Applying Islamic law in accordance with custom means preserving the public interest, which is one of the principles of Islamic law (i.e., 'urf). However, it is important to note that in its implementation, 'urf is not intended to alter the principles of Islamic law. 18

This present study employed descriptive analytics with qualitative data. Data were obtained by means of interview, literature review, and observation. As this study relates to the anthropology and sociology of Islamic law, the data obtained at the research site were analyzed using historical 19 and legal pluralism approaches. 20

The Practice of Kankilo as a Concept of Purification in Buton Society

For the Butonese people, the purification ritual, or kankilo, has become an integral part of their tradition that continues to be practiced and preserved. The significance of kankilo for the Butonese is not merely about cleansing the body from "dirt" (i.e., najas and hadath); it also involves a spiritual dimension aimed at purifying the soul from negative attributes that may attach to humans. In their local language (i.e., Wolio), purity or the purification ritual is expressed by the word kankilo, 21 which is literally equivalent to the Arabic word "taharah". This view is based on the description of Sultan Muhammad Idrus Qaimuddin (1824-1851) in the Fakihi (i.e., fiqh) book, as follows:

Kupebaangi Kutula-tula Kangkilo (I begin to explain about cleanliness)
Osiytumo Puuna Pai amala (That is the root of all deeds)
Kapupuana Bicarana Sambaheya (The end of the matter of the law of prayer)
Osyitumo Ariyna Islamu (That is the pillar of Islam) 22

In the first stanza, he explicitly mentions the word "kankilo" in one of his writings about the purification process contained in the book Fakihi. 23 Another fact indicating that kankilo, in the sense of purification from hadath and najas, is equal to "taharah" in Islamic fiqh is evident in its practices, as they are explicitly a derivation of the concept of purification as taught in Islamic fiqh books. Nevertheless, in the concept of kankilo, local variants are found which are "foreign", in the sense that they are not included or found in the books and lessons of taharah commonly studied in pesantren/madrasah or other formal religious education. For clarity, the following section describes several parts of the discussion of purification and its process in kankilo patangauna, which shows that kankilo is a derivation of the concept of taharah, including the local content contained therein. However, the case examples are limited to several important issues, such as istinja, wudhu, and the janabat bath.
Istinja in Kangkilo Patangauna

In the classic texts of Buton, the lesson of istinja is a crucial aspect. The procedure of istinja has been passed down from parents to children and across generations of the Butonese people through oral traditions. In Islamic jurisprudence, istinja is defined as an act performed by a person to cleanse one's private parts from najas (e.g., urine and feces) using water. Istinja is one of the religious commands mentioned in the Sunnah or Hadith of the Prophet, as narrated by Anas bin Malik: "The Messenger of Allah once entered the restroom to defecate; then I brought him a bucket of water and he performed istinja with it."

In addition to cleansing oneself with water after defecating or urinating, Islamic law also provides guidance for situations where water is unavailable. In such cases, one may use clean solid objects like stones to cleanse the area (i.e., qubul or dubur), a practice known in Islamic jurisprudence as istijmar. From Abu Hurairah, may Allah be pleased with him, it is narrated that the Prophet Muhammad (peace be upon him) said, "Whoever performs istijmar (cleansing with stones) should do so in an odd number. Whoever does this has done well, and whoever does not, there is no harm." (Narrated by Abu Dawud, Ibn Majah, Ahmad, Baihaqi, and Ibn Hibban) 24 From Aisha, may Allah be pleased with her, it is narrated that the Prophet Muhammad (peace be upon him) said, "When any one of you goes to the toilet, let him take three stones, for that is sufficient for him." (Narrated by Abu Dawud, Baihaqi, and Shafi'i) 25 The Prophet Muhammad (peace be upon him) also said, "Let none of you perform istinja, except with three stones." (Narrated by Muslim) 26

The procedures for istinja and istijmar, as outlined in the fiqh books, are as follows: one starts by taking water with the left hand and washing the genitals, specifically the opening where urine exits; alternatively, if mazi (pre-seminal fluid) has been discharged, the entire genital area should be washed. Then, the anus is washed and rinsed with water while rubbing it with the left hand. Istijmar, as mentioned earlier, means performing istinja without using water, using stones or other objects instead; three different stones are used to clean the remaining residue after defecation. Istinja and istijmar, as part of the Islamic concept of purification, have been detailed in fiqh books and are taught as basic lessons in Islamic schools and madrasah.

In Buton society, the same lessons are also taught to children and the next generation, referred to as kankilo. As mentioned previously, kankilo is the concept of purification within the cultural context of Buton Muslim society. The following statement describes kankilo as observed and passed down in Buton society.
"After the najas comes out, we take water or stones or dry, useless wood.Then, we wipe the area until it is dry and carefully ensure that the najas does not touch other parts of the skin.If it does touch another part of the skin, the istinja is not valid because the wiping is considered the initial istinja.Then, we wash or rinse ourselves, starting with our thumb to wash our navel.Our index finger washes our right groin, our little finger washes our left thigh, our middle finger washes 'heaven', and our ring finger washes 'hell', three times on the right and three times on the left.Then, we continue to rotate to the right until all traces of najas are removed, into the 1173 letter 'dal', the origin of earth.Once we feel it is dry or clean, we intend to remove the remaining najas, transforming them into the letter 'mim', the origin of water.If we feel uneasy, we look at the water as if to transform the najas into the letter 'ha', the origin of air.After feeling clean, we contemplate removing any doubts about najas within ourselves, transforming it into the letter 'alif', the origin of fire.Finally, we wash our zurriyyat adal (private parts) three times while reciting, "Allahumma thahri qalbi minal nifaqi wahasinu farji minalfawahisyi" (O Allah, my Lord, purify my heart from hypocrisy and purify my private parts from all impurity)." 27he description above is essentially an expansion of the meaning or interpretation of several hadiths related to the teachings of istinja and istijmar.The basic meaning of istinja is to remove dirt, whereas in fiqh, istinja has several meanings.Technically and methodologically, istinja, as mentioned in several hadiths, is the act of cleaning oneself from najas (external impurity) after urinating or defecating.In the kankilo text, it is stated: "After the najas comes out, we take water or stones or dry, useless wood.Then, we wipe the area until it is dry and carefully ensure that the najas does not touch other parts of the skin.If it does touch another part of the skin, the istinja is not valid because the wiping is considered the initial istinja." 
The text explicitly describes the initial stage or process involved in performing istinja, which entails using water or, in the absence of water, employing dry stones or wood for istijmar. A deeper analysis of the text, particularly considering the specific details mentioned, clearly indicates that istinja and istijmar are derived from the concepts in the various teachings of the Prophet as previously explained. However, in subsequent developments, the tradition of istinja as outlined in the text has been infused with local wisdom. The next part of the Kankilo text is as follows: "...Then, we wash or rinse ourselves, starting with our thumb to wash our navel. Our index finger washes our right groin, our little finger washes our left thigh, our middle finger washes 'heaven', and our ring finger washes 'hell', three times on the right and three times on the left. Then, we continue to rotate to the right until all traces of najas are removed..."

[Footnote 27, Wolio original: Amapupuaka inaisi molimba itu taalamo batu atawa okau-kau, inda mokoampadea momatau. Kasimpo tapenkuri pokawaaka omatau onajisi itu. Maka janganiya booli ajampe ikuli mosaganana najisi itu. Barangkala ajampe ikuli mosaganana indamo osaha itu tapekaobusa karana tapenkuri itu osytumo istingga awwali. Kasimpo tapekaobusa, baabaana onganga ogeta abanui puseta syahadata abanui puuna kalata ikana. Kancilita abanui kancilta puuna kalata ikai lakina limata ibanui syoroga. Sosota ibanui narakaa talu wulinga palikana talu wulinga palikaai. Kasimpo tapalipali kaanamo pokawaaka aila ipupuna najisi itu, tantomakamo paila pupuna najisi itu hurufuna dale asala tana. Amararoaka onamisita tantomakamo paila lumuna najisi itu horofu mimu asala uwe. Amahuhiaka onamisita tantomakamo paila bouna najisi itu horufu ha asala ngalu. Amangkiloaka onamisita tontomakamo paila mokonamu-namu inajisi inuncana karota siy horufu alefu asala waa. Kasimpo tabanui zurriyati aadamu taluwulinga temomubaca inciasi. Allahumma thahri qalbi minal nifaqi wahasinu farji minalfawahisyi. Unknown, Kankilo Pantangauna Manuscript (Bau-Bau: n.p., n.d.)]

Based on the description above, there appears to be a methodological and technical difference between the ways of istinja and istijmar as presented in several hadiths and the kankilo tradition in Buton. This difference is clearly seen in the procedure, that is, at the level of implementation. In the hadiths that explain istinja, the procedure is very simple: washing the anus and rinsing it with water by rubbing it with the left hand. In contrast, the Buton version of istinja adds several methods (kaifiyat), for example: "Our index finger washes our groin on the right, our little finger washes the base of our thigh on the left...". This difference is inseparable from the influence of the local culture of the community and is also an expansion of the interpretation of several hadith texts that explain istinja and istijmar. Although there are some differences, the essence of the implementation of istinja does not change, i.e., to purify the performer from impurities, which is an absolute prerequisite for every Muslim to perform worship.
The content and procedure of istinja contained in the Butonese people's kankilo suggest that the interpretation or elaboration of the meaning of istinja cannot be separated from the local characteristics of Buton Islam, which is rich in various shades of Sufism. This can be seen in the following description of the method of istinja in the Butonese kankilo: "...Then, we continue to rotate to the right until all traces of najas are removed, into the letter 'dal', the origin of earth. Once we feel it is dry or clean, we intend to remove the remaining najas, transforming them into the letter 'mim', the origin of water. If we feel uneasy, we look at the water as if to transform the najas into the letter 'ha', the origin of air. After feeling clean, we contemplate removing any doubts about najas within ourselves, transforming it into the letter 'alif', the origin of fire. Finally, we wash our zurriyyat (private parts) three times while reciting, "Allahumma thahri qalbi minal nifaqi wahasinu farji minalfawahisyi" (O Allah, my Lord, purify my heart from hypocrisy and purify my private parts from all impurity)."

Wudhu

The command to perform wudhu as part of religious law is explained in the Qur'an, Surah Al-Maidah verse 6. In this verse, Allah specifies several body parts that must be cleansed in wudhu through washing and wiping, namely: the face, both hands up to the elbows, the head, and both feet up to the ankles. 28 This verse is then further elaborated or detailed in several hadiths of the Prophet, including a hadith narrated by Usman bin Affan, as follows: Humran reported that Usman taught him the procedure of wudhu. So, he washed his palms three times, then rinsed his mouth and nose, then washed his face three times, then washed his right hand up to the elbow three times, then washed his left hand like he washed his right hand, then he wiped his head, then washed his right foot up to the ankle three times, then washed his left foot like he washed his right foot. Then, he said, "I have seen the Messenger of Allah perform wudhu like the wudhu I have performed." (Hadith Muttafaq 'Alaih/Agreed upon by Bukhari and Muslim) 29 Based on this hadith and other hadiths explaining the Prophet's procedure of wudhu, the body parts that he washed and wiped are described in the following table: [Table: body parts washed and wiped in the Prophet's wudhu, according to the hadiths]

The kankilo text, for its part, describes the wudhu as follows: "Then, wash the hands to cleanse our flesh, then the mouth to cleanse our heart, and then wash the nose to cleanse our desires. Then, wash our face with the intention, "nawaitu raf'al hadasi asghorul istibahati shalati fardhan lillahi ta'ala" (I intend to perform ablution to remove minor impurity for the obligatory prayer for the sake of Allah), coinciding with the water reaching our forehead, then wash the eyes to cleanse our hearts. Then, wash the hands up to the elbows to cleanse our blood. Then, wash the crown of the head to cleanse our brain. Then, wash the ears to cleanse our gall, and then wash the neck to cleanse our lungs, then wash the feet up to the ankles to cleanse the angels: Jibril, Mikail, Israfil, and Izrail." 30

[Footnote 28: "O believers! When you rise up for prayer, wash your faces and your hands up to the elbows, wipe your heads, and wash your feet to the ankles. And if you are in a state of (full) impurity, then take a full bath. But if you are ill, on a journey, or have relieved yourselves, or have been intimate with your wives and cannot find water, then purify yourselves with clean earth by wiping your faces and hands. It is not Allah's Will to burden you, but to purify you and complete His favour upon you, so perhaps you will be grateful." Department of Religious Affairs of the Republic of Indonesia, Al-Qur'an dan Terjemahannya, (Jakarta: Proyek Penerjemahan al-Quran, 1986), p. 86. Footnote 29: Abu Muhammad 'Abd Allah Muhammad bin Ismail Al-Bukhari, Al-Jami' al-Shahih (Shahih al-Bukhari), (Beirut: Dar al-Fikri, n.d.), Hadith 159, Bab al-Wudhu' qabl al-gusl; Muslim, Sahih Muslim, Thaharah Book, Hadith 331, (Beirut: Dar al-Fikri, n.d.)]
The table below summarizes the parts of the body included in the Buton procedure of wudhu: [Table: body parts washed and wiped in the Buton (kankilo) wudhu]

It is evident that, regarding the body parts that are washed and wiped during wudhu, there is no fundamental difference between the information obtained from the Qur'anic verses and the hadiths of the Prophet and the content of the kankilo text about wudhu. The difference arises when we pay attention to the purpose of washing and wiping these body parts. This study has not found any authentic hadith that explains the purpose of washing each body part in wudhu, unlike in the kankilo patangauna. In this regard, the study considers that what is contained in the kankilo patangauna text describing wudhu is the result of an interpretation or local construction by the local community in understanding the verses and hadiths related to the issue of wudhu.

[Footnote 30, Wolio original: Kasimpo tabaho limata tapekankilo antota. Kasimpo tabaho ngangata apekankilo baketa, kasimpo tabaho angota tapekankilo nafsuuta. Kasimpo tabaho routa niatimo, nawaitu raf'al hadasi asghorul istibahati shalati fardhan lillahi ta'ala", asaubawa tee tumpuna uwe ibawona routa. Kasimpo tabaho matata tapekankilo yaeta, kasimpo tabaho limata kawana sikuta tapekankilo raata, kasimpo tabaho uwu-uwuta tapekangkilo otata. Kasimpo tabaho talongata tapekankilo piuta, kasimpo tabaho barokota tapekankilo kumbata. Kasimpo tabaho yaeta kawana biku-bikuta tapekangkilo Jabaraili, Mikaili, Iisrafili, Izraili. Unknown, Kankilo Pantangauna Manuscript (Bau-Bau: n.p., n.d.)]

Janabat (Mandatory Ritual) Bath

To bathe, in its linguistic sense, means to pour water evenly over something. According to Islamic terminology, however, "bathing" specifically refers to pouring water over the entire body in a certain way, with the intention of worshipping Allah. Several situations necessitate a mandatory ritual bath (janabat): (1) the emission of semen, (2) jima' (sexual intercourse for married couples), (3) conversion to Islam, and (4) the end of menstruation or postpartum bleeding for women. Based on the hadith of the Prophet, the procedure for a janabat bath is as follows: Aisha R.A. reported that, "Whenever the Messenger of Allah began his janabat bath, he would begin by washing his right hand with his left, then he would clean his private parts, and then perform wudhu. Then, he would take some water and insert his fingers into his hair, then pour water over his head three times, then he would pour water over his entire body, and then he would wash his feet." (Hadith Muttafaq 'Alaih/Agreed upon by Bukhari and Muslim, with Muslim's wording)

In line with the above hadith, Maimunah R.A. reported that the Prophet would begin his janabat bath by performing wudhu; then he would wash his right hand with his left hand two or three times. Then, he would wash his private parts, place his hands on the floor or the wall, and then rinse his mouth and nose, and wash his face and hands. Next, he would pour water over his body, then rub his body, and bend down to wash his feet. Maimunah R.A. said, "I brought him a cloth, but he refused it and dried himself with his hands." 31
Based on these hadiths, the procedure for performing the janabah bath is as follows: (1) washing the hands two or three times, (2) washing or cleaning the private parts, (3) placing the hands on the ground, meaning cleaning the hands after cleansing the private parts, (4) performing wudhu as usual without washing the feet, (5) pouring water over the head, (6) bathing thoroughly/wetting the entire body, and (7) washing both feet. The procedure for performing the janabah bath, as described, is the most common and can be found in the various fiqh books written by the ulema (Islamic scholars). This procedure is the result of the interpretation of hadith texts that elaborate the ways the Prophet performed the janabah bath.

The guidelines for janabah bathing, like the guidelines for wudhu and istinja, are also fundamental teachings delivered by religious teachers in both Islamic schools (madrasah) and general schools. These guidelines are also taught or transferred from parents to their children. The goal is for adolescents to understand the basic teachings of purity based on sharia/religious law. 32 The Buton community, like other Muslim communities, transmits knowledge about these matters both formally and through tradition. Within the Buton cultural system, there are guidelines for purification whose roots can be traced back to Islamic teachings, known as kankilo in the local language. Within the kankilo texts inherited by the Butonese, there are also teachings and guidelines for purifying oneself from major impurity (hadath). The following is a description of a kankilo text that contains the procedure for purifying oneself from major hadath.

"Then, when taking a janabah bath, first contemplate the entrance of wadi into the madi, the madi into the mani, and the mani into the manikam. Believe that all these merge with the water that will be used for bathing. Then, begin to wash the right side with the intention: "Nawaitu raf'al haditsil akbaru istihatisshalati fardhan lillahi ta'ala" (I intend to remove the major impurity to purify myself for prayer, an obligation for the sake of Allah Almighty), coinciding with the water touching the skin. Then, wash the right and left sides, followed by the back, and spread out all the hair, the skin of the nose, ears, eyes, navel, and the entire body. Smooth out all the folds, and ensure that nothing hinders the water from reaching our body. Once clean, contemplate the nurul iman (light of faith) as a light that is at the door of the heart and dissolve into it. Then, our self will be purified like our purity in the realm of the unseen. That is what is meant by being janabah (purified from major impurity)." 33

Examining the content of the text above, it is clear that there is a definite guideline on how one should perform a janabat bath and how to begin it.

[Footnote 32: M. Quraish Shihab, Membumikan Al-Quran, (Bandung: Mizan, 1992), p. 176. Footnote 33, Wolio original: Kasimpo tabaho weta ikanata, kasimpo tabaho weta ikaita, kasimpo tabaho weta iaroata. Kasimpo tabaho italingata tapalipua bari-baria bulata. Kasimpo tabaho ngangata, oangata, matata, opuseta, karota, isulubita, patipua, bari-baria lapita tee moduka booli temoempe siy tumpana uwe ikarota. Amankiloaka tapebaho tontomakamo iweitu tamangkilomo itu simbou kankilota iaalamu misali, Isyitumo isorongiaka jinubu. Unknown, Kankilo Pantangauna Manuscript, (n.p., n.d.)]
The text implicitly suggests that a janabat bath begins with performing wudhu and then washing or pouring water over the body, starting from the right side. The basic procedure of the janabat bath in kankilo is clearly a derivation of the procedure as outlined in the hadiths of the Prophet. The difference lies in the placement of the meaning of its purpose: in the fiqh context, the purpose is essentially to purify oneself from major hadath (e.g., the emission of semen, after intercourse, and post-menstruation and postnatal bleeding for women), whereas in the context of kankilo this basic purpose is then developed into more metaphysical aspects. This can be seen in the following content of the kankilo text: "Then, when taking a janabah bath, first contemplate the entrance of wadi into the madi, the madi into the mani, and the mani into the manikam. Believe that all these merge with the water that will be used for bathing."

The explanation above suggests that the janabah bath in the context of kankilo is not merely about cleaning bodily impurities per se, but also includes aspects of inner purification. This confirms the existence of local content in the practice of the janabah bath, which creates a difference from the procedure of the janabah bath as described in the books of fiqh on taharah.

The Dialectical Pattern of Islam and Local Tradition in the Butonese Purification System

The reality of the cultural dialogue occurring between the Islamic tradition (taharah) and the local tradition (kankilo) among the Butonese, as illustrated in the previous examples, serves as evidence that the presence of Islam in the Butonese socio-cultural space has undergone a persuasive process and mechanism. Islam, which the Butonese subsequently chose as their "ideology", arrived without imposing its value system, or even its system of worship, to be practiced "totally", especially where the application of that value system would disregard the values preserved in the local traditions. Therefore, the reality of Islam in Buton has, since its inception, appeared "accommodative", and consequently, in its history of propagation, Islam did not encounter many obstacles, let alone rejection.

In relation to kankilo as a concept of purification developing within the cultural space of Islamic Buton, it is safe to assume that before Islam exerted its influence in this region, the tradition or ritual of purification and self-purification had already been an important part of the Butonese people's way of life. When Islam began to exert its influence and brought along the concept of purification contained in the fiqh books on taharah, the concepts of istinja, wudhu, and janabat were subsequently conveyed to the community. However, in order for this concept of purification not to simply replace the purification concept of kankilo, several aspects of the kankilo tradition were incorporated into taharah within the framework of Islam. The success of early Islamic advocates in Buton in avoiding conflict and rejection from the local community was inseparable from their ability to translate local traditions into Islam and vice versa, resulting in a compatible relationship as described by Woodward in his depiction of Javanese Islam. 34
On the other hand, the skill of Buton Islamic advocates in transmitting Islamic ideas within Buton culture did not result in Islam being distorted or in Islam following local culture at the expense of its substantive aspects, as seen in Geertz's and Mulder's views on Javanese Islam. 35 The dialectic between Islam and local culture, as exemplified in the ritual purification practices previously explained, is a form of elegant accommodation. It is an accommodation of local culture into Islam in terms of orientation, without altering the substantive technicalities.

As a tradition living within the Butonese community, kankilo is deeply intertwined with the Butonese self-view as Muslims. 36 In this context, kankilo essentially communicates the Butonese people's ideas and concepts about purification, which are fundamentally derived from Islamic teachings but have been expanded to meet local Butonese needs. This expansion means that the purpose of purification is not merely to cleanse the body of hadath and najas. The symbolization of hadath and najas as something "dirty" that must be purified inevitably manifests in actions that are "clean", in the sense of virtuous behavior. Therefore, the practice of purification as a manifestation of the idea contained in the concept of kankilo has significant meaning for the Butonese people in their efforts to achieve purity of body and soul. 37 Technically and methodologically, some parts of the procedures for istinja, the janabat bath, and wudhu are not found in early Islamic traditions or described in hadith texts on related issues. It can be ascertained that some of the purification provisions in Buton society are local constructs. This illustrates the occurrence of interactions between cultures that meet one another.

The Roots of Paradigms and the Legitimacy of Fiqh Kankilo

Religion (including masail al-fiqhiyah/fiqh issues) and culture can mutually influence each other, as both are systems of values and symbols. Religion is a symbol representing the value of obedience to a supernatural power, while culture serves as values and symbols that guide human beings in their interactions with their environment.

[Footnote 34: Woodward, "The Slametan...", pp. 55-89. Footnote 35: Geertz, The Religion..., p. vii; Mulder, Agama... Footnote 36: One of the kankilo contents that has been passed down from generation to generation is the four guidelines for purification, namely: istinja, which is likened to purity in the realm of spirits; janabah bathing, which is likened to purity in the realm of mitsal; wudhu, which is likened to purity in the realm of ajsam; and faith placed in the heart, which is likened to purity in the human world. If these four are carried out, they will produce a pure person. Interview: La Ode Abu, 16-4-2020. Footnote 37: Anceaux, Wolio..., p. 66.]
Unlike culture, which is subject to change, religion is believed by most of its adherents to be "final" and unchanging. However, because the scope and operationalization of religious values and symbols occur within the flow of history, the position of religion is sometimes shifted by culture. This two-way interaction occurs because both religion and culture are historical realities. 38 The phenomenon of kankilo in the religious practice (e.g., taharah) of the Butonese people is an empirical fact that illustrates how the dialectic of religion and culture has evolved in the dynamics of historical currents, with each influencing the other. Given this historical reality, it is difficult to avoid the nuances of local values in religious concepts built within a community, as they are derived from the cultural processes of the society concerned. Thus, the phenomenon of locality in a religion (i.e., fiqh), as depicted in the understanding and practice of the Butonese people, is a common and natural phenomenon that can be found in any society, ethnic group, and religion.

The fiqh reality of kankilo, which combines Islamic sharia norms with cultural practices in the purity rituals of Buton, presents evidence of a blend between local culture and "ideal" Islam that is deeply rooted in the religious culture of the Butonese. This phenomenon, which presents a configuration of Islam and local culture in the religious traditions of the Butonese, remains a reality and a fact observable to this day. The system of purification from the kankilo perspective, which has developed, been taught, and been practiced in Buton, depicts an interaction between cultures that greet each other.

This study strongly maintains that the interaction between Islamic values and local traditions in the case of kankilo, which has produced a combination of values deeply rooted in Buton Islamic practices to this day, was a process carried out very selectively and with great care. Thus, the concept of purification in kankilo does not show any indication of "deviation" from the substantial matters in the tawqifi realm of worship, such as the ways of istinja, wudhu, and the janabat bath. All of these, from the kankilo perspective, are still carried out based on techniques and mechanisms that remain within the framework of standard sharia norms. Even where local variants are included, they do not violate the substantive aspects, but are merely additional accessories.

Kankilo, a religious practice within a framework of Islam and local culture, is a local phenomenon, and this locality is simultaneously global. This implies that the dimension of locality in the fiqh tradition of kankilo can be found everywhere and has been ongoing across time. This view is very reasonable. "... state of ritual impurity, performing wudhu, and praying (salat). These five elements are the indicators of purity within the kankilo practice in Buton society." 45
In this sense, kankilo is a body of knowledge about purification that exists within the socio-cultural system of the Butonese. Its scope is not limited to the physical act of cleansing oneself from impurity; it also includes the purification of the soul or spirit. As a ritual, kankilo is inherently considered a symbolic behavior, meaning that its significance lies beyond the action itself. A symbol is any behavior or object given meaning, 46 encompassing four aspects: physical, behavioral, linguistic, and conceptual. Within these symbols there is a meaning attached by the bearers of the culture. As a medium in which meaning is condensed, everything symbolic is inherently a guide for its users. 47 Similarly, kankilo serves as a guide to purification for the Butonese people.

As a tradition that lives within the Buton community, kankilo is deeply intertwined with the Butonese worldview. 48 In this context, kankilo essentially communicates the Butonese people's ideas and concepts about purification, encompassing both the physical body and the soul. The practice of purification, as embodied in the concept of kankilo, holds significant meaning for the Butonese in their pursuit of spiritual and bodily purity. 49 The concept of purification in kankilo, which has been a part of Butonese religious culture, has been passed down through the generations, inherited through oral tradition. Although there are books that specifically discuss the tradition of kankilo, they are now very difficult to find.

Substantially, kankilo is a purification ritual 50 that, in Islamic tradition, can be equated with taharah. In spite of some technical and methodological differences between the two, for the Butonese, kankilo is taharah, and taharah is kankilo. Kankilo, as a concept of taharah that lives within Buton culture, is an ijtihad (independent legal reasoning) of local ulema who successfully integrated Islam with local traditions. Thus, in principle, kankilo is taharah conceived from the construction of a local Muslim community, based also on the principles and norms contained in the Qur'an and Hadith. In other words, kankilo is a culture-based fiqh taharah (i.e., Islamic jurisprudence of purification).

Conclusion

Kankilo is a body of knowledge on purification deeply embedded in the socio-cultural system of the Butonese people. Its scope extends beyond mere physical cleansing from hadath and najas, encompassing spiritual purification as well. Although kankilo contains local elements, these are merely "accessories" and do not alter the fundamental aspects of the taharah (purification) system known in the Islamic fiqh tradition. This research has shown that kankilo is a product of the dialectic between Islam and local traditions, a process that has been elegant, selective, and cautious. Therefore, kankilo can be seen as a product of fiqh "thought" that occurs within a space of dialogue or interaction between cultures. Islamic norms remain in their portion and position, while customary norms are accommodated as variant accessories. Thus, in the context of Islamic law, kankilo can, to a certain extent, be categorized as 'urf.
As a tradition, kankilo is essentially taharah; for the Butonese, kankilo is taharah, and vice versa. In its practice, the substantial aspects of the purification system in kankilo refer to the primary normative roots of the Qur'an and Hadith. This leads to our conclusion that kankilo is equivalent to fiqh taharah. Therefore, the practice of purification known as taharah in the cultural narrative of the Butonese people can be called Fiqh Kankilo. Ultimately, we argue that "Fiqh Kankilo" is a product of fiqh thought devised in accordance with the local character of its society. The locality evident in the content of Fiqh Kankilo is inevitable, considering that the individuals and the culture in which a religious law grows are not a cultural vacuum. Hence, religion (i.e., fiqh) and culture are ultimately two domains that are always allied in forming and initiating a life together, and thus religion (i.e., fiqh) cannot avoid the locality of culture, which is relative and particular. The legitimacy of the existence of Fiqh Kankilo can, at least, be referred to the history of the development of Islamic legal thought, which provides room for the accommodation of traditions, based on the paradigmatic foundation of "adat al-muhakkamat".
2024-08-29T16:21:39.310Z
2024-07-31T00:00:00.000
{ "year": 2024, "sha1": "80ea2cd5b3b4a54536469dddb5bea85a37d7e13b", "oa_license": null, "oa_url": "https://doi.org/10.22373/sjhk.v8i2.21578", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "e46f40a2438b1553d916c8005388ef4a5ee9d454", "s2fieldsofstudy": [ "Law", "History", "Sociology" ], "extfieldsofstudy": [] }
269996670
pes2o/s2orc
v3-fos-license
Thermal Comfort of Nelore Cattle (Bos indicus) Managed in Silvopastoral and Traditional Systems Associated with Rumination in a Humid Tropical Environment in the Eastern Amazon, Brazil
Simple Summary
The objective of this study was to evaluate the thermal comfort of Nelore cattle (Bos indicus) managed in silvopastoral and traditional systems, associated with rumination behavior, in a humid tropical environment in the Eastern Amazon, Brazil. The thermoregulatory responses of 20 uncastrated male Nelore cattle in silvopastoral and traditional systems were evaluated from June to July 2023. Physiological variables were measured, including respiratory rate (RR), rectal temperature (RT) and body surface temperature (BST). The RR was higher in the traditional system, and RT showed significant variations over the collection periods. The black globe temperature and humidity index (BGHI) indicated mild to moderate stress. The silvopastoral system showed advantages in RR, RT and rumination behavior. The results suggest that air temperature (AT) significantly influenced RR and thermal comfort in both systems, and that the silvopastoral (SP) system offers more thermal comfort advantages than the traditional (TS) system.
Abstract
The objective of this study was to evaluate the thermal comfort of Nelore cattle (Bos indicus) managed in silvopastoral and traditional systems, associated with rumination behavior, in a humid tropical environment in the Eastern Amazon, Brazil. The study was carried out on a rural property in Mojuí dos Campos, Pará, Brazil, during the transition period of the year, from June to July 2023. Over these two months, six consecutive data collection days were held. We selected 20 clinically healthy, non-castrated male Nelore cattle, aged between 18 and 20 months, with an average weight of 250 kg and a body condition score of 3.5 (1–5). These animals were randomly divided into two groups: traditional system (TS) and silvopastoral system (SP). The physiological variables evaluated included RR, RT and BST. The variables were analyzed using a linear mixed model. For agrometeorological variables, higher values were observed between 10:00 a.m. (33 °C) and 6:00 p.m. (30 °C), with the highest temperature observed at 4:00 p.m. (40 °C). The RR showed interactions (p = 0.0214) between systems and times; in general, higher RR values were obtained in the traditional system. The animals' RT showed no significant difference (p > 0.05) between the production systems, but there was a statistically significant difference in relation to the time of collection (p < 0.0001). The BGHI indicated mild stress in the period from 10:00 p.m. to 6:00 a.m. and moderate stress in the period of greatest increase in temperature, from 10:00 a.m. to 6:00 p.m. BST showed no statistical difference between the body regions studied or between the SP (35.6 °C) and TS (36.25 °C) systems. RT in the TS showed a positive correlation with AT (r = 0.31507; p = 0.0477). RT in the SP showed a positive correlation with THI (r = 0.35583; p = 0.0242). In addition, RT in the SP (r = 0.42873; p = 0.0058) and in the TS (r = 0.51015; p = 0.0008) showed a positive correlation with BGHI. RR in the TS showed a positive correlation with BGHI (r = 0.44908; p = 0.0037). The greatest amounts of rumination were carried out by animals in the SP system, generally ruminating lying down (p < 0.05). With regard to rumination behavior in the morning and afternoon, there were higher numbers of standing (WS) and lying-down (LD) rumination records in the TS (p > 0.05).
Most of the time, the cattle were LD during the morning and afternoon shifts, and at night and dawn they were WS in the TS. Therefore, the SP system offers more thermal comfort advantages than the TS system.
Introduction
Animal welfare is defined as the mental and physical state of an animal in relation to the environment in which it lives and dies [1,2]. Therefore, a good degree of animal welfare demonstrates that an individual is healthy, safe, comfortable, well nourished and free to express the natural behaviors of its species without suffering from harmful psychological states such as frustration, pain and stress [3-5].
Animals have the ability to control their body temperature when exposed to wide temperature variations, with thermoregulation being the mechanism responsible for homeostasis: excess heat accumulated in the body is dissipated through perspiration and peripheral circulation, with increased respiration and panting and decreased feed intake to limit metabolic heat production [2-11].
Therefore, one of the most common factors capable of reducing animal welfare is adverse thermal conditions, i.e., very low or very high temperatures, whether for anatomical reasons or because of the environment in which the animals are raised [12-18]. The animal organism tends to prioritize homeostasis; however, when subjected to agents that trigger stress, animals respond through a combination of physiological, biochemical and behavioral reactions [19-21].
Thermal stress causes serious negative effects on the welfare of cattle, which can lead to major economic and large-scale production losses [22]. Furthermore, sweating, increased RR, increased water intake, vasodilation, reduced productivity and decreased milk production may occur, and under high degrees of stress an increased mortality rate may be seen [23-27].
In this context, infrared thermography has proven to be a non-invasive and accurate method capable of capturing the surface temperature of animals, identifying changes in surface temperature, helping to prevent animal stress and promoting animal welfare [2,17,18,28-35]. This technique can be used for production, companion and laboratory animals [31]. As for physiological mechanisms, the diameter of the blood vessels located near the body surface is considered one of the main mechanisms responsible for heat loss or gain, i.e., skin vasodilation provides greater heat exchange with the environment [32,33].
Furthermore, different mathematical indices can be used to measure the degree of stress in cattle, such as the Temperature and Humidity Index (THI), the Black Globe Humidity Index (BGHI) and the Benezra index [36-39].
The evaluation of several mathematical indices, including the THI, the BGHI and the Benezra index, makes it possible to understand and monitor the thermoregulatory responses of cattle. These indices provide quantitative measures to assess the degree of stress experienced by animals in response to environmental conditions [36]. The THI, for example, considers the combined effects of temperature and humidity, offering a comprehensive indicator of heat stress. Similarly, the BGHI takes into account the black globe temperature, which incorporates radiant heat, adding another layer of complexity to the assessment of environmental stress in livestock [39].
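Although the article does not print the formulas for these indices, a minimal sketch of how THI and BGHI are typically computed is given below, using common literature formulations (a Thom-type THI and the Buffington et al. black-globe form); the exact coefficients and the example readings are assumptions drawn from that literature, not from this study.

def thi(air_temp_c, rel_humidity_pct):
    # Temperature and Humidity Index, Thom-type formulation
    return 0.8 * air_temp_c + (rel_humidity_pct / 100.0) * (air_temp_c - 14.4) + 46.4

def bghi(black_globe_temp_c, dew_point_c):
    # Black Globe Humidity Index (Buffington et al. form): the black-globe
    # temperature replaces dry-bulb temperature, so radiant load is included
    return black_globe_temp_c + 0.36 * dew_point_c + 41.5

# Illustrative mid-afternoon reading, similar to the peak reported in this study
print(round(thi(33.0, 80.0), 1))   # ~87.7, an alert-level THI for cattle
print(round(bghi(40.0, 24.0), 1))  # ~90.1, within the commonly cited danger zone

Stress thresholds for both indices vary somewhat between sources, so the comments above should be read as indicative ranges rather than fixed cut-offs.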
Calculating these indices provides information on the thermoregulatory challenges faced by cattle in different environments. Understanding how temperature, humidity and other factors interact allows the development of specific strategies to mitigate heat stress, improving animal welfare and optimizing productivity [36].
Reducing heat stress in grazing animals involves applying various strategies to mitigate the adverse effects of high temperatures. It is essential to provide ample shade, as this allows animals to escape direct sunlight and reduces the ambient temperature. Adequate access to fresh, clean water is important to prevent dehydration, and water sources should be strategically placed throughout the pasture. In addition, promoting good air circulation, maintaining adequate spacing between animals and using natural windbreaks can help dissipate heat.
Ruminating behavior is characterized by chewing, regurgitation and remastication, and is also a process by which animals adapt to heat stress conditions. During heat stress, animals tend to ruminate more, since the production of saliva acts as an alternative means of dissipating the heat absorbed by the animal, avoiding hyperthermia [17].
In hot environments, however, extreme heat can lead to heat stress and reduce the frequency of rumination behavior, due to the overload of the animal's thermoregulation mechanism, causing it to use energy-intensive resources to try to dissipate the heat, consequently reducing the digestibility, performance and health of ruminants [30-32].
However, there are limitations to managing heat stress on pastures, including the inability to control the ambient temperature. In addition, the availability of shade and water can be compromised in extensive grazing systems. Monitoring weather forecasts and adjusting grazing schedules can help minimize heat stress, but the challenges of unpredictable environmental conditions remain a constant consideration when managing animals on pasture.
This study is justified by the need to understand the impact of silvopastoral and traditional systems on the physiological variables of Nelore cattle in the challenging Amazonian environment. The information acquired has the potential to guide the development of more efficient management practices, promoting animal welfare and productivity under the specific climatic conditions of the region. It should be noted that the traditional system, with animals exposed to the sun, is widely adopted in the Amazon, making this study fundamental for revealing how animals in this region behave under different shading conditions and providing information that will help cattle breeders.
In this scenario, evaluating the behavior of cattle, especially the act of ruminating while lying down or standing up, can indicate a zone of comfort or thermal stress, and is essential for deciding on the best place or system in which to raise the animals [17]. For all these reasons, the objective of this study was to evaluate the thermal comfort of Nelore cattle (Bos indicus) managed in silvopastoral and traditional systems, associated with rumination behavior, in a humid tropical environment in the Eastern Amazon, Brazil.
Ethical Aspects
This study was submitted to the Committee for Ethics in the Use of Animals (CEUA) and was approved under protocol CEUA-UNAMA 0001-87/2023, in May 2023.
Location
The study was carried out on a rural cattle farm located in Mojuí dos Campos, Pará, Brazil (Figure 1), in the transition period of the year (rainier to less rainy, June/July). The climate of the mesoregion is hot-humid (Am4), characterized by total rainfall of less than 60 mm in the least rainy month, annual rainfall between 1900 and 2100 mm, an average annual air temperature of 25.6 °C and relative humidity values ranging from 84 to 86% [40]. The rainiest quarter occurs between the months of February and April and the least rainy between the months of August and October [41].
Experimental Animals, Management and Characterization of the Production System
We used 20 clinically healthy, non-castrated male Nelore cattle (Bos indicus), with similar coloring, aged between 18 and 20 months, with an average weight of 250 ± 36 kg and a body condition score of 3.5 (scale from 1 to 5). The animals had white coats, which is common in Nelore cattle raised in the Amazon region.
In this study, only uncastrated males were used due to the common practice in the Amazon region of raising males for fattening, after which they are sent for slaughter. This choice reflects the local reality, where the rearing of males for meat production purposes is predominant. Focusing on non-castrated males allowed a more specific and representative analysis of the thermal comfort conditions faced by this segment in the region.
The cattle were randomly divided into two groups: traditional system, TS (n = 10), and silvopastoral system, SP (n = 10). The TS group was taken to a paddock without tree shade, with Brachiaria brizantha cv. Marandú pasture (Table 1) and access to drinking water and mineral salt ad libitum. The SP group remained in a paddock of the same size, with approximately 20% shade from Brazil nut trees (Bertholletia excelsa) and access to drinking water and mineral salt ad libitum. Both groups were supplemented in the dry season with sorghum and corn silage produced on the property (Table 1), in addition to concentrated feed at 0.1% of live weight (Table 1). The total experimental area was 10.2 ha of Brachiaria brizantha cv. Marandú divided into six 1.7 ha paddocks, two per treatment. The animals' adaptation period to handling was seven consecutive days, during which the animals were taken to the chute to collect data on the physiological variables evaluated during the experiment. After the adaptation period, the cattle remained in their respective systems. The production systems used were characterized as follows:
i. Traditional System (TS): no shade and no access to a bathing area. In this system, the animals graze without the presence of trees or other elements that can provide shade, and they do not have access to a bathing area.
ii. Silvopastoral System (SP): with shade and without access to a bathing area. In this system, the animals are kept on pasture with the presence of trees and other elements that can provide shade.
Agrometeorological Variables
The agrometeorological variables evaluated were air temperature (AT, °C), relative air humidity (RH, %), wind speed (WS, m s−1), dew point temperature (DPT, °C), wet bulb temperature (WBT, °C) and black globe temperature (BGT, °C). They were obtained using thermal sensors, measured every fifteen minutes during the experimental period.
Physiological Variables
Physiological data were collected at 6:00 a.m., 12:00 p.m., 6:00 p.m. and 12:00 a.m. during the transitional period of June and July.
On collection days, the cattle were led to the management corral and held for 30 min before activities commenced, ensuring minimal interference with the physiological variables. The animals were walked to the corral on foot and confined in brete-type chutes (Coimma® conventional mechanical chute, Dracena, São Paulo, Brazil) within sheltered areas to protect them from direct sunlight and rain. Handling occurred in groups of ten, with no fixed order of entry, preventing temporal influences on animal data within the designated time frame. The animals underwent a seven-day adaptation period before the experiment so that there would be no interference in the collection of physiological data.
The sampling times of 6:00 a.m., 12:00 p.m., 6:00 p.m. and 12:00 a.m. selected for this study were carefully chosen to cover the times of greatest and least intensity of heat and humidity throughout the day. This strategic approach is crucial when assessing animal comfort, as it allows significant variations in the thermal conditions faced by the animals to be captured. By considering daily extremes, the results obtained during the collections provide a comprehensive and representative view of the impact of environmental conditions on the thermoregulatory responses of Nelore cattle, contributing to a more complete analysis of their thermal welfare.
Respiratory Rate (RR)
The RR was obtained by inspecting and counting the thoracoabdominal movements for one minute [42], with the help of a digital stopwatch, in the transition period (June/July) at 6:00 a.m., 12:00 p.m., 6:00 p.m. and midnight. This assessment was carried out by a single, previously trained observer.
Rectal Temperature (RT)
The RT was obtained using a veterinary clinical thermometer (Model 5198.10, Incoterm®, São Paulo, Brazil), with a maximum scale of up to 44 °C, inserted transrectally into the animals, with the results expressed in degrees Celsius, as described by Dirksen et al. [43]. This evaluation was carried out by a single, previously trained observer.
Infrared Thermography
Infrared thermography was employed to diagnose thermal patterns within the environments of the two production systems. Data collection occurred on 1 July 2023, covering an area of 1.7 hectares per production system. This evaluation was carried out by a single, previously trained observer. Thermographic images were captured using a thermal imaging camera (FLIR T650sc, Wilsonville, OR, USA, 2015) between 12:00 and 15:00, a period characterized by the intense impact of solar radiation on the targets observed in the field research. To accurately reflect Thermal Regulation Index (TRI) fluctuations, images were acquired on the right side of the animals, minimizing interference from ruminal movements. The collected images were stored on a memory card and analyzed using FLIR Tools software (version 6.4), calculating average temperatures for each region with an emissivity set at 0.98. In the systems, thermograms were acquired at an approximate orthogonal distance of 5 m, outside the flight zone of the bull [18].
This camera has high precision, with a fixed 25 mm lens, a temperature range from −40 to 150 °C, thermal sensitivity of 50 mK (<0.05 °C at a room temperature of 30 °C) and a spectral coverage range from 0.7 to 100 µm; the imaged targets had a response between 0.7 and 3.0 µm and an optical resolution of 640 × 480 pixels with a maximum emissivity index of 0.95. Subsequently, the images were processed in the FLIR Tools computer program, v6.3 [44], with the Rainbow HC palette chosen. The images were acquired from four areas, the head region, armpit, flank and rump (Figure 2), according to the description of thermal windows in Silva et al. [18] and Mota-Rojas et al. [33].
Behavioral Assessment
The animals were assessed for their ruminating behavior, in which the animal chews, swallows, regurgitates and re-chews the food bolus, while standing or lying down. Rumination behavior was observed during the months of June and July. Records were taken every 5 min, with each observer pair responsible for two of the ten animals assessed, between the hours of 6 a.m. and 6 p.m. [17,46]. The animals were marked on their sides and on the croup with their respective numbers.
Behavior was observed visually and on the spot. A total of six trained observers were used, divided into pairs and replaced every 2 h to avoid fatigue. At the beginning of the experiment, the observers carried out an inter-observer test, i.e., they evaluated and recorded behaviors independently, isolated in the field, for a period of 5 h, thus assessing the accuracy of each evaluator. The inter-observer test was assessed by the Kappa coefficient, calculated using Microsoft Excel 2013 (Microsoft Corp., Redmond, WA, USA), and showed a reliability index of 92.5%, which was adequate, as described by Silva et al. [17].
Data Analysis
The RT and RR parameters collected individually from each animal were considered as independent variables, analyzed separately through a linear mixed model with a covariance structure for longitudinal data, using the following model:
Y = Xβ + Za + e,
where Y is the vector of animal observations; X is an incidence matrix associated with the vector β of the fixed effects of production system (traditional; silvopastoral) and collection time (6:00, 12:00, 18:00 and 00:00 h), as well as the interaction between production system and time; Z is an incidence matrix associated with the vector a of animal random effects; and e is a vector of residuals.
For the residual e, the variance is defined as
Var(e) = I ⊗ Σ,
of order np, where I is an identity matrix, n is the number of animals, p is the number of measurements taken on each animal (the number of times at which the animals were measured), ⊗ represents the Kronecker product, and Σ is the covariance structure between repeated measurements on the same animal at different times, tested according to the following structures: variance component (VC), first-order autoregressive (AR(1)), compound symmetry (CS), heterogeneous compound symmetry (CSH) and Toeplitz (Toep).
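The candidate covariance structures are ranked with the information criteria defined in the next paragraph. A minimal sketch of that ranking step follows; the log-likelihood values and parameter counts are illustrative placeholders, not the study's actual SAS output.

import math

def aic(log_lik, n_params):
    # Akaike information criterion: AIC = -2 log(L) + 2p
    return -2.0 * log_lik + 2.0 * n_params

def sb(log_lik, n_params, n_obs):
    # Schwarz Bayesian criterion: SB = -2 log(L) + p log(n)
    return -2.0 * log_lik + n_params * math.log(n_obs)

# Hypothetical fits: structure -> (log-likelihood, number of covariance parameters)
fits = {"VC": (-412.3, 2), "AR(1)": (-398.7, 3), "CS": (-405.1, 3),
        "CSH": (-401.9, 6), "Toep": (-400.2, 5)}
n_obs = 80  # 20 animals x 4 sampling times

for name, (ll, p) in fits.items():
    print(name, round(aic(ll, p), 1), round(sb(ll, p, n_obs), 1))
# The structure with the smallest AIC/SB is retained; with these placeholder
# values, as in the study itself, AR(1) would be selected.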
The choice of the model with the most appropriate residual structure was carried out according to the Akaike information criterion (AIC) [47] and the Schwarz Bayesian criterion (SB), defined as
AIC = −2 log(L) + 2p
and
SB = −2 log(L) + p log(n),
where p is the number of parameters to be estimated and n is the number of observations in the sample [48]. The structure of the covariance matrix between repeated measures that was most appropriate for analyzing RT and RR was the first-order autoregressive matrix. The software SAS OnDemand for Academics (version 9.4) [49] was used for statistical analysis with the MIXED procedure, and in all analyses the significance level was set at 0.05.
Results
There were changes in the weather variables over the different times of day during the months studied, with an oscillation in temperature (Figure 3); the highest values were observed between 10:00 a.m. (33 °C) and 6:00 p.m. (30 °C), with the highest temperature observed at 4:00 p.m. (40 °C).
In relation to RR, there were significant interactions (p = 0.0214) between systems and times; in general, higher RR results were obtained in the traditional system. The RT of the animals did not differ between production systems (Table 2); however, there were significant statistical differences in relation to the collection time (p < 0.0001), with a quadratic effect (Figure 4) represented by the equation ŷ = 37.8425 + 0.1801x − 0.0051x², with a coefficient of determination (R²) equal to 0.89. The curve is shown in Figure 4. BST showed no statistical difference between the body regions studied or between the SP (35.6 °C) and TS (36.25 °C) systems. From the THI, it was possible to observe mild stress in the period from 10:00 p.m. to 6:00 a.m. and moderate stress in the period of greatest temperature rise, from 10:00 a.m. to 6:00 p.m. It was observed that the animals were in a thermal comfort situation between 2:00 a.m. and 6:00 a.m. (Figure 4) and in a warning situation between 10:00 a.m. and 10:00 p.m., therefore signaling thermal stress, this being the time of highest temperature and humidity percentage. In relation to the BGHI, it was noted that from 2:00 a.m. to 6:00 a.m. there was no thermal stress in the animals, but from 10:00 a.m. to 6:00 p.m. stress was noted, as the danger zone was signaled.
RT in the TS showed a positive correlation with AT (r = 0.31507; p = 0.0477). RT in the SP showed a positive correlation with THI (r = 0.35583; p = 0.0242). In addition, RT in the SP (r = 0.42873; p = 0.0058) and in the TS (r = 0.51015; p = 0.0008) showed a positive correlation with BGHI. RR in the TS showed a positive correlation with BGHI (r = 0.44908; p = 0.0037) (Table 3).
With regard to RR, the standard error of the mean was 0.85 mpm (movements per minute) and there was an interaction effect between time of day and production system, with significant differences between production systems at 6:00 a.m. and 12:00 p.m.; at both times, the TS system had the highest RR averages (Table 4).
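As a quick check on the fitted quadratic for RT, the curve can be evaluated directly. The sketch below assumes x is the hour of the collection (as suggested by Figure 4); it locates the vertex of the parabola in the late afternoon, consistent with the warmest period reported above.

# RT curve reported above: y = 37.8425 + 0.1801x - 0.0051x^2, x = hour of day
a, b, c = -0.0051, 0.1801, 37.8425

def rt_hat(x):
    return a * x * x + b * x + c

x_peak = -b / (2 * a)  # vertex of the parabola: ~17.7, i.e., about 5:40 p.m.
print(round(x_peak, 1), round(rt_hat(x_peak), 2))

for hour in (6, 12, 18, 0):  # predicted RT at the study's sampling times
    print(hour, round(rt_hat(hour), 2))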
The greatest amount of rumination was carried out by animals in the SP system, generally ruminating lying down (p < 0.05) (Figure 5). Note: AT = air temperature; RH = relative humidity; THI = Temperature and Humidity Index; BGHI = Black Globe Humidity Index; RT = rectal temperature; RR = respiratory rate; BST = body surface temperature; SP = silvopastoral system; TS = traditional system. Correlations were considered positive or negative (significant) when the p-value was less than 0.05. With regard to rumination behavior in the morning and afternoon, there were higher numbers of WS and LD in the TS (p > 0.05). Most of the time, the cattle were LD during the morning and afternoon shifts, and at night and dawn they were WS in the TS (Figure 6).
Rumination behaviors varied according to the shift of the day and the production system, with lower numbers of rumination records in the morning and early morning shifts (Figure 7A,D) and higher time fractions of this behavior in the afternoon and evening shifts (Figure 7B,C). Rumination was more abundant in the SP system.
Discussion
The variation observed in the meteorological variables throughout the day, such as fluctuations in temperature and relative humidity, reflects the influence of the solar cycle and the typical climatic conditions of the region under study [17,30]. The increase in temperature during the period from 10:00 a.m. to 6:00 p.m. is consistent with the increase in direct solar radiation during the daylight hours, which culminates in its peak at 4:00 p.m. The high temperature observed at 16:00 (40 °C) reflects the increase in solar radiation and heat accumulation throughout the day, while relative humidity is higher during the night [18].
The fluctuation in relative humidity throughout the day can also be understood through the interaction between temperature and humidity. The lowest humidity values accompanied the hottest hours of the day, when the warmer air was furthest from saturation; relative humidity tended to increase after 7 p.m., as the falling ambient temperature brought the air closer to saturation [18,39].
The significant difference in RR between the silvopastoral system and the traditional system can be understood considering the impacts of the different environmental conditions: the presence of shade in the silvopastoral system, which moderates the respiratory behavior of the cattle, and the absence of shade in the traditional system, where solar radiation promotes greater breathing intensity in an attempt to dissipate endogenous heat [50-52]. Direct exposure to sunlight can also increase the thermal load on animals, especially when shaded areas are not available, as was evident in the traditional system [17,33,53,54].
The increase in RR in the traditional system may be associated with a series of factors. In conventional systems, in which cattle do not have access to adequate shade and are exposed to direct heat from the sun, thermal stress can be more pronounced. This can lead to an increase in the animals' body temperature, causing them to compensate for the excess heat by increasing RR [18,55,56].
The reduction in RR observed in the silvopastoral system compared to the traditional system can be attributed to the more favorable thermal comfort conditions provided by the silvopastoral system. In this system, the presence of trees offers shade and protection against direct solar radiation, which helps to reduce heat stress in cattle. The milder ambient temperature provided by the shade of the trees allows the animals to maintain a more stable body temperature, reducing the need to increase RR to regulate temperature [16,57,58].
The RT showed differences (p < 0.05) between the times. This can be explained because animals placed in environments classified as high THI have difficulty dissipating heat, as they are exposed to temperatures beyond their tolerance, resulting in thermal stress. Under these conditions, endogenous heat production exceeds the animal's cooling capacity, leading to a heat load that increases RT, resulting in indices above reference values [59-64].
The variation in THI throughout the day, as described, is related to the oscillations in temperature and relative humidity throughout the day. THI is a measure that combines air temperature and relative humidity to assess the potential heat stress to which animals are exposed. Higher THI values indicate a greater risk of heat stress [18].
The mild stress observed during the night and early morning (10:00 p.m. to 6:00 a.m.) can be explained by the combination of still-elevated temperatures and relatively high humidity during these hours. Higher relative humidity reduces the capacity of the air to cool animals through evaporation, increasing the risk of heat stress. Even though the air temperature is milder at night, high relative humidity can make the environment less favorable for heat dissipation by the animals [2].
The moderate thermal stress observed during the period of rising temperatures (9:30 a.m. to 7:00 p.m.) is related to the increase in air temperature and its interaction with relative humidity. As the temperature rises, relative humidity tends to decrease, increasing the cooling demand on the animals. This increase in temperature and reduction in relative humidity can result in an environment in which cattle have greater difficulty regulating their body temperature, leading to heat stress [18]. In addition, during this period, direct exposure to sunlight can result in excessive heat accumulation in the animals, raising body temperature and making thermoregulation more challenging [39,65,66].
The variation in thermal comfort and thermal stress levels throughout the day, as described, is closely related to climatic conditions and the animals' ability to adapt to fluctuations in temperature and humidity. The thermal comfort of animals is determined by the balance between body temperature and environmental conditions, while thermal stress occurs when this balance is impaired, compromising the animals' well-being [67-69].
The period of thermal comfort observed during the dawn and early morning (2:00 a.m. to 6:00 a.m.) is characterized by milder temperatures and relatively high humidity. These conditions allow cattle to maintain their body temperature within acceptable limits, since the ambient temperature is not excessively high and evaporation of moisture through the respiratory tract can still assist with cooling [65].
On the other hand, the period of thermal stress identified between 10:00 a.m. and 10:00 p.m.
is attributed to the increase in temperature combined with the higher percentage of humidity. Conditions with high temperature and high humidity can limit cattle's ability to dissipate heat efficiently, leading to heat stress. The lack of effective cooling mechanisms, such as profuse sweating, makes cattle particularly susceptible to these conditions [70-72].
As environmental temperatures increase, cattle need to implement physiological mechanisms to mitigate the increase in body temperature [73]. One of these mechanisms is the increase in RR, which promotes evaporative heat loss through the increased exchange of breathed air. This contributes to limiting the body's internal temperature, as evidenced by RT [74-76].
The THI and BGHI are measurements that combine AT with RH, providing a more comprehensive assessment of thermal conditions. When THI increases, it indicates an environment where air temperature is high relative to humidity, which can contribute to greater heat stress in animals [39].
The higher RR and RT in the TS can be attributed to various factors related to the environment and animal management. Firstly, the absence of trees leads to higher environmental temperatures, especially during periods of intense heat. The increase in AT can lead animals to tachypnea as a mechanism for dissipating excessive heat, resulting in a higher RR [77,78].
In this same scenario, due to the lack of shade, heat stress is observed in the animals, especially in regions with hot climates. Heat stress can cause a variety of physiological responses, such as a failure in thermoregulation or an increase in RR, which can be more pronounced in the TS, where options for cooling or shelter are limited, a fact observed in this study and confirmed by the THI and BGHI indices [39,79].
The effects of the interaction between time of day and rearing system play different roles in determining RR. The significant variations in average RR between the rearing systems at 6:00 a.m. and 12:00 p.m. indicate a dynamic response by the animals to the environmental and management circumstances throughout the day [80,81]. The higher RR and RT averages found in the TS system at these times represent a specific response of the animals to metabolic needs or to the environment during these periods. In this way, management strategies adjusted to the time of day and the rearing system can be studied in order to improve animal welfare and quality, leading to more effective practices in animal production [82,83].
Rumination, whether lying down or standing up, was affected by the time of day and the systems. The LD behavior observed most of the time in the SP animals can be explained by their management and the environment in which they were raised, since the availability of shade and shelter during the intense heat of the day provides more comfort for rest periods, leaving them more relaxed to perform their natural behaviors [84,85].
Rearing cattle in shaded environments offers stimuli due to the presence of trees, which also favors rumination, providing quieter areas that reduce animal stress caused by adverse climatic variables [86]. In addition, the quality and availability of forage are higher in the SP, which also influences rumination behavior, causing the animals to spend most of their time ruminating, contributing to better digestion and rest, and impacting well-being and consequently animal productivity [87,88].
Thus, it is worth noting that in the SP the animals remained LD during the morning and afternoon periods, while during the night and early morning periods the TS animals had a preference for WS behavior, suggesting an adaptation to periods of activity and rest. These behavioral patterns can be explained by various factors, such as the environmental conditions, the management in each system and the complex interactions between the environment and animal behavior [89-91].
Conclusions
RR differed between the systems and the times of day, with the highest values in the traditional system, signaling an attempt by the animals to adapt physiologically in this system; this was not noticeable in RT between the systems, only between the times of day. In addition, the THI and BGHI indices indicated a comfort zone in the early morning, specifically between 2:00 a.m. and 6:00 a.m., and stress between 10:00 a.m. and 6:00 p.m. The increase in RR and RT was related to the increase in AT. Associated with this, it was noted that the animals' LD rumination took place to a greater extent in the SP, where there was shade. During the shifts with the highest thermal radiation, i.e., morning and afternoon, the animals ruminated either LD or WS. At night and in the early hours of the morning, the cattle tended to ruminate WS, especially in the TS.
Therefore, the SP system offers advantages for the thermal comfort of the cattle, with better RR and RT indices and a lower stress index compared to the TS system, which is confirmed by the higher rumination index in the SP system.
It is recommended that producers adopt measures to mitigate heat stress in the TS. For the TS, where RR values were highest, it is advisable to provide adequate shading and to ensure good ventilation conditions and water availability. In addition, access to cooler areas during periods of rising temperatures can help minimize the impact of heat. It is also recommended to adopt targeted measures, such as adjustments to reproductive management periods and investments in infrastructure, to promote the thermal comfort of the animals in breeding practices, favoring productive and reproductive performance.
Figure 1. Location map of the study area.
Figure 3. Times and values of the relative humidity and air temperature climatic variables observed from 6:00 a.m. to 6:00 a.m. during the experimental period, between the months of June and July.
Figure 4. Temperature and Humidity Index (THI) and Black Globe Humidity Index (BGHI) at different times of the day, in the Eastern Amazon.
Figure 6. Relative frequencies (and percentages) for the behavior of animals ruminating lying down (LD) and standing up (WS), at different times and in each production system, with the associated chi-squared test (X²). LD = lying down ruminating; WS = ruminating while standing; TS = traditional system; SP = silvopastoral system.
Table 1. Chemical composition of forage and components present in the concentrate fed to male Nelore cattle managed in different production systems in the Eastern Amazon. Notes: Diets (n = 2 systems) fed to Nelore cattle raised in the two types of production systems. DM, dry matter; OM, organic matter; MM, mineral matter; CP, crude protein; EE, ether extract; NDF, neutral detergent fiber; ADF, acid detergent fiber. * This silage was supplied to all the systems.
Table 2. Breakdown of time depending on the production system.
Table 3. Positive and negative correlations between the variables AT, RH, THI and BGHI observed in all shifts according to the experimental treatment.
Table 4. Averages and standard deviations (SD) for RR at different times and in each production system. (1) Averages followed by different letters (a, b) in the same row differ by the F test at a significance level of 0.05. TS = traditional system; SP = silvopastoral system; SD = standard deviation.
2024-05-25T15:06:19.165Z
2024-05-23T00:00:00.000
{ "year": 2024, "sha1": "8c4f2d72125abb8cf0111342f093866fb2dd1b76", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2306-7381/11/6/236/pdf?version=1716472447", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bb03db13f91df2c3ca620650b2657dbf6f9db8a1", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
107469368
pes2o/s2orc
v3-fos-license
Removal of Acidic Dyes from Aqueous Media Using Citrullus Lanatus Peels: An Agrowaste-Based Adsorbent for Environmental Safety
In this work, removal of fluorescein and eosin dyes using a common agrowaste, i.e., peels of water melon (Citrullus lanatus) (WMP), has been studied in the batch mode. The sorbent material (WMP) was characterized by using scanning electron microscopy, Fourier transform infrared spectroscopy, thermogravimetric analysis, and elemental analysis. The sorbent was chemically modified by subjecting it to 0.1 N HNO3 and 0.1 N NaOH solutions. Different parameters such as sorbent dose, pH, temperature, and agitation speed were optimized to investigate the sorbent efficiency for fluorescein and eosin dyes. Among the three forms (raw, base-treated, and acid-treated), the base-treated form exhibited the highest removal efficiency, followed by the acid-treated and then the raw form. Generally, the range for the removal of fluorescein and eosin was found to be 48.06–88.08% and 48.47–79.31%, respectively. Mathematical modeling of the sorption data by the Langmuir and Freundlich sorption isotherms and thermodynamic investigations were carried out to check the suitability of these agrowaste materials on a bulk scale. The promising results lead to the conclusion that the peel of water melon (a common agrowaste) can be potentially utilized for the removal of toxins.
Introduction
For several years, the environment has been a victim of man-made activities, such as the burning of domestic garbage, discharge of domestic sludge and human waste, release of different types of smoke from burning fuels, and automobile exhaust. Moreover, chemicals are used as reactants, intermediates, or catalysts and are discharged as effluents into different water bodies. Subsequently, these escape into agricultural soils upon irrigation and ultimately reach the human body, as human beings are major consumers. Such transfer of chemicals from industrial units to the human body is a serious threat to quality of life. Hence, a green environment has become the keyword for assuring a better quality of life. Among the different chemicals used in industrial processes, one of the major classes is dyes. Dyes are used for coloring different materials in leather tanning, textiles, food, paints, and paintings. Textile industrial effluent is reported to be rich in dyes, and in this way, dyes are a major source of water pollution [1].
Pakistan is an agricultural country, and the production of fruits is an important part of its agriculture sector. The opted sorbent, i.e., water melon peel, is the agrowaste of water melon fruits. It is produced in great quantity during the summer season. During summer, the temperature rises up to 50 °C in several regions of the country, and hence, the consumption of these fruits also increases. This results in the accumulation of its peel in the form of agrowaste.
Different methods are available to remove toxins from water; however, adsorption, especially biosorption, has attracted great attention in the last few decades as a green technique. Moreover, the said method is highly economical and result-oriented [2-4]. Biosorption, using biomasses, involves physical or chemical binding. This phenomenon is cost-effective and has much lower health hazards in comparison to other techniques [2,5]. Following up on the utilization of these dyes in industries and the inexpensive availability of water melon peels, in this work, the adsorption of two widely used acid dyes, i.e., eosin and fluorescein, has been studied and optimized. They are commonly employed for staining biological specimens and in fluorescent materials. But their excessive discharge into wastewater leads to various health hazards and ecological disorders. So their removal in an ecofriendly way is explored here.
Collection and Preparation of Sorbents
Selection of water melon peel as the sorbent material was made on the basis of its abundant availability, its underestimated value, and the fact that it is eventually discarded as mere agrowaste. Peel of water melon was collected from discarded wastes at different fruit shops, present at different locations in Lahore city. After collection, the sorbent material was repeatedly washed with tap water to remove water-soluble impurities, dust, soil, etc., followed by extensive washing with deionized water and drying in sunlight until the removal of residual drops of water. Finally, the dried and cleaned sorbent samples were spread on filter paper sheets and placed under shade until completely dry. Then, the material was ground and sieved to achieve a homogeneous size. The homogenized material was stored in zipper bags at ambient conditions for further use.
Characterization
Characterization of the above collected sorbent was carried out by using CHN analysis (Exeter CE-440 Elemental Analyzer), scanning electron microscopy (JEOL model 2300), thermogravimetric analysis (PerkinElmer Diamond Series unit, USA), and an FTIR spectrophotometer (PerkinElmer 1600).
Pretreatments of Biomasses
Water melon peel (1.0 g) was chemically treated with 100 mL of 0.1 N NaOH and 0.1 N HNO3 separately, by shaking for 2 h. The chemically treated biomass was further washed with distilled water to remove any residual traces of acid/base. The pretreated sorbent was dried in an electric oven at 30 °C, ground (with a pestle and mortar), and stored in jars with air-tight lids.
Determination of Dye
Each of the two selected dyes, i.e., fluorescein/eosin, was dissolved (1.0 g each) first in a small volume of ethanol and then made up to the mark with 1000 mL of distilled water to make the concentration 1000 ppm. From this stock solution, standards were prepared having a concentration range of 10–50 ppm, and their absorbance was noted by using a CECIL CE-7200 spectrophotometer. The concentration of each dye in the solution, before and after the addition of each sorbent separately, was determined through the calibration curve to determine the removal efficiency of each sorbent material for the dye.
Optimization Study of Batch Sorption
For batch studies, different parameters affecting the efficiency of the sorption process, i.e., pH, sorbent dose, contact time, dye concentration, and temperature, were optimized separately for the native and chemically treated sorbent. To optimize the pH of the batch adsorption experiment, adsorption of dyes on each sorbent material was studied by varying pH from 1 to 7.
Higher pH was avoided due to precipitation of the dye molecules. During optimization, the concentration of each dye was kept fixed at 50 mg/L, the concentration of sorbent at 0.2 g/50 mL, the temperature at 30 °C, and the shaking time at 30 min at 100 rpm. The amount of sorbent was varied from 0.2 to 2 g/50 mL in order to find the optimized dose. During optimization of the amount of sorbent, the concentration of dye was fixed at 50 mg/L, the temperature at 30 °C, and the shaking speed at 100 rpm for 30 minutes. The optimum pH of 4 for each dye solution was kept constant. After adding the optimized sorbent dose (0.2 g/50 mL) to the aqueous solution of sorbate (dye, 50 mg/L) at pH 4, the sorbent–sorbate mixture was shaken at 30 °C and 100 rpm for 5 to 50 min. The effect of temperature on the removal efficiency of the sorbent material for the dyes was studied by varying the temperature over the range from 10 °C to 90 °C. All other affecting parameters were kept at their optimized levels.
Statistical Analysis
All the results are calculated as the mean of a set of experiments with standard deviations. Regression techniques were employed to find the coefficients of the thermodynamic, kinetic, and equilibrium models.
Results and Discussion
Selection of water melon peel was made on the basis of its abundant availability as an agrowaste material. Its peel is usually discarded after eating. These materials not only create sanitation problems but also contribute an offensive odour if left in the environment for some time. These materials are either dumped in some landfill instantly or incinerated. It is very valuable to use them for some useful purpose, i.e., adsorption, before discarding.
Before studying the sorption efficiency of this agrowaste-based sorbent, characterization of this material was carried out to determine its composition, especially the surface structure, as adsorption is a surface phenomenon and the nature of the sorbate–sorbent binding interaction mainly depends on the surface composition of the sorbent material.
Characterization of Sorbent
Proximate analysis of water melon peels (WMP) was carried out by the reported method [7]. The percentages of fiber, cellulose, hemicelluloses, and lignin were determined, and the results are presented in Table 1. The peel was found to be rich in fiber, cellulose, hemicellulose, and lignin. The percentages of C, H, and N in the raw sorbent were determined through an elemental analyzer, and the results are depicted in Table 1. Fortunately, the opted sorbent material exhibited an appreciable content of carbon, revealing its significant potential as a sorbent.
FTIR Spectroscopic Analysis
FTIR spectroscopy is considered the state-of-the-art technique for the characterization of any natural material or synthetic compound regarding the determination of functional groups. Functional groups present on the surface of WMP were determined by using the FTIR spectrometer (Figure 1).
Generally, the FTIR spectra reveal the presence of amines, alcohols, carboxylic acids, hydroxyl groups, phenols, alkanes, amino acids, alkyl halides, and aromatic compounds in the peel of water melon. A deep band showing strong absorbance can be seen around 3100 cm−1, which may be due to the presence of -NH or -OH groups. This band becomes deeper, i.e., shows stronger absorbance, upon treatment with 0.1 N NaOH, which may be interpreted as an increased concentration of hydroxyl groups upon treatment with base. The region from 1400 to 900 cm−1 was found to be rich in peaks, which seemed to be fused in the form of a complex matrix. The peaks at 1230.82 cm−1, 1049.71 cm−1, and 795.97 cm−1 represent phenol/tertiary alcohol, the C-O stretch, and primary amine, the C-N stretch, respectively [8,9].
Scanning Electron Microscopy
Scanning electron microscopic analysis of raw WMP was made to determine the surface texture and morphological characteristics (Figure 2). From the apparent view, irregular-shaped pores are present in the SEM graphs for WMP; however, they seem to be uniformly distributed over the entire, smooth surface. Due to the presence of irregular-shaped pores, WMP would have a strong binding ability.
Thermogravimetric Analysis
Thermogravimetric analysis is an effective tool to examine the pattern of chemical changes in the lignocellulosic matrix. WMP was subjected to it over a wide temperature range of 25–1000 °C (Figure 3). As shown in Figure 3, an initial decrease in weight was noted below 200 °C and may be attributed to the loss of light volatiles, mainly water. The second weight loss is mainly observed between 200 and 400 °C and may be related to the breakdown of lignin and cellulose. Above 500 °C, a continuous weight loss was noticed, which may be attributed to the slow decomposition of the remaining heavy components, which may consist of stable micronutrients like metal oxides [7,10].
Optimization of Parameters
Adsorption is a surface phenomenon and is based on the physicochemical interaction between the sorbate and sorbent materials. To achieve the maximum adsorption, parameters like sorbent dose, pH, time of contact, temperature, dye concentration, and agitation speed were optimized. To possibly enhance the removal efficiency of the sorbent, it was separately subjected to acid treatment and base treatment. Then, its removal efficiency in its raw form, acid-treated form, and base-treated form for the removal of dyes was investigated.
3.5.1. Sorbent Dose
The sorption efficiency of the opted sorbent (WMP) was studied against each of the opted sorbates, i.e., eosin and fluorescein, one by one, by varying the sorbent dose over the range of 0.2 to 2.0 g, in its raw form, acid-treated form, and base-treated form. As adsorption is an equilibrium-based phenomenon, maximum sorption is achieved at a certain sorbent dose, at which the sorbate equilibrates well between the aqueous solution and the sorbent surface. Larger amounts of adsorbent adsorb larger amounts of dye, but the decrease in adsorption after reaching a maximum value may be due to coagulation of the adsorbent particles. The sorbent dose of WMP in its raw form, acid-treated form, and base-treated form was optimized for the maximum removal of dyes from aqueous solutions.
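The percentage-removal figures reported in the following subsections come from spectrophotometric readings converted through the calibration curve described in the methods. A minimal sketch of that calculation is given below; the calibration standards and the absorbance reading are purely illustrative values, not the study's data.

import numpy as np

# Hypothetical calibration standards (10-50 ppm) and their absorbances
conc_std = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
abs_std = np.array([0.11, 0.22, 0.34, 0.45, 0.55])

# Linear Beer-Lambert-type calibration: A = m*C + b
slope, intercept = np.polyfit(conc_std, abs_std, 1)

def conc_from_abs(a):
    return (a - intercept) / slope

def percent_removal(c0, ce):
    return (c0 - ce) / c0 * 100.0

c0 = 50.0                   # initial dye concentration, mg/L
ce = conc_from_abs(0.12)    # equilibrium concentration after sorption
print(round(ce, 1), round(percent_removal(c0, ce), 1))  # ~10.7 mg/L, ~78.6%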
The maximum removal of fluorescein achieved was 78.2% by B-(WMP) at an optimized dose of 0.6 g/50 mL, compared with A-(WMP) and raw WMP. The optimized sorbent dose for the maximum removal efficiency of eosin was between 1.0 g/50 mL and 1.2 g/50 mL and was also exhibited by B-(WMP) (79.31%) (Table 2). Overall, the removal efficiency for both the explored dyes was quite appreciable with base-treated WMP, as shown in Figure 4 [11].
Contact Time
The contact time between the dye solution and the sorbent has been reported to be a very useful parameter in sorption studies. The contact time for the raw, acid-treated, and base-treated forms of WMP was, separately, varied over a range of 5–50 minutes, and the results are presented in Table 3. Generally, the base-treated form exhibited the highest removal efficiency (80.42%) for fluorescein, followed by the acid-treated (69.15%) and raw (55.67%) forms. The same trend was noted for the eosin dye (Table 3). This observation suggests an increase in pore size, pore area, and pore volume on the sorbent surface upon treatment with base in comparison to the raw sorbent, revealing the same trend, as clear from Figure 5, as was noted for the optimization of the sorbent dose.
Effect of Agitation Speed
Agitation speed has also been reported as an important parameter affecting the sorption of different sorbates. Sorbate materials are sorbed at a particular agitation speed, while at a certain higher speed they get desorbed. Agitation increases the surface area available to sorbates and distributes the sorbate effectively in solution. The agitation speed varies, depending on the nature of the sorbate and sorbent. To achieve the optimum agitation speed, a range of 75–200 rpm was chosen, and the results are presented in Table 4. Maximum removal (88.08%) was accomplished at an agitation speed of 125–150 rpm. For the base-treated sorbent form, equilibrium was found to be established at a relatively lower speed (Figure 6).
3.5.4. Temperature
Temperature significantly affects the magnitude and rate of adsorption. The sorbate–sorbent interaction was investigated by varying the temperature of the solution over a range of 283–363 K, and the results are presented in Table 5. For fluorescein, maximum removal was achieved with base-treated WMP at 323 K, while maximum eosin removal was achieved at 303 K, as shown in Figure 7.
3.5.5. Effect of pH
pH change strongly affects the processes of chelation, precipitation, and solubility. Solutions of the dyes were prepared in aqueous buffers having a pH range of 1–7. The trend of sorption remained the same as discussed above. Base-treated water melon peels showed more adsorption, as indicated in Figure 8, because of the protonation of adsorbent binding sites, which can interact more with acidic dyes like fluorescein and eosin under acidic conditions, as shown in Figure 9. Moreover, these dyes are more ionized, and their solubility is greater under acidic conditions. As both these factors help, overall, B-(WMP) exhibited the highest removal efficiencies of 86.33% and 79.41% for fluorescein and eosin, respectively (Table 6).
Isothermal Studies
An adsorption isotherm has a key role in surface chemistry and provides useful information regarding sorption under optimized conditions. In this study, the Langmuir and Freundlich models were employed to fit the equilibrium data. Among all the opted forms, raw as well as chemically treated, the base-treated form showed maximum sorption for the dyes, and hence, the isothermal studies were done on the base-treated form B-(WMP) by applying the optimized conditions of sorbent dose, contact time, pH, agitation speed, and temperature (Tables 7 and 8) [12].
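Before the fitted parameters are discussed, a brief sketch of how such isotherm fits can be produced is shown below, assuming the standard nonlinear Langmuir and Freundlich forms written out in the next paragraphs; the equilibrium data points are illustrative, not the study's measurements.

import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qm, kl):
    # qe = qm*KL*Ce / (1 + KL*Ce), monolayer adsorption
    return qm * kl * ce / (1.0 + kl * ce)

def freundlich(ce, kf, n):
    # qe = KF * Ce^(1/n), heterogeneous multilayer adsorption
    return kf * ce ** (1.0 / n)

ce = np.array([2.0, 5.0, 10.0, 20.0, 35.0])  # equilibrium concentration, mg/L
qe = np.array([3.1, 5.9, 8.6, 11.2, 12.8])   # equilibrium uptake, mg/g

(qm, kl), _ = curve_fit(langmuir, ce, qe, p0=[15.0, 0.1])
(kf, n), _ = curve_fit(freundlich, ce, qe, p0=[2.0, 2.0])

rl = 1.0 / (1.0 + kl * 50.0)  # separation factor at C0 = 50 mg/L
print(round(qm, 2), round(kl, 3), round(rl, 3))
print(round(kf, 2), round(n, 2))
# 0 < RL < 1 indicates favorable Langmuir adsorption, as found in this study.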
In the case of the Langmuir model, the isotherm correlation coefficient (R) values for B-(WMP) are near to unity. On the basis of the R values, it was assumed that the Langmuir isotherm could describe the sorption process well. Moreover, the maximum sorption capacity qm was highest for it [13].
According to the Langmuir adsorption isotherm, monolayer adsorption occurred. The Langmuir isotherm equation is represented by the following equation:
qe = (qm KL Ce)/(1 + KL Ce),
where qm is the monolayer (maximum) adsorption capacity (mg/g), Ce (mg L−1) is the concentration of dye at equilibrium, KL is the Langmuir constant, and RL is a separation factor. RL describes the nature of the adsorption process and is represented by the following equation:
RL = 1/(1 + KL C0).
If the value of RL is greater than 1, unfavorable adsorption occurs. If RL is between 0 and 1, then the isotherm is favorable. A value of RL = 1 indicates linear adsorption. In the present study, the value of RL is between 0 and 1.
Heterogeneous surface sorption is well explained on the basis of the Freundlich model, which favors a multilayered physisorption process. The Freundlich isotherm is represented as
qe = KF Ce^(1/n),
where qe (mg g−1) is the adsorption capacity of the adsorbent at equilibrium and Ce (mg L−1) is the equilibrium dye concentration. The adsorption capacity is described by KF, and n represents the adsorption intensity; both are Freundlich parameters. As presented in Table 8, the n value in this study is in the range of 0–6. Base-treated water melon peel B-(WMP) showed the highest values of KF for the dyes (Table 8).
3.6.2. Kinetic Studies
Kinetic models are quite useful for understanding the reaction rate, the mechanism, and its applicability on a commercial scale. In this regard, the experimental data of the time-dependent sorption of the dyes were fitted to the kinetic models, i.e., the pseudo-first-order and pseudo-second-order models. The coefficient (R²) values for the pseudo-first-order model were low, which indicates that this model did not fit the experimental data. In contrast to this observation, it was found that the second-order model provided promising fits for the sorption data; hence, this model was applied to the experimental data under optimized conditions. The values of the regression coefficient (R²) were in the range 0.957 to 0.962, near to unity (Table 9). The initial sorption rate (h) was determined from the k2 and qe values, which are higher for base-treated water melon peel B-(WMP), which means plenty of adsorption sites were available for the dyes, thus favoring good adsorption [14,15].
Thermodynamic Studies
Thermodynamic parameters are also very important in order to see the spontaneous or non-spontaneous nature of the process. Furthermore, these parameters elucidate the spontaneity, feasibility, and heat change of the sorption process. The thermodynamic properties of the sorption of the dyes in this study by the base-treated sorbent were evaluated by calculating the Gibbs free energy change (ΔG°), entropy change (ΔS°), and enthalpy change (ΔH°), and the results are shown in Table 10. It is well established in the literature that the range of ΔG° values for physisorption is between 20 and 80 kJ/mol, while for chemisorption it is between 80 and 400 kJ/mol [16,17].
Conclusions
It can be concluded from this study that the peel of Citrullus lanatus can be used effectively for the removal of fluorescein and eosin from the wastewater streams of the textile industry. The adsorption capacity of this agro-waste peel can be prominently enhanced by chemical modification; base-treated watermelon peels were found to possess greater sorption capacity. Kinetic studies favored pseudo-second-order kinetics, indicating rapid transfer of the dyes from solution. Thermodynamic studies revealed the spontaneous and exothermic nature of dye removal by this sorbent.

Figure 4: Comparative sorption efficiency of base-treated WMP at different doses for removing fluorescein and eosin dyes.
Figure 6: Comparative sorption efficiency of the base-treated WMP at different agitation speeds for removing fluorescein and eosin dyes.
Figure 7: Comparative sorption efficiency of base-treated WMP at different temperatures for removing fluorescein and eosin dyes.
Figure 8: Comparative sorption efficiency of the base-treated WMP at different pH values for removing fluorescein and eosin dyes.
Figure 9: Interaction of acidic dyes with the lignocellulosic material of Citrullus lanatus peels.
Table 1: Physicochemical analysis of sorbent materials.
Table 2: Sorption efficiency of the WMP sorbent at different doses.
Table 3: Sorption efficiency of the sorbent material (WMP) over a range of contact times.
Table 4: Sorption efficiency of the sorbent material (WMP) at different agitation speeds.
Table 5: Adsorption rate of the sorbent material (WMP) at varying temperatures.
Table 6: Effect of pH on the sorption efficiency of the sorbent (WMP).
Evaluation of Fermented Oat and Black Soldier Fly Larva as Food Ingredients in Senior Dog Diets

Simple Summary
Along with concerns about the shortage of future food resources, the problem of ensuring a stable supply of feed materials is emerging. The rapid growth of the pet food market is also increasing the demand for new food ingredients, requiring the evaluation of their safety and nutritional value. Recently, insects and fermented foods are among the materials that have entered the spotlight as potential future foods, and studies on their usefulness as food are being actively conducted. This study aimed to evaluate and verify the safety of fermented oat (Avena sativa) and black soldier fly larva (Hermetia illucens L.) when used in a dog food, as part of the effort toward discovering nutritionally excellent and functional food materials. Our results show that 10% fermented oat flour, 5% black soldier fly larva meal, or a combination thereof in the food did not negatively affect food intake, body weight, fecal status, skin condition, or hematological and biochemical parameters. Overall, our findings suggest that fermented oat and black soldier fly larva can be used as food ingredients for dogs.

Abstract
The aim of this study was to evaluate the suitability of fermented oat (FO) and black soldier fly larva (BSFL) as food ingredients for dogs. A total of 20 spayed female dogs were divided into four treatment groups, with 5 dogs per group. The four treatment groups consisted of a control group, a diet with 10% FO, one with 5% BSFL, and one with 10% FO and 5% BSFL, and each experimental food was fed for 12 weeks. The feeding of FO and/or BSFL did not affect the daily food intake, body weight, body condition score, fecal score, or skin condition of the dogs. In all the experimental groups, no significant differences in serum IgG, IL-10, or TNF-α levels were observed upon the feeding of FO and/or BSFL. Some hematological (white blood cell and basophil) and serum biochemical parameters (phosphorous, globulin, and alkaline phosphatase) showed significant differences with FO and/or BSFL feeding compared to the control group, but they remained within the normal reference range. No adverse clinical signs related to the parameters affected by FO and BSFL were observed. The feeding of BSFL for 12 weeks reduced the serum cholesterol level (p < 0.05) at the end of the experiment. Our findings suggest the suitability of FO and BSFL as food materials for dogs.

Introduction
Amid increasing global concerns over the shortage of future food resources [1,2], the rapid growth of the pet food market is raising concerns regarding whether a stable supply of raw materials for pet food can be maintained. In addition, hypercompetition in the pet food market has led to the indiscriminate use of new ingredients for which preliminary verification of safety and nutritional value is lacking [3]. In this regard, the need for research on the nutritional value, safety, and functionality of novel ingredients that can replace existing ingredients has been emphasized [4]. In the pet food industry, livestock products are mostly used as the protein source, but the demand for novel protein materials is increasing owing to competition with their use in human food and to concerns regarding the sustainability of livestock products. Moreover, the livestock products mainly used as protein sources in dog food are known to cause allergies.
Therefore, many efforts, such as using hydrolyzed proteins and finding alternative protein ingredients, have been made to reduce the allergic responses induced by exposure to these protein sources in dogs [5]. Many researchers are paying attention to the potential suitability of edible insects for human food as well as animal feed [6-8]. In particular, black soldier fly larva (BSFL; Hermetia illucens L.) has been reported to be a suitable insect species given its nutritional value, safety, and amenability to mass production [9,10]. Several studies have reported that BSFL meal can partially replace major protein sources (e.g., fish and soybean meal) in conventional diets for poultry [11,12], fish [13,14], and pigs [15]. In addition, some recent studies that evaluated the safety and physiological effects of a BSFL diet in companion dogs have reported positive results [16-18]. Nonetheless, more research is needed on the safety and feeding effects of using BSFL in pet food.

Recently, the pet food market has been trending toward grain-free, gluten-free, human-grade, natural, and organic pet food. However, scientific evidence that such foods are nutritionally superior or more beneficial to pets' health is lacking. Oats are known to provide nutrients such as proteins, unsaturated fatty acids, vitamins, and minerals, as well as arabinoxylan, β-glucan, and phenolic compounds, which have multiple functional and bioactive properties [19,20]. Although studies on the efficacy of oat feeding in dogs are very limited, one study reported that the intake of oat beta-glucan could improve the apparent total tract digestibility of macronutrients and was effective in reducing serum total cholesterol and low-density lipoproteins in adult dogs [21]. Meanwhile, although not studied in dogs, previous studies have suggested that oats, with their various biologically active substances, help to prevent diseases such as cardiovascular pathologies, colon cancer, type II diabetes, and obesity in humans [22-26]. Furthermore, attempts have been devoted to developing oat-based fermented foods using lactic acid bacteria to improve the nutritional value and functionality of oats [27-29], and some studies have confirmed the potential value of fermented oats as a functional food [30-32]. Despite the positive effects of oats and fermented oats (FO), no study has reported the effects of feeding FO to dogs. Therefore, this study was conducted to evaluate the safety and feeding effects of FO and BSFL in dogs.

BSFL and FO Preparation
Freeze-dried BSFL were supplied by the National Institute of Crop Science (Wanju, Republic of Korea). After hatching, the 5-day-old BSFL were reared on a corn and soybean meal-based feed (19% crude protein and 3150 ME kcal/kg) until 17 days of age, and were then washed and freeze-dried. FO was prepared as follows: whole-grain oats (Avena sativa) were ground with water and incubated for 8 h at 37 °C with a starter (1 × 10^8 CFU/mL Pediococcus pentosaceus, CBT SL4; 5 × 10^8 CFU/mL Bifidobacterium longum, KCTC 10630BP; 5 × 10^8 CFU/mL Lactobacillus plantarum, KCTC 1048). Thereafter, the supernatant was removed by centrifugation, and the precipitate was dried and used for the experimental diet.

Animals, Designs, Diets, and Housing
This experiment was conducted in accordance with the methods approved by the Animal Care and Use Committee of the National Institute of Animal Science (NIAS-2018-308).
Twenty spayed, 10.8 ± 0.04-year-old, small-breed dogs (seven Schnauzers, six Poodles, and seven Maltese; initial body weight (BW) 4.18 ± 0.32 kg; 9-scale body condition score (BCS) 4.2 ± 0.17) were used in this study. The dogs were randomly divided into four groups with 5 dogs per group: Group 1 was fed a rice and poultry meal-based diet (CON); Group 2 was fed a diet with 10% FO (FO); Group 3 was fed a diet with 5% BSFL (BSFL); and Group 4 was fed a diet with 10% FO and 5% BSFL (FO + BSFL). In this study, there were many variables, including three breeds of dog and four treatment foods, but the number of dogs assigned to each group was as small as five; thus, only female dogs were used to minimize other variables. All experimental diets were formulated to meet the nutritional requirements for adult dogs as suggested by the Association of American Feed Control Officials (AAFCO) [33] (Table 1). The inclusion rates of FO and BSFL were determined to be within the levels that could meet both isonitrogenous and isocaloric criteria relative to the control food. All ingredients for the experimental food were commercial products in powder form, except for the lard. All ingredients were mixed at 500 rpm for 10 min using a food paste mixer (Mixer, Sung-il, Seoul, Korea), and then liquid lard and water were added and the mixture was kneaded at 1500 rpm for 20 min. The mixed dough was subdivided into pieces 10 cm in diameter and steam-heated (100 °C or above) for 40 min using a steamer (Rice cake maker, Dahan, Seoul, Korea). Thereafter, the steamed dough was pelleted into a cylindrical shape with a diameter of 10 mm and a length of 15 mm using a noodle machine (Noodle maker, Yusung, Daegu, Korea). The pelleted food was dried in a dry oven (Shiniltech, Jeonju, Korea) at 70 °C for 1 h and stored at −20 °C until feeding. The experimental food was transferred from the freezer 1 day before feeding and allowed to reach room temperature before feeding. No palatants were used in the experimental food.

The chemical compositional parameters of the experimental foods were analyzed following standard Association of Official Analytical Chemists (AOAC, 2006) methods [34], including the moisture content (AOAC method 934.01), crude protein (CP, AOAC method 984.13), ether extract (EE, AOAC method 920.39), ash (AOAC method 942.05), crude fiber (CF, AOAC method 978.10), calcium (Ca, AOAC method 927.02), and phosphorus (P, AOAC method 965.17). The nitrogen-free extract (NFE) was calculated using the following equation: NFE (% DM) = 100 − (CP + CF + EE + Ash).

Each dog was housed in an individual room (1.7 m × 2.1 m) at a consistent room temperature (22-24 °C) and with consistent lighting (12 h light and 12 h dark cycle) for the study period. Food was provided twice per day in an amount estimated from the MER equation for each dog throughout the duration of the trial, and drinking water was provided ad libitum for 12 weeks. The MERs of the dogs were calculated using the AAFCO's MER calculation as follows: MER = 132 × metabolic body weight (mBW). Food intake was measured daily, and BW and BCS were measured weekly. The rate of change in body weight gain (BWG) was calculated as follows: rate of change of BWG = (final body weight)/(initial body weight) × 100. BCS was evaluated on a 9-point body condition score scale according to the criteria developed by Laflamme [35].
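The NFE, MER, and BWG formulas above translate directly into code. A minimal sketch follows; it assumes the conventional interpretation mBW = BW^0.75 for metabolic body weight (the paper states only MER = 132 × mBW), and the example weight is the study's mean initial body weight.

```python
# Minimal sketch of the diet calculations above. The 0.75 exponent for
# metabolic body weight is an assumption (the standard convention), as the
# paper does not spell it out.

def nitrogen_free_extract(cp: float, cf: float, ee: float, ash: float) -> float:
    """NFE (% dry matter) from crude protein, crude fiber, ether extract, ash."""
    return 100.0 - (cp + cf + ee + ash)

def mer_kcal_per_day(body_weight_kg: float) -> float:
    """Maintenance energy requirement: 132 x BW^0.75 kcal/day (assumed exponent)."""
    return 132.0 * body_weight_kg ** 0.75

def bwg_rate(initial_kg: float, final_kg: float) -> float:
    """Rate of change of BWG (%) = final / initial x 100, as defined in the text."""
    return final_kg / initial_kg * 100.0

# Example: a 4.18 kg dog (the study's mean initial body weight)
daily_kcal = mer_kcal_per_day(4.18)
print(f"MER = {daily_kcal:.0f} kcal/day, i.e., {daily_kcal / 2:.0f} kcal per meal (fed twice daily)")
```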
Fecal scores were evaluated daily throughout the test period on a 5-point fecal score scale (1 = dry to 5 = liquid feces) according to the Waltham Fecal Scoring System [36].

Sampling and Analysis
Blood samples were collected from the jugular vein after 12 h of fasting at the beginning and end of the experiment. The collected blood was immediately separated into EDTA vacutainer tubes (ref 367861, BD Vacutainer, NJ, USA) and serum vacutainer tubes (ref 367812, BD Vacutainer, NJ, USA). The whole blood in the EDTA vacutainer tubes was used for complete blood cell count (CBC) analysis immediately after collection. CBCs were measured using an automatic hematology analyzer (IDEXX Laboratories, Inc., Westbrook, ME, USA). Serum was obtained by centrifugation (2000× g, 10 min) of the blood in the serum vacutainer tubes and then stored frozen (−80 °C) until analysis. The serum biochemical parameters were analyzed using an automatic biochemical analyzer (Hitachi 7180; Hitachi High-Technologies Co., Tokyo, Japan). Serum canine tumor necrosis factor-alpha (TNF-α, SEKC-0033, Solarbio Co., Ltd., Beijing, China), interleukin-10 (IL-10, SEKC-0026, Solarbio Co., Ltd., Beijing, China), and immunoglobulin G (IgG, SEKC-0050, Solarbio Co., Ltd., Beijing, China) were quantified using enzyme-linked immunosorbent assay kits according to the manufacturer's instructions. The transepidermal water loss (TEWL), moisture, and oil content of the skin were measured to examine the skin condition of the groin, armpits, and ears at the end of the experiment (at 12 weeks) using a closed chamber-type instrument (GPSkin Barrier®, GPOWER Inc., Seoul, Korea). The hair in each skin area was removed with a dog hair clipper, without using a cleanser, 1 day before the measurement.

Statistical Analysis
All statistical analyses were performed using SPSS version 17.0 (SPSS Statistics, IL, USA, 2009). Data are presented as mean ± standard error (SE). Because all experimental groups were composed of three breeds, the significant differences among the control group (CON) and each test group (FO, BSFL, and FO + BSFL) were analyzed by univariate analysis with a general linear model (GLM), with the Tukey test as the post hoc analysis and the breed factor treated as a covariate. Changes in CBCs and serum biochemical parameters over time were analyzed using repeated-measures GLM. The associations between the experimental foods and BCS or fecal score were investigated using a nonparametric Chi-squared test. Differences were considered statistically significant when p < 0.05.
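For readers without SPSS, the covariate-adjusted comparison just described can be approximated as in the sketch below, using statsmodels' OLS formula interface for the GLM and its Tukey HSD implementation for the post hoc test. The toy data frame and the column names ("group", "breed", "wbc") are assumptions for illustration only, not the study's actual records.

```python
# A Python stand-in for the SPSS analysis described above: a GLM with breed
# as a covariate, followed by Tukey's post hoc test on group means.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical per-dog records (three dogs per group shown for brevity)
df = pd.DataFrame({
    "group": ["CON"] * 3 + ["FO"] * 3 + ["BSFL"] * 3 + ["FO_BSFL"] * 3,
    "breed": ["Schnauzer", "Poodle", "Maltese"] * 4,
    "wbc":   [7.2, 8.1, 7.5, 7.8, 8.4, 7.9, 10.9, 11.3, 10.6, 8.0, 8.6, 8.2],
})

# Univariate GLM: outcome ~ treatment group, with breed entered as a covariate
model = smf.ols("wbc ~ C(group) + C(breed)", data=df).fit()
print(model.summary())

# Tukey HSD post hoc comparisons among the four food groups (alpha = 0.05)
print(pairwise_tukeyhsd(df["wbc"], df["group"], alpha=0.05))
```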
Results
3.1. Food Intake, Body Parameters, and Fecal Score
Table 2 shows the daily food intake, BW, and BCS, for which no significant differences were found among the CON and treatment groups. In all groups, body weight was higher at the end of the experiment than at the beginning, but there was no significant difference among the control dogs and those fed FO and BSFL. The fecal scores of all the experimental groups were within the desirable range of 2.10 to 2.40, and the dog food with FO and BSFL did not affect the fecal scores (Figure 1a,b). The results are expressed as mean ± SE. CON, control group; FO, group with 10% fermented oat added to food; BSFL, group with 5% black soldier fly larva added to food; FO + BSFL, group with 10% fermented oat and 5% black soldier fly larva added to food. The p-value for the fecal score was 0.666.

Figure 2 shows the effects of feeding FO and BSFL on skin status. At the end of the experiment, the TEWL, moisture, and oil levels of the groin, armpits, back, and ears were measured, and each measured value is expressed as the average of the values determined at the four sites. The TEWL, moisture, and oil levels of each treatment group did not show significant differences compared to the control group (Figure 2a-c).

Hematological and Biochemical Parameters
The results for the hematological parameters are presented in Table 3. All hematological parameters analyzed in this study were within the normal reference range, and no significant differences in these parameters were observed upon the single or combined feeding of BSFL and FO among the experimental groups, except for white blood cells (WBCs) in the BSFL group.
At the end of the experiment, the BSFL group had a significantly higher WBC value than the CON group (p < 0.05). Basophils (BASO) were not affected by feeding FO or BSFL, but BASO in the BSFL group had increased significantly by the end of the experiment compared to the beginning (p < 0.05). None of the experimental groups showed significant effects of FO and BSFL on neutrophils (NEU), lymphocytes (LYM), monocytes (MONO), red blood cells (RBC), hemoglobin (HGB), or hematocrit (HCT) during the study period.

The results for the serum biochemical parameters are presented in Table 4. During the experimental period, no significant effect of feeding FO or BSFL was observed on serum glucose (GLU), creatinine (CREA), blood urea nitrogen (BUN), calcium (CA), alanine aminotransferase (ALT), gamma glutamyltransferase (GGT), or albumin/globulin (A/G) ratio values in any experimental group. However, the FO and FO + BSFL groups showed significantly lower alkaline phosphatase (ALKP) than the control group at the end of the experiment (p < 0.05). For the FO + BSFL group, we recorded significantly lower GLOB compared to the control group at the end of the experiment (p < 0.05). For the BSFL group, we recorded significantly lower phosphorous (PHOS) compared to the control group (p < 0.05). In addition, the BSFL group showed significantly lower total protein (T-PRO) and total cholesterol (T-CHO) at the end compared to the beginning of the experiment (p < 0.05).

Values are expressed as mean ± SE. WBC, white blood cell; NEU, neutrophils; LYM, lymphocytes; MONO, monocytes; EOS, eosinophils; BASO, basophils; RBC, red blood cells; HGB, hemoglobin; HCT, hematocrit. *, significant difference from the control in the same row (p < 0.05); #, significant difference between the initial and final values in the same column (p < 0.05). CON, control group; FO, group with 10% fermented oat added to food; BSFL, group with 5% black soldier fly larva added to food; FO + BSFL, group with 10% fermented oat and 5% black soldier fly larva added to food.

The values for all the parameters of the FO and FO + BSFL groups remained within the reference ranges throughout the experiment. However, the control group showed slightly higher GLOB values both at the beginning (3.82 ± 0.28 g/dL) and at the end of the experiment (4.12 ± 0.23 g/dL) compared to the reference range (1.6-3.6 g/dL). Although the BSFL group showed slightly higher GLOB (3.80 ± 0.17 g/dL) and T-BIL (0.40 ± 0.03 mg/dL) values compared to the reference ranges (GLOB, 1.6-3.6 g/dL; T-BIL, 0.1-0.3 mg/dL) at the beginning of the experiment, these values were within the normal ranges at the end of the experiment (Table 4).

Values are expressed as mean ± SE. GLU, glucose; CREA, creatinine; BUN, blood urea nitrogen; PHOS, phosphorous; CA, calcium; T-Pro, total protein; ALB, albumin; GLOB, globulin; A/G, albumin/globulin ratio; ALT, alanine aminotransferase; ALKP, alkaline phosphatase; GGT, gamma glutamyltransferase; T-BIL, total bilirubin; T-CHO, total cholesterol. *, significant difference from the control in the same row (p < 0.05); #, significant difference between initial and final values in the same column (p < 0.05). CON, control group; FO, group with 10% fermented oat added to food; BSFL, group with 5% black soldier fly larva added to food; FO + BSFL, group with 10% fermented oat and 5% black soldier fly larva added to food.
Figure 3 shows the effects of feeding FO and BSFL on changes in canine immunoglobulin G (IgG), interleukin 10 (IL-10), and tumor necrosis factor alpha (TNF-α) levels. Canine IgG ranged between 1.68 and 8.10 mg/mL, IL-10 ranged between 140.24 and 415.58 pg/mL, and TNF-α ranged between 1.20 and 6.12 pg/mL, and no statistically significant differences were found in any of the experimental groups compared to the control group. In addition, no significant changes were observed in the levels of IgG, IL-10, or TNF-α in any experimental group over the 12 weeks (initial vs. final; Figure 3).

Figure 3: Effect of dog food with FO and BSFL on IgG and inflammatory cytokines in dogs: (a) IgG, (b) IL-10, and (c) TNF-α. The results are expressed as mean ± SE. IgG, immunoglobulin G; IL-10, interleukin 10; TNF-α, tumor necrosis factor alpha. CON, control group; FO, group with 10% fermented oat added to food; BSFL, group with 5% black soldier fly larva added to food; FO + BSFL, group with 10% fermented oat and 5% black soldier fly larva added to food.

Feeding and Body Parameters
This study was performed to evaluate the suitability of BSFL and FO for inclusion in a dog diet. Many studies have reported the effects of feeding BSFL [10,11,14] to livestock and dogs; however, to the best of our knowledge, this study is the first to evaluate FO as a food ingredient for dogs. We confirmed that dietary supplementation with 5% BSFL, 10% FO, or 5% BSFL + 10% FO had no effect on food intake, body weight, or BCS in dogs. These results are consistent with those of Freel et al., who reported that the feeding of defatted BSFL meal (5%, 10%, or 20%) and BSFL oil (2.5% or 5%) for 4 weeks did not affect food intake or BW in adult beagle dogs [18]. In addition, although the focus was not fermented oat, Traughber et al. reported that feeding a diet containing 40% oats did not affect food intake or BW compared with dogs fed a diet containing 40% rice [37].
Moreover, in dogs fed a diet containing 25% barley, which has a composition similar to that of oats, there was no effect on daily intake, preference ratio, nutrient digestibility, or stool scores compared to a diet containing 25% rice [38]. When the basic diet was a vegetarian diet, supplementation with corn (20.1%), rye (20.1%), or fermented rye (60.4%) was found to affect food intake and fecal scores, but did not affect body weight in dogs [39]. Fermented food is known to be less palatable for dogs due to its distinctive smell and taste [40], but we did not find a significant effect of dietary supplementation with FO on food intake. In the study by Lee et al., chicken meat fermented with Pediococcus spp. had a lower palatability than non-fermented chicken meat; nevertheless, there were no changes in diet intake or body weight in their study, and their results were consistent with our findings [41]. The FO in this study was obtained by centrifugation. We infer that FO feeding had no effect on food intake because some of the offensive compounds affecting palatability may have been removed with the supernatant during centrifugation. Although we did not analyze the nutrient digestibility of the food used in this study, the lack of change in BW and BCS in dogs fed the experimental diets for 12 weeks, together with the fact that the MER was met, suggests that BSFL and FO have value as food materials that do not negatively affect nutrient availability.

Safety and Health Parameters
A food allergy is defined as a hypersensitivity response caused by an abnormal immune system response to a specific allergen in the food; about 1% of canines and felines have an allergic reaction to food [5]. Food allergy in dogs is mainly caused by protein sources derived from livestock products (such as chicken, beef, lamb, and egg) [5]. The main clinical signs of food allergy and hypersensitivity are dermatological disease (e.g., pruritus, erythema, papular eruptions, etc.) and an inflammatory response, accompanied by gastrointestinal symptoms (e.g., vomiting, diarrhea, frequent defecation, colitis, etc.) [5,42]. To evaluate the clinical signs of allergy to dietary BSFL and FO, we investigated the skin status (TEWL and the moisture and oil contents), fecal score, and immune-related parameters (IgG, IL-10, and TNF-α) in serum. TEWL is the amount of water lost through the epidermal layer of the skin. An increase in TEWL indicates an impairment of the skin barrier function [43], and Shimada et al. suggested that TEWL can be used as an indicator reflecting damaged functioning of the skin barrier in dogs [44]. In addition, the pathophysiological response of food allergy and/or hypersensitivity is caused by immune action against allergens that have passed through the intestinal mucosal barrier into the gut-associated lymphoid tissue. The immune response is induced by the interaction among immunocytes (e.g., mast cells, eosinophils, helper T cell 1, helper T cell 2, etc.), immunoglobulins (e.g., IgE, IgA, IgG, etc.),
and various cytokines (e.g., IL-4, IL-5, IL-10, IL-6, TGFβ-1, TNF-α, etc.) [45,46]. The results of this study show that there was no significant effect on the skin status, fecal score, or immune parameters (IgG, IL-10, and TNF-α) of dogs fed a diet containing 5% BSFL and 10% FO for 12 weeks. These results suggest that BSFL and FO, as food ingredients, pose a low allergenic risk to dogs. Fermentation is a representative process for producing health-beneficial foods and is known to enhance detoxification and suppress allergic reactions by reducing aflatoxins and producing antimicrobial factors [47]. Park et al. [48] reported that the addition of medicinal plants fermented by Enterococcus faecium to dog food results in antioxidant activities and improves the fecal microbiota, with a higher number of beneficial microorganisms in dogs. One study demonstrated that fermented soybean products enhance the activity of natural killer cells and increase TNF-α gene expression in antigen-stimulated PBMCs in dogs [49]. Although it was not performed using dogs, a previous study showed that the intake of fermented wheat bran in pigs increased the protein expression of IL-10 [50]. These studies reported different results from ours, in which IgG, IL-10, and TNF-α were not affected by the experimental diet.

The hematological and biochemical parameters of the dogs were analyzed to confirm the safety of BSFL and FO. All the hematological parameters were within the normal reference ranges in all the experimental groups. In this study, the WBC count at the end of the experiment was significantly higher in the BSFL group than in the CON group, and the level of BASO in the BSFL group increased significantly over time (p < 0.05). WBC and BASO are two of the indicators monitored for changes in pathological conditions accompanied by hypersensitivity and by allergic and inflammatory reactions [51]. Although our results showed that the WBC and BASO levels in the BSFL group were significantly different (vs. CON) and changed over time, this does not mean that BSFL caused pathological problems, because these levels remained within the normal range throughout the experimental period. Furthermore, this interpretation is supported by the results for the BW, skin status, fecal score, and immune-related parameters mentioned above. Additionally, these results are consistent with those of Kröger et al. and Freel et al., who reported that feeding dogs BSFL did not negatively affect their hematological parameters [17,18]. To the best of our knowledge, studies using fermented oats as raw materials for dog food appear limited. Gizzarelli et al. reported that dogs fed an oat-based diet showed no significant changes in hematological parameters (WBC, RBC, HGB, HCT, MCV, and MCH) compared to dogs fed a rice-based diet [52]. Although their study focused on oats, their results are consistent with those of our study using FO. Some biochemical parameters (PHOS, GLOB, and ALKP) showed significant changes between the CON and treatment groups at the end of the trial upon feeding with FO, BSFL, or their combination. However, most parameters were within the normal reference ranges, with GLOB and T-BIL being the exceptions.
Although the concentrations of serum GLOB and T-BIL were outside the normal reference ranges, the reference ranges presented in this study are simply the reference values provided by the analysis equipment (Hitachi 7180; Hitachi High-Technologies Co., Tokyo, Japan), and they are not absolute criteria for judging whether a dog is clinically and pathologically normal or abnormal. In addition, different standards for the reference ranges of serum biochemistry in animals are suggested by different research institutions, analysis equipment, and researchers, and it has been argued that different optimal normal ranges should be applied depending on the breed, age, and physiological state, among other factors [53-55]. When the reference ranges suggested by Fielder [53] and Dall'Aglio [54] were applied, the concentrations of serum GLOB and T-BIL in this study were within the normal ranges. Furthermore, the experimental dogs in this study were judged to be in normal health, under the diagnosis of a professional veterinarian, based on their hematological and biochemical parameters and their clinical health status at the time of trial initiation. Whalan suggested that, when utilizing clinical pathology data to evaluate animal health status, it is ideal to consider correlations with other parameters rather than just one anomalous parameter [51]. Other parameters associated with GLOB and T-BIL (T-PRO and albumin in the case of GLOB, and the CBC in the case of T-BIL) were within the normal range in this study. In addition, the pathological signs associated with these parameters (acute inflammatory disease and atopic dermatitis, etc., for GLOB, and inflammation, shock, and excessive hemolysis for T-BIL) were not observed. By comprehensively considering all the factors mentioned, we judged that all dogs used in this study had normal physiological conditions.

Several studies have reported the effects of BSFL and oat feeding on serum biochemical parameters. Freel et al. reported that the concentration of ALT in the serum of adult beagles was significantly increased by BSFL feeding for 28 days [18], and Lei et al. reported a linear increase in the ALB concentration depending on the amount of BSFL (0%, 1%, or 2%) [15]. Gizzarelli et al. reported that biochemical parameters (GLU, CREA, BUN, PHOS, CA, T-PRO, ALB, GLOB, ALT, GGT, and CHOL) were not significantly affected by feeding a diet containing oats to healthy adult dogs [52]. The inconsistency among the results of those studies and this study is likely caused by various factors, such as the concentration of the materials, the feeding period, the animal age, and the animal breed. Thus, further studies are needed to determine the effects of BSFL and FO on serum biochemical characteristics in dogs. Notably, the BSFL group showed a significant decrease in T-CHO over the 12 weeks, while the other groups showed slight, non-significant increases between the start and end of the experiment. One of the nutritional properties of BSFL is that, similar to coconut oil, it has a high content of lauric acid, a medium-chain fatty acid (C12). The BSFL used in this study contained about 26.9% fat (Table 1), and the lauric acid content among the total fatty acids was 32.8 g/100 g (data not shown).
Wood and Migicovsky reported that lauric acid reduces the incorporation of cholesterol into the liver, whereas unsaturated oils increase the total cholesterol in rat liver [56]. In addition, in a human feeding study, Hashim et al. observed that medium-chain triglycerides prepared with C6-C12 saturated fatty acids temporarily elevated and then reduced serum cholesterol [57]. Our finding that serum cholesterol was reduced in dogs fed a diet supplemented with BSFL is consistent with their results. However, other studies reported conflicting results, in which all of the saturated fatty acids (C8 to C16), including lauric acid, increased total cholesterol [58,59]. The effect of lauric acid on serum cholesterol remains controversial because it appears to be influenced by various factors, such as the duration of the experiment [60]. To resolve this, further studies on whether BSFL can reduce serum total cholesterol in dogs with hypercholesterolemia are required.

Conclusions
This study was conducted to evaluate the suitability of FO and BSFL as food materials for dogs. Overall, the feeding of 10% FO and 5% BSFL for 12 weeks did not affect food intake, body weight, or BCS, and did not have a negative effect on physiological and biochemical responses in dogs. Furthermore, the findings suggest that BSFL may have the ability to reduce serum total cholesterol in dogs. Further studies of the effects of BSFL on serum total cholesterol in dogs are required. Our results demonstrate the safety and potential functionality of FO and BSFL, and verify their suitability as food ingredients for dogs.
A preliminary study on the feasibility of community game-based respiratory muscle training for individuals with high cervical spinal cord injury levels: a novel approach

Background
Respiratory disorders result in rehospitalization and premature death of patients with cervical spinal cord injuries (CSCI). Community game-based respiratory muscle training (RMT) programs could reduce secondary complications.

Methods
We examined the feasibility and preliminary efficacy of RMT as a community-based exercise program. Among the 10 included participants (eight male and two female), four, one, one, and four reported C3, C4, C5, and C6 complete injuries, respectively (eight graded by American Spinal Injury Association impairment scale [ASIA] A and two by ASIA B). Their mean age was 43 ± 12.3 y. The time since injury was 10 ± 6.7 y. The participants completed an RMT program for 60 min/day, twice weekly, for 8 weeks. The participants were trained in the use of a newly developed game-based RMT device. The device provides consistent pressure for respiratory muscle strength and endurance training. Seven RMT devices were modified to allow 10 game-based RMT programs. Forced vital capacity (FVC), forced expiratory volume in 1 s (FEV1), peak expiratory flow (PEF), vital capacity (VC), inspiratory capacity (IC), inspiratory reserve volume (IRV), expiratory reserve volume (ERV), maximum inspiratory pressure (MIP), maximum expiratory pressure (MEP), and peak cough flow (PCF) were measured.

Results
There were improvements after RMT compared to pre-RMT in FVC (p = 0.027, 10.62%, 0.22 effect size [ES]), PEF (p = 0.006, 23.21%, 0.45 ES), VC (p = 0.002, 35.52%, 0.60 ES), IC (p = 0.001, 46.94%, 0.81 ES), IRV (p = 0.001, 90.53%, 1.22 ES), MIP (p = 0.002, 97.25%, 1.32 ES), MEP (p = 0.005, 141.12%, 1.07 ES), and PCF (p = 0.001, 35.60%, 0.74 ES). The participants reported a positive impact of the program.

Conclusions
Community game-based RMT for individuals with CSCI appears to be safe and feasible. Community exercise with RMT use may have a positive impact on respiratory measures for patients with CSCI, who are vulnerable to respiratory compromise.

Trial registration: KCT0005980.

Owing to advances in medical technology and care over the past decades, the associated increase in life expectancy has led to a rising population of patients with SCI [3]. Further, these patients may have an increased risk of acquiring potentially fatal secondary health conditions [4,5]. Dysfunctions owing to cervical SCI (CSCI) include complete or partial impairment of motor control and sensory function [6]. CSCI disrupts the respiratory function of inspiratory and expiratory muscles such as the diaphragm, intercostal muscles, accessory respiratory muscles, and abdominal muscles [7,8]. The reduction of respiratory function is considered the major short- and long-term cause of morbidity and mortality in SCI cases owing to the associated complications, such as atelectasis or pneumonia [9,10]. A common consequence of CSCI is the defective innervation of the inspiratory and expiratory muscles [11]. This defective innervation results in muscle dysfunction that contributes to changes in chest wall compliance, lung capacity, ventilatory efficiency, and maximum expiratory and inspiratory muscle pressures [11]. Particularly, in CSCI cases, there is impairment of the control exerted by the cervical spinal cord over the respiratory muscles located below the injury point [11].
The resulting paralysis of the respiratory muscles reduces the ability to cough and leads to the accumulation of airway secretions, thus causing various respiratory complications [12]. Furthermore, weakened respiratory muscles cannot sufficiently inflate the lung to its maximum volume, nor can they compress the lung to its minimum residual volume [13]. Prolonged insufficient thoracic expansion therefore results in the shortening and hardening of the thoracic tissue and in muscular fibrosis, which reduces the compliance of the thoracic cavity and promotes atelectasis; in turn, this results in lower compliance of the lungs [13,14]. Such factors may reduce coughing and sputum clearance abilities, which can seriously disturb respiratory hygiene [15]. In the initial stage of spinal shock, pulmonary function is reduced due to flaccid paralysis of all the muscles, including the respiratory muscles [16]. In patients with CSCI, recovery of respiratory function to the pre-SCI state is difficult. Individuals with CSCI may experience a vital capacity reduction of up to 50% and a functional residual capacity reduction of up to 75% [8]. As reduced respiratory function can restrict the daily life of individuals with CSCI through challenges such as dyspnea and difficulty clearing sputum [17], respiratory muscle training (RMT) seems essential to boost impaired pulmonary function and reduce respiratory complications [18,19]. RMT has demonstrated significant improvements in respiratory muscle strength and endurance, thereby ameliorating respiratory complications [16,17]. Although the implementation of RMT interventions is crucial to prevent respiratory complications following CSCI, participation in irregular RMT interventions may lead to obstructive pulmonary disease and worsening of respiratory failure [20]. In particular, as respiratory failure in individuals with CSCI increases the risk of respiratory complications, early implementation of an appropriate RMT intervention seems essential [16]. RMT interventions result in the improvement of respiratory function, effective coughing for the removal of secretions, and reduced secretions owing to autonomic dysfunction [15]. According to a Cochrane review [16], several studies have explored the mechanism of respiratory dysfunction and conducted various RMT interventions to improve respiratory function; 11 studies demonstrated that such interventions were safe and effective in improving the respiratory strength and coughing ability of patients with CSCI. Nevertheless, interventions to improve respiratory strength and coughing ability in patients with CSCI have usually been performed in a hospital setting through everyday activities, such as blowing out candles, blowing up balloons, blowing a ping-pong ball, and singing, without using specialized medical devices [16,21]. Although Berlowitz and Tamplin advise repetitive RMT interventions [16], the procedure is considered monotonous, and the level of improvement cannot be assessed during the intervention. Breathing training is thus inconvenient. Game-based RMT was developed to overcome these issues and provide engaging and more practical breathing training; this program incorporated the element of a "game" into RMT [22]. It was created to keep individuals with CSCI excited and engaged, thus enabling the easier performance of RMT within the community.
This program enables the continuous management of individuals with CSCI to prevent respiratory complications, along with the maximization of RMT. Our aim was to examine the feasibility and preliminary efficacy of game-based RMT on respiratory function and cough ability in individuals with CSCI.

Participants
This study was approved by the National Rehabilitation Hospital's Institutional Review Board (NRCIRB 2016-03-029), and all methods were carried out in accordance with the relevant guidelines and regulations. In this feasibility and preliminary study, participants were recruited on a voluntary basis from a rehabilitation sport (RS) class. The study was conducted following the principles of the Declaration of Helsinki. The study protocol was registered and assigned the number KCT0005980 (first registration 09/03/2021). Both verbal and written consent for study participation were obtained from each participant prior to the commencement of the study. All the participants were informed of the objectives, procedures, and potential risks or discomfort associated with study participation. The RS class was held at the Korea National Rehabilitation Institute Project (Seoul, South Korea), a community-based organization that provides exercise opportunities for people with disabilities. The RS class was specifically designed for patients with CSCI and included all such cases irrespective of their age, time since injury, and injury level. All the participants in the RS class were allowed to participate in this study (Fig. 1); volunteers for the study signed an informed consent form. Participants were included in the study if they had a CSCI with an American Spinal Injury Association (ASIA) impairment scale grade of A or B and were over 20 years of age. We excluded those who had any neurological condition other than SCI; could not complete a single repetition using the RMT device; had arthritis or a neuromuscular disease of the spine that could affect lung function; or had any condition limiting participation in exercise, including but not limited to orthopedic, cardiac, or pulmonary diseases. Consenting participants self-reported the level of injury, complete versus incomplete status, ASIA grade, age, and time since injury.

Rehabilitation sport class
The RS class was held at the Korea National Rehabilitation Institute Project twice per week for 8 consecutive weeks. The program consisted of a warm-up, a cool-down (accessory muscle stretching), the game-based RMT program, and a finishing exercise (breathing assistant muscle stretching).

Game-based respiratory muscle training
The participants were trained in the use of the developed game-based RMT device [22]. The game-based RMT device provides consistent pressure for respiratory muscle strength and endurance training, regardless of the breathing speed. The programs comprised 8 game-based RMT interventions for both inspiratory and expiratory muscle training. The program was conducted using 3 or 4 devices per training day. The details of the programs performed in this study are presented in Table 1. An exercise instructor and a physical therapist were trained in the administration of the RMT devices; the initial training level was set such that participants could complete 15 breaths/set without exhibiting symptoms of hyperventilation. The RMT program (60 min/session) was performed twice a week for 8 weeks using the developed RMT devices.
Each training day consisted of a warm-up (10 min) with stretching of the muscles around the neck, a cool-down (10 min), and game-based RMT (40 min). This training protocol has been reported to be feasible and effective in patients with CSCI [22]. The participants were provided weekly training diaries noting the training repetitions and set numbers, the perceived rate of exertion, and adverse responses. If a training diary was forgotten, the report was recorded by an exercise instructor. Each participant was assigned their own exercise instructor, and the resistance was advanced weekly.

Measures
The outcome measures were evaluated before and after the 8-week program. Before the evaluation, the participants were taught the test method. Respiratory function was measured using a digital respiratory function measuring device (Pony FX, COSMED, Rome, Italy) [22]. For accurate measurement, the examiner provided a sufficient explanation for the participant to comprehend the test, demonstrated the process, and then measured respiratory function [23]. In this study, the respiratory function measures of forced vital capacity (FVC), forced expiratory volume in 1 s (FEV1), peak expiratory flow (PEF), vital capacity (VC), inspiratory capacity (IC), inspiratory reserve volume (IRV), and expiratory reserve volume (ERV) were recorded. Respiratory muscle strength was evaluated by measuring the maximum inspiratory pressure (MIP) and the maximum expiratory pressure (MEP) using the Pony FX (COSMED) [16]. The peak cough flow (PCF), which measures the ability to produce an effective cough, was assessed using a PCF meter (PF100, Microlife Corp., Cambridge, UK). The participants were instructed to inhale to maximum capacity and then cough as strongly as possible [16]. Three measurements of each variable were obtained, and the mean of the three values was analyzed.

Data analysis
All the statistical analyses were performed with SPSS version 21.0 (IBM Corp., Armonk, NY). The means and standard deviations of each variable were obtained using descriptive statistics. Because the number of samples (n = 10) was < 30, normality was assessed using the Shapiro-Wilk test; as the data deviated from normality (p < 0.05), non-parametric tests were used. The Wilcoxon test was conducted to assess the differences between the pre- and post-exercise measurements of the participants' performance. In addition to this null hypothesis testing, the data were also assessed for clinical significance using an approach based on the magnitudes of change. We quantified the magnitude of the differences with the effect size (ES) [24]. We considered an ES of 0.00-0.19, 0.20-0.49, 0.50-0.79, and ≥ 0.80 as trivial, small, moderate, and high, respectively [24].

Demographics
In total, 10 participants (eight male and two female) with CSCI consented to participate, completed the training, and were included in the analysis. All participants had complete cervical-level injuries (Table 2). Their age was 43.6 ± 12.3 years (mean ± standard deviation), and the time since injury onset was 10.3 ± 6.7 years (Table 2). Pre- and post-measures were obtained for all participants who completed the training.

Outcome measures
The mean differences across participants demonstrated an overall improvement in all the outcome measures (Table 3).

Safety, feasibility, adverse events
There were no study-related adverse events. Respiratory variables were examined for the 10 participants (Table 3).
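To make the analysis pipeline above concrete, here is a minimal Python sketch of the pre/post comparison: a Shapiro-Wilk normality check, the Wilcoxon signed-rank test, and a standardized-mean-difference effect size (one common choice; the paper does not specify its exact ES formula) classified with the magnitude bands cited above. The FVC values are illustrative stand-ins, not the study's raw data, and SciPy replaces SPSS here.

```python
# Sketch of the pre/post analysis described in the Data analysis subsection.
import numpy as np
from scipy.stats import shapiro, wilcoxon

pre  = np.array([1.8, 2.1, 1.6, 2.4, 1.9, 2.0, 1.7, 2.2, 1.5, 2.3])  # e.g., FVC (L), illustrative
post = np.array([2.0, 2.3, 1.8, 2.6, 2.1, 2.3, 1.9, 2.4, 1.7, 2.5])

# Normality of the paired differences; p < 0.05 here justifies a non-parametric test
_, p_norm = shapiro(post - pre)

# Wilcoxon signed-rank test on the paired pre/post measurements
stat, p = wilcoxon(pre, post)

# Effect size as a standardized mean difference (pooled SD), then classified
diff = post.mean() - pre.mean()
pooled_sd = np.sqrt((pre.std(ddof=1) ** 2 + post.std(ddof=1) ** 2) / 2)
es = abs(diff) / pooled_sd
band = ("trivial" if es < 0.20 else "small" if es < 0.50
        else "moderate" if es < 0.80 else "high")

print(f"Shapiro-Wilk p={p_norm:.3f}; Wilcoxon p={p:.3f}; ES={es:.2f} ({band})")
```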
Discussion
In this study, we found that game-based RMT significantly improved respiratory outcomes in patients with CSCI. This is the first study to demonstrate how a novel game-based RMT approach within a community exercise program could be an encouraging intervention strategy for patients with CSCI. A functional, effective method of RMT is required to support the repetitive, intensive training warranted for the respiratory rehabilitation of patients with CSCI; additionally, RMT should readily attract participants' interest so that they remain actively engaged throughout the lengthy rehabilitation. Patients with CSCI experience weakened respiratory muscles, leading to an ineffective cough or reduced sputum removal capacity [16]. Furthermore, secretions accumulate in the airway owing to dysphagia and impaired inspiration, leading to various respiratory complications, such as pneumonia and atelectasis [16]. Thus, RMT interventions based on an accurate diagnosis of functional capacity and condition, prognosis, and severity in patients with CSCI are vital [25,26].

The effects of RMT on overall respiratory function were quantitatively examined by comparing the pre-exercise and post-exercise values. Interestingly, most outcome measures improved significantly, except for FEV1 and ERV. The program was conducted in two 60-min sessions per week for 8 weeks. To examine the effectiveness of RMT, the participants' respiratory function, respiratory muscle strength, and cough ability were evaluated before and after the intervention. The findings revealed significant improvements in the FVC, PEF, VC, IC, IRV, MIP, MEP, and PCF over the 8-week RMT intervention period.

After training, both the FVC and FEV1 increased. The pulmonary function test (FVC and FEV1) is the simplest and most comprehensive respiratory functional assessment for the diagnosis and evaluation of airway diseases. The FVC and FEV1 measure forceful expiration following maximum inspiration and forceful expiration in one second, respectively [27]. Pulmonary function testing is used to measure the change and improvement in respiratory function in individuals with CSCI. The observed significant improvement in FVC was consistent with the results of a previous study [19] that reported a similar improvement in the FVC of patients with SCI after 8 weeks of RMT training. However, this study used a combination of the developed game-based RMT interventions instead of traditional RMT. In addition, this study included patients with CSCI (ASIA A or B), a population that is difficult to recruit. The game-based RMT was used to strengthen the muscles involved in inhalation and exhalation, which resulted in an increase in the FVC and FEV1. The observed significant improvements suggest that game-based RMT interventions are effective in increasing the FVC in patients with CSCI. After RMT completion, the PEF, VC, IC, and IRV were also significantly improved.

Table 3: Summary of training effects for respiratory function test variables obtained before and after the intervention. Values are presented as means ± standard deviations.
The p value was derived from a paired t-test of the results before and after the intervention. CI, confidence interval; FVC, forced vital capacity; FEV1, forced expiratory volume in one second; PEF, peak expiratory flow; VC, vital capacity; IC, inspiratory capacity; IRV, inspiratory reserve volume; ERV, expiratory reserve volume; MIP, maximum inspiratory pressure; MEP, maximum expiratory pressure; PCF, peak cough flow; MBorg, Modified Borg scale; SD, standard deviation. *p < 0.05; **p < 0.01. #, small effect size; ##, moderate effect size.

These results were similar to those of previous studies that reported significant improvements in the PEF, VC, and IC after RMT in patients with SCI [10,16,28]. These results have been attributed to changes in the chest wall characteristics [29], such as the recovery of diaphragmatic function [9,30], an improved ability of the accessory respiratory muscles, and increased stability and adaptability of the thoracic cavity [31]. Moreover, VC and expiratory flow decrease after CSCI, which can cause severe respiratory failure [32]; therefore, patients with CSCI should be managed with a steady RMT protocol to restore respiratory muscle function.

Evaluation of respiratory muscle strength is used to determine respiratory failure [33] and to evaluate changes and improvement in coughing capacity in patients with SCI. Furthermore, the maximum cough flow is produced by an increase in the abdominal and chest pressure generated by the contraction of the internal intercostal and abdominal muscles [34]. This evaluation is used as a measure of the degree of, and change in, the respiratory muscle strength and coughing capacity of patients with SCI [35]. Following 8 weeks of game-based RMT, improvements were observed in the MIP and MEP. Providing game-based RMT to patients with CSCI produced significant improvements in the MIP and MEP. Our findings were consistent with those of a previous study [10], in which significant improvements in the MIP and MEP were observed following 6 weeks of RMT among patients with spinal injuries (C4-C7, T1). These findings highlight the possibility that the game-based RMT helped strengthen the muscles involved in inspiration and expiration, thus improving the MIP and MEP. The improvements in the MIP and MEP may be attributed to the hyperventilation that occurred due to the participants' efforts to win, a part of the RMT protocol, which might have helped activate and strengthen the respiratory muscles of the participants [22]. Thus, the RMT intervention program proposed in this study could be used to effectively improve the MIP and MEP.

Furthermore, providing game-based RMT for 8 weeks to patients with CSCI resulted in a significant improvement in the PCF. This finding was similar to that of a previous study that reported a significant improvement in the PCF following 4 weeks of RMT among patients with CSCI [36]. However, that study used activation of the abdominal muscles via functional electrical stimulation with assisted RMT to improve the PCF. Coughing is an important protective mechanism for expelling secretions in an effort to prevent respiratory complications, such as atelectasis and pneumonia [37]. For a cough to be effective, the three stages of coughing (inhalation-compression-exhalation) should function normally [38].
Nevertheless, when the spinal cord is damaged, the coughing mechanism can become abnormal, requiring assisted coughing to adequately excrete the secretions settling in the airway [15]. Intercostal and abdominal muscle paralysis makes expiratory muscle contraction following inhalation challenging [15]; with expiration driven largely by passive recoil of the expanded lungs and chest wall rather than by expiratory muscle contraction, it is more difficult for patients to cough effectively [39,40]. Therefore, game-based RMT may be an effective intervention for improving expiratory function and cough ability in patients with CSCI. Conventional RMTs, such as diaphragmatic breathing, isocapnic hyperpnea training, air stacking exercise, pursed-lip breathing, and air-shifting, are repetitive, hospital-based interventions, which can result in the loss of patients' interest and abandonment of the rehabilitation process [21]. The interventions examined in a previous study [21] were mundane and did not demonstrate observable improvements, which made the implementation of RMT challenging. In this study, game-based voluntary hyperventilation was encouraged during RMT; the intervention was developed to alleviate these challenges and provide a more interesting and effective RMT procedure. Moreover, combining a game with RMT offers several advantages over conventional RMT protocols. First, game-based RMT can induce competition among participants, thus promoting more sustained participation in the program. Second, it provides an interactive environment for participants with the same condition. Third, the effectiveness of the RMT is likely to be higher because of the increased interest and participation rate, which can ultimately improve the quality of life of patients with CSCI.

Limitations

Since this was a preliminary pilot study, some limitations should be acknowledged. First, it used a single-group pre-post design with no control group against which to compare the exercise effects; future studies should examine the effects of RMT using random allocation of participants and a control group. Second, the study had a small sample size and the participants were mostly male; thus, the results may not be generalizable to all patients with CSCI. Further studies should include more women to examine sex differences in the results of the respiratory function tests performed in patients with CSCI. Third, it is necessary to examine the effects of age, smoking status, location of injury (e.g., C4-C7), and onset of injury on the participants' respiratory function. Fourth, because all the participants took part in various programs, the results cannot establish whether the improvements resulted from the exercise programs performed. Further studies with larger numbers of individuals with CSCI, considering various injury characteristics (i.e., level, severity, and duration) and clinical information (i.e., smoking, tracheostomy, and use of a ventilator), are warranted to identify the factors contributing to pulmonary function improvement and, ultimately, to obtain favorable rehabilitative outcomes.

Conclusions

To our knowledge, this is the first feasibility and preliminary study to suggest the use of RMT in combination with a community exercise program for patients with CSCI. Overall, the participants demonstrated improvement in all the respiratory outcomes, although not every change reached statistical significance.
With increased education and wider availability of these types of programs, compliance with an RMT program may increase. Finally, we judge that safety verification of the mechanical components should be carried out before the developed RMT devices can be generalized to broader use.
Broca's Area as a Pre-articulatory Phonetic Encoder: Gating the Motor Program

The exact nature of the role of Broca's area in the control of speech, and whether it is exerted at the cognitive or at the motor level, is still debated. Intraoperative evidence of a lack of motor responses to direct electrical stimulation (DES) of Broca's area, and the observation that its stimulation induces a "speech arrest" without an apparent effect on the ongoing activity of phono-articulatory muscles, fuel the argument. Essentially, attributing direct involvement in motor control of speech to Broca's area requires evidence of a functional connection of this area with the phono-articulatory muscles' motoneurons. With a quantitative approach we investigated, in 20 patients undergoing surgery for brain tumors, whether DES delivered on Broca's area affects the recruitment of the phono-articulatory muscles' motor units. The electromyography (EMG) of the muscles active during two speech tasks (object picture naming and counting) was recorded during, and in the absence of, DES on Broca's area. Offline, the EMG of each muscle was analyzed in the frequency (power spectrum, PS) and time (root mean square, RMS) domains, and the two conditions were compared. Results show that DES on Broca's area induces an intensity-dependent "speech arrest." The intensity of DES needed to induce "speech arrest" when applied on Broca's area was higher than the intensity effective on the neighboring pre-motor/motor cortices. Notably, the PS and RMS measured on the EMG recorded during "speech arrest" were superimposable on those recorded at baseline. Partial interruptions of speech were not observed. Speech arrest was an "all-or-none" effect: muscle activation started only upon removal of DES, as if DES prevented speech onset. The same effect was observed when directly stimulating the subcortical fibers running below Broca's area. The intraoperative data point to Broca's area as a functional gate authorizing the phonetic translation to be executed by the motor areas. Given the absence of a direct effect on motor unit recruitment, direct control of the phono-articulatory apparatus by Broca's area seems unlikely. Moreover, the strict correlation between DES intensity and speech prevention might attribute this effect to the inactivation of the subcortical fibers rather than to Broca's cortical neurons.

INTRODUCTION

Speech represents the unique human ability to translate thoughts and feelings into articulate sounds. This neural function has historically been attributed to a complex network involving, as essential components, the pars opercularis and triangularis (Brodmann areas BA44 and BA45, respectively) of the posterior Inferior Frontal Gyrus (IFG). Broca (1861) reported a lesion of this area as the distinguishing feature in the brains of patients affected by permanent "speech loss," and BA44-45, now called Broca's area, was therefore consecrated as an essential hub in the neural control of speech production (Berker et al., 1986; Amunts et al., 1999). The "speech loss" in Broca's patients, described as the inability to articulate language (Berker et al., 1986), was termed production (or Broca's) aphasia. The term "production" aphasia and its direct identification with a deficit in the executive articulation of speech sounds led to the logical but unsubstantiated conclusion that Broca's area has a crucial role in the motor control of speech.
As of today, there is, however, no experimental evidence showing that this area has a direct function in motor control, i.e., that a motor output measurable in specific muscles occurs when this region is stimulated, a distinctive characteristic of all cortical motor areas. In the last decades, different studies have raised some criticism of the supposed motor function of Broca's area (Flinker et al., 2015; Duffau, 2017; Rao et al., 2017), supported by two observations: pure lesions of Broca's area do not result, as would be expected, in production aphasia, but rather in a transitory, rapidly improving mutism (Mohr et al., 1978); moreover, the detailed analysis of the preserved brains of Broca's patients clearly shows that the lesion was not confined to BA44-45 but involved other structures beyond the frontal operculum (Dronkers et al., 2007), challenging the univocal correlation between production aphasia and a lesion of Broca's area. The issue remains unresolved, owing to the lack of appropriate experimental tools to study this area in humans under ecological conditions. In the last two decades, the introduction of the intraoperative brain mapping technique for the surgical removal of brain tumors with Direct Electrical Stimulation (DES) has allowed the direct investigation of the functional properties of Broca's area in ecological conditions. The brain mapping technique relies on the premise that DES, delivered at the cortical and subcortical level in awake patients performing a behavioral task, interferes with its execution only when applied on structures belonging to the neural circuit sub-serving the ongoing task (Duffau and Duchatelet, 2016). In this setting, analysis of the deficit induced by DES provides elements relevant to disclosing the role of the stimulated area/fibers in the neural control of the task. The intraoperative stimulation of Broca's area during speech production induces a so-called "speech arrest" (Luders et al., 1987; Axelson et al., 2009; Chang et al., 2011; Mandonnet et al., 2016; Chang et al., 2017), a well-known phenomenon in the neurosurgical literature, defined as "the complete interruption of ongoing speech in absence of oro-facial movements and vocal output" (Tate et al., 2014, 2015; Chang et al., 2017). This evidence supports the notion that Broca's area has a critical role in speech production, but it does not reveal precisely whether intraoperative stimulation of Broca's area interferes with the preceding semantic or phonological control necessary for speech production (for a review see Hickok, 2012), or whether it interferes directly with control of the motor structures that produce speech. In a recent intraoperative study, we stimulated Broca's area with both standard low-frequency DES to induce "speech arrest," and high-frequency DES to induce motor evoked potentials (MEPs) in phono-articulatory muscles (Cerri et al., 2015). Stimulation of Broca's area with short trains of high-frequency DES failed to elicit MEPs in the hand or in oro-facial muscles, including phono-articulatory muscles active in speech, irrespective of muscle excitability (resting or pre-activated muscles) and/or functional state (during language tasks). The same paradigm applied to the motor cortices (ventral pre-motor and primary motor cortex) elicited MEPs in both oro-facial and hand muscles. This result challenged the inclusion of Broca's area among the well-established motor cortices hosting muscle representations.
In the same study, low-frequency DES induced "speech arrest" as expected, but careful observation suggested that it did so by aborting the onset of speech rather than by disrupting ongoing speech production. This observation supports the hypothesis that the "speech arrest" induced by low-frequency DES on Broca's area, as described by some neurosurgeons, may not be due to an impairment in the execution of the motor programs controlling the recruitment of the phono-articulatory muscles' motor units, but rather to an impairment in the pre-articulatory phase. This hypothesis, which in our previous study was based on a descriptive comparison of EMG patterns of muscle activation with and without DES delivered on Broca's area, needs to be confirmed with a quantitative approach. Addressing whether "speech arrest" might in fact not be due to an arrest of ongoing muscle activity during speech is mandatory in order to challenge the involvement of Broca's area in motor control of speech. Ultimate proof requires evidence of a functional connection of this area with the motoneurons driving phono-articulatory muscles. In sum, when considering the literature investigating the anatomo-functional properties of Broca's area, there is no doubt about its involvement in language; however, the precise nature of its role remains obscure, particularly with respect to motor control of speech. The present study is the first to investigate, with a quantitative approach, the effect of intraoperative DES of Broca's area on the activity of phono-articulatory muscles during single-word tasks (object picture naming and counting). The electromyographic (EMG) activity of a sample of phono-articulatory muscles was recorded in 20 patients performing counting and naming tasks during DES on Broca's area and analyzed in the frequency (power spectrum) and time (root mean square) domains. Should Broca's area be critically involved in the control of motor output during speech, its stimulation during task performance would be expected to affect the ongoing motor unit recruitment in the muscles involved in the task, and this effect would be expected to be time-locked to the duration of the DES. Taking the analysis a step further, we also investigated whether the effect of DES can be attributed to a specific sector of the frontal operculum. To this aim, the topographic map of the responsive sites on Broca's area was created and matched with the probabilistic map of terminations, within the frontal lobe, of the main systems of fibers sub-serving the language network, reconstructed with High-Angular Resolution Diffusion Imaging (HARDI) q-ball tractography.

MATERIALS AND METHODS

The study was performed on 20 patients affected by gliomas during the surgical removal of the tumor with the aid of the brain mapping technique. The aim of the study was to assess the actual involvement of Broca's area (BA44-45) in motor control of speech. To this aim, we focused the investigation on the EMG activity of the phono-articulatory muscles active during two speech tasks (object picture naming and counting) performed by patients during awake surgery for brain tumor removal in two conditions, i.e., during and in the absence of DES on Broca's area. The EMG of each muscle recorded during task performance in the two conditions was analyzed offline in the frequency (power spectrum, PS) and time (root mean square, RMS) domains, and the two conditions were compared. A comprehensive summary of the methods is provided in Box 1.
A detailed description of the methods is reported below, covering the inclusion criteria, the surgical procedure, the intraoperative brain mapping technique, and the EMG and neuroimaging data analyses.

Patient Selection

Twenty patients affected by gliomas were enrolled in this study. All patients had left language dominance. Tumors were localized in the left frontal, temporal and/or insular lobes, never infiltrating or reorganizing Broca's area (see Fornia et al., 2016 for the inclusion criteria). All patients were free from neurological deficits affecting motor and/or language functions (see Table 1 for a detailed description of all patients). All patients gave written informed consent to the surgical and mapping procedure, which followed the principles outlined in the "World Medical Association Declaration of Helsinki: Research involving human subjects." The study was performed in strict adherence to the routine procedure normally utilized for surgical tumor removal. Accordingly, all data were recorded using the electrophysiological monitoring and stimulation protocols (see below) adopted for routine clinical mapping.

Pre-operative Routine

In the pre-operative routine assessment, all patients underwent a handedness assessment, a neurological examination and a neuropsychological evaluation of cognitive abilities such as non-verbal intelligence, memory, praxis and language. The neuroradiological examination included morphological T1, T2, FLAIR, DWI and post-contrast T1 images (Bello et al., 2014). A functional MRI (fMRI) study was performed in all patients to localize three neighboring cortical areas of the frontal lobe: (i) the primary motor cortex (M1), localized with a finger-tapping task; (ii) Broca's area; and (iii) the ventral pre-motor cortex (vPM), the latter two identified with a covert visual naming and fluency task or covert auditory verb generation (Ruge et al., 1999; Cerri et al., 2015). Language hemispheric dominance was determined by the laterality index, based on the fMRI results in both language tasks. Following the fMRI investigation, High-Angular Resolution Diffusion Imaging (HARDI) q-ball (6 patients) or Diffusion Tensor Imaging (DTI) (14 patients) tractography reconstructions allowed the motor and/or language fibers in the frontal lobe to be reconstructed and visualized, to estimate their anatomical relationship with the lesion. All the data acquired in the pre-operative routine assessment contributed to the design of the optimal functional mapping strategy to be performed during surgical tumor removal.

Surgical Procedure and Intraoperative Routine

Patients underwent asleep-awake-asleep anesthesia. The surgery was performed with the aid of the brain mapping technique for cortical and subcortical mapping. According to this technique, DES was applied on cortical areas to define the surgical cortical entry zone, while subcortical mapping was performed along with tumor resection to locate the functional motor and/or language fibers representing the limits of resection (Bello et al., 2014).

Neurophysiological brain monitoring. Cortical activity was monitored during the procedure by ElectroEncephaloGraphy (EEG, Comet) and ElectroCorticoGraphy (ECoG, Grass). ECoG was recorded from a cortical region adjacent to the area to be stimulated by subdural strip electrodes (4-8 contacts, monopolar array referred to a mid-frontal electrode) throughout the whole procedure, to monitor the basal cortical electrical activity and to detect after-discharges or electrical seizures during the resection.
EEG was recorded with electrodes placed over the scalp in a standard array. EEG and ECoG signals were filtered (bandpass 1-100 Hz), displayed with high sensitivity (50-150 µV/cm and 300-500 µV/cm, respectively) and recorded. The integrity of the essential descending motor pathways was monitored throughout the procedure using the so-called "train-of-five" (To5) monitoring technique: trains of 5 stimuli were delivered to the M1 cortex, from the beginning to the end of the procedure, to elicit Motor Evoked Potentials (MEPs) in the contralateral oro-facial and hand muscles in order to monitor the integrity of corticospinal transmission (for details see Bello et al., 2014). During surgery, the muscle activity of the patient was recorded by pairs of subdermal hook-needle electrodes (Technomed) inserted into 20 muscles (face, upper and lower limb) contralateral to the hemisphere to be stimulated, plus 4 ipsilateral muscles, connected to a multichannel EMG recording system (ISIS, INOMED; sampling rate 20 kHz, notch filter at 50 Hz) (Bello et al., 2014). The EMG was used to record the responses to stimulation (either the To5 MEP responses or the responses elicited during brain mapping) and the voluntary motor activity, and to distinguish between electrical and clinical seizures. Close attention was paid to preventing intraoperative seizures by means of the ECoG and EMG monitoring: per routine clinical procedure, at the first ictal sign the stimulation was stopped and cold irrigation was applied to abort the seizure, and whenever a seizure spread to the whole hemibody a propofol bolus infusion (4 ml on average) was delivered.

Neurophysiological brain mapping. The brain mapping technique for cortical and subcortical mapping of both motor and language components can be performed with two different stimulation paradigms: Low Frequency (LF) and High Frequency (HF) stimulation (Bello et al., 2014). In the present study, we focused on the analysis of the EMG data obtained during brain mapping performed with LF stimulation (LF-DES) delivered on Broca's area while the patients were performing two speech tasks, i.e., the object picture naming and the counting task (see below). According to the clinical procedure, both tasks were performed in two different conditions: in the absence of DES (DES-OFF condition) and during LF-DES delivered on Broca's area (DES-ON condition). The LF stimulation consisted of trains (1-4 s duration) of biphasic square-wave pulses (0.5 ms each phase) at 60 Hz (ISI 16.6 ms), delivered by a constant-current stimulator (OSIRIS NeuroStimulator) integrated into the ISIS system through a bipolar probe (two ball tips, 2 mm diameter, spaced 5 mm apart).

Intraoperative speech tasks. Object picture naming task: the patient was asked to name pictures of objects randomly presented on a computer screen. Pictures were presented with regular timing, allowing a pause of a few seconds between subsequent pictures. As soon as a picture was presented, the patient was asked to name it, while during the pauses the patient was silent (resting). Counting task: the patient was asked to count from one to ten. The count was self-paced, but patients were previously trained to wait a few seconds (resting) between one number and the following one. During task performance (either counting or object picture naming), LF-DES was randomly applied on the same stimulation site, so that some trials were performed without stimulation (DES-OFF) and others during stimulation (DES-ON).
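As a concrete illustration of the LF-DES parameters just described, the arithmetic of a stimulation train can be sketched as follows. This is an illustrative calculation only, not the stimulator's control code; the 60 Hz rate, the 0.5 ms phases and the 2.8 s mean train duration (reported in the Results) come from the text, while the function itself is ours.

```python
RATE_HZ = 60.0
ISI_S = 1.0 / RATE_HZ   # ~16.6 ms inter-stimulus interval, as in the text
PHASE_S = 0.5e-3        # each phase of the biphasic square-wave pulse

def lf_train_onsets(duration_s: float):
    """Return pulse-onset times (s) for an LF-DES train of the given duration."""
    n_pulses = int(duration_s * RATE_HZ)
    return [i * ISI_S for i in range(n_pulses)]

onsets = lf_train_onsets(2.8)  # mean train duration reported in the Results
print(f"{len(onsets)} pulses, ISI = {ISI_S * 1e3:.1f} ms, "
      f"biphasic pulse width = {2 * PHASE_S * 1e3:.1f} ms")
```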
When the naming task was performed during stimulation, the surgeon delivered DES at the presentation of the object to be named. During the counting task, DES was delivered in the pause between two subsequent numbers to be pronounced. No external cues were given to the patients to pace the onset of speech.

Intraoperative identification and stimulation of Broca's area. According to the clinical procedure, for each patient the pre-operative anatomo-functional identification of Broca's area, to be distinguished from the ventral pre-motor area (vPM) and the primary motor cortex (M1), was performed with a dedicated neuroimaging (fMRI-DTI) study (Ruge et al., 1999; Cerri et al., 2015). The fMRI and DTI or HARDI data were loaded onto the neuronavigation system to be available to the neurosurgeon. During the procedure, the conclusive identification of the three areas, needed to define the point of surgical entry, was mandatory and was performed with the brain mapping technique. LF stimulation delivered in the resting condition is highly effective on M1, inducing overt motor responses (oro-facial movements) clearly recorded by the EMG electrodes, while it is not effective when applied to vPM and Broca's area. Conversely, LF-DES delivered on the three areas during speech tasks impairs task performance, although with different clinical features, thus allowing the surgeon to distinguish among the different cortices. The occurrence of speech disturbances upon application of DES to the three areas, assessed by clinical inspection, has indeed been extensively documented (Matsuda et al., 2014; Tate et al., 2014; Chang et al., 2017) and is therefore part of the clinical routine in functional neurosurgical practice. As a standard routine, when the stimulation stops the patient's counting/naming ("speech arrest" phenomenon) without inducing movements at least three non-consecutive times, the identification of Broca's area is considered reliable. Regarding vPM, when the stimulation induces a disruption of speech referred to as "anarthria" (a term so far considered a synonym of speech arrest) (Tate et al., 2014, 2015) at least three non-consecutive times, the identification of the area is considered reliable. Differently, the stimulation of M1 induces a disruption of speech with facial muscle contraction (Tate et al., 2014), referred to as "dysarthria" (a motor impairment accompanied by dysphonic/aphonic speech) (Deletis et al., 2014). According to the routine procedure for language mapping, in our patients the first areas to be identified were M1 and/or vPM. The LF-DES paradigm was applied on M1 (on the face motor area) and/or on vPM to identify the minimum current intensity (Threshold Intensity) needed to induce a clear interference (dysarthria and/or anarthria/speech arrest) with task performance (ThreshI-DES). Then DES was applied onto the putative Broca's area: the intensity of stimulation was initially set at ThreshI-DES and then increased until "speech arrest" was obtained (SupraThreshI-DES). This protocol was applied only when it was clinically mandatory for the surgeon to define a clear border between Broca's area and the neighboring vPM; in our study, SupraThreshI-DES was applied in only 7 out of 20 patients. The complex clinical setting, and the primary concern of avoiding any impact on the clinical procedure, did not allow the same number of trials to be recorded in each patient. In each patient, the EMG activity of the entire set of muscles was recorded during the cortical mapping.
A dedicated channel recording the patient's vocal production was acquired simultaneously. In particular, for the offline analysis we focused on the responses of a sample of phono-articulatory muscles: the superior orbicularis oris, mylohyoid and mentalis contra- and ipsilateral to the left hemisphere, and the contralateral platysma.

EMG Analysis

The EMG of the phono-articulatory muscles during speech task performance was recorded in two conditions: in the absence of stimulation (DES-OFF) and during stimulation (DES-ON). Offline analysis of the intraoperative data was performed at the single-subject level as follows: (1) selection of the EMG activity occurring in phono-articulatory muscles during task performance in the two conditions, DES-OFF and DES-ON; and (2) quantitative characterization of the pattern of EMG activity recorded in the two conditions (see below).

Selection of the EMG Activity Occurring in Phono-Articulatory Muscles during Task Performance in the Two Conditions: DES-OFF and DES-ON

The first analysis was aimed at extracting from the recorded EMG the signal corresponding to task performance, both in the absence of stimulation and during stimulation. To this aim, the EMG signal recorded in both DES-OFF and DES-ON conditions was analyzed offline using dedicated software (MatLab) that allowed the EMG signal corresponding to task performance to be distinguished and extracted from the baseline EMG. The same data analysis was performed separately for the object picture naming and counting tasks. Two subsequent analyses were performed: (A) the EMG signal corresponding to the muscle activation occurring during DES-OFF was selected with respect to the EMG signal corresponding to the resting condition (baseline); (B) the EMG signal corresponding to muscle activation occurring during DES-ON was selected based on the stimulus artifact, recorded in a dedicated EMG channel, indicating the exact time window of the stimulus.

(A) DES-OFF condition. To select and extract the EMG segment corresponding to the muscle activation occurring during the utterances pronounced in the DES-OFF condition, the EMG of each recorded phono-articulatory muscle was rectified and the Root-Mean-Square (RMS) was calculated across epochs (time windows) of 100 ms, each sliding 50 ms with respect to the preceding one. The RMS of the baseline activity was calculated by averaging four randomly selected fragments of EMG signal in the resting condition (each about 2 s long), plus 3 * SD. The latter value was chosen to exclude from the baseline signal the non-specific muscular activity occurring during small, non-task-related movements, which could otherwise create false-positive EMG activations. For each utterance (trial), the onset and offset of the task-related muscle activity were extracted by setting them at the points of intersection between the 3 * SD line and the RMS curve, corresponding respectively to the rise and fall of the EMG activation of the phono-articulatory muscle during the tasks. An illustration of the method used for the extraction of the EMG segments entered into the RMS and PS analyses is presented in Figure 1. The duration of the EMG segment corresponded to the time needed to produce the utterance, which was normally short and variable among patients. All the EMG segments corresponding to all trials in the DES-OFF condition entered the quantitative EMG analysis (see Quantitative Characterization of the Pattern of EMG Activity below).
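A minimal sketch of this onset/offset extraction follows, assuming a direct implementation of the sliding-window RMS and the 3 * SD criterion. The authors used custom MatLab code; this Python version, including its function names, is ours, and it assumes segments are at least one window long.

```python
import numpy as np

FS = 20_000                 # EMG sampling rate (Hz), as reported
WIN = int(0.100 * FS)       # 100 ms epoch
STEP = int(0.050 * FS)      # 50 ms slide

def sliding_rms(emg):
    """RMS of the rectified EMG over 100 ms windows with a 50 ms step."""
    x = np.abs(emg)  # rectification
    starts = range(0, len(x) - WIN + 1, STEP)
    return np.array([np.sqrt(np.mean(x[s:s + WIN] ** 2)) for s in starts])

def onset_offset(trial_rms, baseline_fragments):
    """First and last RMS window above the baseline mean + 3 * SD threshold."""
    base = np.concatenate([sliding_rms(f) for f in baseline_fragments])
    thresh = base.mean() + 3 * base.std()
    above = np.where(trial_rms > thresh)[0]
    if above.size == 0:
        return None, None    # no task-related activation detected
    return above[0], above[-1]
```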
FIGURE 1 | Selection of phono-articulatory muscle activity. Illustration of the method used for the extraction of the EMG segments entered into the RMS and PS analyses. The EMG segment was extracted at the ONSET and OFFSET of the EMG activation based on the RMS calculation (see Materials and Methods); the extracted segment was then processed for both RMS and PS. Exemplary illustration of the EMG signal of the contralateral (upper trace) and ipsilateral (lower trace) Orbicularis Oris muscle; both EMG signals are rectified. The horizontal black lines correspond to three times the standard deviation (3 * SD) of the mean amplitude (in µV) of the baseline EMG, while the lines overlaid on the EMG signals correspond to the Root-Mean-Square (RMS; blue line in the upper trace, white line in the lower trace). The onset of the EMG activity related to DES-OFF was set at the point of intersection between the 3 * SD line and the rising RMS slope (arrows at left); accordingly, the offset of the EMG activation during tasks was set at the point where the falling RMS slope crosses below the 3 * SD line (arrows at right).

A total of 369 trials (considering all patients) were recorded in the DES-OFF condition, irrespective of the speech task. The number of DES-OFF trials per patient ranged from 4 to 47 (naming: mean 18 trials/patient ± 9 SD; counting: mean 11 trials/patient ± 14 SD). (B) DES-ON condition. The EMG signal corresponding to the DES-ON condition (either ThreshI-DES or SupraThreshI-DES) was selected using as a reference the stimulation artifact recorded by one of the synchronized EMG channels (Orbicularis Oculi). The selected EMG segment (trial) corresponded to the onset and offset of the stimulation, and its duration corresponded to the duration of the stimulus artifact. All the EMG segments corresponding to all trials in the DES-ON condition entered the quantitative EMG analysis (see Quantitative Characterization of the Pattern of EMG Activity below). A total of 97 trials, considering all patients, were recorded in the DES-ON condition, irrespective of the speech task (mean number of trials per subject ± SD = 5 ± 4). The 97 trials include both stimulations failing to elicit any effect (DES-ON ThreshI-DES) and stimulations inducing speech prevention (DES-ON SupraThreshI-DES). The number of DES-ON trials per site ranged from 3 to 10. All recorded data (369 DES-OFF and 97 DES-ON trials) entered the statistical analysis (performed at the single-patient level) for both the time- and frequency-domain parameters. The higher number of DES-OFF trials, and its variability among patients, was due to clinical requirements, which differed across procedures. In some patients the tumor was located near (but not infiltrating) cortical and subcortical areas involved in language and other cognitive/neurological functions, while in others it was embedded within (though not infiltrating) the areas and pathways related to language and speech. In the latter patients, the brain mapping procedure with speech tasks, needed to define the functional borders of the tumor, was performed more extensively than in the former, and the number of trials was high. For this reason, a wide range of DES-OFF trials per patient (4-47) was collected.
For the same reason, when the mapping was focused on disclosing the border between Broca's area and the neighboring cortex, the number of DES-ON trials per patient was lower, ranging from 3 to 10.

Quantitative Characterization of the Pattern of EMG Activity Recorded in Phono-Articulatory Muscles during the DES-OFF and DES-ON Conditions

The EMG segments selected with the previous analysis (DES-ON condition: 97 trials; DES-OFF condition: 369 trials) were processed with a quantitative analysis in order to obtain the distinguishing features characterizing the muscle activation in the two conditions. To this aim, the EMG segments of the recorded muscles were analyzed, in both conditions, in the frequency (power spectrum, PS) and time (root mean square, RMS) domains. This approach allows for the quantitative characterization of motor unit recruitment in the different muscles active in the two conditions. The same analysis was applied to each muscle to also characterize the EMG activity at baseline, i.e., when the muscles were not active in producing speech (e.g., EMG recorded in the pause between two words to be produced). This analysis was mandatory to correctly identify the onset of the EMG activity during task performance.

Analysis in the time domain. The RMS was selected as the main parameter for the analysis of the EMG signal in the time domain: the mean and peak RMS values (in µV) were compared between the two conditions.

Analysis in the frequency domain. The PS of the signal (computed via Fast Fourier Transform) was used to estimate the mean and median frequencies and the area (in µV²) under the spectrum curve. Notably, each power spectrum lacks information on the power of the 50 Hz component, since a safety notch filter (at 50 Hz) is applied by the recording machine (ISIS, INOMED) to exclude alternating-current interference. The statistical analysis was performed at the single-subject level, not at the population level (see Supplementary Figure S1). In all patients and in all conditions, the EMG analysis was always matched with the functional outcome, i.e., the clinical evaluation of task performance routinely assessed by the neuropsychologists during the procedure as part of the clinical routine.

Statistical Comparison of the Patterns of EMG Activation in the Two Conditions

For each patient, a statistical analysis was applied to compare the two conditions, DES-OFF and DES-ON. The aim was to disclose whether DES applied on Broca's area during task performance actually disrupts the ongoing motor program, interfering with the physiological motor unit recruitment occurring during DES-OFF, which was used as the main reference. For each muscle analyzed, the comparison of the EMG activity recorded during the DES-OFF (369 trials) and DES-ON (97 trials) conditions was computed on all five calculated parameters: the mean and peak values for the RMS, and the mean and median frequencies and the area under the curve for the PS. Statistical analysis was performed in Statistica 7.1 software by means of the Mann-Whitney U-test. The EMG analysis was conducted for each single patient separately. Since we compared the effect of DES on Broca's area during speech (DES-ON) vs. speech without DES (DES-OFF) as the main reference, we treated the two conditions as independent; therefore, a test for independent samples was chosen.
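To make the five parameters concrete, the following is a minimal sketch of their computation for a single EMG segment. This is a Python rendering under stated assumptions; the authors worked in MatLab and Statistica, and the exact implementation details are not given in the text.

```python
import numpy as np

def emg_parameters(segment, fs=20_000):
    """Return (rms_mean, rms_peak, mean_freq, median_freq, ps_area) of a segment.

    Assumes the segment is at least one 100 ms window long.
    """
    # Time domain: sliding RMS, 100 ms window / 50 ms step, as in the methods
    win, step = int(0.1 * fs), int(0.05 * fs)
    x = np.abs(segment)
    rms = np.array([np.sqrt(np.mean(x[s:s + win] ** 2))
                    for s in range(0, len(x) - win + 1, step)])
    # Frequency domain: power spectrum via FFT
    ps = np.abs(np.fft.rfft(segment)) ** 2
    freqs = np.fft.rfftfreq(len(segment), d=1 / fs)
    area = ps.sum()                                    # area under the spectrum
    mean_f = (freqs * ps).sum() / area                 # power-weighted mean frequency
    median_f = freqs[np.searchsorted(np.cumsum(ps), area / 2)]  # splits power in half
    return rms.mean(), rms.max(), mean_f, median_f, area
```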
The non-parametric Mann-Whitney U-test for independent samples was adopted because only a small sample of trials was available for each single patient, so that distribution tests lack sufficient power to provide meaningful results. The adopted significance level was p < 0.05. The analysis of the EMG during the object picture naming and counting tasks was performed separately.

Neuroimaging Analysis

Postoperative neuroimaging analysis was performed to allow precise localization of the DES-induced effect on Broca's area at the single-subject and population levels, and to estimate the putative correlation between the effect obtained by stimulating Broca's cortex and the main systems of connecting fibers involved in language.

3D Map of the LF-DES Stimulation Sites on Broca's Area

For each patient, the exact positions of the sites stimulated with both ThreshI-DES and SupraThreshI-DES on Broca's area and on the neighboring vPM/BA6 were reconstructed. During the intraoperative mapping, the coordinates of the stimulation sites effective in inducing impairment of the task were recorded on the neuronavigation system. To report the exact position of the sites on the 3D MRI surface of the patients, the following procedure was adopted. The MRI-T1 volume of each patient was used to perform the cortical surface extraction, and the surface-volume registration was computed with the dedicated BrainSuite software (Shattuck and Leahy, 2002). Data were then loaded into Brainstorm (a MatLab toolbox) (Tadel et al., 2011) and the exact positions of the coordinates were labeled onto the patient's 3D MRI. Subsequently, the 3D MRI and the labeled points were co-registered to the MNI space (non-linear ICBM 152). This procedure allowed each recorded stimulation point to be reported in the MNI coordinate space (Fornia et al., 2016). The coordinates of each point were then labeled onto the ICBM 152 template to create a 3D reconstruction of the stimulated left hemisphere. Unified segmentation (Ashburner and Friston, 2005) was used to normalize each brain and the related stimulated sites to the MNI space. However, since this transformation may introduce significant spatial inaccuracies, we visually inspected the locations of the stimulated sites on the MNI template and matched them with the original coordinates in the patient's brain. All the sites in BA44/45 and in the ventral portion of the precentral gyrus located on the MNI template matched the sites originally identified on the patient's brain. This procedure minimized the inaccuracies of the transformation.

MR Tractography Analysis

By means of the High-Angular Resolution Diffusion Imaging (HARDI) q-ball tractography technique, the cortical terminations of the main white matter bundles related to language function were reconstructed, to investigate whether they reach the stimulation sites on Broca's area and might therefore be involved in the genesis of the effect observed and characterized with the quantitative analysis. HARDI datasets were corrected for movement and eddy-current distortions using the FMRIB Software Library (FSL). Diffusion Imaging in Python (Dipy) software was used to estimate fractional anisotropy (FA) and for q-ball residual-bootstrap fiber tracking of the language pathways (Caverzasi et al., 2014, 2016). Tracking was performed using an FA threshold = 0.1 and a maximum angle = 60° as stopping parameters in the algorithm.
Tractography of the language pathways was performed by a board-certified neuroradiologist (A.C., with 11 years of experience in MR tractography analysis). The main white matter bundles belonging to the language pathways (from Chang et al., 2015; Kinoshita et al., 2015) were reconstructed (Caverzasi et al., 2016), and the results were visualized using TrackVis (http://trackvis.org). Specifically, to reconstruct the IFOF and UF, a single-plane seed ROI was defined on the FA color map in the coronal plane passing through the anterior commissure, by selecting the anterior part of the left external and extreme capsules, where the two tracts run in contiguity. Target ROIs for the UF and IFOF were localized at the levels of the temporal and occipital lobes, respectively. For both tracts, the left frontal lobe was used as a second target ROI. Streamlines that passed through both target ROIs were retained. To reconstruct SLF-II, SLF-III and the AF, a seed ROI was positioned in the coronal plane at the level of a region of high antero-posterior anisotropy lateral to the central part of the lateral ventricle and the corona radiata. Target ROIs were selected as follows: in the angular gyrus for SLF-II, in the supramarginal gyrus for SLF-III, and on the axial peritrigonal plane at the level of the posterior middle and superior temporal gyri for the AF. The left frontal lobe was used as a second target ROI, and streamlines that passed through both target ROIs were retained. To reconstruct the FAT, the first region of interest was located in the white matter of the inferior frontal gyrus and the second in the white matter of the superior frontal gyrus, including the anterior cingulate and pre-supplementary motor area (Catani et al., 2013). For each patient, the FA maps were co-registered to the anatomical images and to the FSL 2 mm × 2 mm × 2 mm resolution Montreal Neurological Institute (MNI) atlas using FSL linear and non-linear transformations (FMRIB's FLIRT and FNIRT registration tools). Density maps of the end points of each fiber tract were saved in the patients' native space using TrackVis and thresholded to obtain binary masks containing all voxels visited by at least one streamline in the q-ball residual-bootstrap tractography. These masks were spatially normalized to the MNI space using the linear and non-linear transformations derived from the co-registration of the FA maps to the FSL MNI atlas. All patients' end-point maps in the MNI space for each tract were summed to visualize the distribution of the tract terminations and their overlap with the intraoperative stimulation sites normalized to the MNI space. A 3D rendering of the end points of the different tracts was obtained using the FSL 3D viewer.

RESULTS

The involvement of Broca's area in motor control of speech was investigated during the surgical removal of brain tumors performed with the aid of the brain mapping technique. The main focus of the study was the investigation of a functional connection of Broca's area with the motoneurons driving phono-articulatory muscles. To this aim, we investigated in 20 patients whether the intraoperative stimulation of Broca's area affects the motor unit recruitment of the phono-articulatory muscles active during speech. For each patient, two different intraoperative conditions were compared: (i) DES-OFF, speech production in the absence of stimulation, and (ii) DES-ON, speech production during direct low-frequency current stimulation applied onto Broca's area during the speech tasks.
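Before turning to the EMG results, a brief side note on the tractography pipeline above: the "retain streamlines passing through both target ROIs" step can be sketched as follows. The data structures (streamlines as arrays of voxel coordinates, ROIs as binary masks) are assumptions chosen for illustration, not the authors' TrackVis/Dipy code.

```python
import numpy as np

def passes_through(streamline, roi_mask):
    """True if any point of the streamline falls inside the binary ROI mask."""
    idx = np.round(streamline).astype(int)
    # Keep only points within the volume bounds before indexing the mask
    ok = np.all((idx >= 0) & (idx < roi_mask.shape), axis=1)
    return roi_mask[tuple(idx[ok].T)].any()

def filter_streamlines(streamlines, roi_a, roi_b):
    """Retain streamlines passing through both target ROIs."""
    return [s for s in streamlines
            if passes_through(s, roi_a) and passes_through(s, roi_b)]
```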
Muscle activity during the object picture naming and counting tasks was investigated with a quantitative analysis in both the frequency and time domains, to evaluate which domain reveals the distinguishing features of motor unit recruitment in the two conditions. In all patients and all conditions, the EMG analysis was always matched with the functional outcome, i.e., the clinical evaluation of task performance routinely assessed by the neuropsychologists during the procedure as part of the clinical routine. The main results can be summarized as follows: (1) DES on Broca's area induces an intensity-dependent "speech arrest": the intensity of DES needed to induce "speech arrest" when applied on Broca's area was indeed higher than the intensity effective in inducing speech impairments when delivered on the neighboring pre-motor/motor cortices. (2) The quantitative parameters measured in the frequency (PS) and time (RMS) domains on the EMG recorded during "speech arrest" were comparable to those measured on the EMG recorded at baseline, as if the motor program had never started. (3) Speech arrest induced by DES on Broca's area was an "all-or-none" effect: partial interruptions of speech were not observed, and muscle activation started only upon removal of DES, as if DES prevented speech onset. (4) DES on Broca's area never affected motor unit recruitment in either the naming or counting tasks, coherently with the functional outcome reporting a lack of any deficit in phono-articulation. (5) Speech arrest is an effect obtained when stimulating a specific sector of Broca's area, i.e., the ventral BA44. (6) No semantic or phonological deficits were observed when stimulating BA44-45 with the speech tasks adopted in this study. (7) Speech arrest with the same features was also observed when directly stimulating the subcortical fibers running below Broca's area.

Effect of LF-DES Applied on Broca's Area during Speech Tasks: Functional Outcome vs. EMG Analysis

The quantitative EMG analysis and the functional outcome show that the effect of Broca's area stimulation during speech is an all-or-none effect strictly dependent on the intensity of stimulation. According to the clinical procedure, for each stimulation site on Broca's area the intensity of LF-DES was initially set at the Threshold Intensity (ThreshI-DES), i.e., the minimum intensity inducing a disruption of ongoing speech when applied on M1 and/or on vPM (see Materials and Methods). The average ThreshI-DES value (average stimulation intensity ± SD) applied on Broca's area was 2.7 ± 0.7 mA (train duration 2.8 ± 1.0 s). Each site was stimulated a minimum of three non-consecutive times, and DES-ON stimulation trials were alternated with DES-OFF trials. The effect of ThreshI-DES stimulation on task performance was evaluated by clinical inspection (vocal outcome) during surgery and by the quantitative offline analysis of the EMG activity (see Materials and Methods). All values were then compared with those measured during DES-OFF. Due to clinical needs, in 7 out of 20 patients the intensity of LF-DES was increased (SupraThreshI-DES). Again, the effect of SupraThreshI-DES stimulation on task performance was evaluated by clinical inspection and by the quantitative offline analysis of the EMG activity, and all values were compared with those measured during DES-OFF. Statistical differences (in Intensity and Duration) between SupraThreshI-DES and ThreshI-DES in the population of 7 patients receiving both were assessed by means of the Mann-Whitney U-test.
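A minimal sketch of this intensity comparison follows; the per-trial intensities below are hypothetical, and only the use of a two-group Mann-Whitney U-test comes from the text.

```python
from scipy import stats

# Hypothetical per-trial stimulation intensities (mA) for illustration
thresh_mA = [3.0, 2.5, 3.5, 2.0, 3.0, 4.0, 3.5]   # ThreshI-DES trials
supra_mA = [4.0, 5.5, 3.5, 4.5, 6.0, 4.0, 5.0]    # SupraThreshI-DES trials

u, p = stats.mannwhitneyu(thresh_mA, supra_mA, alternative="two-sided")
print(f"U = {u}, p = {p:.3f}")  # the paper reports U = 130.5, P < 0.01 on its data
```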
Analysis showed that the DES failing to induce the effect (ThreshI-DES) and the DES inducing the effect (SupraThreshI-DES) differed significantly in terms of Intensity (U = 130.5, P < 0.01; mean SupraThreshI-DES 4.3 ± 1.3 mA; mean ThreshI-DES 3.1 ± 1.3 mA), but not in terms of Duration (U = 199.5, P = 0.07), suggesting that "speech arrest" is an intensity-dependent effect. The following paragraphs report in detail the effects of ThreshI-DES and SupraThreshI-DES on functional performance and on motor unit recruitment, as assessed by the quantitative analysis of the EMG (see Materials and Methods).

Threshold LF-DES on Broca's Area

When ThreshI-DES was delivered on the ventral sector of the premotor cortex (vPM), i.e., the area located posterior to Broca's area, it disrupted task performance by inducing a dysfunctional articulation of the word being pronounced and/or vowel emission. During stimulation of vPM, patients attempted to pronounce a word but phono-articulation was not appropriate, as shown by stuttering and stopping, and by the pattern of ongoing EMG of the active muscles, which was altered with respect to the pattern occurring in DES-OFF (Cerri et al., 2015). Conversely, when ThreshI-DES was delivered to Broca's area just prior to or during speech initiation, no effect was observed on speech production (anarthria/"speech arrest," dysarthria or an interruption of ongoing speech) or on semantic and phonological processing (Figure 2A). During stimulation, all patients could speak normally and performed both tasks with precision. Quantitative analysis of the EMG activity showed, in all patients, no significant differences in muscle activation between DES-OFF and DES-ON (Figure 3A and Supplementary Table S1A): the mean and median frequencies and the PS area were comparable in the two conditions (p > 0.05), as were the mean and peak RMS values (p > 0.05). In summary, DES on Broca's area, delivered at threshold intensity as defined by its effectiveness on vPM, did not affect motor unit recruitment in either the naming or counting tasks. This is coherent with the functional outcome, which reported a lack of any phono-articulatory deficit.

SupraThreshold LF-DES on Broca's Area

In seven patients, the intensity of LF-DES was increased until an effect was observed (SupraThreshI-DES): the patients did not articulate the word/number, and did not even try to. This effect is commonly referred to as "speech arrest." Notably, the inability to pronounce the word occurred without any evident attempt to do so, as if the stimulation prevented the onset of speech altogether, rather than blocking the muscles' activity during speech (Figure 2B). As soon as the stimulus was removed, patients performed the task. No partial effects were observed for intensities between ThreshI-DES and SupraThreshI-DES, suggesting that speech arrest follows an all-or-none principle.

FIGURE 2 | SupraThreshI-DES on Broca's area seems to prevent speech production. The EMG signal during DES appears similar to that of the baseline, not to that of DES-OFF. When the stimulus was applied onto the cortex, the stimulus artifact was recorded in the cranial muscle channels (mainly phono-articulatory muscles; see the first 4 channels) and in a dedicated channel (Orbicularis Oculi, red rectangle).
The pure stimulation artifact recorded in the Orbicularis Oculi channel was used as the reference time window to select the EMG of the phono-articulatory muscles during DES (see Materials and Methods). The stimulus artifact, visible in the figure in the phono-articulatory muscles, was filtered out to perform the quantitative EMG analysis; the signal was processed with notch filters at 60 Hz and its harmonics.

If, instead of being delivered prior to speech initiation, SupraThreshI-DES was applied on Broca's area during ongoing word pronunciation (2 patients), it failed to induce any effect (on speech or on language), so that the task was correctly performed. Quantitative analysis of the muscles' EMG activity, comparing DES-OFF with the SupraThreshI-DES interference, showed a significant difference between the EMG signals recorded in the two conditions: the mean and median frequencies and the PS area, as well as the mean and peak RMS values, were all significantly different (p < 0.05). On the other hand, the EMG signal recorded during SupraThreshI-DES stimulation was superimposable on the EMG recorded at baseline, suggesting that, during stimulation, no attempt at activating the muscles had occurred. Confirming this observation, no significant differences (p > 0.05) were observed when comparing all the EMG parameters (RMS and PS) recorded during SupraThreshI-DES with those measured at baseline (Figure 3B and Supplementary Table S1B). This result indicates that SupraThreshI-DES on Broca's area prevents the onset of the motor program rather than disrupting its execution. Coherently, when SupraThreshI-DES was applied during the already ongoing articulation of the word (2 patients), the EMG parameters were not different (p > 0.05) from those recorded in DES-OFF, and no effect on performance was observed. In 2 out of 20 patients, it was possible to directly stimulate the fibers running subcortically below Broca's cortex with SupraThreshI-DES (due to the time restraints of the surgical setting it was not possible to also add ThreshI-DES stimulation). The effect of stimulation of these fibers was superimposable on the effect of SupraThreshI-DES delivered at the cortical level, with all the parameters of the EMG signal being the same as those of the baseline (p > 0.05). In conclusion, both the qualitative and quantitative analyses of the intraoperative data demonstrate that LF-DES delivered at threshold intensity (ThreshI-DES), while effective on the neighboring vPM and M1, was completely ineffective on Broca's area. In order to be effective in inducing "speech arrest" in both the naming and counting tasks, the current intensity had to be increased (SupraThreshI-DES), and even then it was only effective if delivered before, not during, the tasks. These data demonstrate that "speech arrest" derives from a lack of activation of the motor program, not from an arrest of ongoing execution.

3D Map of the SupraThreshI-DES Positive Sites on Broca's Area: Neuroimaging Analysis

The topographical distribution of the effect of LF-DES was assessed on an average 3D map plotting the position of each "eloquent" site in each patient (see Materials and Methods). Only sites responding with a clear effect on task performance to at least three stimulations were considered "eloquent" and therefore reported on the map. All eloquent sites were plotted for comparison on the same map (Figure 4). Two main results emerged from this analysis. First, the topographical distribution of sites where ThreshI-DES was successful in impairing the task by inducing an improper articulation of speech (yellow dots in Figure 4) clustered, as expected, in vPM, with no eloquent sites found on Broca's area when stimulated at this intensity (blue dots in Figure 4). Second, it emerged that the effect of SupraThreshI-DES on task performance, i.e., "speech prevention," was actually located on Broca's area, although it was not homogeneously distributed over BA44/45 but clustered in the ventral portion of BA44 (vBA44, red dots in Figure 4).

FIGURE 3 | Statistical analysis. (A) The chart shows, for each recorded muscle in all twenty patients, whether there is a significant difference (Mann-Whitney U-test) between each PS and RMS parameter in DES-OFF vs. performance during ThreshI-DES. Black bars indicate no significant (NS, p > 0.05) difference between the two conditions; gray bars would indicate a significant difference (p < 0.05). ThreshI-DES on Broca's area does not interfere with speech production. (B) The chart shows, for each recorded muscle in the seven patients in whom SupraThreshI-DES was applied, whether there is a significant difference (Mann-Whitney U-test) between each PS and RMS parameter in DES-OFF vs. performance during SupraThreshI-DES (upper chart) and at baseline vs. SupraThreshI-DES (lower chart). Black bars indicate no significant (NS) difference between the two conditions; gray bars would indicate a significant difference (p < 0.05). SupraThreshI-DES on Broca's area totally prevents speech production. The recorded phono-articulatory muscles and the calculated EMG parameters (PS and RMS) are indicated on the x-axis. For graphic representation only, the PS parameters (mean and median frequency, and the area under the PS curve) and the RMS parameters (mean and peak) are grouped together. The number of analyzed patients is indicated on the y-axis. PS, Power Spectrum; RMS, Root Mean Square.

The quantitative analysis, matched with the topographic distribution of the LF-DES eloquent sites, showed that all the eloquent sites on Broca's area were characterized by the abortion of any attempt to speak, and that the EMG recorded during stimulation was not different from the EMG recorded at baseline; disruption of ongoing EMG activity was never observed. Interestingly, the stimulation of the sites reported on the 3D map also failed to elicit semantic and phonological errors: when applied to Broca's area before speech onset, both ThreshI-DES and SupraThreshI-DES failed to induce either type of error. The overall map shows that the effect of LF-DES on Broca's area on language production is limited to the ventral BA44, that a higher intensity of stimulation is needed compared with other cortical motor areas, and that, when effective, stimulation results in "speech prevention" rather than "speech arrest," since LF-DES does not arrest ongoing speech but prevents its onset.

FIGURE 4 | 3D map of the stimulation points. None of the sites represented on the 3D map (non-linear MNI152) as blue, red or yellow points was responsive to stimulation (either ThreshI-DES or SupraThreshI-DES) with semantic or phonological errors. The sites represented on Broca's area (BA44-45) as blue points were never responsive to stimulation (either ThreshI-DES or SupraThreshI-DES) with a motor alteration of speech. The sites represented on the more ventral portion of BA44 as red dots were responsive to SupraThreshI-DES (6 patients; 1 patient was excluded due to screenshot inaccuracy), inducing the phenomenon of "speech prevention." The sites represented on the cortex caudal to Broca's area (the ventral pre-motor cortex, vPM/BA6) as yellow points were responsive to ThreshI-DES; at these sites, ThreshI-DES induced an improper articulation of speech.

Since only supra-threshold intensity stimulation of Broca's area was effective in blocking eloquence, the above results are compatible with the hypothesis that subcortical fibers might have been involved rather than, or in addition to, the cortical site per se. In order to understand which of the main systems of fibers involved in language might have been engaged when stimulating Broca's area, the topographic distribution of the eloquent sites on vBA44 was matched with the probabilistic map of terminations, within the frontal lobe, of the main systems of fibers subserving the language network, as demonstrated by the High-Angular Resolution Diffusion Imaging (HARDI) q-ball technique (see Materials and Methods). The main white matter bundles of the language network were reconstructed in six patients and the cortical terminations from the tractography analysis were plotted in the MNI space (Figure 5a). This analysis highlights the terminations of the arcuate fasciculus (AF), of the superior longitudinal fasciculus components II and III (SLF II-III) and of the frontal aslant tract (FAT), all reaching the ventral portion of BA44 (Figure 5b, in green, pink and blue, respectively), where the stimulation sites related to the "speech prevention" phenomenon were found (red squares in Figure 5b). Parallel to their termination in vBA44, the same systems of fibers also terminate in the ventral portion of vPM, where DES induced the improper articulation of speech.

FIGURE 5 | Cortical terminations of the main language fibers, as demonstrated by MR tractography. Results of the MR tractography performed on six of the seven patients who received SupraThreshI-DES. Panel (a) shows the cortical terminations, in voxels, reached by the main systems of fibers reported as strictly associated with language function (including speech) in the neurosurgical literature. The color code refers to the number of patients in which fiber cortical termination sites were found: the higher the number of patients, the higher the intensity of the color. Voxels representing only one patient were excluded. The main tracts represented were: the arcuate fasciculus (AF, in green), the superior longitudinal fasciculus components II and III (SLF II-III, in pink), the temporo-parietal component of the superior longitudinal fasciculus (SLF-tp, in black), the frontal aslant tract (FAT, in blue), the inferior fronto-occipital fasciculus (IFOF, in yellow) and the uncinate fasciculus (UF, in red). The terminations of the AF, SLF II-III and FAT are shown in panel (b): in the inferior frontal gyrus, these tracts reach the ventral portion of BA44, where the stimulation sites related to the "speech prevention" phenomenon (red squares) were found.

DISCUSSION

Motor control of speech requires highly skilled voluntary activation of up to 100 phono-articulatory muscles driven by bulbar motoneurons, controlled bilaterally by the primary motor cortex (M1) (Ackermann and Riecker, 2010). For more than 100 years, Broca's area (BA44-45) has dogmatically been considered a peculiar motor cortex in charge of the motor control of language production (Berker et al., 1986), despite the fact that
Today, attributing to Broca's area a direct role in the motor control of speech production is highly debated, owing to the lack of a univocal correlation between "production" aphasia and pure lesions of this area. In this respect, "apraxia of speech" is a paradigmatic example, pointing to vPM/BA6 rather than Broca's area as the motor area of speech: it is a clear phono-articulatory dysfunction resulting from heterogeneous lesions which have the ventral premotor cortex (vPM), and not BA44-45, as a common feature (New et al., 2015). Mirroring this observation, and supporting the debate, is the evidence that injuries restricted to Broca's area do not result in a permanent motor deficit, as is the unfortunate rule for damage to motor cortical areas, but rather in a temporary mutism (Mohr et al., 1978). On the other hand, should a putative direct role of Broca's area in the motor control of speech be hypothesized, it could be exerted either by shaping the activity of M1 or by independent control of bulbar motoneurons. In both cases, Broca's area must affect motoneuronal excitability and, in turn, the activity of phono-articulatory muscles, either directly or indirectly via M1; in both cases, evidence for a functional connection between Broca's area and the motoneurons driving phono-articulatory muscles must be provided, and it is at present lacking.

An alternative point of view, possibly adding interesting elements to address this issue, comes from modern linguistic theory (Hickok and Poeppel, 2004, 2007; Papoutsi et al., 2009; Long et al., 2016). It postulates that, within the cortical language network, the ventral part of BA44 might compute the phonetic encoding (Papoutsi et al., 2009; Long et al., 2016), i.e., the pre-articulation process translating syllables into articulatory gestures, which are then organized by the ventral premotor-primary motor (vPM-M1) areas directly controlling the recruitment of the phono-articulatory muscles. Accordingly, a pure lesion of ventral BA44 would block speech production by preventing the phonetic translation from Broca's area to vPM-M1. A direct investigation of this model in ecological conditions is extremely difficult, although not completely impossible. The neurosurgical literature reports interesting observations recorded during surgical resection of brain tumors performed with the brain mapping technique (Bello et al., 2006; Tate et al., 2014, 2015; Chang et al., 2017). This technique offers the opportunity to stimulate cortical areas and subcortical fibers with direct electrical current (DES). In this setting, the frontal operculum and the central and precentral gyri have been extensively investigated in awake patients performing object picture naming and/or counting tasks. When stimulating Broca's area, neurosurgeons report an impairment of the task, an effect known as "speech arrest" (Luders et al., 1987; Axelson et al., 2009; Chang et al., 2011; Tate et al., 2014) and described as "the block of ongoing speech in absence of oro-facial movements or vocal output" of patients. The definition of speech arrest reported so far is based on the clinical observation of patients' behavior during surgery, and it seems coherent with the modern linguistic model (Hickok and Poeppel, 2004, 2007; Papoutsi et al., 2009; Long et al., 2016).
Although very interesting, the clinical/behavioral observation is not, per se, sufficient to unravel whether the transient lesion induced in Broca's area by DES results in a block of the pre-articulation phonetic encoding to be transmitted to vPM-M1, or rather in a block of a direct action of Broca's area on motor unit recruitment in the phono-articulatory muscles. In the absence of EMG recording during the task, it is not possible to distinguish between these two conditions, which are very different in terms of motor control. The phenomenological clinical observation, based solely on auditory and visual inspection of vocal output and motor contraction of facial muscles, cannot provide a causal, mechanistic model of the neural processes leading to impairment of speech. Moreover, in the neurosurgical literature "anarthria" and "speech arrest," obtained when DES is applied to vPM or to Broca's area respectively, are reported as synonymous (Chang et al., 2011; Tate et al., 2014; Mandonnet et al., 2016), although they are very different phenomena. A recent study by our group revealed that the effect of DES on the two areas is actually very different in terms of muscle behavior (Cerri et al., 2015). Visual inspection of EMG activity indeed suggests that DES delivered to vPM disrupts the ongoing muscle activity, whereas, when delivered to Broca's area before the onset of speech, it halts speech by preventing the activation of the motor program altogether, not by disturbing its ongoing execution. Although very interesting, that descriptive report was not sufficient to shed definitive light on the issue, because it did not exclude, with a specific and quantitative analysis, a direct action of Broca's area on muscle activity. This was the goal of the present study: to investigate, with a quantitative approach, the effect of DES-induced transient inactivation of Broca's area on the EMG activity of phono-articulatory muscles recorded before and during speech production, in object picture naming and counting tasks.

Overall, the results confirm the previous study (Cerri et al., 2015), challenging a direct role of Broca's area in modulating the recruitment of motor units of the phono-articulatory muscles. When stimulation was delivered before the onset of speech, no active inhibition or disruption (in the sense of dysfunctional recruitment) of muscle activity was observed; instead, all muscles remained relaxed in the resting state (baseline condition), and speech never started. Conversely, if the stimulation occurred during ongoing speech production, it was ineffective, and object naming or counting proceeded normally. This quantitative study is strongly supported by the complementary evidence that Broca's area does not have a motor output in the resting condition (Cerri et al., 2015) and thus cannot be defined as a proper "motor" area. Consistently, our data cast doubt on the existence of a functional connection between Broca's area and the motor nuclei controlling phono-articulatory muscles, and suggest that this area may instead be involved in a more cognitive, pre-articulatory function. Based on our results, we suggest that the term "speech arrest" be reworded as "speech prevention," to describe more accurately the fact that DES on Broca's area prevents the onset of muscle activation rather than arresting it.
Moreover, according to intraoperative observation, the "speech prevention" induced by DES is an "all-or-none" phenomenon: the deficit either occurred fully or not at all, without partial deficits. We interpret this finding with the hypothesis that Broca's area may operate as a functional gate, authorizing the phonetic translation that precedes speech articulation. When applied to Broca's area, DES may interfere with its computational activity, inactivating the functional gate; the phonetic encoding would thus be prevented, eventually halting the naming process. The suggested role of Broca's area in gating the phonetic encoding is supported by data reported by Flinker et al. (2015), showing that Broca's activation occurs a few milliseconds before the actual execution of the motor program activating the motor cortices, reasonably pointing to a role in phonetic coding rather than in semantic/phonological coding, which is expected to occur earlier with respect to Broca's activation. This hypothesis fits the most accredited current linguistic model of speech production (Hickok and Poeppel, 2004, 2007; Papoutsi et al., 2009; Long et al., 2016). Coherently, the DES-induced "speech prevention" sites are clustered in the ventral-posterior sector of Broca's area, vBA44, proposed in this model as the best candidate for phonetic encoding, feeding the vPM-M1 network (Papoutsi et al., 2009; Long et al., 2016).

According to our data, "speech prevention" is an intensity-dependent effect: the intensity of ThreshI-DES, effective in inducing speech disruption when applied to the motor cortices neighboring Broca's area (vPM and M1) (Cerri et al., 2015), was ineffective on Broca's area, and a significantly higher intensity (SupraThreshI-DES) was required to induce "speech prevention." The hypothesis that vBA44 is similar in function to the other motor cortices but simply less excitable seems simplistic, too ad hoc and unjustified, especially given the cytoarchitectonic similarities between BA44 and vPM (Amunts et al., 1999). Alternatively, "speech prevention" might actually result from inactivation of subcortical fibers running below the cortical site of stimulation. Should this be true, we can speculate that the effect is not elicited with ThreshI-DES because the current is not strong enough to reach and act on the subcortical fibers, which it can do when SupraThreshI-DES is utilized. Indeed, DES delivered subcortically to vBA44 induced the same effect as SupraThreshI-DES delivered cortically. The conceptual consequences of this hypothesis depend on the nature of the axons running subcortically to vBA44. If these fibers are in direct connection with Broca's area, as afferents or efferents, then "speech prevention" can still be attributed, although indirectly, to Broca's area, though with the important new evidence that prevention does not derive from a motor impairment. Conversely, if the stimulated axons are on their way to other targets, bypassing vBA44, the attribution of "speech prevention" to the effect of DES disrupting the activity of Broca's cortical neurons, and consequently their hypothesized role in phonetic translation, must be reconsidered. Surprisingly, we found that SupraThreshI-DES applied before the onset of speech also failed to induce any effect on language (semantic and phonological processing).
A possible explanation may be found in the systems of fibers belonging to the language network and reaching the frontal lobe, described with the most advanced neuroimaging techniques (Catani et al., 2013): the arcuate fasciculus (AF), the superior longitudinal fasciculus components II and III (SLF II-III), the inferior fronto-occipital fasciculus (IFOF), the uncinate fasciculus (UF) and the frontal aslant tract (FAT), all reported to be involved in different functions of the language network. These range from phonological (Dick and Tremblay, 2012; Chang et al., 2015; Fujii et al., 2015, 2016), syntactic (Chang et al., 2015; Skeide et al., 2016), reading (Gullick and Booth, 2015) and repetition (Chang et al., 2015) processes, and the starting mechanisms of speech (Boetz and Barbeau, 1971; Kinoshita et al., 2015), to phono-articulation (Chang et al., 2015). Building on this knowledge, we used HARDI tractography to reconstruct the white matter connections of Broca's area (Figure 5a), to disclose whether these might be reached by DES. Our analysis, consistent with the most recent post-mortem microdissection studies (Lemaire et al., 2013; Sarubbo et al., 2016), shows parallel branches of the AF, SLF II-III and FAT (Figure 5b) running below the sites of stimulation in BA44 and terminating both in BA44 and in the neighboring vPM, all potentially responsible for the "speech prevention" effect, although the current spread might not reach the FAT as easily, due to its medial and deep course. The observation that DES on vPM induces speech motor disruption (Cerri et al., 2015), an effect different from "speech prevention," suggests that the branches reaching vBA44, rather than those reaching vPM, could be involved in the effect. Should this be the case, however, DES would be expected to be equally effective when acting on the AF and SLF-III vBA44 branches feeding Broca's area, by interrupting transmission through their axons on the way to Broca's area (SupraThreshI-DES), and when acting on Broca's cortex itself (ThreshI-DES), by interrupting the activity of the cortical neurons receiving the AF and/or SLF II-III vBA44 branches. However, in all our tests ThreshI-DES was never effective on Broca's area. Certainly, the complexity of the language network does not allow an easy allocation of function to all the different structures involved, i.e., cortical areas and tracts, within a single study; considering our data, however, the univocal attribution of phonetic translation to Broca's area alone must be further investigated. The data collected in two patients, showing that DES failed to impair the task when applied during ongoing articulation of the words to be pronounced, must be substantiated with a significantly higher number of trials. Dedicated studies are needed to assess the time course of the effect of DES at different stages of articulation; this would support and better disclose the role of Broca's area in gating phonetic encoding.

Finally, the lack of phonological and semantic errors when stimulating the whole BA44-45 area with both ThreshI-DES and SupraThreshI-DES in our study, contradicting the neurosurgical literature (Tate et al., 2014), deserves discussion. This apparent contradiction could be explained by considering that many studies evaluate Broca's function on the performance of entire sentences (Démonet et al., 2005), suggesting the involvement of Broca's area in higher-order semantic processes (Hagoort, 2005; Vigneau et al., 2006; Price, 2010; Friederici, 2011), rather than on single-word tasks.
As our data focus on task performance using single words, our results may actually support this view.

CONCLUSION

This study is the first to investigate the effect of DES applied to Broca's area during speech using a quantitative approach applied to the EMG recorded from phono-articulatory muscles. The results strongly challenge the concept of a direct functional connection between Broca's area and the motoneurons of phono-articulatory muscles. We show that, rather than inducing interruption/disruption of ongoing speech, commonly referred to as "speech arrest," intraoperative stimulation of ventral BA44 results in a complete lack of onset of muscle activation, which might be better defined as "speech prevention." "Speech prevention" is an all-or-none, intensity-dependent effect, requiring a higher stimulation intensity than that needed in the neighboring motor cortices. The intraoperative data point to Broca's area acting as a functional gate at the pre-articulatory stage, allowing or halting the phonetic translation into a motor program to be organized and executed by the other motor areas. Direct control of the phono-articulatory apparatus by Broca's area seems unlikely, given the absence of a direct effect on motor unit recruitment. The strict correlation between DES intensity and speech prevention might suggest that the appropriate interpretation is an inactivation of subcortical fibers, likely the SLF II-III and/or the AF running below vBA44 and reaching both vBA44 and vPM, rather than of the cortical neurons themselves. In this light, "speech prevention" might be considered the result of a substantial shutdown of phonetic encoding, affecting both object picture naming and counting tasks. The possibility that more than one tract running below vBA44 contributes to the effect cannot be ruled out, and the univocal attribution of phonetic translation to Broca's area must be reconsidered.

AUTHOR CONTRIBUTIONS

VF and GC designed the study and wrote the manuscript. VF and CS collected the data and performed the data analysis. VF and LF helped with the statistical analysis. MM and VF developed the MatLab script. LB selected the patients, directed, and together with MRi and FP executed the surgical procedure and the intraoperative brain mapping. VF, LF, MRo, and LB contributed to the 3D reconstruction. AC performed the HARDI q-ball tractography analysis. LB and PB contributed to data interpretation. All authors contributed to writing the manuscript. GC directed the project.

ACKNOWLEDGMENTS

The authors thank Prof. Roland G. Henry and the colleagues of the Departments of Neurology and Radiology and Biomedical Imaging and the Graduate Program in Bioengineering, University of California, Berkeley/University of California, San Francisco, CA, United States for kindly allowing us to use the Dipy (Diffusion Imaging in Python) software for processing of the HARDI tractography data.

SUPPLEMENTARY MATERIAL

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fnhum.2018.00064/full#supplementary-material

FIGURE S1 | EMG parameters submitted to statistical analysis in a single subject. All five EMG parameters calculated for comparison are illustrated.
TABLE S1A | The table reports the exact p-values obtained from statistical comparisons between the EMG signals of the DES-OFF condition and the DES-ON "no effect" condition, for all analyzed data in each patient (20 patients), for all calculated EMG parameters (PS median and mean frequency and area; RMS mean and peak) and for all recorded muscles (contra- and ipsilateral orbicularis oris, mylohyoid and mentalis muscles, and contralateral platysma muscle). Empty cells indicate that the EMG signal was not suitable for analysis due to technical problems.

TABLE S1B | The table reports the exact p-values obtained from statistical comparisons between the EMG signals of the baseline and speech prevention conditions, for all analyzed data in each patient (7 patients), for all calculated EMG parameters (PS median and mean frequency and area; RMS mean and peak) and for all recorded muscles (contra- and ipsilateral orbicularis oris, mylohyoid and mentalis muscles, and contralateral platysma muscle). Empty cells indicate that the EMG signal was not suitable for analysis due to technical problems.
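Tables S1A/B are, in essence, grids of Mann-Whitney p-values over muscles and parameters. A hedged sketch of how such a grid could be assembled with SciPy follows; the muscle and parameter names and the data layout are our illustrative assumptions, not the study's code.

```python
from scipy.stats import mannwhitneyu

MUSCLES = ["orb_oris_c", "orb_oris_i", "mylohyoid_c", "mylohyoid_i",
           "mentalis_c", "mentalis_i", "platysma_c"]   # naming is illustrative
PARAMS = ["ps_median_f", "ps_mean_f", "ps_area", "rms_mean", "rms_peak"]

def pvalue_table(cond_a, cond_b):
    """cond_a/cond_b: dicts mapping (muscle, param) -> list of per-trial
    values for one patient in the two conditions being compared.
    Returns the per-cell exact p-values; unusable cells stay None."""
    table = {}
    for m in MUSCLES:
        for p in PARAMS:
            a, b = cond_a.get((m, p)), cond_b.get((m, p))
            if not a or not b:        # signal unusable for technical reasons
                table[(m, p)] = None
                continue
            _, pval = mannwhitneyu(a, b, alternative="two-sided",
                                   method="exact")
            table[(m, p)] = pval      # p > 0.05: not significantly different
    return table
```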
On transitive modal many-valued logics

This paper focuses on the study of modal logics defined from valued Kripke frames and, particularly, on computability and expressibility questions for modal logics of transitive Kripke frames evaluated over certain residuated lattices. It is shown that a large family of those logics -- including the ones arising from the standard MV and Product algebras -- yields an undecidable consequence relation. Later on, the behaviour of transitive modal Lukasiewicz logic is compared with that of its non-transitive counterpart, exhibiting some particulars concerning computability and equivalence with other logics. We conclude the article by showing the undecidability of the validity and the local SAT questions over transitive models when the Delta operation is added to the logic.

Introduction

Modal logic is one of the most developed and studied non-classical logics, yielding a beautiful equilibrium between computational complexity and expressibility. Generalizations of the concepts of necessity and possibility offer a rich setting to model and study notions from many different areas, including proof theory, temporal and epistemic concepts, work-flow in software applications, etc. On the other hand, substructural logics provide a formal framework to manage vague and resource-sensitive information in a very general (and so, adaptable) fashion. Modal many-valued logics appear in the literature both in pursuit of purely theoretical development and with the objective of offering a richer framework to model complex environments that might require valued information as well as qualification operators.

While the first publications on the topic can be traced back to the 90s [15,16] (focusing on the problem over finite Heyting algebras), it is only in later years that more systematic work has been developed. In [20] a brief study of the S5 modal logics over BL algebras is presented, but it is in more recent works that the modal logics over arbitrary Kripke frames (also referred to in the literature as minimal modal logics) have been studied. Several works since have addressed different aspects of these logics. Most relevant for the present paper are the works related to axiomatizability and proof-theoretic questions, addressing the minimal modal logics over finite MTL algebras [4], Lukasiewicz finite and infinite standard algebras [22], the Product standard algebra [26], and the Gödel standard algebra [8,9], [23]. Concerning computability, in [6,7] it is proven that the minimal (local) modal Gödel logics with both □ and ✸ modal operators are decidable (both over models with a crisp accessibility relation and with a valued one). It is also shown that the S5 extension of the previous logic with crisp accessibility (equivalent to the one-variable fragment of predicate Gödel logic) is decidable too. However, in relation to the present paper, we point out that the question of whether the purely transitive extension is decidable or not is left open. For modal Lukasiewicz and Product logics, no general results on decidability have been proven, and the failure of the finite model property, as well as the difficulty of obtaining recursive and finitary axiomatizations for them, make the possible answers to this question non-trivial to conjecture.

The nearest problem addressed in the literature concerns the decidability of some Fuzzy Description Logics (FDL) (see e.g. [25], [21], [1], [12], [3]).
These logics extend the so-called Description Logics, a formalism used intensively in AI and ontologies that can be seen as a semantic (in some cases, also syntactic) variation of modal logic, towards the valued setting. In relation to fuzzy modal logic, we can see FDL as a multi-modal system over models with both weighted accessibility relations and weighted formulas, which is not based on the complete usual logical language but which has, on the other hand, names for worlds and the possibility of referring (via constants) to each element of the algebra of evaluation. The study of decision procedures in FDL focuses on variants of the r-SAT problem (the existence of a valuation that evaluates a formula to at least r), and in [11] we can find a translation of the known results to the context of many-valued modal logics. However, since these results are limited to the context of a valued accessibility relation and multi-modal operations, a uniform translation of them to modal logics arising from classical frames with valued formulas, the topic of study in the present work, does not seem likely to exist. Moreover, questions concerning validity and derivability in the logic remain, in most cases, open.(1)

A general approach to determining undecidability of consistency over FDLs is developed in [3], proving in particular that the SAT problem over Product and Lukasiewicz FDLs is undecidable as long as certain expressivity conditions are met. However, this approach is not suited to cope with the problems studied in this paper, since they belong to non-comparable settings. On the one hand, our main goal is that of shedding some light on the decidability of the minimal logics (both as sets of theorems and as deduction systems) arising from valued models with a crisp accessibility relation. On the other hand, the methods from the previous reference are focused on the question of consistency (not reducible to validity, since the logic is many-valued) and, moreover, are strongly related to the language of FDLs (which incorporates, e.g., constants for the elements of the models) and the possibility of assigning degrees to the accessibility relations, none of which can be done in our context.

In this paper, we focus on the study of the decidability of the local consequence relation of modal logics over models with a crisp accessibility relation valued on certain classes of FL_ew-algebras, which comprehend the well-known cases of the Lukasiewicz standard algebra, the class of finite MV chains, the standard product algebra and the one-generated product algebra. The main contribution of the paper is that the consequence relations over transitive models of the above kind are undecidable, also if we restrict the logic to the one arising from only the finite models in the class. Remarkably enough, transitive models are one of the most common kinds of relational models naturally appearing in CS and other fields (from accessibility models of the real world to dynamic-logic style software formalizations, the modelling of preferences and other epistemic notions, etc). Thus, the undecidability of these logics points to the problems that might arise from their unrestricted use in applications, and opens to consideration the study of weaker logics with better computational behaviour.

(1) It is known from [21] that validity over the multi-modal Lukasiewicz logic with fuzzy accessibility relation is decidable, and a similar result concerning the product case was presented with partial mistakes in [10], and corrected in unpublished notes by the authors.
A second main contribution of this paper is a study of some particularities of the modal logics defined by extending propositional Lukasiewicz logic. First, as a consequence of some results from [21] and [5], we show the decidability of the local modal Lukasiewicz logic (as a consequence relation), which interestingly provides us with an example of a decidable modal logic whose transitive expansion is undecidable (a phenomenon of which, to the best of our knowledge, no examples were known up to now). On the other hand, we also observe that, while the minimal (local) modal logic over the standard MV algebra and that over all finite MV algebras coincide, this is not the case for the respective transitive logics.

The paper is structured as follows. In Section 2 we introduce all the definitions that will be used throughout the paper, aiming to be as self-contained as possible. Section 3 focuses on the undecidability result stated above, and details the reduction of logical consequence over transitive models to the Post Correspondence Problem. Section 4 shows the decidability of the local modal Lukasiewicz logic, and provides a separating example for the transitive modal logic over the standard MV algebra and the one over all finite MV chains. Lastly, in Section 5 we observe how the previous logics, expanded with the Monteiro-Baaz ∆ operation, turn out to have not only an undecidable consequence relation, but also undecidable validity and SAT problems.

Preliminaries

Modal many-valued logics arise from Kripke structures evaluated over certain algebras, putting together relational and algebraic semantics in a fashion adapted to model different reasoning notions. Throughout the present work, the algebraic basis of these semantics will be that of FL_ew-algebras, the algebraic semantics of the Full Lambek Calculus with exchange and weakening [17], [13]. This offers a very general approach to the problem while relying on well-known algebraic structures. In this section, we formally introduce these algebras and the basic definitions necessary for the further development of the paper.

We will usually write ab instead of a·b, and abbreviate the n-fold product x · x ··· x by x^n. Moreover, as is usual, we define ¬a to stand for a → 0. In the setting of the previous definition, we denote by Fm_p the algebra of formulas built over a countable set of variables V using the language corresponding to the above class of algebras (i.e., ∧/2, ∨/2, ·/2, →/2, 0/0, 1/0). As usual, we let (x ↔ y) := (x → y) · (y → x) and ¬x := x → 0.

Let us introduce some well-known examples of FL_ew-algebras over the universe [0, 1] (in fact, also BL algebras, i.e., further satisfying prelinearity (MTL) and divisibility [20], [14]). In the algebras below, ∧ and ∨ stand for the usual lattice operations, and × for the usual product of real numbers:

• [0, 1]_Ł, the standard MV algebra, with a · b = max(0, a + b − 1) and a → b = min(1, 1 − a + b);
• [0, 1]_Π, the standard product algebra, with a · b = a × b, and a → b = 1 if a ≤ b and b/a otherwise;
• a one-generated product algebra (all are isomorphic) is any subalgebra of [0, 1]_Π with universe {0, 1} ∪ {a^i : i ∈ ω} for some a ∈ (0, 1).

Let us also point out some particular characteristics of FL_ew-algebras that will be of use later:

• A is n-contractive whenever a^{n+1} = a^n for all a ∈ A;
• A is weakly archimedean if for any two elements a, b ∈ A, a ≤ b^n for all n ∈ ω implies ab = a.

Observe that if A is n-contractive, the element a^n is idempotent for any a ∈ A. Simple examples of these algebras comprehend Heyting and Gödel algebras, and MV_n algebras.
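These notions can be illustrated numerically. The following sketch (ours, not part of the paper) encodes the Lukasiewicz and product monoidal operations on [0, 1] and checks, for several n, that some element violates n-contractivity; this is the behaviour of the standard algebras discussed next.

```python
def luk_mult(a, b):       # Lukasiewicz t-norm on [0, 1]
    return max(0.0, a + b - 1.0)

def prod_mult(a, b):      # product t-norm on [0, 1]
    return a * b

def power(mult, a, n):    # a^n in the monoid (A, ., 1)
    r = 1.0
    for _ in range(n):
        r = mult(r, a)
    return r

# For every n there is some a with a^(n+1) < a^n, so neither standard
# algebra is n-contractive for any n:
for n in (1, 5, 50):
    a = 1.0 - 1.0 / (2 * (n + 1))   # close enough to 1 for Lukasiewicz
    assert power(luk_mult, a, n + 1) < power(luk_mult, a, n)
    assert power(prod_mult, 0.9, n + 1) < power(prod_mult, 0.9, n)
```

For the weak-archimedean condition, note that in both algebras a ≤ b^n for all n with b < 1 forces a = 0 (the powers of b tend to 0), so ab = a holds trivially.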
On the other hand, the (infinite) standard MV-algebra, the standard product algebra and any one-generated subalgebra of the latter are not n-contractive for any n. Concerning weak archimedeanicity, observe that if the element inf_n b^n exists in a weakly archimedean algebra, then it is an idempotent element. Examples of weakly archimedean algebras are the standard MV-algebra and the standard product algebra, as well as the algebras belonging to the generalised quasi-varieties generated by them. In particular, any (non-trivial) one-generated subalgebra of the standard product algebra is weakly archimedean.

For what concerns this work, it is interesting to recall that the logic FL_ew, the Full Lambek Calculus with exchange and weakening, is complete with respect to the class of logical matrices {⟨A, {1}⟩ : A ∈ FL_ew}. That is to say, for any set of formulas Γ ∪ {ϕ}, Γ ⊢_{FL_ew} ϕ if and only if every homomorphism h from Fm_p into an FL_ew-algebra A with h[Γ] ⊆ {1} also satisfies h(ϕ) = 1.

The algebra of modal formulas Fm is built in the same way as Fm_p, but expanding the language of FL_ew-algebras with two unary operators □ and ✸. While it is clear how to extend a propositional evaluation from V into an FL_ew-algebra to Fm_p, the semantic interpretation of the modal operators is defined from relational structures in the following way.

Definition 2.3. Let A be an FL_ew-algebra. An A-Kripke model is a structure M = ⟨W, R, e⟩ such that

• ⟨W, R⟩ is a Kripke frame; that is to say, W is a non-empty set of so-called worlds and R is a binary relation over W, called the accessibility relation;
• e : V × W → A, extended to Fm_p in such a way that (world-wise) it is a homomorphism into A, and to Fm by further letting

e(v, □ϕ) = inf{e(w, ϕ) : ⟨v, w⟩ ∈ R},   e(v, ✸ϕ) = sup{e(w, ϕ) : ⟨v, w⟩ ∈ R},

whenever those infima/suprema exist, and leaving the value undefined otherwise.

To lighten the notation, we will usually write Rvw to denote ⟨v, w⟩ ∈ R, and say in this case that w is a successor of v.

Definition 2.4. (1) A model is safe whenever the values of e(v, □ϕ) and e(v, ✸ϕ) are defined for any formula at any world. We denote by FL_ew-Kripke models the class of all A-Kripke models for any FL_ew-algebra A. (2) A safe model is witnessed whenever for any modal formula Mϕ (with M ∈ {□, ✸}) and each world v ∈ W, there is w_{Mϕ} ∈ W such that Rvw_{Mϕ} and e(v, Mϕ) = e(w_{Mϕ}, ϕ).

For what concerns notation, given a class of models C, we denote by ωC the finite models in C (observe these are always safe and witnessed). On the other hand, for a class of algebras C (or a single algebra A) we write K_C (correspondingly, K_A) to denote the class of safe Kripke models over the algebras in the class (or over the single algebra specified). Finally, in order to lighten the reading, we let K_Ł, K_Ł^ω and K_Π denote, respectively, K_{[0,1]_Ł}, K_{{MV_n : n ∈ ω}} and K_{[0,1]_Π}.

As happens for classical models, we can also consider conditions on the accessibility relation and study the logics arising from the corresponding classes of models. In this work, we focus on the restriction to transitive accessibility relations, i.e., those models such that for any u, v, w ∈ W, if Ruv and Rvw then Ruw. As usual, for an arbitrary class of models C, we denote the transitive models in it by 4C. Observe, however, that this is only a naming convention, since we are not assuming in any case that the transitive logic corresponds to an extension of the minimal one by the 4 axiom schema(ta).

Towards the definition of modal logics over FL_ew-algebras relying on the notion of FL_ew-Kripke models, it is natural to preserve the notion of world-wise truth as taking value 1 (in order to obtain, world-wise, the propositional FL_ew logic).
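For intuition, here is a small evaluator for finite A-Kripke models over the standard Lukasiewicz chain, following Definition 2.3: □ is the infimum (here, a minimum) of the argument's value over the successors, ✸ the supremum. Finite models are always safe, so min/max suffice; the formula encoding is our own illustrative convention.

```python
def luk_mult(a, b): return max(0.0, a + b - 1.0)   # Lukasiewicz t-norm
def luk_res(a, b):  return min(1.0, 1.0 - a + b)   # its residuum

def evaluate(worlds, R, e, phi, v, mult=luk_mult, res=luk_res):
    """Evaluate a modal formula at world v of a finite A-Kripke model.
    phi is a nested tuple: ('var', x), ('bot',), ('and', p, q) for the
    monoidal conjunction, ('imp', p, q), ('box', p) or ('dia', p)."""
    op = phi[0]
    if op == 'var':
        return e[(v, phi[1])]
    if op == 'bot':
        return 0.0
    if op == 'and':
        return mult(evaluate(worlds, R, e, phi[1], v, mult, res),
                    evaluate(worlds, R, e, phi[2], v, mult, res))
    if op == 'imp':
        return res(evaluate(worlds, R, e, phi[1], v, mult, res),
                   evaluate(worlds, R, e, phi[2], v, mult, res))
    vals = [evaluate(worlds, R, e, phi[1], w, mult, res)
            for w in worlds if (v, w) in R]
    if op == 'box':
        return min(vals, default=1.0)   # infimum over successors (1 if none)
    if op == 'dia':
        return max(vals, default=0.0)   # supremum over successors (0 if none)
    raise ValueError(op)
```

A formula is then satisfied at v exactly when `evaluate(...)` returns 1.0, matching the notion of world-wise truth just introduced.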
With this in mind, for any A-Kripke model M and v ∈ W we say that M satisfies a formula ϕ in v, and write M, v |= ϕ, whenever e(v, ϕ) = 1. Similarly, we simply say that M satisfies a formula ϕ, and write M |= ϕ, whenever M, v |= ϕ for all v ∈ W. Over the previous notion of satisfiability, two different consequence relations can be defined, a local and a global one. In the present work, we focus on the local preservation of truth.

Definition 2.5. Let Γ ∪ {ϕ} be a finite set of formulas of Fm, and let C be a class of FL_ew-Kripke models. We say that ϕ follows from Γ locally in C, and write Γ ⊢_C ϕ, whenever for any M ∈ C and any v ∈ W, if e(v, γ) = 1 for all γ ∈ Γ, then e(v, ϕ) = 1. When C is clear from the context, we simply write ⊢. Moreover, for a model M and a world v ∈ W, we write Γ ⊬_{M,v} ϕ to denote that e(v, γ) = 1 for all γ ∈ Γ while e(v, ϕ) < 1.

Observe that the necessity rule ϕ ⊢ □ϕ is only valid in the above deductive system for theorems of the logic, as happens in classical local modal logic. The following basic notions concerning the manipulation of Kripke models will be of use later on.

Definition 2.6. Given a Kripke model M and w ∈ W, we let the depth of w, d(w), be the supremum of the lengths of the R-paths starting in w. Observe that if there exists some cycle in the model, all worlds involved in it have infinite depth.

Definition 2.7. We let the propositional subformulas of ϕ, PSFm(ϕ), be the maximal subformulas of ϕ containing no modal operator. For Γ a set of formulas we let PSFm(Γ) := ⋃_{γ∈Γ} PSFm(γ).

Let us finish the preliminaries by stating a well-known undecidable problem, which will be used in the next sections to show the undecidability of some of the modal logics introduced above. An instance of the Post Correspondence Problem (PCP) is a finite set of pairs of (non-zero) numbers in some base s ∈ ω, and a solution is a non-empty sequence of indices such that the corresponding concatenations of first and second components coincide. Recall that, given two numbers x, y in base s, their concatenation xy is given by x · s^{|y|} + y (for ·, + the usual product and sum of reals), where |y| is the number of digits of y in base s. Determining whether a PCP instance has a solution is undecidable [24].

Undecidability of transitive local deduction

In the following sections, unless stated otherwise, we let 𝒜 be a class of weakly archimedean, linearly ordered FL_ew-algebras such that for any n ∈ ω there is some A_n ∈ 𝒜 that is not n-contractive. That is to say, there is some a ∈ A_n such that a^{n+1} < a^n. Examples of such classes of algebras are {[0,1]_Ł}, {MV_n : n ∈ ω}, {[0,1]_Π} and the one-generated product algebras. Natural examples of classes of algebras that do not satisfy the above conditions are {[0,1]_G} (and the variety generated by it) and the varieties of MV and product algebras.

By relying on the properties specified above for the class of algebras 𝒜, we can prove the following result.

Theorem 3.1. The problem of determining whether ϕ follows locally from Γ in 4K_𝒜 is undecidable. Moreover, the problem of determining whether ϕ follows locally from Γ in ω4K_𝒜 is also undecidable. In particular, the three-variable fragments of both previous deductive systems are undecidable.

Its proof follows as a simple consequence of Proposition 3.9, which we now proceed to formulate and prove. In order to do so, given an arbitrary instance P = {⟨v_1, w_1⟩, ..., ⟨v_m, w_m⟩} of the Post Correspondence Problem, we define a set of formulas Γ_P ∪ {ϕ_P}. We let Γ_P be the union of the following formulas with variables V = {y, v, w}:

(1) □y ↔ ✸y; (2)

Let us prove some technical lemmas concerning Kripke models with a world in which Γ_P holds but ϕ_P does not. First, we can easily see how the variable y forces certain conditions on the underlying structure of those models.
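Before turning to these lemmas, the arithmetic of the encoding (the base-s concatenation recalled at the end of the preliminaries) can be made concrete. The sketch below is only an illustration of that coding, together with the obvious semi-decidable search for PCP solutions; it plays no role in the reduction itself.

```python
from itertools import product

def digits(x, s):
    """Number of base-s digits of x (x > 0; instances have no empty words)."""
    d = 0
    while x > 0:
        d, x = d + 1, x // s
    return d

def concat(x, y, s):
    """Concatenation of the base-s numerals of x and y: x * s^|y| + y."""
    return x * s ** digits(y, s) + y

def pcp_solutions(pairs, s, max_len):
    """Brute-force search for solutions of a PCP instance up to length
    max_len. The full problem is undecidable: no bound on max_len works
    in general, which is exactly what the reduction exploits."""
    for k in range(1, max_len + 1):
        for idx in product(range(len(pairs)), repeat=k):
            v = w = 0
            for i in idx:
                v, w = concat(v, pairs[i][0], s), concat(w, pairs[i][1], s)
            if v == w:
                yield idx
```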
Γ_P suffices to prove completeness with respect to models where the variable y takes the same value everywhere, except possibly in the root world (whose value is irrelevant for the proof).

Lemma 3.2. Let M ∈ 4K_𝒜 be a transitive A-Kripke model and u ∈ W be such that Γ_P ⊬_{M,u} ϕ_P. Then there is α_y ∈ A such that e(t_1, y) = e(t_2, y) = α_y for all t_1, t_2 ∈ W with Rut_1 and Rut_2.

Proof. Assume Rut_1 and Rut_2, and towards a contradiction let e(t_1, y) < e(t_2, y). Then, by definition, e(u, □y) ≤ e(t_1, y) < e(t_2, y) ≤ e(u, ✸y), contradicting that e(u, (1)) = 1. ⊠

Since the model is transitive, this allows us to affirm that if Γ_P ⊬_{4K_𝒜} ϕ_P, then this happens in a tree M with root u, and so there is α_y ∈ A such that e(t, y) = α_y for every world t ∈ W \ {u}. We will resort to this fact below without further notice.

The way we chose Γ_P and ϕ_P also forces the model (as in the above paragraph) to be of finite depth. Contrary to what happens in the minimal modal logics, where the local deduction is naturally complete with respect to models of finite depth (indeed, bounded by the maximum modal depth of the formulas involved in the derivation), observe this is not the case in general for transitive logics.

Lemma 3.3. Let M ∈ 4K_𝒜 and u ∈ W be such that Γ_P ⊬_{M,u} ϕ_P. Then there is some z ∈ W such that Ruz, e(z, ϕ_P) < 1 and z has finite depth.

Proof. The existence of z ∈ W such that Ruz and e(z, ϕ_P) < 1 follows by definition, since e(u, ϕ_P) < 1. To prove that z has finite depth, we rely on formula (2) from Γ_P, the previous lemma and the formula on the right-hand side of ϕ_P, and prove by transfinite induction on the depth of the worlds that, for any t ∈ W with Rut and any n ∈ ω, if d(t) ≥ n then e(t, v) ≤ α_y^n:

• Assume d(t) = n + 1, and take r ∈ W with Rtr and d(r) = n. Then, for some 1 ≤ i ≤ m, formula (2) gives e(t, v) ≤ e(r, v) · α_y. By the induction hypothesis, and since P does not have empty words, the previous value is less than or equal to α_y^n · α_y, and so e(t, v) ≤ α_y^{n+1}, proving the step.
• Assume d(t) = ω. Then, for any n ∈ ω, there is some r_n ∈ W with Rtr_n and d(r_n) ≥ n. As before, e(t, v) ≤ e(r_n, v) · α_y for all n ∈ ω. By the induction hypothesis, e(r_n, v) ≤ α_y^n, and so e(t, v) ≤ α_y^n for all n ∈ ω.

Now, assume towards a contradiction that d(z) were infinite. From the above it would hold that e(z, v) ≤ α_y^n for all n ∈ ω. Since the algebras in 𝒜 were required to be weakly archimedean, this implies that e(z, v) · e(z, y) = e(z, v). However, since e(z, ϕ_P) < 1, in particular necessarily e(z, v → vy) < 1, contradicting the previous equality and proving the lemma. ⊠

At this point, we have proven completeness with respect to trees of finite depth (by simply taking the model given by the root, the world identified in the previous lemma, and all the successors of the latter). We can now turn our attention to the behaviour of the variables v and w in that model.

Lemma 3.4. Let M ∈ 4K_𝒜 be a tree of finite depth with root u such that Γ_P ⊬_{M,u} ϕ_P, and let z be as in the previous lemma. Then, for each r ∈ W with Rzr or r = z, there are a_r, b_r ∈ ω for which e(r, v) = α_y^{a_r} and e(r, w) = α_y^{b_r}. Moreover, if Rtr then a_r < a_t and b_r < b_t.

Proof. We prove it by induction on the depth of r; we do the case for v, the other one being analogous:

• if d(r) = 0, then the value of e(r, v) follows directly from (2);
• otherwise, observe that |{a_t : Rrt}| = ω would imply that e(r, v) ≤ α_y^n, and thus e(z, v) ≤ α_y^n, for all n ∈ ω. Then, by the same reasoning as in the previous lemma, we would get a contradiction with e(z, ϕ_P) < 1. This implies that |{a_t : Rrt}| is necessarily a finite set, and so it has a maximum element a.
Thus, e(r, v) = (α_y^a)^{s^{|v_i|}} · α_y^{v_i}, proving the first part of the lemma. The last claim is a simple consequence of the above, relying on the fact that e(z, vy) < e(z, v) and e(z, wy) < e(z, w). ⊠

Observe this also proves that we can restrict the proof to witnessed models, since for any modal formula in Γ_P the value taken is no longer an infimum (respectively, supremum) but a minimum (maximum). Our objective is now to prove completeness with respect to the class of linearly ordered models in the sense of Figure 1. Since the previous lemma gives us that the model is witnessed, intuitively we only lack to prove that, for a given world, we can select a particular unique successor (up to transitivity), and that this selection preserves the value of the relevant formulas. Formula (3) in Γ_P takes care of this aspect.

Lemma 3.5. Let M ∈ 4K_𝒜 be a finite tree with root u such that Γ_P ⊬_{M,u} ϕ_P, and let z be as in Lemma 3.3. Then, for each t ∈ W with Rzt or t = z, and such that it has successors, there is some world t_w ∈ W such that Rtt_w, e(t, □v) = e(t_w, v) and e(t, □w) = e(t_w, w).

Proof. Suppose towards a contradiction that there is no common witness for v and w, i.e., there are r_1, r_2 with Rtr_1, Rtr_2 and

• e(t, □v) = e(r_1, v) = α_y^{a_{r_1}},
• e(t, □w) = e(r_2, w) = α_y^{b_{r_2}},
• for any r with Rtr, a_r ≤ a_{r_1} and b_r ≤ b_{r_2}, and one of the two is a strict inequality.

Then, for any r with Rtr, it holds that e(r, vw) ≤ α_y^{a_{r_1}+b_{r_2}−1}, so e(t, □(vw)) ≤ α_y^{a_{r_1}+b_{r_2}−1}. On the other hand, e(t, □v · □w) = α_y^{a_{r_1}} · α_y^{b_{r_2}}. Now, for formula (3) in Γ_P to hold, it is necessary that α_y^{a_{r_1}+b_{r_2}−1} ≤ α_y^{a_{r_1}+b_{r_2}}, and so α_y^{a_{r_1}+b_{r_2}+n} = α_y^{a_{r_1}+b_{r_2}−1} for any n ∈ ω. However, this leads to e(z, vw) = α_y^{a_{r_1}+b_{r_2}−1} = α_y^{a_{r_1}+b_{r_2}+n} = e(z, vw) · α_y, which results in a contradiction, since e(z, ϕ_P) < 1. ⊠

Relying on the previous results, we can conclude a completeness lemma with respect to a very particular class of models: namely, models over frames with the structure in Figure 1 and quite special evaluations. Let us denote by 4K*_𝒜 the class of models definable over frames with the structure in Fig. 1, i.e., for arbitrary but finite n ∈ ω,

• W = {u_0, u_1, ..., u_n}, and
• R = {⟨u_0, u_j⟩ : 1 ≤ j ≤ n} ∪ {⟨u_i, u_j⟩ : 1 ≤ j < i ≤ n}.

Observe there is no bound on the size of the frames, although all of them are finite.

Lemma 3.6. The following are equivalent:
(1) Γ_P ⊬_{4K_𝒜} ϕ_P;
(2) Γ_P ⊬_{4K*_𝒜} ϕ_P.

Proof. Soundness (namely, that (2) implies (1)) is immediate. Concerning the left-to-right direction, assume there is a model M ∈ 4K_𝒜 with u ∈ W such that Γ_P ⊬_{M,u} ϕ_P. Then consider the submodel M′ defined from M by restricting it to the universe W′ = {u, z} ∪ {t ∈ W : Rzt}, for z as in Lemma 3.3. It is a transitive model, since the original M was so, and it clearly has the required frame structure (since z had finite depth in the original model, from some n onwards the sets of worlds of depth n are empty). Taking submodels does not change the value taken at each world by the propositional variables, i.e., for any p ∈ V (and thus, also for any non-modal formula) and any t ∈ W′ it holds that e′(t, p) = e(t, p). Then we have that e′(z, ϕ_P) = e(z, ϕ_P) < 1 (so e′(u, ϕ_P) < 1), and also that e′(u, □y) = α_y = e′(u, ✸y) (from Lemma 3.2), taking care of formula (1) in Γ_P. The remaining cases are the formulas with some modality inside the scope of a □ operation in Γ_P. We just need to check that the values of those formulas are preserved from M to M′ in any world t ∈ W′ \ {u}. To do that, observe the only modal subformulas appearing are □v, □w and □(vw), so it is enough to show the values of those three modal formulas are preserved.
This can easily be done by induction on the depth (over the restricted model) of the world t.

• If d(t) = 0, then t does not have successors in M either, so clearly 1 = e′(t, □ϕ) = e(t, □ϕ) for any formula ϕ.
• For d(t) = n + 1, the world t also has successors in M′, so e(t, □v) = e(t_w, v) by Lemma 3.5, and e′(t, □v) ≤ e′(t_w, v) = e(t_w, v) (since t_w ∈ W′). Moreover, clearly e′(t, □v) ≥ e(t, □v), given that M′ is a submodel of M. Thus e′(t, □v) = e(t, □v), and the same holds for what concerns □w. Moreover, e(t, □(vw)) = e(t_w, vw), so the same reasoning applies. ⊠

It is an easy observation that whenever we use (2) from Γ_P to get that, at a certain world r, there is some 1 ≤ i ≤ m with e(r, v) = e(r, □v)^{s^{|v_i|}} · α_y^{v_i}, this index is unique. Indeed, do not forget that (2) determines with the same index the value of v and that of w, and, since there are no repetitions in P, for 1 ≤ i ≠ j ≤ m it necessarily holds that either v_i ≠ v_j or w_i ≠ w_j. Assuming the above equation held for two different indices leads to having some a ∈ ω such that α_y^a is idempotent and, moreover, considering the equation for the v's (and the same happens for w), that e(z, v) = α_y^b for some b ≥ a. Then e(z, v) · α_y = e(z, v), contradicting once again e(z, ϕ_P) < 1.

It is now natural to obtain an exact characterization of v and w in terms of α_y in each world of a model as in Figure 1 satisfying Γ_P in world u_0 and not satisfying ϕ_P in that world.

Proof. We prove the first claim by induction on j; the details are given only for the v case, the other one being proven in the same fashion.

• If j = 1, we know that u_1 has no successors, so formula (2) from Γ_P directly determines the value of e(u_1, v).
• For j = n + 1, using again formula (2), the value of e(u_{n+1}, v) is determined by that of e(u_{n+1}, □v). From Lemma 3.4 we get that e(u_{n+1}, □v) = e(u_n, v) (observe that the other worlds to which u_{n+1} is related all have smaller depth, and so bigger values of v). Applying the induction hypothesis, we get the desired chain of equalities for e(u_{n+1}, v).

Concerning the second claim, suppose towards a contradiction that it fails for some 1 ≤ j < k. It would then follow that e(u_j, vy) = e(u_j, v) and, trivially, that e(u_k, vy) = e(u_k, v). This contradicts e(u_k, ϕ_P) < 1, since the latter requires that e(u_k, vy) < e(u_k, v). The analogous reasoning serves the case of w. ⊠

It is now a simple observation that, in a model as the one appearing in the above characterization, e(u_k, ϕ_P) < 1 implies that e(u_k, v) = e(u_k, w): otherwise there would be some natural number n ≥ 1 such that e(u_k, v) ↔ e(u_k, w) = α_y^n ≤ α_y, making e(u_k, ϕ_P) = 1.

Putting together all the previous results, we can provide a completeness condition for the failed deductions Γ_P ⊬ ϕ_P. It is now very natural to introduce the reduction itself from the Post Correspondence Problem to local deduction over transitive models. Moreover, as we saw above, the reduction can be restricted to finite models only.

Proposition 3.9. Let P be an instance of the Post Correspondence Problem. Then the following are equivalent:
(1) P is satisfiable;
(2) Γ_P ⊬_{4K_𝒜} ϕ_P;
(3) Γ_P ⊬_{4K*_𝒜} ϕ_P.

Proof. Trivially, (3) implies (2). Moreover, Lemma 3.6 proves that (2) implies (3). On the other hand, the fact that (3) implies (1) follows immediately from Corollary 3.8: indeed, if Γ_P ⊬_{4K*_𝒜} ϕ_P, then from that corollary we know there are some k and a map f : {1, ..., k} → {1, ..., m} such that f(1), ..., f(k) is a solution for P. To prove that (1) implies (3), assume that P has a solution i_1, ..., i_k, and assume without loss of generality that there is no j < k such that i_1, . . .
, i_j is a solution too. By assumption on 𝒜, there is some algebra A ∈ 𝒜 with an element α ∈ A that is not 2·(v_{i_1} ··· v_{i_k})-contractive. Then define the Kripke model M = ⟨W, R, e⟩ by letting W = {u, u_1, ..., u_k}, with R as in the frames of Figure 1 (u being the root), and, for each 1 ≤ j ≤ k, defining the evaluation at u_j by (the evaluation of the variables at u is irrelevant to the evaluation of Γ_P and ϕ_P):

• e(u_j, y) = α,
• e(u_j, v) = α^{v_{i_1} ··· v_{i_j}},
• e(u_j, w) = α^{w_{i_1} ··· w_{i_j}},

where the exponents are the base-s concatenations of the corresponding words. It is now a matter of simple calculations to see that M globally validates the formulas from Γ_P. On the other hand, observe that e(u_k, v ↔ w) = 1 (as i_1, ..., i_k is a solution for P). Since e(u_k, y) = α < 1, and e(u_j, vw → vwy) < 1 for all 1 ≤ j ≤ k (since α was chosen not 2·(v_{i_1} ··· v_{i_k})-contractive), this gives us that e(u_k, ϕ_P) < 1, concluding the proof. ⊠

Theorem 3.1 results as a direct corollary of the previous proposition.

Modal Lukasiewicz logics

We can now turn our attention to two of the modal fuzzy logics studied in the previous section: the ones arising, respectively, from [0,1]_Ł and from {MV_n : n ∈ ω}. We will see some interesting phenomena that are revealed when comparing the minimal modal logics with their corresponding transitive versions.

Interestingly enough, we can prove that the logic ⊢_{K_Ł} is decidable. To the best of our knowledge, previously known examples of logics turning undecidable when transitivity is involved concern more complex situations, referring for instance to the addition of a transitive closure operator to predicate logics [18], [19], or to very expressive logics that include forward and backward accessibility relations and also allow a certain level of quantification [27]. The case studied here thus provides a relatively surprising example of a decidable local deduction whose transitive extension is undecidable.

Crucial to the proof of the decidability of ⊢_{K_Ł} is the continuity of all the underlying propositional operations, which leads to a good behaviour of the Lukasiewicz Kripke models. It can be proven that the logic ⊢_{K_Ł} is complete with respect to witnessed models, by relying on the analogous result for predicate (standard) Lukasiewicz logic ([21], [5]). To prove the completeness of the modal logic with respect to witnessed models, it is only necessary to use the natural translation from modal into predicate logics and back. Since this is lacking in the literature, we proceed with the details; the main technical issue is the analogous completeness result for first-order standard Lukasiewicz logic. No previous knowledge of the topic is required to proceed, though some observations and results from [20], [21] and [5] will be used.

Recall that, given a type of relation symbols {R_i} of respective arities ar(R_i), a (standard FO) Lukasiewicz model is a structure M = ⟨D_M, {R_i^M}_i⟩, where D_M is a non-empty domain and each R_i^M maps D_M^{ar(R_i)} into [0,1]. For a formula ϕ(x), we write ϕ[a]_M to denote the value taken by ϕ in the structure under any evaluation that sends x to a, defined inductively using the operations of [0,1]_Ł for the propositional connectives and letting (∃xϕ)[a]_M = sup{ϕ[d, a]_M : d ∈ D_M}. Since the Lukasiewicz negation is involutive, we have that ∀xϕ(x) = ¬∃x¬ϕ(x) (and, correspondingly, in the modal logic, □ϕ = ¬✸¬ϕ), so below we will refer only to the existential quantifier (and, respectively, to the ✸ modal operator).

An Ł∀-embedding of a structure M into a structure N is a mapping h : D_M → D_N such that for any first-order formula ϕ and any a ∈ D_M^{ar(ϕ)} it holds that ϕ[a]_M = ϕ[h(a)]_N. In particular, the valuation of sentences is preserved.
Moreover, we say that a structure M is witnessed (analogously to the definition for Kripke models) whenever for any formula ∃xϕ(x, y) and any tuple a of |y| elements of the domain there is some d ∈ D_M such that (∃xϕ)[a]_M = ϕ[d, a]_M. On the other hand, given two [0,1]_Ł Kripke models M and N, a mapping h : W_M → W_N is an ŁK-embedding whenever for any modal formula ϕ and any v ∈ W_M it holds that e(v, ϕ) = e(h(v), ϕ).

Every standard FO Lukasiewicz structure can be Ł∀-embedded into a witnessed one ([21], [5]); from that, we can easily get the analogous result for K_Ł. Consider a Lukasiewicz Kripke model M and its corresponding FO model M′. The previous result gives us a witnessed FO model N′ into which M′ can be Ł∀-embedded with a mapping σ. In particular, true sentences are preserved, so N′ |= C_R. Thus, we can use the previous bijection and refer to the Kripke model N over the domain D_{N′} such that e_N(v, ϕ) = ϕ^♯[v]_{N′} for any modal formula ϕ. It is easy to see that N is witnessed too: pick a modal formula ✸ϕ and a world v in the universe of N. If e(v, ✸ϕ) = 0, it is trivially witnessed (by any related world); otherwise, the element witnessing the corresponding existential formula in N′ also witnesses ✸ϕ at v in N. Clearly, the same mapping σ that was an Ł∀-embedding from M′ to N′ is also an ŁK-embedding from M to N, since for any formula ϕ and any v ∈ W_M, e_M(v, ϕ) = ϕ^♯[v]_{M′} = ϕ^♯[σ(v)]_{N′} = e_N(σ(v), ϕ). ⊠

Corollary 4.3. ⊢_{K_Ł} is complete with respect to witnessed models.

From here, it is not hard to prove the decidability of ⊢_{K_Ł}, in a fashion similar to the procedure given in [21, Def. 3]. Observe that, since Γ ∪ {ϕ} is a finite set, it has a maximum modal depth N, and the set Σ_i of its ✸-subformulas of modal degree i is empty for all i ≥ N. Assume V is the set of propositional variables of Γ ∪ {ϕ}. Let us define V^✸ as the following extended set of propositional variables, combining the world names with the original set V:

• x_w for each x ∈ V and w ∈ W;
• (✸ψ)_w for each ✸ψ ∈ Σ_i and w ∈ W_i, for 1 ≤ i < N.

We now use this language to define a set of propositional formulas that intrinsically enforce the same conditions holding in a corresponding Kripke model. To do that, let us first define a translation ♯ from the original modal formulas (over V) to their natural correspondents over V^✸. Let us then define the set of formulas Ψ(Γ ∪ {ϕ}) that determines the behaviour of the modal formulas/variables, as the union of

(✸ψ)_{w_σ} ↔ ψ^♯(w_{σ,✸ψ})   and   ψ^♯(w_{σ,✸χ}) → (✸ψ)_{w_σ}

for each (✸ψ)_{w_σ} ∈ V^✸ and each w_{σ,✸χ} ∈ W. Observe that if (✸ψ)_{w_σ} ∈ V^✸, then for any w_{σ,✸χ} ∈ W the formula ψ^♯(w_{σ,✸χ}) is in the language of V^✸. Thus Ψ(Γ ∪ {ϕ}) is also a finite set of propositional Lukasiewicz formulas over the set of variables V^✸.

Lemma 4.4. Γ ⊬_{K_Ł} ϕ if and only if Ψ(Γ ∪ {ϕ}) ∪ {γ^♯(w_0) : γ ∈ Γ} ⊬_Ł ϕ^♯(w_0).

Proof. To prove the left-to-right direction, assume Γ ⊬_{K_Ł} ϕ. From Corollary 4.3 we know there are a witnessed model M and w ∈ W such that e(w, γ) = 1 for all γ ∈ Γ and e(w, ϕ) < 1. Since the model is witnessed, for each formula ✸ψ and each world v ∈ W there is some world v_{✸ψ} such that Rvv_{✸ψ} and e(v, ✸ψ) = e(v_{✸ψ}, ψ). Let us denote w by w_0 and, inductively from w_0, let w_{σ,✸ψ} denote the world (w_σ)_{✸ψ}.

Corollary 4.5. The finitary companion of ⊢_{K_Ł} is decidable.

A second observation concerns the relation between the modal logics arising from the standard MV algebra (⊢_{K_Ł}) and from the family of all finite MV algebras (⊢_{K_Ł^ω}). It is well known that at the propositional level the two logics coincide (see e.g. [20]). This fact, in combination with Lemma 4.4 above, gives us a direct proof of the fact that the (minimal) local modal logics arising from K_Ł and from K_{{MV_n : n∈ω}} coincide too.
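The exact indexing of the auxiliary worlds and variables in the construction above did not survive extraction here, so the following sketch is only a guess at its shape, in the spirit of [21]: world names w_σ are sequences of ✸-subformulas (the designated witnesses), and Ψ forces each variable (✸ψ)_{w_σ} to equal ψ translated at its designated witness and to dominate ψ translated at every other child. All function and constructor names are ours.

```python
from itertools import product

def world_names(diamonds, N):
    """Names w_sigma for a witnessed tree of modal depth < N: sigma is a
    sequence of Diamond-subformulas. This is an over-approximation of the
    W used in the text, whose stratification into the W_i is lost here."""
    W = [()]                                  # the root w_0
    for k in range(1, N):
        W += list(product(diamonds, repeat=k))
    return W

def psi_constraints(diamonds, N, sharp):
    """Schematic constraints of Psi(Gamma, phi). sharp(psi, sigma) stands
    for the propositional translation of psi, variables renamed to sigma;
    each d in `diamonds` is a pair ('dia', psi)."""
    out = []
    for sigma in world_names(diamonds, N - 1):
        for d in diamonds:
            psi = d[1]
            # the designated witness realizes the supremum ...
            out.append(('iff', ('var', d, sigma), sharp(psi, sigma + (d,))))
            # ... and every other child only bounds it from below
            for c in diamonds:
                out.append(('imp', sharp(psi, sigma + (c,)), ('var', d, sigma)))
    return out
```

Since Ψ is finite and propositional Lukasiewicz consequence from finite sets is decidable, this yields the decision procedure of Corollary 4.5.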
Returning to this coincidence: while it is immediate that ⊢_{K_Ł} ⊆ ⊢_{K_Ł^ω}, the other inclusion follows by using the same construction of a Kripke model from a propositional homomorphism that sends the premises to 1 and the conclusion to a value below 1, now simply taking h : V^✸ → MV_n for some suitable (big enough) n.

Surprisingly enough, the corresponding transitive logics do not coincide, as the following construction shows. Suppose there are n ∈ ω and a transitive model M over MV_n = {0, 1/n, ..., n/n}, with v a world of the model in which e(v, x) = l/n with 0 < l < n, l ∈ ω. Assume further that e(v, ¬✸□⊥) = 1, so any successor of v has itself some successor world. For the other premise to hold in v, there must be some sequence of worlds {v_i : i ∈ ω} with v_0 = v, Rv_iv_{i+1} and e(v_i, x) = e(v_{i+1}, x^2). However, for this sequence it would then hold that e(v_i, x) < e(v_{i+1}, x), while e(v_i, x) < 1 for all i (otherwise, the whole sequence would evaluate x to 1, and so would the initial world v). Since MV_n has finitely many elements, this increasing sequence cannot exist, proving our claim. ⊠

It can be proven that the previous example also serves to differentiate ⊢_{4K_Π} and the transitive modal logic over a one-generated subalgebra of [0,1]_Π. However, we do not know whether their corresponding minimal (non-transitive) modal logics coincide. A consequence of this fact is that there cannot exist a set of axioms and rules G4 such that both

• the extension of ⊢_{K_Ł} with G4 coincides with ⊢_{4K_Ł}, and
• the extension of ⊢_{K_Ł^ω} with G4 coincides with ⊢_{4K_Ł^ω}.

In particular, the usual axiom 4: □ϕ → □□ϕ is no longer enough to characterize the transitive models of the class in at least one of the previous cases.

The presence of ∆

As in fragments of predicate logics (see e.g. [2]), in the presence of the projection operation ∆ we can translate the undecidability results to the set of theorems of the respective logics, and also to the local SAT problem, since, with ∆, the problems of validity and local SAT are easily reducible one to the other, contrary to the situation without ∆. The observation is completely natural but nevertheless relevant for possible applications of these logics, since in practical uses the possibility of talking about the absolute truth of a formula seems reasonable. However, the fact that in its presence we more easily fall into undecidable questions gives an idea of the possible step up in expressive power taken when adding ∆ to the language.

The Monteiro-Baaz ∆ operation is defined, for an arbitrary FL_ew-chain, by letting ∆(a) = 1 if a = 1, and ∆(a) = 0 otherwise. Then the Deduction Theorem, which does not necessarily hold in the modal logics studied in Section 3,(2) is fully recovered. Indeed, we have that for any class C of models evaluated over FL_ew-chains,

γ ⊢_C ϕ if and only if ⊢_C ∆γ → ϕ.

Allow us to write ⊢_C^∆ to denote the logic over the class of models C whose language has been expanded by the ∆ operation, interpreted (at each world) as described above. We then have the following:

(1) The set of valid formulas of ⊢_{4K_𝒜}^∆ is undecidable. Moreover, the set of valid formulas of ⊢_{ω4K_𝒜}^∆ is also undecidable.
(2) The problems of local SAT in 4K_𝒜 and in ω4K_𝒜 with ∆ are undecidable.

Proof. (1) follows naturally from the DT and Theorem 3.1. For the second, it is trivial that ϕ is valid in ⊢_{4K_𝒜}^∆ (resp. ⊢_{ω4K_𝒜}^∆) if and only if ¬∆ϕ is not locally SAT in 4K_𝒜 (resp. ω4K_𝒜) with ∆. ⊠

(2) Observe that not even the usual local DT (analogous to the one holding in the propositional Π and Ł logics) seems natural to prove: while for each particular model it is true that γ |=_M ϕ iff there is some n ∈ ω such that |=_M γ^n → ϕ, this index may vary from one model to another and, in particular, the family might fail to have a supremum in ω. A deeper study of this question is left for future work.
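The recovered deduction theorem can be checked world-wise: ∆γ → ϕ takes value 1 at a world exactly when truth of γ there forces truth of ϕ. A tiny self-contained check over the Lukasiewicz chain (our illustration only):

```python
def delta(a):              # Monteiro-Baaz projection on a chain
    return 1.0 if a == 1.0 else 0.0

def luk_res(a, b):         # residuum of the Lukasiewicz t-norm
    return min(1.0, 1.0 - a + b)

# World-wise form of the recovered DT: Delta(gamma) -> phi has value 1
# exactly when "gamma is 1 implies phi is 1" holds at that world.
for g in (0.0, 0.4, 1.0):
    for f in (0.0, 0.7, 1.0):
        entails = (g < 1.0) or (f == 1.0)
        assert (luk_res(delta(g), f) == 1.0) == entails
```

Note that the analogous equivalence fails without ∆: for g = f = 0.4, g → f already has value 1 even though truth is not preserved in any graded sense, which is why the crisp projection is needed.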
Conclusions and Future work

We have studied the computability of a large family of transitive modal many-valued logics, proving their undecidability. Moreover, we have compared the behaviour of the minimal Lukasiewicz modal logics (over [0,1]_Ł and over {MV_n : n ∈ ω}) with that of their corresponding transitive versions, observing some particular behaviours that contrast with the known results for other modal logics.

Several interesting open problems remain after this study. A first natural question is whether transitive modal Gödel logic (over models with a crisp accessibility relation, in particular) is decidable, which would provide a full understanding of the three main left-continuous t-norm based logics. In ongoing work we are studying this question, which is non-trivial in light of [7], since the logic is not necessarily complete with respect to models of finite depth. On the other hand, the question of whether the local modal product logic over crisp-accessibility models is decidable also remains open. In particular, the proof from [10] concerning the decidability of the SAT and theoremhood questions over the analogous logic over valued-accessibility models seems hardly adaptable to the crisp case, since it is crucial in that proof to allow the accessibility relation to be valued in (0,1).

Acknowledgements

This project has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 689176 (SYSMICS project) and by the grant no. CZ.02.2.69/0.0/0.0/17 050/0008361 of the Operational programme Research, Development, Education of the Ministry of Education, Youth and Sport of the Czech Republic, co-financed by the European Union.
Extracellular Vesicles in Physiology, Pathology, and Therapy of the Immune and Central Nervous System, with Focus on Extracellular Vesicles Derived from Mesenchymal Stem Cells as Therapeutic Tools

Extracellular vesicles (EVs) are membrane-surrounded structures released by most cell types. They are characterized by a specific set of proteins, lipids and nucleic acids. EVs have been recognized as potent vehicles of intercellular communication that transmit biological signals between cells. In addition, pathophysiological roles of EVs in conditions such as cancer, infectious diseases and neurodegenerative disorders are well established. In recent years, focus has shifted to the therapeutic use of stem cell-derived EVs. The use of stem cell-derived EVs presents distinct advantages over whole stem cells, as EVs do not replicate and, after intravenous administration, are less likely to become trapped in the lungs. From the therapeutic perspective, the most promising cellular sources of EVs are mesenchymal stem cells (MSCs), which are easy to obtain and maintain. Therapeutic activity of MSCs has been shown in numerous animal models, and the beneficial paracrine effect of MSCs may be mediated by EVs. The various components of MSC-derived EVs, such as proteins, lipids, and RNA, might each play a specific therapeutic role. In this review, we characterize the role of EVs in the immune and central nervous system (CNS), present evidence for defective signaling of these vesicles in neurodegeneration, and discuss the therapeutic role of EVs in the CNS.

INTRODUCTION

Mesenchymal stem/stromal cells (MSCs) are of great interest in regenerative therapy for tissues damaged by various pathological conditions. The chief therapeutic attributes of MSCs are their ability to migrate into injured sites (Kraitchman et al., 2005; Kim et al., 2015), promote functional recovery and modulate immune responses. The process of MSC homing, however, is not very effective, and various strategies have been attempted to enhance it. Engineering of MSCs facilitates reaching target organs (Nowakowski et al., 2016). Other approaches are based on more direct routes of cell delivery, which, however, are somewhat more invasive. After conditions determining the safety of intra-arterial delivery were established (Janowski et al., 2013; Cui et al., 2015), this route was found effective in an animal model of stroke (Toyoshima et al., 2015). The intrathecal route is even less invasive and has also been shown to be effective (Lim et al., 2011). The intracerebral route is more invasive and is likely to be used mainly as an adjunct to neurosurgical treatment such as evacuation of hematoma. In general, it was thought that the close proximity of transplanted MSCs is pivotal for achieving a substantial therapeutic effect, as the cells could act through various mechanisms such as direct cell-to-cell contact and secreted factors. However, an extensive meta-analysis of preclinical results of intravenous application of stem cells revealed a good correlation between the dose of infused cells and the therapeutic effect, whereas no such correlation exists between the outcome and the number of cells engrafted within the disordered brain area (Janowski et al., 2010). This indicates that there are substantial therapeutic mechanisms not directly related to the presence of cells within the injury site.
In addition, accumulating evidence over the past few years supports the notion that the predominant mechanism by which MSCs act in tissue repair is their paracrine/secretory activity. Indeed, MSCs provide the microenvironment with a multitude of trophic signals, including growth factors and cytokines. It is likely that, in parallel to soluble factors, MSCs release EVs that contribute to the reparative process through intercellular cross-talk. The biological relevance of extracellular vesicles (EVs), which mirror their parental cells, has been established in different experimental settings. Recent discoveries suggest that they have protective properties similar to those of their cellular counterparts, conditioning and reprogramming the surrounding microenvironment and influencing a variety of endogenous responses, in particular in injured tissues. It has been shown that EVs can affect other cells via transfer of genetic cargo, transfer of receptors and, ultimately, initiation of signaling pathways (Momen-Heravi et al., 2013). They are able to modify cell fate, function and plasticity. Recent data indicate that EVs have the capacity to modulate immune responses and facilitate tissue regeneration. Therefore, their use may represent an interesting therapeutic alternative to cell-based approaches for various diseases. EVs released by MSCs are a heterogeneous population that differs in size and biogenesis. They contain proteins, bioactive lipids and nucleic acids that can mediate various signaling functions contributing to homeostasis. Transfer of these molecules to neighboring cells promotes cell-to-cell communication and modifies the activity of target cells. In the central nervous system (CNS), probably more than in other organs, such communication between neurons and glial cells is crucial under physiological conditions. There is also strong evidence that EVs play a role in learning and memory (Smalheiser, 2007). Moreover, the biomolecules delivered by these structures may support and protect neurons, remove debris and infectious agents, and control inflammation in pathological situations. They are involved in the removal of misfolded proteins, harmful cell metabolic products and viral particles (Inal et al., 2012; Ohno et al., 2013). The recent literature indicates that microvesicles have the potential to transfer a collection of biomolecules between cells locally or over long distances through the blood or other biological fluids (Baglio et al., 2012; Frühbeis et al., 2013; Raposo and Stoorvogel, 2013). The circulation half-life of EVs in blood is approximately 2 min (Takahashi et al., 2013; Saunderson et al., 2014), but they have been detected in the lungs, liver, spleen, and pancreas 48 h after systemic injection (Wiklander et al., 2015). This opens novel therapeutic perspectives aimed at the development of cell-free strategies based on the use of the MSC secretome as a potentially more advantageous alternative to cell-based therapy approaches. In this review we summarize the role of EVs in the pathophysiology of different neurological and immunological conditions, the properties and functions of EVs derived from MSCs, and their potential therapeutic role in neurological disorders.

TYPES AND PRODUCTION OF EVs

Extracellular vesicles can be categorized into three main classes based on their mode of origin: exosomes, shedding microvesicles and apoptotic bodies (Figure 1).
In broad terms there are three types of EVs; however, in the literature the nomenclature is inconsistent and the term microvesicles is often used as an umbrella term encompassing exosomes and shedding microvesicles (Lai et al., 2015). Exosomes represent a specific subtype of secreted vesicles. They are presently the best-characterized species of EVs. Exosomes arise in the endocytic pathway and are released by exocytosis through a mechanism dependent on cytoskeleton activation regulated by the p53 protein but independent of cellular calcium influx (Tetta et al., 2011; Biancone et al., 2012). They are small spherical vesicles, 30-120 nm in size, cup-shaped, limited by a lipid bilayer, and constitute a rather homogeneous population. Exosomes are derived from late endocytic compartments known as multivesicular bodies (MVBs). They are produced by inward invagination of endosomal membranes to form MVBs, which subsequently fuse with the plasma membrane and release their intraluminal vesicles as exosomes into the extracellular milieu (Lai and Breakefield, 2012). As endocytosis is most active at unique microdomains in the plasma membrane called lipid rafts, exosomes have membranes enriched in elements of lipid rafts such as GM1 gangliosides and transferrin receptors (Tan et al., 2013). They are rich in annexins, tetraspanins (CD63, CD81, and CD9) and heat shock proteins (Hsp60, Hsp70, and Hsp90), expose clathrin and caveolins as well as endosome-specific proteins such as Alix and Tsg101 and cell-type-specific proteins (Biancone et al., 2012; Frühbeis et al., 2013; Sabin and Kikyo, 2014). Exosomes carry characteristic lipids and contain cholesterol, ceramide, sphingomyelin, and phosphatidylserine (Subra et al., 2007). As mentioned above, nucleic acids such as mRNA and miRNA are also present in exosomes (Mathivanan et al., 2010). Shedding vesicles, known as ectosomes or microvesicles, are another class of EVs. As the name implies, they are shed directly from the plasma membrane of the cell. Microvesicles are heterogeneous in size, ranging from 100 nm to 1 µm. Their release is initiated by outward budding from the membrane surface followed by a fission event similar to the abscission step observed in cytokinesis (Turturici et al., 2014). Shedding of vesicles is a physiological phenomenon that accompanies cell activation and growth. Their detachment from small cytoplasmic protrusions depends on calcium influx, calpain, scramblase, floppase, and cytoskeleton reorganization (Cocucci et al., 2009; Tetta et al., 2011). Calcium ions are responsible for the changes in the asymmetric phospholipid distribution of the plasma membrane that lead to the formation of shedding vesicles (Biancone et al., 2012). The release of microvesicles occurs from all types of cells in the resting state, or upon activation by soluble factors, oxidative stress, hypoxia or shear stress. However, the main function of shedding vesicles is signaling through specific interaction with target cells and the transfer of genetic information (mRNA). Microvesicles influence the behavior of target cells in multiple ways, such as forming signaling complexes by direct stimulation, transferring receptors between cells or delivering proteins. Microvesicles may transmit miRNA to neighboring cells, which can alter the expression of genes in these cells. The content of microvesicles differs to some extent from that of exosomes.
Shedding vesicles lack proteins of the endocytic pathway but expose high amounts of phosphatidylserine, contain proteins associated with lipid rafts such as integrins and flotillins, and are enriched in cholesterol, sphingomyelin, and ceramide (Mathivanan et al., 2010). Although tetraspanins are considered unique markers for exosomes, they can in some cases also be expressed in shedding microvesicles. Apoptotic bodies represent another type of EVs. Although they resemble microvesicles, they can be distinguished by their large size and irregular shape (György et al., 2011). In contrast to exosomes and microvesicles, which derive from healthy cells, apoptotic bodies are released during apoptosis. They are formed during the late stage of apoptosis and contain nuclear material, cellular organelles and membrane contents. They express phosphatidylserine on their surface and have a permeable membrane (Turturici et al., 2014). Apoptotic bodies tend to elicit an anti-inflammatory or tolerogenic response when taken up by neighboring cells (Lai et al., 2015; Table 1).

PROTEOMIC ANALYSIS OF MSC-EVs

Extracellular vesicles are composed of various molecules including proteins, lipids, and nucleic acids. Secreted proteins participate in intercellular communication and play roles in cell signaling, differentiation, cell adhesion, angiogenesis, and apoptosis. A variety of cytokines, chemokines, growth factors, extracellular matrix (ECM) proteins and remodeling enzymes have been identified in MSC-derived EVs (Kim et al., 2012). Kim et al. (2012), using mass spectrometry, profiled the proteome of microvesicles (50-200 nm in size) harvested by ultracentrifugation from human bone marrow MSCs and identified 730 proteins. Another study distinguished 857 proteins with the same technique in exosomes isolated from a human ESC-derived mesenchymal stem cell (MSC; huES9.E1) line by high performance liquid chromatography. In these two articles a common subset of 315 proteins can be found (Supplementary Tables S1 and S2). Among these sets of proteins, in addition to cytoplasmic proteins, a remarkable number of membrane proteins have been found. Proteins located in the plasma membrane and cytoplasm are more commonly sorted into EVs than proteins from the nucleus and mitochondria of different cell types (Yoon et al., 2014). Specific markers of MSCs, i.e., CD9, CD63, CD81, CD109, CD151, CD248, and CD276, as well as surface receptors (PDGF-RB, EGF-R, and PLAUR) involved in tissue recruitment, and signaling molecules (RRAS/NRAS, Wnt5B, MAPK1, GNA13/GNG12, RHO, CDC42, and VAV2) controlling self-renewal and cell differentiation of human BM-MSCs have been reported. Consistent with previous findings, the EV proteome includes proteins associated with EV biogenesis and trafficking. Proteins implicated in intracellular transport and fusion, i.e., RAB proteins, are also frequently present in EVs (Kowal et al., 2014, review). RAB proteins are associated with granule secretion, Golgi apparatus transport, tight junction formation and ligand sequestration at the plasma membrane. These proteins, i.e., RAB1A, RAB2A, RAB5A/B/C, RAB7A, and RAB8A, regulate docking and fusion of MVs with the recipient cell as well as their proper targeting to various cellular compartments. Functional properties are also represented by proteins engaged in BM-MSC-MV cell adhesion (FN1, EZR, IQGAP1, CD47, LGALS1/LGALS3, and integrins), migration, and morphogenesis.
Proteomic analyses have also detected many important groups of peptides, i.e., ACTA, Alix, ANX, HSP, TUB, and YWHA, which are secreted in a regular manner and are present in MVs derived from human ESC-MSCs. Some of these molecules were also characterized by Thery et al. (2009) in MVs derived from immune cells. Among them, tetraspanins, clathrin, annexins, GAPDH, PK, EEF1A1, MFGE8, MHC class I, cofilin 1, ezrin, radixin, moesin, actin, and tubulin were found in at least 50% of all examined exosomes (Thery et al., 2009). Nowadays, there are two public databases, EVpedia and ExoCarta, containing data on EV components of different cell types from several studies (Mathivanan et al., 2012; Choi et al., 2013). Similar to exosomes from other sources, the protein components of MSC-derived exosomes do not remain constant, owing to the heterogeneity of MSCs. Moreover, variations in cell preparation influence the secretome profile of MSCs derived from different sources (Lavoie and Rosu-Myles, 2013). Differences in protein content have also been detected among various batches of MSC-EVs. The proteomic analysis of EVs derived from MSCs isolated from different sources reveals features distinguishing them from EVs derived from other cell types (Skalnikova et al., 2011), as well as differences among microvesicles originating from MSCs of varying sources. Only a few studies have identified the whole proteome contained in MSC-derived EVs. However, the functional differences between EVs originating from distinct MSC sources clearly indicate differences in their composition. In vitro, both dorsal root ganglion neuron and cortical neuron cultures react differently to treatment with exosomes derived from bone marrow (BM), umbilical cord blood (UCB), chorion (Cho-SC) and human menstrual fluid (MenSC) MSCs. Of all these vesicles, only MenSC-derived exosomes were able to enhance neurite outgrowth in cortical neuron cultures, while Cho-SC-derived exosomes even caused a decrease in the total neuron branch number. Moreover, BM- and MenSC-derived exosomes increased the rate of neuritic growth in dorsal root ganglion neuron cultures in comparison with control cells (Lopez-Verrilli et al., 2016). Similar observations were made in glioblastoma research. Among microvesicles (MVs) acquired from BM, UCB, and adipose tissue (AT) MSCs, only BM- and UCB-derived MVs decreased the proliferation rate of a glioblastoma cell line, whereas AT-MSC MVs had the opposite effect. Induction of apoptosis in the neoplastic cells was observed after treatment with microvesicles from BM- and UCB-MSCs, with no effect in the case of AT-MSC MVs (Del Fattore et al., 2015b). Furthermore, such functional differences have been demonstrated even between vesicle sub-populations from the same source. An exosome-enriched fraction derived from BM-MSCs enhanced neurite outgrowth, whereas the microvesicle-enriched fraction showed an inhibitory effect (Lopez-Verrilli et al., 2016). More comparative studies of EVs derived from MSCs of different sources are required. Based on these data, it appears that MSC-EVs retain many characteristics of the MSCs themselves. Interestingly, the metalloproteinase inhibitors TIMP-1 and TIMP-2 were expressed only in human BM-MSC-EVs but not in the parental cells (Vallabhaneni et al., 2015). In the literature we can find a few examples of proteins that were present in microvesicles although they were not detected in the cells of origin.
The authors of these articles attribute this phenomenon to the existence of a very precise protein-sorting system during microvesicle biogenesis, or to the limitations of protein identification techniques (Table 2), which often suffer from a high detection threshold or the necessity of normalizing results to the total protein level.

LIPIDOMIC ANALYSIS OF EVs

In addition to proteins, EVs contain bioactive lipids. As for proteins, the lipid composition of EVs is distinct from that of the cell of origin. Internal membranes of EVs isolated from different cell types are enriched in lysobisphosphatidic acids, which modulate the budding process, and in lipids associated with lipid rafts such as cholesterol, ceramide, sphingolipids, and glycerophospholipids with saturated fatty-acyl chains (Urbanelli et al., 2015). Sphingomyelin and cholesterol allow the tight packing of lipid bilayers and increase the rigidity and stability of EVs derived from different cells, prevent their recognition by blood components and uptake, and facilitate the fusion of EVs (Yoon et al., 2014). EVs also contain many lipid mediators, such as prostaglandins, and enzymes involved in their synthesis from membrane phospholipids. Subra et al. have shown the presence of a set of phospholipases (A2, C, and D) in EVs isolated from RBL-2H3 cells. A large panel of free fatty acids, including arachidonic acid, was also detected in EVs from mast cells (Subra et al., 2010).

PROFILING RNA CONTENT IN EVs

The RNA cargo has been well established as a component of EVs isolated from different cell types (Ratajczak et al., 2006). Various RNA species have been detected within EVs. MicroRNA (miRNA) is the most abundant RNA species in human plasma, making up over 76% of all mappable reads (Huang et al., 2013). Detailed analysis has shown that it is actually not mature miRNA but precursor miRNA (pre-miRNA) that is mostly present in exosomes isolated from ESC-MSCs. Profound discrepancies have been found between the exosomal and cellular content of miRNA, suggesting an active process of sorting and packaging of miRNA into exosomes (Zhang J. et al., 2015). MSC-derived exosomes also contain a significant amount of transfer RNA (tRNA), with striking differences in content between cells of AT or bone marrow origin, while no difference in miRNA content between these two cell sources has been found (Baglio et al., 2015). This may account for the differences observed between them (Muhammad et al., 2015). Translatable and fragmented mRNA is also present in EVs from different biological fluids (Rani et al., 2011). The RNA content varies depending on the exosome origin; for instance, fragments of ribosomal RNA (rRNA) were the most abundant RNA species found in breast cancer-derived EVs (Jenjaroenpun et al., 2013). Piwi-interacting RNA (piRNA) has also recently been detected in exosomes isolated from human saliva (Ogawa et al., 2013). It was recently found that EVs may not only carry RNA but also process it. Such processing was observed in cancer-derived EVs, and the processed RNA was toxic to primary human cells (Chakrabortty et al., 2015). The high interest in the RNA cargo of exosomes benefits from new methods of RNA isolation, which may bring more detailed characterization in the near future (Enderle et al., 2015).

MECHANISM OF CELLULAR UPTAKE OF EVs

Extracellular vesicles released from parental cells may be broken down, thus releasing their content into the extracellular space, or neighboring cells may internalize them.
The pathways through which EVs enter target cells impact EV-mediated biomolecule delivery. Several types of interactions between EVs and target cells have been demonstrated. The interaction may be direct, resulting in EV fusion or endocytosis, as observed between mouse dendritic cells (Mathivanan et al., 2010; Montecalvo et al., 2012), or indirect, by binding to surface receptors, as visualized between tumor and immune cells (Clayton and Mason, 2009; Pan et al., 2014). EV uptake is initiated by specific receptor-ligand interactions. The receptor-ligand binding is determined by several molecules, such as integrins, tetraspanins, galectins and other adhesion molecules present on the cell surface and on EVs isolated from different cell types (Rana et al., 2011; Raposo and Stoorvogel, 2013). The pattern of their expression is consistent with that of the cell of origin. The correct orientation of these receptors enables them to encounter multiple ligands after their secretion from the cell (Morel et al., 2004). The presence of β1 and β2 integrins on exosomes of various cellular sources has been shown. Clayton et al. (2004) demonstrated that integrins on exosomes derived from B cells are capable of interacting with the surrounding ECM and adhere to ECM components such as collagen and fibronectin. These adhesive interactions may limit the diffusion of exosomes from the site of secretion. In inflammation or tissue injury, disruption of the ECM leads to the release of ECM-bound exosomes, liberating them to interact with resident or inflammatory cells expressing up-regulated adhesion molecules, i.e., ICAM-1, LFA-1, TIM-1, or TIM-4 (Thery et al., 2009). EVs may also potentiate ECM digestion through their inclusion or activation of matrix metalloproteinases (MMPs) such as MMP-2 and MMP-9 (Candela et al., 2010). Some studies revealed that EVs derived from platelets contain cytokine receptors (TNF-RI and TNF-RII), platelet endothelium receptors (CD41, CD61, and CD62) and specific ligands (CD40L and PF-4), which can be transported into target cells and enable platelet adhesion (Baj-Krzyworzeka et al., 2002). Furthermore, platelet-derived EVs are able to interact with monocytes and endothelial cells but not with neutrophils (Lösche et al., 2004), whereas EVs derived from neutrophils interact with endothelial cells, monocytes and dendritic cells (DCs; Gasser et al., 2003; Eken et al., 2008). Once attached to the plasma membrane, EVs move in a slow drifting mode; the motion then changes to a rapid directed mode, indicating that EV internalization has occurred (Tian et al., 2013). After cellular uptake, EVs are segregated within endosomes and either fuse with lysosomes for degradation or fuse with endosome membranes, thus releasing their cargo into the cytoplasm (Turturici et al., 2014). Other studies provided evidence for the accumulation of EVs in phagocytic or endocytic compartments and suggest that EV uptake depends on the actin cytoskeleton, dynamin-2 and phosphatidylinositol 3-kinase activity (Raposo and Stoorvogel, 2013).

BIOLOGICAL ACTIVITIES OF EVs

A very large body of evidence shows that EVs are important regulators of many biological functions, such as tissue homeostasis and the immune response. EVs may influence the behavior of target cells through several different mechanisms. First, they may act as signaling complexes.
Indeed, EVs express several surface molecules, e.g., ICAM-1, which interacts with the specific receptor LFA-1 present on T cells, or the Delta-like 4 ligand, which binds to Notch receptors expressed by endothelial and neuronal cells, thus activating these cells (Nolte-'t Hoen et al., 2009; Biancone et al., 2012). EVs play an important role in signaling and morphogenesis during development. It was demonstrated that certain morphogens, e.g., sonic hedgehog or retinoic acid, associated with the epithelial cell membrane are released via vesicles in response to FGF signaling (Greco et al., 2001). Extracellular vesicles may also transfer receptors, proteins, or bioactive lipids between cells after fusion with the target cell membrane. For example, the chemokine receptors CXCR4 or CCR5 can be transferred from lymphocytes to non-lymphoid cells (Rozmyslowicz et al., 2003). EV-mediated transfer of adhesion molecules between platelets and hematopoietic cells has also been described (Baj-Krzyworzeka et al., 2002). Another biological activity of EVs is connected with delivering proteins to target cells. EVs may modulate the function of target cells by transferring intracellular proteins. By conveying pro-angiogenic factors, e.g., platelet-derived growth factor (PDGF), vascular endothelial growth factor (VEGF), basic fibroblast growth factor (bFGF) or leptin, EVs derived from CB-MSCs or BM-MSCs, or shed from tumor cells, can activate angiogenesis (Taraboletti et al., 2006; Zhang et al., 2012; Bian et al., 2014; Chen et al., 2014). It has been shown that EVs derived from activated monocytes are able to regulate apoptosis in target cells by transferring caspase-1 (Sarkar et al., 2009). Similarly, EV-related lipids induce several biological responses. The glycosphingolipids present in intracerebrally administered exosomes bind to beta-amyloid, clear it from the brain and decrease pathology in a mouse model of Alzheimer disease (Yuyama et al., 2014). However, beta-amyloid induces the incorporation of C18 ceramides into EVs produced by astrocytes, which in turn have pro-apoptotic properties. In turn, the gangliosides GM1 and GM3 present in exosomes isolated from neuroblastoma cells facilitate the aggregation of alpha-synuclein, a protein involved in the development of Parkinson disease (Grey et al., 2015). The role of lipids has not been specifically investigated in MSC-derived EVs. The miRNA cargo mediates many biological effects through inhibition of specific mRNAs (Huang et al., 2015). For example, miR-16 was capable of downregulating VEGF in breast cancer cells. Human ESC-MSC-derived exosomes are especially abundant in the let-7 family of miRNA, which, through HNF4A suppression, contributes to the maintenance of renewal of recipient stem cells (Koh et al., 2010). It was shown that mRNA shuttled between cells is functional: BM-MSC-derived exosomes transfer IGF-1R mRNA, from which protein is produced (Tomasoni et al., 2013). However, most of the mRNA present in exosomes is highly fragmented and cannot serve as a template for protein production; nevertheless, a very specific pattern of mRNA fragmentation was found within exosomes, with enrichment in the 3′-untranslated regions. Since these regions are rich in miRNA binding sites, exosomal mRNA can compete with intracellular miRNA and disinhibit mRNA translation (Batagov and Kurochkin, 2013). No specific biological activity of the piwi-interacting RNA, tRNA, or rRNA present in exosomes has been reported.
GENERAL CONSIDERATIONS ON THE ROLE OF EVs IN THE FUNCTION OF THE IMMUNE SYSTEM

Extracellular vesicles are involved both in promoting and in inhibiting the immune response, depending on their cell of origin and on the signals present in the microenvironment. Macrophages infected by various pathogens (Mycobacterium and Toxoplasma) release EVs containing pathogen-derived pro-inflammatory molecular determinants that induce the secretion of pro-inflammatory cytokines by recipient macrophages (Bhatnagar and Schorey, 2007). Mycoplasma infection results in the release of EVs inducing polyclonal activation of B and T cells (Quah and O'Neill, 2007). EVs isolated from body fluids can exacerbate autoimmune diseases. In rheumatoid arthritis patients, fibroblasts isolated from synovial fluid secrete EVs expressing TNFα, which promotes the survival of T lymphocytes (Zhang et al., 2006). EVs isolated from the bronchoalveolar fluid of patients with sarcoidosis stimulate the secretion of pro-inflammatory cytokines by epithelial cells (Qazi et al., 2009). Extracellular vesicles secreted by DCs can either promote or inhibit the immune response, depending on the degree of maturation of their parent cells. EVs produced by mature DCs carry both antigenic material and the MHC-peptide complexes required for the initiation of immune responses by APCs. In addition, the secreted vesicles also express co-stimulatory molecules. More efficient T cell activation was obtained with exosomes purified from mature, rather than immature, DCs, suggesting that the co-stimulatory molecules present in EVs do indeed play a role in the immune response (Admyre et al., 2003). Such EVs are not only capable of presenting antigens directly to T cells but are also able to transfer both the MHC II molecule and the antigen to naïve DCs, thus amplifying the immune response (Thery et al., 2002). EVs from mature DCs primed with a male antigen peptide enhance male skin graft rejection by female mice (Segura et al., 2005). In vitro priming of DCs with specific antigens results in the production of EVs that can induce in vivo humoral responses against the same antigens (Clayton et al., 2001; Aline et al., 2004; Qazi et al., 2009), stimulating both T and B cells and leading to both memory Th1 and immunoglobulin responses (Qazi et al., 2010). A promoting effect on NK activity was observed in clinical trials of cancer patients treated with EVs from their own DCs primed in vitro with their cancer cells (Escudier et al., 2005; Viaud et al., 2009). Vesicles secreted by immune cells can also display immunosuppressive properties. As mentioned above, EVs secreted by immature DCs can induce tolerogenic, rather than effector, immune responses (Peche et al., 2003). It was shown that such EVs promote graft survival (Peche et al., 2003) and reduce inflammation in animal models of arthritis (Kim et al., 2005), inflammatory bowel disease (Yang et al., 2010) and septic shock (Miksa et al., 2006, 2009). Activated T cells secrete exosomes bearing FasL, which induce apoptosis of neighboring T cells, suggesting their participation in the regulation of the immune response through a negative feedback mechanism (Monleon et al., 2001). Interestingly, the placenta secretes EVs that seem to contribute to fetomaternal tolerance (Taylor et al., 2006). Exosomes in the plasma of pregnant women bear FasL and reduce CD3ζ expression by T cells (Taylor et al., 2006), as well as NKG2D ligands, reducing the cytotoxicity of NK and CD8+ T cells (Hedlund et al., 2009).
IMMUNOMODULATORY ROLE OF MSC-DERIVED EVs

Recent studies indicate that the immunomodulatory activity of MSCs can be at least partially mediated by their ability to release EVs. The inhibitory effects of MSCs on B-cell proliferation and differentiation in a CpG-stimulated peripheral blood mononuclear cell co-culture system could be fully reproduced, in a dose-dependent fashion, by EVs isolated from MSC culture supernatants (Budoni et al., 2013). A dose-dependent inhibitory activity of MSC-EVs was also observed for IgM, IgG, and IgA production. Moreover, in the same co-culture system, 7-AAD-negative and Annexin-positive MSC-EVs isolated from mesenchymal stromal cells were internalized in a subset of CD86/CD19-positive cells corresponding to activated B lymphocytes. The effect of EVs on T cells was investigated by Mokarizadeh et al. (2012) in a rodent model. These authors showed that EVs isolated from murine BM-MSCs inhibited the proliferation of both syngeneic and allogeneic T lymphocytes. Additionally, they demonstrated that these microparticles were able to induce apoptosis in activated T cells. Interestingly, this inhibition was associated with an increased proportion of regulatory T CD4+CD25+FoxP3+ cells. Moreover, an increased secretion of IL-10 and TGFβ1 by cultured splenic cells supplemented with MSC-EVs was observed. These results suggest that MSC-EVs can induce tolerogenic signaling. Similar results were observed in human PBMC cultures treated with human T cell activator CD3/CD28 beads (Del Fattore et al., 2015a). Stimulation increased the number of proliferating CD3+ cells as well as of T regulatory cells (Treg). Co-culture with MSCs inhibited the proliferation of CD3+ cells, with no significant changes in apoptosis. Addition of MSC-EVs to PBMCs did not affect the proliferation of CD3+ cells, but induced apoptosis of CD3+ cells and of the CD4+ subpopulation, and increased both the proliferation and the apoptosis of Treg. Moreover, MSC-EV treatment increased the Treg/Teff ratio and the concentration of the immunosuppressive cytokine IL-10 in the culture medium. The activity of indoleamine 2,3-dioxygenase (IDO), an established mediator of MSC immunosuppressive effects, was increased in supernatants of PBMCs co-cultured with MSCs, but was not affected by the presence of MSC-EVs. The in vitro results are also supported by in vivo observations in an animal model of inflammatory bowel disease (Del Fattore et al., 2014) induced by dextran sulfate sodium (DSS). Mice injected daily with MSC-EVs showed less weight loss, an improved disease activity index and a less severe reduction in colon length when compared with DSS/vehicle-treated controls. qRT-PCR analysis performed on RNA extracted from colon tissue revealed a strong inhibition of the induction of inflammatory cytokines with respect to untreated animals. Collectively, these data suggest that EVs isolated from MSCs can reproduce the immunomodulatory effect of MSCs. Indeed, MSC-EVs are attracting increasing interest, since they might represent a more convenient therapeutic tool than their cells of origin. Interestingly, a case of successful treatment with MSC-EVs in a patient with steroid-resistant GVHD was recently reported (Kordelas et al., 2014). However, additional work, both in vitro and in vivo, is needed to better understand both the potency and the mechanisms of action of this novel potential immunosuppressive tool.
EV-MEDIATED IMMUNOMODULATION IN NEUROLOGICAL DISORDERS

The use of EVs for immunomodulation in neurological disorders is still in its infancy; however, several attempts have been made. EVs from DCs genetically modified to carry TGF-β1 inhibited the progression of murine experimental autoimmune encephalomyelitis (EAE; Yu et al., 2013). Immature DC-derived exosomes ameliorated the progression of experimental autoimmune myasthenia gravis (Bu et al., 2015). Exosomes derived from atorvastatin-modified DCs ameliorated experimental autoimmune myasthenia gravis by up-regulating the levels of IDO and of Tregs and shifting Th1/Th17 toward Th2 cytokines (Li X.L. et al., 2013, 2016). Mesenchymal stromal cell-derived EVs rescued traumatic brain injury (TBI)-induced cognitive impairment, in part through reduction of neuroinflammation (Kim et al., 2016).

EVs IN THE BRAIN NEURAL-GLIAL NETWORKS

In the nervous system, EVs are released by many cells, including cortical and hippocampal neurons and glial cells such as astrocytes and oligodendrocytes, and they have a significant impact on communication within the CNS. EVs present in extracellular and cerebrospinal fluids transfer protein, lipid and nucleic acid cargo from one cell to another, modifying the target cell phenotype and function (Agnati et al., 2010). Several lines of evidence reveal that EVs relay complex messages other than (or even superior to) those based on direct cell-to-cell contacts or secreted soluble factors. In neurons, EVs shed at the synapses are implicated in trans-synaptic communication. They can be taken up by other neurons, suggesting a novel mode of interneuronal communication. The first evidence of EV release from neural cells was demonstrated in vitro using primary cultures of embryonic cortical neurons isolated from rats and mice (Fauré et al., 2006). Additional studies reported secretion of EVs from fully differentiated cortical cultures containing glutamatergic and GABAergic neurons in long-term culture (Lachenal et al., 2011). In mammalian cortical neurons, EVs are predominantly distributed within the somatodendritic compartment, where they are 50 times more abundant than in axons (von Bartheld and Altick, 2011). EVs deriving from this compartment may exert various functions at the level of synapses. Indeed, in neurons, EVs are present in both pre- and postsynaptic compartments. Studies on the trafficking of synaptic AMPA-type receptors, which represent the main mediators of fast synaptic transmission among the glutamate receptors of the CNS, showed that neuronal EVs act as stores for synaptic receptors (Kennedy and Ehlers, 2006). As neuronal EVs carry AMPA receptor subunits, they may play a role in synaptic plasticity by regulating the AMPA receptors for glutamate transmission (Chivet et al., 2013). Moreover, EVs can transport functionally competent GPCRs, adding a further level of plasticity in which the receptors acquire the ability to respond to their neurotransmitter ligand (Guescini et al., 2012). It was shown that increasing cytosolic calcium, incubation with GABA receptor antagonists, or neuron depolarization increases EV secretion (Lachenal et al., 2011; Chivet et al., 2012; Pegtel et al., 2014). In addition to neurons, other cells in the CNS release large amounts of EVs. Astrocyte-derived EVs are heterogeneous, and their composition depends on the environment.
A large number of transferred compounds, such as mitochondria, mitochondrial DNA, ATP, glutamate transporters, Hsp/Hsc70 and synapsin I involved in neuroprotection, factors modulating angiogenesis, i.e., FGF2, VEGF, PEDF, and endostatin, as well as MMPs mediating ECM proteolysis, have been identified in astrocytic EVs (Frühbeis et al., 2013; Agnati and Fuxe, 2014; Pegtel et al., 2014). The target cells are both astrocytes and neurons; depending on the cargo the EVs carry, they may be involved in neuronal growth and survival, regulation of synaptic transmission, or degeneration. Astrocytic EVs can contain excitatory amino acid transporters that may have a special function in volume transmission by scavenging glutamate in the extracellular fluids, reducing excitation and neurodegeneration (Agnati and Fuxe, 2014). Microglia provide the first line of defense during infection and brain injury. Upon stimulation, reactive microglia release EVs that transmit inflammatory signals to recipient microglia, which then upregulate the expression of genes enhancing inflammation, e.g., IL-1β, IL-6, iNOS, cyclooxygenase, etc. (Verderio et al., 2012; Prada et al., 2013). Thus, microglial EVs spread inflammatory reactions throughout the brain. It is of interest that microglia-derived EVs can interact with neurons and enhance excitatory transmission, modulating synaptic activity (Antonucci et al., 2012). In addition, microglia release EVs with a protein content previously reported in B cell- and DC-derived EVs. Although MHC class II antigens are visualized in microglial EVs, their relevance for the antigen presentation exhibited by microglia themselves is still an open question (Potolicchio et al., 2005). Oligodendrocytes produce the myelin sheath around axons, thus facilitating impulse conduction. Recent studies suggest that these trophic functions may depend on the transfer of EVs from oligodendrocytes to neurons. Indeed, oligodendrocyte EVs contain myelin proteins such as PLP, CNP, MAG, and MOG (Krämer-Albers et al., 2007; Frühbeis et al., 2013). The secretion of EVs from oligodendrocytes is regulated by neurotransmitter signaling. Axonally released glutamate activates EV release from oligodendrocytes, mediated mainly by NMDA receptors. In addition, oligodendrocyte-derived EVs have been suggested to negatively regulate myelin synthesis in an autocrine manner (Bakhti et al., 2011). However, Frühbeis and colleagues did not observe PLP-positive EVs in myelinating fibers in situ and postulated that EVs derived from oligodendrocytes are released into the periaxonal space and are thus involved in axon-glia interactions (Frühbeis et al., 2013). Moreover, oligodendrocyte EVs improve the metabolic activity of cultured neurons under cell stress by delivering supportive biomolecules. This is evidence that EVs released from oligodendrocytes participate in bidirectional neuron-glial integrity. Extracellular vesicles released by neural cells into the brain parenchyma can potentially be endocytosed by nearby cells. The EV cargo is then released into the cytosol of the receiving cell or re-expressed at the cell surface. Astrocytes, which enwrap a number of glutamatergic synapses, can capture EVs released at synapses. Back-fusion of EVs has been demonstrated to occur in the CNS and may involve their cells of origin. On the other hand, EVs released from particular neural cells can be engulfed by other cell types in the CNS. EVs secreted by neurons may be transferred between spines of the same neuron or across synapses to end up in afferent neurons.
Microglial EVs can be internalized by the same cell or by neighboring microglia in a macropinocytic fashion (Chivet et al., 2012). Oligodendrocyte-derived EVs are usually taken up by neurons. This uptake seems to be selective, since astrocytes and oligodendrocytes internalize oligodendroglial EVs only to a minor extent. There is evidence that oligodendrocytes also interact with and respond to microglia via released EVs that are taken up by the recipient cells (Peferoen et al., 2013; Figure 2).

EV-BASED STRATEGIES FOR DIAGNOSIS OF CNS DISEASES

Extracellular vesicles are increasingly gaining attention as diagnostic tools, being used as potential biomarkers for the detection of early pathological conditions before the onset of clinical symptoms of disease. Potential sources of EVs include blood, plasma and cerebrospinal fluid. The relative stability of EVs in body fluids and their ability to pass the blood-brain barrier make it possible to sample them easily with a minimally invasive procedure (a so-called "liquid biopsy"). Finding and testing such biomarkers in parallel with other diagnostic tools might be very important in CNS disorders for understanding complex neurological conditions. The molecular content of EVs, namely proteins, nucleic acids, and lipids, reflects the origin and the pathophysiological status of the releasing cells. Several studies have demonstrated that EVs isolated from the body fluids of neurological patients contain molecules implicated in neurodegenerative, metabolic and infectious diseases, or cancer. The concentration of EVs increases upon the inflammation accompanying different neurological diseases and is closely related to the disease course (Lee et al., 1993; Verderio et al., 2012). Certain EV proteins provide a tool to distinguish a disease-related condition from a healthy state. For example, amyloid precursor protein (APP) fragments and tau phosphorylated at Thr181 are established biomarkers for Alzheimer's disease, and phosphorylated tau and α-synuclein are relevant for Parkinson's disease (Saman et al., 2012; Yang et al., 2015). The presence of the scrapie form of the prion protein (PrPSC) in isolated EVs is usually evidence of Creutzfeldt-Jakob disease (Coleman et al., 2012; Coleman and Hill, 2015). Many recent reviews have addressed different EV-derived proteins for the diagnosis of brain tumors (Pegtel et al., 2014; Kawikova and Askenase, 2015; Paschon et al., 2015). These proteins are usually present on the EV surface and can be general cancer markers, cancer-type markers or tissue-type markers. Plasma levels of cancer-derived EVs have been reported to be related to the tumor size, the metastatic behavior of the tumor (caveolin-1) or its impact on angiogenesis, pro-survival signaling, apoptosis, immunomodulation, or drug resistance (D'Asti et al., 2012; Redzic et al., 2014). In particular, the persistence of tumor-specific exosomes in body fluids such as blood after resection of the tumor at the primary site can indicate the existence of metastases, which could be located in various organs including the brain, and may spur further diagnostics and treatment. An examination of EV content in blood samples from control and glioblastoma patients was performed by Shao et al. (2012). They created a very innovative system in which isolated EVs were labeled with magnetic nanoparticles reacting with specific proteins and then identified by a miniaturized nuclear magnetic resonance system.
This strategy allowed glioblastoma-secreted exosomes to be identified with high detection sensitivity and distinguished from EVs originating from healthy individuals. Disease-specific proteins secreted in EVs or incorporated into their membranes have been found in in vitro studies in response to cell infection by viruses that attack the CNS, such as human immunodeficiency virus 1 (the etiological cause of acquired immune deficiency syndrome), human T-cell leukemia virus-1 (which evokes tropical spastic paraparesis), and herpes simplex virus-1 (which induces herpes viral encephalitis; Sampey et al., 2014). Several studies have revealed genetic alterations in the RNA of EVs derived from patients with neurological disorders in comparison with healthy individuals (Rao et al., 2013; Mundalil Vasu et al., 2014). Profiling RNA expression patterns could facilitate presymptomatic disease detection. Recent reports point to EV nucleic acids as biomarkers corresponding to brain injury, neurodegenerative diseases, neuroinflammation, or brain tumors. Exosome Diagnostics (Cambridge, MA, USA) filed a patent reporting a technique to detect neurodegenerative diseases and brain injury based on the measurement of RNAs (mRNA, miRNA, siRNA, or shRNA) associated with CSF-derived EVs (Skog and Russo, 2015). In the reported examples, the biomarkers associated with different neurodegenerative diseases were nucleic acids corresponding to APP, Aβ42, BACE1, and tau protein (Urbanelli et al., 2015). The study of Skog et al. (2008) revealed the presence of oncogenic nucleic acids in EVs released from brain tumors into the CSF. Molecular analysis of oncosomes shed from brain tumor cells indicates the presence of mRNA coding mutated genes, non-coding RNA (multiple miRNAs), transcripts for different oncoproteins, and oncogenic DNA sequences. Among the described examples, a point mutation in the gene coding the epidermal growth factor receptor (EGFRvIII) was found in EVs isolated from glioblastoma patients (Weller et al., 2014). Similarly, mRNA for a mutated form of the IDH1/2 gene and mRNA for an abnormal C-myc gene were observed in EVs circulating in the blood of glioma and medulloblastoma patients, respectively (Balaj et al., 2011; Chen et al., 2013). Recently, EVs from glioblastoma and astrocytoma have been shown to carry mtDNA and dsDNA representing the whole genomic DNA, which can be used to identify mutations present in tumor cells (Thakur et al., 2014). As stated above, EVs are involved not only in physiological processes but also in CNS diseases, carrying specific pathological cargo. Over the last decades, besides their use as biomarkers of different diseases, EVs have been proposed as therapeutic tools for neurological disorders.

NON-INVASIVE IMAGING OF EVs

Mesenchymal stem cells have the ability to release several pro-survival trophic and immunomodulatory factors (Abboud et al., 1991; Aggarwal and Pittenger, 2005; Wilkins et al., 2009; Jitschin et al., 2013). Because of these beneficial properties, MSCs have been successfully used in experimental animals to treat several neurological disorders and to improve graft survival in the CNS (Srivastava et al., 2016). Human MSCs derived from BM or UCB were shown to have a strong capacity for exosome secretion in response to cellular injuries (Baglio et al., 2012; Li T. et al., 2013), and the pro-survival and immunomodulatory effects of these cells may be attributable to exosome release.
The imaging of exosomes in vivo may contribute to an understanding of the regenerative potential of exosomes released from MSCs and would also represent a significant advancement in translational exosome science. The ability to non-invasively track exosomes in vivo using different imaging modalities is still in its infancy. Because of their nanometre size, the traditional method of visualization of exosomes is scanning electron microscopy (SEM; Sharma et al., 2010;Sokolova et al., 2011). SEM allows particle size determination, and therefore helping to distinguish between exosomes and other vesicles. Fluorescence nanoparticle tracking analysis (NTA) has also been used to determine the exosome size on the basis of Brownian motion (Dragovic et al., 2011). Other methods include bright fluorescent labeling of cell-derived exosomes and high-resolution flow cytometry for quantitative and qualitative analysis (van der Vlist et al., 2012) and Tunable Resistive Pulse Sensing analysis, a high resolution technique that measures the change in electrical resistance in a pore as a particle passes through it (Coumans et al., 2014). However, these methods are not ideal for the visualization of in vivo localization and biodistribution of exosomes. In the last several years, the development of in vivo imaging techniques has significantly improved our ability to non-invasively track exosomes. With these techniques, we can now monitor the distribution of exosomes at the site of injury or elsewhere in the body. Exosomes could be visualized by introducing a labeling agent, and then, imaging the labeling agent as a surrogate for the exosomes. Depending on the labeling agent, exosomes can be imaged by optical imaging, magnetic resonance imaging (MRI), or single-photon emission computed tomography (SPECT). Lai et al. (2014) used an optical imaging approach to visualize exosomes. They labeled exosomes with Gaussia luciferase for non-invasive bioluminescence imaging (BLI). BLI is based on the emission of photons in reactions catalyzed by luciferase enzymes. Luciferases emit photons during the oxidation of a substrate, such as D-luciferin, in the presence of oxygen and ATP. BLI of immunodeficient, athymic nude mice systemically injected with exosomes showed a prominent bioluminescence signal in the spleen (Lai et al., 2014). In another study, Grange et al. (2014) used small-molecule near-infra red (NIR) fluorophores to label exosomes and track them for noninvasive visualization. They used two different exosome labeling protocols. In the first protocol, MSC-derived exosomes were directly labeled with Vybrant DiD during an ultracentrifugation procedure. In the second protocol, exosomes were indirectly labeled with fluorophores by incubating MSCs with a Vybrant DiD cell-labeling solution, and then, isolating exosomes from MSCs by ultracentrifugation. In vitro optical imaging showed a brighter fluorescence signal in exosomes directly labeled with DiD compared to exosomes obtained by MSCs that were previously labeled with DiD. In vivo optical imaging of mice with acute kidney injury, intravenously injected with directly or indirectly labeled exosomes, showed an accumulation of exosomes, especially in the kidneys (site of injury) of mice. Directly labeled exosomes showed a higher and brighter fluorescence compared to indirectly labeled exosomes. This study showed that both labeling methods were suitable for the in vivo detection of exosomes (Grange et al., 2014). 
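As a brief technical aside, the size readout in the NTA method mentioned above reduces to simple arithmetic via the Stokes-Einstein relation: the diffusion coefficient D of each particle is estimated from the mean squared displacement of its tracked Brownian trajectory, and the hydrodynamic diameter follows as d = kBT/(3πηD). The short Python sketch below illustrates this computation; the simulated track, frame rate, temperature and viscosity values are illustrative assumptions, not parameters taken from the cited studies.

import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_diameter(track_xy_m, dt_s, temp_k=296.15, viscosity_pa_s=0.93e-3):
    """Estimate particle diameter from a 2D NTA track via Stokes-Einstein.

    track_xy_m : (N, 2) array of particle positions in meters, one row per frame.
    dt_s       : time between frames in seconds.
    """
    steps = np.diff(track_xy_m, axis=0)      # frame-to-frame displacements
    msd = np.mean(np.sum(steps**2, axis=1))  # mean squared displacement per step
    diffusion = msd / (4.0 * dt_s)           # 2D Brownian motion: <r^2> = 4*D*dt
    return K_B * temp_k / (3.0 * np.pi * viscosity_pa_s * diffusion)

# Hypothetical example: a simulated track of a ~100 nm particle in water at 23 degrees C.
rng = np.random.default_rng(0)
d_true = 100e-9
D_true = K_B * 296.15 / (3 * np.pi * 0.93e-3 * d_true)
dt = 1 / 30.0  # 30 frames per second
steps = rng.normal(scale=np.sqrt(2 * D_true * dt), size=(2000, 2))
track = np.cumsum(steps, axis=0)
print(f"estimated diameter: {hydrodynamic_diameter(track, dt) * 1e9:.0f} nm")

With enough tracked steps, the estimate converges to the true diameter, which is why NTA can separate exosome-sized particles from larger microvesicles.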
However, one of the major limitations of optical imaging is light absorbance by hemoglobin and the multilayered anatomical barriers that limit light emission. Another method for exosome visualization is MRI. High spatial resolution and the ability to gather accurate anatomical information and image deep inside tissue are among the greatest advantages of MRI. Superparamagnetic iron oxide (SPIO) nanoparticles are MRI contrast agents that are conventionally used for cell tracking. SPIOs are based on magnetite or maghemite cores embedded in and stabilized with a hydrophilic shell. The SPIO core contains several million iron atoms. These particles create a large dipolar magnetic field (Hao et al., 2010) and cause spin-spin dephasing due to the local field inhomogeneity induced in water molecules near the particles (Rogers et al., 2006). This results in negative contrast on T2-weighted MRI. Hu et al. (2014) utilized this property to track exosomes by labeling them with SPIOs. They labeled mouse B16-F10 melanoma cell-derived exosomes with SPIOs using electroporation. These SPIO-labeled exosomes were then injected into the footpad of C57BL/6 mice. MRI of these mice showed visually appreciable nodal enhancement and apparent enlargement 48 h after the injection. This proof-of-principle study demonstrated that exosomes can be tracked by MRI. One of the major disadvantages of this method is the possible release of iron particles from exosomes and their deposition in the tissue. Deposited iron particles could be scavenged by macrophages and may generate a false-positive signal on MRI. The nuclear imaging modality SPECT can also be used to image exosomes. Because of the superior tissue penetration capability of SPECT and its more quantitative nature compared with optical imaging (Massoud and Gambhir, 2003), the use of SPECT for exosome imaging presents better clinical potential. A study by Hwang et al. (2015) reported a simple method for radiolabeling macrophage-derived, exosome-mimetic nanovesicles (ENVs) with 99mTc-HMPAO (a clinically used tracer) under physiological conditions, and monitored the in vivo distribution of 99mTc-HMPAO-ENVs using SPECT/CT in living mice. SPECT/CT images exhibited a high uptake of ENVs in the liver and no uptake in the brain (Figure 3; Hwang et al., 2015). Although this technique shows great promise for investigating the in vivo behavior of exosomes and is more clinically applicable, the tradeoff between half-life and long-term exposure to ionizing radiation and the possible transfer of the radiometal to surrounding cells could be major limitations of using 99mTc-HMPAO for tracking.
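To close this imaging section with a quantitative note: the negative T2 contrast produced by SPIO labels, discussed above, can be captured with the standard relaxivity relation R2 = R2,0 + r2·C, where R2 = 1/T2 and C is the local iron concentration, with the spin-echo signal decaying as exp(-TE·R2). The Python sketch below shows the arithmetic; the relaxivity, baseline T2 and concentration values are illustrative assumptions, not measurements from the cited work.

import numpy as np

def t2_signal_fraction(c_iron_mM, r2_per_mM_per_s=100.0, t2_baseline_s=0.08, te_s=0.06):
    """Fractional spin-echo signal remaining at echo time TE for a given
    local SPIO iron concentration, using R2 = R2_0 + r2 * C.

    All parameter values here are illustrative assumptions.
    """
    r2_total = 1.0 / t2_baseline_s + r2_per_mM_per_s * c_iron_mM  # total R2 in 1/s
    return np.exp(-te_s * r2_total)

for c in (0.0, 0.05, 0.2):  # hypothetical iron concentrations in mM
    print(f"[Fe] = {c:.2f} mM -> relative signal {t2_signal_fraction(c):.2f}")

Even modest iron concentrations markedly shorten T2 and darken the T2-weighted image, which is the basis of the nodal enhancement reported in the SPIO-labeled exosome study.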
THERAPEUTIC POTENTIAL OF MSC-DERIVED EVs IN CNS DISORDERS

While the use of MSCs in regenerative medicine has raised high expectations in clinical settings, the use of the MSC-derived vesicles released by these cells could have many advantages compared with a cell-based approach. The therapeutic potential of EVs seems more attractive because it reduces the risks associated with engraftment of cells, possible immune reactions against cells, and emboli. Moreover, EVs have a unique ability to cross biological barriers, as was shown in glioblastoma patients (Noerholm et al., 2012; Shao et al., 2012), which is very important for neurological disease therapy, where systemically administered compounds need to cross the blood-brain barrier and the blood-CSF barrier. Extracellular vesicles may affect cell senescence, proliferation and cell survival, reducing the apoptosis resulting from ischemic brain injury. Besides, as EVs from MSCs were shown to modulate several signaling pathways, they could be used to treat neurodegenerative diseases or brain tumors. MSC-derived EVs have been reported to contribute to tissue repair in experimental models of brain injury. Xin et al. (2013a) showed that systemic administration of EVs generated from bone marrow-derived MSCs significantly increased axonal density and the synaptophysin-positive area in the ischemic cortex and striatum of middle cerebral artery occlusion (MCAo) rats. BM-MSC-derived EV treatment also increased the number of newly formed doublecortin-positive cells (neuroblasts) and improved the functional recovery of stroke rats compared with PBS-treated controls. The observation of Xin et al. (2013a) that exposure of BM-MSCs to ischemic rat brain extract induced the expression of miR-133b in MSCs, together with the previous finding of Yu et al. (2011) that miR-133b is essential for the functional regeneration of motor neuron axons after spinal cord injury in zebrafish, prompted the authors to examine this effect in vitro. Treatment of primary rat cortical neurons with EVs derived from ischemic brain extract-treated MSCs increased the total number of neurites and their length after 48 h (Xin et al., 2012). Further studies demonstrated that BM-MSC-derived EV transfer of miR-133b into rats subjected to MCAo induced neurite remodeling and increased axonal plasticity and the functional outcome of rats 14 days after stroke onset (Xin et al., 2013b). This effect was selectively specific, since usage of the GTPase RhoA, reported as a miR-133b inhibitor, did not change neurite morphology. This was proof that EV-mediated secretion of miRNA contributes to the protective effect of MSCs in stroke. In addition to the beneficial effect on neurogenesis, EVs promote angiogenesis post stroke. Rats that received BM-MSC-EVs demonstrated a significant increase in the percentage of BrdU/vWF-positive cells in the ischemic zone (Xin et al., 2013b). Cerebral endothelial cell proliferation contributed to neurovascular remodeling within the ischemic tissue. Therapeutic application of EVs obtained from human bone marrow-derived mesenchymal stem cells (hBM-MSCs) has also been shown in an experimental model of TBI in rats. Intravenous injection of hBM-MSC secretome ameliorated TBI in rats by reducing neuronal cell loss in the injured cortex and promoting pro-angiogenic VEGF production, resulting in improved functional outcome (Chuang et al., 2012). Moreover, rats treated with EVs derived from the BM-MSCs of normal and cerebral ischemic rats had decreased infarct volume in comparison with untreated animals. Similarly, recovery of neurological functions after ischemic stroke was observed after application of conditioned medium of rat bone marrow-derived MSCs, accompanied by an increase in neuronal progenitor cells surrounding the lateral ventricle in the stroke-affected hemisphere (Tsai et al., 2014).

FIGURE 3 | In vivo SPECT/CT images of 99mTc-HMPAO-ENVs injected in mice. After intravenous injection of 99mTc-HMPAO-ENVs or 99mTc-HMPAO, SPECT/CT images were acquired at 30 min, 3 h, and 5 h in BALB/c mice. The SPECT/CT imaging showed significantly intense uptake of 99mTc-HMPAO-ENVs in the liver and radioactivity in the salivary glands and intestine until 5 h. (Reproduced from Hwang et al., 2015.)
Recently, a new therapeutic possibility for using MSC-derived EVs against Alzheimer's disease was proposed by Katsuda et al. (2013). The authors found that human AT-MSCs secrete neprilysin (NEP), the most important beta-amyloid (Aβ)-degrading enzyme in the brain. Functional NEP-bound exosomes derived from hAT-MSCs decreased the Aβ overexpressed by neuroblastoma cells in co-culture settings (Katsuda et al., 2013).

Apart from natural EVs, genetically engineered exosomes can be used as a delivery system for small-molecule therapeutics for treating CNS diseases. EVs have been proposed as ideal nucleic acid transporters. The study of Pusic et al. (2014), exploring the ability of DC-derived exosomes packed with miRNA, showed a significant increase in myelination in hippocampal slice cultures subjected to oxidative stress. In another study, exogenous delivery of miR-124a by stereotactic injection of neuronal cell-derived EVs prevented the pathological loss of the GLT1 protein, an important glutamate transporter selectively lost in amyotrophic lateral sclerosis, in SOD1-G93A mice, an experimental model of ALS (Morel et al., 2013). The neuro-oncologic application of EVs harnessed with miRNA came from the observation that such modified BM-MSC-derived EVs were incorporated by tumor cells in a co-culture system (Katakowski et al., 2010). In subsequent in vivo studies, intra-tumor injection of exosomes derived from miR-146-overexpressing BM-MSCs was shown to significantly reduce glioma xenograft growth in rat brain (Katakowski et al., 2013). Not only microRNA but also mRNA and siRNA have been overexpressed in donor cells and delivered by exosomes. Exosomes isolated from HEK-293T cells previously transfected with a suicide mRNA triggered tumor cell apoptosis and tumor regression after their direct injection into schwannoma (nerve sheath tumor) in an orthotopic mouse model (Mizrak et al., 2013). EVs isolated from different cell types and loaded with siRNA have also been shown to be successful cargo vehicles for siRNA delivery to the brain (El Andaloussi et al., 2013). The first in vivo example of how to exploit EVs carrying siRNA came from the studies of Alvarez-Erviti and colleagues: systemic delivery of DC-derived EVs loaded with exogenous siRNA induced knockdown of BACE1, a therapeutic target in AD, in the mouse brain (Alvarez-Erviti et al., 2011).

It is important to recall that EVs may suppress the immune response. This strategy may provide a novel therapeutic approach for treating inflammation-related diseases of the CNS such as Parkinson's disease, Alzheimer's disease, multiple sclerosis, amyotrophic lateral sclerosis, meningitis, brain, spinal cord, and peripheral nerve injury, and brain tumors. The anti-inflammatory effect of AT-MSC-derived EVs was shown to improve rat sciatic nerve regeneration after experimental transection (Raisi et al., 2014). Moreover, genetically modified EVs could deliver different immunosuppressant substances to modify the immune reaction. Antioxidant curcumin-loaded exosomes isolated from a murine macrophage cell line have been shown to decrease IL-6 and TNF levels in vitro and in vivo. In a separate study, BV2 microglial cell line-derived exosomes encapsulating a signal transducer and activator of transcription 3 (STAT-3) inhibitor (JSI-124) were delivered to the brain, were selectively taken up by microglial cells, and induced apoptosis (Zhuang et al., 2011). A specific application of exosomes released by MSCs was patented by Beelen et al. (2014).
The authors claimed that exosome preparations derived from neonatal and adult tissue-derived MSCs were effective for the therapy of inflammation and of pre- and postnatally acquired brain damage (Beelen et al., 2014). Another patent disclosed the preparation and use of exosomes isolated from neural stem cells induced from MSCs for the treatment of CNS diseases (Tian et al., 2013). The use of EVs derived from MSCs also allows avoiding risks related to the direct deposition of MSCs in the CNS, such as the formation of fibrotic masses (Grigoriadis et al., 2011; Snyder, 2011).

CONCLUSION

The field of EVs is maturing, with fast progress in untangling the structure and content of EVs. There are also advances in understanding the biological activities of EVs, in particular in cancer and in immunology-related diseases, but their role in CNS disorders has also drawn much attention recently. This has translated into attempts to use EVs as biomarkers of CNS disorders as well as therapeutic agents. MSCs have been shown to be therapeutic in many neurological disorders despite their lack of homing to the CNS. Thus, there is growing appreciation of EVs as mediators of MSC-derived therapeutic effects. MSCs are easy to obtain and maintain, so much interest is now paid to replacing MSCs with MSC-derived EVs as therapeutic agents. Altogether, EVs shed new light on physiology and pathology and have become an attractive source of therapeutic agents.

AUTHOR CONTRIBUTIONS

All authors contributed to this paper. The authors were responsible for the following parts of the review: introduction (BL), types of EVs (SK and BL), proteomic analysis of EVs (SK and BL), profiling RNA content in EVs (MJ), mechanism of cellular uptake of EVs (SK and BL), biological activities of EVs (SK and BL), role of EVs in immune responses (MM), EVs in the brain neural-glial networks (AA and BL), EV-based strategies for diagnosis of CNS diseases (AA and BL), non-invasive imaging of EVs (AS), therapeutic potential of MSC-derived EVs in CNS disorders (AA and BL), conclusion (BL).

ACKNOWLEDGMENTS

This work was supported by the Polish Ministry of Scientific Research and Higher Education grant, KNOW 06 project: "The role of bone marrow mesenchymal stem cells and microvesicles derived from these cells in CNS repair of brain ischemia disorders." MM gratefully acknowledges the financial support of "Fondazione Città della Speranza" (Padova, Italy). We also thank Mary McAllister for editorial assistance.

SUPPLEMENTARY MATERIAL

The Supplementary Material for this article can be found online at: http://journal.frontiersin.org/article/10.3389/fncel.2016.00109
Visualization of Balbiani Body disassembly during human primordial follicle activation

Dormant human oocytes contain a perinuclear super-organelle, called the Balbiani Body, which is not present in mature oocytes. Here, we use confocal imaging to visualize two Balbiani Body markers—mitochondria and the DEAD-box helicase DDX4—in preantral follicles isolated from a 20-year-old female patient. In primordial follicles, mitochondria were concentrated in a ring near the oocyte nucleus, while DDX4 formed adjacent micron-scale spherical condensates. In primary and secondary follicles, the mitochondria were dispersed throughout the oocyte cytoplasm, and large DDX4 condensates were not visible. Our data suggest that the Balbiani Body breaks down during the primordial to primary follicle transition, thus releasing mitochondria and soluble DDX4 protein into the oocyte cytoplasm.

Description

Female humans generate their entire reserve of oocytes during the mid-gestation period and store them until sexual maturity and beyond. During the long storage period, these dormant oocytes remain arrested in the diplotene phase of meiotic prophase I and are surrounded by a single layer of flattened pre-granulosa cells. The oocyte and pre-granulosa cells together comprise the primordial follicle (Li and Albertini, 2013). Multiple signaling pathways have been implicated in the activation of oocytes and subsequent development of the primary follicle, but the exact mechanism is not known (Lee and Chang, 2019). How oocytes survive long-term storage is a long-standing open question.

During formation, oocytes undergo dramatic restructuring of their cytoplasm. The oocytes of many egg-storing species package their mitochondria, RNA, Golgi, ribosomes, and various proteins in perinuclear super-organelles (Boke et al., 2016; Hertig and Adams, 1967; Jamieson-Lucy and Mullins, 2019; Kloc et al., 2004; Woodruff et al., 2018). In zebrafish, frogs, and insects, this structure is called a Balbiani Body, and it is built from the prion-like proteins Bucky ball and Xvelo (Boke et al., 2016; Marlow and Mullins, 2008). These studies demonstrated that the Balbiani Body stores germ plasm needed to direct formation of the future germline. They proposed that the Balbiani Body packages maternal elements needed for oocyte development and early embryogenesis prior to zygotic genome activation (Marlow and Mullins, 2008). The Balbiani Body stores these contents during oocyte arrest and then releases them after oocyte activation.

Negative stain electron microscopy of human primordial follicles identified a dense collection of mitochondria, other organelles, and membranes adjacent to the oocyte nucleus, which the authors termed a Balbiani Body (Hertig and Adams, 1967). However, it is not known how functionally similar this structure is to Balbiani Bodies seen in non-mammalian species. During mammalian embryogenesis, specification of the germ lineage is inductive, obviating the need for polarized storage of germ plasm in the oocyte (Seydoux and Braun, 2006). Furthermore, the Bucky ball and Xvelo proteins have no clear homologs in humans. Mouse primary oocytes do not cluster their mitochondria at all and instead form a Golgi-rich ring around the nucleus (Dhandapani et al., 2022).
Little is known about the architecture, composition, and dynamics of the Balbiani Body in human oocytes. Culture of primordial follicles that recapitulates long-term oocyte dormancy is not currently possible (McLaughlin et al., 2018). Thus, analyzing human primordial follicles requires surgical removal of ovarian tissue from patients and immediate processing to conserve natural architecture. Immunohistochemical staining of primordial follicles identified DEAD-box Helicase 4 (DDX4; homolog of VASA) in a perinuclear region of oocytes (Albamonte et al., 2013; Albamonte et al., 2008; Dhandapani et al., 2022), suggesting that DDX4 is a Balbiani Body component. However, the spatial relationship between DDX4 and other Balbiani Body components, such as mitochondria, has not been examined thoroughly. Thus, in humans, it is not known if the Balbiani Body is a uniform protein phase that surrounds mitochondria or a heterogeneous, multi-phase aggregate. Nor is it clear when the Balbiani Body breaks down during oocyte development. Toward clarifying these unknowns, we used fluorescence confocal microscopy and deconvolution to examine preantral oocytes isolated from a living 20-year-old patient with no history of infertility or cancer.

To preserve native cytoplasmic structures within oocytes, we surgically removed strips of the ovarian cortex and immediately fixed the tissue in 4% formaldehyde. Hematoxylin and eosin staining showed oocytes and surrounding somatic cells with the expected morphology, indicating that the tissue was healthy at the time of fixation (Figure 1A). We identified primordial follicles based on the presence of a single layer of flattened pre-granulosa cells surrounding an oocyte of ~35 μm diameter (Gougeon, 1996; Westergaard et al., 2007).

We then sectioned the tissue into 5 μm slices and used immunofluorescence to label mitochondria (HSP60 antibody) and DEAD-box Helicase 4 (DDX4 antibody). To examine oocyte substructures at high resolution, we imaged the ovarian sections using a confocal microscope equipped with a 100X silicone immersion objective (1.35 NA). Images were further resolved using deconvolution (Figure 1B,C). In oocytes contained in the primordial follicles, we saw that HSP60 and DDX4 staining was concentrated in a crescent-shaped ring around the nucleus, consistent with previous reports (Albamonte et al., 2013; Hertig and Adams, 1967).

Our imaging revealed concentrations of DDX4 in micron-scale spherical compartments, consistent with a previous study (Dhandapani et al., 2022). These condensates did not overlap with mitochondria, indicating that they occupy spatially distinct sub-compartments within the Balbiani Body (Figure 1D,E). When over-expressed in cells or reconstituted in low-salt buffer in vitro, DDX4 condenses into spherical droplets via liquid-liquid phase separation (Klosin et al., 2020; Nott et al., 2015). Our results thus suggest that the Balbiani Body is a heterogeneous amalgam of mitochondria and liquid- or gel-like DDX4 condensates.
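The reported spatial separation of DDX4 condensates and mitochondria is the kind of claim that can be quantified with a simple overlap (Manders-type) coefficient. A minimal sketch with synthetic two-channel data; this is an illustration of the measurement, not the authors' actual analysis pipeline:

```python
import numpy as np

def manders_overlap(ch_a, ch_b, thr_a, thr_b):
    """Fraction of channel-A signal that falls inside the channel-B mask
    (Manders M1). ch_a, ch_b: 2-D intensity arrays; thr_*: thresholds."""
    mask_b = ch_b > thr_b
    a = np.where(ch_a > thr_a, ch_a, 0.0)
    return a[mask_b].sum() / max(a.sum(), 1e-12)

# Synthetic, non-overlapping blobs, as reported for DDX4 vs. HSP60:
rng = np.random.default_rng(0)
ddx4 = np.zeros((64, 64)); ddx4[10:20, 10:20] = rng.uniform(0.5, 1.0, (10, 10))
hsp60 = np.zeros((64, 64)); hsp60[40:50, 40:50] = rng.uniform(0.5, 1.0, (10, 10))
print(manders_overlap(ddx4, hsp60, 0.1, 0.1))  # ~0.0 -> spatially distinct signals
```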
We then examined primary and secondary follicles, in which oocytes have been activated to exit prolonged arrest. These follicles were staged based on the presence of a single (primary) or double (secondary) layer of cuboidal granulosa cells surrounding the oocyte (Gougeon, 1996). In the oocytes contained in these follicles, mitochondria were no longer concentrated in a perinuclear ring but rather distributed throughout the cytoplasm (Figure 1F,G). This result is consistent with the mitochondrial dispersal that occurs coincident with Balbiani Body breakdown in Xenopus and zebrafish oocytes (Gupta et al., 2010; Yang et al., 2022). The cumulative mass of DDX4 condensates per oocyte was lower in more mature oocytes (Figure 1H), and the remaining condensates were smaller and fewer compared to those found in primary follicles (Figure 1I). Diffuse DDX4 was still present in the cytoplasm of these maturing oocytes. Our data suggest that Balbiani Body breakdown is defined by dispersal of mitochondria into the cytoplasm and dissolution of DDX4 condensates. We conclude that Balbiani Body breakdown occurs at the primordial to primary follicle transition in human oocytes, and that this timing is conserved across multiple vertebrate clades.

In summary, we used fluorescence confocal microscopy to visualize the subcellular architecture of oocytes from a healthy 20-year-old patient. Human Balbiani Bodies contain micron-scale DDX4-positive condensates that are spatially distinct from the mitochondria. Although it was previously assumed that the Balbiani Body disassembles during oocyte maturation, we show that the mitochondria and DDX4 disperse into the oocyte cytoplasm at the primordial-primary follicle transition. We propose that disassembly of the Balbiani Body is triggered by oocyte activation and may be one of the first requirements for oocyte maturation.

DDX4 condensates have been reported to sequester housekeeping mRNA transcripts under stress (Klosin et al., 2020; Nott et al., 2015) and to regulate translation of germline-specific transcripts (Raz, 2000). It is possible that the Balbiani Body functions to trap and maintain DDX4 condensates to store and protect mRNAs needed for oocyte maturation or embryogenesis prior to zygotic genome activation. Disassembly of the Balbiani Body would release these condensates, exposing them to cytoplasmic agents that promote dissolution and release of their mRNA. The volume of the oocyte cytoplasm increases coincident with Balbiani Body breakdown, which could also facilitate condensate dissolution (Figure 1J) (Westergaard et al., 2007). Furthermore, the packaging of mitochondria adjacent to the nucleus may act as a protective mechanism to reduce the production of reactive oxygen species (ROS) or to select for healthy mitochondria (Jamieson-Lucy and Mullins, 2019).
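The per-oocyte "cumulative mass" of condensates (Figure 1H) corresponds to summed integrated densities of thresholded particles, as described in the Confocal Imaging section below. A hedged re-implementation of that measurement in Python with scikit-image, rather than the FIJI workflow the authors actually used:

```python
import numpy as np
from skimage.measure import label, regionprops

def summed_integrated_density(img, cytoplasm_mask, thr):
    """Per-oocyte cumulative condensate mass: sum over all thresholded
    particles inside the cytoplasm mask of (mean intensity * area),
    i.e., FIJI's 'integrated density'."""
    particles = label((img > thr) & cytoplasm_mask)
    return sum(p.mean_intensity * p.area
               for p in regionprops(particles, intensity_image=img))

# Tiny demo: two bright condensates inside a circular "cytoplasm" ROI.
img = np.zeros((100, 100)); img[20:25, 20:25] = 2.0; img[60:70, 60:70] = 1.5
yy, xx = np.mgrid[:100, :100]
cytoplasm = (yy - 50) ** 2 + (xx - 50) ** 2 < 45 ** 2
print(summed_integrated_density(img, cytoplasm, thr=0.5))  # 2*25 + 1.5*100 = 200
```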
Oocytes are some of the longest-lived cells in the mammalian body, surviving up to 50 years in humans. Since the Balbiani Body is unique to dormant oocytes, it could be key to preserving oocyte quality during decades of storage. It is possible that as females age this line of defense no longer functions properly, resulting in a higher percentage of genetically abnormal oocytes. Our work establishes nanometer-scale morphological metrics for Balbiani Body integrity, which could be a readout of primary oocyte quality. Our results also provide useful benchmarks for improving in vitro culture of follicles or vitrification of ovarian strips used for transplantation.

Patient Information

A 20-year-old female patient with no known pathology or history of infertility elected for total hysterectomy and bilateral salpingo-oophorectomy for purposes of transgender care. Informed consent was received prior to the procedure. Use of human ovarian tissue was approved by the Institutional Review Board of the University of Texas Southwestern Medical Center (IRB#0801-404/012012-185).

Ovarian tissue preparation

Ovarian tissue, from both ovaries, was placed in 4% formaldehyde directly from the operating room and incubated for 24 hours for fixation. The tissue was embedded in paraffin and systematically cut into 5 μm sections using a Leica rotary microtome. Routine hematoxylin-eosin (H&E) staining was performed on tissue slices from each ovary. The remaining sections were allocated for immunohistochemistry. Ovarian follicle identification and categorization on H&E-stained tissue was performed according to the Gougeon classification (Gougeon, 1996).

To prepare the tissue slices for immunohistochemistry, the mounted paraffin sections were dewaxed and rehydrated in xylene and serial dilutions of ethanol, respectively. Following a wash step with phosphate-buffered saline (PBS), an antigen retrieval step was performed using a basic retrieval reagent (R&D, CTS013, USA) at 90-95°C for 5 minutes. The slides were cooled completely in PBS prior to processing.

Immunohistochemistry

To assess mitochondrial and DDX4 expression and localization, the tissue was incubated in a solution containing 10% goat serum, 3% bovine serum albumin, and 0.03% Triton X-100 (blocking solution) for one hour. The tissue was then incubated overnight with primary antibody at 4°C. The following primary antibodies were used: directly labeled mouse monoclonal anti-HSP60 (Proteintech, 1:500 dilution; 66041-1-Ig, USA) and rabbit polyclonal anti-DDX4/VASA (Abcam, 1:500 dilution; ab277638, USA). Following primary antibody incubation, a series of washes was performed with PBS. The tissue was then incubated with a goat anti-rabbit secondary antibody (Invitrogen, 1:1000 dilution, A-11008) for 1 hour prior to staining for DNA with Vectashield (DAPI) antifade mounting media (Vectorlabs, H-1000-10, USA). Negative controls were included by incubating with no primary antibody.

The directly labeled anti-HSP60 was created using an Alexa Fluor 647 Lightning-Link kit (Abcam, USA) following the manufacturer's instructions. This antibody was kept in the dark for storage and during incubations.
Confocal Imaging

Images were acquired using an inverted Nikon Eclipse Ti2-E microscope with a Yokogawa confocal scanner unit (CSU-W1), a piezo Z stage, an iXon Ultra 888 EMCCD camera (Andor), a 100X (1.35 NA) silicone oil objective, 0.5-micron steps, and 1 × 1 binning. Images were deconvolved using the Nikon Elements 3D deconvolution algorithm (22 iterations, type: automatic). Image locations were coded by position on the stage to ensure oocytes were measured only once. Images were analyzed using FIJI (https://imagej.net/software/fiji/). For mitochondria dispersal, the oocyte cytoplasm was segmented using a thresholding method (green signal, include holes), an ROI was generated and then converted to a mask. All HSP60 signal outside the mask was removed. HSP60 signal inside the oocyte was segmented by thresholding and the particle analyzer function, then measured. For condensate dissolution, condensates were identified and measured using thresholding and the particle analyzer function. For each oocyte, the integrated densities of condensates were summed and displayed.

Statistics and data representation

Plotting of data and statistical analyses were performed using GraphPad Prism 9. The sample size, relevant statistical test, and significance are included in each figure legend or on the figure itself.

Figure 1. Subcellular localization of Balbiani Body components in quiescent and activated human oocytes.
The Effects of Lead Contamination in Public Health Case: Pesarean Village, Tegal District, Indonesia

Lead is one of the ten main chemicals found naturally in the earth's crust and has toxic effects on human health. Direct contamination can occur when dust containing lead is re-suspended from soil under certain environmental conditions, for example in the dry season, and is inhaled, entering the human body directly. The aim of this study was to analyze the effect of lead contamination on human health in Pesarean Village. The research was conducted in Pesarean Village, Tegal District, where metal smelting and used lead acid battery smelting had been carried out for more than 10 years, leaving the waste behind as open dumping. The methods used in this study were qualitative, using a questionnaire and reviewing secondary data, supported by literature reviews. The results showed that almost 80% of people had acute and chronic symptoms of lead poisoning and that blood lead levels were high, exceeding the blood lead level standard for humans.

Introduction

Interaction is a major idea in ecology for understanding the relationships between biological individuals and communities [1]. In the environment, interactions between individuals of a group living in an area reflect the relationships between phenomena in the environment [2]. Soil pollution is a condition in which chemicals enter and alter the natural soil environment. The interaction between pollutants and soil components is strongly influenced by the biophysical-chemical properties of the soil components and pollutants [3]. Soil pollution occurs when pollutants contaminate the soil surface, enter the soil, and settle as toxic chemicals in the soil [4]. Heavy metals can enter the environment anthropogenically through human activities such as metal plating, mining, smelting, pesticide use, soil fertilizers, etc. [5]. One such dangerous heavy metal is lead (Pb). Lead is one of the ten main chemicals found naturally in the earth's crust and has toxic effects on human health [6]. Soil polluted by lead or other heavy metals can result from human activities such as waste incineration, burning of coal and petroleum, metal smelting, and others [6]. Direct contamination occurs through the re-suspension of lead-containing dust from soil, which can be inhaled under certain environmental conditions, for example in the dry season, and through the deposition of airborne particulate matter directly onto plant surfaces. These are the likely pathways connecting heavy metals in soil with their accumulation in plants [4].

In developing countries, there are around 600,000 new cases of intellectual disability in children every year, and an estimated 143,000 deaths occur each year. Half of the disease burden due to lead poisoning occurs in Southeast Asia, and a fifth in the Western Pacific and Eastern Mediterranean regions [7, 8]. Lead poisoning recorded by WHO (2010) in Nigeria caused the poisoning of more than 100 children in Dareta Village and Yargalma. Severe exposure to lead pollution also occurred in urban areas of Sacramento, California in 2009; the data showed that almost 3% of children in the region had blood lead levels of more than 4.5 μg/dL [3].
On average, 10-30% of inhaled lead is absorbed through the lungs, while about 5-10% of ingested lead is absorbed through the gastrointestinal tract [9]. Acute lead poisoning is usually characterized by anxiety, poor focus, headache, muscle tremor, abdominal cramps or complaints of abdominal pain, kidney damage, hallucinations, and forgetfulness; all these symptoms occur at blood lead levels of 100-120 μg/dL in adults. Signs of chronic lead poisoning in adults occur at blood lead levels of around 50-80 μg/dL, with symptoms of fatigue, insomnia, headaches, joint pain, and gastrointestinal symptoms. In adults exposed for one to two years at the workplace, blood lead levels reach around 40-60 μg/dL, with symptoms such as muscle weakness, reduced cognition, and symptoms of peripheral neuropathic and gastrointestinal damage [10].

Materials and Methods

The aim of the study was to analyze the health impact of lead contamination on people, especially people living in Pesarean Village. The study involved 51 respondents who lived in Pesarean Village, Adiwerna Subdistrict, Tegal District, in the area of RT 37 RW 08. The sample was determined by purposive sampling, with the criteria that respondents had had their blood tested in 2015 by the Environmental Health office, Ministry of Health, Yogyakarta, Indonesia (BTKL Yogyakarta, Indonesia) [11] and had stayed at least 10 years in Pesarean Village. The analysis of health impact was carried out using a questionnaire. The questionnaire results were matched with secondary data and literature on diseases caused by lead contamination.

Results and Discussions

Based on the health impact assessment conducted with questionnaires, supported by Adiwerna district health center data and interviews, the past metal and used lead acid battery (ULAB) smelting activity has had a major health impact on the surrounding people, especially on respiration. Data from the health center of Adiwerna District in 2015 and 2016 showed that diarrhea, skin problems, and tuberculosis were the main diseases often experienced by Pesarean villagers. The results also showed that 80% of respondents had complaints of coughing, shortness of breath, and fever. This can be caused by lead pollution that occurs through the air, especially dust. These symptoms were also in accordance with the symptoms shown by the people of Cinangka Village, which has the same conditions as Pesarean Village as a center of the used battery smelting industry. Based on the report of the Indonesian Ministry of Environment in 2011 for Cinangka Village, the gastrointestinal effects of lead poisoning (abdominal cramps, colic) usually began with constipation, nausea, and vomiting. These symptoms are counted as acute exposure to lead. The emerging neurological symptom was encephalopathy, with headaches, confusion, and frequent fainting. The main mechanism of neurological disruption is the ionic mechanism, in which lead substitutes for calcium ions. This allows lead to pass through the blood-brain barrier; once it has penetrated, lead accumulates in astroglial cells, disrupting myelin sheath formation [12]. Lead concentration can also affect neural excitation and memory related to neurotransmitter activity [13]. Previous research stated that some chronic symptoms of lead poisoning were abdominal and muscular pain, arthralgia, irritability, depression, sleep disorders, memory impairment, and peripheral nerve damage, while gastrointestinal colic caused by high lead exposure was associated with neuropathic damage [10, 12].
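The blood lead thresholds quoted above lend themselves to a simple screening rule. A sketch that maps an adult blood lead level onto the symptom ranges cited from [10]; since the ranges overlap in the source, the check runs from highest to lowest. This is illustrative only, not a clinical tool:

```python
def lead_symptom_category(bll_ug_dl):
    """Map an adult blood lead level (ug/dL) to the symptom ranges
    described in the text (thresholds from [10]; simplified)."""
    if bll_ug_dl >= 100:   # 100-120 ug/dL: acute poisoning range
        return "acute poisoning (anxiety, headache, tremor, abdominal cramps)"
    if bll_ug_dl >= 50:    # 50-80 ug/dL: chronic poisoning range
        return "chronic poisoning (fatigue, insomnia, joint pain, GI symptoms)"
    if bll_ug_dl >= 40:    # 40-60 ug/dL: 1-2 years occupational exposure
        return "occupational exposure (muscle weakness, neuropathic symptoms)"
    return "below the symptom thresholds cited in the text"

print(lead_symptom_category(272.994))  # highest level measured in the study
```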
In 2013, based on health center data for Adiwerna district, many people in Pesarean Village were affected by acute respiratory tract infection (ISPA), weak muscle and tissue systems, and mental disability among some residents. Dust particles containing lead can enter the lungs and settle there, resulting in shortness of breath. The types of disease described above are consistent with the acute symptoms of lead poisoning, the common symptoms that arise due to lead poisoning. Acute intensive exposure can be seen in lead poisoning with symptoms of abdominal pain, constipation, fatigue, anemia, and fever [15]. Besides neurological symptoms, acute lead exposure also causes digestive system symptoms such as diarrhea, constipation, nausea, and vomiting [16, 17].

The blood test results (Figure 1) showed that the majority of respondents had blood lead levels above 20 μg/dL, while 69% of respondents had blood lead levels above 40 μg/dL. The highest blood lead level measured was 272.994 μg/dL, in a respondent who worked as a smelter. Health impacts due to lead exposure were felt by all respondents. When linked to respondents' occupations, 51% of respondents worked as smelters. The results of the questionnaire showed that the complaints most often felt by respondents who worked as smelters were coughing and shortness of breath, aches, and abdominal pain. These symptoms can be interpreted as chronic symptoms due to lead exposure. The lead exposure received by respondents who work as smelters counts as occupational exposure through the respiratory and digestive tracts, coming mostly from the smelting work environment. This is in accordance with previous research concluding that blood lead levels can rise with occupational exposure [18].

Conclusion

Lead pollution from metal smelting and used battery waste has an impact on people's health. Various acute and chronic symptoms of lead poisoning, such as coughing, fatigue, muscle weakness, and constipation, were experienced by all respondents, and almost 80% of the blood test results showed blood lead levels exceeding the maximum limit, while the most severe chronic outcome occurred in one family whose children had an intellectual disability.
The Dutch LATER physical outcomes set for self-reported data in survivors of childhood cancer

Purposes

Studies investigating self-reported long-term morbidity in childhood cancer survivors (CCS) use heterogeneous outcome definitions, which compromises comparability; these definitions also include (un)treated asymptomatic and symptomatic outcomes. We generated a Dutch LATER core set of clinically relevant physical outcomes, based on self-reported data. Clinically relevant outcomes were defined as outcomes associated with clinical symptoms or requiring medical treatment.

Methods

First, we generated a draft outcome set based on existing questionnaires embedded in the Childhood Cancer Survivor Study, British Childhood Cancer Survivor Study, and Dutch LATER study. We added specific outcomes reported by survivors in the Dutch LATER questionnaire. Second, we selected a list of clinically relevant outcomes by agreement among a Dutch LATER experts team. Third, we compared the proposed clinically relevant outcomes to the severity grading of the Common Terminology Criteria for Adverse Events (CTCAE).

Results

A core set of 74 self-reported long-term clinically relevant physical morbidity outcomes was established. Comparison to the CTCAE showed that 36% of these clinically relevant outcomes were missing from the CTCAE.

Implications for Cancer Survivors

This proposed core outcome set of clinically relevant outcomes for self-reported data will be used to investigate self-reported morbidity in the Dutch LATER study. Furthermore, this Dutch LATER outcome set can be used as a starting point for international harmonization of long-term outcomes in survivors of childhood cancer.

Electronic supplementary material: The online version of this article (10.1007/s11764-020-00880-0) contains supplementary material, which is available to authorized users.

Introduction

The vast majority of children diagnosed with cancer nowadays will achieve long-term survival [1,2]. These childhood cancer survivors (CCS) are a growing, vulnerable group of individuals who are at risk of developing long-term morbidity due to previous treatment for cancer in early stages of life. Knowledge of the burden of long-term morbidity in CCS, its underlying types of health conditions, and its risk factors has been presented in various studies during the past decades [3-5].

In long-term morbidity research in CCS, a broad variety of outcome assessment methods is used. Long-term morbidity outcomes can be assessed by self-reporting via questionnaires [6-24], by medical evaluation during outpatient clinic visits [25-34], or by linkage with existing registries such as national hospital discharge registries [35-39]. Authors often include different types and different numbers of organ systems in their calculations of physical long-term morbidity. Also, incidence or prevalence estimates are often reported without describing which health conditions or organ systems were included in these calculations. Definitions of long-term morbidity outcomes also vary; for example, authors reporting on cardiovascular conditions generally report on heart failure, myocardial infarction, and hypertension, but some also include stroke as a cardiovascular condition [10,14,17,18,36].
While many authors do not grade the severity of the reported long-term morbidity in CCS, others use the Common Terminology Criteria for Adverse Events (CTCAE) [40], either in its original form or in an adapted version incorporating specific additional outcomes that the authors considered missing [41-43]. This lack of uniformity in types of outcomes, outcome definitions, and outcome grading—even among studies that use similar data ascertainment methods—limits the interpretation, comparability, and generalizability of studies investigating the burden of long-term morbidity in CCS. Furthermore, the outcomes described in current studies include asymptomatic and symptomatic outcomes, with or without treatment. To get better insight into the overall burden for survivors, the Dutch LATER questionnaire study aims to evaluate only outcomes that are symptomatic and/or require medical treatment. The aim of this study is to develop a set of self-reported long-term physical outcomes that are clinically relevant for CCS, defined as morbidities with clinical symptoms and/or requiring medical treatment, to investigate the burden of morbidity in the Dutch LATER questionnaire study.

Development of draft outcome set based on existing questionnaires and input from survivors

Three commonly used questionnaires addressing long-term morbidity in childhood cancer survivors were used for this article: the Dutch Childhood Oncology Group-Long-Term Effects After Childhood Cancer (Dutch LATER) study questionnaire, which was used in the Dutch LATER research program [44]; the North American Childhood Cancer Survivor Study questionnaire [45]; and the British Childhood Cancer Survivor Study questionnaire [46]. See Supplementary Tables S1-S3 for the respective items. In long-term morbidity research, the Childhood Cancer Survivor Study questionnaire was used either in its original form [6-8, 10, 12-15, 18, 20, 22, 24, 47-52] or adapted by authors for their own specific study [9,21,53]. The questionnaires covered multiple dimensions of late side effects. For this article, we focused on self-reported physical outcomes, covered by the questionnaire sections on medical history and health conditions.

The methods of comparing the three long-term morbidity questionnaires and selecting self-reported long-term physical outcomes for CCS are summarized in Fig. 1. We condensed all outcomes from the three questionnaires into 15 categories. All but two were defined per organ system, i.e., conditions of the eye, ear, speech, cardiac, vascular, pulmonary, gastro-intestinal, hepatic, renal and urinary tract, endocrine, musculoskeletal, and neurologic conditions, and other conditions. In addition, surgical procedures and malignancies were considered (Supplementary Table S4). We listed the concordances and discordances in outcomes embedded in the three aforementioned questionnaires (see the sketch after this paragraph). The draft outcome set consisted of a selection of (concordant and discordant) outcomes. Next, we reviewed all health conditions that were reported in the open text fields by CCS participating in the Dutch LATER questionnaire study and added these outcomes to the draft outcome set by outcome category. Temporary or self-limiting morbidities, for example urinary tract infections, pneumonia, and runner's knee, were not considered as potential outcomes due to their transient nature and were therefore removed from the draft outcome set.
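As a toy illustration of the concordance/discordance listing across the three questionnaires, simple set operations suffice. The outcome names below are hypothetical placeholders, not items taken from the actual instruments:

```python
# Hypothetical outcome sets for the three questionnaires (illustrative only).
dutch_later = {"heart failure", "hypertension", "stroke", "cataract"}
ccss        = {"heart failure", "hypertension", "myocardial infarction", "cataract"}
bccss       = {"heart failure", "stroke", "cataract"}

concordant = dutch_later & ccss & bccss                 # present in all three
discordant = (dutch_later | ccss | bccss) - concordant  # present in only some
print(sorted(concordant))   # ['cataract', 'heart failure']
print(sorted(discordant))   # ['hypertension', 'myocardial infarction', 'stroke']
```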
Childhood cancer-directed surgeries impacting CCS in later life (for example, limb amputation, which results in a lifelong disability, or removal of an eye, which results in lifelong complications) were added to the draft outcome list. Obesity and underweight were also added, because they were not self-reported outcomes in the aforementioned questionnaires.

Selection of self-reported long-term physical outcomes for CCS

The draft outcome set was reviewed in detail by the Dutch LATER experts team, a multidisciplinary team of late effects clinicians (pediatric oncology and medical oncology), late effects researchers, a pediatric endocrinologist, and a survivor representative, all of whom are involved in late effects research. The experts team focused on health conditions that are relevant for childhood cancer survivors, i.e., health conditions that influence their daily life, either by resulting in symptoms or by requiring medical treatment. A proposal for a core outcome set was established by agreement between two authors (N.S. and L.F.) and discussed by the experts team in a phone meeting. During this meeting, agreement was reached on a final core set containing all outcomes deemed relevant for survivors. Subsequently, for each outcome in the core set, definitions of clinical relevance were established by three authors (N.S., L.F., and L.K.), based on outcome-specific (potential) clinical symptoms and/or (potential) medical treatment. For obesity and underweight in adults, clinical relevance was defined according to the definitions used by the World Health Organization. These definitions were discussed by the experts team by e-mail until agreement was reached on all clinical relevance criteria.

Comparison between CTCAE and the new Dutch LATER core outcome set

The CTCAE, originally developed to score acute treatment toxicities [40,54], is commonly used to grade the severity of outcomes in survivorship studies. This terminology comprises a 5-point grading scale for many adverse events, which are defined as unfavorable and unintended signs, symptoms, or disease associated with the use of medical treatment. Severity grades range from 1 (mild, asymptomatic or mild symptoms; clinical or diagnostic observations only; intervention not indicated) to 5 (death related to adverse event) [40]. To gain insight into the agreement between our newly defined outcome set and CTCAE grading, we added the CTCAE grade based on version 4.03 corresponding to our outcome definition for every proposed physical long-term morbidity outcome. Recently, researchers from the St. Jude Lifetime Cohort Study (SJLIFE) adjusted the CTCAE criteria to grade long-term morbidity in their cohort, for which data were obtained during clinical assessment using multiple diagnostic modalities. To gain insight into the concordance between the CTCAE outcomes and the Dutch LATER core outcome set, we compared the different lists of outcomes.

Results

Selection of self-reported long-term physical outcomes of clinical relevance

The process of selection of self-reported clinically relevant long-term physical outcomes, as displayed in Fig. 1, resulted in a core outcome set consisting of 74 proposed outcomes.

Fig. 1 Overview of the steps followed in the process of development of the patient-reported outcome list for research on physical long-term morbidity in childhood cancer survivors
The experts team decided on re-categorizing surgical procedures within their respective organ systems and did not consider conditions of speech as clinically relevant. Therefore, the 15 initial outcome categories were re-categorized into 13 proposed main organ system categories: conditions of the eye, ear, cardiac, vascular, respiratory, gastro-intestinal, hepatobiliary tract, renal and urinary tract, endocrine, musculoskeletal, and nervous system conditions, other conditions, and neoplasms (see Table 1).

Agreement between the newly defined core outcome set and the CTCAE grading

For each outcome, the minimum CTCAE grades that correspond with our criteria for clinical relevance are shown in Supplementary Table S5. In all, 27 out of 74 (36%) outcomes cannot be graded according to CTCAE because they are not present in the CTCAE as separate entities. This group of outcomes can be categorized into three subgroups. First, it comprised certain surgeries whose clinical relevance the LATER experts team agreed upon (n = 18), because they influence CCS's daily life either by having medical consequences (e.g., splenectomy or organ transplantations) or by having cosmetic consequences (e.g., eye enucleation or limb amputation). Second, it comprised blindness and deafness, which are included in the CTCAE not as specific outcomes but as grading scales for several other specific eye and ear/nose/throat outcomes. The LATER experts team agreed that, regardless of the underlying pathophysiological mechanism, blindness and deafness were both clinically relevant outcomes that should be included in the core outcome set. Third, it comprised specific outcomes that were not present as separate entities in the CTCAE but were reported by CCS in the Dutch LATER questionnaire and perceived as clinically relevant by the experts team (n = 7): aortic aneurysm, liver cirrhosis, tubular dysfunction of the kidneys, prolactinoma, polycystic ovarian syndrome, underweight, and pituitary dysfunction.

Of the remaining 48 conditions, 11 (15%) fulfilled the definition for conditions with a CTCAE grade 3, that is, severe or medically significant but not immediately life-threatening. For 27 (36%) conditions, our criteria for clinical relevance corresponded with a CTCAE grade 2, moderate severity. For nine (12%) conditions (decreased pulmonary function, proteinuria, chronic kidney disease, precocious puberty, diabetes mellitus, ischemic cerebrovascular accident, transient ischemic attack, epilepsy, and headache), it was not possible to define the corresponding CTCAE grade for our established clinical relevance criteria, because additional clinical information was needed for CTCAE-based grading. Comparison to the SJLIFE-based grading showed that 34 conditions from our core set were not present in SJLIFE (46%), and additional information was needed for grading of 5 conditions (7%). A total of 23 clinically relevant conditions corresponded with SJLIFE grade 2 (31%), and two clinically relevant conditions (adrenal insufficiency and growth hormone deficiency) corresponded with SJLIFE grade 1 (3%).
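Tallies of this kind can be reproduced directly from an outcome-to-grade mapping. A sketch with a small hypothetical subset of outcomes (the full 74-outcome mapping is in Supplementary Table S5, which is not reproduced here):

```python
from collections import Counter

# Hypothetical mapping of core-set outcomes to the minimum CTCAE grade meeting
# the clinical-relevance criteria; None = absent from the CTCAE as an entity.
ctcae_grade = {
    "splenectomy": None, "blindness": None, "aortic aneurysm": None,
    "hypothyroidism": 2, "growth hormone deficiency": 2,
    "heart failure": 3, "myocardial infarction": 3,
}
tally = Counter("missing" if g is None else g for g in ctcae_grade.values())
for grade, n in sorted(tally.items(), key=lambda kv: str(kv[0])):
    print(grade, n, f"{100 * n / len(ctcae_grade):.0f}%")
```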
Discussion

We present a proposal for a core set of 74 self-reported long-term physical outcomes of clinical relevance in survivors of childhood cancer. By comparing existing survivorship questionnaires and by reviewing every specific morbidity reported by CCS in the open text fields of our Dutch nationwide questionnaire study, we followed an innovative method that focuses on outcomes that are clinically relevant for the survivor, because their presence influences daily life. Our outcome set will be used for investigating the burden of long-term morbidity in the Dutch LATER questionnaire study. This set can also be used for international harmonization of a uniform core outcome set for long-term morbidity in CCS, to facilitate worldwide collaboration in late effects research.

Compared with other grading scales used for long-term morbidity research in CCS, the newly developed Dutch LATER core outcome set differs on three important key points. First, this core outcome set was designed with the single purpose of investigating self-reported long-term morbidity in childhood cancer survivors, by combining existing questionnaires and outcomes reported by survivors. Second, we selected outcomes describing morbidity with clinical symptoms or requiring medical treatment, the so-called clinically relevant outcomes. Third, we included outcomes where the treatment for childhood cancer caused direct damage with persistent impact for the survivor in later life, for example, limb amputation, which results in a lifelong disability, or removal of an eye, which results in lifelong complications.

Because the CTCAE criteria were originally designed for grading acute adverse events during adult cancer trials [54], the current CTCAE version 4.03 [40] does not cover the complete spectrum of long-term morbidity that CCS might encounter [42]. Several authors have already stated that relevant outcomes were missing for CCS and have used adapted versions [41-43]. Comparison of our core set of long-term self-reported physical outcomes to the commonly used CTCAE showed that 36% of the outcomes were not present in the CTCAE. Moreover, the CTCAE does not incorporate self-reported data to assess long-term morbidity [42]. For nine of the 48 conditions that were present in the CTCAE, we could not perform severity grading because detailed additional clinical information was needed for appropriate grading, which was not available from current questionnaires and is often too complicated to ask patients directly in a questionnaire. Although often only health conditions of grade 3 and higher are included when studying severe physical long-term morbidity in CCS, our results show that many grade 2 conditions will have consequences for a survivor because of symptoms or needed treatment. From our core outcome set, up to 27 clinically relevant outcomes corresponded with CTCAE grade 2, for example, several endocrine deficiencies that require chronic medication use, and these would have been missed in such studies.

(Table 1, excerpt of clinical relevance criteria: growth hormone deficiency: with clinical symptoms and confirmed by laboratory testing, with at least one of the following criteria: 1. requiring medical treatment with growth hormone; 2. growth hormone treatment was indicated, but the treating physician and/or parents decided not to start this treatment because of medical contra-indications. Hypoparathyroidism: with clinical symptoms and confirmed by laboratory testing, with at least one of the following criteria (truncated in the source).)
Comparison to the SJLIFE-adapted CTCAE for grading of clinically ascertained data showed that even more of our core outcomes were missing and that 24 clinically relevant conditions corresponded to grade 2 or even grade 1. Hence, our results support previous authors concluding that the CTCAE in its current form is not optimal for grading the severity of (self-reported) long-term physical morbidity outcomes for CCS [41-43].

To our knowledge, this is the first comprehensive proposal to define a core outcome set for self-reported long-term physical outcomes in CCS. A strength of this study is that we focused on clinical relevance for CCS; a limitation is that we were not yet able to incorporate the prioritization of outcomes by survivors. This can be the focus of future research. Also, because the purpose of this core outcome set was to facilitate the investigation of physical long-term morbidity in the Dutch LATER cohort, the proposed outcome definitions reflect the agreement among the Dutch LATER experts team only. To overcome any subjectivity in outcomes used by various childhood cancer survivorship research groups, we advocate international harmonization of a core outcome set for physical long-term morbidity in childhood cancer survivors. A uniform global core outcome set is highly needed to enable comparison of future long-term morbidity studies, to uniformly evaluate survivorship care, and to facilitate collaboration within survivorship research. The International Guideline Harmonization Group [55] has started an initiative to develop a harmonized outcome set by a Delphi method. This will facilitate international collaboration and data pooling.

In conclusion, we propose a Dutch LATER core set of self-reported long-term physical outcomes of clinical relevance for CCS that will be used to investigate the burden of long-term morbidity in childhood cancer survivors in the Dutch LATER questionnaire study. We advocate starting international discussion and research to harmonize long-term physical morbidity outcomes that are clinically relevant for CCS.

Authors' contributions: All authors contributed to the design and data collection of the study. All authors contributed to the interpretation of data. NS, EF, MvdHL, JK, WT, RM, and LK drafted the manuscript, and all other authors critically revised the manuscript. All authors approved the final version.

Compliance with ethical standards

Conflict of interest: The authors declare that they have no conflict of interest.

Data availability: Data used for this study were not publicly available.

Ethical statement: The LATER questionnaire study was declared exempt from review of medical intervention research by the Medical Ethics Committee of the VU University Medical Center of Amsterdam and by the boards of all participating centers. All LATER questionnaire participants gave written informed consent.
Single-channel properties of IKs potassium channels.

Expressed in Xenopus oocytes, KvLQT1 channel subunits yield a small, rapidly activating, voltage-dependent potassium conductance. When coexpressed with the minK gene product, a slowly activating and much larger potassium current results. Using fluctuation analysis and single-channel recordings, we have studied the currents formed by human KvLQT1 subunits alone and in conjunction with human or rat minK subunits. With low external K+, the single-channel conductances of these three channel types are estimated to be 0.7, 4.5, and 6.5 pS, respectively, based on noise analysis at 20 kHz bandwidth of currents at +50 mV. Power spectra computed over the range 0.1 Hz-20 kHz show a weak frequency dependence, consistent with current interruptions occurring on a broad range of time scales. The broad spectrum causes the apparent single-channel current value to depend on the bandwidth of the recording, and is mirrored in very "flickery" single-channel events of the channels from coexpressed KvLQT1 and human minK subunits. The increase in macroscopic current due to the presence of the minK subunit is accounted for by the increased apparent single-channel conductance it confers on the expressed channels. The rat minK subunit also confers the property that the outward single-channel current is increased by external potassium ions.

INTRODUCTION

Expression of the minK protein is associated with potassium channel activity in a variety of tissues (Busch and Suessbrich, 1997). The minK (also called ISK) protein underlies a slowly activating current in uterine smooth muscle (Boyle et al., 1987) that is developmentally regulated; it also underlies the slow delayed rectifier current IKs in cardiac tissue (Freeman and Kass, 1993; Varnum et al., 1993) and a potassium current in epithelial cells of the ear (Sakagami et al., 1991; Marcus and Shen, 1994). Expression of this small (129-130 amino acids) protein in heterologous systems yields at most a small potassium current whose magnitude saturates at low expression levels (Blumenthal and Kaczmarek, 1994), suggesting that it must combine with other subunit types to form functional IKs channels. In the channel complex, minK appears to be present in multiple copies (Tzounopoulos et al., 1995), quite possibly as few as two (Wang and Goldstein, 1995).

The other partner in the IKs channel is the product of the LQT1 gene. Long QT syndrome (LQTS) is a genetically heterogeneous disorder that causes cardiac arrhythmias and leads to sudden death. One of several loci for this disorder, LQT1 is located on chromosome 11 (Keating et al., 1991) and is the gene for a potassium channel subunit named KvLQT1 (Wang et al., 1996b). Although KvLQT1 subunits produce a potassium current when expressed alone, much larger currents having the slow kinetic characteristics of IKs are obtained from the coexpression of KvLQT1 and minK subunits (Barhanin et al., 1996; Sanguinetti et al., 1996; Yang et al., 1997). The LQTS-associated mutations in the KvLQT1 gene appear to reduce the expressed IKs current in a dominant-negative fashion (van den Berg et al., 1997). Because KvLQT1 subunits give rise to functional potassium channels when expressed alone, it is interesting to consider the nature of the interaction between minK and KvLQT1 that produces larger and more slowly activating currents when these genes are coexpressed.
It has been argued that minK serves as a regulator of channel activity (Ben-Efraim et al., 1996), but evidence is accumulating that minK residues form part of the pore of the IKs channel complex (Wang et al., 1996a; Sesti and Goldstein, 1998; Tai and Goldstein, 1998). In a recent study using COS cells (Romey et al., 1997), it was concluded that the effect of minK coexpression was greatly to increase channel number while decreasing the single-channel conductance of the channels expressed from KvLQT1 subunits. In the present study, we revisit the single-channel properties of the KvLQT1 and coexpressed channels, making use of fluctuation analysis and single-channel recordings from Xenopus oocytes. A companion study considers the single-channel properties of coexpressed channels containing mutant minK subunits as well (Sesti and Goldstein, 1998).

MATERIALS AND METHODS

DNA and RNA Synthesis

Human and synthetic rat minK cDNAs (Hausdorff et al., 1991; Goldstein and Miller, 1991) were obtained from Dr. S. Goldstein (Yale University) and propagated in pGEM-A and pBF2 vectors, respectively (Swanson et al., 1990; Tai and Goldstein, 1998). Point mutations in the minK constructs were made by PCR and verified by sequencing. Plasmids of rat and human minK were linearized with NotI and MluI, respectively. cRNAs were transcribed with the MEGAscript T7 and SP6 RNA polymerase kits (Ambion Inc., Austin, TX). Two human KvLQT1 constructs were obtained from Drs. M. Sanguinetti and M. Keating (University of Utah), which we call s-KvLQT1 and l-KvLQT1. The s-KvLQT1 (Sanguinetti et al., 1996) has a truncated NH2 terminus, while l-KvLQT1 is full-length, having 95 additional residues at the NH2 terminus. Each KvLQT1 gene was subcloned into a modified Bluescript vector (Bluescript KSM; gift from W. Joiner, Yale University) that incorporates β-globin untranslated sequences and a poly-A tail for increased protein translation in oocytes. Plasmids of s-KvLQT1 and l-KvLQT1 were linearized with NotI and XbaI, respectively, and transcribed with the MEGAscript T3 RNA polymerase kit (Ambion Inc.). Sizes of transcribed cRNAs were verified by gel electrophoresis.

Electrophysiology

Human KvLQT1 cRNA (5.8 ng) was injected into Xenopus oocytes alone or in conjunction with 1 ng minK cRNA. We use the notation hIKs to denote channels resulting from coexpression of human minK and human KvLQT1, rhIKs to denote channels from the combination of rat minK and hKvLQT1, and ILQT to denote channels expressed from hKvLQT1 alone. In this study, only the full-length l-KvLQT1 variant was used for expressing hIKs and ILQT channels; most rhIKs recordings were made with this variant as well. The rhIKs channels formed with the truncated s-KvLQT1 construct had identical behavior in terms of voltage dependence and single-channel unitary current. Half-amplitude threshold analysis (Colquhoun and Sigworth, 1995) was used to idealize single-channel recordings for kinetic analysis and the reconstruction of ensemble time courses. For noise analysis, the macroscopic currents induced by a series of depolarizing pulses were recorded on video tape using a VR-10 Digital Data Recorder (Instrutech Corp.). Data were then transferred digitally from tape through the VR-10 Digital Recorder using the program VCatch, developed in our laboratory. The raw data (94 kHz sampling rate) were filtered and decimated using a digital Gaussian filter.
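Half-amplitude threshold idealization (Colquhoun and Sigworth, 1995), mentioned above, classifies each sample as open or closed by thresholding at half the unitary current amplitude. A minimal single-level sketch; real idealizations also handle multiple conductance levels and filter rise-time corrections:

```python
import numpy as np

def idealize_half_amplitude(trace, unit_amp):
    """Half-amplitude threshold idealization: mark each sample open (1)
    or closed (0) by comparing it against half the unitary amplitude."""
    return (np.asarray(trace) > 0.5 * unit_amp).astype(int)

# Noisy synthetic single-channel sweep containing one 0.5-pA opening:
rng = np.random.default_rng(1)
sweep = np.concatenate([np.zeros(50), 0.5 * np.ones(100), np.zeros(50)])
sweep += rng.normal(0, 0.05, sweep.size)
ideal = idealize_half_amplitude(sweep, unit_amp=0.5)
print(ideal[:5], ideal[75:80])  # closed at the start, open in the middle
```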
Power spectra were calculated from data decimated and filtered to 10 Hz, 100 Hz, 1 kHz, and 10 kHz bandwidths. A power spectrum covering the frequency range 0.1 Hz–20 kHz was obtained by combining the four individual spectra after correcting for filter responses. Statistical quantities are expressed as mean ± SEM with the number of determinations n ≥ 3 unless otherwise stated.

RESULTS

Activation of IKs and ILQT Channels

In oocytes coinjected with minK and KvLQT1 cRNAs, both cell-attached and inside-out patch recordings showed slowly activating outward currents with monotonically increasing noise during 5-s depolarizations (Fig. 1, A and B). As characterized by Boltzmann fits to isochronal conductance–voltage curves, the voltage dependence of activation of these hIKs and rhIKs channels is quite shallow, with an effective charge of 1–1.2 e0 and a midpoint voltage near +55 mV. The main difference in behavior between the two channel types is the more rapid time course of activation in the hIKs channels. Both channel types show gradually increasing current even at the end of 60-s depolarizations to +50 mV (Fig. 1, D and E). For comparison, the activation of currents resulting from the injection of hKvLQT1 cRNA alone is shown from a cell-attached giant patch recording in Fig. 1 C. The fragility of the giant patches precluded recordings at large positive voltages, but a Boltzmann fit over the accessible voltage range yields a half-activation voltage of −6 mV, considerably more negative than that of IKs channels and consistent with previous observations (Sanguinetti et al., 1996). The tail currents show a "hook" in the time course, characteristic of KvLQT1 currents (Sanguinetti et al., 1996; Pusch et al., 1998). The patch recordings shown in Fig. 1 all have kinetics and voltage dependence similar to the corresponding whole-cell currents obtained by two-electrode voltage clamp.
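As an aside for readers who wish to reproduce this kind of isochronal analysis, the following is a minimal Python sketch of fitting the Boltzmann function G = Gmax/{1 + exp[(V1/2 − V)/k]} to a conductance–voltage relation. It is not the authors' analysis code, and the data values, initial guesses, and room-temperature thermal voltage are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(V, Gmax, Vhalf, k):
    """Isochronal activation curve G(V) = Gmax / (1 + exp((Vhalf - V)/k))."""
    return Gmax / (1.0 + np.exp((Vhalf - V) / k))

# Hypothetical isochronal tail conductances (nS) at test voltages (mV)
V = np.array([-70, -50, -30, -10, 10, 30, 50, 70, 90, 110], dtype=float)
G = np.array([0.02, 0.05, 0.10, 0.18, 0.30, 0.44, 0.55, 0.68, 0.76, 0.83])

popt, _ = curve_fit(boltzmann, V, G, p0=(1.0, 50.0, 20.0))
Gmax, Vhalf, k = popt
print(f"Gmax = {Gmax:.2f} nS, V1/2 = {Vhalf:.1f} mV, k = {k:.1f} mV")
# The slope factor sets the effective gating charge: z ~ (RT/F)/k ~ 25.7 mV / k
print(f"effective charge z ~ {25.7 / k:.2f} e0")
```

With a slope factor of 20–26 mV this gives z of roughly 1–1.3 e0, in line with the shallow voltage dependence described above.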
Reversal potentials of rhIKs currents were measured from macroscopic tail currents obtained in inside-out or cell-attached patch recordings with 100 mM K+, Na+, Rb+, or Cs+ in the pipette; in each case, the bath solution contained 100 mM K+. Table I (top) shows the reversal potentials of the rhIKs channels. The table also gives the computed permeability ratios. The permeability ratios are very similar to those obtained from voltage clamp recordings of oocytes injected with rat minK RNA (Hausdorff et al., 1991). Like many other potassium channels, the permeability sequence is K+ > Rb+ > Cs+ > Na+. It was more difficult to obtain patch recordings with macroscopic hIKs currents. Therefore, the ion selectivity of these channels was characterized from whole-cell currents with 100 mM K+, Na+, Rb+, or Cs+ in the bath solution. For comparison, the reversal potentials of rhIKs channels were also measured in this way, using the same batch of oocytes. There was no significant difference in reversal potentials between these two channel types (Table I).
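The permeability ratios in Table I follow from the shift in bi-ionic reversal potential. A minimal sketch of that calculation is shown below; the ΔVr values are placeholders rather than the measured ones, and the recording temperature is assumed.

```python
import numpy as np

F = 96485.0   # C/mol
R = 8.314     # J/(mol*K)
T = 295.0     # K (~22 degrees C, assumed)

def permeability_ratio(dVr_mV):
    """P_X/P_K = exp(F * dVr / (R*T)), where dVr = Vr(X) - Vr(K) is the
    reversal-potential shift measured with 100 mM K+ on the opposite side."""
    return np.exp(F * (dVr_mV * 1e-3) / (R * T))

# Placeholder reversal-potential shifts (mV); the real values are in Table I.
for ion, dVr in [("Rb+", -25.0), ("Cs+", -55.0), ("Na+", -90.0)]:
    print(f"P_{ion}/P_K = {permeability_ratio(dVr):.3f}")
```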
Single hIKs Channel Current

If we assume that the IKs channel has only one conductance level with unitary current i, then for n channels the variance of current fluctuations will depend on the mean current I according to

σ² = iI − I²/n   (Eq. 1)

(Sigworth, 1980). We shall denote by i_v an estimate of i obtained from fitting Eq. 1 to the variance–mean relationship. For this analysis, a series of current sweeps was collected by applying repetitive depolarizing pulses to +50 mV. The mean current and variance from hIKs channels (Fig. 2 A) were computed using groups of two sweeps to minimize errors due to slow current drifts (Heinemann and Conti, 1992). Shown in Fig. 2, B and C, are two mean–variance plots computed from data filtered to different extents. Fitting Eq. 1 yielded the estimates i_v = 0.28 pA at 100 Hz and 0.51 pA at 10 kHz bandwidth.

[Figure 1. Macroscopic currents from the channel types hIKs (from human minK coexpressed with hKvLQT1), rhIKs (rat minK coexpressed with hKvLQT1), and ILQT (hKvLQT1 expressed alone). (A) Activation of human IKs currents from a cell-attached patch recording with standard patch solutions. Currents (top) were induced by depolarizations to −70 to +110 mV in 20-mV steps from a −80-mV holding potential. No leak subtraction; filtered at 500 Hz. The isochronal voltage dependence of normalized conductance (bottom) was fitted with a Boltzmann function G = Gmax/(1 + exp[(V1/2 − V)/k]), with V1/2 = 52 mV and k = 20 mV. (B) Activation of rhIKs currents from an inside-out patch recording with 130 mM K-aspartate, 10 mM KCl, 1 mM EGTA, pH 7.4 in the bath; 100 mM NaCl, 0.2 mM KCl, 1 mM MgCl2, 1.8 mM CaCl2, pH 7.4 in the pipette. Current activation (top) at −70 to +150 mV in 20-mV steps from a −80-mV holding potential. Leak current was subtracted by the P/5 protocol with a −100-mV leak holding potential; data were filtered at 500 Hz. The isochronal voltage dependence of rhIKs from three patches is fitted by a Boltzmann function with V1/2 = 56 mV and k = 26 mV. (C) Activation of ILQT from a giant patch (30-μm diameter pipette tip) with 93 mM K-aspartate, 7 mM KCl, 1 mM EGTA, pH 7.4 in the bath; 96 mM NaCl, 2 mM KCl, 1 mM MgCl2, 0.1 mM CaCl2, pH 7.4 in the pipette. Currents were induced by depolarizing pulses from a −80-mV holding potential to potentials of −70 to +60 mV in 10-mV steps, and repolarization to −60 mV; filtered at 100 Hz. No leak current correction was applied. Normalized peak conductance was fitted with V1/2 = −6 mV and k = 18 mV. Conductance in all three cases was computed assuming a linear open-channel current–voltage relationship with a reversal potential of −80 mV. (D) hIKs current from a cell-attached patch recording with standard solutions, showing the response to a 60-s depolarization to +50 mV. Filter bandwidth 40 Hz. (E) A corresponding recording of rhIKs current.]

The discrepancy between the two estimates of unitary current suggests that a substantial amount of variance is contained in high-frequency components. To investigate the high-frequency components of the hIKs current fluctuations, we computed the power spectrum of the macroscopic currents. Pairs of aligned current traces were subtracted as shown in Fig. 2 D. Power spectra were computed from subtracted traces (Sigworth, 1981) by fast Fourier transform, and the resulting power spectrum after correction for background noise is shown in Fig. 2 E. It has a remarkably straight 1/f dependence over five decades of frequency. The weak frequency dependence of the spectral density implies that the observed noise variance will be heavily dependent on filter cutoff frequency. From Parseval's theorem, we have

σ² = ∫ S(f) |H(f)|² df   (Eq. 2)

where S(f) is the power spectral density of the current fluctuations and H(f) is the filter transfer function. To give an idea of the effect of filter bandwidth, the spectral density in Fig. 2 E was integrated numerically and converted into unitary current amplitude according to the expression

i_s(f) = (1/Ī) ∫₀^f S(f′) df′   (Eq. 3)

where Ī is the time-averaged mean current and i_s(f) is the apparent unitary current at bandwidth f. As can be seen in Fig. 2 F, i_s increases strongly with filter bandwidth, and is still increasing at f = 20 kHz. Thus, fluctuation analysis is expected to yield any of a variety of unitary current amplitudes, depending on the bandwidth. At 20 kHz, i_s is 0.47 pA at +50 mV. The expression in Eq. 3 is missing a correction term (Sigworth, 1981) and therefore underestimates the unitary current by a factor of about (1 − p̄), where p̄ is the mean open probability. Thus, the apparent unitary currents from spectral analysis (Fig. 2 F) of 0.2 and 0.47 pA, at 100 Hz and 20 kHz bandwidth, respectively, become ≈0.25 and 0.6 pA when p̄ = 0.2 is assumed. These values agree with those obtained from the mean–variance analysis (Fig. 2, B and C).
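To make the two estimators concrete, here is a minimal sketch of both the mean–variance fit (Eq. 1) and the bandwidth-dependent spectral estimate (Eq. 3) on synthetic data. The channel count, spectrum amplitude, mean current, and open probability used below are illustrative assumptions, not values taken from the recordings.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# --- Mean-variance analysis, Eq. 1: var = i*I - I**2 / n ---
i_true, n_ch = 0.5, 400                 # assumed unitary current (pA), channel count
p_open = np.linspace(0.01, 0.45, 30)    # open probabilities spanning a sweep
I = n_ch * p_open * i_true              # ensemble mean current (pA)
var = i_true * I - I**2 / n_ch          # ideal binomial variance (pA^2)
var += rng.normal(0.0, 0.5, var.size)   # add measurement scatter

def eq1(I, i, n):
    return i * I - I**2 / n

(i_v, n_fit), _ = curve_fit(eq1, I, var, p0=(1.0, 100.0))
print(f"mean-variance fit: i_v = {i_v:.2f} pA, n = {n_fit:.0f}")

# --- Spectral estimate, Eq. 3: i_s(f) = (1/I_mean) * integral_0^f S df ---
f = np.logspace(-1, np.log10(2e4), 400)  # 0.1 Hz - 20 kHz
S = 1.6 / f                              # an illustrative 1/f spectrum (pA^2/Hz)
I_mean = 40.0                            # assumed time-averaged mean current (pA)
var_cum = np.concatenate(([0.0], np.cumsum(0.5 * (S[1:] + S[:-1]) * np.diff(f))))
i_s = var_cum / I_mean
print(f"i_s(100 Hz) = {i_s[np.searchsorted(f, 100)]:.2f} pA, "
      f"i_s(20 kHz) = {i_s[-1]:.2f} pA")
p_bar = 0.2                              # assumed mean open probability
print(f"corrected i_s(20 kHz) = {i_s[-1] / (1.0 - p_bar):.2f} pA")
```

With a 1/f spectrum, the cumulative variance grows with every decade of bandwidth, which is exactly why the two i_v estimates above differ.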
Unitary currents roughly 0.5 pA in size should be visible in single-channel recordings. Obtaining single-channel patches was difficult, however, because the hIKs channels appeared to be highly clustered in the oocyte membrane, so that patches typically contained either tens of channels or no channels at all. The distribution of patch current density was very broad, as determined from more than 100 patches (Fig. 3). Fig. 4 A shows one of our best candidates for an hIKs single-channel current. This sweep was recorded from a multiple-channel patch but appears to have only one channel active. As would be expected from the very broad power spectrum of macroscopic current fluctuations, the channel current shows very rapid flickering. From recordings at three voltages, the single-channel conductance is estimated to be 3 pS at 200 Hz bandwidth (Fig. 4 C).

Conductance of ILQT Channels

Injection of the KvLQT1 cRNA alone results in small K+ currents having more rapid kinetics than IKs channels. Might the smaller current result from smaller single-channel currents? In the case of the hIKs channels, mean–variance and spectral fluctuation analysis yielded reasonable estimates of the unitary current at +50 mV, comparable to what was observed in a patch recording. To determine the ILQT unitary current, we used the same fluctuation-analysis methods and similar experimental protocols. The only difference was that in attempting to record these currents, we encountered a very low channel density. Using pipettes with 2–5-μm tip diameters, we saw no current in 12 patches from oocytes having mean whole-cell currents of 4 μA. Therefore, we used much larger pipettes (30-μm tip diameter) to obtain macroscopic channel currents. Fig. 5 A shows one cell-attached giant patch having a mean current of 240 pA at +50 mV. This recording shows the characteristic "hook" of outward tail current that is seen in ILQT channels. The mean–variance relationship, computed from data filtered at 200 Hz, is poorly fitted by the parabolic function of Eq. 1; however, linear fits are consistent with unitary currents of 0.03–0.04 pA, as is shown in Fig. 5 C. Spectral analysis of the fluctuations was also performed. The spectrum shows several discernible components, and can be well fitted by the sum of three Lorentzian functions (Fig. 5 E). The integral of the spectrum, scaled to show the apparent unitary current, shows i_s increasing with bandwidth but possibly reaching a limiting value of ≈0.09 pA at 20 kHz. The unitary current is therefore about one fifth of that of the hIKs channels. We were not able to obtain any convincing single-channel recordings of this current.

[Table I. Reversal potential Vr of IKs channels with various external ions. Reversal potentials (in millivolts) were estimated from inside-out or cell-attached patch clamp recordings or from two-electrode voltage clamp. The pipette solution contained 100 mM of test cation X; the bath solution contained 100 mM K+. For voltage clamp measurements, reversal potentials were obtained using 100 mM of the test cation in the bath solution. The relative permeability of rhIKs to ion X (bottom row) was calculated as P_X/P_K = exp(FΔVr/RT), where ΔVr = Vr(X) − Vr(K), as obtained from patch recordings.]

Single-Channel Properties of rhIKs Channels

We also studied channels formed by coexpression of rat minK with human KvLQT1 subunits. The currents from these channels (Fig. 1) show noise properties and voltage dependence similar to those of channels containing human minK subunits. Fig. 6 shows the fluctuation analysis of these channels. The power spectrum (Fig. 6 B) does not have the simple power-law frequency dependence of the hIKs channels, but can be fitted by one 1/f component plus several Lorentzian components, where a minimum of four Lorentzians was required for a good fit. The presence of discernible Lorentzians suggests that rhIKs channels may have more clearly distinguishable open and closed states. The mean–variance analysis was also applied to the same set of data. The fit of Eq. 1 to the mean–variance plot (Fig. 6 E) yields an estimate of the unitary current i_v of ≈0.28 pA at 100 Hz bandwidth. This is similar to the value obtained from hIKs channels at this bandwidth. The presence of discernible Lorentzian components in the power spectrum suggests that the rhIKs channels should show less flickering than the hIKs channels. Patch recordings indeed showed single-channel events but, as was the case with hIKs channels, in >200 trials we were unable to obtain a one-channel recording of sufficient duration to allow kinetic analysis. Shown in Fig. 7 is a recording from a patch containing three channels, using pulses to +50 mV. Channels open after a latency of a few seconds, often first to a subconductance level before reaching the full single-channel current (Fig. 7 B). To verify that these channel events correspond to the macroscopic currents, we computed the channel open probability from the idealizations of 60 sweeps. It has a slowly activating time course that reaches an open probability of 0.45 at the end of the 5-s depolarization. This time course superimposes well on the time course of current in a multichannel patch (Fig. 7 C).

[Figure 3. Frequency distribution of hIKs patch currents. Current was measured at the end of a 5-s depolarization to +50 mV in each of 128 patches, and histograms were constructed. The inset shows an expanded histogram, where the bin at zero represents the 43 patches that showed no IKs current.]

The time course of activation can be described by the distribution of first latencies to channel opening. Ignoring the subconductance levels, we measured the first latency to the fully open state and corrected it for the presence of three channels (Aldrich et al., 1983). When scaled by the factor 0.8, it matches very well the time course of the open probability.
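The three-channel first-latency correction (Aldrich et al., 1983) used above assumes independent, identical channels; a minimal sketch of the conversion, with hypothetical latency values, is:

```python
import numpy as np

def first_latency_single(F_N, N):
    """Convert the observed first-latency distribution of an N-channel patch
    into the single-channel distribution, assuming independent channels:
    1 - F_N(t) = (1 - F_1(t))**N  =>  F_1 = 1 - (1 - F_N)**(1/N)."""
    return 1.0 - (1.0 - np.asarray(F_N, dtype=float)) ** (1.0 / N)

# Hypothetical cumulative first-latency values from a three-channel patch
t  = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0])   # s
F3 = np.array([0.10, 0.35, 0.70, 0.85, 0.92, 0.95])
F1 = first_latency_single(F3, N=3)
for ti, fi in zip(t, F1):
    print(f"t = {ti:3.1f} s   F1 = {fi:.2f}")
```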
This correspondence is consistent with the idea that, once a channel opens, it remains open with a substantial probability (apparently 0.8 at this time resolution). Slow variations in single-channel activity were observed in this patch, with occasional null sweeps (Fig. 7 B, middle) occurring throughout the recording (Fig. 7 F). A subsequent recording at +20 mV from the same patch (Fig. 8) shows similar kinetic behavior of the single channels. Again, subconductance levels are sometimes seen to precede the full opening of the channel (Fig. 8 A), and the slow time course of activation is explained by long first latencies (Fig. 8 B). At this smaller depolarization, a higher frequency of null sweeps was seen (Fig. 8 C).

External Potassium Dependence of IKs Channels

The conductance and gating of some types of K+ channels depend on external K+ concentration. We tested the external potassium dependence of whole-cell currents using continuous bath perfusion.

[Figure 5, caption fragment: Power spectrum calculated from subtracted currents after background noise subtraction. The solid curve is the sum of three Lorentzians with corner frequencies 5, 141, and 5,000 Hz and amplitudes 3.6·10⁻²⁵, 3.7·10⁻²⁶, and 1.6·10⁻²⁷ A²/Hz, respectively. The corresponding mean current was 240 pA, and the estimated single-channel current i_s = 0.09 pA at 20 kHz. The bottom trace is the spectrum from another recording, displaced downward by one decade for clarity. The mean current in this case was 370 pA and i_s(20 kHz) = 0.08 pA. (F) Unitary current i_s was calculated from the integral of the power spectrum; points beyond 10 kHz were computed from the fitted function. In the patch recording, the bath solution was 93 mM K-aspartate, 7 mM KCl, 1 mM EGTA, 10 mM HEPES, pH 7.4; the pipette solution contained 100 mM NaCl, 1 mM MgCl2, 0.1 mM CaCl2, 5 mM HEPES.]

Fig. 9 shows currents at +30 mV as the bath solution was switched between 0.2 and 10 mM K+. The hIKs and rhIKs channels have opposite responses to the change in external potassium. Switching from 0.2 to 10 mM K+ reduces the hIKs current by 20%, an effect that can be explained by the decrease in driving force; however, there is a 20% current increase under the same conditions with rhIKs channels. That higher external potassium increases outward current was also observed in the IKr current through HERG channels (Sanguinetti et al., 1995). The effect of external K+ was also tested for ILQT channels (Fig. 9 C). In this case, the currents were smaller when external K+ was increased. These experiments show that coexpression with the rat minK gene product changes the sensitivity of the ILQT channels to external potassium. A comparison of the human and rat minK sequences (Murai et al., 1989; Fig. 9 D) shows many differences in the extracellular (NH2-terminal) and intracellular (COOH-terminal) regions, but only one nonconserved residue in the putative transmembrane domain. As a first attempt to locate the region responsible for the differences in K+ sensitivity, we made complementary mutations at this position. The resulting constructs, human (V47I) and rat (I48V) minK, were coexpressed with human KvLQT1, and the external potassium sensitivity was assayed by the ratio of peak current in 10 mM external K+ to that in 0.2 mM K+ (Table II). The mutation in human minK had no significant effect on the K+ sensitivity, but with the rat I48V mutant, increased K+ significantly decreased the current (P < 0.01), which is the opposite of the effect seen with the wild-type rat minK subunit. Thus, this residue in the membrane-spanning region appears to contribute to the external potassium sensitivity.
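A minimal sketch of the ratio assay described above, with hypothetical peak currents and an illustrative two-sample test (the original comparison used the statistics reported with Table II, which may differ):

```python
import numpy as np
from scipy import stats

def k_sensitivity_ratio(I_10K, I_02K):
    """r = I_peak(10 mM K+) / I_peak(0.2 mM K+), computed per oocyte."""
    return np.asarray(I_10K, dtype=float) / np.asarray(I_02K, dtype=float)

# Hypothetical peak currents (uA) for wild-type and mutant constructs
r_wt  = k_sensitivity_ratio([2.4, 2.6, 2.2, 2.5], [2.0, 2.1, 1.9, 2.1])
r_mut = k_sensitivity_ratio([1.5, 1.4, 1.6, 1.5], [2.0, 1.9, 2.1, 2.0])
t, p = stats.ttest_ind(r_wt, r_mut)
print(f"r(wt) = {r_wt.mean():.2f}, r(mut) = {r_mut.mean():.2f}, P = {p:.3g}")
```

A ratio above 1 means increased external K+ increases the outward current; below 1, the reverse.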
[Figure 6. Fluctuation analysis of rhIKs currents. (A) A pair of successive current traces, filtered at 1 kHz. The current was induced by +50-mV depolarizing pulses from a −80-mV holding potential, and repolarized to −60 mV, in a cell-attached patch recording with 140 mM K-aspartate, 1 mM EGTA, 10 mM HEPES, pH 7.4 in the bath; 96 mM NaCl, 2 mM KCl, 1 mM MgCl2, 0.1 mM CaCl2, 5 mM HEPES in the pipette. Pulses were delivered every 33 s; the mean current during the depolarizing pulse was 34 pA. (B) The corrected power spectrum of currents, computed from 36 traces. The solid curve is a fitted power-law function plus four Lorentzian components, of the form S = 1.1·10⁻²⁹ + 6.1·10⁻²⁵/f^1.2 + 6·10⁻²⁶/[1 + (f/11)²] + 2.9·10⁻²⁷/[1 + (f/141)²] + 2.2·10⁻²⁷/[1 + (f/1,195)²] + 8.8·10⁻²⁸/[1 + (f/3,535)²], where f is in Hz and S is in A²/Hz. The unitary current i_s = 0.67 pA at 20 kHz bandwidth. The lower trace is the spectrum from another recording, in which the mean current was 7 pA and i_s = 0.51 pA at 20 kHz. (C) Frequency dependence of the unitary current calculated from the numeric integral of the power spectrum over the frequency range 0.1 Hz–20 kHz. (D) Ensemble mean current and variance, from the same data filtered at 100 Hz. The variance trace was calculated from pairs of sweeps to minimize error due to drift. (E) Mean-variance plot. Superimposed is a parabolic fit, which yields the unitary current estimate i_v = 0.28 pA.]

Because the single-channel rhIKs currents can be resolved in patch-clamp recordings, it should be possible to examine the origin of the increase in outward current in these channels when the extracellular potassium concentration is increased. Fig. 10 A shows three representative single-channel currents from inside-out patches at +50 mV, filtered at 100 Hz. The currents were estimated to be 0.56, 0.44, and 0.37 pA with [K]o equal to 10, 0.2, and 0 mM, respectively. The single-channel current is seen to increase when [K]o increases, even as the driving force decreases. Recordings from six patches show increased conductance over the voltage range of −30 to +80 mV with higher [K]o (Fig. 10 B). The [K]o dependence of conductance appears to saturate above 2 mM, and its magnitude accounts for all of the increase in macroscopic current observed on raising extracellular potassium in the case of the rhIKs channel.

DISCUSSION

This study has considered the single-channel properties of IKs channels obtained from the coexpression of the human or rat minK protein with human KvLQT1, and has compared these properties with the expression of KvLQT1 subunits alone in Xenopus oocytes. We conclude that the IKs channels have a higher single-channel conductance than channels from KvLQT1 alone, and that the sensitivity to external potassium ions is reflected in the size of single-channel currents in the case of the rhIKs hybrid channel. The IKs single-channel currents are roughly 0.6 pA at +50 mV. The relatively low conductance of these slowly activating channels might be important for reducing membrane potential fluctuations in cells where IKs serves to shape long-duration action potentials.
[Figure 7, caption fragment: The open probability at the end of the 5-s depolarization was ≈0.5. The superimposed step-wise curve is the first-latency distribution F1 scaled by the factor 0.8. It was computed according to F1 = 1 − (1 − F3)^(1/3), where F3 is the observed first-latency distribution from 60 sweeps recorded from the three-channel patch. (E) Diary plot of the three-channel first latency F3 in the patch recording.]

[Figure 8, caption fragment: Currents were elicited by depolarizations to +20 mV, 5-s duration, delivered at 8-s intervals from a holding potential of −80 mV. Of a total of 77 sweeps, 37 showed some activity; 11 of those showed a second channel opening during depolarization, and one sweep showed three channels open simultaneously.]

Size of Single-Channel Currents

The rapid flickering of currents in these channels makes the determination of the single open-channel current difficult. When fluctuation analysis is used to estimate the single-channel current, the bandwidth of the recording must be sufficient to capture the fastest fluctuations, or else the variance will be underestimated, providing an underestimate of the single-channel current. The direct observation of single-channel currents suffers from a similar limitation: if a channel's current contains many brief interruptions, a single-channel recording at low bandwidth will show a reduced apparent single-channel current and an increased apparent open probability. The very noisy appearance of the hIKs recording at 500 Hz (Fig. 4 A) suggests that this bandwidth is not sufficient to resolve the true open-channel current. Fluctuation analysis allows a wider range of frequencies to be explored. A two-state channel with opening and closing rate constants α and β yields current fluctuations having a Lorentzian power spectrum with a corner frequency f_c = (α + β)/2π. Above f_c, the Lorentzian decays with frequency as f⁻²; this relatively rapid decay means that the observed variance of the fluctuations converges rapidly to the correct value as the bandwidth is increased above f_c. On the other hand, no convergence results in the case of an f⁻¹ frequency dependence, like that shown in Fig. 2 E for the hIKs channels. Such a frequency dependence results in an observed variance that increases without limit as the bandwidth increases. Because bandwidth is related to the time scale of measurement, one could speak of an effective single-channel current value that depends on the time scale on which it is measured. Channels having stable open and closed states, such that the power spectra of their fluctuations decay rapidly at higher frequencies, show a distinct single-channel current value given sufficient recording bandwidth.
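The contrast between the two spectral shapes can be made concrete numerically. The sketch below integrates a Lorentzian and a 1/f spectrum (normalized to the same total 20-kHz variance) and shows that the Lorentzian variance converges just above its corner frequency while the 1/f variance keeps growing; all amplitudes are illustrative.

```python
import numpy as np

f = np.logspace(-1, np.log10(2e4), 2000)   # 0.1 Hz - 20 kHz

def integral(S, f):
    """Trapezoid integral of a spectrum S over frequencies f."""
    return np.sum(0.5 * (S[1:] + S[:-1]) * np.diff(f))

# Lorentzian spectrum of a two-state channel: corner at fc = (alpha+beta)/(2*pi)
fc = 100.0
S_lor = 1.0 / (1.0 + (f / fc) ** 2)
# 1/f spectrum scaled to the same total (20-kHz) variance, as seen for hIKs
S_1f = (1.0 / f) * integral(S_lor, f) / integral(1.0 / f, f)

for f_cut in (100.0, 1e3, 1e4, 2e4):
    m = f <= f_cut
    frac_lor = integral(S_lor[m], f[m]) / integral(S_lor, f)
    frac_1f = integral(S_1f[m], f[m]) / integral(S_1f, f)
    print(f"bandwidth {f_cut:>7.0f} Hz: Lorentzian {frac_lor:5.1%},"
          f" 1/f {frac_1f:5.1%} of total variance")
```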
Another way to summarize the problem posed by the hIKs channel is that the very rapid current fluctuations make it difficult experimentally to distinguish, on the basis of time scales, between gating or channel-block phenomena on the one hand and the ion conduction process on the other. The apparent single-channel conductance values are influenced by the very rapid interruptions in the channel current. The channels formed by hKvLQT1 subunits expressed alone or in combination with rat minK subunits showed less extreme behavior. Although the spectra of the current fluctuations are also very broad, they are not as featureless as those of hIKs currents and can be fitted by multiple Lorentzian components. Limiting single-channel current values at +50 mV of 0.09 and 0.84 pA are obtained from fluctuation analysis. These correspond to conductances of ≈0.7 and 6.5 pS. The rhIKs channels resulting from coexpression were also observed directly in single-channel recordings at 100 Hz bandwidth. There they appeared to have single-channel currents at +50 mV of ≈0.5 pA, depending on extracellular K+ concentration (Fig. 10).

[Table II. The parameter r = I_peak(10 K)/I_peak(0.2 K) is the ratio of currents measured at the end of 5-s depolarizations to +30 mV with either 0.2- or 10-mM external potassium solutions bathing the oocyte. Oocytes were injected with the given constructs, and currents were measured using the two-electrode voltage clamp.]

In native tissues, the cardiac IKs current has been seen to have small fluctuations. Walsh et al. (1991) estimated unitary conductances of <1 pS in guinea pig myocytes. Taking into account their recording bandwidth of 200 Hz, we obtain a similar value. At 200 Hz, we would estimate a conductance of ≈2 pS, as calculated from the estimated single-channel current at +50 mV of ≈0.2 pA in both the hIKs and rhIKs channels (Figs. 2 F and 6 C). It should be kept in mind that fluctuation analysis depends on several assumptions about the behavior of channels. We assume homogeneous populations of independently gating channels, and have also used the assumption that there is only one nonzero conductance level. It is likely that one or more of these assumptions is false. Evidence has been presented by Pusch et al. (1998) that KvLQT1 channels have two open states, and we see clear subconductance levels in single-channel recordings of the rhIKs channels (Figs. 7 B and 8 A). If there are multiple conductance levels, the estimated single-channel current will lie between the largest and smallest single-channel current, and will depend on the probabilities of occupancy of the various conductance states. It should be kept in mind, however, that the high-conductance states tend to dominate the estimated conductance, because the contribution of a state's current i to the variance is proportional to i². Thus, our single-channel conductance estimates are likely to approximate the values for the largest conducting states. A similar argument can be made concerning the possible heterogeneity of channel types. When minK and KvLQT1 cRNAs are coinjected, it is possible that hybrid channels are expressed having various stoichiometries, and the fluctuation analysis will give a weighted-average value. Again, it should be kept in mind that larger channel currents make larger contributions to the variance, and therefore predominate in the weighted average. Thus, if our coinjections produced a variety of channel types, the estimated conductance probably reflects the largest conductance value.
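The i² weighting can be illustrated with a short calculation; the level amplitudes and occupancies below are hypothetical:

```python
import numpy as np

def apparent_unitary_current(i_levels, occupancy):
    """For a mixture of conductance levels (or channel types) at low open
    probability, variance/mean analysis returns approximately
    i_app = sum(p * i**2) / sum(p * i): large-current states dominate
    because each state's contribution to the variance scales as i**2."""
    i = np.asarray(i_levels, dtype=float)
    p = np.asarray(occupancy, dtype=float)
    return (p * i**2).sum() / (p * i).sum()

# Hypothetical: a 0.6-pA full level and an equally occupied 0.15-pA sublevel
print(f"i_app = {apparent_unitary_current([0.6, 0.15], [0.5, 0.5]):.2f} pA")
```

Even with half the openings at the sublevel, the apparent unitary current stays close to the large level, as argued above.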
Further, the good correspondence between the fluctuation analysis of rhIKs currents and direct single-channel recordings argues that heterogeneity in channel conductances is not a serious problem.

How Coexpression of minK Affects KvLQT1 Current

Expression of KvLQT1 subunits produces small, rapidly activating potassium currents; coexpression of these with minK results in slowly activating IKs currents that are several-fold larger. These differences in the expressed currents have been seen in a variety of expression systems, including Xenopus oocytes, Sf9 cells, and the mammalian cell lines CHO and COS (Barhanin et al., 1996; Sanguinetti et al., 1996; Romey et al., 1997). Is the increase in KvLQT1 current on coexpression with minK due to an increase in channel density or an increase in the single-channel current? Romey et al. (1997) addressed this question through single-channel recordings and noise analysis of expressed currents in COS cells. They concluded that the addition of minK subunits to KvLQT1 channels caused a reduction of single-channel conductance from 7.6 to 0.6 pS. To account for the increase in macroscopic current, they conclude that coexpression with minK causes the channel density to increase by a large factor, some 60-fold. Our studies of these channels in Xenopus oocytes lead to the opposite conclusion: that a large part of the observed current increase on coexpression of minK arises from an increase in single-channel conductance.

[Figure 10. Single-channel conductance of rhIKs channels with different external K+ solutions. (A) Representative current traces induced by +50-mV depolarizing pulses from a −80-mV holding potential. Data were filtered at 100 Hz. (B) Single-channel current as a function of voltage obtained from double-Gaussian fits to amplitude histograms of traces. Fitted lines have slopes of 1.9, 3.2, and 4.8 pS for 0, 0.2, and 10 mM external K+, respectively. (C) External potassium dependence of single-channel conductance. Error bars represent SEM (n = 2 for 0 mM and n = 4 for 10 mM; the points at 0.2 and 2 mM K+ are single observations). The superimposed fit is the function γ = 1.8 + 2.8/(1 + 0.5 mM/[K]) pS.]

From fluctuation analysis, we estimate a single-channel conductance of ≈0.7 pS for KvLQT1 channels. We estimate the conductance of human IKs channels to be ≈4.5 pS. The discrepancy between our results and those of Romey et al. (1997) might be explained by the difference between COS cell and oocyte expression systems. This, however, is unlikely because the behavior of the channels is similar in the various systems; further, Romey et al. (1997) report the same single-channel conductance value for IKs channels expressed in Xenopus oocytes as in COS cells. Our results disagree with this previous work in two respects. First, we obtain a larger single-channel conductance for the human IKs channels than reported by Romey et al. (1997). This can be explained largely by the frequency dependence of fluctuations in this channel. Our value of 4.5 pS is based on fluctuation analysis at 20 kHz and on single-channel recordings at 500 Hz bandwidth; their value of 0.6 pS was based on fluctuation analysis at a bandwidth of 150 Hz under similar ionic conditions. Our conductance estimate of 6.5 pS for the closely related rhIKs channel (Figs. 6–10), for which openings are more readily resolved, supports the higher conductance estimate.
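A minimal sketch of fitting the saturating [K]o dependence shown in Fig. 10 C; the three data points are the slope conductances quoted in the caption (with 0 mM replaced by a small value so the expression stays defined), so the result is only indicative:

```python
import numpy as np
from scipy.optimize import curve_fit

def gamma_of_K(K_mM, g0, gmax, K_half):
    """Saturating [K]o dependence of chord conductance (pS),
    gamma = g0 + gmax / (1 + K_half / K), the form fitted in Fig. 10 C."""
    return g0 + gmax / (1.0 + K_half / K_mM)

# Slope conductances (pS) from the Fig. 10 B caption; 0 mM -> 1e-3 mM.
K = np.array([1e-3, 0.2, 10.0])   # mM
g = np.array([1.9, 3.2, 4.8])     # pS
(g0, gmax, K_half), _ = curve_fit(gamma_of_K, K, g, p0=(1.8, 2.8, 0.5))
print(f"g0 = {g0:.1f} pS, gmax = {gmax:.1f} pS, K_half = {K_half:.2f} mM")
print(f"predicted gamma at 2 mM K+: {gamma_of_K(2.0, g0, gmax, K_half):.1f} pS")
```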
The other disagreement concerns the conductance of channels arising from the expression of KvLQT1 subunits alone. Romey et al. (1997) found well-resolved single-channel events in COS cells having a conductance of 7.6 pS. In our macropatch recordings, we find a remarkably noiseless current (Fig. 5). The power spectrum from the macropatch recording shows a broad frequency dependence, with a limiting conductance value of ≈0.7 pS apparently being reached at 20 kHz bandwidth. There is always the danger that the currents in patch-clamp recordings are not properly identified, such that unitary events from one channel type are ascribed to another. Although we have not performed a pharmacological identification of our currents, we note that the kinetics of activation and the tail currents in our macropatch recordings agree very well with the currents observed from KvLQT1 channels in whole oocytes and in other expression systems (Fig. 1 C; Barhanin et al., 1996; Sanguinetti et al., 1996; Romey et al., 1997), supporting the view that it is these channels whose fluctuations we have measured. Our results agree well with those of Sesti and Goldstein (1998), who studied channels expressed from KvLQT1 subunits alone and with human minK. They used symmetrical 100 mM potassium solutions and thereby obtained higher conductance values (4 and 16 pS) compared with ours (0.7 and 4.5 pS). Under the different ionic conditions, the single-channel outward currents are expected to be similar at large depolarizations. At +50 mV and 20 kHz bandwidth, our estimate for the single-channel current of hIKs channels is 0.6 ± 0.2 pA; here the error bounds reflect an estimate of statistical and systematic errors in the fluctuation analysis used. The corresponding estimate at 25 kHz bandwidth given by Sesti and Goldstein (1998) is 0.8 ± 0.2 pA.

Kinetics of IKs Channels

In addition to rapid flickering, the currents through single IKs channels show slow gating processes. At depolarizations to +20 and +50 mV (Figs. 7 and 8), the main determinant of the activation time course is seen to be the latency to first channel opening. In the rhIKs channels, where single-channel events could be readily resolved, dwells in a subconductance state were often seen to precede full channel opening. The rhIKs channel activity also waxes and wanes on a time scale of ≈30 s, as seen in groups of successive blank sweeps in patch recordings (Figs. 7 and 8).

The Conductance of minK "Channels"

The discovery that KvLQT1 subunits coassemble with the minK gene product to produce the IKs current (Barhanin et al., 1996; Sanguinetti et al., 1996) has clarified some of the puzzling aspects of the "IminK current" that is seen in Xenopus oocytes when minK is expressed alone (Busch and Suessbrich, 1997). It is now clear that this current results from the combination of minK with an endogenous Xenopus KvLQT1 homologue that is expressed at low levels (Sanguinetti et al., 1996). The difficulty that we and others have had in attempting to define the single-channel characteristics of IminK is now understandable in view of the difficulties we have encountered in recording from single IKs channels. In a preliminary communication (Yang and Sigworth, 1995), we reported fluctuation analysis of a slowly activating current seen in patches from Xenopus oocytes, but many subsequent attempts to establish this current as the same as the macroscopic, potassium-selective IminK were unsuccessful.
It is possible that our patch currents, which from fluctuation analysis had a unitary current value below 1 fA, were contaminated with currents from an endogenous channel or ion transporter having slow kinetics, similar perhaps to the transporter studied by Schlief and Heinemann (1995). We thank W.N. Joiner for the Bluescript-KSM vector and Y. Yan for cRNA preparation and oocyte injection. We also thank Dr. S.A.N. Goldstein for minK cDNA, and Drs. M. Sanguinetti and M. Keating for the human KvLQT1 cDNA. Original version received 30 July 1998 and accepted version received 21 October 1998.
Effect of Different Cooking Methods on Histamine Levels in Selected Foods

Background

Histamine in food is known to cause food poisoning and allergic reactions. We usually ingest histamine in cooked food, but there are few studies about the influence of cooking method on the histamine level.

Objective

The purpose of this study was to determine the influence of cooking methods on the concentration of histamine in foods.

Methods

The foods chosen were those consumed frequently and cooked by grilling, boiling, and frying. The histamine level of each food was measured using enzyme-linked immunosorbent assay.

Results

Grilled seafood had higher histamine levels than raw or boiled seafood. For meat, grilling increased the histamine level, whereas boiling decreased it. For eggs, there was not much difference in histamine level according to cooking method. Fried vegetables had higher histamine levels than raw vegetables. Fermented foods did not show much difference in histamine level after being boiled.

Conclusion

The histamine level in food changed according to the cooking method used to prepare it. Frying and grilling increased the histamine level in foods, whereas boiling had little influence or even decreased it. Compared with frying and grilling, boiling might be helpful for controlling the effect of histamine in histamine-sensitive or susceptible patients.

INTRODUCTION

Food-derived histamine is associated with non-allergic food intolerance and food poisoning, whereas endogenous histamine stored in mast cells is responsible for food allergy reactions 1. At high concentrations, histamine is a risk factor for food intoxication, whereas moderate levels may lead to food intolerance 2. Histamine intolerance and poisoning result from a disequilibrium between accumulated histamine and the capacity for histamine degradation by the enzyme diamine oxidase (DAO) 3-5. Ingested histamine can be metabolized by DAO, which is mainly present in intestinal epithelial cells 5,6. If we ingest excessive histamine, undegraded histamine can be absorbed into the body. Impaired histamine degradation based on reduced DAO activity, and the resulting excess of histamine, may cause numerous symptoms mimicking an allergic reaction 1. There are concrete clinical examples of reduced DAO activity, including patients with atopic dermatitis 7, pregnant women 8, patients with other chronic diseases (e.g., liver cirrhosis 9, anorexia nervosa 10, and the inflammatory bowel diseases ulcerative colitis and Crohn's disease 11), and those using anticancer drugs 12. Although the abundance of histamine is not the only causative factor in food allergy, there is evidence that foods containing higher levels of histamine can induce more adverse reactions in susceptible patients than foods containing lower levels 5,7,8,13. Various approaches, such as modified atmosphere packaging, irradiation, high hydrostatic pressure, and food additives and preservatives, have been applied to control the accumulation of histamine in food products 14. These methods of controlling histamine content rely mainly on inhibiting the growth of histamine-producing bacteria and histidine decarboxylase activity 15. There are many reports regarding the histamine content of raw food 2,16-18. High histamine levels are found in foods such as tuna, mackerel, anchovy, spinach, wine, cheese, sausage, and fermented foods 2,18.
However, there are few reports regarding the histamine content in cooked food and the influence of cooking methods. Because we eat cooked food more often than raw, we need to know how cooking methods influence the histamine level. To determine the influence of cooking method on the concentration of histamine, we started with a list of foods rich in histamine and then compared the histamine levels of these foods when raw and after household processing (e.g., frying, grilling, and boiling).

Foods

Because there are numerous food items, the foods most commonly eaten were given priority. Twenty-seven foods often consumed by Koreans were selected for this study. We received help in determining representative food items through consultation with the Department of Nutrition at Hallym University Kangnam Sacred Heart Hospital, Hallym University College of Medicine. These were categorized as 1) fishery products and processed marine products, 2) meat, processed meat, and eggs, 3) vegetables, and 4) fermented pastes and dairy products (milk and cheese). The detailed food list is presented in Table 1.

Cooking methods

We gave priority to the cooking methods most commonly used by Koreans, including boiling, grilling, and frying. When selecting representative cooking methods, we also received help and consultation from the same nutrition department. The foods were cooked by boiling, grilling, and frying. Because the histamine concentration is also dependent on the freshness of food, the foods were cooked immediately after purchase. Before cooking, all the raw foods were prepared by cutting into 1-g portions. The cooking temperature was monitored continuously using thermocouples. Boiling was conducted at 90°C for 5–10 minutes in a water bath (500 ml). Distilled water was also at 90°C when boiled in aluminum foil cups. Grilling was performed at 150°C in a preheated pan for 1–5 minutes without oil. Fishery products and processed marine products, eggs, meat, and processed meat were grilled or boiled until well done. For frying, fresh soybean oil (10 ml) was used. Dried anchovy, eggs, onions, carrots, and laver seaweed were fried at 150°C for 1–5 minutes in a preheated pan. Spinach was blanched for 30 seconds at 90°C in aluminum foil cups. The laboratory room temperature was controlled at 25°C.

Statistical analysis

All experimental values are expressed as means ± standard deviations of three replicates. The statistical significance of any differences between data was assessed by analysis of variance (ANOVA), followed by Dunnett's test or Tukey's test. p-values <0.05 were considered statistically significant. All statistical analyses were conducted using PASW Statistics ver. 18.0 (IBM Co., Armonk, NY, USA).

Fishery products and processed marine products

Foods in the fishery products and processed marine products group were cooked according to the representative cooking methods: frying, grilling, or boiling. Then, we measured the histamine levels in the cooked foods. Fig. 1 shows the histamine levels of the cooked fishery products and processed marine products. Fried dried anchovy (13,347 ± 10,738.81 ×10⁻³ ppm) showed the highest histamine level in this group, followed by grilled dried anchovy (2,669 ± 1,538.20 ×10⁻³ ppm). In the case of dried anchovy, the histamine level increased after frying (13,347 ± 10,738.81 ×10⁻³ ppm), grilling (2,669 ± 1,538.20 ×10⁻³ ppm), and boiling (105 ± 41.94 ×10⁻³ ppm).
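As a side note, the ANOVA-plus-post-hoc workflow described under Statistical analysis can be sketched in a few lines of Python; the triplicate values below are hypothetical, and SciPy's tukey_hsd is used here in place of PASW:

```python
import numpy as np
from scipy import stats

# Hypothetical triplicate histamine levels (x10^-3 ppm) for one food item
raw     = np.array([105.0, 98.0, 112.0])
boiled  = np.array([120.0, 131.0, 117.0])
grilled = np.array([2650.0, 2710.0, 2647.0])

F, p = stats.f_oneway(raw, boiled, grilled)
print(f"one-way ANOVA: F = {F:.1f}, p = {p:.3g}")

# Tukey's HSD pairwise comparison (requires a recent SciPy version)
res = stats.tukey_hsd(raw, boiled, grilled)
print(res)
```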
When fried, dried anchovy showed a >200-fold increase in histamine level compared with the uncooked food, and when grilled, it showed about a 45-fold increase. When boiled, the histamine level increased only 2-fold.

[Fig. 2. Histamine level in eggs, milk, meat, and processed meat. Data are presented as the mean ± standard deviation (n=3). *p<0.05 and **p<0.01 compared to uncooked food. The statistical significance of any difference between data was assessed by analysis of variance (ANOVA), followed by Dunnett's or Tukey's test. p-values <0.05 were considered to be statistically significant.]

In the case of tuna, the histamine level was increased about 5-fold by grilling (717 ± 161.52 ×10⁻³ ppm), but decreased slightly after boiling (78 ± 63.55 ×10⁻³ ppm). For shrimp, saury, hairtail, and mackerel, the histamine level increased slightly, or did not change, with boiling. However, grilling caused it to increase remarkably (8–56-fold). The histamine level of Spanish mackerel was increased about 2-fold by grilling, while there was no distinctive change in histamine level from boiling. The method of cooking caused no distinctive changes in the histamine level of squid. Taken together, grilling increased the histamine level in most of the fishery products and processed marine products group more than boiling did. Regarding frying, only dried anchovy in the category of fishery/processed marine products was fried in our study.

Meat, processed meat, and eggs

Foods in the meat, processed meat, and egg group were cooked by frying, grilling, or boiling; we then measured the histamine levels in the cooked foods (Fig. 2). Grilled pork showed the highest histamine level in this group (1,146 ± 1,016.90 ×10⁻³ ppm). For pork and chicken, the histamine level was increased about 1.5-fold by grilling, but was decreased 10%–20% by boiling. The histamine level of beef changed less than that of other cooked meats: grilling caused a 1.8-fold increase, while there was no distinctive change due to boiling. In the case of sausage, the histamine level was slightly increased (1.03-fold) by grilling (502 ± 77.38 ×10⁻³ ppm), but decreased 60% after boiling (193 ± 20.42 ×10⁻³ ppm). The histamine level of ham was increased about 1.4-fold by grilling (283 ± 142.90 ×10⁻³ ppm), but decreased 60% after boiling (79 ± 19.52 ×10⁻³ ppm). These results showed that grilling increased the histamine level of most meat and processed meat, while boiling decreased their histamine levels. For eggs, there was not much difference in histamine level in relation to cooking by boiling (12 ± 1.73 ×10⁻³ ppm) or frying (11 ± 1 ×10⁻³ ppm).

Vegetables

Foods in the vegetable group were cooked by frying or blanching. Fig. 3 shows the histamine levels of cooked vegetables. For onions and spinach, no distinct difference in histamine level resulted from these cooking methods. In the case of carrots, the histamine level was increased 2.5-fold by frying (31 ± 6 ×10⁻³ ppm). The histamine level of laver seaweed was increased about 4-fold by frying (168 ± 39.69 ×10⁻³ ppm). Frying increased the histamine level in carrots and laver seaweed. The histamine level of dairy cheese (418 ± 85 ×10⁻³ ppm) was significantly (20-fold) greater than that of fresh milk (12 ± 7 ×10⁻³ ppm) (Fig. 4).
[Fig. 4. Histamine level in fermented pastes and dairy products. Data are presented as the mean ± standard deviation (n=3). *p<0.05, **p<0.01, and ***p<0.001 compared to uncooked food. This means that uncooked cabbage Kimchi, uncooked radish Kimchi, and uncooked cheese were compared to fresh cabbage, radish, and milk, respectively. The statistical significance of any difference between data was assessed by analysis of variance (ANOVA), followed by Dunnett's or Tukey's test. p-values <0.05 were considered to be statistically significant.]

[Fig. 3. Histamine level in vegetables. Data are presented as the mean ± standard deviation (n=3). *p<0.05 and **p<0.01 compared to uncooked food. The statistical significance of any difference between data was assessed by analysis of variance (ANOVA), followed by Dunnett's or Tukey's test. p-values <0.05 were considered to be statistically significant.]

DISCUSSION

Biogenic amines are organic, basic nitrogenous compounds of low molecular weight, usually formed by decarboxylation of free amino acids. In addition to their well-known occurrence and important role as endogenous regulators of several human physiological processes, biogenic amines occur in many different foods and beverages. Their concentrations vary extensively, not only between different food varieties but also within the varieties themselves 19. It has been known for some time that the uptake of biogenic amines from foods can have profound effects on human health and well-being 20. The most frequent food-borne intoxications and intolerances caused by biogenic amines involve histamine. Whereas high histamine consumption causes life-threatening intoxication, lower amounts can lead to headache, nausea, hot flushes, skin rashes, sweating, respiratory distress, and cardiac and intestinal problems in histamine-sensitive people 3,4. Most foods are usually eaten after cooking in various ways, and we eat cooked food more often than raw. However, data on the effect of cooking on the histamine content of foods are still incomplete 21-23. In the present study, we evaluated the effect of cooking practices on the concentration of histamine in foods by comparing the histamine levels between raw and cooked foods. The histamine level of most fishery products and processed marine products, except for squid, was increased by grilling, and most of them increased greatly. Regarding frying, only dried anchovy in the category of fishery/processed marine products was fried in our study. Therefore, further study will be needed to determine the effect of frying on the level of histamine in other kinds of food in this category. However, in this group, boiling had little influence on the histamine level, although it did increase it very slightly. The histamine level of meat and processed meat was also increased by grilling, but not as much as that of fishery products and processed marine products. Meanwhile, boiling of most meats decreased the histamine level. For eggs, no significant changes in histamine level were observed in relation to frying and boiling. Fried vegetables had higher histamine levels than raw vegetables. As expected, the fermented foods showed generally high histamine levels. However, the fermented pastes showed no changes in histamine level after boiling. Prerequisites for the formation of histamine are the availability of free amino acids such as histidine, the presence of decarboxylase-active microorganisms, and favorable conditions for decarboxylation of amino acids 2.
Scombroid species of fish have naturally high levels of histidine in their muscle tissue, which can be used by microorganisms capable of producing the enzyme histidine decarboxylase to convert histidine to histamine during growth 24,25. Evidently, foods rich in free histidine, such as some fish species (anchovies, scombroid fish, and herring), are potentially more likely to contain high histamine levels 26. In our study, foods in the fishery group showed high histamine levels in their raw forms. A high level and availability of free histidine in a food is thought to result in a high histamine level. In our study, as expected, fermented foods, including Kimchi, soybean paste, and red pepper paste, showed high histamine levels. Kimchi, soybean paste, and red pepper paste are traditional foods that Koreans enjoy eating, and there are some studies on the histamine levels of these fermented foods 27,28. Kimchi is made by fermenting vegetables, such as salted cabbage or radish, with a number of other ingredients, including red pepper powder, garlic, and ginger. It is fermented by lactic acid bacteria at low temperatures, ensuring proper ripening and preservation 29. Soybean paste is generally made by additional fermentation of the solid material that separates from a mixture of Meju (fermented soybean lumps) and Ganjang (fermented soy sauce). The latter is prepared by soaking Meju in a solar salt solution (approximately 16%–18% [w/v] salt) for approximately 1–2 months 30,31. Red pepper paste is produced by fermenting powdered red pepper combined with powdered Meju (fermented soybean powder), salt, malt-digested rice syrup, and rice flour for about six months 32,33. The fermentation process extends the storage period while increasing the bioavailability of bioactive ingredients such as free amino acids, peptides, alcohols, organic acids, capsaicin, and flavonoids 33. In fermented foods, the contaminating microflora is mainly responsible for the generation of increasing histamine levels 34-38. Besides that, salt, sugar, red pepper, or food additives can affect the level of histamine 39,40. Cooking causes inactivation of histamine-producing spoilage bacteria. However, histamine is heat resistant, so it can remain intact in cooked products 41. Therefore, if histamine is produced in a product before cooking, it can cause illness if it is present at toxic concentrations. The change in histamine caused by a heating process, such as frying, has rarely been reported 38. In our study, heating processes, such as grilling and frying, increased the histamine levels in foods. A possible reason for these changes is that the moisture lost by evaporation during grilling or frying could cause the histamine concentration to increase. This would also explain how boiling decreased the histamine level in some foods. A previous study found that food absorbed water while boiling, so the histamine concentration was decreased by dilution 36. The cellular components of foods might be softened and broken by boiling and consequently released into the boiling water. However, further studies will be needed to determine the losses from foods due to cooking (e.g., moisture loss) to confirm the precise mechanism of the effect of cooking method on the histamine level in foods. Another possible reason for the differences is that histamine formation is affected by histidine decarboxylase activity.
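The moisture-based explanation above amounts to a simple mass balance: if the histamine itself is conserved during cooking, its concentration scales inversely with the food's mass. A minimal sketch with hypothetical masses:

```python
def concentration_after_cooking(c_raw_ppm, mass_raw_g, mass_cooked_g):
    """If histamine is heat stable and conserved, a change in water content
    alone rescales its concentration: c_cooked = c_raw * m_raw / m_cooked."""
    return c_raw_ppm * mass_raw_g / mass_cooked_g

# Hypothetical 1-g portions: grilling evaporates water, boiling absorbs it.
print(concentration_after_cooking(0.10, 1.0, 0.6))   # grilled: ~0.17 ppm
print(concentration_after_cooking(0.10, 1.0, 1.3))   # boiled:  ~0.08 ppm
```

On this assumption alone, the fold changes are bounded by the water gained or lost, so the much larger increases seen with grilling and frying point to additional mechanisms such as continued enzyme activity.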
Histidine decarboxylase is the enzyme that converts the histidine in food into histamine. The formation of histamine in food requires the presence of histidine decarboxylase-positive microorganisms, in conjunction with conditions allowing the growth and enzyme activity of these bacteria 42. Several factors (e.g., pH, temperature, and NaCl concentration) are known to affect histidine decarboxylase activity 14,43,44. This enzyme activity increases with increasing temperature up to 30°C–40°C and decreases above 50°C. Although histidine decarboxylase activity decreases at high temperature, histamine production continues until the enzyme becomes inactive 44. Histamine, once formed in food, is heat stable even if heating inactivates both the enzyme and the source microorganisms 21,34,35. Consequently, during heat treatment the histamine in food would accumulate continuously until the enzyme was inactivated. This may explain why the histamine level was increased by heat treatment of most seafood and meat in our study. Boiling is also a heat treatment that elevated the histamine level, but the effect could be reduced by dilution, as mentioned. However, further studies monitoring the levels of histamine-producing bacteria and histidine decarboxylase activity will also be needed to confirm this mechanism for the change of histamine levels in cooked foods. For eggs, no significant changes in histamine level were observed in relation to cooking methods. The change in the water content of eggs has been reported to be minimal after boiling and frying, which may account for the small change in the histamine level after cooking. In addition, because the histamine level of eggs was relatively low (10 ± 2 ×10⁻³ ppm), there is a possibility of detection failure, because the changes after different cooking methods could be too small to be measured 45. When interpreting the results for frying, the influence of soybean oil should be considered. However, because soybean oil has a relatively low histamine level (14 ×10⁻³ ppm), it is thought to have little effect on the level of histamine in fried foods. Numerous factors during manufacture and distribution can also affect the histamine content of food. A previous study demonstrated that large variations in the amine content were found in retail Belgian sausages, and that these were related to the method of manufacture 46. Furthermore, wide variations were observed in the amine content of different batches of the same commercial brand of fermented Spanish products 47. Previous studies reported that fish could be contaminated with histamine-producing microorganisms during postharvest handling 24. Fresh scombrotoxin-forming fish contain negligible amounts of histamine (<1 ppm), but high levels of histamine occur when harvested fish are held at temperatures above 15°C for several hours, permitting spoilage microorganisms to grow 24,48,49. Several studies clearly show that immediate storage on ice drastically decreases the rate of histamine formation 50-52. Also, high levels of histamine in commercially produced canned-fish products have occurred, primarily due to temperature abuse before canning 53. Therefore, the histamine content of foodstuffs can be affected by numerous extrinsic factors during manufacture and distribution.
Confirming the variability of histamine levels with respect to manufacture and distribution will require examining the histamine content in more food items, in similar foods from more manufacturers, and in similar foods distributed in other ways. The cooking time is another characteristic of a cooking method, and its effect on the histamine level should be considered. In our study, however, there was some variation in cooking time within the same cooking method. Fishery products and processed marine products, eggs, meat, and processed meat were grilled or boiled until well done. It was considered meaningful to measure the histamine levels of foods in an actual ready-to-eat condition; because the time needed to cook each food to reach that condition differs, we could not make the cooking time uniform. In our study, we determined the histamine levels in raw and cooked foods. Because the levels detected in raw and cooked food in this study were significantly lower than the toxic level set by the Food and Drug Administration (50 ppm), these levels, or the increases caused by cooking, could not induce food poisoning or intolerance. However, the susceptibility to histamine varies according to the enteral environment and DAO activity of each individual 5. Sensitive persons with insufficient DAO activity could suffer numerous undesirable reactions after intake of foods containing even low amounts of histamine. We speculate that the use of fresh foods in good pre-cooking condition contributed to the low levels of histamine in most foods in our study. There are several limitations of this work. The priority of the food items and cooking methods used was based on their frequency of use by Koreans; therefore, the outcome does not necessarily reflect the dietary characteristics of other countries. Also, foodstuffs can be affected by numerous extrinsic factors during manufacture and distribution, so variability might exist among the same products. In addition, only a few food items in each food category were used, so it might be hard to generalize the results. It is necessary to confirm the differences in histamine levels for more cooking methods and in more foods from more manufacturers. To the best of our knowledge, there are no previous studies directly comparing cooking methods, so it is meaningful to compile lists of the histamine levels of various cooked foods. We also tried to find some tendencies among the cooking methods regarding the level of histamine. In our study, grilled and fermented foods showed increased histamine levels; however, due to the wide variation of basal histamine levels, the differences between individual food items did not reach statistical significance. Also, the tendencies differed somewhat among the categories of food items, so it was hard to perform a direct comparison between the cooking methods. To conclude, this study showed that the histamine level in foods can change according to the cooking method used to prepare them. In our study, frying and grilling appeared to increase the histamine level in foods, whereas boiling had little influence or even decreased it. Considering our results, boiling was found to be a more effective method than frying, grilling, or fermenting for reducing the histamine content.
This study could be beneficial for the selection of cooking practices by histamine-sensitive people with food intolerance, by providing data for reducing the histamine level in their diets.
CrBPF1 overexpression alters transcript levels of terpenoid indole alkaloid biosynthetic and regulatory genes

Terpenoid indole alkaloid (TIA) biosynthesis in Catharanthus roseus is a complex and highly regulated process. Understanding the biochemistry and regulation of the TIA pathway is of particular interest as it may allow the engineering of plants to accumulate higher levels of pharmaceutically important alkaloids. Toward this end, we generated a transgenic C. roseus hairy root line that overexpresses the CrBPF1 transcriptional activator under the control of a β-estradiol inducible promoter. CrBPF1 is a MYB-like protein that was previously postulated to help regulate the expression of the TIA biosynthetic gene STR. However, the role of CrBPF1 in regulation of the TIA and related pathways had not been previously characterized. In this study, transcriptional profiling revealed that overexpression of CrBPF1 results in increased transcript levels for genes from both the indole and terpenoid biosynthetic pathways that provide precursors for TIA biosynthesis, as well as for genes in the TIA biosynthetic pathway. In addition, overexpression of CrBPF1 causes increases in the transcript levels for 11 out of 13 genes postulated to act as transcriptional regulators of genes from the TIA and TIA feeder pathways. Interestingly, overexpression of CrBPF1 causes increased transcript levels for both TIA transcriptional activators and repressors. Despite the fact that CrBPF1 overexpression affects transcript levels of a large percentage of TIA biosynthetic and regulatory genes, CrBPF1 overexpression has only very modest effects on the levels of the TIA metabolites analyzed. This finding may be due, at least in part, to the up-regulation of both transcriptional activators and repressors in response to CrBPF1 overexpression, suggesting that CrBPF1 may serve as a "fine-tune" regulator for TIA biosynthesis, acting to help regulate the timing and amplitude of TIA gene expression.
Introduction

Madagascar periwinkle Catharanthus roseus (L.) G. Don is of substantial pharmaceutical interest as it produces over 130 terpenoid indole alkaloids (TIAs). Several of these TIAs are used to treat different medical conditions. For example, vinblastine and vincristine are widely used as anticancer agents in the treatment of lymphoma and leukemia (Gidding et al., 1999), and ajmalicine and serpentine may be used to treat hypertension. Unfortunately, these plant-derived pharmaceutical compounds are very expensive as periwinkle produces TIAs in very low amounts. In addition, due to their complex chemical structures, TIAs tend to be difficult, and therefore expensive, to synthesize in vitro. To lower the costs of producing TIAs for use as pharmaceuticals, many efforts have been made to increase TIA production using plant tissue and cell cultures or bacterial cultures. However, despite efforts since the early 1980s to develop cost-effective methods for large-scale production of TIAs in cultures or in vitro, progress has been limited (van der Heijden et al., 2004; Shanks, 2005). A major reason why these efforts have not been more successful is that the TIA biosynthetic pathway and the regulation of the TIA pathway and TIA feeder pathways are not sufficiently well understood. Those deficiencies are beginning to be addressed, thanks to progress in identifying most of the genes encoding TIA biosynthetic enzymes and transcriptional regulators (Liu et al., 2007; Memelink and Gantet, 2007; Zhou et al., 2010a; Giddings et al., 2011; Asada et al., 2013; Besseau et al., 2013; Salim et al., 2013; Zhao et al., 2013; De Luca et al., 2014; Kellner et al., 2015; Qu et al., 2015; Van Moerkercke et al., 2015).

TIA biosynthesis is a closely coordinated process involving many enzymatic steps that occur in several intra- and inter-cellular compartments (Burlat et al., 2004; van der Heijden et al., 2004; Murata et al., 2008; De Luca et al., 2014; Qu et al., 2015). TIA biosynthesis in C. roseus starts with the formation of strictosidine from tryptamine and secologanin, the two precursor molecules that are produced by the indole and the monoterpenoid pathways, respectively (Figure 1). The condensation is catalyzed by strictosidine synthase (STR). Strictosidine is then deglucosylated by strictosidine β-D-glucosidase (SGD) to form strictosidine aglycone. Further enzymatic steps result in the formation of numerous TIAs via several specific branches. For example, one branch of the TIA pathway produces ajmalicine and serpentine, a second branch leads to production of lochnericine and hörhammericine, a third branch produces vindoline and a fourth branch produces catharanthine. The production of vindoline from tabersonine occurs via seven reactions. Genes encoding enzymes catalyzing all seven steps have now been identified (Vazquez-Flota et al., 1997; St-Pierre et al., 1998; Schröder et al., 1999; Levac et al., 2008; Liscombe et al., 2010; Qu et al., 2015).
Vindoline and catharanthine are the substrates for a major class III peroxidase (PRX1), which catalyzes formation of α-3′,4′-anhydrovinblastine (Costa et al., 2008). Vinblastine and vincristine, two of the most pharmaceutically important TIAs, are formed through multiple steps from α-3′,4′-anhydrovinblastine.

Transcriptional activators and repressors play important roles in regulating TIA biosynthesis. Expression of both Octadecanoid-Responsive Catharanthus AP2-domain protein 2 (ORCA2) and ORCA3 increases rapidly upon fungal elicitation (Menke et al., 1999; van der Fits and Memelink, 2000). ORCA2 and ORCA3 are AP2-domain transcription factors that activate STR expression through a jasmonate signal transduction pathway by binding to the jasmonate and elicitor-responsive element (JERE) in the STR promoter (Menke et al., 1999; van der Fits and Memelink, 2001). ORCA2 (Li et al., 2013) and ORCA3 (van der Fits and Memelink, 2000; Peebles et al., 2009; Wang et al., 2010; Zhou et al., 2010b; Tang et al., 2011; Pan et al., 2012; Van Moerkercke et al., 2015) have also been shown to regulate many additional TIA biosynthetic and regulatory genes. The levels of some TIAs are also affected by overexpression of ORCA2 (Li et al., 2013) and/or ORCA3 (van der Fits and Memelink, 2000; Peebles et al., 2009; Wang et al., 2010; Zhou et al., 2010b; Tang et al., 2011; Pan et al., 2012). BIS1, a basic Helix-Loop-Helix transcription factor, regulates expression of the genes encoding all of the enzymes necessary for the conversion of geranyl diphosphate to loganic acid (Van Moerkercke et al., 2015). The CrMYC2 (basic Helix-Loop-Helix) transcription factor helps regulate TIA production by controlling the jasmonate-responsive expression of the ORCA genes. Two AT-hook DNA binding proteins, 2D173 and 2D7, also help regulate ORCA3 expression (Vom Endt et al., 2007). The CrMYC1 transcription factor helps regulate STR expression and is responsive to both jasmonate and elicitor treatments (Chatel et al., 2003). Similarly, the CrWRKY1 and CrWRKY2 transcription factors exert positive regulatory effects on multiple TIA biosynthetic genes (Suttipanta, 2011; Suttipanta et al., 2011). However, overexpression of CrWRKY1 and CrWRKY2 has contrasting effects on expression of specific TIA regulatory genes. Overexpression of CrWRKY1 results in decreased transcript levels for the ORCA2, ORCA3, and CrMYC2 TIA transcriptional activators and increased transcript levels for the ZCT1, ZCT2, and ZCT3 TIA transcriptional repressors (Suttipanta, 2011), whereas overexpression of CrWRKY2 results in increased transcript levels for both the ORCA2, ORCA3, and CrWRKY1 TIA transcriptional activators and the ZCT1 and ZCT3 TIA transcriptional repressors. In addition to transcriptional activators, three zinc finger proteins, ZCT1, ZCT2, and ZCT3 (Pauw et al., 2004; Chebbi et al., 2014), and two G-box-binding factors, GBF1 and GBF2 (Sibéril et al., 2001), act as transcriptional repressors of specific genes from the TIA or TIA feeder pathways.

In parsley, the coordinate induction of BPF-1 and phenylalanine ammonia-lyase (PAL) expression in cell cultures suggests that BPF-1 might play an important role in disease resistance by helping regulate expression of plant defense genes (da Costa e Silva et al., 1993). In C. roseus, CrBPF1 was found to bind specifically to the BA element within the STR promoter. CrBPF1 promotes STR transcription through a signal transduction pathway that is responsive to elicitors but not jasmonate and acts downstream of protein phosphorylation and calcium influx (van der Fits et al., 2000).
However, current evidence indicates that CrBPF1 activity is not sufficient for elicitor-induced STR expression (van der Fits et al., 2000). Deletion of the BA fragment did not eliminate the ability of the STR promoter to respond to elicitor or jasmonate, whereas alteration or deletion of the JERE fragment rendered the STR promoter unable to respond to either of these compounds (Menke et al., 1999). Information regarding whether CrBPF1 plays a role in the regulation of TIA-related genes other than STR has not previously been reported, leaving the role of CrBPF1 in regulation of TIA metabolism unknown. To address this issue, a transgenic hairy root line of C. roseus that overexpresses CrBPF1 under the control of a β-estradiol inducible promoter was generated. The transcript levels of 31 TIA biosynthetic and regulatory genes were tracked over a 72-h period under β-estradiol-induced and un-induced conditions. The levels of 14 metabolites from the TIA and TIA feeder pathways were also investigated over the same time course, with nine of those metabolites being present at detectable levels in the majority of the samples analyzed. The results of these transcriptional and metabolic profiling experiments have revealed the role of CrBPF1 in regulation of TIA metabolism.

Plant Materials and Growth Conditions

Catharanthus roseus, Vinca Little Bright Eye, was used for this work. Seeds were surface sterilized and then germinated on B5 medium (Sigma, St. Louis, MO, USA) supplemented with Gamborg's vitamins (Sigma, St. Louis, MO, USA). Seeds were germinated in the dark at 26°C for 2 weeks. The seedlings were then transferred to a 16-h-light/8-h-dark cycle with a light intensity of approximately 44 μmol m⁻² s⁻¹ for 4 weeks before inoculation with Agrobacterium tumefaciens.

Generation of CrBPF1 Overexpression Construct

The pERKT vector (Tsuda et al., 2012) is a modified version of the pMDC32 Gateway vector (Curtis and Grossniklaus, 2003) that expresses the XVE chimeric transcriptional activator (Zuo et al., 2000) under the control of a strong constitutive promoter. XVE allows estradiol-inducible expression of genes cloned behind an appropriate promoter sequence (Zuo et al., 2000). For this work, the full-length CrBPF1 open reading frame was inserted into pERKT behind a promoter that allows estradiol-inducible expression by XVE. DNA containing the CrBPF1 open reading frame was obtained by PCR amplification of C. roseus cDNAs using KOD Hot Start DNA polymerase (Novagen, Madison, WI, USA) and the following primer pair: 5′-ATGGTGTTGAAGAGAAGGC-3′ and 5′-TTAATCCGCCTGAGCATCC-3′. The resulting PCR fragment was cloned into the pCR8/GW/TOPO entry vector (Invitrogen, Grand Island, NY, USA) and then transferred to pERKT by an LR reaction to form pERKT-CrBPF1 (Supplemental Figure S1).

Generation of Transgenic C. roseus Hairy Roots

Transformation of 6-week-old C. roseus was carried out using an equal mixture of A. tumefaciens cultures transformed with pERKT-CrBPF1 or the pPZPROL plasmid that carries the rol ABC genes (Hong et al., 2006), as previously described (Hong et al., 2006). Hairy roots appeared on infection sites approximately 4 weeks after inoculation. These hairy roots were allowed to grow to a length of approximately 1 cm and then excised and transferred to solid medium supplemented with 30 g L⁻¹ sucrose, 6 g L⁻¹ agar, 250 mg L⁻¹ cefotaxime, half-strength Gamborg's B5 salts, and full-strength Gamborg's vitamins (pH 5.8).
One week after transfer to the above media, 30 mg L⁻¹ hygromycin was used to select for roots carrying the pERKT-CrBPF1 construct. Hygromycin-resistant hairy roots were further screened by PCR using primers with sequences 5′-ATGATCACAAGCTGATCCCC-3′ and 5′-GTGCGTTCGGAAAAAGAATC-3′ to amplify a DNA sequence that spans CrBPF1 and adjacent vector sequences within pERKT-CrBPF1. Transgenic hairy root lines exhibiting hygromycin resistance and a positive reaction in the PCR test were transferred to 50 mL of liquid media containing half-strength Gamborg's B5 liquid solution supplemented with full-strength Gamborg's vitamins and 30 g L⁻¹ sucrose in a 150-mL flask. Hairy root cultures were incubated in the dark on a shaker at 225 rpm and sub-cultured every 5 weeks.

Induction of Transgene Expression and Tissue Collection

One transgenic hairy root line carrying the pERKT-CrBPF1 overexpression construct (CrBPF1-OE) and one control line carrying only the pPZPROL transformation construct (Li et al., 2013) were used for time course analyses. Each transgenic hairy root culture was started from five hairy roots that were 3-4 cm in length. These cultures were grown for 35 days and then transferred to fresh liquid media with 20 μM β-estradiol (induced cultures) or without β-estradiol (un-induced cultures) for 0, 6, 12, 24, 48, or 72 h before being harvested. Three separate cultures were harvested for each hairy root line, time point and treatment condition. Hairy roots were flash frozen immediately after harvest using liquid nitrogen and then stored at −80°C. Aliquots of the same tissue samples were used for analyses of both transcript and metabolite levels.

RNA Extraction and qRT-PCR Analyses

Total RNA was isolated using the Spectrum Total RNA Isolation Kit with on-column DNase 1 digestion (Sigma, St. Louis, MO, USA), as described (Huang et al., 2010). For quantification of transcripts produced by BIS1, the endogenous and trans CrBPF1 genes, CrMYC1, CrMYC2, CrWRKY1, CrWRKY2, DXS1, DXS2B, MAT, T16H2, and T19H, the GoScript Reverse Transcription System (Promega, Madison, WI, USA) was used to produce cDNAs. These cDNAs were analyzed by qPCR using the LightCycler 480 SYBR Green I Master mix by Roche and a Roche LightCycler 480 II (Roche Diagnostics, Indianapolis, IN, USA). For quantification of transcripts produced by all other genes (including the total transcripts produced by the CrBPF1 endogenous and transgenes combined), SuperScript II reverse transcriptase (Invitrogen, Grand Island, NY, USA) was used for cDNA synthesis and qPCR was performed on an ABI 7900 HT (Applied Biosystems, Grand Island, NY, USA) with a 384-well ABI optical plate using Roche Universal Probes (Roche Applied Science, USA) and the Homebrew master mix (University of Minnesota Genomics Center, Minneapolis, MN, USA). PCR primers and Roche Universal Probe numbers are described in Supplemental Table S1. qPCR data were normalized using the geometric average (Vandesompele et al., 2002) of two control genes, EF1 and UBQ11, which exhibit particularly stable expression patterns in C. roseus (Wei, 2010). Differences in EF1 and UBQ11 CT levels were typically minor (Supplemental Table S2). Relative mRNA levels for each gene were converted to ΔCt values; positive ΔCt values indicate higher transcript levels in one condition, and negative ΔCt values indicate the reverse situation. As the amount of PCR product approximately doubles with each reaction cycle, a Ct difference of one corresponds to approximately a twofold difference in transcript levels.
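To make the normalization and fold-change arithmetic above concrete, the following is a minimal sketch (Python; the Ct readings, and the exact function shape, are illustrative assumptions, not values or code from this study):

```python
import numpy as np

def fold_change(ct_target_induced, ct_target_uninduced,
                ct_ref_induced, ct_ref_uninduced):
    # Ct values are on a log2 scale, so the arithmetic mean of the
    # reference-gene Cts (e.g., hypothetical EF1 and UBQ11 readings)
    # corresponds to the geometric average of their expression levels.
    ref_ind = np.mean(ct_ref_induced)
    ref_un = np.mean(ct_ref_uninduced)
    delta_ind = ct_target_induced - ref_ind   # normalized Ct, induced
    delta_un = ct_target_uninduced - ref_un   # normalized Ct, un-induced
    # A Ct difference of one corresponds to roughly a twofold
    # difference in transcript level.
    return 2.0 ** (delta_un - delta_ind)

# Hypothetical readings: one target gene, two reference genes
print(fold_change(22.0, 24.5,
                  np.array([18.1, 19.3]), np.array([18.0, 19.2])))
```

With these invented numbers, the target transcript would be roughly sixfold more abundant in the induced culture; the real analysis of course rests on the replicate Ct data described above.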
Alkaloid Extraction and Analysis

Frozen hairy root tissue samples were lyophilized and ground to a fine powder. Fifty milligram aliquots of the ground tissue samples were extracted on ice for 10 min using 10 mL of methanol and a Model VC 130PB sonicating probe (Sonics & Materials, Inc., Newton, CT, USA), as previously described (Sander, 2009). The extracts were centrifuged for 12 min at 4000 rpm at 15°C. The biomass was re-extracted as above and the supernatants combined and passed through a 0.45 μm nylon filter (25 mm, PJ Cobert, St Louis, MO, USA). The supernatants were then dried using a nitrogen evaporator (Organomation Associates, Inc., Berlin, MA, USA). The residues were dissolved in 2 mL of methanol, filtered using a 0.22 μm nylon filter (13 mm, PJ Cobert) and then stored at −25°C. Twenty-microliter aliquots of the extracted alkaloid concentrate were run on a Phenomenex Luna C18(2) column (250 mm × 4.6 mm) connected to a Waters high performance liquid chromatography system [1525 binary pump, 717plus Autosampler, 996 Photo Diode Array (PDA) detector] using three different solvent systems. TIAs were examined following a previously described method (Sander, 2009). PDA data extracted at 254 nm were compared to standards for the quantification of strictosidine (gift from Dr. …). PDA data extracted at 329 nm were compared to standards for the quantification of tabersonine, lochnericine, and hörhammericine (all in-house standards). Tryptophan (Sigma, St. Louis, MO, USA) and tryptamine (Sigma, St. Louis, MO, USA) were investigated using a previously described method and PDA data extracted at 218 nm (Peebles et al., 2005). Loganin (Fluka/Sigma, St. Louis, MO, USA) and secologanin (Fluka/Sigma, St. Louis, MO, USA) were measured using a previously described method and PDA data extracted at 239 nm (Li et al., 2013).

Promoter Analysis

Catharanthus roseus DNA sequences available in the Medicinal Plant Genomics Resource and NCBI databases were screened using BLASTN for sequences similar to the 16-nucleotide (CAAAAGTATTATGATT) and 42-nucleotide (CGCTATTTATCATATAATTATTTTACAATAATTAGTATTAGG) CrBPF1 binding sites (van der Fits et al., 2000).

Statistical Analyses

A two-tailed Student's t-test was employed for statistical analyses. To identify statistically significant differences, results from β-estradiol induced hairy roots carrying the CrBPF1 overexpression construct were compared with results from un-induced hairy roots carrying the CrBPF1 overexpression construct. For qRT-PCR experiments, (*) was used to represent p ≤ 0.05 and (**) was used to represent p ≤ 0.01. For analyses of metabolite levels, (*) was used to represent p ≤ 0.1 and (**) was used to represent p ≤ 0.05.
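As a rough illustration of the two-tailed Student's t-test used in the Statistical Analyses above, the sketch below compares hypothetical induced and un-induced replicate values with SciPy (the replicate numbers are invented):

```python
from scipy import stats

# Hypothetical transcript-level replicates (arbitrary units) for one gene
induced = [2.1, 2.4, 1.9]
un_induced = [1.0, 1.2, 0.9]

# Two-tailed Student's t-test, as used to flag (*) p <= 0.05
# and (**) p <= 0.01 differences between treatments
t_stat, p_value = stats.ttest_ind(induced, un_induced)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```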
Generation of Transgenic Hairy Roots Expressing CrBPF1 under the Control of a β-Estradiol Inducible Promoter

CrBPF1 was identified by screening for proteins that bind the STR promoter. However, information regarding whether CrBPF1 plays a role in the regulation of other TIA or TIA-related genes is lacking, preventing determination of the role of CrBPF1 in regulation of these pathways. To address this deficiency, C. roseus transgenic hairy root lines that express CrBPF1 under the control of an estradiol-inducible promoter (Zuo et al., 2000) were generated. Use of an estradiol-inducible promoter provides several advantages. An inducible promoter allows the timing and level of transgene expression to be controlled. This ability allows transgenic cultures to be grown without transgene expression, avoiding the possible deleterious consequences of long-term transgene expression. An inducible system also allows studies on the transient effects of transgene expression.

To generate transgenic hairy roots, C. roseus seedlings were inoculated with a mixture of A. tumefaciens cells transformed with either pERKT-CrBPF1 (Supplemental Figure S1) or pPZPROL (Hong et al., 2006). A total of 62 hairy root lines were screened for hygromycin resistance and seven lines were found to be resistant. These seven hairy root lines were confirmed to carry the CrBPF1 transgene by using PCR to demonstrate the presence of a DNA fragment spanning part of the CrBPF1 gene and an adjacent sequence from the pERKT-CrBPF1 vector. These positive lines were transferred to liquid culture. Lines with good adaptation to growth in liquid culture were screened for β-estradiol inducible expression of CrBPF1. As expression of the CrBPF1 transgene is particularly strongly induced by β-estradiol in the CrBPF1-OE line (Figure 2), and CrBPF1-OE also adapted well to growth in liquid media, further studies utilized this line.

To characterize β-estradiol inducible expression of CrBPF1, a time course experiment was performed using the CrBPF1-OE and control transgenic lines. The transgenic hairy root control line was generated previously (Li et al., 2013) by transforming C. roseus with pPZPROL alone, and thus lacks the β-estradiol-inducible CrBPF1 transgene. The CrBPF1-OE and control lines were grown for 35 days and then transferred to fresh media with 0 μM (un-induced) or 20 μM (induced) β-estradiol. Tissue samples were collected 0, 6, 12, 24, 48, and 72 h after addition of β-estradiol. CrBPF1 transcripts produced by the CrBPF1 endogenous gene and by the CrBPF1 transgene were quantified separately using qRT-PCR (Figure 2). CrBPF1 transgene mRNA levels increased rapidly in the CrBPF1-OE line after addition of 20 μM β-estradiol, rising approximately 50-fold within 6 h, and remained high for at least 72 h. Un-induced cultures of the CrBPF1-OE line exhibited much lower transcript levels for the CrBPF1 transgene than the induced cultures, indicating that β-estradiol is necessary for high-level expression of the CrBPF1 transgene (Figure 2A). As expected, qRT-PCR reactions using a primer pair specific for the CrBPF1 transgene produced only a very low signal from RNA isolated from the control line (data not shown), which lacks the CrBPF1 transgene. Transcript levels for the CrBPF1 endogenous gene were not significantly affected by treatment with β-estradiol and were similar in the CrBPF1-OE and control lines (Figure 2B).

Figure 2 | Time course analysis of CrBPF1 endogenous and transgene mRNA levels. CrBPF1 transcripts produced from the endogenous CrBPF1 gene and the CrBPF1 transgene were quantified independently by qRT-PCR using primers specific for each gene (Supplemental Table S1). As expected, the primer pair specific for transcripts produced by the CrBPF1 transgene yields only a very low background signal for the control line (data not shown), which lacks the CrBPF1 transgene.

Effects of CrBPF1 Overexpression on the Indole and Terpenoid Pathways

CrBPF1 is one of only a few putative activators of the TIA pathway identified to date. Understanding the role played by CrBPF1 in regulation of the TIA pathway is important for developing strategies for in vivo manipulation of TIA production. Toward this end, the effects of CrBPF1 overexpression on transcript levels of 31 genes were analyzed.
The genes chosen for analysis include two genes from the indole pathway (ASα and TDC), six genes from the monoterpenoid pathway (DXS1, DXS2A, DXS2B, G10H, CPR, and LAMT) and ten genes from the TIA pathway (STR, SGD, T16H1, T16H2, 16OMT, D4H, DAT, PRX1, T19H, and MAT). In addition, transcript levels of all of the cloned genes currently postulated to play a role in regulation of the TIA pathway were characterized. These genes include eight TIA transcriptional activators (ORCA2, ORCA3, CrBPF1, CrMYC1, CrMYC2, CrWRKY1, CrWRKY2, and BIS1) and five TIA transcriptional repressors (ZCT1, ZCT2, ZCT3, GBF1, and GBF2). In addition, the concentrations of 14 TIA metabolites were investigated, with nine of those metabolites being present at detectable levels. Both transcript and metabolite levels were tracked over a 72-h period in the CrBPF1-OE and control lines grown in the absence or presence of β-estradiol.

The production of TIAs is dependent on the synthesis of tryptamine in the indole pathway, the synthesis of secologanin in the terpenoid pathway and their subsequent coupling to form strictosidine. Thus, it was of interest to determine whether overexpression of CrBPF1 affects expression of genes in the indole or terpenoid pathways. To characterize the effects of overexpressing CrBPF1 on the indole pathway, ASα and TDC transcripts were quantified. ASα encodes the alpha subunit of anthranilate synthase, which catalyzes the first committed step in tryptophan synthesis. Overexpression of CrBPF1 caused a significant increase in ASα transcript levels. ASα transcript levels were slightly higher in the induced versus un-induced CrBPF1-OE cultures 6 h after addition of β-estradiol, and reached a peak at the 12-h time point before beginning to decline. In contrast, addition of β-estradiol to the media had no significant effect on ASα transcript levels in the control line (Figure 3A). TDC encodes tryptophan decarboxylase, which catalyzes the formation of tryptamine from tryptophan. TDC transcript levels were higher in the induced versus un-induced CrBPF1-OE cultures 12 and 72 h after addition of β-estradiol. However, overexpression of CrBPF1 caused only slight differences in TDC transcript levels, with a maximum difference of 75% higher transcript levels in the induced versus un-induced CrBPF1-OE cultures at the 12-h time point. Addition of β-estradiol to the media had no significant effect on TDC transcript levels in the control line (Figure 3B). An attempt was also made to analyze tryptophan and tryptamine levels in aliquots of the same tissue samples used to analyze gene expression. However, tryptophan and tryptamine levels were below the detection threshold in many of the samples.

The expression levels of six genes in the terpenoid pathway were determined. Three of these genes (DXS1, DXS2A, and DXS2B) encode different isoforms of 1-deoxy-D-xylulose 5-phosphate synthase. DXS2A and DXS2B are induced by ORCA3 overexpression, whereas DXS1 is not regulated by ORCA3 (Han et al., 2013). Overexpression of CrBPF1 had only a very modest effect on expression of the DXS genes. DXS1 expression was 70% lower in the induced versus un-induced CrBPF1-OE cultures at the 72-h time point, but was not significantly altered at the other time points assayed. Addition of β-estradiol had a very slight effect on DXS1 expression in the control line, causing a 6 to 30% increase in DXS1 transcript levels at the 6, 12, and 72-h time points (Figure 4A).
DXS2A expression was slightly induced by β-estradiol at the 12-h time point in the CrBPF1-OE line (Figure 4B). DXS2B transcript levels were increased 1.6- and 2.0-fold in the induced versus un-induced CrBPF1-OE cultures at the 6 and 12-h time points, respectively. However, DXS2B expression was also induced 1.5-fold in the induced versus un-induced control cultures at the 6-h time point, suggesting that application of β-estradiol, rather than increased CrBPF1 expression, might be responsible for the alterations in DXS2B transcript levels (Figure 4C). G10H transcript levels were significantly higher in the induced versus un-induced CrBPF1-OE cultures at the 6, 12, and 72-h time points, but the differences in transcript levels were slight, reaching a maximum difference of less than twofold at the 12-h time point (Figure 4D). CPR transcript levels were significantly higher in the induced versus un-induced CrBPF1-OE cultures at the 12, 48, and 72-h time points, but the differences in transcript levels were slight, with a maximum difference of approximately 1.5-fold at the 48-h time point (Figure 4E). LAMT transcript levels were approximately 2.5-fold higher in the induced versus un-induced CrBPF1-OE cultures at the 12-h time point, but were not significantly different at the other time points analyzed (Figure 4F). Addition of β-estradiol had no significant effects on transcript levels of G10H, CPR, or LAMT in the control line. The levels of loganin and secologanin were also determined in aliquots of the same tissue samples used for gene expression analyses. The addition of 20 μM β-estradiol to the media caused no substantial alterations in the levels of either of these metabolites over the time period analyzed (Figures 4G,H).

Effects of CrBPF1 Overexpression on TIA Biosynthetic Gene mRNA Levels

To characterize the effects of overexpressing CrBPF1 on the TIA pathway, transcript levels of ten TIA biosynthetic genes were characterized. STR and SGD encode the enzymes that catalyze the first two steps in TIA biosynthesis. Overexpression of CrBPF1 had no significant effects on STR transcript levels (Figure 5A). SGD transcript levels were 75% higher in the induced versus un-induced CrBPF1-OE cultures at the 12-h time point, but were not significantly altered by CrBPF1 overexpression at the other time points assayed (Figure 5B). T16H, 16OMT, D4H, and DAT encode enzymes that catalyze different steps in the pathway leading from tabersonine to formation of vindoline. Overexpression of CrBPF1 had no significant effects on T16H1 (Figure 5C) or T16H2 (Figure 5D) transcript levels. 16OMT transcript levels were 2.6-fold higher in induced versus un-induced CrBPF1-OE cultures at the 12-h time point, but were not significantly altered by CrBPF1 overexpression at the other time points assayed (Figure 5E). D4H transcript levels were significantly higher in the induced versus un-induced CrBPF1-OE cultures at the 12 and 48-h time points, but not at the other time points analyzed (Figure 5F). The effects of CrBPF1 overexpression on DAT transcript levels were more complex. DAT transcript levels were approximately 3-fold higher in the induced versus un-induced CrBPF1-OE cultures at the 12-h time point, but were almost twofold lower in the induced versus un-induced CrBPF1-OE cultures at the 48-h time point. Interestingly, DAT transcript levels in the CrBPF1-OE line were consistently below those in the control line (Figure 5G).
PRX1 encodes a vacuolar class III peroxidase that catalyzes the synthesis of α-3′,4′-anhydrovinblastine from catharanthine and vindoline. Overexpression of CrBPF1 had no significant effects on PRX1 transcript levels (Figure 5H). T19H transcript levels were 2.3-fold higher in induced versus un-induced CrBPF1-OE cultures at the 48-h time point, but were not significantly altered by CrBPF1 overexpression at the other time points assayed (Figure 5I). Overexpression of CrBPF1 had no significant effects on MAT transcript levels (Figure 5J).

Addition of β-estradiol to the media had little effect on TIA biosynthetic gene expression in the control line. Where TIA biosynthetic gene transcript levels did vary between control cultures grown in the presence of 0 versus 20 μM β-estradiol, transcript levels were typically higher in the cultures grown on 0 μM β-estradiol. For example, T16H1 transcript levels at the 48-h time point and DAT and PRX1 transcript levels at the 24-h time point were higher in control cultures grown on 0 μM β-estradiol than on 20 μM β-estradiol (Figure 5). These results are in contrast to the results observed for the CrBPF1-OE cultures, where addition of 20 μM β-estradiol to the media tended to cause increased transcript levels.

Effects of CrBPF1 Overexpression on TIA Metabolite Levels

As CrBPF1 overexpression affects the transcript levels of many of the genes involved in synthesis of TIAs or TIA precursors, it was of interest to determine whether overexpression of CrBPF1 affects TIA metabolite levels. Toward that end, the levels of ten TIA metabolites were analyzed over a 72-h period in the CrBPF1-OE line grown in the presence or absence of 20 μM β-estradiol, with seven of those metabolites being present at detectable levels (Figure 6). The metabolites analyzed were tabersonine, lochnericine, hörhammericine, catharanthine, serpentine, ajmalicine, strictosidine, vindoline, vincristine, and vinblastine, with the levels of the last three being below the detection threshold. Overexpression of CrBPF1 had only modest effects on the levels of the other seven metabolites, with the largest statistically significant effect being ∼40% lower serpentine levels in the induced versus un-induced CrBPF1-OE cultures at the 12-h time point. The levels of the same metabolites were also analyzed in the control line, at 0 and 24 h after transfer to fresh media with 0 or 20 μM β-estradiol. Addition of 20 μM β-estradiol to the media had little effect on the levels of any of the TIA metabolites analyzed in the control line (data not shown).

Effects of CrBPF1 Overexpression on TIA Regulatory Genes

In addition to regulating expression of biosynthetic genes directly, a transcriptional regulator may affect expression of biosynthetic genes by altering the activities of other transcriptional regulators. To determine whether overexpression of CrBPF1 affects the activities of TIA transcriptional regulators, transcript levels for the putative TIA regulatory genes were analyzed over the same time course (Figure 7). ORCA3 transcript levels were significantly higher in the induced versus un-induced CrBPF1-OE cultures at several time points, reaching a maximum difference of 2.4-fold at the 48-h time point (Figure 7B). To determine whether CrBPF1 affects its own expression, transcripts produced by the endogenous CrBPF1 gene were analyzed in the CrBPF1-OE and control lines grown on 0 or 20 μM β-estradiol. Increased expression of the CrBPF1 transgene had no significant effect on CrBPF1 endogenous gene transcript levels, indicating that CrBPF1 does not regulate its own expression at the steady-state transcriptional level (Figure 2B).
CrMYC1 transcript levels were almost fourfold higher in induced versus un-induced CrBPF1-OE cultures at the 6 and 12-h time points, but were not significantly altered at later time points (Figure 7C). Overexpression of CrBPF1 had smaller, but more consistent, effects on CrMYC2 transcript levels, with 30-60% higher CrMYC2 transcript levels in the induced versus un-induced CrBPF1-OE cultures at all time points assayed, except for the latest time point (Figure 7D). Overexpression of CrBPF1 also had a modest effect on CrWRKY1 transcript levels, which exhibited statistically significant 30% increases in the induced versus un-induced CrBPF1-OE cultures at the 12 and 48-h time points (Figure 7E). In contrast, overexpression of CrBPF1 had no statistically significant effects on CrWRKY2 transcript levels (Figure 7F). Overexpression of CrBPF1 caused a statistically significant 60% increase in BIS1 transcript levels at the 12-h time point and a 20% increase at the 72-h time point (Figure 7G).

Overexpression of CrBPF1 had a comparatively large effect on expression of the ZCT1 transcriptional repressor. ZCT1 transcript levels were twofold to threefold higher in the induced versus un-induced CrBPF1-OE cultures at all time points assayed, except for the earliest, 6-h, time point (Figure 7H). Overexpression of CrBPF1 also caused significant increases in ZCT2 (Figure 7I) and ZCT3 (Figure 7J) expression levels at the 12, 48, and 72-h time points. Overexpression of CrBPF1 caused a larger increase in ZCT3 than in ZCT2 transcript levels, with ZCT3 transcript levels being approximately 2- to 2.5-fold higher in the induced versus un-induced CrBPF1-OE cultures, as opposed to approximately 1.25- to 2-fold increases in ZCT2 transcript levels. Overexpression of CrBPF1 had modest, but fairly consistent, effects on GBF1 expression. GBF1 transcript levels were significantly higher in the induced versus un-induced CrBPF1-OE cultures at all time points assayed, except for the 24-h time point, where the mean GBF1 transcript levels were higher in the induced than in the un-induced CrBPF1-OE cultures, but the difference was not statistically significant (Figure 7K). Overexpression of CrBPF1 had only a very slight effect on GBF2 transcript levels, with the only statistically significant difference being an approximately 40% increase in GBF2 transcript levels in the induced versus the un-induced CrBPF1-OE cultures at the 12-h time point (Figure 7L). Addition of 20 μM β-estradiol to the media had little effect on expression of TIA regulatory genes in the control cultures.

TIA Promoter Analysis

DNase I footprinting resulted in the identification of 16- and 42-nt CrBPF1 binding sites within the BA fragment of the C. roseus STR promoter (van der Fits et al., 2000). C. roseus DNA sequences available in the Medicinal Plant Genomics Resource and NCBI databases were searched using BLASTN for sequences similar to these 16- and 42-nucleotide CrBPF1 binding sites. Examination of the top 50 matches for the 16- and 42-nt sequences from the Medicinal Plant Genomics Resource did not reveal any matches to promoter sequences from genes believed to be involved in TIA biosynthesis, with the exception of STR. In contrast, searches of the sequences available in GenBank revealed several partial matches to the 5′ regions of genes involved in TIA biosynthesis. The best matches to sequences lying within approximately 1,500 bp 5′ of a transcription start site are listed in Supplemental Table S3. In addition to STR, partial matches for both the 16- and 42-nt sequences were found for ORCA3 and TDC. However, the spacing between these partial matches was much larger for ORCA3 and TDC than for STR. The 5′ ends of the 16- and 42-nt sequences are 70 bp apart in the STR promoter, but are 365 and 552 bp apart in the ORCA3 and TDC promoters, respectively. Partial matches to the 16-nt sequence, but not to the 42-nt sequence, were found for CPR, PRX1, BIS1, and CrWRKY1. A partial match to the 42-nt sequence, but not to the 16-nt sequence, was found for DXS2B.
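The BLASTN screen above searches promoters for partial matches to the CrBPF1 binding sites; purely as a toy stand-in, a naive sliding-window scan can rank promoter windows by mismatch count against the 16-nt site (the promoter fragment below is hypothetical, and this is not the alignment algorithm actually used):

```python
def best_partial_matches(promoter, motif="CAAAAGTATTATGATT", top=3):
    # Score every window of the promoter against the 16-nt CrBPF1
    # binding site by counting mismatches (a crude stand-in for BLASTN,
    # which also handles gaps and scores matches statistically).
    hits = []
    for i in range(len(promoter) - len(motif) + 1):
        window = promoter[i:i + len(motif)]
        mismatches = sum(a != b for a, b in zip(window, motif))
        hits.append((mismatches, i, window))
    return sorted(hits)[:top]

# Hypothetical promoter fragment containing a near-perfect site
print(best_partial_matches("ATGCCAAAAGTACTATGATTGGCTTACA"))
```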
Discussion

CrBPF1 was identified as a MYB-like protein that binds the BA region of the C. roseus STR promoter (van der Fits et al., 2000). Although thirteen transcriptional regulators have been postulated to act in regulation of the TIA pathway, the effects of most of these transcriptional regulators on expression of the majority of the known TIA biosynthetic and regulatory genes have not yet been determined. To address this deficiency, a C. roseus transgenic hairy root line that expresses CrBPF1 under the control of a β-estradiol inducible promoter was generated and characterized. Addition of β-estradiol to the medium causes a large and rapid induction of CrBPF1 transgene expression in the CrBPF1-OE line but does not affect CrBPF1 expression in a control line, indicating that the presence of the transgene is necessary for increased CrBPF1 transcript levels.

Terpenoid indole alkaloid production is dependent on the synthesis of precursors by both the indole and terpenoid pathways, the combination of these precursors by STR and subsequent reactions carried out by different branches of the TIA pathway. Overexpression of CrBPF1 caused increased expression of the majority of the genes analyzed from these pathways. However, these effects were typically transient and of limited magnitude. Interestingly, CrBPF1 overexpression may cause decreases in DAT expression. Although overexpression of CrBPF1 caused increased DAT transcript levels after 12 h, DAT transcript levels decreased after 48 h. In addition, DAT transcript levels were consistently lower in the CrBPF1-OE line than in the control line over the entire time course, suggesting that increased CrBPF1 expression tends to have a negative effect on DAT activity. Interestingly, DAT expression has been shown to be strongly decreased in response to overexpression of ORCA2 (Li et al., 2013).

The finding that CrBPF1 overexpression does not have a significant effect on STR transcript levels might appear somewhat unexpected, given that CrBPF1 is known to bind the STR promoter (van der Fits et al., 2000). However, this finding is consistent with previous work suggesting that CrBPF1 might have only a limited effect on STR expression (van der Fits et al., 2000). Binding of CrBPF1 to the STR promoter is only part of the process promoting STR transcription, with binding of additional factors to other cis sequences playing an important role in regulation of STR activity (van der Fits et al., 2000; Chatel et al., 2003). Consequently, overexpression of CrBPF1 alone is insufficient to cause significant alterations in STR transcript levels.

To determine whether overexpression of CrBPF1 affects the expression of other regulators, transcript levels for all genes postulated to encode TIA regulators were analyzed. Overexpression of CrBPF1 caused increased expression of all of the other TIA transcriptional activators except CrWRKY2.
In addition, expression of the CrBPF1 transgene did not affect expression of the CrBPF1 endogenous gene, indicating that CrBPF1 does not regulate its own expression. Interestingly, overexpression of CrBPF1 also caused increased transcript levels for all five of the genes postulated to encode TIA transcriptional repressors. These results suggest that CrBPF1 overexpression could be altering the transcript levels of some of the biosynthetic genes characterized indirectly, by altering the activities of other TIA regulatory genes. The results of a promoter analysis are consistent with this possibility. Partial matches for both the 16- and 42-nt CrBPF1 binding sites from the STR promoter were found in the promoters of only ORCA3 and TDC, among the genes analyzed as part of this study. However, the spacing between these partial matches was much greater for ORCA3 and TDC than for STR. Partial matches to the 16-nt CrBPF1 binding sequence were also found for the CPR, PRX1, BIS1, and CrWRKY1 promoters and to the 42-nt sequence for DXS2B.

As CrBPF1 overexpression affects the activities of several TIA biosynthetic genes and of most of the TIA regulatory genes analyzed, it was of interest to determine whether CrBPF1 overexpression also affects TIA metabolite levels. Toward that end, the levels of 14 metabolites from different parts of the TIA and feeder pathways were analyzed, with the levels of nine of those metabolites being above the detection threshold. The results of these analyses indicate that CrBPF1 overexpression causes only slight, and transient, alterations in lochnericine, hörhammericine and serpentine levels, typically causing decreased levels of these metabolites. Lochnericine is the precursor for synthesis of hörhammericine, suggesting that flux to this branch of the TIA pathway may be altered by CrBPF1 overexpression. Serpentine is synthesized via a different branch of the TIA pathway. The relatively minor effects of CrBPF1 overexpression on TIA metabolite levels may be due to the fact that, although CrBPF1 overexpression increases the activities of many TIA and related genes, these increases in gene expression are typically of limited magnitude and duration.

The results of this work indicate that although CrBPF1 regulates a high percentage of the genes analyzed, the effects of CrBPF1 overexpression on gene activity levels tend to be of comparatively limited magnitude and are often transient. These modest alterations in TIA biosynthetic gene expression may help explain the limited effects of CrBPF1 overexpression on the levels of the TIA metabolites analyzed. The relatively minor increases in gene expression may be due to the fact that CrBPF1 overexpression causes increased expression of all five TIA transcriptional repressors, in addition to causing increased activity of most of the TIA transcriptional activators. In contrast, overexpression of ORCA2 (Li et al., 2013) and ORCA3 (Peebles et al., 2009) causes increased expression of the ZCT transcriptional repressors, but not of the GBF transcriptional repressors. As overexpression of CrBPF1 has comparatively large effects on expression of TIA transcriptional repressors, future characterization of these repressors is expected to yield further insight into the mechanism by which CrBPF1 helps regulate the TIA and related pathways. The simultaneous activation of transcriptional repressors and activators has been proposed to act as part of a "fine-tune" mechanism for regulating TIA production (Memelink and Gantet, 2007).
The findings reported here support this model and suggest that CrBPF1 plays a role in the "fine-tune" regulation of TIA metabolism.
Studying the Reasons for Delay and Cost Overrun in Construction Projects: The Case of Iran

Undesirable delays in construction projects impose excessive costs and prolong project durations. Investigating Iran, a developing Middle Eastern country, this paper focuses on the reasons for construction project delays. We conducted several interviews with owners, contractors, consultants, industry experts and regulatory bodies to accurately ascertain specific delay factors. Based on the results of our industry surveys, a statistical model was developed to quantitatively determine each delay factor's importance in construction project management. The statistical model categorises the delay factors under four major classes and determines the most significant delay factors in each class: owner defects, contractor defects, consultant defects and law, regulation and other general defects. The most significant delay factors in the owner defects category are lack of attention to inflation and inefficient budgeting schedules. In the contractor defects category, the most significant delay factors are inaccurate budgeting and resource planning, weak cash flow and inaccurate pricing and bidding. As for the consultant defects, delay factors such as inaccurate first drafts and inaccuracies in technical documents make the largest contribution. On the other hand, outdated standard mandatory items in cost lists, outdated mandatory terms in contracts and weak governmental budgeting are the most important delay factors in the law, regulation and other general defects category. Moreover, regression models demonstrate that a significant difference exists between the initial and final project duration and cost. According to the models, the average delay per year is 5.9 months and the overall cost overrun is 15.4%. Our findings can be useful in at least two ways: first, resolving the root causes of particularly important delay factors would significantly streamline project performance and second, the regression models could assist project managers and companies with revising initial timelines and estimated costs. This study does not consider all types of construction projects in Iran: the scope is limited to certain types of private and publicly funded projects, as will be described. The data for this study has been gathered through a detailed questionnaire survey.

INTRODUCTION

Construction is among the most flourishing business sectors in the Middle East (Sweis et al., 2008). Construction projects absorb immense investments and play a major role in the economies of developing countries. This study covers the following types of projects:

1. Private sector as the owner: residential construction projects with total project area between 1,000 to 10,000 square meters.
2. Government as the owner: civilian construction projects including rehabilitation and maintenance projects for educational infrastructure with total project area between 1,000 to 10,000 square meters.

Our paper includes educational infrastructure projects since the government of Iran funds several construction, rehabilitation and maintenance projects for the educational spaces and infrastructure throughout the country; moreover, such projects are usually homogenous in terms of the construction methods, budgeting and timelines. As a result, this study will provide a comprehensive outlook of the delay factors and their contributions to delays and cost overruns throughout Iran's construction industry.
Accordingly, the contributions of this research are: (1) to determine the reasons for delay in the specified types of construction projects in Iran as a developing country, (2) to determine the probability of occurrence of the identified reasons for delay with an objective and unbiased approach, (3) to statistically test whether the delays and cost overruns are significant, (4) to provide recommendations to organisations and companies who play a role in the construction sector of Iran on how to mitigate the delays and (5) to facilitate risk management efforts by developing regression models that allow the project parties to revise their initial duration and cost estimates.

Causes of delay in construction projects in Malaysia have been studied in several research papers. According to Abdul Kadir et al. (2005), the most important delay factors were shortage of material, late payments to suppliers, change orders, late submission of drawings and poor site management. Using a different questionnaire, Sambasivan and Soon (2007) described 10 reasons including improper planning, poor site management, lack of experience, late payments, problems with subcontractors, labour supply and shortage of material as the most important delay factors in Malaysian construction projects. Alaghbari et al. (2007) list financial and coordination problems as the most important delay factors in Malaysia. Hamzah et al. (2012) list several factors including labour productivity, material delivery, inflation, insufficient equipment and slow decision making as delay factors in Malaysia. One can confirm that although different studies list a number of common items as the delay factors in Malaysian construction projects, the presence of non-recurrent factors between different studies is normal. Differences in the determined factors can be traced back to a number of inconsistencies between the studies, including dissimilar survey methods, different numbers of respondents, differences between the profiles of the respondents, dissimilar statistical methods, etc. Table 1 lists several papers that have identified the reasons for construction project delays in developing countries in the Middle East, Asia and Africa. Based on our review of the literature, we can clearly conclude the following:

1. Although some similarities exist between different studies, we note that each study explores the construction delay issue according to the influential parameters and specific environmental factors in which the research is conducted. In other words, the delay factors and their importance may be different between countries with different social and economic environments. Local laws and regulations, which are obviously dissimilar between various countries, exhibit a significant effect on the delay factors. The effect of laws and regulations on the delay factors can be best noticed from studies such as Odeh and Battaineh (2002) and Sweis et al. (2008) for Jordan; another example is Assaf and Al-Hejji (2006) and Al-Khalil and Al-Ghafly (1999) for Saudi Arabia.
2. There is a dearth of comprehensive studies to determine the reasons for delay in construction projects in Iran.

RESEARCH METHOD AND STATISTICAL ANALYSES

Data gathering was conducted in two separate phases: (1) identifying the delay factors and (2) determining the probability of occurrence of each delay factor. In order to accurately identify the delay factors, several interviews were conducted with owners, contractors, consultants, industry experts, and regulatory bodies.
The interviewees were selected based on their experience and organisational position. Accordingly, the interviews were conducted with individuals employed at senior managerial levels of their companies. Several interviews were organised with professionals serving at the top managerial levels of Tehran's municipality. In addition, we stipulated that respondents required Iranian construction industry involvement as an owner, contractor or consultant in at least five projects. Table 2 provides more details about the interviewees. Results of these interviews were carefully discussed and compared with similar studies available in the literature. This comparison revealed that there are both similarities and differences between the delay factors in the literature and the delay factors mentioned by the interviewees of this research. Table 3 highlights some of such similarities and dissimilarities; a complete list of the delay factors of this paper is presented in Table 5. The main reason for the differences between the delay factors in this table is the differences in the business environment and socioeconomic factors in different countries. In this research, 36 delay factors in construction projects were identified and categorised under four main categories: (1) owner defects, (2) contractor defects, (3) consultant defects and (4) law, regulation and other general defects.

In phase two of the data gathering process, a questionnaire was designed to obtain the probability of occurrence of each identified delay factor. A review of the literature indicates that most of the previous studies calculate the relative importance of the delay factors. We note that relative importance of delay factors can be defined in various ways. One of the most widely used approaches to illustrating relative importance is given in Equation 1 (Kometa, Olomolaiye and Harris, 1994; Chan and Kumaraswamy, 2002; Sambasivan and Soon, 2007; Fugar and Agyakwah-Baah, 2010; Gündüz, Nielsen and Özdemir, 2013):

RI = ΣW / (A × N) (Eq. 1)

In this particular equation, RI is the relative importance index, W are the weights given to each factor by respondents, A is the highest possible weight and N is the total number of respondents. Shebob et al. (2012) employ the concept of severity index (SI) to rank the delay factors:

SI = Σ(W × n) / (A × N) (Eq. 2)

As given in this equation, n corresponds to the frequency of the responses, and W and N have the same meaning as in Equation 1. Other studies employ a combination of the relative importance as defined by Equation 1 and case-specific methods to quantify the relative importance of delay factors (Aibinu and Jagboro, 2002; Odeh and Battaineh, 2002; Frimpong, Oluwoye and Crawford, 2003; Fong, Wong and Wong, 2006; Zaneldin, 2006; Kaliba, Muya and Mumba, 2009). It can be verified that all of these studies use a Likert scale in their questionnaires to record the severity or weight of each delay factor. Undoubtedly, the weight or severity assigned to the delay factors depends on the opinion of the respondents: the respondents tend to under-estimate the risks and delays associated with their own role in a project and often over-estimate the delays caused by the other parties involved. As a result, the profile of the respondents can affect the calculated relative importance of the delay factors. In order to minimise this inevitable bias, the Likert scale was removed from the questionnaires of this paper. Moreover, this paper does not utilise the concept of relative importance of the delay factors, as practiced in the literature. Instead, a multinomial distribution interprets the responses of the respondents to a series of yes-no questions.
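For reference, the conventional relative importance index of Equation 1, which this paper deliberately avoids, can be computed as in the following sketch (a 5-point Likert scale and the response weights are assumptions for illustration):

```python
def relative_importance(weights, highest_weight=5):
    # RI = sum(W) / (A * N): W are the Likert weights given by the
    # N respondents, A is the highest possible weight (5 assumed here).
    return sum(weights) / (highest_weight * len(weights))

# Hypothetical Likert responses from six respondents for one factor
print(relative_importance([5, 4, 4, 3, 5, 2]))  # -> 0.7667 (approx.)
```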
To measure the internal consistency of the designed questionnaire, Cronbach's alpha was calculated and measured at 0.791, which is an indicator of the high internal consistency of the designed questionnaire (Hinton, 2004; Vogt and Johnson, 2011). This questionnaire was mailed to 200 respondents, all of whom were active in the construction industry. Respondents were asked if they had experienced delays in their last construction project. In case of a positive answer, the respondents were requested to indicate which delay factors contributed to this lateness. Results of these questionnaires were further used in data analysis and model development. Respondents were given the liberty to add project-specific delay factors to the prepared questionnaire in case a certain delay factor was missing from the list. Out of the 200 mailed questionnaires, 86 questionnaires were collected and considered for further investigation: a sample size of 86 questionnaires is enough to trigger the central limit theorem and guarantee the normality of the averages for the developed statistical model and hypothesis tests (Freund, 1991; Miller, Freund and Miller, 2014). Table 4 presents more details about the respondents. The developed statistical model will be discussed in the next section.

Statistical Model

In this paper, the multinomial distribution was selected to estimate the probability of occurrence of each delay factor. The multinomial probability distribution, an extension of the binomial distribution, models the probability of success in independent Bernoulli experiments (Miller, Freund and Miller, 2014; Ross, 2014). In the context of our study, the occurrence of a specific delay factor in a late construction project is considered a success, and the probability of this success is calculated in the statistical model. According to the multinomial distribution, if the probability of occurrence of outcome i is pi, the probability of observing counts x1, …, xk over n independent trials is:

P(X1 = x1, …, Xk = xk) = [n! / (x1! × ⋯ × xk!)] × p1^x1 × ⋯ × pk^xk (Eq. 3)

This paper employs a questionnaire for sampling and determining the values of pi, 1 ≤ i ≤ k. Each pi, 1 ≤ i ≤ k, represents the probability of occurrence of a specific delay factor. This paper deals with 36 delay factors: thus, k = 36. To determine the values of pi, 1 ≤ i ≤ 36, a questionnaire was designed with 36 yes-no questions. A respondent would select yes for a specific question if that particular delay factor was present in his/her delayed project. For instance, suppose that this questionnaire is filled by n respondents. Then, Equation 4 provides an unbiased estimator for parameter pi:

p̂i = (Σ xi) / n (Eq. 4)

In Equation 4, xi = 1 if a specific respondent selects yes for the ith delay factor and zero otherwise, and the sum runs over the n respondents. The above multinomial distribution function is utilised in this paper for the delay factors under each of the major categories, as described previously. As a result, four different multinomial distributions are developed. Mathematical explanations on how to calculate probability values for p̂ij, 1 ≤ i ≤ kj, j = 1, 2, 3, 4 (the probability of the occurrence of the ith delay factor in major category j) and P̂j, j = 1, 2, 3, 4 (the probability of the occurrence of each major category in a delayed project) are summarised in Appendix 2: Normalising the Probabilities. An illustrative example of the calculations of the described multinomial model is explained in Appendix 3: Illustrative Example.
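A minimal sketch of the Equation 4 estimator follows, assuming a toy 0/1 response matrix (the answers below are invented, not survey data):

```python
import numpy as np

def delay_factor_probabilities(responses):
    # Equation 4: the unbiased estimate of each factor's probability of
    # occurrence is the column mean of an n-by-k 0/1 response matrix,
    # where entry (j, i) is 1 if respondent j reported delay factor i.
    return np.asarray(responses, dtype=float).mean(axis=0)

# Hypothetical toy data: 4 respondents, 3 delay factors
answers = [[1, 0, 1],
           [1, 1, 0],
           [0, 0, 1],
           [1, 0, 0]]
print(delay_factor_probabilities(answers))  # -> [0.75 0.25 0.5]
```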
Delay Estimates and Statistical Tests The results of the delay factor analysis, as given by the survey respondents, are presented in Table 5. Table 6 summarises the probabilities of each major category. The laws, regulations and other general defects category ranks as the primary reason for delays, exhibiting the highest probability of occurrence (31%). Contractor defects, on the other hand, rank fourth with the lowest probability of occurrence (17%). Hypothesis Tests Descriptive statistics from the questionnaires reveal that the average estimated duration of the studied construction projects at the beginning of the project is 13.78 months, whereas the actual average duration of the projects is 21.44 months; the corresponding means and variances were computed from the questionnaire data. Given these numerical differences, it is of interest to test whether they are significant enough to conclude that a meaningful difference exists between the initial and actual durations of the construction projects, or whether the differences were merely observed by chance. To perform this test, we conducted a paired t-test (Miller, Freund and Miller, 2014) using the initial and actual timelines. The test hypothesis (Eq. 6) is H0: µ1 = µ2 against H1: µ1 ≠ µ2, where µ1 is the initial duration of the construction projects and µ2 is the final duration of the projects. The p-value of this test, 0.000, reveals that at a 95% confidence level one can reject the null hypothesis and conclude that there is a meaningful difference between the initial and final duration of the delayed projects (Miller and Miller, 2012); the corresponding 95% confidence interval is given in Equation 7. Another paired t-test can be performed on the initial and final cost estimates. Descriptive statistics from the questionnaires provide the corresponding means X̄1 and X̄2 (in thousands of USD). We use the same hypothesis structure as in Equation 6, where µ1 and µ2 are the initial and final costs of the population of the projects. The p-value of the test is 0.000, which means that at the 95% confidence level the null hypothesis is rejected. In other words, there is a meaningful difference between the initial and final cost of a construction project. We can also ascertain this significant difference from the 95% confidence interval (Eq. 9): one can be 95% confident that the average difference between the initial cost estimate and the final cost of a delayed project is between USD 135,039 and USD 306,374. Considering that the average initial estimated cost of the projects is USD 1,203,055, this difference is considerable and corresponds to a more than 11% increase in the initial estimated costs. Hence, we postulate that reducing construction project delays would be a valuable investment for a company. Detailed results of the mentioned tests are presented in Appendix 4: Detailed Results of the Hypothesis Tests. Regression Analysis From the paired t-tests, it was concluded that a meaningful difference exists between the initial and final project costs and durations. Therefore, if a causal relationship exists between initial and final proposals (and in this case, it does), it is possible for the owners, consultants and contractors to revise their initial proposals in terms of cost and duration. Such relationships can be obtained using regression analysis (Miller and Miller, 2012).
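Before turning to the regression analysis, here is a minimal sketch of the paired t-test and confidence-interval computation just described, using SciPy on hypothetical paired durations; the paper's raw data (n = 86, means 13.78 versus 21.44 months) are not reproduced.

```python
import numpy as np
from scipy import stats

# Hypothetical paired durations (months) for a handful of projects.
initial = np.array([12.0, 15.0, 10.0, 18.0, 14.0])
final   = np.array([20.0, 22.0, 16.0, 27.0, 21.0])

# Paired t-test of H0: mu1 = mu2 against H1: mu1 != mu2 (cf. Eq. 6).
t_stat, p_value = stats.ttest_rel(initial, final)

# 95% confidence interval for the mean difference (cf. Eqs. 7 and 9).
d = final - initial
ci = stats.t.interval(0.95, df=len(d) - 1, loc=d.mean(), scale=stats.sem(d))
print(t_stat, p_value, ci)
```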
The regression analysis is performed on the reported initial and final duration and cost values obtained from the questionnaires. Figure 1 depicts the scatter plot of the initial and final project duration, while Figure 2 illustrates the relation between the initial and final project cost. Both figures reveal a high degree of linear relationship between these variables. In both figures, the horizontal axis corresponds to initial estimates and the vertical axis to actual values. The fitted regression line for duration is given in Equation 10, where x is the initial estimated duration of the first proposal in months and y is the final duration of the project in months. A manager could apply this model in actual practice by inputting the estimated initial months (as the x variable) and then using the regression equation to determine a predicted value for the final project duration. Detailed discussions on the goodness of the regression are provided in Appendix 5: Goodness of Fit for Regression Analysis. Similarly, a regression line can be generated for project costs, in which x is the initial cost in thousands of USD and y is the final cost of the project in thousands of USD. As with the earlier regression equation, a project owner could deploy this model by inserting the initial project cost as the x variable; the regression model would then calculate an expected final project cost. The results of the reported regression analyses are highly relevant for owners, contractors and consultants who wish to reduce project tardiness and propose a more accurate cost structure for a construction project. DISCUSSIONS AND RECOMMENDATIONS In Iran, the approval and execution of construction projects, especially those that are governmentally funded, are governed by complicated regulations. Owners, contractors and consultants have to follow procedures that are enacted to ensure successful completion of the projects. Figure 3 illustrates the major steps that parties should follow in Iranian governmentally funded construction projects (Jalal, 2008). Selecting Contractors Traditionally, contractor selection has been based solely on the prices offered by the bidders. However, when it comes to selecting a contractor in today's project environment, many owners do not consider price as the single selection criterion: instead they pay attention to a combination of several parameters such as price, reputation of the bidders, history of previous projects, major construction quality indicators, prepared drawings, suggested construction methods and so forth. Consequently, contractor selection is no longer a straightforward procedure performed by merely sorting the bids based on the offered price. Moreover, there rarely exists a bidder that dominates the rest of the competitors in all of the relevant criteria (Zavadskas et al., 2010; Huang, 2011). In other words, owners occasionally do not select the best contractor as the final winner of the bid. This factor contributes to more than 8% of the delayed projects in Iran, as given by item 1.8 in Table 5 (under the "Owner Defects" category). We note that government entities in Iran must still adhere to a set of regulations that obliges them to select the contractor that offers the lowest price. In other words, regulations require government authorities to disregard all the important criteria mentioned above and select a contractor only by the offered price. This emphasises the need for decision support systems that facilitate the construction management decision making process, as sketched below.
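As a toy illustration of such a decision support system, and not the selection procedure mandated by Iranian regulations, the sketch below scores bids by a weighted sum over hypothetical criteria and weights; production systems might instead use formal multi-criteria methods such as AHP or TOPSIS.

```python
import numpy as np

# Hypothetical bid evaluation: criteria scores normalized to [0, 1]
# (higher is better; price is inverted so cheaper bids score higher).
criteria = ["price", "reputation", "past projects", "quality", "method"]
weights = np.array([0.35, 0.15, 0.20, 0.20, 0.10])  # must sum to 1

bids = {
    "Contractor A": np.array([0.9, 0.6, 0.7, 0.5, 0.6]),
    "Contractor B": np.array([0.7, 0.9, 0.8, 0.8, 0.7]),
}

# Weighted-sum score per bidder: the lowest-price bid need not win.
for name, scores in bids.items():
    print(name, round(float(scores @ weights), 3))
# Contractor A -> 0.705, Contractor B -> 0.770: B wins on the combined
# criteria despite offering a higher price.
```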
Such software solutions should be in accord with the required laws and regulations and take into consideration the imperative elements in selecting the best contractor in the presence of a variety of qualitative and quantitative factors. We note that academic studies on developing reliable methods of contractor selection and evaluation in the construction industry, based on a mixture of qualitative and quantitative factors, are very limited. Indeed, a literature review reveals that this is an emerging research theme, especially in recent years (Cheng and Kang, 2012; Alzober and Yaakub, 2014). Nonetheless, the important task of developing decision support systems specifically designed to facilitate the decision making process in the Iranian construction sector has not received sufficient attention. Lack of Knowledge about Regulations In order to facilitate the offer and acceptance elements of construction contracts, the Office of the Vice-Presidency for Strategic Planning and Supervision in Iran publishes typical contracts: owners and contractors are obligated by law to employ these typical templates to design and sign their own contracts. Several other legal authorities are in place to supervise the contracting environment and the implementation and execution methods given by the templates. To improve the effectiveness of the articles of the typical contracts and to increase the efficiency of the construction sector of the country as a whole, legal authorities are allowed to issue corrections to some articles of the typical contracts or to interpret the legal terminology of the related documents. Mainly because of inconsistencies in the language and terminology of the corrections issued by different supervisory units, owners, consultants and contractors feel that the corrections and interpretations cause unnecessary delays and unfortunate confusion. In addition, experienced legal consultants are not always available when owners and contractors have incompatible interpretations of newly issued corrections; even when legal advisors are available, their services can be very expensive and therefore beyond the financial means of many construction management companies. Consequently, the misinterpretation of the corrections to the typical contracts and the inconsistent terminology of such corrections can lead to costly legal disputes between contractors and owners. This ultimately elevates project costs and precipitates unforeseen delays. Table 5 addresses this issue as items 1.12, 2.7 and 4.1: these items contribute to 8.9% of the delays under owner defects, 12% of the delays under contractor defects and 18.3% of the delays under law, regulations and other general defects, respectively. To reduce this delay factor's impact, we recommend establishing a single outlet to publish the typical contracts as well as the associated corrections and interpretations. Deploying a unified channel may reduce inconsistent terminology, which will mitigate the confusion and misinterpretations of the owners, contractors and consultants. In addition, costly legal disputes can be avoided provided that the single outlet office offers economical legal guidance to the companies.
Lack of Attention to Inflation Lack of attention to inflation is another important delay factor; in Table 5, this factor is indicated as item 1.11 for owners (lack of attention to inflation, from the owner defects category), item 2.6 for contractors (inaccurate pricing and bidding, in the contractor defects category) and item 4.4 for law, regulation and other general defects (lack of attention of government authorities to inflation). In particular, it contributes to 17.3% of the delays under the fourth category in Table 5. Figure 4 illustrates Iran's chronically high inflation rate over the past decade according to the Statistical Center of Iran. Government authorities have therefore enacted certain rules to compensate owners and contractors when high inflation causes a spike in construction costs and reduces the forecast profits. However, these rules do not fully compensate the contractor for elevated costs and cause dissatisfaction (item 4.4). On the other hand, bidders do not pay attention to the inflation rate and construction costs throughout the life cycle of the project when they estimate the project costs (item 2.6). Lack of attention to the true inflation rate results in inaccurate bidding, as well as frustration and delay during the project's lifespan. In addition, owners do not pay full attention to the reported inflation rates in the bids, since a lower inflation rate in the bid translates into a less expensive project. Therefore, owners disregard the true inflation rates during the bidding procedure, which results in disputes and costly legal actions between owners and contractors during the project life cycle (item 1.11). Occasionally, the inflation rate fluctuates significantly if the bidding procedure takes a few months to complete. This leads to inaccurate bidding and pricing, which may contribute to disputes between the different parties involved in the project. Another reason for such disputes is that there are at least two official organisations that calculate and announce the inflation rate: the Statistical Center of Iran and the Central Bank of Iran. The rates announced by these two offices often differ, causing confusion among all construction management parties about the legitimate rate. In addition, contractors generally believe that the real inflation rate is higher than the officially announced rate. As a result, most liquidity problems and weak cash flow are blamed on the inadequacy of common methods for compensating the rising costs associated with high inflation. A very high and unstable inflation rate thus causes major problems for the construction sector and is the root cause of many delays. While risk management techniques to deal with this issue exist in the literature (Loo and Abdul-Rahman, 2012; Augustine et al., 2013; Barber and El-Adaway, 2014), the effect of very high and volatile inflation rates on the construction sector of Iran has never been studied. The first step to alleviate this key delay factor is to oblige the owners and contractors to obtain and reflect genuine forecasts of the inflation rate. Accurate inflation rate figures are generated and published by governmental offices such as the Statistical Center of Iran. Official forecasts are more precise and are available for different industries and geographical regions. Using rigorous figures for the inflation rate will result in accurate forecasts of the project costs, which will diminish the extent of financial disputes between owners and contractors, as the compounding example below illustrates.
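A minimal sketch of why ignoring inflation distorts bids follows: it compounds a base cost forward under a constant annual rate. The base cost echoes the paper's average initial estimate, but the rates and duration here are hypothetical.

```python
def escalate(base_cost, annual_inflation, years):
    """Compound a base cost forward under a constant annual inflation rate."""
    return base_cost * (1.0 + annual_inflation) ** years

# Hypothetical 2-year project priced at 1,203 thousand USD (the paper's
# average initial estimate) under 15% versus 25% annual inflation.
for rate in (0.15, 0.25):
    print(rate, round(escalate(1203.0, rate, 2), 1))
# 15% -> 1591.0; 25% -> 1879.7: a ten-point error in the assumed rate
# shifts the expected final cost by roughly a quarter of the initial bid.
```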
Adherence to Outdated Construction Methods The construction industry is very competitive in Iran. Cost reduction and waste elimination form integral parts of every successful company in such a competitive market. Nevertheless, owners and consultants believe that contractors have remained loyal to traditional construction practices and have not paid sufficient attention to innovation, research and development as the primary method for reducing costs and delays throughout the life cycle of the projects. Contractors should therefore be persuaded that activities contributing to research and innovation are not an extra burden on the project finances, and that innovation has a pivotal role in wealth creation and cost reduction. This is addressed as item 2.5 in Table 5 and contributes to more than 14% of the delays under contractor defects. Corporations are recommended to promote innovation as well as their knowledge management systems. Subsequently, we recommend that all the different entities involved in a construction project (including owners, contractors and consultants) design clear and consistent value management processes and adopt and follow the principles of lean construction management. Proper value management begins by defining the project plan as well as the key performance indicators (KPIs) of the project; afterwards, objective techniques are put in place to measure project performance and progress as the tasks are completed. Although many companies devise their own KPIs and measurement techniques, it is possible to follow standard guidelines for defining KPIs in the construction sector (Lin et al., 2011; Jaapar et al., 2012; Ponz-Tienda, Pellicer and Yepes, 2012). Moreover, decision support systems are an imperative part of value management systems in the construction context (Luo et al., 2011). While value management systems measure the progress of the project, lean construction management techniques focus on waste elimination, cost reduction and delay prevention. Lean techniques expand the efficiency of firms and promote the defined KPIs of the project. Therefore, the practice of these techniques is recommended during the lifespan of construction projects. Outdated Standard Mandatory Items in Cost Lists In Iran, government authorities publish a standard list of construction items and materials on an annual basis. According to regulations, this list must be used by owners and contractors as a basis for estimating project costs. However, the published lists do not always include the new construction materials and innovative items that are introduced to the market. This results in inaccurate cost estimates and disagreements between owners and contractors when selecting construction materials. This issue is indicated under item 4.2 in Table 5 (outdated standard mandatory items in cost lists) and is responsible for more than 18% of the delays under laws, regulations and other general defects. Additionally, item 4.2 further contributes to item 3.8 (having too many unforeseen items in cost lists) under consultant defects, and to item 2.6 (inaccurate pricing and bidding) under contractor defects. Government authorities are concerned that, if parties were not required to estimate project costs based on the list of standard items, owners would experience a decline in the quality of the materials used.
On the other hand, contractors, owners and consultants express that this move would supply them with the flexibility to innovate and reduce costs and delays. The literature suggests that although having a standard price book is beneficial for cost estimation, governments should not interfere with the process of cost estimation by publishing a standard list of items and materials: instead, governments should enforce quality requirements by developing consistent standards, deploying effective procedures for frequent inspections and audits, promoting insurance policies, and penalising deviations from the set standards (Ashworth, 2013; Alrashed, Philips and Kantamaneni, 2014; Kang et al., 2014). Projects Owned by the Government In Iran, construction projects are defined by the government for a variety of reasons. Once the government defines all the construction projects it intends to launch during a certain fiscal year, a budget approval request is sent to the parliament. The time spans and budgets for these construction projects are determined primarily by political considerations, and insufficient attention is devoted to the accompanying feasibility studies. Once a project is enacted by parliament and a budget is assigned to it, the government calls for tenders; at this point, consultants and contractors scrutinise the timelines and the assigned budgets. If they conclude that the assigned budget and enacted timelines are not realistic, the government sends revision requests to the parliament. This inefficient procedure is responsible for more than 18% of the delays under law, regulation and other general defects and is presented as item 4.3, financial difficulties stemming from governmental budgeting. In order to avoid such delays, special attention should be paid to proactive planning and risk management. For instance, the government could develop various risk profiles and categorise different construction projects accordingly. Once the profiles are proposed, the government should develop and maintain contingency plans for different projects based on the risk profiles. In addition, contractors and consultants could review the risk profiles and contingency plans to obtain a better evaluation of the financial viability of the project, the project timelines and the involved risks. Undoubtedly, political instability has a direct impact on the risk profile of construction projects at various levels. Political instability, because of its strong interaction with other risk factors, often results in economic and financial instability and increases the risk of cost overrun and delays. This should be taken into full consideration at all stages of defining a governmentally funded project, including when the government defines the project, at the time of budget approval by the parliament, and so forth. Reducing political instability will result in a reduction in all types of risk. Therefore, the government and parliament are advised to reduce political instability and to create a common language by acquiring project and risk management services. FUTURE RESEARCH DIRECTIONS It can be noted that a significant amount of delay stems from regulations, outdated standard contract terms and lack of planning by government authorities. For instance, ineffective regulations result in improper supervisory and executive procedures that further contribute to delays and disputes.
Consequently, it is recommended that governmental regulatory bodies determine prompt and effective resolutions to these problems, which defines a promising future research direction. In other words, government entities should investigate, analyse and resolve the delay factors resulting from laws and regulations. The success of such efforts not only depends on close partnerships between the government regulatory bodies and the private sector, but also requires a deep understanding of the economy, business environment and construction industry of Iran. A strengths, weaknesses, opportunities and threats (SWOT) analysis of the Iranian construction sector should be considered as a first step. Ghahramanzadeh (2013) concentrates on a typical construction project as the main building block of the SWOT analysis to define the internal and external risk factors; these risk factors include political and governmental factors (external), managerial and technical factors (internal), economic and financial factors (external), cultural and social factors (internal) and natural factors (external). Moreover, developing an expert system with learning abilities that can update and correct the results of this study and other similar studies would be crucial to increasing the body of knowledge in this area. Such an expert system would be quite valuable for regulatory bodies and government authorities, should they wish to reduce delays and the accompanying costs. Another future research direction is to compare the reasons for delay in construction projects among Middle Eastern and other developing countries to identify best practices. A comparative study between the reasons for delay in developing countries and the corresponding reasons in developed countries (such as in Europe and North America) would also contribute to a more thorough understanding of construction management process improvement. Moreover, researchers may focus on the most common methods of coping with delays in developed countries to investigate whether the solutions to common causes of delay and cost overrun in the developed countries can be applied to the construction industry in developing countries, including Iran. CONCLUSIONS This paper studied the reasons for delay in construction projects. As a case study, we selected Iran, a developing country with several ongoing construction projects. This paper used a rigorous methodology to determine the role and importance of common delay factors in Iranian construction projects. An open questionnaire was used along with an extensive literature review to identify the reasons for delays in construction projects, and several interviews with owners, active contractors, consultants and other experts were conducted accordingly. Afterwards, a closed questionnaire was developed and mailed to 200 respondents. A multinomial probability model was developed to estimate the contribution of each delay factor to a delayed construction project. The delay factors and their interactions with each other were further discussed. The most important delay factors under owner defects were lack of attention to inflation (11.9%) and inefficient budgeting schedule (11.5%); lack of knowledge about different defined execution models (5.7%) and lack of attention to the results of feasibility studies and improper location planning (6.7%) were among the least important delay factors in this category.
In the contractor defects category, inaccurate budgeting and resource planning is the most important delay factor (21.7%); weak cash flow (17.3%) and inaccurate pricing and bidding (15.5%) are the other important delay factors. At the other end of the spectrum in this category are factors such as ineffective project planning (4.9%) and using low-quality material and inadequate equipment (6.4%). The most important delay factors in the consultant defects category are inaccurate first drafts (13.8%) and mistakes in technical documents (12.3%). In this category, factors such as ineffective project planning (7.2%) and assigning inexperienced personnel to supervisory duties (8.3%) are deemed least important. Finally, in the law, regulation and other general defects category, the most important delay factors are outdated standard mandatory items in cost lists (18.8%), financial difficulties stemming from governmental budgeting (18.5%) and outdated standard mandatory terms in contracts (18.3%). In this category, extreme weather conditions are the least important factor (12.9%). Furthermore, a number of hypothesis tests were conducted to statistically test whether the differences between initial and final estimates were significant. Statistical analyses show that the differences were indeed significant: there exists a meaningful difference between the initial and final costs and durations. As a result, regression analysis was performed to provide more insight for owners, contractors and consultants into the differences between the initial and final estimates of a typical construction project in terms of both duration and cost. The regression analysis provides a baseline for project managers and cost estimators, should they aim to reduce inaccuracies in project duration and cost. Furthermore, managers could use these regression models to predict the final project cost or duration based on the initial estimates of these variables. Statistical analyses confirmed the reliability of the models. According to the models, the average delay per year is 5.9 months (one can expect 11.8 months of delay if the original project duration is 24 months), and the overall cost overrun is 15.4%. It should be noted that the results of this study can be employed by project managers to recalibrate their risk management techniques and to avoid delays as much as possible. Moreover, this paper provided several practical recommendations for government entities to assist with finding the root causes of the delays and with enacting the most important laws and regulations to alleviate construction project inefficiencies. A detailed list of future research directions was also provided. Table 7 presents the results of the intra-class correlation coefficient for the designed closed questionnaire, produced alongside Cronbach's alpha in the internal-consistency analysis. According to this table, the value of Cronbach's alpha is 0.791, which indicates high internal consistency. Moreover, the intra-class correlation for a single measure is 0.059, a very low value and another indication of the consistency of the designed questionnaire. The reported p-value is 0.000 for both measures, indicating that the calculated measures are significant. For the illustrative example of Appendix 3: out of n = 86 observations, 45 respondents identified "lack of attention to the results of feasibility studies and improper location planning" as a factor that contributed to a delayed construction project in Iran.
According to Equation 4, an unbiased point estimator for p1 of the multinomial distribution is therefore p̂1 = 45/86 ≈ 0.52. Table 9 provides the results of the goodness-of-fit test for the project duration regression at a 95% confidence level. The reported p-values are 0.000 for both the regression coefficient and the regression constant. Thus, it can be concluded that the regression line is significant. The last two columns of this table present the 95% confidence intervals for the coefficient and constant values. Table 10 presents the results of the goodness-of-fit test for the project cost regression at the 95% confidence level. Once again, the resulting p-values indicate a significant regression line at the selected confidence level. Appendix 6: Validation of the Regression Analyses To verify the validity of the developed regression models, three assumptions should be tested (Doane and Seward, 2015): (1) the errors should be normally distributed, (2) the errors should have constant variance (homoscedasticity) and (3) the errors should be independent. Figure 5 illustrates that, for the duration regression model, the residuals are very close to the normal line; this supports the first assumption. Figure 6 shows the scatterplot of the residuals for the duration regression model. It can be verified that the residuals are randomly scattered; moreover, the scatterplot of the residuals shows no visible trend, which suggests that the residuals are independent (Miller and Miller, 2012). For the cost regression model, Figure 7 reveals that the residuals do not have a normal distribution, while Figure 8 demonstrates that the residuals are homoscedastic and uncorrelated (Miller and Miller, 2012). Non-normality of the errors is considered a mild violation, since the regression parameters remain unbiased and consistent (Miller and Miller, 2012); the main consequence is that the confidence intervals may not be trustworthy. However, since the sample size is large enough (n > 80), the regression equation is reliable (Doane and Seward, 2015). The reader should note that in the duration regression equation R² = 52.6%. Thus, the regression equation is able to explain 52.6% of the variation in the final duration of the projects based on the initial duration of the projects. In other words, there are other effective factors involved in determining the final duration of the projects that are not considered in the regression analysis. In fact, this study counts 36 effective delay factors. Including each of these delay factors in the regression equation should improve the coefficient of determination; however, this would over-complicate the regression equation to the point where it is no longer a practical model. Hence, project managers must interpret the results of the duration regression analysis with more caution.
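A minimal sketch of the Equation 10 fit and the Appendix 6 diagnostics on synthetic data follows; the questionnaire data are not reproduced, and the noise level is chosen so that R² lands near the paper's 52.6% by construction. SciPy's linregress and a Shapiro-Wilk test stand in for the tables and figures discussed above.

```python
import numpy as np
from scipy import stats

# Synthetic initial vs. final project durations (months).
rng = np.random.default_rng(1)
x = rng.uniform(6, 30, size=86)               # initial estimates
y = 1.5 * x + rng.normal(0, 10, size=86)      # final durations with noise

# Least-squares fit corresponding to Eq. 10 (x = initial, y = final).
fit = stats.linregress(x, y)
print(fit.slope, fit.intercept, fit.rvalue**2)  # R^2 of the fit

# Residual diagnostics for the three assumptions of Appendix 6.
resid = y - (fit.intercept + fit.slope * x)
w_stat, p_norm = stats.shapiro(resid)           # (1) normality of errors
print("normality p-value:", p_norm)
# (2) homoscedasticity and (3) independence are judged from a residual-vs-
# fitted scatterplot, mirroring Figures 5-8.
```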
2019-04-15T13:03:57.122Z
2016-07-31T00:00:00.000
{ "year": 2016, "sha1": "48e1e6eedadbb9a411a5d4bbc5ef00a63e8976b1", "oa_license": "CCBY", "oa_url": "https://doi.org/10.21315/jcdc2016.21.1.4", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "48e1e6eedadbb9a411a5d4bbc5ef00a63e8976b1", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Engineering" ] }
238692740
pes2o/s2orc
v3-fos-license
Use of technology for the objective evaluation of scratching behavior: A systematic review Introduction Pruritus is a common symptom across various dermatologic conditions, with a negative impact on quality of life. Devices to quantify itch objectively primarily use scratch as a proxy. This review compares and evaluates the performance of technologies aimed at objectively measuring scratch behavior. Methods Articles identified from literature searches performed in October 2020 were reviewed, and those that did not report a primary statistical performance measure (eg, sensitivity, specificity) were excluded. The articles were independently reviewed by 2 authors. Results The literature search resulted in 6231 articles, of which 24 met eligibility criteria. Studies were categorized by technology, with actigraphy being the most studied (n = 21). Wrist actigraphy's performance is poorer in pruritic patients and inherently limited in finger-dominant scratch detection. It has moderate correlations with objective measures (Eczema Area and Severity Index/Investigator's Global Assessment: rs(ρ) = 0.70-0.76), but correlations with subjective measures are poor (r2 = 0.06, rs(ρ) = 0.18-0.40 for itch measured using a visual analog scale). This may be due to varied subjective perception of itch or actigraphy's underestimation of scratch. Conclusion Actigraphy's large variability in performance and the limited understanding of its specificity for scratch merit larger studies looking at validation of data analysis algorithms and device performance, particularly within target patient populations. INTRODUCTION Pruritus is a common symptom of systemic and dermatologic disorders, and scratching is the innate reflex. 1 The itch-scratch cycle is a hallmark symptom of atopic dermatitis (AD) and perpetuates skin barrier dysfunction. Notably more severe during sleep, itch in AD has been shown to impact sleep quality. [2][3][4][5] Historically, itch has been assessed subjectively through visual analog scales (VAS) and numeric rating scales. 6 However, these measures often do not correlate to visually observed scratch, especially in children. [7][8][9] More recently, studies have explored device-driven methods to objectively measure scratch as a proxy for itch. Actigraphy is the most commonly tested method and entails the use of accelerometers to monitor wrist movements, a proxy for scratching. Other technologies include acoustic devices, 10,11 strain gauges, 12,13 pressure sensors, 12,14 and vibratory sensors. 12,13,[15][16][17][18] The commonly accepted gold standard is video recording of scratching with manual coding by an observer, which is time-consuming and impractical in clinical settings. [19][20][21] The purpose of this systematic review is to assess the performance and algorithms of technological methods currently available to evaluate scratching behavior objectively. Search strategy We performed literature searches in October 2020. Study selection Eligibility assessment was performed independently by 2 authors. Included articles must feature critical assessment of a technology designed to measure itch objectively and report at least 1 of the primary outcomes described below. Exclusion criteria included studies of nonhuman subjects, articles without original data, and studies describing technology without assessing its performance. Quality assessment Study quality was assessed using a rating scheme (1-5), modified from the Oxford Centre for Evidence-Based Medicine 22 for rating levels of evidence.
The individual studies assessed are described in Tables I and II; quality assessment was performed by at least 2 authors. Data extraction and outcomes Performance values were extracted using a standardized survey. Primary outcomes included sensitivity, specificity, and positive and negative predictive values of scratch detection methods. Secondary outcomes included correlations of detection methods with other technologies and subjective assessments. Performance metrics Sensitivity is defined as the ability to detect true positives (eg, true scratching) and specificity is the ability to detect true negatives (eg, nonscratching movements). Positive predictive value (PPV, precision) is the proportion of positives that are true positives (eg, movements labeled as scratch that are true scratches). The F1 score encompasses both sensitivity and precision. Root mean square error (RMSE) is the standard deviation of the residuals and is effectively an estimate of how well an algorithm predicts the observed data (ie, accuracy). Algorithms To efficiently extract and analyze device data, algorithms capable of distinguishing scratch from nonscratch movements are essential. Linear regression modeling is generated from the number of activity counts above a frequency threshold and total scratch time; however, this model is limited by confounding movements (eg, walking, restlessness). 23 Logistic regression modeling is a simple approach to binary classification (eg, scratch vs nonscratch) and analogous to linear regression. Bidirectional recurrent neural networks are a form of machine learning whereby the network can detect patterns directly (eg, scratch waveforms) from raw input data, thereby eliminating the precursory extraction of patterns required for other models. 24 The k-means clustering analysis is another approach that involves clustering a set number of subgroups within a data set. The algorithm then allocates device signals into their respective subgroups based on frequency, waveform, or other qualities. 23 RESULTS Of the 6231 articles identified, 72 were assessed based on exclusion criteria and 24 fully met eligibility criteria. Most articles looked at AD, although other conditions were also examined (eg, urticaria). Articles reporting performance and correlation measures are summarized in Tables I and II. The overall performance of current objective tools for quantifying itch suffers from low accuracy and variable performance. Further development will allow for more-objective evaluation of disease management and treatment. Sensitivity and specificity ranges of technologies compared to video recording are summarized in Table III. An overview of benefits and limitations is given in Table IV. Each algorithm has its limitations. The k-means clustering analysis algorithm of Feuerstein et al 23 yielded high performance values, but required all anticipated movements to be determined a priori. While the logistic regression approach of Petersen et al 29 for detecting total nocturnal scratch time yielded performance comparable to the algorithm of Feuerstein et al, 23 the model had significantly decreased performance when tested with a separate data set. 24 The bidirectional recurrent neural network algorithm proposed by Moreau et al 24 yielded higher sensitivity, PPV, and F1 scores than the logistic regression model; however, it has not been tested in further datasets.
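A minimal sketch of the performance metrics defined above, together with the rank correlation used throughout the studies that follow; the epoch counts and paired scores are hypothetical. The all-negative example also previews the class-imbalance caveat raised in the Discussion.

```python
import numpy as np
from scipy import stats

def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV (precision), F1 and accuracy from counts."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp) if (tp + fp) else 0.0
    f1 = 2 * ppv * sens / (ppv + sens) if (ppv + sens) else 0.0
    acc = (tp + tn) / (tp + fp + tn + fn)
    return sens, spec, ppv, f1, acc

# Hypothetical night of 3600 one-second epochs, only 30 of them scratching.
# A trivial detector that never predicts scratch still looks "accurate":
print(binary_metrics(tp=0, fp=0, tn=3570, fn=30))
# -> sensitivity 0.0, specificity 1.0, accuracy 0.992: with rare scratch
#    events, sensitivity, precision and F1 are the informative metrics.

# Spearman rank correlation (rs(rho)) between two hypothetical outputs,
# eg, device-detected scratch counts versus video-coded counts:
device = np.array([12, 40, 7, 55, 21, 3, 48, 16])
video = np.array([10, 38, 9, 60, 18, 5, 45, 20])
rho, p = stats.spearmanr(device, video)
print(rho, p)
```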
Correlation between actigraphy data and video recording was evaluated by Moreau et al, 24 who reported Spearman rank correlation coefficients (rs(ρ)) of 0.95-0.96. 28 Other studies report correlations between actigraphy and video recording for total scratching time percentage (TST%) calculation (rs(ρ) = 0.91), 25 and correlation values between actigraphy and video recording of sleep efficiency were all reported to be greater than 0.92 by Benjamin et al. 21 Correlations with other objective and subjective measures. Several articles explored correlations between actigraphy and subjective sleep measures, disease severity, AD-associated serum markers, and subjective itch measures. Ten articles compared actigraphy to subjective sleep measures, with 4 reporting correlations. VAS sleep, a patient-reported measure of sleep quality, was examined in 1 article, reporting correlation coefficients of −0.44 in adults and 0.48 in children when compared to average hourly activity scores. 26 The total Scoring AD (SCORAD) index, which includes both subjective (eg, itch and sleep) and objective (eg, disease severity) measures, had moderate correlations with various activity measures, ranging from 0.53-0.64 in adults (P < .05) and 0.42-0.62 in children (P < .05). 26,27,30 While the total and objective SCORAD indexes both resulted in rs(ρ) = 0.52 (P < .001) in children (n = 24) compared to wrist activity, correlations with the pruritus and sleep subscores were not significant. 30 Two articles evaluated other disease severity indices in children and adults, with moderate correlations for objective measures (Eczema Area and Severity Index and Investigator's Global Assessment) compared to actigraphic wake after sleep onset, ranging from 0.70-0.76 (P < .02, n = 10). 32 The Six Area, Six Sign AD score was found to have a weak correlation with average nocturnal movement (rs(ρ) = 0.15, P = .02, n = 235). 33 Four articles investigated serum markers associated with AD. Statistically significant correlations with actigraphy measurements ranged from 0.51-0.93. 17,27,30,31 These studies were not compared to video recording, however, and thus conclusions specifically related to scratch are difficult to make. While there seems to be a moderate correlation between actigraphy and objective measures, this is not the case with subjective measures. Fourteen articles compared actigraphy to subjective itch, with 2 articles reporting correlation coefficients. Comparison between VAS itch and mean actigraphy scores yielded coefficients of determination (r2) of 0.06 in children and adults with various pruritic conditions (n = 118) and 0.08 in adult AD subjects (n = 20). 8 VAS itch and hourly activity scores yielded rs(ρ) = 0.40 (P = .049) in children and 0.18 (P = .9) in adults. 26 Actigraphy-based scratch measurements correlate poorly to VAS itch scores, sleep quality, and other subjective patient-reported outcomes. 8,21,26,27,30 The reasons for this are likely multifactorial. In pediatric populations, proxy measures may be under- or overestimated by caregivers. More likely, there are inherent differences between a subject's perception of itch and the objective actions of scratching.
An individual may report a high level of subjective itch but exhibit an equally high level of scratching restraint. In contrast, some individuals with chronic itch are habituated to it and report low scores despite frequent scratching. Ultimately, scratch measurements with objective tools and subject-reported outcomes are interrelated outputs that provide complementary information. Smartwatch applications Applications leveraging smartwatches and their accelerometers show comparable performance in detecting scratch when compared to actigraphs. Three articles examined smartwatch applications compared to video recording. In preliminary testing of their "Itchtector" app, Lee et al 34 reported sensitivity (0.63-1.00), specificity (0.98-1.00), PPV (0.83-0.98), negative predictive value (0.93-1.00), and accuracy (93.3%-99.0%) in healthy adults (n = 3). When cross-validated in pruritic subjects (n = 13), the app yielded lower sensitivity (0.75), PPV (0.74), and accuracy (90%), which may be due to the small initial sample size, different subject populations, and different smartwatches. 35 Ikoma et al 36 also tested the "ItchTracker" app in adult AD subjects (n = 5) and reported a sensitivity of 0.85 and PPV of 0.90. They reported a correlation between the app and video recording for hourly scratch duration of rs(ρ) = 0.851-0.901 (P < .001). The authors further compared scratching duration percentage to current and 7-day itch in healthy and AD adults, reporting rs(ρ) = 0.36-0.43 (P < .001). Similar findings were reported for self-reported sleep disturbance (rs(ρ) = 0.45) and daytime disturbance (rs(ρ) = 0.42). Disease severity measured by the Eczema Area and Severity Index was significantly correlated to scratching duration percentage (rs(ρ) = 0.60). 36 However, they excluded finger-only scratching movements. Additionally, the small sample size should be taken into consideration. Although smartwatch applications show good sensitivity, there are no reported specificity ranges for pruritic subjects, making it difficult to assess their ability to distinguish between scratch and nonscratch movements. Acoustic Acoustic devices detect sound waves generated from scratching. Two articles studied healthy subjects and compared the performance of their respective devices to that of video recording. No sensitivity or specificity values were reported. The finger-mounted microphone presented by Kurihara et al 12 yielded an RMSE of 1.09% for TST% calculation when compared to video recording.
Noro et al 10 reported r2 = 0.98 when comparing the scratching rate captured by their acoustic sensor with the scratching rate obtained from video observation. While the devices show strong accuracy in detecting fine finger movements, the technology is not widely available and follow-up studies have not been conducted since first reported in 2014. Vibratory Vibratory devices allow for nonintrusive monitoring of body movements and mitigate lesion exacerbation by devices that require skin contact. Four articles studied bed vibratory sensors compared to video recording. 12,13,18,37 Accuracy was measured by RMSE, ranging from 0.56-1.29 s for scratching time 18 and from 0.87%-6.31% for TST% calculation. 12 Shino et al 37 reported comparable RMSE values for their TST% algorithm (0.68-0.79 s) when compared to visually scored device outputs (0.40-0.94 s) (n = 1). For both studies, the vibratory RMSE values were among the lowest when compared to other technologies. While vibratory devices have accuracy comparable to actigraphy and are largely burden-free once installed, their cost and setup may be deterrents. Pressure sensors Pressure sensors placed on the dorsal hand detect pressure changes with hand movements. Only 1 of the 2 articles compared performance to video recording. Kurihara et al 12 compared a ceramic sheet to other devices in healthy subjects, and reported an RMSE of 0.72% for TST% calculation when compared to video recording, the lowest among the devices tested. Although not compared to video recording, the Scratch Monitor pressure sensor presented by Endo et al 14 was tested in healthy adults and yielded sensitivity ranging from 0.65-0.83. Strain gauge Strain gauges placed on the index finger to measure finger bending were evaluated in 2 studies, both of which were compared to video recording and tested in healthy subjects. The devices yielded an RMSE of 2.41% for TST% calculation, half that of wrist actigraphy. 12 The devices also yielded a TST% error of 1.38% when the data were automatically extracted via an algorithm proposed by Shino et al, 37 compared to a TST% error of −1.54% when the data were visually scored. No sensitivity or specificity values were reported. It should be noted that strain gauges may be more susceptible to false positives (eg, nonscratching finger movements). DISCUSSION While the development of existing and novel devices has progressed tremendously, their performance reveals large areas in need of improvement. Actigraphy-based algorithms appear to have good sensitivity and specificity in healthy subjects; however, their performance deteriorates considerably when applied to pruritic subjects. This may be due to a lack of algorithm generalizability and failure to capture finger scratching. Additionally, most data used for establishing scratch parameters were obtained from small healthy samples. While there have been cross-validation studies with data from small AD samples, testing in larger samples of pruritic patients has not been performed. The same principle applies to newer scratch technologies, whereby further testing in both populations is needed for robust algorithms. While certain devices have demonstrated greater sensitivity for detecting finger scratching, the studies do not explicitly mention their abilities to detect rubbing or the use of other scratching tools (eg, back scratchers). Rubbing, like scratching, is a natural reaction to itch; if devices are unable to distinguish rubbing or the use of scratching tools from other motions, they may be underestimating itch.
Further development of these technologies may help provide a more comprehensive picture of itch. Performance metrics and algorithms With advances in machine learning, data-driven approaches for objective scratch monitoring have gained significant interest, and various metrics have been employed to evaluate performance. While specificity and accuracy are useful, they need to be used with caution as they can be prone to class imbalance. Under typical situations, scratching arises sporadically, with each episode lasting a brief period, ranging from a few seconds to several minutes depending on symptom severity. Thus, the majority of data collected features nonscratching behaviors; only a small amount of data features scratching, resulting in a significant class imbalance. For example, a poor classification algorithm that predicts nonscratch all the time will, most likely, produce excellent accuracy and specificity. Given this problem, other metrics, such as sensitivity, precision, and F1 score, are deemed more appropriate for quantifying performance. Future considerations While patient history and examination remain important tools in assessing itch, there remains an ongoing need for adjunctive, objective, and precise tools to quantify itch, such as in the case of subconscious habitual scratching. Many technologies and algorithmic strategies have been studied, though their performances are highly variable, with validation studies rarely extending beyond small samples. In addition, most studies focus on nocturnal scratching. Given that the perception of itch varies during the day, daytime scratching remains an important behavior that is largely unstudied. In this review, very few studies reported specificity values. While this is understandable for nocturnal scratching, during which the targeted scratching behavior is rare overall, daytime wear introduces other confounders, such as texting or walking. Thus, specificity may hold greater relevance in daytime wear, during which wristwatch-based systems may struggle to differentiate scratching from other movements. Our group has introduced a novel mechano-acoustic skin device that incorporates actigraphy and acoustic detection of scratching by conforming to the dorsal hand and sampling at higher frequencies (~1600 Hz) than actigraphy (20-100 Hz). Scratch algorithm development performed in healthy subjects yielded high sensitivity and specificity, with comparable performance among AD datasets using an IR camera gold standard, even with confounders. 38,39 A comparison of data outputs for scratch from actigraphy, a smartwatch application, and the mechano-acoustic device is shown in Supplemental Fig 1. CONCLUSION While actigraphy remains the most frequently studied modality in clinical studies, performance is variable with no assessment of daytime performance. Further testing of these technologies will be needed before use in the clinical setting. A reliable technological modality would allow for objective support of drug development outcomes, 40-42 guide disease management, and assess treatment response. Conflicts of interest Drs Yang, Nguyen, Li, Lee, Chun, Wu, Fishbein, and Paller have no conflicts of interest to declare. Dr Xu has equity in a private company with a commercial interest in scratch sensors and inventorship interest in patents related to a scratch sensor.
2021-10-14T00:06:46.945Z
2021-08-17T00:00:00.000
{ "year": 2021, "sha1": "b9320587cb81173df33cd3fca4c504267a81c7b8", "oa_license": "CCBYNCND", "oa_url": "http://www.jaadinternational.org/article/S2666328721000481/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c42a1947a4efaccb647343b512a3fb3b8985b1c0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
17035139
pes2o/s2orc
v3-fos-license
Coincidences of Dark Energy with Dark Matter -- Clues for a Simple Alternative? A rare coincidence of scales in standard particle physics is needed to explain why $\Lambda$ or the negative pressure of cosmological dark energy (DE) coincides with the positive pressure $P_0$ of random motion of dark matter (DM) in bright galaxies. Recently Zlosnik et al. (2007) propose to modify the Einsteinian curvature by adding a non-linear pressure from a medium flowing with a four-velocity vector field $U^\mu$. We propose to check whether a smooth extension of GR with a simple kinetic Lagrangian of $U^\mu$ can be constructed, and whether the pressure can bend space-time sufficiently to replace the roles of a $w=-1$ DE, $w=0$ Cold DM and heavy neutrinos in explaining anomalous accelerations at all scales. As a specific proof of concept we find a Vector-for-$\Lambda$ model (${\mathbf V\Lambda}$-model) and its variants. With essentially {\it no free parameters}, these appear broadly consistent with the solar system, gravitational potentials in dwarf spiral galaxies and the bullet cluster of galaxies, the early universe with inflation, structure formation and BBN, and late acceleration with a 1:3 ratio of DM:DE. Subject headings: Dark Matter; Cosmology; Gravitation The incompleteness of standard physics and Einstein's General Relativity (GR) is evident from the smallness of the cosmological constant Λ, or the vacuum energy density (Turyshev, Nieto, Anderson 2006); both represent acceleration discrepancies of order ∼ 7a_0, driven by unidentified (likely unrelated) pressures ∼ 72P_0, where a_0 ≡ 1.2 Å s^-2 and P_0 ≡ a_0^2/(8πG) are scales of acceleration and pressure. On intermediate scales, galaxy clusters and spiral galaxies often reveal a discrepant acceleration of order (0.1 − 2)a_0. GR, if sourced primarily by baryons and photons with negligible mass density of neutrinos and other particles in the Standard Model or its variations, appears an adequate and beautiful theory in the inner solar system, but becomes increasingly inadequate in accounting for astronomical observations as we move up in scale from 100 AU to 1 kpc to 1 Gpc. A universe made of known material with positive pressure should show a decelerating expansion, as for an open universe, but the expansion is instead turning into acceleration now, as evidenced by the much dimmer supernovae detected at redshift unity. A standard remedy to restore harmony with GR and fit large-scale observations successfully (Spergel et al. 2006 and references therein) is to introduce a "dark sector", in which two exotic components dominate the matter-energy budget of the Universe at redshift z with a split of approximately Ω_DE : Ω_DM = 3 : (1 + z)^3: a Dark Energy (DE), a negative-pressure and nearly homogeneous field described by unknown physics, and a Cold Dark Matter (CDM), a collisionless and pressureless fluid motivated perhaps by MSSM physics. However, even anticipating several new particles from the LHC, the success of this Concordance Model still gives little clue to the physics governing the present 1 : 3 ratio of its constituents. This ratio is widely considered improbable, because standard particle physics expects a ratio 1 : 10^120. Here we speculate whether the 3 : (1 + z)^3 ratio could come from a coincidence of scales of a_0 ≡ 1.2 Å s^-2 with the cosmological baryon energy density ρ_b c^2 ∼ 3.5 × (1 + z)^3 P_0.
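As a numerical sanity check of the quoted coincidence, the sketch below evaluates P_0 = a_0^2/(8πG) and compares it with the present-day baryon energy density; the values Ω_b ≈ 0.045 and H_0 ≈ 70 km/s/Mpc are assumptions not specified in the text.

```python
import numpy as np

G  = 6.674e-11   # m^3 kg^-1 s^-2
c  = 2.998e8     # m/s
a0 = 1.2e-10     # m/s^2, ie, 1.2 Angstrom s^-2

P0 = a0**2 / (8 * np.pi * G)           # characteristic pressure scale
print("P0 =", P0, "J/m^3")             # ~8.6e-12 J/m^3

# Cosmological baryon energy density today (assumed cosmology):
H0 = 70e3 / 3.086e22                   # s^-1
rho_crit = 3 * H0**2 / (8 * np.pi * G)
rho_b_c2 = 0.045 * rho_crit * c**2
print("rho_b c^2 / P0 =", rho_b_c2 / P0)   # of order a few, cf. ~3.5 in the text
```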
A deeper link of DM and DE: It is curious that the distribution of DM in dwarf galaxies is extremely ordered, something that the cuspy ΛCDM halos are still struggling to explain even with the maximum baryonic feedback (Gnedin & Zhao 2002). For example, on galaxy scales the Newtonian gravity of DM, g_DM = V_c^2/R − g_B, and the Newtonian gravity of baryons, g_B = GM_B/R^2, have a tight correlation (Eq. 1), parameterised by an index n ≥ 1 (Zhao & Famaey 2006). This rule holds approximately at all radii R of all spiral galaxies of baryonic mass M_B(R) and circular velocity V_c(R), within the uncertainty of the stellar mass-to-light ratio and the object distance. For low surface brightness galaxies or at the very outer edge of bright spirals, where the gravity g is weaker than a_0, our empirical formula predicts g_DM ≈ (g_B a_0)^{1/2}, i.e., V_c^4 = GM_B a_0, which is essentially the normalisation of the (baryonic) Tully-Fisher relation (McGaugh 2005). Bulges and the central parts of elliptical galaxies are dominated by baryons inside a transition radius where the baryons and DM contribute about equally to the rotation curve; there Eq. (1) predicts g_DM = g_B = a_0/2. We can define a DM pressure P_0 ≡ a_0 × a_0/(8πG) at the transition by multiplying the local gravity (g_DM + g_B) = a_0 with the DM column density above this radius, g_DM/(4πG) = a_0/(8πG). This scale P_0 appears on larger scales too: all X-ray clusters have gas pressure and DM random energy density comparable to P_0. The amplitude of the scale a_0 appears in the r^-1 cusp of CDM halos as well (Xu, Wu, Zhao 2007; Kaplinghat & Turner 2001). These coincidences can be understood since the last scattering shell at z = 1000 has a thickness 2L ∼ 10 Mpc and contains typical potential wells of depth c^2/N ∼ (1000 km/s)^2 due to inflation, where N ≡ 10^5; hence the typical internal acceleration is c^2/N/L ∼ 0.2a_0. Also, a DM sphere of radius 5 Mpc turning non-linear now would fall in with an acceleration ∼ 200 × H_0^2 × 5 Mpc ∼ a_0. While correlations of baryons and DM can generally be understood in a galaxy formation theory where DM and baryons interact, the unlimited freedom of dark particles means a wide spread of concentrations; hence the correlation should have substantial history-dependent variance from galaxy to galaxy and from radius to radius. For example, DM is unexpected in Tidal Dwarf Galaxies, yet an acceleration of scale a_0 is observed there. The tightness of such hidden regulations on DM at all radii for all galaxies is anomalous, or at least challenging, in the standard framework. It is even more curious that DM in various systems and DE are tuned to a common scale P_0, hence requiring a coincidence across the two dark sectors. Since all these anomalies are based on the gravitational acceleration of ordinary matter in GR, one wonders whether the dark sectors are not just a sign of an overlooked field in the gravitational sector. Continuing along Zhao (2006), we propose here to investigate whether the roles of both DM and DE could be replaced by a vector field in a modified metric theory. This follows from two long lines of investigation pursued by Kostelecky, Jacobson, Lim and others on the consequences of symmetry-breaking in string theory, and by Milgrom, Bekenstein, Sanders, Skordis and others driven by astronomical needs. These two independent lines were first merged by the pioneering work of Zlosnik et al. (2007).
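Before moving on to the model construction, a small numerical illustration of the galaxy-scale relations above; it assumes the deep-gravity limit g_DM ≈ (g_B a_0)^{1/2} reconstructed here and a hypothetical baryonic mass, with the transition radius following from setting g_B = a_0/2.

```python
import numpy as np

G     = 6.674e-11   # m^3 kg^-1 s^-2
a0    = 1.2e-10     # m/s^2
M_sun = 1.989e30    # kg

# Deep-gravity normalisation V_c^4 = G * M_B * a0 (the baryonic
# Tully-Fisher relation) for a hypothetical M_B = 1e10 M_sun spiral:
M_B = 1e10 * M_sun
V_c = (G * M_B * a0) ** 0.25
print("V_c =", V_c / 1e3, "km/s")      # ~112 km/s, a plausible flat speed

# Transition where g_B = a0/2, ie, R_t = sqrt(2 G M_B / a0):
R_t = np.sqrt(2 * G * M_B / a0)
print("R_t =", R_t / 3.086e19, "kpc")  # ~4.8 kpc transition radius
```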
The existence of an explicit Lagrangian satisfying the main constraints from the solar system, galaxy rotation curves, and the cosmological concordance ratio remains to be demonstrated. Warming up to a vector field: In Einstein's theory of gravity, the slightly bent metric for a galaxy in a uniformly expanding background set by the flat FRW cosmology is given by $ds^2 = -(1 + \frac{2\Phi}{c^2})c^2dt^2 + a(t)^2(1 - \frac{2\Psi}{c^2})dl^2$ (Eq. 2), where $dl^2 = dx^2 + dy^2 + dz^2$ is the Euclidean distance in Cartesian coordinates. In the collapsed regions of galaxies, the metric is quasi-static, with the potentials $\Phi(t,x,y,z) = \Psi(t,x,y,z)$ due to DM plus baryons, which all follow the geodesics of $g_{\mu\nu}$. Modified gravity theories are often constructed to preserve the Weak Equivalence Principle, i.e., particles or small objects still travel on geodesics of the above physical metric independent of their chemical composition. Unlike in Einstein's theory, the Strong Equivalence Principle and CPT can be violated by, e.g., creating a preferred frame using a vector field. The Einstein-Aether theory of Jacobson & Mattingly (2001) is such a simple construction, where a unit vector field $U^\mu$ is designed to couple only to the metric and not to matter directly. It has a kinetic Lagrangian with a linear superposition of quadratic covariant derivatives $\nabla(c^2U)\nabla(c^2U)$, where $c^2U^\mu$ is constrained to be a time-like four-momentum vector per unit mass by $-g_{\mu\nu}U^\mu U^\nu = 1$. The norm condition means the vector field introduces up to three new degrees of freedom; e.g., a perturbation in the FRW metric (Eq. 2) has $c^2U_\mu \equiv g_{\mu\nu}c^2U^\nu \approx (c^2 + \Phi, \frac{A_x}{c}, \frac{A_y}{c}, \frac{A_z}{c})$, containing a four-vector made of an electric-like potential $\Phi$ and three new magnetic-like potentials. For spin-0 mode perturbations with a wavenumber vector $\mathbf{k}$, we can approximate the spatial part as the gradient of a scalar, which contains just one degree of freedom, i.e., the flow potential $V(t,x,y,z)$. We expect that an initial fluctuation of $c|\mathbf{k}|V \sim |\Phi| \sim c^2N^{-1} \equiv 10^{-5}c^2$ can be sourced by a standard inflaton; the vector field tracks the spectrum of metric perturbations (Lim 2004). Most recently, Zlosnik et al. (2007) suggested replacing the linear $\lambda\nabla U\nabla U$ with a non-linear kinetic Lagrangian $F(\lambda\nabla U\nabla U)$ to extend Jacobson's framework. They showed that this class of non-linear models is promising for producing the DE effect in cosmology and the DM-like effect in the weak-field limit. Here we continue along the lines of the pioneering authors, but aim for a single Lagrangian with parameters in good match with basic observations over a range of scales. A Simple Lagrangian for Λ: The difficulty of writing down a specific Lagrangian is that there are infinitely many ways to form pressure-like terms quadratic in the covariant derivatives of the vector field. Simplicity is the guide when choosing a gravity theory, since GR plus ΛCDM largely works. Let us start by forming two pressure terms for any four-momentum-like field $A^\mu$ with a positive norm $mc^2 \equiv -g_{\alpha\beta}A^\alpha A^\beta$; the RHSs are covariant with the dimension of acceleration squared, and $\nabla \equiv A^\alpha\nabla_\alpha$ or $\nabla_\alpha$ stands for the covariant derivative with respect to the space-time coordinates along the direction of the vector $A$ or the dummy index $\alpha$, respectively. From these we can generate two simpler pressure terms $K$ and $J$ of the unit vector field $U^\alpha$ (Eq. 3), where the approximations hold for $U^\alpha$ with negligible spatial components and a nearly flat metric (Eq. 2). Note that $J$ and $K$ are constructed so that we can control the time-like Hubble expansion and the space-like galaxy dynamics separately.
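As a small consistency check of the perturbed unit-norm vector quoted above, the following sympy sketch (ours; it assumes the standard weak-field form $g_{00} = -(1 + 2\Phi/c^2)$ with $x^0 = ct$, which the text does not spell out) recovers $|c^2U_0| = c^2 + \Phi$ to first order for a static observer; the overall sign depends on the signature convention:

```python
import sympy as sp

Phi, c = sp.symbols('Phi c', positive=True)

# Weak-field metric component g_00 = -(1 + 2*Phi/c^2), signature -+++, x^0 = c t
g00 = -(1 + 2 * Phi / c**2)

# Unit-norm time-like condition for a static observer: g00*(U^0)^2 = -1
U0_up = sp.sqrt(-1 / g00)        # contravariant component U^0
U0_dn = g00 * U0_up              # covariant component U_0

# Expand c^2 * U_0 to first order in Phi
expansion = sp.series(c**2 * U0_dn, Phi, 0, 2).removeO().expand()
print(expansion)                 # -> -c**2 - Phi, i.e. |c^2 U_0| = c^2 + Phi
```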
The K-term, with a characteristic pressure scale $\frac{a_0^2}{8\pi G} = P_0$ in galaxies, is the key for our model. (A full study should also include space-like terms $8\pi GK_{12} \equiv 2g^{\alpha\beta}(c^2\nabla_\alpha U^\gamma)(c^2\nabla_\beta U_\gamma) - \frac{2}{3}(c^2\nabla_\alpha U^\alpha)^2$.) The J-term, measuring the critical density, has a characteristic scale $N^2P_0 \sim 10^{10}P_0$: at the epoch of recombination, $z = 1000$, baryons, neutrinos, and photons contribute $\sim (8, 3, 5)\times 10^9P_0$, respectively, to the term $J = \frac{3c^2H^2}{8\pi G}$; so the epochs of equality and recombination nearly coincide. Now we are ready to construct our total action $S = \int d^4x\,|{-g}|^{1/2}L$ in physical coordinates, where the Lagrangian density (Eq. 5) contains the Ricci scalar $R$ and the ordinary matter Lagrangian $L_m$. For the vector field part, a Lagrange multiplier enforces the unit norm, and we propose the new Lagrangian (Eq. 6) built from the non-negative continuous functions $\lambda_i(x)$, where the subscript $i$ is either $n$ or $N$. Incidentally, $n = 0$ gives GR. The cutoffs (e.g., with $n = \pm 1$) guarantee a bounded Hamiltonian, with the kinetic terms $L_K$ and $L_J$ always bounded between $\pm N^2P_0$ (e.g., in a lab near Earth $K \sim (10^{13}-10^{22})P_0 > N^2P_0$, so $L_K = 0$). The condition at the tidal boundary $K = J = 0$ is well-behaved too (cf. eqs. 44-48 of Famaey et al. 2007 on the Cauchy problem). Note that $1 - \frac{dL_K}{dK} > \mu_{min} \equiv (1 + N/n)^{-n} \sim 10^{-15}$ and $1 - \frac{dL_J}{dJ} > \mu_B \equiv (1 + N/N)^{-n} \sim 2^{-3}$. Taking variations of the action with respect to the metric and the vector field, we can derive the modified Einstein's equations (EE) and the dynamical equation for the vector field. The expressions are generally tedious (Halle 2007), but the results simplify in the perturbative, matter-dominated regime that interests us. As anticipated in Lim (2004), the ij-cross-term of the EE yields $\Psi - \Phi = 0$ for all our models, which incidentally means twice as much deflection for light rays as in Newtonian gravity. As anticipated in Dodelson & Liguori (2007), the ti-term of the EE can be cast into the form of an unstable harmonic oscillator equation with a negative spring constant for $\dot{V}$; we expect that $HV$ tracks $\Phi$. The tt-term of the EE takes a form whose details change structure formation, PPN parameters, and gravitational waves, all beyond our goal here; we approximated $1 - \lambda_N(x) \sim 2^{-n} = \mu_B$ as a constant in the matter-dominated regime where $J < N^2P_0$, and the Q-term vanishes for static galaxies and a uniform flat FRW cosmology. So the tt-equation of Einstein reduces to the simple form (Eq. 9), in which the pressure from the vector field creates new sources for the curvature: the term in the Poisson equation acts as if adding DM for quasi-static galaxies, and a cosmological-constant-like term is created in the Hubble equation. For binary stars and the solar system, $4\pi G\rho - \nabla^2\Phi \approx 0$ holds because the gravity at distances 0.3 AU to 30 AU from a Sun-like star is much greater than the maximum vector field gradient strength $Na_0$, so $\frac{dL_K}{dK} = 0$; in fact, $|\nabla\Phi| \approx \frac{GM_\odot}{r^2} \sim (10^9 - 10^5)a_0$, and the typical anomalous acceleration is $Na_0\mu_{min} \sim 10^{-10}a_0$, well below the current detection limit of $10^{-4}a_0$ (Sereno & Jetzer 2006). This might explain why most tests of non-GR effects around binary pulsars, black holes and in the solar system yield negative results; Pluto at 40 AU and the Pioneer satellites at 100 AU might show interesting effects. Extrapolating the analysis of Foster & Jacobson (2006), we expect GR-like PPN parameters and gravitational wave speeds in the inner solar system.
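The quoted bounds and the solar-system estimate can be checked numerically (our sketch; $n = 3$ and $N = 10^5$ are the values used in the text, and the printed numbers should be read as order-of-magnitude comparisons against the text's quotes):

```python
# Our own check, not from the paper: evaluate the bounds and the predicted
# solar-system anomaly for the parameter values quoted in the text.
n, N = 3, 1e5

mu_min = (1 + N / n) ** (-n)   # text quotes ~1e-15 (order of magnitude)
mu_B   = (1 + N / N) ** (-n)   # = 2^-n = 1/8 for n = 3

anomaly_over_a0 = N * mu_min   # anomalous acceleration in units of a0

print(f"mu_min            = {mu_min:.2e}")
print(f"mu_B              = {mu_B:.3f}")
# Text quotes ~1e-10 for N*a0*mu_min/a0, against a detection limit of 1e-4
print(f"N*mu_min (in a0)  = {anomaly_over_a0:.2e}")
```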
For the Hubble expansion: the vector field creates a cosmological-constant-like term $\frac{\Lambda_0c^2}{8\pi G} \approx 9P_0$ below the zero-point of the energy density in the solar system, because the zero point of our Lagrangian (Eq. 6) is chosen at $N^2P_0 \le K < +\infty$. During matter domination, the contribution of the matter term $8\pi G\rho$ and of $\Lambda_0$ to the Hubble expansion $H^2$ (Eq. 9) is further scaled up because the effective gravitational constant $G_{eff} = G/\mu_B = 2^nG \ge G$, with GR being the $n = 0$ special case. Coming back to the original issue of the 3 : 1 ratio of matter density to our cosmological constant, Eq. (9) predicts a ratio of $\Lambda_0c^2$ to the matter term that is close to the desired $3 : (1+z)^3$. Adding neutrinos makes the explanation slightly poorer. So the DE scale is traced back to a separate coincidence of scales, i.e., the present baryon energy density $\bar{\rho}_bc^2 \sim 4P_0$, where $P_0$ contains the scale $a_0$ of the anomalous accelerations on galactic scales. Our model predicts that DE is due to a constant of the vacuum, preset by the modification parameter $n$ of the gravity; $n = 0$ gives GR. BBN also anchors any modification to GR. In the radiation-dominated era, $|J| = \frac{3c^2H^2}{8\pi G} \gg N^2P_0$, and the dynamics is driven by the radiation term plus $\frac{\Lambda_Nc^2}{8\pi G} = -\int_0^\infty \lambda_N(x)\,d(P_0N^2x^2) = -N^2P_0/8$ for $n = 3$, a finite negative number much smaller than the radiation pressure $\sim (z/1000)^3N^2P_0$. So the early universe is GR-like; in particular, the Hubble parameter at BBN is insensitive to the precise value of $N^2P_0$. Note that a more general version of our Vector-for-Λ model has a Lagrangian with four vector degrees of freedom and the scalars $\lambda_K$ and $\lambda_J$; it is optional to replace the kinetic term built on $K$ with the one built on $J$ to reduce the total freedom to four, as in Bekenstein's TeVeS. Our simple model is equivalent to the special case of two non-dynamical scalar fields $\lambda_K$ and $\lambda_J$ with $\frac{1}{n} \sim \frac{1}{N} \to 0$, hence $K = \mathcal{K}(U)$ and $J = \mathcal{J}(U)$ (Eq. 3). The potential is smooth, with $P_0\mathcal{V}(\lambda_K, \lambda_J) = \int\left[\frac{1}{\mu_{min}}H(\mu_{min} + \lambda_K - \lambda) - \frac{N^2}{n^2}H(\lambda - \lambda_J - 2^{-n})\right]P_n\,d\lambda$, where $P_n \equiv (\lambda^{-\frac{1}{n}} - 1)^2n^2P_0$ and $H(y)$ is the Heaviside function of $y$. A vector field $A_\mu \approx (mc^2 + m\Phi,\, mc\mathbf{A})$ with a mass scale $m$ has a quantum degeneracy pressure limit $\sim c^5\hbar^{-3}m^4$. It is intriguing that our model suggests the existence of a zero-point vacuum energy $\frac{\Lambda_0c^2}{8\pi G} \sim P_0\mathcal{V}(1,1) \sim 9P_0 \sim (0.001\,\mathrm{eV})^4$. The (positive) radiation pressure at the epoch of baryon-radiation equality coincides with the cutoff energy density $P_0\mathcal{V}(0,0) \sim -N^2P_0 \sim -(0.3\,\mathrm{eV})^4$, and the vacuum-to-cutoff energy density ratio $\sim 9/N^2 \sim 10^{-9}$ coincides with the cosmic baryon-to-photon or baryon-to-neutrino number ratio $\eta \sim 3\times10^{-10}$ due to the tiny asymmetry with antibaryons. Can theories like quantum gravity and inflation explain these coincidences? Understanding them might give clues to how the four-vector potential of photons decouples from the baryon current vector, and decouples from our E&M-like vector field $A_\mu$ in spontaneous symmetry breaking in string theory (Kostelecky & Samuel 1989; Carroll & Shu 2006). Massive neutrinos are optional in our model, because the $L_J$ term creates a massive-neutrino-like effect in cosmology without affecting galaxy rotation curves. There are a few ways to create the impression of a fluid of 2 eV neutrinos in clusters of galaxies as well (Sanders 2005). E.g., a general Lagrangian with $N \sim n$ would have new dynamical freedoms $\mu \equiv 1 - \lambda_K$ and $1 - \lambda_J$, which satisfy second-order differential equations in time in galaxies, reminiscent of fluid equations for DM.
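The eV$^4$ identifications above are simple unit conversions; the following sketch (ours) verifies that $9P_0$ and $N^2P_0$ indeed land near $(0.001\,\mathrm{eV})^4$ and $(0.3\,\mathrm{eV})^4$ once expressed as energy densities $E^4/(\hbar c)^3$:

```python
import math

G, c, hbar, eV = 6.674e-11, 2.998e8, 1.0546e-34, 1.602e-19

a0 = 1.2e-10
P0 = a0**2 / (8 * math.pi * G)            # Pa = J/m^3

def energy_density_from_scale(E_eV):
    """Energy density E^4/(hbar*c)^3 for an energy scale E, in J/m^3."""
    E = E_eV * eV
    return E**4 / (hbar * c)**3

print(f"9*P0           = {9 * P0:.2e} J/m^3")
print(f"(0.001 eV)^4   = {energy_density_from_scale(1e-3):.2e} J/m^3")
print(f"N^2*P0 (N=1e5) = {1e10 * P0:.2e} J/m^3")
print(f"(0.3 eV)^4     = {energy_density_from_scale(0.3):.2e} J/m^3")
# Both pairs agree to order unity, which is the level the text claims.
```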
Then the Bekenstein-Milgrom μ-function would acquire a history-dependent, non-local relativistic correction of order $\frac{c}{Na_0\tau} \sim 1$ if the temporal variation (relaxation) time scale $\tau$ of the scalar field $\lambda_K$ is comparable to the Hubble time. This dynamical correction is hard to simulate, but is most important at the tidal boundaries of (merging) systems, where a condensate of the dynamical freedoms $\lambda_K$ and $\lambda_J$ oscillates rapidly and could in principle act as an extra DM source to explain some outliers to the Bekenstein-Milgrom theory, e.g., the merging bullet cluster with its efficient lensing and high collision speed (Angus & McGaugh 2007). A dynamical field $\lambda_J$ is also desirable as an inflaton to seed perturbations (Kanno & Soda 2004). In summary, we demonstrate as a proof of concept that at least one alternative Lagrangian for gravity (Eqs. 5, 14) can be sketched to resemble GR plus ΛCDM, but with somewhat less fine-tuning in fitting several types of observations, from dwarf spiral galaxies to the cosmic acceleration. The keys are a zero-point pressure scale $P_0$ at the edges of galaxies and a universal convergence source term $\frac{1-\mu_B}{8\pi G}(c^2\nabla_\alpha U^\alpha)^2$ below the cutoff pressure $N^2P_0$, which is near the epochs of equality and last scattering. However, the CMB should be sensitive to the $\mu_B \equiv 2^{-n}$ modification parameter. It should be feasible to falsify the present model and its variations by simultaneous fits to supernovae distances and the CMB. HSZ acknowledges helpful comments from Anaelle Halle, Benoit Famaey, Tom Zlosnik, Pedro Ferreira, Constantinos Skordis, David Mota, Eugene Lim, Meng Su and the anonymous referee, and support from KITP during the 2006 gravitational lensing program.
Exploring the facilitators and barriers to high-risk behaviors among school transportation drivers: a qualitative study

Background: School transportation (ST) crashes are associated with serious adverse consequences, particularly for students in developing countries. High-risk behaviors (HRBs) of ST drivers are a major factor contributing to ST crashes. This study aimed at exploring the facilitators and barriers to HRBs among ST drivers. Methods: This qualitative study was conducted in 2019-2020. Participants were ST drivers, students, parents, and school staff purposively selected from Tehran, Iran. Data were collected through in-depth semi-structured interviews and focus group discussions and were concurrently analyzed through conventional content analysis. Findings: Participants were fifteen ST drivers with a mean age of 45 ± 10.2 years and 24 students, parents, and school staff with a mean age of 28.62 ± 16.08 years. The facilitators and barriers to HRBs fell into five main categories, namely previous experiences of HRBs, perceived gains and risks of HRBs, motivating and inhibiting feelings and emotions, positive and negative subjective norms, and perceived mastery in driving. Conclusion: A wide range of facilitators and barriers can affect HRBs among ST drivers. Strategies for preventing HRBs among ST drivers should be multidimensional and individualized and should focus on strengthening the barriers and removing the facilitators to HRBs.

Background

School transportation (ST) is an important type of transportation [1]. Some students use ST due to their parents' employment or their long distance to school [2]. Each day, more than 25 million students in the United States use ST to go to school and return home [3]. In 2018, around 1.7 million students in Iran used ST [4]. Parents expect their children to go to school and return home in safety [5], and ST can be an appropriate route for safe student transportation [6]. Nonetheless, ST carries different risks for students, increases their vulnerability [1], and creates a heavy socioeconomic burden [7]. Therefore, ST drivers need to prioritize student safety and health [8]. ST crashes in all countries cause serious physical injuries and even death for students and have negative effects on communities [9]. For example, more than forty children in China died during one year due to ST crashes [10]. In the United States, 800 children die each year due to motor vehicle accidents during school time, and 2% of these deaths are due to school vehicle accidents [3]. In developing countries, injuries due to ST crashes are more serious and increasingly prevalent [10]. For example, the number of student deaths in ST crashes in Tehran, the capital of Iran, increased from fifteen in 2016 to 22 in 2018 [11]. The most prevalent injuries caused by ST crashes among children less than ten years old and children aged 10-19 years are head trauma and lower extremity injuries, respectively [12]. Therefore, ST crashes are considered serious threats to student health [13]. Safe driving and protecting passengers against potential risks are among the main responsibilities of ST drivers and ST authorities [2]. Nonetheless, drivers' behaviors are a major factor contributing to traffic accidents [1]. Two studies reported human errors as the main cause of 75-90% of traffic accidents [14,15]. ST drivers' HRBs not only cause accident injuries but can also negatively affect students' behaviors [8].
ST drivers are the first and the last individuals who are in contact with students in the interval between leaving home and returning home, and they play a significant role in ensuring student safety [2]. However, they may endanger student safety through engagement in HRBs and the commission of driving offenses such as speeding, non-observance of the right of way, carrying excessive passengers, and driving a defective car [3,9]. Given the high prevalence of ST crashes in Iran [16], the importance of protecting students' physical and mental health [2], the significant role of ST drivers in protecting student health [5], and the significant effects of ST drivers' behaviors on ST crashes and on student behaviors [8], quality education about safe driving and safe ST for ST drivers is necessary to improve their driving behaviors [2,17]. A key step in developing educational programs for ST drivers is to study their driving behaviors and the factors contributing to them. A study in Great Britain reported that the most important factors contributing to traffic accidents among young drivers were risk taking, inexperience, and distraction due to using a mobile phone, while the most important factors among older drivers were medical conditions, defective eyesight, and slow driver reactions [18]. Other studies also reported driver-related factors, such as recognition and decision errors [19], socioeconomic background [20], fatigue, driving stress, irritability due to long-term driving [21], physical and mental abilities, and personality traits [20], as the most important factors contributing to HRBs among drivers. The contributing factors of HRBs among drivers largely depend on the immediate sociocultural context [20]; hence, the results of studies in this area in one context may not easily be generalizable to other contexts [22,23]. Some scholars also noted that some contributing factors of HRBs are still unknown [22]. Moreover, there are limited data in this area in Iran [24]. These gaps highlight the necessity of further studies to produce clearer evidence in this area. Therefore, the present study was conducted using a qualitative design in order to explore the facilitators and barriers to HRBs among ST drivers. Social behaviors, such as driving behaviors, are complex phenomena [25]. Scholars believe that quantitative designs are not appropriate for studying complex and poorly known phenomena [22,25]. On the other hand, qualitative studies are appropriate for exploring complex phenomena, such as driving behaviors, in light of the immediate sociocultural factors [22,26]. Therefore, a qualitative design was used in the present study.

Design

This qualitative study was conducted from April 2019 to March 2020 using conventional content analysis. Conventional content analysis is appropriate for describing poorly known phenomena about which there are limited theories or literature [27].

Participants and setting

The main study participants were fifteen male and female ST drivers with rich experience of ST driving in Tehran, Iran. Their mean age was 45 years. In addition, nine students with a mean age of eleven years, seven students' mothers with a mean age of 31 years, five students' fathers with a mean age of 46 years, and three school staff (two school principals and a teacher) with a mean age of 41 years were included in the study in order to explore the different aspects of the facilitators and barriers to HRBs among ST drivers.
Sampling was performed purposively with maximum variation with respect to the educational level of the students and the geographical area of the schools. Participants were selected from all five main geographical areas of Tehran, namely the north, east, west, south, and center of the city.

Data collection

Data were collected through in-depth semi-structured interviews and focus group discussions that started with questions about demographic and occupational characteristics such as age, gender, educational level, main occupation, work experience as an ST driver, number of ST services per day, and type of car. Then, broad questions were used to guide the interviews. Examples of these questions for ST drivers were, "Can you describe one of your working days?" and "What factors contribute to your HRBs?" The type of interview questions for the other participants varied according to the gaps in the data. An example was, "Can you explain your experiences of the ST driver's behaviors during ST?" Probing questions such as "Can you explain this more?", "What do you mean?", "Why and how?", and "Can you provide an example?" were also used to further explore participants' experiences. Participants had the opportunity to freely explain their experiences. The first author and a trained male colleague collected the data in Persian in a safe and quiet place in school dean offices, taxi agency offices, or city streets. Interviews and group discussions lasted 25-40 min, were audio-recorded with participants' permission, and continued up to data saturation, i.e., until no new data were obtained. Accordingly, three focus group discussions with nineteen participants and twenty interviews with twenty participants were held.

Data analysis

Data were analyzed using the three-step conventional content analysis proposed by Elo and Kyngäs [26]. In the data preparation step, each interview was transcribed word by word and its transcript was perused several times in order to obtain a general understanding of its main ideas. In the data organization step, the data were reduced by reviewing the transcript and determining and labeling meaning units to generate primary codes. Codes were constantly compared with each other and grouped into subcategories according to their similarities. Similarly, subcategories were compared and grouped into larger categories. Codes, subcategories, and categories were further developed and revised based on new interviews. Finally, the data were reported in the data reporting step.

Rigor

The trustworthiness of the data was ensured using Lincoln and Guba's criteria [28], namely credibility, confirmability, and transferability. Credibility was ensured via prolonged engagement with participants for more than one year in order to better understand their experiences. Moreover, data collection and analysis were performed concurrently and circularly. Triangulation of data sources and data collection methods was also used to overcome the weaknesses of the different data sources and data collection methods. Constant comparison was used throughout data analysis. Confirmability was maintained through member checking by participants and peer checking by the coauthors, and the findings were then revised according to their comments. Moreover, findings were compared with the findings of previous studies in the external report check process. To ensure transferability, clear descriptions were provided of participants' characteristics, and the original data were kept for subsequent assessment.
Moreover, the processes of data collection and analysis were described step by step in order to provide others with the opportunity of stepwise replication of the study. This study was approved by the Ethics Committee of Isfahan University of Medical Sciences (approval code: IR.MUI.REC.1398.385), and all methods were performed in accordance with the relevant guidelines and regulations.

Results

Participants were fifteen ST drivers and 24 students, parents, and school staff. The ST drivers were ten males and five females with a mean age of 45 ± 10.2 years and a mean work experience of 6 ± 2.96 years. The other participants were eleven students, ten parents, and three school staff (eleven males and thirteen females) with a mean age of 28.62 ± 16.08 years. Table 1 shows participants' characteristics. Data analysis revealed that five main categories of factors can affect ST drivers' HRBs. These five categories were previous experiences of HRBs, perceived gains and risks of HRBs, motivating and inhibiting feelings and emotions, positive and negative subjective norms, and perceived mastery in driving. The final pattern in the data revealed that each of these factors was a spectrum, with facilitators at one end and barriers at the other end (Table 2).

Theme 1: previous experiences of HRBs

Participants' experiences showed that previous experiences of HRBs can act on a spectrum as both facilitators and barriers to HRBs among ST drivers. Direct or indirect experiences of HRBs without any adverse consequence, at one end of the spectrum, were a facilitator to HRBs, while direct or indirect experiences of HRBs with adverse consequences were a barrier to HRBs among ST drivers.

Subtheme 1: experiences of HRBs without any adverse consequence: a facilitator to HRBs

Some participants reported that their direct experiences of HRBs with no adverse consequence for themselves and others were a facilitator to their HRBs. "I have picked up four students on the back seat so far and haven't experienced any problem. Previously, my car had no seat belt and nothing happened to my passengers. Therefore, I don't insist that students should fasten the seat belt" [male ST driver, P1]

Some participating ST drivers also reported that they engaged in HRBs due to witnessing or hearing about the HRBs of their colleagues, which had had no adverse consequences. "My colleagues always drive the wrong way in this one-way street and have never experienced any problem. I also learned to do so and haven't experienced any problem so far." Some ST drivers also noted that they picked up or dropped off students at the top of streets or alleys instead of at their home doors and committed some driving offences.

Subtheme 2: perceived risks: a barrier to HRBs

The perceived risks of HRBs were a major barrier to HRBs. One of these risks was injury to students due to HRBs. "I don't know how some drivers dare to drop off students in the street. I don't dare because they are at risk for accidents" [female ST driver, P9]

Most participants agreed that HRBs, such as dropping off students in unsafe places, having no predetermined time for ST, and changing the car driver without previous announcement, not only can cause physical injuries but can also negatively affect the mental health of students and families. "I always pick them up right at the predetermined time. Missing a student causes the student stress and makes families distrustful and upset" [male ST driver, P3] "My son had been left behind the door without being able to ring the doorbell.
In these cases, my little son is at risk for different adverse events. What if they kidnap my son? Drivers will never leave a child alone if they perceive these risks" [student's mother, P30]

Some participants also referred to the negative educational and behavioral effects of HRBs as a barrier to HRBs. They noted that any HRB or rule violation can negatively affect students' mentality and behaviors. "ST drivers should be good role models for students. Unfortunately, some ST drivers don't have appropriate behaviors. Students spend about one hour of their time each day with ST drivers and hence, ST drivers' violation of rules can waste parents' and teachers' attempts at educating students" [male school staff, P38]

Some ST drivers also reported the financial disadvantages of HRBs, such as damage to the car and suspension of their job, as a barrier to their HRBs, and noted that the financial consequences of HRBs may be far beyond ST drivers' financial means.

Theme 3: motivating and inhibiting feelings and emotions

Participants' experiences showed that feelings and emotions can motivate or inhibit engagement in HRBs. Sensation seeking, pleasant feelings, and the management of negative feelings were facilitators to HRBs, while unpleasant feelings about HRBs were a barrier to HRBs.

Subtheme 1: motivating feelings and emotions: a facilitator to HRBs

Some participating ST drivers reported engagement in HRBs and violation of traffic rules to seek sensation and pleasant feelings.

Subtheme 2: inhibiting feelings and emotions: a barrier to HRBs

Participants' experiences showed that some ST drivers felt tension, unpleasant feelings, and pangs of conscience during HRBs and had good feelings and satisfaction when they could behave healthily and observe traffic rules. "Last year, I picked up six and sometimes seven students. I didn't want to do so, but our contractor required us to" [female ST driver, P4]

Theme 4: subjective norms

Subjective norms, or others' opinions about HRBs, were also among the facilitators and barriers to HRBs. Participants' experiences showed that others' approval of HRBs facilitated engagement in HRBs, while others' disapproval of HRBs was a barrier to HRBs.

Subtheme 1: significant others' approval of HRBs: a facilitator to HRBs

Some participating ST drivers noted that they highly valued their friends' and colleagues' opinions about their driving and reported engaging in HRBs with their friends' and colleagues' approval.

Subtheme 2: significant others' disapproval of HRBs: a barrier to HRBs

Participants' experiences showed that the disapproval of HRBs by significant others, including family members, students, police, and school staff, made ST drivers avoid HRBs.

Theme 5: perceived mastery in driving

Participants reported perceived mastery in driving, perceived ability to engage in HRBs, and perceived ability to avoid HRBs as facilitators and barriers to HRBs. The two subcategories of this category were perceived superiority and self-efficacy for avoiding HRBs.

Subtheme 1: perceived superiority: a facilitator to HRBs

Perceived superiority in driving was a major facilitator to HRBs. Participants' experiences showed that some ST drivers felt more experienced and more competent than other drivers, believed that they had mastery in driving, and hence engaged in HRBs. A young ST driver with limited driving experience explained his perceived competence in driving.

Discussion

This study aimed at exploring the facilitators and barriers to HRBs among ST drivers.
Findings revealed that the major facilitators and barriers to HRBs among ST drivers were previous experiences of HRBs, perceived gains and risks of HRBs, motivating and inhibiting feelings and emotions, positive and negative subjective norms, and perceived mastery in driving. These barriers and facilitators are discussed in what follows.

Theme 1: previous experiences of HRBs

Findings showed that previous direct and indirect experiences of HRBs, with or without negative consequences, acted on a spectrum as facilitators and barriers to HRBs among ST drivers. In line with this finding, previous studies in China [29], Cyprus [30], and Spain [31] also reported that previous experiences of traffic accidents increase risk perception and thereby act as a barrier to HRBs and a facilitator to engagement in protective behaviors [32]. However, one study reported that previous experiences and risk perception may not necessarily lead to protective behaviors among ST drivers [32]. Another study in South Africa also showed that accidents had no significant effect on risk taking among taxi drivers [33]. It seems that the consequences of previous HRB-related experiences may not have inhibitory effects strong enough to prevent ST drivers' re-engagement in HRBs. The findings of the present study respecting the effects of previous experiences of HRBs can be used to redefine the concept of "previous experiences" in the Self-Efficacy Theory [34] and the Social Cognitive Theory [35]. Moreover, our findings highlight the need for developing more effective road safety programs to reduce HRBs among drivers who frequently engage in them [21]. The developers of educational programs can use messages about the negative HRB-related experiences of drivers and the negative consequences of HRBs (such as physical disability and financial problems) in order to correct other drivers' misconceptions about HRBs.

Theme 2: perceived gains and risks of HRBs

Our findings also showed that ST drivers' perceptions of the gains and the risks of HRBs can act as a facilitator or a barrier to HRBs. Similarly, the Prospect Theory holds that weighing the advantages of a behavior against its disadvantages affects engagement in that behavior [36]. One of the reasons ST drivers in the present study gave for engaging in HRBs was personal or familial gain, such as the possibility of earning more income. This is in line with the findings of two former studies which reported perceived benefits as an influential factor in modifying health-related behaviors [37,38]. Pender also highlights that individuals usually select behaviors which are most beneficial [39]. Moreover, a study showed that the perceived benefits of HRBs motivate drivers to engage in HRBs [40]. Two other studies also found that HRB benefits such as early arrival at the destination, perceived superiority over other drivers, the ability to concurrently perform several tasks [40,41], saving more time, and a sense of freedom [42] were among the facilitators of drivers' engagement in HRBs. A study on drivers in Australia also reported the better use of time as a benefit of using a mobile phone while driving [43]. Moreover, our findings revealed that some ST drivers engaged in HRBs in order to have more time for their other job(s). Great fatigue due to having two or more jobs can impair concentration and functioning, cause frequent distractions, increase the likelihood of engagement in HRBs, and increase the risk of accidents.
On the other hand, study findings showed that the perceived risks of HRBs, such as physical and mental injuries and financial problems, acted as a barrier to HRBs. Perceived risks can affect behavioral intention [44] and behavior [45], so that personal differences in risk perception can explain differences in engagement in HRBs such as traffic rule violation [46]. A study in Australia showed that a higher perception of the risks of unsafe driving is associated with a lower probability of engagement in HRBs and violation of traffic rules, though some drivers may engage in HRBs despite knowing their risks and disadvantages [43]. Individuals weigh the gains of a given behavior against its risks and then decide to engage or not to engage in that behavior [47]. HRBs can cause adverse consequences for different people [48]; nonetheless, individuals may decide to engage in them based on their perceptions of the potential gains or risks. Therefore, simple strategies, such as risk messages, which focus on improving individuals' understanding of HRB-associated risks may not be effective enough to motivate ST drivers to avoid HRBs. Comprehensive educational interventions highlighting the importance of the risks of HRBs and the unimportance of HRB-associated gains may help drivers decide not to engage in them.

Theme 3: motivating and inhibiting feelings and emotions

Study findings also indicated that feelings and emotions can affect ST drivers' HRBs. In line with the findings of two former studies [49,50], our findings revealed that negative feelings such as fatigue and low mood can facilitate ST drivers' engagement in HRBs. Moreover, we found sensation seeking to be a facilitator to HRBs. Similarly, two studies reported that drivers who enjoy HRBs are more likely to engage in them [51,52]. Sensation seeking has a significant role in determining driving behaviors and driving culture and significantly increases accident-related injuries [53]. High levels of sensation seeking may be associated with a higher probability of engagement in HRBs such as speeding, not fastening the seat belt, drunk driving, and competition with other drivers [54]. On the other hand, our findings showed tension, unpleasant feelings, and pangs of conscience after HRBs to be barriers to HRBs. The Cognitive Dissonance Theory [55] also holds that behaviors which are incongruent with individuals' cognitions cause them tension and unpleasant feelings; hence, they attempt to avoid such behaviors in order to prevent such feelings and modify their behaviors to have pleasant feelings [55,56]. Previous studies showed that appropriate educational interventions can be used for attitude and behavior modification and can promote healthy behaviors among individuals with HRBs [57,58].

Theme 4: positive and negative subjective norms

Study findings showed that positive and negative subjective norms can affect ST drivers' HRBs. The Theory of Planned Behavior also states that perceived pressure from significant others can affect engagement in a given behavior [59]. Two other studies also reported that significant others' pressure has significant effects on behaviors [60,61]. Our findings also revealed that positive subjective norms were a facilitator to HRBs. This is in agreement with the findings of two previous studies in Iran [20,62]. On the other hand, our findings revealed that negative subjective norms, such as the negative attitudes of families, parents, school staff, and police, acted as a barrier to HRBs.
Similarly, two former studies reported the significant effects of subjective norms on HRBs among drivers [60,63]. These findings highlight that colleagues' and significant others' negative attitudes towards HRBs can reduce the prevalence of HRBs among ST drivers. Therefore, safety-based educational interventions for students, parents, and drivers can reduce HRBs among drivers.

Theme 5: perceived mastery in driving

We also found that perceived mastery in driving acted as both a facilitator and a barrier to HRBs among ST drivers, in that perceived superiority in driving moved ST drivers, particularly the younger ones, toward engagement in HRBs. In agreement with this finding, a previous study found that drivers who overestimated their driving mastery authorized themselves to engage in HRBs [64]. Overestimation of driving mastery and low risk perception can make drivers violate traffic rules and engage in HRBs, particularly speeding [30]. On the other hand, perceived self-efficacy for avoiding HRBs was found in the present study to be a barrier to HRBs among ST drivers. Self-efficacy refers to individuals' perceptions of their control over their behaviors [65] or their perceived ability to avoid risky or unhealthy behaviors [59]. Self-efficacy is a significant predictor of behavioral intention and safe behavior, particularly with respect to speeding [65]. A study in Spain also reported self-efficacy as a significant determinant of drunk driving [66]. Compared with other factors, personal factors have the greatest effects on drivers' engagement in HRBs; hence, educational interventions are essential to modify drivers' beliefs and perceptions. Educational messages about the consequences of HRBs can be used to improve drivers' risk perception and thereby reduce their engagement in HRBs.

This study had three main limitations. First, some ST drivers refused to participate in the study due to their concerns over losing their jobs. Second, like all qualitative studies, this study was conducted on a small sample of individuals, and hence the findings may have limited generalizability. Third, as most ST drivers in Iran are male, most study participants were male ST drivers, and we could not compare the HRB-related experiences of male and female ST drivers. A strength of the study was the inclusion of individuals with a wide range of direct and indirect HRB-related experiences. Moreover, the present study provides a basis for further studies into the HRBs of ST drivers in psychological or behavioral paradigms.

Conclusion

This study suggests that previous experiences of HRBs, perceived gains and risks of HRBs, feelings and emotions, positive and negative subjective norms, and perceived mastery in driving can act as facilitators and barriers to HRBs among ST drivers. Moreover, this study highlights that ST drivers' engagement in HRBs largely depends on their HRB-related beliefs, perceptions, and experiences. ST drivers with greater risk perception and a firmer belief in the negative consequences of HRBs are more likely to avoid these behaviors. On the other hand, the significant contribution of the perceived gains of HRBs to ST drivers' engagement in HRBs highlights the need for modifying ST drivers' perceptions about the triviality of the gains in comparison with the risks of HRBs. Moreover, this study shows that despite good risk perception, some ST drivers may still engage in HRBs due to their perceived superiority in managing potential HRB-related risks.
Perceived superiority is a poorly known factor in the area of HRBs among ST drivers and deserves further exploration. Given the wide variety of the facilitators and barriers to HRBs among ST drivers, one-size-fits-all approaches cannot be used to prevent ST drivers' HRBs. Rather, individualized approaches should be developed based on the characteristics of each ST driver in order to more effectively prevent HRBs and their associated negative physical, mental, and behavioral consequences.
Two-Level Full Factorial Design Approach for the Analysis of Multi-Lane Highway Section under Saturated and Unsaturated Traffic Flow Conditions: Oversaturation of highways occurs due to their inadequate assessment and design. In this paper, we propose both a mathematical queuing model and a Discrete-Event Simulation (DES) framework based on Newell's triangular flow-density relationship for the performance analysis of a multi-lane highway section. The proposed framework is a finite-capacity queuing system, which captures an increase in the flow with the vehicle density up to the capacity of the section in an unsaturated condition and a decrease in the flow in a saturated condition, depicting the actual traffic conditions on the highway section. First, the Birth-Death Process (BDP) is used to build the mathematical queuing model, and the average number of vehicles (average queue length) and the blocking probability on the highway section are estimated. Then, the accuracy of the mathematical queuing model is verified by the proposed DES framework. The "significance and effects" of different design factors are evaluated using the two-level full factorial design technique. The analysis of the experimental results reveals that the length of the highway section and the number of lanes are the most significant factors affecting the average queue length and the blocking probability, while the jam density only has a significant effect on the average queue length and does not affect the blocking probability. In the case of a two-way interaction, the combined effect of "length-lanes" significantly affects the average queue length. In the end, a multiple-factor linear regression model is also developed for the prediction of the average number of vehicles on the highway section based on the design factors.

Introduction

Traffic congestion on a highway network is a condition that appears when the volume of traffic on a highway reaches or exceeds its capacity at a particular segment. The issue of traffic congestion is a real problem with the ever-increasing population and use of vehicles, and it draws the interest of academic researchers and traffic engineers. Due to long traffic jams and queues, the loss of productive time has adverse socioeconomic costs. Studying the interactions between cars, drivers, and the environment (roads, traffic control systems, etc.), with the goal of designing the ideal highway network and ensuring efficient traffic flow as well as minimal traffic congestion, is a highly challenging problem for researchers. The smooth movement of vehicles, the smooth operation of traffic, and safety on the highways are the main concerns affecting the perception of the traffic and highway system. Scientific studies of traffic issues started in the 1930s in order to understand, prevent, and resolve traffic congestion problems. Several speed-density-flow models were developed over the years and categorized as single-regime or multi-regime models. Single-regime models assume a continuous relationship between speed, density, and traffic flow [1][2][3], while in multi-regime models the relationship is discontinuous depending on the level of density. Similarly, Lighthill and Whitham presented one of the most popular macroscopic traffic models, based on fluid dynamic theory [4]. Treating the traffic flow as a 1-D compressible fluid, they studied the traffic jam as a shockwave.
Prigogine presented the gas-kinetic model based on the Boltzmann equation [5]. Using the premise of a delayed adaptation of velocity, Newell proposed the microscopic optimum velocity model [6]. Several queuing models were developed by different researchers for the analysis and design of transportation systems due to the inherent characteristics of urban transit systems, namely, the connection between the service facilities (walkways, stairs, ticket machines at transit stations, etc.) and the flow of entities (pedestrians) [7][8][9][10][11][12][13][14][15]. Similarly, Jain and MacGregor Smith [16] established state-dependent queuing models for simulating vehicle traffic flows and revealed that they are more realistic in extremely congested conditions. Afterwards, several other researchers carried out performance assessments of multi-lane highways based on queuing theory and simulation models [17][18][19][20][21]. However, these queuing and simulation models neglected the bottleneck and spillback phenomena on highways and in transit terminal facilities. In the spillback effect, traffic congestion propagates upstream, which delays and slows both passengers and vehicular traffic; this is the true saturation condition. Therefore, in this research, we describe the realistic propagation of vehicular traffic congestion by incorporating phenomena such as queuing, the spillback effect from downstream traffic, and dissipation along the highway. Here, we develop both a mathematical queuing model and a simulation model, which incorporates Newell's kinematic wave model [22]. Newell's flow model is based on a triangular flow-density (q-K) relationship, which is shown by the fundamental diagram in Figure 1.
This basic diagram highlights a few key features: the highway capacity (C_h), based on the jam density K_j (the density occurring at zero speed), the number of lanes (N), and the length of the highway (L); the critical density (K_cr), i.e., the density and speed experienced during peak operations; and the maximum flow (q_max). Within this triangular relationship, there are two traffic conditions, i.e., a congested traffic flow condition and an uncongested traffic flow condition. In the uncongested traffic condition (K < K_cr), the vehicles' speed (wave pace) (V) is the slope of the q-K curve. When the traffic is not congested, low flow densities result in a monotonic increase in the traffic flow; the vehicles continue to move freely until the maximum flow is reached. At the critical density, the traffic state changes from free-flow traffic to congested traffic. Beyond that point, in the congested condition (K > K_cr), the space between vehicles becomes smaller as the density rises, which ultimately slows down the traffic. Since queues take up space, there is spillback in this case, and a bottleneck queue propagates upstream on the highway.

Based on the above analysis, our research is divided into three stages. First, we propose a mathematical queuing model for the vehicular traffic flow on a multi-lane highway section based on Newell's triangular flow-density relationship to consider both the free-flow and congested conditions. The proposed queuing model is used to estimate the average number of vehicles on the multi-lane highway section. Secondly, for the verification of the accuracy of the proposed mathematical queuing model, a Discrete-Event Simulation (DES) framework is also developed in SimEvents®. Third, in order to assess the "significance and effect" of various design factors, a statistical two-level full factorial design approach is implemented in MINITAB software [23,24]. A regression model is also developed in the end to predict the average number of vehicles on the highway section based on various design factors.
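As a concrete reading of the triangular relationship described above, the following sketch (ours, with illustrative parameter values rather than any calibration from this paper) encodes the two branches of the fundamental diagram:

```python
def triangular_flow(K, V_f=100.0, K_j=120.0, K_cr=30.0):
    """Newell's triangular flow-density relation q(K) per lane.
    V_f: free-flow speed (km/h), K_j: jam density (veh/km/lane),
    K_cr: critical density (veh/km/lane). Values are illustrative."""
    w = V_f * K_cr / (K_j - K_cr)      # backward wave speed fixed by geometry
    return min(V_f * K,                # uncongested branch: q rises with K
               w * (K_j - K))          # congested branch: q falls to 0 at K_j

q_max = triangular_flow(30.0)          # capacity flow at the critical density
print(q_max)                           # 3000 veh/h per lane with these values
```

The two branches meet at K_cr, so the capacity flow is simply q_max = V_f * K_cr with this parameterization.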
Description of Multi-Lane Highway Section as a Queuing System

A section of highway is shown in Figure 2, in which vehicles occupy spaces as they enter the section. The available spaces on the highway section act as "servers", and they become busy as vehicles enter the multi-lane highway segment. The occupancy of these spaces causes an increase in the lane density (K). When all the vacant spaces are occupied, the vehicles' movement ceases and the traffic flow stops. Jam density (K_j) refers to the lane density associated with the jam condition on a highway segment. If a highway segment has a length (L) and a number of lanes (N), and the jam density K_j of the section is known, the capacity (C_h) of the multi-lane highway section can be calculated using Equation (1), C_h = K_j × L × N. Due to the limited capacity (C_h) of the highway section, the number of empty spaces reduces as the number of vehicles (n) increases. The density (K) increases, with an ultimate reduction in vehicular speed (V). Thus, a multi-lane highway section is described as a state-dependent multi-server finite-capacity queuing system. The vehicular arrival rate can be described by X, and the state-dependent service rate of the multi-lane highway section based on the flow-density model is represented by Y(n)_{q−k}. The available spaces on the multi-lane highway section act as servers, and the capacity of the multi-lane highway section is the sum of all the servers, i.e., the number of available spaces. Therefore, the general form representing a finite-capacity multi-lane highway section is an X/Y(n)_{q−k}/C_h/C_h queuing system.

Mathematical Modeling Based on Birth-Death Process (BDP)

In this section, we discuss the formulation of the mathematical queuing model for a highway section in which the arrival process of vehicles is assumed to follow a Poisson distribution and the state-dependent service process of the highway section follows an exponential distribution.
The uni-directional flow of vehicular traffic is considered in this research. To formulate the M/M(n)_{q−k}/C_h/C_h model for the performance analysis of the highway section based on the state-dependent Newell flow-density model, we employ the Birth-Death (BD) Process [25]. In the case of Newell's flow-density model, there are two conditions, i.e., a saturated condition and an unsaturated condition. Therefore, it is necessary to determine a transition point that demarcates the unsaturated condition from the saturated condition. Let n_cr be the traffic volume below which the condition is unsaturated (n < n_cr) and above which the condition is saturated (n ≥ n_cr). For a finite-capacity M/M(n)_{q−k}/C_h/C_h queuing system, the state transition diagram is shown in Figure 3; the vertical dotted line demarcates the saturated condition and the unsaturated condition.

An infinitesimal generator matrix with birth rate λ and death rate µ(n) can be created from the state transition diagram, based on the BDP, which is given by Equation (2). Let π_n be the state probability, i.e., the probability of having n vehicles on the highway section. Using Equations (2) and (3), the steady-state (equilibrium) equations can be obtained. Using the balance equations, making appropriate substitutions, and proceeding by means of recursion, all the state probabilities can be expressed in terms of the zero-state probability. The zero-state probability can be obtained from the normalization condition, i.e., the sum of π_n over n = 0, ..., C_h equals 1.

The vehicles' flow rate on any highway section is related to the speed of the vehicles on the section, the number of vehicles on the section, and the length of the section, as given by Equation (9). However, there are two conditions, i.e., the unsaturated and the saturated traffic condition, and the traffic flow rate can be expressed in terms of both the unsaturated and saturated flow rates, as shown by Equation (10). With the substitution shown by Equation (10), we obtain all the state probabilities for both the saturated and unsaturated conditions, as shown by Equation (11).
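The recursion behind the state probabilities and the performance measures derived from them can be made concrete with a short numerical sketch (ours; the service rate µ(n) = n·V(n)/L and all parameter values are illustrative assumptions, since the paper's exact Equations (2)-(13) are not reproduced here):

```python
def newell_speed(n, L, N, V_f, K_j, K_cr):
    """State-dependent speed from a triangular flow-density relation.
    Illustrative form: free-flow speed below K_cr, congested branch above."""
    K = n / (L * N)                          # per-lane density for n vehicles
    if K <= K_cr:
        return V_f
    w = V_f * K_cr / (K_j - K_cr)            # backward wave speed
    return max(w * (K_j - K) / K, 1e-6)      # guard: keep mu(C_h) > 0

def bdp_performance(lam, L, N, V_f, K_j, K_cr):
    """E[N] and blocking probability of the finite-capacity birth-death chain
    with constant birth rate lam and death rate mu(n) = n*V(n)/L."""
    C = int(K_j * L * N)                     # capacity C_h = K_j*L*N (Eq. 1)
    pis = [1.0]                              # unnormalized pi_n, pi_0 = 1
    for k in range(1, C + 1):
        mu_k = k * newell_speed(k, L, N, V_f, K_j, K_cr) / L
        pis.append(pis[-1] * lam / mu_k)     # pi_n = pi_0 * prod lam/mu(k)
    Z = sum(pis)                             # normalization: sum of pi_n = 1
    pis = [p / Z for p in pis]
    EN = sum(n * p for n, p in enumerate(pis))
    return EN, pis[C]                        # mean queue length, P(blocking)

# Illustrative case: 1 km section, 2 lanes, V_f = 100 km/h, K_j = 120 veh/km/lane
EN, Pb = bdp_performance(lam=2000.0, L=1.0, N=2,
                         V_f=100.0, K_j=120.0, K_cr=30.0)
print(f"E[N] = {EN:.1f} vehicles, blocking probability = {Pb:.2e}")
```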
In order to obtain the performance measures, such as the blocking probability and the average number of vehicles on the highway section, we can use Equations (12) and (13), which are based on the state probabilities from Equation (11).

Discrete-Event Simulation Architecture of Multi-Lane Highway Section

To ensure that our suggested queuing model is accurate for the multi-lane highway section, a DES model is developed in the SimEvents® toolbox of the MATLAB programming environment, as shown in Figure 4. The DES model consists of two phases: (1) the average vehicles' arrival phase; and (2) the flow-density-based service phase of the highway. The major blocksets of the SimEvents® toolbox used in the DES model are the FIFO_Queue blocks, which assign queueing spaces to the vehicles; the Server blocks, which temporarily hold the vehicles; and the Start and Read Timers, which display the average dwelling time of the vehicles on the highway section. The Level-2 S-function block computes and updates the state-dependent speed of the vehicles as the number of vehicles on the section increases. The Constant blocks are used to input the values of various parameters in the DES model, such as the arrival rate, the length of the section, the number of lanes, etc. The Display block displays the output, i.e., the average number of vehicles on the highway section. As discussed earlier, the highway section has a finite capacity (shown in Figure 2), so the arriving demand cannot exceed the overall capacity of the highway section. Therefore, during the vehicles' arrival phase, it must be ensured that the number of arrivals stays less than or equal to the capacity C_h = K_j L N of the highway section. To execute this scenario in the DES environment, vehicles arriving from the upstream section wait in the FIFO_Queue block before entering the Server block. The vehicles are subsequently sent to the downstream section. The MATLAB® Function block is used to perform the following functions:
• It compares the number of vehicles arriving at the highway section with its capacity C_h = K_j L N, and prevents the entry of entities into the FIFO_Queue block when the highway section is at capacity (n = C_h).
• It computes the blocking probability as the highway jams, i.e., operates at capacity. When the entrance to the FIFO_Queue block is blocked for vehicles, the blocked vehicles are simultaneously activated and registered through the output switch block's second entity port (OUT2). Based on the blocking probability, the average number of vehicles is estimated from the FIFO_Queue block of the DES model.
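As a rough stand-in for the SimEvents architecture just described, the event-driven sketch below simulates the same birth-death dynamics in plain Python: Poisson arrivals, a state-dependent exponential service process, and blocking of arrivals once the section is at capacity. It is an assumption-laden illustration, not the authors' model; the paper ran SimEvents for 50,000 time units, while the shorter horizon here only keeps the pure-Python example fast.

```python
import random

def service_rate(n, L=0.2, N=3, v_free=60.0, k_jam=264.0, k_crit=44.0):
    """Total departure rate mu(n) in veh/h for n vehicles on the section."""
    k = n / (L * N)                                   # per-lane density
    if k <= k_crit:
        v = v_free                                    # unsaturated branch
    else:
        w = v_free * k_crit / (k_jam - k_crit)        # backward wave speed
        v = w * (k_jam - k) / k                       # saturated branch
    return n * v / L

def simulate(lam, C, horizon=200.0, seed=1):
    """Return (time-average number of vehicles, blocking probability)."""
    rng = random.Random(seed)
    t = area = 0.0
    n = arrivals = blocked = 0
    while t < horizon:
        total_rate = lam + (service_rate(n) if n > 0 else 0.0)
        dt = rng.expovariate(total_rate)              # time to next event
        area += n * dt                                # integral of n dt
        t += dt
        if rng.random() < lam / total_rate:           # arrival event
            arrivals += 1
            if n < C:
                n += 1
            else:
                blocked += 1                          # lost: section at capacity
        else:                                         # departure event
            n -= 1
    return area / t, blocked / max(arrivals, 1)

if __name__ == "__main__":
    C_h = int(264.0 * 0.2 * 3)                        # Equation (1)
    EN, pb = simulate(lam=1800.0, C=C_h)
    print(f"simulated E[N] = {EN:.2f} vehicles, blocking prob = {pb:.3e}")
```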
Computational Experiments

In this section, the accuracy of the mathematical queuing model is first verified against the proposed simulation model. The two-level full factorial design approach is then used to statistically analyze the effect of various factors on the average number of vehicles on the highway section. The comparison results are summarized in Table 1. During the simulation experiments, the mean values of 30 replications are recorded and tabulated. Each simulation test is performed with a simulation time of 50,000 time units to achieve outputs at steady-state conditions. The 95% confidence interval approximations for the distribution of the mean, which follow the normal distribution, are also recorded in Table 1. All the computational experiments are carried out on a PC with an Intel® Core™ i5-4570 CPU @ 3.20 GHz and 8 GB of RAM under a Windows® operating system. The mean computational times (CPU times in minutes) are also recorded for all the experiments. As shown in Table 1, the mathematical and simulation models show a clear consistency. Therefore, the models can be used for the performance assessment of a multi-lane highway section under saturated and unsaturated conditions.

Factorial Design Approach

The two-level full factorial design approach is conducted to evaluate the significance and effect of four different factors, namely the length of the highway section, the number of lanes of the highway section, the average vehicles' arrival rate, and the jam density, on the average number of vehicles (mean queue length, EN) on the highway section. The average number of vehicles on the highway section is obtained from our proposed mathematical queuing and simulation models; the design factors and their levels are listed in Table 2. Table 3 displays the main effect estimates as well as the interaction effect estimates. The effect is the difference between the average response of a factor at its high level and that at its low level. The interaction effect between two factors (say, A*B) is defined as the mean difference between the effect of the length of the highway section when the number of lanes is at a high level and its effect when the number of lanes is at a low level [14]. The effect estimates of the factors can be obtained by multiplying the regression coefficients by a factor of 2. Table 4 shows the analysis of variance (ANOVA) of up to two-factor interactions. An ANOVA with higher-order interactions of three or four variables results in an ill-conditioned model, so these terms are discarded in accordance with the sparsity-of-effects principle, which asserts that the majority of processes are governed by a few major factors and a few low-order interactions.
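The effect definitions above can be illustrated with a small hypothetical computation (the paper used MINITAB; the response values below are synthetic placeholders, not the paper's data). It builds the coded 2^4 design, computes each main effect as the difference of level means, and checks the quoted identity that an effect estimate equals twice the corresponding regression coefficient.

```python
import itertools
import numpy as np

factors = ["A_length", "B_lanes", "C_arrival_rate", "D_jam_density"]
X = np.array(list(itertools.product([-1, 1], repeat=4)), dtype=float)  # 16 runs

# Synthetic stand-in for the average number of vehicles from the models.
rng = np.random.default_rng(0)
y = (10 + 3.0 * X[:, 0] + 2.0 * X[:, 1] - 0.3 * X[:, 2] + 0.8 * X[:, 3]
     + 1.1 * X[:, 0] * X[:, 1] + rng.normal(0.0, 0.2, len(X)))

def main_effect(j):
    """Effect = mean response at the high level minus mean at the low level."""
    return y[X[:, j] == 1].mean() - y[X[:, j] == -1].mean()

# Regression on intercept + main effects + all two-factor interactions.
cols = [np.ones(len(X))] + [X[:, j] for j in range(4)]
cols += [X[:, i] * X[:, j] for i in range(4) for j in range(i + 1, 4)]
beta, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)

for j, name in enumerate(factors):
    # With coded +/-1 levels, effect estimate = 2 x regression coefficient.
    print(f"{name}: effect {main_effect(j):+.3f} vs 2*beta {2 * beta[1 + j]:+.3f}")
```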
Effects of the Factors

From Figure 5, it can be observed that the effect estimates of the length of the highway section, the number of lanes, and the jam density are positive (positive slope), which means that they have a positive effect on the average number of vehicles (average queue length) on the highway section. The length plot shows a relatively high positive slope, whereas the jam density plot has only a somewhat positive slope. This indicates that increasing the levels of these parameters increases the average queue length. In particular, with a slight increase in the length of the highway section, i.e., from 0.1 mi to 0.2 mi, the average queue length also increases, keeping all other factors constant. In contrast, the effect estimate of the average vehicles' arrival rate is negative, which means that it has a negative impact on the average queue length over the range of arrival rates considered here. A cumulative normal probability plot of the main and interaction effects of the factors at α = 0.05 is shown in Figure 6. It is evident that not all of the components lie along a straight line. Outlier variables, shown by dark red squares, dominate the estimation of the average queue length and are considered the most dominant elements. The factors that are insignificant tend to fall along a straight line on this plot, whereas the significant main effects and interactions have nonzero means and do not lie along the straight line. We can note that the average vehicle arrival rate is not a potential outlier; consequently, the influence of the average vehicle arrival rate explored in this study (for this specific highway section) is not significant compared to the length of the highway section, which is a potential outlier. This could be explained by the fact that the average vehicle arrival rate range (1500-2000 veh/h) covered in this study is rather small. The significance of the factors can also be assessed by using a Pareto chart, as shown in Figure 7.
The factors whose horizontal bars extend beyond the dashed vertical line are significant.

Interaction Effects of the Factors

An interaction is the failure of one factor to produce the same effect on the response at the different levels of another factor. An interaction between two factors is said to occur when a change in the level of one factor alters the effect of the other factor. Conversely, insignificant factor interactions produce similar trends in the response across the different levels of another factor. The two-way (two-factor) interaction effects are shown in Figure 8, in which an insignificant two-factor interaction (B*D, p-value = 0.657) and a significant two-factor interaction (A*B, p-value = 0.012) at α = 0.05 are presented. It is clear that when the length of the highway section is increased while keeping the number of lanes at a low level, the average queue length (average number of vehicles) increases. Similarly, with an increase in the length of the highway section while keeping the number of lanes at a higher level, the queue length again increases.

Regression Model

In order to predict future observations, regression analysis captures the statistical relationship between one or more independent factors and the response factor. In this research, by considering only the main (linear) terms and the two-factor interaction terms, a multiple-factor linear regression model is obtained for the prediction of the average vehicles' queue length on the highway section. The coefficient of determination R² of the regression model is 0.976, and the adjusted R² is 0.930. The adjusted R² of 93% shows that more than 93% of the variability of the response can be explained by the factors of the regression model.
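For reference, the goodness-of-fit figures quoted above follow from the standard definitions of R² and adjusted R²; a minimal sketch (where y, y_hat, and p would be supplied from the fitted regression) is given below.

```python
import numpy as np

def goodness_of_fit(y, y_hat, p):
    """Return (R^2, adjusted R^2) for a model with p fitted parameters
    (including the intercept) fit to len(y) observations."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    ss_res = np.sum((y - y_hat) ** 2)           # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)        # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    n = len(y)
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - p)
    return r2, adj_r2

if __name__ == "__main__":
    # Tiny synthetic demonstration, not the paper's data.
    y = np.array([4.0, 6.0, 5.0, 9.0])
    y_hat = np.array([4.2, 5.7, 5.1, 9.0])
    print(goodness_of_fit(y, y_hat, p=2))
```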
Conclusions and Recommendations

In this research, we proposed a mathematical queuing model and a simulation model for the performance evaluation of a multi-lane highway section under congested and uncongested conditions using Newell's triangular flow-density relationship. This model is more realistic as it considers different traffic flow conditions. The two-level full factorial design approach was used to determine the significant and insignificant design parameters, namely the length of the highway section, the number of lanes, the average vehicles' arrival rate, and the jam density. It was observed that the length of the highway section, the number of lanes, and the jam density have a positive effect, which shows that increasing the level of these factors results in an increased value of the average queue length. The jam density has a mild positive slope, which shows that it is only slightly significant compared to the other, highly significant factors. Additionally, it was found that the majority of the higher-order interactions are insignificant, which supports the sparsity-of-effects principle. Only the two-way "length-lanes" interaction significantly affects the average queue length on the highway section. A regression model to predict the average queue length from the two-level full factorial design was developed, which takes into account various design factors and their interactions. The effects of the four factors studied are limited to a specific range of values. More research with a broader range at both low and high levels is recommended. Other sites can also be considered for performance evaluation based on the full factorial design approach. Additionally, the proposed model could be modified slightly and used for the analysis of freeways. With a few minor adjustments, the proposed model may also be used to analyze passenger flow in airport and metro station corridors.
2023-07-12T16:30:32.835Z
2023-06-07T00:00:00.000
{ "year": 2023, "sha1": "9d1fef396b538550b1028c9cc560bd36a457bd69", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/15/12/9194/pdf?version=1686108722", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "90879012b35e0c1475329212d508795128b27af1", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [] }
3729857
pes2o/s2orc
v3-fos-license
Extracellular Vesicles: Potential Roles in Regenerative Medicine Extracellular vesicles (EV) consist of exosomes, which are released upon fusion of the multivesicular body with the cell membrane, and microvesicles, which are released directly from the cell membrane. EV can mediate cell–cell communication and are involved in many processes, including immune signaling, angiogenesis, stress response, senescence, proliferation, and cell differentiation. The vast number of processes that EV are involved in, and the versatile manner in which they can influence the behavior of recipient cells, make EV an interesting source for both therapeutic and diagnostic applications. Successes in the fields of tumor biology and immunology sparked the exploration of the potential of EV in the field of regenerative medicine. Indeed, EV are involved in restoring tissue and organ damage, and may partially explain the paracrine effects observed in stem cell-based therapeutic approaches. The function and content of EV may also harbor information that can be used in tissue engineering, in which paracrine signaling is employed to modulate cell recruitment, differentiation, and proliferation. In this review, we discuss the function and role of EV in regenerative medicine and elaborate on potential applications in tissue engineering. INTRODUCTION Regenerative medicine aims at the functional restoration of a damaged, malfunctioning, or missing tissue. There are three main approaches in regenerative medicine. The first approach is cell-based therapies, where cells are administered to restore a tissue either directly or through paracrine functions. The second approach is often referred to as classical tissue engineering, and consists of the combined use of cells and a bio-degradable scaffold to form a tissue. Lastly, much progress has been made in material-based approaches, which rely on bio-degradable materials, often functionalized with cellular functions. The first development in replacing malfunctioning tissues was by transplanting organs, tissues, or cells. Over the course of the last century, vast improvements were made in the field of transplantation, starting with bone and cornea transplants at the beginning of the twentieth century, followed by the first kidney transplantation in the 1950s (1)(2)(3). As transplantation techniques for other organs developed over the following decades, the limiting factor for these procedures shifted from technical limitations to the supply of suitable organs and tissues. Besides shortage in supply, organ and tissue transplantation have another major drawback: the risk of immune rejection and the required chronic immunosuppression treatment. In response to these issues, research focused on strategies that allow functional restoration of damaged tissues by cell-free approaches or approaches using autologous cell and tissue sources. Embracing the rapid developments in technology and our understanding of biological processes, the field of regenerative medicine is focusing on a wide array of techniques and approaches to restore tissue function. Suitable approaches depend on the function and environment of the newly generated tissue. For instance, in the replacement of insulin-producing cells in patients with type-1 diabetes, there is little need for load-bearing structures, but rather for structures mimicking the extracellular matrix (ECM), like hydrogels, to retain and stimulate insulin-producing cells (4).
Heart valve replacements, on the other hand, require materials that are able to withstand large forces in addition to high flexibility (5), but due to their direct contact with a patient's circulation also require the use of materials with high hemocompatibility and low immunogenicity. Utilizing autologous stem-, progenitor-, and tissue-specific cells to restore damaged tissues may bypass the problem of immunogenic responses against these implants. Following recent insights that the structural contribution of stem cells to regenerated tissues is limited, and that rather the stimulation of local healing processes plays an important role (6)(7)(8)(9), research has increasingly focused on the paracrine hypothesis, investigating the stimulating factors released by these stem- and progenitor cells, including growth factors, cytokines, and extracellular vesicles (EV). At the same time, major breakthroughs in the field of EV have uncovered roles for EV in many processes including angiogenesis, regulation of immune responses, and ECM remodeling (10)(11)(12)(13), which may be of specific interest for tissue engineering. Here, we review the recent developments in regenerative medicine and EV research, and discuss potential therapeutic applications of EV in restoring function in damaged tissues. REGENERATIVE MEDICINE: CELL THERAPIES One of the earliest applications of cell therapy was the administration of cells for the reconstitution of blood or bone marrow (14,15). As a result of developments during the last decades, including improved techniques in both transient and permanent regulation of gene expression, methods of cell isolation and propagation, and improved protocols to regulate differentiation of cells, cell therapies currently play a prominent role in the field of regenerative medicine (16). Cell therapies can directly aid repair by forming new functional tissues, or support tissue repair through paracrine mechanisms, for instance by secreting growth factors, immunomodulatory molecules, and EV. Examples of direct tissue formation by cell therapy are the use of autologous epithelial cells to repair cornea injuries (17), the expansion and transplantation of chondrocytes in cartilage repair (18), or the administration of endothelial colony-forming cells (ECFC) in a murine hind limb ischemia model to increase neovascularization (19). In these studies, cell populations were isolated, expanded ex vivo, and re-introduced at the site of injury to generate new, functional tissues. The ex vivo expansion step allows the use of only limited amounts of tissue and the proper characterization of isolated cells. Adverse effects such as dedifferentiation and induction of senescence are great challenges inherent to this approach (20). For instance, in vitro passaging of mesenchymal stem cells (MSC) results in cell enlargement, differentiation, and decrease in proliferation within 10 passages (21), and causes a strong response to microenvironment stiffness, affecting cell morphology and function (22). Progenitor cells from aged or diseased donors show decreased proliferation, prevalence, as well as functionality (23)(24)(25). Despite these challenges, promising results have been achieved, for instance in the treatment of patients with severe autoimmune diseases with hematopoietic stem cell transplantation (26).
It has become increasingly apparent that a more supportive role, exerted by the secretion products of stem and progenitor cells, is responsible for many of the observed effects of stem cell therapies (6)(7)(8)(9). These paracrine factors secreted by stem- and progenitor cells, like growth factors and cytokines, are of major interest for the discovery of new therapeutics that stimulate local tissue regeneration, for use in tissue engineering as well [reviewed in Ref. (27,28)]. TISSUE-ENGINEERING: (BIO-)ENGINEERED SUPPORT Repair of damaged tissue requires not only the presence of cells capable of restoring the damaged structure, but also a microenvironment that promotes appropriate tissue regeneration. In addition, cells need to be guided to form a structure of the appropriate size and shape, and in many cases (for instance in bone or cartilage repair, as well as in cardiovascular substitutes) require structural support. In a healthy tissue, the ECM plays a key role in guiding and regulating these processes, whereas in damaged tissue, the ECM is often absent, damaged, or functionally impaired. To address this problem and allow in situ regeneration, structures that (temporarily) provide the requirements for cell retention and tissue regeneration are employed and are referred to as scaffolds. Scaffolds can either be of natural origin, such as decellularized ECM or modified elastin- or collagen gels, or of synthetic origin, such as synthetic hydrogels or porous polymer scaffolds. Using decellularized ECM from xenogeneic or allogeneic donors provides scaffolds that are most similar to the natural extracellular environment. The use of decellularized matrices is a promising technique, which yields biocompatible scaffolds with appropriate physical and biological properties. Many ECM components, as well as growth factors, are often conserved and can aid in proper regeneration of functional tissues (29). To decrease the risk of immune responses against antigens in these scaffolds, as well as the potential transfer of pathogens, a combination of enzymatic, physical, and chemical treatments is used to remove cellular components from the tissue (29). Decellularized matrices have been used for tissue engineering of several tissues, including heart valves (30), vascular grafts (31), and trachea (32). However, the use of decellularized matrices has several disadvantages. Acquiring and isolating appropriate tissues, followed by decellularization protocols, can be a relatively time-consuming and expensive procedure, and incomplete decellularization or antigen removal can result in immune reactions against grafts (33). Cell seeding of decellularized matrices can be technically challenging due to structural dimensions and porosity. Furthermore, control over the exact content of the matrices is limited due to donor variation, and despite pretreatment there still exists the risk of transfer of pathogens. In order to create scaffolds in a safe, reproducible, affordable, and controlled manner, extensive research is ongoing on the production of artificial porous scaffolds, exploring various production techniques and materials (34). Artificial porous scaffolds should meet specific requirements to allow homing of appropriate cell populations. Ideally, a synthetic scaffold temporarily provides the required support and micro-environment, is bio-degradable, and is eventually replaced by autologous ECM.
For cells to be able to migrate or be seeded in the scaffold, and to allow an environment with a proper supply of nutrients, a porous structure is required (35). There are several techniques to generate porous scaffolds, including solvent casting, forming emulsions before polymerization, gas foaming, as well as binding of polymeric fibers by chemical treatment or heating (36)(37)(38)(39). Using these techniques to generate scaffolds with consistent porosity in complex shapes, containing areas of varying thickness and materials, is technically challenging. Currently, the most commonly used technique for generating porous synthetic scaffolds is electrospinning, which allows the generation of constructs with complex geometry, consisting of combinations of fiber types in both mixed and layered patterns (40). Bio-degradable polymers used in electrospinning include poly(ε-caprolactone) (PCL), poly(glycolic acid) (PGA), poly(hydroxy alkanoate) (PHA), and poly(lactic acid) (PLA). Mixing fiber types in specific patterns allows modulation of the degradability, strength, and biological activity of scaffolds (41). Electrospun scaffolds can be pre-seeded with autologous cells, which may be re-programmed, differentiated, and expanded in vitro, and can then be directly implanted or incubated in a bioreactor until the electrospun meshwork is fully degraded and replaced with ECM (5). Alternatively, scaffolds can be implanted without pre-seeding, allowing in situ recruitment of autologous cells and circumventing the expensive, time-consuming, and challenging process of cell isolation and expansion in vitro. Incorporation of bio-active molecules into the scaffold may be used to recruit proper cell populations, modulate the immune response, or guide cells to differentiate (Figure 1). As illustrated in Figure 1, bio-active components incorporated into the fibers are gradually released during fiber degradation, and after electrospinning, fibers can also be pre-seeded with appropriate cell populations to induce ECM production, angiogenesis, or immunomodulation. For instance, ECM-derived peptides like the integrin recognition site peptide Arg-Gly-Asp (RGD) enhance cell adhesion and cell viability in scaffolds (42,43), whereas coating with a type I collagen-mimetic peptide enhances the migration, proliferation, and osteogenic differentiation of MSC (44). Scaffolds can also be designed to release peptides, proteins, or cytokines during degradation, or by coating fibers with a mixture of these bio-active molecules in a bio-degradable substance like fibrin or gelatin. For example, gradual release of vascular endothelial growth factor (VEGF), a hypoxia-regulated growth factor that plays a key role in angiogenesis, and platelet-derived growth factor (PDGF) promoted endothelialization and smooth muscle cell ingrowth in electrospun scaffolds (45). Release of stromal cell-derived factor (SDF)-1α, a chemokine that is up-regulated in tissue damage and hypoxia, attracts hematopoietic stem cells, and induces endothelial progenitor cell (EPC) recruitment, by electrospun poly(lactic-co-glycolic acid) (PLGA) scaffolds reduced mast cell degranulation, increased angiogenesis, and decreased fibrosis (46). Coating of interposition grafts with SDF-1α combined with the ECM component fibronectin (47), or treatment with VEGF (48), has been reported to enhance graft endothelialization.
Many of the bio-active compounds used in these approaches act as paracrine factors in natural healing processes, or are based on the secretome of stem- or progenitor cell populations that induce local tissue regeneration in vivo (27). EV constitute a part of the secretome that also plays an important role in the local induction of tissue regeneration. For example, the cardioprotective effects of conditioned medium from MSC in ischemia/reperfusion injury were shown to be mainly mediated by EV (49). Given the previous successes of paracrine factors in tissue engineering, these mediators of intercellular communication could also be of interest in the field of regenerative medicine. EXTRACELLULAR VESICLE CHARACTERISTICS Extracellular vesicles are lipid membrane vesicles, containing a variety of RNA species (including mRNAs and miRNAs), soluble (cytosolic) proteins, and transmembrane proteins presented in the appropriate and functional orientation (50)(51)(52). EV play a role in many processes, including intercellular communication, recycling of membrane proteins and lipids, immune modulation, senescence, angiogenesis, and cellular proliferation and differentiation (10,13,(52)(53)(54)(55)(56). Cells release several types of vesicles with different physiological properties, content, and function, as a result of their different mechanisms of generation; these include exosomes, microvesicles, and apoptotic bodies (57). In the EV research community, a full consensus in terminology and classification of vesicles is yet to be achieved (58). In the past, vesicle nomenclature was mainly based on the tissue of their origin. More recently, the field has started to shift toward a terminology that focuses rather on the mechanisms of generation of these vesicles. Vesicles in the first category, exosomes, originate in multivesicular bodies (MVB) (Figure 2, left). When MVB fuse with the plasma membrane, the intraluminal vesicles are released from the cell and are from then on referred to as exosomes. Exosomes are reported to be between 40 and 150 nm in size, with a density ranging from 1.09 to 1.18 g/ml. The most common markers used are tetraspanins such as CD9, CD63, CD81, and CD82, the lipid raft markers Flotillin-1 and -2, as well as Alix and Tsg101. Other markers that are used are heat shock proteins, MHC molecules, various components of the ESCRT complex, and proteins of the Rab protein family (50,51,(59)(60)(61). Microvesicles are shed directly from the plasma membrane and can be considerably larger than exosomes (50-1000 nm) (62). There is, however, an overlap in size between these two populations. Microvesicles also contain mRNAs and miRNAs, as well as soluble and transmembrane proteins. Like exosomes, microvesicles are able to transfer functional genomic and proteomic content to target cells (63,64). Apoptotic bodies originate at the cell membrane as cells undergo apoptosis. Even though these vesicles are of interest in biomarker research, and have been shown to have effects on other cells, research on these vesicles in intercellular communication is limited (65)(66)(67). Furthermore, vesicular cell-derived microparticles with biological functions have been described (68)(69)(70). However, most descriptions of microparticles are heterogeneous with regard to the isolated biomaterials or refer to characteristics of non-cell-derived compounds, and depending on the protocols used these microparticles may contain exosomes, microvesicles, apoptotic bodies, or varying combinations of these vesicle populations.
Generally, the term EV is used when discussing exosomes or microvesicles, or a combination of these vesicle populations, depending on the isolation techniques. However, due to the technical limitations of current isolation techniques, samples may occasionally also contain apoptotic bodies and protein aggregates. The first report of a cellular function of exosomes was the shedding of the transferrin receptor by maturing reticulocytes (55,71). Pan and Johnstone showed that removal of this receptor from the cell membrane occurred through endocytosis, followed by the formation of intraluminal vesicles (forming the MVB), which were released when the MVB fused with the cell membrane. After this discovery, it was believed that the exosome pathway was mainly involved in cell homeostasis, by secreting cellular waste (72). Not until a study by Raposo et al. showed, for the first time, an immunological role for exosomes (the stimulation of CD4+ T-cells by EBV-transformed B-cells in an antigen-specific manner) did researchers begin to explore additional functions (12). Primarily studied in the context of immunology, exosomes were increasingly considered potential mediators of intercellular communication. However, it was only after the discovery that exosomes are able to transfer functional mRNAs and miRNAs from one cell to another that the field gained its full momentum (52,73). Microvesicles have also been reported to transfer functional mRNAs and miRNAs to cells (66,67). Extracellular vesicles can communicate with target cells through several mechanisms (Figure 2, right). Firstly, transmembrane proteins on the EV membrane can interact with receptors on the cell membrane. These receptor-ligand interactions can then activate signaling cascades to affect target cells. EV can also fuse with their target cells to release their cargo, either by direct fusion with the cell membrane or after endocytosis, upon which mRNAs, miRNAs, and proteins are released into the cytosol. After fusion, mRNAs transferred by EV can be translated into protein, and delivered miRNAs inhibit mRNA translation and affect cellular processes. The cargo and function of EV depend on their producing cells, and it has been shown that cellular stress also affects EV content, suggesting that intercellular communication through EV is a dynamic system, adapting its "message" depending on the condition of the producing cells (50)(51)(52)74). EXTRACELLULAR VESICLES IN REGENERATIVE MEDICINE Extracellular vesicles are able to affect cell phenotype, recruitment, proliferation, and differentiation in a paracrine manner. These paracrine effects of EV have a potential benefit in regenerative medicine. EV can be incorporated in regenerative therapies, for example by (co-)injection, mixing with hydrogels, or coating scaffolds with EV using fibrin gels or specific linkers (Figure 3). Here, we will discuss the role of EV in essential processes in regenerative medicine: cell viability, immune responses, ECM interaction, and angiogenesis. CELL SENESCENCE, VIABILITY, AND PROLIFERATION Prevention of cell death and cell senescence is vital in optimizing the efficiency of regenerative medicine, both in cell therapies as well as in tissue engineering (75). Cell senescence depends on both the cell source and the environment to which cells will be introduced.
Bone marrow-derived MSC from aged donors show increased senescence and decreased proliferative potential (76,77), and uremic toxins promote cell senescence (78). Pretreatment of progenitor cells such as MSC affects cell senescence as well. For example, long-term in vitro expansion of MSC induces senescence and reduces differentiation potential (79,80). Extracellular vesicles may affect cell senescence, proliferation, and cell survival. We recently demonstrated that endothelial cell-derived exosomes induced angiogenesis by inhibition of cellular senescence, and that transfer of miR-214 downregulated ataxia telangiectasia mutated (ATM) expression in recipient cells, resulting in decreased cellular senescence (13). Human umbilical cord MSC-derived microvesicle treatment suppressed cisplatin-induced apoptosis, and resulted in increased cell proliferation through regulation of the ERK 1/2 and MAPK pathways, both in vitro and in vivo (81). EV derived from human cardiac progenitor cells contain the anti-apoptotic miRNAs miR-210 and miR-132, and treatment with these EV in a myocardial infarction model resulted in decreased cardiomyocyte apoptosis (82). Similarly, bone marrow MSC-derived exosomes were able to decrease apoptosis and increase cell proliferation in an acute kidney injury model, and the authors hypothesized that this was the result of exosome-mediated RNA transfer (83). Similar results were obtained by Bruno et al., who showed that administration of MSC-derived microvesicles decreased apoptosis in an acute kidney injury model and in vitro in cisplatin-treated human epithelial cells, through up-regulation of anti-apoptotic genes and down-regulation of several apoptotic genes (84). Further in vitro studies showed that cardiomyocyte protection by MSC is partially mediated by transfer of miR-221 in microvesicles, resulting in reduced caspase activity after ischemic injury (85). Certain EV have also been shown to increase cell proliferation. Tumor-derived EV were reported to induce proliferation in a variety of tissues (86)(87)(88). MSC-derived EV have also been found to increase proliferation: bone marrow MSC-derived exosomes induced proximal tubular epithelial cell proliferation in an acute kidney injury model (89), and umbilical cord MSC-derived exosomes increased in vitro skin cell proliferation as well as migration after heat stress, through Wnt signaling by trafficking of Wnt4 (90). Interestingly, Zhang et al. also observed that treatment with these vesicles in a rat skin burn model resulted in accelerated epithelialization (90). Exosomes derived from tubular epithelial cells stimulated with hypoxia activated fibroblasts through TGF-β1 signaling, resulting in increased fibroblast proliferation, which could aid in the acceleration of tissue repair (91). These studies indicate that EV play a role in local tissue repair through regulation of cell proliferation. The capacity of EV to regulate cell senescence, apoptosis, and proliferation, parameters that greatly affect tissue engineering and cell therapy outcomes, suggests therapeutic potential in regenerative medicine. Indeed, MSC-derived vesicles show positive effects on tissue repair through various pathways, even reducing apoptosis as a result of ischemic injury (92). This is of interest, since ischemia in larger tissue-engineered constructs is a substantial issue (93). ANGIOGENESIS Tissue engineering of large tissues requires proper vascularization for a sufficient supply of nutrients and oxygen, and drainage of cellular waste.
Since tissue-engineered constructs thicker than 100-200 µm already run into problems with respect to oxygenation, nutrient supply, and removal of waste products, controlled vascularization of neo-tissue is vital (93). Strategies to induce vascularization include the addition of endothelial (progenitor) cells, engineering vasculature, as well as the use of paracrine factors (93)(94)(95). Several studies on cancer-derived EV demonstrated their role in tumor angiogenesis through a variety of pathways, including cell cycle-related mRNAs, several major intracellular kinase pathways, transfer of miRNAs, and by carrying pro-angiogenic cytokines (96)(97)(98)(99)(100). EV from endothelial cells have also been demonstrated to induce an angiogenic program in target endothelial cells in vitro and in vivo, both through Notch-dependent tip-cell formation and induction of a pro-angiogenic program in parallel to miR-214-dependent repression of senescence (13,101). EV from other cell types have been demonstrated to stimulate in vitro and in vivo vessel formation by endothelial cells as well. For example, adipose MSC-derived EV, which could be increased in function and number by PDGF stimulation (102), as well as bone marrow MSC-derived EV, promoted angiogenesis in a rat myocardial infarction model (103). In the latter model, hypoxic stimulation of the EV-producing cells was required to obtain functional EV. Similar effects of hypoxia were observed in microvesicles from human umbilical cord MSC, which promote angiogenesis in vitro as well as in vivo in a rat hind limb ischemia model (103,104). These findings underline the influence of the culturing conditions of the producing cells on EV content (74). Cantaluppi et al. showed that EPC-derived microvesicles increase endothelial cell proliferation, migration, and vessel formation in vitro by transfer of the pro-angiogenic miRNAs miR-126 and miR-296. These EPC microvesicles also increased vascularization of islet endothelium and β-cells transplanted in SCID mice (105) and, in a SCID mouse hind limb ischemia model, increased capillary density, enhanced limb perfusion, and reduced injury after 7 days (106). A study by Sahoo et al. in 2011 showed that exosomes isolated from CD34+ mononuclear cells increased endothelial cell viability, proliferation, and tube formation in vitro, and stimulated angiogenesis in vivo in both Matrigel plug and corneal assays, and that the pro-angiogenic effect of these cells was mainly mediated through these EV (107). Overall, different types of EV appear to be able to induce angiogenesis through a variety of pathways, and through transfer of mRNAs, miRNAs, and proteins, underlining their potential in tissue engineering. EXTRACELLULAR MATRIX INTERACTIONS The ECM plays a major role in tissue engineering, providing shape and strength to the newly formed tissue as well as a site for interactions with and guidance of cells. Both ECM architecture and molecular composition are determinants for cell recruitment, retention, and differentiation, and thus the final local cell phenotype. In tissue engineering strategies using bio-degradable scaffolds, the load-bearing and cell-retaining function of the scaffold will have to be fulfilled by the locally produced ECM after the scaffold is degraded. EV are able to influence ECM composition through direct ECM interactions, or by interacting with ECM-producing cells. Extracellular vesicles express adhesion molecules, including members of the immunoglobulin superfamily and integrins.
Exosomes derived from B-cells, endothelial cells, and dendritic cells express ICAM-1 (74,108,109), and endothelial cell-derived exosomes express CD44, CD166, PECAM, and B-CAM (74). Reticulocyte-derived exosomes have been shown to bind to fibronectin via integrin α4β1 (110). B-cell-derived exosomes contain β1 and β2 integrins, which were able to bind to collagen-1, fibronectin, and TNF-α-activated fibroblasts (108). Exosomes derived from dendritic cells have also been reported to contain integrins (111). These studies show that EV may not only bind to and interact with cells, but also bind to various ECM components. It has been suggested that EV could adhere to the ECM to form a gradient or potential reservoir that could be released in case of inflammation or ECM degradation (108). Besides molecules responsible for ECM interaction, EV have also been shown to express ECM-remodeling proteins, like matrix metalloproteinases (MMPs), which can degrade collagens, elastin, fibronectin, and laminin. These processes are important in ECM re-structuring, as well as cytokine release, angiogenesis, and cell migration (112,113). For example, human fibrosarcoma and melanoma cell-derived exosomes contain both full-length and proteolytically processed MMP14, shown to be enzymatically active since these exosomes activated pro-MMP2, resulting in the degradation of both collagen-1 and gelatin (114). Cardiomyocyte progenitor-derived exosomes expressed enzymatically active MMP2, as well as the MMP activator EMMPRIN (115). EMMPRIN has also been found on CD8+ T-cell microparticles, which have been shown to induce fibrolytic activation in hepatic stellate cells (70). Madin-Darby canine kidney (MDCK) cells that have undergone epithelial to mesenchymal transition (EMT) showed an increase in MMP1, -14, and -19 expression in their exosomes, as well as several integrins (116). Additionally, EV can also stimulate MMP production in target cells. Keratinocyte-like cells are able to stimulate MMP1 expression in dermal fibroblasts through transfer of several 14-3-3 isoforms by EV (117). Furthermore, monocyte- and T-cell-derived microparticles are able to induce production of MMP-1, MMP-3, MMP-9, and MMP-13 in fibroblasts (68). Thus, EV can influence MMP abundance and activity on several levels. Extracellular vesicles also have the ability to contribute to ECM strength. Members of the lysyl oxidase family crosslink collagens and elastin, increasing ECM load-bearing properties. Lysyl oxidase treatment of tissue-engineered cartilage constructs results in increased stiffness and enhanced cartilage integration, and lysyl oxidase-like 2 induces angiogenic sprouting through interaction with collagen-4 in the basal membrane (118,119). Lysyl oxidase was shown to be enriched in exosomes derived from hypoxic glioma cells (98), and lysyl oxidase-like 2 in those from endothelial cells (74). Interestingly, exosomes from hypoxic endothelial cells also showed increased abundances of the ECM components fibronectin, collagen-4 and -12 subunits, and perlecan, suggesting a hypoxia-mediated role in focal ECM modification by exosomes (74). EV are also able to affect local ECM production. Borges et al. found that upon hypoxic stimulation, epithelial cells stimulate fibroblasts through exosome-mediated TGF-β1 signaling, resulting in increased collagen-1 production (91), suggesting an exosome-mediated response resulting in local tissue repair.
The effects of EV on both ECM production and remodeling could be of use in the steering of in situ ECM formation. IMMUNOMODULATION Modulating immune responses is vital in tissue engineering. The type and severity of the immune response against an implant depend on several factors, including injury from surgery, the (bio)materials used, the location of the graft, and the condition of the patient (120). An excessive or inappropriate immune response could result in damage, encapsulation, or rejection of a tissue-engineered construct. On the other hand, immune responses are potent triggers for regenerative processes, including cell recruitment, proliferation, and angiogenesis, which are key to the success of in situ tissue engineering (121). When transplanting a tissue-engineered construct, the innate immune response consists of an acute and a chronic phase. The acute immune response is an immediate reaction against foreign structures, such as certain (bio)materials. An influx of neutrophils and macrophages induces the release of inflammatory cytokines, which results in local inflammation and the recruitment of additional immune cells. Cross-talk between macrophages and T-cells, as well as environmental cues, regulates a shift in macrophage subtypes into either the M1 (inflammatory) or the M2 (anti-inflammatory, regenerative) subtype (122). M1 macrophages promote recruitment of inflammatory immune cells, and release ECM-degrading proteins to allow quick migration through inflamed tissues. As the subtype of macrophages shifts to M2, pro-inflammatory cytokine release is inhibited, angiogenic stimulation is increased, and local fibroblasts are activated in order to produce and restore the ECM. Long-term inflammation results in a foreign body response (FBR), in which case a foreign tissue is encapsulated by a fibrous, barely vascularized, scar-like connective tissue (123). An antibody-mediated immune response against allografts or tissues seeded with non-autologous cells could result in rejection of a graft. These findings underline the importance of tuning the immune response in tissue engineering: sufficient to induce vascularization, cell recruitment, and ECM production, while preventing fibrosis, tissue damage, and FBR. The modulatory role of EV in innate immune responses could prove beneficial in tissue engineering. MSC-derived exosomes induced an M2-like phenotype in monocytes in vitro, resulting in polarization of activated CD4 T-cells to regulatory T-cells (124). Additionally, tumor-derived exosomes have been shown to induce a shift toward an activated M2 phenotype (125), as well as an M1 phenotype (126). Furthermore, EV can play a role in the suppression of allograft rejection. Autologous regulatory T-cell-derived exosomes postponed allograft rejection in a rat kidney transplantation model (92). Immature dendritic cell-derived exosomes induced allograft tolerance in a cardiac allograft mouse model (127), as well as in a rat intestinal transplantation model (128), by increasing regulatory T-cell populations. Mesenchymal stem cells themselves have been a tool of interest for their immunosuppressive capacities, inhibiting B- and T-cells, natural killer cells, macrophages, and dendritic cells (129)(130)(131).
Accordingly, MSC-derived exosomes promote secretion of anti-inflammatory cytokines and contain an array of tolerogenic molecules (132), and administration of MSC-derived exosomes in a myocardial ischemia/reperfusion injury model showed a significant reduction of local and systemic inflammation after 24 h (133). In a renal ischemia-reperfusion model in rats, MSC-derived microvesicles administered into the caudal vein inhibited inflammation as well as renal fibrosis (134). Indeed, a systematic literature study of MSC-derived EV revealed that the modulation of immune responses, as well as the repair of organ injury and suppression of tumor growth in preclinical studies, shows therapeutic potential (135). The potential immunomodulatory role of EV may be relevant for regenerative medicine by steering vascularization, cell recruitment, and ECM formation, as well as by preventing tissue damage and FBR. EXTRACELLULAR VESICLES POTENTIAL All in all, EV show great potential for a role in regenerative medicine because of their role in cell recruitment, differentiation, and immunomodulation (Table 1). Many of these functions of EV may also be combined with other regenerative strategies, as their effects on nutrient and oxygen supply, immune responses, and cell viability and senescence may benefit the efficacy of approaches in regenerative medicine, such as cell therapies or in situ tissue engineering (27,75,93). Given the role of EV in processes that greatly affect tissue regeneration, further studies of EV-mediated paracrine signaling and the exploration of new methods to utilize EV or components thereof are warranted and may lead to the discovery of novel regenerative therapeutics, as well as methods to improve current techniques. APPLICATIONS OF EXTRACELLULAR VESICLES IN REGENERATIVE MEDICINE Even though the existence of EV was discovered decades ago, interest in their role as paracrine factors was sparked only relatively recently. Much remains unknown about the pathways that determine the content of EV, and many tissue-specific functions of EV remain to be uncovered. Future studies will provide new insights into EV function and biogenesis, and reveal the roles of proteins and miRNAs in EV function. EV are important components of the secretome involved in intercellular communication, whose content and function can change depending on the conditions of the vesicle-producing cells (74,91,(102)(103)(104). Therefore, changes in EV content upon stimulation of producing cells with conditions relevant to development, tissue regeneration, and wound repair may reveal new pathways and insights into intercellular signaling that play key roles in these conditions. Altogether, these qualities make EV an interesting target for the potential discovery of new therapeutics in regenerative medicine. EXTRACELLULAR VESICLES AS THERAPEUTICS Extracellular vesicles from specific cell types and conditions have positive effects on regeneration in many tissues (136). It has also been observed that certain EV display multiple functions. For instance, MSC-derived EV are able to steer cell viability, proliferation, angiogenesis, and immune responses (81-83, 103, 104, 124, 132). Harnessing the paracrine effects of stem- and progenitor cells without having to administer living, replicating, potentially pluripotent cell populations is an advantage in regard to safety, regulation, and complexity. However, there are challenges to overcome.
The current gold standard in the isolation of functional EV remains ultracentrifugation (58), which is a time-consuming and costly procedure that requires a large number of cells. Although faster commercial reagents are available, which isolate higher yields of EV, these products still require optimization in specificity, as they have been reported to also precipitate non-EV contaminants such as lipoproteins (137). Despite decades of research, EV cargo trafficking pathways have not been completely elucidated, and therefore control over the content of EV, and over unspecific additional effects, is limited. Research into both the biogenesis of EV and techniques for engineering artificial alternatives to EV is therefore warranted. EXTRACELLULAR VESICLES MODIFICATION The concept of developing synthetic alternatives for EV is motivated by the challenges described above: the ability to form synthetic EV would allow control over these elements, which would facilitate clinical translation. The approach could vary from modulation of biological EV synthesis to a purely synthetic production method. In the first approach, the EV are still harvested from cells, but the producing cells have been engineered to enrich EV with tags or therapeutic molecules. Incorporated tags could be used to assist in EV purification, or for targeting toward specific tissues, cells, or synthetic scaffolds. Also, the therapeutic payload can be enriched by overexpression of specific RNAs or proteins (138,139). More control over EV content can be achieved by a semi-synthetic approach, which is based on techniques used in the field of therapeutic enveloped viruses. Here, the viral envelope is solubilized in a detergent with a high critical micelle concentration. As a result, the proteins and lipids that are part of the envelope are present in micelles that can be separated from the viral capsid. By removing the detergent, the envelope is reconstituted, and "virosomes" are formed (140). Translating this approach to EV may improve control over the composition of the bilayer, which can additionally be enriched with desired molecules; likewise, the reconstitution step offers full control over the encapsulated (therapeutic) compounds in the aqueous core. At the same time, the naturally encapsulated molecules are removed. SYNTHETIC EXTRACELLULAR VESICLES The semi-synthetic approach still relies on the biological production of vesicles. The power of synthetic strategies lies in the scalability of the process. The minimal EV mimic is already on the market and is known as the liposome (141). Liposomes consist of a phospholipid bilayer around an aqueous core, and have been investigated as therapeutic delivery systems over the last 40 years. Therapeutic liposomes tend to be around 100 nm in size and have a lipid composition that allows them to circulate for prolonged periods in the blood stream. Generally, therapeutic liposomes are prepared in batches that vary from liters to hundreds of liters in size, with a colloidal stability of several years, even in solution. Still, the translation of liposome technology to mimic EV has some obstacles to overcome. For instance, the lipid and protein composition of EV, which may be important for their cellular interactions, is often complex, and the current production process of liposomes involves simple synthetic lipid mixtures without other components within the bilayer.
However, liposomes have been successfully equipped with targeting ligands (such as antibodies) and a variety of therapeutic payloads, including biologicals (142). These characteristics are several orders of magnitude away from the current state of the art in the EV field, but they do illustrate the potential value of synthetic EV.

CONCLUSION
Over the past decades, it has been shown that EV play a regulatory role, and have modulatory potential, in many biological processes. EV show great potential for therapeutics, biomarker research, and even alternatives to stem-cell-based therapies that rely on paracrine effects. These new approaches have great potential for the support of endogenous repair, including enhancement of existing regenerative medicine approaches. This potential merits further research into EV themselves, as well as the study of new techniques to produce and utilize engineered EV.
Wear issues of pumping units

The article presents the possibility of increasing the service life of pumping units. Particular attention is paid to the regulation of the rotation speed of the pumping unit. There are some assumptions in the mathematical model that do not affect the final result. The factors influencing the operating mode are given. It is indicated that the rotation speed of the pump shaft significantly affects the wear resistance of the pump blades. Thus, regulation of the pump rotation speed will rationally increase its service life.

Introduction
The efficiency of the technological lines of mining and metallurgical enterprises is largely determined by the stability of the operating modes of centrifugal pumps. However, during operation, insufficient durability of the pumps is noted, which is caused by increased hydroabrasive wear. This disadvantage is especially evident when processing highly abrasive soils. To assess the effective operation of a centrifugal pump, an indicator is required that considers the technical condition of the mechanism, its ability to meet technological requirements, and the economic feasibility of operating it in a controlled mode. This indicator can be the amount of wear of pump parts. During operation, pump parts (blades, armored discs, etc.) are in contact with soil particles, which leads to intense hydroabrasive wear [1-4].

Materials and Methods
Hydroabrasive wear is understood as the destruction of parts of the flow path of hydraulic machines due to the mechanical action of solid particles of the working fluid. In the process of destruction, the shape and linear dimensions of the parts change. Therefore, concerning turbomechanisms, the total destruction during abrasive wear manifests as a decrease in the volume and weight of the pump parts [1-5]. Destruction occurs due to continuous collisions of solid particles transported by the flow with the part's surface. At the moment of collision, the kinetic energy of the moving particle is converted into the work of deformation of the material of the streamlined part. In the case of residual deformations, particles of the surface layer are separated from the bulk of the part, leaving a trace with significant roughness owing to the nature of the impact, the crystal structure, and the inhomogeneity of the metal. The countless collisions of solid particles transported by the flow with the surface of the part, even if they only cause elastic deformations of the material, also ultimately lead to surface destruction due to metal fatigue [6-9].

Due to the wear of the impeller of the dredge pump, the size of its blades decreases, and the impeller stops developing the design head and productivity. Thus, the pressure characteristics change and the efficiency of the pump decreases; because the gaps and losses increase and the effective cross-section of the working cavities decreases, electricity consumption for pumping liquid grows. As a result, a moment comes when the turbomachine's parameters cease to meet the technological requirements. According to research results [10-12], if pump parts have lost 20-25% of their original weight due to wear, their replacement or restoration is required. According to [13, 14], on average, for every 1000 m³ of soil processed by hydromechanization, about 0.75 kg of metal of pump parts is lost due to wear. The cost of wear comprises the cost of the parts and the cost of downtime for the dredger.
For example, as experience has shown, the limiting wear of the 8Gr-8 dredge pump causes a decrease in pressure by 10-15% and in productivity by 30% or more. As the pump wears out, its operating characteristic changes. This is mainly due to changes in the hydraulic and volumetric losses in the pump, which result from changes in the size of the main organs of the dredge pump during wear. The development of imbalance of the impeller from uneven wear of its blades causes an increase in power consumption. The main consumable spare parts are impellers (45-50%), housings (25%), and armored discs (less than 25%).

As a result of the analysis of sources, it was found that the wear rate of dredge pumps is not the same in different periods of operation, which can be divided into three:
- the running-in period of parts, a short period characterized by intense wear;
- the period of normal operation of the unit, during which wear occurs relatively slowly;
- the period of the most intensive wear, during which wear progresses relatively fast.

In this regard, it can be concluded that the wear rate at the beginning of operation is relatively low; then, due to the weakening of the stressed state of the surface layer and the appearance of microcracks, it increases significantly. This allows us to assume that the rate of wear of the mineral-polymer coating depends on the degree of damage.

Results and Discussion
Numerous scientific research works indicate the possibility of a significant increase in the wear resistance of turbine mechanisms by making their working bodies from special materials and by improving the designs and shapes of the pump channel so that they can withstand the effects of abrasive wear. The wear of pump parts is complicated by the presence of many additional factors, for which it is currently impossible to select exact mathematical relationships. The continuous change and pulsation of velocities and pressures during the flow of the slurry through the elements of the flow path of the turbine mechanism, the separation of the flow into several separate streams, the uneven distribution of velocities over the sections, the presence of sharp turns, the inhomogeneity of the composition of suspended particles, and variable operating modes all extremely complicate the actual picture of hydroabrasive wear.

The wear rate of the pump unit is influenced by several factors, namely the size, hardness, and relative speed of the soil particles; the angle at which soil particles meet the wearing part; the pulp flow density; and the wear resistance of the part material, i.e.

E = f(υ, ρ, σ, H_p, H_a, k, α),   (1)

where υ is the pulp flow rate, m/s; ρ is the pulp density, kg/m³; σ is the ultimate strength of the wearing part; H_p is the hardness of the wearing part, Pa; H_a is the hardness of particles in the pulp, Pa; k characterizes the design of the pump parts in contact with the slurry; and α is the angle of attack of the pulp flow velocity vector.
The operation of pumping units in the hydrometallurgical industry shows that the amount of wear depends on the following factors:
- with an increase in the particle diameter, the wear rate first increases rapidly, then grows slowly in the size range from 1-1.5 to 20-25 mm, and after the particle size exceeds 25 mm, a rapid increase in intensity begins again;
- with increasing roundness of the particles, the wear rate first drops rapidly and then slowly;
- with an increase in the hardness of the particles, the wear rate at first grows slowly; when the hardness of the soil particles becomes equal to the hardness of the material of the part, the wear rate grows very quickly, but after the hardness of the soil particles becomes significantly greater than the hardness of the part, the wear rate stops increasing;
- the wear rate is inversely proportional to the wear resistance of the part material;
- the wear rate increases very quickly with an increase in the particle speed, and the increase in wear rate is proportional to the third power of the speed;
- with an increase in the pulp density, the wear rate initially increases very quickly, but from a certain moment, depending on the size of the pumped material, the density ceases to affect the wear rate;
- the intensity of wear, depending on the angle at which the pulp flow meets the surface of the part, first increases up to a certain angle and then begins to fall.

Of all the listed factors (1) affecting hydroabrasive wear, the ones significant from the point of view of influencing the wear process by choosing a rational mode of operation of the pumps are the density ρ of the pumped slurry (kg/m³) and the slurry flow rate υ (m/s). The density of the pulp flow is determined by the requirements of the technological process and is 1400 kg/m³. The experience of slurry pumping shows that, in the practice of operating the slurry pumping system, changes in the density of the pumped slurry are possible. As a result, the system starts to operate in a transient mode with changed parameters. Calculations show (Figure 1) that with an increase in the density of the pulp, the power consumption can exceed the permissible value for a very long time, while with a decrease in the density of the pulp, the speed of the pulp in the pipeline may decrease with the danger of blocking the pipeline. Also, increased pulp density is accompanied by increased wear, but the proportion of crushed ore pumped out increases compared to low-density pulp pumping as a function of time.

In variable-mass hydraulics, it is proved that the equations of dynamics of variable mass can be replaced by a single equation

d(mu)/dt = F_н − F_tr,   (2)

where m is the total mass of slurry in the pipeline, m = s[ρ₁(L − l) + ρ₂l]; u is the average flow rate; t is time; s is the cross-sectional area of the pipeline; L is the length of the pipeline; F_н is the force causing the slurry to move due to the pump head; F_tr is the force of resistance to the movement of the slurry in the pipeline; and l is the distance from the beginning of the pipeline to the moving boundary of the pulp density change. The forces F_н and F_tr are expressed through the pump head and the pressure loss in the pipeline. Then we get

s[ρ₁(L − l) + ρ₂l] du/dt = s(H_n − H_tr1 − H_tr2),   (3)

where H_n is the pump head as energy per unit volume of liquid; H_tr1 is the pressure loss on the part of the pipeline of length (L − l) filled with pulp of density ρ₁; and H_tr2 is the same for the rest of the pipeline length with slurry of density ρ₂.
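To make the transient picture of Eqs. (2)-(3) concrete, the force balance can be integrated numerically as the density boundary travels down the pipeline. The sketch below is illustrative only: the pump curve, the Darcy-Weisbach friction model, and all parameter values are our assumptions, not data from the article.

```python
import numpy as np

# Illustrative parameters (assumed, not from the article)
s = 0.05        # pipeline cross-section, m^2
L = 500.0       # pipeline length, m
rho1 = 1200.0   # density of slurry already in the pipeline, kg/m^3
rho2 = 1400.0   # density of slurry entering at the pump, kg/m^3
g = 9.81
lam, D = 0.03, 0.25      # assumed Darcy friction factor and pipe diameter, m
H0, kq = 60.0, 200.0     # assumed pump curve H_n = H0 - kq*Q^2, head in m

def accel(u, l):
    """du/dt from Eq. (3) for a horizontal pipeline (static lift ignored)."""
    Q = s * u
    H_n = H0 - kq * Q**2                          # pump head, m
    H_tr1 = lam * (L - l) / D * u**2 / (2 * g)    # losses, old-density segment
    H_tr2 = lam * l / D * u**2 / (2 * g)          # losses, new-density segment
    m = s * (rho1 * (L - l) + rho2 * l)           # total slurry mass in pipe
    # heads (m) converted to pressure forces acting on cross-section s
    F_net = s * g * (rho2 * H_n - rho1 * H_tr1 - rho2 * H_tr2)
    return F_net / m

# explicit time stepping while the density front advances with the flow
u, l, dt = 3.0, 0.0, 0.05
for step in range(int(600 / dt)):
    u += dt * accel(u, l)
    l = min(L, l + dt * u)
    if step % 2000 == 0:
        print(f"t={step*dt:6.1f} s  u={u:5.2f} m/s  front at {l:6.1f} m")
```

Running this reproduces the qualitative behavior described next: a temporary flow-rate overshoot while the denser slurry has reached the pump but not yet filled the pipeline.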
The analysis of the operating mode with changed pulp density shows that when the boundary of the pulp density change reaches the pump, its head increases due to the increase in the pulp density in the pump. However, the pipeline at this time is still filled with slurry of lower density, and the pressure loss in the pipeline has not yet significantly increased. As a result, there is a temporary excess of the pump head over the head loss in the pipeline and, consequently, a temporary increase in the slurry flow rate. Later, as the boundary of the density change moves along the pipeline, the pressure loss increases and the flow rate of the pulp begins to decrease. In the transient mode caused by a decrease in the density of the pumped pulp, a similar temporary decrease in pulp flow occurs against the background of its general increase. Such a temporary decrease in flow rate can lead to a decrease in the pulp velocity below the critical value.

Studies of the kinematics of movement of large abrasive particles in the pump have shown that their relative velocity at the moment of impact on the leading edges of the blades is close in magnitude to the peripheral velocity of the latter. Naturally, to increase the service life of the blades, it is necessary to reduce the peripheral speed of their leading edges and, first of all, of the part that adjoins the rear disc of the wheel. The peripheral speed of the leading edges of the blades can be reduced in two ways: by reducing the number of revolutions of the impeller, or by displacing the considered elements in the design onto a circle of smaller diameter. The first of these methods relates to the operating mode of the pumps, and the second to the design of the impellers. We consider the first method, in which the stabilization of operating modes of pumping units with dredge pumps is mainly associated with the possibility of regulating their operating modes.

From expression (1) it follows that wear is determined by several interrelated factors. The most significant, from the point of view of the possibility of influencing the wear process, is the pulp flow rate υ. The wear process can be influenced by adjusting the pump shaft speed. It is known that the slurry flow rate is a function of the impeller speed, i.e. υ = f(n), and the pump wear rate is proportional to the cube of the pump speed:

E = An³,   (6)

where A is the proportionality coefficient. In connection with (6), the main direction in increasing the wear resistance of pumps is to reduce the rotation speed of the pump shaft. Our studies of pumping equipment operation showed that the 8Gr-8 dredge pumps used are subject to significant wear, and their service life without replacing the corundum coating is 400-600 hours. Figure 2 shows the dependence of the wear time on the rotation speed of the shaft of the 8Gr-8 pump. Calculations show that when the pump operates at the minimum rotation speed that still provides the required slurry rise, the pump service life increases more than 6-fold. Taking into account the control range, we determine the minimum rotation speed n_min [15, 16]:

n_min = n_nom √(H_st / H_1),   (8)

where H_st is the static head, m; H_1 is the head developed by the pump at n_nom, m; and n_nom is the rated rotation speed, rpm.
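A quick numerical reading of Eqs. (6) and (8) is given below. This is a minimal sketch: the head values are assumed for illustration, and the resulting multiplier depends on the actual pump curve.

```python
import math

n_nom = 980.0           # rated shaft speed, rpm (assumed for illustration)
H_st, H_1 = 18.0, 65.0  # static head and head at rated speed, m (assumed)

# Eq. (8): minimum speed that still overcomes the static head
n_min = n_nom * math.sqrt(H_st / H_1)

# Eq. (6): wear rate E = A*n^3, so service life scales as (n_nom/n)^3
life_gain = (n_nom / n_min) ** 3
print(f"n_min = {n_min:.0f} rpm, service-life multiplier = {life_gain:.1f}x")
# With these assumed heads the multiplier is about 6.9x, of the same order
# as the >6x increase reported for the 8Gr-8 pump.
```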
Abrasive wear of the dredge pump leads to a change in the hydrotransportation mode. As our calculations have shown, a decrease in pump performance due to wear by 25% and 45% corresponds to the operation of the pump with the rotation speed reduced, respectively, to 570 and 420 rpm. Pump characteristics for these modes are shown in Fig. 3. It is shown that the change in productivity, head, and efficiency is due to the deterioration of the hydraulic properties of the dredge pump as it wears out.

Considering that the efficiency of the dredge pump determines the breadth of its industrial use, let us analyze its operation over the range of modes. A decrease in the rotor speed of the dredge pump from n₁ to n₂ leads to insignificant changes in efficiency (from η_A to η_B), because the losses in the seals and bearings are proportional, respectively, to the first and second powers of the rotor speed, while the hydraulic losses are proportional to its third power. The change in efficiency depending on the rotation speed is determined using the Moody formula transformed for pumps [17, 18]:

η = 1 − (1 − η_nom)(n_nom / n)^0.36,

where η_nom is the efficiency value at the rated pump speed n_nom, and n is the current pump rotation speed. It should be noted that the pump efficiency at the rated speed is not the rated efficiency: for the rated efficiency of the pump, only the maximum value of efficiency at the rated speed of the pump impeller is taken [17].

As can be seen, a decrease in the rotation speed of the dredge pump increases the wear resistance and the overhaul cycle of the pumps. However, one pump running at the minimum rotation speed does not meet the requirements of the technological process. In this regard, to meet these requirements, it is necessary to regulate the pump capacity at the dictating point of the sump. From an energy point of view, the level of the pulp at the dictating point of the sump should be maintained at the highest point, since this reduces the static lift height and, accordingly, the minimum rotation speed. This can be achieved through an appropriate control system.

Let us determine the economically justified limiting operating modes of dredge pumps. As noted, with the wear of the pumps, the power consumption and the energy consumption for pumping out the slurry increase, which leads to a decrease in the pump efficiency. It follows that the replacement of worn-out pump equipment is necessary when the cost of the excess consumption of electrical energy by the pumping unit exceeds the cost of its repair. This problem can be solved by estimating the loss of electric power when the pump is worn out:

ΔN = ρgQH(1/η_izm − 1/η_nom),

where ρ is the pulp density, kg/m³; Q is the pump capacity, m³/s; H is the head, m; η_nom is the nominal pump efficiency; and η_izm is the efficiency of the worn-out pump. Figure 5 shows the dependence of the electrical losses (kW) over 600 hours of pump operation as it wears. If A is the cost of the electricity lost in this way, then replacement of the worn-out equipment is necessary when

A ≥ W₁₁ + W₁₂,   (12)

where A is the cost of the consumed (lost) electricity; W₁₁ is the cost of repair and replacement of pump elements; and W₁₂ is the cost of the newly installed equipment.
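The efficiency correction and the replacement criterion can be combined in a few lines. Again a sketch: the exponent 0.36 follows our reading of the transformed Moody formula above, and all prices and duty-point values are invented for illustration.

```python
rho, g = 1400.0, 9.81       # pulp density (kg/m^3), gravity
Q, H = 0.14, 40.0           # assumed duty point: flow (m^3/s), head (m)
eta_nom = 0.70              # nominal efficiency at rated speed (assumed)

def eta_moody(n, n_nom, eta_nom, m=0.36):
    """Moody formula transformed for pumps (efficiency at reduced speed)."""
    return 1.0 - (1.0 - eta_nom) * (n_nom / n) ** m

print(f"efficiency at 570 rpm: {eta_moody(570.0, 980.0, eta_nom):.3f}")

# Extra electric power drawn because of wear (the power-loss formula above)
eta_worn = 0.58             # assumed efficiency of the worn pump
dN = rho * g * Q * H * (1.0 / eta_worn - 1.0 / eta_nom) / 1000.0  # kW
hours, price = 600.0, 0.06  # operating hours and assumed cost per kWh
A = dN * hours * price      # cost of lost electricity over 600 h

W11, W12 = 900.0, 1500.0    # assumed repair and new-equipment costs
print(f"extra power {dN:.1f} kW, lost-energy cost {A:.0f}")
print("replace pump" if A >= W11 + W12 else "keep running")  # criterion (12)
```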
Conclusions
Thus, it has been determined that the amount of wear of pump parts is one of the main indicators of the technical condition of the pump, its ability to meet technological requirements, and the economic feasibility of its use. It has been established that the operation of 8Gr-8 pumps at the minimum rotation speed n_min, chosen taking into account the control range, will increase the service life of the pumps without replacing the corundum coating by more than 6 times.
Increased multiaxial lumbar motion responses during multiple-impulse mechanical force manually assisted spinal manipulation

Background: Spinal manipulation has been found to create demonstrable segmental and intersegmental spinal motions thought to be biomechanically related to its mechanisms. In the case of impulsive-type instrument device comparisons, significant differences in the force-time characteristics and concomitant motion responses of spinal manipulative instruments have been reported, but studies investigating the response to multiple thrusts (multiple impulse trains) have not been conducted. The purpose of this study was to determine multi-axial segmental and intersegmental motion responses of ovine lumbar vertebrae to single impulse and multiple impulse spinal manipulative thrusts (SMTs).

Methods: Fifteen adolescent Merino sheep were examined. Tri-axial accelerometers were attached to intraosseous pins rigidly fixed to the L1 and L2 lumbar spinous processes under fluoroscopic guidance while the animals were anesthetized. A hand-held electromechanical chiropractic adjusting instrument (Impulse) was used to apply single and repeated force impulses (13 total over a 2.5 second time interval) at three different force settings (low, medium, and high) along the posteroanterior axis of the T12 spinous process. Axial (AX), posteroanterior (PA), and medial-lateral (ML) acceleration responses in adjacent segments (L1, L2) were recorded at a rate of 5000 samples per second. Peak-peak segmental accelerations (L1, L2) and intersegmental acceleration transfer (L1-L2) for each axis and each force setting were computed from the acceleration-time recordings. The initial acceleration response for a single thrust and the maximum acceleration response observed during the 12 multiple impulse trains were compared using a paired observations t-test (POTT, alpha = .05).

Results: Segmental and intersegmental acceleration responses mirrored the peak force magnitude produced by the Impulse Adjusting Instrument. Accelerations were greatest for the AX and PA measurement axes. Compared to the initial impulse acceleration response, subsequent multiple SMT impulses were found to produce significantly greater (3% to 25%, P < 0.005) AX, PA and ML segmental and intersegmental acceleration responses. Increases in segmental motion responses were greatest for the low force setting (18%-26%), followed by the medium (5%-26%) and high (3%-26%) settings. Adjacent segment (L1) motion responses were maximized following the application of several multiple SMT impulses.

Conclusion: Knowledge of the vertebral motion responses produced by impulse-type, instrument-based adjusting instruments provides biomechanical benchmarks that support the clinical rationale for patient treatment. Our results indicate that impulse-type adjusting instruments that deliver multiple impulse SMTs significantly increase multi-axial spinal motion.
Background
Spinal manipulation is the most commonly performed therapeutic procedure provided by doctors of chiropractic [1]. Likewise, chiropractic techniques have evolved over the past few decades, providing clinicians with new choices in the delivery of particular force-time profiles that are deemed appropriate for a particular patient or condition. In Australia, Canada, and the United States of America, mechanical force manually assisted (MFMA) procedures are among the most popular chiropractic adjusting techniques, utilized by approximately 70% of chiropractors [2]. Clinically, single impulse, short duration, MFMA spinal adjustment procedures have been shown to mobilize or oscillate the spine [3-6], elicit neurophysiologic responses [5-10], and enhance acute trunk muscle function [11]. However, basic experimental evidence is still lacking that can identify biomechanical mechanisms linked to beneficial therapeutic procedures [12].

Both experimental studies [3,4,13-15] and mathematical models [16,17] indicate that the motion response of the lumbar spine is dependent on the force magnitude, force-time profile, and force vector applied. Biomechanical comparisons of hand-held, MFMA-type chiropractic adjusting instruments indicate that the force-time profile (shape, amplitude and duration) significantly affects spinal motion, and suggest that instruments can be tuned to provide optimal force delivery [6,15]. Indeed, a recent animal study [18] demonstrated that oscillatory mechanical forces applied at or near the natural frequency of the lumbar spine are associated with significantly greater displacements (over 2-fold) in comparison to forces that are static or quasi-static. Other animal studies have shown that lumbar spine neuromuscular responses and vertebral displacements are enhanced by increasing force amplitude and pulse duration, while vertebral oscillations (acceleration amplitude and duration) are increased by increasing force amplitude and decreasing pulse duration [6]. We are not aware of any studies, however, that have investigated the biomechanical response of the spine to repeated or multiple impulse MFMA-type mechanical excitation.

The inherent goal of chiropractic adjustments is to induce spinal mobility; therefore, research methodology that identifies mechanisms to increase spinal motion is of paramount importance and of great interest to researchers and clinicians. The purpose of this study was to determine the multi-axial segmental and intersegmental motion (acceleration) responses of ovine lumbar vertebrae subjected to single and multiple impulse spinal manipulative thrusts (SMTs).

Animal preparation
Fifteen adolescent Merino sheep (mean 47.7, s.d. 4.9 kg) were examined using a research protocol approved by the Animal Ethics Committees and Institutional Review Board of the Institute of Medical and Veterinary Science (Adelaide, South Australia). Sheep were fasted for 24 hours prior to surgery, and anesthesia was induced with an intravenous injection of 1 g thiopentone.
General anesthesia was maintained after endotracheal intubation by 2.5% halothane and monitored by pulse oximetry and end tidal CO2 measurement. Animals were ventilated, and the respiration rate was linked to the tidal volume, keeping the monitored CO2 between 40-60 mmHg.

Accelerometers
Following anesthesia, the animals were placed in a standardized prone-lying position with the abdomen and thorax supported by a rigid wooden platform and foam padding, respectively, thereby positioning the lumbar spine parallel to the operating table and load frame. Following animal preparation, 10-g piezoelectric tri-axial accelerometers (Crossbow Model CXL100HF3, Crossbow Technology, Inc., San Jose, CA) were attached to intraosseous pins that were rigidly fixed to the L1 and L2 lumbar spinous processes under fluoroscopic guidance (Figure 1). The accelerometers are high frequency vibration measurement devices comprised of an advanced piezoelectric material integrated with signal conditioning (charge amp) and current regulation electronics. The sensors feature low noise (300-µg rms), wide bandwidth (0.3-10,000 Hz) and low nonlinearity (<1% of full scale), and are precision calibrated by the manufacturer. The x-, y- and z-axes of the accelerometer were oriented with respect to the medial-lateral (ML), posterior-anterior (PA) and cranial-caudal or axial (AX) axes of the vertebrae. The in situ natural frequency of the pin and transducer was determined intraoperatively by "tapping" the pins in the ML, PA and AX axes, and was found to be greater than 80 Hertz. This is approximately 20 times greater than the natural frequency of the ovine spine [18], which also exhibits significantly damped motion responses (increased stiffness) for oscillatory PA loads above 15 Hz.

SMT testing protocol
An Impulse Adjusting Instrument® (Neuromechanical Innovations, LLC, Phoenix, AZ, U.S.A.; hereafter Impulse) was used to apply posteroanterior (PA) spinal manipulative thrusts to the T12 spinous process of the ovine spine (Figure 1). The T12 spinous process was located by palpation as the first spinous process cephalad to the fluoroscopically verified L1 vertebra containing the accelerometer pin mount. The neoprene end member of the stylus was then placed on the spinous process of T12 and held perpendicularly with a preload of 20 N. Thirteen mechanical excitation impulses were applied over a 2.5 second interval and included a single impulse followed one-half second later by twelve mechanical excitation pulse trains delivered every 160 ms. The Impulse Adjusting Instrument utilizes a microprocessor-controlled electromagnetic coil to produce a haversine-like impulse, approximately 2 ms in duration. Haversine impulse profiles result in a uniform mechanical energy delivery to the test structure over a broad frequency range [6,18], in this case 0 to 200 Hz. The pulse trains were applied at three different force settings: low (133 N), medium (245 N), and high (380 N). Based upon bench-test experiments, the precision of the Impulse device (CoV = standard deviation/mean) was 3.5%, 2.4%, and 1.0% for the low, medium and high force settings, respectively. A doctor of chiropractic with ten years of clinical experience administered the spinal manipulative thrusts. L1 and L2 vertebral accelerations were recorded at a sampling frequency of 5,000 Hz using a 16 channel, 16-bit MP150 data acquisition system (Biopac Systems, Inc., Goleta, CA, U.S.A.).
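For readers who want to reproduce the excitation waveform, a haversine pulse of duration T can be written as F(t) = F_peak·sin²(πt/T), and the protocol above is one pulse followed 0.5 s later by twelve pulses at 160 ms spacing. The sketch below generates this force-time profile; the sin² parameterization is our assumption of what "haversine-like" means, not the manufacturer's specification.

```python
import numpy as np

def haversine_pulse(t, t0, peak, width=0.002):
    """Single haversine pulse: peak*sin^2(pi*(t-t0)/width) on [t0, t0+width]."""
    inside = (t >= t0) & (t <= t0 + width)
    return np.where(inside, peak * np.sin(np.pi * (t - t0) / width) ** 2, 0.0)

fs = 5000.0                          # sampling rate used in the study, Hz
t = np.arange(0.0, 2.6, 1.0 / fs)    # 2.6 s record

peak = 380.0                         # high force setting, N
onsets = [0.0] + [0.5 + k * 0.160 for k in range(12)]  # 1 + 12 impulses
force = sum(haversine_pulse(t, t0, peak) for t0 in onsets)

# sanity checks: 13 impulses within ~2.5 s, each ~2 ms wide
print(f"impulses: {len(onsets)}, last onset at {onsets[-1]:.2f} s")
print(f"peak force: {force.max():.0f} N")
```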
The sampling period (0.2 ms) was an order of magnitude shorter than the Impulse force pulse duration, and the sampling frequency was nearly two orders of magnitude greater than the natural frequency of the pin-accelerometer-bone mount, which ensured that the SMT-induced vertebral oscillations were captured with appropriate signal bandwidth.

Data analysis and statistics
Acceleration transfer (L1-L2, m/sec², 9.81 m/sec² = 1 g) between the L1 and L2 vertebrae was estimated by subtracting the L2 acceleration-time curve from the L1 acceleration-time curve. The maximum peak-peak acceleration response during the multi-pulse phase (total of 12 pulse trains) was determined and compared to the peak-peak segmental and intersegmental acceleration response obtained during the first impulse. A paired observations t-test was used to determine if the acceleration response during the multi-pulse phase was significantly greater than that of the initial single impulse (POTT, p < .05 indicating a significant difference). Descriptive statistics (mean, standard deviation S.D.) were also computed, and the changes in motion responses are reported as a percentage of the first thrust.

Results
Typical segmental (L1, L2) and intersegmental (L1-L2) acceleration responses obtained from the multiple impulse adjusting protocol are shown in Figure 2. The short duration (2 ms) mechanical excitation produced by the Impulse Adjusting Instrument® elicited oscillations in the adjacent vertebrae that damped out after approximately 100 to 150 ms. Segmental and intersegmental acceleration responses mirrored the peak force magnitude produced by the Impulse Adjusting Instrument®. Accelerations were greatest for the AX, followed by the PA and ML measurement axes, and increased in a linear manner with increasing force magnitude (Table 1). At the highest force setting, the L1 segment ML and PA acceleration responses were 5.6% and 15.4% greater, respectively, in comparison to the L2 segment. The AX accelerations were 17.5% lower at the L1 segment in comparison to the L2 segment (high force setting). Compared to the initial single impulse acceleration response, subsequent SMT impulses produced significantly greater (3% to 25%, P < 0.005) AX, PA and ML segmental and intersegmental acceleration responses (Figures 3, 4, 5). Increases in segmental motion responses (ML, PA, AX) were greatest for the low force setting (18%-26%), followed by the medium (5%-26%) and high (3%-26%) settings. ML, PA and AX motion responses in the L1 segment (adjacent to the applied force) were maximized after the 7th, 5th and 3rd SMT impulse (high force setting), respectively. The PA motion response was maximized after the 4th SMT impulse for the low and medium force settings.

Figure 1. Experimental setup illustrating the Impulse Adjusting Instrument® positioned over the T12 spinous process and the two triaxial accelerometers rigidly attached to stainless steel pins at L1 and L2.
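The peak-peak extraction, intersegmental subtraction, and paired comparison described above map onto a few lines of analysis code. A sketch of that pipeline, assuming acceleration records stored as NumPy arrays (all array names, window lengths, and shapes are ours):

```python
import numpy as np
from scipy import stats

def peak_peak(a):
    """Peak-to-peak amplitude of an acceleration-time window."""
    return a.max() - a.min()

def impulse_responses(l1, l2, fs=5000.0, onsets=None):
    """Peak-peak L1, L2, and intersegmental (L1 - L2) response per impulse.

    l1, l2 : 1-D acceleration records for one axis, m/s^2
    onsets : impulse onset times in seconds (study protocol by default)
    """
    if onsets is None:
        onsets = [0.0] + [0.5 + k * 0.160 for k in range(12)]
    transfer = l1 - l2                       # intersegmental acceleration transfer
    rows = []
    for t0 in onsets:
        i0, i1 = int(t0 * fs), int((t0 + 0.150) * fs)  # ~150 ms ring-down window
        rows.append((peak_peak(l1[i0:i1]),
                     peak_peak(l2[i0:i1]),
                     peak_peak(transfer[i0:i1])))
    return np.array(rows)                    # shape (13, 3)

def pott(first_resp, multi_max, alpha=0.05):
    """Paired observations t-test across animals: first impulse vs. multi-pulse max."""
    t, p = stats.ttest_rel(multi_max, first_resp)
    return t, p, bool(p < alpha)
```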
Discussion
Increased segmental and intersegmental acceleration responses were observed when multiple force impulses were applied to the ovine lumbar spine. The increased motion response most likely reflects the dynamic nature of the Impulse Adjusting Instrument®, which has a short force-time pulse duration (approximately 2 milliseconds) and causes the ovine spine to oscillate or vibrate for up to 150 ms following the application of the force impulse. The haversine wave shape of the Impulse Adjusting Instrument® creates an efficient mechanical excitation and energy transfer to the spine, which in turn excites a broad range of vibration frequencies (0-200 Hz) in the contacted and adjacent vertebral segments [6]. This frequency range encompasses the resonant frequency (4 Hz) of the ovine spine [18], which, when coupled with the repeated (multiple impulse) mechanical excitation of the spine, amplifies the spinal motion response.

Figure 2. Typical segmental (L1, superior and L2, inferior) and intersegmental (L1-L2) medial-lateral (ML), posterior-anterior (PA), and axial acceleration responses (m/s²) during the application of haversine-like mechanical excitation to the ovine spine (high force setting at the T12 spinous process, 13 pulse trains).

Increasing vertebral motions via tuning the frequency and speed of the mechanical inputs during SMT has long been an objective of chiropractic delivery, especially in the development of chiropractic adjusting instruments [16,17,19,20]. A number of studies have quantified the applied forces and concomitant mechanical response of the spine during SMT [9,19-24]. In previous work, we have demonstrated that the stiffness, and therefore the motion response, of different regions of the human [20,25] and animal [18] lumbar spine varied with the mechanical stimulus frequency. Knowledge of the frequency-dependent stiffness characteristics of the spine aids chiropractors in determining the manner in which forces are transmitted to the spine during chiropractic adjustment/spinal manipulation. Such information is important in assessing the biomechanical characteristics of chiropractic treatments, spinal modeling, treatment efficacy, and assessment of risk in the medicolegal arena.

To our knowledge, this is the first study to quantify the motion response of the lumbar spine during repeated impulse loading. Our findings indicate that application of multiple short-duration impulses to the spine can increase the magnitude of the ensuing vertebral oscillations. The chiropractic adjusting instrument examined in this study (Impulse Adjusting Instrument®) produces a force-time profile with a very short pulse duration (2 ms). Forces that are relatively large in magnitude but act for a very short time (much less than the natural period of oscillation of the structure) are called impulsive [19]. Impulsive forces acting on a mass will result in a sudden change in velocity, but are typically associated with smaller amplitude displacements in comparison to longer duration forces. However, the sudden change in velocity associated with impulsive forces causes the spine to oscillate or vibrate for long periods of time. In the current study we observed that the ovine spine oscillated for a period of time roughly equal to the time interval between impulses (e.g. 160 ms).
This corresponds to an impulse loading frequency of 6.25 Hertz, and the application of repeated mechanical excitation resulted in a continuous chain of oscillations in the sheep spine. The motion response of the spine is closely coupled to the frequency or the time history of the applied force [16]. When external mechanical forces are applied at or near the natural frequency of the spine, greater segmental and intersegmental displacements result (over 2-fold) in comparison to external forces that are static or quasi-static [16]. Thus, it is possible to achieve comparable segmental and intersegmental motion responses for lower applied forces during spinal manipulation, provided that the forces are delivered over time intervals at or near the period corresponding to the natural frequency. Based on the findings of this study, application of repeated mechanical excitation at 6.25 Hertz produces a significantly increased segmental and intersegmental motion response, with up to a 26% increase in adjacent segment acceleration following the application of several consecutive SMT impulses. Since the oscillations induced in the spine are mostly damped out prior to the onset of the next pulse train, the increased acceleration response is most likely due to mechanical conditioning of the spinal tissues, a desired feature in accomplishing chiropractic adjustment.

Figure 3. Mean percent change (maximum multi-impulse value compared to first impulse) in low force, segmental (L1, L2) and intersegmental (L1-L2) acceleration responses for the medial-lateral (ML), posterior-anterior (PA), and axial (AX) axes. Asterisks (*) indicate significant change from the first impulse.

Noteworthy, axial and medial-lateral accelerations were observed that represent a coupled response to the PA (dorsoventral) forces applied to the ovine spine. We have previously shown that PA thrusts induce coupled motions in both the ML and AX axes [4]. Coupled motions are dependent on a number of factors, including spinal geometry and material properties as well as the force vector applied [16]. As noted in the aforementioned paper, the motion response and coupling are dependent on the intrinsic material properties and geometry, which vary from segment to segment, producing complicated patterns of load transmission within the spinal column. Indeed, the decreased axial acceleration response (6-10%) observed for the segment closest to the thrust most likely reflects underlying spinal geometry and material properties. Further research is needed to improve the mechanical excitation characteristics of chiropractic adjustment/spinal manipulation devices and treatment regimes, including force vector, force amplitude, force duration, force-time profile, and number of oscillations or impulses applied. We hypothesize that optimization of the mechanical excitation delivered to the spine will enhance neuromechanical and clinical responses in patients.
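The amplification argument above can be illustrated with a single-degree-of-freedom toy model: a lightly damped oscillator at the spine's ~4 Hz resonance driven by the 2 ms haversine train at a 6.25 Hz repetition rate. The sketch below is not a model of the ovine spine; the mass, damping ratio, and force amplitude are our assumptions, chosen only so that the natural frequency is 4 Hz.

```python
import numpy as np

# Toy segment: natural frequency 4 Hz, light damping (assumed values)
f_n, zeta, m = 4.0, 0.15, 5.0            # Hz, damping ratio, kg
w_n = 2 * np.pi * f_n
k, c = m * w_n**2, 2 * zeta * m * w_n    # stiffness, damping

def force(t, peak=380.0, width=0.002):
    """Haversine pulse train: 1 impulse, then 12 at 160 ms spacing (6.25 Hz)."""
    onsets = [0.0] + [0.5 + j * 0.160 for j in range(12)]
    f = 0.0
    for t0 in onsets:
        if t0 <= t <= t0 + width:
            f += peak * np.sin(np.pi * (t - t0) / width) ** 2
    return f

# semi-implicit Euler stepping of m*x'' + c*x' + k*x = F(t)
dt, T = 1e-4, 2.6
x, v = 0.0, 0.0
x_single, x_multi = 0.0, 0.0
for i in range(int(T / dt)):
    t = i * dt
    a = (force(t) - c * v - k * x) / m
    v += a * dt
    x += v * dt
    if t < 0.5:
        x_single = max(x_single, abs(x))   # response to the lone first impulse
    else:
        x_multi = max(x_multi, abs(x))     # response during the 6.25 Hz train

print(f"single-impulse max |x|: {x_single*1000:.3f} mm")
print(f"multi-impulse  max |x|: {x_multi*1000:.3f} mm")
# Comparing the two maxima shows whether the repeated train builds up
# larger oscillations than the initial thrust, as reported in the study.
```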
There are inherent limitations to this study. First and foremost, an animal model was used to study the motion response of the spine. The sheep spine is comprised of structures (ligaments, bone, intervertebral discs) that have qualitatively similar properties to the human spine [26,27], but differ in a number of respects, most notably geometry or morphology. Sheep lumbar vertebrae, and the vertebrae of other ungulates (hoofed animals), are more slender and smaller in size compared to human lumbar vertebrae. As a result, the PA stiffness of the ovine lumbar spine is substantially lower (approximately 4-fold) than that of the human lumbar spine [18]. However, using an animal model we were able to perform invasive measurements of bone movement, which are otherwise difficult to perform in humans [3-5]. Measurement of bone movement using intraosseous pins equipped with accelerometers [3-5] and other invasive motion measurement devices [28,29] has previously been shown to be a very precise measure of spine segmental motion. Moreover, the short duration (impulsive) mechanical excitation produced very small displacements in the T12 and adjacent vertebrae, so the coordinate axes of the vertebrae and accelerometers did not change appreciably. Hence, intersegmental acceleration transfer could be estimated directly from the acceleration-time recordings of the adjacent sensors. However, subtraction of the L1 and L2 time-domain signals to obtain the intersegmental motion response does not take into account the inherent phase differences in the acceleration-time signals. A more comprehensive frequency domain analysis of the acceleration data could be performed [3,16], but this was beyond the scope of this paper. In addition, testing was performed on anesthetized sheep, so muscle tone was deficient during the tests. The presence of normal or hyper-normal muscle tone may modulate the vibration response of the spine, so we are currently conducting impulsive force measurements while the animals are undergoing muscle stimulation. Finally, vertebral bone acceleration measurements were obtained for the vertebrae (L1, L2) adjacent to the point of force application, but we did not quantify the acceleration response of the segment under test (T12), as the accelerometer pin mount and the force vector applied precluded contacting the instrumented segment. As a result, the motion amplification response that we observed in adjacent segments following repeated loading may not be representative of the response of the segment under test, which is deemed by most practitioners to be of primary importance. Adjacent segment motion responses, however, are important, as it is our belief that the putative effects of MFMA procedures are due to intersegmental motions, which are more similar to the intersegmental motions predicted for manual thrusts, as opposed to segmental motions, which are very dissimilar in comparison to manual thrusts [4,5,16,17]. Additional work is needed to quantify both the thrust segment and adjacent segment motion responses to repeated mechanical excitation.
Companions to the Andrews-Gordon and Andrews-Bressoud identities, and recent conjectures of Capparelli, Meurman, Primc, and Primc

We find bivariate generating functions for the k = 1 cases of recently conjectured colored partition identities of Capparelli, Meurman, A. Primc, and M. Primc that are slight variants of the generating functions for the sum sides of the Andrews-Gordon and Andrews-Bressoud identities. As a consequence, we prove sum-to-product identities for these cases, thus proving the conjectures.

Our starting point is the following well-known partition theorem of Basil Gordon [17], and its analytic form due to George E. Andrews [2] (see also chapter 7 of Andrews's text [7] or chapter 3 of Andrew V. Sills's text [32]):

Theorem 1. Fix ℓ ≥ 1, and let 0 ≤ i ≤ ℓ. Let B_{ℓ,i}(n) denote the number of partitions of n satisfying λ_j − λ_{j+ℓ} ≥ 2, with at most i occurrences of 1 as a part. Let A_{ℓ,i}(n) denote the number of partitions of n into parts ≢ 0, ±(i + 1) (mod 2ℓ + 3). Then A_{ℓ,i}(n) = B_{ℓ,i}(n) for all n.

We now turn our attention to a recent article [14] of Stefano Capparelli, Arne Meurman, Andrej Primc, and Mirko Primc, in which they conjectured intriguing (colored) partition identities related to standard representations of the affine Lie algebra of type C^(1)_ℓ (for ℓ ≥ 2). These (conjectured) identities follow in a long vein of research connecting partition identities to the representation theory of affine Lie algebras; see, for instance, [25, 13, 27, 28, 20, 21, 34]. Most directly, these conjectures built off of the work of M. Primc and Tomislav Šikić [31, 30] and Goran Trupčević [35].

First, let us consider the conjectures in section 4 of their paper [14]. Fix ℓ ≥ 2, and construct an array with 2ℓ rows (containing ℓ copies of ℕ). Parts with the same value (but in different rows) are distinct, and can be thought of as having distinct colors. We will sometimes call these colored partitions. To the above array we associate an array of frequencies, where each f_j indicates how many times the corresponding part occurs in the original array. The nonnegative integers k_0, …, k_ℓ provide initial conditions. Note that our ordering of k_0, …, k_ℓ is different than that of the original paper. We say that an array of frequencies is [k_0, …, k_ℓ]-admissible if, for all downward paths Z in the frequency array, the sum of the frequencies along Z is at most k, where k = Σ_{i=0}^{ℓ} k_i.

For 0 ≤ i ≤ ℓ, let F(i, j, n) be the number of [k_0, …, k_ℓ] = [0, …, 0, 1, 0, …, 0]-admissible colored partitions of n with exactly j parts, where k_i = 1 (and all others are 0). Then, define P_i(z, q) to be the bivariate generating function

P_i(z, q) = Σ_{j,n ≥ 0} F(i, j, n) z^j q^n.   (1.9)

The first goal of this paper will be to prove the following multisum for P_i(z, q):

Theorem 5.
P_i(z, q) = Σ_{n_1,n_2,…,n_ℓ ≥ 0} z^{n_1+n_2+···+n_ℓ} q^{N_1²+···+N_ℓ²+N_{i+1}+···+N_ℓ} / ((q; q)_{n_1} ··· (q; q)_{n_ℓ}).   (1.10)

Take a moment to appreciate just how similar the bivariate multisum in Theorem 5 is to the multisum of (1.2). The only difference is that the factor x^{N_1+N_2+···+N_ℓ} has changed to z^{n_1+n_2+···+n_ℓ}, which can be rewritten as z^{N_1}. As an immediate corollary, setting z = 1 and applying Theorem 2 gives a sum-to-product identity.

Now, let us consider the conjectured identities in section 3 of Capparelli, Meurman, A. Primc, and M. Primc [14], along with its sequel by M. Primc [29]. We still fix ℓ ≥ 2, but now construct arrays with 2ℓ − 1 rows. Section 3 of [14] dealt with arrays whose top and bottom rows consist of odd integers.
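Before turning to those cases, note that the z = 1 specialization of (1.10) can be checked numerically against the corresponding Andrews-Gordon product (for ℓ = 2, i = 1, modulus 7 with residues 0, ±2 excluded). The following sympy sketch is ours; the truncation order is chosen only to keep the check fast.

```python
import sympy as sp

q = sp.symbols('q')
ORDER = 25  # check coefficients of q^0 .. q^(ORDER-1)

def poch(n):
    """q-Pochhammer symbol (q; q)_n."""
    return sp.prod([1 - q**k for k in range(1, n + 1)])

# Sum side of (1.10) at z = 1, ell = 2, i = 1:
#   sum over n1, n2 of q^(N1^2 + N2^2 + N2) / ((q;q)_{n1} (q;q)_{n2}),
#   with N1 = n1 + n2 and N2 = n2.
total = 0
for n1 in range(ORDER):
    for n2 in range(ORDER):
        N1, N2 = n1 + n2, n2
        e = N1**2 + N2**2 + N2
        if e >= ORDER:
            continue
        total += sp.series(q**e / (poch(n1) * poch(n2)), q, 0, ORDER).removeO()

# Product side: partitions into parts not congruent to 0, +-2 (mod 7)
prod = sp.prod([1 / (1 - q**n) for n in range(1, ORDER) if n % 7 not in (0, 2, 5)])
prod_series = sp.series(prod, q, 0, ORDER).removeO()

diff = sp.expand(total - prod_series)
print("agree through q^%d" % (ORDER - 1) if diff == 0 else diff)
```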
The k = 1 case of the conjectures of Capparelli, Meurman, A. Primc, and M. Primc (corresponding to odd first and last rows) was proved by Jehanne Dousse and Isaac Konan [15] using perfect crystals. This is the same case that we will prove, but our proof methods are different, and we will simultaneously prove the two cases (odd first and last rows, and even first and last rows). We again form arrays of frequencies, and attach nonnegative integers k_0, …, k_ℓ. Our definition of an admissible partition is the same as above, and we once again restrict our attention to the case k = 1. See section 5 for specific details. Again, for 0 ≤ i ≤ ℓ, let F⋆(i, j, n) be the number of [k_0, k_1, …, k_ℓ] = [0, …, 0, 1, 0, …, 0]-admissible colored partitions of n with exactly j parts, where the 1 in [0, …, 0, 1, 0, …, 0] is located in position i. Then, define P⋆_i(z, q) to be the bivariate generating function

P⋆_i(z, q) = Σ_{j,n ≥ 0} F⋆(i, j, n) z^j q^n.   (1.14)

We then have the corresponding result here for P⋆_i(z, q):

Theorem 6.
P⋆_i(z, q) = Σ_{n_1,n_2,…,n_ℓ ≥ 0} z^{n_1+n_2+···+n_ℓ} q^{N_1²+···+N_ℓ²+N_{i+1}+···+N_ℓ} / ((q; q)_{n_1} ··· (q; q)_{n_{ℓ−1}} (q²; q²)_{n_ℓ}),

which again is very similar to (1.4), and which gives us the corresponding sum-to-product identity.

The rest of the paper is organized as follows. In section 2, we produce functional equations for the generating functions for the admissible partitions on arrays with 2ℓ rows, corresponding to the Andrews-Gordon case. We illustrate our work with an example in the case ℓ = 4. In section 3, we carry out computations to derive relations between multisums based on those on the right side of (1.10). Once again, our work is illustrated in the ℓ = 4 case. We then use the results of section 3 in section 4 to show that the series T_i(z, q) do, in fact, satisfy the same functional equations as the generating functions from section 2, which allows us to prove Theorem 5. Sections 5, 6, and 7 repeat this work for arrays on 2ℓ − 1 rows (corresponding to the Andrews-Bressoud case): section 5 works with functional equations for generating functions of admissible partitions, section 6 deals with multisum relations, and section 7 completes the proof of Theorem 6. We conclude in section 8 with some avenues for further research.

Acknowledgments. The author would like to express his gratitude to Shashank Kanade for many fruitful conversations, along with Stefano Capparelli, Arne Meurman, and Mirko Primc for comments on drafts of this paper.

Fix ℓ and take an array with 2ℓ rows (with ℓ copies of ℕ). Define P_i(z, q) to be the bivariate generating function for [k_0, k_1, …, k_ℓ] = [0, …, 0, 1, 0, …, 0]-admissible partitions, where k_i = 1 and all other k_j = 0. Here, for i odd, k_i occurs in the i-th row, while for i even and i ≥ 2, k_i occurs in the (2ℓ + 1 − i)-th row. Finally, k_0 always occurs in the (2ℓ)-th row. We will now show that these satisfy the following system of functional equations:

P_i(z, q) = Σ_{j=1}^{i} zq^j P_{i−j+1}(zq^{j+1}, q) + P_{i+1}(zq, q)  for 0 ≤ i ≤ ℓ − 1,   (2.2)
P_ℓ(z, q) = Σ_{j=1}^{ℓ} zq^j P_{ℓ−j+1}(zq^{j+1}, q) + P_ℓ(zq, q).   (2.3)

Throughout the paper, empty sums are defined to equal zero. As an example, the instance of (2.2) for i = 0 simply states that P_0(z, q) − P_1(zq, q) = 0.

We begin with the case where i is odd (and i ≠ ℓ). Suppose that we have an admissible partition counted by P_i(z, q), and consider the set of parts j that occur in row i − j + 1 for 1 ≤ j ≤ i. At most one of these parts can occur in this partition. If j does occur in the (i − j + 1)-th row, then that contributes zq^j to the generating function, and then the remaining parts will be counted by P_{i−j+1}(zq^{j+1}, q). If none of these parts occur, then deleting those parts from consideration shows that we have a partition that could be counted by P_{i+1}(zq, q). This then produces the functional equation P_i(z, q) = Σ_{j=1}^{i} zq^j P_{i−j+1}(zq^{j+1}, q) + P_{i+1}(zq, q).

Second, consider the case where i is even (and i ∉ {0, ℓ}).
Suppose that we have an admissible partition counted by P_i(z, q), and consider the set of parts j that occur in row 2ℓ − i + j for 1 ≤ j ≤ i. At most one of these parts can occur in this partition. If j does occur in the (2ℓ − i + j)-th row, then that contributes zq^j to the generating function, and then the remaining parts will be counted by P_{i−j+1}(zq^{j+1}, q). If none of these parts occur, then deleting those parts from consideration shows that we have a partition that could be counted by P_{i+1}(zq, q). Putting everything together produces the functional equation P_i(z, q) = Σ_{j=1}^{i} zq^j P_{i−j+1}(zq^{j+1}, q) + P_{i+1}(zq, q).

Next, consider the case i = ℓ. We follow one of the previous two cases, based on whether ℓ is odd or even. The only difference is that if none of the highlighted parts 1, …, ℓ occur, then deleting those parts from consideration produces a partition that is counted by P_ℓ(zq, q). This demonstrates P_ℓ(z, q) = Σ_{j=1}^{ℓ} zq^j P_{ℓ−j+1}(zq^{j+1}, q) + P_ℓ(zq, q).

Finally, consider the case i = 0. From inspection, it should be clear that P_0(z, q) = P_1(zq, q). As the sum in (2.2) is empty for i = 0, that finishes off that case.

We now show how to deduce the functional equations corresponding to (2.2) and (2.3) in this particular case:

• P_0(z, q) = P_1(zq, q): This is clear by inspection: simply "flip" the picture in (2.8) upside down. This flipping trick will be used constantly throughout the rest of this section.

• P_1(z, q) = zq P_1(zq², q) + P_2(zq, q): Consider a partition counted by P_1(z, q). Either there is a 1 in this partition, or there is not. If there is, we have the corresponding picture: here (and in what follows), the red 1 indicates the location of the 1 we are assuming to be in this partition, while the red ⋆s show the frequencies that are forced to equal zero by the inclusion of this part. We can see that this corresponds to zq P_1(zq², q). The other possibility is that there is no 1, which corresponds to P_2(zq, q).

• Either a 1 occurs in the partition, corresponding to zq⁴ P_1(zq⁵, q), or none of these occur, corresponding to P_4(zq, q).

In this section, we will carry out computations similar to those of [19, 9]. The difference is that we will prove everything for general ℓ and 0 ≤ i ≤ ℓ, without relying on computers to prove special cases. The computations in this section were aided by Frank Garvan's qseries Maple package [16], along with references to the survey on Rogers-Ramanujan-Slater-type identities by James Mc Laughlin, Sills, and Peter Zimmer [26].

Let v = (v_1, v_2, …, v_ℓ) and n = (n_1, n_2, …, n_ℓ) be vectors in ℝ^ℓ. Then, define

S_v(z, q) = Σ_{n_1,…,n_ℓ ≥ 0} z^{n_1+···+n_ℓ} q^{N_1²+···+N_ℓ²+v·n} / ((q; q)_{n_1} ··· (q; q)_{n_ℓ}),

where N_i = n_i + n_{i+1} + ··· + n_ℓ and v·n is the usual dot product. In this section, we will suppress the arguments and simply write S_v for S_v(z, q). To make our computations flow more smoothly, we will make the standard definitions of the vectors e_i and t_i, where, in e_i and t_i, the 1 is located in position i. We will also define t_{ℓ+1} = 0.

Proof. We can obtain this as a linear combination of relations. Above, we used the equality which follows from 2(e_1 + ··· + e_{i−1}) = 2 − 2t_i + 2t_{i+1}.

Partitions: Andrews-Bressoud case. Fix ℓ, and take i such that 0 ≤ i ≤ ℓ. If i is even, then construct the corresponding array with 2ℓ − 1 rows; a similar array is used when i is odd. In these arrays, set k_i = 1, and all other k_j = 0. Here, for i odd, k_i occurs in the i-th row, while for i even and i ≥ 2, k_i occurs in the (2ℓ + 1 − i)-th row. Finally, k_0 always occurs in the (2ℓ)-th row.
Note that the entries in the leftmost column are always · · · in row j for j > ℓ. We will now show that the bivariate generating functions P⋆_i(z, q) for these admissible partitions satisfy the following system of functional equations:

P⋆_i(z, q) = Σ_{j=1}^{i} zq^j P⋆_{i−j+1}(zq^{j+1}, q) + P⋆_{i+1}(zq, q)  for 0 ≤ i ≤ ℓ − 1.   (5.3)

We begin with the case where i ∉ {0, ℓ}. Suppose that we have an admissible partition counted by P⋆_i(z, q), and consider the set of parts j that occur in row i − j + 1 for 1 ≤ j ≤ i. At most one of these parts can occur in this partition. If j does occur in the (i − j + 1)-th row, then that contributes zq^j to the generating function, and then the remaining parts will be counted by P⋆_{i−j+1}(zq^{j+1}, q). If none of these parts occur, then deleting those parts from consideration shows that we have a partition that could be counted by P⋆_{i+1}(zq, q). Putting everything together produces the functional equation P⋆_i(z, q) = Σ_{j=1}^{i} zq^j P⋆_{i−j+1}(zq^{j+1}, q) + P⋆_{i+1}(zq, q).

Next, consider the case i = ℓ. We follow the previous case, except that, if none of the highlighted parts 1, …, ℓ occur, then deleting those parts from consideration produces a partition that is counted by P⋆_{ℓ−1}(zq, q). This demonstrates P⋆_ℓ(z, q) = Σ_{j=1}^{ℓ} zq^j P⋆_{ℓ−j+1}(zq^{j+1}, q) + P⋆_{ℓ−1}(zq, q).

Finally, consider the case i = 0. From inspection, it should be clear that P⋆_0(z, q) = P⋆_1(zq, q). As the sum in (5.3) is empty for i = 0, that case is completed.

Proof. As before: for 1 ≤ i ≤ ℓ − 1, multiply S⋆_v by (1 − q^{n_i}), cancel out a factor in the denominator, and reindex with respect to n_i to obtain (6.2). Also, multiply S⋆_v by (1 − q^{2n_ℓ}) and do the same to obtain (6.3).

So now define rel^i_v for 1 ≤ i ≤ ℓ − 1, and rel^ℓ_v for i = ℓ. The atomic relations are nearly the same as those in section 3. The only difference is that rel^ℓ_v has changed.

Proof. This is the equivalent proposition to the previous Proposition 9, but with S replaced by S⋆. Since that one only relied on rel^i_v for 1 ≤ i ≤ ℓ − 1, which are exactly the same in both cases, the proof is exactly the same.

Conclusion
We conclude by listing some questions suggested by the present research. We focus on the case with an even number of rows (Theorem 5), corresponding to the Andrews-Gordon identities, but many of the comments below would apply equally to the other case.

• Is there a bijection between the admissible partitions of this paper and the partitions counted by B_{ℓ,i}(n) in Theorem 1?

• There exist many refinements and modifications of the Andrews-Gordon identities. For example, the parity of parts has been investigated [8, 23, 24, 22]. A typical case to study here concerns partitions counted by B_{ℓ,i}(n) in Theorem 1 that further satisfy the requirement that even parts must appear an even number of times. Once again, there is a bivariate multisum for these partitions with a factor of x^{N_1+···+N_k}; if this is modified to z^{n_1+···+n_k}, is there a nice subfamily of these generalized partitions that is now counted?

• Another combinatorial interpretation of the sum on the right side of (1.2) involves Durfee squares and rectangles [6] (see also [1]). One nice feature of this combinatorial interpretation is that the refinement obtained provides information about the values of each of the variables n_1, …, n_ℓ in the sum. Again, is there a corresponding combinatorial interpretation for the new bivariate generating function here?
Overlapping hot spots and charge modulation in cuprates

Particle-hole instabilities are studied within a two-dimensional model of fermions interacting with antiferromagnetic spin fluctuations (spin-fermion model). In contrast to previous works, we assume that neighboring hot spots overlap due to a shallow dispersion of the electron spectrum in the antinodal region, and we include in the consideration effects of a remnant low-energy, low-momentum Coulomb interaction. It turns out that this modification of the model drastically changes the behavior of the system. The leading particle-hole instability at not very weak fermion-fermion interaction is no longer a charge density wave with a modulation along the diagonals of the Brillouin zone, as predicted previously, but a Pomeranchuk-type deformation of the Fermi surface breaking the C₄ symmetry of the system. This order does not prevent further phase transitions at lower temperatures. We show that, depending on the parameters of the interaction, either d-wave superconductivity or a charge density wave with modulations along the bonds of the CuO lattice is possible. The low-momentum remnant Coulomb interaction enhances the d-form factor of the charge density wave. Comparison with experimental data allows us to conclude that in many cuprate compounds the conditions for the proposed scenario are indeed fulfilled. Our results may explain important features of the charge modulations observed recently.

Several common properties of this CDW state in hole-doped cuprates have been identified. The transition temperature T_CDW is higher than T_c but lower than or equal to the pseudogap temperature T*. The temperature and magnetic field dependence of the CDW amplitude (e.g. [5]) is consistent with the CDW state competing with superconductivity. The CDW wave vectors seen in the experiments [3,5,14,10] are directed along the Cu-O-Cu bonds of the CuO₂ plane (axes of the Brillouin zone, axial CDW). The CDW period is approximately equal along both axes and increases with doping [5,7,9]. Recent studies have also revealed important information about the distribution of the modulated charge inside the unit cell, i.e. the CDW form factor. It has been found for Bi-2212 [11] and YBCO [3] that the charge is modulated approximately in antiphase at the two oxygen sites of the unit cell, with the charge at the Cu site being constant. In other words, the CDW form factor is characterized by a dominant d-component. The properties mentioned up to now are quite different from the stripe state of the La-based compounds [24,4]. Considerable attention has also been drawn to the nanoscale structure of the CDW. Quantum resistance oscillation experiments [17,18,19] have been interpreted [20] as being due to a checkerboard modulation, where CDWs with two orientations uniformly coexist throughout the sample. Results of studies [10,2] suggest, however, that the charge-ordered state consists of domains where the CDW is unidirectional.

There have been a number of attempts to obtain the CDW state with the properties discussed above from microscopic calculations. In a model of fermions interacting with antiferromagnetic critical spin fluctuations [25] (spin-fermion (SF) model), a charge order appears in perturbation theory as a subleading instability [26] hindered by the curvature of the Fermi surface. This order is a checkerboard CDW with d-form factor and wave vectors directed along the diagonals of the BZ [27,28].
The nearest-neighbor Coulomb interaction can, in principle, make this state leading, as has been shown in Ref. [29]. Moreover, thermal fluctuations between this charge order and SC have been shown to be able to destroy both orders while preserving a single-particle gap [27], which can explain the pseudogap phase. Qualitative aspects of the CDW-SC competition are also well captured in the SF model: moderate magnetic fields suppressing the superconductivity have been shown to favor the CDW [30], resembling the experiment [22]. The vortex cores in the SC state have been shown to contain CDW [31], which is seen in STM [32,33]. The diagonal direction of the modulation wavevectors, contrasting with the experiments, has, however, proved to be quite robust. Some proposals have been put forward to overcome this contradiction. A CDW with the correct wavevector direction has been obtained in Refs. [34,35]; however, the form factor has been found to lack the dominant d-symmetry, having a large s-component. In Ref. [36] a mixture of the states proposed in Ref. [27] and Ref. [35] has been considered, which should contain either the diagonal modulation or an axial CDW with a non d-form factor. CDW considerations using other models [37,38,39,40,41,42,43] also do not seem to explain the robustness of the axial d-form factor CDW in the cuprates. In this contribution we review the treatment of the SF model allowing the neighboring hot spots to overlap, such that eight hot spots merge into two hot regions entirely covering the antinodal portions of the Fermi surface. This corresponds to sufficiently small values |ε(π, 0) − E F | ≲ Γ, where E F is the Fermi energy, ε(π, 0) is the energy in the middle of the Brillouin zone edge, and Γ is a characteristic energy of the fermion-fermion interaction due to the antiferromagnetic fluctuations. Consideration of this limit is motivated by ARPES data [44,45,46,47] showing that the energy separation between the hot spots and (π, 0); (0, π) is actually quite small. In addition to the electron-electron interaction via paramagnons, we consider also the effects of the low-energy (low-momentum) part of the Coulomb interaction, which should not contradict the philosophy of the low energy SF model. A detailed derivation and discussion of the results can be found in our paper [48]. Model and main equations We consider a single band of fermions interacting through critical antiferromagnetic (AF) fluctuations (paramagnons) represented by a spinful bosonic field as well as the Coulomb force. As the AF fluctuations peak at momentum transfer (π, π), we restrict our model to two regions of the Fermi surface connected by this vector, represented in Fig. 1. Inside these regions we do not specify individual hot spots, i.e. points on the FS connected by (π, π), as we assume the interaction to be important in the whole region. This assumption is supported by ARPES experiments [44,45,46,47] showing that |ε(π, 0) − E F | is actually smaller than the pseudogap energy, which can be taken as the interaction scale. The fermion-paramagnon part of the Lagrangian takes the form: where ε ν (p) is the electron dispersion in region ν = 1, 2 (including the chemical potential), v s is the velocity of spin waves and ξ is the magnetic correlation length. We shall not write explicitly the terms corresponding to the Coulomb interaction as we will take their effect into account qualitatively. 
Assuming that the regions 1 and 2 occupy a small portion of the BZ we expand ε 1(2) (p), where µ 0 is the chemical potential counted from ε(π, 0) = ε(0, π). Moreover, we will average the curvature term (the one with β) inside each region, leading to the final form: where µ = µ 0 + βp 2 . To study particle-hole instabilities we define the order parameter: As has been shown in [48], this order parameter is related to density modulations at the three atoms of the unit cell in the following way: As both the regions we consider yield approximately cos(k x a 0 ) + cos(k y a 0 ) ≈ 0, we have δn Ox (r) + δn Oy (r) ≈ 0 in our model, i.e. the charge is modulated in antiphase at the two oxygen sites of the unit cell. Now we can discuss the qualitative effects of the Coulomb interaction in the CuO 2 plane. The strong onsite repulsion prohibits any real charge modulations on the Cu sites, leading to the constraint δn Cu = 0 for the order parameter. Together with δn Ox (r) + δn Oy (r) ≈ 0 discussed above, this leads to the conclusion that the charge modulations obtained in our model will have the d-form factor, in accord with the experiments [11,3]. The nearest-neighbor Coulomb interaction has been shown in [29] to suppress superconductivity and support charge ordering, explaining T CDW > T c . This allows one to consider the particle-hole channel of the model separately from the particle-particle one. Pomeranchuk instability and intra-cell charge modulation. Our main finding is that for sufficiently small µ the leading particle-hole instability is the one with Q = 0. The ordered state is then characterized not by a CDW, but rather by a deformation of the FS (this type of transition is known as a Pomeranchuk instability [49], [50]). Moreover, it follows from (4) that such a deformation leads to a redistribution of charge between the oxygen sites of the unit cell (see Fig. 2). One can obtain this result analytically for a simplified BCS-like model where the paramagnon propagator is replaced by a constant. For that case a mean-field analysis yields that if µ/T Pom ≤ 1.1 then it is the leading instability. T Pom is given in this case by (1/2α)(λ 0 Λ/4π 2 ) 2 , where λ 0 is the dimensionless coupling constant and Λ is the size of a single region in the momentum space. This expression contrasts with the usual exponential dependence obtained in BCS-like theories. A detailed account of this simplified case is presented in [48]. Now let us turn to the model presented here. As a starting approximation we will use the self-consistent equations represented by diagrams in Fig. 3. The integral over momentum in the fermionic self-energy can be greatly simplified provided µv 2 s /α ≪ (v s /ξ) 2 , i.e. that the correlation length is not too large. Then the self-energy and polarization operator do not depend on the momentum. To analyze the FS deformation we distinguish the 'even', (Σ 1 + Σ 2 )/2 ≡ iε n − if (ε n ), and 'odd', (Σ 1 − Σ 2 )/2 ≡ P , contributions to the self-energy, with the latter being zero in the normal state. After the momentum integration one can write the self-consistency equations in the dimensionless form (see Eq. 5), where ā denotes v s /ξ. Note that the polarization operator Ω(ω n ) − ω 2 n contains a factor v 2 s /αΓ absent in the fermionic self-energy part. This factor will also arise if one calculates the vertex correction, as there one has to integrate a product of fermionic Green's functions like in the polarization operator. 
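As a back-of-the-envelope companion to the mean-field estimate quoted above, the short Python sketch below evaluates T Pom under the reading T Pom = (1/2α)(λ 0 Λ/4π 2 ) 2 (our reconstruction of the garbled source expression) and tests the criterion µ/T Pom ≤ 1.1; all parameter values are illustrative placeholders in model units, not fitted to any cuprate.

import math

# Mean-field estimate for the Pomeranchuk transition temperature,
# T_Pom = (1/(2*alpha)) * (lam0 * Lam / (4*pi**2))**2, together with the
# criterion mu / T_Pom <= 1.1 for it to be the leading instability.
def t_pom(alpha, lam0, Lam):
    return (lam0 * Lam / (4.0 * math.pi**2))**2 / (2.0 * alpha)

alpha, lam0, Lam = 1.0, 5.0, 10.0  # curvature, coupling, region size (placeholders)
mu = 0.05                          # chemical potential from the BZ-edge energy

T = t_pom(alpha, lam0, Lam)
print(f"T_Pom = {T:.3f}, mu/T_Pom = {mu / T:.2f}, leading: {mu / T <= 1.1}")

Note the power-law, rather than exponential, dependence on the coupling λ 0 , which is exactly the qualitative contrast with BCS-like theories made above.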
The factor v 2 s /αΓ can thus be used as a small parameter to justify the Eliashberg-like approximation given by Fig. 3. We shall not neglect, however, the polarization operator, as it behaves linearly at low frequencies and might overpower the initial quadratic dispersion. The equations (5) have been numerically solved by an iteration scheme, yielding the transition temperature T Pom where the 'odd' self-energy P becomes non-zero. To show that this transition can indeed be the leading one, we have also computed the transition temperature for a CDW with wavevector along the BZ diagonal. This instability has been found to be universally leading in previous studies. The transition temperature can be found from the linearized equation for the CDW order parameter W diag (ε n ): The results of the numerical solutions are presented in Fig. 4. One can clearly see that for μ̄ less than a certain value the Pomeranchuk instability is the leading one. Note that the ratio µ/T Pom can be as high as 12 for v 2 s /αΓ = 0.5 and 9 for v 2 s /αΓ = 0.1. As the Fermi surface seen in ARPES experiments is universally found to be C 4 -symmetric, and in the light of the domain structure of the CDW [12,2], we assume that the Pomeranchuk order should also be organized in domains with different signs of the order parameter. This constitutes a way of 'masking' the C 4 breaking, alternative to the one proposed in [51]. The deformed Fermi surface of Fig. 2 can be unstable to CDW formation at lower temperatures. The direction and the magnitude of the wavevector are directly related to the sign and the magnitude of P . We assume that the CDW wavevector should yield nesting in the region where the FS 'expands' due to the FS deformation. Then one has: Q SF (T ) = 2√[(µ + 0.5 |P (−πT ) + P (πT )|)/α]. In our model the FS in the second region 'closes', moving out of the considered region for P > µ. However, as is seen from Fig. 2, in reality such a deformation can lead to emergent nesting in this region with the same vector direction as in the first one. The best-case scenario is that the nesting vectors in both regions coincide also in magnitude, i.e. Q 1 = Q 2 (see Fig. 2). We shall assume that this is indeed the case, thus providing an upper limit on T CDW . In this case the equation for the CDW transition is: The results of numerical calculations are presented in Fig. 5. It turns out that the CDW transition can closely follow the onset of the FS deformation. Comparison with experiments and conclusions. Motivated by the existing ARPES data [44,45,46], we have considered the SF model with overlapping hot spots and demonstrated that the d-wave Fermi surface distortion can be the leading instability. The transition is further followed at a lower temperature by a transition into a state with a d-form factor CDW directed along one of the BZ axes. The corresponding transition temperatures T Pom and T CDW can be not far away from each other. The results obtained allow us to draw the following qualitative picture of the charge order formation: • At T Pom ≥ T * the C 4 symmetry is broken by a Pomeranchuk transition. The Fermi surface is deformed (see Fig. 2) and doped holes are redistributed between the oxygen orbitals of the unit cell. The sample consists of domains with different signs of the order parameter, corresponding to the two alternatives presented in Fig. 2. • At T CDW < T Pom a uniaxial d-form factor CDW forms in each domain. 
The CDW wave vector is along one of the BZ axes depending on the sign of the Pomeranchuk order parameter inside each domain (see Fig. 2). The CDW period generally exceeds the one corresponding to antinodal nesting and is determined self-consistently by the interaction and the parameters of the Fermi surface. Qualitatively, the CDW wavevector tracks the FS and should decrease with hole doping (thus the CDW period should increase). Our findings help us in understanding the results of recent experiments. The Pomeranchuk deformation explains well the breaking of the C 4 symmetry seen at commensurate peaks in Fourier-transformed STM data [10]. Formation of domains with different directions of the C 4 -symmetry breaking is seen in STM experiments [12] and can also help explain results of the transport measurements in YBCO [52]. It is important to note, though, that the effects of the deformation of the Fermi surface on transport can be masked by the existence of the domains. This may also resolve the apparent contradiction to the ARPES data [44,46] always showing a C 4 -symmetric Fermi surface. The most important aspect of the Pomeranchuk order is that it explains the robustness of the axial d-form factor CDW in the cuprates. We also note that the organization of the CDW phase in the unidirectional domains is indeed seen in STM [12] and XRD [2,14] measurements. The coexistence of the unidirectional CDW and Pomeranchuk order also allows one to resolve a seeming contradiction to results obtained in experiments on quantum oscillations [20]. Although the unidirectional CDW leads to an open Fermi surface that does not support quantum oscillations, the simultaneous presence of a C 4 -symmetry breaking can indeed [53] close the Fermi surface, leading to quantum oscillations in high magnetic fields.
2016-02-29T15:05:13.000Z
2016-01-15T00:00:00.000
{ "year": 2016, "sha1": "125c2dd8d1b8484243a5adb9ae3d924acca5fe24", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1603.02320", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "125c2dd8d1b8484243a5adb9ae3d924acca5fe24", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
213759624
pes2o/s2orc
v3-fos-license
NUMERICAL ANALYSES ON VAPOR PRESSURE DROP IN A CENTERED-WICK ULTRA-THIN HEAT PIPE Frontiers in Heat and Mass Transfer This paper describes extended numerical analyses on the vapor pressure distribution in a centered-wick ultra-thin heat pipe. Analyses were conducted by using a three-dimensional model developed by the author. Numerical results were obtained by changing design parameters and operating conditions of the heat pipe. Discussion was made on the heat transfer limit as well as the vapor pressure drop. Moreover, a simple method was also presented to evaluate the vapor pressure drop in the ultra-thin heat pipe. Calculated results with the simple method agreed within 10% with the three-dimensional numerical results. INTRODUCTION In recent years, among many studies on heat pipes (e.g., Mirmanto et al., 2018; Petrucci and Faghri, 2018; Orr et al., 2019; Taft and Irick, 2019), attempts have been made to develop an ultra-thin heat pipe, which has been used especially for the enhancement of cooling of smartphones. Heat removal is increasingly required in smartphones as their performance rises. Like conventional heat pipes of normal size, the ultra-thin heat pipe also transports heat passively from a heated to a cooled section utilizing the latent heat of a working fluid. Evaporation and condensation take place in the heat pipe. However, compared to conventional heat pipes, the thickness of the ultra-thin heat pipe is very small; heat pipes with a thickness of less than 1 mm have already been developed. Ahamed et al. (2015) introduced a centered wick structure named "Center Fiber Wick" and fabricated an ultra-thin flattened heat pipe. The structure of the heat pipe was made by flattening a copper tube, and the size of the flattened heat pipe was 100 mm (length) × 3 mm (width) × 0.4 mm (thickness). Ahamed et al. (2017) disclosed extended experimental results on the thermal performance of the ultra-thin flattened heat pipe. In this experiment, the length, width and thickness of the heat pipe were varied over 50 mm - 120 mm, 3.0 mm - 7.8 mm and 0.35 mm - 0.60 mm, respectively. Recent studies concerning the ultra-thin flattened heat pipes have already been reviewed in the author's previous paper (Koito, 2019). In addition, Tang et al. (2017) fabricated a novel sintered copper mesh wick and Zhou et al. (2019) developed a novel bi-porous spiral woven mesh wick in order to enhance the thermal performance of the ultra-thin flattened heat pipes. The structure of the ultra-thin heat pipe is not limited to the above-mentioned flattened type; a flat-plate type, which is also referred to as a "Vapor Chamber", and a loop type have also been developed in recent years. Zhang et al. (2019) and Chen et al. (2019) developed ultra-thin flat-plate heat pipes; their dimensions were 26 mm × 200 mm × 1.5 mm (thickness) and 120 mm × 120 mm × 2.0 mm (thickness), respectively. Ultra-thin loop heat pipes, on the other hand, were fabricated by Zhou et al. (2016) and Hong et al. (2017). Like conventional loop heat pipes, these ultra-thin loop heat pipes were also composed of evaporator and condenser sections with vapor and liquid lines connecting them. Zhou et al. (2016) employed a 1.2 mm thick flat evaporator and a vapor line, liquid line and condenser with a 1.0 mm thickness. The loop heat pipe made by Hong et al. (2017) was 1.5 mm in thickness. The vapor flow space in the ultra-thin heat pipe is very small, and therefore the vapor pressure drop due to viscous friction would be large compared to that in a conventional heat pipe of normal size. 
Since the vapor pressure drop would influence the heat pipe performance, discussion of the vapor pressure drop is indispensable for further development of ultra-thin heat pipes. In the previous study (Koito, 2019), therefore, the author developed a three-dimensional mathematical model to clarify the velocity, pressure and temperature distributions in the ultra-thin heat pipe. The validity of the mathematical model was confirmed using the experimental results of Zhou et al. (2017). This paper describes extended numerical analyses on the vapor pressure drop in the ultra-thin heat pipe. A wick structure was positioned at the center of the vapor flow space. The author's above-mentioned three-dimensional mathematical model was used, and numerical results were obtained by changing design parameters and operating conditions. Discussion was made on the heat transfer limit as well as the vapor pressure drop. A simple method was also presented to evaluate the vapor pressure drop in the heat pipe. MATHEMATICAL MODEL AND NUMERICAL CONDITIONS Details of the mathematical modeling were already described in the author's previous paper (Koito, 2019); therefore, a brief summary is given below. A cross section of the ultra-thin heat pipe with a centered wick structure is shown in Fig. 1. The computational domain is indicated with dotted lines in this figure. Because the cross section was symmetrical, a half domain of the heat pipe was analyzed. As shown in Fig. 2, the mathematical model (length: lt, width: wl + wv, height: h) consisted of two regions: a vapor region and a liquid-wick region. The widths of the vapor and liquid-wick regions were wv and wl, respectively, while the length and the height were lt and h, respectively. One end (length: lh, width: wl) of the bottom surface was heated while the other end (length: lc, width: wl) was cooled. The following equations were solved numerically to obtain the distributions of the velocities, u, v, w, in the x, y, z directions, the pressure, p, and the temperature, T, in the vapor and liquid-wick regions. For the vapor region: For the liquid-wick region: where ρ is the density, µ the viscosity, cp the specific heat at constant pressure, and k the thermal conductivity. V is the velocity vector (= (u, v, w)). Darcy's law was employed in Eq. (5) using the porosity, ε, and the permeability, K. The effective thermal conductivity, keff, was employed in Eq. (6). The subscripts v and l denote the vapor and liquid-wick regions, respectively. At the interface between the vapor and liquid-wick regions, the temperature was considered to be the saturation temperature, Tsat, and the boundary conditions were expressed as follows: where hfg is the latent heat. The Clausius-Clapeyron equation was employed using the reference temperature, Tref, the reference pressure, pref, and the gas constant, Rg. The boundary conditions at the symmetric surface (x = 0) as well as the heated and cooled sections are shown in Fig. 2, where q is the given heat flux. Except for the heated and cooled sections, an adiabatic condition was applied on the outer surface of the model. In addition, because only temperature gradients were given on the outer surface, the temperature at x = wl, y = lt/2, z = h/2 was also prescribed as an operating temperature. This operating temperature was denoted by To. Numerical conditions are shown in Table 1, where the values of lc, h, q and To were changed. 
In addition, sintered copper powder and water were selected as the wick structure and the working fluid, respectively. The value of keff was evaluated using Yagi-Kunii's equation (JAHP, 2001). ε = 0.4 and K = 9.00 × 10 −13 m 2 were given in the present numerical analyses. The value of the porosity was obtained from a wick supplier. The value of the permeability was cited from Faghri (2016). Results of Three-dimensional Numerical Analyses The vapor pressure distributions in the y direction at h = 0.2 mm, 0.3 mm, 0.4 mm and 0.5 mm are shown in Fig. 3 when lc = 10 mm, q = 20 W/cm 2 and To = 50 °C. The numerical results of pv at z = h/2 on the vapor-liquid interface are shown in this figure. Under the same heat inputs, as mentioned in the author's previous paper (Koito, 2019), the vapor velocity becomes higher as the cross section (= wv × h) of the vapor flow space decreases. Therefore, the vapor pressure difference over the vapor region became larger as h decreased. It was found that the vapor pressure difference over the vapor region was comparatively large at h = 0.2 mm. The difference between the vapor pressure distributions at h = 0.4 mm and h = 0.5 mm was very small; however, a relatively large difference was found between h = 0.2 mm and h = 0.3 mm. Although the difference in h was only 0.1 mm, the vapor pressure drop at h = 0.2 mm was considerably larger than that at h = 0.3 mm. Since the vapor region was in a saturated condition, the vapor temperature drop also became larger with the vapor pressure drop, increasing the thermal resistance of the heat pipe. The vapor velocity distributions are shown in Fig. 4 when h = 0.2 mm, q = 20 W/cm 2 and To = 50 °C. The two cases of (a) lc = 10 mm and (b) lc = 60 mm are compared in this figure. The vapor flows from the vapor-liquid interface at the heated section to that at the cooled section are shown in these figures; the vapor velocity at the cooled section for (b) lc = 60 mm was found to be smaller than that for (a) lc = 10 mm, implying that the vapor velocity decreased with the increase in lc. As shown in Fig. 2, the heat flux at the cooled section was given and its value was calculated as q(lh/lc). Therefore, although the cooling surface area was increased, the value of q(lh/lc) became smaller with the increase in lc, causing the decrease in the vapor velocity over the cooled section. The vapor pressure distributions in the y direction at lc = 10 mm, 30 mm, 60 mm and 90 mm are shown in Fig. 5 when h = 0.2 mm, q = 20 W/cm 2 and To = 50 °C. As in Fig. 3, the numerical results of pv at z = h/2 on the vapor-liquid interface are shown in this figure. It was found that the vapor pressure difference over the vapor region became smaller as lc increased, confirming that the cooled surface area was one of the factors affecting the vapor pressure drop in the heat pipe with the ultra-thin structure. From each numerical result, the minimum values of the vapor pressure, pv,min, were obtained and the heat transfer rate of the heat pipe, Q, was calculated by the following equation: Q = 2 q wl lh. Since the computational domain was a half of an actual centered-wick heat pipe (see Fig. 1), the value of Q was obtained by multiplying the heat input to the heated section (= q wl lh) by 2. The relations between pv,min and Q are shown in Fig. 6 for the three cases of (1) h = 0.2 mm, To = 50 °C, (2) h = 0.4 mm, To = 50 °C and (3) h = 0.2 mm, To = 40 °C when lc = 10 mm and q = 20 W/cm 2 . 
In all cases, pv,min decreased with the increase in Q. Comparing the two cases of (1) and (2), it was found that the decrease in pv,min with the increase in Q for h = 0.2 mm was more significant than that for h = 0.4 mm. This was due to the smaller cross section of the vapor region. According to heat pipe theory (Faghri, 2016), a heat pipe encounters a heat transfer limitation when pv,min = 0. This heat transfer limitation is categorized as a viscous limit. The value of Q when pv,min = 0, which implies the maximum heat transfer rate, Qmax, was evaluated for the two cases of (1) and (3) by extrapolating the numerical results of pv,min as shown in the figure by dashed lines. The values of Qmax are also shown in the figure. The difference in To between the two cases of (1) and (3) was 10 °C; nevertheless, the value of Qmax for To = 40 °C was found to be much smaller than that for To = 50 °C. Therefore, regarding the ultra-thin heat pipe, it was confirmed that the viscous limit was a possible limitation and that the maximum heat transfer rate was greatly affected by the operating temperature of the heat pipe. Simple Evaluation of Vapor Pressure Drop An attempt was also made to present a simple method to evaluate the vapor pressure drop. In this calculation, a one-dimensional vapor flow in the y direction between two parallel walls was considered. These walls were positioned with a gap of h. A z axis was also given perpendicular to the walls. Under this condition, Eq. (2) was simplified as follows: dpv/dy = µv d 2 vv/dz 2 (9). The integration of Eq. (9) with vv = 0 both at z = 0 and z = h yielded the following equation: vv = (1/(2µv))(dpv/dy)(z 2 − hz) (10). The vapor volume flow rate, Vv, was calculated by Vv = ∫ 0 h vv dz (11), and the substitution of Eq. (10) into Eq. (11) yielded the following equation: Vv = −(h 3 /(12µv))(dpv/dy) (12). From a mass balance, on the other hand, the change in the vapor volume flow rate in the y direction, dVv/dy, was given as q/(ρv hfg), 0 and −q(lh/lc)/(ρv hfg) for the heated, adiabatic and cooled sections, respectively (13). The vapor pressure distributions over the vapor region were obtained simply with Eqs. (12) and (13). The calculated results with Eqs. (12) and (13) were compared with the results of the three-dimensional numerical analyses. The comparisons are shown in Figs. 7 and 8 concerning the vapor pressure distribution, pv, and the total vapor pressure difference, ∆p, respectively. ∆p was evaluated as ∆p = pi,h − pi,c, where pi,h and pi,c are the vapor pressures on the vapor-liquid interface at the ends of the heated side (y = 0, z = h/2) and the cooled side (y = lt, z = h/2), respectively. In Fig. 7, the comparison of pv was made for the two cases of h = 0.2 mm and h = 0.4 mm when lc = 10 mm, q = 20 W/cm 2 and To = 50 °C. Fig. 8 was obtained by changing q and lc as shown in the figure. Because the effect of friction at the walls of x = wl and x = wl + wv in Fig. 2 was not considered in the simple calculations, the vapor pressure distributions in Fig. 7 and the total pressure differences in Fig. 8 calculated with Eqs. (12) and (13) were found to be slightly smaller than those of the three-dimensional numerical analyses. In the range of the present calculations, the difference between the simple calculations and the numerical results was found to be within 10%, confirming the validity of the simple calculations. CONCLUSIONS Extended numerical analyses were conducted concerning the vapor pressure drop and the viscous limit of the centered-wick ultra-thin heat pipe. The numerical results were obtained using the three-dimensional mathematical model developed by the author. 
Regarding the ultra-thin heat pipe, the findings were summarized as follows under the present numerical conditions and the calculation range. • The vapor pressure drop with the vapor flow space of 0.2 mm in height was much larger than that of 0.3 mm, although their difference in height was only 0.1 mm. The cooled surface area was also one of the factors affecting the vapor pressure drop.
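To make the simple method of Eqs. (12) and (13) concrete, here is a minimal one-dimensional Python sketch that marches the volume flow rate per unit width Vv and the pressure pv along the pipe. The gap height and heat flux loosely follow the baseline case above (h = 0.2 mm, q = 20 W/cm 2 , To = 50 °C), while lh, lt, the assumption of equal heated and vapor widths, and the water-vapor properties are our own placeholders, not values from the paper.

import numpy as np

lh, lc, lt = 0.010, 0.010, 0.100  # heated, cooled, total lengths [m] (assumed)
h = 0.2e-3                        # vapor gap height [m]
q = 20.0e4                        # heat flux at the heated section [W/m^2]
mu_v = 1.06e-5                    # vapor viscosity [Pa s], water near 50 C
rho_v = 0.083                     # vapor density [kg/m^3], water near 50 C
h_fg = 2.38e6                     # latent heat [J/kg]

n = 2000
y = np.linspace(0.0, lt, n)
dy = y[1] - y[0]

# Eq. (13): growth of the volume flow rate per unit width, Vv [m^2/s]
dVdy = np.where(y < lh, q / (rho_v * h_fg),
                np.where(y > lt - lc, -q * lh / (lc * rho_v * h_fg), 0.0))
Vv = np.cumsum(dVdy) * dy

# Eq. (12): dpv/dy = -12 mu_v Vv / h^3, integrated along the pipe
p_rel = np.cumsum(-12.0 * mu_v * Vv / h**3) * dy
print(f"total vapor pressure drop ~ {p_rel[0] - p_rel[-1]:.0f} Pa")

With these placeholder numbers the drop comes out comparable to the saturation pressure of water near 50 °C, consistent with the paper's conclusion that the viscous limit becomes a real concern at h = 0.2 mm; side-wall friction is neglected, which is why the simple method slightly underpredicts the three-dimensional results.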
2020-01-02T21:48:28.739Z
2019-12-28T00:00:00.000
{ "year": 2019, "sha1": "9df35a996895850693e870404ac860345ca67e53", "oa_license": "CCBY", "oa_url": "http://thermalfluidscentral.org/journals/index.php/Heat_Mass_Transfer/article/view/1042/732", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "27ecffed4e865801c78505664ee1f6f2ecfff98a", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Materials Science" ] }
15755369
pes2o/s2orc
v3-fos-license
Subresultants in Multiple Roots We extend our previous work on Poisson-like formulas for subresultants in roots to the case of polynomials with multiple roots, in both the univariate and multivariate cases, and also explore some closed formulas in roots for univariate polynomials in this multiple roots setting. Introduction In [DKS2006] we presented Poisson-like formulas for multivariate subresultants in terms of the roots of the system given by all but one of the input polynomials, provided that all the roots were simple, i.e. that the ideal generated by these polynomials is zero-dimensional and radical. Multivariate resultants were mainly introduced by Macaulay in [Mac1902], after earlier work by Euler, Sylvester and Cayley, while multivariate subresultants were first defined by Gonzalez-Vega in [GLV1990, GLV1991], generalizing Habicht's method [Hab1948]. The notion of subresultants that we use in this text was introduced by Chardin in [Cha1995]. Later on, in [DHKS2007, DHKS2009], we focused on the classical univariate case and reworked the relation between subresultants and double Sylvester sums, always in the simple roots case (where double sums are actually well-defined). This is also the subject of the more recent articles [RS2011, KS2012]. As one of the referees of the MEGA'2007 conference pointed out to us, working out these results for the case of polynomials with multiple roots would also be interesting. This paper is a first attempt in that direction. We succeed in describing Poisson-like formulas for univariate and multivariate subresultants in the presence of multiple roots, as well as in obtaining formulas in roots in the univariate setting for subresultants of degree 1 and of degree immediately below the minimum of the degrees of the input polynomials: the two nontrivial extremal cases in the sequence of subresultants. We cannot generalize these formulas for other intermediate degrees, and it is still not clear to us what the correct way of generalizing Sylvester double sums in the multiple roots case is. The paper is organized as follows: In Section 2 we recall the definitions of the classical univariate subresultants and Sylvester double sums, and of the generalized Wronskian and Vandermonde matrices. We then show how the Poisson formulas obtained in [Hon1999] for the subresultants in the case of simple roots extend to the multiple roots setting by means of these generalized matrices. We also obtain formulas in roots for subresultants in the two extremal non-trivial cases mentioned above. In Section 3 we present Poisson-like formulas for multivariate subresultants in the case of multiple roots, generalizing our previous results described in [DKS2006]. Acknowledgements: We wish to thank the referee for her/his careful reading and comments. A preliminary version of these results was presented at the MEGA 2009 Conference in Barcelona. Part of this work was done at the Fields Institute in Toronto while the authors were participating in the Fall 2009 Thematic Program on Foundations of Computational Mathematics. 2. Univariate Case: Subresultants in multiple roots 2.1. Notation. We first establish notation that will make the presentation of the problem and the state of the art simpler. We associate to A and B the monic polynomials f and g of degrees d and e respectively, and the set R(A, B), defined with natural limits when the roots are packed. 2.2. Subresultants and Sylvester double sums. 
We recall that for 0 ≤ t ≤ d < e or 0 ≤ t < d = e, the t-th subresultant of the polynomials f = a d x d + · · · + a 0 and g = b e x e + · · · + b 0 , introduced by J.J. Sylvester in [Sylv1853], is defined as follows, with a ℓ = b ℓ = 0 for ℓ < 0. When t = 0 we have Sres 0 (f, g) = Res(f, g). In the same article Sylvester also introduced, for 0 ≤ p ≤ d, 0 ≤ q ≤ e, the following double-sum expression in A and B. We note that Sylv p,q (A, B; x) only makes sense when α i ≠ α j and β i ≠ β j for i ≠ j, since otherwise some denominators in Sylv p,q (A, B; x) would vanish. The following relation between these double sums and the subresultants (for monic polynomials f and g with simple roots) was described by Sylvester: for any choice of 0 ≤ p ≤ d and 0 ≤ q ≤ e such that t := p + q satisfies t < d ≤ e or t = d < e, one has Identity (2). This gives an expression for the subresultant in terms of the differences of the roots, generalizing the well-known formula (1), in case f and g have only simple roots. However, when the roots are packed, i.e. when we deal with A and B, the expression for the resultant is stable, while not only is there no simple expression of what Sres t (f , g) is in terms of differences of roots, but moreover there is no simple definition of what Sylv p,q (A, B; x) should be in order to preserve Identity (2). Of course, since Sres t (f , g) is defined anyway, Sylv p,q (A, B; x) could be defined as the resulting expression, but this is not quite satisfactory because, on one hand, this does not clarify how Sres t behaves in terms of the roots when these are packed, and, on the other hand, Sylv p,q (A, B; x) is defined for every 0 ≤ p ≤ d and 0 ≤ q ≤ e while Sres t is only defined for t := p + q ≤ min{d, e}. In what follows we express some particular cases of the subresultant of two univariate polynomials in terms of the roots of the polynomials, when these polynomials have multiple roots. These are partial answers to the questions raised above, since we were not able to give a correct expression for what the Sylvester double sums should be, even in the particular cases we could consider. Nevertheless, the results we obtained give a hint of how complex it can be to give complete general answers, at least in terms of double or multiple sums; see Theorem 2.7 below. 2.3. Generalized Vandermonde and Wronskian matrices. We need to recall some facts on generalized Vandermonde and Wronskian matrices. The determinant of a square confluent matrix is non-zero and satisfies [Ait1939]: In the same way that the usual Vandermonde matrix V (A) is related to the Lagrange Interpolation Problem on A, the generalized Vandermonde matrix V (A) is associated with the Hermite Interpolation Problem on A [Kal1984]: there exists a unique polynomial p of degree deg(p) < d which satisfies the following conditions: This Hermite polynomial p = a 0 + a 1 x + · · · + a d−1 x d−1 is given by the only solution of (a 0 a 1 . . . a d−1 ) · V (A) = (y 1,0 y 1,1 . . . y m,dm−1 ) (here the right-hand vector is indexed by the pairs (i, j)). The polynomial p can also be viewed in a more suitable basis, where the corresponding "Vandermonde" matrix has more structure. We introduce the d polynomials of this basis; then, in this basis, the polynomial p satisfies: The following proposition generalizes these two extremal formulas. Then, where k 1 + · · · + k̂ i + · · · + k m denotes the sum without k i . (When m = 1, the right expression under brackets is understood to equal 1 for k = 0 and 0 otherwise.) Proof. Applying for instance [Sp1960, Th. 
1], we first remark that: Then we plug into the expression the following identity, given by the Leibniz rule for the derivative of a product: The basic Hermite polynomials enable us to compute the inverse of the confluent matrix V (A). Now we set the notation for a slight modification of a case of generalized Wronskian matrices. When u = d, we omit the sub-index u and write W h (A). For example, for h(z) = x − z and A = (α, 3): The determinant of a square Wronskian matrix is easily obtained by performing row operations in the case of one block, and by induction on the size of the matrix in general: 2.4. Subresultants in multiple roots. In this section, we describe explicit formulas we can get for the non-trivial extremal cases of subresultants in terms of both sets of roots of f and g. More precisely, we present formulas for Sres t (f , g) for the cases t = d − 1 < e (Proposition 2.6 below) and t = 1 < d ≤ e (Theorem 2.7). We will derive them from Theorem 2.5 below, a generalization of [Hon1999, Th. 3.1] and [DHKS2007, Lem. 2] which includes the multiple roots case (and is also strongly related to a multiple roots case version of [DHKS2009, Th. 1]). The main drawback of this approach to obtaining formulas for all cases of t is the fact that submatrices of generalized Vandermonde matrices are not always generalized Vandermonde matrices, so in general their determinants cannot be expressed as products of differences. This is why the search for nice formulas in double sums in the case of multiple roots is more challenging. Then, where c := max{e (mod 2), d − t (mod 2)}. Proof. The proof is quite similar to the proofs of Lemmas 2 and 3 in [DHKS2007], replacing the usual Vandermonde and Wronskian matrices by their generalized counterparts. We will thus omit the intermediate computations. We introduce the following matrices of [DHKS2007]: Also, exactly as in the proof of [DHKS2007, Lem. 2], This implies first the generalization of [DHKS2007, Lem. 2] to the multiple roots case, where the second equality is a consequence of obvious row and column operations. Next, we get as in the proof of [DHKS2007, Lem. 3]. We note that starting from the first equality above and applying similar arguments, we also get very simply: As mentioned above, when t = 0 the formula in roots for Sres 0 (f, g) specializes well when considering Sres 0 (f , g). When t = d < e, the formula Sres d (f, g) = ∏ 1≤i≤d (x − α i ) also specializes well as Sres d (f , g). Our purpose now is to understand formulas in roots for the following extremal subresultants, i.e. for Sres 1 and Sres d−1 , in the case of multiple roots. • The case t = d − 1 < e: When f has simple roots, it is known (or can easily be derived, for instance, from Sylvester's Identity (2) for p = d − 1 and q = 0) that Sres d−1 (f, g) = ∑ 1≤i≤d g(α i ) p i (x), where p i is the basic Lagrange interpolation polynomial of degree strictly bounded by d such that p i (α i ) = 1 and p i (α j ) = 0 for j ≠ i. In other words, Sres d−1 (f, g) is the Lagrange interpolation polynomial of degree strictly bounded by d which coincides with g in the d values α 1 , . . . , α d . This formula does not apply when f has multiple roots, but we can show that we get the natural generalization of this fact, that is, that Sres d−1 (f , g) is the Hermite interpolation polynomial of degree strictly bounded by d which coincides with g and its derivatives up to the corresponding orders in the m values α 1 , . . . , α m : Proposition 2.6. Sres d−1 (f , g) is the Hermite interpolation polynomial of g at A, expressed through the basic Hermite interpolation polynomials defined by Condition (4) or Proposition 2.3 for A. Proof. 
In this case, applying the first statement of Theorem 2.5 we get, where, following the subindex notation of Formula (3), we note that: The conclusion follows by Formula (3). For example, when A = (α, d), we get the Taylor expansion of g up to order d − 1. • The case t = 1 < d ≤ e: We keep Notation 2.2. When f has simple roots, it is known (or can easily be derived, for instance, from Sylvester's Identity (2) for p = 1 and q = 0) that Identity (6) below holds. The general situation is a bit less obvious, but in any case we can get an expression of Sres 1 (f , g) by using the coefficients of the Hermite interpolation polynomial, in this case of the whole data A ∪ B := (α 1 , d 1 ); . . . ; (α m , d m ); (β 1 , e 1 ); . . . ; (β n , e n ). We note that this holds even when α i = β j for some i, j. Proof. Setting t = 1 in Expression (5) we get: We expand the determinant w.r.t. the first row, and observe that when we delete the first row and column j, the matrix that survives coincides with V (A ∪ B) (d+e,j) , the submatrix of V (A ∪ B) obtained by deleting the last row and column j. Therefore, where φ(i) equals the number of the column corresponding to (1, α i , . . . , α i d+e−1 ) (by the cofactor expression for the inverse). We set h := f g, and for i = 1, . . . , m, and when d i > 1, Therefore, we obtain the statement by applying the Leibniz rule. Note that in the case that f has simple roots we immediately recover Identity (6), while when f = (x − α) d for d ≥ 2, we recover Proposition 3.2 of [DKS2009]. 3. Multivariate Case: Poisson-like formulas for Subresultants We turn to the multivariate case, considering the definition of subresultants introduced in [Cha1995]. Our goal is to generalize Theorem 3.2 in [DKS2006] (which we recall below) to the case when the considered polynomials have multiple roots. We first fix the notation, referring the reader to [DKS2006] for more details. Fix t ∈ N. Let k := H D 1 ...D n+1 (t) be the Hilbert function at t of a regular sequence of n + 1 homogeneous polynomials in n + 1 variables of degrees D 1 , . . . , D n+1 [Cha1995]. Here, f h i denotes the homogenization of f i by the variable x n+1 . We recall that the subresultant ∆ S is a polynomial in the coefficients of the f h i of degree H D 1 ...D i−1 D i+1 ...D n+1 (t − D i ) for 1 ≤ i ≤ n + 1, having the following property: ∆ S = 0 if and only if I t ∪ S h does not generate the space of all forms of degree t in k[x 1 , . . . , x n+1 ], where I t denotes the degree t part of the ideal generated by the f h i 's. By [Cha1994] we know that where M S denotes the Macaulay-Chardin matrix obtained from (8) by deleting the columns indexed by the monomials in S, and E(t) is the extraneous factor defined as the determinant of a specific square submatrix of (8) (see [Cha1995, Cha1994, DKS2006]). 3.2. Poisson-like formula for subresultants. From now on we assume that f 1 , . . . , f n are generic in the sense that they have no roots at infinity (which implies, by Bézout's theorem, that the quotient algebra A := K[x]/(f 1 , . . . , f n ) is a finite-dimensional K-vector space of dimension D, which equals the number of common roots in K n of these polynomials, counted with multiplicity; see e.g. [CLO1998, Ch. 3, Th. 5.5]), and that T is a basis of A. In [DKS2006] we treated the case of general polynomials with indeterminate coefficients, which specializes well under our assumptions to the case when the common roots ξ 1 , . . . , ξ D of f 1 , . . . , f n in K n are all simple. Set Z := {ξ 1 , . . . , ξ D }. 
We introduced the Vandermonde matrix whose determinant is non-zero, since T is assumed to be a basis of A, and we defined: Theorem 3.1. [DKS2006, Th. 3.2] For any t ∈ Z ≥0 and for any S = {x γ 1 , . . .}, In order to generalize this result to systems with multiple roots, and obtain an expression for the subresultant in terms of the roots of the first n polynomials f 1 , . . . , f n , we need to introduce notions of the multiplicity structure of the roots that are sufficient to define (f 1 , . . . , f n ). To be more precise, in the case of multiple roots, the set of evaluation maps {ev ξ : A → K | ξ common root of f 1 , . . . , f n } is no longer a basis of A * , the dual of the quotient ring A as a K-vector space, though still linearly independent. Hence other forms must be considered in order to describe A * and to get a non-singular matrix generalizing V T (Z). All along this section we will use the language of dual algebras to generalize Theorem 3.1 to the multiple roots case (see for instance [KK1987, BCRS1996] and the references therein). In Theorem 3.4 below we show that any basis of the dual A * gives rise to generalizations of Theorem 3.1, as long as we assume that T is a basis of A. This is the most general setting where a generalization of Theorem 3.1 will hold. However, this version of the theorem, using general elements of the dual, does not give a formula for the subresultant in terms of the roots. In order to obtain these expressions, we need to consider a specific basis of A * which contains the evaluation maps described above. It turns out that one can define a basis for A * in terms of linear combinations of higher-order derivative operators evaluated at roots of f 1 , . . . , f n . This is the content of the so-called theory of "inverse systems" introduced by Macaulay in [Mac1916], and developed in a context closer to our situation under the name of "Gröbner duality" in [Gr1970, MMM1995, EM2007], among others. The following is a multivariate analogue of Definition 2.4: Definition 3.2. Let Λ := {Λ 1 , . . . , Λ D } be a basis of A * as a K-vector space. Given any set E = {x α 1 , . . . , x αu } of u monomials and given any polynomial h ∈ K[x], the generalized Vandermonde matrix V E (Λ) and the generalized Wronskian matrix W h,E (Λ) corresponding to E, Λ and h are the following u × D matrices: We modify the definition of the matrix O S (Z) in (12) as follows, with T as in (9) and R as in (10). Note that by our assumption on T being a basis of A and Λ being a basis of A * , we have det V T (Λ) ≠ 0. The following is the extension of Theorem 3.1 to the multiple roots case. Theorem 3.4. Let (f 1 , . . . , f n+1 ) ⊂ K[x] and T := ∪ j≥0 T j specified in (9) satisfying our assumptions, and let Λ be an arbitrary basis of A * . For any t ∈ Z ≥0 and for any S = {x γ 1 , . . .}, Proof of Theorem 3.4. The proof is similar to the proof of Theorem 3.2 in [DKS2006], to which we refer for notations and details. Extra care must be taken, however, as we no longer consider the polynomials f 1 , . . . , f n to have simple common roots. Using the exact same argument as in the proof of Theorem 3.2 in [DKS2006] we can prove that it is the submatrix of (8) obtained by removing the columns corresponding to the monomials in T . In [DKS2006] we also showed that so the claim is proved when E(t) ≠ 0. If E(t) = 0, we consider a perturbation "à la Canny" as in [Can1990], i.e. we replace f i by f i,λ , where λ is a new parameter, for 1 ≤ i ≤ n. 
It is easy to see that this perturbed system has no roots at infinity over the algebraic closure of K(λ), since the leading term in λ of the resultant of its homogeneous components of degrees D 1 , . . . , D n does not vanish, and hence the dimension of the quotient ring A λ := K(λ)[x]/(f 1,λ , . . . , f n,λ ) as a K(λ)-vector space is also equal to D. It can also be shown (see [Can1990]) that E λ (t) ≠ 0, where E λ (t) denotes the extraneous factor in Macaulay's formulation applied to the polynomials f i,λ , 1 ≤ i ≤ n. Indeed, if E t is the matrix whose determinant gives E(t), with rows and columns ordered properly, it is easy to see that the perturbed matrix is equal to E t + λ I, where I is the identity matrix. Therefore, the statement holds for this perturbed family, yielding Identity (14) for the polynomials f 1,λ , . . . , f n,λ . The subresultants appearing in (14) are polynomials in λ that, when evaluated at λ = 0, satisfy: So, in order to prove the claim, it is enough to show that there exists a basis of A * λ which "specializes" to Λ when setting λ = 0, i.e. to find a basis Λ λ of A * λ such that Condition (15) below holds, and then to apply Identity (14) to Λ λ and specialize it at λ = 0. We now construct the basis Λ λ : The monomial basis T = {x α 1 , . . . , x α D } of A is also a monomial basis of A λ , since it is clearly linearly independent, and therefore it defines the dual bases {y α 1 , . . . , y α D } ⊂ A * and {y α 1 ,λ , . . . , y α D ,λ } ⊂ A * λ , satisfying, for 1 ≤ j, k ≤ D, y α k (x α j ) = y α k ,λ (x α j ) = 1 if k = j and 0 otherwise. We write Λ i = Σ D k=1 c ik y α k for 1 ≤ i ≤ D, where c ik ∈ K, and then set Λ λ := {Λ 1,λ , . . . , Λ D,λ }, with Λ i,λ := Σ D k=1 c ik y α k ,λ , 1 ≤ i ≤ D. Note that the matrix (c ij ) 1≤i,j≤D is invertible. This implies that Λ λ is a basis of A * λ . We claim now that, for every α ∈ N n , there exist polynomials p α and A j,α such that Identity (18) below holds. For this, it suffices to express the monomial x α in terms of the basis T of A λ and take p α (λ) as a common denominator when lifting the expression to K[λ][x], satisfying the condition gcd(p α , A j,α , 1 ≤ j ≤ D) = 1. It is clear that p α (0) ≠ 0 because T is also a basis of A, and by assumption 0 is not a common root of p α and the A j,α , 1 ≤ j ≤ D. Applying Λ i,λ to Identity (18) and Λ i to Identity (18) specialized at λ = 0, we then get by (16), for 1 ≤ i ≤ D: This implies that the entries of the matrix O S (Λ λ ) are the same K-linear combinations of the quotients A j,α (λ)/p α (λ) as the entries of the matrix O S (Λ) are of the quotients A j,α (0)/p α (0). This and Identity (17) imply (15), which proves the statement. We set Z := {ξ 1 , . . . , ξ D } for the set of all common roots of f 1 , . . . , f n in K n , with multiplicities, and m ξ i ⊂ K[x] for the maximal ideal corresponding to ξ i for 1 ≤ i ≤ m. Using Gröbner duality, we are now able to give an expression for the subresultant in terms of the roots of f 1 , . . . , f n . For D ∈ K[[∂]] and ξ ∈ K n , we denote by D| ξ the element of A * defined as D| ξ (f ) = D(f )(ξ). In particular, under this notation, 1| ξ = ev ξ . Note that the above choice for the dual basis Λ contains the evaluation maps for the roots of I, and using this Λ in Theorem 3.4 gives an expression for the subresultant in terms of the roots of I.
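As a quick computational illustration of the univariate results above, the sympy sketch below checks the remark following Proposition 2.6: for A = (α, d), i.e. f = (x − α) d , the subresultant of degree d − 1 recovers the Taylor expansion of g at α up to order d − 1. The example polynomials are our own choice, and since normalization conventions for subresultant sequences can differ between implementations, the sketch only tests proportionality by a constant.

from sympy import symbols, expand, subresultants, factorial, cancel, Poly

x = symbols('x')
alpha, d = 2, 3
f = expand((x - alpha)**d)   # f = (x - 2)^3: a single root of multiplicity 3
g = x**3 + x + 1             # monic test polynomial with deg g >= d

# Subresultant polynomial remainder sequence; take the member of degree d - 1.
prs = subresultants(f, g, x)
s = next(p for p in prs if Poly(p, x).degree() == d - 1)

# Taylor expansion of g at alpha up to order d - 1.
taylor = sum(g.diff(x, k).subs(x, alpha) * (x - alpha)**k / factorial(k)
             for k in range(d))

# A constant output confirms that s and taylor agree up to normalization.
print(cancel(expand(s) / expand(taylor)))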
2012-11-05T17:14:01.000Z
2010-12-21T00:00:00.000
{ "year": 2010, "sha1": "822455061318014058a21e61383febe2f91937f3", "oa_license": "elsevier-specific: oa user license", "oa_url": "https://doi.org/10.1016/j.laa.2012.11.004", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "822455061318014058a21e61383febe2f91937f3", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
221614717
pes2o/s2orc
v3-fos-license
The Impact of Self-Reported Recurrent Headache on Absenteeism and Presenteeism at Work Among Finnish Municipal Female Employees Purpose The aim of this cross-sectional, observational study was to determine the impact of self-reported headache on absenteeism and presenteeism in a female working-age population. Subjects and Methods The study population consisted of 594 Finnish female municipal employees, who answered self-administered questionnaires including sociodemographic, lifestyle, health, and work-related data. Sickness absence days were obtained from the official records of the employer. Headache recurrence was defined by asking whether headache was occasional or recurrent. Headache impact was measured by the HIT-6. Results In our study, 456 (77%) females had headache, and headache was recurrent in 178 (39%). The self-reported recurrence of headache was related to age, AUDIT-C, health-related quality-of-life, self-rated work ability, depressive symptoms, and work stress (P for linearity <0.001). Mental work load was highest in those with recurrent headache (P=0.042), and work engagement was highest in those without headache (P=0.038). There was no statistically significant difference in absenteeism days between the headache groups when adjusted for confounding variables. Presenteeism was associated with the recurrence of headache (P for linearity <0.001). Presenteeism and the HIT-6 score were significantly associated in the recurrent headache group (P=0.009). Conclusion Headache was not related to absenteeism, but the self-reported recurrence of headache was clearly associated with presenteeism in this female working-age population. Introduction Headache is one of the most common disorders of the nervous system; its global lifetime prevalence is 66%. [1][2][3] Headache is associated with substantial disability. 4 Numerous earlier studies have shown that frequent or chronic headache impairs health-related quality-of-life. 5,6 The burden of headache is present also during interictal periods in patients with episodic headache. 7 Similarly to other pain conditions, headache is more prevalent in females, and comorbidity between pain conditions and mental and somatic problems is also higher in women than in men. 8 Comorbid conditions, for example other physical diseases or mental disorders, are often the major explanation for role disability in the headache population. 9 The presence of a chronic illness has been shown to be a strong risk factor for sickness absence from work. 10 Illness is also a cause of decreased productivity while working, a state called presenteeism, which in turn seems to lead to absenteeism. 11,12 It is widely known that headache, especially severe migraine and chronic headache, causes absenteeism, and the socio-economic burden of headache for the individual and for society is substantial. 1,[13][14][15][16] Headache prevalence is high, especially among working-age females, and headache has a negative impact on different aspects of life. 17 Headache severity is more relevant than the headache diagnosis regarding work ability. 18 Work-related psychosocial risk factors such as role conflict, low social climate, and bullying may provoke headache. 19 To our knowledge, only one study has evaluated work engagement in a headache population. 20 It showed that job demands and job resources are important for work ability in employees with chronic headache. 
The aim of this cross-sectional, observational study was to determine the impact of self-reported recurrent headache on absenteeism and presenteeism among a female working-age population. Study Population This report is based on a longitudinal cohort study comprising employees of the city of Pori (Southwestern Finland). The study material was collected in 2014-2015 as part of the PORTAAT (PORi To Aid Against Threats) study. 21 The managers of the work units sent the invitation and study information letters by e-mail to the employees. No exclusion criteria were applied. Subjects from 10 work units were enrolled, and the occupations included were librarians, museum employees, groundkeepers, computer workers, social workers, nurses, physicians, administrative officials, and general office staff. All study subjects were informed in an appointment with the study nurse and consented to the study. For the present analyses, we included 594 females who completed the follow-up visit in 2015, and who had completed the work-related questionnaires and the headache questionnaire. The study subjects answered the question "Have you had headache during the past year?" (yes/no). If the subject had suffered from headache, its recurrence was assessed with the question "Has your headache been recurrent?" (yes/no). Accordingly, the headache population was categorized as having occasional or recurrent self-reported headache. Both headache groups filled in the HIT-6 questionnaire. Sociodemographic, Lifestyle, and Health-Related Factors Demographic, lifestyle, and health data were collected using self-administered questionnaires. Information was gathered about years of education, financial satisfaction (with the question "Do you have to save on expenditures?"; "yes" or "no"), marital status ("cohabiting or not"), smoking ("current smoker" or "non-smoking", i.e. having never smoked or having stopped smoking > 12 months ago), alcohol consumption (the 3-item Alcohol Use Disorders Identification Test, AUDIT-C), and quality of sleep ("good" or "not good"). 22 Leisure-time physical activity (LTPA) was assessed by asking for the recurrence and duration of physical activities during a typical week. LTPA was considered high if activity was ≥30 minutes at a time four or more times a week; moderate if ≥30 minutes at a time two to three times a week; and low if ≥30 minutes at a time at most once a week. Health-related quality-of-life was assessed with the EQ-5D questionnaire. 23 The subjects self-rated their mood using the Major Depression Inventory (MDI) questionnaire. The MDI measures depressive symptoms during the past 2 weeks. 24 The MDI consists of ten items, each evaluated on a Likert-type scale from 0=never to 5=all the time. The total score of the MDI ranges from 0 to 50. The higher the score, the more severe is the subject's depression. The optimal cut-off score for major (moderate-to-severe) depression is 26. Height and weight of the participants were measured by the study nurse. Body mass index (BMI) was calculated as weight (kg) divided by the square of height (m 2 ). Medical records and self-administered questionnaires served as the sources of information on the subjects' chronic diseases and regular medications. A study subject was considered to have diabetes, malignancies, musculoskeletal, cardiovascular, psychiatric, pulmonary, gastroenterological, or neurological diseases, if the disease was diagnosed by a physician and/or she used appropriate medication. 
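A minimal sketch of the two rule-based variables just described, BMI and the three-level LTPA category; this is an illustration of the stated rules, not the study's analysis code, and the example values are hypothetical.

def bmi(weight_kg, height_m):
    # Body mass index: weight (kg) divided by the square of height (m).
    return weight_kg / height_m**2

def ltpa_category(sessions_per_week):
    # Leisure-time physical activity, counting sessions of >= 30 minutes.
    if sessions_per_week >= 4:
        return "high"
    if sessions_per_week >= 2:
        return "moderate"
    return "low"

print(round(bmi(65.0, 1.68), 1), ltpa_category(3))  # -> 23.0 moderate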
Headache Impact Test (HIT-6) To assess the impact of headache, the HIT-6 was used. It is a self-administered questionnaire presenting three questions concerning headache during the past 4-week period and a further three questions about headaches with no time limit. 25 Each of the HIT-6 items is scored for frequency using the scale never=6, rarely=8, sometimes=10, very often=11 and always=13, the total score ranging from 36 to 78. Based on the total HIT-6 score, the subjects can be categorized into four groups according to the impact of headache: little or no impact (<50), some impact (50-55), substantial impact (56-59), and very severe impact (≥60). 26 The study subjects filled in the new Finnish version of the HIT-6 questionnaire, which was produced by a forward-backward translation process. A new Finnish translation was done because of problems in the earlier Finnish version of the HIT-6. 27 The new Finnish translation was performed without the approval of OptumInsight Life Sciences (QualityMetrics), but a retroactive license has since been issued. Work-Related Factors Work Engagement The Utrecht Work Engagement Scale (UWES-9) measures work engagement. 28 The UWES-9 consists of three subscales, which focus on vigor, dedication, and absorption. Subscales were rated on a 7-point Likert scale ranging from 0 (strongly disagree) to 6 (strongly agree). The overall work engagement score was obtained by summing up all items and dividing by the number of items in each scale; the higher the item ratings, the higher the overall work engagement. In our study, the work engagement tertiles were 1: <4.5; 2: 4.6-5.2; and 3: >5.3. Work Ability To evaluate work ability, a single question was presented to the subjects: "What is your current work ability compared to your lifetime best?". This is the first item in the widely used Work Ability Index (WAI), defined as the Work Ability Score (WAS). 29 The WAS is obtained using a 0-10 response scale, 0 denoting complete inability to work and 10 indicating "work ability at its best". Reference values for the WAS transformed to the WAI are: poor (0-5 points), moderate (6-7), good (8-9), and excellent (10). The WAS and the WAI are strongly associated and are accurate indicators of work ability. 30 Physical and Mental Workload Physical workload was assessed with the question "How strenuous is your work physically?" and mental workload with the question "How strenuous is your work mentally?". Answers were given on a 100 mm long visual analog scale (VAS) with a 0-100 response scale (0=very light to 100=very hard). Work Stress The Bergen Burnout Indicator (BBI-15) was used to evaluate work stress. 31 The BBI-15 measures occupational burnout using 15 questions; the answers are given using Likert-type scales from 1 to 6 (1=completely disagree to 6=completely agree) and are summed up to a score from 15 to 90, a high score indicating high levels of work stress. Daytime Work and Absenteeism Records The data concerning daytime or shift work and the count of sickness absence days during the 2-year period of January 1, 2014-December 31, 2015 were obtained from the official records of the employer (the city of Pori). The mean of the sick leave days of 2014 and 2015 was used. Presenteeism Presenteeism at work was assessed with a question and a 100 mm long visual analog scale with advice for use: "If you had work days during the past month, evaluate how much your health problems have affected your work performance while working" (from 0=no problems to 100=completely hindered my work performance). 
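As an illustration of the HIT-6 scoring and categorization rules described above (a sketch of the published scoring scheme, not the study's own code; the example answers are hypothetical):

SCORES = {"never": 6, "rarely": 8, "sometimes": 10, "very often": 11, "always": 13}

def hit6_total(answers):
    # answers: six response strings, one per HIT-6 item; totals range 36-78.
    if len(answers) != 6:
        raise ValueError("HIT-6 has exactly six items")
    return sum(SCORES[a] for a in answers)

def hit6_category(total):
    # Four impact groups: <50, 50-55, 56-59, >=60.
    if total < 50:
        return "little or no impact"
    if total <= 55:
        return "some impact"
    if total <= 59:
        return "substantial impact"
    return "very severe impact"

answers = ["sometimes", "very often", "rarely", "sometimes", "always", "never"]
print(hit6_total(answers), hit6_category(hit6_total(answers)))  # 58 substantial impact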
Ethics Approval and Consent to Participate
The Ethics Committee of the Hospital District of Southwestern Finland reviewed and approved the study protocol and consent forms. Written informed consent was given by all participants.

Statistical Methods
The statistical significance of the unadjusted hypothesis of linearity across categories of headache and the characteristics of the study participants were evaluated using the Cochran-Armitage test for trend, analysis of variance (ANOVA), and logistic (ordinal) models with an appropriate contrast. Adjusted relationships between headache categories and absenteeism days or presenteeism were analyzed using generalized linear models with an appropriate distribution and link function. In the case of violation of the assumptions (eg, non-normality), a bootstrap-type test was used. The normality of variables was evaluated graphically and using the Shapiro-Wilk W-test. Stata 16.0 (StataCorp LP, College Station, TX, USA) was used for the analyses.

Results
The study population consisted of 594 female employees, of whom 456 (77%) had had headache symptoms during the last year. Headache was recurrent in 178 (39%) of these subjects. The characteristics of the subjects according to self-reported headache recurrence are shown in Table 1. With an increasing level of headache, females were more likely to be younger and to have a higher BMI, a lower AUDIT-C score, lower health-related quality of life, and lower work ability. Recurrence of headache was related to age, AUDIT-C score, health-related quality of life, self-rated work ability, depressive symptoms, and work stress (P for linearity <0.001). Mental workload was highest in those with recurrent headache (P=0.042), and work engagement was highest in those without headache (P=0.038). There was no statistically significant difference in absenteeism days between the headache groups when adjusted for confounding variables. Presenteeism was associated with the recurrence of headache (P for linearity <0.001). Presenteeism and the HIT-6 score were significantly associated in the recurrent headache group (P=0.009). The mean number of absenteeism days and the mean level of presenteeism are presented in Table 2, both as crude results (model I) and after adjustments (models II-IV). The number of absenteeism days was highest in the recurrent headache group both as crude results (model I) and after adjustments (models II-IV), but the relation was statistically significant only in models I and II (ie, in the crude results and when adjusted for age, BMI, and education years). Presenteeism had a significant positive association with headache recurrence. The numbers of absenteeism days and the levels of presenteeism by HIT-6 category in the occasional and recurrent headache groups using model IV (adjusted for age, BMI, education years, smoking, AUDIT-C score, LTPA, MDI, BBI, daytime work, and number of chronic illnesses) are presented in Figure 1. In the recurrent headache group, the HIT-6 categories were positively associated with presenteeism (P=0.009) but not with absenteeism (P=0.36). In the occasional headache group, neither absenteeism (P=0.29) nor presenteeism (P=0.71) was associated with the HIT-6 categories.

Discussion
The main finding of this study of female municipal employees was that self-reported recurrent headache is associated with presenteeism but not with absenteeism, even after adjustments.
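The adjusted models can be sketched in code. The analysis was run in Stata; the snippet below is a rough Python/statsmodels analogue, not a reproduction of the authors' models. The file name, column names, and the choice of a negative-binomial family for the absence-day counts are our assumptions; the paper only states that generalized linear models with an appropriate distribution and link function were used.

```python
# Hedged sketch of the adjusted models (model IV-style covariates);
# the actual analysis was done in Stata, and the family/link choices
# and column names here are illustrative assumptions.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("portaat_females.csv")  # hypothetical file name

covariates = ("age + bmi + education_years + smoking + audit_c + "
              "ltpa + mdi + bbi + daytime_work + n_chronic_illnesses")

# Absenteeism: mean yearly sick-leave days, a non-negative count-like outcome.
absent = smf.glm(f"absence_days ~ headache_group + {covariates}",
                 data=df,
                 family=sm.families.NegativeBinomial()).fit()

# Presenteeism: 0-100 VAS outcome; a Gaussian GLM with identity link
# is the simplest placeholder.
present = smf.glm(f"presenteeism ~ headache_group + {covariates}",
                  data=df,
                  family=sm.families.Gaussian()).fit()

print(absent.summary())
print(present.summary())
```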
In the recurrent headache group, but not in the occasional headache group, presenteeism was significantly associated with the burden of headache. To our knowledge, our study is the first to assess the association of the HIT-6 score with presenteeism in a working-age female population. Presenteeism was clearly more evident among the females suffering from recurrent headache than among those with occasional or no headache. This highlights the importance of recognizing patients with recurrent symptoms. The HIT-6 is easy to use in everyday clinical practice. Thus, we encourage using the questionnaire to find the patients with a high burden of headache and an increased risk for presenteeism. It has been shown that presenteeism is a risk factor for absenteeism, and the economic costs of presenteeism have been suggested even to exceed those of the corresponding absenteeism. 11,12,32,33 Although often multifactorial, recurrent headache can usually be treated efficiently with low-cost procedures when noticed early (eg, good acute treatment and prevention of migraine, relieving muscle tension, and giving lifestyle guidance and psychological support during difficult life events). There was no association between the HIT-6 score and absenteeism in this study, which may be explained by the characteristics of the study population. Absenteeism was highest in the recurrent headache group compared with the occasional headache and no headache groups with no or minor adjustments, but when adjusted for several lifestyle and health-related variables, absenteeism did not significantly correlate with headache recurrence. According to several earlier studies, headache is associated with absenteeism, but comorbidities, especially mental disorders, have a substantial role in absenteeism in headache populations. 9,15 Our study subjects were quite healthy municipal employees with only mild mental symptoms, which presumably explains the low absenteeism in this study. The headache prevalence in our study was approximately of the same magnitude as in earlier Scandinavian studies. 2 The characteristics of the headache are also of concern when considering the reasons for absenteeism. Decreased work ability has clearly been shown in those with migraine and also in frequent or chronic headache populations. 13,16,17,34-36 Population-based studies have shown that in headache populations, especially episodic headache populations, presenteeism is more substantial than absenteeism. 37,38 It has been estimated that presenteeism is responsible for two thirds, and absenteeism for one third, of migraine-related indirect costs. 36 If headache is mild or moderate or is rapidly alleviated by acute medication (which is the case in most migraine patients), the subject may consider sick leave excessive and go to work. Headache-related absence is known to be stigmatized, which may also explain the avoidance of sick-leave days due to headache. 39 The strength of the present study is a well-characterized and relatively large cohort of employees comprising a study population with a relatively homogeneous cultural background. The participants receive equitable salaries, their working conditions are regulated by the same collective agreement, their employment status is stable, and they share a uniform occupational healthcare system, even though they represent different work units and widely varying tasks.
Only female employees were included in this substudy, because the total number of males in the PORTAAT study was low; this also increased the homogeneity of the study population. The questionnaires used in this study are valid and reliable for measuring work-related factors and the impact of headache. The data on sick-leave days were gathered from the employer, ie, from an official register. Comprehensive adjustments were made because, besides the illnesses of the employee, numerous other sociodemographic, health-related, and work-related factors have been recognized as risk factors for absenteeism and presenteeism at work. 40,41 The major limitation of this study is the cross-sectional design, which does not allow us to draw any causal conclusions. Another limitation is the lack of an exact headache diagnosis and of headache frequency, due to the study design. This study treats headache as a symptom, and the burden of the pain is measured by the HIT-6, which is not a diagnosis-specific questionnaire. It is likely that most females in the recurrent headache group have a headache disease, such as migraine, whereas in the occasional headache group the symptom may be a random sign of infection, hypertension, lack of sleep, etc. It is also likely that females with a higher HIT-6 score have a headache diagnosis, eg, migraine or chronic headache. There might also be some overlap between the headache groups; for example, females with only minor migraine symptoms may have been categorized in the occasional headache group. Nonetheless, our aim was to study the association between reduced work ability (both absenteeism and presenteeism) and the self-reported recurrence of headache in a working-age female population, regardless of the exact diagnosis, so the missing diagnostic data do not affect the results or conclusions. There are only a few women with a high headache burden (HIT-6 score over 55) in the occasional headache group, a phenomenon seen as wide confidence intervals in Figure 1. Lastly, because this study consists of only female subjects, the results cannot be generalized to the male population.

Conclusion
This study showed that in female municipal employees, self-reported recurrent headache was associated with impaired productivity at work, mostly through presenteeism, with low absenteeism. An increased headache burden measured by the HIT-6 was related to presenteeism, but not to absenteeism.

Disclosure
We gratefully acknowledge the unrestricted research grants from the Mutual Insurance Company Etera and from the Finnish Cultural Foundation. Dr Maija Haanpää reports personal fees from Pfizer, outside the submitted work. The authors report no other conflicts of interest in this work.
2020-08-27T09:03:40.753Z
2020-08-01T00:00:00.000
{ "year": 2020, "sha1": "53d790b17324c2e4e5a9697a94047145c95ded5f", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=60846", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8ad408c8029261c35c18f5af58b357f2306e14a3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
2344721
pes2o/s2orc
v3-fos-license
Magnetostriction and magneto-structural domains in antiferromagnetic YBa$_{2}$Cu$_{3}$O$_{6}$

We have used high-resolution neutron Larmor diffraction and capacitative dilatometry to investigate spontaneous and forced magnetostriction in undoped, antiferromagnetic YBa$_2$Cu$_3$O$_{6.0}$, the parent compound of a prominent family of high-temperature superconductors. Upon cooling below the N\'eel temperature, $T_N = 420$~K, Larmor diffraction reveals the formation of magneto-structural domains of characteristic size $\sim 240$~nm. In the antiferromagnetic state, dilatometry reveals a minute ($4 \times 10^{-6}$) orthorhombic distortion of the crystal lattice in external magnetic fields. We attribute these observations to exchange striction and spin-orbit coupling induced magnetostriction, respectively, and show that they have an important influence on the thermal and charge transport properties of undoped and lightly doped cuprates.

Correlated-electron systems exhibit multiple collective ordering phenomena whose interdependence and competition are subjects of intense current research. The macroscopic properties of materials with strongly correlated electrons are influenced not only by atomic-scale correlations, but also by emergent domain structures on nanoscopic and mesoscopic length scales [1]. Recent advances in research on some of the most prominent correlated-electron materials, the cuprate high-temperature superconductors [2], have reinforced efforts to establish quantitative links between the doping-dependent spin and charge correlations and the thermodynamic and transport properties [3-5]. These efforts are, however, complicated by the presence of defects and associated strains of the crystal lattice, which are invariably associated with doping and strongly affect the mesoscopic organization of the electron system [6]. Recent examples include magnetic hysteresis phenomena [7,8] and charge density wave pinning [9-11] in moderately doped superconducting cuprates, whose origins have not yet been conclusively identified.

To provide a solid basis for the investigation of doped high-temperature superconductors, it is important to establish a firm understanding of electronic correlations and their coupling to the crystal lattice in the undoped, largely defect-free parent compounds that exhibit antiferromagnetic long-range order. Although the atomic-scale spin correlations of undoped cuprates are well understood, there is little direct information on antiferromagnetic domain structures and associated lattice strains, despite indications that they profoundly affect the charge [12,13] and heat [14,15] transport properties and may act as seeds for mesoscopic inhomogeneities in doped compounds [2,6]. In particular, an anomalous magnetoresistance has been reported for lightly doped, antiferromagnetic YBa$_2$Cu$_3$O$_{6+\delta}$ [12,13], a material that has served as a model compound for recent research on high-temperature superconductivity [2]. The magnetoresistance in the CuO$_2$ planes was found to exhibit a "d-wave" symmetry upon rotation of the magnetic field in this plane; that is, the resistance increases (decreases) when the magnetic field is parallel (perpendicular) to the current flow [12,13]. This finding was unexpected, because at low doping levels the crystal lattice is believed to be tetragonal [16]. In this lattice structure, the two orthogonal a axes in the CuO$_2$ planes are equivalent, and current flow along both axes should be identical. Ando et al.
[12] attributed the anomalous magnetoresistance to the magnetic-field-induced reorientation of charge stripes that locally break the tetragonal symmetry of the CuO$_2$ planes. Related ideas have also been discussed for other families of cuprate superconductors [2]. An alternative model [13,17-20] invokes antiferromagnetic domains that are accompanied by a small orthorhombic lattice distortion due to magnetostriction and are reoriented by the magnetic field. The orthorhombic distortion was estimated [13] as $a/b - 1 \sim 6 \times 10^{-6}$, a value too small to be directly observed by x-ray or neutron diffraction techniques. Likewise, direct evidence of the purported charge-stripe or magneto-elastic domains has thus far not been reported for undoped and lightly doped YBa$_2$Cu$_3$O$_{6+\delta}$.

In the present work, we used high-resolution neutron Larmor diffraction to directly measure the magneto-structural domain size, and capacitative dilatometry to determine the minute orthorhombicity in the antiferromagnetic state by field-aligning the magnetic domains. We discuss these phenomena in terms of different mechanisms of magnetostriction, and compare the results quantitatively with heat and charge transport data on undoped and lightly doped cuprates. The methodology we introduce provides interesting perspectives for the investigation of domain structures associated with charge density waves in more highly doped cuprates, and with electronic ordering phenomena in other correlated-electron materials such as the iron pnictides and chalcogenides.

The experiments were carried out on high-quality YBa$_2$Cu$_3$O$_{6.0}$ single crystals of typical size $1 \times 1 \times 0.1$ mm$^3$ and mosaicity $\leq 0.1^\circ$, which were grown by a flux method [21]. For the dilatometry measurements, a single specimen was mounted in a capacitance dilatometer [22], such that the expansion of the a-axis was measured. A small force of 20 N along the a-axis was applied to hold the crystal, resulting in a uniaxial pressure of 200 MPa. The dilatometer was installed in three different orientations in a 10 T magnet to apply the field along the crystallographic a, b, or c axes. For the neutron scattering experiments, fifteen crystals of total mass 0.1 g were coaligned with a combined mosaicity of $\sim 1^\circ$. The temperature dependence of the magnetic (0.5 0.5 5) Bragg peak intensity (Fig. 1a) shows a N\'eel temperature of $T_N = 420$ K, corresponding to full oxygen stoichiometry (6.0 oxygen atoms per formula unit) [23].

The neutron Larmor diffraction experiments were conducted at the TRISP spectrometer at the Heinz Maier-Leibnitz Zentrum in Garching [24]. The basic principle of Larmor diffraction (LD) is shown in Fig.
1c [25]. A spin-polarized neutron beam crosses a uniform magnetic field $H$ twice, before and after being diffracted at lattice planes with spacing $d_{hkl} = 2\pi/G_{hkl}$, where $G_{hkl}$ is the reciprocal lattice vector. The boundaries of $H$ are aligned parallel to the lattice planes. Inside the field, the neutron spins precess with the Larmor frequency $\omega_L = 2\pi\gamma H$, where $\gamma$ is the neutron's gyromagnetic ratio. The total precession angle is $\phi = \omega_L t$, where $t = 2L/v_\perp$ is the time the neutron spends in the field. $t$ only depends on the velocity component $v_\perp = (\hbar/m) G_{hkl}/2$, which is independent of the Bragg angle ($m$ is the neutron mass). The total phase $\phi = [2m/(\pi\hbar)]\,\omega_L L\, d_{hkl}$ is thus a measure of $d_{hkl}$. A broadening of the Bragg reflection $\Delta G_{hkl}$ gives rise to a linear variation of the Larmor phase $\Delta\phi/\phi = \epsilon_{hkl}$, with $\epsilon_{hkl} = \Delta G_{hkl}/G_{hkl}$. The beam polarization $P(\phi)$ is then the Fourier transform of the momentum-space profile $f(\epsilon_{hkl})$ of the Bragg reflection, so that the width of $P$ is the inverse of the width of $f$:

$$P(\phi) = \int f(\epsilon_{hkl}) \cos(\epsilon_{hkl}\,\phi)\, d\epsilon_{hkl}. \qquad (1)$$

Conventional diffractometers are based on measurements of the Bragg angle, where the resolution is limited by the collimation and the monochromaticity of the neutron beam. The resolution of LD, on the other hand, is limited by the relative error $\delta\phi/\phi$. The leading contribution to $\delta\phi$ is fluctuations of $H$, which can be strongly reduced by replacing the static field by four radio-frequency spin-flip coils C1-C4 (Fig. 1c). In this way, the momentum-space resolution can be enhanced by about two orders of magnitude [26].

Figure 2 shows $P(\phi)$ profiles for several nuclear and magnetic Bragg reflections. The instrumental resolution was taken into account by normalizing the profiles to the one obtained from a perfect germanium crystal. For clarity, the data are displayed after normalization to $P(0) = 1$. To extract the widths of the Bragg reflections from the LD data, the $P(\phi)$ curves were fitted to Eq. (1) with Gaussian peak profiles $f(\epsilon_{hkl})$ (lines in Fig. 2). The widths of the (2 0 0) and (2 2 0) nuclear Bragg peaks determined in this way are quite different (Fig. 2). For $T > T_N$, the width of the (2 0 0) reflection, $\epsilon = 5.2 \times 10^{-4}$, translates into a characteristic length $L_\parallel = 370$ nm, and the ratio of 1.4 between the widths of the (2 2 0) and (2 0 0) reflections matches the ratio of their respective reciprocal lattice vectors. The LD data are thus consistent with square-shaped structural mosaic blocks of characteristic size $L_\parallel$ along the CuO$_2$ planes. The domain size along the c-axis extracted from the (0 0 6) reflection (inset in Fig. 2) is $L_\perp \sim 390$ nm. Possible origins of structural domain formation include a small number of residual impurities (such as interstitial oxygen) and associated microstrains. A detailed analysis of the lattice defects in the paramagnetic state would require a survey of multiple Bragg reflections and is beyond the scope of this paper, which is focused on the influence of the electronically driven antiferromagnetic transition on the lattice structure.

To this end, we have carefully monitored the evolution of the $P(\phi)$ profiles across the antiferromagnetic phase transition (Fig. 2). The width of the (2 0 0) reflection for $T < T_N$ translates into a characteristic domain size of $L_\parallel \sim 340$ nm, about 10% smaller than in the paramagnetic state. The $T$-dependence of the profiles (Fig. 1b) demonstrates that the broadening of $P(\phi)$ and the reduction of $L_\parallel$ set in at $T = T_N$. Within the experimental error, the ratio of the (2 0 0) and (2 2 0) widths is preserved upon cooling across $T_N$ (Fig.
2), indicating a shape-preserving shrinkage of the structural mosaic blocks as the spin fluctuations are arrested in the antiferromagnetic state.

The anomalous broadening of the $P(\phi)$ profiles is a manifestation of coupling between the antiferromagnetic order parameter and the crystal lattice. In rare-earth antiferromagnets, magneto-structural interactions have been detected through anomalies in the thermal expansion at the Néel temperature, and were attributed to the dependence of the exchange interactions on the distance between the magnetic ions ("exchange striction") [27]. In the cuprates, however, such anomalies are much harder to recognize because of the quasi-two-dimensional nature of the magnetism, which implies that the spin correlations in the CuO$_2$ planes are already well developed at $T_N$ [28]. Our data establish Larmor diffraction as an alternative, highly sensitive probe of magnetostriction in this situation. Following prior theoretical work [27], the reduction of the structural domain size at $T_N$ observed in YBa$_2$Cu$_3$O$_6$ can be understood as a consequence of exchange striction, which stiffens the crystal lattice so that it can less easily accommodate strains from residual impurities and defects. The fact that the shape of the mosaic blocks remains unchanged at the Néel transition agrees with the observation that the exchange Hamiltonian has the same (tetragonal) symmetry as the crystal lattice (apart from the minute effect of the spin-orbit interaction, to be discussed below). In the iron arsenides, by contrast, the symmetry of the magnetic bond network differs from that of the crystal lattice in the paramagnetic state, giving rise to a sequence of distinct structural and magnetic phase transitions.

The width of the LD profile of the antiferromagnetic Bragg reflection (0.5 0.5 5) is comparable to, but somewhat larger than, those of the structural reflections (Fig. 2), consistent with the expectation that structural domain boundaries resulting from magnetostriction will usually disrupt magnetic order [29]. The spatially averaged antiferromagnetic domain size of 240 nm is quite comparable to the magnetic domain size measured by LD in classical antiferromagnets [30].

Since LD with radio-frequency coils is restricted to zero magnetic field, we used capacitative dilatometry as a complementary tool to investigate manifestations of forced magnetostriction in the antiferromagnetic state at $T = 2$ K. Figure 3 shows the relative expansion of the x-axis along the Cu-O-Cu bonds, with the magnetic field $B$ along x, y (in the CuO$_2$ planes), and z (perpendicular to the planes). For $B \parallel y$ ($B \parallel x$), $\Delta x/x$ is positive (negative), corresponding to expansion and contraction, respectively. The resulting field-induced orthorhombic distortion of the crystal increases rapidly for small $B$ and crosses over to a more gradual evolution above $B_c \sim 5$ T (defined as the inflection point in the $\Delta x/x$-versus-$B$ relation). The expansion for $B \parallel z$ is close to zero. In stark contrast to classical antiferromagnets [31,32], there is no discernible field hysteresis of the forced magnetostriction that would indicate pinning of antiferromagnetic domain walls.
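To make the width-to-length conversion concrete, here is a small numerical sketch. It assumes a Gaussian profile $f(\epsilon)$, for which Eq. (1) gives a Gaussian polarization decay, and takes the characteristic length simply as $L = d_{hkl}/\epsilon$; this reproduces the order of magnitude of the numbers quoted above. The lattice parameters are supplied by us (standard values for YBa2Cu3O6.0), and the exact width convention (sigma vs. FWHM) is a detail of the fits, so the conversion is illustrative rather than a reproduction of the authors' analysis.

```python
# Illustrative conversion from a Larmor-diffraction linewidth to a
# characteristic mosaic-block size, assuming Gaussian profiles.
# The lattice parameters below are our inputs, not taken from this paper.
import numpy as np

a = 3.86e-10   # in-plane lattice parameter of YBa2Cu3O6.0 (m), assumed
c = 11.8e-10   # c-axis lattice parameter (m), assumed

def d_spacing(h, k, l):
    """d_hkl for a tetragonal lattice."""
    return 1.0 / np.sqrt((h**2 + k**2) / a**2 + l**2 / c**2)

def block_size(eps, d):
    """Characteristic length from a relative width eps = dG/G."""
    return d / eps

eps_200 = 5.2e-4                          # (2 0 0) width for T > T_N, from the text
print(block_size(eps_200, d_spacing(2, 0, 0)) * 1e9)  # ~371 nm, vs. quoted 370 nm

# A Gaussian f(eps) of width sigma implies P(phi) = exp(-(sigma*phi)**2 / 2),
# i.e. the polarization decay directly encodes the Bragg-peak width:
phi = np.linspace(0, 2e4, 5)
print(np.exp(-0.5 * (eps_200 * phi) ** 2))
```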
The dilatometry data indicate that the lattice expansion is coupled to the magnetic moment direction. Related effects have been observed in other antiferromagnets, including rare-earth magnets, where they can be understood as consequences of the spin-orbit interaction [27]. Briefly, the spin-orbit interaction ties the spin direction to the orbital magnetization and hence to the shape of the valence electron cloud around the magnetic ions, which in turn is coupled to the lattice structure via crystalline electric fields. The small magnitude of the forced magnetostriction, compared to the manifestations of isotropic exchange striction discussed above, can then be attributed to the quenching of the spin-orbit interaction in the cuprates, where the magnetic dipole moment arises almost exclusively from the spin-1/2 of the Cu$^{2+}$ ions. Nonetheless, the observed g-factor anisotropy of the Cu moments [33] indicates a small residual orbital magnetization that can act as a source of magnetostriction.

The inset in Figure 3 illustrates the spin-orbit mediated magnetostriction. For $B = 0$, both neutron diffraction [34] and electron spin resonance [20] find an equal population of domains with Cu spins oriented along the two orthogonal easy axes in the CuO$_2$ plane. Within each domain, the a and b axes are slightly different as a consequence of the spin-orbit interaction, but domain averaging results in a macroscopically tetragonal structure. For increasing $B \parallel y$, the Cu spins in the domains with spins pointing along y flip by 90° to take advantage of the Zeeman energy, whereas spins already along x do not flip. The observed macroscopic expansion, $\Delta x/x$, is due to the slight orthorhombic distortion of each domain that is tied to the spin direction. For the same reason, $\Delta x/x$ is opposite in sign for $B \parallel x$. (The slight difference in the magnitudes of $\Delta x/x$ for $B$ along x and y presumably arises from the uniaxial pressure along x exerted by the sample holder, which increases the population of the domains with long axes $\perp$ x.) For $B \geq B_c$, most spins are oriented nearly perpendicular to the magnetic field, and the crystal structure is macroscopically orthorhombic. For larger fields, the gradual canting of the magnetic moments towards $B$ is an additional source of magnetostriction, but this contribution is small because it is opposed by the large in-plane exchange interaction ($J \sim 100$ meV). The remarkable absence of field hysteresis may then be attributed to the approximate coincidence of magnetic and structural domain boundaries noted above. Since most structural mosaic blocks include a single magnetic domain, pinning of magnetic domain walls is largely suppressed.
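A back-of-the-envelope version of the domain-repopulation argument: if each domain carries a fixed orthorhombic strain tied to its spin direction, and the sample starts from a 50/50 domain mix, then the saturated expansion for $B \parallel y$ and the saturated contraction for $B \parallel x$ together span the full single-domain orthorhombicity. The two input numbers below are illustrative placeholders, chosen only to be consistent with the quoted $a/b - 1 = 4 \times 10^{-6}$; they are not digitized data from Fig. 3.

```python
# Toy estimate of the single-domain orthorhombicity from saturated
# forced-magnetostriction values. The two inputs are illustrative
# placeholders, consistent with a/b - 1 = 4e-6 but not measured data.

dx_By = +2.0e-6   # saturated Delta-x/x for B || y (all spins end up along x; x expands)
dx_Bx = -2.0e-6   # saturated Delta-x/x for B || x (all spins end up along y; x contracts)

# Starting from a 50/50 domain mix, the field drives the sample from the
# domain-averaged (macroscopically tetragonal) length to one of the two
# fully aligned lengths, each displaced by half the orthorhombic strain.
# The difference of the two responses therefore gives the full distortion.
orthorhombicity = dx_By - dx_Bx
print(orthorhombicity)   # 4e-6, the scale quoted in the text
```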
We now discuss the relationship between the comprehensive picture of the magneto-structural coupling we have obtained and the transport properties of undoped and lightly doped cuprates reported earlier. First, measurements of the magnon-mediated thermal conductivity of undoped, antiferromagnetic La$_2$CuO$_4$ have yielded low-temperature mean free paths in the range $\sim 100-150$ nm [14,15], somewhat lower than the magneto-structural domain size of $\sim 240$ nm inferred from our LD measurements on YBa$_2$Cu$_3$O$_{6.0}$, for which thermal conductivity measurements have not yet been reported. Since the two experiments were carried out on different materials, we regard the agreement as quite satisfactory. Our results suggest that magneto-structural domains limit the low-temperature heat conductivity mediated by magnons, and they provide a motivation for more detailed model calculations along these lines.

The spin-orbit mediated forced magnetostriction we identified in the antiferromagnetic state has the same "d-wave" symmetry (i.e., positive parallel and negative perpendicular to the $B$-field) and a similar crossover field ($B_c \sim 5$ T) as the magnetoresistance in lightly doped antiferromagnetic YBa$_2$Cu$_3$O$_{6+\delta}$ [12,13]. Our observations thus support models that ascribe the anomalous magnetoresistance to the magnetic-field alignment of the orthorhombic magnetic domains [13,17-20]. The orthorhombicity $a/b - 1 = 4 \times 10^{-6}$ determined from the forced magnetostriction (Fig. 3) is somewhat smaller than the one estimated [13] on the basis of magnetoresistance data on YBa$_2$Cu$_3$O$_{6.25}$, but since this estimate is rather indirect, and the two sets of measurements were taken on samples with different oxygen concentrations, the agreement is again quite satisfactory. There is thus no need to invoke charge-stripe ordering in lightly doped YBa$_2$Cu$_3$O$_{6+\delta}$ to explain the magnetoresistance. This is in accord with current knowledge of the phase diagram of this compound, where charge order only sets in at higher doping levels ($\delta \geq 0.5$) [2].

In summary, the complementary combination of neutron Larmor diffraction and capacitative dilatometry has provided direct insight into the mesoscopic structure of the antiferromagnetic state in undoped YBa$_2$Cu$_3$O$_{6.0}$. Our data allowed us to elucidate the magneto-structural coupling mechanisms and their influence on the heat and charge transport properties. Based on the solid foundation we have laid here, our experimental approach can be straightforwardly applied to more highly doped cuprates, where domain structures associated with spin density wave, charge density wave, and "nematic" ordering phenomena and their influence on the macroscopic properties are subjects of intense current research and debate [2-11]. More generally, we have established neutron Larmor diffraction as a versatile probe of antiferromagnetic and magneto-structural domain structures with sub-micrometer length scales, which opens up new perspectives for the investigation of a large variety of correlated-electron materials [1].

FIG. 3. (color online) Forced magnetostriction at $T = 2$ K measured by dilatometry parallel to the Cu-O-Cu bond direction, x, in the CuO$_2$ plane. The field-induced change of the sample length along x with magnetic field $B$ applied parallel to the x, y, z directions is plotted in red, green, and blue, respectively. Inset: Illustration of spin-orbit coupling induced magnetostriction for a single magneto-structural domain with $B \parallel b$.
Due to magnetostriction, the spin-flop transition induced by the field is associated with a realignment of the crystallographic unit cell (dashed line for $B = 0$, solid line for $B \sim 5$ T). The orthorhombic distortion is exaggerated for clarity.

We are grateful to A. Jánossy, S.P. Bayrakci, and J. Porras for discussions. The work was supported by the DFG under Grant No. SFB/TRR 80. B.N. acknowledges support by the Prospective Research Program No. PBELP2-125427 of the Swiss NSF, and by the European Commission through the Research Infrastructures action of the Capacities Programme, NMI3-II, Grant Agreement number 283883.
2016-01-06T04:58:17.000Z
2016-01-06T00:00:00.000
{ "year": 2016, "sha1": "eb33d3e35b950b0f39d262f7dde769784eba24d7", "oa_license": "publisher-specific, author manuscript", "oa_url": "https://link.aps.org/accepted/10.1103/PhysRevLett.116.047001", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "eb33d3e35b950b0f39d262f7dde769784eba24d7", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science", "Physics", "Medicine" ] }
26456675
pes2o/s2orc
v3-fos-license
Corporate Values in Global Supply Chains

The problem of finding a balance between the economic, social, and environmental values of human development is now at the heart of the debate that involves men of culture, men of faith, economists and politicians, who are working together to tackle the problems of globalisation. Supply chains are part of a gradual globalisation, with growing interdependence between the various manufacturing organisations and a convergence of different elements that creates development, but also negative effects such as environmental degradation and serious social imbalance. In the corporate culture, with the evolution of marketing, the act of purchasing is a sum of ethical and aesthetic values. In this sense, global companies' supply must contain a 'synthesis' of consumers' expectations.

Corporate Economics and Global Supply Chains
Europe, its culture and the positive forces that animate it are immediately and naturally exposed to other realities and other cultures, and they reveal a sense of a broad and highly diverse universe, but one that is also interdependent and a citizen of a single place, which tackles the contradictions between 'society, environment and economics', trying to make compatible the goals of economic entities and those of a society that the ample gamut of the market economy has now extended to the whole of humanity. The problem is tackled primarily by considering companies and their relations with consumers, other companies and the relevant environments and markets. Attention is focused on the fact that they belong to a chain, in other words to a collection of entities that together are responsible for complete manufacturing, distribution and consumption cycles, with principal roles (for manufacturers, distributors and consumers) and secondary roles (for logistics companies, banks, insurance companies and all the other companies that sustain global development processes). Today these chains are part of a gradual globalisation, with growing interdependence between the various manufacturing organisations and a convergence of different elements that creates development, with stimuli and opportunities for companies, but also contradictions and negative effects such as environmental degradation and serious social imbalance, which introduce issues such as that of just distribution, working conditions, individual freedom and, more generally, values to believe in. This prompts the search for principles and rules that can shed new light on economic thought, which must also become political thought: i.e. winning back the comprehensive identity (that of political economics) that it originally expressed.

The Debate about Corporate Social and Economic Values
The importance of 'good conduct' in economic and commercial activities, and the need for economic entities to operate in a context of widespread consensus, were already the subject of analysis by early economists, although interest later attenuated in consideration of the natural balances of the market, which could justify a profit-based logic in individual operators. But a logic that is founded on means (money) and not ends can transform trade into a conflictual, occasionally even destabilising, relationship: "the mediation of the group, of society and of politics proves necessary… Economic exchange must be re-introduced to the sphere of the common good as a central element of social relationships" (Latouche, 2005).
The problem of finding a balance between the economic, social, and environmental values of human development is now at the heart of the debate that involves men of culture, men of faith, economists and politicians, who are working together to tackle the problems of globalisation. At the centre is a reflection on the responsibilities of politics and economics related to the imbalance of the world's social and economic order, which puts in danger the ultimate goal of a civil society: to guarantee an acceptable human condition that may be 'shared', starting from the common membership of a network of international relationships that is central to our age. The aim is not to meditate on ideal models but to guarantee dignity in life. The problem of justice becomes first and foremost a problem of limits (setting a limit to the degradation of the environment, unfair working conditions, and the economic divergence between people and countries).

The change necessary is above all cultural. If, in the traditional culture of manufacturing and consumption, attention was concentrated on the combination of efficiency and effectiveness and on the quality/convenience ratio, what is now emerging in the new culture of businesses and consumers is a 'system of values'.

In fact, in the corporate culture, with the evolution of marketing, the act of purchasing is no longer the simple exchange of money for goods, but is becoming the expression of complex human behaviour, and as such is full of sensations, sentiments, emotions, actions and thoughts corresponding to a sum of ethical and aesthetic values, which has been enriched and customised in advanced markets.

In this sense, companies' supply must contain a 'synthesis' of consumers' expectations, if possible solving any conflicts inside their conscience: some opt for clear choices and extreme consistency with the moral values professed, but most people, very humanly, aspire to an overall satisfaction that is both ethical and aesthetic, which demands compromise and may come from a sum of elements that have been made compatible and well balanced.

The Demand for Ethical and Aesthetic Values and the Response from Companies
By studying consumers' behaviour, companies have understood that trade can become a means of expressing choices that are not only addressed to the purchase of products, but also strive to share the values that these convey. The complexity of the system in which they live, and the need for gratification, the aspirations, fears and uncertainties this creates, are reflected in the behaviour of consumers in advanced countries.
Faced with the need to grasp the changes in consumers' behaviour and to identify the most suitable way to respond, companies display two attitudes that are already well known in the literature: active behaviour and reactive behaviour. In fact, change may come either from a better perception of the complexity of the relationship between company and environment and a capacity to tackle it better, or from a reaction that derives from the need to come to terms with the deterioration of this relationship. In more active companies, it is up to marketing to sense the new tendencies, which are expressed today in terms of 'experience'. The last frontier of marketing tries to sell experiences as well as products, inviting sensations and emotions that involve the individual's corporeal perception, sentiments, actions and thoughts, bringing into play aesthetic and ethical values at the moment of consumption and purchase and in all the relationships that are created directly or indirectly (with manufacturers, distributors, other customers, other human beings). The behaviour of the consumer, and of human beings generally, is dominated by widespread individualism, but some assert that a sense of responsibility towards oneself and others can develop from this very individualism (Bauman, 2006).

This establishes a mentality of rights: the individual has rights as a result of his very existence. As a consequence, the power of businesses to influence the behaviour of their interlocutors (consumers, competitors, institutions) is often perceived as an abuse of power; a softer touch is therefore needed, together with more incisive involvement based on more articulated stimuli.

Faced with the weakness of social structures and the complexity and fragmentation of reality, a reaction develops that can capitalise on new knowledge, starting from diversity, which can be invested in new solidarity and a capacity to understand. A mobilisation develops that is expressed not only in the purchasing behaviour of the individual, who boycotts the products of companies considered incorrect, but also through consumer associations that urge the political powers to issue new laws and to activate procedures to monitor food quality, the fight against pollution, and the safety of products and of the workplace. In this sense the 'consumerism' movement represents a cultural turnaround. The age-old conflict inside the factory has been joined by conflict between consumers and large brands. Safety, health and social fairness become important demands, and for companies that decentralise production at an international level, towards areas characterised by less protection for workers, this contradiction may become even more incisive than that between capital and labour (Pepe, 2007).

The stimulus to act to re-establish fair distribution and sustainable development, at least in part, therefore stems from greater consumer accountability rather than from enlightened planners or from companies. Responsible consumers are still a minority, but we must not underestimate their capacity to have a significant influence on the behaviour of other consumers, of institutions or of competitors.

On the other hand, companies reveal an approach that is more in keeping with their nature. They conceive the adoption of new values as part of a model of greater effectiveness and competitiveness on the market, and use it as a tool to improve results in terms of profits and positioning.
Contextualisation of Values
Today, in this debate, the adoption of ethical values appears most topical (aspiring to a fairer human condition, sustainable development, and the containment of risks and uncertainty), but this attention cannot mortify aesthetic values, which remain a fundamental aspect of our culture. Ethical values and aesthetic values are linked, above all in advanced markets, where consumers' needs are expressed and customised better. The reference to marketing as a means of highlighting experience reflects the fact that companies are now focusing on these needs, which must however be contextualised.

Ethical and aesthetic values are closely linked to the environment in which they are expressed and experienced. Aesthetics is defined as the subjective sentiment of harmonious immersion in the environment, while ethics is the subjective sentiment of respect for and harmonious interaction with the environment. Ethics enables us to conserve and protect our aesthetic experience, in other words our harmonious coexistence with the Other (made of men and nature). In this sense, the ethics crisis is also the crisis of our co-evolutionary relationship with the Other: it is a crisis in the system of relationships (Longo, 2004).

Since the subjectivity of sensations and principles confers a sense of arbitrariness on the aesthetic and ethical codes, in order to achieve a common definition we must negotiate between the many points of view. Faced with elements of subjective discretion, there is a need for mediation, which makes individual planning incomplete and obliges the individual to turn to other subjects and to cooperate with them.

It is a known fact that the possibility of achieving a balance in relationships reflects two opposing tendencies: the first is the exercise of greater power; the other is cooperation that exploits resources and complementarity for common goals, seeking suitable mediation for the discordant aspects.

Equilibrium based on asymmetries (of power, information and economic resources) currently prevails, but it is clear that the complexity of the processes that involve economic entities makes this type of balance increasingly precarious, resulting in shared resources, knowledge and risks, as the fact that companies adopt a variety of forms of national and international cooperation underlines.

Shared Responsibilities and Global Chains: Research Hypothesis
The need to cooperate in order to guarantee a consistent supply that is instilled with value, to lower the level of risk and to increase the availability and variety of resources obliges companies to identify common goals, to rebalance relations between partners, and to create the conditions to improve the results of shared activities. The hypothesis of our research is that possibilities for change can be found in this very interdependence. So our attention is focused on relations within the chain, in particular international relations, because they constitute the framework, the skeleton, and the bearing structure of globalisation. In them lies the key to many problems linked to the environmental and social sustainability of development, but also the possibility of identifying some problems linked to this change.
By focusing attention on the international chains and on the nature of the relations between the entities that are part of them, we get a better picture of the roles and responsibilities of the individual, and we grasp the sense of how they operate as a system (the SA8000 codes of practice also list requirements of behaviour that are valid not only for individual operators but for the entire circuit they belong to).

Starting from the consumer's new sensitivity, and going back through the chain, the goal is to involve all stages of the economic cycle, so as to conceive a common responsibility in relation to the achievement of economic and non-economic objectives. To this end, we need strong relational structures and investments in communications, innovation, product certification systems and tracking systems along the chain. The cooperative aspect in particular must be transformed into continuous relations and common controlling actions in order to achieve the overall consistency that is presented to the end consumer as a guarantee of a system of values and also as an additional element of value.

Convergence in the Chains of North-South Trade
In many European countries there is a rapidly growing phenomenon in which alternative fair trade chains and traditional chains, often dominated by multinationals and large retail groups, converge and mutually cross-pollinate. This phenomenon is developing along several alternative lines, creating a multiplicity of circuits of international North-South trade.

Alongside the traditional chains and the fair trade chains, a number of hybrid chains are developing, which envisage the sale of fair trade products in traditional retail stores. At the same time, a growing number of traditional operators (industrial and commercial) are managing chains and products certified by fair trade organisations. Then there are the imitation chains, whose operators declare that they respect certain values (ethics, fairness, respect for the environment, etc.) but without requiring certification, for the products or for the chain, relying solely on the force of their brands and the trust they have won from their clientele.

The spread of these various operating methods shows that (to varying degrees of reliability) it is possible to combine values of economic sustainability with those of environmental and social sustainability, transferring the practices of the alternative trading and ethical finance circuits into processes controlled by operators who are naturally oriented to profit and who address a broader market. The type of chain described above takes shape in them as a source of possible change: i.e. one that, on the basis of cooperation made necessary by market reasons, achieves better exploitation of resources together with greater ethical responsibility and fairer distribution.

We can briefly examine the innovations and problems found in the chains that we have analysed.
There can be no doubt that these circuits, more than others, must work with a system logic. When different operators work together, some contents must necessarily be shared, over and above individual motivations and objectives, which remain different (linked for some to a social and environmental conscience, and for others motivated above all by the search for a competitive advantage). The relationships are governed by agreed principles, but also by new experimental forms, particularly policies for sales and communications on the premises of mass distribution, although, as we can see in Italy, the presentation of fair trade products, or of those carrying non-economic values, is still insufficiently incisive and unable to successfully convey the identity of these products.

The importance of the quality of relations between operators, in terms of continuous collaboration, transparency and mutual trust, highlights an aspect, that of control, which remains critical in our eyes, both in hybrid chains (where fair trade products are sold through large retail circuits) and in chains that sell certified fair trade products; chains that carry out self-certification are free from any form of external control.

Another critical aspect could be the quantities produced and distributed in response to demand and the size of contracts with large manufacturers or distributors. This obliges small manufacturers in the Southern hemisphere to grow in size through forms of cooperation. The phenomenon is widespread, but this aggregation is not always fast, and there are always risks for the weaker producers, who still need to be protected by slow, constant growth rates. What is more, excessively broad supply networks can make control even more difficult.

From the viewpoint of overall strategy and its impact on the market, we can confirm that the values linked to social responsibility represent an innovative component of the product that has a strong potential for development, both for the quantities requested by the market and for the variety of products offered (which all operators, traditional and non-traditional, say they wish to enlarge).

The strategic and economic validity of these chains (Depperu, Todisco, 2007) therefore shows how unjustified the criticism of fair trade products is. They are accused of sustaining inefficient producers, who are incapable of competing on the market with their own strengths, and of aggravating the problems of overproduction. The impossibility of competing on the market is often due to an inability to relate rather than to the inadequacy of the product; on the other hand, the presence of numerous small and medium producers in international chains would not be possible without a larger, better informed purchaser to spread the products (Pepe, 2007).

However, the market remains supreme, and while recognition by consumers of the distinctive element, i.e.
the human validity of the process, remains fundamental, reducing the sense of impotence in the face of the evils of the world and the perception of the risks that may derive from them (recreating for the consumer an experience that is both gratifying and reassuring), it is still important to focus attention on product quality and, occasionally, to adapt products to the needs of consumers in developed countries, supported by suitable information and communications. The striving for corporate policies that are able to express a correct combination of ethical and aesthetic values (in the sense of a contextualisation of values, as explained earlier) is another of the experiences on which fair trade operators converge today, together with the traditional operators who represent the more innovative part of the phenomenon under examination.
2017-09-07T06:23:09.527Z
2007-12-01T00:00:00.000
{ "year": 2007, "sha1": "59d0ed35ed398ea5160689e77488668bff4e54d0", "oa_license": "CCBY", "oa_url": "https://symphonya.unicusano.it/article/download/2007.2.02pepe/8798", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "59d0ed35ed398ea5160689e77488668bff4e54d0", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Economics" ] }
4462945
pes2o/s2orc
v3-fos-license
Embryological Consideration of Dural Arteriovenous Fistulas

The topographical distribution of dural arteriovenous fistulas (DAVFs) was analyzed based on the embryological anatomy of the dural membrane. Sixty-six consecutive cases of intracranial and spinal DAVFs were analyzed based on angiography, and each shunt point was identified according to the embryological bony structures. The areas of the dural membrane were categorized into three different groups: a ventral group located on the endochondral bone (VE group), a dorsal group located on the membranous bone (DM group), and a falcotentorial group (FT group) located in the falx cerebri, tentorium cerebelli, falx cerebelli, and diaphragma sellae. The FT group was designated when the dural membrane was formed only from the dura propria (the meningeal layer of the dura mater) and not from the endosteal dura. The cavernous sinus, sigmoid sinus, and anterior condylar confluence were categorized to the VE group, which had a female predominance, more benign clinical presentations, and a lower rate of cortical and spinal venous reflux. The transverse sinus, confluence, and superior sagittal sinus belonged to the DM group. The olfactory groove, falx, tent of the cerebellum, and nerve sleeves of the spinal cord were categorized to the FT group, which presented later in life and had a male predominance, more aggressive clinical presentations, and significant cortical and spinal venous reflux. DAVFs were thus associated with the layers of the dural membrane characterized by the two different embryological bony structures. The FT group, formed only from the dura propria, was an independent risk factor for an aggressive clinical course and hemorrhage of DAVFs.

Introduction
The most popular classifications of dural arteriovenous fistulas (DAVFs) in the literature are hemodynamic classifications based on angiographic findings. 1-3) Geibprasert et al. reported a new classification for DAVFs based on the craniospinal epidural venous anatomy and concluded that there were significant differences between the groups with regard to biological and/or developmental characteristics according to the epidural region. 4) They suggested that DAVFs have heterogeneous pathology and that the susceptibility to shunt formation on the surface of the dura mater varies according to this classification.

The shunt point of DAVFs is usually located on a certain area of the dural membrane, such as the transverse-sigmoid sinus, carotid cavernous sinus, cribriform plate of the olfactory groove, falco-tentorial surface and anterior condylar confluence, 1-3) as these specific anatomical areas are vulnerable to DAVF formation. 4) Embryologically, the intracranial dural membrane is derived from bony structures, and these bony structures consist of two types of bony tissue: 5,6) endochondral bone formed by cartilaginous ossification and membranous bone formed by intramembranous ossification. By contrast, the falco-tentorial dural membrane is independent of bony structures. This means there are at least three different anatomical domains of the dural membrane. This study retrospectively analyzed the correlation between the distribution of DAVFs and the embryological domains of the bony structures corresponding to these dural compartments.
Materials and Methods
Sixty-six consecutive cases of DAVFs (32 men and 34 women; age range, 38-80 years; mean age, 68.4 years) were analyzed with selective and superselective digital subtraction angiography, three-dimensional (3D) rotational angiography, and high-resolution cone beam computed tomography (CT). Based on these imaging modalities, each shunt point was identified and was categorized into one of three different dural compartments related to the embryological bony structures, as follows:

1. The ventral group on endochondral bone, formed from the dura propria and osteal dura (VE group)
2. The dorsal group on membranous bone, formed from the dura propria and osteal dura (DM group)
3. The falx and tent of the cerebellum group, formed only from the dura propria (FT group)

The patients were diagnosed in our hospital between January 2006 and December 2014. All patients underwent digital subtraction angiography with selective catheterization to identify the shunt points. 3D rotational angiography and/or high-resolution cone beam CT were performed when it was difficult to identify the precise location of the shunt point. Each shunt point was plotted on a map of the dural membrane to define its anatomical distribution on the surface of the dural membrane. In cases of multiple shunts, superselective angiography from the dominant feeder was performed, and the highest-flow compartment was defined as the primary shunt point. The topographical distribution was then categorized into three different domains on the surface of the dural membrane, derived from three different embryological structures, as follows:

1. VE group: ventral group on the surface of endochondral bone. The carotid cavernous sinus, sigmoid sinus and anterior condylar confluence belong to the VE group. This dural membrane consists of the osteal dura and the dura propria (Fig. 1, red-colored area).

2. DM group: dorsal group on the surface of membranous bone. The transverse sinus, confluence (torcular Herophili), marginal sinus (dorsal portion), medial occipital sinus and accessory epidural sinuses on the dorsal surface of the posterior fossa belong to the DM group. This dural membrane consists of the osteal dura and dura propria (Fig. 1, yellow-colored area).

3. FT group: the falx and tent of the cerebellum group, defined as the dural membrane that is apart from the bony structures (Figs. 1, 2, green-colored area). The olfactory groove (paramedian surface of the crista galli), superior sagittal sinus, tent of the cerebellum, cerebral falx, falcine sinus and inferior sagittal sinus belong to the FT group. They are derived in part from neural crest cells and form the dural membrane that is apart from the skull base and cranial vault. 7-10) Based on anatomical considerations, the falx and tent of the cerebellum arise from two folding layers of the dura propria (Fig. 3), which distinguishes this group from the other two groups. Spinal cord DAVFs are also categorized to this group. The shunt point of a spinal DAVF is located on the nerve sleeve, which corresponds to the border zone between the vertebral body (endochondral bone) and the paired laminae of the vertebral arch (membranous bone). As the FT group is formed relatively apart from the major bony structures during the embryological stage and consists of the dura propria alone, the FT group is independent of both the VE and DM groups in terms of the embryological domain of the dural membrane.
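The categorization rule can be summarized as a simple lookup table. The sketch below is our own restatement of the scheme above, for illustration only; the site names are those used in the text, and borderline sites would of course require the anatomical judgment described by the authors.

```python
# Illustrative lookup of the embryological dural compartment for a
# given shunt site, restating the VE/DM/FT scheme described above.

GROUP_BY_SITE = {
    # VE: ventral, on endochondral bone (osteal dura + dura propria)
    "carotid cavernous sinus": "VE",
    "sigmoid sinus": "VE",
    "anterior condylar confluence": "VE",
    # DM: dorsal, on membranous bone (osteal dura + dura propria)
    "transverse sinus": "DM",
    "confluence (torcular herophili)": "DM",
    "marginal sinus (dorsal portion)": "DM",
    "medial occipital sinus": "DM",
    # FT: falx/tentorium and spinal nerve sleeves (dura propria only)
    "olfactory groove": "FT",
    "superior sagittal sinus": "FT",
    "tent of the cerebellum": "FT",
    "cerebral falx": "FT",
    "falcine sinus": "FT",
    "inferior sagittal sinus": "FT",
    "spinal nerve sleeve": "FT",
}

def classify_shunt(site):
    """Return the embryological group (VE/DM/FT) for a shunt site."""
    return GROUP_BY_SITE.get(site.lower(), "unclassified")

print(classify_shunt("Sigmoid sinus"))      # VE
print(classify_shunt("Olfactory groove"))   # FT
```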
In fact, the spinal dura mater consists only of the dura propria and lacks the periosteal layer of the cranial dura (Figs. 3, 4). Clinical manifestations, the existence of cortical venous reflux, the angioarchitecture of the terminal feeding arteries, the initial venous outlet, and the type according to Borden et al.'s classification were investigated retrospectively. Sixty-two of the 66 patients underwent management via the endovascular approach. The other four patients underwent only diagnostic angiography, which showed no indication for intervention. Statistical data were processed with StatPlus software (a free statistical analysis application for the Macintosh operating system). A P-value of <0.05 was used to indicate statistical significance.

Results

Distribution of DAVFs and Epidemiology
Thirty patients (45.5%) had lesions classified to the VE group. This group consisted of eight men and 22 women, indicating a female predominance (22 of 30 [73%]; P < 0.001). Mean age was 72.1 years. Shunt points were at the carotid cavernous sinus in 19 patients, at the sigmoid sinus in eight, and at the anterior condylar confluence in three. For carotid cavernous lesions, the majority of shunt points were located at the level of the posterior clinoid processes, which belong to the endochondral bony structure of the clivus. All these shunt points were localized at the paramedian unilateral posterior compartment of the cavernous sinus rather than at the midline.

Twenty-one patients (31.8%) had lesions classified to the DM group. These included 16 patients with transverse sinus DAVFs and five with confluence DAVFs. There was no marked sex predominance (male:female ratio = 13:8; P = 0.383), and mean age was 54.5 years.

Fifteen patients (22.7%) had lesions classified to the FT group. These included three patients with olfactory groove DAVFs, three with cerebellar tentorium DAVFs, five with superior sagittal sinus DAVFs, and four with spinal cord DAVFs. There was a strong male predominance (13 of 15 [87%]; P < 0.001), and mean age was 75.2 years, significantly older than in the other two groups.

Clinical Manifestations and Cortical Venous Reflux
In the 30 patients of the VE group, the major clinical symptoms were ophthalmoplegia, tinnitus, and chemosis, indicating a benign clinical course. There were two patients with neurological deficits associated with perifocal edema due to cortical venous reflux, and one patient experienced intracerebral hemorrhage because of cortical venous reflux. The degree of cortical venous reflux varied depending on thrombosis and/or the intercompartmental architecture of the affected sinus.

In the 21 patients of the DM group, the major symptom was tinnitus. Only two patients presented with ataxia as a cerebellar sign associated with parenchymal edema caused by cortical venous reflux. One patient experienced seizure due to venous congestion of the occipital lobe. The severity of cortical venous reflux depended on the degree of thrombosis in the affected sinus as the major factor of venous outflow restriction (Fig. 7).

In the FT group, 12 of 15 (80%) patients presented with aggressive clinical symptoms (P < 0.001). There were five patients with neurological deficits associated with perifocal edema due to cortical venous reflux, three with intracerebral hemorrhage, and four with spinal cord DAVFs who presented with progressive myelopathy.
Mode of embolization

The decision as to whether the transvenous and/or transarterial approach should be used was made according to the angioarchitecture of the lesions. The mode of embolization was strongly influenced by the embryological domain of the dural membrane at which the shunt point was located. In the VE group, the majority of affected sinuses were the cavernous sinus or sigmoid sinus; therefore, transvenous embolization was primarily indicated. In the FT group, 14 patients (93%) underwent management via the transarterial approach. All the shunt points of the FT group were independent from the main sinuses, except for some of the superior sagittal sinus DaVFs; therefore, the transarterial approach was predominantly selected. One case of superior sagittal sinus DaVF with multiple shunt points was embolized via both the transarterial and transvenous approaches.

Discussion

The dural membrane is the outermost tough connective tissue covering the arachnoid membrane, and it attaches to the inner surface of the cranial vault and skull base.5,11) The dura mater, arachnoid mater, and pia mater develop from the meninx primitiva, one of the meningeal mesenchymes containing mesodermal and neural crest components.8,11) The cranial dura mater is a tough, fibrous membrane consisting of two connective tissue layers: an external periosteal layer and an inner meningeal layer. These two layers are fused together, except where the dural venous sinuses are located (e.g., the superior sagittal sinus). The periosteal layer of the dura mater adheres to the inner surface of the skull bone, and the inner dural layer forms the dural folds (falx and tentorium) that contain the dural sinuses.11) At the level of the skull, the outer dural layer forms the inner periosteum and is highly vascular and innervated. The meningeal layer of the dura is smooth and avascular and is lined by mesothelium (a single layer of squamous-like, flattened cells) on its inner surface. At the foramen magnum (a large opening at the base of the occipital bone through which the medulla is continuous with the spinal cord), the meningeal layer of the cranial dura joins the spinal dura. The spinal dura mater consists of only the meningeal layer and lacks the periosteal layer of the cranial dura. At the level of the spinal cord, the dura mater is separated from the periosteum of the vertebral canal by an epidural space; this means there is no interdural space at the level of the spinal cord, and in fact there are no dural sinuses in the spinal canal. Geibprasert defined the craniospinal epidural venous system as the venous structures located along the epimeningeal layer of the dural membrane, which corresponds to the dura propria, because there is no periosteal layer in the spinal canal. The histology of the dural membrane is affected by differences in the underlying bony structures.4,7,8) Both mesenchymal and neural crest-derived cells appear to be involved in the formation of the primary meninx, which differentiates during embryonic development.5)
The tent of the cerebellum and the falcine sinus (FT group) are formed at this early stage, but they develop relatively independently from the bony structures because the topographical location of the FT group is apart from them. The vulnerability of the dural membrane, in terms of shunt formation, can therefore be presumed and predicted from the developmental process in the early embryonic stage.

There are several classifications of DaVFs in the literature, and the majority are grading systems concerning the flow direction of the main sinuses and the presence or absence of cortical venous reflux based on angiographic findings.1-3) Geibprasert et al. took a different, anatomically based approach.4) In that report, the investigators introduced three different types of epidural spaces at which the shunt points are located: the group of ventral epidural shunts, the group of dorsal epidural shunts, and the group of lateral epidural shunts. They showed that ventral epidural shunts were linked to the vertebral body, basioccipital bone, sigmoid sinus, petrous pyramid, basisphenoid (cavernous sinus) and adjacent sphenoid wings, and related dural structures. Dorsal epidural shunts were associated with the transverse sinus, occipital sinus, and superior sagittal sinus. Lateral epidural shunts were related to spinal dural AV shunts, the marginal sinus (lateral portion of the foramen magnum) with the emissary-bridging vein to the condyloid vein, the falcotentorial region (vein of Galen), the petrosal and basitentorial regions, the sphenoparietal sinus, the paracavernous region (embryonic tentorial sinus remnants), intraorbital shunts, and the lamina cribriformis.

Their ventral epidural shunts corresponded to our VE group, and their dorsal epidural shunts partly corresponded to our DM group. Their ventral epidural group included the sigmoid sinus, but the sigmoid sinus is surrounded by membranous bone and was therefore categorized into the DM group in our classification. The main difference between their classification and ours concerned the lateral epidural shunts. There was some controversy in that anterior condylar confluence DaVFs were defined as lateral epidural shunts despite the fact that the hypoglossal canal belongs to the basioccipital bone, which was categorized as ventral epidural. We categorized anterior condylar confluence DaVFs within the VE group simply because the shunt points were located at the level of the hypoglossal canal, which arises from endochondral bony structures. Additionally, there were some common characteristics between anterior condylar confluence DaVFs and carotid cavernous DaVFs: both had meningeal dural supply as well as intraosseous terminal feeding arteries, which were not usually observed in the FT group (Fig. 11).12,13)

The FT group was defined as an embryological domain of the dural membrane that consists of dura propria alone and that is considered to be derived from neural crest cells.5-10) This topographical area contained the entire falx, the tent of the cerebellum, and the dural membrane covering the nerve sheaths of the spinal cord. The olfactory groove (lamina cribriformis) also belongs to this system as the most anterior part of the falx. This concept is consistent with the fact that, within the FT group, there was a strong male predominance and that symptoms presented later in life in patients with spinal cord, olfactory groove, falx and tent of cerebellum lesions (Fig. 6). Because of the aggressive clinical presentations, it was evident that transarterial embolization is indicated for the management of patients in the FT group.14)
There are two major weak points of this study. One is that the vulnerability of the dural membrane at the level of the interdural space has not yet been proven histologically. The other is that the initial trigger of shunt formation and its mechanisms are still unknown. Regardless, the characteristics of the angioarchitecture and the natural history of DaVFs could be predicted according to a classification based on the embryological domains of the intracranial and spinal cord dural membrane.15,16) Further investigation of this concept may provide additional information to help understand the pathoetiology of DaVFs.

Conclusions

This reference to embryology enabled us to analyze intracranial and spinal DaVFs in terms of homologues. The presented classification, based on the concept of embryological domains, is useful for understanding the pathoetiology and epidemiology of DaVFs. This principle can aid decision making regarding the management of this disease. Segmental vulnerability of the dural membrane might be related to biological and/or hormonal differences that are influenced by the embryological bony structures.
A two–tiered system for selective receptor and transporter protein degradation

Diverse physiology relies on receptor and transporter protein down-regulation and degradation mediated by ESCRTs. Loss-of-function mutations in human ESCRT genes linked to cancers and neurological disorders are thought to block this process. However, when homologous mutations are introduced into model organisms, cells thrive and degradation persists, suggesting other mechanisms compensate. To better understand this secondary process, we studied degradation of transporter (Mup1) or receptor (Ste3) proteins when ESCRT genes (VPS27, VPS36) are deleted in Saccharomyces cerevisiae using live-cell imaging and organelle biochemistry. We find that endocytosis remains intact, but internalized proteins aberrantly accumulate on vacuolar lysosome membranes within cells. Here they are sorted for degradation by the intralumenal fragment (ILF) pathway, constitutively or when triggered by substrates, misfolding or TOR activation in vivo and in vitro. Thus, the ILF pathway functions as a fail-safe layer of defense when ESCRTs disregard their clients, representing a two-tiered system that ensures degradation of surface polytopic proteins.

Reviewer #1: This study was executed very well with the proper controls, and the conclusions are supported by the data. I believe this paper will be of high impact in the field and suggest that it be published as is. Thank you.

We appreciate the concise summary and positive comments.

Reviewer #2: I'd like to start by apologizing to the authors for the delay in my review of this manuscript; I completely understand the stress of having to wait beyond the suggested timeframe for receipt of a review. Unfortunately, COVID does not appear to respect the peer review process. Please accept my apology, and thank you for your patience and understanding.

In this manuscript by Golden, et al., the authors present a series of interesting in vivo and in vitro experiments to provide insight into the relevance of the yeast vacuolar intralumenal fragment (ILF) pathway in the degradation of unfolded polytopic proteins. While much previous research has shown that turnover of these proteins is usually ESCRT complex-dependent, yeast strains lacking ESCRT complex activity remain able to degrade some membrane proteins through the ILF pathway, especially when grown in the presence of stressors that impact protein stability (like heat). Interestingly, this ESCRT-independent pathway appears to be selective (only half of the proteins tested appear to be degraded in this manner), and the authors suggest that the ILF pathway is a 'fail-safe' mechanism designed to clear unfolded/unneeded proteins under conditions where ESCRT proteins cannot complete their functions in MVB/intralumenal vesicle formation. Importantly, the ILF pathway is a result (and potential side effect) of homotypic vacuole fusion, so the machineries which drive vacuole:vacuole fusion should be absolutely required for ILF-dependent protein degradation. The Brett laboratory is the leader in the dissection of the ILF pathway in yeast, and has already provided several important studies on ILF function.
This submitted work relies heavily on HILO microscopy of purified vacuoles and yeast cells to show the effects of the ILF pathway on selected cargo proteins, and the results appear to show that Mup1-GFP is both recruited to vacuole membranes, and then degraded, in the absence of Vps36p, which would confirm the existence of an ESCRT-independent PM/VM protein turnover pathway. Over the past few years, however, a controversy regarding the importance and relevance of the ILF pathway in yeast has arisen, and this current study should also be viewed in the context of that published work. While this manuscript is extremely well-written and straightforward, a number of missing experimental controls and the failure to discuss this current work in the context of recent publications reduce my enthusiasm for the conclusions of this submitted manuscript. If properly addressed, however, this study could provide important new insight into the presence of an alternative protein degradation pathway that does not rely on ESCRT protein activity.

Our model proposes that the ILF pathway degrades Mup1 when ESCRTs are absent, and we offer extensive evidence to support it in the original manuscript. We do not hypothesize what occurs when both pathways are blocked. But based on our model there are two potential outcomes: Mup1 is no longer degraded, or another, unknown pathway compensates for the loss of both mechanisms. To address this concern, we conducted this experiment and present new results in Figure 3F of the revised manuscript. We find that deleting the vacuolar Qa-SNARE VAM3 alone (vam3∆) or together with VPS36 (vps36∆vam3∆) prevented Mup1-GFP cleavage as assessed by Western blot analysis. Of note, Mup1-GFP from vps36∆vam3∆ migrates relatively slowly compared to other strains, perhaps because protein trafficking is severely disrupted in these mutants (although the basis is unresolved). Also, it was very difficult to generate the double mutant, which is why submission of the revised manuscript was delayed. After the third attempt to generate this strain, I suspected that the genetic mutations were synthetically lethal. But the last attempt proved fruitful, although the resulting double mutant grows poorly. Micrographic analysis (i.e., presence of fragmented vacuoles) and genome sequencing confirmed the mutations were valid. We added the following text to the revised manuscript to describe these new results:

Page 9, line 5: "To further implicate membrane fusion (a requisite for ILF formation) in Mup1 degradation, we deleted VAM3, which encodes the Qa-SNARE required for vacuole fusion (Karim et al., 2018a). If the ILF pathway contributes, we hypothesize that its loss will block Mup1 degradation in vps36∆vam3∆ cells. As predicted, cleavage of GFP from Mup1 was abolished in vps36∆vam3∆ cells by Western blot analysis (Figure 3F). Of note, Mup1-GFP from double mutant cell lysates migrated slower, consistent with severe protein trafficking defects. Cells missing only VAM3 also showed diminished cleavage. Together, these results suggest that vacuole membrane fusion is important for Mup1 degradation, particularly in the absence of ESCRTs."

In Figure 2C, the authors show that Fth1p is strongly accumulated at the vacuolar boundary membrane, and therefore should be turned over via the ILF pathway. The Brett laboratory has already shown that they observe this fact (McNally, 2017).
However, recently published work has shown that Fth1-GFP (and other protein cargo) is NOT turned over post-heat stress in different yeast strain backgrounds (SEY vs. BY) over at least 4 h, even when intralumenal fragments were observed to form (Yang et al., 2021, JCB). As heat shock is a critical stress used in this submitted manuscript to show protein turnover via the ILF pathway, these discrepancies are important to address. In fact, the authors have completely ignored the presence of this 2021 paper, which strongly questions the relevance of the ILF pathway in protein turnover.

Excellent point. We deliberately omitted discussion of this paper from the manuscript for many reasons. For example, we initially submitted this manuscript in early 2020 and it was reviewed three times for publication at two journals, all prior to the publication of Yang et al., 2021. As such, our experimental design does not incorporate strategies to address potential discrepancies. Rather than add a few experiments to an existing and complete study, we reasoned that a better approach is to conduct an entirely new study that directly and comprehensively addresses all issues raised by Ming Li and colleagues. This study is near completion and will be submitted soon as a separate manuscript. It is important to note that the focus of this study is to understand how proteins are degraded in the absence of ESCRTs, when (ESCRT-mediated) microautophagy is disengaged, eliminating its potential contribution to Mup1 degradation. We also cite studies conducted by many different, independent groups that support our model and validate our observations. However, we agree that the 2021 paper should be acknowledged, and to address this concern, we added the following text, limiting discussion to potential discrepancies resolved by data presented in the manuscript:

Page 17, line 21: "A recent report challenges whether ubiquitylated proteins are degraded by the ILF pathway, arguing that ESCRT-mediated microautophagy is exclusively responsible instead (Yang et al., 2021). We do not directly test this hypothesis in our study, nor do we present data suggesting ubiquitin is required for protein degradation by the ILF pathway, although it is the focus of ongoing research. However, our results show that when triggered by methionine, cycloheximide or heat stress, GFP- or pHluorin-tagged Mup1 is degraded within the lumen of vacuoles in cells missing the ESCRT genes VPS36 or VPS27 (Figures 2 and 4). Because intact ESCRTs mediate this form of microautophagy (Yang et al., 2021), it is unlikely to contribute to Mup1 degradation in these mutant cells. In the near real-time videos presented (Figures 3D and 4E; Videos 1-3), a vacuole membrane structure resembling intermediates of both pathways is observed within live cells: a lumenally protruding tube or flap, connected to the outside membrane on one end and severed on the other, is a structure formed after partial Rab- and SNARE-dependent fusion of two vacuoles immediately prior to ILF formation (Mattie et al., 2017; McNally et al., 2017; Karim et al., 2018b). A similar structure, called a macroautophagic tubule, is formed after invagination of the membrane lining a single vacuole, and subsequent membrane fission by ESCRT-III generates lumenal vesicles (Zhu et al., 2017; Yang et al., 2021).
Prior to observing these intermediates in these videos, two tethered vacuoles are present and an intact interface containing Mup1-GFP between them is observed spanning the entire contact site, demonstrating that the resulting lumenal structures (intermediates and ILFs) are most likely products of membrane fusion. In support, deleting VAM3, a vacuole Qa-SNARE required for fusion, in vps36∆ cells blocks Mup1 degradation in vivo. Blocking vacuole fusion in vitro with the Rab inhibitor Gdi1 also inhibits Mup1 degradation, confirming that vacuole fusion is necessary. When considering additional evidence presented using complementary approaches, we conclude that the ILF pathway mediates protein degradation in ESCRT mutants." We also added Yang et al., 2021, JCB to the References in the revised manuscript.

The authors show that a variety of ESCRT mutant strains are not particularly sensitive to a transient heat shock, unlike ssa2∆ or hsa82∆ strains, as measured by methylene blue staining (Fig. 4H, I). The authors conclude that the unfolded proteins in the cell can still be turned over via the ILF pathway in these ESCRT mutant backgrounds, thereby reducing the stress of the accumulation of unfolded proteins. If the ILF pathway is indeed a 'fail-safe' pathway to remove unfolded proteins, this experiment should be repeated with vacuole fusion mutant strains to show that cell death strongly increases in the escrt∆ fusion∆ strains after heat shock.

We conducted the proposed experiment and the results are shown in Figure 4H and I of the revised manuscript. As predicted, we find that vps36∆vam3∆ cells are more sensitive to heat stress than vps36∆ cells. We added the following text to the revised manuscript to describe these new results:

Page 10, line 34: "Cells lacking VAM3, a Qa-SNARE needed for vacuole fusion and ILF formation, showed sensitivity to heat stress, and cells missing genes needed for both pathways (vps36∆vam3∆) showed greater sensitivity to heat stress (Figure 4H and I), suggesting that Vam3-mediated fusion is likely needed to degrade toxic misfolded proteins in cells, especially those lacking ESCRT function."

In Figure 5, the authors rely on purified yeast vacuoles to show that Mup1-GFP accumulates in the lumen of yeast vacuoles isolated from vps27∆ strains when they are allowed to fuse. The experiment in 5A (non-heat shock) should include fusion inhibitors to show that this accumulation is fusion-dependent.

Good point. In Figure 5G, we show that accumulation of Mup1-GFP in the vacuole lumen after fusion is significantly diminished in the presence of the fusion inhibitor Gdi1 (although only a single time point was analyzed). We also provide evidence for this using an equivalent (arguably more quantitative) assay in Figure 5F. Here we demonstrate that loss of Mup1-pHluorin fluorescence over time, presumably caused by exposure to low lumenal pH within vacuoles after internalization of Mup1 upon fusion, is blocked by the inhibitor Gdi1. We show that Mup1-GFP cleavage is blocked by Gdi1 (Figure 5H, bottom blot) and that Mup1-GFP sorting into boundaries is blocked by Gdi1 (Figure 5D and E). We argue that together these results provide sufficient evidence to support our conclusion that blocking fusion prevents sorting (Figure 5D, E), internalization (5F, G) and degradation (5H) of Mup1-GFP within the lumen of vacuoles.

Fig. 5D (good), but a heat stress on isolated vacuoles seems problematic here.
The authors do not include information on using a vacuolar protease inhibitor (Pbi2p) during in vitro vacuole fusion experiments, which is potentially problematic. Loss of vacuolar lumenal contents during in vitro fusion reactions does occur at some background level (Starai et al., 2007, PNAS), which I might expect to increase after a heat shock. Is the degradation of Mup1-GFP observed in this study a result of lysed and resealed vacuoles? Is released and activated Prb1p (or other vacuolar proteases) responsible for this observation?

Excellent points. Yes, we use Pbi2p in our experiments and have clarified this in the text (Page 22, line 34). Although it is possible that some lumenal proteases escape from isolated vacuoles during heat stress, it is unlikely that this accounts for the observed cleavage of GFP from Mup1, because there is no Mup1-GFP cleavage during heat stress in the presence of the fusion inhibitor Gdi1 (Figure 5H). Under these conditions, proteases could leak out and cleave GFP from Mup1 on the outside surface of VMs, but the data refute this possibility.

What happens to Mup1-pHluorin fluorescence when vacuoles isolated from the vps27∆ strains are forced to fuse in the absence of ATP? Supplementation of these reactions with recombinant Vam7p will force vacuole fusion, but will not acidify the lumen of the vacuole.

Good point. We have extensively characterized the effects of adding recombinant Vam7 on ILF formation during homotypic vacuole fusion (see McNally et al., 2017, Dev Cell; Mattie et al., 2017, MBoC). In sum, using rVam7 to drive fusion prevents selective cargo sorting into boundaries (all proteins get in) and causes abnormally large ILFs to form. Thus, adding rVam7 to prevent lumenal acidification during fusion would introduce confounding effects, making results difficult to interpret (noting that rVam7-mediated fusion also enhances acid hydrolase leakage according to Starai et al., 2007, PNAS). In McNally et al., 2017, Dev Cell, we first describe this pHluorin-based assay in detail and provide many controls similar to the requested experiment. For example, to collapse the pH gradient across VMs we use the protonophore nigericin and show that changes in pHluorin fluorescence are lost (e.g., Supplemental Figure 1 in McNally et al., 2017, Dev Cell). Herein, we show that rGdi1 blocks changes in pHluorin fluorescence as a control, and we argue that this is sufficient (along with supporting data generated using many other approaches) to support our conclusion.

Overall, the authors have failed to address the Yang et al. (2021) observations that several cargo proteins (Fth1, Cot1, Vph1, etc.) fail to be degraded in the absence of ESCRT complex activity, even under CHX or heat shock conditions. This is a major discrepancy that must be addressed in this manuscript prior to publication; ...

We understand and appreciate the concern, and it was largely addressed above. We did not examine Fth1, Cot1 or Vph1 in this study. As they are not the focus of this work, we argue that this manuscript is not the forum to present a detailed discussion directly addressing potentially conflicting datasets presented in our other published papers. But as mentioned above, we agree that this is important and are currently working on it.

...it is not immediately clear to me why HILO microscopy would be a more sensitive technique for these types of studies, as opposed to the microfluidic/confocal real-time techniques used in the Yang paper.

We do not argue that real-time HILO microscopy is a more sensitive technique for studying the ILF pathway.
Results from this method (HILO) are adequate to support our conclusions. If it matters, we have previously used real-time confocal microscopy (without microfluidics) to unequivocally show that ILFs are produced by two fusing vacuoles in vitro (McNally et al., 2017, Dev Cell). Herein, results from real-time HILO microscopy (i.e., movies) show that Mup1-GFP on vacuole membranes within living cells is present in boundary membranes and is internalized within ILFs during homotypic fusion, defining features of the ILF pathway. How Mup1 is delivered to vacuole membranes may be better studied using microfluidics, but we cannot envision how it would further improve visualization of the ILF pathway and homotypic vacuole fusion itself (which in our opinion was not adequately addressed in Yang et al., 2021). Thus, without extensive comparative analysis, it is currently unclear how the imaging method used could be responsible for potential discrepancies. Based on this, and because we conducted nearly all experiments prior to publication of Yang et al., 2021 in JCB, we cannot justify using their methods (i.e., designing and building a custom microfluidics device and acquiring an expensive confocal microscope with imaging capabilities similar to our existing system) to repeat experiments described in the manuscript.

Does the accumulation of PM proteins on the VM in escrt∆ strains after heat shock depend upon MVB:vacuole fusion? Is this reduced in pep12∆ strains? Getting these PM proteins to the vacuole in the absence of ubiquitination, endocytosis, and delivery to the VM is somewhat mysterious in this background (as the authors note).

Based on the canonical model of the MVB pathway, our impression is that ESCRTs recognize ubiquitylated Mup1 at endosome/MVB membranes after it is labeled by independent ubiquitylation machinery (e.g., E3 ubiquitin ligases, adapter proteins) at the PM and subsequently undergoes endocytosis. In support, we find that Mup1 seems to be ubiquitylated and clearly undergoes endocytosis as predicted, because components of ESCRT-I or ESCRT-II are not responsible. ESCRTs do not directly mediate membrane fusion either. Multiple groups, including our own, have shown that MVB-vacuole membrane fusion persists when components of ESCRTs are deleted, in support of the canonical model (e.g., Karim & Brett, 2018, MBoC). Consistent with these observations, other groups observed the presence of cargo proteins (e.g., Ste3) on VMs in ESCRT∆ cells prior to this study, which motivated us to conduct these experiments as stated in the Introduction (Page 5, Line 7). Thus, we did not intend to give the impression that Mup1 appearing on vacuole membranes in cells missing ESCRTs is mysterious. We addressed this at length in the Discussion under the subheading "How are client proteins recognized by both pathways?" in the original manuscript. However, to address this concern, we modified the text to prevent readers from getting the impression that some observations were "mysterious". We agree that deleting PEP12 should diminish MVB-vacuole fusion, but it is unclear how conducting this experiment would further support our central hypothesis: the ILF pathway can only recognize cargos present on the VM. In ESCRT∆ cells, Mup1 is present on VMs, where it is degraded by the ILF pathway. Although we appreciate the suggestion, we argue that further uncovering the mechanisms that deliver Mup1 to the VM, beyond what is already present in the manuscript (e.g.,
endocytosis and VM delivery of Mup1 are observed in living ESCRT∆ cells; see Figure 1), will not strengthen the central conclusion.

Minor point: Figures 2 and 3 have their final panel (F and E, respectively) in a really strange spot and not in order.

Thank you for bringing this to our attention. We rearranged the panels in these figures to better present the data in the revised manuscript.
FLORAL PHENOLOGY, FLORAL REWARDS AND INSECT VISITATION IN AN ORNAMENTAL SPECIES

This 4-year study examined the flowering pattern, floral display, nectar and pollen production, as well as insect visitation, in the perennial Geranium platypetalum Fisch. & C. A. Mey. G. platypetalum bloomed from the end of May until the end of June. The pattern of flowering showed a skewed distribution with a tendency towards a more symmetrical curve. The flower display size fluctuated during the flowering season. The most intense blooming fell in the 2nd and 3rd flowering weeks. The flowers exhibit incomplete protandry. Nectar productivity differed significantly between the male (♂) and female (♀) stages of flower development. Ten ♂-stage and ♀-stage flowers secreted 29.8 mg and 17.6 mg of nectar, on average, respectively, with mean sugar contents of 33.9% and 43.1%. The mean total sugar mass in nectar was similar for both stages; the values were 10.2 mg and 8.2 mg, respectively. Pollen mass per 10 flowers was 19.06 mg. Bees (Apoidea) were the principal visitors to Geranium flowers. The peak of daily activity of visitors occurred between 10.00 and 14.00 hrs. The insects gathered mainly nectar. The mean visiting rate was 0.149 visits per flower×min⁻¹. Increased use of G. platypetalum in parks and gardens is recommended in order to enrich the nectar pasture for A. mellifera and wild Apoidea.

INTRODUCTION

The flowering pattern of a species is of great importance when insect-plant relationships are investigated, especially if the taxon is considered as a food source for visitors. Geranium species are among the plants that can supply insects with ample nectar and pollen. The genus Geranium L. (Geraniaceae), crane's bill, comprises approximately 400 species distributed in temperate areas and tropical mountains throughout most of the world (Zomfler, 1994; Aedo et al., 2007). According to Rutkowski (1998), 21 Geranium species occur in the Polish flora. Some species are used as medicinal plants with antioxidant properties due to a high flavonoid and tannin content (Miłkowska et al., 1998; Ghimire et al., 2006; Antal, 2010). They also provide essential oils (mainly geranium oil) for cosmetic purposes. Numerous crane's bill species and cultivars are widely grown as ornamentals, especially in naturalistic parks and gardens. Their decorative value derives from beautiful and abundant flowers as well as from dense leaf canopies covering the ground. These plants are very easy in cultivation and perform well both in full sun and in shade. Geranium flowers appear to be typically entomophilous because of their visually attractive petals, robust stigmas, abundant pollen and the presence of active nectaries. Moreover, these actinomorphic flowers produce conspicuous nectar guides, generally similar on all petals (Link, 1990). Wild crane's bills are eagerly visited by insects attracted mainly by nectar with a high sugar content. Among the taxa occurring in natural habitats, the best-known valuable nectariferous plants are several Geranium species.

The aims of the present study were: (1) to examine floral phenology in one of the most showy ornamental species, Geranium platypetalum; (2) to investigate the floral rewards available to visiting insects; and (3) to monitor the activity and spectrum of insects collecting these rewards. The primary purpose of this investigation was to determine the floral traits of G. platypetalum that are beneficial for relationships with visiting insects, and to check whether these plants can supply their flower visitors with ample, high-quality food during spring, i.e., a period of high food demand by bees.
MATERIALS AND METHODS

Study site and plant species

The present study was conducted on a cultivated plot of G. platypetalum grown on a loess-origin soil in the Botanical Garden of Maria Skłodowska-Curie University, Lublin, Poland (51°09′N, 22°27′E). The census plot occupied an area of 7 m². The observed plants grew in a dense patch, so it was difficult to discriminate individuals and count their number.

Geranium platypetalum Fisch. & C. A. Mey., synonymous with G. ibericum var. platypetalum (Fisch. & C. A. Mey.) Boiss., is a perennial herb, 25-57 cm tall, which is native to northeast Turkey, the Caucasus, and northern Iran, and has been cultivated since 1802. The stem is erect, leafy, herbaceous. Basal leaves are in a rosette. The rootstock is ± horizontal. The flowers are hermaphroditic, actinomorphic, disc-shaped, 3-4.5 cm in diameter, deep blue with red veins, clustered in a dichasial cyme with 2-flowered cymules. The sexual parts, consisting of a pistil and 10 surrounding stamens, stand straight in the center of the flower. Anthers are blue-black and the gynoecium is dark purplish.

Phenology and abundance of flowering

The study on phenology of flowering was carried out at the species level as well as the flower level (Dafni, 1992). During the period 2009-2011, flowering onset and flowering termination were recorded to determine the timing and duration of flowering. Moreover, in the years 2009-2010 the dynamics of flowering per area unit was examined as well. For this purpose, prior to the opening of the first flowers, 5 plots, each 50×50 cm in size, were selected at random on the patch and marked. On each plot, daily or every second day, all new flowers that opened were counted and marked with a marker until blooming terminated. The values were recalculated per 1 m² area. The dynamics of flowering was expressed as the percentage of newly opened flowers on successive flowering days in relation to the total number of flowers eventually formed on the plot (100%). The total number of flowers was determined per 1 m² area as well as per inflorescence. Moreover, the development stages of G. platypetalum flowers were observed. Floral persistence was counted as the days from the opening to the falling of all petals. The length of flowering of a single inflorescence was determined, too.

Floral reward measures

Preliminary observations on the localization of floral nectaries were conducted. After removing elements of the corolla and anthers, whole glands from fresh material were examined under a stereoscopic microscope. Nectar productivity was examined in the years 2007 and 2009-2010. To determine it, flower buds were isolated in the field and nectar was collected from perfect flowers at two different stages of flower development: the male stage (♂), i.e., flowers with full pollen presentation (Fig. 1), and the female stage (♀), i.e., flowers after pollen exposure with fully expanded stigmas (Fig. 2). The nectar was gathered using glass micropipettes and its amount was measured (in mg). A total of 22 and 18 samples for the ♂ and ♀ stages, respectively, were collected during this study. Each sample contained nectar collected from 1 to 10 flowers. Nectar sugar concentration was measured with an Abbe refractometer. Then, the nectar amount and sugar concentration were used to calculate the total sugar amount (in mg) secreted in nectar per 10 flowers of each stage, as illustrated in the sketch below.
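A minimal sketch of this calculation (the function below is ours; the assumption that the refractometer reading is a percentage by weight that can be multiplied directly by the nectar mass is implied, but not spelled out, in the text):

# Hedged sketch: total sugar mass from nectar mass and refractometer concentration.
def sugar_mass_mg(nectar_mg, sugar_conc_percent):
    """Sugar mass (mg) = nectar mass (mg) x sugar concentration (% w/w) / 100."""
    return nectar_mg * sugar_conc_percent / 100.0

# Using the mean male-stage values from the abstract (29.8 mg nectar, 33.9% sugar):
print(sugar_mass_mg(29.8, 33.9))  # ~10.1 mg, close to the reported mean of 10.2 mg

The small difference from the reported mean presumably arises because the published means were averaged over individual samples and years rather than computed from the grand means.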
The pollen mass available to insects was determined by the ether method (Warakomska, 1972). In the years 2007 and 2009-2011, samples of 50 mature anthers were collected each year. Pollen production was expressed in mg per 100 anthers (= 10 flowers).

Insect visitation

In the years 2010-2011, insect visiting and foraging activity on G. platypetalum flowers was monitored throughout the peak blooming period. The number of open flowers and the number of working insects in a field of view (0.25 m²) were counted for 5 minutes, three times every hour, from 8.00 h to 18.00 h (GMT+2h). The counts were converted to a visiting rate (visits per flower×min⁻¹). Observations discriminated four insect categories: honey bees, bumblebees, solitary bees, and others (flies, ants). The relative abundance and daily visitation pattern of these categories were determined.

Data analysis

Whenever possible, parametric statistical analysis was used on variables by applying standard analysis of variance procedures. When significant differences were found, the ANOVAs were followed by the HSD Tukey test at α = 0.05 (Stanisz, 2006). Descriptive statistics were calculated and are presented as means ± S.D. Data in figures are presented as average values. Differences in the nectar amount, nectar sugar concentration and total sugar amount in nectar between the floral stages and years of study were tested by means of two-way ANOVAs. Differences in the amount of pollen per 10 flowers between years of study were analysed by one-way ANOVA. Non-normally distributed data, i.e., the total number of flowers per inflorescence and the total flower number×m⁻², were compared with the Kolmogorov-Smirnov test, while the life-span of a flower and of an inflorescence were subjected to the Kruskal-Wallis ANOVA (H test) for nonparametric data. Data analyses were performed with STATISTICA v.7.1 (StatSoft Poland, Krakow).
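The conversion of visitation counts to rates described under "Insect visitation" above can be sketched as follows; the exact formula is not spelled out in the text, so dividing the number of visits observed during one census by the number of open flowers and by the census duration is our assumption:

# Hedged sketch of the count-to-rate conversion (visits per flower per minute).
def visiting_rate(n_visits, n_open_flowers, census_minutes=5.0):
    return n_visits / (n_open_flowers * census_minutes)

# Hypothetical census: 15 insect visits among 20 open flowers in a 5-min count.
print(visiting_rate(15, 20))  # 0.15, on the order of the ~0.149 mean reported here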
RESULTS

Examining the flowering dynamics curves, i.e., the number of newly open flowers per census day over the duration of flowering, it can be stated that the flowering of the investigated species begins with a small number of flowers, then steadily or quickly reaches a peak with a few local maxima before tailing off, leading to a skewed distribution. In 2009, the skewed shape of the flowering curve more closely approximated a symmetrical distribution (Fig. 3a). The detailed data concerning the blooming abundance of the studied species are shown in Table 2. The influence of the growing season on the number of flowers formed per inflorescence was significant. However, the total number of flowers×m⁻² was similar across the years of the study. Flowering was facilitated by good temperature and moisture conditions both during the flowering period and in the period preceding it. The life-span of a single flower was 3 days, while a single inflorescence persisted 11 days, on average (Table 2).

Perfect flowers of G. platypetalum exhibit incomplete protandry at the intrafloral level. The stigmas became receptive and exposed while anthers continued to shed pollen in a flower. Therefore, there is an overlap in the presentation of the pollen and stigmas. Anthers started to shed pollen 1-2 hours after bud opening, whereas stigma presentation started the day after a flower opened.

Floral nectar secretion

In a flower of G. platypetalum, nectar is secreted by five phanerothetic-discoid nectaries. The characteristics of nectar produced by flowers in the male and female stages are shown in Tables 3 and 4. The amounts of nectar collected in the male and female phases differed significantly (Table 3). Flowers in the male stage produced, on average, 40.8% more nectar when compared to female-stage flowers (Table 4). The nectar sugar concentration showed significant stage and year effects (Table 3). In contrast to the nectar amount, the higher values were found for samples collected from female-stage flowers. The nectar collected in 2010 was more concentrated than that gathered in 2007 and 2009; the mean sugar content exceeded 60%. Finally, the total sugar amount secreted in nectar did not differ significantly between the two stages of flower development (Table 3); however, the values obtained for the male stage were slightly higher than those for the female stage. The year effect was significant (Table 3). In 2010, with the highest temperatures and the most sunny days during flowering of the species, nectar sugar yield increased two- to 2.5-fold when compared to 2009 and 2007.

Pollen production

The pollen output from 100 anthers (equal to 10 flowers) is shown in Fig. 4. The mass of pollen showed a year effect (Table 3). Extremely low amounts were produced by flowers in 2010, whereas the highest values were found in 2007. The within-species variation in the amount produced by the same number of anthers can be caused by differences in anther size and by the percentage of non-fertile pollen grains, which can vary from year to year. In G. platypetalum, a tendency to produce even empty anthers was observed.

Insect visitation

Flowers of G. platypetalum attract numerous insect visitors. Under good weather conditions they visited flowers throughout the day, with peak activity between 10.00 and 14.00 hrs; the rate of visitation increased slightly after 17.00 h (Fig. 5). The daily visitation patterns of various groups of insects on Geranium flowers are shown in Figure 6. The insects gathered mainly nectar. In 2010 and 2011 the mean visiting rate was 0.152 and 0.145 visits per flower×min⁻¹, respectively. The visitor assemblage changed over the years of observation. The principal visitors were Hymenoptera, among which Apis mellifera dominated. In 2010, honey bee workers were the only visitors observed on Geranium flowers, while in 2011 they comprised 86.6% of all insects observed. Moreover, the flowers were visited by a number of bumblebees, solitary bees, flies and ants (Fig. 7).

DISCUSSION

In the environmental conditions of central Europe, as well as in the region of its origin, G. platypetalum blooms from May until August (Marcinkowski, 2002; Aedo et al., 2007). In the present study the flowering period of this species was much shorter and usually terminated before the end of June. The differences in the length of flowering can be caused by heterogeneous environmental conditions (e.g., soil type and weather conditions), differences among genotypes, or phenotypic plasticity (Rathcke and Lacey, 1985; Elzinga et al., 2007). One earlier study (1978) found that the pattern of flower production in Phaseolus vulgaris varied from a concentrated skewed pattern to a longer and more normally distributed pattern. Gentry (1974) pointed out that the great majority of temperate, generalist plants employed a 'cornucopia' strategy.
Thomson (1980, 1985) as well as Forrest and Thomson (2009) suggest that selection favoured asymmetrical, positively skewed curves. By displaying numerous flowers from the beginning of bloom, the plant obtains the services of faithful visitors that will continue to visit despite subsequent decreases in the rate of flower production (Thomson, 1985). The observed differences in the flowering curves of G. platypetalum reflect a high susceptibility of this species to the weather conditions prevailing in the blooming period. Generally, the opening of new flowers intensified in the 2nd week of flowering, as was previously observed in G. sanguineum (Masierowska, 2006).

The studied species produced numerous flowers; their density per 1 m² exceeded 1100 flowers. Similar values were found for G. sylvaticum (Jabłoński and Kołtowski, 2002). However, significant differences between the years of study were found only for the number of flowers in a single inflorescence. The abundance of blooming may be influenced by weather conditions during the vegetation period, as was observed during this study.

Flowers of G. platypetalum show a typically entomophilous character, offering insects both nectar and pollen. However, nectar is the main reward collected by them. In the flowers of this taxon, the secretory tissue forms five glabrous nectaries at the bases of the episepal filaments of the anthers, between the petal insertions. According to Link (1990), this is a phanerothetic-discoid gland type, commonly occurring in the genus Geranium. The arrangement of floral elements in Geranium flowers makes nectar easily accessible to various insect visitors.

Flowers of G. platypetalum exhibit incomplete protandry based on the criteria of Lloyd and Webb (1986). This kind of dichogamy occurs commonly in the Geraniaceae family and is regarded as one of the mechanisms to avoid selfing and to promote outcrossing in these plants (Faegri and van der Pijl, 1980; Zomfler, 1994; Kandori, 2002; Asikainen and Mutikainen, 2005). According to Fiz et al. (2008), G. platypetalum is a xenogamous species.

In the present study, clear differences in nectar yield occurred between the two sexual phases within a flower. The male phase averaged 1.7-fold as much nectar as, and a similar nectar-sugar amount to, the female phase. In contrast, the sugar content in nectar was much higher in the female stage, compensating for the differences in the secreted amounts of nectar. The results of this study are difficult to compare because relevant data concerning nectar production in different stages of flower development in G. platypetalum or other Geranium species are lacking. In contrast, in the protandrous flowers of Carum carvi, Apiaceae (Langenberger and Davis, 2002) and Echinacea purpurea, Asteraceae (Wist and Davis, 2006), the highest average values of the nectar amount, nectar-solute concentration as well as nectar-sugar quantity occurred in the pistillate-stage florets, when nectar becomes the major reward available in flowers. In the female stage of G. platypetalum flowers, nectar attractiveness improved mainly due to a 1.7-fold increase in nectar concentration, which exceeded 61%.
A high concentration of nectar thus appears characteristic of Geranium. The discrepancies between the pattern of nectar production in a protandrous species obtained by Davis and co-workers and that found in the present study can be attributed not only to a species effect but also to the diverse conditions in which those experiments were conducted. The Carum and Echinacea plants were grown in a growth chamber, whereas in the present investigation nectar samples were collected directly from plants grown in the field. To explain the different tendencies in the nectar secretion pattern, a more detailed study under controlled conditions is necessary.

Overall, the nectar yield from 10 flowers of G. platypetalum was not very high, and the nectar-sugar quantities were in the range of the values obtained for G. collinum (Mińkov, 1974) or G. sanguineum (Masierowska, 2006). Throughout the whole flower lifetime (that is, with nectar collected at the end of the female stage), ten flowers of these species secreted 3-18 mg and 4.7-14.8 mg of sugar in nectar, respectively. Both these taxa are considered to be valuable melliferous plants, but they secreted nectar abundantly and regularly only under high air humidity and air temperature, similarly to G. platypetalum.

The pollen output from 10 flowers of G. platypetalum was 19.06 mg, which was lower when compared to the mean pollen productivity of 10 flowers of G. sanguineum (Masierowska, 2006). According to Maurizio and Grafl (1969), crane's bills can provide 1-3% of the pollen yield in mountain areas. Interestingly, in the male phase of G. platypetalum flowers the visitors concentrated on nectar and did not actively gather the released pollen. Large insects collecting nectar, such as A. mellifera and Bombus spp., were usually located upon the sexual parts and had pollen deposited on their abdominal side (sternotribic pollen deposition), while small insects were often located on the peripheral petals, walking on them and having much less contact with the anthers. Similar behavior was previously described in some Geranium species by Kandori (2002) and Kozuharova (2002).

The phenotypic generalization of G. platypetalum flowers, including easily accessible nectar rewards, welcomes an array of insect visitors, e.g., bees, flies and ants. The present observations showed that although the pollinator assemblage differed in successive years of the study, the key visitors were hymenopterans, mainly honey bees. Honey bees as the most abundant visitors to Geranium flowers, as well as the presence of bumblebees and wild bees from the Andrenidae, Halictidae and Megachilidae families, were reported in several studies, e.g., Kandori (2002), Masierowska (2006), and Fiz et al. (2008). Syrphids and ants were also listed among flower visitors to these plants. The mean total number of visits per flower×min⁻¹ for G. platypetalum was higher when compared to G. phaeum or G. sylvaticum (reviewed in Fiz et al., 2008), wild and sometimes cultivated species eagerly visited by bees. The intensity of visits is an important factor influencing pollination success. In general, the greater the visitation intensity, the higher the chance of pollination; however, numerous studies have shown that insects with high visitation rates can be poorer pollen depositors (Engel and Irwin, 2003). Insect visit frequency changed over the course of a day.
The pattern of daily activity of flower visitors was associated with the opening of new flowers as well as with the availability of floral rewards.

CONCLUSIONS

Geranium platypetalum exhibits several floral traits beneficial for insect visitors at both the population and intrafloral levels, including its flowering pattern, floral display and floral rewards. In the course of this study these plants were observed to be a highly valuable source of nectar flow, especially for hymenopterans during spring. Increased planting of G. platypetalum in naturalistic parks and gardens is recommended in order to achieve an excellent ornamental effect as well as to enrich the nectar pasture for A. mellifera and wild Apoidea.
Deep-Learning-Based Multivariate Pattern Analysis (dMVPA): A Tutorial and a Toolbox

In recent years, multivariate pattern analysis (MVPA) has been hugely beneficial for cognitive neuroscience by making new experiment designs possible and by increasing the inferential power of functional magnetic resonance imaging (fMRI), electroencephalography (EEG), and other neuroimaging methodologies. In a similar time frame, "deep learning" (a term for the use of artificial neural networks with convolutional, recurrent, or similarly sophisticated architectures) has produced a parallel revolution in the field of machine learning and has been employed across a wide variety of applications. Traditional MVPA also uses a form of machine learning, but most commonly with much simpler techniques based on linear calculations; a number of studies have applied deep learning techniques to neuroimaging data, but we believe that those have barely scratched the surface of the potential deep learning holds for the field. In this paper, we provide a brief introduction to deep learning for those new to the technique, explore the logistical pros and cons of using deep learning to analyze neuroimaging data (which we term "deep MVPA," or dMVPA), and introduce a new software toolbox (the "Deep Learning In Neuroimaging: Exploration, Analysis, Tools, and Education" package, DeLINEATE for short) intended to facilitate dMVPA for neuroscientists (and indeed, scientists more broadly) everywhere.

INTRODUCTION: A dMVPA TUTORIAL

Although the roots of cognitive neuroscience date to the 1920s (the advent of electroencephalography, EEG; Berger, 1929), the modern neuroimaging era began in the mid-1990s, with the development of functional magnetic resonance imaging (fMRI) methodology and the increasingly widespread availability of (affordable) desktop computing workstations powerful enough to process fMRI datasets.
In those days, data analysis was primarily limited to univariate investigations such as event-related potentials (ERPs) in EEG and univariate general linear model (GLM) analyses aimed at detecting "blobs" of activation with fMRI (as well as differences in activity, e.g., between experimental conditions, within such blobs).[1] However, as the field has grown in quantity of research and expanded in breadth of topics, researchers have naturally sought to create ever-more sophisticated models of brain function and test ever-more refined and detailed hypotheses; this, in turn, has created a demand for corresponding developments in the form of more complex and mathematically advanced analysis techniques.

Thus, somewhat more recently (beginning in the early-to-mid-2000s), a second age in neuroimaging analysis arose with the advent of multivariate pattern analysis (MVPA; Haxby et al., 2001; Haxby et al., 2014). Rather than focusing on whether a certain cognitive event elicits activity in a particular cluster of fMRI voxels (or a voltage peak at a particular temporal latency with ERP), MVPA is instead concerned with how a neural pattern or multivariate "brain state" comprising multiple voxels (fMRI) or electrode/timepoint combinations (EEG) might collectively correspond to a certain cognitive event or state. Numerous MVPA variations exist, including those based on correlation (either Pearson or rank-based; Haxby et al., 2001), support vector machines (SVMs; De Martino et al., 2008; Dosenbach et al., 2010), logistic regression (Akama et al., 2012), sparse multinomial logistic regression (SMLR; Krishnapuram et al., 2005; Kohler et al., 2013), naïve Bayes classifiers (Kassam et al., 2013), and more. Many of these techniques concern classification of brain patterns into discrete cognitive states, whereas others examine different aspects of the data (e.g., overall similarity between brain patterns; Xue et al., 2010; Lim et al., 2019) without explicit categorization, but all of them represent increases in mathematical and conceptual sophistication over univariate techniques. Importantly, when compared to earlier univariate techniques, MVPA has enabled us to examine in a much more nuanced fashion how brain activity patterns encode mental states.

Although traditional MVPA techniques are substantially more complex than univariate techniques, they are nonetheless still fairly simple, both mathematically and conceptually. Traditional MVPA is a form of machine learning (ML), but it is among the simplest forms; most MVPA approaches use straightforward linear mathematical models. This comparative simplicity certainly confers advantages, for example, faster computation times than more complex techniques (with some caveats[2]), and a generally lower risk of "overfitting."[3] However, simpler mathematical formulations are necessarily limited in what we call "informational resolution": the specificity of the neural patterns and cognitive states that they are able to capture.

[Footnote 1: Although most of our discussion focuses on fMRI and EEG, as those are the most common techniques in our field of cognitive neuroscience, most points should translate well to related technologies like structural MRI, magnetoencephalography (MEG), or electrocorticography (ECoG), and even to less closely related methods such as extracellular recordings (e.g., from rodents or nonhuman primates).]

[Footnote 2: For example, SVMs may take inordinately long to converge on extremely high-dimensional datasets that are handled more easily by deep neural networks (at least for the common SVM implementations included in contemporary MVPA software packages; other, more scalable options may exist but are not, to our knowledge, readily accessible to the MVPA research community). As discussed later, deep networks also have better support for GPU-based parallelization than simpler linear techniques, which can offset their computational costs.]

[Footnote 3: The creation of a predictive model that is highly customized to the data used to train the model, but generalizes poorly to new datasets that do not perfectly match the idiosyncrasies of the training data; a significant concern in ML. A good analogy is a bespoke garment perfectly tailored to the contours of a specific individual, which would fit him/her perfectly but look terrible on most others. Conversely, an off-the-rack outfit with a simpler design would fit many individuals of roughly similar proportions reasonably well.]
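To make the contrast concrete before continuing, here is a minimal sketch of a traditional (linear) MVPA decoding analysis in Python with scikit-learn. The synthetic data stand in for a trials-by-features matrix of fMRI voxels or EEG channel/timepoint values; none of this code comes from the studies cited above, and the specific classifier settings are illustrative assumptions:

import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))      # 200 trials x 500 features (e.g., voxels)
y = rng.integers(0, 2, size=200)     # two cognitive states to decode
X[y == 1, :20] += 0.5                # plant a weak multivariate class signal

# A linear SVM, one of the classic "traditional MVPA" classifiers
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
print(scores.mean())

Cross-validation is what keeps the accuracy estimate honest: the model is always scored on trials it never saw during training, which is the standard guard against the overfitting described in Footnote 3.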
Although traditional MVPA techniques are substantially more complex than univariate techniques, they are nonetheless still fairly simple, both mathematically and conceptually. Traditional MVPA is a form of machine learning (ML), but it is among the simplest forms; most MVPA approaches use straightforward linear mathematical models. This comparative simplicity certainly confers advantages - for example, faster computation times than more complex techniques (with some caveats 2), and a generally lower risk of "overfitting" 3. However, simpler mathematical formulations are necessarily limited in what we call "informational resolution" - the specificity of the neural patterns and cognitive states that they are able to capture.

2 For example, SVMs may take inordinately long to converge on extremely high-dimensional datasets that are handled more easily by deep neural networks. (At least for the common SVM implementations included in contemporary MVPA software packages; other, more scalable options may exist but are not, to our knowledge, readily accessible to the MVPA research community). As discussed later, deep networks also have better support for GPU-based parallelization than simpler linear techniques, which can offset their computational costs.

3 The creation of a predictive model that is highly customized to the data used to train the model, but generalizes poorly to new datasets that do not perfectly match the idiosyncrasies of the training data; a significant concern in ML. A good analogy is a bespoke garment perfectly tailored to the contours of a specific individual, which would fit him/her perfectly but look terrible on most others. Conversely, an off-the-rack outfit with a simpler design would fit many individuals of roughly similar proportions reasonably well.

How much informational resolution is required to glean as much about brain function as is possible using current neuroimaging technology? The answer is hard to pin down, partly because it is difficult to establish firm estimates of the "noise ceiling" 4 for these techniques. As neuroimagers, we often complain that our techniques are "noisy," but with proper usage, the signal-to-noise ratios of EEG and fMRI are really rather high, when considering only measurement noise from the instruments themselves and the surrounding physical environment. Of significantly greater concern are "noise" sources such as subject head/body motion, physiological artifacts (cardiac, respiratory, muscular, etc.), and cognitive artifacts (distraction, poor understanding of instructions, falling asleep). Noise ceilings for certain analytic techniques and datasets can be estimated (Kay et al., 2008; Nili et al., 2014), but ultimately they will depend on which data components are considered "noise"; aside from the noise that arises from the physics of the measurement itself, other biological and subject-driven artifacts have some hope of being detected, modeled, and/or removed. And, much like the signal components we actually care about (i.e., those related to our experimental questions), our ability to detect and account for noise depends largely on the sophistication of our analytic techniques. What we do know is that the brain is a highly complex, highly nonlinear system (Koch and Laurent, 1999; Sporns et al., 2000; Buzsaki and Mizuseki, 2014), and the addition of noise sources that are also complex and nonlinear makes brain data no easier to analyze and interpret.

Although the limits of the usefulness of traditional MVPA, with its relatively low informational resolution, have not yet been reached, those limits do loom on the horizon. As the size of neuroscience data continues to grow 5, traditional MVPA's limitations become ever more apparent. It is a statistical truism that more complex analytic models, with more parameters to fit, allow us to account for a greater proportion of a dataset's variance, but they also require larger input data to estimate their parameters reliably. Yet the sizes of many contemporary datasets are now such that they can potentially accommodate significantly more sophisticated statistical models than traditional MVPA, with greater power to identify, extract, and distinguish noise sources and signals of interest.
Thus, we believe it is time for cognitive neuroscience and related fields to place increased emphasis on developing, exploring, and using more mathematically and computationally sophisticated techniques, and on producing tools that can be used to perform that exploration more effectively and efficiently.

4 Informally defined, the best we might be expected to do in using statistics to explain variance in the data, accounting for the fact that a certain amount of unexplainable variance, aka noise, will always exist.

5 E.g., from better spatiotemporal resolution due to technological improvements; from increasingly large sample sizes, particularly from big-data initiatives such as the Human Connectome Project (Van Essen et al., 2013) and OpenNeuro (formerly OpenfMRI; Poldrack et al., 2013); and simply from the ongoing accumulation of data stockpiles from many years' worth of research studies.

The Case for Deep Learning

There are numerous potential analytic methods of greater complexity and sophistication than traditional MVPA. One class of ML techniques that has been gaining popularity, and the one we endorse in this paper, is "deep learning." Deep learning, briefly defined, refers to the use of artificial neural networks (ANNs), typically with recurrent and/or convolutional architectures, that are more complex, flexible, and (potentially) powerful than both earlier generations of ANN architectures and the techniques used for traditional MVPA. In the last few years, such deep neural networks (DNNs) have been used increasingly heavily in a number of fields that employ ML for all kinds of purposes. Such usage includes an ever-growing collection of studies in human neuroscience and related disciplines, although a relatively small proportion have been devoted to neuroimaging analysis, and fewer still devoted to decoding cognitive states from functional measurements of brain activity, which is a topic of great interest to many. Admittedly, the long-term utility of deep learning to neuroscience remains unproven; however, based on the studies we have seen so far and the extent of deep learning's impact on other fields, our conjecture is that those studies represent only the tip of the proverbial iceberg in terms of what is achievable by using DNNs to analyze neuroscience datasets. In fact, we believe deep learning has the potential to perform most of the tasks for which traditional MVPA is typically employed, but with greater speed, flexibility, and power, and thus we advocate for the more widespread use of what we call "deep MVPA," or dMVPA for short.

To achieve more widespread adoption of deep learning in the neurosciences, notable challenges to confront include (1) a relatively low level of knowledge/awareness of these techniques, and (2) insufficient availability of software tools to make dMVPA as approachable as traditional MVPA. In this paper we address the first challenge by providing a brief review of deep learning techniques, including how they can be used in neuroscience investigations, and the pros and cons of dMVPA versus traditional MVPA.
We address the second challenge by introducing a new Python-based software toolbox (the "Deep Learning In Neuroimaging: Exploration, Analysis, Tools, and Education" package; DeLINEATE for short) that builds upon previous DNN and MVPA tools and aims to make dMVPA more approachable and efficient for other researchers.

The Case Against Deep Learning

We have encountered two common points of resistance to the adoption of deep learning methods for MVPA, which we will address briefly here. The first is that it does not always appear to be necessary; shallower network architectures and simpler, more traditional classifiers can often do the job. This is true. None of our analytical tools are or can be ideally suited to every task, and deep neural networks would be an inappropriate choice for many neuroscience questions. However, we might counter that this is also a time for exploration. We cannot know which neuroscientific problems these techniques will be best suited for until a great many more experimentalists and theoreticians have spent some time with them, and enabling that exploration is the primary goal of this paper and our software toolbox.

The second is a more general wariness of MVPA. Fully addressing this topic is rather outside our current scope, but skepticism is to be valued and we wish to address the most immediately salient aspects of this viewpoint. MVPA, and by extension our proposal for deep-learning-based versions thereof, does not have as long a history of theoretical work and education within the field to guide interpretation as the more traditional inferential techniques do. Given that substantial confusion still exists in the field surrounding the proper use and interpretation of old-fashioned statistics, some caution about using and interpreting newer, more complex techniques is certainly warranted. With that said, there are cases wherein no other existing tool answers quite the question one wishes to ask. Better, we think, to use new tools than to ask worse or more limited questions.

The most immediately obvious use case for MVPA, deep or shallow, concerns multivariate outcomes. Condensing the results of an fMRI or EEG recording session into a single traditional dependent variable is a tremendously fraught task; it certainly requires discarding potentially useful information along the way, and at worst can obscure the true nature of the underlying data. (For example, averaging ERPs with different latencies can produce a grand average that appears lower in both amplitude and frequency than any individual trial's ERP actually is). In our own prior work, we have most certainly found that MVPA enabled us to address questions that could not be answered with previous univariate approaches; for example, that scene-selective visual brain areas represented not only category but exemplar-level information during mental imagery (Johnson and Johnson, 2014), that ERPs related to mental attention that did not exhibit reliable differences between stimulus categories at a grand-average level still contained category information with MVPA (Johnson et al., 2015), and that gradual drift in fine-grained information patterns in visual cortex during working memory could be used to predict memory errors (Lim et al., 2019). In short, we agree to a certain extent with (d)MVPA skeptics that these should not be the tools of choice for every research question, and in the long run, they may not even be the tools of choice for a particularly large portion of questions.
However, more work is needed before we can truly understand the relative values of these approaches, and performing that exploration requires appropriate software tools.

Overview and Intended Audience

We have tried to make this paper as useful as possible to as many readers as possible, although we recognize that readers may come in with a wide variety of backgrounds, and thus it is not possible to cover every topic as comprehensively as we would like. For readers almost entirely unfamiliar with neural networks and deep learning, the sections "A Brief History of Neural Networks" through "The Deep Learning Renaissance" cover the historical context of these methods and some general overview of deep learning concepts, although complete novices may wish to supplement with other introductory resources on machine learning generally and/or deep learning in particular. The section "Pros, Cons, and Caveats of dMVPA" discusses pros and cons of dMVPA, mostly by comparison to traditional MVPA, so it may be most useful to researchers who already have some familiarity with simpler machine-learning techniques but are considering incorporating dMVPA into their research portfolio. (Researchers almost entirely unfamiliar with traditional MVPA may wish to read other review articles on that, and try running traditional MVPA on their own datasets, before proceeding much further into dMVPA territory). The section "A Brief Introduction to Network Architecture" gives a fairly high-level introduction to neural network architectures for readers who are ready to begin experimenting with dMVPA and usage of the DeLINEATE toolbox, although these readers may also wish to supplement with other introductory resources to flesh out their knowledge as they get deeper into their explorations of those concepts. The section "Method: A Toolbox for dMVPA" contains an overview of the more practical aspects of using the DeLINEATE toolbox; however, for full implementation details, users should consult the online documentation and video tutorials. We have also included sample walkthroughs of some toolbox usage cases in the Supplementary Material. The "Results" section contains more practical information in the form of hardware/software requirements and benchmark results (also discussed somewhat further in the Supplementary Material), and the "Discussion" section discusses future directions.

A Brief History of Neural Networks

The techniques we now collectively call "deep learning" are generally extensions of older "shallow" ANNs, which are significantly less complex and powerful than DNNs but not much different in their basic principles. As such, although knowledge of ANN history is not strictly necessary for conceptually understanding deep learning, the historical development of ANN techniques can provide a useful scaffold to help learners structure their newfound knowledge. The concept behind all ANNs originates from a highly abstracted view of non-artificial neural networks, i.e., the biological nervous system (Figure 1A). In this framework, most implementation details are stripped away, and what remains is the basic idea of a network of simple computational units ("neurons") that receive input (which can typically be excitatory or inhibitory), perform an operation on their inputs (typically some variation on summation), and produce an output (typically a single value analogous to an action potential or a firing rate), which might then serve as input to one or more downstream neurons 6.
The original and simplest case is the McCulloch-Pitts neuron (McCulloch and Pitts, 1943; Figure 1B), a processing unit whose input and output values are exclusively binary (0 or 1). The McCulloch-Pitts neuron sums its inputs, compares the sum to some threshold value, and outputs a 1 ("action potential") or 0 according to whether the sum exceeds the threshold. Although a pioneering idea and an interesting (if highly simplified) early model of neural information processing, McCulloch-Pitts neurons can only implement a limited set of functions and are thus not considered very useful for modern ML applications. A few years later, though, ANNs took a significant step forward when Rosenblatt (1958) incorporated Hebb's theoretical views on the strengthening and weakening of synaptic connections (Hebb, 1949) into a McCulloch-Pitts-like unit that came to be called the perceptron 7. In its simplest form, a perceptron (Figure 1C) is largely identical to a McCulloch-Pitts neuron with one critical addition: Each input is now associated with a "synaptic weight" (often denoted w_0, w_1, etc.) that determines whether it is excitatory or inhibitory and how strongly it influences the output. Summation is then performed on the inputs after they have been multiplied by their respective weights. Statistically inclined readers may recognize this as not-dissimilar-to a regression model, particularly logistic regression; to conceptually convert between a multiple regression model and a perceptron, simply rename the weights from the β_i typically used in regression equations to w_i and pass the regression output through a thresholding function, logistic function (to essentially replicate logistic regression), or other function as desired 8. This function is known as the artificial neuron's activation function; activation functions are a key feature of contemporary ANN designs, and there are many options to choose from.

FIGURE 1 | Comparison of biological and artificial neural models. (A) A simplified "textbook" model of a biological neuron. Inputs come in via the dendrites in the form of action potentials (or the lack thereof). The inputs are summed in the cell body (soma) and, if the threshold voltage is reached, the cell produces an action potential as output that is delivered via the cell's axon. (B) The original and simplest version of an artificial neuron model, the McCulloch-Pitts neuron. Similar to the biological neuron, inputs (x_i) and outputs are binary (although we now know this to be an oversimplified view of biological neurons). Inputs are summed and the result passed to a thresholding function; if the threshold is met, an output of 1 is produced, and otherwise the output is 0. (C) A perceptron, a more sophisticated revision of the McCulloch-Pitts neuron that has an important place in modern artificial neural networks. The concept of trainable weights (w_i; mimicking biological potentiation at synapses) is introduced, and inputs are now multiplied by their corresponding weight before summation. In addition, in contemporary perceptron models, the threshold function can be replaced by any arbitrary function, called the "activation function." Popular activation functions like the hyperbolic tangent may still act largely like thresholding functions, but with the ability to deliver graded rather than strictly binary output values.

7 As originally conceived, "perceptron" referred to a more complex network of units that could be implemented in a physical machine to produce artificial vision, hence the name. However, the most salient feature that researchers latched onto was the structure of the neural units, and via synecdoche "perceptron" came to be the name of such a unit, so there is some degree of fuzziness around nomenclature and definitions. Here, we use the contemporary sense of "perceptron" to refer to the architecture of an artificial neural unit, rather than the original plan for the physical perceptron machine.

8 The concept of the perceptron is also somewhat looser than the McCulloch-Pitts neuron regarding whether inputs and outputs are constrained to be binary or can be continuously valued, and regarding what kind of thresholding or other function the summed inputs are passed through in order to create the output.
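For concreteness, the entire computation performed by one such unit fits in a few lines of code. In this minimal Python/NumPy sketch (all numeric values are arbitrary), swapping the hard threshold for a logistic activation makes the unit behave essentially like a logistic regression model:

import numpy as np

def perceptron(x, w, b, activation):
    # One artificial neuron: weighted sum of inputs, then an activation function.
    return activation(np.dot(w, x) + b)

step = lambda z: (z > 0).astype(float)          # classic hard threshold
logistic = lambda z: 1.0 / (1.0 + np.exp(-z))   # graded output in (0, 1)

x = np.array([0.2, -1.0, 0.5])    # inputs
w = np.array([1.5, 0.3, -2.0])    # synaptic weights
b = 0.1                           # bias (equivalent to shifting the threshold)

print(perceptron(x, w, b, step))      # prints 0.0 or 1.0
print(perceptron(x, w, b, logistic))  # prints a value between 0 and 1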
In most important ways, the perceptron-like artificial neural units used in some DNNs today are not substantially different from the classical perceptrons discussed by Minsky and Papert (1969) in their seminal book some 50 years ago. Yet the original perceptron architectures retained many of the McCulloch-Pitts neurons' limitations and still had significant constraints on the classes of problems they could solve. The key developments that distinguish the more powerful deep learning techniques of today from the toy models of the past are (1) improved methods for establishing what the proper synaptic weights should be for a given dataset/problem, i.e., training the neural network, and (2) new and ultimately better ways of digitally connecting groups of artificial neurons together into more complex structures, i.e., improved ANN architectures 9.

9 Not to mention the ∼billion-fold increase in computational power (IBM 704 at 12,000 flops versus a recent desktop GPU at ∼11 teraflops, for an NVIDIA GeForce GTX 1080 Ti) that helps to make such sophisticated architectures viable.

Training Algorithms and Neural Network Architectures

The earliest ANN architectures were very simple indeed; either a single artificial neuron or, in the next major architectural advance after that, a layer of such units. In this latter (still very simple) architecture, the units are fully connected, meaning that each unit receives a copy of each possible input value (see Figure 2A). Note that in this figure, as in many neural network diagrams, inputs and outputs are represented as "layers" of a sort, but there is only one true layer of computational units 10. If the ANN is meant to calculate a classification problem (a common application), the outputs are typically assumed to each correspond to one of the possible classes, and are interpreted in a winner-take-all fashion (i.e., for a given set of input data, whichever output value is highest is interpreted as the network's prediction of the class that the input data belong to). Although the transition from single-neuron to single-layer architectures laid a critical foundation for later work, single layers of perceptrons were soon shown not to be terribly useful as artificial intelligence agents, no matter what their synaptic weights were or how those weights were determined. As Minsky and Papert demonstrated in Perceptrons (1969), it is mathematically impossible for any single-layer perceptron network - no matter how many units are in it - to perform certain fundamental computational operations 11.

10 In a biological neural network, one might relate these to a layer of dendrites, a layer of cell bodies, and a layer of axons, but all of those together would comprise a single layer of neurons.

11 Put more formally, single-layer networks cannot solve problems that are not linearly separable, which famously includes the relatively simple XOR function. (For binary inputs A and B, respond "yes" if A is true and B is false, or if A is false and B is true, but respond "no" if A and B have the same value).
This revelation may not seem surprising in retrospect; after all, a single layer of neurons, all receiving the same inputs, is not a very viable architecture for a biological neural network either. Still, it was enough to significantly dampen enthusiasm for ANN research for over a decade. Although adding another layer of computational units (known as a hidden layer) would allow the network to maintain an intermediate representation of the input and enable more complex operations 12, the algorithms available for training single-layer perceptron networks could not be readily extended to multi-layer architectures. In the 1980s, however, interest was reignited with the popular (re-)discovery of the backpropagation algorithm (or simply backprop, to its friends). This algorithm was known and even applied to ANNs previously (Linnainmaa, 1970; Werbos, 1974), but it did not reach mainstream awareness until the publication of Rumelhart and colleagues' (Rumelhart et al., 1986a,b) seminal formulations of it. Backprop proved to be a highly robust method for training ANNs across many applications, and is still the dominant training algorithm in use today. The main principle behind backprop is to take any errors made by the network during training and propagate responsibility for them from the output layer (where the error is assessed, by comparing the network's decision to the known correct decision 13) backwards through the network towards the input layer, penalizing the synaptic weights most responsible for the error along the way. It is analogous to the human behavior encapsulated by the vernacular phrase, "Shit rolls downhill." For example, imagine that a CEO - the final decision-maker in her company's chain of command - makes a decision that loses the company money. She turns to her immediate inferiors and doles out punishment to them proportional to how influential they were in guiding that decision, and decides to trust those influential individuals less in the future. In turn, each of those upper-level managers passes along the punishment and distrust they have received to their immediate inferiors, again proportional to their influence on the upper managers' actions, and so on down the corporate hierarchy. In this way, one hopes that the next time a similar decision is faced, the shift in influences and communication channels throughout the hierarchy will produce a better outcome.

12 Including XOR and many others.

13 As backprop is performed by comparing the performance of a network on a training dataset against an already-known ground truth for that dataset, it is thus considered a form of supervised learning, in ML parlance.

FIGURE 2 | Examples of artificial neural network architectures. (A) A simple fully connected multi-layer perceptron model with 12 input values, a middle layer comprising three perceptrons, and an output layer with two perceptrons. Line lightness is used to represent synaptic weight strength. (B) An example of a convolutional neural network layer that might be used to analyze 2-D input. Here, the layer looks less like a set of artificial neurons and more like a digital filter used in image processing. Two-dimensional input is convolved with a 2-D filter to yield a 2-D output, sized similarly to the input. During training, it is the values in the convolutional filter that get adjusted. Square lightness represents the numeric values in the cells of each 2-D matrix. (C) An example of a simple recurrent neural network. There are many types of recurrent neural network structures with varying degrees of complexity, but all share the property that recurrent units' output gets passed back into them (represented here by curved arrows), giving them some form of "memory" for previous input values. (D) An example of a complete neural network architecture that might be used to analyze 3-D input such as MRI data for a two-class classification problem. In this simple example, 12 input values in a 2 × 2 × 3 array are first passed through a 2 × 2 × 2 convolutional filter, yielding another 2 × 2 × 3 array as output. This is then passed through a "flattening" layer to convert it to a 12 × 1 vector, which then passes through a 3-unit dense layer to a 2-unit output layer (as shown in panel A).
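A small worked example may also help make backprop concrete. The Python/NumPy sketch below (layer sizes, learning rate, and iteration count are arbitrary illustrative choices) trains a network with one hidden layer to solve the XOR problem that, as noted above, no single-layer perceptron network can solve:

import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units feeding one output unit.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

lr = 0.5
for epoch in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)    # network output

    # Backward pass: propagate "blame" for the error from the output layer
    # back toward the input layer, adjusting the most responsible weights.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(3).ravel())   # should approach [0, 1, 1, 0]; exact values vary with the seed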
The advent of effective backprop-based training for ANNs reignited interest in them for a time, and backprop-trained ANNs were found to perform admirably in a number of ML domains. Still, before long, interest waned again, as neural nets with many hidden layers were found to present mathematical difficulties for backpropagation algorithms, and complex networks also took a long time to train on the CPUs of the era. Concurrently, the 1990s also saw the development of promising alternative ML algorithms, most notably the modern incarnation of support vector machines (SVMs; Boser et al., 1992; Cortes and Vapnik, 1995). SVMs were easier to work with than ANNs and performed nearly equivalently (or even better) in many problem domains of the time. Thus, when traditional MVPA techniques arose in neuroimaging in the 2000s, it is unsurprising that SVMs and other similarly robust linear classification algorithms, well-suited to the mid-sized datasets of the time, dominated within that emerging field.

The Deep Learning Renaissance

Research interest in ANNs experienced another upswing, which has continued to the present, beginning around 2006. This rebirth happened for several reasons, including: (1) solutions to some of the technical and mathematical problems that had plagued networks with complex, many-layered architectures (Hinton et al., 2006); (2) methods for training ANNs on desktop workstations using the GPU instead of the CPU, producing speed improvements of up to ∼70x (Raina et al., 2009); (3) the advent of the so-called "Big Data" era, which provided the larger datasets required to adequately train more complex neural architectures; and (4) the re-branding of neural net research as "Deep Learning," which, despite being more public relations than true substance, still likely helped ignite new interest in a field formerly seen as relatively tired and unpopular. Since this Renaissance began, there have naturally been several key architectural and methodological developments 14. However, these newer architectures are still trained and used similarly to the older, simpler networks described above, and the variations are not too difficult to comprehend once one understands the fundamental concepts and terminology behind ANNs.
During the early days of this revival, deep learning research had a number of notable successes, including advances in speech recognition, natural language processing, computer vision, financial fraud detection, and more. Large technology companies, who had access to Big Data and financial motivations for finding better ways to process it, also had their interest piqued. Thus they began to invest in deep learning research themselves, including developing improved software tools (for example, the TensorFlow toolbox, developed primarily at Google, and PyTorch, developed primarily at Facebook). These tools typically rely on lower-level driver and software library support for GPU-based computation, most notably NVIDIA's CUDA libraries for general GPU-accelerated computing and their cuDNN framework, built atop CUDA, specifically for DNN applications 15.

15 However, alternatives for other GPU architectures do exist, such as the CoreML library used in Apple devices, which use primarily non-NVIDIA GPUs. At the time of writing, Apple had just recently released the first Macintosh computers powered by their new Apple Silicon hardware, and developers were in the process of porting TensorFlow and other machine learning tools to the new architecture. Thus, by the time of publication or shortly thereafter, accelerated deep learning via TensorFlow, our own toolbox, and other tools may be available on Apple Silicon devices as well.

Although the use of such tools has exploded in the technology sphere and in basic computer science research, adoption in other areas, such as cognitive neuroscience, has been slower. This lag can partly be attributed to fundamental limitations and difficulties of DNN-based data analysis (e.g., potential for overfitting), but another large factor is the lack of higher-level software tools that make it convenient for neuroscience researchers to implement dMVPA without needing to write large amounts of their own code. And, when better software tools exist, it will be more efficient to explore the space of possibilities and limitations of dMVPA, and thus establish its long-term viability as one of the arrows in the working neuroscientist's quiver. In short, we argue that neuroscience and related fields need more software tools that match, or exceed, the versatility and ease-of-use of existing traditional MVPA tools. This is the goal of the DeLINEATE toolbox (Deep Learning In Neuroimaging: Exploration, Analysis, Tools, and Education), which we introduce below.

Pros, Cons, and Caveats of dMVPA

Pro: Potentially Greater Suitability for Complex, Many-Featured Datasets

As discussed earlier, one great promise of dMVPA is the potential to unearth more fine-grained patterns in neuroscience data than the simpler (and commonly linear) techniques of traditional MVPA. However, a fundamental principle of statistics is that more powerful (i.e., more complex) models require more parameters 16, and reliably estimating more parameters requires larger input datasets; hence the common association between deep learning and Big Data. Unlike, say, the Google Images team, most neuroscientists are unfortunately not swimming in training data for sophisticated machine learning; neuroscience data are frequently "Big," but more from features 17 than from number of examples 18. Of course, in deep learning (and most statistical analyses), the inverse situation is usually more desirable: A relatively large ratio of examples to features.

16 "Parameters" used in the statistical sense, i.e., numeric values that need to be estimated.

17 In the machine learning sense; for example, the number of voxels in a trial of fMRI data or the number of (electrodes × timepoints) in a trial of EEG data.

18 Also in the machine learning sense, i.e., instances of a set of features that can be assigned a category label. In psychology and neuroscience, such "examples" are generally called "trials" (e.g., of a cognitive task), although in some cases examples may correspond to experimental subjects - an even more limited resource.
Potential solutions to the too-many-features problem include finding ways to intelligently select (feature selection) or algorithmically condense 19 the feature set. However, beyond those options, most traditional MVPA techniques do not offer many choices for constraining the feature set, and in particular lack any built-in ability to take the structure of the input data into account. This is unfortunate because neuroscience data 20 tend to be highly structured (temporally and spatially) in ways that could be informative for MVPA 21. DNNs, on the other hand, have numerous potential architectural configurations that can be optimized to take advantage of known structure in the input data. Most notably, certain types of ANN layers (e.g., convolutional layers) can handle multi-dimensional input data, whereas traditional MVPA's linear classifiers typically just vectorize multi-dimensional inputs. Thus, dMVPA makes it possible to design customized classifiers that are more suited to a particular shape/dimensionality of input data. For example, in one deep-learning-based analytic technique we recently developed (Williams et al., 2020; discussed further below), traditional MVPA was unable to learn to compare two different trials of neuroimaging data and predict whether they were drawn from the same class, but a dMVPA with properly structured input data was able to learn that comparison operation easily.

19 For example, in techniques like elastic nets (Zou and Hastie, 2005) or SMLR, which use regularization or similar tricks to reduce the number of predictor features.

20 Again, our discussion focuses on neuroscience data, but these techniques, lessons, and software tools can readily be translated to related (or even not-so-related) research fields with similarly-structured datasets and classification problems.

21 For example, it may be useful to condense several spatially adjacent EEG electrodes with similar waveforms into a single data channel. Or, if trying to classify whether a subject is viewing faces or houses, to construct a feature detector that is sensitive to a certain voltage peak (say, the N170; Bentin et al., 1996) but time-invariant within a ∼20 ms window, to account for trial-to-trial latency variability.

Caveat

Having more architectural options for structuring and condensing complex input data also leads to a paradox of choice; how can one possibly decide on the best DNN architecture for a given dataset? Unfortunately, dMVPA is still a young field, and we are still working on establishing good heuristics for network architectures to handle many-featured datasets. Also unfortunately, this is not one of those methodological choices where differences between options can be chalked up to rounding error; the wrong dMVPA architecture may completely fail to perform above chance in situations where a superior architecture classifies the data fairly accurately. To the extent possible, we have discussed some basic guiding principles of trying to design a good dMVPA architecture in the section "Practical Advice" below, and in the Supplementary Material.
Con: Many Potential Types of Analysis Architecture; Many of These Carry an Increased Danger of Overfitting

Most conventional MVPA techniques (SVM, SMLR, etc.) have a relatively small number of hyperparameters 22 to adjust, and those hyperparameters can often either be left at default values or automatically estimated by the algorithm without serious adverse effects on performance. In contrast, the number of possible hyperparameters to adjust in dMVPA is effectively infinite. These hyperparameters include the number of layers in the network, the number of units in each layer, the type of each layer 23, and any number of additional layer-type-specific hyperparameters that can be separately specified for each layer. Thus, even choosing a starting point for how to construct a dMVPA model can be daunting for inexperienced researchers (and experienced ones, too). Furthermore, thanks to the No Free Lunch (NFL) theorem(s) (Wolpert and Macready, 1997; Shalev-Shwartz and Ben-David, 2014), we know that no estimation- or optimization-based analysis technique will be optimal for every dataset or problem domain, and therefore it is impossible to know a priori whether a given analysis technique will be optimal for a particular problem. Put another way, if we knew in advance that a particular analysis technique were optimal for our problem, then that technique would necessarily be exquisitely tailored to the problem - which means we would essentially already know the structure of the data perfectly, which obviates the need to conduct the analysis.

Compounding the problem, there is no real upper limit, other than available computing power, to how complex dMVPA models can be allowed to grow 24. For the current status quo of neuroscience data, most possible dMVPA models would be far too complex; many would even contain more parameters to estimate than there are data points in the input set! It would be inaccurate to say these models would fit the data poorly; rather, they would fit the training data too well. It is not uncommon to see a complex dMVPA model effectively memorize its training data, producing perfect classification of the training dataset but extremely poor generalization to a test dataset - the classic problem of overfitting.

22 This term is less commonly used in the MVPA literature than the ANN literature, but it refers essentially to a parameter of the algorithm set by the user before running the analysis (for example, the amount of regularization), to distinguish those values from plain (non-hyper) parameters, which are the values estimated by the statistical process or model-fitting algorithm.

23 A full rundown of layer types is beyond this article's scope and better-suited to a general introduction to deep learning, but common types include perceptron-style "dense" layers, "convolutional" layers, "recurrent" layers, and supporting utility layers that calculate simpler mathematical functions; discussed in more detail below and in the Supplementary Material.

24 Complexity could be defined many ways, but for now, we will use it mainly to refer to how many parameters (not hyperparameters) need to be estimated for a given model.
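This failure mode is easy to demonstrate for oneself. In the sketch below (written with Keras on synthetic data; all sizes are arbitrary), a network with roughly a quarter-million parameters is trained on 100 examples of pure noise with random labels. Because there is no true signal to learn, its near-perfect training accuracy is pure memorization, and accuracy on held-out test data stays near chance:

import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
# 100 examples of 1000 pure-noise features, with random binary labels.
X_train, y_train = rng.normal(size=(100, 1000)), rng.integers(0, 2, 100)
X_test, y_test = rng.normal(size=(100, 1000)), rng.integers(0, 2, 100)

# A deliberately oversized network: ~257,000 parameters for 100 examples.
model = keras.Sequential([
    keras.layers.Dense(256, activation="relu", input_shape=(1000,)),
    keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=200, verbose=0)

print("train accuracy:", model.evaluate(X_train, y_train, verbose=0)[1])  # approaches 1.0
print("test accuracy: ", model.evaluate(X_test, y_test, verbose=0)[1])    # hovers near 0.5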
Caveat

Much as SVMs provide a fairly robust method for classification across a surprisingly wide range of data types and problem domains (though they are rarely truly optimal due to NFL), there is some hope that such "pretty good, most of the time" dMVPA architectures might exist as well. Again, the field is young, but during development of the DeLINEATE toolbox, we have often found that relatively simple dMVPA models, consisting of just 1-2 convolutional layers and 1-2 dense layers 25, perform comparably to (or better than) the industry workhorse of SVMs. A bit of customization is often required to fit the size and shape of the input dataset, and it can be useful to test out different variations of dMVPA architecture on one portion of the dataset before applying the best-performing architecture to the remaining held-out data, but a satisfactory architecture is typically not too difficult to find without excessive trial-and-error. We have found that after some experience using dMVPA, one begins to develop fairly good intuitions about what kinds of architecture might be best suited to a specific problem, but it is still far from an exact science. As the field progresses, we hope that it will converge on more heuristics for designing dMVPA architectures that perform as robustly as SVMs across datasets, while still retaining the flexibility and other advantages of dMVPA.

Still, for many practical applications, it is less important to identify an optimal model than it is to determine if the data can be reliably classified above chance (Hebart and Baker, 2018). With properly implemented cross-validation, this can often be achieved by a wide variety of architectures (assuming the data do contain enough meaningful signal for reliable decoding), with the accuracy difference between sets of hyperparameters being only a few percentage points. Conversely, if the input data contain only noise with respect to the classification problem, any sane architecture should perform at chance on the test set. Thus, while some trial and error may be necessary before deciding that data cannot be classified, exhaustive model search is seldom required. When possible, it is often helpful to conduct a traditional MVPA to get a ballpark estimate of how a reasonably well-configured dMVPA should be expected to perform. Further discussion of these issues, along with a basic walkthrough of how one might begin to explore the hyperparameter space for analyzing a sample dataset, can be found in the Supplementary Material.

25 Technically, these "deep" MVPA networks would not be very deep in terms of how many layers they contain. Still, a fair portion of "deep" learning these days does not use particularly complex network structures; the term now seems to refer more to the contemporary era of ANN-based data analysis than any particular network structure.

Pro: Intrinsically Multiclass Classification

One advantage of dMVPA that has historically received relatively little attention in the literature is that it is straightforward to design a "true" multiclass classifier, whereas most traditional MVPA methods are intrinsically binary. Thus, in traditional MVPA, multiclass decisions must generally be built from a combination of binary classifiers 26. While there is nothing methodologically wrong per se with building multiclass decisions from binary ones, the implications are slightly different than those of a true multi-way decision, which should be taken into account when interpreting results.
Furthermore, in some commonly used MVPA tools (e.g., PyMVPA), the multiclass decision procedure is not always transparent to the end user, which can be a point of confusion. Conversely, dMVPA classifiers are able to consider all classification options simultaneously; as a consequence, it is also trivially easy to obtain meaningful prediction scores across all classes for each example in the testing set, which can then be used in analyses that go beyond simple winner-take-all accuracy measures.

26 Typically, if we have classes ABC, the multiclass decision would be made either by training up classifiers "A vs not-A," "B vs not-B," and "C vs not-C," or by training up classifiers "A vs B," "A vs C," and "B vs C," and then summing up the scores in favor of each category across classifiers in order to obtain an overall score for that category.

Pro/Con: Performance

Performance, in the sense of speed, can be either an advantage or a disadvantage of dMVPA. Although dMVPA network architectures can vary so widely that it is difficult to generalize, prima facie dMVPA should typically run slower than traditional MVPA, because the calculations involved in training a dMVPA network are more complex. However, for larger datasets (in terms of numbers of features and/or examples), the performance of traditional MVPA techniques may scale more poorly than dMVPA, depending on the particular hardware and the particular software implementations involved. (See "Benchmarks" below, Table 1 and the Supplementary Material for details). Thus, beyond a certain dataset size, dMVPA may be the more attractive choice. Also, because the network architecture of dMVPA can be adjusted, researchers have more options; e.g., whether to employ a simpler network that may not achieve maximum accuracy but runs quickly, versus a more complex network that runs slower.

Caveat

As alluded to earlier, dMVPA's computational costs can be somewhat offset by parallelization, which is better supported by deep-learning software tools than most traditional MVPA tools. This is true even if parallelizing across CPUs/cores, but especially true if using the computer's GPU. Results vary widely depending on dataset size, network architecture, and the specific hardware involved, but users might roughly anticipate anywhere from a 5x-100x or more speedup for running dMVPA on a GPU versus a CPU. On one hand, these benefits make dMVPA a more competitive option, speed-wise. On the other hand, GPU-accelerated dMVPA does require more specialized hardware and more human effort setting up the relevant drivers and software packages. While we have striven in our toolbox and documentation to keep this process as painless as possible, it is still more effort than is required to run non-GPU-accelerated analyses; whether that effort is well-spent will heavily depend on individual users and what tasks they are trying to accomplish.

Pro: Flexibility of Applications

Although our focus has been on dMVPA, we should note that modern neural networks have an ever-increasing number of uses beyond simple classification.
For example, one currently popular strategy is to train a model for categorization within some domain (e.g., the contents of a photograph) and then interrogate the model's intermediate layers, in an attempt to understand what strategy the model is using (Zeiler and Fergus, 2014). Autoencoder-style architectures allow for, e.g., unsupervised learning of feature structure (Xie et al., 2016), feature-sharpening for degraded inputs (Lore et al., 2017), and principled fusion of multimodal data (Ngiam et al., 2011). Deep networks can also be used to implement classification techniques that are not well-suited to traditional MVPA - for example, "transfer learning," in which a network is initially trained on one dataset, and then refined by training it further on a different dataset. As another example, we have recently explored using deep networks to create "smarter" similarity/distance metrics tailored to particular datasets/applications, unlike traditional formula-based metrics (e.g., Pearson correlation, Euclidean distance), which do not afford such flexibility (Williams et al., 2020). The DeLINEATE toolbox can, with varying degrees of effort, support many of these novel or specialized applications.
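As one illustration, a minimal transfer-learning workflow in Keras might look like the sketch below. (The layer sizes and class counts are invented for illustration, and the "pretrained" model here is an untrained stand-in for a network that, in practice, would already have been trained on a first dataset or loaded from disk.)

from tensorflow import keras

# Stand-in for a network already trained on a first, five-class dataset.
pretrained = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(1000,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(5, activation="softmax"),   # old output head
])

# Transfer learning: freeze the learned feature-extraction layers...
for layer in pretrained.layers[:-1]:
    layer.trainable = False

# ...and graft on a fresh output layer for a new three-class problem.
transfer = keras.Sequential(pretrained.layers[:-1] +
                            [keras.layers.Dense(3, activation="softmax")])
transfer.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# transfer.fit(X_new, y_new, ...)   # refine on the second dataset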
Con: Field and Dependencies Are in Active Development

While the software tools for traditional MVPA will presumably keep receiving periodic updates, the field overall is fairly mature and not changing particularly rapidly. However, deep learning and dMVPA are newer; as such, the techniques and their underlying software tools are continually being updated. This means that documentation can rapidly go out of date, and incompatibilities can arise easily if developers are not careful. We have aspired to make our own toolbox as robust as possible to the changing software landscape, but it is still worth being aware of. Of course, there are mitigating strategies: Users can find one version that works and refuse to update anything, but this deprives them of future enhancements. Alternatively, they can continually update, but this makes it harder to exactly replicate earlier work run with previous software versions. If only Python toolboxes (our DeLINEATE toolbox, and the Keras/PyMVPA backends it relies on) are updated, Python's "virtual environment" feature can be helpful for maintaining different software setups, each in their own containers. But, if later updates require newer hardware drivers, and users wish to maintain backward compatibility with their earlier work, they may wish to do what our lab has done: Purchase several small hard drives for each machine, set up a fresh operating system for each new major driver version, and simply reboot from a different boot drive when one wishes to work with current vs. legacy versions of the software.

A Brief Introduction to Network Architecture

In an abstract sense, all feedforward 27 neural networks may be viewed as a collection of mathematical operations to be applied in sequence to an input of some fixed size, along with rules for updating the parameters of those operations during training. In a classic perceptron, the core operations are multiplication (input data times weight values), summation, and then activation (a thresholding operation, traditionally). In a multi-layer perceptron network (Figure 2A), this complete multiplication-summation-activation sequence is repeated, with each layer's outputs becoming the next layer's inputs. A typical, slightly simplified mental model for such networks treats those multiplication-summation-activation operations as all occurring within a self-contained unit or node, like in a biological neuron; a number of such units in parallel constitutes a layer of the network, and the main free parameter chosen by the designer of the network is the number of units in each layer. However, unlike a biological neuron, in an ANN this set of operations is not immutable - one might opt to omit activation, invert values after every step, or do any other sort of mathematical transformation, at any step of the sequence. One could also adopt a different mental framework in which every individual operation is a layer of the network, such that each layer of a perceptron network expands into three sequential computational layers: a multiplication layer, a summation layer, and an activation layer. In Keras, the Python framework upon which the DeLINEATE toolbox's deep-learning functionality rests, it is possible to work with either of these conceptualizations - e.g., there are individual layer types that can perform thresholding/activation, but the activation operation can also be specified as an argument of other layer types, with the understanding that activation is applied last, after that layer's primary operation. In lay terms, when sufficiently tortured and beaten into submission, contemporary deep learning frameworks can be mangled into performing virtually any kind of mathematical operation or transformation on the input data.

A full discussion of all the possibilities could fill several books, and is thus beyond the introductory scope of this paper. However, there are a few broadly useful kinds of operation/layer that are particularly worth understanding; novices to deep learning should focus on understanding the basic gist of these fundamental tropes before getting lost in the details. Here, they are described briefly in broad categories; Keras has several subtypes of each depending on details of the desired implementation.

27 "Feedforward" meaning that all outputs from earlier (closer to the input) layers are fed "forward" into later (closer to the output) layers; outputs are never fed back into earlier layers. Feedforward networks are generally easier to work with and design. Our toolbox currently supports only networks with a broadly feedforward design (implemented via the "Sequential" model class in Keras) when using the graphical interface or text-based job files; however, when using it as a collection of Python functions, other network types are possible. One exception is recurrent layers, which feed their output back into themselves; thus networks containing recurrent layers are not strictly feedforward. However, as implemented in our toolbox and the Keras backend we rely on, the recurrency can be viewed as something that recurrent layers handle within themselves; the user does not have to think about this recurrency in terms of their network architecture. From the user's point of view, the layers of the network still follow a feedforward/sequential structure, even if the individual units within some layers have recurrency built-in.

Classic

Called "Dense" layers in Keras, these are layers made of perceptrons (Figure 2A). They compute weighted sums and apply an activation function.
Varying the number of computational units in such a layer allows one to increase (e.g., consider more potential weightings) or decrease (e.g., prune less informative features) the dimensionality of the data as it passes through the layer. By default, these layers are fully connected, meaning that all outputs from one layer are used as inputs for each computational unit in the next layer of the network. As noted earlier, a neural network made entirely of dense layers is sometimes called a "multi-layer perceptron" network architecture.

Convolutional

Convolutional layers (Figure 2B) may be conceptualized as collections of filters that are swept across (in mathematical terms, convolved with) their input. When used to process 2-D photographic data, their function is often likened to visual neurons, which take input from a spatially restricted receptive field, extract some feature if present, and pass along the result to the next layer of the visual processing hierarchy. For readers familiar with digital image processing, they are essentially like other kinds of digital filters (e.g., a blur filter, an edge detector), except that convolutional layers can work with any dimensionality of data (not just 2-D images) and their parameters are learned over the course of training, rather than being predefined. The combination of filter shape and input data structure will determine what kinds of feature may be selected for and passed along as output. For example, if each example of input data is a 32 × 1000 array of EEG voltages (e.g., 2 seconds of 32-channel data sampled at 500 Hz), a set of 1 × 10 filters would be capable of detecting high-frequency patterns within individual channels (in this example, patterns that fit inside a 20 ms time window), but insensitive to lower-frequency or purely spatial patterns. Conversely, a set of 10 × 1 filters could detect patterns distributed across multiple channels, but only those that occur instantaneously. However, one could instead employ, for example, a set of 8 × 20 filters, which would be capable of detecting patterns spread across up to eight adjacent channels over a 40 ms time window. Choices about data structure are consequently more important for this class of layers than for a multi-layer perceptron; the input examples would contain identical information if flattened from 32 channels × 1000 timepoints to a single 1 × 32,000 vector, but the meaning of a 1 × 10 filter bank's outputs would be very different.
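The Keras sketch below makes the preceding example concrete (the number of filters per bank is arbitrary): each of the three filter shapes just described is applied to a 32 × 1000 EEG-style input, and the resulting output shapes reflect what each filter bank can and cannot "see."

from tensorflow import keras

# Each input example: 32 channels x 1000 timepoints, plus a trailing
# singleton axis that Conv2D expects (analogous to an image's color channel).
inputs = keras.Input(shape=(32, 1000, 1))

within_channel = keras.layers.Conv2D(8, kernel_size=(1, 10))(inputs)   # temporal patterns within single channels
across_channels = keras.layers.Conv2D(8, kernel_size=(10, 1))(inputs)  # instantaneous spatial patterns
spatiotemporal = keras.layers.Conv2D(8, kernel_size=(8, 20))(inputs)   # 8 adjacent channels x 40 ms

for name, tensor in [("1 x 10", within_channel),
                     ("10 x 1", across_channels),
                     ("8 x 20", spatiotemporal)]:
    print(name, tensor.shape)   # e.g., (None, 32, 991, 8) for the 1 x 10 filters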
Recurrent

Recurrent layers (Figure 2C) are named for their property of having their outputs fed back into themselves as inputs. By maintaining an internal state determined by previous inputs, recurrent units develop a form of memory for sequential data. For example, a 1 × 10 vector input to a classic dense unit would be combined to a single value in only two steps - multiplying each element of the vector by its weight and then summing the results. If the same vector were fed into a recurrent unit (typically called a cell), the first element would be handled in isolation, but evaluation of the second element would include the output of the cell's operation on the first element. The result of this would, in turn, update the unit's state to influence its response to the third element, and so on until each element of the input is consumed. Recurrent networks are frequently used to process natural language data (both audio and text) and in general are considered good choices for timeseries data. In our own work, we have not observed any significant benefit over convolutional layers when working with human neuroscience data, and have found recurrent-based networks to take longer to train than convolutional-based networks; however, these findings are likely highly dependent on details of the dataset and research question. As alluded to earlier, for common types of recurrent cells, the recurrency is handled within the cell as a form of internal "memory" that is not visible to the rest of the network, so network architectures using recurrent layers can still be considered broadly "sequential" or feedforward, and are thus supported by our toolbox.

Supporting

This is a broad category of operations that, for various reasons, are generally thought of as secondary or historically baked-in to more interesting operations. In Keras, this includes activation layers, various purely utilitarian data-reshaping or simple mathematical operations, dropout (an operation in which some percentage of a layer's units are ignored; thought to mitigate overfitting), etc. Some of these operations (e.g., activation functions) can be specified either as distinct layers or as parameters to a primary layer, whereas others (e.g., a layer that downsamples the output of the previous layer via averaging) can only be specified as distinct layers.

Practical Advice

The following is a combination of our experience and advice we have received from other colleagues. We hope it is helpful as a starting point, but readers should not feel overly constrained by it. While the modern leaders in image recognition involve dozens of layers (Szegedy et al., 2016), in our experience the aim of dMVPA can typically be accomplished with much smaller networks. When working with minimally processed fMRI/EEG/eye-tracking data, we have found that a good starting point often consists of 1-2 convolutional layers followed by 2-3 dense layers; based on preliminary results from that architecture, one could add or remove layers, adjust the layers' sizes, or tweak other hyperparameters. See Figure 2D for an example. For maximal effectiveness and interpretability, consideration should be given to the match between the shape of the per-example input data and the shape of the convolutional filters (e.g., should the filters look across EEG channels, or only within? If across, are channels arranged to be spatially adjacent in the data?). Leaky ReLU is usually our preferred activation function, and we have often found dropout values of ∼0.3 in dense layers to be beneficial. We have found Stochastic Gradient Descent (SGD) with the momentum parameter (classical or Nesterov) set to something on the order of 0.9 to be a generally successful optimizer, although the Adam optimizer (Kingma and Ba, 2014) also performs well in some situations 28.
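As a concrete (and purely illustrative) rendering of this advice, the Keras sketch below assembles such a starting-point architecture for the 32 × 1000 EEG-style input discussed earlier and a hypothetical three-class problem. All layer sizes are placeholders to be tuned, and the sketch is meant to show the shape of the recipe, not to replace a DeLINEATE job file:

from tensorflow import keras

model = keras.Sequential([
    # 1-2 convolutional layers, with filter shapes matched to the input structure.
    keras.layers.Conv2D(16, kernel_size=(1, 10), input_shape=(32, 1000, 1)),
    keras.layers.LeakyReLU(),
    keras.layers.MaxPooling2D(pool_size=(1, 4)),   # a "supporting" downsampling layer
    keras.layers.Conv2D(16, kernel_size=(8, 5)),
    keras.layers.LeakyReLU(),
    keras.layers.Flatten(),
    # 2-3 dense layers, with ~0.3 dropout in the dense layers.
    keras.layers.Dense(64), keras.layers.LeakyReLU(), keras.layers.Dropout(0.3),
    keras.layers.Dense(32), keras.layers.LeakyReLU(), keras.layers.Dropout(0.3),
    keras.layers.Dense(3, activation="softmax"),   # one unit per class
])
model.compile(
    optimizer=keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True),
    loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()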
A good analogy is that traditional MVPA is like a standard four-door sedan - a reasonably good choice for most people, most of the time - whereas deep learning covers all other vehicles, from racecars to garbage trucks to unicycles. Just as it is difficult to offer good advice on vehicles without knowing whether the intended use case is racing or garbage collection, it is rather difficult to offer detailed pointers on network architecture without knowing the details of a user's dataset and experimental design. However, in the Supplementary Material, we have included a walkthrough of how users might explore some architectural options for analyzing one dataset, as well as a table of some of the most frequently encountered options/hyperparameters, along with guidelines on the kinds of scenarios that might be appropriate for certain choices. New users are encouraged to experiment with everything and keep track of the results; soon, you will likely develop your own favorite architectures and hyperparameters. Do not be afraid to experiment broadly; dMVPA has some powerful advantages, but we are also in an exploratory phase for this kind of research, and designing a sufficiently performant dMVPA architecture can take significant trial and error. Of course, the extent to which that exploration might constitute p-hacking depends on your research aims; if that is a potential concern, you may want to design your analysis based on an independent dataset (e.g., one of the sample datasets included in our toolbox), or consider a split-half design in which one half of your data is used to explore analysis architectures and the other half is used for confirmatory purposes.

Overview of the DeLINEATE Toolbox
Now that we have covered some of the fundamentals of deep learning and dMVPA, and the combination of promises and challenges these methods entail, we turn our attention to the more practical elements of conducting dMVPA with our own DeLINEATE toolbox. One major purpose of the DeLINEATE toolbox is to enable rapid exploration of model architectures/hyperparameters while maintaining an accurate record of what was done and how it turned out. These are conflicting goals in common practice - a researcher attempting to iterate on an analysis is often tweaking a script or working directly with a command-line interpreter, perhaps in a Notebook-type environment (Grus, 2018), and discarding fruitless branches of exploration along the way. Maintaining an accurate record of each tweak and its results during such rapid prototyping is not easy, and can take more time and coding discipline than many of us have. Our solution to this problem is a processing pipeline in which a single JSON (JavaScript Object Notation) format²⁹ job configuration file fully specifies an analysis: the input data, how it will be divided for cross-validation and rescaled, the model architecture to be trained and evaluated, and the outputs to be saved (Figure 3A). The toolbox translates this JSON file into Python code to execute the specified analysis (or analyses), and saves all desired outputs into .tsv (tab-separated values) files with names that include a user-defined prefix linking them to the original JSON file. A copy of that original JSON file can also be saved alongside the other output, so that even if the original is subsequently overwritten during the exploration process, the "output" copy remains a pristine record of what was run to create a particular set of results.
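For illustration, a job file of this kind could also be generated programmatically. In the sketch below, the four top-level section names follow Figure 3A, but every field inside them is a hypothetical placeholder rather than the toolbox's actual schema; consult the included sample job files for real field names.

```python
# Sketch: programmatically writing a job configuration file.
# Top-level sections ("model", "data", "analysis", "output") follow Figure 3A;
# all inner fields are hypothetical placeholders, NOT DeLINEATE's real schema.
import json

job = {
    "data":     {"loader": "my_loader.py", "file": "subject01_eeg.npy"},    # hypothetical
    "model":    {"backend": "keras", "layers": ["conv", "dense", "dense"]}, # hypothetical
    "analysis": {"cross_validation": "single", "train": 0.7,
                 "validation": 0.15, "test": 0.15, "iterations": 20},       # hypothetical
    "output":   {"prefix": "exp01", "types": "all"},                        # hypothetical
}

with open("exp01_job.json", "w") as f:
    json.dump(job, f, indent=2)  # the file could then be passed to delineate.py
```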
A secondary goal was to facilitate comparison of dMVPA approaches to traditional MVPA while, as much as possible, maintaining parity in data handling. To this end, classic MVPA is also supported alongside the dMVPA that is our primary focus; this is currently implemented with a PyMVPA backend. Traditional MVPA uses the same JSON job file format as dMVPA, as well as similar output file formats, cross-validation/rescaling options, etc., making it a simple task to conduct parallel MVPA and dMVPA on the same data. Currently we support SVM (Support Vector Machine) and SMLR (Sparse Multinomial Logistic Regression) classifiers for traditional MVPA, although our framework is readily extensible to most other classifiers in the PyMVPA toolbox.

For a typical user, the primary entry point to the toolbox is delineate.py, a simple script that accepts one or more JSON-format configuration files as arguments, validates their contents, and uses them to create and run one or more analysis job(s). This allows users to run analyses without requiring them to write any code of their own. To further increase accessibility, we have recently developed a simple graphical user interface (GUI) that some find more approachable than a text editor (Figure 3B). GUI users can click through a collection of interactive menus to create properly formatted job configuration files, which can then be used as input to the main delineate.py script. The GUI can also auto-populate selections based on an existing job configuration file, for users who have a starting point (such as one of the included sample job files) they wish to modify for future analyses.

²⁹ JSON is a format that allows data structures to be written to plain-text files with human-readable syntax. Although not as intuitive as a graphical interface, editing JSON-formatted job files is certainly easier for beginners than writing their own Python code. There are also JSON modules available for many popular text editors and a handful of standalone JSON editing programs to make the task even easier.

FIGURE 3 | Ways that users can configure an analysis in the DeLINEATE toolbox. (A) Most users will likely configure analyses using a text-based JSON (JavaScript Object Notation) format job file. In this example, the file is open in a generic text-editor program, but JSON-format-specific editing software also exists. Each job has four main sections: "model," "data," "analysis," and "output," corresponding to the major object types in the toolbox. The file shown is configured to run 10 iterations of a PyMVPA-based SMLR analysis using a sample face-scene-object-viewing EEG dataset, using a randomly selected 95% of trials as training data and 5% as test data on each iteration. (B) A basic graphical user interface (GUI) that allows users to configure a job file without having to edit the text directly. The most frequently used options for several common analysis types are available (although editing the text file directly will always allow more flexibility than is possible to express in a GUI). The GUI also contains sections for data, analysis, model, and output, as well as buttons for loading in an existing job file and saving the settings configured in the dialog box to a new JSON file. The settings shown are configured to run 20 iterations of a Keras-based deep learning analysis, using 70% of trials as training data, 15% as validation data, and 15% as test data on each iteration.
For Python-proficient users who want more complex or flexible analysis options, the toolbox can also be used as a Python programming library, and users can write their own code instead of creating JSON files. JSON functionality and code-library functionality can also be mixed and matched (e.g., JSON files can be used to create a template analysis, which can then be tweaked and iterated upon with custom code). For users who wish to write their own Python code, as well as JSON users who simply want some familiarity with the toolbox's underlying functionality, we next present a brief overview of the code structure; more detail is available in the toolbox documentation.

DeLINEATE Toolbox Structure
The DeLINEATE toolbox is an object-oriented collection of Python modules, each responsible for a different aspect of the (d)MVPA process. It comprises five main object classes and a small number of supporting files that contain utility functions or facilitate batch analysis. Each main class is housed in a .py file named for that class. In typical usage, the toolbox follows a minimum-import philosophy; to use it as a code library, one simply needs to navigate to its main directory and directly import the desired class file(s). The primary classes are:
(1) DTJob, responsible for parsing JSON files that define DeLINEATE jobs and passing the appropriate information to constructors for the other object types. In typical usage, a DTJob is responsible for creating one of each other object type and then triggering the DTAnalysis object to actually run the analysis. However, users can also eschew DTJob entirely if they prefer to instantiate the other objects manually in their own Python code.
(2) DTAnalysis, a parent class that contains one instance each of DTModel, DTData, and DTOutput; it is responsible for coordinating the operations of those other objects. This includes dividing data into training/validation/testing sets, iterating through portions of the data when desired (e.g., to loop through individual subjects), and initiating the model training/testing procedures.
(3) DTModel, responsible for constructing the model in the appropriate machine-learning backend (currently, either Keras or PyMVPA). The "model" in this sense refers either to the artificial neural network (Keras) or to an object representing a simpler classifier, e.g., a support vector machine with a linear kernel and parameter C = 1 (PyMVPA).
(4) DTData, responsible for loading the dataset from a data file, storing it, and performing certain operations on it (such as scaling/normalization or slicing it into smaller training, validation, and/or test subsets).
(5) DTOutput, responsible for writing analysis results to output files.
The four main sections of a JSON-format job file are the analysis, model, data, and output sections, which map directly onto the corresponding Python classes; each section contains the parameters necessary to instantiate an object of the appropriate class³⁰. Another (purely optional) class, DTGui, implements the aforementioned GUI.

³⁰ Although a non-Python-savvy user does not need to know these implementation details, the parity between job file sections and Python classes makes it easy for more experienced coders to switch back and forth between job files and their own Python scripts. As noted above, it is also possible to mix and match the two approaches.
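As a rough illustration of the code-library usage pattern, a script might look like the sketch below. The class name DTJob and the one-class-per-same-named-file layout are as described above, but the install path, constructor argument, and run() method are assumptions for illustration; consult the toolbox documentation for the actual API before copying this.

```python
# Sketch of using the toolbox as a code library. Class name DTJob is real;
# the path, constructor signature, and run() method are assumptions.
import sys
sys.path.append("/path/to/delineate")  # hypothetical location of the toolbox

from DTJob import DTJob  # assumed import style: each class in a same-named .py file

job = DTJob("exp01_job.json")  # hypothetical constructor: parse a JSON job file
job.run()                      # hypothetical method: build DTAnalysis/DTModel/
                               # DTData/DTOutput objects and run the analysis
```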
Model Types and Backends
At present, the DeLINEATE toolbox has been used in-house for approximately two years to conduct analyses across a number of studies. It is a high-level toolbox with a flexible, extensible architecture that potentially allows it to sit atop multiple underlying machine-learning libraries. Currently, we support a subset of functionality for two backends: Keras (Chollet, 2015) for dMVPA and PyMVPA (Hanke et al., 2009) for traditional MVPA. Given our heavy focus on providing a flexible architecture, it is relatively easy to add support for additional backends in the future, as well as to broaden support for features of Keras and PyMVPA, enable new data types to be imported, etc. The relative prioritization of such extensions will be guided by user demand.

Cross-Validation
We currently support two approaches to cross-validation. The first is a "universal" approach (specified in configuration files with the name "single") in which all data are treated as belonging to a single pool, which is randomly divided into training/validation/test sets according to percentages specified in the configuration file. The second divides the data according to some attribute of the samples³¹ and iterates through each value of this attribute, dividing the data within each iteration into training/validation/test sets (specified in configuration files as "loop_over_sa"). Regardless of which scheme is used, because classification performance can be influenced by a model's initial conditions³², it is common practice to run multiple complete cross-validation iterations in order to ensure a stable estimate of the architecture's performance. With properly configured input data (see below), these two cross-validation schemes can cover most common MVPA use cases; however, additional schemes can be added in the future according to demand.

³¹ A "sample attribute," if you will, which is the terminology used by other MVPA toolboxes for a tag or property associated with each data sample/example; for instance, a subject ID or session ID.

³² Especially for dMVPA; for a given architecture, a classifier might sometimes perform well and sometimes at chance depending on the random values assigned to weights at the beginning of training, which is generally a sign that the architecture needs adjusting. Other classification techniques, such as SVMs, can be solved deterministically; as such, they may or may not benefit from multiple cross-validation iterations, depending on dataset and cross-validation scheme.

Rescaling
Although some MVPA methods are invariant to the scaling of the input data, others, including many dMVPA applications, require data to be on a certain scale for good classification. The issue is slightly complicated by the need to prevent properties of the test data from influencing training. We support several methods for rescaling data that avoid this issue by calculating the necessary parameters solely on the training data, and then using those parameters to adjust the validation/test data as well. Again, these methods are readily extensible with additional options, or users can always pre-scale their own data however they like. Currently supported methods are: (1) "percentile," which identifies the value at a specified percentile of the data and divides all data by that value; (2) "standardize," which mean-centers the data and divides all values by the standard deviation; (3) "mean_center," which subtracts the mean of the data from all values; and (4) "map_range," which translates values into the range between a user-specified minimum and maximum (0 and 1, by default).
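The leakage-avoidance pattern behind these rescaling options can be illustrated with the "standardize" method. The following is a minimal NumPy sketch of the general idea, not the toolbox's internal implementation:

```python
# Minimal sketch of leakage-free rescaling ("standardize" style):
# parameters are computed on the training data only, then applied
# unchanged to the test data.
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(loc=5.0, scale=2.0, size=(100, 32, 1000))  # toy training set
test = rng.normal(loc=5.0, scale=2.0, size=(20, 32, 1000))    # toy test set

mu, sigma = train.mean(), train.std()   # fit parameters on training data only
train_scaled = (train - mu) / sigma
test_scaled = (test - mu) / sigma       # reuse training parameters; no peeking
```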
Input Data and Loaders
By far, the most common question we have received from potential users concerns the necessary format for input data. The toolbox operates on, at minimum, one NumPy array and one Python dictionary. The former contains the actual data to be analyzed in a two-or-more-dimensional array, where one dimension represents examples (e.g., trials) and the other dimension(s) are feature dimensions. For instance, an fMRI dataset might be shaped as (examples × voxels), whereas an EEG dataset might be (examples × electrodes × timepoints). Higher-dimensional structure is ignored in traditional MVPA and simply collapsed into a 2-D (examples × features) array, as those simple classifiers can only operate on vectors of data. However, dMVPA, when run with an appropriate network architecture, can operate on any dimensionality of data and can potentially take that structure into account for classification. If the spatiotemporal structure of the data is meaningful, this may produce superior performance. The Python dictionary contains the metadata needed to interpret the data array, in the form of one or more "sample attributes" (defined earlier; e.g., experimental condition, participant identity) for each sample. These sample attributes may be used as targets for classification (i.e., the class labels to be predicted) or as grouping variables in cross-validation (e.g., for leave-one-subject-out cross-validation).

Data are read into the toolbox by a "loader" Python function specified in the job configuration file. Loaders can reside in a specific subdirectory of the toolbox or in an arbitrary user-specified location. We include several example datasets and corresponding loader functions that should be easily modifiable by researchers to fit their own needs. This is the main place where a typical user might need to write their own Python code; because of the many idiosyncratic formats used to store experimental data, some users may need to write a short function to read their files in and reshape them into the expected format. However, if the format is well-supported by NumPy or other Python libraries, these functions can typically be quite short (on the order of 10 lines of code). We also provide generic loader functions for data in the NumPy and MATLAB native file formats, which will accept any .mat or .npy file containing one array variable of data examples and at least one variable of sample attributes. Thus, if users are able to save their data in one of those formats beforehand, there may be no need for a custom loader function. Because neuroscience data vary widely in format, we recognize that a need for additional loader options could still present a barrier to some researchers. We encourage such individuals to reach out to us so that we can offer assistance and expand the range of formats we are able to support natively.
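A custom loader of this kind might look like the following sketch. The return convention (a NumPy data array plus a dictionary of per-example sample attributes) follows the description above, but the exact signature the toolbox expects is an assumption here, so adapt from the example loaders shipped with the toolbox:

```python
# Sketch of a short custom loader. The returned (array, attributes-dict) pair
# follows the format described in the text; the exact signature DeLINEATE
# expects is an assumption, as is the layout of the file being read.
import numpy as np

def load_my_eeg(path):
    raw = np.load(path, allow_pickle=True).item()  # assume a dict saved via np.save
    data = raw["eeg"]                              # shape: (examples, 32, 1000)
    attrs = {
        "condition": raw["condition_labels"],      # classification targets
        "subject": raw["subject_ids"],             # e.g., for leave-one-subject-out
    }
    return data, attrs
```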
On the other hand, the overall flexibility in format means that, with just a few lines of code, any dataset that can be represented as a multidimensional array is a candidate for analysis with our toolbox, not just neuroscience data; for instance, we have used the toolbox to analyze eye-tracking data (Cole et al., under review), photographic images, and more.

Output Types and Results Assessment
Most of the output from the toolbox comes in the form of tab-separated value (TSV) text files, which are both human-readable and easily imported into other analysis/statistics software for further examination. Users can request only certain outputs to be generated, or simply specify "all" output types (which will generate all but a small number of specialized output types that only make sense in specific circumstances). Foremost among these are test accuracies, which include classification accuracy on the test dataset for each iteration of cross-validation (and also, for dMVPA, the value of the neural network's loss function for each iteration). Other TSV outputs include raw classification scores (before thresholding them to make a categorical classification decision), which could be used to generate receiver operating characteristic (ROC) curves or perform other more nuanced forms of results assessment; class labels for the test dataset (typically useful in conjunction with the classification scores); training/validation accuracies (potentially useful for assessing overfitting/underfitting in conjunction with the test accuracies); timestamps for each cross-validation iteration; and metadata about the hardware/software used to run a particular analysis. Non-TSV output consists primarily of the option, for dMVPA, to write out a copy of the trained neural network, so that users can apply it to a different dataset than the one it was trained on and thus assess generalizability (or use it for other applications or research questions). As noted earlier, non-TSV outputs can also include a duplicate copy of the JSON job configuration used to generate a particular set of results.

For convenience in assessing classification accuracy, we provide a simple Python command-line script that computes and displays the mean, standard deviation, and standard error across cross-validation iterations for one or more TSV-format accuracy files (see the sketch below). This is particularly helpful if the user has explored a large number of models and wants a quick, concise synopsis of their relative performance. Other options for assessing and/or visualizing results are currently rather limited, as the needs of research users are highly variable, many other analysis/statistics software packages already have highly advanced visualization options (e.g., R, MATLAB, the Python module Matplotlib), and many users may already have preferred workflows using those tools. Thus, our focus to date has been on making our output straightforward to import into other software. However, providing more advanced assessment/visualization options within the DeLINEATE toolbox itself is certainly on our development roadmap for future releases.
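The core of such a summary is simple. The sketch below assumes a file layout of one test accuracy per row, which is illustrative rather than the toolbox's exact output format; the real script ships with the toolbox:

```python
# Sketch: summarize one or more TSV accuracy files from the command line.
# Assumes one test accuracy per row, which is an illustrative file layout.
import sys
import numpy as np

for path in sys.argv[1:]:
    acc = np.loadtxt(path)                    # one accuracy per CV iteration
    sem = acc.std(ddof=1) / np.sqrt(len(acc)) # standard error of the mean
    print(f"{path}: mean={acc.mean():.3f} "
          f"sd={acc.std(ddof=1):.3f} sem={sem:.3f}")
```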
Graphical User Interface
As described earlier, the GUI currently allows users to generate a job configuration structure via menu selections and free-entry fields (Figure 3B), which can be auto-populated by loading an existing job file. For frequently used Keras layer types, some reasonable default hyperparameters are provided; however, there are minimal defaults available for less common layer types, and in general we still recommend that users have some baseline knowledge of Keras's workings and hyperparameter options, even when using the GUI. As the number of potential analysis configurations is effectively limitless and this module is a relatively recent addition, error checking is currently somewhat limited. Still, we recognize that a usable GUI is a critical feature for some users, and we expect this to be a primary target for expansion and refinement in upcoming releases.

Availability
All toolbox code is currently hosted at https://bitbucket.org/delineate/delineate and is freely accessible and open-source under the MIT License. There is also a project website at http://delineate.it that hosts older releases, documentation, links to video tutorials, and more.

Hardware/Software Requirements
The DeLINEATE toolbox has few software dependencies of its own. However, as noted earlier, it requires either a Keras or PyMVPA backend to perform dMVPA or traditional MVPA, respectively, and those packages have their own corresponding dependencies. Fortunately, both Keras and PyMVPA are well-documented and readily available; we also provide start-to-finish setup guides on the toolbox website. In brief, DeLINEATE is compatible with any recent version of either backend, and in principle can run on any Python version from 2.7 onward, including all versions of Python 3; however, specific Python version compatibility may depend on which version of Keras/PyMVPA the user is running, and which Python versions those libraries are compatible with. The only additional dependency of DeLINEATE is Python support for Tcl/Tk (a graphical interface toolkit) if one wishes to use DTGui; most Python installations include Tcl/Tk libraries, but some might require a separate installation. As Python is available on all major operating systems (Windows, macOS, and Linux), DeLINEATE will run on any of them, although hardware choices may constrain operating-system options.

In terms of hardware, a bare-bones DeLINEATE installation will run on any computer with enough RAM to hold the user's dataset in memory, as long as the user only wishes to run analyses on the CPU. Traditional MVPA via PyMVPA does not presently employ GPU acceleration, but most dMVPA users will want to enable GPU acceleration for a dramatic increase in speed (see "Benchmarks" below). Because Keras relies on the TensorFlow library for its own backend (or the older Theano library, now deprecated in recent Keras versions but still supported by DeLINEATE), which in turn relies on the CUDA (Compute Unified Device Architecture) and cuDNN (CUDA Deep Neural Network) libraries from NVIDIA, an NVIDIA-compatible GPU is effectively required for accelerated dMVPA. Different GPUs have different compatibility with various versions of CUDA, cuDNN, TensorFlow/Theano, and Keras; however, as long as compatible versions of those tools are installed, DeLINEATE should work with any of them. At the time of writing, we recommend midrange to high-end GPUs from the GeForce 10 series or higher; our lab's workstations mostly use GeForce GTX 1070 through GeForce GTX 1080 Ti cards, but other users may have higher or lower requirements.
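Before running GPU-accelerated analyses, it can be worth confirming that the deep learning stack actually sees the GPU. The following snippet (generic TensorFlow 2.x code, assuming a TensorFlow-backed Keras installation) performs that check:

```python
# Quick sanity check that a CUDA-capable GPU is visible to TensorFlow
# before attempting GPU-accelerated dMVPA.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print(f"Found {len(gpus)} GPU(s): {gpus}")
else:
    print("No GPU detected; analyses will fall back to the CPU.")
```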
Currently, a reasonably powerful workstation for many dMVPA applications could be built from parts for $1500-2000 US³³, although prices can vary widely depending on users' specific requirements and budgets. Since no current Apple computers support compatible NVIDIA GPUs, GPU-accelerated dMVPA is currently unavailable on macOS. Generally, for scientific computing, we recommend Linux-based operating systems for their widespread compatibility and open-source nature; however, GPU-accelerated dMVPA will work on Windows as well. If the macOS/NVIDIA compatibility situation changes, if the recently released (at the time of writing) Apple Silicon platform allows for hardware-accelerated TensorFlow support, or if DeLINEATE adds support for additional backends, GPU-accelerated dMVPA may become available on macOS in the relatively near future.

It has historically been difficult to implement large neural networks without setting up dedicated hardware, largely because the virtualization approaches favored for cloud-based computing do not provide sufficient access to GPUs. However, we have recently seen the emergence of an option that may be useful to those who lack either the budget or the technical confidence to set up their own deep learning environments. Google Colab³⁴ is a browser-based Python environment akin to Jupyter Notebooks with some access to GPUs. Because the provided environment includes Keras/TensorFlow and allows interaction with files stored on Google Drive, it is relatively straightforward to execute DeLINEATE-based analyses by importing some of the toolbox classes and manually calling the method that begins an analysis. An example IPython notebook is provided in the Colab subfolder of the DeLINEATE repository. This approach requires some proficiency in Python and is subject to fluctuating resource limitations, so no promises can be made about speed or stability; however, it may be a good jumping-off point for beginning users wishing to explore the toolbox before investing in their own equipment.

³³ Based on market prices for parts to build a system similar to ours at the time they were built, with an eight-core Intel i7-9700K CPU, GeForce GTX 1070 GPU, 32GB RAM, 1TB SSD primary storage, 4TB HDD secondary storage, and a compatible CPU cooler, motherboard, case, and power supply, for a total of $1750 US. Newer GPUs and other parts have been released since those systems were built, but pricing for current parts is in a similar range.

³⁴ https://colab.research.google.com

Benchmarks
For both traditional MVPA and dMVPA, performance (both accuracy and computation time) varies drastically across datasets, hardware, and choice of MVPA classifier or neural network architecture. Thus, the generalizability of any benchmarks is limited. However, to give readers a rough sense of the computational advantages of dMVPA and how running times scale for different dataset sizes, we prepared several datasets and analyzed them with both traditional MVPA and dMVPA. These benchmark datasets emulate the format of an fMRI dataset, but are entirely synthetic; the code to generate them is included in the toolbox. We simulated datasets with three conditions (classes). Datasets ranged from 200 features (e.g., voxels) to 25,600 features in a doubling progression (200, 400, 800, ...). The number of examples (trials) per condition ranged from 100 to 10,000 in the progression 10^2, 20^2, 30^2, ..., 100^2. Full details are given in the code and in the Supplementary Material.
Briefly, for each condition, a random signal with the appropriate number of features was generated. Then, supposing for this example that we are generating 900 trials per condition, 30 variations on the "canonical" signal for that condition would be generated by blending the canonical signal with a certain proportion of random noise. Then, for each of those 30 variations, 30 sub-variations were generated by the same process. Although we did not particularly strive for biological verisimilitude, the intent was to roughly mimic a circumstance in which brain patterns have a small number of "true" variations (e.g., if the condition were "faces," subjects might have slightly different voxel response patterns for different genders/races) as well as trial-to-trial variations due to stimulus exemplar effects and/or measurement noise. To make the classification more challenging, each trial's signal was also blended with a proportion of the signal of a trial from each of the other two conditions.

The datasets were analyzed with three classifier models: a simple CNN, SMLR, and SVM. The CNN used GPU acceleration (NVIDIA GeForce GTX 1080 Ti), whereas the other models used only the CPU (Intel Xeon X5650 @ 2.67GHz). Each analysis was typically run for 10 iterations (cycles of training/test with different randomly selected training/test sets), except when running times became prohibitive, in which case the analysis was terminated after as few as five iterations. Mean running times (Table 1) ranged drastically, from less than one second to several days. As expected, running times for all model types generally increased with greater numbers of features and trials. SVMs had both the shortest and longest running times. Compared to SVMs, SMLR had both a longer shortest running time and a shorter longest running time (i.e., the range was compressed on both ends), and CNNs continued this trend with an even longer shortest running time and a still shorter longest running time (i.e., the range was even more compressed). Notably, the CNN never took less than 10 s (largely due to a relatively fixed start-up time for Keras models), but its longest running times, for the most complex datasets, were still under 15 min. By comparison, SMLR's longest running times were over 4 h, and SVMs' were multiple days (and a few SVM models never converged in any reasonable amount of time). Thus, as expected, deep learning models were less time-efficient than traditional MVPA for simpler datasets but were vastly more scalable for large datasets.

Benchmark datasets were intended to be classifiable at moderate accuracies but were not particularly designed to be benchmarks of accuracy, so we do not report comprehensive accuracy results, which could invite misleading extrapolations to real data. However, all methods generally performed above chance, in a comparable range. Typically, the CNN had the lowest accuracy of the three models on datasets with few trials but usually had the highest accuracy with large trial counts, especially when feature counts were low. Conversely, the SVM had the highest accuracy when trial counts were low or feature counts were very high, although in those high-feature-count analyses the SVM running time was long enough to be unusable in many real-world scenarios. SMLR accuracy almost always fell between CNN and SVM.
Again, we do not expect these accuracies on synthetic data to perfectly reflect performance on real-world data, but they do fit general expectations of how models of varying complexity might overfit or underfit datasets of varying sizes.

Future Development
Toolbox development is ongoing and will largely be steered by community feedback. Current goals include adding support for non-sequential Keras models (e.g., those including feedback connections), transfer learning, model introspection, Generative Adversarial Networks (GANs), and additional built-in data loaders and cross-validation schemes. We also plan to make the GUI more informative and intuitive for users who are less familiar with Keras, and to include additional tools for visualization and potentially for analysis of results (although this remains an unsettled topic; see Hebart and Baker, 2018, for relevant discussion). Although we have kept the discussion in this paper fairly general, information is still liable to go out-of-date quickly due to the rapid pace of deep learning methods development; users are encouraged to consult our website for the most up-to-date details.

Summary
Deep learning continues to grow and offer new possibilities for computation in many areas of research and private industry. While it is increasingly used in neuroimaging and other neuroscience applications, adoption has been hampered by the complexity of the topic and the lack of approachable software tools. We hope that this tutorial review will help researchers new to deep learning address the former, and that the DeLINEATE software toolbox will help address the latter. In years to come, we expect dMVPA to enable a forward leap in neuroscience discoveries comparable to, or exceeding, that of traditional MVPA over older analyses.

DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: http://delineate.it/, https://bitbucket.org/delineate/delineate/src/master/.

AUTHOR CONTRIBUTIONS
KK, JW, PL, and MJ worked on toolbox code and co-wrote the manuscript. AS and PR consulted on the analyses and related projects intertwined with toolbox development and contributed to the writing of the manuscript. All authors contributed to the article and approved the submitted version.

FUNDING
This work was supported by NSF Grant CMMI 1719388, "Biosensor Data Fusion for Real-time Monitoring of Global Neurophysiological Function," awarded to PR and colleagues, as well as NSF/EPSCoR Grant 1632849, "RII Track-2 FEC: Neural networks underlying the integration of knowledge and perception," and NIH P20 GM130461, "Rural Drug Addiction Research Center," awarded to MJ and colleagues. We also received a GPU grant from NVIDIA Corporation. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health or the University of Nebraska.
Entity Enhanced BERT Pre-training for Chinese NER

Character-level BERT pre-trained in Chinese suffers from a lack of lexicon information, which has been shown to be effective for Chinese NER. To integrate the lexicon into pre-trained LMs for Chinese NER, we investigate a semi-supervised entity enhanced BERT pre-training method. In particular, we first extract an entity lexicon from the relevant raw text using a new-word discovery method. We then integrate the entity information into BERT using a Char-Entity-Transformer, which augments the self-attention with a combination of character and entity representations. In addition, an entity classification task helps inject the entity information into the model parameters during pre-training. The pre-trained models are used for NER fine-tuning. Experiments on a news dataset and two datasets annotated by ourselves for NER in long text show that our method is highly effective and achieves the best results.

Introduction
As a fundamental task in information extraction, named entity recognition (NER) is useful for NLP tasks such as relation extraction (Zelenko et al., 2003), event detection (Kumaran and Allan, 2004) and machine translation (Babych and Hartley, 2003). We investigate Chinese NER (Gao et al., 2005), for which the state-of-the-art methods use a character-based neural encoder augmented with lexicon word information (Zhang and Yang, 2018; Gui et al., 2019a,b; Xue et al., 2019). NER has been a challenging task due to the flexibility of named entities. There can be a large number of OOV named entities in the open domain, which poses challenges to supervised learning algorithms. In addition, named entities can be ambiguous. Take Figure 1 for example. The term "老妇人(the old lady)" literally means "older woman". However, in the context of football news, it is the nickname of the football club Juventus F.C.. Thus entity lexicons that contain domain knowledge can be useful for the task (Radford et al., 2015). Intuitively, such lexicons can be collected automatically from a set of documents that are relevant to the input text. For example, in the news domain, a set of news articles in the same domain and concurrent with the input text can contain highly relevant entities. In the finance domain, the financial reports of a company over the years can serve as a context for collecting named entities when conducting NER for a current-year report. In the science domain, relevant articles can mention the same technological terms, which can facilitate recognition of the terms. In the literature domain, a full-length novel itself can serve as a context for mining entities.

* Equal contribution.

Figure 1: Entity enhanced pre-training for NER. "老妇人(The old lady)", the nickname of the football club Juventus F.C., is extracted by new-word discovery and integrated into the Transformer structure. After pre-training, the embedding of "老妇人(The old lady)" carries global information and is correctly classified as an ORG, which also helps recognize "意甲(Serie A)" as an ORG.

There has been work exploiting lexicon knowledge for NER (Passos et al., 2014; Zhang and Yang, 2018). However, little has been done on integrating entity information into BERT, which gives the state-of-the-art for Chinese NER. We consider enriching BERT (Devlin et al., 2019) with automatically extracted domain knowledge as mentioned above. In particular, we leverage the strength of new-word discovery on large documents by calculating pointwise mutual information to identify entities in the documents.
Information about such entities is integrated into the BERT model by replacing the original self-attention modules (Vaswani et al., 2017) with a Char-Entity-Self-Attention mechanism, which captures the contextual similarities between characters and document-specific entities, and explicitly combines character hidden states with entity embeddings in each layer. The extended BERT model is then used for both LM pre-training and NER fine-tuning. We investigate the effectiveness of this semi-supervised framework on three NER datasets, including a news dataset and two datasets annotated by ourselves (novels and financial reports), which aim to evaluate NER for long text. We make comparisons with two groups of state-of-the-art Chinese NER methods, including BERT and ERNIE (Sun et al., 2019a,b). For a more reasonable comparison, we also complement both BERT and ERNIE with our entity dictionary and further pre-train them on the same raw text as ours. Results on the three datasets show that our method outperforms these methods and achieves the best results, which demonstrates the effectiveness of the proposed Char-Entity-Transformer structure for integrating entity information in LM pre-training for Chinese NER. To our knowledge, we are the first to investigate how to make use of document-scale input text for enhancing NER. Our code and NER datasets are released at https://github.com/jiachenwestlake/Entity_BERT.

Related Work
Chinese NER. Previous work has shown that character-based approaches perform better for Chinese NER than word-based approaches because of their freedom from Chinese word segmentation errors (He and Wang, 2008; Liu et al., 2010; Li et al., 2014). Lexicon features have been applied so that external word-level information enhances NER training (Luo et al., 2015; Zhang and Yang, 2018; Gui et al., 2019a,b; Xue et al., 2019). However, these methods are supervised models, which cannot deal with a dataset with relatively little labeled data. We address this problem with a semi-supervised method based on a pre-trained LM.

Pre-trained Language Models. Pre-trained language models have been applied as an integral component in modern NLP systems for effectively improving downstream tasks (Peters et al., 2018; Radford et al., 2019; Devlin et al., 2019; Liu et al., 2019b). Recently, there has been increasing interest in augmenting such contextualized representations with external knowledge (Zhang et al., 2019; Liu et al., 2019a; Peters et al., 2019). These methods focus on augmenting BERT by integrating KG embeddings such as TransE (Bordes et al., 2013). Different from this line of work, our model dynamically integrates document-specific entities without using any pre-trained entity embeddings. A more similar method is ERNIE (Sun et al., 2019a,b), which enhances BERT through knowledge integration. In particular, instead of masking individual subword tokens as BERT does, ERNIE is trained by masking full entities. The entity-level masking trick in ERNIE pre-training can be seen as an implicit way to integrate entity information through error backpropagation. In contrast, our method encodes entities into the Transformer structure explicitly.

Method
As shown in Figure 2, the overall architecture of our method can be viewed as a Transformer structure with multi-task learning. There are three output components, namely masked LM, entity classification and NER.
With only the masked language model component, the model resembles BERT without the next sentence prediction task; the entity classification task is added to enhance pre-training. When only NER outputs are yielded, the model is a sequence labeler for NER. We integrate entity-level information by extending the standard Transformer.

New-Word Discovery
In order to enhance a BERT LM with document-specific entities, we adopt an unsupervised method based on Bouma (2009), which computes the mutual information of each candidate n-gram together with its left and right entropy, and adds these three values as the validity score of possible entities. The specific induction process is shown in Appendix A.

Char-Entity-Transformer
We construct models based on the Transformer structure of BERT_BASE for Chinese (Devlin et al., 2019). In order to make use of the extracted entities, we extend the baseline Transformer to a Char-Entity-Transformer, which consists of a stack of multi-head Char-Entity-Self-Attention blocks. We denote the hidden dimension of characters as $H_c$ and the hidden dimension of new words (entities) as $H_e$. $L$ is the number of layers, and $A$ is the number of self-attention heads.

Baseline Transformer. The Transformer encoder (Vaswani et al., 2017) is constructed with a stacked layer structure. Each layer consists of a multi-head self-attention sub-layer. In particular, given the hidden representation of a sequence $\{h^{l-1}_1, \dots, h^{l-1}_T\}$ for the $(l-1)$-th layer, packed together as a matrix $h^{l-1} \in \mathbb{R}^{T \times H_c}$, the self-attention function of the $l$-th layer is a linear transformation on the Value $V^l$ space by means of Query $Q^l$ and Key $K^l$ mappings, represented as:

$$Q^l = h^{l-1} W^l_q, \quad K^l = h^{l-1} W^l_k, \quad V^l = h^{l-1} W^l_v,$$
$$\mathrm{Atten}(Q^l, K^l, V^l) = \mathrm{softmax}\!\left(\frac{Q^l (K^l)^\top}{\sqrt{d_k}}\right) V^l, \tag{1}$$

where $d_k$ is the scaling factor and $W^l_q, W^l_k, W^l_v \in \mathbb{R}^{H_c \times H_c}$ are trainable parameters of the $l$-th layer. The result of $\mathrm{Atten}(Q^l, K^l, V^l)$ is further fed to a feed-forward network sub-layer with layer normalization to obtain the final representation $h^l$ of the $l$-th layer.

Char-Entity matching. Given a character sequence $c = \{c_1, \dots, c_T\}$ and an extracted entity dictionary $E_{ent}$, we use the maximum entity matching algorithm to obtain the corresponding entity-labeled sequence $e = \{e_1, \dots, e_T\}$. In particular, we label each character with the index of the longest entity in $E_{ent}$ that includes the character, and label characters with no entity matches with 0. The process is summarized in Algorithm 1 (maximum entity matching).
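For illustration, the matching procedure can be sketched in a few lines of Python. This is an illustrative reimplementation of the greedy longest-match labeling described above, not the authors' released code, and the original Algorithm 1 may differ in detail:

```python
# Sketch of maximum entity matching: label each character with the index of
# the longest matching entity covering it, or 0 if no entity matches.
# Illustrative reimplementation from the textual description.
def max_entity_match(chars, entity_dict):
    # entity_dict maps entity string -> index (indices start at 1; 0 = none)
    labels = [0] * len(chars)
    max_len = max((len(e) for e in entity_dict), default=0)
    i = 0
    while i < len(chars):
        match_end = -1
        for j in range(min(len(chars), i + max_len), i, -1):  # longest first
            cand = "".join(chars[i:j])
            if cand in entity_dict:
                match_end = j
                break
        if match_end > 0:
            labels[i:match_end] = [entity_dict[cand]] * (match_end - i)
            i = match_end
        else:
            i += 1
    return labels

# e.g., max_entity_match(list("老妇人加油"), {"老妇人": 1}) -> [1, 1, 1, 0, 0]
```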
Char-Entity-Self-Attention. The Char-Entity-Self-Attention structure is shown in Figure 2 (right). Following BERT (Devlin et al., 2019), given a character sequence $c = \{c_1, \dots, c_T\}$, the representation of the $t$-th ($t \in \{1, \dots, T\}$) character in the input layer is the sum of character, segment and position embeddings, represented as:

$$h^0_t = E_c[c_t] + E_s[s] + E_p[t], \tag{2}$$

where $E_c$, $E_s$, $E_p$ represent the character embedding lookup table, segment embedding lookup table and position embedding lookup table, respectively. In particular, the segment index $s \in \{0, 1\}$ is used to distinguish the order of input sentences for the next sentence prediction task in BERT (Devlin et al., 2019), which is not included in our method. Thus we set the segment index $s$ to a constant 0.

Given the $(l-1)$-th layer character hidden sequence $\{h^{l-1}_1, \dots, h^{l-1}_T\}$, we compute the combination of the character hidden state and its corresponding entity embedding as:

$$q^l_t = h^{l-1}_t W^l_{h,q}, \quad k^l_t = h^{l-1}_t W^l_{h,k} + E_{ent}[e_t]\, W^l_{e,k}, \quad v^l_t = h^{l-1}_t W^l_{h,v} + E_{ent}[e_t]\, W^l_{e,v}, \tag{3}$$

where $W^l_{h,q}, W^l_{h,k}, W^l_{h,v} \in \mathbb{R}^{H_c \times H_c}$ are trainable parameters of the $l$-th layer, and $W^l_{e,k}, W^l_{e,v} \in \mathbb{R}^{H_e \times H_c}$ are trainable parameters for the corresponding entities. $E_{ent}$ is the entity embedding lookup table. As shown in Eq. (3), if there is no corresponding entity for a character, the entity terms vanish and the representation is equal to the baseline self-attention. To show how a character and its corresponding entity are encoded jointly, we denote the packed entity embeddings $\{E_{ent}[e_1], \dots, E_{ent}[e_T]\}$ as $e \in \mathbb{R}^{T \times H_e}$. The attention score of the $i$-th character in the $l$-th layer, $S^l_i$, is computed (before normalization) as:

$$\{S^l_i\}_t = \sqrt{s^c_t \, s^e_t}\,,$$

where the char-to-char attention score $s^c_t$ is computed exactly as in the baseline self-attention, and the char-to-entity attention score $s^e_t$ represents the similarity between the character and the corresponding entity. That is, before normalization, the attention score between the $i$-th and $t$-th characters is the geometric mean of $s^c_t$ and $s^e_t$. This shows that the similarity between two characters under Char-Entity-Self-Attention is computed as a combination of the char-to-char and char-to-entity geometric distances. Given the attention score $S^l_i$, $\mathrm{Atten}(q^l_i, K^l, V^l)$ is computed as a weighted sum of the Value $V^l$, which is a combination of character values and entity values.

Masked Language Modeling Task
Following Devlin et al. (2019), we use the masked LM (MLM) task for pre-training. In particular, given a character sequence $c = \{c_1, \dots, c_T\}$, we randomly select 15% of input characters and replace them with [MASK] tokens. Formally, given the hidden outputs of the last layer $\{h^L_1, \dots, h^L_T\}$, for each masked character $c_t$ in a character sequence, the prediction probability of the MLM, $p(c_t \mid c_{<t} \cup c_{>t})$, is computed as:

$$p(c_t \mid c_{<t} \cup c_{>t}) = \frac{\exp\!\left(E_c[c_t]^\top h^L_t\right)}{\sum_{c' \in V} \exp\!\left(E_c[c']^\top h^L_t\right)},$$

where $E_c$ is the character embedding lookup table and $V$ is the character vocabulary.

Entity Classification Task
In order to further enhance the coherence between characters and their corresponding entities, we propose an entity classification task, which predicts the specific entity that the current character belongs to. A theoretical explanation of this task is that it maximizes the mutual information $I(e; c)$ between the character $c \sim p(c)$ and the corresponding entity $e \sim p(e)$, where $p(c)$ and $p(e)$ represent the probability distributions of $c$ and $e$, respectively:

$$I(e; c) = H(e) - H(e \mid c) = H(e) + \mathbb{E}_{(e,c)}\!\left[\log p(e \mid c)\right],$$

where $H(e)$ indicates the entropy of $e \sim p(e)$, represented as $H(e) = -\mathbb{E}_{e \sim p(e)}[\log p(e)]$, which is a constant corresponding to the frequency of entities in a document. Thus the maximization of the mutual information $I(e; c)$ is equivalent to the maximization of the expectation of $\log p(e \mid c)$. Considering the computational complexity due to the excessive number of candidate entities, we employ sampled softmax for output prediction (Jean et al., 2015). Formally, given the hidden outputs of the last layer $\{h^L_1, \dots, h^L_T\}$ and the corresponding entity-labeled sequence $e = \{e_1, \dots, e_T\}$, we compute the probability of each character $c_t$ (s.t. $e_t \neq 0$) aligning with its corresponding entity $e_t$ as:

$$p(e_t \mid c_t) = \frac{\exp\!\left(E_{ent}[e_t]^\top h^L_t + b_{e_t}\right)}{\sum_{e' \in \{e_t\} \cup R^-} \exp\!\left(E_{ent}[e']^\top h^L_t + b_{e'}\right)},$$

where $R^-$ represents a randomly sampled negative set drawn from the candidate entities of the current input document, $E_{ent}$ is the entity embedding lookup table and $b_e$ is the bias of entity $e$.

NER Task
Given the hidden outputs of the last layer $\{h^L_1, \dots, h^L_T\}$, the output layer for NER is a linear classifier $f: \mathbb{R}^{H_c} \rightarrow \mathcal{Y}$, where $\mathcal{Y}$ is an $(m-1)$-simplex and $m$ is the number of NER tags. The probability that the character $c_t$ aligns with the $k$-th NER tag is computed using softmax:

$$p(y_t = k \mid c) = \frac{\exp\!\left(w_k^\top h^L_t + b_k\right)}{\sum_{j=1}^{m} \exp\!\left(w_j^\top h^L_t + b_j\right)},$$

where $w_k \in \mathbb{R}^{H_c}$ and $b_k$ are trainable parameters specific to the $k$-th NER tag. We adopt the B-I-O tagging scheme for NER.
Training Procedure
Our model is initialized using a pre-trained BERT model², and the other parameters are randomly initialized. During training, we first pre-train an LM over all of the raw text to acquire the entity-enhanced model parameters, and then fine-tune the parameters on the NER task.

Pre-training. Given raw text with induced entities $D_{lm} = \{(c^n, e^n)\}_{n=1}^N$, where $c^n$ is a character sequence and $e^n$ is its corresponding entity sequence detected by Algorithm 1, we feed each training character sequence and its corresponding entity sequence into the model. We denote the masked subset of $D_{lm}$ as $D^+_{lm} = \{(n, t) \mid c^n_t = \text{[MASK]}, c^n \in D_{lm}\}$; the loss of the masked LM task is:

$$\mathcal{L}_{mlm} = -\sum_{(n,t) \in D^+_{lm}} \log p\!\left(c^n_t \mid c^n_{<t} \cup c^n_{>t}\right).$$

We denote the entity prediction subset of $D_{lm}$ as $D^e_{lm} = \{(n, t) \mid e^n_t \neq 0, c^n \in D_{lm}\}$; the loss of the entity classification task is:

$$\mathcal{L}_{ent} = -\sum_{(n,t) \in D^e_{lm}} \log p\!\left(e^n_t \mid c^n_t\right).$$

To jointly train the masked LM task and the entity classification task in pre-training, we minimize the overall loss:

$$\mathcal{L}_{pre} = \mathcal{L}_{mlm} + \mathcal{L}_{ent}.$$

Fine-tuning. Given an NER dataset $D_{ner} = \{(c^n, y^n)\}_{n=1}^N$, we train the NER output layer and fine-tune both the pre-trained LM and the entity embeddings with the NER loss:

$$\mathcal{L}_{ner} = -\sum_{n} \sum_{t} \log p\!\left(y^n_t \mid c^n\right).$$

The overall process of pre-training and fine-tuning is summarized in Algorithm 2.

² https://github.com/google-research/bert, which is pre-trained on Chinese Wikipedia.

Experiments
We empirically verify the effectiveness of entity enhanced BERT pre-training on different NER datasets. In addition, we investigate how different components of the model impact NER performance under different settings.

Datasets
We conduct experiments on three datasets, including one public NER dataset, CLUENER-2020 (Xu et al., 2020), and two datasets annotated by ourselves, which are also contributions of this paper. The statistics of the datasets are listed in Table 1.

News dataset. We use the CLUENER-2020 (Xu et al., 2020) dataset. Compared with the OntoNotes (Weischedel et al., 2012) and MSRA (Levow, 2006) datasets for Chinese news NER, CLUENER-2020 is a fine-grained Chinese NER dataset with 10 entity types, and its labeled sentences belong to different news domains rather than a single one. We randomly sample 5.2K, 0.6K and 0.7K sentences from the original CLUENER-2020 dataset as the training³, dev and test sets, respectively. The corresponding raw text is taken from THUCNews (Sun et al., 2016) in four news domains⁴, namely GAM (game), ENT (entertainment), LOT (lottery) and FIN (finance), with a total of about 100M characters. The detailed entity statistics are shown in Appendix B.1.

³ In practice, a little manual labeling can be performed on each news domain separately for the best results. However, considering the expense of performing experiments to study the influence of training data scale, we use a single set of training data for all the news domains. This setting is also used for the novel dataset.

⁴ The original CLUENER-2020 dataset has no domain divisions, but our method aims to leverage domain-specific entity information for NER. Thus we select some specific news domains according to raw text from THUCNews and construct an entity dictionary for each domain. We also released a smaller version of CLUENER-2020 with domain divisions.

Novel dataset. We select three Chinese Internet novels, titled "天荒神域(Stories in Myth)", "道破天穹(Taoist Stories)" and "茅山诡术师(Maoshan Wizards)", and manually label around 0.9K sentences from each novel as the development and test sets.
Considering the literature genre, we annotate six types of entities. Besides, we use the original text of the nine novels with about 48M characters for pre-training. The details of annotation and entity statistics are shown in Appendix B.2. Financial report dataset. We collect annual financial reports of 12 banks in China for five years and select about 2k sentences to annotate as the test set. The annotation rules follow the MSRA dataset (Levow, 2006), and the annotation process follows the novel dataset. In addition, we use the MSRA training and dev sets as our training and dev data. The unannotated annual reports of about 26M characters are used in LM pre-training. The detailed entity statistics are shown in Appendix B.3. Experimental Settings Model size. Our model is constructed using BERT BASE (Devlin et al., 2019), with the number of layers L = 12, the number of self-attention heads A = 12, the hidden size of characters H c = 768 and the hidden size of entities H e = 64. The total amount of non-embedding model parameters is about 86M. The total amount of non-embedding parameters of BERT BASE is about 85M. The entity integration module occupies only a small proportion in the whole model. Therefore, it has little impact on training efficiency. Hyperparameters. For pre-training, we largely follow the default hyperparameters of BERT (Devlin et al., 2019). We use the Adam optimizer with an initial learning rate of 5e −5 and a maximum epoch number of 10 for fine-tuning. We list the details about pre-training and fine-tuning hyperparameters in Table 2. Baselines. We compare our methods with three groups of state-of-the-art methods to Chinese NER. BERT baselines. BERT (Devlin et al., 2019) directly fine-tunes a pre-trained Chinese BERT on NER. BERT+FUR uses the same raw text as ours to further pre-train the BERT with only the masked LM task. BERT+FUR+ENT uses the sum of character embeddings and the corresponding entity embeddings by the same entity matching algorithm as ours only in the input layer, and then further pre-trains BERT on the same raw text as ours. ERNIE baselines. ERNIE 5 (Sun et al., 2019a,b) enhances BERT through knowledge integration using a entity-level masked LM task and more raw text from the Web resources, which achieves the currently best results on Chinese NER. ERNIE+FUR+ENT is a stronger baseline, which uses the same entity dictionary as ours for entitylevel masking and further pre-trains ERNIE on the same raw text as ours. LSTM baselines. We compare character-level BILSTM (Lample et al., 2016) and BILSTM+ENT, which concatenates the character embeddings and its corresponding entity embeddings as inputs. We also compare a gazetteer based method LATTICE (Zhang and Yang, 2018) and LATTICE (REENT), which replaces the word gazetteer of LATTICE with our entity dictionary for fair comparison. We use the same embeddings as (Zhang and Yang, 2018), which are pre-trained on Giga-Word 6 using Word2vec (Mikolov et al., 2013). The entity embeddings are randomly initialized and fine-tuned during training. Overall Results The overall F 1 -scores are listed in Table 3. Comparison with BERT baselines. BERT+FUR achieves a slightly better result than BERT on the news dataset All (75.14% F 1 5 https://github.com/PaddlePaddle/ ERNIE/tree/repro 6 https://catalog.ldc.upenn.edu/ LDC2011T13 v.s. 74.22% F 1 ), but similar results on the novel dataset All and the financial report dataset. 
This shows that simply further pre-training BERT on document-specific raw text can hardly improve performance. Using a naive method to integrate entity information, BERT+FUR+ENT achieves significantly better results on the novel dataset All (76.23% F1 vs. 73.22% F1) compared with BERT+FUR, but lower F1 on the news and financial report datasets, which shows that this naive method cannot effectively benefit from entities across arbitrary text genres. Compared with BERT, Ours achieves larger improvements on the novel dataset and the financial report dataset than on the news dataset (at least 4% F1 vs. 2.4% F1), indicating the effectiveness of Ours for long-text genres. Compared with all of the BERT baselines, Ours achieves significant improvements (at least 1.5% F1 on the news dataset All, at least 1.3% F1 on the novel dataset All and over 4% F1 on the financial report dataset), which shows that the Char-Entity-Transformer structure effectively integrates the document-specific entities extracted by new-word discovery and benefits Chinese NER.

Comparison with the state-of-the-art. We make comparisons with the ERNIE baselines. Even though ERNIE uses more raw text and entity information from Web resources for pre-training, Ours outperforms ERNIE significantly (about 1% F1 on the news dataset All, over 4% F1 on both the novel dataset All and the financial report dataset), which shows the importance of document-specific entities for pre-training. Using the same entity dictionary as Ours to further pre-train ERNIE on the same raw text as Ours, ERNIE+FUR+ENT achieves better results than ERNIE on the novel dataset and the financial report dataset, but suffers a decrease on the news dataset All, which shows that integrating a document-specific entity dictionary benefits ERNIE for Chinese NER in long-text genres. Compared with ERNIE+FUR+ENT, Ours achieves significant improvements, which shows that our explicit method of integrating entity information through the Char-Entity-Transformer structure is more effective than entity-level masking for Chinese NER. Finally, BERT and ERNIE outperform the LSTM baselines on all three datasets, indicating the effectiveness of LM pre-training for Chinese NER.

Figure 3: Performance of new-word discovery against word frequency on the news dataset. We ignore the interval >1000, because it contains less than 5% of new-words or entities.

Analysis
MI-based new-word discovery. Figure 3 illustrates the relationship between new-words extracted by the MI-based new-word discovery (NWD) and the named entities within the scope of the news dataset. On the one hand, within the scope of the news dataset, the proportion of entities extracted by the MI-based NWD is relatively higher for more frequently appearing n-grams in the raw text (overall, 31.04% of the named entities are extracted by the NWD), as shown by the red line in Figure 3. On the other hand, among the n-grams in the news dataset, new-words with lower frequencies extracted by the MI-based NWD are more likely to be named entities (overall, 3.86% of new-words within the news dataset are named entities), as shown by the blue line in Figure 3.

Fine-grained comparison. In order to study the performance of our method on different entity types, we make fine-grained comparisons on the news dataset, which has plenty of entity types in different news domains.
Figure 4 illustrates F1-scores of several typical entity types, including GOV (government), BOO (book), MOV (movie) and ADD (address), for fine-grained comparison on the news dataset with BERT and ERNIE. The trends are consistent with the overall results. The full table is shown in Appendix C. Ablation study. As shown in Table 4, we use two groups of ablation studies to investigate the effect of entity information. (1) Entity prediction task. We consider (i) NO-ENT-CLASS, which does not use the entity classification task in pre-training; and (ii) NO-PRETRAIN, which does not use entity enhanced pre-training. Results of these methods suffer significant decreases compared to FINAL, which shows that pre-training, especially with the entity classification task, plays an important role in integrating the entity information. In addition, we also explore the effect of raw text quantity. The result of (iii) HALF-RAW shows that a larger amount of raw text is helpful. (2) Entity dictionary. We consider (i) HALF-ENT, which uses 50% randomly selected entities from the original entity dictionary; and (ii) N-GRAMS, which uses randomly selected n-grams from the raw text in place of the extracted entities. The results show that the document-specific entity dictionary benefits the performance, and that the new-word discovery method is effective for collecting the entity dictionary. The amount of NER training data. To compare the performance of different models under different numbers of labeled training sentences, we randomly select different numbers of training sentences for training on the novel dataset. As shown in Figure 5, in nearly unsupervised settings, Ours gives the largest improvements (33.92% F1 over BILSTM+ENT, 20.80% F1 over BERT+FUR and 2.81% F1 over ERNIE+FUR+ENT). With only 500 training sentences, Ours achieves a competitive result, which shows the effectiveness of our LM pre-training method for the few-shot setting. Case study. Table 5 shows a case study on the news dataset. "花旗中国(Citi China)" is a COM (company) and "《辐射》(Radiation)" is a MOV (movie). Since the text genre and entities in the news are so different from Wikipedia, BERT does not recognize the company name "花旗中国(Citi China)" and misclassifies "《辐射》(Radiation)" as a GAM (game). Benefiting from integrating entity information into LM pre-training, both ERNIE and Ours recognize "花旗中国(Citi China)". Ours uses document-specific entities to pre-train on raw news text, so with this global information, Ours also classifies "《辐射》(Radiation)" accurately as a MOV. Visualization. We use an example in the news dataset, "休顿很难鼓舞将士。(It is difficult for Hughton to encourage team members.)". Figure 6 uses BertViz (Vig, 2019) to visualize the last-layer attention patterns of "休(Hugh)" in this example. BERT only has a high attention score to itself, while Ours has relatively higher attention scores to all the tokens in the current entity "休顿(Hughton)", especially for the first attention head (in blue). This shows that Ours enables entity information to enhance the contextual representation. Conclusion We investigated an entity enhanced BERT pre-training method for Chinese NER. Results on a news dataset and two long-text NER datasets show that it is highly effective to explicitly integrate document-specific entities into BERT pre-training with a Char-Entity-Transformer structure, and our method outperforms the state-of-the-art methods for Chinese NER. A New-Word Discovery The MI-based new-word discovery scores each candidate n-gram by its mutual information and boundary entropies, E_L(w) = −Σ_{a∈A} P(a|w) log P(a|w) and E_R(w) = −Σ_{b∈B} P(b|w) log P(b|w), where E_L and E_R represent the left and right entropy, respectively, and w represents an n-gram substring.
A and B are the sets of words that appear to the left or right of w, respectively. Finally, we add the three values MI, E_L and E_R as the validity score of possible new entities, remove the common words based on an open-domain dictionary from Jieba (http://github.com/fxsjy/jieba), and save the top 50% of the remaining words as the potential input document-specific entity dictionary. B Details of the Datasets B.1 News Dataset Entity statistics. As listed in Table 6, the fine-grained news dataset consists of 10 entity types, including GAM (game), POS (position), MOV (movie), NAM (name), ORG (organization), SCE (scene), COM (company), GOV (government), BOO (book) and ADD (address). The four test domains have obviously different distributions of entity types, which are visualized by the gray scale of color in Table 6. B.2 Novel Dataset Data collection. We construct our corpus from a professional Chinese novel reading site named Babel Novel (https://babelnovel.com/). Unlike news, the novel dataset covers a mixture of literary styles, including historical novels and martial arts novels in the genres of fantasy, mystery, romance, military, etc. Therefore, unique characteristics of this dataset, such as novel-specific types of named entities, present challenges for NER. Annotation. Considering the literature genre, we annotate three entity types in addition to PER (person), LOC (location) and ORG (organization) in MSRA (Levow, 2006), namely (i) TIT (title), which represents the appellation or nickname of a person, such as "冥界之主(Lord of the Underworld)" and "无极剑圣(Sword Master)"; (ii) WEA (weapon), which represents weapons or special-purpose objects (e.g. "天龙战戟(Dragon Spear)" and "星辰法杖(Stardust Wand)"); and (iii) KUN (kung fu), which represents the name of martial arts such as "太极(Tai Chi)" and "忍术(Ninjutsu)". The annotation work is undertaken by five undergraduate students and two experts. All of the annotators have read the whole novels before annotation, which aims to prevent inconsistent labeling. In terms of the annotation process, each sentence is first annotated by at least two students, and then the experts select the examples with inconsistent annotations and correct the mistakes. The inter-annotator agreement exceeded a Cohen's kappa value (McHugh, 2012) of 0.915 on the novel dataset. Entity statistics. The statistics for the above six entity types are listed in Table 7. We can see that the entity distributions on the three test novels are similar, with only a few differences attributable to the topics of the novels. B.3 Financial Report Dataset Annotation. The annotation process is similar to that of the novel dataset. The inter-annotator agreement exceeded a Cohen's kappa value (McHugh, 2012) of 0.923 on the financial report dataset. Entity statistics. The detailed statistics for the financial report dataset are listed in Table 7. C Fine-grained Comparison The full results of the fine-grained comparisons on the news dataset are listed in Table 8. The news dataset has a total of 10 entity types, including GAM (game), POS (position), MOV (movie), NAM (name), ORG (organization), SCE (scene), COM (company), GOV (government), BOO (book) and ADD (address).
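The Appendix A procedure above (mutual information plus left/right boundary entropy, a common-word filter, and a top-50% cut) can be sketched in a few lines of Python. This is illustrative only: the excerpt does not give the exact MI formulation, so a standard pointwise-mutual-information-over-the-worst-split form is assumed, the Jieba open-domain dictionary is stood in for by a plain set of common words, and all function names are ours, not the paper's.

```python
# Illustrative sketch of MI-based new-word discovery: score candidate
# n-grams by MI + left entropy + right entropy, drop common words, keep
# the top 50% as the document-specific entity dictionary.
import math
from collections import Counter, defaultdict

def nwd_scores(chars, max_len=4):
    """Score candidate n-grams (2..max_len characters) from a character string."""
    total = len(chars)
    freq = Counter()
    left, right = defaultdict(Counter), defaultdict(Counter)
    for n in range(1, max_len + 1):
        for i in range(total - n + 1):
            w = chars[i:i + n]
            freq[w] += 1
            if n > 1:                       # record boundary neighbors
                if i > 0:
                    left[w][chars[i - 1]] += 1
                if i + n < total:
                    right[w][chars[i + n]] += 1

    def prob(w):                            # relative frequency, fine for ranking
        return freq[w] / total

    def entropy(neighbors):                 # boundary entropy of a neighbor counter
        s = sum(neighbors.values())
        return -sum(c / s * math.log(c / s) for c in neighbors.values()) if s else 0.0

    scores = {}
    for w in freq:
        if len(w) < 2:
            continue
        # MI over the worst (most probable) binary split: high when the
        # parts of w "stick together" more than chance.
        mi = min(math.log(prob(w) / (prob(w[:k]) * prob(w[k:])))
                 for k in range(1, len(w)))
        scores[w] = mi + entropy(left[w]) + entropy(right[w])
    return scores

def candidate_entities(chars, common_words, max_len=4):
    """Filter common words, then keep the top 50% by validity score."""
    scores = nwd_scores(chars, max_len)
    kept = sorted((w for w in scores if w not in common_words),
                  key=scores.get, reverse=True)
    return kept[: len(kept) // 2]
```

In practice the counts would presumably be estimated over the full pre-training text (the 48M-character novels or 26M-character reports), with the length and frequency cut-offs tuned per genre.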
2020-11-06T22:09:23.233Z
2020-11-01T00:00:00.000
{ "year": 2020, "sha1": "81a774a6935e3c1eaa44cd94b4e95a407baa7b6a", "oa_license": "CCBY", "oa_url": "https://www.aclweb.org/anthology/2020.emnlp-main.518.pdf", "oa_status": "HYBRID", "pdf_src": "ACL", "pdf_hash": "81a774a6935e3c1eaa44cd94b4e95a407baa7b6a", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
1055865
pes2o/s2orc
v3-fos-license
Effects of oral glucose intake on gastric myoelectrical activity and gastric emptying To investigate the effect of oral glucose intake on gastric motility, we measured gastric myoelectrical activity and gastric emptying under two test conditions: 1) glucose intake and 2) water intake in the same 10 healthy male volunteers (20 to 29 years old). Gastric motility was evaluated with cutaneously recorded electrogastrography (EGG) for 30 min both during fasting and after glucose or water intake, while gastric emptying was measured using the acetaminophen-absorption method. There were no significant changes in EGG dominant frequency after water intake, but the frequency increased significantly after glucose intake. A postprandial dip (i.e., a transient decrease in frequency immediately after food intake) was observed in 3 subjects after water intake and in 8 subjects following glucose intake. The EGG power ratio was significantly larger after glucose than water intake, with delayed gastric emptying in the former case. These results suggest that glucose is one of the components responsible for postprandial gastric motility. Introduction Various factors contribute to the onset of postprandial gastric motility patterns. Some studies in normal subjects (Stacher et al., 1990; Cunningham et al., 1991; Harris et al., 1991) have shown that the intake of a high-calorie or high-fat meal causes physiological changes including delayed gastric emptying and modulation of gastrointestinal motility. It is not clear whether there are differences in postprandial gastric motility patterns between low- and high-caloric intakes. Furthermore, the role of ingested nutrients such as glucose or fat in postprandial gastric motility is poorly understood. We therefore compared gastric myoelectrical activity and gastric emptying after oral intake of an equal volume of water and a glucose-containing solution. Subjects Studies were performed in 10 healthy male volunteers (20-29 years old). All subjects understood the purpose of this study and provided written informed consent. No subjects had metabolic or gastrointestinal disorders, and none had been taking any medications. The glucose and water intake tests were conducted on each subject. The temperature of both the glucose solution and the water was set at 10°C. The two independent tests were randomly performed on different days.
Experimental procedure Gastric motility was assessed with cutaneously recorded electrogastrography (EGG) and by measurements of gastric emptying using the acetaminophen-absorption method. After fasting for at least 4 hours, the EGG was recorded for 30 min in the supine position. The subjects then stood up and ingested 20 mg/kg of acetaminophen powder mixed with either 225 ml of water or 75 g/225 ml of glucose (TRELAN G75, Shimizu Pharmaceutical, Shimizu, Japan). Immediately following the intake, the subjects returned to the supine position and the EGG recording was repeated for a further 30 min. The EGG was measured using bipolar Ag-AgCl electrodes placed on the right and left midclavicular lines along the long axis of the stomach over the surface of the upper abdomen. The EGG signals were low-pass filtered with a cut-off frequency of 0.1 Hz, and recorded on an FM data recorder (MR-30, TEAC, Tokyo, Japan). The obtained data were sampled at 1 kHz using an analog/digital converter (ADX-98E, Canopus Electronics, Kobe, Japan). The power spectral density of the EGG was computed with a program using an autoregressive model. The following parameters were obtained from the EGG using autoregressive power spectral analyses and evaluated for each subject. 1. EGG dominant frequency: the frequency at which the power was highest within the range of 0.02-0.107 Hz (1.2-6.4 cpm) over an entire EGG recording. It has been suggested that the dominant frequency of the EGG reflects the frequency of the gastric slow waves (Familoni et al., 1991; Chen et al., 1994). 2. Percentage of normogastria: defined as the percentage of time during which normal 0.04-0.06 Hz (2.4-3.6 cpm) slow waves were present over the entire observation period. This parameter reflects the regularity of the gastric myoelectrical activity. An EGG frequency higher than 0.06 Hz (3.6 cpm) was defined as tachygastria and one slower than 0.04 Hz (2.4 cpm) was defined as bradygastria. 3. Power ratio: defined as the ratio of the EGG dominant power after to before water or glucose intake (i.e., postload power/preload power), where the dominant power refers to the power at the EGG dominant frequency. It has been suggested that changes in the EGG dominant power reflect gastric contractility (Smout et al., 1980; Hamilton et al., 1986; Chen et al., 1994). 4. Postprandial dip (PD): a transient frequency decrease that is usually seen in the EGG in normal subjects immediately after food intake (Geldof et al., 1986; Geldof et al., 1989; Kaneko et al., 1995). In this study it was determined by visual inspection. Venous blood (~10 ml) was collected at identical time points for subsequent measurement of plasma glucose and serum insulin and acetaminophen concentrations. The serum acetaminophen concentration was determined by fluorescence polarization immunoassay (TDX system, DAINABOT Co, Ltd, Tokyo, Japan). The degree of gastric emptying was expressed as the serum acetaminophen concentration 45 min after glucose or water intake. Statistical analysis Values of EGG parameters and serum acetaminophen concentration are expressed as mean ± SD. The changes in the EGG dominant frequency and the percentage of normogastria, as well as the comparisons of the EGG power ratio and serum acetaminophen concentration, were analyzed using the paired t-test. The significance of differences in the occurrence of a PD and normogastria was assessed using Yates' chi-square test. A probability of less than 0.05 (P<0.05) was considered statistically significant.
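The EGG parameters defined above can be made concrete with a short sketch. Note the substitution: the study computed power spectra with an autoregressive model, while this sketch uses Welch's periodogram from SciPy for brevity; the sampling rate, window length and all names are illustrative assumptions (the EGG band lies below 0.11 Hz, so the 1 kHz recordings would be decimated heavily before analysis).

```python
# Sketch of EGG dominant frequency, percentage of normogastria, and power ratio.
import numpy as np
from scipy.signal import welch

FS = 2.0                # assumed post-decimation sampling rate (Hz), not from the paper
BAND = (0.02, 0.107)    # EGG band, 1.2-6.4 cpm
NORMO = (0.04, 0.06)    # normogastria band, 2.4-3.6 cpm

def dominant_freq_and_power(x, fs=FS):
    """Frequency and power of the spectral peak inside the EGG band."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 512))
    mask = (f >= BAND[0]) & (f <= BAND[1])
    i = np.argmax(pxx[mask])
    return f[mask][i], pxx[mask][i]

def percent_normogastria(x, fs=FS, win_s=120):
    """Share of fixed-length windows whose dominant frequency is 2.4-3.6 cpm."""
    n = int(win_s * fs)
    wins = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
    hits = sum(NORMO[0] <= dominant_freq_and_power(w, fs)[0] <= NORMO[1] for w in wins)
    return 100.0 * hits / len(wins)

def power_ratio(pre, post, fs=FS):
    """Postload dominant power divided by preload dominant power."""
    return dominant_freq_and_power(post, fs)[1] / dominant_freq_and_power(pre, fs)[1]
```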
Electrogastrography During the preload period, the EGG spectra in all 10 subjects contained a dominant frequency in the normal range of 0.04-0.06 Hz (2.4-3.6 cpm) in all tests. No significant changes in EGG dominant frequency or the percentage of normogastria were seen between before and after water intake. In contrast, a significant increase in EGG dominant frequency was observed after glucose intake (Table 1). All 10 subjects showed a higher EGG power ratio after glucose intake compared with that after water intake (Fig. 1). EGG power ratios associated with glucose intake were significantly higher than those with water intake. PD was observed in 3 subjects in the case of water intake, and in 8 subjects after glucose intake (Table 1). Gastric emptying Nine subjects showed a lower serum acetaminophen concentration after glucose intake compared with that after water intake (Fig. 2). There was a significant decrease in serum acetaminophen concentration after glucose intake compared with that after water intake, indicating that gastric emptying was delayed after glucose intake (Table 1). There was a significant inverse correlation between the change in serum acetaminophen concentration and the change in EGG power ratio from oral intake of water to that of glucose (Fig. 3). Plasma glucose and serum insulin showed maximal values at 30 min after glucose intake. Discussion The present study showed that there was a difference in the effects on gastric myoelectrical activity and gastric emptying between oral glucose and water intakes. After glucose intake, a slight increase in the dominant frequency of the EGG and a delay in gastric emptying were observed. These results suggest that glucose is one of the components responsible for postprandial gastric motility. In this study, we used the EGG and assessed gastric emptying by the acetaminophen-absorption method to quantify gastric motility. EGG involves recording gastric myoelectrical activity from abdominal surface electrodes (Geldof et al., 1986; Alvarez, 1992; Chen et al., 1993), and has recently been attracting attention as a simple, non-invasive method for investigating gastric motility in both fasting and postprandial states. The stomach itself has a myogenic mechanism essential for determining its movement. It has been suggested that the EGG accurately reflects gastric slow waves (Familoni et al., 1991; Chen et al., 1994) and that an increase in the power of the EGG reflects contraction-related spike potentials (Smout et al., 1980; Hamilton et al., 1986). The acetaminophen-absorption method is a reliable and simple test for evaluating gastric emptying. Heading et al. (1973, 1976) reported a significantly negative correlation between the half-time of gastric emptying measured with the scintiscanning technique and the serum acetaminophen concentrations at 30 min and 60 min. Harasawa et al. (1979) reported an inverse correlation between the half-time of gastric emptying and the acetaminophen concentration at 45 min. Subsequent work assessed the clinical applications of this method by measuring gastric emptying in patients with peptic ulcers (Harasawa et al., 1979; Kamiya et al., 1998), gastritis and gastric cancer (Tatsuta et al., 1990).
From the present study, it appears that oral glucose intake caused a change in the gastric pacemaker that generates slow waves. A slight but significant increase in the EGG dominant frequency was observed following glucose intake, whereas this was not seen after oral water intake. In some studies (Geldof et al., 1986; Koch et al., 1987; Geldof et al., 1989; Chen et al., 1991; Kobayashi et al., 1997), a slight but significant increase in EGG dominant frequency was observed after food intake in healthy subjects. The changes in EGG dominant frequency after glucose intake are similar to the responses to food intake. In contrast, some studies (Verhamgen et al., 1998; Macintosh et al., 2001) have shown that the EGG frequency remained unchanged after intraduodenal glucose infusion in healthy young subjects. The mechanism responsible for the increase in EGG dominant frequency during the postprandial period is not understood. Moreover, in the present study the EGG power ratio was significantly greater after oral glucose intake than after oral water intake. Postprandial increases in EGG power have been reported previously (Smout et al., 1980; Stern et al., 1989; Chen et al., 1991; Kaneko et al., 1995; Kobayashi et al., 1998; Riezzo et al., 2000; Chou et al., 2001), and Macintosh et al. (2001) reported an increase in the EGG power ratio during and after intraduodenal glucose infusion in healthy young and old men. The increase in EGG power seems to be mediated by two factors: 1) the postprandial increase in gastric contraction and 2) gastric distension bringing the stomach closer to the recording electrodes. Since the volume of liquid ingested was the same in both experiments in our study, it can be inferred that while the increase in power after water intake (1.4 ± 0.3) is principally attributable to the physical expansion of the stomach, the larger power increase following glucose intake (2.2 ± 0.6) may be due to the addition of contraction-related spike potentials. PD, which is a transient decrease in frequency that occurs within a few minutes following ingestion of food (Geldof et al., 1986, 1989; Kaneko et al., 1995), was observed in 3 subjects following water intake and in 8 subjects after glucose intake. PD is usually found in normal subjects, and its onset is probably mediated by the physical distension of the stomach due to food intake. From these results, it is suggested that the onset of PD may be mediated by physical stimulation from the entry of food into the stomach and the ensuing distension of the stomach. However, PD was not observed in 7 normal subjects after water intake and 2 normal subjects following glucose intake. This finding points to the possibility that other factors may mediate this phenomenon. The physiological significance and the cause of the PD still need to be elucidated. Gastric emptying was delayed significantly after glucose intake compared with that following water intake. The factors that regulate gastric emptying consist of dilation of the fundus, peristalsis of the gastric body, contraction of the antrum and pyloric function. In this study, there was a significant inverse correlation between the reduced gastric emptying and the increased EGG power ratio with oral intake of glucose. It is suggested that the stomach does not distend much and the ingested water flows into the duodenal bulb immediately after water intake. In contrast, in the case of glucose intake, contraction of the gastric antrum and pyloric region increases, and the closure of the pyloric ring gives rise to the delayed gastric emptying.
These effects may be mediated by the small-intestinal nutrient feedback mechanism. Previous reports (Hebbard et al., 1996; Rayner et al., 2000) in normal subjects and diabetic patients have indicated that hyperglycemia increases gastric compliance. Others have reported (Verhamgen et al., 1998) an increase in basal pyloric pressure waves after intraduodenal glucose infusion. Reports in diabetic patients (Schvarcz et al., 1997; Rayner et al., 2001) have shown that hyperglycemia delays gastric emptying. Our data support these previous studies. Our results suggest that oral glucose intake causes changes in the gastric pacemaker activity, which is one of the factors that regulate gastric emptying. No data are available to explain whether these changes in gastric motility are due to the action of high blood glucose levels alone, to changes mediated by the nervous system, or to the action of humoral factors such as insulin and gastrin. The changes in gastric motility following oral glucose intake in the present study are thought to have been produced by the complex interaction of these humoral and nervous-system factors. Further study including other nutrients such as fat is needed to elucidate the effects of these factors. In conclusion, the changes in gastric myoelectrical activity and gastric emptying after oral glucose intake were similar to the patterns recognized in the postprandial state. It is suggested that glucose is one component responsible for postprandial gastric motility. Fig. 1. The power ratio of EGG in each individual with water or glucose intake. Fig. 2. Gastric emptying expressed as the serum acetaminophen concentration 45 minutes after water or glucose intake. Fig. 3. Relationship between the change in gastric emptying and the change in EGG power ratio. ∆Serum acetaminophen and ∆EGG power ratio were calculated by subtracting the values for glucose intake from the values for water intake for each subject. Table 1. Changes in EGG parameters and gastric emptying induced by an equal volume of fluid intake of water and glucose solution. Values are mean ± SD. EGG: electrogastrography, cpm: cycles per minute, D.F.: dominant frequency. *P<0.05 vs. water intake, **P<0.05 vs. water intake by chi-square test, ***P<0.01 vs. water intake.
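For reference, the paired comparisons reported in Table 1 follow a standard pattern; a minimal SciPy sketch is given below. The power-ratio arrays are placeholders (not the study's measurements), while the PD contingency table uses the reported 3/10 vs. 8/10 counts.

```python
# Sketch of the Table 1 statistics: paired t-test on within-subject EGG
# values, and a Yates-corrected chi-square test for PD occurrence.
import numpy as np
from scipy.stats import ttest_rel, chi2_contingency

# Placeholder per-subject power ratios (n = 10), not the study data.
power_ratio_water = np.array([1.2, 1.5, 1.1, 1.6, 1.3, 1.4, 1.7, 1.2, 1.5, 1.4])
power_ratio_glucose = np.array([2.0, 2.4, 1.8, 2.6, 2.1, 2.3, 2.9, 1.9, 2.2, 2.1])

t, p = ttest_rel(power_ratio_glucose, power_ratio_water)
print(f"paired t-test: t = {t:.2f}, P = {p:.3f}")

# PD occurrence: 3/10 after water vs. 8/10 after glucose (rows: condition;
# columns: PD present, PD absent). correction=True applies Yates' correction.
table = np.array([[3, 7],
                  [8, 2]])
chi2, p_pd, dof, _ = chi2_contingency(table, correction=True)
print(f"Yates chi-square: chi2 = {chi2:.2f}, P = {p_pd:.3f}")
```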
2018-04-03T03:10:13.036Z
2004-10-01T00:00:00.000
{ "year": 2004, "sha1": "c0395212924e879b2ecde48548865a1d95622582", "oa_license": "CCBYNC", "oa_url": "https://www.jstage.jst.go.jp/article/jsmr/40/4,5/40_4,5_169/_pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "c0395212924e879b2ecde48548865a1d95622582", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
89472648
pes2o/s2orc
v3-fos-license
Multilocus phylogeny reveals Gibsmithia hawaiiensis (Dumontiaceae, Rhodophyta) to be a species complex from the Indo-Pacific, with the proposal of G. eilatensis sp. nov. Gibsmithia hawaiiensis is a peculiar red alga characterized by furry gelatinous lobes arising from a cartilaginous stalk. The species has been recorded from tropical reef systems throughout the Indo-Pacific. A multilocus phylogeny (UPA, rbcL, COI-5P) of 36 specimens collected throughout the species' distribution range showed high genetic diversity at the species level. Two major groups were identified, each consisting of multiple lineages. Genetic variability was low in the Hawaiian Islands and the northern Red Sea and high in the Western Indian Ocean and the Coral Triangle, where lineages overlap in distribution. Genetic distances suggest that G. hawaiiensis represents a complex of five cryptic species, with no difference observed in the external morphology corresponding to the separate lineages. Anatomical and reproductive differences were observed at the microscopic level for the lineage from the Red Sea, which is here described as G. eilatensis sp. nov. The geographic range of the species complex is here expanded to include Madagascar, the Red Sea and the Indo-Malay region, and the generitype seems endemic to the Hawaiian Islands. Algal diversity on coral reef systems is discussed from a conservation perspective using G. hawaiiensis as an example. Introduction The genus Gibsmithia (Dumontiaceae, Gigartinales) was erected in 1963 (Doty) for a marine red alga characterized by an ephemeral cluster of gelatinous lobes borne on a perennial cartilaginous stalk. There are currently four species within the genus, G. hawaiiensis Doty 1963: 458 (type of the genus), G. dotyi Kraft & R.W. Ricker 1984: 433, G. larkumii Kraft 1986: 439, and G. womersleyi Kraft & Ricker ex Kraft 1986: 441, each with well-defined diagnostic features based on their external morphology (Kraft 1986). G. hawaiiensis stands out from its congeners by having cortical filaments that extend beyond the surface of the gelatinous lobes, giving the plant an overall furry appearance (Fig. 1). Nevertheless, the above four species of Gibsmithia share a female reproductive system consisting of separate carpogonial and auxiliary-cell filaments, and isomorphic tetrasporophytes (Kraft 1986). Gibsmithia is widespread throughout the Indo-Pacific, with G. hawaiiensis presenting the largest distribution range (Fig. 2), occurring on coral reefs from South Africa to French Polynesia and the Hawaiian Islands (Guiry & Guiry 2016). Recent records from scattered localities suggest that the species is more common than once thought (Abbott 1999) and might have been overlooked by collectors because of (1) its ephemeral occurrence in the subtidal, (2) low population abundances, and (3) its resemblance to soft corals. The disjunct distribution pattern of the species, its short-lived reproductive structures (Huisman et al. 2007), and the low survival rate of the soft and gelatinous thallus during long periods of drift (Millar and Kraft 1984) suggest that the species may be a poor disperser. Such a lack of connectivity between populations, however, has not been reflected in external morphological differences between samples from distant geographical localities. Recent genetic studies have demonstrated the high degree of cryptic diversity among red algae (e.g., Freshwater et al. 2010; Payo et al. 2013), particularly in groups with simple morphologies like the Dumontiaceae (Saunders 2008).
Phylogenetic assessments of other members of the family that included Gibsmithia hawaiiensis and G. dotyi suggested the monophyly of the genus (Sherwood et al. 2010; Clarkston and Saunders 2012). The present study is the first to focus on assessing the molecular diversity within the genus, with extensive sampling of its most common species, G. hawaiiensis, throughout the Indo-Pacific. Materials and Methods Seventeen specimens were collected by snorkeling or SCUBA diving during collection trips to Israel, Indonesia, Malaysia and Hawaii, from 2005 to 2011. Samples were preserved in a 5% formalin-seawater solution, pressed or air-dried on herbarium sheets, with a subsample dried in silica gel or kept in 96% ethanol. Twelve Hawaiian specimens were obtained from the personal collections of Dr. I.A. Abbott, and ten additional samples were obtained from herbaria (Table 1). Identification of the specimens was based on the identification key provided in Kraft (1986). Morphological analysis. Permanent slides were prepared from squashes using 1% aniline blue (Tsuda & Abbott 1985), mounted in 50% corn syrup-water with the addition of a few drops of phenol. Anatomical observations were made using an Olympus BX60 light microscope (Tokyo, Japan) and cells were measured (presented as length × width) using an eyepiece with scales coupled to the microscope. Pictures were taken with a Canon EOS Rebel T2i digital camera (Tokyo, Japan) connected to the microscope, using the software EOS Utility 2.8.1 (Tokyo, Japan). All images were edited in Photoshop 6.0.1 (San Jose, USA). Multi-gene alignment and phylogenetic analysis. Sequences were aligned manually in MacClade 4.08a (Maddison and Maddison 2005) and were unambiguous. Phylogenetic datasets were constructed for each marker individually and for all markers concatenated, with gaps treated as missing data. The model for phylogenetic analysis was determined using PartitionFinder v1.1.1 (Lanfear et al. 2012). The non-protein-coding ribosomal DNA UPA was treated as a whole and the two protein-coding genes, rbcL and COI-5P, were divided into the three codon positions. All possible groupings of partitions were tested. On the basis of the corrected Akaike information criterion, the partition chosen was a six-part partition based on codon position in rbcL (partitions 1-3) and COI-5P (partitions 4-6), with UPA included in the same partition as the second codon position of COI-5P (partition 5). The models determined for each partition were: (1) GTR+G, (2) GTR+I+G, (3) GTR+I+G, (4) GTR+I+G, (5) GTR+G and (6) GTR+I+G. The partitioned dataset was analyzed using maximum likelihood (ML) and Bayesian inference (BI) methods. The ML analysis was performed using RAxML v7.2.8 (Stamatakis, 2006) with the model GTR+I+G and 1000 searches with a random starting tree and the default, rapid hill-climbing algorithm. Nodal support values were determined using nonparametric bootstrapping with 1000 replicates. BI analysis was implemented in MrBayes v3.1.2 (Ronquist & Huelsenbeck 2003) with default priors and the models of evolution previously specified. Markov chain Monte Carlo searches consisted of two independent runs of four chains, with three heated chains and one cold chain, for 10,000,000 generations, sampled every 100 generations. Likelihood values were plotted against generation number to determine the burn-in (5,000 trees), and a majority-rule consensus tree of the remaining trees was then obtained.
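The distance analyses reported in the Results below (uncorrected pairwise distances and the barcoding-gap ratio of Freshwater et al. 2010) can be sketched as follows. This is an illustrative reimplementation under stated assumptions: sequences are pre-aligned, sites with gaps or Ns are skipped, zero intraspecific divergence is replaced by one nucleotide change over the marker length (e.g., 0.27% for 368 bp of UPA), and all function names are ours.

```python
# Sketch: uncorrected p-distances from an alignment and the barcoding-gap
# ratio (minimum interspecific over maximum intraspecific divergence).
from itertools import combinations

def p_distance(a, b):
    """Uncorrected distance (%) over sites where both sequences have a base."""
    pairs = [(x, y) for x, y in zip(a, b) if x not in "-N" and y not in "-N"]
    diffs = sum(x != y for x, y in pairs)
    return 100.0 * diffs / len(pairs)

def barcoding_gap(seqs, lineage, marker_len):
    """seqs: {id: aligned sequence}; lineage: {id: lineage label A-E}."""
    intra, inter = [], []
    for i, j in combinations(seqs, 2):
        d = p_distance(seqs[i], seqs[j])
        (intra if lineage[i] == lineage[j] else inter).append(d)
    max_intra = max(intra)
    if max_intra == 0.0:                 # replace zero by one nucleotide change
        max_intra = 100.0 / marker_len   # e.g. 100/368 = 0.27% for UPA
    return min(inter) / max_intra

# Usage: barcoding_gap(upa_seqs, lineage_of, marker_len=368)
```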
Barcoding gap values were calculated by dividing the minimum interspecific sequence divergence by the maximum intraspecific sequence divergence (Freshwater et al. 2010). Intraspecific values of zero were replaced by a value corresponding to a single nucleotide change, i.e., 0.27% (1 in 368 bp) for UPA and 0.15% (1 in 665 bp) for COI-5P. Results The complete sequence matrix is summarized in Table 1. Although it was not possible to sequence all three molecular markers for all individuals, representatives of each clade were observed in the individual gene phylogenies (Electronic Supplementary Material, Figs S1-S3). When analyzed separately, the different loci resulted in very similar tree topologies with ML and BI. The multilocus analyses revealed unexpectedly high genetic diversity among samples of G. hawaiiensis. Tree topologies were similar among phylogenetic reconstructions, forming a well-supported monophyletic clade with two major monophyletic groups, each encompassing multiple lineages (Fig. 3). One group contained a lineage from the type locality, the Hawaiian Islands (lineage A), a lineage from the Central Indo-Pacific and Polynesia (lineage B), and a third from the Western Indian Ocean (lineage C). The second group contained a lineage from the Indo-Malayan region (lineage D) and another from the Red Sea (lineage E). Posterior probability values from the BI analysis for each of the lineages A-E were 0.89 or higher, except for the poorly sampled Western Indian Ocean lineage (BI PP = 0.60). ML bootstrap values were also moderate (above 70) for most branches, failing to resolve the relationship between lineages B and C, and within lineage C. Of the five lineages identified in G. hawaiiensis, the lineages from isolated regions such as Hawaii and the Red Sea exhibited lower genetic diversity, while the lineages from the Western Indian Ocean and the Coral Triangle presented greater variability. In the Indo-Malay region, lineages B and D, each from a different major genetic group, overlapped in their distribution. Uncorrected pairwise distances within lineages of G. hawaiiensis ranged from 0.0 to 0.5% for UPA (Table S1), 0.0 to 1.1% for rbcL (Table S2), and 0.0 to 2.6% for COI-5P (Table S3). Distances between lineages ranged from 0.3 to 2.2% for UPA, 0.8 to 4.1% for rbcL, and 3.2 to 9.0% for COI-5P. Similar values have been reported for UPA (e.g., Clarkston & Saunders 2012), rbcL (e.g., Gabriel et al. 2009), and COI-5P (e.g., Le Gall & Saunders 2010) between species of other red algal genera. The exception observed for COI-5P, with genetic divergences within lineage B above 0.75%, has also been reported for other taxonomic groups (e.g., 2.86% for Callophyllis edentata Kylin 1925: 34; Clarkston & Saunders 2013). Although genetic distances between G. hawaiiensis lineages were higher than the values used to separate species, those distances were lower than the divergence values between those lineages and G. dotyi: 1.6 to 2.4% for UPA, 3.9 to 7.3% for rbcL and 9.7 to 11.6% for COI-5P. The minimum between-lineage divergence was always greater than the maximum within-lineage divergence for all markers, and is here presented as a barcoding gap (Freshwater et al. 2010). The calculated gap ranged from 1.0x to 7.0x for UPA, 1.55x to 6.67x for rbcL, and 1.23x to 60x for COI-5P (Tables S1-S3). The genealogical concordance between multiple unlinked loci (Fig. 3) and the high genetic distances separating the lineages indicate that G. hawaiiensis is a complex of five cryptic species, following the definition of Bickford et al. (2007), i.e.
species that have been classified as a single nominal species because they are 'at least superficially morphologically indistinguishable'. Differences in external morphology such as frond color, shape, size or branching were linked to the environmental conditions under which the specimen was found (Figs 1, 4-5), and did not correspond to differences between genetic lineages. Examples of this phenotypic response to environmental conditions are: long lobes with pink extremities when shaded under a coral wall (Figs 1A, 1D, 4C); dark pink fronds with short branches when inserted between coral tips (Figs 1B, 4B); translucent fronds with long "hairs" when attached to a coral surface (Figs 1C, 1E, 4A); small fronds with globular lobes when attached to dead coral fragments partially buried in the sand (Figs 1F-H). The size, branching and number of growth rings on the stalk were independent of the frond variation and are most likely age related, since the stalk is perennial. After an extensive histological study, the only lineage for which all reproductive phases were observed was the one from the Red Sea. Therefore, based on a complete suite of evidence (genetic, morphological and reproductive data), this lineage is elevated to species rank and here described as G. eilatensis sp. nov. Further sampling is required for a conclusive assessment of anatomical and reproductive features to characterize the other genetic lineages within the G. hawaiiensis complex. Gibsmithia hawaiiensis Doty (Figure 4) TYPE LOCALITY: Honolulu, Oahu, Hawaiian Islands. GEOGRAPHIC DISTRIBUTION: The Hawaiian Islands. Based on the present study, G. hawaiiensis presents a much narrower distribution range than previously thought, being restricted to the type locality. Records of the species from outside the Hawaiian Islands await further study. NOTE: The species has been described and illustrated in detail by Doty (1963); descriptions are also given by Abbott (1999) and Kraft (1986). Gibsmithia eilatensis sp. nov. ECOLOGY: Plants growing individually, subtidally to 25 m depth, usually in cavities of coral colonies in reef environments. G. eilatensis was the only species of gelatinous red algae observed in the area during the collection period (mid-June to late July). Signs of herbivory or senescence were observed in specimen DG219. GEOGRAPHIC DISTRIBUTION: G. eilatensis was collected all along the shores of Eilat in the Gulf of Aqaba (Red Sea) and its occurrence outside this area is unknown. DESCRIPTION: Thalli of furry gelatinous clusters of unbranched to pseudodichotomously branched blush, pink to white lobes, with an unbranched cartilaginous stalk and a round, dark to light pink holdfast. Medullary filaments abundant, colorless, of elongate cells subtending assimilatory filaments comprised of main percurrent filaments and straight cortical filaments of 20-35 rectilinear to semispherical cells, sparingly branched laterals, and thin rhizoidal filaments growing inwards into the thallus, intertwined with medullary filaments. Seirospore filaments present. Plants dioecious, with isomorphic tetrasporophytes.
Carpogonial branches borne along the middle portion of cortical filaments, straight, 5-(8-10)-15 cells long, with the terminal 5-7 cells modified; basal cells of carpogonial branches occasionally bearing short unbranched vegetative laterals; conical carpogonium cut off by an oblique division, off-centered on a large hypogynous cell; subhypogynous cell smaller and rounder than the cells flanking it; trichogyne straight, elongate; the hypogynous cell in functional carpogonial branches extends laterally, producing a bulge. Auxiliary cell branch 13-(19-20)-24 cells long, with 3-(5)-7 modified cells, rounder than the surrounding cells, darkly staining; basal 3-10 cells with unbranched filaments surrounding carposporangia; 4-12 terminal cells rectilinear, decreasing in size towards the apex. Connecting filaments septate, often diploidizing multiple (>5) nearby auxiliary cells. Gonimoblast initials cut off from a bulge at the junction of the outgoing connecting filament with the auxiliary cell. Spermatangia borne radially on 3-(5)-6 terminal cortical cells, forming corncob-like structures; spermatangial heads borne in a similar position to the tetrasporangia. Tetrasporangia sessile, isolated, usually borne on the adaxial side of percurrent filaments, terminal or lateral on short lateral branches, in patches along the upper part of the gelatinous branches, 10-20 cells below the surface; tetrasporangia decussately to cruciately divided. Habit and vegetative morphology. Thalli consist of furry, gelatinous clusters of unbranched to pseudodichotomously branched lobes (Fig. 6) that are predominantly blush in color (Fig. 4), occasionally pink to whitish (Fig. 1A). Cortical filaments exserted from the lobes' surface are prominently visible underwater (Fig. 1A). The unbranched cartilaginous stalk (Figs 4, 6A) is attached to the substratum by a round, dark to light pink holdfast. Cortical filaments are cut off alternately and sparingly from widely divergent main assimilatory filaments, referred to as percurrent filaments, and consist of rectilinear to spherical cells decreasing in size towards the surface, (5-166.25) × (3.75-11.25) µm (Figs 6B-C). Abundant colorless medullary cells are elongated and transition into distal assimilatory percurrent filaments (Fig. 6C), whose cells measure 36.25-300 × 3.75-11.25 µm. Cells of lower cortical filaments cut off narrow rhizoidal filaments that connect to nearby rhizoidal cells (Fig. 6B). Hair-like structures (Fig. 6D) of variable length, (12.5-120) × 1.25 µm, occasionally extend from the apical cells of cortical filaments, staining darkly with aniline blue when short and becoming almost colorless when long. Rhizoidal filaments cut off from lower cortical cells grow inwards into the thallus, where they become intertwined with medullary filaments. Straight, small-celled, scarcely branched and unbranched seirospore filaments are present (Fig. 6E) in tetrasporophytes. Reproductive morphology. Thalli are dioecious, with isomorphic tetrasporophytes. Female pre-fertilization stages are scattered among elongated, narrow assimilatory filaments (Fig. 6C). Carpogonial branches are formed laterally on assimilatory filaments, replacing normal vegetative filaments (Figs 6F-G), in which the 5-9 upper cells transform into spherical cells while the subtending lower cells remain narrow and elliptic like other vegetative cells (Figs 6F-G). Transformed cells of the carpogonial filaments remain unbranched, while the basal and lowermost untransformed cells may cut off short unbranched lateral cell strands growing distally and obliquely (Figs 7A-B).
The unfertilized carpogonium is conical with a long, straight trichogyne (Figs 6F-G, 7A-B), separated by an oblique division from the hypogynous cell. The hypogynous cell extends laterally and obliquely (Figs 6F-G). The subelliptical subhypogynous cell is very small (Figs 6F-G, 7A), with the cell subtending it the largest cell of the carpogonial branch. Auxiliary cell filaments (Fig. 7C) are homologous in position and origin to the carpogonial branches and cortical filaments. Three to seven transformed intercalary cells of an auxiliary cell filament (Figs 7C-E) become darkly staining, enlarged and roundish, and flank, below and above, an auxiliary cell that remains small, cytoplasm-poor and colorless (Figs 7D-F). Some of these dark cells, or untransformed cells below them, bear unbranched laterals growing distally and obliquely (Fig. 7C). After presumed fertilization, the carpogonium produces a lateral extension that follows the orientation of the expanding hypogynous cell (Figs 7A-B). Upon reaching an auxiliary cell, an unsegmented, hyaline incoming connecting filament connects laterally to the upper side of the auxiliary cell (Fig. 7D). It is assumed that during this process a product of the fertilization nucleus is transferred from the connecting filament to the auxiliary cell. Following this diploidization of an auxiliary cell, the incoming connecting filament continues to grow as an outgoing connecting filament (Fig. 7D). When reaching another auxiliary cell, the former outgoing filament acts as a new incoming filament, developing cross walls at the site of a new diploidization (Fig. 7E) that lead to the formation of a short segment that partly fuses with the auxiliary cell and then continues as an outgoing connecting filament (Fig. 7E). A single roundish gonimoblast initial (Figs 7E-F) is cut off from the short segment formed at the point of contact between the auxiliary cell and the connecting filament. The outgoing connecting filament continues on to reach nearby auxiliary cells and, before reaching a new auxiliary cell, develops a cross wall and forms a short segment that contains part of the divided fertilization nucleus, which fuses with the auxiliary cell (Figs 7E-F). Following the formation of the first gonimoblast initial, the fertilization nucleus in the generative segment cuts off additional gonimoblast cells, each of which continues to divide laterally and radially, resulting in small clusters of cells (Figs 7G-H). The issuing of a connecting filament primordium from the carpogonium was observed only once (Fig. 7B), prior to its fusion with the hypogynous cell and cells below the subhypogynous cell. Mature auxiliary cell branches are 13-(19-20)-24 cells long (Figs 7C-H). The lowermost 3-10 cells of the auxiliary cell filament typically cut off short-celled unbranched vegetative filaments that surround the gonimoblast cells (Figs 7G-H) as they mature into carposporangia. The upper 4-12 terminal cells of the auxiliary cell filament are rectilinear, decreasing in size toward the apex (Figs 7C, 7E, 7H). Gonimoblast cells resulting from a single diploidization event mature gradually in a synchronized fashion, resulting in same-sized cells (Fig. 7H). Mature carposporophytes were not observed. Young clusters of spermatangia are formed on terminal and lateral cells of assimilatory cortical filaments on dioecious gametophytes (Fig. 7I). The spermatangial initials are cut off radially as multiple small protrusions on 3-(5)-6 consecutive cells in a single filament.
When the lateral cell filament is 7 cells long, the 6-celled spermatangial branch is pedicellate (Figs 8A-B); if 6 cells long, the corncob-like structure is sessile; and if more than 6 cells long, the male reproductive structure is terminally positioned on a lateral branch (Fig. 7I). Mature spermatangial heads are oblong and borne in a similar position as the tetrasporangia on cortical filaments. Spermatangia are abundant, measuring (2-2.5) × (1.5-2) µm (Fig. 8B). Tetrasporangial initials (Figs 8C-D) are cut off unilaterally in short series, either obliquely from the upper side of cells on short laterals of cortical filaments or terminally on these short laterals, in the upper part of the gelatinous branches. Tetrasporangia are usually decussately divided (Fig. 8E), sometimes cruciate (Fig. 8C), (15-18.75) × (11.25-13.75) µm. NOTE: The present description was based on a small number of samples found after extensive searches along the coast of Eilat. Although all three life history phases were observed for G. eilatensis, connecting filament formation and mature cystocarps were not observed in our extensive squash preparations. Unlike those of G. hawaiiensis, cortical filaments in G. eilatensis are rarely cut off oppositely from the main percurrent filaments and are usually unbranched or sparingly alternately branched (Table 2). Seirospores are found in chains as in G. hawaiiensis, but occasionally present short branchlets. Carpogonial branches bear an enlarged hypogynous cell and a very small subhypogynous cell, while all cells of the carpogonial branch are similar in size and shape in G. hawaiiensis. Auxiliary cell branches are also composed of similarly shaped modified cells in G. hawaiiensis, while the auxiliary cell is smaller in G. eilatensis. The gonimoblast initials originate from the point of junction between the connecting filament and the auxiliary cell, contrasting with the generitype, where gonimoblasts are initiated from the auxiliary cell. Tetrasporangia are mostly borne on short lateral branches on inner cells of cortical filaments, while in G. hawaiiensis they have a 2-3-celled pedicel. Discussion Based on an unreported number of specimens, Karam-Kerimian (1976) suggested the existence of more than one species of Gibsmithia with exserted filaments in French Polynesia, but failed to indicate the reasons for the species distinctions. This suggestion was refuted by Kraft (1986), who interpreted the variability in anatomy as plastic phenotypic traits. The present data confirm that G. hawaiiensis is in fact a species complex presenting high cryptic diversity, as previously reported in other members of the Dumontiaceae by Saunders (2008), and overall highlight the importance of molecular analyses when assessing variation within this family. The newly reported and sequenced samples of the G. hawaiiensis complex presented here extend the known distribution range of the group (Fig. 2). Latitudinally, the distribution is extended to the north by the first Gibsmithia record for the Red Sea (Fig. 1A), here described as G. eilatensis sp. nov. In addition, the occurrence of the species complex in Raja Ampat (Indonesia; Fig. 1F), Kepulauan Seribu (Indonesia; Fig. 1B), and East Sabah (Malaysia) confirms the provisional records of the alga for these areas as proposed by Atmadja and Prud'homme van Reine (2010) and Draisma (2012). The species complex is also reported for the first time for Madagascar, the North Moluccas (Indonesia; Fig. 1C) and the Visayas (Philippines). Unpublished data (S.G.A. Draisma, pers.
obs.) also suggest the presence of the complex in Bali, North Sulawesi (Fig. 1H), East Kalimantan, East Nusa Tenggara (Indonesia), East Johor (Fig. 1E), Pulau Labuan (Fig. 1G), West Sabah (Fig. 1D) and North Sabah (Malaysia), and Mindanao (Philippines), further extending the present distribution. Records of the species complex in other (sub)tropical locations within the Indo-Pacific are likely correct given the alga's unmistakable appearance, but accurate assessments at the species level will require further genetic studies. Although the remaining three lineages of the complex await further study before each new species can be described, we conclude that the generitype is restricted to the Hawaiian Islands, raising the number of endemic marine algal species in the archipelago to 57 out of a total of 519 species (Tsuda 2014). Besides G. eilatensis, which is restricted to the Red Sea, most of the species yet to be described are also endemic to subregions within the Western Indian Ocean and the Coral Triangle. Aside from the species belonging to the Gibsmithia hawaiiensis complex, the other congenerics present an overall non-hairy appearance and can be clearly discriminated from each other by the branching pattern of the gelatinous thallus (Kraft 1986): rosette clusters of short flattened branches in G. dotyi, subdichotomous long cylindrical branches in G. womersleyi, and very irregularly branched lobes in G. larkumii. The unique combination of gelatinous branches growing on cartilaginous stalks is common throughout the genus except in G. larkumii, in which the gelatinous thalli are attached to the substratum only by cartilaginous discs (Schils & Coppejans 2002). No anatomical variation has been previously reported in the few records of G. larkumii (Kraft 1986; Schils & Coppejans 2002; N'Yeurt & Payri 2010) and G. womersleyi (Kraft 1986; Womersley 1994). Interestingly, 'consistent habit differences' have also been reported within G. dotyi since its original description; Kraft (1986) suggested the existence of a separate species but refrained from making the separation since the diagnostic features were not consistently present in the small sample size. Genetic analyses of G. dotyi throughout its distribution range are needed to assess the species boundaries and may possibly uncover cryptic diversity as reported here for G. hawaiiensis. This is the first study focusing on the molecular phylogeny of tropical members of the Dumontiaceae, since all 17 genera in the family primarily occur in cool- and cold-temperate waters, except for Gibsmithia and Dudresnaya (Kraft, 1986). The unsuspected species diversity reported in the present study suggests the existence of a larger number of taxa, and therefore the family might be underrepresented in checklists of tropical algae worldwide. Rare and cryptic species that remain undescribed are at the greatest risk of extinction (Brodie et al. 2009). Recognizing the high genetic diversity within the G. hawaiiensis species complex highlights the unknown algal diversity still to be discovered, as emphasized by De . Although seaweeds are rarely subject to specific environmental protection laws, coral reefs usually are, and therefore protecting this alga's habitat enables the species to adapt to the environment as it changes (Brodie et al. 2009). Little is known about this alga's resilience to environmental fluctuations. Although the stalk is persistent, the reproductive structures grow inside the sensitive gelatinous branches (Huisman et al. 2007).
Unraveling the genetic diversity of the G. hawaiiensis complex and determining its distribution pattern is a first step towards understanding the evolutionary history and biogeography of these red algae. The recognition of its lineages as new species in future studies would help resolve the taxonomic diversity of this complex and contribute to accurate marine biodiversity assessments that can guide marine conservation strategies.
2018-12-30T00:08:50.633Z
2016-09-23T00:00:00.000
{ "year": 2016, "sha1": "f172bb08f1ef5c6b21743f7445010784842eb034", "oa_license": "CCBY", "oa_url": "http://www.vliz.be/imisdocs/publications/307074.pdf", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "e6ab3ce182cdf1bd9fdafb2bb5fdaef25d12fd3d", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology" ] }
224819416
pes2o/s2orc
v3-fos-license
The Politics of Law of Pancasila-based Democracy in Indonesia as the World's Third Largest Democracy The Indonesian Constitution explicitly states that Indonesia is a country that adopts a democratic political system. In the practice of managing the state, the democracy that takes place in Indonesia has always changed following the development of the Indonesian constitutional system. The aim of this research is to investigate the development of the politics of law of democracy in the world's third largest democracy and the ideal democratic system for Indonesia at this time. The research method used is normative juridical, with descriptive analytical research specifications and secondary data. Data collection was carried out through a literature study, and the results were analyzed by qualitative methods. The results showed that the ideal democracy for the Indonesian state is Pancasila democracy as stated in the Preamble to the Constitution, namely democracy led by the wisdom of deliberations among representatives, with sovereignty in the hands of the people and implemented according to the Constitution. Introduction In state theory, democracy is defined as a form of state organization in which the government is carried out by the people, as opposed to a monarchy or an oligarchy (Wahyono, 1989). In this regard, it can be said that Indonesia is a country that embraces a democratic political system because its constitution explicitly states that sovereignty is in the hands of the people. Democracy is a term that is loaded with meaning and interpretation. One thing that is certain is that its understanding is closely linked to the social system that supports it (Muladi, 1997). Thus, it would appear that, in addition to containing universal elements or a common denominator, democracy also contains contextual content inherent in a particular social system, or cultural relativism. In this regard, it is often said that perhaps no single word has been given more meanings than democracy. In Indonesia, there have been several changes in the democratic system in line with the development of Indonesian state administration (Saraswati et al., 2020). Broadly speaking, the development of state administration in Indonesia can be divided into the periods after independence, the era of the New Order government, and the reform order (Parasong, 2014). According to Hidayat (2006), democracy in Indonesia can be divided into parliamentary democracy (1945-1949), guided democracy (1959-1966), Pancasila democracy (1966-1998), and democracy in the reformation era (1998-present). It can be seen that democracy in Indonesia is strongly influenced by the development of the constitutional system in Indonesia. Furthermore, Mahfud (2009) states that the politics of law refers to legal policy, or official policy lines regarding the law that will be enforced, either by making new laws or by replacing old laws, in order to achieve the goals of the country. The changes in democracy in Indonesia are closely related to the politics of law that applied when each form of democracy was implemented in its period (Mahfud, 2012). The politics of law is manifested in changes in law, both in the form of making new laws and in replacing old laws (Budiardjo, 2008).
By looking at the development of democracy in Indonesia from independence until now, this study seeks to investigate the development of democracy and Indonesian state administration from the perspective of the politics of law that now prevails in Indonesia. The central question is the suitability of the political system and state administration for a multicultural Indonesia. Theoretically, the adopted democratic and state administration system should ultimately lead to the realization of the ideals of the country, namely creating a just and prosperous society based on the Pancasila and the Constitution of the Republic of Indonesia. Methods The method of approach in this study is normative juridical, an approach that uses a legal positivist concept which holds that law is identical with written norms created and enacted by authorized institutions or officials (Marzuki, 2003). In addition, this concept also views law as a normative system that is autonomous, closed and independent of people's lives (Soemitro, 1988). In terms of research specifications, this research is descriptive analytical, namely describing, depicting or revealing data that are relevant to the problem (Alwasilah, 2008). Based on the types and methods of data collection, the data in this study are secondary data. The secondary data consist of primary legal materials, in the form of laws and regulations, including the constitutions that were and are currently in force in Indonesia; secondary legal materials, in the form of books and literature related to the problem under study; and tertiary legal materials, in the form of internet sources. In accordance with the type of data, the data collection method used is library research. Data analysis in this study was carried out with qualitative data analysis. Qualitative methods are research procedures that produce descriptive data in the form of written or spoken words from people and observable behavior (Moleong, 2014). All the data needed were processed and arranged systematically, so as to produce conclusions in accordance with the problems and research objectives, delivered in a descriptive form. History of Indonesian Democracy and Constitution The development of democracy in Indonesia is closely related to the development of Indonesian state administration, which can be divided into five main phases. First, presidential democracy (August 17, 1945-November 14, 1945). Since Indonesia became independent and sovereign as a country on August 17, 1945, the nation's founders, through the 1945 Constitution which was passed on August 18, 1945, determined that the Unitary State of the Republic of Indonesia adheres to democracy, where sovereignty is in the hands of the people and is fully implemented by the People's Consultative Assembly. These provisions can be seen in the formulation of Article 1 paragraph (2) of the 1945 Constitution, which states that sovereignty is in the hands of the people and is carried out entirely by the People's Consultative Assembly. Thus, it also means that the Republic of Indonesia is classified as a country that embraces representative democracy. The second development was parliamentary democracy, starting with the issuance of the November 14, 1945 announcement by the President, which contained a new cabinet arrangement under Prime Minister Sutan Sjahrir, accountable to Parliament, namely the Central Indonesian National Committee (KNIP).
Based on the announcement of November 14, 1945, the President lost his position as Head of Government, because in a parliamentary system government is carried out by a Cabinet led by a Prime Minister responsible to Parliament (KNIP). The center of executive power shifted from the President to the Prime Minister, because the day-to-day responsibilities of government were in the hands of the ministers, who would then be held accountable to Parliament, while the President was only the head of state. This was a fundamental change in state administration: the political system shifted from a presidential system to a parliamentary system without any change to the Constitution. While the parliamentary system was in force, the people began to enjoy basic rights freely, including the right to assemble and associate by establishing and becoming members of political parties, as evidenced by the emergence of political parties and a number of youth and women's mass organizations affiliated with particular parties. The main function of the parties was to join the government in winning the independence revolution by instilling state awareness and the spirit of anti-imperialism and anti-colonialism. Meanwhile, other basic elements of democracy could not be fully realized: accountability of office holders elected by the people, rotation of power, open political recruitment, and general elections could not be carried out (Feith & Castles, 2007). Third, guided democracy emerged after the end of the 1955 general election. Political parties at this time had become strongly oriented towards their own ideological interests and paid less attention to national political interests as a whole. Therefore, Soekarno, as the President of the Republic of Indonesia, put forward the idea that parliamentary democracy was incompatible with the personality of the Indonesian nation, which is imbued with a spirit of kinship and mutual cooperation. In an effort to overcome the conflict that had the potential to disintegrate the Republic of Indonesia at that time, President Soekarno finally changed the democratic system from parliamentary democracy to guided democracy, marked by the issuance of the Presidential Decree of July 5, 1959 (Lev, 2009). The contents of the Presidential Decree included re-enacting the 1945 Constitution. From the issuance of the Presidential Decree, the guided democracy model was implemented, which was claimed to be in accordance with the state ideology of Pancasila and with integralism, which teaches unity between the people and the state. Fourth, Pancasila Democracy (1966-1998) began with Soeharto's appointment as the second President of Indonesia, replacing Soekarno, and the application of a different model of democracy, called the New Order's Pancasila Democracy, so named to support the claim that this democratic model was truly in accordance with the ideology of the Pancasila state (Morfit, 1981). The implementation of New Order democracy was marked by the issuance of the March 11, 1966 Order, by which the New Order was determined to implement Pancasila and the Constitution in a pure and consistent manner. The beginning of the New Order gave the people new hope of development in all fields through the Five Year Development Plans. However, a visible development was the widening gap between state power and society.
The New Order state manifested itself as a strong and relatively autonomous force, while society was increasingly alienated from the arena of power and from policy formulation processes through centralized authority, the minimization of political parties, and the role of the military (Crouch, 1972). Fifth is democracy in the reformation era (1998-present). The end of the New Order period was marked by the transfer of power from President Soeharto to Vice President BJ Habibie on May 21, 1998 (Lindsey, 2002). With the fall of the New Order regime, the structuring of the constitutional system towards the consolidation of the democratic system in Indonesia began (O'Rourke, 2002). The most important consolidation was the amendment and replacement of various laws and regulations considered not to provide space for democratic life and the principle of popular sovereignty, through amendments to the constitution, decentralization of regional government, and the formation of institutions supporting democracy. Amendments to the Constitution were carried out as the main prerequisite for the implementation of a democratic constitutional system (Horowitz, 2013). This is because the structure of the old constitution did not provide sufficient space to develop the concept of democratic governance and the principle of the sovereignty of the people.

Politics of Law in Democracy and the Constitutional System

Democracy is a system of government in which all citizens have equal rights in decision making that can change their lives. Democracy allows citizens to participate, either directly or through representation, in the formulation, development, and formation of the law (Michels & De Graaf, 2010). Democracy includes the social, economic, and cultural conditions that allow for the practice of free and equal political freedom. Democracy can also be interpreted to mean that the people hold the highest power in a country. A democratic government differs from oligarchic and monarchical government, in which one person or a limited group holds power (Bastian, 2015). Democracy has several principles, such as equality among citizens, in which every citizen is equal in political practice; citizen involvement in making political decisions; and recognition of freedom by the state. In detail, implementing democratic values requires several important institutions. First, there are state institutions such as the executive government and the House of Representatives, which represent groups and interests in society and are chosen through free elections with at least two candidates for each seat. The House of Representatives conducts oversight and control, allowing constructive opposition and continuous evaluation of government policies (Sorensen, 1993). Second is a party organization that includes one or more political parties, where the parties maintain a continuous relationship between the general public and their leaders (Pradityo, 2018). Third are the press, mass media, and civil society, which are free to express their opinions (Hidayat, 2006). Fourth is a free judicial system to guarantee the right of free speech and to maintain justice (Butt, 2015). In the practice of Indonesian constitutional life from the early days of independence until today, the representative democracy implemented in Indonesia has consisted of several models of representative democracy that differ from one another.
This is the implication of the development of the state administration of Indonesia, from post-independence to the reform era. In this case, the development of state administration in Indonesia is in line with changes to the Basic Law, which is used as the highest legal basis. In terms of the politics of law, changes and amendments to the Constitution in Indonesia also have implications for the development of democracy in Indonesia (Nggilu, 2015). This is in accordance with the opinion of Mahfud (2009), who states that law as a political product is not only statute law but also includes other legal products, including the constitution. Furthermore, Mahfud (2009) also states that politics as an independent variable can, at the extremes, be divided into two forms: democratic politics and authoritarian politics. Meanwhile, law as a dependent variable is distinguished into responsive law and orthodox law. A democratic political configuration will give birth to responsive law, while an authoritarian political configuration will give birth to orthodox or conservative law. At present, the reform era has been going on for 22 years. State administrators have always striven to realize a democratic government based on the 1945 Constitution of the Republic of Indonesia, among others by making various changes to the legal politics relating to the implementation of democracy in Indonesia. At present, the party system used in Indonesia is multi-party, and the electoral system used to elect legislative members is open-list proportional representation, so it is very difficult for the system to slide into single-party rule. Article 7 of the 1945 Constitution expressly states that the President and Vice President shall hold office for five years and thereafter may be re-elected to the same office for only one further term. Based on the provisions of Article 7 of the Constitution, it is not possible for the President of Indonesia to hold the presidency for a long period of time. Hidayat (2015, interview) states that the most appropriate democracy for the Indonesian state is the democracy stipulated in Paragraph IV of the Preamble to the Constitution, which states that democracy is led by wisdom in deliberations among representatives, and in Article 1 paragraph (2) of the Constitution, which states that sovereignty is in the hands of the people and is carried out according to the Constitution. Based on these two provisions, Indonesia adheres to both direct democracy and indirect democracy. According to Hidayat (2015, interview), the implementation of democracy based on the Preamble of the 1945 Constitution is a manifestation of indirect democracy, realized in the filling of positions in state institutions through election by the House of Representatives, among others the members of the Judicial Commission, Constitutional Court judges, Supreme Court judges, and members of the General Election Commission and the Election Supervisory Board (Herawati & Sukma, 2019). Furthermore, the implementation of direct democracy is the embodiment of Article 1 paragraph (2) of the Constitution, namely in the form of general elections conducted directly by the people to elect the President and/or Vice President, members of the House of Representatives, members of the Regional Representative Council, and members of the Regional People's Representative Council.
Furthermore, according to Hidayat (2015, interview), all democracy in Indonesia, both direct and indirect, is Pancasila Democracy, because Pancasila is the ideology of the Indonesian state, which is believed to be true by the Indonesian people. This can be seen in Paragraph (Alinea) IV of the Preamble to the Constitution, which affirms that national independence shall be formulated "in a constitution of the Indonesian state, which shall be formed in the structure of the Republic of Indonesia with the sovereignty of the people, based on Belief in the One and Only God". Here, it is clear that the Republic of Indonesia is a state of popular sovereignty, that is, a democratic state. The democracy implemented is democracy based on Pancasila, called Pancasila Democracy. This model differs from the New Order model of democracy. This democracy contains the notion of a democracy that is imbued with, encouraged by, colored by, and based on the noble values of Pancasila as integrated in the Indonesian nation and state.

Searching for Democratic Formats for a Multicultural Country

In order to find the most appropriate format of democracy for Indonesia today, Gafar (2005) proposes an alternative for democracy in Indonesia, namely a workable democracy: one that can function, maintain national political stability, and create an effective, strong, and accountable government in a society with a very high level of social segregation like Indonesia. This alternative is an "uncommon democracy", that is, not a normal democracy, in that it does not fully meet the requirements of democracy as commonly understood, both normatively and empirically. In a democracy like this there is one dominant party that is able to outperform the other parties. Of course, the dominant position is obtained through democratic elections. The dominance of the dominant party is usually manifested in a share of around 60% of the seats. There are four characteristics of a dominant party. First, it is dominant in numbers: the party has a greater number of seats than other parties in parliament. Second, the dominant party is also able to dominate the bargaining position: it can strongly persuade its opponents in negotiations, so that many of its policies are accepted and it therefore remains in the seat of government. Third, chronologically, the dominant party controls the wheel of government for a long time, not only five or six years but several decades, as with the LDP in Japan and the Congress Party in India (see Boucek, 2003; Reddy, 2005). Fourth, the dominant party dominates the government: the party is able to master the formation of public policies, from setting the public agenda to its implementation. Related to the proposed democratic model, a strong government is needed. A strong government is not identical with an authoritarian government, which runs the state without regard to the rule of law and the basic rights of citizens. A strong government is one that is able to survive various waves of crisis, and whose resilience does not rest on the manipulation of political rules.
In general, Pancasila democracy, which has been adopted in Indonesia's current constitutional system, has characteristics such as prioritizing consensus, prioritizing the interests of the state and society, the absence of coercion of one's will upon others, a family spirit, responsibility in carrying out the results of deliberations, deliberation conducted with common sense and in accordance with conscience, and the moral accountability of decisions to God based on the values of truth and justice. In the reform era, the implementation of Pancasila Democracy must also place more emphasis on upholding the people's sovereignty by empowering the supervision of state institutions and political and social institutions; a strict division of authority/powers among the executive, legislative, and judicial institutions; and respect for the diversity of principles, characteristics, aspirations, and programs of a multiparty system. The point is that in Pancasila Democracy decisions are taken by consensus through joint deliberation. According to MacIver (1965), democracy is a form of government that has never been fully achieved; democracy grows according to its own nature. In line with MacIver's (1965) opinion, Pancasila Democracy is a type of democracy that has actually been adopted in Indonesia since the enactment of the 1945 Constitution. This cannot be separated from the ideology adopted by the Indonesian nation, namely Pancasila. Nevertheless, democracy itself continues to develop in accordance with the conditions prevailing in the country concerned. Pancasila democracy is a democracy based on Pancasila. Thus there are five basic values that underlie Indonesian democracy, namely belief in God, humanity, unity, deliberation/representation, and social justice. In addition to these five basic values, it is necessary to consider two other basic values that also underlie Indonesian democracy, namely independence and equality. Independence is defined as freedom of thought and expression, adherence to one's own beliefs, freedom to associate with others in pursuit of a goal, and independence to regulate one's own life and livelihood (Soemantri, 2014). The next basic value is equality. Independence and equality are basic values that cannot be separated. Two things will affect the implementation of Pancasila democracy. First is the philosophy or ideology embraced by the nation, and second is globalization as a result of the emergence of technology as a force. The constitution, as a common agreement, plays an important role in the life of society, nation, and state. In this regard, what needs detailed attention, especially from state administrators and administrators of state government, is that Indonesia is a state based on law and adheres to a presidential system, in which the President is not responsible to parliament and the ministers are assistants to the President who likewise are not responsible to parliament (Cipto, 2015).

Conclusion

The findings show that the most appropriate democracy for the Indonesian state is a constitutional democracy based on Pancasila. Based on the provisions of Article 1 paragraph (2) of the Constitution, the democracy adopted in Indonesia is a direct democracy implemented in the form of direct elections to elect the President and/or Vice President, members of the House of Representatives, members of the Regional Representative Council, and members of the Regional People's Representative Council.
Based on the provisions in Paragraph IV of the Preamble of the Constitution, Indonesia also practices indirect (representative) democracy in filling positions in several state institutions, such as the Constitutional Court, the Judicial Commission, the Supreme Court, the Election Commission, and the Election Supervisory Body, whose members are elected by the House of Representatives. For Pancasila Democracy to be carried out in accordance with the mandate of the 1945 Constitution of the Republic of Indonesia, the participation of all Indonesian citizens is needed. To increase the participation of all citizens in the implementation of Pancasila Democracy, seriousness is needed from political parties in carrying out their functions, especially their function as a means of political communication. The function of political parties as a means of political communication is urgently needed, namely to communicate the aspirations of the people to the government and government policies to the people, so that there is no large gap between the wishes of the people as the holders of sovereignty and the government as the organizer of state government. When communication between the people as holders of sovereignty and the government as administrator of the state is well established, a Pancasila democracy can be realized: one that prioritizes consensus and the interests of the state and society; that does not force the will of others; that is always imbued with a family spirit and a sense of responsibility in carrying out the results of deliberations conducted with common sense and in accordance with a noble conscience; and that makes decisions that can be morally accountable to God Almighty based on the values of truth and justice.
Alternative Multidisciplinary Management Options for Locally Advanced NSCLC During the Coronavirus Disease 2019 Global Pandemic

The coronavirus disease 2019 (COVID-19) pandemic is currently accelerating. Patients with locally advanced NSCLC (LA-NSCLC) may require treatment in locations where resources are limited and the prevalence of infection is high. Patients with LA-NSCLC frequently present with comorbidities that increase the risk of severe morbidity and mortality from COVID-19. These risks may be further increased by treatments for LA-NSCLC. Although guiding data are scarce, we present an expert thoracic oncology multidisciplinary (radiation oncology, medical oncology, surgical oncology) consensus on alternative strategies for the treatment of LA-NSCLC during a pandemic. The overarching goals of these approaches are the following: (1) reduce the number of visits to a health care facility, (2) reduce the risk of exposure to severe acute respiratory syndrome-coronavirus-2, (3) attenuate the immunocompromising effects of lung cancer therapies, and (4) provide effective oncologic therapy. Patients with resectable disease can be treated with definitive nonoperative management if surgical resources are limited or the risks of perioperative care are high. Nonoperative options include chemotherapy, chemoimmunotherapy, and radiation therapy with sequential schedules that may or may not affect long-term outcomes in an era in which immunotherapy is available. The order of treatments may be based on patient factors and clinical resources. Whenever radiation therapy is delivered without concurrent chemotherapy, hypofractionated schedules are appropriate. For patients who are confirmed to have COVID-19, cancer therapies may usually be withheld until symptoms have resolved with negative viral test results. The risk of severe treatment-related morbidity and mortality is increased for patients undergoing treatment for LA-NSCLC during the COVID-19 pandemic. Adopting alternative treatment strategies as quickly as possible may save lives and should be implemented through communication with the multidisciplinary cancer team.

Introduction

The novel betacoronavirus severe acute respiratory syndrome-coronavirus-2 (SARS-CoV-2), which causes coronavirus disease 2019 (COVID-19), has led to a global pandemic. As of this writing, over 1.8 million cases have been diagnosed worldwide, and the incidence continues to rise. 1 As a result, health care facilities in many areas are becoming or have already become resource-constrained. Nevertheless, locally advanced NSCLC (LA-NSCLC) continues to be diagnosed in many patients, many of whom are symptomatic at the time of diagnosis and may suffer progression of their disease if the initiation of treatment is delayed. Patients must be made aware that LA-NSCLC is a life-threatening disease that requires treatment; however, both workup and treatment may increase their risk of exposure to COVID-19. Cancer therapies can lead to prolonged immunocompromised states that may affect the probability of severe infection-related morbidity and mortality. Patients require multiple visits to health care facilities, putting them at high risk of COVID-19. As such, an urgent need has emerged to consider alternative management options that may save lives.
Owing to the novel nature of the SARS-CoV-2 virus, there are limited data regarding the long-term benefits or costs of modifications to the standard of care made because of the risks of infection or constrained resources. Thus, the consequences of modifications to the standard of care are unknown. However, most patients with LA-NSCLC cannot wait until data become available, until the risk of infection has completely passed, or until resources are plentiful before treatment is started. Clinical judgment will be necessary to decide whether a modification of the standard of care is warranted. Several organizations, including the American Society for Radiation Oncology (ASTRO) and the American Society of Clinical Oncology, have published recommendations for cancer care during the COVID-19 pandemic. 2,3 A more radiotherapy-focused addition was a consensus statement recently published on April 1, 2020, by the European Society for Radiotherapy and Oncology (ESTRO) and ASTRO, which covered various clinical scenarios for lung cancer radiotherapy. 4 The ESTRO-ASTRO consensus was developed through a modified Delphi process with 32 experts in radiation oncology. Adding to this consensus statement, the report submitted herein presents a focused set of recommendations for managing LA-NSCLC, developed by a multidisciplinary panel of 11 experts in radiation, medical, and surgical oncology through a series of virtual meetings and discussions. A summary of these recommendations can be found in Table 1.

Table 1. Summary of recommendations for the management of LA-NSCLC during the COVID-19 pandemic.

Pathologic diagnosis and nodal staging
- Consider staging the mediastinum with 18F-FDG-PET-CT and avoid invasive mediastinal staging with endobronchial ultrasound or mediastinoscopy.
- If 18F-FDG-PET-CT is not available, consider staging the hilum and mediastinum using a contrast CT.

Surgical management
- The decision of whether to proceed with the surgical procedure is highly dependent on the current phase of the pandemic at the treating institution.
- Avoid surgical intervention in patients positive for SARS-CoV-2, even if asymptomatic.
- Preoperative CT of the chest may help detect asymptomatic carriers of SARS-CoV-2 before the surgical procedure.
- The nonoperative management of LA-NSCLC may be used.

Chemoradiation
- Consider sequential chemoradiation.
- If using sequential therapy, the decision of whether to pursue systemic therapy or radiation first will depend on clinical factors and available resources.
- Consider induction systemic therapy.
- Consider durvalumab after concurrent or sequential chemoradiation.
- If using durvalumab after chemoradiation, consider 4-weekly dosing or delaying the initiation of durvalumab (within 42 days).

Radiation techniques
- Use IMRT with a hypofractionated schedule.
- Elective nodal coverage is not needed and may increase toxicity.
- In the absence of pathologic hilar or mediastinal staging, treat all hypermetabolic lymph nodes as if they are positive for disease.
- If CT with contrast was used alone, consider treatment of all lymph nodes >1 cm in the short axis.

Treating patients with COVID-19
- The decision to treat these patients depends on patient and disease factors and the availability of resources.
- Consider brief radiotherapy in patients who have tumor obstructions, hemoptysis, or are symptomatic from their LA-NSCLC.
- Hold treatment in patients who are symptomatic from the infection and whose malignancy would most likely not progress during the delay or interruption; the focus of care should shift to supportive and anti-infectious measures so that the patient can resume optimal oncologic therapy as soon as possible. Oncologic care can possibly be resumed when symptoms have resolved and the patient tests negative.
- In patients who have had severe disease, it is likely that definitive management will no longer be possible and the goals of care will be altered.

Communication
- Robust communication is needed with all members of the patient's care team.
- Deviations from the standard of care should be discussed in the multidisciplinary tumor board.
- The patient must be made aware of the risks and benefits of treatment, including deviations from the standard of care.

Avoiding nihilism
- We must advocate for our patients, as many have good outcomes.

Why Consider Alternative Treatment Strategies?

Patients with LA-NSCLC are generally of advanced age and frequently present with high-risk comorbidities that may limit their ability to survive a COVID-19 infection. The evidence for factors that contribute to COVID-19-related mortality continues to emerge and may include baseline comorbidities that are common in patients with lung cancer, including poor pulmonary reserve owing to chronic lung disease, cardiac disease, diabetes, and other conditions. [5][6][7] Patients may also have symptoms related to lung cancer that may mask evidence of COVID-19 and could delay its detection. Unfortunately, treatments for LA-NSCLC require numerous visits and exposures to health care facilities that increase the patient's and health care providers' risk of acquiring COVID-19. Surgical care may entail multiple cycles of induction chemotherapy followed by perioperative care that requires hospitalization, all of which carry a high risk of exposure to the infection for both the patient and health care staff. The standard nonoperative management is concurrent chemoradiotherapy, generally administered over 6 weeks with daily radiation and weekly or every-3-week chemotherapy, followed by 12 months of durvalumab delivered every 2 weeks. According to preliminary data from the People's Republic of China and Italy, cancer, along with other comorbidities, may increase mortality from COVID-19. However, much uncertainty remains, as the observations may be the result of malignancy, treatment effects, or both. [8][9][10][11] These reports were retrospective evaluations of small cohorts with methodologic limitations, which are to be expected during a pandemic. Yet, they provide a reason to pause and reconsider the way we manage patients with LA-NSCLC during this pandemic, given the potential consequences of viral pneumonia, acute respiratory distress, and sepsis that can occur in patients undergoing treatment for LA-NSCLC.
More importantly, emerging data also reveal a mortality rate of around 10% for hospitalized patients with COVID-19 and 50% mortality for those requiring intensive care unit admission. 12,13 Alternative treatment strategies for patients with LA-NSCLC offer opportunities to mitigate the risk of harm by reducing patient exposure to health care facilities and the immunocompromising effects of treatment. Though there are some early theories that immunomodulation may be beneficial for those with COVID-19, 14-16 immunosuppression may increase the risk of secondary infections in addition to COVID-19. Although these changes are not supported by clinical trial evidence, they will likely have minimal impact on overall survival and should be weighed against the risk of death and disability from developing COVID-19.

Pathologic Diagnosis and Nodal Staging During a Pandemic

Inevitably, an increased number of visits to health care facilities and all diagnostic procedures increase the risk of exposure to COVID-19. At this time, a pathologic diagnosis of malignancy should be sufficient to initiate treatment for LA-NSCLC. There is no evidence at this time from any study that a repeat biopsy for further subtyping or determination of a specific oncogenic mutation affects management. The decision to stage the mediastinum with bronchoscopy or mediastinoscopy should be balanced against the risk of exposure of patients and staff to COVID-19, as these procedures may increase the risk owing to aerosolization. 17,18 The availability of invasive staging procedures may ultimately be limited because pulmonologists and surgeons are increasingly needed for critical care services and operating room time may be unavailable; therefore, the initiation of treatment should not be delayed just because these procedures cannot be performed in a timely manner. Nodal staging by imaging has a relatively high accuracy and can be an appropriate substitute at this time. [19][20][21] Positron emission tomography (PET) with 2-deoxy-2-[fluorine-18]fluoro-D-glucose, integrated with computed tomography (CT) scans, has a high sensitivity for the detection of occult regional and distant disease and can also help define treatment targets for radiation therapy planning. 22 Acquisition of PET-CT images requires considerably more time (hours) in radiology departments than contrast-enhanced CT (minutes), and the limitations of each should be considered. If PET-CT staging is not available, staging can be performed by CT alone through the categorization of any mediastinal lymph node larger than 1 cm in the short-axis diameter as positive, which, therefore, should be targeted by radiation therapy. 23

Surgical Management

Surgery plays a key role in the treatment of patients with single-station and low-volume stage IIIA NSCLC. Surgical decisions are typically guided by careful patient-centered risk-benefit assessments, but currently, health care resource allocation and risk to the treatment team must be included in that challenging decision. The Centers for Disease Control and Prevention, the American College of Surgeons, and numerous other government agencies have recommended the cancellation of elective surgical procedures during the global pandemic and the creation of a tiered system for the prioritization of other surgical procedures. 25,26 Lung cancer resections fall into the scope of medically necessary time-sensitive procedures. 27 For patients with LA-NSCLC who have completed induction therapy, time sensitivity is particularly crucial, as the window for meaningful and safe surgical intervention typically spans a 4- to 12-week period. Unfortunately, the exposure risks and resource utilization for lung resections are high; they require general anesthesia, manipulation of the aerodigestive tract, and a hospital stay. The decision to proceed with surgical intervention is highly dependent on the current phase of the pandemic at the treating institution. Institutions in the early phase, in which there are still adequate health care resources (such as operating rooms, ventilators, hospital beds, surgeons, and anesthesiologists), may choose to proceed, whereas those in the later stages will have little choice but to defer planned resections.
Although there is an argument that a brief operative intervention for stage I disease might only require a one- to two-night hospital stay and may carry less risk of exposure and resource utilization than radiation therapy, the same does not apply for LA-NSCLC, in which potential postinduction hilar and mediastinal fibrosis increases the risk of an open procedure (if delayed beyond the "window of opportunity"), a prolonged hospital stay, and perioperative complications. Early reports regarding patients who were initially asymptomatic but manifested severe COVID-19-related complications, with resultant high postoperative mortality after undergoing elective operations, are sobering, and resources in most communities do not allow for preoperative screening of patients who are asymptomatic. 28 If a surgical procedure is still to be performed, some centers recommend rapid COVID-19 testing and an immediate CT scan before the operation to look for early bilateral pulmonary infiltrates, but uniform recommendations do not currently exist. Unfortunately, the accessibility of surgical procedures for patients with lung cancer will become more limited over the coming weeks, and we need to consider alternative treatment strategies. Most of these patients who would normally have surgical management of their locally advanced lung cancer may instead receive nonsurgical management owing to constrained resources and the risks associated with perioperative management. 29,30 Therefore, nonoperative management of patients with surgically resectable LA-NSCLC should be considered at this time during a pandemic.

Nonoperative Management

There are two key dilemmas for the nonoperative management of LA-NSCLC. The first relates to the general approach of treatment with 6 weeks of concurrent chemoradiation therapy followed by durvalumab consolidation, given every 2 weeks for an additional 12 consecutive months. The frequency and duration of these treatments could subject patients, health care providers, and other potentially immunocompromised patients with cancer to a high risk of acquiring COVID-19. The second dilemma relates to the immunocompromising effects of chemotherapy and radiation therapy, which may be increased when the two are given concurrently 5 days a week for 6 consecutive weeks. This may increase the chance of infections, including COVID-19. Each of these dilemmas creates a high-risk environment for a concurrent COVID-19 infection, which will generally lead to a treatment interruption that is known to increase mortality. 31 Strategies to mitigate these risks are described below. The use of sequential radiation and chemotherapy offers an opportunity to reduce the combined immunosuppressive effects of concurrent chemoradiotherapy and to deliver a treatment that is better tolerated, but at the cost of a longer total treatment time. 32,33 Such a modification requires careful coordination between medical and radiation oncologists, as this approach represents a departure from accepted standards. Although the overall survival benefit of concurrent chemoradiation versus sequential therapy has been established using randomized phase III data, the absolute benefit is modest and may be outweighed by the acute toxicities that could emerge and be more difficult to manage during the pandemic. 32,34
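The arithmetic behind the exposure argument is worth making concrete. The sketch below is a minimal illustration, not clinical guidance: it assumes 5 radiotherapy fractions per week and approximates 12 months of consolidation durvalumab as 48 weeks, and it compares the two broad approaches discussed in this report (6 weeks of daily radiation with 2-weekly durvalumab versus a 3-week hypofractionated course with 4-weekly durvalumab).

```python
# Minimal, illustrative visit arithmetic for LA-NSCLC treatment schedules.
# Assumptions (not clinical guidance): 5 radiotherapy fractions per week;
# 12 months of consolidation durvalumab approximated as 48 weeks.

def total_visits(rt_weeks: int, durvalumab_interval_weeks: int,
                 consolidation_weeks: int = 48) -> int:
    rt_visits = rt_weeks * 5                         # daily radiation, Mon-Fri
    durvalumab_visits = consolidation_weeks // durvalumab_interval_weeks
    return rt_visits + durvalumab_visits

standard = total_visits(rt_weeks=6, durvalumab_interval_weeks=2)     # 30 + 24 = 54
alternative = total_visits(rt_weeks=3, durvalumab_interval_weeks=4)  # 15 + 12 = 27
print(f"standard: ~{standard} visits; alternative: ~{alternative} visits")
```

Even under these rough assumptions, halving the fractionation and doubling the durvalumab interval roughly halves the number of health care encounters, which is the central motivation for the strategies that follow.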
Whenever considering sequential therapy, the order of therapy initiation should be based on clinical and situational factors, such as patient symptoms, the rate of progression of the disease, the overall disease burden, and available resources, among others. Upfront radiation therapy should be considered whenever tumors are either causing or likely to cause symptoms owing to the presence of hilar disease, bronchial or vascular compression, atelectasis, pulmonary symptoms, or pneumonia. 35 Patients without these features may be best treated with upfront systemic therapy, followed by radiotherapy alone or chemoradiotherapy. Upfront systemic therapy decreases patient exposure to one visit every several weeks and postpones the initiation of daily radiation therapy treatment. Durvalumab 10 mg/kg every 2 weeks is now routinely given after chemoradiotherapy on the basis of the results of the phase III A Global Study to Assess the Effects of MEDI4736 Following Concurrent Chemoradiation in Patients With Stage III Unresectable Non-Small Cell Lung Cancer (PACIFIC) trial, which reported a substantial improvement in overall survival (HR = 0.68). In that study, 25.3% of patients received induction chemotherapy followed by chemoradiation therapy before receiving durvalumab. 36 Patients in the PACIFIC trial received concurrent chemoradiation, and the use of sequential chemotherapy and radiation followed by durvalumab is currently being investigated in the PACIFIC 6 trial. 37 After radiotherapy and chemotherapy, durvalumab may be administered at a dose of 1500 mg every 4 weeks, as this schedule has been used in other trials. [38][39][40] Per the PACIFIC trial, durvalumab can be administered up to 6 weeks after chemoradiation. The course of the COVID-19 pandemic is projected to peak at a certain point (i.e., the days when the highest number of new cases is reached) and then decelerate. The goal is to minimize exposure during this peak while treating patients, with daily radiation given only when there is less risk of exposure. Sequential therapy with systemic therapy followed by hypofractionated radiation may accomplish this. Patients receiving sequential therapy could be treated with a hypofractionated schedule, with treatment courses as short as 3 weeks. Multiple studies have revealed such an approach to be safe and effective. [41][42][43] Shorter courses have been associated with decreased immunosuppression in other cancers, such as leukemia and pancreatic cancer. [44][45][46] This may offer advantages for LA-NSCLC, although the primary benefit during a pandemic is to minimize the number of encounters. Approaches employing 15- and 20-fraction schedules preceded the current pandemic and have already been reported to be safe and effective. For those who have access to a proton facility, protons can be used for hypofractionated lung radiation. 47 Hypofractionated courses can be delivered with either induction or adjuvant chemotherapy or chemoimmunotherapy. Some centers are comfortable with treatment using concurrent chemotherapy and hypofractionated radiotherapy, though other centers have concerns about increased late toxicity. Standard chemotherapy schedules can be referenced in evidence-based guidelines, such as the National Comprehensive Cancer Network Guidelines, with consideration of schedules that minimize the frequency of visits to a health care institution. 48 Table 2 summarizes frequently used hypofractionated schedules for locally advanced lung cancer. [49][50][51][52][53][54][55][56]
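One common way to reason about hypofractionated schedules such as those summarized in Table 2 is the biologically effective dose (BED) of the linear-quadratic model. The sketch below is illustrative only: the alpha/beta = 10 Gy value and the two example schedules are standard textbook assumptions, not regimens endorsed by this report.

```python
# Biologically effective dose under the linear-quadratic model:
#   BED = n * d * (1 + d / (alpha/beta))
# where n is the number of fractions and d the dose per fraction (Gy).
# alpha/beta = 10 Gy is a common assumption for tumor tissue.

def bed(n_fractions: int, dose_per_fraction_gy: float,
        alpha_beta_gy: float = 10.0) -> float:
    return n_fractions * dose_per_fraction_gy * (
        1 + dose_per_fraction_gy / alpha_beta_gy)

# Example: conventional 60 Gy in 30 fractions vs. hypofractionated 60 Gy in 15.
print(f"60 Gy / 30 fx: BED10 = {bed(30, 2.0):.1f} Gy")  # 72.0 Gy
print(f"60 Gy / 15 fx: BED10 = {bed(15, 4.0):.1f} Gy")  # 84.0 Gy
```

The same total physical dose delivered in fewer, larger fractions carries a higher BED, which is why hypofractionation can shorten treatment time without necessarily sacrificing tumor control, and also why the normal-tissue dose constraints discussed next deserve extra attention.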
The use of intensity-modulated radiation therapy (IMRT) techniques is encouraged for hypofractionation, as it can minimize the volume of normal tissue exposed to the prescription dose. Whenever normal organ dose constraints cannot be met during hypofractionated radiotherapy treatment planning, radiation oncologists should consider delivering the full prescription dose to the gross tumor volume and its planning target volume margin while reducing the dose prescribed to the clinical target volume and its planning target volume margin. This can often be achieved with IMRT using a simultaneous integrated boost technique. 57 Regarding the definition of radiation treatment volumes, we urge our colleagues to avoid elective lymph node irradiation, which is an outdated approach to the radiotherapeutic management of LA-NSCLC. [58][59][60] The extension of target volumes to include even one nodal station superiorly or inferiorly increases the risk of treatment-related immunosuppression and pneumonitis that may mask early symptoms of COVID-19. If contouring nodal stations with CT alone, radiation oncologists should consider including any lymph node larger than 1 cm in the short axis (not the greatest dimension). In addition, daily image guidance using cone-beam CT may help assess the development of infiltrates in patients who are asymptomatic. 61 If there is a preference to deliver concurrent chemoradiation, then induction chemotherapy may be considered to delay the time when patients need to come in for daily chemoradiotherapy treatments. Results from randomized phase II and III trials comparing chemoradiotherapy with and without induction chemotherapy support such an approach, having revealed similar survival rates. [62][63][64] Although the use of induction chemotherapy prolongs the total length of treatment, it offers an opportunity at this time to delay the initiation of chemoradiotherapy, which requires daily visits to facilities that may soon be resource-constrained.

Patients Infected With COVID-19

Once a patient with LA-NSCLC contracts COVID-19, specific patient and treatment factors should be considered before the difficult decision of whether to hold or continue therapy is made. These factors include patient symptoms, clinical status, the growth rate of the NSCLC, and available resources within the health care facility, among others. There are currently no data that support either proceeding with or withholding treatment on the basis of COVID-19 status, the presence of symptoms, or the severity of symptoms. If a patient has an obstruction, hemoptysis, or other symptoms that may be alleviated by oncologic treatment, it may be necessary to treat regardless of the status of COVID-19 symptoms. This may consist of radiotherapy alone. The use of concurrent chemotherapy is unlikely to be of benefit, given the potential for myelosuppression, esophagitis, and pneumonitis, and should only be considered if reduction of a large mass is necessary to alleviate acute pulmonary compromise. However, patients with severe symptoms should not receive any cancer therapies until they recover. For patients with mild to no symptoms of COVID-19, strategies such as strict personal protective equipment use for patients and staff, physical separation of infected and uninfected patients, and frequent and robust sanitation of equipment should be implemented.
For all patients receiving radiotherapy, it may be ideal to provide separate waiting rooms, changing facilities, and, if possible, different machines. Treating patients with COVID-19 at the end of the day and thoroughly cleaning all facilities may also limit the spread of the virus. These strategies have been described in depth in reports from the People's Republic of China, Italy, and the United States. [65][66][67][68] Management of a patient who tests positive for COVID-19 but is asymptomatic is more difficult. Such patients may be considered physiologically unfit to tolerate any further lung cancer therapies, as many develop worsening symptoms in the second week of infection. 69 The influence of active treatment for lung cancer on COVID-19 is unknown at this time. As such, a 1- to 2-week waiting period can help confirm whether the patient manifests any viral symptoms. Early studies have revealed that COVID-19 can lead to acute respiratory distress syndrome with pathologic changes such as destruction of the alveoli and cellular fibromyxoid exudates. 70,71 These pathologic changes may lead to prolonged injury of lung tissue, even after clearance of the viral infection. COVID-19 can also lead to an increase in inflammatory biomarkers, which has been correlated with the severity of morbidity. 72 Similarly, serum inflammatory cytokines have been shown to be predictive of both thoracic toxicity and survival after thoracic radiotherapy. [73][74][75][76][77] Although these data raise concerns, we are not entirely sure how they will manifest in patients with COVID-19, as data on this pandemic are still emerging and the degree of morbidity seen has been variable. For some patients, the effects of the virus and lung cancer therapies may simultaneously damage pulmonary tissue and increase the chance of pneumonitis. At the same time, these risks should be carefully weighed to avoid treatment delays or interruptions whenever possible, given that improperly treated lung cancer can lead to worse oncologic outcomes. 31 Given the lack of actual clinical data, a reasonable approach would be to retest patients with asymptomatic or minimally symptomatic COVID-19 in 14 days and initiate therapy if they are both asymptomatic and have negative test results. If a patient becomes symptomatic and tests positive during concurrent or sequential chemoradiotherapy, it is probably appropriate to temporarily halt treatment and wait until the resolution of COVID-19 symptoms. In the event of hospitalization, the treating oncologic team must be in communication with the inpatient team regarding the possibility of confusion among the radiologic findings of radiation pneumonitis, immunotherapy-related pneumonitis, and COVID-19. Resumption of treatment after recovery from symptomatic COVID-19 is likely to be very difficult, given the severe impairment experienced by many patients. For patients with COVID-19, the clinical focus should include aggressive supportive care, with consultation of infectious disease and critical care specialists and possible enrollment in clinical trials directed at COVID-19. These efforts can help restart cancer therapy as soon as possible. It is important to emphasize that the challenges of managing patients with COVID-19 are best avoided by the previously recommended strategies to prevent infection in the first place.
Preserving Effective Communication and a Multidisciplinary Approach

Communication must be frequent with all health care providers, who may be unfamiliar with any of the recommended alternative treatment approaches. Clinicians should have heightened awareness of the importance of careful and thorough documentation, especially if cross-coverage is needed in case of an unexpected event. Multidisciplinary tumor board reviews should continue to be held for all patients (remotely if necessary) at this time. This ensures a thoughtful review of each case with input from radiologists, as mentioned above. It also affords an opportunity for medical oncologists to discuss the risks and benefits of different systemic treatment approaches in the context of COVID-19 and allows radiation oncologists to describe the rationale for alternative treatment schedules. Clinicians also need to have careful and informative conversations with each patient regarding any deviations from the standard of care. Topics such as the risks, benefits, and reasoning behind these deviations must be explained to patients so that they have a proper understanding of the trajectory of their care. Patient preference must also be factored into management. Patients should also be advised to self-isolate during treatment to reduce the risk of acquiring the infection and spreading it to other patients with cancer.

Final Considerations Regarding Lung Cancer Stigma and Nihilism

Recent reports in the media indicate that health care systems are taking measures during the global pandemic to ration resources, which may limit access to care for patients living with lung cancer. 78 This includes restricting access to a ventilator if they develop respiratory distress for any reason, whether or not it is related to COVID-19. Educating our colleagues and citing evidence may be helpful, such as the recent results of the PACIFIC trial, which revealed a 3-year overall survival of 66% among patients with LA-NSCLC treated with chemoradiotherapy and consolidation durvalumab. 79 Health care administrators and colleagues taking care of these acutely ill patients on the front line may be largely unaware of recent advances in lung cancer care and the opportunity for long-term survival in all patients with lung cancer regardless of stage. Therefore, efforts to protect against nihilism may be needed now more than ever.

Conclusions

Patients with newly diagnosed LA-NSCLC are a vulnerable population during the COVID-19 global pandemic. Standard-of-care strategies, including surgery, radiation therapy, and systemic therapies, can take a long time to deliver and expose patients to multiple visits to health care facilities. For patients who are confirmed to have COVID-19, consideration should be given to withholding their cancer therapies until they have fully recovered. However, the initiation of treatment for patients without infection should not be delayed. The alternative treatment strategies presented in this report can reduce the risk of contracting and transmitting the infection and should be considered for each patient while the global pandemic persists. There are few data, and the consequences of modifications to the standard of care are not fully known.
Some surgical, radiation, and medical oncology practices may still be operating in geographic locations that have not yet been hit by this pandemic; however, with the pandemic having reached across the globe, the authors urge all oncologists to consider these measures to optimize patient outcomes until the current crisis is over.
The prevalence and burden of four major chronic diseases in the Shanxi Province of Northern China

Background

Chronic non-communicable diseases constitute an important public health problem that is closely related to behavioral risk factors. This study examined the prevalence, burden, and behavioral risk factors relevant to four major chronic diseases in Shanxi Province, China. The results could provide a basis for the formulation of chronic disease prevention and control strategies in north China.

Methods

A multi-stage random sampling method was used to select 14,137 residents aged ≥15 years, who completed a questionnaire survey and physical examination. The disease burden was evaluated using the disability-adjusted life years (DALY) index. The extent of the disease burden attributable to smoking and drinking behavior was analyzed using counterfactual analysis.

Results

The total DALYs due to the four major chronic diseases were 938,100. The years of life lost due to stroke accounted for 74.86% of its disease burden, while the years lived with disability accounted for 54.0% and 68.1% of the total disease burden of coronary heart disease and diabetes, respectively. The smoking-attributable burden was highest for coronary heart disease (105,600), followed by stroke (77,200), hypertension (6,000), and diabetes mellitus (5,900). The drinking-attributable burden was highest for stroke (30,700), followed by coronary heart disease (16,700) and diabetes (1,100). The disease burden caused by smoking and drinking was higher in men (164,000 and 40,700, respectively) than in women (30,700 and 7,300, respectively).

Conclusion

There is a high prevalence of, and a significant burden associated with, the major chronic diseases in Shanxi Province. Therefore, various interventions to control smoking and drinking (the major predisposing factors) should be applied to reduce this burden.

Introduction

With socioeconomic development, the acceleration of urbanization, and population aging, chronic non-communicable diseases have become a global public health issue associated with premature death and disability (1). According to the Global Burden of Disease (GBD) study conducted in 2017 (2), chronic diseases have become one of the leading causes of death worldwide. The top four leading non-communicable causes of death are cardiovascular and cerebrovascular diseases (44%), cancer (22%), chronic respiratory diseases (9%), and diabetes mellitus (4%) (3). In recent years, China has faced great challenges in the field of chronic disease prevention and control. According to the National Health Commission of the People's Republic of China, more than 300 million patients have been diagnosed with chronic diseases, which account for 86.6% of all deaths. The rapid increase in chronic disease morbidity and mortality has affected national health and poses an immense societal burden. In the most recent analysis of the GBD study, chronic diseases accounted for the majority (62%) of the total GBD, expressed as disability-adjusted life years (DALYs), representing an increase of 16% from 2007 to 2017 (4). The DALYs caused by chronic diseases account for 82.75% of the total disease burden in China, and the DALYs lost from the four major chronic diseases accounted for 59.06% of total chronic disease DALYs (5). The occurrence and development of chronic diseases are closely related to key lifestyle-based risk factors.
The results of GBD 2019 showed that the top three attributable risk factors for deaths among women globally were high systolic blood pressure, poor diet, and hyperglycemia. For men, the risk factors were tobacco exposure (smoking, secondhand smoke, and the use of chewing tobacco), high systolic blood pressure, and poor dietary habits (6). The results of the surveillance survey of chronic diseases and their risk factors in China showed a high prevalence of unhealthy lifestyle habits, such as smoking, poor eating habits, and excessive drinking, among Chinese residents (7), leading to a rise in the incidence of chronic diseases and a heavy disease burden. Therefore, analyzing the disease burden of major chronic diseases and their behavioral risk factors is of great importance for promoting the development of prevention and treatment strategies. Shanxi Province is located in north China, with a written history dating back 3,000 years, and is known as the "cradle of Chinese civilization". The permanent resident population is about 34.8 million, and the proportion of people >60 years old is 18.92% (8). In contrast, medical care in Shanxi Province is relatively underdeveloped. According to data from the China Health Statistical Yearbook, the total health expenditure per capita of Shanxi Province is 3,282 yuan, which is among the lowest five in China (9). Studies on the burden of non-communicable diseases in Chinese provinces and cities are ongoing. Ma et al. (10) estimated the death and disease burden of cardiovascular and cerebrovascular diseases in Beijing based on data from GBD 2016. Chen et al. (11) analyzed the DALYs and economic burden of smoking in Hangzhou city in 2013. There are comparatively few studies on the disease burden of chronic diseases and their risk factors in Shanxi Province in Northern China. Therefore, quantifying the disease burden to improve the prevention and control of major diseases in north China and to inform public health policies is an urgent challenge. In this study, we investigated the prevalence of chronic diseases among 14,137 residents of Shanxi Province enrolled using a multi-stage cluster sampling approach. According to the prevalence ranking, the four most prevalent chronic diseases were hypertension, diabetes mellitus, coronary heart disease (CHD), and stroke. The disease burden caused by the major chronic diseases, the smoking- and drinking-attributable disease burden, and cause-of-death monitoring data were evaluated.

Data sources

Data on the cause of death were obtained from "The Chinese Cause of Death Surveillance Dataset 2017". We used the death data of Central China, including the eight provinces of Shanxi, Jilin, Heilongjiang, Anhui, Jiangxi, Henan, Hubei, and Hunan. The Chinese cause of death surveillance system monitors a population of more than 300 million, with good provincial representation. The dataset is subject to strict quality control, which eliminates data from monitoring points considered to have serious underreporting that could affect the overall results. Data on chronic disease prevalence were obtained from the Shanxi Provincial Chronic Disease Current Conditions survey conducted from 2017 to 2019, which covered 11 cities in Shanxi Province. Sex- and age-stratified population-based data were obtained from the sixth census data in the Shanxi Provincial Statistical Yearbook.

Sampling method

Eleven cities in Shanxi Province were used as sampling units in rural areas. A multi-stage random sampling method was used.
Five districts (counties) were randomly selected in each city, two to three villages were randomly selected in each district (county), 50 households were selected in each selected village by simple random sampling, and two permanent residents (resident for at least 6 months) aged 15 years or older were selected in each household as study participants. In Taiyuan City, a total of 13 streets were selected by the multi-stage random sampling method, and three to four communities were selected for each street. One hundred households were randomly selected from the selected communities, and two permanent residents (resident for at least 6 months) aged 15 years or above were selected from each household as study participants. A total of 15,000 questionnaires were distributed, and 14,137 valid questionnaires were returned, a valid return rate of 94.25%.

Survey content
• Questionnaires: Self-designed questionnaires were used, and on-site surveys were administered by uniformly trained investigators. The content included general demographic characteristics, chronic disease prevalence (e.g., chronic disease types, family disease history, and medication compliance), and lifestyle factors (e.g., smoking, alcohol consumption, diet, and physical exercise). Chronic diseases were self-reported by residents as diagnosed at a local medical institution at the county level or above. Smokers were defined as those who smoked more than one cigarette per day, continuously or cumulatively, for more than 6 months; those who drank alcohol more than once a week, continuously or cumulatively, for more than 6 months, were defined as alcohol drinkers.
• Physical measurements: Height, weight, waist circumference, and blood pressure were measured by uniformly trained medical examiners, and blood glucose levels were tested. Patients with hypertension were defined as those with systolic blood pressure ≥140 mmHg and/or diastolic blood pressure ≥90 mmHg, or those with a previous history of hypertension who were currently taking anti-hypertensive medication and whose blood pressure had fallen below these thresholds. Patients with diabetes mellitus were defined as those with fasting blood glucose ≥7.0 mmol/L or blood glucose ≥11.1 mmol/L after a 2 h oral glucose tolerance test, or those with a previous history of diabetes mellitus who were currently taking anti-diabetic medication.

The DALY
To capture the influence of early death caused by chronic disease on life lost, we applied the YLL to evaluate the disease burden of chronic disease, including the YLL and the YLL rate. The age range for "premature death" was from 0 to >80 years. The YLL rate was calculated from the sixth census data in the statistical yearbook of Shanxi Province. The calculation method was as follows: YLL = Σ (N × L), where N is the number of deaths for each age and sex group, and L is the value of life lost for each age group. We used the 2000–2017 life expectancy table from the World Health Organization's Burden of Disease Study to calculate the standard life expectancy for each age group (12). The YLL rate was computed as YLL/P, where P is the number of people in each age group. The YLD was computed as YLD = Prev × DW, where Prev is the number of patients with sequelae of a disease in a given age group, and DW is the disability weight for those sequelae.
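For concreteness, the YLL/YLD bookkeeping just described translates directly into code. The following is a minimal sketch with made-up illustrative inputs; the real death counts, populations, and life-lost values come from the surveillance dataset and the WHO life table cited above.

```python
# Minimal sketch of the YLL/YLD calculations; inputs are illustrative only.

def yll(deaths_by_group, life_lost_by_group):
    # YLL = sum over age/sex groups of N * L
    return sum(n * l for n, l in zip(deaths_by_group, life_lost_by_group))

def yld(prevalence_by_group, disability_weight):
    # YLD = Prev * DW, summed over age groups
    return disability_weight * sum(prevalence_by_group)

def rate(burden, population):
    # e.g. the YLL rate: burden divided by the group population P
    return burden / population

# Example: two age groups with 120 and 80 deaths, losing 30 and 12 years each
print(yll([120, 80], [30.0, 12.0]))   # -> 4560.0 person-years
print(yld([500, 300], 0.244))         # stroke DW for ages 15-60 -> 195.2
```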
The disability weights in this study were adopted from the GBD data: the weight of hypertension and its complications was 0; the weight of diabetes mellitus was 0.033; the weight of stroke was 0.244 for the 15–60-year age group and 0.258 for those over 60 years; and the weight of CHD was 0.395 (13). P is the number of people in each age group. The chronic disease burden attributable to lifestyle risk factors, including smoking and drinking, was calculated with reference to the RR values from the GBD 2017 study (2), with a theoretical minimum exposure level of 0 cigarettes/day and 0 g/day for smoking and drinking, respectively. The risk attributable to smoking was calculated as PAF = P(RR − 1)/[P(RR − 1) + 1], where P is the prevalence of smoking and RR is the relative risk. The risk attributable to drinking was calculated as PAF = Σᵢ Pᵢ(RRᵢ − 1)/[Σᵢ Pᵢ(RRᵢ − 1) + 1], summing over i = 1, ..., n, where Pᵢ is the prevalence of drinking in category i, RRᵢ is the relative risk at level i, and n is the number of exposure levels. The attributable disease burden due to smoking and drinking is given by AB_xj = B_j × PAF_xj, where AB_xj is the disease burden attributable to smoking or drinking, B_j is the disease burden of disease j (YLL, YLD, or DALY), and PAF_xj is the population attributable fraction of risk factor x for disease j.

Statistical analysis
We calculated the disease burden indicators for the four major chronic diseases, including the YLL, YLD, DALY, YLL rate, YLD rate, and DALY rate. Data from the surveillance of chronic diseases and their risk factors were entered using EpiData version 3.1 to create a database. Data were analyzed using SPSS version 24.0 for descriptive analysis of chronic disease prevalence. Counterfactual analysis was used to calculate the burden of disease attributable to smoking and alcohol consumption for the four major chronic diseases of interest in this study.

Analysis of disease burden of major chronic diseases in Shanxi Province
The four major chronic diseases in Shanxi Province caused a total of 938,100 person-years of DALYs, in descending order: CHD, stroke, diabetes mellitus, and hypertension (Figure 1). All four chronic diseases caused a higher disease burden in men than in women. Diabetes mellitus and CHD had the highest DALY loss in the 45–59-year age group, stroke in the …

Estimation of the disease burden attributable to smoking and drinking in Shanxi Province
The population attributable fractions (PAF) of deaths from the four chronic diseases due to smoking in Shanxi Province were, in descending order: CHD, stroke, hypertension, and diabetes mellitus. These were all higher in men than in women. Among them, the PAF of men for CHD, stroke, and hypertension gradually decreased with age, and the PAF was higher than 30% in the 30–60-year age group (Table 3). The PAF of deaths from the three chronic diseases due to drinking in Shanxi Province were, in descending order: stroke, CHD, and diabetes mellitus. The PAF for CHD and stroke was higher for men than for women. The PAF was highest in men aged 40–45 years for CHD and stroke, and in women aged 60–70 years for stroke and diabetes mellitus, respectively (Table 4).
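The attribution formulas above can likewise be sketched in a few lines. The RR values and prevalences below are placeholders, not the GBD 2017 estimates actually used in the analysis.

```python
# Sketch of the attribution step: Levin's PAF for smoking, the multi-level
# PAF for drinking, and AB = B * PAF. All numeric inputs are illustrative.

def paf_smoking(p, rr):
    # PAF = P(RR - 1) / (P(RR - 1) + 1)
    return p * (rr - 1.0) / (p * (rr - 1.0) + 1.0)

def paf_drinking(prevalences, rrs):
    # PAF = sum_i P_i(RR_i - 1) / (sum_i P_i(RR_i - 1) + 1)
    s = sum(p * (rr - 1.0) for p, rr in zip(prevalences, rrs))
    return s / (s + 1.0)

def attributable_burden(burden, paf):
    # AB_xj = B_j * PAF_xj, for burden B_j given as YLL, YLD, or DALY
    return burden * paf

paf = paf_smoking(0.25, 2.0)              # 25% prevalence, RR = 2.0
print(paf)                                # -> 0.2
print(attributable_burden(100_000, paf))  # -> 20000.0 person-years
```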
The DALYs for the four major chronic diseases attributable to smoking in Shanxi Province totalled 194,700 person-years: YLL was 110,100 person-years, accounting for 56.55%, and YLD was 84,600 person-years, accounting for 43.45%. Of the four chronic diseases, CHD had the highest disease burden attributable to smoking, with a DALY of 105,600 person-years, followed by stroke (77,200 person-years), hypertension (6,000 person-years), and diabetes mellitus (5,900 person-years). The attributable YLD was higher than the YLL for CHD and diabetes, and the attributable YLL was higher than the YLD for stroke. The DALYs for the three main chronic diseases attributable to drinking totalled 48,100 person-years: YLL 30,900 person-years (64.24%) and YLD 17,200 person-years (35.76%). Stroke had the highest disease burden attributable to drinking, with a DALY of 30,700 person-years, followed by CHD (16,700 person-years) and diabetes mellitus (1,100 person-years). The attributable YLD was higher than the YLL for CHD and diabetes, and the attributable YLL was higher than the YLD for stroke (Figure 3). By sex, the smoking- and drinking-attributable YLL, YLD, and DALY were significantly higher in men than in women. By age group, the YLL, YLD, and DALY attributed to smoking showed an overall increasing trend in the 30–60-year age group, and the disease burden attributed to smoking decreased significantly in the >60-year age group; the YLD and DALY attributed to drinking initially increased and then decreased with age, reaching their highest level around the 60-year age group before decreasing significantly (Table 5).

Discussion
This is the first study to quantitatively estimate the disease burden of major chronic diseases in Shanxi Province. Here, we used the DALY as a metric and evaluated the disease burden of major chronic diseases attributable to two common risk factors, i.e., smoking and drinking. The latest national health services survey showed that the most common chronic diseases were hypertension, diabetes mellitus, and CHD (9). In this study, the prevalence of these four diseases was high. Owing to differences in population, age structure, and lifestyle habits across the provinces of China, there are geographical differences in the prevalence of major chronic diseases. According to the results of a previous nationwide survey, the prevalence of diabetes mellitus and hypertension is higher in the northern region than in the southern region of China (14)(15)(16). This study found a lower prevalence of diabetes mellitus in the urban area than in a province in northeast China (20.21%) (17), but a higher prevalence than in a southeastern coastal province (9.4%) (18). The prevalence of hypertension in the rural area was higher than that of Jiangxi Province (24.04%) (19) and lower than that of the Ningxia Hui Autonomous Region (20), in line with the characteristic pattern of "high in the north and low in the south". There is still a high prevalence of CHD and stroke in China, with a steady increase from 1990 to 2017 that is still ongoing (21). In recent overviews, the incidence and mortality rates of stroke in China appear to be the highest in the world (22). Compared with a study conducted in rural areas of Shanxi Province in 2017 (23), the prevalence of all four diseases increased.
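The figures above can be cross-checked arithmetically against the stated totals: the four smoking components and the YLL/YLD split both reproduce 194,700 person-years exactly, while the three drinking components agree with 48,100 person-years to within rounding. A small check:

```python
# Consistency check on the attributable burdens reported above (person-years).
smoking = {"CHD": 105_600, "stroke": 77_200, "hypertension": 6_000, "diabetes": 5_900}
assert sum(smoking.values()) == 194_700   # matches the stated smoking total
assert 110_100 + 84_600 == 194_700        # smoking YLL + YLD

drinking = {"stroke": 30_700, "CHD": 16_700, "diabetes": 1_100}
print(sum(drinking.values()))             # 48,500: within rounding of 48,100
assert 30_900 + 17_200 == 48_100          # drinking YLL + YLD
```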
Therefore, in the prevention and control of chronic diseases, Shanxi Province should make these diseases the major focus and establish a long-term investment mechanism to maximize the effectiveness of prevention and control. We found that CHD was the most significant contributor to DALYs among the residents of Shanxi Province. In China, the DALY rate for cardiovascular diseases (CVD) declined in every province from 1990 to 2016, but the northern provinces had the highest age-standardized DALY rate for ischemic stroke (24). In this study, the DALY rate was higher than those of Sichuan Province (25) and Tianjin city (26), especially the YLD rate. The main burden of CHD in Shanxi Province therefore lay in the healthy life lost to disabling, life-threatening conditions. The difference between provinces is partly due to gaps in cardiovascular care (27,28), demonstrating the importance of increasing investment in the prevention and treatment of CHD in Shanxi Province. Stroke was second only to CHD as a cause of disease burden in Shanxi Province. As the third leading cause of death in China, stroke has a higher disease burden than the global average, which will continue to increase in the next decade (29). Early death is the leading cause of the stroke disease burden in China (30). YLL accounted for nearly 75% of the total stroke burden in this study, owing to the many rural areas in Shanxi Province and relatively scarce medical resources; thus, the YLL was higher than the YLD because of delayed treatment after stroke. Stroke treatment capacity should be strengthened in the future, and the capacity for stroke diagnosis and treatment at the grassroots level should be improved. The impact of smoking and excessive alcohol use on adverse cardiac and cerebrovascular outcomes is well established. Our study also found that CHD had the highest disease burden attributable to smoking, with a DALY of 105,600 person-years. Among smoking-related deaths, CVD accounts for approximately one-third of cases worldwide (31). Tobacco smoking, both active and passive (i.e., secondhand smoke), increases the incidence of all phases of atherosclerosis, from endothelial dysfunction to various types of CVD (32,33). Even smoking a single cigarette daily increases the risk of developing CHD and stroke. Currently, smoking is not yet completely banned in public places and indoor workplaces in Shanxi Province, which still has a high smoking rate. Beijing has effectively reduced the number of smokers and decreased hospital admissions for CVD by adopting the strictest tobacco control policy implemented in China to date (34). Therefore, it is necessary to take more stringent measures to reduce the CHD burden caused by the use of tobacco products. Notably, the burden of disease attributable to alcohol consumption was greater for stroke than for CHD, consistent with reports that moderate alcohol consumption may have a protective effect against CHD. The relationship between alcohol consumption and CVD is controversial (35). The relationship between alcohol intake and CVD risk is mostly dose-dependent, i.e., the greater the amount of alcohol consumed, the greater the relative increase in disease risk. However, in several meta-analyses, regular alcohol consumption in small amounts was found to have a protective effect against CHD and ischemic stroke (36,37). A study conducted in Argentina also estimated that drinking saved 85,772 DALYs from CHD but was responsible for 52,171 DALYs lost from stroke (38).
In addition, the DALY rate showed an increasing trend in the 30–60-year age group and began to decline gradually above the age of 60 years. This is related to the higher prevalence of smoking and alcohol consumption in the 30–60-year age group, which declines gradually after the age of 60 years. Aging is perhaps the most important risk factor affecting cardiovascular homeostasis (39). The disease burden attributable to smoking and alcohol consumption was significantly higher in men than in women, consistent with a study conducted in China (40). Thus, sex and age should be considered in the prevention of chronic diseases, focusing on screening for, and intervening against, important risk factors. Cardiovascular and cerebrovascular deaths due to smoking and alcohol consumption develop over the long term, and interventions against smoking and alcohol consumption from adolescence onward have important potential benefits in preventing death in middle and old age (41).

Our study had a few limitations. First, the mortality data were obtained from the central regional data of the Chinese cause-of-death surveillance dataset, which might have introduced uncertainty into the DALY estimates. Second, this study used parameters from the GBD studies, such as disability weights and relative risks of disease, which may not be fully applicable to Shanxi Province and may have introduced some bias into the results. However, the use of common parameters facilitates the comparison of results between different regions. Third, chronic diseases were identified based on self-report, which may be affected by measurement error or a lack of accuracy. However, the literature shows that self-reported measures of chronic diseases are widely used in large population-based studies and show reasonable accuracy.

The prevalence of chronic diseases in Shanxi Province remains serious, with a high prevalence of hypertension and diabetes mellitus. The disease burden caused by major chronic diseases is high, and reducing smoking and drinking behaviors can help reduce the disease burden caused by premature death due to hypertension, diabetes mellitus, CHD, and stroke. The relevant departments need to focus on the prevention and control of major chronic diseases, pay attention to changes in DALY loss by sex and age group, develop targeted prevention and control strategies, strengthen the health education of the population, and promote smoking cessation and alcohol restriction to effectively reduce population mortality and increase life expectancy.

Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement
The studies involving human participants were reviewed and approved by the Shanxi Medical University Ethics Committee. Written informed consent to participate in this study and to the publication of any potentially identifiable images or data included in this article was provided by the participants or their legal guardian/next of kin.

Author contributions
LH, QF, and YL conceived the idea. YW, YY, and YL participated in data collection and statistical analysis. YC, XC, SL, and MQ gave many valuable comments on the draft and polished it. All authors have read and approved the manuscript.
Funding
Funding for this study was received from the Population Strategy Project of the Health Commission of Shanxi Province (RK05) and the Natural Science Foundation of Shanxi Province (201901D111195) (QF).
Fusion multiplicities as polytope volumes: N-point and higher-genus su(2) fusion

We present the first polytope volume formulas for the multiplicities of affine fusion, the fusion in Wess-Zumino-Witten conformal field theories, for example. Thus, we characterise fusion multiplicities as discretised volumes of certain convex polytopes, and write them explicitly as multiple sums measuring those volumes. We focus on su(2), but discuss higher-point (N>3) and higher-genus fusion in a general way. The method follows that of our previous work on tensor product multiplicities, and so is based on the concepts of generalised Berenstein-Zelevinsky diagrams, and virtual couplings. As a by-product, we also determine necessary and sufficient conditions for non-vanishing higher-point fusion multiplicities. In the limit of large level, these inequalities reduce to very simple non-vanishing conditions for the corresponding tensor product multiplicities. Finally, we find the minimum level at which the higher-point fusion and tensor product multiplicities coincide.

Introduction
In a recent paper [1] we have shown how a higher-point su(r+1) tensor product multiplicity may be expressed as a multiple sum measuring the discretised volume of a certain convex polytope. That work is an extension of our previous work [2] on ordinary three-point couplings, where three highest weight modules are coupled to the singlet. The number of times the singlet occurs in the decomposition is the associated multiplicity. Both of these papers are based on generalisations of the famous Berenstein-Zelevinsky (BZ) triangles [3]. They also rely on the use of so-called virtual couplings, which relate different (true) couplings associated to the same tensor product. Our long-term objective is to extend these results to affine su(r+1) fusions. Here we make a start by considering su(2). It turns out that all our results on N-point tensor products [1] have analogous and level-dependent counterparts in N-point fusions. Firstly, a fusion multiplicity admits a polyhedral combinatorial expression, where it is characterised by the discretised volume of a convex polytope. Secondly, this volume may be measured explicitly, expressing the fusion multiplicity as a multiple sum. We also work out very simple, easily remembered conditions determining when an N-point su(2) fusion exists, i.e., when the associated multiplicity is non-vanishing. For infinite level, these "mnemo-friendly" conditions reduce to even simpler ones, solving the analogous problem for tensor products. The second part of the present work deals with the extension of the above results to higher-genus su(2) fusions. The first result is a characterisation of a general genus-h N-point fusion multiplicity as the discretised volume of a convex polytope. The volume is measured explicitly, whereby the fusion multiplicity is expressed as a multiple sum. In order to reduce the number of summations, we then modify our approach slightly. The main building blocks in these considerations are the genus-one two-point couplings. Combining these allows one to describe general higher-genus N-point fusion multiplicities using fewer parameters than are inherent in our polytope description. In terms of this reduced set of parameters, we provide explicit multiple sum formulas for generic genus-h zero-, one-, and two-point fusion multiplicities. Our expressions make manifest that the various fusion multiplicities are non-negative integers, and are non-decreasing functions of the affine level.
su(2) N-point fusion multiplicities
Let M_λ denote an integrable highest weight module of an untwisted affine Lie algebra. The affine highest weight is uniquely specified by the highest weight λ of the simple horizontal subalgebra (the underlying Lie algebra), and the affine level k. Fusion of two such modules may be written as

  M_λ × M_μ = Σ_ν N^{(k)ν}_{λ,μ} M_ν,   (1)

where N^{(k)ν}_{λ,μ} is the fusion multiplicity. Determining these multiplicities is equivalent to studying the more symmetric problem of determining the multiplicity of the singlet in the expansion of the triple fusion

  M_λ × M_μ × M_ν.   (2)

If ν⁺ denotes the highest weight conjugate to ν, we have N^{(k)ν⁺}_{λ,μ} = N^{(k)}_{λ,μ,ν}. The associated and level-independent tensor product multiplicity is denoted T_{λ,μ,ν}. It is related to the fusion multiplicity as

  N^{(k)}_{λ,μ,ν} ≤ N^{(k+1)}_{λ,μ,ν},   lim_{k→∞} N^{(k)}_{λ,μ,ν} = T_{λ,μ,ν}.   (3)

All of this extends readily to N-point couplings,

  M_{λ^{(1)}} × ... × M_{λ^{(N)}},   (4)

which are the subject of the present work. In particular, we have the relation

  lim_{k→∞} N^{(k)}_{λ^{(1)},...,λ^{(N)}} = T_{λ^{(1)},...,λ^{(N)}}.   (5)

In the following we will focus on su(2). For su(2) the three-point fusion multiplicity is

  N^{(k)}_{λ,μ,ν} = 1 if λ_1 + μ_1 + ν_1 ∈ 2Z_≥ and max(λ_1, μ_1, ν_1) ≤ ½(λ_1 + μ_1 + ν_1) ≤ k; 0 otherwise.   (6)

λ_1 denotes the finite or first Dynkin label of the weight λ. The level-independent information in (6) is encoded in the trivial BZ triangle (7), with entries

  a = ½(μ_1 + ν_1 − λ_1),   b = ½(λ_1 + ν_1 − μ_1),   c = ½(λ_1 + μ_1 − ν_1),   (8)

and hence

  λ_1 = b + c,   μ_1 = a + c,   ν_1 = a + b.   (9)

The level dependence is contained in the affine condition

  λ_1 + μ_1 + ν_1 ≤ 2k.   (10)

In Ref. [1] we outlined a general method for computing higher-point tensor product multiplicities. It is based on gluing BZ triangles (7) together using "gluing roots" (we refer to Ref. [1] for details). An illustration is provided by the N-point diagram (11) (in this example N is assumed odd). The role of the gluing is to take care of the summation over internal weights in a tractable way. The dual picture of ordinary (Feynman tree-) graphs is shown in thinner lines. Along a gluing, the opposite weights must be identified (for higher rank su(r+1) one must identify a weight with the conjugate weight to the opposite one, cf. [1]). The weights are simply given by sums of two entries (9). Our starting point [1] was to relax the constraint that the entries (8) should be non-negative integers. A diagram of that kind is called a generalised diagram. Any such generalised diagram, respecting the gluing constraints and the outer weight constraints (11), will suffice as an initial diagram. All other diagrams (associated to the same outer weights) may then be obtained by adding integer linear combinations of so-called virtual diagrams: adding a basis virtual diagram changes a given internal weight by two, leaving all other internal weights and all outer weights unchanged. The basis virtual diagram associated to a particular gluing is of the form (12). Enumerating the gluing roots (12) in (11) from right to left, the associated integer coefficients in the linear combinations are −g_1, ..., −g_{N−3} (we use a slightly different notation for these variables than that employed in [1]). Now, re-imposing the condition that all entries must be non-negative integers results in a set of inequalities in the entries, defining a convex polytope (13) in the Euclidean space R^{N−3}. By construction, its discretised volume is the tensor product multiplicity T_{λ^{(1)},...,λ^{(N)}}. In (13) we have introduced the quantity

  S = ½(λ_1^{(1)} + ... + λ_1^{(N)}).   (14)

That S is an integer is a consistency condition, i.e., for S a half-integer the multiplicity vanishes. The extension to fusion is provided by supplementing the set of inequalities (13) with the associated affine conditions (cf. (10)), one for each triangle, i.e., one for each line in (13).
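For reference, the three-point data just described are easily tabulated in code. The sketch below implements the entries (8) and the level-k rule (6); all arguments are finite Dynkin labels, and the snippet is an illustration in our notation rather than part of the polytope construction.

```python
# su(2) three-point data: BZ entries (8) and the level-k fusion rule (6).

def bz_entries(l1, m1, n1):
    # a, b, c: non-negative integers iff the three-point coupling exists
    return ((m1 + n1 - l1) // 2, (l1 + n1 - m1) // 2, (l1 + m1 - n1) // 2)

def fusion3(l1, m1, n1, k):
    S2 = l1 + m1 + n1                 # twice the quantity S
    if S2 % 2:                        # S must be an integer
        return 0
    return int(max(l1, m1, n1) <= S2 // 2 <= k)

print(bz_entries(2, 1, 1))   # (0, 1, 1): the coupling (2,1,1) exists
print(fusion3(2, 1, 1, 1))   # 0: requires k >= S = 2
print(fusion3(2, 1, 1, 2))   # 1
```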
This results in the definition of a convex polytope (15) in the Euclidean space R^{N−3}, with the affine conditions written on separate lines. By construction, its discretised volume is the associated N-point fusion multiplicity N^{(k)}_{λ^{(1)},...,λ^{(N)}}. This characterisation of the fusion multiplicity is a new result. It is stressed that (15) (and also (13)) is non-unique, as it reflects our choice of initial diagram when deriving (13), cf. [2,1]. Any choice will define a convex polytope of the same shape, and hence the same discretised volume, however. Changing the initial triangle merely corresponds to shifting the origin, or translating the entire polytope. We have seen that the fusion polytope (15) corresponds to "slicing out" a convex polytope embedded in the tensor product polytope (13). Thus, our approach offers a geometrical illustration of the statement that fusion is a truncated tensor product. The discretised volume of the convex polytope (15) may be measured explicitly. In order to avoid discussing intersections of faces we have to choose an "appropriate order" of summation (see Refs. [2,1]). However, such an order is easily found. In the multiple sum formula (16) we have made a straightforward choice.

Conditions for non-vanishing fusion and tensor product multiplicities
Here we shall present necessary and sufficient conditions determining when an N-point fusion multiplicity is non-vanishing, N ≥ 2. A similar result for the associated tensor product multiplicity is easily read off. Both sets of conditions are given as inequalities in the (finite) Dynkin labels. The conditions for fusion depend on the level k. A fusion multiplicity is non-vanishing if and only if the associated convex polytope has a non-vanishing discretised volume. In particular, the multiplicity is one when the polytope is a point. An analysis of the polytope (15), or equivalently of the multiple sum formula (16), leads to the necessary and sufficient conditions (17) for the fusion multiplicity to be non-vanishing. Here [x] denotes the integer value of x, i.e., the greatest integer less than or equal to x. Note that for d = 0 the associated inequalities reduce to 0 ≤ S − λ_1^{(l)}. These latter inequalities have been written separately for clarity. The upper bound on d is included to avoid redundancies. The conditions (17) may be proved by induction. In the set of inequalities (15) (or equivalently in the multiple sum formula (16)), one eliminates one after the other the variables g_1, ..., g_{N−3}. The inequalities involving g_1 and g_{N−3} are different in form from those for the remaining N−5 variables in (15). Thus, the induction concerns the elimination of the middle N−5 variables, g_2, ..., g_{N−4}. First we eliminate g_1, then g_2, etc. After having eliminated the first n−1 variables, 2 ≤ n−1 ≤ N−4, we have obtained the set of inequalities (18) in addition to the original inequalities in (15) involving only g_n, ..., g_{N−3}. It is when proving (18) that we use induction in n, and we conclude that it is true for 2 ≤ n−1 ≤ N−4. Eliminating the final variable g_{N−3} results in the asserted conditions (17), which we believe are new. For high level k a fusion reduces to a tensor product (5). Necessary and sufficient conditions for a non-vanishing tensor product multiplicity are therefore easily read off (17):

  T_{λ^{(1)},...,λ^{(N)}} ≠ 0   if and only if   S ∈ Z_≥ and λ_1^{(l)} ≤ S for all l = 1, ..., N.   (19)

As discussed in Ref. [1], this result is easily verified for N ≤ 4. For general N it is believed to be a new result. We note that (17) and (19) are also valid for N = 2, despite the fact that, a priori, the inequalities were derived for N ≥ 3 only.
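The large-k condition (19) is also easy to test numerically: compose three-point couplings along a comb-shaped channel, which realizes the summation over internal weights described earlier, and compare against the stated criterion. The brute-force sketch below (with a large k standing in for the tensor-product limit) is an independent cross-check, not the polytope volume formula itself.

```python
# Brute-force check of condition (19) for N = 4 su(2) tensor products.
from itertools import product

def fuse3(a, b, c, k):
    # su(2)_k three-point multiplicity, rule (6): 0 or 1
    s2 = a + b + c
    return int(s2 % 2 == 0 and max(a, b, c) <= s2 // 2 <= k)

def multiplicity(labels, k):
    # N-point singlet multiplicity via summation over internal weights
    mults = {labels[0]: 1}
    for lam in labels[1:-1]:
        new = {}
        for mu, m in mults.items():
            for nu in range(k + 1):
                if fuse3(mu, lam, nu, k):
                    new[nu] = new.get(nu, 0) + m
        mults = new
    return mults.get(labels[-1], 0)

K_LARGE = 100  # proxy for the k -> infinity (tensor product) limit
for labels in product(range(4), repeat=4):
    S2 = sum(labels)
    predicted = S2 % 2 == 0 and 2 * max(labels) <= S2  # condition (19)
    assert (multiplicity(list(labels), K_LARGE) > 0) == predicted
```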
Conditions on the level
The lower bound on k is immediately read off (17). In ordinary three-point fusion the analogous bound is sometimes referred to as the minimum threshold level, and is denoted t_min. It specifies the minimum value of k for which N^{(k)}_{λ^{(1)},...,λ^{(N)}} is non-vanishing (20). It does not make sense to assign a minimum threshold level to a fusion for which the associated tensor product multiplicity T_{λ^{(1)},...,λ^{(N)}} vanishes. According to (17) we have the expression (21) for t_min, with the parameters specified as in (17). The maximum threshold level, denoted t_max, is defined as the minimum level k for which the fusion multiplicity equals the tensor product multiplicity (22). Again, it is not natural to assign a maximum threshold level to a fusion if T_{λ^{(1)},...,λ^{(N)}} vanishes. Though in this case, one could define it as t_max = 0, since by assumption k ∈ Z_≥, and (22) would still be respected. To compute t_max in our case, we first observe that all affine conditions in (15) are redundant when k ≥ S; as an illustration, we have the inequalities (23) (assuming 3 ≤ m ≤ N−3). This means that t_max ≤ S. In order to show that

  t_max = S,   (24)

we first assume that there exists an integer n, 2 ≤ n ≤ N−2 (n = 1 is trivial), satisfying (25). We then consider the point defined by (26). It is straightforward to show that it is in the fusion polytope (15) when k ≥ S, and that it is not when k < S. Finally, if there does not exist an n, 2 ≤ n ≤ N−2, satisfying (25), we must have S ≤ λ_1^{(N−1)} + λ_1^{(N)}. In that case we consider the point g_l = 0, l = 1, ..., N−3. For this point to be in the polytope, the condition on k in (15) is S ≤ k, and we conclude that the maximum threshold level is given by (24).

Higher-genus su(2) fusion multiplicities
Here we will discuss the extension of our results above on genus-zero fusion to generic genus-h fusion. N^{(k,h)}_{λ^{(1)},...,λ^{(N)}} denotes the genus-h N-point fusion multiplicity. Just as in the case of vanishing genus, we may choose the channel freely. A simple extension of (11) is the genus-h N-point diagram (27) (in this example N is assumed even, while h is arbitrary). Again, the dual trivalent fusion graph is represented by thinner lines and loops; h is the number of such loops or handles. The role of the two zeros in (27) will be discussed below. Independent of the choice of channel, the number of internal weights or gluings is N + 3(h−1), while the number of vertices or triangles is N + 2(h−1). The basis diagram (29) is associated to the "self-coupling" or tadpole diagram (28); we call (29) a loop-gluing diagram. It is stressed that it differs from the gluing root (12) since it adds only one to the internal weight and not two. This discrepancy follows from the fact that the Dynkin labels satisfy λ_1 + μ_1 + ν_1 ∈ 2Z_≥, so if two weights are changed simultaneously and equally, we can only require an even change of their sum. A similar situation arises when considering the genus-one two-point coupling (30). A simple analysis shows that there are two basis loop-gluings associated to this coupling, and that they may be represented by the diagrams (31). It is now easy to write down the inequalities defining the convex polytope.
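Low-genus multiplicities can also be evaluated by summing three-point couplings over the internal weights of the corresponding trivalent graph, which gives a simple cross-check of the structure just described. The sketch below does this for the genus-one one-point (tadpole) and two-point couplings; it is a channel-sum evaluation under the standard fusion rules, not the polytope formulas developed in this paper.

```python
# Channel sums for low-genus su(2)_k multiplicities, built from rule (6).

def fuse3(a, b, c, k):
    s2 = a + b + c
    return int(s2 % 2 == 0 and max(a, b, c) <= s2 // 2 <= k)

def genus1_one_point(lam, k):
    # tadpole channel: sum over the loop weight mu of N(lam, mu, mu)
    return sum(fuse3(lam, mu, mu, k) for mu in range(k + 1))

def genus1_two_point(lam, mu, k):
    # two vertices joined by two internal lines (weights s and t)
    return sum(fuse3(lam, s, t, k) * fuse3(mu, s, t, k)
               for s in range(k + 1) for t in range(k + 1))

k = 3
print(genus1_one_point(0, k))      # 4 = k + 1, the number of primaries
print(genus1_one_point(1, k))      # 0: the one-point label must be even
print(genus1_two_point(2, 1, k))   # 0: mixed parity vanishes, as stated
print(genus1_two_point(2, 2, k))   # 6: a nonzero even-even example
```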
Our choice of initial diagram is indicated in (27) by the two zeros: all entries of the higher-genus part to the right of them are zero, while the N-point part follows the pattern of the initial diagram associated to (11) and (13) (see [1] for details). Enumerating the (loop-)gluings from right to left (and L before L′), the integer coefficients in the linear combinations are g_1, ..., g_h, −g_{h+1}, ..., −g_{N+h−2} (the sign convention is merely for convenience), and l_1, l′_1, ..., l_{h−1}, l′_{h−1}, while l is associated to the tadpole at the extreme right. Listing the inequalities associated to the triangles from right to left, we have the convex polytope (32) (assuming h ≥ 1). By construction, its discretised volume is the fusion multiplicity N^{(k,h)}_{λ^{(1)},...,λ^{(N)}}, which then provides a new way of characterising fusion multiplicities. The volume may be measured explicitly, expressing N^{(k,h)}_{λ^{(1)},...,λ^{(N)}} as the multiple sum (33), with the summation variables bounded according to (34). This constitutes the first explicit result for the general genus-h N-point fusion multiplicities. In the following we will discuss a few examples, where the convex polytope characterisation is sacrificed in order to reduce the number of summation variables.

Two-point couplings
Let us first consider the genus-one two-point coupling (30). According to the general discussion above, one may express the associated fusion multiplicity in terms of two parameters. A further analysis leads to the result (35), where the zeroth Dynkin label of the affine weight λ is λ_0 = k − λ_1. Now, it is straightforward to construct higher-genus two-point diagrams (36) by gluing together diagrams like (30). When computing the associated fusion multiplicities one uses the result (35), paying attention to the finite Dynkin labels being odd or even. For example, when λ_1 and μ_1 are both even, the sum formula reads as in (37). It is easily adjusted to cover the situation when both labels are odd (see also (46)). If one label is odd and the other is even, the associated fusion multiplicity vanishes. Note that the number of summation variables is h−1, while the number of summations in our previous treatment (33) was 3h−1. Thus, from that point of view (37) is a considerable simplification. The summations in (37) are, in principle, straightforward to evaluate using the formula (38), where

  (a)_n ≡ a(a+1)...(a+n−1).

(38) is easily proven by induction.

One-point couplings
A one-point coupling simply corresponds to putting one of the weights of a two-point coupling equal to zero. It may be illustrated by the diagram (39), and the associated fusion multiplicity is given by (41). It is noted that the Dynkin label λ_1 must be even. For h = 1, (41) reduces to (42).

Zero-point couplings
As for any other N, there are many possible choices of channels when discussing zero-point couplings. An immediate application of our discussion on two-point couplings (36) corresponds to the diagram (43). This is obtained by putting both weights in (36) equal to zero, and the associated fusion multiplicity may be expressed as in (44). Another "natural" channel is governed by the diagram (45). Following our general prescription for computing the associated fusion multiplicity results in the expression (46), which differs considerably in form from (44). Nevertheless, by construction, the two multiple sums must be identical. We will not attempt to prove that explicitly. This identity provides a simple example of the result of identifying the fusion multiplicities computed using different channels.
Comments
We conclude by adding a few comments, primarily on the existing literature. In Ref. [4] Dowker discusses results on fusion multiplicities based on the Verlinde formula [5]. The results are expressed in terms of twisted cosec sums and Bernoulli polynomials, and pertain essentially to two-point couplings (and therefore also to one- and zero-point couplings). Particular emphasis is put on the classical limit where the level k tends to infinity, and previous results on that limit are recovered ([6,7] for zero-point couplings and [7] for one-point couplings). In the language employed in [4], fusion multiplicities correspond to dimensions of certain vector bundles over the moduli space of an N-punctured Riemann surface of genus h. The results of [4] are essentially obtained by trigonometric manipulations of the Verlinde formula. They do not, therefore, display any transparent relationship with our convex polytope approach. Nevertheless, a comparison of results leads to interesting identities between different types of multiple sums, and some similarities of the final expressions are apparent. One could try to prove their equivalence by brute force. That is beyond the scope of the present work, though. The results of [4] do not offer an immediate resolution to the question of when a fusion multiplicity is non-vanishing. By construction, a characterisation in terms of a convex polytope, on the other hand, is "almost" designed to address such problems. Furthermore, our approach seems amenable to the treatment of higher rank su(r+1) fusions, whereas an application of the Verlinde formula appears technically very complicated. We are currently considering such an extension of our approach, based on previous results on the role of BZ triangles in affine su(3) and su(4) fusions [8,9]. A different approach to fusion, based on the depth rule and the correspondence to three-point functions in Wess-Zumino-Witten conformal field theory, may be found in our recent work [10,11]. In Ref. [12], Kirillov provides a combinatorial formula for the N-point su(2) fusion multiplicities. It is a fermionic-type formula, a sum of products of binomial coefficients, derived by applying the Bethe ansatz to certain solvable lattice models. (For a nice, brief review of formulas of fermionic and bosonic type, see the introduction to [13].) No formulas for higher-genus multiplicities are given, however. Kirillov's fermionic formula has also been generalised somewhat. See Theorem 6.2 of [14] for a q-deformed su(r+1) generalisation, and the extensive bibliography of [15]. Although interesting for other reasons, these formulas are only valid for certain representations at the N points, and they are also restricted to h = 0. Such restrictions do not appear to be necessary in our method.
Testicular torsion during the COVID-19 pandemic: Results of a multicenter study in northern Italy

Introduction
The literature has reported increased avoidance of the Emergency Department (ED) during the COrona VIrus Disease 19 (COVID-19) pandemic, causing a subsequent increase in morbidity and mortality for acute conditions. Testicular torsion is a surgical emergency that can lead to the loss of the affected testicle if treatment is delayed. As testicular loss is time-related, outcomes were hypothesized to be negatively affected by the pandemic.

Objective
The aim is to investigate whether presentation, treatment, and outcomes of children with testicular torsion were delayed during COVID-19.

Study design
Medical records of pediatric patients operated on for testicular torsion at six Paediatric Surgical Units in northern Italy between January 2019 and December 2020 were retrospectively reviewed. Patients were divided into those treated during (dC) or before the pandemic (pC). To reflect possible seasonality related to lockdown restrictions, winter and summer calendar blocks were also analysed. For all cohorts, demographic data, pre-operative evaluation, operative notes, and post-operative outcomes were reviewed. Primary outcomes were referral time, time from diagnosis to surgery, and ischemic time, while secondary outcomes were orchiectomy and atrophy rates. Statistical analysis was conducted as appropriate.

Results
A total of 188 patients with acute testicular torsion were included in the study period, 89 in the pre-COVID-19 (pC) period and 99 during COVID-19 (dC). Time from symptom onset to access to the Emergency Department (T1) did not differ between the two populations (pC: 5.5 h, dC: 6 h, p = 0.374), and neither did time from diagnosis to surgery (pC: 2.5 h, dC: 2.5 h, p = 0.970) or ischemic time (pC: 8.2 h, dC: 10 h, p = 0.655). T1 was <6 h in 45/89 patients (51%) pC and 46/99 patients (46%) dC (p = 0.88, Fisher's exact test). Subgroup analyses accounting for the different lockdown measures confirmed the absence of any difference. The orchiectomy rate was 23% (23/99) dC and 21% (19/89) pC (p = 0.861, Fisher's exact test), and the rate of post-operative atrophy was 9% dC (7/76) and 14% pC (10/70) (p = 0.44, Fisher's exact test).

Discussion
Despite the worldwide reduction in pediatric ED accesses, we found that neither ischemic time nor adverse long-term outcomes in children with testicular torsion increased during the COVID-19 pandemic. In the available literature, few studies have investigated this topic, and their results are conflicting. Similarly to our findings, some studies found that timing and orchiectomy rates were not significantly different during the pandemic, while others reported a correlation with pandemic seasonality. Furthermore, recent pediatric literature has reported delayed testicular torsion diagnosis due to shame in informing parents. Strengths of this study are its large sample size, multicenter design, and long study period. Its main limitation is its retrospective design.

Conclusions
We report our large cohort from one of the most heavily COVID-19-affected regions, finding that referral, intra-hospital protocols, and ischemic time in testicular torsion were not increased during the pandemic, nor were the orchiectomy and atrophy rates.

Summary Figure: How referral and surgical times have been influenced by the pandemic.
Introduction
The spread of COronaVIrus Disease 19 (COVID-19) began as a geographically confined pneumonia of unclear etiology and rapidly reached pandemic dimensions, impacting all aspects of life across the world. Since the World Health Organization (WHO) confirmed COVID-19 as a global pandemic on March 11, 2020, the management of the disease has required a rapid reorganization of health systems and the global transmission of evidence-based information. Italy was the first European country heavily affected by COVID-19, with a greater concentration in the north of the country, where the healthcare system was rapidly overwhelmed. To contain the disease, the government established a stepwise strategy, starting with the complete lockdown of the initial foci in northern Italy on 20 February 2020 and subsequently adopting progressively more stringent lockdown measures for the entire nation, as of 11 March. During that period, the medical literature reported increased avoidance of the Emergency Department (ED) for non-COVID-19 illnesses [1–3]. Compared with the same period of previous years, many Italian authors reported a huge reduction in paediatric admissions to the ED, ranging from 72% to 92% [4]. Also, during the COVID-19 pandemic, elective surgical procedures were cancelled in most centers, and surgery was limited to urgent surgical or trauma patients. These efforts to minimize unnecessary traffic through healthcare facilities resulted in a significant reduction in emergency department patient encounters, leading to increased paediatric morbidity and mortality.
Testicular torsion is a common surgical emergency, since it can lead to the loss of the affected testicle, especially when diagnosis is delayed. The reported annual incidence of testicular torsion is 1:4000 in males aged under 18 years, and it accounts for 5–25% of cases of acute scrotum in children. Prompt diagnosis and surgical management with scrotal exploration and detorsion within the first 6–8 h following symptom onset are important to prevent testicular loss [5,6]. Given the difficulty and high testis loss rate even under optimal conditions, COVID-19 has been hypothesized to have a negative impact on acute scrotum management [7]. The aim of our study was to investigate whether children with testicular torsion had delayed presentation and treatment during the pandemic period in a pool of centres highly affected by COVID-19, resulting in an increased rate of orchiectomy and testicular atrophy. The investigation compared time from symptom onset to ED access, ED-to-operating-room (OR) time, and total ischemic time during the COVID-19 pandemic with the pre-pandemic period, as well as the orchiectomy rate and the testicular atrophy rate.

Materials and methods
A multicentric retrospective study was conducted in six Paediatric Urology and Paediatric Surgery Departments of northern Italy, representative of the three areas most severely affected during the COVID-19 pandemic, namely Lombardy, Piedmont, and Veneto. Included patients were referred to the Torino, Vicenza, Brescia, Bergamo, Padova, and Treviso Hospitals. The medical records of all consecutive patients evaluated at the Emergency Department for acute scrotum and operated on for testicular torsion in the last 2 years were reviewed. We included all male patients aged between one month and 18 years with a diagnosis of acute testicular torsion who underwent emergency scrotal exploration plus detorsion orchiopexy or orchiectomy at the included institutions. Patients who were not confirmed to have testicular torsion on surgical exploration were excluded. Patients were then divided into two cohorts: data from the pandemic period from March 2020 to January 2021 (COVID-19 pandemic, dC) were compared with the pre-COVID period (pC), from January 2019 to February 2020, which served as the control group. The timing of the pandemic cohort was determined based on the WHO declaration of a pandemic dated March 11, 2020. To account for a possible correlation of the results with the lockdown restrictions, we also compared, within the COVID period, outcomes during two different calendar blocks: the winter period with stricter lockdown (March–May 2020 and October 2020–January 2021, strict-lockdown) and the summer period with softer restraint policies (June–September 2020, soft-lockdown). For both cohorts of patients, demographic data, ultrasonographic findings, recorded times and dates, information on COVID-19 swab results, and operating theatre utilization were collected. A few centers performed nonsurgical manual detorsion at ED access, although all patients undergoing manual untwisting were still subject to emergent surgical exploration as per each centre's protocol. Orchiectomy versus detorsion orchiopexy was determined from the operative records. Post-operative atrophy, defined either clinically or based on ultrasonographic findings, was also recorded.
Atrophy was defined as a difference in testicular volume >80% on ultrasound compared with the contralateral testis, or as a reduction of 3 or more sizes on the orchidometer. Primary outcomes were time from symptom onset to presentation to the ED (T1), time from diagnosis to surgery (T2), and ischemic time (T3), from symptom onset to surgical incision. Secondary outcomes were the orchiectomy rate and the rate of testicular atrophy at follow-up in preserved testes. Statistical analysis was conducted as appropriate: dichotomous variables were expressed using rates and percentages, while continuous variables were expressed as medians and interquartile ranges (IQR), unless otherwise specified. The D'Agostino–Pearson test for normal distribution was applied to all variables, and parameters not showing a Gaussian distribution were analysed with non-parametric tests. Comparative analyses were therefore performed with either Mann–Whitney or Kruskal–Wallis tests for continuous variables and Fisher's exact test for categorical variables. P values < 0.05 were considered significant. Statistical analyses were conducted using GraphPad Prism software (version 6, San Diego, CA), which was also used for displaying the tables.

Results
During the study period, a total of 188 patients with acute testicular torsion were included. Of these, 89 occurred in the pre-COVID-19 period and 99 during COVID-19. Of the latter, we further divided the soft-lockdown period, with 36 patients, from the strict-lockdown period, with 63 patients. Median age at presentation was 13 years (range 6 months–17 years). Referral time (T1, time from symptom onset to access to the Emergency Department) was not statistically different: pC 5.5 h (IQR 3–15) versus dC 6 h (IQR 2.5–36), p = 0.374 (Mann–Whitney test, see Fig. 1). Cases that occurred in March 2020, during the first national lockdown weeks, were also analysed separately and showed a slight, though not significant, increase in median time (10 h, p = 0.36, Mann–Whitney test, Fig. 1, red dots). The subgroup analysis of patients presenting within the pandemic period, comparing strict- and soft-lockdown months, still did not show any difference (p = 0.772, Kruskal–Wallis test). Also, T1 was <6 h in 45/89 patients (51%) pC and 46/99 patients (46%) dC (p = 0.88, Fisher's exact test). Time from ED access to entry into the operating room (T2, ED-to-OR) was identical in the two time periods (Fig. 2). In fact, T2 dC was 2.5 h (IQR 2–3.5), the same as pC 2.5 h (IQR 2–4), p = 0.970, Mann–Whitney test. Again, subgroup analysis accounting for lockdown variation did not show any difference, with a p value of 0.268 (Kruskal–Wallis test). Finally, no differences were found in the ischemic time (T3), from symptom onset to entry into the operating room (Fig. 3): pC T3 was a median of 8.2 h (IQR 6–19), while in the dC period it was 10 h (IQR 5–10), not statistically different (p = 0.655, Mann–Whitney test). Both during pC and dC there was a comparable rate of patients who had pre-operative manual derotation in the ED. Of the patients operated on during the pandemic, 46/99 (47%) were operated on in a dedicated COVID operating room; the remaining patients were operated on in a non-COVID theatre because the swab had already proven negative.
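The categorical comparisons above can be reproduced from the published counts with any standard implementation of Fisher's exact test. A sketch with SciPy (the study itself used GraphPad Prism):

```python
# Re-computation of the Fisher's exact tests from the reported 2x2 counts.
from scipy.stats import fisher_exact

# Orchiectomy vs. testis preserved, during (dC) vs. before (pC) COVID-19
odds, p = fisher_exact([[23, 99 - 23], [19, 89 - 19]])
print(f"orchiectomy: p = {p:.3f}")   # should be close to the reported 0.861

# Post-operative atrophy in preserved testes: 7/76 dC vs. 10/70 pC
odds, p = fisher_exact([[7, 76 - 7], [10, 70 - 10]])
print(f"atrophy: p = {p:.2f}")       # should be close to the reported 0.44
```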
Discussion
Since the declaration of the COVID-19 pandemic, a large number of countries around the world have applied severe restrictive measures to prevent viral spread and avoid overwhelming national health systems. The aim of these measures was to reduce social contact by closing schools, suspending non-essential productive activities, stopping mass gatherings and events, and restricting individual movement [8]. Italy was the first country outside Asia to experience a widespread epidemic and also to impose a generalized lockdown, on March 11, 2020, allowing its citizens to leave their homes only for medical needs or grocery shopping, converting non-essential work to remote working, and moving traditional face-to-face lessons to distance learning. Again in Italy, the need to postpone non-urgent ED access for both the adult and paediatric populations was advocated by the press. As a consequence of these drastic measures and fear of contagion, a substantial decrease in paediatric ED visits and a considerable reduction in clinical visits to family pediatricians have been reported [9]. In recent paediatric literature, these daily-life limitations have been a point of discussion owing to the increased risk of delaying the diagnosis of potentially serious clinical conditions. An e-survey conducted in the United Kingdom and Ireland found that 32% of pediatric consultants had seen children with delayed presentations of potentially life-threatening conditions such as diabetic ketoacidosis, sepsis, and malignancy [10]. In surgical practice, several recent studies on the management of acute appendicitis during the COVID-19 pandemic clearly showed that staying at home, due to public health safety orders, negatively impacted children who developed appendicitis. The highest level of evidence on this topic comes from a recent meta-analysis, which emphasizes a significantly higher incidence of complicated appendicitis in children during the COVID-19 period than in the pre-COVID-19 period [11]. For instance, an increased rate of perforated appendicitis in pediatric patients during the pandemic, compared with the pre-COVID-19 period, has been reported [11–13]. Multiple factors have been hypothesized to be responsible for this increase in complicated appendicitis, such as delayed presentation of pediatric patients, socioeconomic factors, or delays in time to surgery due to restrictive pandemic protocols [11]. Starting from these assumptions, we compared presentation trends and outcomes among paediatric patients with testicular torsion before and during the COVID-19 period in several centers highly affected by the pandemic. Contrary to our expectations, we demonstrated that neither the time periods from symptom onset to ED referral and intervention nor the long-term outcomes, such as the orchiectomy and post-operative atrophy rates, were statistically increased during the COVID-19 pandemic. In the available literature, only six studies have investigated whether the COVID-19 pandemic caused an increased number of orchiectomies as a consequence of delayed presentation and diagnosis of acute testicular torsion in paediatric patients (Table 1). A recent meta-analysis compared all these studies, focusing on the impact of the COVID-19 pandemic on pediatric testicular torsion in terms of duration of symptoms, proportion of children with delayed presentation (>24 h), and orchiectomy rate. Pogorelic et al. reported that no significant difference in outcomes existed between the pre-COVID-19 and COVID-19 periods [14]. Similar to our findings, studies by Nelson et al. and Littman et al.
found that time from onset of symptoms to ED presentation, ischemic times, and orchiectomy rates for testicular torsion at their centers were not significantly different during the COVID-19 pandemic period compared with the pre-COVID period [15,16]. Shields et al. reported the same results, but with a statistically significant increase in testicular torsion cases during the COVID-19 pandemic period [17]. However, unlike these above-mentioned studies, we decided in the present study to extend data collection until January 2021, including the two major peaks of infection and the different grades of restriction measures. Our subgroup analysis of the two time periods, namely the high COVID-19 incidence period during the winter months, reflecting strict lockdown measures, and the low COVID-19 incidence period during the summer, did not highlight any statistical difference. This finding is in contrast with the results of Holzman et al., who reported a difference between the two analysed pandemic periods; in that study, however, the periods analysed were limited to the summer and spring months, both characterized by a softening of the lockdown measures. Moreover, recent paediatric literature has reported a 13% rate of delayed testicular torsion diagnosis due to shame and fear in informing parents [18]. We acknowledge that our study contrasts with previously published ones that reported longer times to presentation and higher orchiectomy rates as an effect of the observed delay in seeking emergency care during the COVID-19 period [7–19]. We explain this apparently anomalous finding by the longer lockdown period compared with other countries, which facilitated interaction between children, at home after schools closed, and parents, at home for remote working or temporary unemployment. These changes may have increased parental awareness of their children's physical condition and their ability to respond in a timely manner to any acute symptoms, despite the pandemic restrictions. This hypothesis could also support the results reported by Lee et al., who observed significantly fewer delayed presentations of testicular torsion and shorter ischemia times at presentation during the COVID-19 period [20]. Interestingly, within our follow-up time, we did not record an increase in the rate of testicular atrophy during the COVID-19 period. Finally, the need to avoid the intra-hospital spread of contagion and to ensure the protection of healthcare workers was necessarily linked with the availability of rapid and sensitive testing for patients undergoing surgery, or with the presence of a COVID-19-dedicated operating room. These aspects and new protocols were expected to have lengthened some diagnostic and therapeutic pathways. However, as far as testicular torsion is concerned, we found that this time interval was not different between the COVID-19 cohort and the pre-pandemic controls. We postulate that limiting the number of family members allowed to enter the ED, having effective and rapid COVID-19 testing and dedicated operating rooms, and reducing overall elective surgery to prioritize emergency treatment may be some of the key points in maintaining timely surgical exploration, thereby not influencing long-term outcomes of testicular preservation.
Our study has several important strengths, such as the large number of patients, a multicenter design including the most affected Italian regions, and a longer pandemic period than the remaining available literature; its main limitation is its retrospective design, which we have tried to mitigate through an in-depth statistical analysis. The large multicenter group, despite giving a wide overview of the situation during the pandemic, is affected by some limitations, such as the variability among centers in the management of this condition and in their organization during the pandemic (for example, the possibility of direct access to the OR through a dedicated paediatric fast-track service, the presence of other specialties within the hospital, and the different local guidelines for pandemic restrictions). Conclusions Management of testicular torsion, from diagnosis in the ED to arrival in the OR, must be very fast to stay within the testicle-saving time window. We report that, in a large cohort from the regions most heavily affected by COVID-19, referral times, intra-hospital protocols, and thus the total ischemic time for testicular torsion were not increased by the pandemic. As a consequence, the orchiectomy and post-operative atrophy rates were also not substantially increased. Parental awareness and the development of appropriate protocols may allow the standard of care for emergency surgery to be maintained even during a worldwide pandemic.
Dietary Flavonoids, Copper Intake, and Risk of Metabolic Syndrome in Chinese Adults
The effects of flavonoids and copper (Cu) on metabolic syndrome (MetS) have been investigated separately, but no information exists about their joint association with the risk of MetS in population studies. In this cross-sectional study, a total of 9108 people aged 20–75 years from the Harbin Cohort Study on Diet, Nutrition, and Chronic Non-Communicable Diseases (HDNNCDS) were included. Flavonoid intakes were calculated based on the flavonoid database created in our laboratory. Cu and other nutrient intakes were estimated using the Chinese Food Composition Table. Among all study subjects, a total of 2635 subjects (28.9%) met the diagnostic criteria for inclusion in the MetS group. Total flavonoids (fourth vs. first quartile, odds ratio (OR): 0.77, 95% confidence interval (CI) 0.66–0.90, Ptrend = 0.002) and Cu (OR 0.81, 95% CI 0.70–0.94, Ptrend = 0.020) were inversely associated with the risk of MetS after adjusting for potential confounders. Higher flavonoid intake was more strongly associated with a lower risk of MetS at high levels of Cu intake (Pinteraction = 0.008). Dose-response analysis showed an L-shaped curve between the total intake of five flavonoids and the risk of MetS. These results suggest that higher flavonoid intake is associated with a lower risk of MetS, especially under high levels of Cu intake. Introduction Metabolic syndrome (MetS) is characterized by a cluster of metabolic abnormalities, including abdominal obesity, hyperglycemia, hypertension, and dyslipidemia [1]. MetS results from a complicated interaction between genetic, metabolic, and environmental factors, in which diet is a potent and modifiable environmental factor. Some diets and nutrients have been shown to play a protective role against the development of MetS [2,3]; however, the roles of flavonoids and copper have not yet been clearly determined [4,5]. Flavonoids are a class of plant secondary metabolites. They are the most common group of polyphenolics in the human diet. Flavonoids are relatively abundant in fruits, vegetables, grains, herbs, and beverages, and they have a wide range of biochemical and pharmacological effects, such as anti-inflammatory and anti-proliferative actions [6,7]. Flavonoids are strong antioxidants and metal chelators with beneficial therapeutic characteristics, encouraging their development as candidates for targeting metal-induced diseases [8]. Copper (Cu) is an essential trace metal that is required for the catalysis of several important cellular enzymes. Some studies have shown that dietary Cu intake is significantly and inversely associated with MetS [9][10][11]. The flavonoid-copper complex has been reported to have anti-tumor properties by promoting the cleavage of plasmid DNA and inducing oxidative DNA damage [12][13][14]. We were therefore interested in whether different levels of Cu intake can influence the relationship between flavonoids and MetS. The aim of this study was thus to clarify the association of dietary flavonoids and Cu with MetS and to explore the interaction between dietary flavonoids and Cu in their effect on MetS in a large cross-sectional study of adult residents in urban Harbin, North China.
We hypothesized that total flavonoid and Cu intakes are both inversely associated with the risk of MetS and that their combination can significantly reduce this risk. Study Population Our study subjects were from the Harbin Cohort Study on Diet, Nutrition, and Chronic Non-Communicable Diseases (HDNNCDS) (Trial Registration: ChiCTR-ECH-12002721 at http://www.chictr.org.cn/showproj.aspx?proj=6833), launched in 2010 [15]. The HDNNCDS covered 7 urban administrative regions of Harbin. Each region was divided into 3 strata according to financial situation, and a total of 42 communities were randomly selected from these strata using a stratified multi-stage random cluster sampling design. Residents who had lived in their communities for more than two years and did not have cancer or type 1 diabetes were included in our survey. A total of 9734 persons aged 20-75 years participated in the HDNNCDS. We excluded subjects at baseline who reported extreme values for total energy intake (<500 kcal/day or >4500 kcal/day) (n = 323), who had undergone dietary intervention for diabetes or other diseases (n = 214), or who had more than 10 unfilled items in the questionnaire (n = 89). Finally, a total of 9108 subjects were eligible for analysis. The study protocol of the HDNNCDS was approved by the Ethics Committee of Harbin Medical University, and written informed consent was provided by all subjects. The methods in this study were in accordance with the approved guidelines (Figure 1). Questionnaire Data Collection Detailed in-person interviews were administered by trained personnel using a structured questionnaire to collect information on demographic characteristics, dietary intake, lifestyle, and physical condition. The section on dietary intake was evaluated using a validated food frequency questionnaire (FFQ). A total of 103 food items were included in the questionnaire, covering most of the food commonly consumed in urban Harbin.
For each food item, the subjects were asked how frequently they had consumed that food over the preceding year, followed by a question on the amount consumed in liang (a unit of weight equal to 50 g) or mL (for liquid food items) per unit of time. We then multiplied the amount of each item by its consumption frequency to obtain the daily intake of each food item. Cu and other nutrient intakes were calculated by multiplying the daily intake of each food item by the nutrient content listed in the Chinese Food Composition Table [16]. The section on lifestyle and physical condition mainly included information about labor intensity, smoking, alcohol consumption, and the taking of medicines and health products over the past 12 months. Dietary Flavonoid Assessment Reverse-phase high-performance liquid chromatography (RP-HPLC) was used to determine flavonoid levels in the food items commonly consumed in Harbin, China. A total of 41 food items in the questionnaire, including 3 potatoes and their products, 7 legumes and their products, 19 fresh vegetables, and 14 fresh fruits, were assessed for flavonoid content. In the present study, we quantified only the following flavonoids: three major flavonols (quercetin, kaempferol, and isorhamnetin) and two major flavones (luteolin and apigenin), which have been most widely investigated in anti-carcinogenesis studies; the effects of other flavonoid subclasses on MetS were not considered [17]. The measured flavonoid content of each flavonoid-rich food item, multiplied by the consumption amount reported in the FFQ, was then taken as the dietary flavonoid intake. Anthropometric Measurement and Biochemical Assessment Anthropometric measurements, including height, weight, and waist circumference, were taken by well-trained examiners, with subjects wearing light, thin clothing and no shoes. Body weight and height were measured to the nearest 0.1 kg and 0.1 cm, respectively. Body mass index (BMI) was calculated as weight (kg) divided by the square of height (m²). Systolic blood pressure (SBP) and diastolic blood pressure (DBP) were measured 3 times with a standard mercury sphygmomanometer on the right arm of each subject after a 10-min rest in a sitting position, and the mean values were used for analysis. Fasting and postprandial (2 h after drinking water containing 75 g of glucose) blood samples were taken from all participants at baseline. Fasting plasma glucose (FPG), 2-h postprandial plasma glucose (PPG), and blood lipids, including total cholesterol (TC), triglyceride (TG), low-density lipoprotein cholesterol (LDL), and high-density lipoprotein cholesterol (HDL), were measured using an automatic biochemistry analyzer (Hitachi, Tokyo, Japan). Study Outcome Definition Metabolic syndrome (MetS) was diagnosed according to the International Diabetes Federation (IDF) 2005 guidelines [18] as the presence of central obesity (waist circumference ≥90 cm for men and ≥80 cm for women) plus any two of the following conditions: (1) hypertriglyceridemia (TG ≥ 1.7 mmol/L or specific treatment for this lipid abnormality); (2) low HDL cholesterol (HDL < 1.03 mmol/L in men or <1.29 mmol/L in women, or specific treatment for this lipid abnormality); (3) high blood pressure (SBP ≥ 130 mmHg or DBP ≥ 85 mmHg, or treatment of previously diagnosed hypertension); and (4) hyperglycemia (FPG ≥ 5.6 mmol/L or previously diagnosed type 2 diabetes).
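To make this decision rule concrete, the following minimal Python sketch encodes the IDF 2005 criteria exactly as stated above. It is an illustration rather than code from the study; the Subject fields and function name are invented for the example.

from dataclasses import dataclass

@dataclass
class Subject:
    sex: str                      # "M" or "F"
    waist_cm: float               # waist circumference (cm)
    tg_mmol: float                # triglycerides (mmol/L)
    hdl_mmol: float               # HDL cholesterol (mmol/L)
    sbp_mmhg: float               # systolic blood pressure (mmHg)
    dbp_mmhg: float               # diastolic blood pressure (mmHg)
    fpg_mmol: float               # fasting plasma glucose (mmol/L)
    lipid_treated: bool = False   # specific treatment for a lipid abnormality
    htn_treated: bool = False     # treatment of previously diagnosed hypertension
    t2dm: bool = False            # previously diagnosed type 2 diabetes

def has_mets(s: Subject) -> bool:
    """IDF 2005: central obesity is mandatory, plus any two of four components."""
    central_obesity = s.waist_cm >= (90 if s.sex == "M" else 80)
    if not central_obesity:
        return False
    components = [
        s.tg_mmol >= 1.7 or s.lipid_treated,                               # (1) hypertriglyceridemia
        s.hdl_mmol < (1.03 if s.sex == "M" else 1.29) or s.lipid_treated,  # (2) low HDL cholesterol
        s.sbp_mmhg >= 130 or s.dbp_mmhg >= 85 or s.htn_treated,            # (3) high blood pressure
        s.fpg_mmol >= 5.6 or s.t2dm,                                       # (4) hyperglycemia
    ]
    return sum(components) >= 2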
Statistical Analysis Data were presented as percentages for categorical variables and means ± standard deviations for continuous variables. Differences in sociodemographics, body composition, clinical and metabolic parameters, and dietary nutrient and flavonoid intakes between MetS subjects and control subjects were evaluated using t-tests for continuous variables and Chi-square tests for categorical variables. Multivariate logistic regression was used to estimate adjusted odds ratios (ORs) and 95% confidence intervals (CIs) for flavonoid and copper intake in predicting MetS. Flavonoid and Cu intakes in the model were standardized by energy (the crude dietary flavonoid or nutrient intake per 1000 kcal of total energy) and then divided into quartile categories based on the cumulative average. To determine the confounding factors in this study, differences in MetS prevalence according to basic factors, such as age (years), sex (%), smoking (%), drinking (%), physical activity (%), and energy intake (kcal/day), were examined. For energy-adjusted fat (g/kcal), fiber (g/kcal), protein (g/kcal), and carbohydrate (g/kcal), no significant differences between the MetS and control groups were observed (energy-adjusted fat, p = 0.394; energy-adjusted fiber, p = 0.975; energy-adjusted protein, p = 0.673; and energy-adjusted carbohydrate, p = 0.607). Therefore, in our study, the potential confounders included age, body mass index, sex, drinking, smoking, and physical activity intensity. Restricted cubic splines were used to evaluate the shape of the flavonoid-MetS relationship and to assess the dose-response relation. All statistical analyses were performed using SPSS v21.0 (Beijing Stats Data Co., Ltd., Beijing, China) and R 2.15.1 (http://www.r-project.org/). A two-sided p < 0.05 was considered statistically significant. General Characteristics The sociodemographics, body composition, and clinical and metabolic parameters of the groups with and without MetS are shown in Table 1. Among the study population (n = 9108), 2635 (28.9%) individuals were MetS subjects meeting the International Diabetes Federation (IDF) 2005 guidelines. Compared with the control group, the MetS group was older, had a larger BMI and waist circumference, and had worse blood glucose and lipid profiles (p < 0.01). A larger percentage of smokers was found in the MetS group (p < 0.01). Moreover, subjects with the highest flavonoid intake were more likely to engage in light physical activity (p < 0.05). Dietary Nutrient and Flavonoid Intakes The dietary nutrient and flavonoid intakes of the subjects with and without MetS are shown in Table 2. The average daily energy intake in the control group was significantly higher than in the MetS group (p < 0.05), but no differences were observed for protein, carbohydrate, fat, or fiber between the two groups. In addition, the dietary intakes of total flavonoids (mg/kcal) and Cu (μg/kcal) in the control group were significantly higher than those in the MetS group (p < 0.05). Among the flavonoid subclasses, quercetin and luteolin intakes (mg/kcal) were also significantly higher in the control group than in the MetS group (p < 0.05). Relationship between Flavonoid and Cu Intake and MetS Risk The relationships between flavonoid and Cu intake and the risk of MetS are shown in Table 3.
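The ORs in Table 3 follow from the pipeline described under Statistical Analysis: energy standardization, quartile formation, and an adjusted logistic model. The sketch below is an illustration only, not the authors' code (the study used SPSS and R); the file name and column names (mets, flavonoid_mg, energy_kcal, age, sex, bmi, drinking, smoking, activity) are assumed for the example.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hdnncds.csv")  # hypothetical analysis file

# Energy standardization: crude intake per 1000 kcal of total energy
df["flav_e"] = df["flavonoid_mg"] / df["energy_kcal"] * 1000
df["flav_q"] = pd.qcut(df["flav_e"], 4, labels=["Q1", "Q2", "Q3", "Q4"])

# Model 2: quartiles of intake plus age, sex, BMI, and lifestyle covariates
fit = smf.logit(
    "mets ~ C(flav_q, Treatment('Q1')) + age + C(sex) + bmi"
    " + C(drinking) + C(smoking) + C(activity)",
    data=df,
).fit()

# Exponentiated coefficients give the OR and 95% CI of each quartile vs. Q1
or_table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table.filter(like="flav_q", axis=0))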
Total flavonoid and Cu intake were strongly inversely associated with the risk of MetS after adjusting for age and sex (Model 1: fourth vs. first quartile, OR = 0.72, 95% CI = 0.63-0.82, Ptrend < 0.001 for flavonoids; OR = 0.79, 95% CI = 0.69-0.90, Ptrend = 0.001 for Cu). The strength of this relationship decreased but remained significant after additionally adjusting for BMI, physical activity, drinking, and smoking (Model 2: OR = 0.77, 95% CI = 0.66-0.90, Ptrend = 0.002 for flavonoids; OR = 0.81, 95% CI = 0.70-0.94, Ptrend = 0.020 for Cu). Flavonoid and Cu intakes in the model were first standardized by energy (the crude dietary flavonoid or nutrient intake per 1000 kcal of total energy) and then divided into quartile categories based on the cumulative average. Data are presented as odds ratios (OR) and 95% CIs for each quartile of flavonoid and copper intake. Model 1 included adjustment for sex and age, and Model 2 included Model 1 plus adjustment for BMI, drinking, smoking, and physical activity. Joint Analyses of Flavonoid and Cu Intake on MetS Risk In Table 4, flavonoid intake is divided into tertiles according to the average energy-adjusted flavonoid intake. Compared with the lowest flavonoid intake group, the OR (95% CI) of MetS for the highest flavonoid intake group was 0.80 (0.70-0.91, Ptrend = 0.003) after adjusting for potential confounders. Cu intake was divided into two levels according to the average energy-adjusted Cu intake. Compared with the lower Cu group, the OR (95% CI) of MetS for the higher Cu group was 0.85 (0.77-0.95, p = 0.003) after adjusting for potential confounders. Joint analyses showed that an interaction occurred between total flavonoid and Cu intakes on MetS: the inverse association of total flavonoid intake with MetS risk was remarkably stronger under high levels of Cu intake (Pinteraction < 0.001). For this analysis, flavonoid and Cu intakes were first standardized by energy and then divided into tertile and binary categories, respectively, based on the cumulative average; data are presented as ORs and 95% CIs, with Model 1 adjusted for sex and age and Model 2 additionally adjusted for BMI, drinking, smoking, and physical activity. In Figure 2, the OR (95% CI) of MetS under the high-copper, high-flavonoid pattern was 0.76 (0.66-0.89) compared with the low-copper, low-flavonoid pattern (reference OR of 1), representing a difference in relative risk of 24%. Figure 2. Joint associations of flavonoid and copper intake with the odds ratio (OR) of metabolic syndrome (MetS) (Pinteraction < 0.001). Data represent the ORs and 95% confidence intervals of the different levels of flavonoid-copper intake, adjusted for age, body mass index, sex, drinking, smoking, and physical activity. Tertile-specific point estimates are provided for low, medium, and high flavonoid intakes in categories of low (solid line) and high (dashed line) copper intake. Suggestions for Flavonoid Intake for the Prevention of MetS In a dose-response analysis using restricted cubic splines (Figure 3), we used the median total intake of flavonoids (14 mg/kcal) as the reference point, with three knots (at the 5th, 50th, and 95th percentiles), to approximate the relationship between the total intake of flavonoids and the risk of MetS. The relationship between flavonoid intake (continuously measured) and MetS risk was non-linear: a higher intake of flavonoids was associated with a decreased risk of MetS, and the risk declined steadily as the total intake of the five flavonoids increased, until the intake reached 23 mg/kcal, at which point the OR (95% CI) of MetS was 0.92 (0.76-1.00) and the risk of developing MetS was lowest. Beyond this point, the curve flattened and rose slightly, giving an overall L-shape.
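The dose-response step can be sketched in the same hypothetical framework; cr() below is patsy's natural (restricted) cubic spline basis, with the three knots placed, as described above, at the 5th, 50th, and 95th percentiles. Again an illustration only, reusing the assumed file and column names from the previous sketch.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hdnncds.csv")  # same hypothetical file as above
df["flav_e"] = df["flavonoid_mg"] / df["energy_kcal"] * 1000

knots = df["flav_e"].quantile([0.05, 0.50, 0.95]).tolist()  # 3 knots as in the text
spline_fit = smf.logit(
    "mets ~ cr(flav_e, knots=knots) + age + C(sex) + bmi"
    " + C(drinking) + C(smoking) + C(activity)",
    data=df,
).fit()

# Predicted MetS probability across the intake range, holding the other
# covariates at one reference subject's values (illustration only)
ref = df.iloc[[0]]
grid = pd.concat(
    [ref.assign(flav_e=x)
     for x in np.linspace(df["flav_e"].quantile(0.01),
                          df["flav_e"].quantile(0.99), 50)],
    ignore_index=True,
)
grid["p_mets"] = spline_fit.predict(grid)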
Figure 3. The approximated non-linear trend between the total intake of five flavonoids and the risk of MetS using restricted cubic splines. Data are presented as odds ratios (OR) and 95% confidence intervals of flavonoid intake adjusted for age, body mass index, sex, drinking, smoking, and physical activity. Discussion In this study, we explored the association of dietary flavonoid and Cu intakes with the risk of MetS in a large urban cross-sectional study of Chinese adults living in Harbin, North China. We observed that higher intakes of flavonoids and Cu were significantly associated with a lower risk of MetS, with 23% and 19% reductions in the highest versus the lowest (reference) intake categories, respectively. The inverse association of total flavonoids with MetS became much stronger in the context of a high copper intake. In addition, dose-response analysis showed an L-shaped curve between the total intake of flavonoids and the risk of MetS. Dietary intakes of total flavonoids have been reported to be inversely associated with metabolic syndrome among Polish adults [19]. In addition, a study found that higher consumption of total flavonoids was associated with a lower risk of MetS in Iranian adults [20]. In our study, we also found that total flavonoid intake was strongly inversely associated with the risk of MetS, even after adjusting for potential confounders, which is consistent with these previous studies. The mechanisms underlying the beneficial effects of flavonoids against MetS may be their antioxidant and anti-inflammatory properties as well as their direct effects on endothelial function and nitric oxide (NO) bioavailability in the arterial vasculature [21,22]. Cu deficiency is not common, but it has been reported to increase HDL cholesterol levels in rats [23] and blood cholesterol levels in adults [24], and even to lead to arterial diseases, pigmentation loss, myocardial disease, and neurological effects [10]. Various studies have examined the relationship between Cu intake and MetS.
Previous studies reported that Cu intake is related to MetS in states of insufficient or low Cu [25], whereas others reported that the association between Cu and MetS did not remain significant after adjustments [10]. Our results suggest that Cu is strongly inversely associated with the risk of MetS, even after adjusting for potential confounders.
The mechanism for this relationship may be that Cu combines with superoxide dismutase (SOD), inhibiting the oxidation of cells and reducing free radicals, or that Cu acts through a reduction in glucose levels [10,11]. A variety of nutrients and bioactive substances in food may interact with each other during digestion, absorption, and metabolism in the body. An in vitro study reported that copper and flavonoids can form a metal ion complex. These metal-chelating properties of flavonoids may play a role in metal-overload diseases and in oxidative stress conditions involving a transition metal ion, owing to their anti-tumor properties [11,12,26,27]. However, we do not know whether these chelates of Cu and flavonoids can influence the effect of flavonoids on MetS in the human body. In our study, we observed that the flavonoid-MetS relationship was modestly modified by copper intake: the inverse association of flavonoids with MetS appeared stronger in the context of a high copper intake, suggesting that flavonoids, combined with a variety of other nutrients such as copper, contribute to the lower risk we observed. Further, these data point to flavonoid intake as being only one aspect of a healthy diet, and to the benefits of a varied diet in obtaining adequate intake of ubiquitous nutrients such as copper. These points become more important in the context of popular dietary trends that recommend a balanced diet. The mean daily flavonoid intake of the total population was 34.68 mg/day, whereas the average intakes of flavonoids in the US and Spain are 189.7 mg/day [28] and 313.26 mg/day [29], respectively, both higher than our result. The high variability in average flavonoid intake between our study and studies in other countries is partly because different flavonoid subclasses and different food sources were studied. According to our analysis, the L-shaped inverse relationship between flavonoid intake and MetS indicates that moving from a low to a high intake of total flavonoids should reduce the risk of MetS; however, the reduction weakened, and the risk even increased slightly, at the highest intakes. We therefore suggest that eating flavonoid-rich foods within a reasonable range may help reduce the risk of MetS. The strength of our study is that it is the first large, population-based, cross-sectional study to analyse the effects of flavonoids and Cu, and their joint effects, on MetS. We further assessed the dose-response effects of flavonoid intake on MetS, providing more practical suggestions. In addition, the calculation of flavonoid intake was based on measured values of the food items consumed in Harbin, which greatly improved the accuracy of the flavonoid intake estimation. Our study also had limitations. First, we included only 41 food items when calculating flavonoid intake. Our results for the mean dietary intake of flavonoids may therefore be underestimated owing to the omission of food sources such as tea, red wine, and cocoa, which are also major sources of flavonoids. Second, only five flavonoids were included in the analysis, based on the data measured in the earlier stage; the effects of other flavonoid subclasses on MetS were not considered. Conclusions In conclusion, individuals with higher intakes of flavonoids showed a lower MetS risk, with modestly stronger inverse associations observed in the presence of high copper intake.
We suggest that eating flavonoid-rich foods, combined with adequate mineral intake, may help to reduce the risk of metabolic syndrome. Further investigations, including clinical trials and cohort studies, are required to confirm these findings. Author Contributions: L.N. designed the research; Y.J., J.L., and S.J. conducted the research; R.Q. and T.H. analyzed the data; R.Q. and L.N. wrote the paper; L.N. had primary responsibility for the final content. All authors provided input and approved the final manuscript.
The Impact of Probiotics and Prebiotics on Dry Eye Disease Signs and Symptoms
Dry eye is considered an inflammatory disease. Gut microbiota are important in the regulation of low-grade chronic inflammation, including in the eye. Probiotics and prebiotics are increasingly used to regulate chronic-disease-associated gut dysbiosis. Therefore, this double-masked, randomized controlled clinical trial aimed to explore the potential of oral probiotics and prebiotics in the management of dry eye disease. In total, 41 participants with dry eye received probiotic and prebiotic supplements (treatment group, n = 23) or the respective placebos (control group, n = 18) for 4 months. Dry eye symptoms and signs were evaluated using the Ocular Surface Disease Index (OSDI), Dry Eye Questionnaire 5, osmolarity, non-invasive keratograph break-up time (NIKBUT), ocular surface staining, tear meniscus height (TMH), lipid layer thickness, and conjunctival redness. After 4 months, the average OSDI score of the treatment group was significantly better than that of the controls (16.8 ± 5.9 vs. 23.4 ± 7.4; p < 0.001). The NIKBUT and TMH did not change significantly with treatment (p = 0.31 and p = 0.84) but decreased significantly in the controls, on average by 5.5 ± 1.0 s (p = 0.03) and 0.2 ± 0.1 mm (p = 0.02), respectively. These data suggest that probiotics and prebiotics might be effective in the management of dry eye disease. Introduction Dry eye disease is considered one of the most common ocular surface diseases, with a global prevalence of around 11.59%, depending on the chosen diagnostic criteria [1]. In 2017, the second TFOS DEWS report defined dry eye as a "... multifactorial disease of the ocular surface characterized by a loss of homeostasis of the tear film, and accompanied by ocular symptoms, in which tear film instability and hyperosmolarity, ocular surface inflammation and damage, and neurosensory abnormalities play etiological roles" [2]. This definition highlights the multifactorial etiology, and the use of "disease" points to pathological outcomes that decrease patients' quality of life [3]. The loss of homeostasis suggests that various perturbations of the ocular environment might trigger the disease [2,4,5]. Dry eye disease is an inflammatory condition that has many features in common with autoimmune disease [6]. Altered immunity is a significant factor in dry eye. As articulated by Stern and colleagues [7], dry eye disease is increasingly recognized as a localized autoimmune disease driven by dysregulated immunoregulatory and inflammatory pathways of the ocular surface. Disruption of mucosal tolerance is integral to the pathogenesis of dry eye disease [8], initiated when the immune balance of the ocular surface is altered by internal or external factors. Stress to the ocular surface initiates a cascade of acute-response cytokines and the sequestering of auto-responsive T cells, resulting in a chronic autoimmune response [7]. The gastrointestinal tract is inhabited by a vast number of microorganisms. The similar function of ocular surface mucins and glycoproteins to those in the gastrointestinal tract, and the fact that the mucous membranes are connected throughout the body, support the hypothesis that the gut microbiota can affect the health of different parts of the body, including the eye [9]. The gut microbiota, through the production of metabolites, mucosal mediators, and systemic immune responses, play an important role in the regulation of the immune system.
An increasing number of studies have indicated alteration of the gut microbiota in Sjögren's syndrome [10][11][12] and a correlation of gut dysbiosis with dry eye severity [13,14]. Reduced gut microbiota diversity in dry eye patients compared to controls has also been found [15]. Modifying the gut composition by normalizing its microbiota is a treatment for gut dysbiosis and may pave the way for novel therapeutic approaches to treat and manage various diseases in different parts of the human body, including the eye [16]. There are three common methods for altering the gut microbiota. One is fecal microbiota transplantation, another is the application of probiotics (potentially beneficial microorganisms), and the third is the application of prebiotics (for boosting specific populations of microorganisms). The last can be used together with probiotics, a combination referred to as a "synbiotic". Several small studies have provided evidence for the efficacy of this approach in the short term. Chisari et al., in 2016, found that a mixture of E. faecium LMG S-28935 and Saccharomyces boulardii MUCL 53837 decreased subjective symptoms, with an increase in both tear secretion and tear break-up time [17]. Another pilot study by Chisari et al. reported that a 30-day supplementation of B. lactis and B. bifidum significantly increased tear secretion and tear break-up time compared to placebo, in addition to altering the ocular microbiota, in 20 dry eye patients [18]. Similarly, Kawashima et al. noted that the consumption of E. faecium WB2000 mixed with fish oil for 8 weeks improved subjective symptoms, with increased tear secretion, in dry eye patients [19]. Although these reports are promising, this is a relatively novel field of research, and the number and duration of studies have so far been limited. More comprehensive investigations are needed to help inform clinicians about the practical application of such treatments. Therefore, the current hypothesis is that the administration of factors that can regulate the function of the gut microbiota, such as probiotics, prebiotics, or synbiotic combinations, can improve the outcomes (symptoms and/or signs) of dry eye disease. Consequently, this double-masked, randomized, controlled longitudinal trial aimed to explore the potential of oral probiotics and prebiotics in reducing the severity of signs and symptoms of dry eye disease through systemic and localized (ocular) immune function modulation. Materials and Methods Participants were included if the DEQ-5 score was ≥6 or the OSDI score was ≥13, plus at least one of the following was present: non-invasive tear break-up time <10 s or ocular surface staining (>5 corneal spots or >9 conjunctival spots, or lid margin staining ≥2 mm in length and ≥25% in width) [20]. Participants with known dry eye disease were recruited from the databases of the Brien Holden Vision Institute and the UNSW Optometry Clinic and from staff and students at the UNSW School of Optometry and Vision Science. Participants included in the study were otherwise healthy, with no ocular or systemic inflammatory or autoimmune disorders. All participants were aged 18 years and above. Participants were excluded if they were taking commercial probiotic/prebiotic supplements. Participants were advised not to change their diet for the duration of the study.
Exclusion criteria also included any systemic or topical medications that affect ocular physiology or the tear film, e.g., anti-acne medications (such as Roaccutane) and corticosteroids or immunosuppressant medications (such as hydrocortisone and prednisolone). Participants were not enrolled in the study if they had undergone ocular surgery within 12 weeks or corneal refractive surgery within 3 years prior to enrolment for this trial. Furthermore, participants were excluded if they were taking oral or topical antibiotics. Contact lens wearers were asked not to wear their lenses on the day of study visits. Participants with ocular injury, active corneal infection, or any active ocular disease were excluded from the study. Pregnant or lactating women were also excluded. This was a double-masked, randomized, controlled clinical trial in which a total of 41 subjects with mild to severe dry eye were enrolled and randomized through a web-based system into two groups, treatment and control. The treatment group received probiotic supplements (in the form of capsules) and prebiotic supplements (in the form of sachets). The control group received a probiotic placebo (in the form of capsules) and a prebiotic placebo (in the form of sachets). The treatment duration was 4 months, and the participants were followed up at 1 month and 4 months after commencing treatment and again 1 month after treatment cessation. MULTIBIOTIC™ probiotics (Medlab Pty Ltd., Botany, NSW, Australia) and maltodextrin (placebo) in the form of hard capsules were used. Sachets of NutriKane D (MediKane Pty Ltd., Macquarie Park, NSW, Australia) and maltodextrin were used as the prebiotic and the prebiotic placebo, respectively. The MULTIBIOTIC™ probiotic contains 21.075 billion CFU of bacteria per capsule, including Streptococcus, Lactobacillus, and Bifidobacterium species. NutriKane D contains Phytocell (Kfibre) and red sorghum flour. Previous studies have investigated the efficacy of these supplements in altering the gut microbiota and reducing systemic inflammation in the body [21,22]. All measurements were performed by one investigator (AT), who was masked to the allocation of interventions to participants. Dry eye symptoms: Ocular symptoms were assessed by administering the Ocular Surface Disease Index (OSDI) and Dry Eye Questionnaire 5 (DEQ-5) [20]. Both questionnaires were included in this study to provide a better understanding of ocular symptoms. Ocular surface health and staining: Slit-lamp biomicroscopy was used to check ocular health and integrity. Corneal staining was evaluated using sodium fluorescein (OptiStrips-FL). Conjunctival and lid margin staining was assessed using lissamine green (Green Glo). Corneal staining was evaluated under cobalt blue light, while conjunctival and lid staining was assessed under white light. Corneal and conjunctival staining was graded according to the Sjögren's International Collaborative Clinical Alliance (SICCA) ocular staining score [23]. The sum of the staining in both eyes was analyzed. Eyelid staining was scored according to the modified grading scale of Korb et al. [24]. Tear film osmolarity: Tear film osmolarity was assessed using the I-PEN osmolarity system (I-MED Pharma Inc., Dollard-des-Ormeaux, QC, Canada; https://imedpharma.com/, accessed on 20 June 2022).
This is a portable, battery-operated unit that consists of a handheld device with a display screen to show the osmolarity test results and a single-use disposable card that comes into contact with the tear film. Osmolarity measurement for each eye was conducted as per the I-PEN manufacturer's instructions [25]. The repeatability of this device has been previously tested and reported [26]. Non-invasive keratograph break-up time (NIKBUT): Tear film stability was assessed automatically using the Oculus Keratograph (Oculus, Wetzlar, Germany). The participants were instructed to blink naturally two times and then to cease blinking until instructed to blink again. Three measurements were performed for each eye, and the average for each eye was included in the analysis. Tear lipid layer assessment: The thickness of the lipid layer of the tear film was assessed with the LipiView interferometer (TearScience, Morrisville, NC, USA). The order of the measurements was from least invasive to most invasive, as follows: tear lipid layer assessment, TMH, NIKBUT, ocular redness assessment, osmolarity, and ocular staining. There was a 5 to 10 min gap between each measurement. The measurements were conducted in the same examination rooms at a stable temperature (20 ± 3 °C) for all participants. The data were first entered into Microsoft Excel (Microsoft Corp., Redmond, WA, USA) and then exported to IBM SPSS Statistics version 26.0 for statistical analysis (IBM Corp., Armonk, NY, USA). A generalized linear model was used to investigate changes over the study visit timelines. Differences between time points were checked with non-parametric and paired t-tests where the model indicated significance. Confidence intervals were set at 95%, and a p-value below 0.05 was used as an indicator of statistical significance. Results In total, 41 participants were recruited in this study, of whom 32 completed the treatment period. Participants were aged 18 years and above (range, 18-76 years), with a mean age of 41 ± 16 years. Among them, 30 were female and 11 were male. Interventions were allocated to the participants at random, resulting in 23 participants receiving the treatment supplements and 18 receiving the placebo. The average ages in the treatment and control groups were 41 ± 16 years and 41 ± 17 years, respectively. There were 14 females and 9 males in the treatment group and 16 females and 2 males in the control group. Dry Eye Symptoms There were no significant differences in comfort scores between the control and treatment groups at baseline (p > 0.05). Figure 1 shows the changes in the Ocular Surface Disease Index (OSDI) score over time in the treatment and control groups. At the first-month visit, the OSDI score improved in both the treatment (p = 0.03) and control (p = 0.02) groups. After 4 months of treatment from the baseline visit, the average OSDI score in the treatment group was significantly better than that in the control group (16.8 ± 5.9 vs. 23.4 ± 7.4, respectively; p < 0.001). At the follow-up visit, which occurred 1 month after treatment cessation, the average OSDI score in the control group was significantly worse than that in the treatment group (28.9 ± 12.7 vs. 18.4 ± 12.7, respectively; p = 0.03).
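As a concrete reading of the repeated-measures analysis described in the Methods (the study used SPSS; the Python sketch below is an illustration only), generalized estimating equations stand in here for the repeated-measures generalized linear model, followed by a paired follow-up test. The long-format file and the column and visit labels (subject, group, visit, osdi, "baseline", "month4") are all assumed.

import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

long = pd.read_csv("osdi_long.csv")  # hypothetical: one row per subject per visit

# GEE handles repeated measures within subjects; visit-by-group terms
# capture differential change over the study timeline
gee_fit = smf.gee("osdi ~ C(visit) * C(group)", groups="subject", data=long).fit()
print(gee_fit.summary())

# Where the model indicates an effect, compare two visits within one group
wide = (long[long["group"] == "treatment"]
        .pivot(index="subject", columns="visit", values="osdi")
        .dropna(subset=["baseline", "month4"]))
t_stat, p_val = stats.ttest_rel(wide["baseline"], wide["month4"])
print(f"paired t-test, baseline vs. month 4: t = {t_stat:.2f}, p = {p_val:.3f}")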
After 1 month, the DEQ-5 score improved significantly in the control group (p = 0.03) but did not change in the treatment group (p = 0.08). After the treatment period, the DEQ-5 score did not change significantly in either the treatment group (8.8 ± 4.1; p = 0.06) or the control group (9.6 ± 3.2; p = 0.40). Changes in OSDI and DEQ-5 scores from baseline were not influenced by sex (p > 0.05), nor did they correlate with age at any time point in either the treatment group or the control group (p > 0.33). Table 1 shows the averages and p-values for each clinical parameter over the study visit timelines. There was no significant difference in clinical measures between the control and treatment groups at baseline (p > 0.05). Lipid layer thickness did not change significantly in the treatment group (p = 0.18) but decreased by an average of 8.5 ± 5.7 nm (p = 0.03) in the controls after 1 month of treatment.
There were no significant changes in the other clinical parameters, including TMH, NIKBUT, tear osmolarity, ocular staining, conjunctival bulbar redness, and meibomian gland secretion (p > 0.05), at the first-month visit in either the treatment group or the control group. After the 4-month treatment period, NIKBUT and TMH did not change significantly in the treatment group (p = 0.31 and p = 0.84) but decreased significantly in the controls, by an average of 5.5 ± 1.0 s (p = 0.03) and 0.2 ± 0.1 mm (p = 0.02), respectively. Figures 2 and 3 show how NIKBUT and TMH changed over time in the treatment and control groups. There were no significant changes in either the treatment group or the control group for the other clinical parameters, including lipid layer thickness, tear film osmolarity, conjunctival redness, and ocular staining, at each visit (p > 0.05). At the first-month visit, one participant in the treatment group was found to no longer satisfy the criteria for dry eye, and at 4 months, a further four participants in the treatment group were no longer dry eye positive. At the follow-up visit, this number was reduced to one participant in the treatment group. No one in the control group converted out of the dry eye diagnosis at any of the study time points. (Table 1 abbreviations: OSDI: Ocular Surface Disease Index; DEQ-5: Dry Eye Questionnaire 5; LLT: lipid layer thickness; TMH: tear meniscus height; NIKBUT: non-invasive keratograph break-up time; TO: tear osmolarity; CCS: corneal and conjunctival staining; LS: lid staining; BR: bulbar redness; MGS: meibomian gland secretion. In the table, values are given as mean ± standard deviation followed by the p-value relative to baseline; bold numbers indicate statistically significant p-values.) Discussion The results of this study indicate that regular consumption of probiotics and prebiotics can reduce dry eye symptoms, as assessed by the OSDI. Furthermore, taking these supplements may improve tear secretion and stability, thereby stabilizing some clinical signs, including tear break-up time and tear meniscus height, over time. Modulating the gut has been shown to reduce systemic inflammation. Given the connection between the ocular and gut mucosa, we hypothesize that ocular surface inflammation is also improved, thereby reducing the signs and symptoms of dry eye. Modulating the gut microbiome has also been shown to modulate the proteins expressed by the lacrimal glands, with IL-10 increasing and IL-1β and IL-6 decreasing [30]. Therefore, the stability of some clinical features in this study, including TMH and NIKBUT, might be due to the effect of probiotics in changing the expression of inflammatory markers associated with immunomodulation in the lacrimal glands. During the present study, tear film stability in the control group decreased after 4 months, but no change was seen in the treatment group over the same period.
Dry eye disease is a multifactorial disease, and environmental factors are significant contributors. As such, the observed changes in the control group may have been due to the conditions prevailing at the time. For example, the data collection for this study was conducted mostly during the COVID-19 pandemic, when the widespread use of face masks was required to prevent the spread of disease. A marked increase in dry eye symptoms among regular mask users has been reported [31][32][33], which can manifest as increased ocular irritation and reduced tear break-up time [34]. Moreover, fewer social interactions as a result of the pandemic contributed to increased computer time, which could contribute to evaporative-type dry eye disease [35]. These factors may have precipitated the increased dry-eye-type responses among the controls; however, the data suggest that probiotic and prebiotic treatment can mitigate the effect of environmental factors on dry eye. In this study, dry eye symptoms improved in both groups after 1 month of taking the interventions. This could be because of the placebo effect [36]. Nevertheless, this effect waned over the course of the study, and the treatment group showed greater improvement in symptoms after the full treatment period. In contrast to symptoms, the clinical features did not change after 1 month of taking the intervention in either group. This may indicate that a longer time is required for these supplements to change the gut composition and to be reflected in the clinical signs of dry eye disease. This study did not find an improvement in corneal, conjunctival, and lid wiper staining scores, potentially because participants did not have severe ocular staining from the outset. Moreover, people with autoimmune diseases such as Sjögren's syndrome were excluded from this study.
It is possible that more substantial changes in dry eye signs and symptoms after treatment with probiotics and prebiotics could be observed in patients with Sjögren's syndrome, as their gut microbiota differ more markedly from those of people with environmental dry eye or healthy cohorts, with a concomitantly higher level of gut dysbiosis [13,37]. Choi et al. reported reduced levels of ocular surface inflammation using probiotics in a mouse model of autoimmune dry eye; they found a lower ocular staining score and higher tear secretion in mice treated with probiotics [30]. However, the safety of probiotic use in human autoimmune disease is a matter of discussion, because Lactobacillus spp. have been reported to act as possible pathogens [38]. Considering the promising evidence of the beneficial impact of probiotics on dry eye symptoms and clinical signs, future clinical studies are necessary to further investigate the benefits of probiotics for patients with Sjögren's syndrome. The gut microbiota vary according to non-modifiable factors, such as ethnicity and gender, and are also modifiable by diet [16]. In this study, participants were asked not to change their diet during their enrolment, but as this is difficult to control, there may have been some residual impact on the outcomes. New evidence is emerging on the role of the gut microbiota in inflammatory ocular disease [39,40], yet investigations into the effect of probiotics and/or prebiotics on dry eye disease are still at an early stage. Thus, there is currently no guidance regarding the proper dosage, duration, and formulation of these supplements. It is possible, therefore, that stronger effects than those observed here may be achievable with alternative dosing regimens. Conclusions This study showed that probiotics and prebiotics might be effective in the management of dry eye disease, suggesting a potential alternative therapeutic approach for dry eye disease management. Future investigations are necessary to establish customized probiotic and/or prebiotic interventions with optimized modulation of the gut microbiota to treat dry eye disease. Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the University of New South Wales (HC180853, 7 January 2019). Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to ethical reasons.
Radical esophagectomy for a 92-year-old woman with granulocyte colony-stimulating factor-producing esophageal squamous cell carcinoma: a case report

Background: Granulocyte colony-stimulating factor (G-CSF)-producing esophageal squamous cell carcinoma (ESCC) has been considered to have a poor prognosis. We successfully treated a case of G-CSF-producing ESCC in a 92-year-old woman. Case presentation: A 92-year-old woman was admitted to our hospital with complaints of choking while swallowing and dysphagia. Esophagogastroduodenoscopy and contrast-enhanced computed tomography revealed a type 2 esophageal cancer located 26-35 cm from the dental arch, with no distant metastasis. The patient was diagnosed with G-CSF-producing ESCC based on remarkable leukocytosis and high G-CSF levels. The patient underwent radical subtotal esophagectomy. Subsequently, the leukocyte count (from 23,500/μL to 5,000/μL) and the G-CSF level (from 131 to <19.5 pg/mL) decreased significantly. Immunohistochemistry of the resected tissue specimen showed positive staining for G-CSF in the cytoplasm of the tumor cells. Although the patient developed aspiration pneumonitis, she recovered promptly after antibiotic treatment and was discharged. Conclusions: Herein, we describe a case of successfully treated G-CSF-producing ESCC in a 92-year-old woman. Precise detection and a safely performed, immediate radical operation are considered essential to achieving a good clinical course.

Background

In addition to mass effects of the tumor, granulocyte colony-stimulating factor (G-CSF)-producing tumors display additional signs and symptoms of inflammation caused by the G-CSF-producing malignant cells [1]. There have been a relatively high number of reports on G-CSF-producing lung carcinoma; however, reports on G-CSF-producing esophageal squamous cell carcinoma (ESCC) have been scarce. With the aging of the population, the number of oldest-old patients with cancer and comorbidities has been increasing [2]. Therefore, effort should be made to determine the effectiveness of each treatment plan. We report a very rare case of a 92-year-old woman who was promptly diagnosed with G-CSF-producing ESCC and successfully underwent surgical treatment.

Case presentation

A 92-year-old woman presented with a chief complaint of choking when swallowing and dysphagia. The patient had been healthy and had no particular medical history besides cataract surgery. She had no history of oral medications, smoking, or alcohol. She had previously consulted another physician with the complaint of choking when swallowing. A narrowing of the lumen of the intrathoracic esophagus was detected by esophagogastroduodenoscopy, and the patient was referred to our hospital for detailed examination. On admission, abnormal signs such as fever, anemia, or jaundice were not detected, and the performance status was good (score 0 according to the Eastern Cooperative Oncology Group). Laboratory data on admission showed remarkable leukocytosis (leukocytes 23,500/μL, neutrophils 86.1%, and no blast cells), a slightly decreased serum albumin level (3.5 g/dL), and an elevated C-reactive protein (CRP) level (1.5 mg/dL). The levels of the tumor markers squamous cell carcinoma antigen (SCC-A) and p53 antibody were high (SCC-A, 3.4 ng/mL; p53, 22.2 U/mL). The respiratory function and electrocardiogram were within normal ranges; however, the renal function was a slight concern.
Esophagogastroduodenoscopy revealed a type 2, circumferential cancer of the esophagus, located approximately 26-35 cm from the dental arch (Fig. 1), and the biopsy showed SCC. Contrast-enhanced computed tomography of the chest and abdomen demonstrated circumferential thickening of the wall and narrowing of the lumen of the middle and lower intrathoracic esophagus, and small lymph nodes were detected between the lower mediastinum and the paracardiac area. No pleural effusion, ascites, or distant metastases were detected (Fig. 2). Based on these findings, the patient was diagnosed with T3N0M0, stage IIA (according to the Union for International Cancer Control TNM classification of malignant tumors, 7th edition) ESCC. Furthermore, the laboratory data suggested a G-CSF-producing carcinoma, with a serum G-CSF level of 131 pg/mL.

Despite her age, the patient had no comorbidities, and most importantly, she consented to a surgical operation. Therefore, we planned to perform esophagectomy. In Japan, the standard treatment for stage IIA esophageal carcinoma is subtotal esophagectomy with three-field lymph node dissection following preoperative chemotherapy [3]. However, considering the patient's advanced age, multimodal management with chemotherapy or radiotherapy was not performed. Subtotal esophagectomy under right thoracolaparotomy, right lower partial lobectomy, two-field lymph node dissection (instead of three-field), posterior mediastinal route gastric tube reconstruction, and intra-pleural anastomosis were successfully performed. The operation lasted 4 h and 15 min, and the blood loss was 50 mL. The tumor and the lower lobe of the right lung were adherent; because the tumor was considered infiltrative, they were resected en bloc. Histopathological examination of the resected specimen revealed that the primary lesion, measuring 92 × 54 mm, was a moderately differentiated squamous cell carcinoma with two lymph node metastases, and it was diagnosed as a stage III tumor (according to the Union for International Cancer Control TNM classification) (Fig. 3a, b). Immunohistochemistry of the resected tissue specimen stained positive for G-CSF in the cytoplasm of the tumor cells (Fig. 3c).

After the operation, the patient developed aspiration pneumonitis; however, she promptly recovered with the administration of antibiotics. Three weeks after the operation, the leukocyte count had decreased to 5,000/μL and the G-CSF level to <19.5 pg/mL. Thereafter, the patient exhibited a good clinical course, and she was discharged on the 29th postoperative day. The patient had had neutrophilia without any signs of infection or myeloblast proliferation before the operation. After esophagectomy, the leukocyte count and the G-CSF level decreased significantly, and the presence of G-CSF was confirmed pathologically. Therefore, the patient was definitively confirmed to have a G-CSF-producing tumor. There have been no complaints or recurrence, and the patient has remained disease-free for the 18 months from the operation to the present day.

Discussion

Robinson first described a G-CSF-producing tumor in 1974 [1], and the number of such cases has been increasing in recent years. The primary sites of G-CSF-producing tumors have been reported to be the lung, urinary tract, and stomach [4-6]; however, reports of G-CSF-producing esophageal carcinoma have been scarce. G-CSF is a hematopoietic factor produced by the endothelium, macrophages/monocytes, and fibroblasts.
It stimulates the bone marrow to produce granulocytes from stem cells and to release neutrophils into the bloodstream [7]. It is also produced by malignant cancer cells, and aberrant overproduction causes an inflammatory response (such as fever and elevated CRP), a leukemoid reaction (leukocytosis of >50,000 leukocytes/μL), and paraneoplastic syndromes in clinical oncology. A recombinant form of G-CSF is currently used to prevent infections after chemotherapy or radiotherapy, which cause myelosuppression and neutropenia.

The diagnostic criteria for G-CSF-producing tumors include (1) a marked increase in the leukocyte count, (2) elevated G-CSF activity, (3) a decrease in the leukocyte count following tumor resection, and (4) verification of G-CSF production in the tumor [1]; these criteria are restated as a simple check at the end of this section. Because all four criteria were fulfilled, we diagnosed the patient with G-CSF-producing ESCC.

Esophageal carcinoma is a disease with a poor prognosis [8]. Furthermore, the prognosis of G-CSF-producing ESCC is considered even poorer (Table 1) [9-18]. All 12 reported cases (including our case) were found at a rather advanced stage, and 9 of them had a poor prognosis. Possible reasons include (1) the capacity of G-CSF itself to promote tumor growth in an autocrine manner, (2) acute renal failure or hyperuricemia (so-called tumor lysis syndrome) caused by cytolysis of the increased neutrophils after chemotherapy, and (3) thrombosis caused by G-CSF-induced platelet aggregation [19]. The three surviving patients had undergone tumor resection. Furthermore, within the poor-prognosis group, the survival period of resected cases was estimated to be longer than that of non-resected cases. Accordingly, in cases of G-CSF-producing ESCC, complete tumor resection is considered important whenever possible. Because the prognosis of this disease is so poor, multimodal therapy combining surgery with radiotherapy and/or chemotherapy is considered preferable where feasible.

According to Table 1, G-CSF-producing ESCC was male-dominated (83.3%), and the average age of the 12 patients was 67 years. These findings are considered to overlap with the general ESCC population. No clear association was found between leukocyte count, serum G-CSF level, tumor location, tumor stage, histologic grade, and prognosis. In addition, in one third of these 12 cases, a synchronous tumor of another organ was observed, suggesting that the above-mentioned characteristics of G-CSF might influence tumor growth [19].

In addition, with the aging of the population, the chance of encountering oldest-old patients is increasing [8]. Appropriate evaluation of the overall condition and selection of the operative method are critical. Operative reports on the elderly are few, and in those cases, cytoreductive (limited) operations were often chosen [20-22] because of the increased rate of postoperative complications. In the present case, the oldest-old patient has remained in good condition after the operation. To improve the quality of life of oldest-old patients, the practical approach to esophageal carcinoma should be the individualization of therapeutic protocols, tailoring the extent of resection and the inclusion or exclusion of preoperative and postoperative procedures. A curative resection with relatively minimal invasiveness appears to be mandatory for a better prognosis with minimal morbidity and mortality in elderly patients.
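As anticipated above, the four diagnostic criteria can be restated as a compact check against this case's values. This is only an illustrative sketch: the function, its name, and the leukocytosis cutoff are assumptions, not a published decision rule.

# Sketch of the four diagnostic criteria for G-CSF-producing tumors [1],
# applied to the present case; the 10,000/uL cutoff is an assumed value.

def is_gcsf_producing_tumor(leukocytes_pre, leukocytes_post,
                            gcsf_elevated, gcsf_in_tumor):
    marked_leukocytosis = leukocytes_pre > 10_000        # criterion (1)
    post_op_decrease = leukocytes_post < leukocytes_pre  # criterion (3)
    return (marked_leukocytosis and gcsf_elevated        # criterion (2)
            and post_op_decrease and gcsf_in_tumor)      # criterion (4)

# Present case: 23,500/uL -> 5,000/uL after resection; serum G-CSF
# 131 -> <19.5 pg/mL; immunohistochemistry positive in tumor cytoplasm.
print(is_gcsf_producing_tumor(23_500, 5_000, True, True))   # True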
Conclusions

We described a case of successfully treated G-CSF-producing esophageal squamous cell carcinoma in a 92-year-old woman. We assessed the patient's wishes and overall condition, chose radical subtotal esophagectomy as the most appropriate operative method, and achieved a good clinical course.
Long-term results of surgical angioplasty for left main coronary artery stenosis: 18-year follow-up

Background: The aim of this study was to determine the long-term outcomes of surgical angioplasty for left main coronary artery stenosis (SA-LMCA). Methods: We retrospectively analyzed data from 24 consecutive patients (mean age, 55 years; male/female, 12/12) who underwent surgical angioplasty for left main coronary artery (LMCA) stenosis at our institution between 1995 and 2002. We used autologous pericardium in 7 patients and bovine pericardium in 17 patients as the patch. We evaluated the late mortality and the rate of major adverse cardiac events (MACE). Results: There was no operative mortality. Control coronary angiography exhibited a wide open and funnel-shaped LMCA in all patients. One patient was lost to follow-up. During the mean follow-up of 167 months, there were 3 sudden cardiac deaths, 4 non-cardiac deaths, and 9 MACE, with one death at reoperation. The Kaplan-Meier method identified freedom from cardiac death in 95.7, 87.0, and 82.4% of the patients, and freedom from MACE in 91.3, 69.6, and 57.7% of the patients at 5, 10, and 15 years, respectively. Conclusions: This study demonstrated that the long-term outcomes of SA-LMCA with a pericardial patch are acceptable compared with those of coronary artery bypass grafting, despite the controversy over the indications and the patch material used.

Background

Surgical angioplasty of the left main coronary artery (SA-LMCA) was introduced in 1965 by Effler and Sabiston and revived by Hitchcock et al. 20 years later [1-3]. Dion, Sullivan, Villemot, and coworkers refined the surgical approach and technique and contributed to its widespread use [4-6]. Despite its inherent advantage of restoring natural antegrade coronary blood flow, there has been much controversy concerning the indications for this procedure and the patch materials utilized during it. Additionally, the long-term results have rarely been reported, in contrast to the well-documented coronary artery bypass grafting (CABG) and percutaneous coronary intervention (PCI). The aim of this retrospective study was to evaluate the long-term outcomes of SA-LMCA with a pericardial patch over 18 years.

Methods

This study was approved by the investigational review board of Hallym University Kangdong Sacred Heart Hospital, and an informed consent waiver was obtained. Between January 1995 and December 2002, SA-LMCA was performed in 24 consecutive patients with left main coronary artery (LMCA) stenosis (mean age, 54.7 ± 11.7 years; 12 males and 12 females) at our institution. All operations were performed by 2 surgeons (WYL and EJK) during their learning phase for coronary artery surgery. LMCA stenosis involved the ostium or the proximal third in 15 patients (62.5%), the middle or distal third in 4 patients (16.7%), and the entire length in 5 patients (20.8%). Four patients received an intraaortic balloon pump (IABP) preoperatively and needed an emergency operation due to unstable hemodynamic conditions, including one case of cardiac arrest. The main etiologic factor was atherosclerosis, although 5 patients were considered to have fibromuscular dysplasia. The preoperative characteristics of the study population are summarized in Table 1.

All operations were performed via median sternotomy under mild hypothermic cardiopulmonary bypass with blood cardioplegia. Antegrade cardioplegic delivery was followed by retrograde infusion.
The left ventricle was vented through the right superior pulmonary vein. The LMCA was approached anteriorly in all cases. The main pulmonary artery was divided in 12 patients (50.0%) and retracted to the left in the other 12 patients. The incision began on the anterior wall of the aortic root, proceeded towards the LMCA, and crossed the stenosis. Onlay pericardial patches were continuously sewn from the distal LMCA to the aortic incision to obtain a funnel shape. A continuous 6-0 polypropylene suture was used for the LMCA segment, and a continuous 5-0 polypropylene suture was used for the aortic wall. The patch materials included fresh autologous pericardium in 7 patients and bovine pericardium in 17 patients, at the surgeon's discretion. Isolated SA-LMCA was performed in 16 (66.7%) patients. In the other 8 patients, the procedure was combined with CABG in 5, mitral valve replacement in 1, repair of a partial atrioventricular septal defect in 1, and aortic valve replacement with surgical angioplasty of the right coronary ostium in 1. Neither endarterectomy nor biopsy was performed in this series. The mean aortic cross-clamp time was 102 ± 33 minutes, and the mean cardiopulmonary bypass time was 184 ± 61 minutes. None of the patients was given specific anticoagulation therapy, except for 200 mg of aspirin each day. The definition of major adverse cardiac events (MACE) in this study included cardiac death, unexplained sudden death without an unequivocal noncardiac cause, myocardial infarction (MI), and repeat revascularization.

Statistical analysis

We analyzed the overall survival, freedom from cardiac death, and freedom from MACE using the Kaplan-Meier method and a statistical software program (SPSS, version 13.0, Chicago, IL, USA).

Results

There was no conversion to CABG due to technical failure of SA-LMCA, and there was no operative mortality. The following major postoperative complications were observed: 2 cerebrovascular accidents (CVA), 1 case of mediastinitis, 1 case of bleeding requiring reoperation, and 1 perioperative MI with low cardiac output syndrome. It was difficult to wean the patient with perioperative MI from cardiopulmonary bypass. After undergoing a backup CABG to the left anterior descending (LAD) coronary artery with placement of an IABP, he recovered from his perioperative MI and demonstrated good patency of the LMCA and of the graft to the LAD on postoperative coronary angiography (CAG). All patients underwent a control CAG, which exhibited a wide open and funnel-shaped LMCA. The patient characteristics, as well as the operative and follow-up data, are listed in Table 2.

One patient was lost to follow-up (complete follow-up, 95.8%). The mean follow-up duration was 167 ± 51 months (range, 41 to 227 months). The patients in this study were followed up clinically and with echocardiography at our department during the early period, and at the department of cardiology in our institution or a referring physician's office in the later period. Upon recurrence of chest pain, a repeat CAG was performed, except in the patients who refused. During follow-up, there were 4 non-cardiac deaths, from a traffic accident (TA), CVA, sepsis, and lung cancer at 128, 147, 194, and 195 postoperative months, respectively. Three sudden cardiac deaths occurred at 41, 60, and 96 months.
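As a brief aside on the Kaplan-Meier method cited in the statistical analysis, the following is a minimal sketch of the product-limit computation that underlies freedom-from-event estimates of this kind. The three event times echo the sudden cardiac deaths reported above, but the toy cohort, the censoring times, and the kaplan_meier helper are illustrative assumptions, not the study's patient-level data or its SPSS analysis.

def kaplan_meier(times, events):
    """Product-limit estimate for right-censored data.

    times  -- follow-up duration of each patient (months)
    events -- 1 if the event (e.g., cardiac death) occurred, 0 if censored
    Assumes distinct event times (no tie handling).
    """
    at_risk = len(times)
    survival = 1.0
    curve = []
    for t, e in sorted(zip(times, events)):
        if e:                                    # event: step the curve down
            survival *= (at_risk - 1) / at_risk
            curve.append((t, survival))
        at_risk -= 1                             # leaves the risk set either way
    return curve

# Toy cohort of 10 patients: three cardiac deaths at 41, 60 and 96 months,
# the rest censored at their (invented) last follow-up.
times  = [41, 60, 96, 120, 150, 167, 180, 200, 210, 227]
events = [1,  1,  1,  0,   0,   0,   0,   0,   0,   0]
print(kaplan_meier(times, events))
# [(41, 0.9), (60, 0.8), (96, 0.7)] up to floating-point rounding

Freedom from cardiac death at a given time is simply the last survival value at or before that time; the percentages quoted in this study are read off such a curve for the 23 patients with complete follow-up.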
A repeat CAG was performed in 12 (52.2%) patients at a mean interval of 93 postoperative months (range, 5-203 months) and revealed 4 cases of LMCA restenosis, 1 spasm of the LMCA, 1 distal right coronary artery (RCA) stenosis, and 6 wide open LMCAs. Two of the 4 patients with LMCA restenosis benefited from successful PCI at 79 and 119 months. The two other patients underwent repeat CABG at 111 and 132 months, with the second patient dying after the reoperation. The patient with RCA stenosis underwent a successful PCI at 179 months. The 11 remaining patients were reluctant to undergo a repeat CAG.

Discussion

Following its introduction by Effler and Sabiston in 1965, SA-LMCA was abandoned because of high mortality and surgical failure. The excellent report by Hitchcock and colleagues revived this procedure. Dion, Sullivan, Villemot, and coworkers published innovative reports concerning the surgical angioplasty approach and technique. We used the anterior approach exclusively and divided the main pulmonary artery for better visualization in half of our cases. This technique always provided an excellent view extending from the LMCA ostium to the bifurcation.

The indications for SA-LMCA are controversial, particularly regarding the extent and location of the LMCA stenosis and the existence of calcification. Botman and colleagues excluded patients with visible calcification and disease extending to the LMCA bifurcation [7]. Involvement of the distal LMCA or bifurcation may make it more difficult to reconstruct the LMCA and cause disastrous consequences. Age is another issue related to the indications and surgical risks of this procedure. In this study, 5 of the 6 patients over 65 years of age died (2 cardiac and 3 non-cardiac deaths). However, it has been concluded that SA-LMCA should be carefully attempted in patients with LMCA calcification or in patients over 60 years of age [8]. Most surgeons, including the authors of this study, agree with the opinions expressed by Dion regarding these indications.

The onlay patch material is the most important issue regarding the prevention of acute thrombosis and late restenosis of the LMCA. The saphenous vein and autologous pericardium have been commonly used for SA-LMCA [9]. The saphenous vein is well matched in size and preserves the fibrinolytic properties of the endothelium; however, its elasticity may cause a tendency to dilate. Dion and colleagues suggested that the saphenous vein might be preferable to autologous pericardium due to its potential fibrinolytic activity [4]. Martinovic and colleagues used a saphenous vein as the patch material in 27 patients and reported one aneurysmal dilatation [10]. In the present study, we used bovine pericardium in the majority of our patients (17/24) because of its easy handling. We believe it would be more difficult to tailor and sew a saphenous vein or internal thoracic artery patch to the LMCA than a pericardial patch because of their thinness and weakness. Currently, both the saphenous vein and bovine pericardium are widely used in carotid endarterectomy (CEA) with patch angioplasty. Several studies have documented the safety, efficacy, and durability of bovine pericardium as a CEA patch. In a previous study that analyzed 456 CEA cases over a 10-year period, both the carotid clamping time and the total operation time were shorter in the bovine pericardial patch group than in the saphenous vein patch group because of easier handling and suturing.
The study also revealed a similar incidence of restenosis between the two patch materials (2.8% for bovine pericardium versus 3.4% for saphenous vein) and identified 4 patients who developed late aneurysmal dilatation of saphenous vein patches, compared with no cases involving bovine pericardial patches [11]. Another study also reported a low incidence of restenosis (1.6%, 4/256) over 12 years in patients who underwent CEA with bovine pericardial patch angioplasty [12]. Bovine pericardium also provides the benefit of off-the-shelf availability and has a reliable consistency and strength that allow a tight-fitting closure, which yields less suture line bleeding and prevents aneurysmal dilatation [13]. However, bovine pericardium also has the disadvantages of calcification, degeneration, and restenosis.

In this series, we observed no acute or early thrombosis after SA-LMCA with a pericardial patch, despite the absence of specific anticoagulation therapy other than aspirin. However, late failure of a pericardial patch caused 8 target-lesion-related MACE (5 in the bovine pericardial patch group and 3 in the autologous pericardial patch group). Both pericardial patches exhibited similar MACE rates (bovine pericardium, 5/16 [31.3%]; autologous pericardium, 3/7 [42.9%]), although these numbers were too small to evaluate for statistically significant differences. Nevertheless, 4 patients in the bovine pericardial group eventually died at 41, 60, 96, and 132 postoperative months, while 3 patients in the autologous pericardial group survived catastrophic events. Notwithstanding the higher number of fatal consequences in the bovine pericardial group, we did not perform repeat CAG in the patients who suffered sudden cardiac death; unfortunately, we therefore have no information about the condition of the LMCA in relation to these mortalities. As restenosis is frequently asymptomatic, several authors recommend frequent imaging studies, including surveillance CAG, to avoid catastrophic consequences in patients undergoing SA-LMCA, regardless of whether they have cardiac symptoms [14]. It is currently unclear whether restenosis is directly related to the patch material, the surgical technique, or the LMCA stenotic disease process per se. More data are necessary to establish standard guidelines regarding patch materials because of the small number of reported cases and the short follow-up periods studied.

In addition to the commonly used patches, Liska and colleagues proposed using a proximal segment of the internal thoracic artery, and Malyshev et al. introduced the pulmonary autograft patch. Both studies reported excellent early results. The proximal right internal thoracic artery is sizable, pliable, and sufficiently robust to reconstruct the LMCA, in contrast to its distal segment. The pulmonary artery shares a common embryological origin with the aorta and has similar endothelial properties [15,16]. Internal thoracic artery and pulmonary autograft patches may be ideal patch materials and preferable to the saphenous vein and pericardium, provided that they exhibit excellent long-term outcomes.

The cumulative estimates in this study at 5, 10, and 15 years are summarized as follows: overall survival of 96, 87, and 73%; freedom from cardiac death of 96, 87, and 82%; and freedom from MACE of 91, 70, and 58%, respectively.
In a previous study evaluating 15-year follow-up after CABG from the Coronary Artery Surgery Study (CASS) registry, the overall survival was 90, 74, and 56% at 5, 10, and 15 years, respectively [17]. Furthermore, another study from the CASS registry, including 630 cases of left main equivalent coronary artery disease treated with CABG, demonstrated a cumulative survival of 88, 69, and 44% at 5, 10, and 15 years, respectively [18]. Sabik and colleagues evaluated 3,803 patients treated with CABG for LMCA stenosis and reported an overall survival of 83, 64, and 44% at 5, 10, and 15 years, respectively [19]. Taggart and colleagues reviewed several studies of CABG and PCI for LMCA stenosis and reported an in-hospital mortality of 2-3% and a 30-day mortality of 3-4.2% after CABG for LMCA stenosis [14]. Considering that CABG is a well-established procedure, in contrast to SA-LMCA, the early and long-term outcomes of this study are remarkable despite the small number of cases and the great controversy surrounding patch materials.

Notwithstanding these important findings, the present study had a few limitations, such as its retrospective observational design with a small sample size and the lack of a control group. Because of the small number of patients included, we could not perform a multivariate statistical analysis or draw appropriate conclusions regarding statistical significance.

Conclusions

SA-LMCA with a pericardial patch demonstrated acceptable late outcomes in selected patients with LMCA stenosis, comparable to the well-documented outcomes of CABG, despite the controversy regarding indications and patch materials. Both pericardial patches exhibited similar MACE rates, although the bovine pericardial patch was associated with more fatal consequences.
A predictive model for the ichnological suitability of the Jezero crater, Mars: searching for fossilized traces of life-substrate interactions in the 2020 Rover Mission Landing Site

Ichnofossils, the fossilized products of life-substrate interactions, are among the most abundant biosignatures on Earth; they may therefore provide scientific evidence of potential life that may have existed on Mars. Ichnofossils offer unique advantages in the search for extraterrestrial life, including the fact that they are resilient to processes that obliterate other evidence for past life, such as body fossils, as well as chemical and isotopic biosignatures. The goal of this paper is to evaluate the suitability of the Mars 2020 Landing Site for ichnofossils. To this end, we apply palaeontological predictive modelling, a technique used to forecast the location of fossil sites in uninvestigated areas on Earth. Accordingly, a geographic information system (GIS) of the landing site is developed. Each layer of the GIS maps the suitability for one or more ichnofossil types (bioturbation, bioerosion, biostratification structures) based on an assessment of a single attribute (suitability factor) of the Martian environment. Suitability criteria have been selected among the environmental attributes that control ichnofossil abundance and preservation in 18 reference sites on Earth. The goal of this research is delivered through three predictive maps showing which areas of the Mars 2020 Landing Site are more likely to preserve potential ichnofossils. On the basis of these maps, an ichnological strategy for the Perseverance rover is identified, indicating (1) 10 sites on Mars with high suitability for bioturbation, bioerosion and biostratification ichnofossils, (2) the ichnofossil types, if any, that are more likely to be present at each site, and (3) the most efficient observation strategy for detecting eventual ichnofossils. The predictive maps and the ichnological strategy can be easily integrated into the existing plans for the exploration of the Jezero crater, realizing benefits in life-search efficiency and cost reduction.

INTRODUCTION

Seeking signs of past life (biosignatures sensu Slater, 2009; Gargaud, 2011) in the geological record of Mars is one of the four primary goals of the NASA Mars 2020 mission (Mustard et al., 2013a; Manrique et al., 2020; NASA, 2020a). To this aim, the mission payload includes a robotic rover, Perseverance, which was launched from Earth on July 30th, 2020 (Maki et al., 2020). The mission landed on February 18th, 2021 in Jezero Crater, an impact crater located in the NE region of Mars (Shahrzad et al., 2019; Horgan et al., 2020; Maki et al., 2020). The detection of potentially life-supporting (habitable) palaeoenvironments and the identification of deposits with high potential to preserve possible biosignatures have been key aspects of landing site selection (Grant et al., 2018; Mangold et al., 2020). The fact that the Jezero crater hosted a palaeolake with two deltas, as well as inlet and outlet valleys, is one of the major reasons why it was selected as the landing site for the Perseverance rover (Fig. 1). Evidence of current or past water is regarded as a key requirement for habitability because liquid water is required by all organisms on Earth (Tosca, Knoll & McLennan, 2008).
Depositional environments dominated by hydrodynamically quiet, fine-grained sedimentation, such as the deltaic bottomsets within the Jezero crater, have a high concentration and preservation potential for organic matter (Summons et al., 2011; Mangold et al., 2020). The presence of lacustrine carbonates throughout the region and inside the Jezero crater makes this palaeolake a landing site of great interest not only for in-situ studies but also for potential sample return (Ehlmann et al., 2008a; Ehlmann et al., 2008b; Mangold et al., 2020). In fact, lacustrine carbonates have a high potential for preserving morphologic, organic, and isotopic biosignatures (Berra, Felletti & Tessarollo, 2019; Horgan et al., 2020).

The Perseverance payload includes several tools for detecting biosignatures (Williford et al., 2018). The SuperCam tool is a suite of four co-aligned instruments that allows detection of morphological biosignatures and organics on a broad survey scale using remote Raman, fluorescence, high-resolution micro-imaging and VISIR spectroscopy (Maurice et al., 2015). For instance, SuperCam will allow the identification of coatings and their possible relationship to biological activity, and will characterize the regolith's potential for biosignature preservation (Maurice et al., 2015). Arm-mounted tools (PIXL, SHERLOC) will perform finer-scale observations. Organics (e.g., hopanes, steranes, organic macromolecules) will be searched for using SHERLOC, a deep-UV native fluorescence and resonance Raman spectrometer (Beegle et al., 2015). WATSON, based on the Mars Hand Lens Imager, has been added to the instrument, allowing fine-scale colour imaging of rock samples (Martin et al., 2020). The presence of morphological and chemical biosignatures will be investigated with PIXL, a micro-focus X-ray fluorescence spectrometer. It can reveal spatial variations in morphology and chemistry at hand-lens scale, allowing the detection of (eventual) stromatolite laminations (Allwood et al., 2020).

The resolution of the Perseverance tools allows imaging of (potential) products of life-substrate interactions, such as burrows, borings, trails, stromatolites and microbial-induced sedimentary structures (MISS). Nevertheless, their study (ichnology) has received little attention in astrobiology (see the review of Baucon et al., 2017) and almost no attention in the context of the Mars 2020 mission. For instance, the PIXL and SuperCam tools can image eventual macroscopic and microscopic products of bioturbation, i.e., the process by which the primary consistency and structure of sediment are modified by the activities of organisms living within it (Frey & Pemberton, 1985; Bromley, 1996; Pemberton et al., 2001) (Figs. 2A, 2B). However, the Mars 2020 documents (Mustard et al., 2013b; Hays et al., 2017) do not include bioturbation structures among the biosignatures to be searched for on Mars. Bioerosion, the mechanical or biochemical drilling of a rigid substrate (Fig. 2C) (Frey & Pemberton, 1985; Bromley, 1996; Pemberton et al., 2001), is rarely mentioned in research related to Mars 2020 (e.g., Czaja et al., 2020; Ivarsson, Sallstedt & Carlsson, 2020). Biostratification, i.e., the process by which organisms impart stratification features to the substrate (Fig. 2D), can result in microbialites such as MISS and stromatolites, which are the only products of life-substrate interactions that have received thorough attention in the context of the Mars 2020 mission (Mustard et al., 2013b; Hays et al., 2017).
According to the Landing Site Data Sheets (NASA, 2020b), microbial-induced sedimentary structures may have been preserved in the quiet deltaic or lacustrine deposits of the Jezero Crater.

This lack of attention is surprising because plausible ancient Martian biosignatures are considered to be similar to the types of biosignatures characterizing the Precambrian rock record of Earth (McMahon et al., 2018; Czaja et al., 2020), which is indeed rich in fossilized products of life-substrate interactions (ichnofossils). The Precambrian rock record comprises 1.7 Ga (billion years) microborings (Zhang & Golubic, 1987), 2.1 Ga macroscopic burrows (El Albani et al., 2019), 3.2 Ga MISS (Noffke et al., 2006; Heubeck, 2009; Noffke, 2009) and 3.49 Ga stromatolites (Allwood et al., 2007). The abundant Precambrian ichnofossil record is related to the excellent preservation potential of ichnofossils, which often record the activity of soft-bodied organisms that usually do not fossilize. Ichnofossils are resilient to processes (e.g., mechanical and chemical degradation, diagenesis, tectonism, metamorphism and meteorite impact) that obliterate other biosignatures, such as body fossils, as well as chemical and isotopic evidence for past life (Baucon et al., 2017).

The occurrence of bioturbation, bioerosion and biostratification ichnofossils within ancient Earth deposits, even when putative, encourages the application of ichnological studies to the Mars 2020 mission. The ichnological approach is further supported by the presence of corresponding ichnofossil-like structures on Mars. Elongate structures with sudden changes in orientation, resembling bioturbation ichnofossils, have been reported from the Vera Rubin Ridge, in the eastern part of Mars (Baucon et al., 2020a). Microboring-like structures, consisting of curved and dendritic microtunnels, have been observed in the Martian meteorites Nakhla and Yamato 000593 (Fisk et al., 2006; Gibson et al., 2006; McKay et al., 2006; White et al., 2014). Structures possibly related to biostratification have been reported from the <3.7 Ga Gillespie Lake Member on Mars (Noffke, 2015). Although the biogenicity of these ichnofossil-like structures from Mars is highly debated, they speak to the feasibility of the ichnological approach to the Mars 2020 mission.

The goal of this paper is to fill this methodological gap by evaluating the suitability of the Mars 2020 Landing Site for ichnofossils. To this end, this work applies palaeontological predictive modelling, a technique used to predict the location of fossil sites in uninvestigated areas on Earth (Oheim, 2007; Anemone, Emerson & Conroy, 2011). Before its palaeontological application, predictive modelling was widely used by archaeologists to find new sites and to identify areas in greatest need of protection (Kohler & Parker, 1986; Mehrer & Wescott, 2005; Oheim, 2007; Verhagen, 2018). Predictive modelling assumes that palaeontological sites are not randomly distributed but that their location is related to certain characteristics of the modern and past environment, e.g., the percentage of bedrock covered by vegetation, the permanence of water, and ancient oxygen levels (Oheim, 2007; Verhagen, 2007; Anemone, Emerson & Conroy, 2011). Such characteristics are typically ranked and combined to produce a predictive map, i.e., a raster map of cells (pixels) in which each cell contains a probability value representing the potential of containing a palaeontological site (e.g., Oheim, 2007; Anemone, Emerson & Conroy, 2011).
In line with these applications, the goal of this research is delivered through a set of predictive maps showing which areas of the Mars 2020 landing site in the Jezero crater are more likely to preserve ichnofossils. The predictive nature of this study should be highlighted: the predictive model aims at detecting areas of high ichnological potential on Mars, but this does not necessarily imply the existence of life on Mars. Accordingly, predictive modelling can be used as a scientific tool to guide future efforts to the most ichnologically sensitive regions of the Jezero crater, realizing benefits in life-search efficiency and cost reduction.

GEOLOGICAL SETTING

Jezero is a 45-km wide impact crater in the north-eastern area of Mars in the Syrtis Major quadrangle (18.2°N, 77.6°E), a region dominated by a mafic crust (Horgan et al., 2020; Mangold et al., 2020). The central basin floor of the Jezero crater is capped by a ∼13 m thick volcanic unit, whereas sedimentary deposits are observed close to the crater rim (Schon, Head & Fassett, 2012; Shahrzad et al., 2019). Based on crater size-frequency distributions, the volcanic unit has been dated to the Early Amazonian (Schon, Head & Fassett, 2012; Shahrzad et al., 2019). Two ancient fluvial valleys enter the Jezero crater: Neretva Vallis to its west (Fig. 1) and an unnamed valley to the north (Fassett & Head, 2005; Mangold et al., 2020). Deltaic deposits are found at the mouths of the corresponding palaeorivers within the Jezero crater. The Western Delta is dominated by Fe/Mg smectites and exhibits well-defined sedimentary layering, whereas the Northern Delta is dominated by Mg-carbonates and associated olivine but is less well preserved (NASA, 2020b). The delta plain environment is the best-preserved depositional setting of the Jezero delta complex, whereas most of the prodelta deposits have been eroded by aeolian processes (Schon, Head & Fassett, 2012; Day & Dorn, 2019). Accordingly, the present front of the Jezero fan is not a primary depositional feature but a steep (≥10-30°) erosional escarpment (Schon, Head & Fassett, 2012). Isolated distal remnants of sedimentary material, located ∼3 km from the continuous deposit, rise ∼150 m above the basin floor and also serve as indicators of the larger former extent of the delta (Schon, Head & Fassett, 2012). Jezero crater is the only known location on Mars where orbital detections of carbonates are found close to robust fluvio-lacustrine features (Horgan et al., 2020).

Jezero crater has been studied for more than a decade, but the timing and duration of its fluvial and lacustrine activity are still poorly constrained. According to Fassett & Head (2008), incision of the Jezero valley system ended at approximately 3.8 Ga, at the Noachian-Hesperian boundary (see also Goudge et al., 2018). Sedimentologic, stratigraphic and geomorphic evidence has allowed the palaeoenvironmental evolution of the Jezero crater to be reconstructed to a certain extent. Accommodation space resulted from the formation of a Noachian-aged impact crater (Schon, Head & Fassett, 2012; Goudge et al., 2018). Subsequently, the Jezero crater rim was breached by crater degradation processes and precipitation-fed valley networks, initiating the filling of the basin (Schon, Head & Fassett, 2012). Formation of the outlet channel began once these valley networks flooded the crater basin (Schon, Head & Fassett, 2012).
Although the Jezero delta is thought to have formed when a river flowed into the Jezero crater around the Late Noachian/Early Hesperian boundary (Fassett & Head, 2008; Schon, Head & Fassett, 2012), its age remains uncertain, and the duration of the surface flows that formed the delta is poorly constrained (Lapôtre & Ielpi, 2020). It is hypothesized that the majority of Martian fluvial activity peaked at approximately the Noachian-Hesperian boundary and ceased shortly thereafter (Goudge et al., 2015). The carbonate unit may represent authigenic lacustrine carbonates, precipitated in the near-shore environment of the Jezero palaeolake (Horgan et al., 2020). The presence of significant residual accommodation space in the Jezero Crater indicates that sediment transport and deposition into the lake terminated before the basin was completely filled (Schon, Head & Fassett, 2012). Duration estimates for delta deposition and lake persistence vary from several years to millions of years (Schon, Head & Fassett, 2012; Goudge et al., 2015; Mangold et al., 2020).

GIS organization

This study applies predictive modelling, a technique used in palaeontology and archaeology (Oheim, 2007; Anemone, Emerson & Conroy, 2011), to predict the location of (eventual) ichnofossil sites in the Mars 2020 Landing Site. Predictive modelling typically uses geographic information system software to combine attributes associated with the preservation and distribution of fossils (Oheim, 2007). In this study, the software QGIS 3.10.12 'A Coruña' (QGIS.org, 2021) is used to develop a geographic information system of the Mars 2020 Landing Site. The extent of the study area corresponds to the area of greatest scientific interest to the Mars 2020 Science Team and where high-resolution data of the High Resolution Imaging Science Experiment (HiRISE) are available. The map of Stack et al. (2020) (Fig. 3) is also used for deriving palaeoenvironmental information about Mars.

The GIS of the study area is organized in six input layers and three output predictive layers (Table 1). Each input layer maps the suitability for one or more ichnofossil types (bioturbation, bioerosion, biostratification structures) based on the assessment of a single attribute (suitability factor) of the Martian environment. In other words, any location within a single input layer is associated with a suitability score describing how desirable the local conditions are for ichnological site location. As a result, each location of the Mars 2020 Landing Site is associated with six suitability scores (Table 2). We followed the predictive modelling procedure of Oheim (2007), which successfully used four levels of classification for suitability scores. Accordingly, in this study suitability scores range from 1 to 4, with 4 representing the most desirable conditions (e.g., uncovered bedrock) and 1 the least desirable ones (e.g., completely covered bedrock). The scoring system is a relative one, i.e., it informs about how a location of the study area ranks in relation to the others. Scores are attributed based on the characteristics of the geological units (Table 3) of the Mars 2020 landing site. The predictive layers result from the weighted overlay of multiple input layers (Table 1). The predictive layers map the suitability for a specific ichnofossil type, e.g., the higher the value on the bioturbation map layer, the more suitable the corresponding location is for the preservation of bioturbation ichnofossils.
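To make the layer-and-score bookkeeping concrete, the following is a minimal Python sketch of one way the six input layers could be organized; the unit names and score values are illustrative assumptions, not the published contents of Tables 1-3.

# Illustrative organization of the GIS input layers (scores 1-4,
# following Oheim, 2007; all names and values here are hypothetical).

# Bedrock units carry four palaeoenvironmental scores: energy regime (E),
# the two substrate-cohesiveness variables (L, H) and sedimentation rate (R).
bedrock_scores = {
    "delta_thinly_layered": {"E": 4, "L": 3, "H": 2, "R": 4},
    "delta_blocky":         {"E": 2, "L": 2, "H": 2, "R": 3},
    "crater_floor":         {"E": 3, "L": 2, "H": 3, "R": 2},
}

# Surficial units carry a single cover score K
# (4 = uncovered bedrock, 1 = completely covered bedrock).
surficial_scores = {
    "talus": 3,
    "large_aeolian_bedforms": 1,
    "uncovered": 4,
}

# The sixth factor, water-table position (W), is not unit-based: it is
# derived from the digital terrain model, as described in the workflow.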
Source data collection

The source data for constructing the predictive model comprise (1) ichnosite data, (2) vector data, and (3) raster data. Eighteen reference ichnosites on Earth are considered for assessing suitability scores and the importance (weight) of each suitability factor in controlling ichnofossil distribution. The reference ichnosites are selected because of their similarity to the Mars 2020 Landing Site in terms of depositional environment (fluvial, lacustrine), processes (deltaic processes), genetic surfaces (unconformities) or exploration conditions. The location, palaeoenvironment and age of the ichnosites are presented in Fig. 4 and Table 4. We conducted fieldwork in each of the reference ichnosites, with few exceptions. Specifically, four sites (Renox Creek, La Brava Lake, Little Muddy Creek and the Francevillian Basin) have been studied using bibliographic references (Hannon & Meyer, 2014; El Albani et al., 2014; El Albani et al., 2019; Tietze & Esquius, 2018; Zonneveld, Bartels & Clyde, 2003). During fieldwork, ichnofossils have been photographed using different cameras: Nikon Coolpix W300, Canon EOS 100D, Sony DSC-HX60V, Fujifilm FinePix S9500, Olympus X450. The specifications of these cameras are comparable to those of the imaging tools mounted on the Perseverance rover, thus allowing the feasibility of the ichnological approach on Mars to be tested. Specifically, the field of view, resolution, and focal length of the fieldwork cameras are within the range of the Perseverance imaging tools, as defined in Mars 2020 technical reports and analogue studies (Godin, Caudill & Osinski, 2017; Edgett, Caplinger & Ravine, 2019; Martin et al., 2020).

Vector and raster data are used to evaluate the distribution of environmental parameters across the Mars 2020 landing site. Vector data comprise the shapefile of the photogeologic map of the Perseverance rover field site (Stack et al., 2020). Raster data include HiRISE image pairs (NASA, 2020c), the HiRISE visible base map (USGS Astrogeology Science Center, 2020a), and the HiRISE digital terrain model (USGS Astrogeology Science Center, 2020b).

Workflow

The predictive model of the Mars 2020 landing site is obtained by following a five-step workflow. The procedure is based on the predictive modelling workflow of Balla et al. (2014), which has been slightly modified to accommodate the lack of field data from the Jezero crater. The following steps have been applied:

1. Selection of the suitability factors: selecting the environmental parameters (suitability factors) controlling the distribution of potential ichnofossils;
2. Proxy assessment: determining the geological proxies that inform on the suitability factors selected in step 1;
3. Quantification of the suitability scores: attributing suitability scores to each location of the Mars 2020 Landing Site based on the assessment of suitability factors;
4. Assessment of suitability weights: estimating the suitability weights, namely the importance of each suitability factor in controlling ichnosite location;
5. Data aggregation: adding together the scores of multiple suitability factors (overlay analysis) to identify the most suitable locations for bioturbation, bioerosion and biostratification ichnofossils.

The methodological aspects of the workflow are presented below, whereas the assessment of the predictive variables (e.g., suitability factors, proxies, scores and weights) is thoroughly discussed in the next section because of the specificity of the subject.
From the methodological perspective, the first step for developing a predictive model requires the selection of the predictive parameters, i.e., the suitability factors that control the distribution of the objects of interest. In fact, predictive models use multi-parametric spatial analysis of georeferenced data to identify areas of possible interest (Store & Kangas, 2001; Balla et al., 2014). In the present study, suitability factors are selected among the environmental attributes that are known to determine ichnosite location on Earth, provided that they are independent of Earth-type life.

Except for the surficial cover, all the selected suitability factors are related to the palaeoenvironmental conditions of Mars, which cannot be directly observed. For this reason, the second step of the workflow requires the identification of environmental proxies, i.e., geological proxies that are informative of the ancient environmental conditions of Mars. This step is based on the principles of (palaeo)environmental analysis, which is the process by which the depositional environment of a sediment is determined (Selley, 2000). The characteristics of a depositional environment have a fundamental control on the properties of the resulting rock unit (Nichols, 2009), including texture and sediment size, sedimentary structures, mineralogy and elevation range. In this paper, such characteristics are derived from the HiRISE visible map and the digital terrain model of the landing site, as well as by considering published observations on the Jezero crater (Hoefen, 2003; Ehlmann et al., 2008b; Ody et al., 2013; Goudge et al., 2015; Bramble, Mustard & Salvatore, 2017; Palumbo & Head, 2018; Rogers et al., 2018; Kremer, Mustard & Bramble, 2019; Horgan et al., 2020; Mandon et al., 2020; Stack et al., 2020).

In the third step of the workflow, each location of the study site is attributed a set of suitability scores for ichnofossils. Following the ranking scheme of Oheim (2007), the scores of ichnological suitability range from 1 to 4, with 4 representing the most desirable condition and 1 representing the least desirable condition for ichnofossils. Score assessment is based on theoretical considerations and the characteristics of the 18 reference ichnosites on Earth (Table 4). From the practical side, scores are first attributed to the geological units described in the most recent photogeologic map of the Mars 2020 Landing Site (Stack et al., 2020). Score assessment is based on the environmentally informative characteristics (proxies) of each unit. As a result of the scoring process, four scores are linked to each bedrock unit and a single score is associated with each surficial unit (Table 2). Subsequently, scores are related to the spatial distribution of each unit, which is derived from the vector file of the photogeologic map of the Mars 2020 Landing Site (Fig. 3). A code snippet is written to automate the process of relating scores to the spatial distribution of the geological units (Supplemental Information 1); the snippet is run using the field calculator of QGIS, and an illustrative stand-in is sketched at the end of this subsection. A similar process is followed for the suitability score of the water table position (W). Using the digital terrain model of the landing site, high suitability scores (W = 4) are attributed to the locations below the most elevated position of the Jezero lake shoreline (shoreline elevation based on Salese et al., 2020), whereas low suitability scores (W = 1) are attributed to those above.
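The published snippet of Supplemental Information 1 is not reproduced here; the following standalone Python sketch only illustrates the same two scoring steps, relating a factor score to each mapped unit and thresholding the terrain model for W. Unit names, score values, and the shoreline elevation are assumptions for illustration (for the actual shoreline elevation, see Salese et al., 2020).

# Hypothetical stand-in for the unit-scoring logic run in the QGIS field
# calculator; names and values below are illustrative assumptions.

ENERGY_SCORE = {                        # factor E per mapped bedrock unit
    "delta_thinly_layered": 4,
    "delta_layered_rough": 4,
    "delta_blocky": 2,
    "neretva_vallis_fill": 1,
}

SHORELINE_ELEVATION_M = -2400.0         # placeholder value

def energy_score(unit_name: str) -> int:
    """Energy-regime score E for a polygon of the photogeologic map."""
    return ENERGY_SCORE.get(unit_name, 1)    # unmapped units: least suitable

def water_table_score(elevation_m: float) -> int:
    """W = 4 below the highest palaeolake shoreline, W = 1 above it."""
    return 4 if elevation_m <= SHORELINE_ELEVATION_M else 1

# Example: a 'delta_blocky' polygon whose terrain elevation is -2500 m
print(energy_score("delta_blocky"), water_table_score(-2500.0))   # 2 4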
Our predictive model takes into account the fact that some suitability factors are more influential than others in ichnological site distribution. The fourth step is therefore the assessment of the suitability weights, i.e., the suitability factors that have more importance in the model are given a higher percentage influence (weight) than the others. The relative importance of the suitability factors is based on theoretical considerations and observations at the reference sites of Fig. 4.

The fifth step is data aggregation, according to which the scores of multiple suitability factors are weighted and added together to identify the most favourable locations for detecting bioturbation, bioerosion and biostratification ichnofossils. This process is a weighted overlay analysis. Following Balla et al. (2014), aggregation of palaeoenvironmental data is achieved using Weighted Linear Combination, i.e., each suitability score related to the ancient Martian environment is multiplied by its weight and the results are summed. Provided that the sum of all weights equals 1, the result will have the same range (1-4) as the one specified for the suitability scores (Balla et al., 2014). Characteristics of the modern Martian environment can preclude the observation of the bedrock, e.g., thick deposits of unconsolidated sediment (surficial cover) can completely obscure the bedrock. To consider this aspect, a quantity describing the surficial cover conditions is subtracted from the result of the aggregation of palaeoenvironmental data. The following Eq. (1), based on the formula of Balla et al. (2014: p. 122), expresses this aggregation process in a generalized form:

N = \sum_{i} x_i w_i - k    (1)

where N is the aggregated suitability score, x_i is the value of the suitability factor i, w_i is the weight of the suitability factor i, and k is the suitability score for the surficial cover. Equation (1) is used to calculate the three aggregated suitability scores of our model, namely the bioturbation (A), bioerosion (B) and biostratification (C) suitability scores (Eqs. (2)-(4)). The same environmental condition can have a different importance for different trace types; therefore, the same suitability factor can be associated with a different weight when calculating A, B, or C (Table 5). To this aim, vector input layers are rasterized and aggregated. The result is a set of three predictive maps, each of which maps the suitability for a specific ichnofossil type (bioturbation, bioerosion or biostratification structure). A worked numerical sketch of this overlay is given below, after the list of suitability factors.

Assessment of suitability scores

Many different attributes of the environment, both modern and past, influence fossil preservation and accessibility on Earth (Oheim, 2007), and are therefore eligible as suitability criteria for the Mars 2020 predictive model. On Earth, ichnosite location depends on the percentage of surficial cover concealing the bedrock and on the attributes of the palaeoenvironment controlling tracemaker activity, e.g., hydrodynamic energy, substrate cohesiveness, oxygenation, salinity, sedimentation rate, food supply, bathymetry, water turbidity, climate and position of the water table (Bromley, 1996; Buatois & Mángano, 2011; Knaust, 2017). However, only five of these factors are considered in the model proposed here: (1) surficial cover (variable K); (2) energy regime (E); (3) substrate cohesiveness (variables L and H); (4) sedimentation rate (R); (5) position of the water table (W).
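As anticipated above, here is a minimal NumPy sketch of the weighted overlay of Eq. (1) on toy 2 × 2 rasters. The scores, weights, and cover values are invented, not those of Tables 2 and 5; in particular, treating k as a cover penalty equal to 0 over uncovered bedrock is one possible reading of the subtraction, not necessarily the paper's exact coding.

import numpy as np

# Toy weighted overlay implementing Eq. (1): N = sum_i(x_i * w_i) - k.
E = np.array([[4, 4], [2, 1]])          # energy-regime scores
R = np.array([[4, 3], [3, 2]])          # sedimentation-rate scores
W = np.array([[4, 4], [4, 1]])          # water-table scores

# Weights for one ichnofossil type (e.g., bioturbation); summing to 1
# keeps the weighted sum within the 1-4 range of the input scores.
weights = {"E": 0.4, "R": 0.3, "W": 0.3}

# Surficial-cover term subtracted from the weighted sum
# (0 = uncovered bedrock, larger values = thicker cover; assumed coding).
k = np.array([[0, 0], [0, 3]])

N = weights["E"] * E + weights["R"] * R + weights["W"] * W - k
print(N)
# [[ 4.   3.7]
#  [ 2.9 -1.7]]  (up to floating-point rounding)

Repeating the overlay with the bioerosion and biostratification weight sets of Table 5 yields the B and C maps in the same way.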
These suitability factors were selected because they influence ichnological suitability independently from the planetary locale in which they are found. This selection criterion is explained by the fact that extraterrestrial ecosystems, if any, may have differed from Earth ecosystems in environmental conditions, biochemistry and evolutionary history (Benner, Ricardo & Carrigan, 2004; McKay, 2010). Even if it is acknowledged that early Earth and Mars shared similar physical and chemical surface properties (Horneck, 2000; Kargel, 2004; Read, Lewis & Mulholland, 2015), their early environmental history was necessarily different, and there cannot be a perfect analogy between the two planets during their early history (Hipkin et al., 2013; Baucon et al., 2017). Also, the evolutionary history of any inhabited astronomical object should be unique (Morris, 1999), and environmental events can drive evolution via mass extinctions and directional selection (Schulze-Makuch, Irwin & Fairén, 2013). Factors excluded from the predictive model (e.g., oxygenation, salinity) are closely tied to terrestrial biology and are therefore inappropriate for predicting the ichnological suitability of Martian locations. For instance, ancient oxygenation levels are known to control the distribution of marine ichnofossils on Earth (Bromley & Ekdale, 1984; Baucon et al., 2020b), but this pattern is related to the fact that most metazoan life on Earth evolved to require oxygen (Danovaro et al., 2010).

Each of the following sections presents a single selected suitability factor, describing (1) the specific criteria for its selection, (2) the geological proxies used to deduce its spatial variability, and (3) the distribution of the related suitability scores.

Surficial cover

Selection criteria – The identification of ichnological sites on Earth is based on the observation of the bedrock where eventual ichnofossils are found. Surficial cover, consisting of unconsolidated superficial deposits covering solid rock, hampers the observation of the bedrock, thus precluding the detection of eventual ichnofossils (Fig. 5). Likewise, surficial cover on Mars (regolith, dune systems), if present, precludes the observation of the Martian bedrock and hence of eventual ichnofossils. Consequently, the surficial cover is selected as a suitability factor for ichnosite location.

Recent erosional phenomena (e.g., wind weathering) are also known to influence trace fossil preservation on Earth (Henderson, 2006). However, the impact of wind erosion is difficult to predict because it requires quantifying either the present or the past prevailing wind direction and intensity. This is further complicated by the fact that Mars differs from Earth in several weather-related parameters, i.e., its greater distance from the Sun, its smaller size, its lack of liquid oceans and its thinner atmosphere, composed mainly of CO2 (Henderson, 2006; Read, Lewis & Mulholland, 2015). In addition, wind erosion can obliterate but also enhance the visibility of trace fossils via selective weathering processes; areas that are highly exposed to winds may not necessarily be less suitable than sheltered ones. For these reasons, there is a high risk of overinterpreting the effect of wind on ichnofossil suitability. Consequently, we did not include it in the predictive model.

Proxies for spatial distribution – The distribution of surficial cover across the Mars 2020 Landing Site can be deduced from the photogeologic map of Stack et al. (2020), which presents the distribution of surficial units.
(2020), which presents the distribution of surficial units. HiRISE imagery allows us to understand the degree to which each surficial unit covers the bedrock.

Suitability scoring - Surficial units are attributed suitability scores ranging from 1 to 4, with 4 representing the most desirable cover conditions (i.e., uncovered bedrock) and 1 the least desirable ones (i.e., completely covered bedrock). Comparison with the Bayanzag ichnosite, which presents surficial units comparable with those found within the Jezero Crater, has been particularly informative for attributing suitability scores (Figs. 5A-5C). In fact, Cretaceous ichnofossils can be observed within the talus deposits of Bayanzag, consisting of fragmented bedrock accumulated at the base of the cliffs (Fig. 5A). Based on this observation, the talus deposits of the Jezero crater are assigned a relatively high score notwithstanding their nature as surficial cover. The aeolian and deflation units of Bayanzag also find immediate analogies with the aeolian (large and small) and undifferentiated smooth units of the Mars 2020 Landing Site. These surficial units significantly hamper the observation of ichnofossils; therefore, a similarly unsuitable condition is assumed for the areas of the Jezero crater covered by large aeolian bedforms (e.g., Neretva Vallis, crater floor) (Fig. 6). These areas are, therefore, unsuitable for the detection of ichnofossils, if any are present. By contrast, large areas of the Western Delta are uncovered, allowing the observation of the Martian bedrock and of any ichnofossils preserved within it.

Energy regime

Selection criteria - Fluid flow is one of the most widespread transport and deposition processes in both subaerial and aqueous sedimentary environments. On Earth, the importance of fluid flow is exemplified by the pervasive action of currents, waves, and winds, among other processes, on the type and mobilization of the substrates. Geological evidence indicates persistent water flow on Early Mars (Malin & Edgett, 2003), as well as ancient wind activity (Banham et al., 2018). Any flowing fluid possesses energy due to its motion, which is often referred to as hydrodynamic energy in the case of flowing water. The energy of a flowing fluid is one of the most common limiting factors in trace fossil distribution on Earth, influencing both tracemaker behaviour and the preservation potential of ichnofossils (Buatois & Mángano, 2011). The energy regime is chosen as a suitability factor because it influences the suitability for ichnofossils independently of the planetary locale. In fact, substrate particles are increasingly removed from the bottom as the energy increases (Allen, 1992; Nichols, 2009; Duran Vinent et al., 2019). Particle removal is independent of the physical or biogenic nature of the sedimentary structures preserved within the substrate, i.e., a burrow obeys the same physical rules governing, for instance, the preservation of ripples or mudcracks. Consequently, high-energy conditions tend to obliterate pre-existing fabrics and sedimentary structures regardless of their biogenic or abiogenic nature. This phenomenon depends solely on sediment and flow dynamics, holding both on Earth and on Mars.

Proxies for spatial distribution - The spatial variability of the past energy conditions at the Mars 2020 Landing Site is here deduced from sedimentary architecture, grain size and landforms.
Deltas are formed by deceleration of the river outflow into a basin with a standing body of water (Postma, 1990; Postma, 2003); therefore, the distance from the mouths of the palaeorivers is used as a reliable indicator of hydrodynamic energy in the Jezero palaeolake. Grain size is also used as a proxy for hydrodynamic energy because transport of sediment occurs when the currents are strong enough for the bed shear stress to exceed the threshold of motion, which depends upon the median sediment grain size (Ward et al., 2020). An additional hydrodynamic proxy is the geomorphic evidence of channels, which are recognized as areas of high-velocity flow in fluvio-deltaic systems (Shaw, Mohrig & Wagner, 2016). Since hydrodynamic energy usually decreases with increasing water depth, the current elevation of the landing site is also taken as a proxy for hydrodynamic conditions.

Suitability scoring - The bedrock units of the landing site are attributed scores from 1 to 4, with 4 representing the best (lowest-energy) hydrodynamic conditions for ichnosite location. Low scores are attributed to high-energy settings because most surface or near-surface bioturbation traces are removed by erosion in high-energy environments, whereas the preservation potential increases with decreasing energy (Hallam, 1975; Curran, 1994; Bromley, 1996; Buatois & Mángano, 2011). This phenomenon is explained by the sedimentological nature of bioturbation structures, which are at one with the substrate before and after diagenesis (Hallam, 1975). The low ichnological suitability of high-energy settings is supported by observations at the Arda reference ichnosite, which encompasses a gradient from high-energy (fluvial, shoreface) to low-energy (offshore) fluvial-influenced settings (Crippa et al., 2018). High-energy deposits of the Arda section, Italy (Fig. 7A) display a lower bioturbation intensity than low-energy ones (Fig. 7B). These high-energy deposits also show how tracemakers need to invest extra effort to cope with the shifting substrates associated with high-energy conditions, i.e., the producers of Ophiomorpha reinforced their burrows with pellets (Fig. 7A). Based on these observations, high suitability scores for bioturbation are attributed to the relatively quiet, distal areas of the delta, characterized by the delta thinly layered and delta layered rough units. By contrast, lower scores are attributed to the proximal areas of the Jezero delta, often characterized by channelized deposits consisting of the coarse-grained delta blocky unit (Fig. 8). This scoring is supported by observations made at the Ventimiglia palaeodelta (Pliocene, Italy). Here, high-energy deltaic deposits are unbioturbated or sparsely bioturbated (Figs. 9A-9C), whereas higher bioturbation intensities are associated with lower-energy ones (Fig. 9D). Similarly, the high-energy regime of the Neretva Valley, consisting of an incised fluvial channel, is interpreted as being particularly unsuitable for ichnofossil preservation. By removing substrate particles, high-energy conditions negatively influence the preservation of bioerosion and biostratification structures as well. However, high-energy conditions have a lesser influence on the preservation of bioerosion and biostratification structures because these are related to more cohesive, erosion-resistant substrates.
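The grain-size proxy introduced above relies on the threshold-of-motion concept. The sketch below gives a minimal numerical illustration using the classical Shields criterion; the critical Shields parameter and the grain and fluid densities are assumed values for illustration (they are not taken from Ward et al., 2020), and the constant-parameter form strictly applies only to fully turbulent flow over non-cohesive grains.

```python
def critical_shear_stress(d50, g=3.71, rho_s=2900.0, rho_f=1000.0,
                          theta_c=0.045):
    """Shields criterion: tau_c = theta_c * (rho_s - rho_f) * g * d50.

    d50     -- median grain size (m)
    g       -- gravity (m s^-2); 3.71 for Mars, 9.81 for Earth
    rho_s   -- grain density (kg m^-3), basaltic sand assumed
    rho_f   -- fluid density (kg m^-3), liquid water assumed
    theta_c -- critical Shields parameter (assumed constant)
    Returns the bed shear stress (Pa) needed to entrain the grains.
    """
    return theta_c * (rho_s - rho_f) * g * d50

# Coarser deposits record higher-energy flows:
for d50_mm in (0.1, 0.5, 2.0):
    tau_c = critical_shear_stress(d50_mm / 1000.0)
    print(f"D50 = {d50_mm} mm -> tau_c ~ {tau_c:.2f} Pa on Mars")
```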
This aspect is addressed in the data aggregation stage (step 5) of the workflow, i.e., a small weight is attributed to the energy regime when calculating the overall bioerosion and biostratification suitability, whereas a higher weight is attributed to the energy regime for bioturbation suitability. In more detail, this choice is justified by the fact that bioerosion ichnofossils necessarily develop in hardgrounds, which are less prone to erosion than softgrounds by their nature as lithic substrates. For this reason, the energy regime is not regarded as among the major factors influencing the suitability for (possible) bioerosion ichnofossils in the Mars 2020 Landing Site. In parallel to bioerosion ichnofossils, biostratification structures tend to withstand high-energy conditions better than bioturbation ichnofossils. This is explained by the fact that biostratification structures tend to stabilize depositional surfaces and shelter the sediment against erosion (Noffke et al., 2001). As a result, biostratification structures are common across a wide hydrodynamic range. Modern subtidal Bahamian stromatolites are positively associated with strong tidal currents because these are unfavorable for competitors such as metazoans and macroalgae (Noffke & Awramik, 2013). Exclusion of competitors is also the reason why stromatolites are common in hypersaline lagoons, exceedingly warm waters and macrotidal settings (Noffke & Awramik, 2013; Suosaari, Reid & Andres, 2019). Some MISS are associated with mild erosional regimes, e.g., erosional pockets are produced where pieces of microbial mat are removed by erosion, leaving an irregularly shaped mat border surrounding a depression through which the underlying sediment is exposed (et al., 2012). Based on these observations, the high-energy deposits of the Jezero crater are here interpreted as less suitable for biostratification than low-energy ones. This parallels the impact of the energy regime on the suitability for bioturbation structures. However, because of the aforementioned substrate-stabilizing effect, a relatively low weight is assigned to the suitability score of the energy regime for biostratification. The weight is used in overlay analysis, during which a weighted sum is computed across multiple layers to account for the fact that some suitability factors are more influential than others in biostratification structure distribution.

Substrate cohesiveness

Selection criteria - Substrate cohesiveness is selected as a suitability factor because the mechanical properties of the substrate constrain the type of ichnofossil that can (potentially) be produced. On Earth, the mechanisms of moving through solid substrata depend on the mechanical properties of the substrate (Dorgan, 2015). Organisms move through loose substrates by displacing sediment grains (bioturbation; Fig. 10A); they move through hard substrata by creating an opening by mechanical or chemical means (bioerosion; Fig. 11A) (Bromley, 1996; Dorgan, 2015). As a result, the cohesiveness of a given substrate constrains the type of ichnofossils that can be produced within it, i.e., bioturbation structures can be produced only in unconsolidated substrates because hardgrounds do not provide grains to displace. This relationship between substrate cohesiveness and ichnofossil type holds not only for Earth but also for Mars because it derives solely from the mechanical properties of the substrate, being independent of the planetary locale on which the substrate is found.
This is demonstrated by the traces left by the Curiosity rover, which produced 'bioturbational' trails on Martian softgrounds and 'bioerosional' drill holes in hardgrounds (Baucon et al., 2017). Consequently, the presence of softgrounds is here regarded as a suitability factor for bioturbation ichnofossils, whereas the presence of hardgrounds is a suitability factor for bioerosion. For this reason, the impact of the substrate on ichnofossil suitability is accounted for by two different variables, L (substrate suitability for bioturbation) and H (substrate suitability for bioerosion; Table 1). When calculating the overall suitability for bioturbation (A; Table 1), H is ignored and L contributes to the weighted sum. Conversely, L does not contribute to the weighted sum for bioerosion suitability (B; Table 1). Whereas substrate cohesiveness constrains the development of bioerosion and bioturbation structures, it exerts less influence on biostratification. This may appear counterintuitive because biostratification acts on loose particles; however, loose particles can be found not only within softground substrates but also in the water column. For instance, the most commonly cited pathway for Phanerozoic marine stromatolites is the trapping-and-binding model, according to which successive generations of microbial filaments (or extracellular polymeric substances) trap grains settling from the water column and bind the sediments via precipitated cements (Shapiro, 2007). Substrate type does not necessarily limit the movements of biostratification-forming organisms, e.g., cyanobacteria on Earth move through the sediment that blankets them by gliding upwards within their secreted exopolysaccharides (Hoiczyk, 2000; Foster et al., 2009). The weak influence of substrate type on biostratification is supported by the fact that fossil and modern stromatolites are reported from both softgrounds and hardgrounds (Reid et al., 1995; Stefano et al., 2002). It should, however, be noted that MISS are restricted to softgrounds by their nature as sedimentary structures. This is exemplified by the lacustrine MISS of the Collio Formation, Italy, which are dissected by mudcracks and are therefore related to softgrounds (Figs. 10B-10C).

Proxies for spatial distribution - The cohesiveness of the ancient substrate is deduced from the sedimentological characteristics of the bedrock units. Specifically, deltaic deposits are interpreted as proxies for softgrounds based on the fact that delta formation requires unconsolidated sediments. Conversely, deposits that predate the Jezero impact are interpreted as hardgrounds. Carbonate-rich units were deposited as softgrounds, but the terrestrial record (Knaust, Curran & Dronov, 2012) shows that they can undergo early cementation. For this reason, deposits rich in carbonate are taken as a proxy for both softground and hardground conditions.

Suitability scoring - Bedrock units have been attributed bioturbation suitability scores ranging from 1 to 4, with 4 representing the most desirable substrate conditions (i.e., loose substrates) and 1 the most unsuitable ones (i.e., hardgrounds) (Fig. 12). An inverse scoring system is used for bioerosion, i.e., 4 represents the most suitable conditions (hardgrounds) and 1 the most unsuitable ones (loose substrates) (Fig. 13). Deltaic units are assigned high bioturbation scores because they necessarily derive from the deposition of unconsolidated sediments, which would have allowed benthic organisms, if any, to displace sediment grains (Fig. 10B).
By contrast, the crater rim units are attributed low suitability scores for bioturbation (Fig. 12) because they are part of the basement sequence that predates the formation of Jezero crater. Consequently, they plausibly represented hardgrounds at the time of the Jezero palaeolake. For the same reason, the crater rim is attributed a high suitability score for bioerosion (Fig. 13). Hard substrates were also provided by the crater rim breccia, which has been interpreted as an impact breccia formed during the Jezero impact event or the Isidis event. The above suitability scores are based on the fact that bioturbation and bioerosion require opposite substrate conditions (e.g., hardgrounds cannot be bioturbated). In most cases, sites with a high bioturbation score (L = 4) are attributed a low bioerosion score (H = 1), and vice versa. However, the ichnofossil record of the Earth shows that, under specific environmental conditions, the same site can be suitable for both bioturbation and bioerosion ichnofossils. For instance, the unconformity surface of the Oura reference ichnosite (Portugal) displays crustacean bioturbation ichnofossils (Thalassinoides) intersected by bivalve bioerosion ichnofossils (Gastrochaenolites) (Figs. 11B-11D). The unconformity developed within a Miocene (Serravallian) bioturbated softground that subsequently lithified and formed a rocky shoreline, which was in turn bioeroded (Cachão et al., 2009). A similar scenario is possible for the crater floor units of the Mars 2020 Landing Site, which may be cross-cut by an unconformity surface. Unconformities tend to become colonized by substrate-controlled trace fossil suites when exposed to aqueous conditions (Pemberton et al., 2001; MacEachern et al., 2007; Buatois & Mángano, 2009; Richiano et al., 2019). Similarly, the richness in carbonate of the margin fractured unit (Ehlmann et al., 2008b; Goudge et al., 2015; Horgan et al., 2020) may have favored fast diagenesis, producing a softground-to-hardground transition. High suitability scores for bioturbation are attributed to the crater floor deposits based on their possible nature as fluvio-lacustrine softgrounds. This interpretation is supported by the contact of the crater floor fractured 2 unit with the deltaic units, as well as by the textural similarities between the crater floor fractured units 1 and 2. However, it should be highlighted that other plausible interpretations are available, e.g., lava flows, magmatic intrusions, impact condensates, tephra deposits, and aeolian, airfall and fluvial deposits (Hoefen, 2003; Ody et al., 2013; Bramble, Mustard & Salvatore, 2017; Palumbo & Head, 2018; Rogers et al., 2018; Kremer, Mustard & Bramble, 2019; Mandon et al., 2020; Stack et al., 2020).

Sedimentation rate

Selection criteria - Sedimentation rate has long been recognized as among the major influences on the intensity of bioturbation, i.e., the degree to which the original fabric of the substrate has been modified by organisms (Bromley, 1996; Taylor, Goldring & Gowland, 2003; Buatois & Mángano, 2011). Data from Earth show that very low bioturbation intensities commonly correlate with elevated rates of sedimentation and massive bedding, while high bioturbation intensities are usually associated with slow sedimentation and heterolithic deposition (Gingras, MacEachern & Dashtgard, 2011). Low or null sedimentation also enables the colonization of hardgrounds by boring organisms (Łaska, Rodríguez-Tovar & Uchman, 2021).
The relationship between sedimentation rate and the degree of biological reworking is explained in terms of availability of time, i.e., the degree to which a substrate is biologically reworked depends on the amount of time available for biogenic activity per unit accumulation of sediment (Howard, 1975). In other words, slow sedimentation rates provide organisms with more time for reworking (bioturbating or bioeroding) the substrate. This phenomenon is not dependent upon the planetary locale; therefore, sedimentation rate is selected here as a suitability factor for the Mars 2020 Landing Site.

Proxies for spatial distribution - On Earth, the sedimentation rate can be estimated by considering the thickness of a given sedimentary unit and the amount of time over which the unit was deposited. These variables cannot be precisely estimated for the geological units of the Mars 2020 Landing Site; however, the identification of the major sources of sediment allows a qualitative estimate of the spatial variability of the sedimentation rate across the study area. It is not possible to provide absolute values for the sedimentation rate, but it is possible to assess relative values based on the different architectural elements of the Jezero delta. In this regard, only two fluvial inlets entered the Jezero lake, bringing sediments from a mineralogically diverse area into the lake (Schon, Head & Fassett, 2012). Consequently, the areas of maximum sedimentation coincided with the deltaic areas adjacent to the river mouths; conversely, the areas with the lowest sedimentation rates were located far away from the river mouths. This interpretation is supported by investigations of terrestrial deltas. These are not exact analogues of the Jezero delta but necessarily share similar sedimentary dynamics due to the rapid deceleration of water flow at the river mouth. For example, in the Fraser River delta, Canada, the maximum sedimentation (∼13 cm yr−1) occurs in the vicinity of the river mouth (Hart, Hamilton & Barrie, 1998; Ayranci & Dashtgard, 2013). However, in deltaic systems where density currents occur regularly, a significant proportion of the riverine sediment input may be transferred to the distal part of the system, leading to important distal sediment accumulation zones (distal delta lobe). A sediment budget was calculated for the Rhone River delta system, in eastern Lake Geneva, Switzerland (Silva et al., 2019): mean sedimentation rates vary from 0.0737 m yr−1 (delta front) to 0.0246 m yr−1 (distal delta lobe), while the remaining area, dominated by lake basin background deposition, shows sedimentation rates one order of magnitude smaller.

Suitability scoring - The geological units of the study area have been attributed scores ranging from 1 to 4, with 4 representing the most desirable conditions (i.e., low sedimentation rate) for bioturbation and bioerosion ichnofossils (Fig. 14). High scores are attributed, for instance, to the distal areas of the Jezero delta, where the sedimentation rate was plausibly low, providing any organisms with more time for bioturbating the substrate. By contrast, the areas in the vicinity of the palaeoriver mouth are attributed low suitability scores.
This suitability scoring is motivated not only by ichnological theory (Bromley, 1996; Taylor, Goldring & Gowland, 2003; Buatois & Mángano, 2011; Gingras, MacEachern & Dashtgard, 2011; Tonkin, 2012), but also by empirical observations at the Pramollo Basin (Carboniferous-Permian; Italy-Austria) (Baucon & Neto de Carvalho, 2008; Baucon et al., 2015b). Here, delta front deposits present lower bioturbation intensities than the units deposited in more distal locations of the same basin (Fig. 15). The same phenomenon is reported from lake settings, i.e., sedimentation rate tends to exceed bioturbation rate in the more proximal sectors of lacustrine deltas (Figs. 16A, 16B) (Zonneveld, Bartels & Clyde, 2003; Buatois & Mángano, 2011). Conversely, higher bioturbation and bioerosion rates are normally associated with lower sedimentation rates (Figs. 16C, 16D).

Water level

Selection criteria - On Earth, the availability of water is a fundamental control on ichnofossil formation (Hasiotis & Honey, 2000; Buatois & Mángano, 2011). This mirrors the astrobiological principle by which water is an essential compound for the existence of life as we know it (Mottl et al., 2007; Jones & Lineweaver, 2010). The presence of water is considered so important for life that the astrobiological exploration of Mars has been guided by the search for water (Hubbard, Naderi & Garvin, 2002; Grotzinger, 2009). In the Jezero Crater there are clear proxies for the past presence of liquid water (Salese et al., 2020), which therefore plausibly represented an abundant resource for organisms, if any were present. For these reasons, the presence of water has been selected as a suitability factor for the Mars 2020 predictive model, although it should be noted that extraterrestrial life without water is conceivable as well (Mottl et al., 2007).

Proxies for spatial distribution - The presence of water within the study area is deduced from sedimentological and geomorphological observations. In this regard, fluvio-deltaic landforms are clear indicators of the presence of water in a liquid state, for which reason there is little doubt about the aqueous environment of the Jezero deltas (Salese et al., 2020). Elevation is another proxy for the presence of water. According to Salese et al. (2020), the basin was initially filled up to −2,243 m, which is here regarded as the reference elevation for establishing the ichnological suitability related to the water table level. After the breach of the crater rim, the water level dropped to −2,410 m and, during the deposition of the Jezero delta, the top of the delta had the same elevation as the bottom of the breach (Salese et al., 2020). According to Schon, Head & Fassett (2012), the base level within the palaeolake was controlled by the outlet channel and was near −2,400 m.

Suitability scoring - Scores ranging from 1 to 4 have been attributed to the geological units of the Mars 2020 Landing Site, with 4 representing the most desirable conditions (i.e., presence of a permanent water table) for ichnofossils (Fig. 17). Since the availability of water is important for ichnofossil formation (Hasiotis & Honey, 2000; Buatois & Mángano, 2011), high scores are attributed to the locations sited below the most elevated position of the Jezero shoreline (i.e., −2,243 m according to Salese et al., 2020). This high-score threshold is extended to −2,200 m to include the most elevated areas that lie within the fluvial channel in the Neretva Valley.
This suitability scoring is also supported by empirical observations in the Nurra area (Permian-Triassic, Italy), where ichnofossil distribution is strongly controlled by the past water table level. Specifically, the Nurra sedimentary sequence records the transition from a fluvial ecosystem with a permanent water table (Permian) to hyper-arid conditions (Triassic) (Baucon et al., 2014). Bioturbation ichnofossils are abundant and diversified in the Permian deposits, whereas the Triassic hyper-arid deposits tend to be unbioturbated (Fig. 18). The Lower Triassic hyper-arid period is documented in many other European deposits, often presenting ichnological characteristics comparable with those of the Nurra area (Durand, Meyer & Avril, 1989; Bourquin et al., 2007; Cassinis, Durand & Ronchi, 2007; Durand, 2008). Even if microbial life can tolerate hyper-arid environments (Bull & Asenjo, 2013), high suitability scores for biostratification (W = 4) are here attributed to past aqueous environments. This is explained by the fact that increasing aridity reduces microbial diversity and abundance (Maestre et al., 2015). In addition, carbonate precipitation in aqueous environments is an excellent mechanism for biosignature preservation (Farmer & Marais, 1999; Horgan et al., 2020). This suitability scoring approach is supported by observations at the Collio Basin (Permian, Italy), which preserves lacustrine carbonates with abundant biostratification ichnofossils (Berra, Felletti & Tessarollo, 2019). In the Collio Basin, oncoids, stromatolites and MISS are associated with lacustrine palaeoenvironments dominated by carbonates, i.e., spring-fed ponds at the toe of alluvial fans (Fig. 19). Similarly to the Collio Basin, the Jezero crater preserves carbonate-rich deposits in close proximity to the lake margin (Horgan et al., 2020).

RESULTS

Suitability for bioturbation, bioerosion and biostratification is estimated by data aggregation, that is, overlay analysis of the input layers (Table 1). Input layers map the suitability of (1) surficial cover (K); (2) energy regime (E); (3) substrate cohesiveness (L: for bioturbation; H: for bioerosion); (4) sedimentation rate (R); (5) position of the water table (W). Using Eq. (1) and the weights of Table 5, a bioturbation (A), bioerosion (B) and biostratification (C) score is assigned to each location of the study area, the x, y coordinates of which are indicated in subscript in the following formulas:

A_xy = Σ_i (w_i,A · x_i,xy) − (4 − K_xy)    (2)
B_xy = Σ_i (w_i,B · x_i,xy) − (4 − K_xy)    (3)
C_xy = Σ_i (w_i,C · x_i,xy) − (4 − K_xy)    (4)

where x_i are the palaeoenvironmental suitability factors (E; L or H; R; W) and w_i,A, w_i,B and w_i,C are the trace-type-specific weights of Table 5. It should be noted that suitability scores associated with null weights have been omitted from the equations. The ichnological suitability scores A, B and C equal 4 when the conditions of the environment (past and present) are maximally favourable for ichnosite location. This implies the absence of surficial cover overlying the bedrock (e.g., recent dune systems), so that the quantity 4 − K_xy equals 0. Equations (2)-(4) result in a set of three predictive maps, each of which shows the suitability for a specific ichnofossil type, i.e., the higher the value on the map, the more suitable the corresponding location is for ichnofossils. Each map shows threshold values (A ≥ 3 for Fig. 20; B ≥ 3 for Fig. 21; C ≥ 3 for Fig. 22) to identify where ichnofossils, if any, are more likely to occur than in other locations. The value of 3 has been conventionally selected as a threshold because it splits off the highest quarter of the score range from the lowest three quarters.
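As an illustration of how Eqs. (2)-(4) and the threshold can be applied, the sketch below builds the three predictive maps and the threshold masks in one pass. The per-trace-type weight dictionaries are hypothetical stand-ins for Table 5, and the placeholder rasters stand in for the real input layers.

```python
import numpy as np

# Per-trace-type weights (hypothetical stand-ins for Table 5). Factors
# with a null weight are simply absent from a dictionary, mirroring
# their omission from Eqs. (2)-(4).
WEIGHTS = {
    "A": {"E": 0.4, "L": 0.3, "R": 0.2, "W": 0.1},   # bioturbation
    "B": {"E": 0.1, "H": 0.5, "R": 0.2, "W": 0.2},   # bioerosion
    "C": {"E": 0.1, "L": 0.3, "R": 0.2, "W": 0.4},   # biostratification
}

def predictive_maps(layers, cover, threshold=3):
    """Eqs. (2)-(4) plus thresholding: per trace type, returns the score
    map and a boolean mask of the most suitable locations (score >= 3,
    i.e., the top quarter of the 1-4 range)."""
    out = {}
    for trace_type, w in WEIGHTS.items():
        score = sum(w[name] * layers[name] for name in w) - (4 - cover)
        out[trace_type] = (score, score >= threshold)
    return out

# Example with 2x2 placeholder rasters:
layers = {name: np.full((2, 2), 4) for name in ("E", "L", "H", "R", "W")}
cover = np.array([[4, 4], [1, 4]])
for t, (score, mask) in predictive_maps(layers, cover).items():
    print(t, score.min(), mask.sum())
```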
From ichnological neglect to ichnological appreciation

The predictive model presented in this paper identified the areas of the Mars 2020 landing site with the highest potential for ichnofossil location. This approach adds value to astrobiological research because ichnofossils have been largely neglected in the search for Martian life, with the only exceptions of microborings (e.g., Staudigel et al., 2008; McLoughlin et al., 2010; Lepot, 2020), stromatolites (e.g., Mustard et al., 2013b; Hays et al., 2017) and MISS (e.g., Noffke, 2015; see the review by Baucon et al., 2017 for a more complete list of ichnological approaches to astrobiology). Bioturbation ichnofossils have been almost completely disregarded in the search for Martian life, with few exceptions (e.g., Baucon et al., 2020a). The neglect of ichnofossils in astrobiology plausibly derives from a common assumption, that is, the microbial nature of (possible) Martian life. Microorganisms are regarded as the most likely candidates for a putative biota of an extraterrestrial habitat (Horneck, 2000). By contrast, ichnofossils conjure up images of macroscopic worm burrows. It should, however, be noted that ichnofossils ''are not the only result of the activity of burrowing or grazing macroorganisms, but also of microbes that interact with the sediment'' (Noffke, 2009: p. 173). On Earth, this is the case not only for microborings but also for macroscopic bioturbation structures such as Trichichnus. Trichichnus is a macroscopic (0.1-0.7 mm in diameter) cylindrical structure derived from bioelectrical operations resulting from bacterial activity in the oxygen-depleted part of sediments (Kędzierski et al., 2015). Macroscopic bioturbation ichnofossils, consisting of winding burrows, are documented from 2.1 Ga deposits of Gabon and are tentatively attributed to the bioturbating activity of ameboid cell aggregates (El Albani et al., 2019). Tubular titanite tubes, possibly representing microbioerosion ichnofossils, are widely reported from ∼3.5 to ∼2.5 Ga metamorphosed basaltic pillow-lavas, basaltic hyaloclastite breccias and metamorphosed volcanoclastic rocks (Staudigel et al., 2008; McLoughlin et al., 2010; Lepot, 2020). 1.7 Ga deposits preserve the oldest unquestionable microboring organisms on Earth (Zhang & Golubic, 1987; et al., 2006; Heubeck, 2009; Noffke, 2009). These examples show that the macroscopic record of microbial ichnofossils is abundant, spread across many geological environments, rock types and ages, and well preserved on Earth; therefore, the application of ichnology to astrobiology is much needed. Although microbes are usually regarded as the most likely candidates for a putative extraterrestrial biota (Horneck, 2000), Bains & Schulze-Makuch (2016) suggest that the evolution of complex macroscopic life is nearly inevitable in any world where life has arisen and sufficient energy flux exists. This further encourages the search for extraterrestrial macroscopic ichnofossils.

Advantages and limitations of ichnological predictive modelling

Predictive modelling has never been used to detect ichnofossils, either on Earth or beyond. To date, predictive modelling methods have successfully been used to detect archaeological artefacts (Brandt, Groenewoudt & Kvamme, 1992; Kamermans, 2004; Balla et al., 2014; Nicu, Mihu-Pintilie & Williamson, 2019) and, more recently, body fossil sites (Oheim, 2007; Anemone, Emerson & Conroy, 2011).
As such, this study opens new avenues not only for the search for extraterrestrial life but also for palaeontological research on Earth. There are multiple advantages in applying predictive modelling to the search for ichnofossils on Mars (or other extraterrestrial locales). In parallel to the advantages of predictive modelling in archaeology (Balla et al., 2014), predictive modelling can contribute to astrobiological research by minimizing the requirements for trial observations and excavations, as it detects areas of high biological probability. On Earth, predictive modelling has proved to enhance the chances of finding body fossils and archaeological artefacts (Oheim, 2007; Vaughn & Crawford, 2009); therefore, the same advantages can plausibly be expected for the search for life beyond Earth. By maximizing efforts in high-probability areas, resources (time, energy, and money) used for surveying areas with little potential can be reallocated to mapping and observation efforts in higher-probability areas (Vaughn & Crawford, 2009). The application of predictive modelling is particularly promising for the detection of ichnofossils. Indeed, most ichnofossils cannot be transported, with few and easily detectable exceptions (Seilacher, 2007; Buatois & Mángano, 2011). For instance, a burrow cannot be transported from delta front to prodelta settings without being destroyed by the transport processes themselves. For this reason, ichnological predictive models are particularly reliable because they mostly depend on the in situ characteristics of the rock record. By contrast, predictive modelling of body fossils should also take transport factors into account, e.g., the body fossil of a delta front organism can be found as an allochthonous element in prodelta deposits. The predictive modelling approach followed in this paper is not without limitations. As pointed out by other authors (Brandt, Groenewoudt & Kvamme, 1992), one weakness of the weighted approach to predictive modelling is the element of subjectivity in the weights assigned to each suitability factor. Other statistical approaches are available to deal with the subjectivity factor, e.g., Bayesian statistics (Millard, 2005), machine learning (Yaworsky et al., 2020) and graph similarity analysis (Mertel, Ondrejka & Šabatová, 2018) have been applied to predictive modelling. However, there are no field data for the Jezero crater and only Earth-type life is known; therefore, the application of more sophisticated statistical methods would bring the risk of overinterpretation, that is, reading too much into the limited dataset available. Fortunately, the element of subjectivity is partly reduced by the peculiarities of the ichnofossil record. Ichnofossils can be universal biosignatures, i.e., they are ideally capable of detecting any type of life because they are independent of the morphology, size and biochemistry of the life form they document (Baucon et al., 2017). For this reason, ichnological assumptions based on Earth-type life can readily be applied to the search for extraterrestrial ichnofossils. This universal nature of ichnofossils, and the study of reference ichnosites from different areas and geological periods, dampen the inherent subjectivity of the weighted approach to predictive modelling. Temporal resolution is a second major limitation of the approach followed in this paper.
The resulting ichnological predictive model is aimed at detecting ichnofossils formed during the existence of the Jezero palaeolake, but it is not possible to exclude a priori the existence of habitable conditions before or after the Jezero palaeolake. In parallel to the future directions proposed by Brandt, Groenewoudt & Kvamme (1992), a future direction for astrobiological predictive modelling is analyzing each geological period of Mars separately, thus generating an ichnological model for each period. At present this approach is not feasible because of data limitations and the urgency of focusing on the most promising subject, i.e., the habitable Jezero palaeolake, where Perseverance touched down on February 18, 2021. Spatial resolution is only an apparent limitation of the approach. Specifically, the fine resolution of the ichnological pictures (e.g., Figs. 7, 9, 11) contrasts with the coarser resolution of the HiRISE-derived photogeologic map (Fig. 3) on which our model is based. Nevertheless, the resolution of the HiRISE imagery/photogeologic map does not undermine the results of our paper. In fact, HiRISE resolution allows us to infer the characteristics of Mars at the scale of the depositional (sub)environment. These characteristics plausibly influenced the distribution of (possible) life on Mars; therefore, they have predictive value. The vast number of papers interpreting, directly or indirectly, HiRISE imagery further supports our approach (Hoefen, 2003; Ehlmann et al., 2008b; Ody et al., 2013; Goudge et al., 2015; Bramble, Mustard & Salvatore, 2017; Palumbo & Head, 2018; Rogers et al., 2018; Kremer, Mustard & Bramble, 2019; Horgan et al., 2020; Mandon et al., 2020; Stack et al., 2020). In addition, the distribution of trace fossils (fine scale) depends upon the characteristics of the depositional environment (coarse scale). This well-established relationship between the ichnofossil scale and the scale of the depositional environment (see Seilacher, 2007 and references therein) strongly supports our study. By contrast, the HiRISE resolution is too coarse to spot any centimetre-scale ichnofossils on Mars, but this is not the aim of this study. In other words, the approach of our paper parallels the challenge of detecting sand grains from satellite imagery of Earth: a microscope is needed to detect sand grains, but satellite imagery is enough to assess the possibility of finding sand grains on a beach. The third limitation is the lack of testing, i.e., at present it is not possible to evaluate the performance of the model. Predictive modelling remains useless without some form of test (Brandt, Groenewoudt & Kvamme, 1992). However, this limitation will plausibly be overcome in the coming years, during which the Perseverance rover will thoroughly explore the deltaic deposits of the Jezero crater. This limitation can even turn out to be an advantage because the model proposed here can be used as a planning tool for organizing the path and operations of the Perseverance rover.

An ichnological strategy for the Perseverance rover

Guidance in planning survey campaigns is among the major applications of predictive models on Earth (Balla et al., 2014). In parallel, the predictive maps resulting from this paper find their ideal application in suggesting a traverse plan capable of maximizing the scientific (ichnological) gain of the Perseverance rover. In other words, the predictive maps can guide or focus the Perseverance rover on the sites with the highest potential for ichnofossils.
Accordingly, we identified an ichnological strategy for the Perseverance rover (Fig. 23), indicating (1) ten high-suitability sites, (2) the ichnofossil types that are more likely to be present at each site, and (3) the detection strategy that is best suited to each site. With regard to the third aspect, visible-light photography is enough to carry out an ichnological survey on Mars. Ichnofossils are morphological evidence of biological behaviour; therefore, they offer the practical advantage of requiring no sophisticated tools for their detection. This said, the Perseverance rover provides the opportunity for more sophisticated analyses. The Mars 2020 mission will not only seek the signs of ancient life in situ but will also cache a maximum of 43 samples for a possible return to Earth by a follow-on mission (Williford et al., 2018; Grady, 2020; Muirhead et al., 2020; NASA, 2021). Predictive maps can therefore suggest the optimal caching strategy, based on the assumption that each type of biogenic structure (bioturbation, bioerosion, biostratification) requires a specific type of analysis due to its size constraints. Specifically, grain size constrains the size of the smallest bioturbation and biostratification ichnofossils that may be present in a given sedimentary unit. Bioturbation and biostratification structures are formed by the displacement and reorganization of sediment grains; therefore, they cannot be smaller than the smallest grain. Such a size constraint does not hold for bioerosion structures, which are formed by creating an opening in hard substrata. Consequently, the imaging tools of the Perseverance rover are well suited for (potential) bioturbation and biostratification structures, but their resolution may not be enough for imaging the smallest microbioerosion structures, if any are present on Mars. Imaging of microborings requires specific sample preparation techniques (e.g., the casting-and-embedding technique) and SEM observations (Golubic, Brent & LeCampion-Alsumard, 1970; Wisshak, 2012). Returned samples would allow similarly sophisticated sample preparation and analytical techniques (Grady, 2020). The first site (Fig. 23) of the ichnological traverse is located within the 7.7 × 6.6 km ellipse in which the rover landed (see Farley et al., 2020 for more detail about the landing ellipse). Site 1 displays distal delta remnants, which are suitable for both bioerosion and biostratification. Sites 2 and 3 are also related to a deltaic depositional setting, although their position is more proximal with respect to site 1. Our results suggest employing the imaging tools of the Perseverance rover in the search for any bioturbation and biostratification ichnofossils preserved in the deltaic units. The crater floor units outcrop at sites 4 and 5, which have a high potential for bioturbation and a moderate potential for bioerosion. The possible presence of microbioerosional ichnofossils suggests collecting samples for a future sample-return mission. Site 6 displays deltaic units that are particularly promising for bioturbation, thus encouraging imaging-based observations. Close to site 6, the carbonate-rich deposits of site 7 outcrop. Here, sampling is suggested in order to better evaluate the presence of stromatolite laminations. We suggest visiting site 8, located south of the Belva crater, because here the delta truncated curvilinear layered units outcrop. These units outcrop over a relatively limited area; therefore, site 8 offers one of the few excellent exposures of these units.
The substrate conditions of sites 9 and 10 are particularly suitable for bioerosion, for which reason sampling is suggested. These sites are located along the border between areas suitable for bioerosion and areas suitable for bioturbation; hence, they allow the scientific gain to be maximized. It should be noted that scientific gain is not the only element of traverse planning, which can be seen as an optimization process in which science gain and the level of safety are maximized while driving energy is minimized (Ono et al., 2015; Ono et al., 2020; Fink et al., 2019). Identifying and avoiding terrain hazards are particularly important aspects of planetary rover safety, e.g., the Spirit rover ended its mission because it got stuck in soft terrain, and pointy rocks damaged the wheels of the Curiosity rover (Ono et al., 2015). The predictive maps resulting from this paper are released as raster files (Supplemental Information 2-4) and can therefore be easily integrated into any optimization process related to traverse planning. The Supplemental Materials (SM) also include a QGIS-ready zipped archive with both input and predictive layers (SM5). The raster maps provided in this paper can be combined with other information that is useful to optimize the success of the traverse, e.g., terrain smoothness maps and/or maps of terrain hazards. This step is important because some of the predicted areas may not be safely accessible by the Perseverance rover.

CONCLUSIONS

This study applied predictive modelling, for the first time, to the search for ichnofossils on Mars. The resulting predictive maps show which areas of the Mars 2020 Landing Site are more suitable for potential ichnofossils, i.e., the delta remnants are particularly favourable for bioturbation ichnofossils; the crater rim is suitable for bioerosion ichnofossils; the crater margin is suitable for biostratification structures. These predictions refer to the time in which the Jezero crater hosted a lake, but further research is needed to provide a more complete predictive picture addressing the ichnological suitability of units deposited before and after the Jezero palaeolake. The ichnological predictive maps allowed us to deliver an imaging and sampling plan capable of maximizing the scientific gain of the Perseverance rover. As such, this study provides planning tools that are useful not only for the upcoming in-situ analyses conducted by the Perseverance rover but also for the follow-up sample-return missions. Based on our research on reference ichnosites, we furthermore conclude that, if life ever existed on Mars, it presumably left traces of interaction with the substrate, preserved as bioturbation, bioerosion or biostratification structures that can be readily detected with the instruments of Perseverance, greatly extending the chances of finding evidence of the (past) activity of a single life form. By contrast, the preservation of other biosignatures (body fossils, isotopic and chemical evidence) is more difficult. Also, many ichnofossils, by definition corresponding to sedimentary structures, are independent of the characteristics (morphology, biochemistry and size) of their producers, allowing robust predictions on the morphology and spatial/palaeoenvironmental distribution of (possible) ichnofossils produced by extraterrestrial organisms that differ from Earth-type life.
For these reasons, ichnofossils represent a promising new frontier in the search for extraterrestrial life, and predictive modelling is an ideal complementary tool for detecting their presence on Mars.
Semi-automated Detection of the Timing of Respiratory Muscle Activity: Validation and First Application

Background: Respiratory muscle electromyography (EMG) can identify whether a muscle is activated, its activation amplitude, and its timing. Most studies have focused on the activation amplitude, while differences in the timing and duration of activity have been less investigated. Detection of the timing of respiratory muscle activity is typically based on visual inspection of the EMG signal. This method is time-consuming and prone to subjective interpretation.

Aims: Our main objective was to develop and validate a method to assess the respective timing of different respiratory muscles' activity in an objective and semi-automated manner.

Method: Seven healthy adults performed an inspiratory threshold loading (ITL) test at 50% of their maximum inspiratory pressure until task failure. Surface EMG recordings of the costal diaphragm/intercostals, scalene, parasternal intercostals, and sternocleidomastoid were obtained during ITL. We developed a semi-automated algorithm to detect the onset (EMG, onset) and offset (EMG, offset) of each muscle's EMG activity breath-by-breath with millisecond accuracy and compared its performance with manual evaluations from two independent assessors. For each muscle, the intraclass correlation coefficient (ICC) of the EMG, onset detection was determined between the two assessors and between the algorithm and each assessor. Additionally, we explored muscle differences in EMG, onset and EMG, offset timing and in the duration of activity throughout the ITL.

Results: More than 2000 EMG, onsets were analyzed for algorithm validation. ICCs ranged from 0.75-0.90 between assessors 1 and 2, 0.68-0.96 between assessor 1 and the algorithm, and 0.75-0.91 between assessor 2 and the algorithm (p < 0.01 for all). The lowest ICC was shown for the diaphragm/intercostal and the highest for the parasternal intercostal (0.68 and 0.96, respectively). During ITL, diaphragm/intercostal EMG, onset occurred later in the inspiratory cycle and its activity duration was shorter than those of the scalene, parasternal intercostal, and sternocleidomastoid (p < 0.01). EMG, offset occurred synchronously across all muscles (p ≥ 0.98). EMG, onset and EMG, offset timing and activity duration were consistent throughout the ITL for all muscles (p > 0.63).

Conclusion: We developed an algorithm to detect the EMG, onset of several respiratory muscles with millisecond accuracy that is time-efficient and validated against manual measures. Compared to the inherent bias of manual measures, the algorithm enhances objectivity and provides a strong standard for determining the respiratory muscle EMG, onset.

INTRODUCTION

Respiratory muscle activity to generate ventilation is mainly automated, under the control of the respiratory centers located in the pontomedullary region of the brainstem (Feldman and Del Negro, 2006; Hudson et al., 2016). The respiratory centers' output to the respiratory muscles determines whether a muscle is active, regulates the amplitude of its activation, and coordinates the timing of its activity. In humans, the respiratory centers' output cannot be directly measured (Vaporidi et al., 2020). The respiratory drive to the respiratory muscles measured via electromyography (EMG) is used as its surrogate (Domnik et al., 2020).
Respiratory muscle EMG allows the identification of whether a muscle is active and provides a relative indication of the amplitude of the muscle's electrical activity [e.g., the root mean square (RMS) of the EMG signal]. In addition, the timing of activation can be evaluated by the onset and offset of activity (EMG, onset and EMG, offset, respectively; Epiu et al., 1985; Nguyen et al., 1985; Hodges and Gandevia, 2000; Luo and Moxham, 2005; Hudson et al., 2016; Sinderby et al., 2013; Estrada et al., 2019; Domnik et al., 2020; Vaporidi et al., 2020). Measurements of the respiratory drive via respiratory muscle EMG have now been used for over 100 years (Domnik et al., 2020). Most studies have focused on the amplitude of respiratory muscle activation (e.g., EMG RMS), while the coordination between the timing of different respiratory muscles' activity (e.g., their EMG, onset, EMG, offset, and duration of activity) has been less investigated. Most studies that have investigated the coordination between different respiratory muscles' EMG, onset, EMG, offset, and duration of activity during the respiratory cycle have visually identified when the inspiratory modulation of the EMG activity began and ended (Epiu et al., 1985; Nguyen et al., 1985; Hodges and Gandevia, 2000; Hudson et al., 2016; Sinderby et al., 2013). This approach also allows quantification of the duration of muscle activity in relation to the respiratory cycle (Epiu et al., 1985; Nguyen et al., 1985; Hodges and Gandevia, 2000; Hudson et al., 2016; Sinderby et al., 2013). However, this method can be time-consuming, especially for recordings that contain many breathing cycles (e.g., during exercise). Also, this method is prone to subjective interpretation when determining "when the modulation of the EMG activity began," which could lead to between-assessor variability in defining when the modulation of the EMG signal occurred. We have previously shown that such subjective interpretation can influence effect size estimates in pre- vs. post-interventional studies of EMG data. Moreover, respiratory muscle EMG is highly contaminated by electrocardiogram (ECG) artifacts (Luo et al., 2008; Dacha et al., 2019; Domnik et al., 2020). In breaths where the EMG, onset coincides with the ECG artifact, detection of the EMG, onset is not possible. Filters that exclude the part of the EMG signal that is contaminated by ECG artifacts have been applied (Sinderby et al., 1985; Luo et al., 2008; Hudson et al., 2016). However, these could lead to a time difference of 0.10-0.12 s in the EMG, onset detection, based on the length of the QRS complex of the ECG signal. For example, this could be equivalent to approximately 10% of the inspiratory time at a rate of ≈20 breaths/min and 1 s inspiratory time, and an even greater proportion of the cycle at faster respiratory rates (e.g., during exercise). Despite being small, these differences could either under- or overestimate breath-by-breath or between-muscle comparisons of the EMG, onset, EMG, offset, or activity duration. For instance, the 5th dorsal external intercostal has been described as the latest intercostal activated during inspiration, at approximately 14.5% of the inspiratory time, compared to the costal diaphragm at −2.5% or the 3rd external intercostal at −1.0%, in measurements performed using needle EMG (Saboisky et al., 1985). We previously developed an algorithm that filters out ECG artifacts from the EMG signal of the respiratory muscles without the need to exclude any segment of the EMG signal.
Herein, our main objective, building on the previous version of this algorithm, was to develop and validate a method to investigate the EMG, onset of different respiratory muscles in an objective and time-efficient manner. Specifically, we aimed to: (1) develop a semi-automated algorithm to detect the EMG, onset of the respiratory muscles; (2) determine the interrater reliability of the manual method of detecting the EMG, onset of the respiratory muscles; and (3) validate the EMG, onset detected by our semi-automated algorithm against manual measures. Secondly, we applied the algorithm to explore (1) EMG, onset differences among four respiratory muscles during a constant-load inspiratory threshold loading (ITL) task performed to task failure in healthy young adults, and (2) whether the timing of the EMG, onset is affected by the intensity of the inspiratory effort, the peak inspiratory flow or the amplitude of the EMG activity.

MATERIALS AND METHODS

Healthy adults aged between 18 and 35 years were included in this study. All participants provided written informed consent at the time of enrollment. Participants were enrolled between March and July 2017. The conduct of the analysis presented herein was approved by the University of Toronto Health Sciences Research Ethics Board (#39041). EMG and ventilatory parameters were recorded during an ITL performed to task failure. During the ITL, participants were cued to target a respiratory rate of 10 breaths per minute by listening to an audio recording but were not instructed to begin inspirations from a particular lung volume. The participants performed the ITL until task failure at 50% of their maximum inspiratory pressure (MIP) by inhaling against a spring-loaded threshold device [PowerBreathe™, Classic MR (range 10-90 cmH2O), International Ltd., England, United Kingdom] connected to a two-way non-rebreathing valve (Hans Rudolph, Kansas City, MO) in line with a mouthpiece with a port connected to a differential pressure transducer (MP45-36-871; Validyne™, Northridge, CA). Task failure was defined as the point when the participants took their mouth off the mouthpiece or could no longer overcome the spring-loaded threshold valve for inspiration for three consecutive breaths. Participants' age (years), height (cm), and mass (kg) were determined, and body mass index (BMI; kg m−2) was calculated. The forced expiratory volume in the first second (FEV1; ml) and forced vital capacity (FVC; ml) were measured by spirometry according to international guidelines (Miller et al., 2005), and the FEV1/FVC ratio was calculated. Dyspnea was measured before the ITL and at task failure using the modified Borg score (Borg, 1982). MIP was measured before the ITL according to the standardized procedure (American Thoracic Society/European Respiratory Society, 2002; Laveneziana et al., 2019), except that participants were in a half-lying position. Participants were instructed to forcefully inspire after having performed a full expiration to residual volume while breathing through a flanged mouthpiece connected to an occluded three-way stopcock (2100 series, Hans Rudolph, Kansas City, MO) with a small port connected to a pressure transducer (MP45-36-871; Validyne™, Northridge, CA). MIP was measured 3 to 10 times until the variability between the three highest values was ≤10%. MIP was expressed in absolute values (cmH2O) and as percent of predicted (%) according to Evans and Whitelaw (2009).
Surface electromyography (EMG) was acquired throughout the ITL to measure muscle activation of the costal diaphragm/intercostals, scalene, parasternal intercostal, and sternocleidomastoid. The participant's skin was prepared by shaving when necessary and cleaned with alcohol. EMG signals were acquired by placing electrodes 2.5 cm apart on the right hemithorax overlying: (1) the costal diaphragm/intercostals at the seventh or eighth intercostal space between the anterior axillary line and midclavicular line, according to the best signal captured; (2) the scalene in the posterior triangle of the neck at the level of the cricoid process; (3) the parasternal intercostals at the second intercostal space close to the sternum; and (4) the sternocleidomastoid midway between the suprasternal notch and the mastoid process. EMG was acquired using an eight-channel bioamplifier (BioAmp; ADInstruments, Colorado Springs, CO), converted to digital signals (PowerLab; ADInstruments), and recorded at 1000 Hz by a data acquisition software (LabChart, ADInstruments, Colorado Springs, CO). Ventilatory parameters, including mouth pressure (Pm), respiratory frequency (fr), inspiratory flow, and inspiratory tidal volume (Vt), measured by a pneumotach connected between the threshold loading device and the inspiratory port of the two-way non-rebreathing valve of the ITL apparatus, together with electrocardiography (ECG) signals, were acquired synchronously with the EMG throughout the ITL using the same system (PowerLab; ADInstruments) and recorded by the same data acquisition software (LabChart, ADInstruments, Colorado Springs, CO).

The Algorithm to Detect the EMG Onset

EMG, inspiratory flow, and ECG signals collected throughout the ITL were exported from the data acquisition software (LabChart, ADInstruments, Colorado Springs, CO) at 1000 Hz into a text file. This file was imported into the LABVIEW software (National Instruments, Austin, TX, United States), in which an in-house algorithm was developed by a biomedical engineer (LJ) to first filter out ECG artifacts from the EMG signals and then automatically detect the onset of each muscle's EMG activity breath-by-breath. The development and validation of the ECG removal process have been previously published by our group. Briefly, we used a bidirectional 2nd-order Butterworth high-pass filter at 20 Hz. A bidirectional filter was applied to avoid lagging and/or leading the EMG signals, as would have occurred if a unidirectional filter had been used, especially at low frequencies, potentially affecting the timing of the EMG, onset (Willigenburg et al., 2012). The ECG artifacts were removed from the EMG signals using a least mean square adaptive filter. This is a pattern recognition method that uses a time-aligned ECG signal to remove the ECG frequency content from the EMG signals without the need to delete any part of the EMG signal, which would also potentially alter the timing of the EMG, onset. The ECG-filtered EMG signals from each muscle were transformed into root mean square (RMS), and the first derivative function of each muscle's EMG RMS was calculated. Based on the derivative function of each muscle's EMG RMS, we identified the "rising" and "descending" phases of the EMG RMS: a positive derivative indicates a rising EMG RMS, whereas a negative derivative indicates a descending EMG RMS.
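The preprocessing chain described above can be sketched in a few lines of code. The snippet below is a minimal illustration, not the authors' LABVIEW implementation: the filter order and cut-off follow the text, while the LMS tap count, step size, and RMS window length are arbitrary placeholders that would need tuning to the signal scale.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def highpass_bidirectional(emg, fs=1000, fc=20.0, order=2):
    """Zero-phase (bidirectional) 2nd-order Butterworth high-pass at 20 Hz.
    filtfilt runs the filter forward and backward, so the EMG is neither
    lagged nor led in time, preserving the timing of the EMG onset."""
    b, a = butter(order, fc / (fs / 2), btype="highpass")
    return filtfilt(b, a, emg)

def lms_ecg_removal(emg, ecg, n_taps=32, mu=1e-4):
    """Least-mean-squares adaptive filter: the time-aligned ECG is the
    reference input; its predicted contribution to the EMG channel is
    subtracted sample by sample, so no EMG segment is deleted."""
    w = np.zeros(n_taps)                  # adaptive filter weights
    clean = np.copy(emg)
    for i in range(n_taps, len(emg)):
        x = ecg[i - n_taps:i][::-1]       # most recent ECG samples
        y = np.dot(w, x)                  # ECG artifact estimate
        e = emg[i] - y                    # artifact-free EMG estimate
        w += 2 * mu * e * x               # LMS weight update
        clean[i] = e
    return clean

def moving_rms(x, fs=1000, window_ms=50):
    """RMS envelope over a sliding window (window length assumed)."""
    n = int(fs * window_ms / 1000)
    return np.sqrt(np.convolve(x ** 2, np.ones(n) / n, mode="same"))
```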
It was necessary to use the derivative function to determine the rising and descending phases of the EMG RMS because: (1) the "baseline" EMG RMS varied from participant to participant and (2) the EMG RMS does not always return to the "baseline" after an activity burst. Therefore, identifying the rising and descending phases of the EMG RMS based on an absolute value would not consistently detect the timing of the EMG onset and offset. Using the flow signal, we identified the beginning of each breath's inspiratory phase (INSP onset) with ±1 msec accuracy. The maximum value of the rise of the EMG RMS was identified breath-by-breath. The onset time of the EMG activity (EMG onset) was defined as the timepoint when the rise of the EMG RMS reached 5% of its maximum (±1 msec). We used a 5% threshold to detect the EMG onset to avoid the inherent variability of the baseline EMG signal being mistakenly identified as activation. The EMG onset could occur either before or after the INSP onset, depending on when the 5% threshold was reached. The EMG filtering and the EMG onset detection could be carried out in parallel for EMG signals from multiple muscles. Figure 1 shows the EMG onset detection for the diaphragm/intercostal, scalene, parasternal intercostal, and sternocleidomastoid in a representative breath. One additional function was implemented in the algorithm to detect the offset of the neural inspiratory drive to the respiratory muscles (EMG offset) using thresholds previously validated in the literature (Sinderby et al., 2013; Estrada et al., 2019). The EMG offset was defined as the timepoint when the EMG RMS dropped by 30% after reaching its peak (Sinderby et al., 2013; Estrada et al., 2019). The analyzed data were then exported from the LABVIEW software (National Instruments, Austin, TX, United States) to a text file containing each muscle's EMG onset and EMG offset and the INSP onset and INSP offset for each breath. The text file was saved for statistical analysis.
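A minimal sketch of the breath-by-breath onset and offset rules described above, assuming the RMS envelope from the previous sketch. The text states the 5% rule in words only, so referencing the rise to the pre-burst minimum is our interpretation, and the function and variable names are hypothetical.

```python
import numpy as np

def detect_onset_offset(rms, breath_start, breath_end, fs=1000):
    """Detect the EMG onset and offset for one breath from the RMS envelope.

    Onset: first sample where the rising RMS reaches 5% of its maximum
    rise. The 'rise' is taken as RMS amplitude above the pre-burst
    minimum -- our interpretation of the rule stated in the text.
    Offset: RMS has dropped by 30% from its peak (the Sinderby-type
    threshold cited in the text).
    Returns (onset_s, offset_s) in seconds, or (None, None).
    """
    seg = rms[breath_start:breath_end]
    peak_i = int(np.argmax(seg))
    baseline = seg[: peak_i + 1].min()          # pre-burst minimum (assumed reference)
    rise = seg[peak_i] - baseline
    if rise <= 0:
        return None, None
    above = np.nonzero(seg[: peak_i + 1] - baseline >= 0.05 * rise)[0]
    onset_i = int(above[0]) if above.size else None
    post = seg[peak_i:]
    below = np.nonzero(post <= 0.70 * seg[peak_i])[0]  # 30% drop from peak
    offset_i = peak_i + int(below[0]) if below.size else None
    to_s = lambda i: (breath_start + i) / fs if i is not None else None
    return to_s(onset_i), to_s(offset_i)
```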
Validation of the Algorithm to Detect the EMG Onset The validation of the algorithm was performed by randomly sampling 10 min of data from each participant's ITL. EMG signals from all muscles, ECG, and flow signals during the ITL were available. First, we used the algorithm to detect the EMG onset for each muscle breath-by-breath as described above. Second, the EMG onset was detected manually by two independent assessors. Both assessors were provided with the ECG-filtered EMG RMS to perform the manual analysis (see above). Both assessors opened the ITL files using a dedicated software (Spike 2, Cambridge Electronic Design Limited, Cambridge, United Kingdom). For each breath, both assessors were instructed to determine each muscle's EMG onset, defined as a visual increase in the EMG RMS, by placing a cursor at the location where they judged the onset occurred. The software then provided the assessors with the time at which they placed the cursor. Figure 2 shows an example of the process for detecting one EMG onset. For each breath, both assessors typed each muscle's EMG onset time with millisecond accuracy into an Excel file that was saved for later analysis. Hence, we could later calculate the interrater reliability between the manual analyses performed by the two independent assessors, and the validity was studied by determining the agreement between each assessor and the algorithm (see below for more details). Both assessors were blinded to each other's results. Of note, if the EMG onset of any muscle could not be identified for a given breath due to artifacts, small amplitude, or any other reason, both assessors were instructed to type "NA" instead of the EMG onset time in the Excel file. Figure 3 shows an example of a breath in which the EMG onset was judged unclear for the diaphragm/intercostal and was thus not identified by the manual method. Additionally, both assessors recorded the time required for their manual analyses. Both assessors had at least 1 year of experience analyzing EMG signals, had received previous training, and had practiced detecting the onset of the EMG signal for each of these four muscles in at least 300 breaths (a total of at least 1,200 onset detections) before performing the analysis reported herein. We did not perform a validation of the EMG offset because the threshold we implemented in our algorithm has already been validated in physiological studies (Sinderby et al., 2013; Estrada et al., 2019) and is widely used in the clinical setting (e.g., to detect the offset of the neural drive in mechanically ventilated patients using neurally adjusted ventilatory assist). FIGURE 2 | Example of the detection of the onset of the EMG activity by the manual method. The figure shows the software layout viewed by the assessors when performing the manual analysis. The red arrow indicates where the assessor judged the onset of the diaphragm/intercostal EMG activity occurred. The black arrow indicates the time in seconds (provided by the software) at which the assessor indicated the diaphragm/intercostal EMG onset occurred. The assessor would type the time indicated by the software into an Excel file with millisecond accuracy. This process was repeated breath-by-breath for each muscle. EMG RMS, root mean square of the EMG signal; DIA/IC, diaphragm/intercostal; PI, parasternal intercostal; SCM, sternocleidomastoid. Timing of Respiratory Muscles EMG Onset Relative to Flow For each breath and each respiratory muscle, the absolute and relative timing differences between the EMG RMS and the inspiratory flow (or Pm) signal were determined as follows: (1) between the EMG onset and the INSP onset and (2) between the EMG offset and the INSP offset. Absolute differences were calculated as the phase difference (dP) in milliseconds, whereas relative timing differences were determined by normalizing the dP to the duration of the inspiratory time based on the flow signal (%Ti), as previously described (Nguyen et al., 1985; Hodges and Gandevia, 2000; Hudson et al., 2016). The normalization was performed to account for breath-to-breath variations in the inspiratory time and to allow us to compare our results to those previously published in the literature (Nguyen et al., 1985; Hodges and Gandevia, 2000; Hudson et al., 2016). The following equations were applied to each breath for each muscle's EMG RMS: (1) absolute difference for the onset: dPon (ms) = EMG onset (ms) − INSP onset (ms); (2) relative difference for the onset: dPon (%Ti) = [dPon (ms)/inspiratory time (ms)] × 100; (3) absolute difference for the offset: dPoff (ms) = EMG offset (ms) − INSP offset (ms); (4) relative difference for the offset: dPoff (%Ti) = [dPoff (ms)/inspiratory time (ms)] × 100. For the timing differences in EMG onset or EMG offset in milliseconds (dP) or normalized by the inspiratory time (%Ti), a value of zero indicates synchrony between the EMG and the flow (or mouth pressure) signal, that is, between the EMG onset and INSP onset or between the EMG offset and INSP offset; a value less than zero indicates that the change in the EMG signal preceded the flow (or pressure) signal; and a value greater than zero indicates that the change in the EMG signal occurred after the flow (or pressure) signal.
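The four timing equations reduce to a single rule applied to onset and offset pairs. The short Python sketch below restates them; the names are hypothetical, and the sign convention follows the text (negative means the EMG change preceded the flow or pressure event).

```python
def timing_difference(emg_event_ms, insp_event_ms, ti_ms):
    """Equations 1-4: absolute (dP, ms) and relative (%Ti) timing
    differences between an EMG event (onset or offset) and the
    matching flow or Pm event. dP < 0: the EMG change preceded the
    flow/pressure signal; dP > 0: it followed it; dP = 0: synchrony.
    """
    dp_ms = emg_event_ms - insp_event_ms   # Eqs. 1 and 3
    dp_pct_ti = 100.0 * dp_ms / ti_ms      # Eqs. 2 and 4
    return dp_ms, dp_pct_ti
```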
Timing of Respiratory Muscles EMG Onset and Offset During the ITL After the algorithm was developed and validated, we analyzed the ITL trials from the start up to isotime and during task failure to explore EMG onset and EMG offset differences among respiratory muscles during the ITL. Isotime was defined as the highest equivalent time achieved by all participants during the ITL, rounded to the nearest minute. Task failure was defined as the last 2 min of each participant's ITL. In addition to the EMG onset and EMG offset, we also analyzed Pm as a surrogate for inspiratory muscle effort. Additionally, because the ITL can introduce variability in the delay between the EMG onset and the INSP onset, we also analyzed the timing of the EMG activity based on the pressure signal. The INSP onset, peak Pm, peak inspiratory flow, vt, and the diaphragm/intercostals, scalene, parasternal intercostal, and sternocleidomastoid EMG onset and EMG offset were determined breath-by-breath during the ITL using the algorithm. For each breath, the diaphragm/intercostals, scalene, parasternal intercostal, and sternocleidomastoid EMG onset and EMG offset, in milliseconds and normalized by the duration of the inspiratory time (based on both the flow and pressure signals), were calculated as described in equations 1-4. We also calculated the duration of activation during each muscle contraction breath-by-breath, defined as the time elapsed between each muscle's EMG onset and EMG offset, in both milliseconds and normalized by the duration of the inspiratory time (based on both the flow and pressure signals). The diaphragm/intercostals, scalene, parasternal intercostal, and sternocleidomastoid EMG onset, EMG offset, and duration of activation, as well as Ttot, inspiratory time, peak inspiratory flow, peak Pm, and vt, were averaged every 2 min during the ITL up to isotime and at task failure to evaluate the time course of these variables. Statistical Analysis No statistical power or sample size calculations were conducted a priori since this was a secondary analysis (Derbakova et al., 2020). Ten minutes of ITL data from each participant were selected for the validation of the algorithm because this provided approximately 100 breaths per participant. Descriptive statistics were reported as frequencies, mean ± SD, or median (25-75% IQR) according to the data distribution assessed using the Shapiro-Wilk test, unless otherwise stated. For the validation of the algorithm, the interrater reliability between assessors 1 and 2 and between each assessor and the algorithm was calculated using the two-way mixed effects Intraclass Correlation Coefficient (ICC) and visualized using Bland-Altman plots. ICCs and Bland-Altman plots were produced for each muscle independently, in milliseconds and normalized by the duration of the inspiratory time. Interrater reliability was classified as "poor" (ICC < 0.5), "moderate" (ICC 0.5-0.75), "good" (ICC 0.75-0.9), or "excellent" (ICC > 0.9).
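For illustration, the sketch below computes the bias and 95% limits of agreement that the Bland-Altman plots display for a pair of raters (for example, assessor versus algorithm onset times). It is a generic sketch rather than the authors' code; dropping breaths that either rater marked "NA" pairwise is our assumption. The two-way mixed effects ICC itself could be obtained with a statistics package (e.g., pingouin.intraclass_corr in Python).

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two methods
    (e.g., assessor vs. algorithm EMG-onset times, in ms).
    NaNs mark breaths one method could not score; pairs with
    any NaN are dropped before computing the statistics.
    """
    a = np.asarray(method_a, dtype=float)
    b = np.asarray(method_b, dtype=float)
    keep = ~(np.isnan(a) | np.isnan(b))
    diff = a[keep] - b[keep]
    bias = diff.mean()                        # AVG in the plots
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)  # (bias, (LL, UL))
```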
To assess differences in the timing of the respiratory muscle EMG onset and EMG offset during the ITL, two-way ANOVAs were conducted to test the main effects of "time" (ITL start vs. isotime vs. task failure; factor 1) and "muscle" (factor 2) and their interaction. Similar two-way ANOVAs were also performed on the EMG RMS and the duration of activation. Repeated measures ANOVAs were used to test the main effect of time (from the ITL start up to isotime and during task failure) on Ttot, inspiratory time, inspiratory flow, vt, and Pm. Repeated measures ANOVAs were also used to compare the EMG onset of each muscle between the ITL start, isotime, and task failure. Post hoc testing for significant variables was carried out using the Tukey adjustment for multiple comparisons. A paired t test was used to compare Borg scores before the ITL and at task failure. The chi-square test was used for comparing frequencies. Density plots using kernel density estimates were built to show the distribution of the diaphragm/intercostals, scalene, parasternal intercostal, and sternocleidomastoid EMG onset normalized by the duration of the inspiratory time at the ITL start, isotime, and task failure. Boxplots were built to supplement the density plot visualization. The Pearson correlation coefficient (r) was used to assess whether the timing of each muscle's EMG onset or the duration of its activation was associated with the magnitude of the inspiratory effort (Pm), the peak inspiratory flow, or the amplitude of its EMG activity (EMG RMS). The strength of the correlations (r) was classified as "small" (0.1-0.3), "medium" (0.3-0.5), or "large" (0.5-1). A p < 0.05 was considered statistically significant for all analyses. RESULTS Participants' Characteristics Seven participants were included. Participants' characteristics are presented in Table 1. Participants' age was 24 ± 1 years, their body composition was normal according to their BMI, and they had a preserved FEV1/FVC ratio. Four were male. MIP was 90 ± 13% of predicted values (Evans and Whitelaw, 2009). Validation of the Algorithm For the validation of the semi-automated algorithm, 70 min of data from the 7 participants were analyzed. Ten minutes of data randomly selected from each participant's ITL generated 92 ± 4 breaths per participant, resulting in a total of 646 breaths and 2,576 EMG RMS signals (7 participants × 92 breaths × 4 muscles). Table 2 shows the number of EMG onsets detected by the algorithm and by each assessor for each muscle. A total of 2,486 EMG onsets were detected by the algorithm, 2,376 by assessor 1, and 2,403 by assessor 2. Assessor 1 detected significantly fewer EMG onsets for the diaphragm/intercostal than both the algorithm and assessor 2 (p < 0.05). Assessor 2 detected fewer EMG onsets for the scalene, parasternal intercostal, and sternocleidomastoid than both assessor 1 and the algorithm (p < 0.05). Figure 4 shows a representative Bland-Altman plot with the interrater reliability between assessors 1 and 2, and Figure 5 between assessor 1 and the algorithm, while Bland-Altman plots for each muscle between assessors 1 and 2 and between each assessor and the algorithm are shown in the online supplement (Supplementary Figures S1-S6). FIGURE 4 (A-D) | Bland-Altman plots and ICC values for the breath-by-breath detection of the dP of the EMG onset by assessors 1 and 2 for the diaphragm/intercostal (A), parasternal intercostal (B), scalene (C), and sternocleidomastoid (D). AVG: average bias between the results from both assessors. UL and LL: 95% confidence interval of the difference between the results from both assessors. n: number of EMG onsets analyzed. dP: phase difference between the onset of the electrical activity of the muscle and the start of inspiration (see text for further details).
Overall, the interrater reliability for detecting the EMG onset was classified as good between assessors 1 and 2 (Figure 4; Supplementary Figures S1, S2), moderate to excellent between assessor 1 and the algorithm (Figure 5; Supplementary Figures S3, S4), and good to excellent between assessor 2 and the algorithm (Supplementary Figures S5, S6; p < 0.001 for all). The mean time required per analysis was greatest for assessor 2, intermediate for assessor 1, and least for the algorithm (174 ± 10 min vs. 66 ± 13 min vs. 1 ± 0 min, respectively; p < 0.001 between all). Timing of Respiratory Muscles EMG Onset and Offset During the ITL To assess the diaphragm/intercostal, scalene, parasternal intercostal, and sternocleidomastoid EMG onset and EMG offset during the ITL, we analyzed the ITL data from the start up to isotime and during task failure from the 7 participants. A total of 1,434 breaths were analyzed (401 ± 144 breaths per participant). The ITL load was 49 ± 8 cmH2O, isotime was 18 min, and the mean time to task failure was 38 ± 13 min. Borg scores increased from baseline to task failure (0 ± 0 vs. 4 ± 2; p = 0.001). Ttot and inspiratory time (Figure 6) did not significantly change from the ITL start to task failure (p = 0.74 and p = 0.63, respectively). Peak inspiratory flow was reduced at task failure compared to the ITL start and minutes 8, 10, 12, and 14 (Figure 6; p < 0.05); vt was reduced at minutes 14 and 16 compared to minute 8, and at task failure compared to minutes 2, 4, 6, 8, 10, and 12 (Figure 6; p < 0.05). The diaphragm/intercostals, scalene, parasternal intercostal, and sternocleidomastoid EMG RMS (Supplementary Figure S7) did not significantly change from the ITL start to task failure (p = 0.96). The EMG RMS was lower in the diaphragm/intercostal than in the scalene, parasternal intercostal, and sternocleidomastoid throughout the ITL (Supplementary Figure S7; p < 0.001). The EMG RMS was also lower in the parasternal intercostal than in the scalene (Supplementary Figure S7; p = 0.02). The EMG onset and EMG offset and the duration of activation of the diaphragm/intercostal, scalene, parasternal intercostal, and sternocleidomastoid, in milliseconds and normalized by the duration of the inspiratory time based on both the flow and pressure signals, are shown in Figures 7 and 8, respectively. The EMG onset and EMG offset of all muscles did not change throughout the ITL (p ≥ 0.63). However, the EMG onset of the diaphragm/intercostal was significantly greater (later) than that of the scalene, parasternal intercostal, and sternocleidomastoid (p < 0.001; Figures 7, 8). The EMG offset occurred synchronously across all muscles (p ≥ 0.98). The duration of activation did not change throughout the ITL for any muscle (p ≥ 0.97). However, the diaphragm/intercostal duration of activation normalized by the duration of the inspiratory time, based on both the flow and pressure signals, was shorter than that of the parasternal intercostal, scalene, and sternocleidomastoid (p ≤ 0.001), whereas in milliseconds it was shorter than that of the parasternal intercostal and scalene only (p ≤ 0.01). Likewise, the time from the EMG onset to peak pressure was consistent during the ITL for all muscles (p = 0.99) but later for the diaphragm compared to the parasternal intercostal, scalene, and sternocleidomastoid (p = 0.001). Figure 9 shows the density plots and boxplots for the EMG onset normalized by the duration of the inspiratory time for the diaphragm/intercostal, scalene, parasternal intercostal, and sternocleidomastoid at the ITL start, isotime, and task failure.
The diaphragm/intercostal had a lower peak density than the scalene, parasternal intercostal, and sternocleidomastoid at the ITL start, isotime, and task failure, whereas the highest peak density was observed at the ITL start for the parasternal intercostal. Boxplots show there was no change in the EMG onset variability for the diaphragm/intercostal, parasternal intercostal, and sternocleidomastoid between the ITL start, isotime, and task failure (p = 0.46, p = 0.25, and p = 0.72, respectively). The scalene EMG onset variability was greater at task failure compared to isotime (p = 0.02). DISCUSSION We developed and validated a semi-automated algorithm to detect the onset, offset, and duration of several respiratory muscles' EMG activity with millisecond accuracy. The algorithm was an extension of a previous algorithm that filtered out ECG artifacts from the EMG of the respiratory muscles. The algorithm had good to excellent reliability compared to the manual detection performed by two different assessors. The EMG onset detection by the semi-automated algorithm was at least 66 times faster per participant than the manual detection; it required a total of 7 min for 7 participants compared to 7.7 to 20 h for the manual assessments. Additionally, we used the algorithm to explore differences in the timing of the diaphragm/intercostal, scalene, parasternal intercostal, and sternocleidomastoid EMG onset, EMG offset, and duration of activation during inspiratory loading, as well as to explore whether their activation timing would be disrupted close to task failure. An ITL trial at 50% of the maximum inspiratory pressure (MIP) performed up to task failure did not change the timing of their EMG onset, EMG offset, or duration of activation in this small sample of healthy adults (n = 7). To the best of our knowledge, this is the first demonstration that the timing of the respiratory muscle EMG activity is minimally affected by an increased inspiratory load, despite a significant increase in breathlessness and the participants' eventual inability to overcome that load. However, the EMG onset of the diaphragm/intercostal occurred later, and its duration of activation was shorter, compared to the scalene, parasternal intercostal, and sternocleidomastoid throughout the ITL. The timing of the EMG onset had statistically significant but small to medium correlations with the peak inspiratory flow, the respiratory muscle effort (Pm), and the amplitude of the EMG activity (RMS) for the diaphragm/intercostal, scalene, parasternal intercostal, and sternocleidomastoid. The validity of the algorithm against the manual detection performed by both assessors ranged from "moderate" to "excellent." The reliability between assessors 1 and 2 was "good." Despite the instructions being similar for both assessors, the manual detection relies on each assessor's interpretation of "a visual increase" in the EMG signal. Such interpretation is influenced by factors related to the quality of the signal, how much of an increase in the EMG signal each assessor deems sufficient to constitute a "visual increase," the visual acuity of the assessor, as well as other aspects such as the size of the screen available for performing the analysis. The reliability between assessors 1 and 2 was classified as "good" for all muscles (Figure 4; Supplementary Figures S1, S2), but there was great breath-by-breath variability, as revealed by the upper and lower limits of agreement of the Bland-Altman plots (Figure 4; Supplementary Figures S1-S6).
We have previously shown that interrater variability in analyzing the EMG signal, for instance while analyzing the EMG RMS, can influence estimates of effect size in interventional studies despite two assessors achieving an "excellent" interrater reliability. The algorithm, therefore, removes an inherent limitation of the manual analysis, that is, the assessor's subjectivity, while significantly improving time efficiency. Nevertheless, the algorithm relies on an a priori established threshold to determine the EMG onset. Whether the 5% threshold used herein is the optimal threshold to estimate the timepoint of the EMG onset may depend on the experimental protocol. However, the algorithm will consistently apply the same standardized criteria to all muscles during breath-by-breath analysis, limiting the potential inherent bias of manual assessments. The timing of the EMG onset, the EMG offset, and the duration of activation was consistent throughout the ITL to task failure for all muscles. Overall, the EMG onset of the respiratory muscles occurred before the beginning of inspiration, whether based on the flow or the pressure signal; however, this was different for the diaphragm/intercostals. The diaphragm/intercostals were activated significantly later (Figures 7A,D, 8A,D) and had greater variability in the timing of the EMG onset (Figure 9), and their duration of activity was shorter than that of the scalene, parasternal intercostal, and sternocleidomastoid (Figures 7C,F, 8C,F), even in this small sample of participants. On the other hand, previous studies have described that during quiet breathing the costal diaphragm is activated earlier than the 2nd parasternal intercostal and the scalene when measurements were performed with needle EMG and the EMG onset was detected visually based on the beginning of the modulation of the EMG signal (Saboisky et al., 1985). These discordant results may be due to the different protocols used (i.e., quiet breathing vs. ITL), the methods used to measure muscle electrical activity (e.g., needle vs. surface EMG), and/or how the EMG onset was detected (e.g., the beginning of the modulation of the EMG signal vs. a 5% increase above the baseline EMG signal). During quiet breathing, coordinated activity of the obligatory muscles of inspiration (i.e., diaphragm, external and parasternal intercostals, and scalene) is essential (De Troyer and Estenne, 1984; De Troyer et al., 2005; De Troyer and Boriek, 2011). Their coordinated activity allows the chest wall to move outward, decreasing intrathoracic/pleural pressure, increasing transpulmonary pressure, and ultimately generating inspiratory flow. Different stimuli will lead to changes in the way the respiratory centers activate the respiratory muscles (De Troyer and Estenne, 1984; De Troyer and Wilson, 1985; De Troyer et al., 1998; Butler, 2007; De Troyer and Boriek, 2011; Rodrigues et al., 2019), and the respiratory center may use different strategies to meet new demands, namely increasing the activation intensity of one or multiple obligatory muscles or activating accessory muscles of inspiration (e.g., the sternocleidomastoid). The output to each specific muscle is regulated based on the muscle's mechanical advantage (De Troyer et al., 1998; De Troyer and Boriek, 2011; Hudson et al., 2019), which depends on its morphology and length-tension relationship and can be influenced by changes in factors such as body posture and lung volumes.
The scalene and the sternocleidomastoid are preferentially activated over the diaphragm during ITL (De Troyer and Estenne, 1984; Laghi et al., 2014; Rodrigues et al., 2019). The sternocleidomastoid has a greater proportion of type II fibers and can generate stronger and faster contractions than the diaphragm, conferring an advantage in overcoming the ITL load (De Troyer and Boriek, 2011). Activating extradiaphragmatic inspiratory muscles such as the scalene, parasternal intercostal, and sternocleidomastoid during ITL is a mechanism to protect the diaphragm against contractile fatigue and to improve its neuromechanical coupling by limiting diaphragmatic shortening (Laghi et al., 2014). Therefore, having a greater mechanical advantage, the scalene, parasternal intercostal, and sternocleidomastoid become the primary flow generators during ITL and are activated earlier than the diaphragm (Figure 7). Nonetheless, methodological differences cannot be ruled out. Previous studies analyzed quiet breathing, detected the EMG onset as the time when the inspiratory modulation of the EMG activity began, and analyzed fewer breaths than ours (Saboisky et al., 1985). Also, we identified the EMG onset based on a 5% increase in the surface EMG RMS instead of determining the beginning of the inspiratory modulation from needle EMG signals. The EMG onset for all muscles had small to medium correlations with the flow, Pm, and EMG RMS. Pm and EMG RMS did not change throughout the ITL. Changes in inspiratory flow, although statistically significant, were small. Lower variance generally decreases the likelihood of detecting a correlation, and the limited variability in these variables may reduce the validity of these correlations. Therefore, we cannot make inferences regarding whether changes in inspiratory flow, respiratory muscle effort, or the intensity of the inspiratory drive (EMG RMS) would affect or modulate the timing of the EMG onset for the different respiratory muscles. Nevertheless, the control of the intensity and of the timing of activation of the respiratory muscles may be independent of each other. For instance, in a study investigating the influence of posture on the timing of activation of the diaphragm and the scalene, Hudson et al. (2016) showed that the timing of activation of the scalene was similar in the upright and upside-down postures, but the amplitude of its activation decreased by about 50% in the upside-down posture. The diaphragm, however, maintained both the timing and the intensity of its activation in both postures. The EMG RMS of the diaphragm/intercostal, scalene, parasternal intercostal, and sternocleidomastoid did not change throughout the ITL. Pm was constant, and, to avoid changes in body posture modifying muscle mechanical advantage, participants were kept in the same position throughout the ITL. Hence, no changes in the EMG RMS would be expected due to either changes in muscle effort or mechanical advantage during our protocol. The factors associated with changes in the timing of the respiratory muscle EMG onset remain to be fully elucidated. The duration of activation did not change throughout the ITL for any muscle (Figures 7C,F, 8C,F). The duration of activation was greater than the inspiratory time for all muscles throughout the ITL (Figures 7C, 8C). That the EMG onset occurs before the beginning of inspiration would be anticipated, since it is necessary to overcome the resistance applied by the threshold loading apparatus before inspiratory flow begins.
The fact that the timing of the EMG onset did not change over time suggests that there was no change in the coordination with which the inspiratory muscles were deployed by the respiratory centers (De Troyer and Boriek, 2011). Nevertheless, this was a cohort of healthy young adults with preserved lung function and inspiratory muscle strength. Whether this phenomenon is similar or altered in patients with impaired inspiratory muscle function (e.g., patients with diaphragm paralysis, spinal cord injury, or neuromuscular diseases) deserves further investigation. Strengths and Limitations Surface EMG of the costal diaphragm is prone to crosstalk from expiratory activity of the intercostals and abdominals, which can contaminate the EMG signal (Hodges and Gandevia, 2000). Body composition and anatomy can also affect the quality of surface EMG. To minimize such influences, we only included lean individuals (sample BMI 24 ± 3 kg m−2), and the electrode position was adjusted to optimize the signal quality. Moreover, each subject was used as his/her own control (from the ITL start to task failure), and therefore differences in body composition or anatomy would not be expected to affect the absolute microvolts measured throughout the ITL. Nevertheless, it has been shown that the timing of the EMG onset is similar for both the costal and the crural diaphragm (Nguyen et al., 1985). Because previous studies have used needle EMG to detect the respiratory muscle EMG onset and we used surface EMG, comparisons across studies may be complex. Needle EMG measures the activation of local motoneurons and represents the activity of a specific region of the muscle. We used surface EMG, which measures the superimposed signal of multiple motoneurons and represents the overall activity of the muscle region where the EMG sensor is placed (Suh et al., 2021). Nevertheless, surface EMG is well recognized as a valid and widely used tool to study respiratory muscle function in disease and health. Its advantages include ease of use, lower risk to patients, and the ability to record during dynamic activities such as exercise. The validation of the algorithm to detect the EMG onset included more than 2,000 EMG onsets, which, to the best of our knowledge, is considerably more than the number of EMG onsets measured in any previous study. Additionally, we validated the algorithm by investigating its reliability with two assessors and considering four muscles. Unfortunately, we were unable to assess the timing of the EMG onset during quiet breathing because these data were not collected during the primary study. Our sample was composed of physically fit, young, healthy participants, and it may not be possible to extrapolate our results to people with lung or chest wall diseases. Also, our findings were limited to one type of load, which may limit their applicability to other types of load (e.g., hyperpnea). Therefore, the timing of respiratory muscle EMG activation remains to be investigated in other populations and with different types of load. CONCLUSION We developed and validated an algorithm to detect the onset of the EMG activity of the respiratory muscles with millisecond accuracy.
The algorithm had good reliability compared with the manual detection performed by two independent assessors, can perform the analysis more than 60 times faster than a manual assessor, and ensures that breath-by-breath differences in the timing of the onset of the EMG activity are not biased by an assessor's subjective interpretation. We further demonstrated that, in healthy young adults (n = 7), an inspiratory loading task at 50% of the maximum inspiratory pressure did not disrupt the timing of the onset of the EMG activity. We propose that this method may be used for the objective detection of the onset of the EMG activity of the respiratory muscles. DATA AVAILABILITY STATEMENT The data analyzed in this study are subject to the following licenses/restrictions: the data sets generated for this study are available on request from the corresponding author. Requests to access these data sets should be directed to antenor.rodrigues@unityhealth.to. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the University of Toronto Health Sciences Research Ethics Board. The patients/participants provided their written informed consent to participate in this study. AUTHOR CONTRIBUTIONS AR and WR contributed to the conception and design of the study. AR, LJ, and UM contributed to data analysis. AR wrote the first draft of the manuscript. All authors contributed to the manuscript revision and read and approved the submitted version.
2022-01-03T14:28:13.686Z
2022-01-03T00:00:00.000
{ "year": 2022, "sha1": "0aaa62502f065d5ab69ccdd409da638776aee479", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "0aaa62502f065d5ab69ccdd409da638776aee479", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
238477116
pes2o/s2orc
v3-fos-license
Characteristics of Emergency Medical Service Missions in Out-of-Hospital Cardiac Arrest and Death Cases in the Periods Before and After the COVID-19 Pandemic Background: Some studies in countries affected by the coronavirus disease 2019 (COVID-19) pandemic have shown that the missions of Emergency Medical Services (EMS) changed during the COVID-19 pandemic and that the rate of death and out-of-hospital cardiac arrest (OHCA) increased due to the direct and indirect effects of COVID-19. Objective: The aim of this study was to determine the effect of the COVID-19 pandemic on the process of EMS missions, death, and OHCA. Methods: This cross-sectional study was performed in Tehran, Iran. All missions conducted in the first six months (March 21 until September 22) of the three consecutive solar years 2018-2020, which were registered in the registry bank of the Tehran EMS center, were assessed and compared. Based on the opinion of experts, the technicians' on-scene diagnoses were categorized into 14 groups, and then death and OHCA cases were compared. Results: In this study, the data of 1,050,376 missions performed in the three study periods were analyzed. In general, the number of missions in 2020 was 17.83% lower than that of 2019 (P < .001), whereas the number of missions in 2019 was 30.33% higher than that of 2018. On the other hand, missions for respiratory problems, cardiopulmonary arrest, infectious diseases, and poisoning increased in 2020 compared to 2019. The rates of OHCA and death in 2018, 2019, and 2020 were 25.0, 22.7, and 28.6 cases per 1,000 missions, respectively. Of all patients who died in 2020, 4.9% were probable/confirmed COVID-19 cases. A history of heart disease, hypertension, diabetes, or respiratory disease was more frequent among patients in 2020 than in the other two years. Conclusion: This study showed that the number of missions of the Tehran EMS in 2020 was decreased compared to that of 2019, whereas the number of missions in 2019 was higher than that of 2018. Respiratory problems, infectious diseases, poisoning, death, and OHCA increased compared to the previous two years, while cardiovascular complaints, neurological problems, and motor vehicle collisions (MVCs) in 2020 were fewer than in the other two years. Introduction The effects of the coronavirus disease 2019 (COVID-19) pandemic on various aspects of the health care systems of countries have been investigated. It seems that, with the implementation of interventions to reduce the spread of the virus, including social distancing and "stay at home" protocols, specific patterns emerged in health care-seeking behavior in various communities. 1,2 In some countries, patients' visits to emergency departments and the number of calls to Emergency Medical Service (EMS) centers decreased; however, the severity of the disease increased. [3][4][5][6] There is some evidence that patients have avoided calling the EMS or going to the hospital emergency room, even with serious complaints such as chest pain or shortness of breath. 2,7 Studies show that, as a result of such changes, as well as the nature of COVID-19, the EMS in several countries have encountered an increase in out-of-hospital cardiac arrest (OHCA) cases in the areas most affected by COVID-19. 8 Cases of OHCA are among the most challenging cases to which the EMS must respond. 9
Various mechanisms have been suggested for the association between COVID-19 and the incidence of OHCA; this disease can lead to cardiovascular damage and myocarditis through the development of acute respiratory syndrome and the subsequent cytokine storm. 10 In addition, some medications, such as hydroxychloroquine or azithromycin, may increase the risk of OHCA, especially in people with a history of heart disease such as heart failure and arrhythmia. 11 Also, the risk of thromboembolic events and acute coronary syndrome is higher in patients with COVID-19, which can increase the incidence of OHCA. 12 Altogether, verifying these observations in different societies requires further research. Therefore, the present study aimed to investigate and compare the characteristics of Tehran EMS (Iran) missions in OHCA and death cases in the periods before and after the COVID-19 pandemic. Study Design This cross-sectional study was performed in Tehran, Iran. The necessary permission to access the information and conduct the study was obtained from the Tehran EMS organization, as well as the ethics committee of the Tehran University of Medical Sciences (Tehran, Iran; IR.TUMS.MEDICINE.REC.1399.776). To comply with the principles of confidentiality, all required information was extracted, analyzed, and presented anonymously. Study Population Sampling was done by the census method, and all missions conducted in the first six months of the three consecutive solar years 1397-1399 (corresponding to March 21 until September 22 of 2018-2020) were assessed. The inclusion criterion was any mission in which an ambulance was sent for any complaint. Missions with incomplete information were excluded. Data Gathering The Tehran EMS center has a registry data bank, and all information on any patient, from the moment a client calls the dispatch center until the end of the related mission, whether the patient is transported to hospital or not, is recorded routinely. For the current study, a researcher-made checklist was used to extract the following variables from the data registered in the data bank of the Tehran EMS center: the patient's demographics, the patient's medical history, the technician's diagnosis at the scene, the mission outcome, and whether the case was a probable/confirmed COVID-19 case (for the missions of 2020). Based on the opinion of experts, the technicians' on-scene diagnoses were categorized into 14 groups: cardiovascular, respiratory, neurologic, abdominal pain, gynecology/obstetrics, diabetes related, motor vehicle collisions (MVCs), trauma (other than MVCs), psychologic, infectious diseases, wounds and bleeding, poisoning, cardiopulmonary arrest, and others/unknown. Statistical Analysis The qualitative variables were described using frequency (percentage), and the quantitative variables were described using mean (standard deviation [SD]). The Chi-square test was used to compare the frequency of qualitative variables such as gender and types of diseases across the three years, and the Chi-square test for trend was used to examine the trend of changes. One-way analysis of variance (ANOVA) was used to compare quantitative variables across the three years. The post hoc Bonferroni test was used for pairwise comparisons of significant variables. In analytical tests, a P value <.05 was considered statistically significant. All analyses were performed using the Stata software version 14 (StataCorp; College Station, Texas USA).
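To make the main comparison concrete, the sketch below reproduces the per-1,000-mission OHCA/death rates and applies a chi-square test of the kind described above, using scipy. The yearly OHCA/death counts come from the Results section; the mission totals per year are not stated directly and are derived here from the reported overall total (1,050,376) and the reported year-to-year differences (+94,402 and -72,328), so they should be read as our reconstruction rather than published figures.

```python
import numpy as np
from scipy.stats import chi2_contingency

# OHCA/death counts for 2018, 2019, 2020 (reported in the Results),
# and mission totals derived from the reported overall total and
# year-to-year differences (our reconstruction).
ohca = np.array([7787, 9204, 9521])
missions = np.array([311300, 405702, 333374])  # sums to 1,050,376

rate_per_1000 = 1000 * ohca / missions
print(rate_per_1000.round(1))  # [25.0, 22.7, 28.6], matching the paper

# 2x3 contingency table: OHCA/death cases vs. all other missions, by year.
table = np.vstack([ohca, missions - ohca])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")
```

That the reconstructed totals reproduce the published 25.0, 22.7, and 28.6 per-1,000 rates is a useful internal consistency check on the reported figures.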
Results In this study, the data of 1,050,376 missions performed in the three study periods were analyzed. The number of missions in all months of 2020 (the year of the COVID-19 epidemic) was decreased compared to that of 2019 (Figure 1). In general, the number of missions in 2020 was 72,328 cases (17.83%) fewer than that of 2019 (P <.001), whereas the number of missions in 2019 was 94,402 cases (30.33%) more than that of 2018. It was found that 57.5%, 56.6%, and 57.1% of the missions in 2018, 2019, and 2020, respectively, were for men. The age range of the patients was 1-115 years, and the mean ages were 47.42 (SD = 21.56), 47.17 (SD = 21.44), and 48.23 years (SD = 20.99) in 2018, 2019, and 2020, respectively. In all three study periods, most of the missions were related to cardiovascular complaints, neurological problems, and MVCs. However, the rates of these three categories in 2020 were lower than in the other two years. On the other hand, missions for respiratory problems, cardiopulmonary arrest, infectious diseases, and poisoning increased in 2020 compared to 2019 (Table 1). OHCA and Death Cases in All Missions The raw numbers of OHCA and death cases approached by the Tehran EMS in 2018, 2019, and 2020 were 7,787; 9,204; and 9,521, respectively, corresponding to 25.0, 22.7, and 28.6 cases per 1,000 missions, respectively. In general, the rate of OHCA and death in 2020 (the COVID-19 epidemic year) showed a significant increase of 25.9% compared to the previous year (2019) and of 14.2% compared to two years before (2018) (P <.001), whereas this rate in 2019 was 9.3% lower than in 2018 (Table 2). The mean age of OHCA and death cases rose from 68.0 years in 2018 to 69.5 years in 2020 (P <.001). In all three years of the study, approximately 65% of the assessed cases were men (P = .342). In total, the shockable status was known for 18,503 cases. The prevalence of shockable cases in 2020, with a decrease of 48.4% compared to 2019, had reached its lowest level in three years (5.0%; P <.001; Table 2). Although the number of trauma missions in 2020 was decreased, the OHCA and death rate in trauma missions increased from 4.2 per 1,000 trauma missions in 2018 to 6.4 per 1,000 trauma missions in 2020. The increases in the OHCA and death rate of trauma missions in 2020 compared to 2019 and 2018 were 40.6% and 51.9%, respectively, both of which were significant. The OHCA and death rate of non-trauma missions in 2019 was 10.2% lower than in 2018; however, in 2020 it was 21.9% higher than in 2019 (the year before the COVID-19 epidemic). Also, the ratio of non-traumatic to traumatic OHCA and deaths in 2020 increased by 9.2% compared to the previous year (2019) (Table 2). OHCA and Death Cases in COVID-19 Missions Of all patients who died in 2020 (n = 9,521), 463 (4.9%) were probable/confirmed COVID-19 cases. The mean age of probable/confirmed COVID-19 patients was significantly higher than that of the other patients in 2020 (71.5 [SD = 15.1] versus 69.3 [SD = 19.5]; P = .003). Of all OHCA and death cases, 72.4% had at least one medical history. This figure was 71.5% and 72.4% for probable/confirmed COVID-19 and the other patients, respectively (P = .662). In all three study periods, a history of heart disease, followed by hypertension, was the most frequent medical history in OHCA and death cases (P = .061).
The history of heart disease and hypertension in patients in 2020 was 44.2%, which was more frequent than in the other two years. Also, histories of diabetes and respiratory disease were among the most frequent medical histories in 2020, and they were significantly more frequent than in the previous years (Table 3). In 2020, probable/confirmed COVID-19 OHCA and death cases had a more frequent history of heart disease or hypertension compared to the other patients (49.5% versus 44.0%; P = .020). Also, histories of respiratory diseases (24.8% versus 14.2%; P <.001), infectious diseases (7.6% versus 2.2%; P <.001), and diabetes (21.6% versus 16.0%; P <.001) were significantly more frequent in COVID-19 OHCA and death cases than in the other OHCA and death cases of 2020. However, histories of substance abuse, stroke, gastrointestinal diseases, and malignancy were significantly less frequent in probable/confirmed COVID-19 OHCA and death cases than in non-COVID-19 patients (P <.05; Table 3). Discussion In this study, it was found that the number of missions performed by the Tehran EMS in 2019 was increased compared to 2018, but in 2020 (the year the COVID-19 epidemic appeared), it was decreased compared to 2019. In all three study periods, most missions were related to cardiovascular emergencies, neurological problems, and MVCs; their rates in 2020 (with the COVID-19 epidemic) were lower than in the two years before. Missions for respiratory problems, cardiopulmonary arrest, infectious diseases, and poisoning increased in 2020 compared to 2019. Studies in Germany, Finland, and Kentucky (USA) have shown a decrease in the number of missions, while in some countries, such as Italy, there has been an increase in the number of missions. [13][14][15][16] Opioid overdose EMS missions increased significantly during the COVID-19 period in Kentucky. 17 Stella found a reduction in calls for chest pain without ST-segment elevation myocardial infarction (STEMI), OHCA, and major trauma, and an increase in respiratory distress cases. 18 The rate of OHCA and death cases in the EMS missions was significantly higher than in the previous years. The results of the present study showed that, in the period of the COVID-19 epidemic (2020), the rate of OHCA and death cases in the EMS missions increased by approximately 26% compared to the previous year, whereas from 2018 to 2019 this rate had decreased by almost 10%. An increase in OHCA and death cases in EMS missions during the COVID-19 epidemic has also been shown in other studies. A study in the United States found that the number of missions between weeks 10 and 16 after the epidemic onset was decreased by 26.1% compared to the similar period in previous years; however, the number of OHCA and death cases reported by the EMS almost doubled between weeks 11 and 15 of 2020. 8 Another study, in London, showed that during the first wave of the COVID-19 pandemic there was a significant increase in the incidence of OHCA and death cases, along with a significant reduction in survival. 19 This conclusion is also supported by a Portuguese study that reported an increase in the number of deaths, not all of which could be explained by the reported COVID-19 deaths. 20
Some other studies have also shown that the pandemic period was significantly associated with low survival, 21,22 and that confirmed or suspected COVID-19 was responsible for approximately one-third of the increase in the occurrence of OHCA and death cases during the pandemic, 23 compared with 4.9% in this study. An increase of 1.5 years in the mean age of OHCA and death cases during the COVID-19 epidemic compared to the previous year was another finding of this study. In another study, 521 OHCA and death cases during the pandemic (March 16 to April 26, 2020) were compared to the total average number of 3,052 cases during similar weeks in the non-pandemic period (weeks 12-17 of 2012-2019). It showed that the maximum weekly incidence of OHCA and death cases increased from 13.42 to 26.64 per million inhabitants, although the demographic characteristics of the patients, such as age (mean age: 69.7 versus 68.5 years old), were not significantly different between the pandemic and the non-pandemic period. 22 The results of the present study showed that the rates of traumatic and non-traumatic OHCA and death cases in the COVID-19 epidemic year increased by more than 40% and 20%, respectively, compared to the previous year (2019). In another study, conducted in Italy, OHCA and death cases due to medical reasons increased by 6.5% in 2020. 23 The present study showed that the number of accidents during the COVID-19 epidemic was decreased by more than 30% compared to the previous year. However, the number of OHCA and death cases in trauma missions increased by more than 40%. This finding could be due to a reduction in the number of mild trauma cases as a result of COVID-19-induced conditions. A study conducted in Turkey in 2020 surveyed the reduction in the number of car accidents, as well as OHCA and death cases and injuries, during the months of the COVID-19 epidemic when stay-at-home protocols were in force in Turkey and found that accidents leading to death were decreased by 72% and those leading to injury were decreased by 19%. 24 A United States study also found that, after the declaration of the national COVID-19 emergency, the traffic load was drastically reduced, but the number of OHCA and death cases and injuries from MVCs, as well as violations of speed limits, increased; and even after the traffic load returned to its previous level, these rates were still as high as before. The number of OHCA and death cases from MVCs, measured as casualties per 100 million driving miles, rose sharply compared to that of 2019, increasing monthly from 1.21 in March to 1.48 in April, 1.56 in May, and 1.63 in June. In addition, the percentage of injuries resulting from crashes in New York City (New York, USA) and of driving offenses in New York City and Massachusetts (USA) were also increased. 25 In the current study, in line with the above study, it was shown that although the number of missions related to trauma was decreased in 2020, the number of OHCA and death cases did not decrease significantly; the mortality rate of trauma cases in 2020 was 6.4 cases per thousand trauma missions, which was 40.6% higher than in 2019 and 51.9% higher than in 2018. The results of the present study also showed that the proportion of shockable cases in the epidemic period had almost halved, decreasing from 7.1%-9.7% in the previous years to 5.0%.
Another study in New York City found that the incidence of non-traumatic OHCA and death cases with EMS resuscitation from March 1 through April 25, 2020 was three times higher than in the same period one year earlier. Shockable rhythms were significantly less frequent, although the survival, return of spontaneous circulation, and bystander cardiopulmonary resuscitation (CPR) rates had not changed compared to the previous year, 26 contrary to the findings of the Yuan Po study, in which the proportion of patients resuscitated by bystanders was 15.6% lower and the incidence of OHCA and death among patients resuscitated by the EMS was 14.9% higher compared to 2019. 23 Another study, in France, found that the rate of shockable rhythm was lower than that of non-shockable rhythm (9.2% versus 19.1%), 22 which was consistent with the findings of the present study; however, the prevalence of shockable rhythm in the current study was slightly lower. In patients without COVID-19, OHCA and death are mainly due to ischemic heart disease, while in COVID-19 patients additional factors such as hypoxia, myocarditis, pulmonary embolism, vascular thrombosis, and endothelial dysfunction are involved. 17,27,28 Significant differences in the frequency of shockable rhythms between patients with and without COVID-19 confirm this fact. 29,30 According to the findings of the meta-analysis conducted by Magdalena, the decreased survival at hospital discharge of COVID-19 patients following OHCA appears more likely to be due to COVID-19 itself than to poor-quality CPR performed by a bystander or EMS technician. This meta-analysis showed that survival at hospital discharge was decreased in patients with suspected or confirmed COVID-19 after OHCA, which seems to be due to the low rate of shockable rhythms in COVID-19 patients rather than to a reluctance of bystanders to perform CPR. 31 According to the findings of the current study, it seems that residents of Tehran demanded fewer EMS services after the onset of the COVID-19 epidemic than before, and this may be due to the indirect impacts of the pandemic on lifestyle, such as stay-at-home protocols, using fewer vehicles, reduced occupational accidents as a result of quarantine and time off work, and less presence in recreational places. The decline in demand may also be due to the authorities' notifications of limited medical resources, with people who do not have high-level medical care needs seeking out other services, including online counseling and out-patient clinics, and using fewer EMS services; this is a positive effect of this phenomenon on the health system and the emergency system. On the other hand, the decrease in the number of missions together with the increase in the mortality rate at the EMS mission scene indicates that people who have an emergency do not seek care on time, which may cause delays in or inadequate use of health services and lead to serious consequences, especially for those with underlying diseases, which are risk factors both for developing COVID-19 and, according to the results of the present study, for the occurrence of OHCA and death. In order to address emerging threats in epidemics and other subsequent public health emergencies, it is important to strike a balance between individuals' need to find and receive care and the community's requirement to implement measures such as social distancing.
Limitations One of the limitations of this study was the impossibility of assessing the survival rate of OHCA cases, because this information was not recorded and a comprehensive registry bank covering all hospitals was lacking. Also, the categorization of diagnoses was based on the clinical findings of the technician at the scene, which were considered the probable diagnosis. If it had been possible to access the hospital data, the type of emergency could have been determined with greater certainty. Another limitation of this study was the inability to accurately identify the relationship between suspected COVID-19 cases and OHCA and death cases, because patients' symptoms were not recorded before death and, considering the retrospective nature of the study, further investigation was not possible after death; only the relationship between cardiac arrest and confirmed/probable COVID-19 cases (according to the World Health Organization [WHO; Geneva, Switzerland] definitions) was investigated. Conclusion This study showed that the number of missions of the Tehran EMS in 2020 was decreased compared to that of 2019 (17.83%); however, the number of missions in 2019 was higher than that of 2018 (30.33%). The numbers of respiratory problems, infectious diseases, poisoning, death, and OHCA cases increased compared to the previous two years. In all three study periods, most of the missions were related to cardiovascular complaints, neurological problems, and MVCs; however, the rates of these three categories in 2020 were lower than in the other two years. The mean age of OHCA and death cases rose from 68.0 years in 2018 to 69.5 years in 2020 (P <.001). Of all OHCA and death cases, 72.4% had at least one medical history, and histories of heart disease, hypertension, diabetes, and respiratory disease were more frequent in patients in 2020 than in the other two years. Authors' Contributions Conception and design of the work by PHS, PS, and AB; data acquisition by PHS, MS, and SMM; analysis and interpretation of data by PHS, PS, and AB; drafting of the work by PHS and AB; critical revision for important intellectual content by PS, MS, and SMM. All the authors approved the final version to be published and agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. Table note: Each subscript letter denotes a subset of year categories whose column proportions do not differ significantly from each other based on the Bonferroni post hoc method. Abbreviation: OHCA, out-of-hospital cardiac arrest. *Percentage of change in the ratio of non-traumatic to traumatic deaths.
2021-10-09T06:17:19.855Z
2021-10-08T00:00:00.000
{ "year": 2021, "sha1": "8c56af36e64a91861832c38ba4f79bd74d580805", "oa_license": null, "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8529353", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "a06ee7083568e790cdad406a09dbbcb7548ea17c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
203116273
pes2o/s2orc
v3-fos-license
Research on and Application of the Control of Manhole Cover in the Launching Silo Based on Machine Vision Technology The manhole cover is an important part of the launching device as a whole because its uncovering is vital to the whole switching process. The contact switch is now mainly used to judge whether the manhole cover is in place; however, in this way, the state of the manhole cover cannot be directly observed. In order to make improvement in this regard, this paper adopts machine vision technology to detect the state of the manhole cover, observe its switching process, and detect the in-position signal in real time by setting a target at a specific position and identifying the target signal using digital image processing technology. Experiments show that machine vision technology can quickly identify the target signal and directly observe the switching process of the manhole cover. Introduction The manhole cover plays a vital role in the guided missile launcher [1], because its uncovering state determines the normal operation of the entire task. Today, contact switches at different positions, used with an open-loop control system, judge the position and state of the manhole cover; this approach is low in control precision and poor in the repeatability of the cover switching process. The error in the opening position of the manhole cover sometimes seriously hinders the rapid and accurate response of the entire missile launching system [2]. Therefore, improving the control accuracy is crucial to the operation of the cover switching system. By setting corresponding targets at different positions of the manhole cover and obtaining its accurate position and state in real time through machine vision technology, the control precision of the manhole cover movement is effectively improved. Movement analysis of manhole cover The structure of a typical silo cover [3] is shown in Figure 1. In the traditional control system, the electronic control device is responsible for adjusting the oil pressure in the hydraulic cylinder by controlling the proportional speed control valve, thus controlling the cover opening process; on receiving the contact switch signal, the cover is judged to be in place. The machine vision approach uses a camera to continuously observe the moving state of the cover during opening and to cyclically detect the target signal, with corresponding targets installed on the linked mechanical structure that rotates with the manhole cover. After receiving the video signal, the electronic control device processes each captured image so that whether the cover is stuck during opening can be judged directly on the liquid crystal display; when the target signal is detected, it quickly identifies the target and displays the in-position signal. The most critical part of this process is the detection of the target signal. Figure 1. Typical launcher cover structure schematic The detection algorithm of the target signal includes target image pre-processing and shape recognition of the target image. The former extracts the contour information in the target image using edge filtering, edge extraction, and other algorithms, while the latter calculates the length, width, and angles of the target image through calibration and compares them with those of the ideal case in order to determine the in-place state of the manhole cover. Distortion correction The image distortion, a typical barrel distortion, is caused by the wide-angle lens.
Contour extraction

In order to minimize the influence of noise in the target image on the edge detection result, the noise in the distortion-corrected target image F must be filtered out to prevent false detections caused by noisy dots. A Gaussian smoothing filter smooths the image and thus reduces the effect of noise points on the edge detector. Its kernel size is chosen as (2k+1) × (2k+1), and the kernel W can be generated by Equation (3):

W(i, j) = 1/(2πσ²) · exp(−((i − k − 1)² + (j − k − 1)²)/(2σ²)),  1 ≤ i, j ≤ 2k + 1   (3)

The Gaussian convolution kernel is applied to the pixels of the distortion-corrected target image F (each operation covering a 3 × 3 pixel neighbourhood), yielding the filtered image G:

G = W ∗ F   (4)

where f(i, j) is the gray value of the pixel in the i-th row and j-th column of the target image after distortion correction, and g(i, j) is the gray value of the pixel in the i-th row and j-th column after Gaussian filtering.

Then the gradient strength A and direction Φ of the filtered image G are calculated with the Sobel operator, whose horizontal operator Sx and vertical operator Sy are:

Sx = [−1 0 1; −2 0 2; −1 0 1],  Sy = [−1 −2 −1; 0 0 0; 1 2 1]

Sx detects edges in the horizontal direction of the image, while Sy detects those in the vertical direction. Convolving the two operators with the image G yields the horizontal gradient image H and the vertical gradient image V, as shown in Equations (5) and (6):

H = Sx ∗ G   (5)
V = Sy ∗ G   (6)

from which the gradient strength and direction follow as

A = sqrt(H² + V²)   (7)
Φ = arctan(V/H)   (8)

Because the target edges extracted after the gradient calculation are blurred, while the exact response to an edge of the target image should be one pixel and only one, a non-maximum suppression algorithm is needed to set the gradient values other than the local maxima to zero. If the gradient intensity of the current pixel is greater than that of its two neighbouring pixels along the gradient direction, it is identified as an edge point, its gray value is set to 255, and the gray values of its two neighbouring pixels are set to zero.
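The Gaussian smoothing, Sobel gradient and non-maximum suppression chain described above is essentially the front end of the Canny edge detector. Below is a compact NumPy/SciPy sketch of it; the kernel size, σ and the quantisation of the gradient direction into four sectors are our illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(k=1, sigma=1.0):
    """(2k+1) x (2k+1) Gaussian kernel per Equation (3), normalised."""
    ax = np.arange(-k, k + 1)
    xx, yy = np.meshgrid(ax, ax)
    W = np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return W / W.sum()

def edge_map(F):
    # Gaussian smoothing: G = W * F (Equation (4)).
    G = convolve(F.astype(float), gaussian_kernel())

    # Sobel gradients (Equations (5)-(6)).
    Sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
    Sy = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])
    H, V = convolve(G, Sx), convolve(G, Sy)

    # Gradient strength and direction (Equations (7)-(8)).
    A = np.hypot(H, V)
    ang = np.rad2deg(np.arctan2(V, H)) % 180

    # Non-maximum suppression: keep only the local maxima along the
    # gradient direction, quantised to 0/45/90/135 degrees.
    E = np.zeros_like(A)
    for i in range(1, A.shape[0] - 1):
        for j in range(1, A.shape[1] - 1):
            if ang[i, j] < 22.5 or ang[i, j] >= 157.5:   # horizontal gradient
                n1, n2 = A[i, j - 1], A[i, j + 1]
            elif ang[i, j] < 67.5:                        # 45-degree diagonal
                n1, n2 = A[i - 1, j + 1], A[i + 1, j - 1]
            elif ang[i, j] < 112.5:                       # vertical gradient
                n1, n2 = A[i - 1, j], A[i + 1, j]
            else:                                         # 135-degree diagonal
                n1, n2 = A[i - 1, j - 1], A[i + 1, j + 1]
            if A[i, j] >= n1 and A[i, j] >= n2:
                E[i, j] = 255                             # edge point
    return E
```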
Shape recognition

The plane of the target is calibrated by placing 50 mm × 50 mm white squares on it, as shown in Figure 4. The contour information of the white squares is extracted using the method above, which gives the correspondence between the actual side length of a white square and its length in pixels, and thereby the actual size covered by a single pixel. The peripheral contour of the target image is then extracted with the same method, and straight lines are fitted to it using the least squares method; in this way, the length of each side of the target image can be calculated by combining the fit with the per-pixel size obtained from calibration. The angle between each line and the line it intersects is then calculated from the gradient (slope) of each fitted straight line (a code sketch of this fitting and measurement step is given below, after the conclusion).

Experiment and analysis

The experiment uses an Imaging Source DFK 23GP031 industrial camera and a Computar industrial lens with a focal length of 12 mm for image acquisition; an OSRAM CLA1B-WKW LED is used as the light source, and the recognition algorithm is implemented in Visual Studio 2008 under Windows. The camera first collects several checkerboard images to obtain the distortion correction parameters, as shown in Figure 5; a 10 mm × 10 mm white square is then used as the target image, as shown in Figure 6. The target image is corrected using the acquired distortion correction matrix and then filtered with the Gaussian filter. The target contour information is acquired using the Sobel operator and the non-maximum suppression algorithm. Finally, the linear equations of the target image contour are fitted using the least squares method, and the side lengths of the target outline and the angles between the intersecting straight lines are calculated from the parameters obtained by calibration.

The target length and gradient are calculated from the calibration parameters, and the angles of the target contour are obtained from the gradients, as shown in Table 1. The calculation shows that the information of the identified target contour is consistent with that of the actual target image; a cross cursor is then displayed on the target image, indicating that the in-position signal has been sent, as shown in Figure 7.

Figure 7. Target recognition

Conclusion

The results show that machine vision technology can extract the target contour information and give the in-position signal more quickly than the contact switch. Setting targets at specific positions of the manhole cover is therefore conducive to direct observation of the switching process of the manhole cover and rapid identification of the in-position signal.
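As flagged in the shape-recognition section above, the fitting and measurement step can be sketched as follows. The per-side contour-point interface, the tolerance values and the helper names are hypothetical; only the least-squares fit, the pixel-to-millimetre scaling and the 10 mm nominal side come from the paper.

```python
import numpy as np

def fit_line(points):
    """Least-squares fit y = a*x + b to one side's contour points
    (N x 2 array); assumes the side is not exactly vertical in
    image coordinates."""
    a, b = np.polyfit(points[:, 0], points[:, 1], 1)
    return a, b

def side_length_mm(points, mm_per_px):
    """Side length from the end points of the (ordered) contour segment,
    scaled by the per-pixel size obtained from calibration."""
    d = points[-1] - points[0]
    return np.hypot(d[0], d[1]) * mm_per_px

def angle_between(a1, a2):
    """Absolute angle (degrees) between two fitted lines, from slopes."""
    return np.degrees(abs(np.arctan(a1) - np.arctan(a2)))

def in_position(sides_px, mm_per_px, expected_mm=10.0,
                tol_mm=0.5, tol_deg=2.0):
    """Compare measured side lengths and corner angles with the ideal
    square target; the tolerances are illustrative assumptions."""
    slopes = [fit_line(s)[0] for s in sides_px]
    lengths = [side_length_mm(s, mm_per_px) for s in sides_px]
    angles = [angle_between(slopes[i], slopes[(i + 1) % 4])
              for i in range(4)]
    ok_len = all(abs(L - expected_mm) <= tol_mm for L in lengths)
    ok_ang = all(abs(a - 90.0) <= tol_deg for a in angles)
    return ok_len and ok_ang   # True -> send the in-position signal
```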
2019-09-17T02:58:46.979Z
2019-09-05T00:00:00.000
{ "year": 2019, "sha1": "ed42e26218bd2138f5e36eb0f66b40d3809df7df", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/310/2/022021", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "a8fe5a4640d1b981eff4a52680888eb9f55bb03d", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Engineering", "Physics" ] }
219560729
pes2o/s2orc
v3-fos-license
Influence of dual vibrations on control of percussive hand-operated power tools – experimental investigations

Abstract This article presents the results of pioneering experimental research into the impact of whole body vibration (WBV) and hand arm vibration (HAV) of different amplitudes and frequencies on the control of percussive hand-operated tools. In these studies the human operator is considered as the active element of the control system. The quality of control executed by the participants of the tests was assessed using the typical parameters of control system engineering, such as rise time, settling time, overshoot and the integral square error (ISE). The tests were performed on a specially built stand. Results are presented in the form of time histories of the step reference force. The forces realized by the operators were statistically analysed and presented graphically. The local (HAV) vibration of the tool and the accompanying dynamic forces between the handle and the vibrating tool have a large influence on control of the tool. An increase in the frequency of the tool's vibration increases the adjustment range. The influence of platform vibrations on the adjustment range is relatively small.

Introduction

Human-machine interaction is one of the most interesting and difficult branches of science, because of the variety of the human form, the physical connections between machine and operator, and various other dependencies. All of these factors determine the final results of each measurement, modelling and assessment of man-machine systems. The first significant works concerning the interaction between the pilot and the control system of an aeroplane were provided by [8]. A more general approach was presented in [9]. Some aspects of the influence of whole body vibration on human operators' reactions and comfort were described in [2] and [10]. From the 1960s up until the time of writing this paper, many works and experiments concerning various approaches to human-machine studies have been conducted, predominantly in military laboratories. The experimental investigations described in this paper are a continuation of a series of experiments carried out by [1,5], showing the new effects of various factors on the manual control of hand-operated power tools. In the tests, the human operator is treated as an element of the control-visual feedback regulator. The general block diagram of the human-operator feedback control system is shown in Fig. 1.

Materials and methods

The experimental studies were conducted on a modified version of the test bench used previously in the works of [3,4,7]. In the present paper, a new approach is applied and described, with the human operator subjected simultaneously to two sources of vibration acting upon the legs and the hands. A schematic of the test bench with its principal components is shown in Fig. 2, and a photograph of the experimental test bench is presented in Fig. 3. The modified stand was designed as a system composed of a platform, two vertical bars and a horizontal beam placed on the piston of the electro-hydraulic Heckert SHA 140 shaker. Figure 3 shows the vertical and horizontal shakers and the corresponding directions of the applied excitations. As shown in Fig. 3, the human operator stands on the vibrating platform with one foot forward towards the front of the tool, holding the handle of the vibrating hand tool in his hands whilst watching the monitor screen.
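To make the feedback structure of Fig. 1 concrete, the toy simulation below models the operator, purely for illustration, as a first-order lag with a visual reaction delay chasing the step reference force. The lag and delay values are arbitrary assumptions and this is not the authors' operator model; indeed, the paper argues later that such parameters vary stochastically between and within operators.

```python
import numpy as np

fs = 5000.0                       # sampling frequency [Hz], as in the tests
t = np.arange(0.0, 4.5, 1.0 / fs)
r = np.full_like(t, 80.0)         # step reference force of 80 [N]

# Illustrative operator parameters (assumptions, not measured values):
tau = 0.35                        # first-order lag time constant [s]
delay = 0.25                      # visual reaction delay [s]
k_delay = int(delay * fs)

f = np.zeros_like(t)              # force exerted on the handle [N]
for k in range(1, len(t)):
    # The operator reacts to the tracking error seen `delay` seconds ago.
    e = r[k - k_delay] - f[k - 1] if k >= k_delay else 0.0
    f[k] = f[k - 1] + (1.0 / fs) * e / tau
# f now traces a smooth delayed step response; the measured records of
# real operators differ from subject to subject and trial to trial.
```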
The task of the operator is to control the motion of the tool by exerting pressure on the handle in accordance with the tracking value displayed on the monitor screen. Real-time visualisation of the two signals is realised on the front panel of a virtual instrument in LabView 7.1, shown in Fig. 4, in the form of two mobile, coloured indicators that allow their comparison. The human-operator tool system is treated here as a typical control system with visual feedback. A sudden step force of 80 [N] is assigned as the input signal on the monitor, and the operator must match this value by applying pressure to the handle of the tool. The tool is connected to the other components of the test bench through an integrated force sensor. The force exerted by the operator is continuously displayed on the monitor screen; through visual feedback and observation of the error between the two signals displayed on the monitor, the operator can compare the applied and reference forces. Execution of the task was deliberately made more difficult by introducing disturbances from the vibrating platform (WBV, whole body vibration) and from the vibrations of the tool (HAV, hand arm vibration). The design of the test bench allows different frequencies and amplitudes of the vibrations applied to the operator. The Heckert electro-hydraulic shaker vibrates the measuring platform in accordance with instructions from the controlling unit. During the tests, the shakers were driven by sine functions with frequencies of 3, 5, 10 and 15 Hz and with amplitudes corresponding, at each frequency, to the exposure nuisance limit for a standard exposure of 15 minutes. The hand tool controlled by the operator was set in oscillatory motion by the horizontal electromagnetic shaker shown in Figures 3 and 4. The frequencies generated by the shaker and the platform were set to be the same in order to strengthen the effect of the vibration disturbances on the operator. The following functions were implemented in the LabView software environment: generation of the input force; measurement of the input force and of the force exerted by the operator; and visualization and registration of both forces. Figure 5 shows the signal flow diagram of the program. The measuring track consists of a strain gauge force sensor in a full bridge, a dedicated sensor amplifier, the strain gauge power supply, an NI DAQ 6024E measurement card together with its connections, and a measurement computer equipped with LabView software. A diagram of the measuring track is shown in Fig. 5.

Results

The tests were conducted in the laboratory of the Department of Dynamics of Material Systems of Cracow University of Technology on 7 selected volunteers. Each participant stood on the measuring platform in a defined position under the following three conditions: 1) control of the pressure force on the tool handle without whole body vibration (WBV) but with hand arm vibration (HAV); 2) control of the pressure force on the tool handle with whole body vibration (WBV) only; 3) control of the pressure force on the tool handle with both whole body vibration (WBV) and hand arm vibration (HAV) acting simultaneously. All tests were carried out under vibrations with the successive frequencies 3, 5, 10 and 15 Hz. Each attempt was recorded in the form of time histories of the sudden step reference force and the response force exerted by the operator on the tool handle. The measured signals were sampled at a frequency of 5 kHz.
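The sinusoidal drive of the two shakers can be sketched as below. The drive-signal sampling rate, the amplitude placeholder and the function interface are illustrative assumptions; in the tests the amplitudes were taken from the 15-minute exposure nuisance limits at each frequency, values which are not reproduced here.

```python
import numpy as np

def shaker_command(freq_hz, amp, fs=1000.0, duration_s=4.5):
    """Sine drive signal for one shaker. `amp` stands in for whatever
    amplitude corresponds to the 15-minute exposure nuisance limit at
    this frequency; the value below is a placeholder."""
    t = np.arange(0.0, duration_s, 1.0 / fs)
    return t, amp * np.sin(2.0 * np.pi * freq_hz * t)

# Platform (WBV) and tool (HAV) are driven at the same frequency,
# as in the tests, to strengthen the disturbance effect.
for freq in (3, 5, 10, 15):
    t, platform_cmd = shaker_command(freq, amp=1.0)   # placeholder amplitude
    _, tool_cmd = shaker_command(freq, amp=1.0)       # placeholder amplitude
```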
Figures 6-13 show examples of typical time responses registered for the different trials.

Discussion

The time signals of the input force and the realized force were processed in the DPlot software (HydeSoft, USA). Because of the high gain of the signals from the strain gauge bridge, and because the measurements were performed in a hall in which other devices in use were subjected to excitations (general and local vibrations), the signals from the force sensor had to be prefiltered to remove the interference of external perturbations. Scaling was then performed using an earlier calibration measurement. The timeline was shifted so that the sudden force step began at time 0 [s]. For each time history, two reference lines were generated with values equal to the input step force of 80 [N] +/- 5%. The time responses prepared in this way were used to analyse the quality of tool control by the operator in the individual trials. The quality of tool control executed by the participants was measured by typical parameters from control system engineering, such as rise time, settling time, overshoot and the integral square error (ISE), based on the difference e(t) between the realized force r(t) and the reference force f(t), as shown in formula (1):

e(t) = r(t) − f(t)   (1)

The integral square error was calculated according to formula (2):

ISE = ∫₀^tk e²(t) dt   (2)

The time of integration for each test ran from t = 0 to tk = 4.5 [s]. These parameters were calculated for all the registered time responses. During the analysis of the results for the conditions with disturbance in the form of local tool vibration, a problem appeared with the evaluation of the settling time: some operators repeatedly failed to keep the exerted force within the a priori defined channel limited by the two reference lines at +/- 5% of the reference force value. The ISE performance index was therefore used as another measure of control quality; it was calculated for a priori fixed settling time durations. Both the possible ranges of adjustment in the individual trials and the values of ISE were tabulated, and the tables were used for statistical calculations in which the medians and ranges of the considered parameters were assessed. The results of the experiments are presented in the form of bar charts showing the medians and ranges of the results for the individual trials in Figures 15-17.

Rise time is an indicator that gives information about the time within which the considered system brings the adjusted quantity over the range from 0.1 to 0.9 of its final value. In the case of the human-tool system, it also carries information about the human response time and the nature of the reaction, e.g. whether it is fast and violent or slow. Disturbances in the form of ground shakes and tool shakes can affect the human operator's reaction, causing a lack of concentration and difficulties in controlling the tool. The charts indicate a longer response time and a greater dispersion of results for the lower vibration frequencies of the platform and the tool. A large dispersion of the results and a long rise time apply especially to the tests with vibration of the platform alone, that is, the effect of whole body vibration with a frequency of 5 [Hz]. These results are consistent with the standard curves indicated in [6], where a frequency of 5 [Hz] is particularly troublesome for the human being.
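Given a sampled trial record, the four quality measures can be computed directly from the definitions above (step reference of 80 [N], +/-5% channel, integration to tk = 4.5 [s]). The array interface, the choice of the reference's final value as the target, and the trapezoidal integration are our assumptions; this is a sketch, not the authors' processing code.

```python
import numpy as np

def control_quality(t, r, f, t_k=4.5, band=0.05):
    """Quality measures for one trial; t: time [s], r: realized force [N],
    f: reference force [N] (here a step of 80 [N])."""
    ref = f[-1]                                   # final reference value, 80 N

    # Rise time: time to go from 10% to 90% of the final value.
    i10 = np.argmax(r >= 0.1 * ref)
    i90 = np.argmax(r >= 0.9 * ref)
    rise_time = t[i90] - t[i10]

    # Settling time: last instant the response is outside the +/-5%
    # channel (ill-defined when the force never settles into the channel,
    # as happened in some trials with tool vibration).
    outside = np.abs(r - ref) > band * ref
    settling_time = t[np.flatnonzero(outside)[-1]] if outside.any() else 0.0

    # Overshoot relative to the reference value, in percent.
    overshoot = 100.0 * (r.max() - ref) / ref

    # Formulas (1)-(2): e(t) = r(t) - f(t), ISE integrated from 0 to t_k.
    e = r - f
    mask = t <= t_k
    ise = np.trapz(e[mask] ** 2, t[mask])
    return rise_time, settling_time, overshoot, ise
```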
Fig. 16. Possible control range for subsequent tests

A typical indicator of control quality is the settling time, within which the tested system brings the adjusted quantity into the +/-5% band and keeps its value within this range. Unfortunately, in the described studies of the human-tool system, reaching the desired value within the required range was unsuccessful in many cases. Therefore, assessing all the attempts with a single criterion such as the settling time was not possible. Instead, another assessment of the operator's action was proposed, using statistical charts representing the extent to which the operator managed to bring the exerted force into the required range. The ISE index makes it possible to aggregate the errors over each trial and to compare all the attempts made. The same time (4.5 [s]) was assumed for the comparison of all tests. Figure 17 presents the corresponding results.

Fig. 17. Values of ISE index for subsequent tests

Conclusions

The human operator as the control element in human-machine systems, as presented in this paper, largely affects the dynamic characteristics of the system as a whole. The graphs show that the biggest impact comes from the local (HAV) vibration of the tool and the accompanying dynamic forces between the handle and the vibrating tool. An increase in the frequency of the tool's vibration increases the adjustment range. The influence of the platform vibrations on the adjustment range is relatively small. Parameters typical for dynamic systems without human interaction, such as settling time, rise time or overshoot, are fundamentally different from the human-machine parameters in terms of both quality and quantity, as can be noted in the presented results. This is the result of the inherent mental features of the individual operator. Consequently, every dynamic system containing the human must be assessed as a system with parameters that vary stochastically in time and must be described using stochastic differential equations. However, this approach involves many mathematical and experimental difficulties concerning the probability distributions of the system parameters, and scientific works taking this approach have therefore been relatively overlooked in the previous literature.
2019-04-22T13:08:30.678Z
2017-09-01T00:00:00.000
{ "year": 2017, "sha1": "940ed405450f2bf379edbb0a9ce254fada030e8c", "oa_license": null, "oa_url": "http://www.ejournals.eu/pliki/art/9881/", "oa_status": "GOLD", "pdf_src": "DeGruyter", "pdf_hash": "b0ddb066ae520b90792aadc78bef194bee72d6d1", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }